Regarding the effect of Containerization on Local Data Storage

Hi Team

The future vision of our application is to migrate to containerization on OpenShift (OCP) infrastructure in an on-premise setup. In our initial discussions with the Infra team, we understand that this migration to OpenShift will affect the storage requirements below:

  • Local Data Storage
  • Mount from external Storage to Pega Local Data Storage

The OCP team has confirmed that “mount is not available on OCP infra”; even receiving a file from external storage into our local storage will be an issue.

Below are some of the scenarios we have evaluated that will be affected:



  • File Listeners reading from Local Data Storage: Not Feasible. Receiving the file from an external system into Internal Local Data Storage is an issue.

  • Data mount created in the Application from external storage: Not Feasible.

  • Connect-File mount created from the Application to an external system: Not Feasible.

Any help in resolving this would be appreciated.

@ShwetaB96

Can we compare this situation with a migration from on-premise infrastructure to Pega Cloud?

In that case we generally leverage the Pega Repository and Pega SFTP Services to:

  1. Copy files into the Pega repository using the Pega SFTP service, so the Pega file listener picks files up from the repository instead of another on-premise location.

  2. Use the same Pega SFTP service to read files from the repository and store them in an on-premise location.

Basically, use the Pega SFTP service for two-way communication.
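Outside of Pega's managed SFTP service, the inbound half of this flow can be sketched with standard tools. This is only an illustration: the host `sftp.example.com`, the user, the file paths, and the `pega-repo` bucket are all hypothetical names, not anything confirmed in this thread.

```shell
#!/bin/sh
# Hypothetical sketch of step 1 (inbound): pull a file from an on-prem SFTP
# host and stage it in the repository bucket the Pega file listener reads from.
# All hostnames, paths, and bucket names here are illustrative assumptions.

pull_and_stage() {
    # Fetch the file from the on-prem SFTP server into a temp location.
    # '-b -' reads the batch commands from stdin (the heredoc below).
    sftp -b - transfer@sftp.example.com <<'EOF'
get /outbound/orders.csv /tmp/orders.csv
EOF
    # Push the file into the S3-compatible repository the listener watches.
    aws s3 cp /tmp/orders.csv s3://pega-repo/inbound/orders.csv
}
```

Calling `pull_and_stage` from a cron entry (or any scheduler) would keep the repository fed without the listener ever touching local pod storage.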

Please check this URL for more information.

@ShwetaB96 The key is to use a Repository Rule in the Pega application and implement a synchronization mechanism to that repository if the upstream or downstream applications are unable to access it directly.

For File Listeners or other file access use cases, I recommend using an S3 repository as the source for the listener. Support for S3-compatible repositories not on AWS was added in 8.8.4 and Infinity 23.1 to allow use of on-premise object storage, though this depends on the vendor implementing full API compatibility.

When producing files (e.g. with Connect-File), write the files to a repository rule, then either modify the downstream application to access that storage directly or implement a synchronization script. For example, if your repository is S3 but the downstream application needs to retrieve files from a SAN mount, you can use the AWS CLI to move files from S3 to the SAN every few minutes.
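A minimal version of such a synchronization script could look like the sketch below. The bucket name and SAN mount path are assumptions made for illustration, not values from this thread:

```shell
#!/bin/sh
# Sketch of the periodic S3 -> SAN sync described above.
# Bucket and mount path are illustrative assumptions.
SRC="s3://pega-connect-file-out/exports"
DEST="/mnt/san/pega-exports"

sync_exports() {
    # 'aws s3 sync' only copies objects that are new or changed since the
    # last run, so it is cheap to schedule every few minutes.
    aws s3 sync "$SRC" "$DEST" --exact-timestamps
}

# Only run when invoked with --run, so the file can also be sourced
# without side effects.
if [ "${1:-}" = "--run" ]; then
    sync_exports
fi
```

Scheduled from cron, e.g. `*/5 * * * * /opt/scripts/sync-pega-exports.sh --run`, this keeps the SAN copy at most a few minutes behind the repository.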

Of course, any other storage repository would work with this approach as well.