Archived Case Migration

  1. If we’re moving to Pega Cloud and have previously archived case data in a repo, is there a process for ensuring the archive cases are searchable and retrievable in the new cloud system? Would this be handled automatically by the pyArchival_ReIndexer Job Scheduler?

  2. In a disaster recovery situation, what would happen if the archival process ran on the DR system and attempted to archive a case that had already been archived? This assumes the repo is accessible from both the primary and DR system.

Thanks
-Ryan

@rwtaylor

  1. Yes, there is a process for ensuring that archived cases are searchable and retrievable in the new Pega Cloud system. The pyArchival_ReIndexer Job Scheduler is designed to repair corrupted Elasticsearch indexes, which can help make archived cases searchable. However, it is important to ensure that the archived cases are properly indexed into Elasticsearch as part of the archival process itself; indexing the copied cases is a crucial step in making them searchable in the new system.

  2. In a disaster recovery situation, if the archival process runs on the DR system and attempts to archive a case that has already been archived, the system should ideally recognize that the case has already been archived and avoid duplicating the archival process. The pyPegaArchiverUsingPipeline job uses a crawler to identify eligible cases for archiving, and it should validate the resolution of all subcases before copying them to the secondary storage repository. This validation step should help prevent re-archiving of cases that have already been archived.
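The duplicate-archival safeguard described in the answer above can be illustrated with a minimal sketch. To be clear, the function and repository layout below are assumptions for illustration only, not Pega's actual implementation:

```python
def archive_case(case_id, subcases_resolved, repo):
    """Idempotently archive a case: skip it if it is already in the
    repository, and only copy it once every subcase is resolved."""
    if case_id in repo:
        # Already archived (e.g. by the primary system before failover),
        # so the DR run does not duplicate the archival.
        return "skipped"
    if not all(subcases_resolved):
        # Validation step: all subcases must be resolved before copying.
        return "not eligible"
    repo[case_id] = {"status": "archived"}
    return "archived"

repo = {}
first = archive_case("C-1001", [True, True], repo)   # primary system run
second = archive_case("C-1001", [True, True], repo)  # re-run on DR system
```

Here the second call returns "skipped" because the case is already present in the shared repository, which is the behavior one would want when both the primary and DR systems can see the same repo.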

:warning: This is a GenAI-powered tool. All generated answers require validation against the provided references.

The Case archiving and data expunging processes

Archived Case Search in 8.3 Cloud

How to access Pega cloud storage after Archive and Purge the Pega cloud application case types data.

Secondary storage repository for archived data

@MarijeSchillern I am also working on a migration from an on-prem setup to a customer-owned AWS cloud. I implemented an archival solution on the on-prem platform, where the database is Oracle. Now that I am migrating the files to the AWS cloud platform, where the backend is Postgres, Pega is not able to read pzPVStream from the archived files, since it was generated on the Oracle platform.

Did you face this issue?

@Girish Gaikwad if you’re encountering a product issue, you should use the MSP to log a support ticket.

I see that you have opened ticket INC-C39688, and our support team is engaging with you there.

@MarijeSchillern Yes Marije. I have opened incident and will follow up over there. Thanks for your response.

I’m glad to see your support ticket was successfully resolved.

GCS confirmed that this behavior is expected. The data class for which the Association rule is defined does not have the required properties pzInsKey, pxObjClass, and pxInsName.
These properties are mandatory for archival of any data class in Pega.
In the logs, the following root cause was observed:
Caused by: com.pega.pegarules.pub.PRRuntimeException: [InvalidReferenceException .pxObjClass Unexposed properties cannot be selected for classes mapped to external tables]
However, please note that pxObjClass must always be an exposed column.

Recommended Actions:

  • GCS provided a custom solution by adding the required columns to the backend database table.
  • Restart the Pega platform.
  • Populate the appropriate values in these columns for the existing records.
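As an illustration of the recommended actions above (this is not the actual GCS script; the table name, class name, and key format are hypothetical), adding and backfilling the mandatory columns on an external table might look like this, using SQLite as a stand-in for the real backend database:

```python
import sqlite3  # stand-in for the actual Oracle/Postgres backend

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical external data table that is missing the mandatory columns
cur.execute("CREATE TABLE data_customer (customerid TEXT PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO data_customer VALUES ('CUST-1', 'Alice')")

# Step 1: add the mandatory columns so they exist as exposed columns
for col in ("pzInsKey", "pxObjClass", "pxInsName"):
    cur.execute(f"ALTER TABLE data_customer ADD COLUMN {col} TEXT")

# Step 2 (after restarting the platform): populate values for existing rows.
# The class name and key format below are illustrative assumptions.
cur.execute("""
    UPDATE data_customer
       SET pxObjClass = 'MyOrg-Data-Customer',
           pxInsName  = customerid,
           pzInsKey   = 'MYORG-DATA-CUSTOMER ' || customerid
""")
conn.commit()
```

After this, the archival process can select the three mandatory properties directly from the table instead of failing with the InvalidReferenceException shown in the logs.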