We’re in the process of defining an archival policy for an application. One question that recently came up was whether it’s possible to migrate the archived data from a network file share to a cloud solution like Azure Blob storage and how that process would work.
Is it as simple as moving the file structure and reconfiguring the repository in Pega? Or are there other changes required to ensure the old cases are available via Elasticsearch?
@rwtaylor Migrating archived data from a network file share to a cloud solution such as Azure Blob storage involves more than moving the file structure and reconfiguring the repository in Pega. Changing the repository is possible, but it may not be straightforward: retrieval of archived cases relies on Elasticsearch, and the migration carries risks for that retrieval path. It is important to assess those risks thoroughly and confirm that the necessary configurations are in place so the old cases remain available via Elasticsearch.
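For the file-copy step itself (independent of the Pega repository reconfiguration and reindexing concerns above), the key detail is preserving the archive's directory structure as blob names so the new repository layout mirrors the old share. A minimal sketch, assuming the `azure-storage-blob` Python SDK and a pre-built `ContainerClient` (the share path, storage account, and container are placeholders, not values from this thread):

```python
from pathlib import Path


def blob_name_for(root: Path, file_path: Path) -> str:
    # Preserve the relative directory structure under the share root,
    # using forward slashes so blob names mirror the original layout.
    return file_path.relative_to(root).as_posix()


def migrate_share(root: str, container_client) -> None:
    # container_client is assumed to be an
    # azure.storage.blob.ContainerClient, e.g. built with:
    #   ContainerClient.from_connection_string(conn_str, "archive-container")
    root_path = Path(root)
    for file_path in sorted(root_path.rglob("*")):
        if file_path.is_file():
            with file_path.open("rb") as data:
                container_client.upload_blob(
                    name=blob_name_for(root_path, file_path),
                    data=data,
                    overwrite=True,
                )
```

This only covers copying the bytes; whether the archived cases are then retrievable still depends on the repository configuration in Pega and on the Elasticsearch index state discussed above.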
This seems to be saying that if the files were moved to a new repository, the pyArchival_ReIndexer Job Scheduler should be able to handle the reindexing automatically, although that may require deleting the existing Elasticsearch indexes and having the system rebuild everything. Am I reading that correctly?