Thanks for the explanation. We are also in the process of externalizing the embedded services, starting with Kafka, and these answers are really helpful. However, I had a follow-up question on this.
When we externalize Kafka and connect to an external cluster (Confluent), I believe the current file system (kafka-data) will be created fresh in the external cluster. Will Pega continue to maintain the metadata and topic information in the Stream DB tables, or will these be handled by the external Kafka cluster?
In the current processing scenario, if queuing fails for some reason, the information is stored in the Pega DB, and when the node becomes available again it picks up the queued messages from the DB. How will this scenario work when the topics are in the external cluster?
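To make sure I'm describing the same behavior you explained, here is a minimal sketch of the fallback pattern as I understand it. This is not Pega's actual implementation; `send_fn`, `FallbackProducer`, and the SQLite `pending` table are all illustrative names I made up for the sketch:

```python
import sqlite3


class FallbackProducer:
    """Illustrative sketch: publish to Kafka, fall back to a DB table on failure."""

    def __init__(self, send_fn, db_path=":memory:"):
        self.send = send_fn  # hypothetical wrapper around a Kafka producer's send
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS pending (topic TEXT, payload TEXT)"
        )

    def publish(self, topic, payload):
        try:
            # Normal path: message goes straight to the Kafka cluster
            self.send(topic, payload)
        except ConnectionError:
            # Broker unreachable: persist the message in the DB instead of losing it
            self.db.execute("INSERT INTO pending VALUES (?, ?)", (topic, payload))
            self.db.commit()

    def replay_pending(self):
        # Called once the broker is reachable again: drain the queued messages
        rows = self.db.execute(
            "SELECT rowid, topic, payload FROM pending"
        ).fetchall()
        for rowid, topic, payload in rows:
            self.send(topic, payload)
            self.db.execute("DELETE FROM pending WHERE rowid = ?", (rowid,))
        self.db.commit()
```

My question is essentially whether this DB-backed replay step still happens when the topics live in an external Confluent cluster, or whether durability is then delegated entirely to the external brokers.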