Externalization Of Kafka Services

Hi

When we externalize Kafka and connect to an external cluster (say, Confluent), I believe the current file system (kafka-data) will be created fresh in the external cluster. Will Pega continue to maintain the metadata and topic information in the stream DB tables, or will these be handled in the external Kafka cluster?
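For reference, and purely illustrative: once the stream service points at an external cluster, topic names and metadata can be read directly from that cluster with the standard Kafka AdminClient. The Confluent Cloud endpoint and API-key values below are placeholder assumptions, not values from any real deployment.

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ListExternalTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder Confluent Cloud endpoint and credentials -- replace with your own.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "pkc-XXXXX.us-east-1.aws.confluent.cloud:9092");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                  "org.apache.kafka.common.security.plain.PlainLoginModule required "
                  + "username=\"<API_KEY>\" password=\"<API_SECRET>\";");

        try (AdminClient admin = AdminClient.create(props)) {
            // Topic names come back from the external brokers, not from a
            // local kafka-data directory or a stream DB table.
            admin.listTopics().names().get().forEach(System.out::println);
        }
    }
}
```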

In the current processing scenario, if queuing fails for some reason, the information is stored in the Pega DB, and when the node becomes available again it picks up the queued messages from the DB. How will this scenario work when the topics are in the external cluster?
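To illustrate the pattern being asked about (this is not Pega's actual implementation): a generic producer wrapper that falls back to a database write when a publish fails. The saveToFallbackTable helper and the table it implies are hypothetical.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PublishWithDbFallback {
    private final KafkaProducer<String, String> producer;

    // props must include bootstrap.servers and String key/value serializers.
    public PublishWithDbFallback(Properties props) {
        this.producer = new KafkaProducer<>(props);
    }

    public void publish(String topic, String key, String value) {
        producer.send(new ProducerRecord<>(topic, key, value), (metadata, exception) -> {
            if (exception != null) {
                // Send failed (e.g., brokers unreachable): persist locally so the
                // message can be replayed once the cluster is reachable again.
                saveToFallbackTable(topic, key, value);
            }
        });
    }

    // Hypothetical DB write standing in for whatever fallback store is used,
    // e.g. INSERT INTO stream_fallback (topic, msg_key, payload) VALUES (?, ?, ?)
    private void saveToFallbackTable(String topic, String key, String value) {
        System.err.printf("Fallback: stored %s/%s locally%n", topic, key);
    }
}
```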

@AnooijtD16582545, could you please check the ‘Related Content’ questions and carry out a PSC search on the keywords?

When Kafka is externalized and connected to an external cluster, the metadata, topic information, and message queuing are handled by the external Kafka cluster; Pega no longer maintains them in the stream DB tables. In case of a broker failure, the external Kafka cluster relies on replication and partition rebalancing across brokers, so queued messages remain durable and available, supporting business continuity and resilience.
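To make the durability claim above concrete, here is a minimal sketch of standard Kafka producer settings (not Pega-specific configuration): with acks=all, idempotence enabled, and a topic replication factor of 3 or more with min.insync.replicas=2 on the external cluster, an acknowledged write survives the loss of a single broker.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class DurableProducerConfig {
    public static Properties durableProps(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        // Require acknowledgement from all in-sync replicas before a write
        // is considered committed, so one broker failure loses nothing.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Retry transient failures without writing duplicate records.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        return props;
    }
}
```

Note that replication factor and min.insync.replicas are topic- and cluster-level settings managed on the Confluent side, not in these client properties.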

:warning: This is a GenAI-powered tool. All generated answers require validation against the provided references.

Related Content:

Externalisation of Kafka service

External Kafka in your deployment

Connecting a Pega Platform virtual machine deployment to use a streaming service

Configure External Kafka as a stream service