Extend Pega 8.7 infrastructure with an extra node

Hello, I'm trying to extend our Pega infrastructure with an extra node. The infrastructure runs on Docker Compose. The new node 3 was added with the same Docker Compose config as nodes 1 and 2. The old two-node infrastructure works properly, but with 3 nodes I get the errors below. All 3 nodes are in the same cluster. Does anyone have an idea what to fix?
2026-04-03 14:19:40,629 [OBSCHEDULER_THREAD_8] [ STANDARD] [ ] [ ] ( JobSchedulerLifecycle) ERROR - Job[pzQueueProcessorMaintenance] execution reset has failed.
com.pega.pegarules.pub.database.LockGoneException: Database-LockFailure-LockLost PZQUEUEPROCESSORMAINTENANCE
2026-04-03 14:19:39,579 [BSCHEDULER_THREAD_12] [ STANDARD] [ ] [ ] ( JobSchedulerLifecycle) ERROR - Job[pzRuleQueueItemProcessor] execution reset has failed.
com.pega.pegarules.pub.database.LockGoneException: Database-LockFailure-LockLost PZRULEQUEUEITEMPROCESSOR
2026-04-03 14:19:39,567 [OBSCHEDULER_THREAD_4] [ STANDARD] [ ] [ ] ( JobSchedulerLifecycle) ERROR - Job[pyExportJobsEnablementMonitor] execution reset has failed.
com.pega.pegarules.pub.database.LockGoneException: Database-LockFailure-LockLost PYEXPORTJOBSENABLEMENTMONITOR
2026-04-03 14:19:30,717 [BSCHEDULER_THREAD_11] [ STANDARD] [ ] [ ] ( JobSchedulerLifecycle) ERROR - Job[pzResetExpiredLogCategories] execution reset has failed.
com.pega.pegarules.pub.database.LockGoneException: Database-LockFailure-LockLost PZRESETEXPIREDLOGCATEGORIES

The problem is that node 3 was added with the same setup as the other nodes, but its Stream service is not joining the cluster cleanly, which is why you see under-replicated partitions and those lock-lost errors.
Fix node 3 by giving it a unique node identity, hostname, and persistent volume, and make sure its Stream service ports and cluster discovery settings match the other nodes exactly.
In Docker Compose, do not reuse the same local storage or broker identity from node 1 or node 2.
After correcting that, restart the full cluster so Kafka/Stream can rebuild replication properly across all three nodes.
Once the partitions are healthy again, the Job Scheduler lock errors will stop because the node will no longer lose its cluster ownership.
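
For illustration, here is a minimal Docker Compose sketch of what an isolated node 3 could look like. The image tag, service and volume names, mount path, and environment variables are assumptions based on the public pegasystems/pega image, not your actual configuration, so adapt them to match nodes 1 and 2:

```yaml
version: "3.8"
services:
  pega-node3:
    image: pegasystems/pega:8.7.x        # assumed tag; use exactly the same image as nodes 1 and 2
    hostname: pega-node3                 # unique per node; nodes must not share a hostname
    environment:
      NODE_TYPE: "Stream,BackgroundProcessing"     # assumed node types; match the other nodes
      JDBC_URL: "jdbc:postgresql://db:5432/pega"   # same shared Pega database as the other nodes
      DB_USERNAME: "pega"
      DB_PASSWORD: "pega"
    volumes:
      # Node-local Stream/Kafka data: a fresh named volume per node,
      # never shared with or copied from node 1 or node 2.
      - pega-node3-stream:/opt/pega/kafkadata
    ports:
      - "8083:8080"                      # host port unique per node when all run on one host

volumes:
  pega-node3-stream:
```

The key point is the volume stanza: each node gets its own empty stream volume, so the Kafka broker identity and log directories are never duplicated across nodes.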

Thank you @Sairohith. When all 3 nodes are up, the problem does not occur; the partition problem appears only when at least one node is down. The configuration is the same for all nodes. Node 3 has a unique node identity, hostname, and persistent volume. Its Stream service is listening and is listed by the other nodes.