Maximum Scalability Required when shifting from Agent to QP

Hi Team,

We have the setup below to achieve maximum throughput with Agents:

Number of nodes with the node classification defined on the standard agent: 34

Number of agents of the same type: 10

By the number of agents we mean multiple copies of the same agent, i.e. ABC0, ABC1, and so on up to ABC9: ten copies in total, numbered 0 to 9.

The current throughput we can achieve is 10 * 34 = 340 threads at a time.

Now we are moving to QP. How can we achieve the same kind of throughput?

@ShwetaB96

On the Queue Processor, configure the number of threads to be executed per node (the default is 5). Increase this to the desired number (10 per node).

@ShwetaB96 QP runs under Kafka and by default has 22 threads on one node.

@SriharshaAnika

Hi Sriharsha,

Even if I increase this number to 10, the maximum I can achieve is 20.

But we need it to be 340. Is there any design we can use for this?

@GinoP851

What I understood is that a QP can have a maximum of 20 threads in a cluster.

Can you please help me understand why you say it is 22?

@ShwetaB96 Can you check this article?

Pegasystems Documentation?

@ShwetaB96 20 items per node will be processed at a time.

You have 34 nodes, right? So it's 20 * 34 = 680.
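As a sanity check, the two throughput figures from this thread can be computed side by side (a trivial sketch; the per-node counts are just the numbers quoted above):

```python
# Agent setup: 10 agent copies (ABC0..ABC9) running on each of 34 nodes
agent_threads = 10 * 34

# Queue Processor: up to 20 threads per node across the same 34 nodes
qp_threads = 20 * 34

print(agent_threads)  # 340
print(qp_threads)     # 680
```

So a single Queue Processor at 20 threads per node already exceeds the 340-thread agent setup, provided all 34 nodes run the QP.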

@ShwetaB96

You need to create multiple Queue Processors to process that many items. Before queuing an item to a particular Queue Processor, check its queue depth (based on the count of items in Scheduled and Ready to process/Delayed status). That way you can balance the load and achieve high throughput of item processing.

In any case, Queue Processors are recommended over agents for high throughput.

Please check your system resources before creating multiple Queue Processors.
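A minimal sketch of the routing idea above: before queuing, pick the Queue Processor copy with the fewest pending items. The function and queue names here are hypothetical illustrations, not Pega APIs; in practice you would look up the pending-item counts for each QP and then queue the item to the chosen processor (e.g. via Queue-For-Processing in an activity):

```python
# Hypothetical load-balancing helper: choose the Queue Processor with the
# fewest pending items (Scheduled + Ready to process/Delayed).
def pick_queue_processor(queue_depths):
    """queue_depths: dict mapping QP name -> count of pending queue items."""
    return min(queue_depths, key=queue_depths.get)

# Example with three copies of the same QP rule (names are made up)
depths = {"ProcessCaseQP_1": 120, "ProcessCaseQP_2": 45, "ProcessCaseQP_3": 300}
target = pick_queue_processor(depths)
print(target)  # ProcessCaseQP_2 -- the least-loaded copy
```

The design trade-off is the same one mentioned above: more QP copies give more total threads, but each copy consumes system resources, so size the number of copies against what your nodes can sustain.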