Greetings. Hope you are well! I am working on OpenShift container deployments on Pega 8.8.2 and CDH 8.7. Can we use HPA to scale up and scale down? What will be the impact if batch nodes are running campaigns and we specify a targetCPUUtilization of 80%? For example, suppose HPA scales out to 12 pods and the campaign partitions are distributed across all 12 pods; once CPU utilization drops below 80%, scale-down terminates some batch pods even though their partitions are still in progress, and we get "Shutdown in progress" errors because pods are terminated based only on targetCPUUtilization.
Can someone please let me know how scale down will work?
@KOMARINA In Pega Kubernetes deployments, you can use the Horizontal Pod Autoscaler (HPA) to scale up or down based on observed load.
However, if batch nodes are running campaigns and the HPA scales down the number of pods based on CPU utilization alone, it can disrupt in-flight work. If partitions are still in progress when the HPA terminates batch pods, you will see the "Shutdown in progress" message. You should therefore tune the autoscaler metrics and scale-down behavior so that scaling does not interrupt running processes, and if necessary intervene manually to prevent the HPA from scaling down too aggressively.
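One way to make scale-down less aggressive is the `behavior.scaleDown` section of the `autoscaling/v2` HPA API, which lets you add a stabilization window and rate-limit pod removal. Below is a minimal sketch; the resource names (`pega-batch-hpa`, `pega-batch`) and the specific window and policy values are assumptions you would tune for your campaign run times, not values from Pega documentation:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pega-batch-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pega-batch            # hypothetical batch-tier deployment
  minReplicas: 3
  maxReplicas: 12
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleDown:
      # Require 30 minutes of sustained low CPU before any scale-down
      stabilizationWindowSeconds: 1800
      policies:
      - type: Pods
        value: 1                # remove at most one pod...
        periodSeconds: 600      # ...every 10 minutes
```

This does not make termination safe for in-progress partitions by itself, but it reduces how often and how fast pods are reclaimed while a campaign is still draining.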
This is a GenAI-powered tool. All generated answers require validation against the provided references.
Greetings. This is a known issue: when dataflows shut down, in-progress partitions will have a problem. However, Pega already has a hotfix (HFix) for this, and which hotfix applies depends on the Pega Platform version you are running. It solved the problem for us, and we are able to use HPA without any issues.
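Alongside the hotfix, you can give batch pods more time to drain before Kubernetes force-kills them by raising `terminationGracePeriodSeconds` and adding a `preStop` hook on the batch container. This is a generic Kubernetes sketch, not Pega's documented shutdown procedure; the container name, grace period, and the placeholder `sleep` command are assumptions you would replace with your own quiesce logic:

```yaml
# Fragment of the batch-tier pod template (e.g. in a Deployment)
spec:
  # Give in-flight partitions time to finish before SIGKILL is sent
  terminationGracePeriodSeconds: 600
  containers:
  - name: pega-batch            # hypothetical container name
    lifecycle:
      preStop:
        exec:
          # Placeholder delay; substitute your actual drain/quiesce step,
          # which depends on your Pega Platform version and image
          command: ["sh", "-c", "sleep 120"]
```

The `preStop` hook runs before the container receives SIGTERM, so the combination delays shutdown long enough for short-running partitions to complete or be handed off.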