AWS offers Amazon Elastic Kubernetes Service (EKS) in the following variants:
Classic EKS with self-managed worker nodes (https://docs.aws.amazon.com/eks/latest/userguide/worker.html). In this variant, the customer fully manages the EC2 instances that make up the Kubernetes data plane on which the Pods run.
EKS on Fargate (https://docs.aws.amazon.com/eks/latest/userguide/fargate.html). As a customer, you don't manage the EC2 instances in the cluster yourself; you only specify the Pods (e.g. with their memory requirements), and AWS automatically provisions the underlying instances. You have no direct access to these instances and do not see them in your own AWS account.
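To make the contrast concrete, here is a minimal sketch of an eksctl ClusterConfig declaring both a managed node group (the middle ground between self-managed nodes and Fargate) and a Fargate profile. All names, sizes, and the namespace are illustrative placeholders, not Pega recommendations:

```yaml
# Illustrative eksctl ClusterConfig; names and sizes are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: example-cluster
  region: us-east-1

# Managed node group: EKS handles the node lifecycle, but the EC2
# instances are still visible in your own AWS account.
managedNodeGroups:
  - name: example-managed-nodes
    instanceType: m5.xlarge
    desiredCapacity: 3

# Fargate profile: Pods in the matching namespace run on Fargate;
# no EC2 instances for these workloads appear in your account.
fargateProfiles:
  - name: example-fargate-profile
    selectors:
      - namespace: example-namespace
```

This is only a sketch of the two deployment shapes being discussed; it says nothing about which of them Pega supports.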
Based on the reply from Brett Allen from Pega on the other post, Pega does not support AWS Fargate, and the option to use managed node groups is untested but may work.
We are planning to use Option 2. Since that post is over 3 years old, could the team please confirm whether anything has changed since then?
@SaikatP5 @MarijeSchillern I’ve used managed node groups in my personal lab deployments since then with no issues and don’t see any reason they wouldn’t be suitable. If you’re satisfied with the AMI and other settings EKS uses for the managed group, I don’t know of any issues with them.
@KEISUKE.T The persistent storage would have been required for Stream nodes back then. Now, with externalized services, that may no longer apply. However, Fargate is not tested or specifically supported, nor is any other specific approach to building the cluster. Administration of the cluster is a client responsibility.
@BrettAllen
Thank you for your quick reply.
Do you know whether there is any documentation on persistent storage volumes for Stream nodes?
I’d like to know what the technical limitations are.
From Pega’s point of view, is it acceptable to use Fargate as long as it is tested on the user’s side?