AWS Authentication Profile without IAM Credentials

I am trying to configure an AWS S3 repository. As you know, the repository rule requires an AWS Authentication Profile, which in turn requires the AWS Access Key ID and Secret Access Key of an IAM user.

The issue is that our enterprise AWS account does not allow IAM users. Policy requires that all AWS logins be federated through IAM roles integrated with Okta/AD SSO.

How can we create an AWS Authentication Profile in Pega when company policy does not allow creating IAM users?

One workaround we tried was to obtain temporary access credentials (access key ID and secret access key) using saml2aws (GitHub - Versent/saml2aws: CLI tool which enables you to login and retrieve AWS temporary credentials using a SAML IDP).

The approach was to periodically refresh the Pega AWS Authentication Profile with the temporary credentials retrieved via saml2aws. However, AWS authentication does not work with these temporary credentials.
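One likely explanation (an assumption on my part, not confirmed in this thread): temporary STS credentials have three parts, not two. Every request signed with them must also carry the session token, so if the Authentication Profile only stores the key ID and secret, S3 rejects the requests. A minimal sketch outside Pega, with placeholder values:

```shell
# Temporary STS credentials have THREE parts; the session token is mandatory.
# Supplying only the first two causes signature/authorization failures.
export AWS_ACCESS_KEY_ID="ASIA..."    # temporary key IDs start with ASIA (permanent ones with AKIA)
export AWS_SECRET_ACCESS_KEY="..."
export AWS_SESSION_TOKEN="..."        # required for temporary credentials

# With all three set, the same credentials work fine from the AWS CLI:
aws s3 ls s3://mys3bucketname/
```

If the profile has no field for the session token, this would explain why refreshing only the key pair is not enough.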

Any recommendations or best practices?

@SudheeshM8807 If you’re running on EKS this can be achieved. You can create a repository using DSS instead of the Repository Rule form which allows you to skip the Authentication Profile.

Pega-Engine storage/class/&lt;RepositoryName&gt;:/type aws-s3

Pega-Engine storage/class/&lt;RepositoryName&gt;:/bucket &lt;bucket-name&gt;

Pega-Engine storage/class/&lt;RepositoryName&gt;:/rootpath &lt;root-path&gt;

The pods need to run under a service account that can use OIDC to assume an IAM role with access to the bucket. This takes a fair bit of configuration on the AWS side; you will need someone with the expertise to set up your cluster and IAM.
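For reference, the AWS-side setup described above (IAM roles for service accounts) can be sketched with eksctl roughly as follows. The cluster name, namespace, service account name, and policy ARN are all placeholders for your environment, and the Pega deployment must be configured to use the resulting service account:

```shell
# One-time: associate an OIDC identity provider with the EKS cluster
eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve

# Create a Kubernetes service account bound to an IAM role that can
# access the S3 bucket (policy ARN below is a placeholder)
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace pega \
  --name pega-sa \
  --attach-policy-arn arn:aws:iam::123456789012:policy/pega-s3-access \
  --approve
```

The Pega pods then pick up the role automatically via the service account's injected web identity token; no access keys are stored anywhere.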

Thanks @BrettAllen, good to know. We are still on on-prem VMs and not planning an EKS move this year. Do you happen to know why the temporary STS credentials are not recognized by the AWS Auth profile?

@SudheeshM8807 This is similar to BrettAllen’s response; see if this works:

  1. Grant the EC2 service account (which runs the Pega service) an IAM role with read/write access to the S3 bucket (mys3bucketname)

  2. Create a DSS (see the attached screenshot, DSS.jpg)

  3. The repository “myrepository” will be created

  4. This repository doesn’t need an authentication profile; it runs with the privileges of the EC2 service account (which has the relevant IAM role for the S3 bucket you want to access)
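Step 1 above can be done by attaching an instance profile (which wraps the IAM role) to the EC2 instance running Pega. A sketch using the AWS CLI, with the instance ID and profile name as placeholders:

```shell
# Attach an instance profile wrapping the S3-access IAM role to the
# EC2 instance that runs the Pega service (placeholder identifiers)
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=pega-s3-instance-profile
```

Once attached, the AWS SDK on the instance obtains rotating temporary credentials from the instance metadata service automatically, which is why no authentication profile is needed in Pega.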

Hi @BrettAllen, can we have two DSS entries for connecting to two different S3 buckets? If yes, how should we do it via DSS?

@PramodDornala: create the DSS entries as follows

Pega-Engine storage/class/Repository1:/type aws-s3

Pega-Engine storage/class/Repository1:/bucket Bucket1

Pega-Engine storage/class/Repository2:/type aws-s3

Pega-Engine storage/class/Repository2:/bucket Bucket2

where Repository1 and Repository2 are the names of the Pega Repository rules that will be created, and Bucket1 and Bucket2 are your S3 bucket names in AWS
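Before expecting both repositories to appear in Pega, it may help to confirm from the Pega host that the instance role can actually reach both buckets (a sanity check I am suggesting, not something from the thread; bucket names are placeholders matching the DSS above):

```shell
# Run on the EC2 host; both commands should list contents without
# prompting for credentials if the instance role is set up correctly
aws s3 ls s3://Bucket1/
aws s3 ls s3://Bucket2/
```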

@Kannesh On point 3, where it says the repository will be created: will Pega automatically create this repository, and if so, do we need any configuration apart from the two DSS entries?

@BrettAllen After creating the DSS rules, if we are not able to see the repository rule, does that indicate the EC2 instance doesn’t have an IAM role attached?
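One way to check this (a suggestion on my part) is to query the EC2 instance metadata service directly from the Pega host:

```shell
# An empty response or 404 means no IAM role is attached to the instance;
# if a role is attached, its name is printed.
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# On IMDSv2-only instances, a session token must be fetched first:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/
```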

Thank you @KanneeswaranK17088174, I was able to create the repository and connect my listener to a specific folder.