How to Enable Log Persistence (Log Retention Even When a Pod Dies) in a Kubernetes Cluster

Need help enabling Pega application log preservation without configuring external tools such as Fluentd, etc.

Implemented Solution: Pega Logs Retention in OpenShift | Support Center

@KishoreSanagapalli

The team that I work with uses EFK. Elasticsearch-Fluentd-Kibana is a standard logging stack that is provided as an example to help you get started.

More details on EFK can be found in the Addons Helm chart.

@shanp

Thanks for your input. We’re working on an alternate method to retain the application logs regardless of pod termination status. I’ll post the solution we have been working on here once it’s successful.

@Kishore Sanagapalli
I’m working on a similar setup. I tried the sidecar container method, but not all logs are being retained.

Were you able to achieve log retention with pods? Please help.

@AnilN_7

Hi,

If you’re implementing the sidecar container approach, you’ll need to do the following.

a. Redirect the logs to a central location (NFS) instead of storing them in the pods/containers. That is, create a separate folder on the NFS share and feed its path into the Kubernetes ConfigMap.

b. Write the console output to a log file (stored in the centralized NFS location) and filter it as needed.

c. Use Kibana/Fluentd, which support this logging-stack option.

d. Create the necessary indexes to identify the log types and rearrange them.

e. You might need to create a couple of Ingresses to manage the load and traffic for the log-redirection endpoints.
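Steps (a) and (b) above can be sketched as a pod spec. This is a minimal, hedged sketch, not the poster's actual configuration: the names (`pega-web`, `pega-log-config`, `pega-logs-pvc`), the image tag, and the log path `/shared/logs` are all illustrative assumptions.

```yaml
# Hedged sketch: all names, images, and paths are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: pega-log-config
data:
  LOG_DIR: /shared/logs          # step (a): the NFS-backed path fed to the app
---
apiVersion: v1
kind: Pod
metadata:
  name: pega-web
spec:
  containers:
  - name: pega
    image: pega/pega:latest      # illustrative image tag
    envFrom:
    - configMapRef:
        name: pega-log-config    # the app reads its log path from here
    volumeMounts:
    - name: logs
      mountPath: /shared/logs    # application log files land here
  - name: log-sidecar            # step (b): sidecar streams the file to stdout
    image: busybox
    command: ["sh", "-c", "tail -n+1 -F /shared/logs/PegaRULES.log"]
    volumeMounts:
    - name: logs
      mountPath: /shared/logs
      readOnly: true
  volumes:
  - name: logs
    persistentVolumeClaim:
      claimName: pega-logs-pvc   # NFS-backed PVC; contents survive pod deletion
```

Because the volume is backed by an NFS PersistentVolumeClaim rather than an `emptyDir`, the log files outlive both container restarts and pod deletion, which is the retention property being discussed in this thread.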

@Kishore Sanagapalli Yes, my solution writes logs even when the container restarts or when HPA triggers a new pod.

Thanks,
Azeez

@Kishore Sanagapalli

An alternate method to retain the Pega application logs without the need for ELK, Kibana, Fluentd, etc. Please refer to the link below for more details.

Pega Logs Retention in OpenShift | Support Center

@Kishore Sanagapalli @PhilipShannon

Solution implemented successfully. Please see: How to Enable Log Persistence (Log Retention Even When a Pod Dies) in a Kubernetes Cluster | Support Center (pega.com)

@Kishore Sanagapalli

In my case, I mounted a GCS bucket as a volume on the Pega container and changed the default log location to this GCS mount. My logs are now saved to the GCS bucket, where I can view them. Even if the pod is deleted, the logs stay in the GCS bucket.
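The GCS-bucket-as-volume idea described above could look roughly like the following on GKE with the Cloud Storage FUSE CSI driver enabled. This is a hedged sketch under stated assumptions, not the poster's actual manifest: the bucket name, service account, image, and mount path are all hypothetical.

```yaml
# Hedged sketch: assumes GKE with the Cloud Storage FUSE CSI driver enabled;
# bucket, service account, image, and paths are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pega-web
  annotations:
    gke-gcsfuse/volumes: "true"      # injects the gcsfuse sidecar on GKE
spec:
  serviceAccountName: pega-gcs-sa    # hypothetical KSA with access to the bucket
  containers:
  - name: pega
    image: pega/pega:latest          # illustrative image tag
    volumeMounts:
    - name: gcs-logs
      mountPath: /opt/pega/logs      # point the Pega default log location here
  volumes:
  - name: gcs-logs
    csi:
      driver: gcsfuse.csi.storage.gke.io
      volumeAttributes:
        bucketName: my-pega-logs-bucket   # hypothetical bucket
```

With this layout the log files are written straight into object storage, so they remain readable in the bucket after the pod is deleted or rescheduled.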

Thanks,
Azeez

@AzeezS16755384

I did a similar setup, but with some changes to the prlog4j2.xml file. It’s working successfully in an OpenShift environment.

Does your setup write the logs when the container restarts, without any changes to the prlog4j2.xml file?

@AzeezS16755384

I also figured out the same approach, but with a few changes to the prlog4j2.xml file as well. It’s working successfully.
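For context, the kind of prlog4j2.xml change being discussed here usually amounts to pointing a file appender at the mounted persistent path. The fragment below is a hedged sketch of a standard Log4j2 rolling-file appender, not Pega's shipped configuration: the appender name, file paths, and layout pattern are illustrative assumptions.

```xml
<!-- Hedged sketch: a Log4j2 RollingFile appender writing to a mounted
     persistent volume; appender name, paths, and pattern are illustrative. -->
<Appenders>
  <RollingFile name="PersistentPegaLog"
               fileName="/shared/logs/PegaRULES.log"
               filePattern="/shared/logs/PegaRULES-%d{yyyy-MM-dd}-%i.log.gz">
    <PatternLayout pattern="%d [%t] %-5p %c - %m%n"/>
    <Policies>
      <TimeBasedTriggeringPolicy/>
      <SizeBasedTriggeringPolicy size="250 MB"/>
    </Policies>
  </RollingFile>
</Appenders>
```

Because the `fileName` targets the mounted volume rather than the container's local filesystem, the appender keeps writing to the same files across container restarts, which matches the behavior the posters describe.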