How To: Reduce the External Dependency on the JDBC Driver in an OpenShift Environment

Pega application pods evidently need a JDBC driver to connect to the database. This can be supplied through a driver URI, or through a customized Pega Docker image that bundles the JDBC driver.

However, with the driver URI setup, the portal where the JDBC driver is uploaded is not consistently available, and we cannot customize the Docker image due to client-side policy restrictions.

There is also no integrated repository available with the provided OpenShift cluster.

How can we avoid this issue in this context?

@Kishore Sanagapalli

There's a solution implemented as a POC. I'll post it once its test results are successful.

@Kishore Sanagapalli

We have removed this dependency by mounting the file during startup of the default containers.

According to the documentation, the DB driver file has to be placed in the ${CATALINA_HOME}/lib/ folder.

You can, for instance, create a secret that contains your DB driver file and reference it in the deployment YAML.

Deployment YAML adjustments:

volumeMounts:
  - name: pega-db-driver
    mountPath: /opt/pega/lib/db-driver.jar   # in older versions this must be /usr/local/tomcat/db-driver.jar
    subPath: postgres.jar

volumes:
  - name: pega-db-driver
    secret:
      secretName: db-driver
      defaultMode: 420

The db-driver secret should then contain the actual postgres.jar file.
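One way to build that secret manifest without touching the cluster is plain shell (a sketch: the jar name `postgresql-42.7.3.jar` is an assumption, and the demo writes a tiny placeholder file so it runs standalone — substitute the real driver jar):

```shell
# Build the db-driver Secret manifest from a local JDBC driver jar.
# NOTE: "postgresql-42.7.3.jar" is a placeholder name; a tiny stand-in
# file is created here so the script is self-contained.
JAR=postgresql-42.7.3.jar
printf 'PK placeholder' > "$JAR"      # replace with the real driver jar

cat > db-driver.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: db-driver
data:
  postgres.jar: $(base64 -w0 "$JAR")
EOF

echo "wrote db-driver.yaml"
```

Applying the generated manifest (for example with `oc apply -f db-driver.yaml`) creates the secret referenced by `secretName: db-driver` above. Note that Kubernetes caps a Secret object at 1 MiB, so this only works for small driver jars.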

Additionally, you have to make sure that in the pega-environment-config ConfigMap the JDBC_DRIVER_URI is set to an empty value. This way the lookup is skipped and the .jar file from the lib directory is used directly.
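For reference, that setting in full ConfigMap form (the apiVersion and metadata wrapper are reconstructed here; only the `pega-environment-config` name and the key come from this thread):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pega-environment-config
data:
  JDBC_DRIVER_URI: ""
```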

@ManuelZ8

Hi, thanks for posting your idea. Could you please share more information about the changes you made to the deployment file?

I already tried making the changes once and it did not work for me. Since you confirmed that you have implemented it, it would be helpful if you could share the exact changes you made to the deployment file.

@Kishore Sanagapalli

The deployment.yaml adjustments are already in the post. You just have to add the volume and volumeMount.

deployment.yaml

...
volumeMounts:
  - name: pega-db-driver
    mountPath: /opt/pega/lib/db-driver.jar
    subPath: postgres.jar
...
volumes:
  - name: pega-db-driver
    secret:
      secretName: db-driver
      defaultMode: 420
...

In addition to the deployment changes, these two things have to be done:

  1. Adjust pega-environment-config.yaml

  pega-environment-config.yaml

  JDBC_DRIVER_URI: ""

  This value has to be empty so Pega looks for the existing file during startup and does not try to fetch the driver externally.

  2. Create a secret that holds the driver file

  db-driver.yaml

  apiVersion: v1
  kind: Secret
  metadata:
    name: db-driver
  data:
    postgres.jar: >-
      ............

  The file name here has to match the subPath in the deployment.yaml file.

@Kishore Sanagapalli I don't know how you could circumvent this restriction. You could discuss with your DevOps team in which ways it is possible to place large files into specific folders during startup. If you find out how to achieve this, I'd be happy to know. Maybe you can leverage a custom init container and/or a PVC to get the file to the place where you want it to be.
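A minimal sketch of the init-container idea mentioned above (the helper image, download URL, container name, and volume name are all assumptions for illustration, not something from Pega's charts):

```yaml
# Hypothetical deployment fragment: an init container fetches the driver
# into an emptyDir, and the main container mounts just that one file.
initContainers:
  - name: fetch-db-driver
    image: curlimages/curl:latest        # assumed helper image
    command:
      - sh
      - -c
      - curl -fsSL -o /drivers/db-driver.jar http://driver-host.internal/postgres.jar  # assumed URL
    volumeMounts:
      - name: db-driver-dir
        mountPath: /drivers
containers:
  - name: pega-web-tomcat
    volumeMounts:
      - name: db-driver-dir
        mountPath: /opt/pega/lib/db-driver.jar
        subPath: db-driver.jar
volumes:
  - name: db-driver-dir
    emptyDir: {}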

As long as the db driver is in that folder during startup and the JDBC_DRIVER_URI environment variable is empty, the application is good to go and will boot up.

@ManuelZ8

In OpenShift, there's an Object Bucket Claim option that could serve this need. I'm exploring that option as a solution to my issue and will post the complete solution once the POC is successful.

@ManuelZ8 I understand how you implemented it. The problem we have with implementing the same method is that creating a key/value-pair secret to hold the jar file does not succeed.

The error we get is: "Too long: must have at most 1048576 bytes" for the "data" field.
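That error is the Kubernetes API server's 1 MiB cap on a Secret object, and base64 encoding inflates the payload by roughly 4/3, so even a jar well under 1 MB on disk can exceed the limit. A quick check of whether a jar would fit (the demo generates a synthetic 1.2 MB stand-in file so it runs standalone):

```shell
# Check whether a driver jar fits under the 1 MiB Secret limit once
# base64-encoded. A 1.2 MB stand-in file is generated for the demo.
LIMIT=1048576
head -c 1200000 /dev/zero > demo-driver.jar    # stand-in for the real jar
RAW=$(stat -c%s demo-driver.jar)
ENCODED=$(( (RAW + 2) / 3 * 4 ))               # base64 size, no line wrapping
if [ "$ENCODED" -gt "$LIMIT" ]; then
  echo "too large for a Secret: $ENCODED bytes after base64"
else
  echo "fits in a Secret: $ENCODED bytes after base64"
fi
```

For the 1.2 MB stand-in this takes the "too large" branch (1600000 encoded bytes), which matches the error above: a typical PostgreSQL JDBC jar is simply too big for the secret-based approach.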

@ManuelZ8

There's a new solution developed in our environment using the httpd:latest Docker image, exposing port 8081 and customizing folder permissions to avoid using root permissions in OpenShift.

The solution is working fine, but the only concern is the critical and high-severity vulnerabilities identified in the scan report. We will look for an alternate image that is free from these vulnerabilities.
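With that httpd approach, the driver is fetched over HTTP again, so JDBC_DRIVER_URI is set back to a URL instead of being left empty. A sketch of the setting (the in-cluster service name `jdbc-driver-httpd` and the jar name are assumptions):

```yaml
# Hypothetical: point the driver lookup at the in-cluster httpd service
# serving the jar on port 8081.
JDBC_DRIVER_URI: "http://jdbc-driver-httpd:8081/postgres.jar"
```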