Impact on column size to 20,000 characters

Hi,

What will be the impact if I raise the column size to 20,000 characters?

Why not encode the data before saving, and decode it when retrieving?

Can you share your requirement and explain why you need that? It is like saving a CLOB in the column, which will slow down inserts and updates.


Hello @Bhargavj3227,

Raising a database column size to 20,000 characters is a big no in most Pega applications. Whatever the requirement, bumping up the column size may satisfy the immediate ask, but it will eventually introduce significant performance, scalability, and maintenance risks, especially if the table is transactional. Encoding the data (for example, using Base64) does not alleviate these concerns; in fact, it increases the payload size by roughly a third, resulting in an even greater performance impact during inserts, updates, and reads.
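To make the Base64 overhead concrete, here is a minimal sketch (the 15,000-byte payload size is an illustrative assumption, not from the thread) showing that encoding inflates the data by exactly 4 output characters per 3 input bytes:

```python
import base64

payload = b"x" * 15_000  # hypothetical raw payload of 15,000 bytes
encoded = base64.b64encode(payload)

# Base64 emits 4 characters for every 3 input bytes,
# so the encoded form is ~33% larger than the raw data.
print(len(payload))                 # 15000
print(len(encoded))                 # 20000
print(len(encoded) / len(payload))  # ~1.33
```

So a payload that fits in a 15,000-character column before encoding would already overflow a 20,000-character column after it, which is why encoding makes the problem worse rather than better.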

If this is truly unavoidable, the approaches below can be used with caution, ensuring that retrieval of the data occurs strictly on a need-only basis.

  1. Use Attachments / Repository-Backed Storage (if the content is large or semi-structured): Store it as a Data-WorkAttach-File or in a managed repository (S3, Azure Blob, File Storage). Persist only a reference (pzInsKey / URI / metadata) in the case.
  2. Split the Data into Structured Fields (if only portions of the content are frequently accessed): Normalize the data into smaller, queryable properties, and keep the large raw payload external or archival-only.
  3. Externalize to an Integration Store (for JSON/XML payloads or logs): Persist the payload externally (object store, NoSQL, log service) and store only an identifier in Pega.
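The externalize-and-reference pattern in options 1 and 3 can be sketched as follows. This is a minimal illustration, not Pega API code: the in-memory `object_store` dict stands in for a real repository (S3, Azure Blob, etc.), and the `externalize`/`retrieve` helpers are hypothetical names.

```python
import hashlib
import json

# In-memory stand-in for an external object store (S3, blob storage, etc.).
object_store: dict = {}

def externalize(payload: dict) -> str:
    """Persist a large payload externally; return a compact reference."""
    body = json.dumps(payload)
    key = hashlib.sha256(body.encode()).hexdigest()  # content-addressed key
    object_store[key] = body
    return key  # only this short identifier goes into the database column

def retrieve(key: str) -> dict:
    """Fetch the payload on a need-only basis, using the stored reference."""
    return json.loads(object_store[key])

ref = externalize({"response": "large GenAI output..." * 100})
print(len(ref))  # 64-character digest, regardless of payload size
```

The key point is that the row stores a fixed-length reference rather than the payload itself, so inserts and updates stay fast no matter how large the external content grows.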

One reason comes to mind. I've had a scenario where I wanted to store the JSON response of a GenAI agent in a CLOB column. Even then, you want to make sure you're not reaching levels where it would generate a Pega alert. Ultimately, I support the approach suggested by @VikasT when there is even a remote chance of an output of 20k characters…