I am using Amazon S3 connectivity for file attachments. I copied the pyMaxDragDropAttachSizeMB rule into my application ruleset and increased the value from 25 MB to 30 MB, which allowed me to attach files successfully. To support 1 GB files, I further increased the value to 1024 MB. When attaching files larger than 1 GB, I receive an error stating that the maximum size limit for attachments on the work case has been exceeded, which is expected.
The issue is that when I attempt to attach files smaller than 1 GB, the attachment does not save to the case at all.
According to Pega documentation, the attachment size limit is 1 GB, but uploading files of this size often results in a timeout. PDN forum articles reference a default maximum file size of 25 MB, so I am unclear about the actual default limit.
My question is: Is it possible to upload an attachment larger than 1 GB to a work object (case) in Pega? If so, what is the recommended approach to do this successfully, excluding the multipart upload approach?
Default Limit: Pega’s out-of-the-box limit for file attachments to a work object is 25 MB.
Configurable Limit: You can increase this via:
pyMaxDragDropAttachSizeMB (as you’ve done)
Initialization/MaximumFileUploadSizeMB in prconfig.xml or as a Dynamic System Setting (DSS)
Documented Maximum: Pega documentation states a theoretical max of 1 GB for attachments, but this is rarely practical due to timeout and memory constraints.
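If you prefer the DSS route over editing prconfig.xml, a prconfig-backed DSS typically mirrors the prconfig path in its setting purpose. A sketch of the record (verify the exact purpose string and casing for your Pega version):

```
Owning Ruleset:   Pega-Engine
Setting Purpose:  prconfig/initialization/maximumFileUploadSizeMB/default
Value:            1024
```

A node restart is generally required for prconfig-level settings to take effect on every node.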
Why Files <1 GB Fail to Save
Even though you’ve set the limit to 1024 MB, files under 1 GB may still fail to save for several reasons:
Timeouts: Large files may exceed HTTP request timeout thresholds.
Memory Constraints: JVM heap size or server memory may be insufficient to process large uploads.
Amazon S3 Integration: If you’re using S3, ensure the connector is configured to handle large payloads and that the file is successfully uploaded before the case commits.
UI vs Backend Mismatch: Sometimes the UI allows the file to be selected, but backend validations or connectors silently fail.
Recommended Approach (Without Multipart Upload)
If you want to support >1 GB files without multipart, here’s what you can try:
Increase Server Resources:
Boost JVM heap size.
Increase HTTP request timeout settings.
Optimize S3 Connector:
Ensure S3 bucket policies and connector settings support large single-part uploads.
Use pre-signed URLs for direct-to-S3 uploads, then link the file metadata to the case.
Use External File Reference:
Instead of uploading the file into Pega, store it in S3 and attach a reference (URL or metadata) to the case.
This avoids the need for Pega to handle the file directly.
Audit Logs & Tracer:
Use Pega Tracer and logs to identify where the failure occurs—whether during upload, save, or commit.
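The pre-signed URL option above can be sketched as follows. This is a minimal illustration using only the Python standard library so the signing steps are visible; in practice you would use the AWS SDK (for example, boto3's `generate_presigned_url`). The bucket, key, and credentials below are placeholders.

```python
import datetime
import hashlib
import hmac
import urllib.parse

def presign_s3_put(bucket, key, region, access_key, secret_key,
                   expires=3600, now=None):
    """Build a SigV4 pre-signed PUT URL using only the standard library."""
    now = now or datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items()))
    # Canonical request: method, path, query, headers, signed headers, payload
    canonical = "\n".join([
        "PUT", "/" + urllib.parse.quote(key), query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical.encode()).hexdigest()])
    # Derive the signing key by chained HMACs, then sign the string-to-sign
    def sign(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()
    k = sign(sign(sign(sign(("AWS4" + secret_key).encode(),
                            datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(k, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return (f"https://{host}/{urllib.parse.quote(key)}"
            f"?{query}&X-Amz-Signature={signature}")
```

The client (or browser) then PUTs the file body directly to this URL, so the payload never traverses the Pega tier; the case only stores the bucket/key metadata as the attachment reference.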
Uploading >1 GB through the standard Pega attachment control is not reliable unless you bypass the app server or use chunking. Your change to pyMaxDragDropAttachSizeMB only raises the client-side cap; you must also raise the server cap via the DSS initialization/maximumFileUploadSizeMB (set to ≥1024 on every node and restart). Then raise the web-tier limits: Tomcat maxPostSize/maxSwallowSize (or the Jetty equivalents), any proxy/WAF body-size limit (e.g., Nginx client_max_body_size), and the idle/request timeouts end-to-end.

Even with all of that, big single POSTs often time out. The practical approach is direct-to-S3: generate a pre-signed PUT and attach “by reference” (store the S3 key/metadata on the case) so the file never traverses Pega; S3 supports single-part PUT up to 5 GB, so 1–2 GB works well. If you must keep “in-Pega” storage, use Pega’s repository direct upload (if available in your version) or compress/split files.

Your “<1 GB saves nothing” symptom almost always means a lower layer is still blocking the request; align all limits and set the UI cap below the smallest server/proxy cap to avoid silent drops.
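For reference, the Tomcat caps mentioned above live on the HTTP connector in server.xml. A sketch with illustrative values (sizes are in bytes; -1 means unlimited, and the exact attributes should be checked against your Tomcat version):

```xml
<!-- server.xml: allow large request bodies and slow uploads -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="600000"
           maxPostSize="2147483647"
           maxSwallowSize="-1" />
```

On Nginx, the corresponding directives are `client_max_body_size` (e.g., `client_max_body_size 2g;`) together with raised `proxy_read_timeout`/`send_timeout` values.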
I tried this approach and it works: I was able to attach a 1 GB file, and the same file appears in the Amazon S3 bucket.
By default, Pega allows file uploads up to 25 MB in size.
To enable uploads larger than 25 MB, you need to configure the following environment setting
in the prconfig.xml file:
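Presumably the setting meant here is the one named in the earlier answer, Initialization/MaximumFileUploadSizeMB. A prconfig.xml entry along these lines (verify the exact name and casing for your Pega version):

```xml
<env name="initialization/maximumFileUploadSizeMB" value="1024" />
```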
Note: Pega supports a maximum file upload size of 2 GB (2048 MB).
Additionally, ensure that the pyMaxDragDropAttachSizeMB rule is copied into your ruleset and set to the same value as the DSS (Dynamic System Setting).
Recommended System Enhancements for Large File Uploads
To support large file uploads efficiently, consider the following actions:
Increase JVM heap size
Extend HTTP request timeout settings
Use high-speed CPUs
Ensure sufficient RAM is available on each node
These enhancements will help Pega process large files more reliably and improve overall performance during file uploads.
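As a concrete example, the heap sizing above might look like this in a Tomcat setenv.sh (the values are illustrative and should be tuned to your workload and available memory):

```
# setenv.sh -- illustrative JVM sizing for large uploads
# -Xmx sets the maximum heap; size it well above the largest expected upload
export CATALINA_OPTS="$CATALINA_OPTS -Xms8g -Xmx16g"
```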
Verified that the heap size is 16 GB, as shown in Image 1.
On the Pega side, attached a 1 GB file, as shown in Image 2.
The file appears in the Amazon S3 bucket, as shown in Image 3.