Hi All,
Is there a maximum limit or a recommended number of records that can be written into a Pega database table, for example one created in the PegaData schema?
For example, suppose we create a sample table in the PegaData schema with 8-10 columns and use the Obj-Save method with a Commit, and there are more than 10-15 million records to be written. Other than performance and system resource exhaustion, what other issues can occur? I understand that writing 10-15 million records at once is not recommended, but in case it is ever required, what is the best approach?
Thanks and Regards,
Subhajit
Hi Team,
Are there any thoughts on this?
Thanks and Regards,
Subhajit
Hi @SubhajitC16577932
I ran a search and this is the information I found pertaining to your question:
There is no specific maximum limit on the number of records in a DB table created in the PegaData Schema. However, it is essential to monitor the database usage and performance to ensure optimal operation. The actual limit depends on the database system you are using and its configuration.
Hope that helps!
@SubhajitC16577932
Hello,
I don’t think it should ever be required. Worst case, if it is requested, you should explain the risks of such an approach.
The time taken to write such a volume of records is also the time your table will be locked, meaning no other transactions will be allowed during that window.
On the database side, your temp and rollback/undo segments might also be at risk.
And your DB will be busy for quite some time if the job fails at the last record before the commit.
There is certainly a limit based on your infrastructure and DB setup/capacity/tuning, but there are clearly much better approaches than writing everything in one shot. And keep in mind that a single failure would roll back everything that has not been saved since the last commit.
This means such an approach could be highly time-consuming and still end with nothing written.
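To make the point concrete, here is a minimal, non-Pega sketch of the batching idea: committing every N records so that a failure only rolls back the current batch instead of the whole load. The table name `sample`, the batch size, and the use of Python's stdlib `sqlite3` are illustrative assumptions, not Pega specifics; in a real Pega flow the equivalent would be issuing a Commit after every N Obj-Save steps.

```python
# Hypothetical sketch (not Pega-specific): bulk-loading rows with periodic
# commits, so a failure rolls back only the current batch, not millions of rows.
import sqlite3

def insert_in_batches(conn, rows, batch_size=10000):
    """Insert rows, committing every batch_size records.

    Returns the number of commits issued. Each commit keeps the lock window
    and the rollback/undo footprint small.
    """
    commits = 0
    cur = conn.cursor()
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        cur.executemany("INSERT INTO sample (id, val) VALUES (?, ?)", batch)
        conn.commit()  # small commit scope: limits lock time and rollback size
        commits += 1
    return commits

if __name__ == "__main__":
    # In-memory DB stands in for the real PegaData table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sample (id INTEGER PRIMARY KEY, val TEXT)")
    rows = [(i, f"v{i}") for i in range(25000)]
    n_commits = insert_in_batches(conn, rows, batch_size=10000)
    total = conn.execute("SELECT COUNT(*) FROM sample").fetchone()[0]
    print(n_commits, total)  # 3 commits for 25000 rows at 10000 per batch
```

The trade-off is that batch size controls exposure: larger batches commit less often but lock longer and risk a bigger rollback on failure.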
So, I would personally not look for such a limit, since a single-shot load of that size should never be done.
Regards
Anthony