Inserting Multiple Records into a Pega Data Type from a Custom DX Component/Constellation SDK (Constellation 24/25)

Introduction

With the shift towards Pega Constellation architecture, DX components have become the primary way to build rich, decoupled user experiences. While DX APIs provide a clean and standardized way to interact with cases and data, certain real-world scenarios expose functional gaps that architects and developers must address through design decisions.

One such common challenge arises when a single UI submission needs to create multiple records in a data type.

This article explores this problem in detail and discusses practical and architectural options available today in Pega Constellation 24/25.

Problem Statement

Imagine a scenario where you are building a custom DX component (or a Constellation SDK application) in a Pega Constellation application. The component captures user details along with a dynamic list of skills and submits them in one action.

You have the following data model:

Data Type 1: User

| ID    | Name |
| ----- | ---- |
| USER1 | John |
| USER2 | Mary |

Data Type 2: Skill

| ID | UserID | Skill  |
| -- | ------ | ------ |
| 1  | USER1  | Java   |
| 2  | USER1  | Python |
| 3  | USER2  | HTML   |
| 4  | USER2  | Pega   |

Creating a User record is straightforward — one submission, one record.

However, Skills are a one-to-many relationship. A single UI submission may contain multiple skill entries for the same user.

The Challenge

As of Pega Constellation 24/25, the DX API for data supports creating one data record per request. There is no native bulk insert capability exposed via DX APIs.

So the question becomes:

What are the best and safest ways to insert multiple Skill records from a single DX component submission?

Possible Solutions

Let’s explore the available options, along with their pros, cons, and architectural implications.

## Option 1: Loop and Call the Create Data Record DX API Multiple Times

### Approach

  • The DX component loops through the list of skills entered by the user

  • For each skill:

    • Call the Create Data Record DX API

    • Insert one Skill record at a time
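The loop above can be sketched as follows. This is a minimal, illustrative sketch: the DX API endpoint path and the `D_SkillSavable` data view name are assumptions, so substitute the endpoint and data view exposed by your own application. The loop logic is kept separate from the transport so the partial-failure handling is easy to see.

```typescript
interface Skill {
  UserID: string;
  Skill: string;
}

interface CreateResult {
  skill: Skill;
  ok: boolean;
  error?: string;
}

// Sequential loop: one create call per record, collecting per-record
// outcomes so the caller can surface partial failures to the user.
async function createSkillsSequentially(
  skills: Skill[],
  createFn: (skill: Skill) => Promise<void>
): Promise<CreateResult[]> {
  const results: CreateResult[] = [];
  for (const skill of skills) {
    try {
      await createFn(skill);
      results.push({ skill, ok: true });
    } catch (e) {
      // No rollback is possible here -- earlier records are already committed.
      results.push({ skill, ok: false, error: String(e) });
    }
  }
  return results;
}

// Example creator using a hypothetical DX API data endpoint (verify the
// exact path and verb for your DX API version before relying on this):
async function createViaDxApi(skill: Skill): Promise<void> {
  const resp = await fetch('/prweb/api/application/v2/data/D_SkillSavable', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ data: skill }),
  });
  if (!resp.ok) throw new Error(`Create failed: ${resp.status}`);
}
```

Because each iteration commits independently, a failure midway leaves earlier records in place, which is exactly the consistency problem called out in the cons below.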

Pros

  • Simple and quick to implement

  • Uses standard DX APIs

  • No additional Pega artifacts required

Cons

  • Multiple network calls for a single UI submission

  • Performance degrades with larger datasets

  • Complex error handling:

    • Partial failures

    • No native rollback mechanism

  • Client-side retry and consistency logic becomes messy

When to Use

  • Small datasets (1–5 records)

  • Non-critical data

  • Minimal consistency requirements

## Option 2: Use a Case Type as a Transaction Orchestrator

### Approach

  • Create a dedicated case type (e.g., Bulk Skill Registration)

  • Submit the entire payload using the Create Case DX API

  • Use:

    • Data Transforms / Activities

    • Background Processing (Queue Processor / Job Scheduler)

  • Insert all Skill records as part of case processing
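A sketch of submitting the whole payload in one Create Case call: the case type ID `MyOrg-MyApp-Work-BulkSkillRegistration` and the `SkillList` page-list property are assumptions to align with your own class structure, and the endpoint path should be verified against your DX API version. The case's own logic (data transforms, queue processor, etc.) then performs the actual inserts.

```typescript
interface Skill {
  UserID: string;
  Skill: string;
}

// Build the Create Case request body carrying the full skill list as an
// embedded page list, so one request covers the whole submission.
function buildBulkSkillCasePayload(userId: string, skills: Skill[]) {
  return {
    caseTypeID: 'MyOrg-MyApp-Work-BulkSkillRegistration', // assumed case type ID
    content: {
      UserID: userId,
      SkillList: skills.map((s) => ({ Skill: s.Skill })), // assumed page list
    },
  };
}

// Illustrative Create Case call from the DX component:
async function createBulkSkillCase(userId: string, skills: Skill[]) {
  const resp = await fetch('/prweb/api/application/v2/cases', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildBulkSkillCasePayload(userId, skills)),
  });
  if (!resp.ok) throw new Error(`Case creation failed: ${resp.status}`);
  return resp.json();
}
```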

Pros

  • Strong transactional control

  • Centralized exception handling

  • Supports rollback and retry strategies

  • Enables approvals and audit trails

  • Easy to extend for future business logic

Cons

  • Creates a work case for each submission

  • May conflict with case volume licensing or contract terms

  • Adds operational overhead if the data operation is purely technical

When to Use

  • Business-driven data creation

  • Approval or validation is required

  • High audit and compliance needs

:warning: Important: Always validate this approach against your Pega licensing and case volume agreement.

## Option 3: Custom Service REST

### Approach

  • Create a custom Service REST (POST)

  • Accept the entire request payload as JSON

  • Process the payload using:

    • Activity or Automation

    • Data Pages / Savable Data Pages

  • Insert multiple Skill records in a controlled backend flow
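From the DX component's side, this option collapses to a single POST. The sketch below assumes a hypothetical `/prweb/api/myapp/v1/skills/bulk` endpoint mapped to your Service REST rule; the backend activity or automation owns the looping, commit, and rollback. The transport function is injected so the same calling code works with `fetch`, a Constellation REST client, or a test stub.

```typescript
interface Skill {
  UserID: string;
  Skill: string;
}

interface BulkInsertResponse {
  inserted: number;
  errors: string[];
}

// One request carries the entire list -- no per-record round trips.
// "post" is injected so the transport can vary without changing callers.
async function bulkInsertSkills(
  skills: Skill[],
  post: (url: string, body: string) => Promise<BulkInsertResponse>
): Promise<BulkInsertResponse> {
  if (skills.length === 0) {
    return { inserted: 0, errors: [] };
  }
  return post('/prweb/api/myapp/v1/skills/bulk', JSON.stringify({ skills }));
}
```

Error handling also simplifies: the component inspects one response instead of reconciling N partial outcomes.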

Pros

  • Single API call from DX component

  • No unnecessary case creation

  • Full control over:

    • Transactions

    • Exception handling

    • Commit and rollback

  • Easy to version and secure

  • Scales well for bulk operations

Cons

  • Requires custom service governance

  • Must handle authentication and authorization explicitly

When to Use

  • Pure data operations

  • Bulk inserts

  • High-performance requirements

  • DX-driven applications without case semantics

:pushpin: This approach aligns well with Constellation’s API-first philosophy.

## Option 4: Data Page with JSON Input Parameter

### Approach

  • Create a Data Page backed by:

    • A Savable Data Page

    • The POST method

    • A request payload passed in the body, not the URL

  • Invoke save logic using:

    • Data Transform or Activity

  • DX component calls the Data Page
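The key mechanic is packing the whole list into the single JSON parameter the savable data page expects. In the sketch below, `D_BulkSkillSave`, the `SkillsJSON` parameter name, and the endpoint shape are all assumptions; what matters is that the payload travels in the POST body, never in the URL, so it is not constrained by URL length.

```typescript
interface Skill {
  UserID: string;
  Skill: string;
}

// Pack the skill list into the single JSON-string parameter the savable
// data page expects; the data page parses it server-side.
function buildDataPageRequestBody(skills: Skill[]): string {
  return JSON.stringify({
    SkillsJSON: JSON.stringify(skills),
  });
}

// Illustrative invocation (verify how the data page is actually exposed
// in your application before relying on this path):
async function saveSkillsViaDataPage(skills: Skill[]) {
  const resp = await fetch('/prweb/api/v1/data/D_BulkSkillSave', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildDataPageRequestBody(skills),
  });
  if (!resp.ok) throw new Error(`Save failed: ${resp.status}`);
}
```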

Pros

  • No case creation

  • Reusable across UI and integrations

  • Clean encapsulation of data logic

Cons

  • Risk of 414 – URI Too Long errors if the JSON payload is passed as a URL/query parameter, or if the Data Page is invoked via GET with large inputs (unlikely when the payload travels in the POST body)

  • Limited visibility and debugging compared to Service REST

  • Not ideal for complex transactional logic

When to Use

  • Moderate payload sizes

  • Internal application usage

  • Low risk of large request volumes

## Architectural Comparison Summary

| Option   | Performance | Transaction Control | Case Creation | Best Fit |
| -------- | ----------- | ------------------- | ------------- | -------- |
| Option 1 | :cross_mark: Low | :cross_mark: Poor | :cross_mark: No | Small datasets |
| Option 2 | :warning: Medium | :white_check_mark: Strong | :white_check_mark: Yes | Business workflows |
| Option 3 | :white_check_mark: High | :white_check_mark: Strong | :cross_mark: No | Bulk data operations |
| Option 4 | :warning: Medium | :warning: Moderate | :cross_mark: No | Internal reuse |

## Final Thoughts

After analyzing all the approaches, the most balanced and scalable solution in most real-world Constellation implementations is, in my view, Option 3 – Custom Service REST.

It offers:

  • Performance efficiency

  • Strong transaction management

  • Clean separation of concerns

  • No unnecessary case creation

That said, there is no one-size-fits-all solution. The right choice depends on:

  • Business semantics

  • Data volume

  • Audit and approval needs

  • Licensing constraints

I have personally used Option 3 and Option 4 and found both of them to be pretty efficient and easy to implement.

## A Wish for the Future :rocket:

It would be extremely valuable if Pega provides native support for bulk data record creation via DX APIs, similar to case creation.

Such a capability would:

  • Reduce custom implementations

  • Improve consistency

  • Align DX APIs with real-world data modeling needs

Until then, architects must thoughtfully choose the approach that best fits their ecosystem.

:speech_balloon: What’s your take on this?
Have you handled this differently in your Constellation projects?
Feel free to share your experience, thoughts, or alternative/additional approaches. And do call out anything in the above content that you feel is incorrect, or any gaps you notice.

Happy exploration!!!

Thanks

JC

Constellation 101 Series

Enjoyed this article? See more similar articles in Constellation 101 series.

@JayachandraSiddipeta thanks for sharing the above design alternatives. One question: if the business asks us to perform the bulk upload from the Pega portal, what choices do we have? Please share your thoughts on this.

@RajeshKumarY6297

As we know, Pega does not provide an out-of-the-box option in the Work Portal for importing Data Type records. The only available capability is through App Studio’s Data Type Designer, primarily intended for developers or citizen developers.

If the requirement is simply to upload records in bulk—without the need for advanced validation, governance, or additional processing—one possible approach is to build a custom widget exposed through a landing page. This widget can allow users to upload an Excel file, map the Excel columns to the target Data Type fields, and persist the data using approaches such as Option 3 or Option 4 for record creation.
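The column-mapping step of such a widget can be sketched as below. This assumes the rows have already been parsed from the Excel file (for example with a spreadsheet-parsing library); the function names and the sample column/field names are illustrative. The resulting records would then be persisted via Option 3 or Option 4.

```typescript
type Row = Record<string, string>;

// columnMap: Excel column header -> target Data Type field name,
// as chosen by the user in the widget's mapping step.
function mapRowsToRecords(
  rows: Row[],
  columnMap: Record<string, string>
): Record<string, string>[] {
  return rows.map((row) => {
    const record: Record<string, string> = {};
    for (const [excelCol, field] of Object.entries(columnMap)) {
      if (excelCol in row) {
        record[field] = row[excelCol];
      }
    }
    return record;
  });
}
```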

I explored a similar concept, but from a different angle—bulk creation of work cases based on an uploaded Excel file. In this approach, the Excel may contain varying column names, which can be dynamically mapped to the fields of the selected case type. The field metadata can be derived at runtime, enabling flexible mapping and case creation. This implementation pattern can potentially be extended or adapted for bulk Data Type record creation as well.

General Caution: Custom components should be considered only when the requirement is critical, has a significant business impact, and there is no suitable out-of-the-box feature or alternative mechanism available in Pega.

Hope this helps.

Thanks

JC