[CLSA Course] : Case Design Patterns | Limited Availability and Concurrency

Can you please help me understand the paragraph below (Screenshot-2) from the course (Screenshot-1)? How will parallel, concurrent requests be notified by using a data-instance sequence queue?

The example mentioned is about ticket booking, where multiple users can access the booking website. How do the users who are booking get to know their booking was successful by using a data-instance sequence queue?

@FnuA8586

In a messaging system, messages are never updated, only inserted.
Messages are ordered based on database-assigned timestamps.
There are no synchronization or locking issues when records are inserted.
Every message is assigned a unique id.
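Those four properties can be sketched as a simple append-only log. This is an illustrative in-memory Python model, not a Pega API; the `Message` record, its field names, and the id sequence are all assumptions standing in for a database table with a sequence and a timestamp column:

```python
import itertools
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    """An immutable message: inserted once, never updated."""
    msg_id: int       # unique id, assigned at insert time
    timestamp: float  # database-assigned timestamp (epoch seconds here)
    payload: dict

class MessageLog:
    """Append-only log: inserts only, ordered by timestamp."""
    def __init__(self):
        self._messages = []
        self._ids = itertools.count(1)  # stand-in for a DB sequence

    def insert(self, payload):
        msg = Message(next(self._ids), time.time(), payload)
        self._messages.append(msg)      # insert only; no updates, no locks
        return msg

    def in_order(self):
        # Ordered by timestamp, with the unique id as a tiebreaker
        return sorted(self._messages, key=lambda m: (m.timestamp, m.msg_id))

log = MessageLog()
a = log.insert({"trip": "T1", "seats": 2})
b = log.insert({"trip": "T1", "seats": 1})
```

Because records are only ever inserted, concurrent writers never contend for the same row; ordering is decided afterwards by the timestamps.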

Here we are saying there is limited reservation capacity.
Reservation requests that do not exceed the capacity limit are accepted in the order they were submitted.

How reservation requests that exceed the capacity limit are handled is a design decision.
One option is to reject, forever, any message that exceeds capacity.
The user is instructed to “try again later”.

Or the user could be asked if they want to be placed on a “waiting list”.
If someone cancels an earlier reservation, the capacity that reservation had consumed is subtracted.
The waiting list is then examined, in sequential order, looking for the first reservation(s) that fit within the capacity limit.
Those reservations are then removed from the waiting list and appended to the confirmed reservation list.
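A minimal sketch of that cancel-then-promote scan, assuming reservations are measured in "seats" and held as plain lists (the shapes and names here are my own illustration, and I have assumed the scan keeps walking past a reservation that does not fit, rather than stopping at the first non-fit):

```python
def promote_from_waiting_list(confirmed, waiting, capacity):
    """After a cancellation frees capacity, walk the waiting list in
    submission order and confirm every reservation that still fits."""
    used = sum(r["seats"] for r in confirmed)
    still_waiting = []
    for res in waiting:                  # sequential order, oldest first
        if used + res["seats"] <= capacity:
            confirmed.append(res)        # append to the confirmed list
            used += res["seats"]
        else:
            still_waiting.append(res)    # remains on the waiting list
    return confirmed, still_waiting

# One reservation was cancelled earlier, leaving id 1 confirmed:
confirmed = [{"id": 1, "seats": 3}]
waiting = [{"id": 4, "seats": 2}, {"id": 5, "seats": 4}, {"id": 6, "seats": 1}]
confirmed, waiting = promote_from_waiting_list(confirmed, waiting, capacity=6)
# id 4 fits (3+2 <= 6), id 5 does not (5+4 > 6), id 6 fits (5+1 <= 6)
```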

Due to the time needed for reservations to be committed, this is very difficult to implement synchronously.
A background process could run at a very short interval.
Someone making a reservation could be told to wait for an email stating whether their reservation is confirmed.

Calculation speed can be enhanced by maintaining an “accumulator” that persists a running total at the end of some time period, typically end-of-day if keeping track of insurance benefit accumulation.
In theory a much shorter time period can be used for a different type of application.

After the trip has run its course, its associated accumulator data could be purged.
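As a sketch, the accumulator is just a persisted running total, so each capacity check only sums messages inserted since the last checkpoint instead of re-scanning all history (class and method names are illustrative assumptions, not Pega constructs):

```python
class CapacityAccumulator:
    """Persist the confirmed total at a period boundary so capacity checks
    only need to scan messages inserted since the last checkpoint."""
    def __init__(self):
        self.checkpoint_total = 0  # persisted total, e.g. at end-of-day
        self.recent = []           # amounts inserted since the checkpoint

    def add(self, seats):
        self.recent.append(seats)

    def total(self):
        # Cheap: checkpoint plus only the recent deltas
        return self.checkpoint_total + sum(self.recent)

    def roll_over(self):
        """Period boundary: fold recent activity into the persisted total."""
        self.checkpoint_total = self.total()
        self.recent.clear()

    def purge(self):
        """After the trip has run its course, drop its accumulator data."""
        self.checkpoint_total = 0
        self.recent.clear()
```

A shorter roll-over interval trades more checkpoint writes for cheaper reads, which is the “much shorter time period” trade-off mentioned above.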

@pedel

Thanks @pedel for your prompt reply and for explaining this in detail! I was thinking of tagging you in this ticket based on your similar response to another question :slight_smile: (Same case vs Sub case parallelism | Support Center).

Two questions I have based on your response:
1) In the messaging design you explained, the user is notified through another channel such as email or mobile SMS. How would I implement this design if the user is on a call with a CSR and the capacity limit is a concern? In that scenario, the user may want to know on the call itself whether the tickets are booked or not.

2) As you mentioned, doing this synchronously would be difficult to implement. Would the “Calculations” within Case Designer make it synchronous (as you mentioned in the other question, Same case vs Sub case parallelism | Support Center)? Or do you think a race condition can exist when the capacity is low?

@pedel

Hello @pedel, just wanted to follow up on this. When you get time, please share your thoughts.

@FnuA8586
Question 1: Since single-node-per-cluster background processing is required to scan the reservation queue, a CSR would have to “poll” to see if the reservation made it under the capacity limit. The delay obviously needs to be short.
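That CSR-side polling loop could look something like this sketch; the status values, the `get_status` lookup, and the interval are all assumptions for illustration, not Pega specifics:

```python
import time

def wait_for_confirmation(get_status, reservation_id,
                          interval=0.5, timeout=10.0):
    """Poll the reservation's status until the background scan has decided,
    so the CSR can answer on the call. Returns the final status, or
    'PENDING' if the timeout expires first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(reservation_id)   # e.g. a read-only DB lookup
        if status in ("CONFIRMED", "WAITLISTED", "REJECTED"):
            return status
        time.sleep(interval)                  # keep the interval short
    return "PENDING"
```

If the background scan runs at a very short interval, the CSR screen usually resolves within one or two polls; the timeout path covers the rare case where the decision is still pending and the caller must be told to expect an email.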

Question 2: Personally, I would avoid Calculations, as it is a rollup across child cases. This places a constraint on your solution. Plus, Calculations would use background processing; if I recall correctly, it is trigger-based?

The recommended approach is NOT to use child cases. Locking would “get in the way”. Also, there could be a very large number of Reservations for a Trip - another reason not to use child cases.

There should not be any race condition if a single node in a cluster queries committed, timestamped records. Decisions are based on read-only, inserted-only, timestamped-to-the-nanosecond data.