We have a File Listener that listens to a particular location in Azure Blob Storage. For every file that is read, it fetches the file contents and processes them in the Service File Activity.
The issue we observed is that when multiple files are dropped, it returns the content of the first file read for every other file dropped at that point in time. We are not sure if anything needs to be configured differently. We are also unable to trace all the sessions, because only one File Listener session is traced at a time.
Could you let us know how this can be resolved? I can set up a call to discuss this further.
Hi Keerthi,
Please ensure that each file read operation retrieves the correct file contents. Verify that the file metadata (name, size, timestamp) corresponds to the actual file contents being processed, and double-check the processing logic within your Service File Activity to confirm it processes the content of each individual file without mixing them up.
Check if there are any concurrency issues in your service. Ensure that each file processing task or session is independent and doesn’t interfere with others.
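To illustrate the kind of shared-state pitfall meant here, a minimal generic Python sketch (not tied to any particular product's API; the class and method names are illustrative): if a listener caches the first file's content in a field shared by all sessions, every later session sees that first content, which matches the symptom described above.

```python
class SharedStateListener:
    """Anti-pattern: one content field shared by all file sessions."""

    def __init__(self):
        self.current_content = None  # shared across sessions

    def on_file(self, content):
        # Only the first file ever populates the field; later
        # sessions return the cached value instead of their own.
        if self.current_content is None:
            self.current_content = content
        return self.current_content


class IndependentListener:
    """Each session works only with the content it was handed."""

    def on_file(self, content):
        return content


shared = SharedStateListener()
results_shared = [shared.on_file(c) for c in ["file-1", "file-2", "file-3"]]

independent = IndependentListener()
results_ok = [independent.on_file(c) for c in ["file-1", "file-2", "file-3"]]

print(results_shared)  # ['file-1', 'file-1', 'file-1'] -- first file repeated
print(results_ok)      # ['file-1', 'file-2', 'file-3']
```

The fix pattern is simply that nothing about the current file is kept in state shared between sessions.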
@AnkithaReddyR
All of these basic preliminary checks have already been done.
Once the File Listener is invoked, it fetches the current file name.
It then fetches the contents from the repository location by passing in the file name that was read.
This is where we see the same content being passed when multiple files are dropped.
So concurrency and processing logic are not in question here. The File Listener is configured to run on a specific node, and multiple files mean multiple requestors/threads are created, so we do not expect any concurrency issues at the File Listener end.
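The flow above can be sketched generically (Python, with hypothetical names such as `repository` and `trigger`; these are not the product's API). The key question the sketch highlights is whether the file name read in the first step travels with each session into the content fetch, or is read back from a slot shared by all sessions, in which case overlapping drops can all resolve to the same file's content even though the repository lookup itself is correct.

```python
repository = {"a.txt": "contents-A", "b.txt": "contents-B"}

# Hypothetical shared slot that each trigger overwrites.
current_file_name = None


def trigger(name):
    """Simulates the listener firing: records the 'current' file name."""
    global current_file_name
    current_file_name = name


def fetch_shared():
    """Fetches contents using the shared slot at processing time."""
    return repository[current_file_name]


def fetch_independent(name):
    """Fetches contents using the name passed with the request."""
    return repository[name]


# Two files dropped in quick succession; both triggers fire
# before either session fetches its contents.
trigger("a.txt")
trigger("b.txt")

out_shared = [fetch_shared(), fetch_shared()]
out_independent = [fetch_independent("a.txt"), fetch_independent("b.txt")]

print(out_shared)       # ['contents-B', 'contents-B'] -- a.txt's content is lost
print(out_independent)  # ['contents-A', 'contents-B']
```

Passing the file name through each processing path, rather than reading it back from any shared location, removes the ambiguity regardless of how many drops overlap.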