pzStandardProcessor data flow failing and not processing queue

Hello,

In one of our environments, the pzStandardProcessor queue processor keeps queuing items that are never scheduled for processing.
I checked its data flow and saw it is in the "Failing" state.

The very same scenario happened with pyFTSIncrementalIndexer, but I was able to restart that one with the activity Data-Decision-DDF-RunOptions.pxRestartById.

That approach does not work for pzStandardProcessor: the activity always returns errors.

The environment is a three-node cluster, with only one node set as BackgroundProcessing, recently upgraded to Pega 8.8.4.
The Stream service is provided by an external Kafka cluster, which seems to be working fine.

I tried adding another node, but nothing changed.
Has anybody run into the same issue? Any suggestions?
Should I delete all the items from the queue? If so, which are the right tables to work on?

Thank you

Filippo

@Phil5873 I fixed the issue myself.

It was due to old (decommissioned) nodes still associated with the partitions of that data flow in the table DATASCHEMA.pr_data_decision_df_part.

By purging and recreating those records in the database table, I forced the data flow to reach a final failed state, which then allowed me to restart it smoothly.
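For anyone hitting the same thing, a rough sketch of the kind of SQL involved is below. This is an assumption-laden illustration, not an official procedure: the exact column names in pr_data_decision_df_part vary by version, so inspect the table first (SELECT *) and identify the rows yourself, and take a backup before deleting anything. Do this with the data flow stopped, ideally with Pega support's blessing.

```sql
-- 1) Inspect the partition table and spot rows that reference
--    node IDs no longer present in the cluster.
--    (Schema/table name taken from the post; column layout is version-specific.)
SELECT *
FROM DATASCHEMA.pr_data_decision_df_part;

-- 2) After backing up the table, delete only the rows whose node ID
--    belongs to a decommissioned node. The WHERE clause below is a
--    placeholder -- substitute the real column name and the stale
--    node identifiers you found in step 1.
-- DELETE FROM DATASCHEMA.pr_data_decision_df_part
-- WHERE <node_id_column> IN ('<old-node-id-1>', '<old-node-id-2>');
```

Once the stale partition records are gone, the data flow should fall into a clean final-failure state, and a restart (for example via Data-Decision-DDF-RunOptions.pxRestartById, or the Admin Studio UI) can pick it up with fresh partition assignments.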

Thanks for sharing your solution and marking it to help others @Phil5873!