I find myself again searching for information on preventing unnecessary logs from being written after the upgrade to 23.1.2. This time it's:
System Pulse Acknowledgement - Message Received
I found the following information in the 23.1.2 Resolved Issues for System Administration:
After deploying a ruleset versioning update, the new rules were not being picked up by the queue processor and the system runtime context continued to use the previous ruleset version. This type of issue occurs when the SystemPulse fails to propagate the latest version of the rule to all nodes in the cluster, leading the system to use the existing version of the rule. To capture whether deployed rules are propagated to all nodes in the cluster, additional diagnostic log messages have been added. These loggers handle the defined rule update pulse types (CACHE, RFDEL, IMPRT, CLDEC, PURGE, DSMST) and are governed by the environment's production level and the DASS systempulse/rulesUpdate/logAcknowledgement/enabled. Logging has been enabled by default for production level >= 4 (i.e. PRE-PROD or PROD) and disabled by default for lower environments.
But yet again, there is no information on which owning ruleset this DSS should be created under, whether it can be applied without a restart, or whether the value to disable it should be false or something else.
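For what it's worth, a DASS can usually also be mirrored as a node-level entry in prconfig.xml. The fragment below is only a sketch under that assumption; the exact env-name mapping for this particular setting is not documented anywhere I can find, and the owning ruleset question still needs confirming:

```xml
<!-- Hypothetical prconfig.xml equivalent of the DASS described in the
     resolved-issues note. Assumes the usual DASS-to-prconfig name mapping
     applies to this setting; verify before relying on it. -->
<env name="systempulse/rulesUpdate/logAcknowledgement/enabled" value="false" />
```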
Since the upgrade on 05/06 (8.6 > 23.1.2) we have seen up to 600k log entries per day from this logging, which quite frankly is ridiculous.
The other concern we've seen is that the number of minor garbage collections has also increased, from around 30k per day to 200k per day, and we can see the following log entries that seem to contribute to these GC runs.
2024-06-14 15:42:57,566 [oyant_bouman.event-3] ( SystemPulse) INFO - System Pulse Acknowledgement - Message Received: [createDate:Fri Jun 14 15:42:56 BST 2024, pulseType:RFDEL, pzInsKey:, sourceNode:pega-kn-p2, systemName:, className:Data-Content-Image, parameters:FileToDelete=/datacontent/image/gen_1f6b4d9f89848ef527158b341b88df93_applicationdata_0.js, tenantID:null]
This log entry seems to appear when a user simply logs in to the system, and we've no idea what it's doing. The FileToDelete parameter is obviously different for each user logging in.
So if someone can answer the question of what we need to do to turn this logging off, that would be great. Also, if anyone knows whether these two issues are related, that would be fantastic to know too.
Further update after some more investigation, looking at the histogram logs we take every night I see the following.
Date        num    #instances   #bytes      class name (module)
05/06/2024  11200  1            32          org.atmosphere.cache.CacheMessage
06/06/2024  24     235968       7550976     org.atmosphere.cache.CacheMessage
07/06/2024  11     897635       28724320    org.atmosphere.cache.CacheMessage
08/06/2024  9      1055871      33787872    org.atmosphere.cache.CacheMessage
09/06/2024  8      1117011      35744352    org.atmosphere.cache.CacheMessage
10/06/2024  5      4343068      138978176   org.atmosphere.cache.CacheMessage
11/06/2024  4      9230230      295367360   org.atmosphere.cache.CacheMessage
12/06/2024  4      14280335     456970720   org.atmosphere.cache.CacheMessage
13/06/2024  3      21214465     678862880   org.atmosphere.cache.CacheMessage
14/06/2024  3      21943274     702184768   org.atmosphere.cache.CacheMessage
15/06/2024  3      20485973     655551136   org.atmosphere.cache.CacheMessage
16/06/2024  3      20463810     654841920   org.atmosphere.cache.CacheMessage
17/06/2024  3      21344316     683018112   org.atmosphere.cache.CacheMessage
18/06/2024  3      22226599     711251168   org.atmosphere.cache.CacheMessage
So prior to the upgrade (05/06/2024) there were hardly any of these entries, and since then there are lots of them, increasing in both number and size. Any idea what this class is doing and what it's for?
@CraigA52, thanks for providing INC-B24907, which I can see is still open and being looked at by our GCS team.
Results from the investigation:
Solution type description:
A diagnostic hotfix, HFix-B483, has been provided which adds loggers for custom pre-defined rule update pulse types for diagnostic information, based on the environment's production level and the DASS setting systempulse/rulesUpdate/logAcknowledgement/enabled (ruleset: Pega-Engine).
Pulse types considered for this custom logging: CACHE, RFDEL, IMPRT, CLDEC, PURGE, DSMST.
The DASS toggles logging if set (true/false); otherwise logging is based on the environment's production level. It is enabled by default for production level >= 4 (i.e. PRE-PROD or PROD) and disabled by default for lower environments. In other words, if the DSS is not added, logging is governed by the environment's production level.
So, to summarize how to observe the hotfix changes:
If the DSS systempulse/rulesUpdate/logAcknowledgement/enabled (ruleset: Pega-Engine) is not set, logging is based on the environment's production level: for production level >= 4 (i.e. PRE-PROD or PROD and above), logs are enabled by default; for lower environments they are disabled by default.
Whereas if the DSS is set to true or false, it toggles logging based on that value: if true, logs are enabled; otherwise they are disabled, regardless of the environment's production level.
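Based on the GCS summary above, the decision logic can be sketched like this. This is a hypothetical illustration of the described behaviour, not Pega source; the class name, method name, and signature are my own:

```java
// Sketch of the hotfix's described toggle logic for the System Pulse
// acknowledgement logging (hypothetical; names are illustrative only).
public class PulseAckLogging {

    /**
     * dssValue: the value of systempulse/rulesUpdate/logAcknowledgement/enabled,
     * or null if the DSS has not been created.
     * productionLevel: the environment's production level (1..5).
     */
    static boolean isAckLoggingEnabled(String dssValue, int productionLevel) {
        if (dssValue != null) {
            // An explicit DSS wins, regardless of production level.
            return Boolean.parseBoolean(dssValue);
        }
        // No DSS: enabled by default for PRE-PROD (4) and PROD (5) only.
        return productionLevel >= 4;
    }

    public static void main(String[] args) {
        System.out.println(isAckLoggingEnabled(null, 5));    // PROD, no DSS -> true
        System.out.println(isAckLoggingEnabled("false", 5)); // DSS overrides -> false
        System.out.println(isAckLoggingEnabled(null, 2));    // lower env, no DSS -> false
    }
}
```

So for our case (production system, level 5), creating the DSS with value false should be what turns the logging off.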
Detail:
Issue 1: "When we apply this DSS (which is happening this evening), would the increased heap usage associated with atmosphere also be affected?"
GCS Answer: most likely it will not affect the heap usage associated with atmosphere.
Issue 2: "You mention there is a known bug in our patch release related to this framework, but you failed to mention if there is a hotfix for it."
GCS Answer: unfortunately GCS couldn't find a BUG item. This is one of the reasons GCS suggested creating a new INC, to track the question related to the atmosphere class loader.
I have done some research, and GCS may have been referring to issues linked to the atmosphere-runtime library. This is not relevant, as we are not using the atmosphere-runtime-native jar in our code; hence the bug was closed with no action.
BUG-842505 (atmosphere-runtime 2.4.5) - the latest Pega versions run 2.7.9.1, rather than the previous version (2.4.5.14).