Addressing Work Policy Violations in LiveCycle ES2

If you see messages such as the following in your J2EE application server logs, you might want to consider increasing the heap allocated to the application server JVM hosting LiveCycle, or allocating more CPUs to the server instance:

com.adobe.idp.dsc.workmanager.workload.policy.WorkPolicyViolationException: Work policy violated. Will attempt to recover in 60000 ms

Work Manager backing off for 60 seconds will not affect the performance of your short-lived orchestrations. However, if you have long-lived orchestrations deployed, their processing will be affected.

Although your mileage will vary, the following heap and garbage collection settings for the Sun HotSpot JVM have been found to be effective in avoiding Work Manager policy violations. Note that several of our customers are currently in production with heap sizes of 8 GB or more:

-server (run the JVM in server mode)
-Xms2048m (minimum heap size of 2 GB)
-Xmx2048m (maximum heap size of 2 GB)
-XX:NewRatio=1 (allocate half of the heap to the “Eden” Generation, where most new objects are created and then destroyed by garbage collection runs)
-XX:MaxTenuringThreshold=100 (force an object in the “Eden” Generation to survive at least 100 garbage collections before incurring the copying cost of moving it to the “Survivor” Generation)
-XX:PermSize=512m (minimum size of the “Permanent” Generation is 0.5 GB)
-XX:MaxPermSize=512m (maximum size of the “Permanent” Generation is 0.5 GB)
-XX:+UseParallelGC (use the parallel garbage collector)
-XX:+UseParallelOldGC (use the parallel garbage collector for the heap’s “Old” Generation)
-XX:ParallelGCThreads=4 (allocate 4 threads for the parallel garbage collector, assuming the server instance has at least 4 CPU cores; use a higher setting if the server instance has more CPU cores)
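As a concrete illustration, on a JBoss-style application server these flags are typically collected into the JAVA_OPTS environment variable (the variable name and startup file vary by application server, so treat this as a sketch rather than a definitive configuration):

```shell
#!/bin/sh
# Hypothetical run.conf / setenv.sh fragment for the appserver JVM hosting LiveCycle.
# The 2 GB heap and 4 GC threads mirror the settings listed above; raise
# -XX:ParallelGCThreads to match the number of CPU cores on your server.
JAVA_OPTS="-server -Xms2048m -Xmx2048m \
 -XX:NewRatio=1 -XX:MaxTenuringThreshold=100 \
 -XX:PermSize=512m -XX:MaxPermSize=512m \
 -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:ParallelGCThreads=4"
export JAVA_OPTS
echo "$JAVA_OPTS"
```

Where exactly you set this depends on your platform: for JBoss it is usually run.conf, for WebLogic setDomainEnv, and for WebSphere the generic JVM arguments in the admin console.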

In the x64 world, make sure you deploy LiveCycle to servers with either Intel Xeon or AMD Opteron processors. In the Solaris SPARC world, deploy LiveCycle to servers with SPARC64 VII or UltraSPARC IIIi or IV CPUs. In the IBM AIX world, use POWER7 or POWER6 CPUs.

If deploying LiveCycle to hardware-virtualized environments, make sure you allocate to each VM at least two vCPUs with a minimum clock speed of 2 GHz (preferably 3 GHz or more), a minimum of 6 GB of RAM, and 60 GB of storage. If using SAN storage, insist on Tier I SAN storage (the highest-performance tier).

Micro-partitioning CPUs by clock ticks in POWER AIX environments is not economical. Since most LiveCycle components are licensed by the number of server CPUs (two CPU cores count as one LiveCycle CPU license), it is in your economic interest to deploy LiveCycle on the best-performing CPUs you can afford. LiveCycle performance is CPU-bound in most cases, so micro-partitioning will unnecessarily hamper your price/performance ratio.
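To make the licensing arithmetic concrete (the 8-core server below is a hypothetical example), the core-to-license mapping stated above works out as:

```shell
#!/bin/sh
# Sketch of the licensing arithmetic: two CPU cores count as one
# LiveCycle CPU license, so license count = ceiling(cores / 2).
cores=8                          # hypothetical server with 8 physical cores
licenses=$(( (cores + 1) / 2 ))  # integer ceiling of cores/2
echo "$cores cores -> $licenses LiveCycle CPU licenses"
```

In other words, every license you buy covers two cores, which is why spending it on the fastest cores available gives the best return.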

Although HyperThreading (Intel) and Chip Multi-Threading (Sun/Oracle) trick the operating system into reporting additional CPU cores, these hardware threads provide only an incremental benefit (roughly 30%) rather than the full 2x benefit of an additional physical core.


I am looking at a similar issue. The scenario is like this: when the form is rendered for the first time on IE6 with Adobe Reader 8.0/9.1.3, the form/IE hangs. As soon as refresh is pressed, the form renders. But by that time the size of the PDF has bloated from 848 KB to 1.5 MB; sometimes it reaches 2 MB.
I looked at the Document Max Inline Size, Disposal Timeout, and Forms Service Max Cache Document Size settings as well. But in addition to all these issues, I was seeing the error below too:

com.adobe.idp.dsc.workmanager.workload.policy.WorkPolicyEvaluationBackgroundTask run Policies violated for statistics on ‘adobews__1434603786:wm_default’. Cause: com.adobe.idp.dsc.workmanager.workload.policy.WorkPolicyViolationException: Work policy violated. Will attempt to recover in 60000 ms
I am running on AIX 5.3 with Java version 1.5.0 (Java compiler j9jit23, Java VM name IBM J9 VM).

I am not sure whether there is a deficit of memory on my Adobe LC server, and whether the Work Manager raised the first alert because it was running low.