We created an assembler process that assembles 52 documents and outputs the merged document. This works well.
We ran this process through performance testing in pre-prod for 4 hours with 5 users, each merging 52 documents (2614 threads in total). What we observed is that the LC server response time tends to increase slowly, from 10 sec to 15 sec. So we are worried about what happens if we run this process continuously in production.
What could be the reason for this increase in response time? Do we have to change any configuration setting for this service, or any admin settings, to control this?
Here is the attached graph of the response time:
Thanks for your reply.
We are using LiveCycle 8.2.1. We ran the performance test with a single user today, and over a 12-hour span the response time increased from 11 sec to 18 sec. CPU utilization was 55%. After 12 hours we stopped the test for 1 hour and then started it again.
This time the response time started at 18 sec and kept growing from there. It clearly indicates that if we run the process for a long time without rebooting the server, the response time climbs higher and higher. So, should we suspect a memory leak?
May I know what changes were made to the Assembler process in the latest releases compared to 8.2 (if you have release notes specific to the Assembler process)? We need to be convinced that the upgrade is significant and that it will fix our problems. The reason is that we have several LC servers running across environments, and it is a huge and costly task to upgrade all the systems from the current setup (LC 8.2, Windows Server 2003, 32-bit, 4 GB RAM) to LC ES2 on Windows Server 2008, 64-bit.
I would highly appreciate your quick response on this, as it has become a pressing issue for us.
I'm not sure of the exact details of what was changed in the code for Assembler in ES2 SP2, but I know it is more performant. You can probably get in touch with support to get more information about it; I don't think this is mentioned in any of the release notes.
That aside, I talked with a colleague of mine about your test results. He said the only time he has seen something similar (11 sec growing to 18 sec, then stop and restart at 18 sec) was when there was an issue with the file system. This is especially a problem on Windows, since Windows is not as efficient as Unix at handling many files in the same folder.
Can you check the Global Document Storage (GDS) folder and see how many files you have there?
I'm not ruling out a memory leak, but it would be interesting to see whether there's something wrong with the file system.
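If it helps, a small utility like the sketch below can count the entries in the GDS folder. The default path in the code is purely hypothetical; substitute the GDS location actually configured for your server.

```java
import java.io.File;

public class GdsFileCount {
    /** Counts plain files and subfolders directly under the given directory. */
    static int[] count(File dir) {
        File[] entries = dir.listFiles();
        if (entries == null) {
            return new int[] {-1, -1}; // path missing or unreadable
        }
        int files = 0, subdirs = 0;
        for (File f : entries) {
            if (f.isFile()) {
                files++;
            } else {
                subdirs++;
            }
        }
        return new int[] {files, subdirs};
    }

    public static void main(String[] args) {
        // Hypothetical default path -- replace with your server's configured GDS folder
        String gdsPath = args.length > 0 ? args[0] : "C:/Adobe/LiveCycle/gds";
        int[] c = count(new File(gdsPath));
        if (c[0] < 0) {
            System.out.println("Cannot read directory: " + gdsPath);
        } else {
            System.out.println(c[0] + " files, " + c[1] + " subfolders in " + gdsPath);
        }
    }
}
```

If the count runs into the thousands in a single folder, that would support the file-system theory.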
Also, I would open a ticket with support so they can track it as well.
In the GDS folder, I see 3 subfolders (audit, backup & docm) and a sessionInvocation file of type DEL, which is automatically created and deleted approximately every 10 seconds.
The audit & backup folders are empty. The docm folder contains 2 files: one is a session file of 0 KB, and the other is a zipped file of 302 KB. When I unzipped it, I found it contains 66 items. I don't see the GDS subfolders being updated since a week ago, except for the session invocation file.
I'm uploading the contents of the GDS folder, captured while the transaction is running, along with this mail. This might help. Can you explain a little about the file system issue?
Well, our process retrieves documents from Documentum using DFC connectors, assembles the documents, and writes the assembled document back to Documentum. We captured the time taken by each subprocess over a period of time. Except for the assembler process, every other subprocess looks consistent in terms of response time; the assembler process is the one whose response time grows.
To help you further, we already have a ticket with Adobe on this (Ticket No: 181829210), where you can see all the information: the LCA, response-time graphs & environment details. So far, the support team has not been able to take this in the right direction to resolve the issue. Your help on this would be greatly appreciated.
Make sure your heap size is configured to optimal values.
See the following post: http://blogs.adobe.com/livecycle/2010/10/heap-size-sweet-spot-for-livecycle-es2.html
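For context, on a 32-bit Windows JVM like yours the usable heap is capped well below the 4 GB of physical RAM (typically around 1.2-1.5 GB), so the relevant JVM options might look something like the fragment below. These values are illustrative only, not a recommendation; tune them per the post above and your own load testing.

```
-Xms1024m -Xmx1024m -XX:PermSize=192m -XX:MaxPermSize=192m
```

Setting -Xms equal to -Xmx avoids heap-resize pauses under sustained load, which matters for a long-running assembly process like yours.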