    Output process failures and hardware limitations

Marcos J Pinto

Hi!

I've been trying to optimize a PDF-generating process with LC Output for the last few days. I was wondering if someone could give me a hand here, please?

I've built a simple process to generate PDF files from data stored in XML format.

The process is very CPU- and memory-intensive.

What it does: read XML files from a watched folder and merge the data with an XDP template.

Each PDF contains 1,000 lines and takes around 30 seconds to generate.

I set the Repeat Interval to 25 seconds and the batch size to 1. If I reduce the repeat interval to, say, 22 seconds OR raise the batch size to 2, I get lots of failures, mainly due to timeouts.
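
For reference, my back-of-the-envelope arithmetic (my own reasoning, not anything from the LC documentation): the watched folder injects batchSize files every repeat interval, while the server drains them at roughly one file per 30 seconds, so the backlog stays bounded only while repeatInterval >= batchSize * perFileSeconds. A trivial sketch of that check:

    // Back-of-the-envelope check of the watched-folder settings.
    // Assumption (mine, not from the LC docs): the folder injects
    // batchSize files per scan and the server drains one file per
    // perFileSeconds, so the backlog stays bounded only while
    // repeatInterval >= batchSize * perFileSeconds.
    public class IntervalCheck {
        public static void main(String[] args) {
            int perFileSeconds = 30; // observed render time per 1000-line PDF
            int batchSize = 2;       // files picked up per scan
            System.out.println("Minimum stable repeat interval: "
                    + (batchSize * perFileSeconds) + " s"); // prints 60 s
        }
    }

By that reckoning, 25 seconds with batch size 1 is already borderline, which would explain why 22 seconds or batch size 2 tips it over.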

Since files in production may vary in size (for the test I used identical file sizes), I run the risk of choosing a repeat interval that either causes failures OR leaves the CPU idle between files.

I've doubled the RAM from 4 GB to 8 GB, with almost no difference in results or performance.

Questions:

1. Is there a way to read the next XML file from the watched folder only after the previous PDF has finished generating? How? (See the sketch after question 3 for roughly what I have in mind.)

2. Will these limits go up if I double the number of cores? How do I set up such an environment? Is there any literature on this?

3. The 8 GB setup runs in a virtualized environment. Will it make a difference if the hypervisor is Microsoft Hyper-V rather than VMware?
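
To make question 1 concrete, here is a minimal sketch of the behaviour I'm after: a single-threaded poller that only picks up the next XML file after the previous render returns. renderPdf() is a hypothetical stand-in for the actual XML/XDP merge (in my case the LC Output service), and the folder paths are made up.

    import java.io.File;

    public class SequentialPoller {
        public static void main(String[] args) throws InterruptedException {
            File inbox = new File("/watched/input");      // hypothetical path
            File done  = new File("/watched/processed");  // hypothetical path
            done.mkdirs();
            while (true) {
                File[] files = inbox.listFiles();
                boolean idle = true;
                if (files != null) {
                    for (File f : files) {
                        if (!f.getName().endsWith(".xml")) continue;
                        renderPdf(f);  // blocks ~30 s; nothing else starts meanwhile
                        f.renameTo(new File(done, f.getName()));
                        idle = false;
                    }
                }
                if (idle) Thread.sleep(5000);  // nothing to do; poll again in 5 s
            }
        }

        // Hypothetical placeholder for merging one XML file with the XDP template.
        private static void renderPdf(File xml) {
            System.out.println("Rendering " + xml.getName());
        }
    }

The point is that no fixed repeat interval is needed at all: the next file is picked up exactly when the CPU becomes free, regardless of file size.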

Thank you very much for any hints.

BR,

      Marcos