
Heap out of Memory Issue

May 20, 2012 11:59 PM

Tags: #jvm #cf9

Hi, I am using cfthread in my application, and the cfthread uses a Java library to download data from different servers. Generally we download 50-60 MB of data, or 50,000-60,000 images, in small batches by looping.


Most of the time I am facing an out-of-memory error for the heap.


Below are my JVM settings.


java.args=-Duser.timezone=America/Chicago -server -Xmx2048m -Xms2048m -XX:MaxPermSize=256m -XX:PermSize=256m -XX:+UseParallelGC -Dsun.rmi.dgc.client.gcInterval=600000 -Dcoldfusion.fckupload=true -Dsun.rmi.dgc.server.gcInterval=600000 -Xbatch -Dcoldfusion.rootDir={application.home}/ -Djava.security.policy={application.home}/servers/cfusion/cfusion-ear/cfusion-war/WEB-INF/cfusion/lib/coldfusion.policy -Djava.security.auth.policy={application.home}/servers/cfusion/cfusion-ear/cfusion-war/WEB-INF/cfusion/lib/neo_jaas.policy



Can anyone please suggest what JVM-related changes I should make to optimize heap memory use?


Or, how can I detect whether there is any other issue (a memory leak) in my application?
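For reference, a generic pattern worth checking for in whatever download library is in use (a hypothetical sketch, not the poster's code): stream each file through a small fixed buffer to disk rather than holding whole files as byte arrays, so a batch's heap footprint stays constant no matter how large the downloads are.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {
    // Copy an input stream to an output stream through a small fixed buffer,
    // so only ~8 KB of the transfer is ever live in the heap at once.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        for (int n; (n = in.read(buf)) != -1; ) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a real download: 20,000 bytes copied via the 8 KB buffer.
        long n = copy(new ByteArrayInputStream(new byte[20000]),
                      new ByteArrayOutputStream());
        System.out.println("copied " + n + " bytes");
    }
}
```

If the library instead buffers each full image in memory and the loop keeps references to them, the heap fills regardless of the batch size.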

  • May 21, 2012 4:09 PM   in reply to Upen@Roul

    Best to get an idea of what the CF JVM is doing. You can do that many ways: via JVM logging; JDK tools (jconsole and jvisualvm); CF Monitor (CF8/9/10) and CF Server Manager (CF9/10), both in a limited way; CF JRun metrics (CF7/8/9); CF Tomcat metrics (CF10); and perhaps FusionReactor and SeeFusion have some tools as well. In your case I think JVM logging would be best: analyse what is happening in the CF JVM, then, knowing what is occurring, apply a change and monitor again. How to enable JVM logging, and tools to help with reading the log, come later.
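    For a quick look without attaching any external tool, the same heap and GC numbers that jconsole and jvisualvm chart are exposed in-process by the standard java.lang.management API. A hedged sketch (class name mine) that could be logged periodically:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {
    // Build a one-shot report of heap usage and per-collector GC totals,
    // using the java.lang.management MX beans (available since Java 5).
    static String snapshot() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        StringBuilder sb = new StringBuilder();
        sb.append("heap used/max MB: ")
          .append(heap.getUsed() / (1024 * 1024)).append('/')
          .append(heap.getMax() / (1024 * 1024)).append('\n');
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            sb.append(gc.getName()).append(": ").append(gc.getCollectionCount())
              .append(" collections, ").append(gc.getCollectionTime()).append(" ms total\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(snapshot());
    }
}
```

Run inside any JVM (including via a CF page calling into it), it reports the same figures the monitoring tools graph.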


    Some Questions:


    CF version (suspect 9.0.n but you do not say) and Edition?

    Java version that CF is using eg 1.6.0_24?

    RAM available?

    Are the operating system, CF, and Java all 64-bit?

    Probably no bearing, but Windows or Linux? IIS or Apache?

    Can you post a sample of the log error message that shows the heap has a problem?


    JVM logging:



    Add these, without line breaks, to your JVM args. Copy or back up your jvm.config before applying the change. CF needs a restart to apply changes.

    -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -verbose:gc -Xloggc:cfjvmGC.log

    This creates a log file at ColdFusion\runtime\bin\cfjvmgc.log (or JRun4\bin\ in the case of Multiserver).


    Use the GCViewer tool to graphically examine the "cfjvmgc.log" contents:
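    If you want to eyeball the raw log before reaching for GCViewer, each young collection with the args above appears as a single line. A sketch of pulling the whole-heap before/after figures out of one; the sample line is illustrative only (shape of a ParallelGC entry, not data from the poster's log):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLine {
    // Illustrative -XX:+PrintGCDetails young-collection line (made-up numbers):
    static final String SAMPLE =
        "12.345: [GC [PSYoungGen: 524288K->4352K(611648K)] "
        + "812345K->292409K(2031616K), 0.0456780 secs]";

    // Extract "used before -> used after (capacity)" for the whole heap,
    // i.e. the figures that follow the closing bracket of the young-gen part.
    static long[] heapKb(String line) {
        Matcher m = Pattern.compile("\\] (\\d+)K->(\\d+)K\\((\\d+)K\\)").matcher(line);
        if (!m.find()) throw new IllegalArgumentException("no heap figures in: " + line);
        return new long[] { Long.parseLong(m.group(1)),
                            Long.parseLong(m.group(2)),
                            Long.parseLong(m.group(3)) };
    }

    public static void main(String[] args) {
        long[] h = heapKb(SAMPLE);
        System.out.println("heap: " + h[0] + "K -> " + h[1] + "K of " + h[2] + "K");
    }
}
```

    If the "after" figure stays high collection after collection, memory is being held and the old generation is filling.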


    I would not like to suggest a change without some details on the actual error and a look at the JVM logs.

    Once I have some details, there are some things that come to mind:

    -Setting the initial heap size the same as the maximum can lead to a fragmented heap, though it might be OK

    -It could be a non-heap memory area that is filling up, e.g. Perm or Code Cache

    -Set the New generation size (e.g. -Xmn184m) so the JVM does not make a poor guess at it

    -Garbage collecting every 10 minutes is OK, so you are trying to keep the heap evacuated for now

    -You could try a different GC routine other than UseParallelGC


    HTH, Carl.

  • May 22, 2012 6:08 PM   in reply to carl type3

    Amendments and additions to the earlier post that apply to CF10.


    The JVM log file "cfjvmgc.log" will be in ColdFusion10\cfusion\bin, or in ColdFusion10\"instance"\bin in the case of an instance added via Enterprise Manager > Instance Manager.


    This ServerStats could help resolve JVM heap issues: no need to restart CF10 (as applying JVM logging requires) or to attach JDK-style tools like jconsole to get a look at memory heap and CPU usage. Ref:

    Hope that’s helpful for readers, Carl.

  • May 23, 2012 4:51 PM   in reply to Upen@Roul

    Too much time is being spent in garbage collection. That could be because of the heap (Xms/Xmx), non-heap areas (PermSize/MaxPermSize), or the garbage collector routine's (UseParallelGC) suitability for the workload. The warning can be disabled by adding the option -XX:-UseGCOverheadLimit to the JVM args; however, I would prefer to fix the problem, which will be causing some slow application response, rather than simply turn off the warning.


    I do not have enough details to recommend which JVM arg setting to alter, since frequent GCs that are not releasing memory might be due to multiple issues. JVM logs, if enabled and analysed, could assist in finding a solution. If you suspect the matter is heap related, you could apply a change to set the New generation space, which is part of the heap (heap = Old + New, where New = Eden + 2 Survivor spaces); the JVM args would then look like eg 1. If you suspect the Permanent generation is not big enough, then eg 2. If you suspect GC routine suitability, another set of JVM args again.



    eg 1. java.args=-Duser.timezone=America/Chicago -server -Xmx2048m -Xms2048m -Xmn184m ...etc


    eg 2. java.args=-Duser.timezone=America/Chicago -server ...etc -XX:PermSize=256m -XX:MaxPermSize=512m ...etc
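    The arithmetic behind a setting like -Xmn184m is simple: New is carved out of the total heap and Old gets the remainder. A minimal sketch of that relationship (class and method names mine):

```java
public class HeapMath {
    // heap = Old + New, where New (-Xmn) = Eden + 2 Survivor spaces;
    // so the Old generation gets whatever -Xmn leaves of -Xmx.
    static long oldGenMb(long xmxMb, long xmnMb) {
        return xmxMb - xmnMb;
    }

    public static void main(String[] args) {
        // With -Xmx2048m and -Xmn184m as in eg 1:
        System.out.println("Old gen = " + oldGenMb(2048, 184) + " MB"); // prints 1864
    }
}
```

    Too small a New space means short-lived download buffers get promoted to Old and churn it; too large starves the Old generation.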



    As I recall, there was a problem (leak) with UseParallelGC in Java 1.6.0_17 that was fixed in 1.6.0_21 (?) onwards, so it is perhaps no surprise you're getting better uptime with the Java update; though with GCOverheadLimit happening, you are not far from a heap-full problem.


    You have an Enterprise licence. Are you able to get any useful diagnosis from running CF Monitor?


    Regards, Carl.

  • May 28, 2012 4:51 PM   in reply to carl type3

    I’d propose that the frequent “GC overhead” errors are simply a reflection that something is holding memory (in the CF heap). When the JVM repeatedly tries to GC and cannot recover much (within a couple of minutes), it throws this message. The solution is to find what’s holding memory.


    As Carl noted earlier in the thread, there are many ways to attack this, but I would propose that JVM tools are not the answer. The simpler question is “what in CF could be holding memory for extended periods?”, as may be happening in your case, Upen. Such things are most often excessive use of session variables, application variables, server variables, and/or query caching. And all of these can be driven all the harder by large numbers of spiders/bots and other automated requests hitting the CF server, which cause CF to create a new session on each page request (from such an automated request) rather than once per session, as would be the case with more typical browsers. Too much to explain here, Upen, but I’ve discussed it before at:




    Hope that helps.





  • May 28, 2012 8:38 PM   in reply to Upen@Roul

    Thanks for the update. Personally, I’m never a fan of “killing threads”. The solution instead seems to be to make them stop taking so long. (And of course, this is going beyond the subject of “heap out of memory issue”.)


    But you made mention of images. Are you by any chance doing CFIMAGE action=”resize”, or imageResize(), or imageScaleToFit()? If so, any of these could be your culprit, especially if you’re processing many images that way. The bad news is that there’s a default that may be hurting performance. The good news is that there’s a simple fix.
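    For context on why a default can hurt here: CF’s image functions sit on Java2D, where the big cost lever when scaling is the interpolation hint. A hedged, CF-free illustration of that knob (bilinear is far cheaper than bicubic, and usually fine for thumbnails); the class name is mine:

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class FastScale {
    // Scale an image with an explicitly chosen (cheap) interpolation hint,
    // instead of whatever the caller's framework defaults to.
    static BufferedImage scale(BufferedImage src, int w, int h) {
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        return dst;
    }
}
```

    Multiplied across 50,000+ images, the per-image difference between interpolation modes adds up quickly.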


    Check out a blog entry I just created to explain the issue (with solutions):



    I realize it may not be your problem, but let us know either way.


    And as for finding out what IS holding up your long-running requests, I strongly recommend you consider a couple of other blog entries I’ve done, on being misled by “timeouts” and on doing stack tracing in CF to learn the exact line of code a long-running request is executing at a point in time:
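    The JDK’s jstack takes such a thread dump from outside the process; the same information is also reachable in-process via the standard Thread API. A sketch (not taken from those blog entries) that lists every live thread and its current top stack frame:

```java
import java.util.Map;

public class StackDump {
    // List every live thread and the frame it is currently executing —
    // the in-process equivalent of glancing at a jstack dump.
    static String dump() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            sb.append('"').append(e.getKey().getName()).append('"');
            if (e.getValue().length > 0) {
                sb.append(" at ").append(e.getValue()[0]);
            }
            sb.append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(dump());
    }
}
```

    Taking two dumps a few seconds apart and comparing the hot threads shows exactly where a long-running request is stuck.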




    Hope that helps.



  • May 31, 2012 5:21 AM   in reply to Upen@Roul

    Great to hear. Thanks for the update.



