4 Replies Latest reply on Jul 14, 2013 2:15 PM by orotas

    Issue on space. Disk space increases and decreases automatically

    Gokul2011 Level 1

      Hi All,

       

      In our QA environment we have been facing a disk space issue for the past 2 days, with some unusual behaviour. We took thread dumps at that time and found the following thread:

       

      "172.30.104.53 [1373723166026] <closed>" daemon prio=3 tid=0x0000000101dca800 nid=0x135f in Object.wait() [0xfffffffd8a9ff000]

         java.lang.Thread.State: WAITING (on object monitor)

              at java.lang.Object.wait(Native Method)

              - waiting on <0xfffffffdf9743658> (a com.day.j2ee.servletengine.HttpListener$Worker)

              at java.lang.Object.wait(Object.java:485)

              at com.day.j2ee.servletengine.HttpListener$Worker.await(HttpListener.java:587)

              - locked <0xfffffffdf9743658> (a com.day.j2ee.servletengine.HttpListener$Worker)

              at com.day.j2ee.servletengine.HttpListener$Worker.run(HttpListener.java:612)

              at java.lang.Thread.run(Thread.java:662)

       

       

       

      Please let me know what the exact issue with this thread is.

        • 1. Re: Issue on space. Disk space increases and decreases automatically
          Jörg Hoh Adobe Employee

          This thread has nothing to do with the disk usage of CQ5. It is simply an idle worker thread that is waiting to be recycled by the servlet engine.

           

          Jörg

          • 2. Re: Issue on space. Disk space increases and decreases automatically
            Gokul2011 Level 1

            Hi Jörg,

             

            Can you please let me know what is causing this issue and how to resolve it?

            • 3. Re: Issue on space. Disk space increases and decreases automatically
              mildred_ignacio

              Run:  tools > report > diskusage to see what is consuming the filesystem,

              or, on Unix, run du -sh crx-quickstart/repository/*/* to list the usage per directory. You can also check whether it is the datastore that is consuming the space. For the datastore, cleanup has to be done manually from the CRX console: run data store GC (it removes unused objects). Tar PM optimization is set to run between 2 AM and 5 AM by default, which is why you see the space decrease automatically.

              This can also be run manually in CRX.

               

              http://helpx.adobe.com/crx/kb/DataStoreGarbageCollection.html
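
              The data store garbage collection mentioned above can also be driven programmatically through the Apache Jackrabbit management API that CRX 2.x is built on. This is only a rough sketch, not the official route (the CRX console and the KB article above are simpler): it assumes you can obtain an administrative JCR session whose implementation class extends Jackrabbit's SessionImpl, and the credentials and the way you get hold of the Repository are placeholders.

              import javax.jcr.Repository;
              import javax.jcr.Session;
              import javax.jcr.SimpleCredentials;

              import org.apache.jackrabbit.api.management.DataStoreGarbageCollector;
              import org.apache.jackrabbit.core.SessionImpl;

              public class RunDataStoreGc {

                  // Sketch only: the admin credentials below are placeholders for your own.
                  public static void runGc(Repository repository) throws Exception {
                      Session session = repository.login(
                              new SimpleCredentials("admin", "admin".toCharArray()));
                      try {
                          DataStoreGarbageCollector gc =
                                  ((SessionImpl) session).createDataStoreGarbageCollector();
                          try {
                              gc.mark();   // first pass: scan the repository and mark referenced binaries
                              gc.sweep();  // second pass: delete data store records that were not marked
                          } finally {
                              gc.close();
                          }
                      } finally {
                          session.logout();
                      }
                  }
              }

              The mark phase has to traverse the whole repository before anything is deleted, so schedule it for a quiet window.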

              • 4. Re: Issue on space. Disk space increases and decreases automatically
                orotas Level 4

                If you are seeing daily increases and decreases in your disk utilization (for example, it increases during the day or overnight, but by morning the disk space has been recovered), what you are seeing is the impact of Tar optimization.

                 

                The Tar Persistence Manager is the underlying storage mechanism for CRX. Data is stored in append-only tar files. This means that when you update a node or property, the new values are written to the Tar files and the indexes are updated to point to the new location, but the old data is still left in the file. This allows for much faster writes. So the more frequently you update existing content in your repository, the larger your Tar files become.
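
                A toy sketch of that append-only idea (plain Java, not CRX's actual tar format; it only illustrates why repeated updates grow a file until an optimization pass rewrites it):

                import java.io.IOException;
                import java.nio.charset.StandardCharsets;
                import java.nio.file.Files;
                import java.nio.file.Path;
                import java.nio.file.Paths;
                import java.nio.file.StandardOpenOption;
                import java.util.HashMap;
                import java.util.Map;

                // Toy append-only store: an update never overwrites old data in place.
                public class AppendOnlyToy {

                    private final Path dataFile;
                    private final Map<String, Long> index = new HashMap<>(); // key -> offset of the latest record

                    AppendOnlyToy(Path dataFile) {
                        this.dataFile = dataFile;
                    }

                    void put(String key, String value) throws IOException {
                        byte[] record = (key + "=" + value + "\n").getBytes(StandardCharsets.UTF_8);
                        long offset = Files.exists(dataFile) ? Files.size(dataFile) : 0L;
                        Files.write(dataFile, record, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                        index.put(key, offset); // the index now points at the new record; the old one is orphaned
                    }

                    public static void main(String[] args) throws IOException {
                        AppendOnlyToy store = new AppendOnlyToy(Paths.get("toy-store.dat"));
                        for (int i = 0; i < 1000; i++) {
                            store.put("sameKey", "value-" + i); // 1,000 updates of a single key
                        }
                        // Only the last record is "live", but the file still holds all 1,000 of them.
                        System.out.println("file size: " + Files.size(store.dataFile) + " bytes");
                    }
                }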

                 

                There is a process called Tar File Optimization that by default is scheduled to run from 2 AM to 5 AM server time. This process identifies all the orphaned data in the Tar files and deletes it, thereby reducing the size of the tar files on disk.

                 

                So if you are in heavy content migration mode, or moving large amounts of content between instances, you can see large swings in your disk space utilization as the Tar files balloon up during the day and then shrink back down overnight. In some cases, depending on how large your repository is, the 3 hours allotted by default are not sufficient to complete the optimization, so you may not be recovering all your disk space. During normal production operation this will usually average out over time and the 3-hour window is enough. However, during periods of heavy usage, especially during QA or content migration, you may find that your Tar files keep growing. If that happens you need to pick a period of time, say over a weekend, and trigger the Tar File optimization to run until it completes, to recover as much of your disk space as possible.

                 

                See http://helpx.adobe.com/crx/kb/TarPMOptimization.html for details on Tar File Optimization.
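
                If I remember that KB article correctly, a manual run on CRX 2.x can be started by dropping an empty marker file named optimize.tar into the workspace directory; the Tar PM picks it up within a few seconds and removes it once the run has finished. Please verify the mechanism and the exact directories against the article before relying on it. The sketch below is untested and assumes a default crx-quickstart layout with the crx.default workspace:

                import java.io.IOException;
                import java.nio.file.Files;
                import java.nio.file.Path;
                import java.nio.file.Paths;

                public class TriggerTarOptimization {
                    public static void main(String[] args) throws IOException {
                        // Assumption: default CRX 2.x layout; adjust the path to your installation
                        // and repeat for the other directories the KB article mentions.
                        Path workspaceDir = Paths.get("crx-quickstart/repository/workspaces/crx.default");
                        Path marker = workspaceDir.resolve("optimize.tar");
                        if (!Files.exists(marker)) {
                            Files.createFile(marker); // the Tar PM treats this marker as the signal to start optimizing
                        }
                    }
                }

                (On a Unix box this is of course just a touch of the marker file; the Java form is only to keep the examples in the thread in one language.)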

                 

                As someone else pointed out, you may also have an issue with your data store, which requires a different clean-up method.

                 

                Another possible culprit is your Lucene index files. Depending on your data model and repository size you can see swings in your Lucene indexes because, like the Tar files, the index periodically cleans itself up, and large amounts of content change can make this cycle more pronounced.

                 

                This blog post discusses both Tar File Optimization and Data Store garbage collection in more depth: http://blog.aemarchitect.com/2013/06/17/importance-of-aem-maintenance-procedures-for-non-production-boxes/
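
                To tell which of these candidates (tar files, data store, Lucene indexes) is actually swinging, it helps to snapshot the per-directory sizes over time, along the lines of the du -sh command suggested earlier in the thread. A minimal sketch, assuming a default crx-quickstart/repository layout (directory names vary between CRX versions, so adjust the path to whatever you actually see on disk):

                import java.io.IOException;
                import java.io.UncheckedIOException;
                import java.nio.file.Files;
                import java.nio.file.Path;
                import java.nio.file.Paths;
                import java.util.List;
                import java.util.stream.Collectors;
                import java.util.stream.Stream;

                public class RepositoryDiskUsage {

                    // Recursively sum the sizes of all regular files under a directory.
                    static long sizeOf(Path dir) throws IOException {
                        try (Stream<Path> files = Files.walk(dir)) {
                            return files.filter(Files::isRegularFile).mapToLong(p -> {
                                try {
                                    return Files.size(p);
                                } catch (IOException e) {
                                    throw new UncheckedIOException(e);
                                }
                            }).sum();
                        }
                    }

                    public static void main(String[] args) throws IOException {
                        // Assumption: run from the directory that contains crx-quickstart.
                        Path repository = Paths.get("crx-quickstart/repository");
                        List<Path> children;
                        try (Stream<Path> s = Files.list(repository)) {
                            children = s.sorted().collect(Collectors.toList());
                        }
                        for (Path child : children) {
                            if (Files.isDirectory(child)) {
                                System.out.printf("%8d MB  %s%n",
                                        sizeOf(child) / (1024 * 1024), child.getFileName());
                            }
                        }
                    }
                }

                Run it (or the du equivalent) before and after the optimization window and compare; whichever subdirectory is doing the swinging points you at the right clean-up procedure.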