This has nothing to do with the disk usage of CQ5. It's a thread that is going to be recycled by the servlet engine.
Can you please let me know what is causing this issue and how to resolve it?
Run: tools > report > diskusage to see what is consuming the filesystem.
Or, on Unix, run `du -sh crx-quickstart/repository/*/*` to list the filesystem usage. You can also check whether it is the datastore that is consuming the space. Datastore cleanup has to be done manually in the CRX console: run datastore garbage collection (which removes unused objects). Tar PM optimization is scheduled to run from 2 AM to 5 AM by default, so you should see the size decrease automatically; it can also be run manually from the CRX console.
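The Unix check above can be scripted to show the largest subtrees first. A minimal sketch, run here against a throwaway directory so it is self-contained; against a real installation you would point it at `crx-quickstart/repository` instead:

```shell
# Demo tree standing in for crx-quickstart/repository (hypothetical layout).
REPO=$(mktemp -d)
mkdir -p "$REPO/tarpm/workspace" "$REPO/shared/datastore"
dd if=/dev/zero of="$REPO/tarpm/workspace/data_00000.tar" bs=1024 count=100 2>/dev/null

# Largest subtrees first; run the same pipeline against the real repository path.
du -sh "$REPO"/*/* | sort -rh

rm -rf "$REPO"
```

The `sort -rh` sorts human-readable sizes in descending order, so whatever is eating the disk (tar files, datastore, or indexes) shows up at the top.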
If you are seeing daily increases and decreases in your disk utilization (for example, it increases during the day or overnight, but by morning the disk space has recovered), what you are seeing is the impact of Tar optimization.
The Tar Persistence Manager is the underlying storage mechanism for CRX. Data is stored in append-only tar files. This means that when you update a node or property, the new values are written to the end of the Tar files and the indexes are updated to point to the new location, but the old data is still left in the file. This mechanism allows for much faster writes. So the more frequently you update existing content in your repository, the larger your Tar files become.
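The append-only behaviour can be illustrated with a toy key=value log. This is only a sketch of the idea, not the actual Tar PM file format:

```shell
# Toy append-only store (illustration only; not the real Tar PM format).
LOG=$(mktemp)
echo "pageTitle=Home"    >> "$LOG"   # initial write
echo "pageTitle=Welcome" >> "$LOG"   # update: appended, the old value stays on disk
wc -l < "$LOG"                       # 2 records on disk for 1 live property

# "Optimization" = rewrite the file keeping only the latest value per key.
awk -F= '{v[$1]=$0} END {for (k in v) print v[k]}' "$LOG" > "$LOG.opt"
mv "$LOG.opt" "$LOG"
wc -l < "$LOG"                       # back to 1 record: space reclaimed

rm -f "$LOG"
```

Every update grows the file even though the logical content is unchanged; the periodic rewrite that drops orphaned records is what Tar File Optimization does for the real repository.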
There is a process called Tar File Optimization that by default is scheduled to run from 2 AM to 5 AM server time. This process identifies all the orphaned data in the Tar files and deletes it, thereby reducing the size of the tar files on disk.
So if you are in heavy content migration mode, or moving large amounts of content between instances, you can see large swings in your disk space utilization as the Tar files balloon up during the day and then shrink back down overnight. In some cases, depending on how large your repository is, the 3 hours allotted by default is not sufficient to complete the optimization, so you may not be recovering all your disk space. During normal production operations this will usually average out over time and the 3-hour window is enough. However, during periods of heavy usage, especially during QA or content migration, you may find that your tar files keep increasing in size. If that happens, you need to pick a period of time, say over a weekend, and trigger the Tar File Optimization to run until complete to recover as much of your disk space as possible.
See http://helpx.adobe.com/crx/kb/TarPMOptimization.html for details on Tar File Optimization.
As someone else pointed out you may also have an issue with your data store which requires a different clean up method.
Another possible culprit is your Lucene index files. Depending on your data model and repository size, you can see swings in your Lucene indexes because, like the Tar files, they periodically clean themselves up, and large amounts of content change can make this cycle more pronounced.
This blog post discusses both Tar File Optimization and Data Store garbage collection in more depth: http://blog.aemarchitect.com/2013/06/17/importance-of-aem-maintenance-procedures-for-non-production-boxes/