You can configure Akamai to cache the markup of your site based on URLs; this can cover both content and assets. You will need to get in touch with Akamai to see which configuration options suit you. For a single page like www.yoursite.com, Akamai can cache both the assets and the content, or, I believe, it can be restricted to only the assets based on their URLs.
The cache at the Akamai level can be configured to expire after a certain time, such as one hour or whatever interval is configured, and there are tools to manually clear the cache for a page on demand.
There are currently no out-of-the-box tools for integrating with Akamai. How you implement an integration depends on how your Akamai account is configured. There are two primary options (although there are others; your Akamai instance may be configured differently). Both of these assume that Akamai caches content when a user requests it; pushing content out to Akamai is different and not really well suited to a CQ implementation.
1. You let Akamai decide what is and isn't cached based on URL rules you put in place. In this model you point the DNS for your website to Akamai, every request first flows through Akamai, and it decides whether the request is subject to caching. Requests not subject to caching then pass through to your systems. In this model you don't need to do anything to enable caching at the CQ layer; all the configuration work is done at the Akamai layer, and you build CQ templates and components as you normally would.
2. You use a different domain name for requests that are subject to caching. So www.mysite.com would point directly to your systems, but images.mysite.com might point to Akamai. You also need to configure Akamai; oftentimes in this scenario you simply enable Akamai to cache all requests to that domain name. Then you need to put logic into your CQ implementation to add the domain name to certain types of references (like images, CSS, and JS). There are a number of ways to do this: creating a custom link rewriter, using the Link Externalizer, and others. Which approach you take depends on what types of assets you want to cache.
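As an illustration of the kind of link-rewriting logic option 2 requires, here is a minimal sketch in Python. The CDN domain, extension list, and function name are all hypothetical; in a real CQ implementation this logic would live in a custom link rewriter or use the Link Externalizer, not a regex pass.

```python
import re

# Hypothetical CDN hostname -- substitute the domain that points at Akamai.
CDN_DOMAIN = "https://images.mysite.com"

# Asset extensions to serve from the CDN (illustrative list).
CDN_EXTENSIONS = ("jpg", "jpeg", "png", "gif", "css", "js")

# Match root-relative src/href references ending in one of the extensions,
# e.g. src="/content/dam/logo.png" or href="/etc/designs/site.css".
ASSET_REF = re.compile(
    r'((?:src|href)=")(/[^"]+\.(?:%s))(")' % "|".join(CDN_EXTENSIONS)
)

def rewrite_asset_links(html):
    """Prefix matching asset references with the CDN domain."""
    return ASSET_REF.sub(r"\g<1>%s\g<2>\g<3>" % CDN_DOMAIN, html)
```

Page links stay untouched, so the HTML itself is still served from your own systems while assets resolve to the Akamai-backed domain.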
Which of those you pick depends on how much of your content you want cached at Akamai. If you are caching the HTML of your site, then option 1 is the only viable option; if you are caching only some assets, then option 2 is possible. The choice really comes down to where you want to control what is and isn't cached.
As far as invalidation goes, that can be a bit more tricky. There are some complex issues with trying to selectively flush the Akamai cache related to dependency tracking. If you are following a standard Dispatcher configuration, your application probably relies on two key capabilities of Dispatcher. The first is that when a node is activated in CQ, Dispatcher will flush any cached file whose path starts with that node. For example, say you have a page /content/myapp/home; you might have files cached like /content/myapp/home.html, /content/myapp/home.selector.html, and /content/myapp/home/_jcr_content/par/image.img.jpg/10039392.jpg. Dispatcher has logic to flush all of these when it receives a flush notification for /content/myapp/home. Second, you will probably also be making use of auto-invalidation, where all the HTML on the site is flushed any time anything is activated. This reduces the need to do dependency tracking.

Akamai doesn't really have any similar concepts. It has two flush APIs you can call: CCU and ECCU. CCU allows you to flush specific files, so you can say flush /content/myapp/home.html, but that is all it will flush; no other variations. ECCU provides a fairly wide range of wildcards you can use to flush based on patterns, so you could do something like flush everything below a path.
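A small sketch of the difference: Dispatcher's flush is effectively a prefix match over the cached files, while a CCU purge names one exact URL, so each variation would need its own purge request. Using the cache listing from the example above (function and variable names here are just illustrative):

```python
def dispatcher_style_flush(cache_paths, activated_path):
    """Return the cache entries a Dispatcher flush of activated_path removes.

    Dispatcher flushes every cached file whose path starts with the
    activated node; CCU, by contrast, purges only the exact URL you name.
    """
    return [p for p in cache_paths if p.startswith(activated_path)]

cached = [
    "/content/myapp/home.html",
    "/content/myapp/home.selector.html",
    "/content/myapp/home/_jcr_content/par/image.img.jpg/10039392.jpg",
    "/content/myapp/about.html",
]

# A flush notification for /content/myapp/home removes the three home*
# entries and leaves about.html cached.
flushed = dispatcher_style_flush(cached, "/content/myapp/home")
```

Replicating that behavior with CCU would mean tracking and purging every variation individually; ECCU's wildcard patterns come closer, but with a much slower response time, as discussed below.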
The challenge is the varying response times of these APIs. The response time of CCU is generally in the 7-minute range; the ECCU response time, however, is around 40 minutes. That means duplicating exactly what Dispatcher does is difficult.
In addition, there are normally limits on how many of either of these requests you can send, so you need to self-throttle.
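A minimal sketch of what client-side self-throttling might look like, assuming a rolling-window request limit. The class name and limit values are hypothetical; the actual quotas depend on your Akamai contract, so check those before sending purge requests.

```python
from collections import deque

class PurgeThrottle:
    """Allow at most max_requests purge calls in any rolling window.

    Illustrative sketch only: limits are placeholders, not real Akamai
    quotas. delay_before_send returns 0.0 if a request may be sent now,
    otherwise the number of seconds the caller should wait and retry.
    """

    def __init__(self, max_requests=100, window_seconds=60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.sent = deque()  # timestamps of requests inside the window

    def delay_before_send(self, now):
        # Drop timestamps that have aged out of the rolling window.
        while self.sent and now - self.sent[0] >= self.window_seconds:
            self.sent.popleft()
        if len(self.sent) >= self.max_requests:
            # Wait until the oldest request leaves the window.
            return self.window_seconds - (now - self.sent[0])
        self.sent.append(now)
        return 0.0
```

In practice you would feed activation events into a queue and have a worker drain it through a throttle like this, batching purge requests where the API allows it.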
Most of my clients follow a TTL approach to flushing the Akamai cache rather than trying to invalidate it selectively, because the benefits often aren't worth the costs, especially if you are using Dispatcher. However, that's a business requirements discussion. You need to really understand what your requirements are and then look at possible solutions, taking the rules of the Akamai APIs into account.
If you are using Dispatcher, there is an option that involves watching the stat files and the modified dates on files cached by Dispatcher and flushing based on changes to those (http://www.cqblueprints.com/xwiki/bin/view/Blue+Prints/Cache+Flush+Service). This is not an approach I have ever tried to implement, so I can't speak to its effectiveness or what issues you might encounter; it's just something I saw posted.
Thanks, Orotas, for the detailed reply!