I managed to set up my first CQ cloud successfully, yay!
Then I realized that content activation from author to publish doesn't work.
I checked the replication agent settings and found several replication agents created on the author instance.
But all of them point to the dummy URL http://invalid-hostname-for-backup-replication-agent.
I disabled all of them and created a new one called publish1 with a URL pointing to my publish instance.
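In case it helps anyone else, this is roughly the command-line equivalent of what I did in the UI, using the Sling POST servlet to set properties on the agent's jcr:content node. The hosts and credentials below are placeholders for my setup; yours will differ. The command is echoed so it can be reviewed before actually running it:

```shell
#!/bin/sh
# Placeholder hosts and credentials -- substitute your own author and publish instances.
AUTHOR="http://localhost:4502"
PUBLISH_URI="http://publish-host:4503/bin/receive?sling:authRequestLogin=1"

# Point the publish1 agent at the real publish instance and enable it.
CMD="curl -u admin:admin \
  -F transportUri=$PUBLISH_URI \
  -F enabled=true \
  $AUTHOR/etc/replication/agents.author/publish1/jcr:content"
echo "$CMD"   # echoed for review; run it directly once the values are correct
```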
My questions are:
* Shouldn't replication agents (including flush agents) be configured automatically after creating the cloud?
* Did I miss something? Is this a known issue? Is a fix planned?
Additionally, I reviewed the dispatcher.any configuration file on the dispatcher instance.
It seems the cache sections are renamed to _cache, which means caching is not enabled by default on either the publish or the author instance.
Is it planned to change this? Are there any other areas that need to be configured manually after creating the cloud?
It's very good to hear that you got a cloud running. I think we can get you an answer soon on our choice of replication agent configuration.
If you have a minute, could you list some of the main sticking points you encountered along the way to your first cloud (it doesn't matter whether the error was yours or ours)? That feedback would help us a lot.
The extra replication agents you see are intended for use when you add a new publish instance to your cloud. You are right that the presence of these agents interferes with tree activation (the symptom is that the activation never completes). That's a bug. The workaround is to first add whatever publish instances your cloud will need and then disable the extra replication agents, and to do both of these steps before attempting to use the tree activation feature. [Update: this bug has been resolved in the July 2012 CloudManager release]
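A quick way to apply that workaround from the command line is to flip each unused agent's `enabled` property via the Sling POST servlet. The agent names, host, and credentials below are placeholders (list yours under /etc/replication/agents.author.html); the commands are echoed so you can review them before removing the `echo`:

```shell
#!/bin/sh
# Hypothetical author host and credentials -- adjust for your cloud.
AUTHOR="http://localhost:4502"
CREDS="admin:admin"

# Extra agents to disable (placeholder names).
for AGENT in publish2 publish3; do
  CMD="curl -u $CREDS -F enabled=false $AUTHOR/etc/replication/agents.author/$AGENT/jcr:content"
  echo "$CMD"   # remove the 'echo' once you've verified the target paths
done
```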
Regarding the dispatcher config: it is something we need to review; we have not given it the attention it needs. Caching needs to be enabled with care. The risk is that if any cached items allow selectors, and the backend code doesn't limit the number of accepted selectors, an attacker can flood the cache with endless selector variations, which is a denial-of-service attack.
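To illustrate what enabling it can look like, here is a minimal sketch of a dispatcher.any cache section (the docroot path and rules are examples only, not the shipped defaults). Renaming /_cache back to /cache turns caching on, and deny-by-default rules keep the selector-flood exposure small:

```
# Example only -- docroot and rules are placeholders, not the shipped defaults.
/cache
  {
  /docroot "/opt/dispatcher/cache"
  /rules
    {
    # Deny everything, then allow only rendered pages.
    /0000 { /glob "*" /type "deny" }
    /0001 { /glob "*.html" /type "allow" }
    }
  /invalidate
    {
    /0000 { /glob "*" /type "deny" }
    /0001 { /glob "*.html" /type "allow" }
    }
  }
```

Note that an allow rule like `*.html` still matches selector variants such as page.a.b.c.html, so the backend must also cap accepted selectors for the cached paths.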
Regarding 'any other areas which need to be configured manually': the intent of Cloud Manager is to provide a preconfigured topology that is 'pretty close' to what you need for a production system. However, every real environment has subtly different requirements, so the recommendation is that you review the configuration we provide and verify that it meets the specific requirements of your intended usage.
If you have specific suggestions for improving the default configuration we emit, we would be glad to hear them.