In the past, all of our WebHelp projects have been published to servers on private networks, behind firewalls, with secure access, etc. However, this product is cloud-hosted on a publicly accessible server. The concern has been raised that someone could find the HTML files via a Google keyword search and then scrub or replace content. Competitors are the particular worry. While we could write an authentication script requiring user credentials whenever the help is launched, that script won't get triggered if the HTML content of a topic comes up directly in an internet search.
Is this a common concern? Is there something we could embed in the metadata, or set at the project level in RoboHelp for WebHelp output, that would mitigate this risk?
Thanks in advance...
Could you use a robots.txt file? I think it prevents search bots from indexing the pages. If you do a search you should be able to find information about it, or one of your web guys should know about it, I'd think.
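To sketch what that might look like: a robots.txt file placed at the web root asks compliant crawlers not to index a given path. The `/help/` path below is just a placeholder; you'd substitute wherever your WebHelp output is actually published.

```
# robots.txt at the site root — asks well-behaved crawlers
# not to crawl the help output (path is hypothetical)
User-agent: *
Disallow: /help/
```

Two caveats worth noting: robots.txt is purely advisory, so it keeps your topics out of search results from cooperating engines but does nothing against a scraper that ignores it; it's not access control. And since you asked about metadata, the per-page equivalent is a robots meta tag in each topic's `<head>`, e.g. `<meta name="robots" content="noindex, nofollow">` — whether RoboHelp lets you inject that into every generated topic via a template or master page is something I'd verify in your version.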