I used to use rule processors until I found evaluateXPathExpression to be so much easier to use, and I have never looked back. Is there anyone out there still using rule processors? If so, why?
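For readers who haven't tried it, a minimal sketch of the simpler approach, for the InDesign ExtendScript DOM (the "para" tag name is just a placeholder; this only runs inside an InDesign host):

```javascript
// ExtendScript sketch (InDesign): one call returns all matches as an array.
var root = app.activeDocument.xmlElements.item(0); // the document's root element
var matches = root.evaluateXPathExpression("//para"); // "para" is a placeholder tag

for (var i = 0; i < matches.length; i++) {
    // each entry is an XMLElement; do something with it
    $.writeln(matches[i].markupTag.name);
}
```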
With an early CS4 we found that evaluateXPathExpression() was actually faster, but had a memory leak causing erratic InDesign Server crashes. Since then we've stuck to XML rules.
Recently I retried evaluateXPathExpression for those assumed performance gains but could not reproduce the effect; it was roughly the same. Therefore we still stick to XML rules, but will revisit the topic soon with more exhaustive tests.
Btw, we use XML rules with a single XPath and collect the results into an array. Originally XML rules were meant for incremental processing. If I remember correctly, they also allow for optimizations such as "skip the children here" - never used that.
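That single-XPath-into-an-array pattern might look roughly like this, assuming Adobe's "glue code.jsx" convention from the XML Rules samples (tag name and include path are assumptions; the skip-children call is shown commented out since I haven't used it either):

```javascript
// ExtendScript sketch (InDesign XML rules via Adobe's "glue code.jsx").
#include "glue code.jsx"

var found = []; // collect matches into a plain array

var collectRule = {
    name: "collectParas",
    xpath: "//para",   // a single XPath; "para" is a placeholder tag
    apply: function (element, ruleProcessor) {
        found.push(element);
        // __skipChildren(ruleProcessor); // optional: don't descend into this subtree
        return false;  // glue-code convention: we did not change the XML structure
    }
};

__processRuleSet(app.activeDocument.xmlElements.item(0), [collectRule]);
$.writeln(found.length + " elements collected");
```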
In that short comparison test I also could not reproduce the memory leaks; maybe they're gone.
On the other hand, there are plenty of ways to reduce the use of XPath altogether: navigate the tree yourself, drop wildcard "/*/" and depth "//" searches that were chosen just to save typing, and so forth. Just today I eliminated most parts of one XPath and reduced local execution time by 60 percent.
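As an illustration of that idea, here is a sketch of replacing a depth search like "//chapter/title" with direct navigation over known structure (tag names are placeholders, and of course this only pays off when you actually know the structure):

```javascript
// ExtendScript sketch: walk known structure instead of a "//" depth search.
var root = app.activeDocument.xmlElements.item(0);
var titles = [];

var chapters = root.xmlElements; // direct children only, no deep scan
for (var i = 0; i < chapters.length; i++) {
    if (chapters[i].markupTag.name !== "chapter") continue;
    var kids = chapters[i].xmlElements;
    for (var j = 0; j < kids.length; j++) {
        if (kids[j].markupTag.name === "title") titles.push(kids[j]);
    }
}
```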
For completeness' sake you should also consider the ExtendScript XML subsystem with its .xpath() function; it has some advantages if your XML does not exceed a certain size.
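That route uses ExtendScript's E4X XML object, which works on an in-memory copy of the XML rather than the live InDesign DOM - something like this sketch:

```javascript
// ExtendScript sketch: E4X XML object with the ExtendScript-specific
// .xpath() extension; operates on in-memory XML, not the document's DOM.
var xml = new XML("<root><para>a</para><para>b</para></root>"); // e.g. read from a file instead
var paras = xml.xpath("//para"); // returns an XMLList
$.writeln(paras.length() + " matches");
```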
Dirk Becker wrote:
With an early CS4 we found that evaluateXPathExpression() was actually faster...
I just ran a test in CS5, grabbing a few hundred variably deeply nested elements with a "//element" XPath. The rule processor was faster in this instance, though the difference was indeed negligible: hundredths of a second. Using evaluateXPathExpression() is certainly more straightforward.
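A rough harness for that kind of comparison, using the XMLRuleProcessor API directly rather than the glue code, might look like this (the "//element" XPath is from the test above; $.hiresTimer reports microseconds since it was last read):

```javascript
// ExtendScript sketch: rough timing of the two approaches on the same XPath.
var root = app.activeDocument.xmlElements.item(0);

$.hiresTimer; // read once to reset the counter
var viaXPath = root.evaluateXPathExpression("//element");
var tXPath = $.hiresTimer; // microseconds for the XPath call

var viaRules = [];
var proc = app.xmlRuleProcessors.add(["//element"]);
try {
    var match = proc.startProcessing(root);
    while (match) {
        viaRules.push(match.element);
        match = proc.findNextMatch();
    }
} finally {
    proc.endProcessing();
}
var tRules = $.hiresTimer; // microseconds for the rule-processor loop

$.writeln("XPath: " + tXPath + " µs, rules: " + tRules + " µs, " +
          viaXPath.length + "/" + viaRules.length + " matches");
```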