If you have 7.x, you should just use 7.1 (currently in RC) from beginning to end. Much simpler, and in the vast majority of cases, much better results.
If you have a case where it doesn't work well, the best way to move things forward is to post an example (the original file, not screenshots) so that I can examine it. That way there is a possibility of making improvements in future versions. This is far more productive than trying to mix and match versions and use hybrid workflows.
Hi Eric, thanks for the response and welcome back from your break away from all of us. Hope the return is not too traumatic.
While I agree with your reasoning that it is preferable to work entirely in either 6.x or 7.x, that misses the point I was making, and it does not resolve the problem identified by Vit, Noel and others: there are already situations, and by inference there will be more in the future, where the "auto" approach to CA correction and defringing will not work as well as the hands-on approach, with its greater range of tools, that we had prior to 7.1.
There must be a limit to how many files you can examine in an attempt to improve the new approach, and however much it may be improved, it seems logical to assume that it will not be possible to cover all situations. Noel's point in one thread was that it is silly to throw away something that already works well (the manual approach with several tools) for something new and more restrictive (the 7.1 auto approach), so why not retain both? Again, I have not even tried 7.1 yet, but having read the threads discussing it, and given the shortcomings already identified by people like Vit and Noel, at least with respect to some images, I have to sympathise with Noel's suggestion.
But the crux of my OP was much more focused, namely: is there a fault in the reasoning I have outlined? You have not addressed that question.
I collected several test files (over a hundred) from users during the ACR 7 dev cycle which exhibited various types of fringing, and found that the new tools available in 7.1 do a better job than the ones in 7.0 and 6.x on all of those test images except a couple. Therefore, my plan is to use those few remaining test images (and others like them that I may come across going forward) to refine the tools. We strongly believe that the new tools are better and have significant advantages (both workflow- and quality-wise) compared to the old ones.
The problem with switching back to earlier versions of ACR (or going back and forth) is that in general the parameters won't be fully honored, and doing partial edits in one version (using PV 2010) and the rest of the edits in another version (using PV 2012) won't work optimally. Ideally the pipeline needs to know all the edits you plan to apply in advance, so it can internally optimize the pipeline for quality. If you do part in one and part in another, that breaks the model.
Thanks, Eric, you've answered my query and confirmed my suspicion that you can't switch back and forth between PV 2012 and PV 2010.
Sorry, Vit, looks like you will have to get used to the way defringing is done in 7.1.
For my part, I'm quite looking forward to trying it out. If Eric is so enthusiastic about it it's bound to be pretty good.
I have no problem with "getting used" to it - my opinion is that ACR in general is a really excellent piece of software, and I'm glad that various imperfections from the past have been gradually solved the right way, which isn't the case with some other raw development programs.
However, there is still room for improvement, and I'm quite sure defringing will also be improved at some point in the future.
Certainly I will not pretend that the methods available in 7.1 are perfect. The key is to have suitable "failure case" test images so that the team can make improvements going forward. This is an iterative & incremental process, and the feedback & test images we have gotten from you in the past have helped the product a great deal -- thank you.
I'm glad I was of help here.
Here is another example. The first image is uncorrected, the second uses some moderate settings, and the third uses a somewhat increased hue range, as is evident. Yes, the second image is quite acceptable, but the third one shows something I don't like. My opinion, as I already said in another thread, is that desaturation should diminish gradually with distance from the edge, not cut off sharply at a certain distance as it does in this version. Also, the amount of this desaturation should depend on how sharp and bright the edge is - the parts around the feet are obviously way "over-desaturated".
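The falloff behavior being requested here can be sketched in a few lines. This is purely a hypothetical illustration of the difference between a hard cutoff and a gradual taper, not how ACR's Defringe actually works; the function names, the pixel radius, and the choice of a smoothstep curve are my own assumptions.

```python
def defringe_weight_hard(distance, radius):
    """Hard cutoff: full desaturation inside the radius, none outside."""
    return 1.0 if distance <= radius else 0.0

def defringe_weight_smooth(distance, radius):
    """Gradual falloff: desaturation tapers smoothly to zero at the radius."""
    if distance >= radius:
        return 0.0
    t = 1.0 - distance / radius      # 1.0 at the edge, 0.0 at the radius
    return t * t * (3.0 - 2.0 * t)   # smoothstep easing curve

# Compare the two behaviors at a few pixel distances (radius = 4 px)
for d in (0, 1, 2, 3, 4, 5):
    print(d, defringe_weight_hard(d, 4), round(defringe_weight_smooth(d, 4), 3))
```

With the hard version, a pixel 4 px from the edge is fully desaturated and a pixel at 5 px is untouched, producing the abrupt transition Vit describes; the smooth version ramps from full effect at the edge down to zero at the radius.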
Hi Vit, thanks for the example. This is tricky. It turns out that the new Defringe function is only loosely edge-based. It is important for it not to rely too much on edges, because longitudinal CA can be visible in cases that are far away from an "edge" (and even then, not strong edges, but very gentle gradations). I'll think about it some more.
This is tricky.
I can imagine...
I too have found cases where it works very well, and others where it cuts too much into surrounding areas, necessitating more painstaking local correction, and/or backing off the global settings.
Of course - this sample really was a tough case. Also, I understand that this subject is quite complex and that there are various reasons why the current algorithm was designed the way it was.
It's important to note that not all of the blue color in this sample (actually a crop from a bigger image) is fringing. The photo was taken from inside a building, and I set the WB appropriately for the interior, which was about 4000K. Most of the blue color around the doors is daylight - it looks blue because it has a higher color temperature. It's a matter of discussion what to do in such a case - my opinion is that this blue doesn't actually need defringing; a better idea would be for the user to apply local adjustments and correct the color temperature of that area so it looks less blue.
I think it would be a good idea to introduce another slider to set the threshold for the lightness of the bright side of the edge that should be defringed - from blown highlights down to some lower value (similar to what we had before) - unless, of course, you can make a really good algorithm that automatically detects what is fringe and what is just a different color temperature or simply a blue color, which would be quite demanding, I suppose. For instance, on the third photo, the current algorithm used way too low a threshold for an edge that didn't exist (around the feet) - somewhere as low as an sRGB value of around 80, 100, 120, which, I think, really doesn't need defringing in most cases.
I also hope you will consider my other wish about gradually diminishing the effect of defringing with distance from the edge. I really can't find an explanation for why this transition should be as sharp as it is in the current version. And also, the distance should be shorter towards the bright side of the edge - in my older example in another thread, it looked like both were about equal (I hope I'm wrong), so with the hue range too wide, the sky around the tree was also "defringed", while with a narrower hue range, defringing wasn't effective enough.
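The luminance-threshold slider suggested above could be gated roughly like this. Again, this is only an illustrative sketch, not ACR's implementation; the function name, the 0-255 sRGB scale, and the threshold/softness parameters are my own assumptions, with a soft transition band to avoid an abrupt on/off boundary.

```python
def edge_defringe_gate(bright_side_luma, threshold=200, softness=20):
    """Return a 0..1 factor: defringe only edges whose bright side exceeds
    a luminance threshold, with a soft ramp just below it.

    bright_side_luma: sRGB luminance (0-255) of the bright side of the edge.
    threshold: luminance at and above which the edge is fully eligible.
    softness: width of the transition band below the threshold.
    """
    if bright_side_luma >= threshold:
        return 1.0
    if bright_side_luma <= threshold - softness:
        return 0.0
    return (bright_side_luma - (threshold - softness)) / softness

# A dim edge (like the area around the feet, luma ~80-120) gets no
# defringing; a near-blown edge gets the full effect.
for luma in (100, 185, 195, 255):
    print(luma, edge_defringe_gate(luma))
```

Under this scheme the low-contrast "edge" around the feet (sRGB around 80-120) would fall well below the threshold and be left alone, while fringes against blown highlights would still be corrected fully.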
I will certainly consider these points. Thanks for your suggestions.
For the current design, we needed to trade off correction quality and the complexity (including number) of controls.