So, what would the fix be?
I mean an image can be thought of simply as a distribution of red, mixed with a distribution of green, mixed with a distribution of blue.
So if the color is off by the same amount across the histogram, then the fix is white balance (or camera calibration).
If the color is off by different amounts in different tonal ranges, then the colors need to be redistributed - manually (e.g. in Photoshop) it could be done using separate RGB channel curves.
Not sure how it could be fixed automatically...
So, can you fix it by adjusting RGB percentages (either WhiteBalance or CameraCalibration), or does it require a larger hammer?
There is a color cast that changes toward the outside of the image compared to the center because, say, the light angle changes. So a color-cast calibration image is taken with a diffuser on the lens, and then each image taken by the camera is corrected pixel by pixel based on what the calibration image shows as the color shift of each pixel. This is like having a WB eye-dropper for each pixel instead of for the overall image.

I doubt the Adobe RAW engine has a way to correct one image based on another calibration image, so it would be a new function that LR would need to add for these types of cameras. I suspect Adobe regards such things as a niche too small to bother with, at least in tough economic times, even if the engineers would want to do it. If enough cameras start to have peripheral color correction built in, maybe Adobe would reconsider.
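To make the "WB eye-dropper for each pixel" idea concrete, here is a minimal sketch in Python/NumPy - my own illustration, not anything the Adobe RAW engine actually does, and the choice of green as the reference channel is an assumption. It derives a per-pixel gain from the diffuser shot so that the calibration frame comes out neutral, then applies the same gain to the real image.

```python
import numpy as np

def per_pixel_wb(image, cal):
    """Per-pixel white balance from a diffuser calibration shot.

    image, cal: float arrays of shape (H, W, 3) in [0, 1], taken
    with the same lens/sensor/movements combination.
    """
    g = cal[..., 1:2]                       # green channel as reference
    gain = g / np.clip(cal, 1e-6, None)     # gain that neutralizes cal
    return np.clip(image * gain, 0.0, 1.0)
```

A quick sanity check: applied to the calibration shot itself, every pixel comes out with R = G = B, i.e. the cast is gone while the green (luminance-ish) channel is left untouched.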
I would guess, without any inside information, that the lens corrections included in the current release of the Adobe RAW engine are a direct result of Adobe being forced to figure out how to do them internally a few iterations back, when camera manufacturers required such corrections before they would allow their raw files to be decoded by Adobe. And those manufacturers may have helped Adobe with the technique, because they had already built such corrections into their camera bodies to produce their camera JPGs. I suspect PhaseOne is not likely to help Adobe in any way, since the two are competitors, so Adobe would have even less reason to attempt such corrections. Of course you never know, and can always hope.
Just like overall WB, peripheral color correction is best done with RAW data. Otherwise, a different calibration would need to be performed for each color temperature of light. It would seem that the current peripheral luminance correction could be enhanced to adjust each sensor color plane separately.
The complication is, of course, that the correction gradient is based on the interaction of the lens and the sensor, so every combination of lens and camera would need its own profile. With the cameras in question, there aren't likely to be that many lenses, so even if not every camera and lens were supported, the number of combinations would not be that great.
Alternatively, a light-angle model could somehow be measured for each lens, and an angle-to-color-cast model for each sensor; the combination of the two could then be applied to each pairing of lens and sensor. I'm not sure, though, how these models of angle and cast could be measured and created using only cameras and lenses.
It might be sufficient if Adobe provided a peripheral-color-correction calibration tool and let individual users create their own profiles for their particular camera and lens combinations based on a diffused-light shot, instead of Adobe trying to supply ready-made calibrations and guessing what gear people have.
Pixelbound, Adobe, Rob &
What about this
Take 2 images from as close to the same standpoint as possible, i.e. with the camera on a tripod, etc.
1 is the original image,
2 is the one through the white lens cap.
You now have the colour cast from centre to corners in the capped image as a reference to work from.
All that needs to be done - I guess in Photoshop - is to remove the colour, as referenced by the capped image, from the shot.
Simple in Photoshop - like everything is - a plugin / mask / reverse-biased logarithmic flip-flop or whatever.
The process in Lightroom would most definitely be a develop setting, where one image would be the reference and this could then be applied to one, a selection, or all of the images.
As Ssprengel remarked, this is specialised and probably many people would not need it. Then again, many people did not want the lens corrections either, and the number of posts and discussions about those is rising most days.
Rob, one for your sleepless nights
Subtracting R,G,B on a pixel-by-pixel basis from an image - very, very simple to do, surely... I'm sure Pixelbound would be impressed, and it would be another notch on your bedpost of Lightroom plugins.
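Outside Lightroom (in a Photoshop script or a standalone tool with pixel access), that subtraction really is only a few lines. A hedged sketch, assuming a simple additive cast model - real sensor casts are arguably closer to multiplicative, but this matches the "subtract the colour" description literally:

```python
import numpy as np

def subtract_cast(image, cal):
    """Subtract the colour cast recorded in a capped/diffuser shot.

    Treats each cal pixel's deviation from its own channel mean as
    the cast, and removes that offset from the target image.
    """
    neutral = cal.mean(axis=-1, keepdims=True)   # per-pixel neutral grey
    cast = cal - neutral                         # per-pixel RGB offset
    return np.clip(image - cast, 0.0, 1.0)
```

Applied to the reference shot itself, this collapses every pixel to its own neutral grey, which is exactly the "do - well, nothing - in a perfect world" behaviour described further down the thread.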
Well, I really don't know enough about Lightroom to know how doable this is parametrically. Certainly it would be most doable as a pixel edit, and the integration of pixel edits into Lightroom is my single most desired feature request for Lightroom.
Regarding plugins - plugins have no access to image data, nor any support for image manipulation. So even something as simple as applying an R,G,B channel offset is darn hard. Image access is one of the most requested features for the SDK - but it's not there yet. Many workarounds have been discussed, but mostly they suck, and are not really any better than just doing the stuff in Photoshop or the like.
Fingers crossed for Lr4 ;-}
I don't mean as a pixel edit - rather, in the same "way" that the lens corrections are applied. I meant non-destructively; it's only a mask with colour subtracted - kinda thing.
As for the plugins
I forgot about that tiny little detail, the elusive SDK.
note to self: don't buy a Hasselblad and wide-angle combination
rather assume that my Canon 5D and wide angle do not suffer
ignorance is a powerful thing
I'm sure the Lightroom engineers could do it parametrically, but unless there is enough demand, they never will.
yup, Sorry Pixelbound, my feeling as well
Is it doable via Photoshop/plugin or other external/pixel editor?
I'm sure it is
I'm a photographer, not a Photoshop fundi, so I've no idea there
Phase backs come with a translucent sheet designed for creating color-cast correction shots. You end up with what should (in a perfect world) be a flat grey image that, if applied to the selected image, would do - well, nothing. But since we don't live in a perfect world, what you get is an image that has color shifts in it, which you apply as an inverse to the target image.
Generally I only get a noticeable color cast when I use movements - although I suppose if I looked really closely I'd find it on my 35mm lens too. Where this becomes a huge problem is stitching: if you are using back movements to eliminate parallax (a pretty common technique), what you end up with is, say, a left-hand image that has a cast shifting from cyan to magenta - then you try to stitch that to the right-hand image and - yuck, a big nasty mess in the middle. No fun at all.
Sadly, there's no easy way to automate this, as the color shift depends on how much shift/rise-fall and tilt/swing you've used for that particular image - I always shoot a correction shot every time I move off dead center. It's kind of too bad that no one has bothered (or figured out how) to feed tilt/shift data into the back (it would require actual electronic connections on a technical camera, and I guess we don't want to pay for that).
For the time being I'd settle for the ability to apply a color-cast correction image to a target image - the same way that Capture One works.
Very simply put, yup - however, it would need to work on a "grey" image as shot through a filter, opaque white lens cap or similar, and then this would be applied to the main image.
It would then need to work almost as a local white balance (or selective white balance - another request), applied across that whole "grey" image to make it white.
Yes, your color-cast shot is essentially 50% gray with a cast toward cyan or magenta. The goal is to remove the cast, essentially pixel by pixel.
I don't think applying a circular gradient - even if you could figure out where the center is relative to the color cast - would work, as the slope of the gradient depends on the microlens design of the back, the size of the sensor, the amount of shift from zero, etc. Keep in mind the cast is caused by light rays hitting the sensels at too steep an angle for the microlenses to handle. Actually, I don't suppose it matters whether the sensor uses microlenses or not - it's still an angle issue. Change the sensor size and you change the angles. Change the amount of shift and you change the angles. Hence the need for a calibration shot for each setup. On a camera like the Rm3di, I suppose you could write down the offsets for each setup, but it's faster to just shoot a calibration each time you change the setup. You just need some way to apply them.
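The angle argument can be made concrete with a back-of-the-envelope formula. This is a thin-lens/exit-pupil simplification of my own, and `pupil_mm` (the exit-pupil distance in front of the sensor) is an illustrative parameter, not a spec for any real lens: the chief-ray angle at a sensel grows with its distance from the optical axis, and shift simply moves the whole sensor further off-axis.

```python
import math

def ray_angle_deg(radius_mm, shift_mm, pupil_mm):
    # Chief-ray angle at a sensel sitting radius_mm from the frame
    # centre, with shift_mm of lateral movement applied, for a lens
    # whose exit pupil sits pupil_mm in front of the sensor.
    return math.degrees(math.atan((radius_mm + shift_mm) / pupil_mm))
```

With, say, an 80 mm exit-pupil distance, a corner sensel 24 mm off-axis sees roughly a 17-degree ray; add 15 mm of shift and it's about 26 degrees - which is why one calibration can't serve two different shift settings.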
By the way, this also eliminates vignetting: if the lightest gray is taken as the zero point, then you adjust each other pixel to first make it neutral, and then modify its value to bring it up to your "mid point". Which is what Capture One Pro does.
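That "neutralize, then lift to the mid point" procedure is essentially flat-field correction. A rough sketch of my understanding (not Capture One's actual algorithm), using each channel's brightest calibration value as the reference so that both the cast and the vignetting fall out in one multiplicative step:

```python
import numpy as np

def flat_field_correct(image, cal):
    """Correct colour cast and vignetting from a diffuser shot.

    Scales every pixel so that the calibration frame would come out
    as a uniform neutral grey at its brightest recorded level.
    """
    ref = cal.reshape(-1, 3).max(axis=0)       # lightest value per channel
    gain = ref / np.clip(cal, 1e-6, None)      # per-pixel, per-channel gain
    return np.clip(image * gain, 0.0, 1.0)
```

If you fabricate a synthetic falloff field, multiply a flat scene by it, and correct with a calibration shot of the same field, you recover the flat scene - cast and vignetting both gone in one pass.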