Thank you for sharing this, Marco. You obviously put a lot of work into it.
As a very minor comment I would thank Apple for the Control+Option+Command+8 option that makes the text readable on your page. (I just feel sorry for Windows users who may be forced to read all that white-on-black text.)
>(I just feel sorry for Windows users who may be forced to read all that white-on-black text.)
I suggest you save your condescension for those who need it.
I can override the page style in my (Windows) browser just as easily as you can in your (Mac) browser. But don't let your prejudices get in the way of reality, if that's what you get off on.
No, this has absolutely nothing to do with a web browser of any kind. You are massively under-informed.
The Control+Option+Command+8 option on the Mac inverts whatever you have displayed on your screen, regardless of what application you're using: Photoshop, the Finder, MS Word, it just doesn't matter.
Don't get your panties in a bunch, man.
And it takes just a fraction of a second to toggle the effect on and off.
From this experience, I think Adobe should implement a Lab space choice in Camera Raw alongside the four color spaces already provided: first, to avoid unnecessary color conversions for calibration and accuracy purposes; second, to help those who prefer working in Lab space.
I'm pretty sure that transforming from ProPhoto RGB (16 bit) into Lab after conversion would be the same as exporting into Lab from within Camera Raw, since Camera Raw uses ProPhoto RGB chromaticities in linear gamma until the final transform into the output color space. So whether you do it in Camera Raw or afterward in Photoshop, it's six of one and half a dozen of the other...
I agree, the final result could be similar, but...
For calibration purposes, the only valid formula for color distance is deltaE 2000, and thus the Lab space. I could put the Lab values directly into the scripts, avoiding multiple color conversions and extra color-engine involvement.
For those who prefer working in Lab space, the workflow could be easier and faster.
Don't you agree?
Thank you for the kind words. And thanks for a comprehensive article.
And there is some non-uniformity (errors) in the Adobe transforms between Lab and RGB. This has been discussed elsewhere. If you convert Lab to RGB to Lab or RGB to Lab to RGB you will see discrepancies that are beyond mere rounding errors. This is the primary reason that I use Adobe services for conversions rather than the CIE suggested algorithms. My assumption is that ACR is using these same services.
Also, there are some incorrect routines on the web for DE2000 calculations. The ones I use have been validated against CIE test data. Just a warning. I do try to update my scripts when I see something suspicious.
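As a side note for readers who want to check their own routines: below is a minimal Python sketch of the CIEDE2000 formula (following the commonly cited Sharma et al. formulation, not any particular poster's script). It can be checked against the published test pairs mentioned above.

```python
import math

def delta_e_2000(lab1, lab2):
    """CIEDE2000 color difference between two (L*, a*, b*) triples.

    Both samples are assumed to share the same Lab white point.
    """
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    # Chroma-dependent a* rescaling
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    Cbar = (C1 + C2) / 2.0
    G = 0.5 * (1.0 - math.sqrt(Cbar**7 / (Cbar**7 + 25.0**7)))
    a1p, a2p = a1 * (1.0 + G), a2 * (1.0 + G)
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)
    h1p = math.degrees(math.atan2(b1, a1p)) % 360.0
    h2p = math.degrees(math.atan2(b2, a2p)) % 360.0
    # Differences (hue difference wrapped into [-180, 180])
    dLp, dCp = L2 - L1, C2p - C1p
    if C1p * C2p == 0:
        dhp = 0.0
    else:
        dhp = h2p - h1p
        if dhp > 180.0:
            dhp -= 360.0
        elif dhp < -180.0:
            dhp += 360.0
    dHp = 2.0 * math.sqrt(C1p * C2p) * math.sin(math.radians(dhp) / 2.0)
    # Mean values and weighting functions
    Lbp, Cbp = (L1 + L2) / 2.0, (C1p + C2p) / 2.0
    if C1p * C2p == 0:
        hbp = h1p + h2p
    elif abs(h1p - h2p) <= 180.0:
        hbp = (h1p + h2p) / 2.0
    elif h1p + h2p < 360.0:
        hbp = (h1p + h2p + 360.0) / 2.0
    else:
        hbp = (h1p + h2p - 360.0) / 2.0
    T = (1.0 - 0.17 * math.cos(math.radians(hbp - 30.0))
             + 0.24 * math.cos(math.radians(2.0 * hbp))
             + 0.32 * math.cos(math.radians(3.0 * hbp + 6.0))
             - 0.20 * math.cos(math.radians(4.0 * hbp - 63.0)))
    d_theta = 30.0 * math.exp(-(((hbp - 275.0) / 25.0) ** 2))
    RC = 2.0 * math.sqrt(Cbp**7 / (Cbp**7 + 25.0**7))
    SL = 1.0 + 0.015 * (Lbp - 50.0)**2 / math.sqrt(20.0 + (Lbp - 50.0)**2)
    SC = 1.0 + 0.045 * Cbp
    SH = 1.0 + 0.015 * Cbp * T
    RT = -math.sin(math.radians(2.0 * d_theta)) * RC
    return math.sqrt((dLp / SL)**2 + (dCp / SC)**2 + (dHp / SH)**2
                     + RT * (dCp / SC) * (dHp / SH))

# First test pair from the Sharma et al. CIEDE2000 test data set:
print(delta_e_2000((50.0, 2.6772, -79.7751), (50.0, 0.0, -82.7485)))
```

The hue-angle branches above (the wrap-around cases) are exactly where many of the broken routines on the web go wrong, which is why validating against the published pairs matters.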
And, I for one would like to see Lab mode as an ACR output option.
Cheers, Rags :-)
ProPhoto is as accurate as you can get in ACR.
Then you can convert to whatever you'd like.
If I'm not mistaken, it would be very kind to disclose that both generic input profiles for the D65/A illuminants do NOT refer to a linearized state.
A reference to the preset Brightness 50 + Contrast 25 and the resulting colors could save us any trip to Lab.
I see that the maximum error in the luminosity column is 0.02 in the 16-bit target case, and it doesn't move from 0.00 in the 8-bit target case (0-255). The maximum error in the chromaticity column, by contrast, comes from another world; it is exaggerated. In these cases image noise doesn't exist, because the patches are uniform. It's hard for me to believe that the cause lies only in the limits of the Adobe sampling interface.
At first I suspected an error in your color distance formula (from Lab to Lab values), but the Bruce Lindbloom site and the GMB math give back your same results. Then I made this test, and I believe the error can be found in your treatment of Lab space. For example: what illuminant do you assume for Lab space? D50 or another? ;-)
>And there is some non-uniformity (errors) in the Adobe transforms between Lab and RGB. This has been discussed elsewhere. If you convert Lab to RGB to Lab or RGB to Lab to RGB you will see discrepancies that are beyond mere rounding errors. This is the primary reason that I use Adobe services for conversions rather than the CIE suggested algorithms. My assumption is that ACR is using these same services.
A while ago I tested that one conversion between color modes on ColorChecker images costs about one deltaE 2000 point, and I agree with you, it is a little too much. However, you can verify it easily and quickly using a 6x4 pixel TIFF image and ColorLab.
>Also, there are some incorrect routines on the web for DE2000 calculations. The ones I use have been validated against CIE test data. Just a warning. I do try to update my scripts when I see something suspicious
Why do you consider the gray patches to be pure neutrals, introducing errors into the pipeline this way that are not so small?
>>> What illuminant do you assume for Lab space? D50 or another?
I assume nothing, since I use Adobe services for Lab/RGB transforms. The Gretag Lab values are from a D50 light source, according to Gretag. The others are identified by the source, such as Profile Maker. Just as a point of discussion: if you adhere to the CIE math for XYZ/Lab transforms, the Lab values will always be at Illuminant E, the equal-energy standard. Mathematically, XYZ will be at the true light source, Lab will be at E, and RGB is defined by the RGB ICC profile. I've seen lots of code that ignores both the CIE and the ISO.
>>> Why do you consider the gray patches to be pure neutrals, introducing errors into the pipeline this way that are not so small?
I'm not sure I completely understand your question, but... I chose to set the neutral target values to a*=b*=0 because it seems a logical and rational target, for calibration at least. This would be a very simple code change (an option?). The color patches are unaffected. I did a lot of testing both ways and concluded that any differences were an order of magnitude smaller than the image noise and other inherent errors. As I said, this would be easy for me to make an option.
The original Gretag CC target values were given in XYZ at Illuminant E. Many folks were converting these to Lab assuming D50 or D65 and publishing the results. The current values are being published in Lab at D50 and sRGB at D65. Several others have noted that this transform results in out-of-gamut colors with several tools, including Adobe's. According to Gretag, it shouldn't. And any chromaticity in the neutral tones is absolutely insignificant. But the rule of small numbers does exist.
Adobe ACR has its own unique way of addressing the white point. And some posts indicate that ACR starts with ppRGB, skipping both the XYZ and Lab transforms, initially at least. So I have no idea what it really uses. There is a TIFF metadata tag for the white point, but a lot of software ignores or dismisses it, even when present.
I had a recent challenge with a Nikon image where the white point seemed to be significantly different in the highlights, mid tones, and shadows. It could not be corrected in ACR. I tried a tip from Eddie Tapp to correct it in Photoshop, and happily it worked. I also tried opening the image with Nikon software and found that no adjustments were needed at all. WB was consistent across the tone range. Happily, I have only seen this in two of thousands of images.
Cheers, Rags :-)
>Mathematically, XYZ will be at the true light source, Lab will be at E, and RGB is defined by the RGB ICC profile. I've seen lots of code that ignores both the CIE and the ISO.
I report, in summary, two answers from an Italian color scientist (Prof. Boscarol) on his forum.
1. Lab is not absolute, and there are infinitely many Labs. Every Lab is based on its white point (illuminant), which identifies the values 100, 0, 0. In practice, nearly all applications, systems, and ICC profiles, and everything that revolves around computer graphics and the graphic arts, are now based on Lab D50. Photoshop has used Lab D50 from the beginning; only Linocolor used Lab D65 for a while.
2. Illuminant E is a theoretical reference illuminant whose spectrum is uniformly one at every wavelength. It has never been used in color conversions from/to Lab/RGB, nor in color distance formulas, nor in any way that I know of. It remains only a theoretical reference.
>I had a recent challenge with a Nikon image where the white point seemed to be significantly different in the highlights, mid tones, and shadows. It could not be corrected in ACR.
If you look at my Nef_2663 image and at the gray-row analysis, you can see this white balance behaviour. In this case your single-patch white balance and the treatment of the gray row as pure neutrals are inappropriate. The single-patch white balance fixes the chosen patch perfectly and lets the error grow as you move away from it. In my case you can see a consistent color cast toward the white patch. It seems that the other scripts spread the error over all the patches, and the color cast becomes less visible. The shadow tint intervention tries to mitigate this behaviour, but it is not a solution to the problem. The Canon 350D profiles don't show this problem.
The ICC uses LAB D50, as does Photoshop.
And no: before Photoshop 5, Photoshop used an indeterminate white point for LAB (i.e., it always mapped the image's 255/255/255 to LAB white).
The whitepoint was only locked down in Photoshop 5 when ICC profile support was added.
>>> The ICC uses LAB D50, as does Photoshop.
>>> (Prof. Boscarol) on his forum:
>>> 1. Lab is not absolute, and there are infinitely many Labs.
>>> 2. Illuminant E is a theoretical reference illuminant
I must disagree with the implied conclusions in both statements, but with a friendly, not confrontational tone. And, Adobe and other software may be assuming Lab is D50. I cannot and would not dispute that.
What I said is that if you follow CIE math, the color values in Lab mode will be at Illuminant E. I stand by that conclusion. But there is little in the way of enforcement when it comes to standards.
Let's start with the CIE color pipeline, from measurement to numbers. This is the illuminant times the subject (measured) times the standard observer. It requires matrix arithmetic because the operations have to span the visible spectrum. So it is a little more involved than simple multiplication, but that is not conceptually important at this point.
The light source may be defined by a standard such as D50 or D65 or it can be provided in a custom set of tables (from measurements). The subject (raw image) values are adjusted for the measurement instrument to represent illuminant E (equal energy). This would be the spectral response of the color filters and such in an image sensor. The standard observer values are provided by the CIE, again at illuminant E. The resulting XYZ values are at the white point of the illuminant used for the source light.
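The illuminant x subject x observer computation described above can be sketched numerically. The spectral tables below are toy placeholder values at five wavelengths, NOT real CIE data; the point is only the structure of the weighted summation and the normalization constant that puts a perfect reflector at Y = 100.

```python
# Toy spectral data at 5 wavelengths (nm). Values are illustrative only,
# NOT the real CIE observer or illuminant tables.
wavelengths = [450, 500, 550, 600, 650]
illuminant  = [0.90, 1.00, 1.10, 1.00, 0.80]   # relative spectral power S
xbar        = [0.33, 0.05, 0.43, 1.06, 0.28]   # toy observer x-bar
ybar        = [0.04, 0.32, 0.99, 0.63, 0.11]   # toy observer y-bar
zbar        = [1.77, 0.27, 0.01, 0.00, 0.00]   # toy observer z-bar

def spectrum_to_xyz(reflectance):
    """X = k * sum(S * R * xbar), etc., summed over the spectrum.

    k is chosen so that a perfect reflector (R = 1 everywhere) has Y = 100,
    i.e. the result is expressed relative to the illuminant's own white.
    """
    k = 100.0 / sum(s * y for s, y in zip(illuminant, ybar))
    X = k * sum(s * r * x for s, r, x in zip(illuminant, reflectance, xbar))
    Y = k * sum(s * r * y for s, r, y in zip(illuminant, reflectance, ybar))
    Z = k * sum(s * r * z for s, r, z in zip(illuminant, reflectance, zbar))
    return X, Y, Z

# A perfect (ideal white) reflector lands at the illuminant's white point:
print(spectrum_to_xyz([1.0] * 5))
```

The resulting XYZ values carry the white point of whatever illuminant table was plugged in, which is exactly the dependency being discussed.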
Before moving on, the image sensor spectral response tables are too often not available. Thus assumptions are made that make calibration so bloody difficult. Enough said.
These XYZ values are then transformed to Lab values. The math in this step effectively removes the original light source from the XYZ values, resulting in illuminant E. This is how it is described in the literature, Berns, Hunt, and Wyszecki. If one adheres to the rules, Lab will always be illuminant E. XYZ values without a white point definition are as meaningless as RGB values without a profile definition.
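The "removes the original light source" step can be seen directly in the standard XYZ to Lab formula: each channel is first divided by the corresponding white point value, so the illuminant's own white always lands at L* = 100, a* = b* = 0, whichever illuminant is used. A minimal sketch (the D50/D65 white point XYZ values are the commonly published 2-degree observer figures):

```python
def xyz_to_lab(xyz, white_xyz):
    """CIE XYZ -> L*a*b* relative to the given white point (Xn, Yn, Zn)."""
    def f(t):
        # Cube root above the CIE threshold, linear segment below it
        eps = (6.0 / 29.0) ** 3
        if t > eps:
            return t ** (1.0 / 3.0)
        return t / (3.0 * (6.0 / 29.0) ** 2) + 4.0 / 29.0

    X, Y, Z = xyz
    Xn, Yn, Zn = white_xyz
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return L, a, b

D50 = (96.422, 100.0, 82.521)
D65 = (95.047, 100.0, 108.883)

# Whichever illuminant you pick, its own white maps to (100, 0, 0):
print(xyz_to_lab(D50, D50))
print(xyz_to_lab(D65, D65))
```

This is also why Lab values handed around without their white point are ambiguous: the normalization hides which illuminant was divided out.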
But there is no enforcement body. I have seen lots of code that does not adjust for the XYZ white point at all. In that case, there would be an infinite set of possible Lab white points.
If you assume Lab is always D50, D65, or whatever floats your boat, the transforms from RGB to Lab and back to RGB would not be compromised. The ICC does define a white point in each ICC RGB profile. So if the input colors are correct, the output colors will be correct; it isn't too important what white point is used for the intermediate step.
But if you take a file in Lab mode from some other source, the white point would be very important. If I give a Lab file at E to Fred and Fred assumes it is D65, the color conversions will be wrong. If Fred creates one at D65 and hands it to Adobe, chaos reigns. The ICC does not have a profile for Lab mode that I am aware of, so I don't know that the ICC attempts to trump the CIE as suggested. If someone knows of a verifiable reference for this, I would love to hear of it. The TIFF metadata does have a tag for the white point, but I have never seen it used in a Lab mode file, including Adobe's.
Illuminant E is no more or less theoretical than any other Standard Light Source. But it is at the core of all color matching algorithms and the basis of the standard observer target values. It is most often simply referred to as the equal energy light source so it might not ring a bell like D50 or D65.
The conversions between Lab and LCh and the algorithms for Delta E 2000 color differences are all implicitly dependent on Lab values at illuminant E.
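For reference, the Lab to LCh conversion itself contains no white point term at all; chroma and hue are just the polar form of the a/b plane, so whatever illuminant the Lab numbers assume simply carries through unchanged. A minimal sketch:

```python
import math

def lab_to_lch(L, a, b):
    """L*a*b* -> L*C*h: chroma is the radius in the a/b plane,
    hue is the a/b angle in degrees (0-360)."""
    C = math.hypot(a, b)
    h = math.degrees(math.atan2(b, a)) % 360.0
    return L, C, h

print(lab_to_lch(50.0, 3.0, 4.0))  # chroma 5.0, hue about 53.13 degrees
```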
I rest my case.
Cheers, Rags :-)
If you use CIEL*a*b* correctly - you have to choose a whitepoint.
Yes, once you have converted into L*a*b*, the white point is usually neglected (only inside L*a*b*) because it has been normalized out. But that doesn't make it illuminant E.
You also typically neglect the RGB and XYZ whitepoints until you need to convert them to something else...
L*a*b* is not absolute, and is always relative to the whitepoint.
If I give you L*a*b* values without a whitepoint, will you be able to determine exactly what the measured surface looks like? No. You have to include the whitepoint, THEN you can calculate the appearance.
Another way to think of it is: L*a*b* (and more closely L*c*h*) are to XYZ as HSV is to RGB. That is, both are relative to the other colorspace and form some ordering of value/lightness, hue and saturation. But both cannot specify absolute color without a whitepoint, and in the case of HSV it also needs the RGB primaries and transfer function.
So, you still have to specify the whitepoint for the LAB values, same as for RGB and XYZ.
As for illuminant E: D50, D65, etc. were derived from measurements of real skies and light sources. The A, B, C, and F* standards were derived from measurements of real light sources (as best I recall).
But E was purely theoretical, and bloody difficult to produce in the real world. It was never measured, just created as an abstract notion that simplifies many color calculations.
prof. Boscarol was quite correct in both statements.
>What I said is that if you follow CIE math, the color values in Lab mode will be at Illuminant E.
It could be possible.
>If I give a Lab file to Fred in E and Fred assumes it is D65, color conversions will be wrong.
Let me complete the answer from Prof. Boscarol that I truncated because I hadn't given it the right importance: "...only Linocolor used Lab D65 for a while, and for this reason, when you opened an image scanned with Linocolor in Photoshop, the image was always far too yellowish."
>The ICC does not have a profile for Lab mode that I am aware of. So I don't know that the ICC attempts to trump the CIE as suggested. If someone knows of a verifiable reference for this, I would love to hear of it.
Both Lab color space profiles, by X-Rite and by GretagMacbeth, are on your FTP. As you can see with an ICC profile tool like ColorThink, or with the freeware "ICC Profile Inspector" (http://www.color.org/profileinspector.html), not only is the PCS white point D50 (it is always D50, in all cases), but even the media white point tag (wtpt) is D50, and not E, in both cases.
At http://www.boscarol.com/pages/cs/620-lab.html, at the bottom, you can find a list of applications with their Lab variant, adaptation, and observer degree. Maybe there is an English version.
>Illuminant E is no more or less theoretical than any other Standard Light Source.
Yes; I would just say that it is not used in practice.
Chris and Marco,
Thanks for the feedback.
And thanks for the color.org link and profiles, Marco. I was aware of the tool but was unaware of any Lab profile to run it against.
I have some other activities that are consuming my time right now. But I'll watch this thread and get back when appropriate. There might be some controversy here, but the discussion is interesting. I think we can all agree that if different tools are using different assumptions, we will get different results. That is all.
Cheers, Rags :-)
>If I give you L*a*b* values without a whitepoint, will you be able to determine exactly what the measured surface looks like? No. You have to include the whitepoint, THEN you can calculate the appearance.
I am not a color scientist, but I would like to bring up a point for discussion. If one is interested in describing what is perceived by the eye, rather than the color of the reflecting surface, one shouldn't need a white point.
What is needed in this case is how each of the receptors for short (S), middle (M), and long (L) wavelengths (the blue, green, and red receptors) is stimulated. In this case the CIE XYZ tristimulus values should be sufficient. Can these XYZ values not also be expressed in terms of CIE L*a*b*?
Of course, perception varies according to color adaptation of the eye.
> Both Lab color space profiles, by X-Rite and by GretagMacbeth, are on your FTP.
Can you share these profiles Marco, or add a link where we can download them from?
Bill - yes, you need to specify the white point in both cases.
Human white adaptation is not perfect and is affected by far more than local receptor stimuli.
And to really specify the appearance, you have to include the adapted white, surround color, absolute luminance, etc. See the CIECAM02 spec for details, or Mark Fairchild's book "Color Appearance Models, Second Edition". The book goes into a lot of detail on why these other factors are necessary.
>And to really specify the appearance, you have to include the adapted white, surround color, absolute luminance, etc. See the CIECAM02 spec for details, or Mark Fairchild's book "Color Appearance Models, Second Edition". The book goes into a lot of detail on why these other factors are necessary.
Thanks, Chris. I did a Google search and bookmarked the CIECAM02 calculator. I'm sure that Fairchild's book would be beyond my level of expertise, and I will take your word on the matter.
BTW, if you want to specify the color appearance of an object, what is the best way to do so without supplying a Pantone patch? A spectral power distribution, or merely XYZ values with the white point?
Appearance is the hard one. (CIECAM02)
Color is the easier one. That just needs some calibrated colorspace coordinates, and a whitepoint (which can be part of the colorspace spec.).
Bill Janes wrote:
>> If one is interested in describing what is perceived by the eye, rather than the color of the reflecting surface, one shouldn't need a white point. What is needed in this case is how each of the receptors for short (S), middle (M), and long (L) wavelengths (the blue, green, and red receptors) is stimulated. In this case the CIE XYZ tristimulus values should be sufficient. Can these XYZ values not also be expressed in terms of CIE L*a*b*? <<
That makes sense to me. Thinking about the original color matching experiment, which laid the groundwork for CIE XYZ, it doesn't seem to include any particular consideration or definition of white: http://www.fho-emden.de/~hoffmann/ciexyz29082000.pdf
Now, given that in this scheme any color needs to be defined by 3 coordinates, and that any matrix space as a subset of CIE XYZ carries 3 x 3 = 9 degrees of freedom to set the corners, it follows why the Calibrate tab offers just 3 x 2 sliders for Hue and Saturation, respectively: 3 of the 9 degrees of freedom are already reserved for the definition of white via CT, Tint, and Exposure. In other words, the white point isn't introduced on top of an RGB space; it's part of shaping its 3 by 3 matrix, thus reducing the number of further variables. What ACR's six Hue/Saturation sliders in the Calibrate tab are doing is moving the 2D CIE xy coordinates of the triangle corners. Note the terms: 2D, Hue, and Saturation.
This inevitably leads to a question about the sense of a 3D Lab / dE2000-based analysis and calibration procedure, respectively. But if everyone likes to see it complex, including this latest turn of mixing things with a Preferred (output-referred) rendition...
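The point that the white point is "part of shaping the 3 by 3 matrix" can be sketched as follows: the xy chromaticity of each primary fixes the direction of one matrix column (2 degrees of freedom each), and the three remaining scale factors are solved so that RGB = (1, 1, 1) maps exactly to the chosen white. This is the textbook construction, not ACR's internal code; the sRGB primaries and D65 white are used below only as a familiar check.

```python
def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_white):
    """Build the 3x3 RGB->XYZ matrix from primary chromaticities plus white.

    Each primary's xy fixes a column direction; the per-column scales are
    then solved so that (1,1,1) in RGB lands on the white point. The white
    point thus consumes the last 3 of the matrix's 9 degrees of freedom.
    """
    def xy_to_xyz(x, y, Y=1.0):
        return (x * Y / y, Y, (1.0 - x - y) * Y / y)

    cols = [xy_to_xyz(*p) for p in (xy_r, xy_g, xy_b)]
    W = xy_to_xyz(*xy_white)
    P = [[cols[c][r] for c in range(3)] for r in range(3)]  # columns -> rows

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Solve P * S = W for the scales S via Cramer's rule
    d = det3(P)
    S = []
    for i in range(3):
        Pi = [row[:] for row in P]
        for r in range(3):
            Pi[r][i] = W[r]
        S.append(det3(Pi) / d)
    return [[P[r][c] * S[c] for c in range(3)] for r in range(3)]

# sRGB primaries with a D65 white reproduce the familiar sRGB matrix
# (top row approximately 0.4124, 0.3576, 0.1805):
M = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06),
                      (0.3127, 0.3290))
for row in M:
    print(row)
```

Change only the white point argument and every matrix entry changes, even though the primaries stay put; that is the sense in which white is baked into the matrix rather than added on top.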
>Can you share these profiles Marco, or add a link where we can download them from?
>If I'm not mistaken, it would be very kind to disclose that both generic input profiles for the D65/A illuminants do NOT refer to a linearized state.
>A reference to the preset Brightness 50 + Contrast 25 and the resulting colors could save us any trip to Lab.
I haven't understood. What do you mean?
Tone curves leave their fingerprint on color saturation. Brightening S-curves, such as the one resulting from the preset Brightness 50 & Contrast 25, are certainly the tool of choice to compensate for dynamic range compression; however, as a side effect of the RGB math, there's a broad increase in color saturation for the majority of colors, with the exception of the highlights. For example, it makes about +5% HSB saturation for the red, green, or blue patch of the well-known ColorChecker (compared to a linearized state with Brightness & Contrast at zero).
If you profile a linearized state, without a tone curve so to speak, the described effect typically results in a high-saturation look later on in practice. Alternatively, you can leave a somewhat reasonable tone curve in place during ACR calibration while focusing on what the Calibrate tab sliders suggest you do: get the *Hue & Saturation* right (at least on an average basis). My 2ct is that both generic profiles for the D65/A illuminants were built accordingly, harmonized with said preset tone curve rather than referring to the colors of a linearized state. Though I got no response or confirmation when I asked about this.
Please check Simon Tindemans' website again for the reasons why he offers his hue/saturation-preserving Curvetools, or, alternatively, the option useXMP, which is probably more suited to the common user (including myself). The tone curve in general, and brightness differences in particular, are ignored: "Use this option to apply a pleasing tone curve directly in ACR, at the cost of a little color accuracy."
Btw, the latter is basically a scripted version of a nice calibration procedure which, for some reason, I also like to do by hand; it takes 10 minutes or so, simply based on the HSB hue & saturation readings of the primary patches and some key memory colors. I guess this disclosure is prone to cause another discussion on the merits of HSB versus good old overrated Lab :).
Further, to be clear, this shall not imply that I would prefer a more accurate low-saturation look in practice. The Saturation slider in ACR is by far the better choice for adding saturation than the described side effect of the tone curve and RGB math. This Saturation slider silently operates in saturation blend mode, which is based on a proprietary HSL separation including a perceptual G>R>B weighting of the Luminosity axis; it ends up quite close to LCh saturation, which approximately preserves the perceived Lightness per color.
Marco, for my purposes I'm fortunately through with this subject, concluding that it's neither complicated nor does it make sense to seek ultimate precision. On the other hand, I'd have some nice feature requests to facilitate the final step from a somewhat accurate state + tone curve to a preferred color rendition. Though I have yet to convince Adobe :).
Best regards, Peter
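The saturation side effect described in the post above can be demonstrated without ACR: apply any S-shaped curve per RGB channel and watch HSB saturation rise. The smoothstep curve below is just a generic stand-in for a Brightness/Contrast-style curve, not Adobe's actual tone curve.

```python
import colorsys

def smoothstep(x):
    """A simple S-shaped tone curve on [0, 1].

    Stand-in for a contrast-boosting curve; NOT Adobe's actual math.
    """
    return x * x * (3.0 - 2.0 * x)

def hsb_saturation(r, g, b):
    """HSB/HSV saturation: (max - min) / max."""
    return colorsys.rgb_to_hsv(r, g, b)[1]

# A mid-tone reddish patch, with the curve applied per RGB channel:
before = (0.7, 0.3, 0.3)
after = tuple(smoothstep(c) for c in before)

print(hsb_saturation(*before))  # about 0.571
print(hsb_saturation(*after))   # about 0.724: the curve alone raised saturation
```

The curve steepens the spread between the channel values around mid-tones, and since HSB saturation is driven by that spread, saturation goes up even though no "saturation" control was touched.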
The original color matching experiments were done with emissive sources: narrow band lights. And they had either black, or carefully controlled surround (which includes the white point).
Tone curves causing saturation: ah, the Velvia look....
I'm at the beginning stages of shooting digital, at least seriously, and I find many images too saturated rather than not enough.
I have been scanning now for several years, shooting Kodak Portra negs. They come through the scanning process without looking oversaturated. The Nikon D80 comes on much stronger. I find myself fighting the overly saturated colors.
The relationship between tone curves and saturation apply to scanned images as well.
Maybe I go back to B&W!
What software are you using to process your RAW files?
(the cameras by default crank up the saturation on JPEG Files)
Check out the "Capture NX or...?" thread on the Photography forum. I left a post there that addresses your comments in #28 here. It's a very short thread as of now.
>Tone curves leave their fingerprint on color saturation. Brightening S-curves...
What do you think about my test "Results validation moving Adjust sliders and Tonal curve"? It is based not on subjective eyes but on objective ColorLab. I have already read this critique, but after this test I am tempted to believe that this saturation problem is only fantasy. :-)
>Please check Simon Tindemans website again regarding the reasons why he offers...
I believe that the calibration process is intended as a search for an accurate "general purpose profile". When you need to, it is possible to tweak it for a single image, but I believe you will feel the need very few times. For this purpose all the color patches have to be below the lowest deltaE 2000, and none must skew out. With this aim I used the scripts, and in the article you can find the way I obtained the best result.
LinearPreset can be interesting because it linearizes the tone curve applied by Canon (and consequently by Adobe) to its profile. To understand, see the Dcraw results in the article and think about the Canon highlight recovery capabilities. However, with either the LinearPreset or the CurveFit calibration method, the results are very close.
I have tried a variety of programs to process RAW: ACR, Bibble, Capture NX, Lightroom; all have something to offer. I favor those programs offering highlight recovery. It may be that Shadow/Highlight in PS will be adequate, but one can spend bucks like crazy on this stuff!
As most who know me know, I favor simple, powerful approaches to processing, those that turn control over to the operator. For instance, with Epson scanners, their native program cannot be beat. And I don't get excessive saturation with their program. In fact, I cannot recall when I last had to use the scanner's saturation controls. But then, I have the scanner calibrated to Kodak Portra. I scan the neg, invoke the profile, and voila! I am there. Maybe an occasional tweak in Curves or Levels. Sometimes when the light is bad, I use the eyedropper to set gray, but that's all.
Speaking of curves, why is the default for Lightroom and ACR not linear? That is a mistake, IMO. I have to remember but usually forget and have to go back to square one when I switch. By then, I have mostly compensated for it elsewhere (like Sat!).
But then, I am relatively new at all digital, and my contract work with Intel limits the time I can devote to mastering the camera and software. It's dark when I leave, dark when I go home! :-(
I intend to read over this thread with greater attention. I hope I haven't derailed it too much!
Because the vast majority of users (amateurs) don't like the look of linear.
But you can save the Curve adjustment as linear as your new default if you want.
Can I ask why ACR doesn't offer linear (gamma 1.0) output, forcing those who prefer this color management approach to adopt Dcraw?
Chris Cox wrote:
>> The original color matching experiments were done with emissive sources: narrow band lights. And they had either black, or carefully controlled surround (which includes the white point). <<
Well, that doesn't sound really convincing regarding if and how this environmental white point would have been considered (or was it black?). Also, the specific light sources were finally seen as irrelevant according to Grassmann's law. The claim, whether totally correct or not, was for general validity. Accordingly, a color would be sufficiently defined by its three XYZ tristimulus values, at least with regard to the human eye's response, and in an absolute colorimetric sense, not referring to the appearance of a color as part of a scene.
That said, I could imagine that it makes some sense to equip CIE XYZ / Lab with a white point (on top, so to speak) in order to facilitate RelCol conversions, in particular while using it as a profile connection space. NikonScan's Lab version includes a wtpt tag.
>> Tone curves causing saturation: ah, the Velvia look.... <<
May I take this as a basic confirmation? Though my intention was to suppress this effect by accounting for it during calibration (at least on a somewhat average basis). There are better ways to add saturation IF needed, imo.
>> I am tempted to believe that this saturation problem is only fantasy. <<
The only problem it creates is with regard to the validity of a dE2000 analysis of a linearized state, compared to practice, where most likely a sigmoidal tone curve is used to compensate for dynamic range compression. Anyway, you might wish to try this:
- open the captured ColorChecker with Camera Raw
- ProPhoto RGB, 16 bit
- Shadows 5, Brightness 50, Contrast 25 (Curve tab linear)
- click-white-balance the second gray (patch #20)
- adjust Exposure to reach RGB = 190 for this gray patch (#20)
- convert to Photoshop and measure the HSB saturation of, e.g., the Red patch
- re-open the captured target with ACR (same settings)
- now set Shadows, Brightness, and Contrast to zero
- convert to Photoshop and measure the HSB saturation of this Red patch again
- report on the difference
Again, just reconsider Simon Tindemans' website: why he offers the option useXMP, or, alternatively, his CurveTools.
>> LinearPreset can be interesting because it linearizes the tone curve applied by Canon (and consequently by Adobe) to its profile. To understand, see the Dcraw results in the article ....
??? Canon cameras do not apply any tone curve to the Raw data, afaik, unless you go for JPGs from in-camera conversion. Raw is Raw. Also, I'm not aware of any communication between the camera and ACR about which tone curve to use. IF your linear Raw data are not really linear, then your sensor doesn't respond linearly to light levels. To understand, see this comparison with DCraw data:
Peter Lange, "Exposure to the right and highlight saturation" #6, 6 Oct 2006 11:12 am
Best regards, Peter
Marco - because it's rarely needed, and you have all the bits to convert back to linear in 16 bits/channel (if you so desire). About the only people who need a single shot gamma 1.0 image are researchers (who have been asking for that feature more than a few times).
>Canon cameras do not apply any tone curve to the Raw data
Of course not.
But their software on your computer does, by default.
> About the only people who need a single shot gamma 1.0 image are researchers (who have been asking for that feature more than a few times).
I don't know that we have been asking to work with imagery in gamma=1 space; it was more about asking Photoshop to respect gamma=1 space, because so much scientific imagery begins there. Most of us use PS for presentation only, so gamma doesn't need to remain at 1. There was some concern in the beginning when PS5 assumed gamma=2.2 as a default. But since PS6, with the ability to work with any document in any space, and having been educated, there hasn't been much concern (... leastwise on my part).