Users of Compressed NEF (e.g., the D70) need to be mindful of the mathematically lossy, visually lossless ;~) algorithm used by Nikon. It discards a lot of the high-order values.
Compressed NEF for the D70 only uses 683 values of the available 4096. The distribution is as follows:
Range        Values   Percentage
0-128        129      18.89%
129-256      122      17.86%
257-512      77       11.27%
513-1024     80       11.71%
1025-2048    114      16.69%
2049-4096    161      23.57%
The 0 to 128 range is a 1 to 1 mapping. The 2048-4095 range is a 161 to 2048 mapping. The very top end values are as follows: 4026, 4041, 4055, 4070, 4085, 4095.
Each pixel value is actually an index into the lookup table that is stored in each NEF.
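The decode step can be sketched as a simple table lookup. This is not the actual NEF decoder; the toy table below is hypothetical, standing in for the 683-entry table a real D70 NEF stores (whose top entries are the values quoted above, ending in 4085 and 4095):

```python
def decode_pixels(compressed, lut):
    """Map each stored index back to a 12-bit linear value via the LUT."""
    return [lut[i] for i in compressed]

# Toy 8-entry table standing in for the 683-entry one stored in a real NEF.
toy_lut = [0, 1, 2, 4, 8, 16, 32, 64]
print(decode_pixels([0, 3, 7], toy_lut))  # [0, 4, 64]
```

The point is that the file never stores the full 12-bit value, only the index, which is why only 683 of the 4096 values can ever appear.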
Users of compressed NEF need to recognize that they do not have the same magnitude of data in their raw files that other users have.
Your comments on the compressed NEF format are most interesting and I am not certain how to interpret the significance of this compression. Since, as you say, it is visually lossless, I suppose it is not a bad thing. According to the Weber-Fechner law, our eyes can perceive only about 70 tones in the 2048-4095 range, so we don't really need all 2048 of those tones; most of them are wasted, as I mentioned in an earlier post on this forum, and 161 tones should be sufficient.
I would appreciate your insight with regard to the practical effect of this compression as well as any references for further reading.
The practical effect of the D70 lossy compression is that your test is flawed for cameras whose compression scheme is lossless. Nikon chose to use a lossy compression scheme to save file size and speed writes. For "normal" tone-curved images, i.e., JPEGs from the camera, the effect is indeed "visually lossless" since the tone mapping throws away excess levels in the highlights. But, when shooting raw and wishing to deploy an advanced tone curve to remap highlight detail, the D70 lossy compression leaves you with considerably less than the full 2048 levels of the brightest stop. That makes highlight tone mapping with the D70's compression less useful.
As for Rags' arguments, well, let's just say they are also a bit south of totally useful since he doesn't seem to understand the nature of linear exposure. I am far more inclined to listen to the likes of Thomas Knoll when it comes to the discussion of digital cameras, linear exposure and the effects of tone mapping on digital captures.
"The practical effect of the D70 lossy compression is that your test is flawed for cameras whose compression scheme is lossless"
The test is not flawed for cameras with lossless compression--it is simply not applicable to such cameras. However, according to Steve's figures there are still 161 levels in the 2049-4096 range and this is sufficient to make the point. The test also demonstrates that there is not that much headroom between Zone V and Zone IX, about 2.2 EV according to Bruce's tests for his Canon, which is very close to what I found in my previous testing with the D70. Totally blown highlights are completely lost and cannot be recovered, whereas the lower tones do contain data.
"For "normal" tone-curved images, i.e., JPEGs from the camera, the effect is indeed "visually lossless" since the tone mapping throws away excess levels in the highlights."
I wasn't aware that the compression technique for raw capture (NEF) was used for JPEGs, so this comment may be superfluous. Why would you apply NEF compression to JPEGs; doesn't JPEG have its own method of compression?
"But, when shooting raw and wishing to deploy an advanced tone curve to remap highlight detail, the D70 lossy compression leaves you with considerably less than the full 2048 levels of the brightest stop. That makes highlight tone mapping with the D70's compression less useful."
Quite true, but it would take some pretty fancy mapping to make full use of all the 2048 levels. The advantage of non compressed files may be more theoretical than practical. I would like to see some examples.
As for Rags' comments, it appears that anyone who does not agree with your assertions does not understand linear capture. Nonetheless, Rags does offer some valuable insights. Of course, Thomas Knoll is an undisputed master of digital imaging. Michael Reichmann referred to Mr. Knoll's thoughts regarding the brightest stop's information in his treatise "expose to the right", but I haven't had the pleasure of communicating with him on this subject and exploring his thoughts on exposure, blown highlights and noisy shadows.
As Jeff stated, the highlight tone mapping is much less useful with compressed NEF such as found in the D70. Too aggressive and you will lose details.
If you compiled the DCRaw program, you may be able to put a debugger on it. Stepping through the program, you can see it extract the lookup table and then index against it for each of the pixel values. You probably could also extract the values from the lookup table.
A while back in one of the forums (DPReview or Rob Galbraith) there was a discussion on NEF lossy vs. lossless and someone provided a link that had the values. Since the count matched what I had independently observed running DCRaw in debug mode, I made the dangerous leap of faith that the values were accurate. The same discussion also referenced the D100 as using only 500-some values, which I had also seen.
As a D70 owner, I will still expose to the right, just not as aggressively. To hold detail in white-on-white highlights I will spot expose +2. Otherwise, I will look for a well distributed histogram that favors the highlights. I recently compared a shot pushed too far to the right to ones where I was not as aggressive. I believe that I saw minor banding that obscured details in the highlights. It could have been other factors, though.
As a serious amateur, I enjoy the D70. I do however look forward to sometime in the future where I will have the full set of bits from my capture device and can try some of the more advanced tone mapping techniques.
After a brief search I was not able to find the thread to which you referred, but did find a lot of interesting stuff and this reference which gives me most of what I need to know:
My approach to exposure is similar to yours--expose to the right but try to avoid clipping of highlights. Since the NEF compression throws away quite a bit of highlight data, I presume highlight recovery would be less effective than with an uncompressed NEF. Nonetheless, I think most would agree that proper exposure is still key, just as with Kodachrome. Jeff wants to retain as much highlight data as possible, but he doesn't really tell us how he avoids burning out the highlights and has not demonstrated how he uses all that highlight detail to such advantage. I guess he brackets important exposures, just like us mortals.
I too am a serious amateur who would like to have a more capable camera but I'm not certain that uncompressed NEFs such as offered by the D2X would help me all that much. Encrypting the white balance is a definite step backwards. Apparently DNG achieves compression equal to the compressed NEFs without any data loss.
Yes, I bracket in either 1/3 or 2/3 increments when needed in a changing light situation where exact metering isn't possible. I also look very carefully at the resulting exposures and try to get the lightest exposure just short of highlight clipping. But much of my time is spent shooting in the studio where I can control the scene dynamic range and add fill to open shadows and exactly meter the highlights and place textured highlights accurately.
Particularly when shooting with controlled lighting (studio) it's very easy to work the highlights for exposure and then fill shadows or even decrease the shadows by using "black fill", which is basically black flock to absorb light.
After conversion I can then tone curve as I wish, although I do try to nail the shadow point in Camera Raw. A lot of the tone curves I apply must be done locally, so that requires either a dual-process Camera Raw method or local tone curves in Photoshop.
I usually try to stay out of this forum. I consider it a form of self-flagellation.
Thank you for the offline comments about my light measurements article. I have word smithed one paragraph and the summary table. I hope you find it clearer.
As to clipping, remember that your camera is showing the histogram and clipped highlights in a rendered color space, even if you shoot raw. These indicators are useful as a rough guide. But only that. If you shoot JPG or TIFF, what the camera says is clipped really is. There is always an exception. In this case, it is Kodak JPG ERI. But since Kodak has left the theater, that seems a moot point. Clipping is also highly affected by the dynamic range of the scene itself. That is, how black is the darkest black and how white is the brightest white.
I use ACR 95% of the time, and I love it. I especially love all of the new features in CS2. I do wish that I didn't have to spend so much time calibrating my cameras to ACR. I do pay close attention to the histogram and clipping illustrated in ACR. This histogram is meaningful simply because it represents the cooked image I am trying to achieve. With proper calibration, I seldom have to adjust the basic images. A clear exception is when I am shooting soccer at night. I don't think I could ever try that with film. Constantly changing lighting and fast-paced unpredictable action.
There are a very few times that I have reverted to the Kodak or Nikon software for a particular image.
I cannot comment on Nikon's proprietary, "visually lossless" compression at all. I have been shooting my Nikon D1X in compressed raw for three years. For raw I would clearly prefer a completely lossless compression.
My humble opinion is that the expose to the right mania has caused more damage than good. It is based on a fundamental misconception of computer architectures. And, it ignores some physical properties of light and electronics. Worst of all, it has caused some to completely disregard basic photographic techniques and fundamentals. Then, the failures are blamed on technology gremlins.
Thank you for the kind comments. I'm reaching for my horrible flagellum right now.
Cheers, Rags :-)
So long, and thanks for all the fish!
Exposure is based on Zone V.
"Exposure is based on Zone V"
Not any more. . .
In regard to the histogram, when you are checking shots for clipping, do you base that decision solely on the histogram, or do you allow for the stop or so by which it overestimates clipping? Or am I misunderstanding the difference between what the histogram shows and what the flashing clipping points on the preview mean together? Do they indicate clipping identically, or do they reveal separate information?
How would you expose for a picture of a black cat on a dark background? If you place the exposure too far to the right so that the lightest tones are just short of clipping, the tones will be distorted by the expansion that takes place when the image is rendered from the raw as the gamma curve is applied (as shown by Rags' histograms) and any s-curve applied by the rendering will be applied to the wrong tones in the image. Anyone who exposes in this fashion does not understand linear capture.
In the above case, a gray card or incident light reading would get you closer to proper exposure. Since the subject is unusually dark, you would probably open up 0.5 to 1 stop as suggested by Kodak. Alternatively, you could use the zone system approach and place the cat at Zone III or so. As explained by Adams, once you PLACE one zone, the others FALL into position. Actually, you can use any zone for placement and the others will then fall into place.
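The place-and-fall mechanics Adams describes reduce to simple exposure arithmetic. A sketch, with the zone numbers treated as stops:

```python
# A reflected-light meter renders whatever it reads as Zone V. Placing
# that reading on another zone is just an offset in stops; every other
# tone then "falls" by the same offset.
def placement_offset(target_zone, metered_zone=5):
    return target_zone - metered_zone  # stops of exposure compensation

print(placement_offset(3))  # -2: close down two stops to place the black cat on Zone III
```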
In what Adams refers to as a short scale subject (where the dynamic range of the subject occupies only a portion of the linear portion of the characteristic curve), there is some latitude in placement. For negative film Adams placed Zone III at Zone III (rather than Zone IV) so as to have less grain and higher acutance in the negative.
With positive film or digital most of us try to place Zone IX at Zone IX so as to make best use of these media. Of course, with film some photographers slightly underexposed Kodachrome and others exposed their Velvia at ISO 40 rather than the nominal value of 50 (0.3 EV overexposure) in order to make best use of the characteristics of the medium. Once the characteristics of the medium are taken into account, I don't really see that much difference in exposing for film or digital. With digital we could place Zone V at Zone V, but any misjudgment would risk clipping of the highlights. As Bruce Fraser has pointed out, current tools are inadequate for really accurate placement and we are left in somewhat of a quandary.
Your comment about the camera histogram being based on a rendered image is key, since the rendering does more than simply apply a gamma curve--it reduces the dynamic range of the scene to a lower value that can be realized in a print or on a computer screen and maps the tones so as to achieve a pleasing rather than literal interpretation of the linear tones. For interested readers, Kodak discusses this matter briefly in their white paper for ROMM_RGB (ProPhoto RGB).
If we really want to know what is going on with the raw image, it is best to examine the unrendered image. That is why I used DCRaw and a 16 bit linear space for my tests.
Well, if you want a black cat to look black on a dark background and see any fur on the cat, you would indeed need to "over expose" to get any detail in the fur. Just because the cat is "black" doesn't mean that's what you want it to look like. Ever shot fur? I have. You have to pump a lot of light into fur in order to get detail whether you are shooting film or digital.
Look, you can TRY to dispute linear capture all you want and pretend it's just like film, but all that's gonna do is lead you down the path of always fighting the reality of the technology. Rags' lengthy theory notwithstanding, digital sensors are photon counters. . .but unlike film, they respond in a linear fashion. If you want that black cat to have any detail and you expose it based upon a zone 5 reading, you will have far fewer levels than if you upped the exposure. Heck, the same would be true for film. . .
As for tone curves, well, yes, you can use all manner of elaborate tone curves to absolutely control the tone rendering of a scene; that's what Camera Raw & Photoshop's tools are for, making the scene look the way you want it to. I actually use levels and curves far less than I use layers set to screen or multiply to adjust the tonality of images. Far more precise in terms of locality and easier to see to adjust as well.
As for the histogram, if you are talking about the camera histogram, forget it. At least on Canons, they are based upon the luminosity of the sRGB camera conversion. The highlight warnings are a full stop too conservative and if you only use that histogram, you almost always lean toward underexposure. Far more useful and accurate is Camera Raw's histogram, which is based upon the color space you are converting to and can show you clipping either in the histogram or using the video LUT animation of clip points.
Jeff Schewe wrote:
"Well, if you want a black cat to look black on a dark background and see any fur on the cat, you would indeed need to "over expose" to get any detail in the fur. Just because the cat is "black" doesn't mean that's what you want it to look like. Ever shot fur? I have. You have to pump a lot of light into fur in order to get detail whether you are shooting film or digital."
That's not answering the question, Jeff. Are you exposing the cat at Zone IX to get the maximum number of tones? I hope not. You totally ignored my comments on how tone values can be distorted by improper placement prior to rendering. Do you dispute that this can occur?
"Look, you can TRY to dispute linear capture all you want and pretend it's just like film, but all that's gonna do is lead you down the path of always fighting the reality of the technology. Rags' lengthy theory notwithstanding, digital sensors are photon counters. . .but unlike film, they respond in a linear fashion."
No one is disputing the linear nature of linear capture. Heck, if you had read my post you would have seen I went to the trouble of getting a 16 bit linear capture for analysis. Film is also a photon counter, but analog with a log response, giving more values in the lower tones where they are lacking with digital. Fortunately, with modern sensors and techniques, this limitation can be overcome and I fully embrace the new medium and am not fighting it. The eye exhibits a log response and that is the end product of photography with either digital or film.
"As for the histogram, if you are talking about the camera histogram, forget it. At least on Canons, they are based upon the luminosity of the sRGB camera conversion. The highlight warnings are a full stop too conservative and if you only use that histogram, you almost always lean toward underexposure. Far more useful and accurate is Camera Raw's histogram, which is based upon the color space you are converting to and can show you clipping either in the histogram or using the video LUT animation of clip points."
With regard to video LUT animation, that was removed from Photoshop in Ver 6 and even before that was available only on the Mac. What version of PS are you using?
The luminosity histograms on cameras have been discussed previously. As I understand it, the newer professional Canons and Nikons show individual histograms of each channel and might be more useful. Perhaps someone can comment. The pertinent histogram under discussion here is the composite RGB histogram of the dark skinned person in Rags' treatise. If you want to contribute to the discussion in a useful way, you could comment on the standard deviation of the tones with underexposure, normal exposure, and over exposure and how this affects tone rendering.
"With regard to video LUT animation, that was removed from Photoshop in Ver 6 and even before that was available only on the Mac. What version of PS are you using?"
Camera Raw 1, 2 and now 3 have the ability to display the clipping point of both highlights and shadows, by channel.
In CR 1 & 2, the clipping was called up via the Option key when dragging the shadow/exposure sliders; in that regard they acted like the old video LUT animations. In CR 3 there are check boxes for shadows and highlights without the Option key.
As for my discussion of histograms, it was in answer to Stephen Gingold, not you, Mr. Janes (it's not all about you, bud).
I believe that Jeff is referring to the clipping display in Camera Raw, obtained by Option/Alt-dragging the Exposure and Brightness sliders.
There's no way to say this particularly gently. Rags' article is a lengthy chunk of solemn nonsense. If you can swallow statements like "the gamma of light, based on the inverse square law, is 2." without shooting coffee down your nose, you need to do a good bit of basic research.
If you can accept (or even interpret as meaningful) statements like "Tonal values are simply integer numbers and the values are linear. How they are encoded in the computer is irrelevant." you need to refine your understanding of how computers are used to edit images by changing numbers.
"Brightness" is the attribute of a visual sensation according to which an area appears to emit more or less light. As such, it's a psychophysical quantity that can only be recorded by interrogating subjects who are experiencing said visual sensation. We can correlate these reported sensations with things that we can measure physically: photon count, illuminance (which is what incident meters measure), or luminance (which is what reflective meters measure), but the relationship varies at different brightness levels. If we take photon count as the input and the reported sensation of brightness as the output, the relationship can be described reasonably well by a gamma curve somewhere between 1.8 and 2.4, depending on the absolute brightness.
The Zone scale deals with lightness, which is the perception of how dark or light a tone is relative to some absolute brightness. Again, lightness is logarithmic in relation to photon count. Rags calls this "Tonal value."
Digital cameras just count photons, and record a value exactly proportional to the number of photons that impinge on the sensor. That's what we mean when we say that digital capture is linear, or has a gamma of 1.0. Digital cameras and other photon counters know zilch about lightness, which is an attribute of human sensation.
Underexposure does not retain shadow detail, nor does it make shadow detail easier to recover. It increases the number of values recorded in the shadows only because most of these values are noise. The only thing in the universe that is black is the event horizon of a black hole. If you get close enough to photograph one of these, you'll have other issues to worry about, but even though no photons are present, the camera will record some values. Those values represent the internal noise of the system, and they overwhelm weak photon counts. If you want to capture shadow detail (which by definition is not black), you'll do a better job the further up the tone scale you place it. The question is, what highlight do you want to retain while doing so?
It's totally unclear what is meant in this context by "overexposure" "normal exposure" or "underexposure." The definitions are circular.
Forget about the black holes - what's truly amazing is the empty space between science and science fiction - granted that one knows what each term really truly represents - and what basis one speaks from.
A sharp axe does not yield a better cut downing green trees.
This thread has become overheated and I would like to return to the points I made in my original post and invite your analysis so that we can all learn. As I stated, I can't follow all of Rags' arguments and will let him explain them himself, but I thought he made some good points that were worthy of discussion.
Yes, I know what Jeff was referring to the clipping display in ACR or Levels, but it now has nothing to do with the LUT and he replies with a non sequitur rather than retracting his irrelevant statement--no one stated that ACR did not show clipping levels. For some reason Jeff just gets under my skin. His attitude is condescending and cynical.
I certainly appreciate your exposition on photometry and human perception, but I presume it was prompted more by technical misuse of some of the terms by non-experts than anything actually germane to the substance of the discussion :). By now we all should know that a digital sensor is a photon counter and is linear; that point does not need to be reiterated. Of course, Jeff will reply that I may know that, but do not understand its significance. I would rather drop this point.
That said, here is what I strive for with exposure and I would appreciate your critique. For a subject that contains Zone IX values, I like the histogram in ACR to be as far to the right as possible without clipping, realizing that the camera displays may not be accurate and taking this into account. I agree with Norman Koren that one can leave a little headroom on the right, since the highlights contain an abundance of levels. However, as you point out, the effect on shadow detail and noise may be an overriding consideration. If the dynamic range of the sensor is exceeded, one must sacrifice either shadows or highlights or use HDR if possible.
I did not mean to say that underexposure facilitated recovery or preservation of shadow tones. Since there are more tones below Zone V than above it, loss of those higher tones through overexposure and clipping would be more significant than loss of an equal number of the more numerous shadow tones from underexposure.
When the image is rendered from scene to print or screen tonal values, the highlights and shadows must be rolled off to fit the dynamic range of the medium while maintaining adequate contrast in the midtones, and you lose tones in these areas anyway. I do not think it is necessary to exceed the limitations imposed by human vision as dictated by the Weber-Fechner law in the final image; in other words, we do not need 2048 discrete tones in Zone IX but rather 70 or so. According to Norman Koren, fewer levels are needed in the shadow zones, due to visual interference (mostly flare light) from the light areas.
In your future books I hope you will cover scene and output spaces and scene to output rendering in more detail and possibly some material on tone perception.
With a low key subject with no higher tone values, I would not expect the histogram to extend all the way to the right, but would want it to be representative of my visualization of the picture, making sure that the shadows are well exposed.
In summary, I would regard the exposure as proper when I obtain the intended tonal values without needing to use the exposure control of ACR. I certainly do not like it when I need to use positive exposure compensation. As an aside, Nikon digital cameras in high contrast situations often underexpose, supposedly to avoid blowing the highlights--this can be quite annoying. I wish you would talk to them. :)
A digital sensor is simply a photon counter, as explained. And raw values represent linear voltages, as explained. And perception is not linear, as explained. However, image tones and binary encoding have nothing at all to do with each other. This is being denied by many respected photographers and unfortunately blindly accepted by too many others.
But digital sensors are subject to the same reciprocity (and other) laws of physics as film. This has been debated before, but in the end, these laws apply to anything related to frequencies and wavelengths, including audio. Thus, they can certainly exhibit a characteristic tone curve as in film. It has been proven in the Nikonians forums with images analyzed from users. Nikon has never talked to me, but they offered the same advice to the same users. To this, we add the optical CFA, color filters. Hence my belief is that we should be paying attention to spectral data as provided historically with film. Adobe is free to ignore this, but that does not make me ignorant.
Do not for even a second confuse this with electronic (digital) noise. Both are real, but very different issues. In fact, it is even possible to experience optical noise. We call the most common form of this lens flare.
Then, when it comes to lossless or lossy compression, LUTs (look up tables), tone curves (custom or otherwise), various perceptual color spaces, automatic white balance (the non-scientific kind) and such, these are all fair game for friendly debate. I include in this various techniques for sensor interpolation and demosaicing. These are emerging and competitive technologies. We can judge the merits on the results.
Some folks want reproductive accuracy. Some simply want visually pleasant results. Some want artistic effects. That's the human condition.
We always wanted to have a private home development lab. Now, we are one. Don't expect Adobe or anyone else to be a panacea for your every whim. Make the best of the available technology. Focus on photography and having fun. Throughout the history of mankind, perfection has been a goal. Never a lasting accomplishment.
I love my Adobe products. I hate the defects. But, most of all I resent the way Adobe treats me, a customer.
If you really think that your camera exposure metering is off, you should spend some time attempting ACR calibration. If you think Adobe and Nikon are talking, think again. If I think I should have responded, I don't.
Cheers, Rags :-)
Exposure is still based on Zone V.
"But, most of all I resent the way Adobe treats me, a customer."
Uh, you do know this is a User to User forum, right? What terrible thing has Adobe done to you?
"If you really think that your camera exposure metering is off, you should spend some time attempting ACR calibration."
Uh, maybe you mean calibrating your metering?
"If you think Adobe and Nikon are talking, think again."
And you know this how?
That is pure and un-useful speculation bud. . .you are not doing the industry any favors. . .
LOL! What he said.
Seriously, this is all getting a little too doctrinaire and, if you don't mind my saying, pedantic. I for one welcome a little rocking of the boat.
I happen to concur with the conventional wisdom that the top 50% of the brightness range of a digital image is devoted to the brightest f-stop of the image. So what? It doesn't mean we need to suddenly concern ourselves with the highlights, any more than we did in the old "non-linear" days.
The highlights are not suddenly important to our images because linear encoding devotes more bits to them. This is a bit like the old joke of looking in the wrong place because the light is better.
The image is the thing. If the image is low key, as with the aforementioned black cat, then it's the photographer's duty to walk away from all those data bits.
The same logic applies if the image is high key: even when most of the interest and texture is in the top three or four stops, there is still no reason, necessarily, to overly concern ourselves with the brightest 50% of the pixels.
And yes, as a matter of fact, the most interesting pixels, by and large, are right about where they always were - centered on zone V.
These conversations will be more interesting if diverging points of view are not simply tolerated, but engaged on their own merits. So I say go for it Rags!
OK. Back to the beginning.
Yes, the camera devotes half of its bits to the brightest f-stop. This is not in any way a good thing; it's just how things are.
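The halving continues down the scale, which is easy to tabulate for a 12-bit linear capture:

```python
# Levels available in each successive stop down from clipping on a
# 12-bit linear scale: the brightest stop gets half the full range,
# the next stop half of what remains, and so on.
def levels_in_stop(stops_below_clipping, full_scale=4096):
    top = full_scale >> stops_below_clipping  # upper bound of this stop
    return top - (top >> 1)                   # half the remaining range

for s in range(6):
    print(f"stop {s + 1} below clipping: {levels_in_stop(s)} levels")
# stop 1: 2048 levels, stop 2: 1024, ... stop 6: 64
```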
To get the best image, you need to make use of all the bits. You've paid dearly for a camera that captures 12 bits rather than 8. If you want to avoid noise and posterization in the shadows, you want to capture all 12 bits. For a low-key subject, you have two choices:
1.) Make a capture that places Zone V at around 50% lightness in the camera's encoding of the tone scale; that's around level 47 on a 0-255 scale.
2.) Make a capture that is as hot as possible without clipping highlights, then use tone-mapping to darken the image.
In my experience, 2 works MUCH better than 1. It MAY involve negative exposure compensation in the raw converter, but the midtone control is Brightness, not Exposure, and it's the control that is most effective for placing the midtone when you tone-map from linear to gamma-encoded space. The tone mapping is not a simple gamma correction; it's whatever you make it, and the more bits you've captured, the more control you have over the tone mapping. It's not about preserving some mythic non-reproducible invisible highlight detail; it's about making full use of the camera's dynamic range to provide maximum flexibility in tone-mapping.
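Option 2 can be sketched numerically. This is a simplified stand-in, not Camera Raw's actual pipeline: a fixed two-stop pull-down and a plain gamma-2.2 encode are assumed for illustration, with all the arithmetic done in floating point so no levels are discarded before the final 8-bit rounding:

```python
# Simplified "capture hot, darken later" rendering. The 2-stop pull and
# gamma 2.2 are illustrative assumptions, not Camera Raw's actual curves.
def render(raw, full_scale=4095.0, darken_stops=2.0, gamma=2.2):
    linear = (raw / full_scale) / (2 ** darken_stops)  # darken in linear space
    return round(255 * min(linear, 1.0) ** (1 / gamma))  # gamma-encode to 8-bit

# The clipping point of the hot capture lands comfortably below 255,
# with all 2048 levels of the top stop still feeding the tone map.
print(render(4095))
```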
Where we came in was how to determine when we're running into irrecoverable highlight clipping. Trying to estimate where 255 lands by metering on 47 is a very uncertain exercise....
actually ...
eye have to say i welcome the intellectual peer review (Bruce), the diverging perspectives (Rags), and everyone else who has contributed to date (moderators and antagonists alike).
!! <applause> :-)
eye couldn't hope to gain such specialized, nor timely, debate from any one other source.
if i may add my own perspective, this reminds me much of my university days. perhaps there are really two mechanisms of thought going on here. one is focusing on optimizing the benefits of existing technology, and the other on the potential of evolving technology .. or the classic engineer vs. artist perspective.
entropy and enthalpy are forces which balance each other and it's everyone's responsibility to ensure that neither becomes so dominant that they spawn chaos or narrow thinking. that, imo, is perhaps the most important part of the human condition.
lastly, the only thing that should make anyone shoot coffee down their nose is the morning's headlines. :-)
Rags Gardner said:
"A digital sensor is simply a photon counter as explained. And raw values represent linear voltages as explained. And perception is not linear as explained. However image tones and binary encoding have nothing at all to do with each other. This is being denied by many respected photographers and unfortunately blindly accepted by too many others."
Rags, I certainly appreciate your input in this forum and recognize that you know something about computers, having recently retired from a career at IBM.
Let's concentrate on a topic we can all agree on: digital sensors are photon counters. The consequences of this property are discussed in a very informative post by Roger Clark, MIT PhD in astrophysics and an accomplished photographer. In summary, when photons strike a photo sensor, electrons are released and accumulate in what can be likened to a well, and the result is a voltage, which is read by the ADC and digitized with a bit depth of 12 in most cameras. When the well is full, the sensor is saturated; the full-well capacity of a Canon 1D Mark II is about 52,300 electrons. Full well is attained when the camera is used at ISO 100. If you use ISO 400, the full well is not utilized; rather, the gain is increased to attain the same voltage, and noise is increased.
The standard deviation of a count is determined by the Poisson distribution and is simply the square root of the count. If you double the count, the relative error (noise as a fraction of signal) is reduced by a factor of 1.4, not halved, and you reach a point of diminishing returns.
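The square-root relationship is easy to tabulate. A quick sketch (Python; the 52,300-electron full-well figure is the one quoted above for the Canon 1D Mark II):

```python
import math

FULL_WELL = 52300  # electrons at saturation, per the figure quoted above

# Shot noise is Poisson: sigma = sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
# Each stop below full well halves the count and costs sqrt(2) ~ 1.4x in SNR.
for stops_down in range(6):
    n = FULL_WELL / 2 ** stops_down
    snr = math.sqrt(n)
    print(f"{stops_down} stops below full well: {n:8.0f} e-, SNR {snr:6.1f}")
```

At full well the SNR is about 229; six stops down it has fallen to about 40, which is why shadow noise dominates long before the encoding runs out of levels.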
The table below is adapted from Roger's calculations for the Canon 1D Mark II, with electron counts and noise levels shown according to Roger's methods for various Zones. The shadow noise and the number of usable levels are more related to the laws of physics and statistics than to the digital encoding (presuming you use enough bits).
I am a proponent of exposing to the right, but one must be aware of the dangers. If you double the number of photons by giving another EV of exposure, you decrease the relative noise by a factor of 1.4 and gain something in the shadows, where noise is critical. I will leave it to the experts to determine the number of levels gained in the shadows.
However, if you inadvertently blow one stop of highlights you lose 2048 levels according to the prevailing wisdom for a 1.4x gain in shadow noise. Is this wise? In my experiment, I gave an additional 0.5 EV of exposure over nominal and lost the highest 0.1 density (1/3 stop) of highlight detail for an insignificant gain in shadow noise. Which is better? Highlight recovery in ACR is great, but is of no use if all 3 channels are blown.
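The "2048 levels" figure follows directly from linear encoding: each stop down from clipping occupies half of the remaining code values. A sketch of that arithmetic, and of the trade-off discussed above, on an idealized 12-bit linear scale:

```python
import math

CODES = 4096  # idealized 12-bit linear scale

# Each stop below clipping spans half of the remaining code values.
for stop in range(1, 7):
    hi = CODES // 2 ** (stop - 1)
    lo = CODES // 2 ** stop
    print(f"stop {stop} below clipping: {hi - lo:4d} code values")

# The trade: +1 EV doubles the photon count, improving relative shot
# noise by sqrt(2) ~ 1.4x, but clipping that stop discards its levels.
print(f"SNR gain per EV: {math.sqrt(2):.2f}x")
```

The brightest stop alone holds 2048 values, the next 1024, and so on, which is the whole statistical basis of the expose-to-the-right argument and of the risk of blowing that top stop.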
These results are very similar to what Rags noted in his tests and confirm his conclusion that there is not much headroom above Zone V with digital. For those who insist on Zone IX exposure, just use proper placement.
Here are plots of the linear 16-bit images analyzed by Norman Koren's Imatest. Linear is used to avoid changes produced by gamma and rendering. Yes Jeff, I am aware of linear encoding, and no Bruce, your black hole is not demonstrated. :)
I have no financial interest in Imatest, but am merely a user of the program and a fan of Norman Koren (who also has weighed in on exposing to the right).
Your comments are most appropriate and remind me of a passage from Ansel Adams in The Negative:
I can recall seeing Edward Weston, who was not particularly of scientific persuasion, using his meter in rather unorthodox ways. He would point it in several directions, take a reading from each, and fiddle with the dial with a thoughtful expression. "It says one-quarter second at f/32; I'll give one second."
Does this sound familiar?
There is so much to be learned from re-reading Adams's book. The basics have not changed despite protestations that "digital is different" or "exposure is not based on zone V". For those who have forgotten, the Kodak card also has a white side, which you are free to use.
Which goes back to the original post: that Mr. Janes was testing the point of clipping with a Nikon D70, which uses lossy compression and is therefore _NOT_ a 12-bit digital capture (more along the lines of 10 or 11 bits, depending on who you listen to). So Mr. Janes seems hell-bent on proving something, but what? I don't know. . .
The important issue is one of knowing, without guessing, the exact ISO of YOUR sensor, and of knowing how to expose so that a tone in your scene that needs textural detail is exposed just shy of clipping.
If you think you can determine that by metering incident light or metering off a grey card, then I would point you to one of Bruce's last posts: "Trying to estimate where 255 lands by metering on 47 is a very uncertain exercise...."
Which was the whole point of the previous discussion: if what you are trying to do is keep textural highlights from clipping, metering for, dare I say it, zone V is far less accurate than metering for the textured highlights, particularly when you factor in that sensors are linear.
I don't care who is a retired engineer from where. . .I do have over 25 years' experience as an advertising photographer and studied photography at RIT with Minor White, Stroebel, Todd, and Zakia. And I'm here to tell you that Rags simply does not understand the zone system or the practical applications of exposure with film, let alone digital. Ansel Adams developed the zone system as a way of exposing for the shadows and developing for the highlights of B&W film negatives. Trying to apply that to color neg, chrome, or digital capture is a perversion of the zone system and pretty much doomed to failure.
With the linear nature of digital sensors (and yes, Mr. Janes, it WOULD be useful for you to grasp the actual implications and not just the technical ones) there is one truth: if you flood the sensors to clipping, there is no detail left. So, the optimal approach to exposing digital is to know exactly where in the scene you want your textural highlights and to expose to maintain that detail. And to date, there are simply no real good tools, either built into the exposure meters of DSLRs or in wide-angle reflected-light or incident meters, to easily determine that exposure.
Metering to make a zone V in a scene be a zone V in a processed raw file is _NOT_ the way to go about it. Metering to maintain textural detail is the only way to control that textural detail. So, contrary to the zone system, in which you exposed for the shadows and processed for the highlights, in digital you should expose for the highlights and process for the shadows, because it's a lot easier (and provides better signal-to-noise) to make midtones and below darker than to try to lighten them up.
Care to actually shoot any black cats Bill? Meter to make zone V zone V in the processed shot and see if you can even see the darn cat...if it's anything like _MY_ black cat, about all you'll see is his glowing eyes.
"In summary, I would regard the exposure as proper when I obtain the intended tonal values without needing to use the exposure control of ACR."
Ya see, you still don't get it. . .Camera Raw was designed with tools to allow pretty much total control over the global tone and color of a raw capture. All of its tools were designed to be used, including the exposure control. It's there to be used and it's foolish to refrain from using it. The exposure is "proper" when you get the image the way you want it, regardless of the tools you employ to achieve it. And it sure as heck ain't based on zone V. (never was really)
"For some reason Jeff just gets under my skin. His attitude is condescending and cynical."
Back at ya buddy boy. . .
You make all these assertions with no experimental data to back up anything you say. Is this scientific? It's nice to know about your photographic degree and training, but wasn't this before the advent of digital?
With regard to the lossy compression of the D70, unnecessary tones are discarded but their overall magnitude is not changed. I do not have a D2X for testing and apparently you don't either, so hold off on the conclusion that my test is worthless. The D70 does record tones at a normalized exposure of 0.95 (data number 3900 out of 4095 in 12-bit) and does cover almost all of the 12-bit range; 10, 11, or 12 bits, the concept is still the same.
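The two positions can be reconciled with the lookup-table figures quoted earlier in this thread. The tonal *range* is nearly unchanged, but the number of distinct values per stop is not. A small tabulation (Python; the D70 per-stop counts are the ones quoted earlier in the thread, not independently measured here):

```python
# Values-per-stop of the D70's lossy NEF lookup table, as quoted
# earlier in this thread, versus an uncompressed linear 12-bit scale.
d70_values = {"0-128": 129, "129-256": 122, "257-512": 77,
              "513-1024": 80, "1025-2048": 114, "2049-4096": 161}
linear_values = {"0-128": 129, "129-256": 128, "257-512": 256,
                 "513-1024": 512, "1025-2048": 1024, "2049-4096": 2048}

print("total D70 table entries:", sum(d70_values.values()))  # → 683
for rng in d70_values:
    print(f"{rng:>9}: {d70_values[rng]:4d} of {linear_values[rng]:4d} levels")
```

The top stop keeps only 161 of its 2048 linear levels, which is why the range survives (hence "visually lossless") while aggressive highlight tone-mapping has much less to work with.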
Adams exposed for the shadows and developed for the highlights since he mainly used negative material. However, he also did color with chromes and exposed for the highlights. You can meter from Zone IX or V and place the tones properly; the difference is a fixed number of stops since we are dealing with a log scale and the effect is the same, notwithstanding Bruce's comments.
I don't flaunt titles, but it is Dr. Janes (pathology); you can just call me Bill. No, I haven't taken pictures of a black cat, but I too have 30 years of photographic experience with medical specimens, photomicroscopy, and electron microscopy.
Since you have such a narrow view, offer no data of your own, do not respond to scientific reasoning, and are nasty, I do not intend to respond to your sarcastic comments.
Dr. Janes says: "Now that the pixel mafia is back from their meeting, I hope they will add their insight and expertise to the discussion."
Well pardon me all to heck, I thought that was an invitation to discuss your original post. Silly me. . .
Dr. Janes says: "The brightest reproducible tone occurs at a 12 bit data value of 3900, which corresponds to 243, 31266, and 62533 respectively in 8 bit, Adobe 15+1, and 16 bit encoding."
For _YOUR_ camera, which has been pointed out to compress the lightest portions of the capture by way of lossy compression, your test is less useful due to that compression, which I understand cannot be turned off in the D70. So, while useful for the D70, what you tested does not have broad application to other cameras whose compression scheme is lossless.
Dr. Janes says: "You make all these assertions with no experimental data to back up anything you say. Is this scientific?"
Ok, so at what point do I go back and start proving things to your satisfaction? Do I need to prove to you that sensors are linear capture devices? Do I need to prove how light meters behave? Do I need to prove that an 18% grey card has a 50% reflectance? Do I need to cite Ansel Adams to prove he was exposing for the shadows and developing for the highlights? Do I need to prove that a sensor, once flooded, clips? Jeeeze bud, this really ain't rocket science.
And yes, my degrees are pre-digital. But my first photographic assignment that was assembled digitally was in 1984. I started working in Photoshop in version 2.0 in 1992 and I've been doing digital capture in one form or another since 1995/6. Whooptie-*******-do.
Dr. Janes says: "Since you have such a narrow view, offer no data of your own, and do not respond to scientific reasoning, and are nasty, I do not intend to respond to your sarcastic comments."
Nasty? Me? Surely you jest, my good doctor. It was you who got personal, bud. Look back at my posts. . .point me to one "nasty" phrase directed at YOU and I'll fall on my sword. Otherwise I just see this as a fun debate that seems to have gotten out of your direct grasp; deal with it. . .
Bruce Fraser said:
"...Where we came in was how to determine when we're running into irrecoverable highlight clipping. Trying to estimate where 255 lands by metering on 47 is a very uncertain exercise...."
I agree with everything Bruce said (after all, he is a recognized authority), but the crux of the problem, which he does not resolve, is placing the highlight tones without clipping. By his own admission, the tools to do this reliably are not currently available.
You can use the camera luminance histogram of the jpeg rendered image preview, but this may be misleading as previously pointed out. However, it will get you into the ball park.
Another method is to use the concepts of the zone system and take a reading of the cat. From your previous tests, you can use that reading to place the black of the cat wherever you want. However, you can't be sure where the highlights will fall. You do not have to take a reading from Zone V--this is merely related to the meter calibration (many meters are calibrated for 12%, not the 18% used by Ansel Adams).
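The 12% vs. 18% calibration difference is a fixed offset on the log (stops) scale, so it is easy to compute once and fold into any placement:

```python
import math

# Offset, in stops, between a meter calibrated to 12% reflectance
# and one calibrated to the classic 18% grey card.
offset_stops = math.log2(18 / 12)
print(f"calibration offset: {offset_stops:.2f} stops")  # ~0.58 stop
```

So a 12%-calibrated meter reads about 0.6 stop "hotter" than an 18%-calibrated one; as long as you know which you have, any zone can serve as the reference and placement is just an exposure offset.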
The exposure adjustment in ACR has its limits as shown by Rags' experiments but perhaps these can be overcome with the brightness control. But does anyone deny that proper exposure is advisable?
Any advice on placing those tones, Bruce?
"From your previous tests, you can use that reading to place the black of the cat where ever you want. However, you can't be sure where the highlights will fall."
Which is exactly why it is more useful to expose for the highlights. If there are no highlights near clip, you can increase the exposure to get a better tone rendering of your black cat. Should you expose to place the black cat at near a textural highlight? I doubt it, but rendering it lighter on the raw capture would reduce the noise and produce a better signal to noise ratio, assuming the cat doesn't claw you.
"The exposure adjustment in ACR has its limits as shown by Rags' experiments but perhaps these can be overcome with the brightness control."
None of the controls in Camera Raw are designed to be used alone; they all integrate and must be used together. Alter the exposure setting and you completely change the mid-tone control point of brightness and contrast. Once exposure, shadows, brightness, and contrast are set optimally, you can further tweak the overall tone rendering with curves, and with a very high degree of precision, particularly in the highlights. If you _DON'T_ use Camera Raw's controls together with one another, you are losing considerable functionality and giving up a lot of potential control over the tone rendering, which would be a shame really.
"But does anyone deny that proper exposure is advisable?"
The big question, and one that is currently very difficult to predict with digital capture and today's meters is what exactly is the "proper exposure"? The nature of linear sensors has altered that definition. The zone system was based upon a different technology and has only limited usefulness with digital. The controls available for raw processing-particularly in Camera Raw further alter the traditional approach to film based exposure and processing. It's basically a whole new ball game.
>I can recall seeing Edward Weston, who was not particularly of scientific persuasion, using his meter in rather unorthodox ways. He would point it in several directions, take a reading from each and fiddle with the dial with a thoughtful expression. "It says one-quarter second at f/32, I'll give one second"
artists like to break the rules. it reminds me of a certain graphic designer i used to know ... :-)
as to Ansel's books ... once upon a time i owned a small bookstore-cafe and managed to squirrel away many --if not most-- of his publications. unfortunately they're locked up in storage in another country, along with a collection of writings and photographs by Alfred Stieglitz and Freeman Patterson--both deeply philosophical photographers who managed to break the rules of their time. i hope one day to retrieve and revisit the timeless knowledge recorded on those pages.
thanks for sharing your memories.
Uh-oh. I'm surrounded by Science. What to do...I once shot photographs for a living ("you call that a living?") and for the past 37.3 years, give or take, have been making exposures, some of them good, some of them awful. I cannot play either a doctor or a professional photographer -- no, not even on TV. (C.V. will not be made available on request. No, don't thank me.)
So how do you shoot a black cat? Here's how I did it -- first, the wrong way and second, the right way. 1) Cat presented himself in completely adorable pose (cue the "Kodak Moment" music). Quick, grab camera. Fumble for on/off switch. Where the devil did they put the damned switch...ok, there it is. Uh-oh, low light and ISO 100. Quick, fumble for firmware settings to arrive at ISO 800. Cat might not hold pose much longer...aim, focus, shoot. OOPS. Didn't think fast enough. Of course, indicated meter reading will overexpose the shot. Yep, it's overexposed. Shot later panned by a certain programmer who haunts this forum because it was overexposed (the photo, not the forum). Abject humiliation. However, the print made from the overexposed digital file turned out ok. Not gorgeous but acceptable. Made note of this to aforesaid programmer, who, for some reason, did not respond. :)
2) Once again, cat posing in a photographically appealing fashion. Quick, grab camera. Fantastic luck -- found on/off switch a bit faster this time. Already at ISO 100 -- good, good, more photons in evidence today. And this time, remember that indicated meter reading will not be sufficient. (Mind racing, fumbling for useful mnemonics...ah, here they are: dark...down...minus -- aha! Yreka! Decrease exposure!) Turn magic EV compensation wheel provided in convenient location on camera. Better do it fast, as Kodak Moment is disappearing rapidly. Cat has reached zenith of cuteness and is about to become bored. So pluck the magic twanger and SHOOT, fool! <click>
Shot at 1.5 or so stops less than the indicated meter reading, all praise to the magic EV comp wheel. The resulting inkjet print can't rival the best stuff I did on Agfa enlarging papers back in the days when dinosaurs roamed the earth, but it's the best damned inkjet print I've made and has a nice range of rich dark tones, and that print, while not Agfa-like, is making it damned difficult for me to give up the Epson 1280 with its dye-based inks. Pigment, schmigment. But I digress. There's plenty of detail throughout the dark-on-dark areas of the cat's fur. Further, I was gratified to see that the victim's rather bright white fur patches also contain detail. I am rarely impressed by what I shoot, but in this case I will make an exception.
So that's one way to take a decent shot of a black cat. Hell with the numbers! Sometimes you have to make your decision in a hurry, turn the damned EV wheel, and spray 'n' pray.
"Abject humiliation. However, the print made from the overexposed digital file turned out ok. Not gorgeous but acceptable. Made note of this to aforesaid programmer, who, for some reason, did not respond."
Your post is hilarious as well as perceptive. Some of these resident experts never respond when you make a good point but will jump all over any small error in technical semantics.
I recently made a post in this thread about the full well concept of sensors, which made some good points about ISO, noise, and exposing to the right. But so far not a single response. I guess that is good news, since they have not found anything to nit pick about.
Talk about casting pearls among swine (just kidding) ;)
>So that's one way to take a decent shot of a black cat. Hell with the numbers! Sometimes you have to make your decision in a hurry, turn the damned EV wheel, and spray 'n' pray.
Well said, Mike. Let's not let the science obtrude so much that we lose track of what - hopefully - is our primary interest, taking pictures.
If science and numbers help in our goal, fine, but they need not dominate our thoughts now, in this digital age, any more than concern with chemistry and density measurements did in the days of "analog" photography. I think the following quote from Adams is as relevant today as it was when he first wrote it:
"But the practical photographer is not necessarily a sensitometrist; he may employ curves which are based on arithmetic relationships and which are quite satisfactory to him as symbols of the exposure-opacity relationship. But he must not confuse the actual sensitometric curves with any symbols he selects for personal evaluations. I see no reason for the practical photographer to plot curves except in preliminary studies and in some aspects of color ..."
Well stated, but didn't you know that digital is different and the Zone System does not apply to digital? :)
-->I recently made a post in this thread about the full well concept of sensors, which made some good points about ISO, noise, and exposing to the right
I think I've been pretty clear that unless you've made the aesthetic decision to blow highlights in order to hold very deep shadow detail, you don't want to expose so far to the right that blooming becomes an issue. All along, I've been advocating correct exposure, not systematic overexposure.
The point is quite simply that with digital capture, the midtone, 18% (or 12.5%, or 13%) reflectance is enormously more fungible than the white point in post-capture tone mapping, hence the useful metric by which to judge whether or not an exposure is "correct" is the white point.
Forget for the moment the cushion offered by ACR's extended highlight recovery logic; it's enormously useful in practice, but it shouldn't be relied upon if you're aiming for correct exposure. When you reach the point where the well is filled in all three channels (bearing in mind that the red filter transmits more light than the green, which in turn transmits much more light than the blue), there is no highlight to recover, and attempting to stretch what detail is available typically produces rainbow artifacts. Those may be manageable if you're willing to entertain only a narrow range of white balances, but they are not something to strive for.
Rather, the point I've been trying to hammer home is that if you capture 18% reflectance so that it gets recorded at its "correct" value in linear gamma, down around level 47, you waste a huge chunk of the camera's response on highlight detail to which your eye is totally insensitive, while simultaneously starving the shadows. If you look at linear conversions of step wedges, I can't see how you'd come to any other conclusion unless you're misled by Nikon's lossy compression.
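That "huge chunk" is easy to quantify on an idealized 12-bit linear scale (a sketch; real sensor response, metering offsets, and tone curves add fudge factors on top):

```python
import math

CODES = 4096             # idealized 12-bit linear scale
mid = 0.18 * CODES       # 18% grey recorded at its linear value (~737)

above = (CODES - mid) / CODES          # fraction of codes above middle grey
stops_above = math.log2(1 / 0.18)      # dynamic range above middle grey
print(f"{above:.0%} of code values cover the {stops_above:.1f} stops "
      f"above middle grey; {1 - above:.0%} cover everything below")
```

In other words, about 82% of all code values describe roughly 2.5 stops of highlights, while everything from middle grey down to black shares the remaining 18%, which is the starved-shadows half of the argument.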
You have to learn the behavior of your personal camera body and its metering system, which involves two compensations, one for the difference between nominal and actual ISO, one for the difference between where the meter expects the highlight to be in relation to 18% or 12% reflectance and where the camera actually puts it. All the usual fudge factors needed to compensate for the fact that the scene with 12% (or 18%) average reflectance rarely occurs in anything worth shooting still apply....
(Also replying to Bill Janes) I don't mean to denigrate attempts to master technique -- God knows, I spent enough time in darkrooms years ago trying to achieve semi-hemi-demi-mastery of black and white film and paper development. It's just that sometimes these discussions devolve into a numbers-game that makes garden-variety "pixel peeping" look like nothing by comparison. Sure, to programmers and scientists, the numbers matter. But I cannot imagine how anyone can shoot pictures while in the midst of these attacks of number-neuroses. Someone put it to me this way in e-mail: he was finding the mechanics of digital capture so oppressive that it had begun to get in the way of his using the camera as a tool for creating artwork.
I got a practical lesson about this the other day when my wife told me that a number of people had been enjoying a shot I took that I don't much like. I could tell I wouldn't like it when I saw it on the camera's LCD; I was much less thrilled by the print. Difficult lighting situation. There's a large highlight in the scene that is severely overexposed and there's nothing to be done about it -- it's 255/255/255 all the way. Exposing it correctly would have required fill-flash (not possible at the time) or ghastly underexposure of the rest of the scene (eeeuuwww). To my eye, the shot looks big-time-amateurish because of the large overexposed highlight. But the feedback I got was that some people who saw the print liked the overexposure in that one area. It looked "cool" to them. Go figure! I could simply dismiss their opinions as "uninformed"...and then again, I could consider that maybe my own prejudice about "everything" in the image having to be perfect sometimes just gets in the way.
Years ago a landscape architect I knew was approached in his office by a junior architect who was having serious problems drawing something. He wanted to get certain measurements just right but he couldn't figure out how to do it. The older guy looked up from his work, barked " EYEBALL the sucker!" and went back to what he was doing. End of conversation. Point taken...
> Well stated, but didn't you know that digital is different and the Zone System does not apply to digital. :)
This point appears to be, ah, controversial. I brought it up in an e-mail exchange with someone I consider to be a master printmaker, and he snapped back that the Zone System was developed to previsualize a black-and-white print while shooting black and white film, and cannot be applied to digital capture.
It wouldn't make sense for me, a simple bumpkin living in the provinces, to argue with that guy. But, I thought...if I have a meter calibrated correctly to expose a medium-grey object such that it is rendered as a medium grey object without excessive fiddling within the raw image converter...then why would an area of tonality two stops brighter than that not be "previsualizable" as Zone VII -- at least in some way that's vaguely analogous to "real" Zone System thinking -- and even if I'm working in color and not in black and white? What am I missing here? Is there simply no analogy at all -- even if someone is shooting with the intent of making a black and white print from a color digital original?
Whatever the objectively correct answer to the controversial questions, one thing's for sure: that Sekonic meter of mine, which is not a super-cheapie and which I seriously doubt is way off the mark calibration-wise (its "take" on studio flash exposures is almost right on the money), gives me nasty underexposure when I use it for incident readings outdoors. This having happened often enough, I am wondering if the meter-for-the-highlights / develop-for-the-shadows approach is going to be the only one that's practical in extreme lighting conditions. Or, perhaps the metering technique has to change. As in: don't point the thing directly back at the camera, as is typical. Point it downward as if the light source were coming from the ground. Sounds completely ridiculous, but the ridiculous-sounding approach to incident metering would surely have given me better exposures at times when I did it in the conventional way.
Maybe "digital" really is only Kodachrome-25-On-A-Chip. :)