Yep...more or less.
> Does 1 stop of underexposure in 14 bit mode result in the loss of 8192 levels?
No, it does not. Half of the levels would go unused if the exposure were exactly one stop below the theoretical maximum. However, it is very difficult to achieve a perfect exposure to the right, i.e. to use the entire dynamic range without clipping, even with exposure bracketing. (Let's set aside the question of how one can verify the exposure while shooting.)
I don't know what you mean by "underexposure", but as the exposure is reduced, the range of the top 1 EV gets smaller and smaller. Depending on the dynamic range of the scene, "underexposure" could start, for example, in the third EV from the top. The numerical range of the third EV is 2048, i.e. one stop of underexposure there would reduce the numerical range by 2048 compared to no underexposure.
Another issue is that many cameras do not utilize the full bit depth. For example, the Canon 30D creates only about 3260 levels instead of 4096, and the Canon 40D about 12500 levels instead of 16384 at ISO 100. Accordingly, the range of the top EV is much smaller than 8192, and the ranges of the lower EVs are reduced in the same proportion.
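To put numbers on the per-EV ranges, a quick Python sketch for an ideal linear file (illustrative only; as noted above, real cameras may not use the full bit depth):

```python
# Levels available in each EV below saturation for an ideal N-bit linear file.
# Linear encoding halves the numeric range with every stop down, so a 14-bit
# file has 8192 values in its top EV, 4096 in the next, 2048 in the third, etc.
def levels_per_stop(bit_depth, stops):
    top = 2 ** bit_depth
    return [top >> (s + 1) for s in range(stops)]

print(levels_per_stop(14, 4))  # [8192, 4096, 2048, 1024]
```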
G Sch wrote -
I don't know what you mean with "underexposure"
In my case, 1/10th of a stop shy of 255 would be a normal exposure for scenes with a full tonal range - blacks to whites - that fit into the sensor's dynamic range (incident light meter re-calibrated to the camera's sensor to render whites with detail at 1/10th shy of 255).
+2/3rds over 255 (+0.66 using highlight recovery) to pump slightly more light into shadow areas, where no fill light is available, if required - depending on my aesthetic goals.
-0.70 would be 1 stop of underexposure from what I term my normal exposure.
> In my case 1/10th of a stop shy of 255 for me would be a normal exposure
Hold on, something is wrong. Just before, you were talking about *raw* data (14-bit depth, 8192 levels), and suddenly you have switched to *gamma-encoded RGB*.
The original raw data and what you see displayed by the raw converter (no matter which one) are miles apart. This page http://forums.dpreview.com/forums/read.asp?forum=null&message=26905476 demonstrates the issues (never mind that it is in a Sony thread; the underlying issues are identical for all cameras).
>14 bit=16384 possible greyscale levels. Does 1 stop of underexposure in 14 bit mode result in the loss of 8192 levels?
That loss is only theoretical, since the actual number of resolved levels is limited by noise, as explained in this post by Emil Martinec, Professor of Physics at the University of Chicago:
Let's use the Nikon D3 as an example. The full well capacity is about 65568 electrons and the gain is 4.2 electrons per 14 bit DN (data number). At full well of 65568 e- the photon noise would be sqrt(65568) = 256 electrons, which translates to ± 61 DN of noise using the stated gain. However, the standard deviation at saturation is low since positive noise would be clipped.
Going down 0.5 EV, the electron count would be 46364 ± 215 and the DN would be 11039 ± 51. You cannot resolve individual pixel levels through this noise. Furthermore, the eye can resolve only about 70 levels in the brightest f/stop of a digital capture (Weber-Fechner law), so the theoretical 8192 levels in the brightest f/stop are far beyond what would be required even for the most demanding application with the most extreme editing.
The Nikon lossy compression makes use of the above facts. Their 14 bit NEF compression records only 2753 of the 16384 levels, throwing away superfluous levels in the highlights, all without any perceptible loss of image quality. I expect some to challenge this assertion of visually lossless compression, but I doubt very much that they will have any data to back up their challenge.
Exposing to the right is important for achieving the best signal to noise ratio. The number of levels in the highlights is not a proper justification, but the number of levels in the shadows could be important if noise were not a limiting factor.
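For anyone who wants to check the arithmetic, here is a small Python sketch reproducing Emil's D3 numbers (the full-well and gain figures are his approximations):

```python
import math

# Emil Martinec's Nikon D3 figures: full well ~65568 electrons,
# gain ~4.2 electrons per 14-bit data number (DN).
full_well = 65568          # electrons at pixel saturation (approximate)
gain = 4.2                 # electrons per DN

noise_e = math.sqrt(full_well)       # photon (shot) noise at full well, in e-
noise_dn = noise_e / gain            # the same noise expressed in DN

half_stop = full_well / math.sqrt(2)            # signal 0.5 EV below saturation
half_noise_dn = math.sqrt(half_stop) / gain     # shot noise there, in DN

print(round(noise_e), round(noise_dn))                          # 256 61
print(round(half_stop), round(half_stop / gain), round(half_noise_dn))  # 46364 11039 51
```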
Can we keep this at a less technical level? If two RAW files were exposed of the same scene under the same lighting (a contrast range that fits the sensor), would the image underexposed by one stop from 255, as judged in a RAW converter (gamma corrected), have substantially fewer greyscale levels than the image exposed to 255?
Whether our eyes can observe the differences further down the workflow chain, after Photoshop adjustments, is another topic.
>Can we keep this at a less technical level? If two RAW files were exposed of the same scene under the same lighting (a contrast range that fits the sensor), would the image underexposed by one stop from 255, as judged in a RAW converter (gamma corrected), have substantially fewer greyscale levels than the image exposed to 255?
I think that everyone would agree that with linear integer encoding one would lose half the possible levels with one stop of underexposure. If you encoded in log or floating point, the loss would be less. If you reduce the bit depth and introduce gamma encoding, levels can be lost as shown by Bruce Lindbloom's levels calculator.
If you ignore the effects of noise and the limitations of human perception, then I think the analysis is meaningless.
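As an illustrative sketch of the level counting (the 1/2.2 gamma and 8-bit output here are assumptions for the example, not any particular converter's curve):

```python
# With linear integer encoding, one stop under full scale leaves only the
# bottom half of the values in play: half the possible levels go unused.
bit_depth = 14
full = 2 ** bit_depth                  # 16384 possible linear levels
used = full // 2                       # 8192 remain after 1 EV of underexposure

# Map the surviving linear values through a simple 1/2.2 gamma into 8 bits
# and collect the distinct output codes actually reachable. Note that one
# stop under tops out near 8-bit code 186, not 128: gamma encoding piles
# the output codes into the shadows, not the highlights.
codes = {round((v / (full - 1)) ** (1 / 2.2) * 255) for v in range(used)}
print(used, max(codes))
```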
"I think that everyone would agree that with linear integer encoding one would lose half the possible levels with one stop of underexposure."
This reply takes me back to my original question, answers it, is what I suspected, and Jeff Schewe has also confirmed. Gabor, IMHO, although in good faith, has over-complicated matters.
I seem to recall Bruce on this forum, approximately 3 years ago, also stating with eloquent reasoning, that half of all linear levels were lost by 1 stop of underexposure.
I am aware that a digital camera perceives light in a linear manner and that it is necessary for a RAW converter to redistribute the linear tonal information to correspond closer to the way we perceive brightness (gamma correction).
>I seem to recall Bruce on this forum, approximately 3 years ago, also stating with eloquent reasoning, that half of all linear levels were lost by 1 stop of underexposure.
Yes, and you can add Thomas Knoll to the list of luminaries making this statement. However, when you take into account noise and the limitations of human vision, the number of useful levels lost in the highlights is much less. While the highlights in raw files have more levels than is actually needed in most cases, the effect of one stop underexposure also halves the number of levels in the darker f/stops and this can have practical significance. With underexposure, you should worry more about noise and posterization in the shadows.
>would the image underexposed by one stop less than 255; as judged in a RAW converter (gamma corrected), have substantial less greyscale levels in comparison to the image exposed to 255?
Yep...more or less.
> Can we keep this to a less technical level
Your question is of a technical nature. You are already confused; you should either drop the issue or try to understand it. "Dumbing down" the answer does not help.
You are talking about *raw levels*. The top stop from the maximum possible exposure (pixel saturation) occupies half of the levels the camera can create in that given situation. However, the RGB value 255 does *not* correspond to the maximum pixel value.
Furthermore, the red, green and blue pixels are usually not equally exposed. For example, it is typical in daylight that the red raw channel is at least one stop lower than the green, with the blue in between. That means that, starting out with the maximum exposure without clipping, the upper half of the range is not used by the red channel. When you reduce the exposure by one EV, all pixel values are halved. Thus the green pixels "release" or "lose" one half of the entire numerical range, but the red pixels "release" or "lose" only one quarter of it.
I put a demo together in the form of a layered TIFF, with the explanation inside:
> "Dumbing down" the answer does not help.
FWIW - I've gotten benefit from the more technical descriptions from yourself and Bill. The OP does have a point, however: there are ways to explain these concepts that are clearly understandable to a reasonably educated person who lacks the deeper technical foundation. It is no mean feat to accomplish that without dumbing down the answer.
G Sch wrote -
"Furthermore, the red, green and blue pixels are usually not equally exposed. For example it is typical in daylight, that the red raw channel is at least one stop lower than the green, and the blue is in-between."
I am fully aware that the green channel will be ahead of the red and blue channels due to the nature of the Bayer filter. I have several of my own images of a GM ColorChecker that have been put through Dave Coffin's dcraw. It's straightforward to select a grey patch and witness the RGB imbalance - I was aware of this some 3 years ago. People may know a little more than you think, so it's not wise to assume.
Had you been more courteous in your reply, instead of suggesting that I am confused (I am not, in relation to the question that I posed) and telling me to either drop the matter or understand the issue, I would have responded in a different manner - my earlier post referring to you mentioned "in good faith" (courteous) - this time I have decided not to display any people skills.
I suggest that you read people's posts carefully before you reply, to gauge an appropriate response - the hint was that my original post ended "just curious!" I appreciate that this is a skill that cannot be taught.
Jeff Schewe answered both of my questions in an appropriate manner, you failed miserably.
>Jeff Schewe answered both of my questions in an appropriate manner
If you consider Jeff Schewe's answers to be appropriate, you are not going into the matter in sufficient depth to understand the issues properly. These issues are more than theoretical - e.g. with current cameras, does one gain anything by going from 12 bit to 14 bit? In most cases no, because of noise. Can you discard redundant highlight detail for visually lossless compression? Yes, you can.
Look at the following chart, which uses Roger Clark's data for the Canon 1D Mark II, a large pixel camera with excellent noise characteristics. Data values and noise are shown for highlights, midtones, and shadows according to a Kodak Q-14 target.
The highlight tones (Step A) have a data number (DN) of 3650 ± 13.2. Five levels down, at DN 3645, the noise range is about 3632 to 3658, which overlaps the spread around 3650. The two values (3650 and 3645) are not significantly different: you have not resolved individual levels.
For the shadows (Step "B"), the DN is 92 ± 2.3. Five levels down at DN 87, the range is 85 to 89, which is significantly different from 90-94. You have resolved at least some of the levels.
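The comparison can be sketched with a simple separation test (a rough 2-sigma criterion, a simplification of the statistics above; five levels down from DN 3650 is DN 3645):

```python
# Can two raw values be told apart, given per-pixel noise sigma?
# A crude criterion: distinguishable when the separation exceeds 2 sigma.
def distinguishable(dn_a, dn_b, sigma):
    return abs(dn_a - dn_b) > 2 * sigma

# Canon 1D Mark II figures from the chart: highlight Step A at DN 3650 +/- 13.2,
# shadow Step B at DN 92 +/- 2.3, each compared with a value five levels lower.
print(distinguishable(3650, 3645, 13.2))   # False: lost in the noise
print(distinguishable(92, 87, 2.3))        # True: the shadows resolve levels
```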
Bill, would you please define Data Number? I googled it and came up mildly perplexed, as it has meanings all over the map!
>Bill, would you please define Data Number? I googled it and came up mildly perplexed, as it has meanings all over the map!
Data Number in this context refers to the raw pixel value. I don't know if it is formally defined, but DN is commonly used by people who evaluate raw files, e.g. Roger Clark:
>If you consider Jeff Schewe's answers to be appropriate, you are not going into the matter with sufficient depth to understand the issues properly
I actually learnt a great deal from you several years ago about highlight recovery; it was you who showed me a GM ColorChecker with the RGB histogram imbalance from a linear file. That is all I needed to know to have some understanding of how highlight recovery works. I am grateful for this help.
I fully understand that a linear file, after demosaicing and gamma correction, is a different beast. However, I examine RAW images with the software (ACR, Lightroom, Phase One or RAW Developer) that I use in the field at sporting events, not with Rawnalyze or other geeky (term of endearment) software for detailed analysis of RAW data.
I go to great lengths to tie my incident/spot meter to my sensor so that I can expose an image and know where the channels will rest in the RAW converter, without relying on inaccurate, JPEG-derived camera histograms. No, I don't use Julia Borg's UniWB in my Nikons to improve the accuracy of the camera's histogram, as I have a very accurate RAW workflow - but then you would not know this, as forum exchanges are a poor way of getting to know another person's abilities.
Whilst I appreciate that the digital workflow is more complex than film's, as we D&P our own work, in the days of E6 I carried out detailed tests with light meters to arrive at what I considered to be a very accurate exposure for my pro lab. I left the E6 plotting to them; neither did I need to understand sensitometry. I know film and digital require different approaches, so let no one assume that I don't realise this - the bottom line, though, for either medium, is arriving at accurate exposures.
With regard to understanding digital sensor technology in great depth (not workflows, in which I am very well versed), my knowledge is limited in comparison to yours and G Sch's, but I would put money on the fact that I can expose an image quickly, under pressure in the field, and consistently be a hair's width away from 255, or intentionally +1/3 or +2/3rds over 255, when the image is presented at default in a RAW converter - that is more important to me than maths.
You still miss my point regarding being satisfied with Jeff's reply. Your replies indicate, although with good intentions, a need to understand sensor technology to obsessive degrees, and they sometimes frog-march people down complex routes - I don't feel this to be necessary in my case; I just need to dig under the bonnet.
I drove high-performance cars for several years as a traffic patrol officer responding to emergency calls. Intense concentration and very skilled driving are required to drive a powerful motor car at high speed, safely and smoothly, more often than not on very narrow UK roads with all manner of hazards constantly presenting themselves - feeling the car's responses cannot be replicated by computer analysis and mathematics. Other than the basics of centrifugal force, to better understand how a car sits down around a bend (or, in some people's cases, does not), I didn't, and still don't, have a clue how a car engine works beyond suck, squeeze, bang, blow, and this total lack of knowledge had no effect on the end result - my driving skills.
Ansel Adams and Edward Weston had different approaches to exposure, although Edward Weston was not so slap-dash as reported - guess which photographer's work I prefer?
"Dumbing down" the answer does not help.
"Dumbing down" implies that an individual is unintelligent; however, simplifying a subject so that a person can digest the information takes great skill.
Indeed, Weston was meticulous. He processed negs by inspection (as did many others at that time.)
His body of work, as a collection, has yet to be surpassed.
>He processed negs by inspection
Yeah well, if you were shooting orthochromatic film (like they were) you could develop by inspection - kinda had to back then - by the light of a really, really bright red safelight... kinda hard to develop by inspection when processing panchromatic film, though... it tends to get bulletproof the moment any light touches the film!
I'm not advocating that, Jeff (BTW, I did successfully process panchro film by inspection with no fog. But it was too messy! So I learned the zone system instead), just pointing out that he was meticulous. Since his basic printing paper was Azo, which I believe came in only one grade, you had to be meticulous.
The trick in inspection is that the highlight density should match the shadow of a finger cast on the film (by transmission).
>You still miss my point regarding being satisfied with Jeff's reply. Your replies indicate, although with good intentions, the need to understand sensor technology to obsessive degrees and sometimes frog-march people down complex routes - I don't feel this to be necessary in my case and just need to dig under the bonnet.
Your replies indicate that you are no novice. The number of real levels in the brightest f/stop of a 14 bit file is generally unknown to the photographer and depends on the noise characteristics of the camera and the signal to noise ratio necessary to define a level. All the practical photographer needs to know is that the number of levels will be maximized and noise minimized by exposing to the right as far as possible. However, if you leave a bit of headroom, the lost levels will be imperceptible for practical purposes. If you err with slight overexposure, highlight recovery works reasonably well.
The brightest f/stop of a 14 bit raw file has 8192 possible levels and the next brightest stop would have 4096 levels. For the Nikon D3 at base ISO, the noise as documented earlier corresponds to about 14.5 levels. This suggests that the brightest stop of a 14 bit NEF from the D3 (Nikon's raw file) has around 565 real levels and the next brightest around 282 real levels. Since the human eye can distinguish only about 70 of these, one would have plenty of levels in the brightest f/stop even with one stop of underexposure. However, with this degree of underexposure you would lose 1 stop of dynamic range and noise would be more prominent, especially in the shadows. With 1 stop of underexposure, the signal to noise ratio (ignoring read noise and assuming that shot noise is predominant) decreases by a factor of 1.4, not 2.0. In my experience, the shadows are limited more by noise than posterization, so the number of levels in the shadows is not the limiting factor.
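The arithmetic behind those estimates, as a quick Python sketch (the ~14.5-level noise figure is the D3 value quoted above):

```python
import math

# Rough "real levels" estimate for the D3's brightest stops: possible levels
# in a stop divided by the noise expressed in levels (~14.5 at base ISO).
noise_levels = 14.5
top_stop = 8192 / noise_levels          # ~565 usable levels in the top stop
next_stop = 4096 / noise_levels         # ~282 in the next stop down

# One stop of underexposure halves the signal; with shot noise dominant the
# SNR falls by sqrt(2), i.e. a factor of about 1.4, not 2.
snr_factor = math.sqrt(2)

print(round(top_stop), round(next_stop), round(snr_factor, 1))  # 565 282 1.4
```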
Furthermore, I don't see any real advantage with the D3 at 14 bits over 12 bits. For practical purposes in my work, I don't worry about the number of levels with reasonable exposure. What is your experience with your camera and your work? To what practical purpose are you using Jeff's reply?