You using 4.3.1? 4.3 had a bug regarding compressed NEF files.
OK, G Sch perhaps you can answer this question that came up among some Nikophiles over in QImage land...
WHY in the world would anyone use lossy RAW?
You're effectively throwing out detail just to keep tonal range... Sounds sloppy.
> G Sch perhaps you can answer this question that came up among some Nikophiles over in QImage land
I am not the best person to answer this, as I don't own any Nikon camera, and I am suggesting that others no longer use lossy compression, because a lossless compression is now available as well.
However, the consideration is the same as with the older models, which offered only the selection between uncompressed or lossily compressed: storage and time for recording.
On the other hand, there are many photographers recording raw primarily for delaying the WB decision, and sometimes for saving some of the highlights. As long as one does not make "aggressive" adjustments, I don't see a problem with the loss of that information. However, I would not suggest exposing to the right with lossy raws.
> You using 4.3.1? 4.3 had a bug regarding compressed NEF files
Yes, it's with 4.3.1. The error in 4.3 was not with the new cameras, IIRC.
>WHY in the world would anyone use lossy RAW?
>You're effectively throwing out detail just to keep tonal range... Sounds sloppy.
With compressed NEF, what you are throwing out are redundant highlight tonal levels which the eye cannot detect, not image detail. The details of NEF compression are reviewed in the links below:
A 12 bit linear raw file contains 4096 levels of brightness, and 2048 of these are in the brightest f/stop. According to the Weber-Fechner law, the human eye can distinguish only about 70 of these 2048 levels. Many of the levels are redundant and can be eliminated with no visual loss, even though they might still be useful for highlight recovery or extensive editing of the highlights.
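The arithmetic above can be sketched in a few lines of Python. The 1% just-noticeable difference is an assumed Weber fraction for illustration, not an exact physiological constant:

```python
import math

# A 12-bit linear raw file: half of its code values sit in the brightest stop.
levels = 2 ** 12              # 4096 levels of brightness
in_top_stop = levels // 2     # 2048 code values between half-scale and full-scale

# Weber-Fechner sketch: assuming an ~1% just-noticeable difference in
# luminance, count how many distinguishable steps fit inside one stop
# (a factor-of-2 luminance range).
jnd = 0.01                    # assumed Weber fraction
distinguishable = math.log(2) / math.log(1 + jnd)

print(in_top_stop, round(distinguishable))   # 2048 code values, ~70 visible steps
```

So roughly 2048 stored levels carry only about 70 visually distinct steps in the brightest stop, which is the redundancy the lossy compression exploits.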
For a discussion of encoding efficiency, readers are directed to a post by Greg Ward.
In the section on Microsoft/HP scRGB Encoding (a proposed "HDR" format), he notes:
"Presumably, a linear ramp is employed to simplify graphics hardware and image-processing operations. However, a linear encoding spends most of its precision at the high end, where the eye can detect little difference in adjacent code values. Meanwhile, the low end is impoverished in such a way that the effective dynamic range of this format is only about 3.5 orders of magnitude, not really adequate from human perception standpoint, and too limited for light probes and HDR environment mapping."
So the answer to your question is that compressed NEF saves space without affecting visual quality. With the Nikon D200 (10 megapixels), the compressed NEFs are about 10 MB compared to 15.5 MB for uncompressed. The D3 (12 megapixels) has losslessly compressed NEFs which are about 13.5 MB for 12 bit files (it also offers 14 bits).
With the D200, I (and most photographers who post on the forums) often used compressed NEFs for all but the most critical images where highlight detail might be important. No one had demonstrated a loss of visual quality. In my experience, highlight recovery works quite well with these compressed NEFs. With the D3, I use the losslessly compressed NEFs, since the file size is reasonable and no data are lost.
Sincere thanks for the info Bill!!
I downloaded the compressed NEF you posted and it is corrupted. However, my own D3 produces compressed NEFs which ACR decode properly. Perhaps you have a bad memory card or card reader.
Here is a lossy compressed 12 bit D3 NEF that works fine with my Windows ACR 4.3.1
I don't know what is wrong but the usual HTML code for a link is not working now. I have used it successfully in the past.
P.S. After I posted my message I saw the one posted by Thomas Knoll. At least, I can confirm that in one case it worked for my camera. FWIW, I don't think lossy compression with the D3 is that good of an idea. I took shots in all 6 possible combinations. Compressed 12 bit was 10.58 MB and losslessly compressed NEF was 12.35 MB. An uncompressed 12 bit NEF was 18.85 MB.
There is a known bug that happens in some rare cases with lossy compressed Nikon D300/D3 files. Most of the time lossy compressed files work fine. We are working on a solution for the next dot release.
your example for the lossy compression is the "wrong example".
The lossy compression of the D300 and D3 has two variations: one is for "not very noisy" images, like the one you posted, the other is for very noisy ones. In the latter case lossiness is introduced not only before the compression in the form of a non-linear lossy mapping, but even the Huffman encoding is lossy; I have never seen this way of abusing the Huffman encoding before. See Thomas Comerford's explanation on the respective thread of DPReview.
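The first lossy step, the non-linear mapping, can be illustrated with a toy curve. This square-root mapping and the 512-level output are arbitrary choices for illustration only, not Nikon's actual lookup table:

```python
import math

# Toy illustration of a non-linear lossy mapping of the kind described
# above: 12-bit linear values are squeezed through a square-root-style
# curve into fewer codes, discarding redundant highlight levels.
# NOT Nikon's actual table; the curve and level counts are made up.

def encode(v, in_levels=4096, out_levels=512):
    # Compress: a perceptually spaced curve keeps shadow spacing fine.
    return round((out_levels - 1) * math.sqrt(v / (in_levels - 1)))

def decode(c, in_levels=4096, out_levels=512):
    # Expand back to linear; adjacent highlight codes land far apart.
    return round((in_levels - 1) * (c / (out_levels - 1)) ** 2)

# Shadows survive nearly intact; highlights are quantized coarsely but
# the error stays below the eye's ~1% just-noticeable difference.
print(decode(encode(10)))      # recovers ~10
print(decode(encode(4000)))    # recovers ~4000, within a dozen or so codes
```

Note how the quantization step near the top of the scale is an order of magnitude larger than near the bottom, which is exactly where the redundant highlight levels were.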
The error with the decompression occurs only with the lossy Huffman encoding. If you download the current version of Rawnalyze, it will process the "single lossy" images, but it refuses to process a "double lossy" one, with a corresponding message.
>your example for the lossy compression is the "wrong example".
Sorry, but I was not aware of the difference. My example was ISO 3200 and that it is not very noisy speaks well for the D3. Anyway, noise does not compress that well.
>If you download the current version of Rawnalyze, it will process the "single lossy" images, but it rejects processing with a respective message if you try to process a "double lossy" one.
I did download the latest version today. It is a very useful tool for looking at raw files. It works with all 6 types of D3 raw files taken at ISO 3200, but still crashes with an MFC error when I attempt to display a histogram. I would suggest putting a version number in the About item of the Help menu so the user can be certain which version is in use.
I am just working on that problem. In the meantime you can avoid it by maximizing the window; the error is that the window is too small with certain monitor size/setting combinations.
Re the double-compressed images: today I uploaded a version which issues a message related to this compression. As of now, the result of my decompression is worse than that of ACR, and I am not investing any more effort in it for now. I suggest everyone avoid the lossy compression now that a lossless one is available.
Beyond saving space on the card, why would compressed raw be preferred over uncompressed?
This conversation brings up something I've been running over in my mind concerning the distribution of bits according to illumination. I understand that half the levels of brightness occupy the brightest stop. Now consider this: what happens in a scanned image of a negative? The brightest stop is in the deep shadow, therefore possessing the greatest number of values. The highlights are in the densest part, possessing the least, the inversion of the ranges stated in post #5. OK, what happens to this spread when you invert to get a positive image? There are no extra brightness values available now to the highlights, and the shadows are rich with values, but they are now relegated to the lowest part of the digital scale to render correctly.
I can understand that most of the shadow values get deleted, but do the highlights now become pixelated? I don't see how new values comprising additional information can arise.
I would then expect the negs to have better information in the shadows after scanning and inverting and the highlights inferior to direct non-inverting capture and quantization. Would this not be visually apparent? Might this be the basis for doing both inverting and non-inverting data collection to allow non-linear processing of the entire range?
If the answer to this is no, there is no difference visually to pos/neg vs pos/pos, conversions, why the concern over the brightest stop collecting half the data which is unuseable anyway?
Something has to give here, perhaps it's my brain! :D
>I understand that half the levels of brightness occupy the brightest stop.
Half the values are in the brightest stop not because of the nature of light, but because that is the nature of linear integer encoding. If a log scale were used, the values would be evenly distributed. Linear encoding spends most of its precision at the high end, where the eye can detect little difference in adjacent code values. Floating point representation would also give much better precision at the low end.
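The difference can be shown numerically. This is a toy comparison assuming a 12-stop scene range:

```python
# Sketch comparing how linear vs. logarithmic 12-bit encodings spread
# their 4096 code values across a 12-stop scene (assumed range).
# Linear halves the budget with every stop down; log spreads it evenly.
bits = 12
levels = 2 ** bits                                        # 4096 code values
stops = 12

linear = [levels // 2 ** (s + 1) for s in range(stops)]   # brightest stop first
log_even = [levels // stops] * stops                      # even per-stop split

for s, (lin, lg) in enumerate(zip(linear, log_even), 1):
    print(f"stop {s:2d}: linear={lin:5d}  log={lg:4d}")
```

The linear column runs 2048, 1024, 512, ... down to a single code value in the darkest stop, while the log column gives every stop the same budget, which is why a log (or floating point) encoding serves the shadows so much better.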
See the encoding article by Greg Ward for more on the distribution of bits according to illumination.
I understand that Bill. I didn't quite say it right.
From a practical point of view, I don't see degradation of either end from scanned images when compared to the analog version. Rather, and maybe this is because of the inversion of the distribution of the value range, I can separate shadow values and retain deep blacks much better in digital. I presumed it was due to the added controls like Shadow/Highlight. Perhaps there is more to it. Likely there is more tuit! :D
Greg Ward's paper contains this symbol (â) in various combinations with English words, like this: peopleâs.
I don't recognize it. Is it an html symbol, and/or why is it in this paper?
Ok, for some reason this paper is using that symbol to represent a punctuation mark, like an apostrophe.
Yep. More tuit!
Thanks, Bill, for the paper. I just have to figure out how to get the punctuation to work right! People's and not peopleâs!
That really threw me!
>Thanks, Bill, for the paper. I just have to figure out how to get the punctuation to work right! People's and not peopleâs!
There is also a PDF with proper punctuation.
Do you know of any tools which could save/open 12 bpp JPEG data? It'd be great if the software could also convert raw data into such a format. I have really been searching for a long time, but all I could find is a program called ImageConverter Plus, which is not very useful but at least shows(?) 12-bit JPEG images.
Thanks a lot in advance
emrah, 12-bit JPEG support is very rare, as you've discovered. I am not sure what you are looking for here, usage-wise. If you want to archive, then stick with the original raw file, convert to DNG, or use 16-bit TIFFs. If you want a working file for subsequent editing, you don't want JPEG anyway because of its lossy compression. If you want images for the web, you want regular 8-bit JPEG. If you want images that you can just browse on your computer, again 8-bit JPEG is likely fine.
Thanks for your feedback, but the thing is that I am working on a JPEG encoder which supports 12 bpp grayscale images. To do the verification, a viewer would be perfect, so that I can feed the same raw data to the tool and to my code and compare the outputs at the end. Therefore a 12-bit JPEG viewer is what I seek and what I cannot find.
I understand what you're looking for, but this is beyond the scope of Camera Raw.