Simple linear scaling. There is nothing spatial involved.
Interested software developers can look at the public DNG SDK for details.
Keep in mind that a 16-bit image format (e.g., TIFF) is actually a container, and I've seen many variations on putting lesser bit depths into it. For example, 10-bit data may be stored only in the first 1024 bins, or the same 10-bit data may instead be stored in every 64th bin (shifted left by 6 bits) - the latter of which is the most common and best way. When I look at the 16-bit histogram after ACR has converted 12-bit raw to 16-bit TIFF, it is as if 12 bits has been converted to 16 bits (actually 15 bits), because every bin appears to have data in it.
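To make the two storage conventions concrete, here is a small sketch (my own illustration, not taken from any SDK) of how the same 10-bit sample can sit in a 16-bit container:

```python
# Two ways to store a 10-bit sample in a 16-bit container.
value_10bit = 0x3FF  # maximum 10-bit value (1023)

# Variant 1: stored as-is, so all values occupy only the first 1024 bins.
low_bins = value_10bit           # stays 0x03FF (1023)

# Variant 2: shifted left by 6 bits, so values land in every 64th bin,
# spread across the full 16-bit range.
spread = value_10bit << 6        # 0xFFC0 (65472)

assert low_bins == 1023
assert spread == 65472
```

A histogram of variant 1 piles everything into the bottom of the range, while variant 2 shows regularly spaced occupied bins across the whole range.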
As to how it's actually done, it may be proprietary ... but someone else may be able to draw conclusions from the open algorithms that are available (e.g., dcraw).
Eric said it all, but apparently it needs to be spelled out in more detail.
The TIFF specification requires that the original value range be spread over the entire numerical range of the TIFF data type.
In the straightforward case of converting an 8-bit image to 16-bit, this amounts to multiplying the 8-bit values by 257 (= 65535/255), so that the maximum value 0xFF maps to 0xFFFF.
Note that DNG is different: if the original raw data is of more than 8-bit depth, then the DNG file will be 16-bit, but the original raw data is *not spread* over the numerical range of 16-bit depth.
An educated guess would be that the missing bits are filled with the most
significant bits of the original value, similar to the way Photoshop
converts 8 bits to 16 bits.
This is superior to simply filling out with zeros because it spreads the
values out equally across the 16 bit range. For example, 0x00 maps to
0x0000, and 0xff maps to 0xffff.
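The guessed-at bit-replication scheme can be sketched in a few lines; `replicate_8_to_16` is a hypothetical helper name, but the arithmetic matches the mapping described above:

```python
def replicate_8_to_16(v: int) -> int:
    """Fill the missing low bits with the most significant bits of the
    original 8-bit value: (v << 8) | v. This maps 0x00 -> 0x0000 and
    0xFF -> 0xFFFF, and is equivalent to multiplying by 257."""
    return (v << 8) | v

assert replicate_8_to_16(0x00) == 0x0000
assert replicate_8_to_16(0xFF) == 0xFFFF
assert replicate_8_to_16(0x80) == 0x8080
```

By contrast, zero-filling (`v << 8`) would map 0xFF to only 0xFF00, leaving the top of the 16-bit range unused.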
If, as Michael Schaefer says, there are no empty bins in the 16 bit
histogram, noise is added.
Internally, Photoshop does not use the entire 16 bit range, but 0x0 through
0x8000. Guessing again, but it seems likely that ACR does the same.
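Continuing that guesswork, a conversion into such an internal range might look like the following sketch (the scale factor 32768/255 and the function name are assumptions for illustration, not documented behavior):

```python
def to_internal_15bit(v: int) -> int:
    """Hypothetical sketch: map an 8-bit value in [0, 255] onto an
    internal [0, 0x8000] range by linear scaling (factor 32768/255)."""
    return round(v * 0x8000 / 255)

assert to_internal_15bit(0) == 0
assert to_internal_15bit(255) == 0x8000
```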
No need to guess. It's linear scaling, as I described earlier. You start with a value in the range [0,a]. You want to map it to another range [0,b]. The scale factor is b/a.
As stated in the DNG spec, the raw values are mapped during the linearization stage to the range [0,1].
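The linear scaling described above can be written out in a few lines (`rescale` is an illustrative helper, not a DNG SDK function):

```python
def rescale(value: float, a: float, b: float) -> float:
    """Map a value from the range [0, a] to the range [0, b].
    The scale factor is b/a."""
    return value * (b / a)

# 12-bit raw maximum (a = 4095) into a 16-bit container (b = 65535):
assert round(rescale(4095, 4095, 65535)) == 65535

# DNG linearization stage: raw values are mapped to [0, 1] (b = 1).
assert rescale(4095, 4095, 1.0) == 1.0
```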
> If, as Michael Schaefer says, there are no empty bins in the 16 bit
> histogram, noise is added.
If one takes a 12-bit raw file and multiplies by Eric's scaling factor (16) to convert to 16-bit, the raw data does have holes. However, once gamma correction is applied and demosaicing with interpolation is performed, these holes are completely filled in. Some noise may be added, but noise is not necessary to explain the absence of holes in the converted file.
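A quick NumPy sketch with simulated data (my own illustration, not from any converter) shows both effects: plain multiplication by 16 leaves holes, while interpolation between neighbouring samples - as demosaicing does - produces intermediate levels that fall into them:

```python
import numpy as np

rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, size=100_000)   # simulated 12-bit samples

# Multiplying by 16 leaves holes: at most 4096 distinct levels,
# all of them multiples of 16.
scaled = raw * 16
assert np.unique(scaled).size <= 4096
assert np.all(np.unique(scaled) % 16 == 0)

# Averaging neighbouring samples (a crude stand-in for demosaicing
# interpolation) creates levels between the multiples of 16.
interpolated = (scaled[:-1].astype(np.int64) + scaled[1:]) // 2
combined = np.concatenate([scaled, interpolated])
assert np.unique(combined).size > np.unique(scaled).size
```

The average of two multiples of 16 is a multiple of 8, so roughly half of the interpolated values land in bins that plain scaling could never reach - no added noise required.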