3 Replies Latest reply on Aug 2, 2007 1:42 AM by Ramón G Castañeda

    ACR uses "best" 8 bits from RAW to build 16-bit output?

    Manovi Level 1

      Someone told me that in ACR the 16-bit (15+1 or whatever) output data (16-bit TIFF) is built from the "best" 8-bit data taken from the 12-14 bit RAW files.

      Has anyone else noticed this strange behaviour of ACR? 12->8->16 bits?

      My understanding is that the 12-14 bits are used DIRECTLY in ACR's demosaicing algorithm and the subsequent color-space mapping to obtain 15/16-bit info for each pixel. Depending on the demosaicing, the 12/14 bits from neighbouring photodiodes are "merged" to obtain 16-bit data for each pixel, but it would be quite strange to EXTRACT the best 8 bits (best for what?) during this demosaicing just to fill half of the 16-bit TIFF output.
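      To make the arithmetic concrete, here is a minimal sketch (my own illustration, not ACR's actual pipeline) comparing a direct 12-bit-to-16-bit promotion with a hypothetical 12->8->16 path. The function names are made up for the example; the point is just how many distinct tonal levels survive each route.

```python
# Illustrative sketch (NOT ACR's real algorithm): compare promoting a
# 12-bit raw sample directly to 16 bits vs. going through an 8-bit
# intermediate, to show how much precision a 12->8->16 path would lose.

def direct_12_to_16(v12):
    # Scale 0..4095 to 0..65535 by bit replication; every distinct
    # 12-bit input maps to a distinct 16-bit output.
    return (v12 << 4) | (v12 >> 8)

def via_8_bit(v12):
    # Truncate to the top 8 bits first (0..255), then expand to 16 bits.
    v8 = v12 >> 4
    return (v8 << 8) | v8

direct = {direct_12_to_16(v) for v in range(4096)}
lossy = {via_8_bit(v) for v in range(4096)}

print(len(direct))  # 4096 distinct 16-bit values: nothing lost
print(len(lossy))   # 256 distinct values: 16 raw levels collapse into one
```

      So if ACR really took only the "best" 8 bits, a 12-bit raw file would collapse from 4096 tonal levels to 256 before it ever reached the 16-bit TIFF, which is exactly the kind of loss my question is about.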

      And I'm talking about 16-bit TIFF output, not 8-bit JPEG/TIFF, of course.

      I hope TK or his staff can give the ultimate word on this (without revealing algorithm secrets, of course!): are we always losing 8 bits of info when saving a 16-bit TIFF from ACR?