
    16-bit float vs. 16-bit integer

    Navarro Parker Level 3

      I work with OpenEXR 3D renders a lot. While they are usually lossless at 32bpc, there are options to save at lossy 24-bit float and lossy 16-bit float with drastically reduced file sizes. (Photoshop still opens them up as 32bpc.)

       

      How do these compare to Photoshop's 16-bit color? (Which I'm assuming is 16-bit integer?)

       

      At 16-bit float, I'm throwing out half the color information, but I'd still have vastly more color information than 16-bit integer?

       

      When do I really need 32bpc float? And when is 16bpc float "good enough"?

        • 1. Re: 16-bit float vs. 16-bit integer
          Level 7

          Photoshop's 16 bit/channel is integer.

          In Photoshop, 32 bit/channel is the only floating point pixel format.

           

          16 bit float is good enough for storage if your image fits in that range, but not good for calculation - you really want it as 32 bit/channel for calculations (for performance and quality).  16 bit float has more range than 16 bit integer, but less precision.  See http://en.wikipedia.org/wiki/Half-precision_floating-point_format
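          If it helps to make that concrete, here is a rough sketch (my addition, not part of the original reply), with numpy's float16 standing in for the half-float format:

            import numpy as np

            half = np.finfo(np.float16)

            # Range: half float can represent values all the way up to 65504,
            # while a 16-bit integer channel mapped to 0.0-1.0 stops at 1.0.
            print(float(half.max))     # 65504.0

            # Precision: just above 1.0, half float steps in increments of
            # about 1/1024, while a 16-bit integer scale steps in 1/65535.
            print(float(half.eps))     # ~0.000977
            print(1.0 / 65535.0)       # ~0.0000153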

          • 2. Re: 16-bit float vs. 16-bit integer
            Noel Carboni Level 8

            It's not quite that simple.

             

            32-bit floating point numbers have an 8-bit exponent and effectively a 24-bit mantissa (23 stored bits plus an implicit leading bit).  Since the exponent isn't doing much work for values that generally range from 0.0 to 1.0, you essentially have 24 bits of precision (color information).
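            (For the bit-level curious, a tiny Python sketch of that layout; this is my addition, not part of Noel's post:)

              import struct

              bits = struct.unpack(">I", struct.pack(">f", 0.5))[0]
              print(bits >> 31)            # sign bit: 0
              print((bits >> 23) & 0xFF)   # 8-bit exponent: 126, i.e. 2^(126-127)
              print(bits & 0x7FFFFF)       # 23 stored mantissa bits: 0 (the leading 1 is implicit)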

             

             

            "At 16-bit float, I'm throwing out half the color information, but I'd still have vastly more color information than 16-bit integer?"

             

            Not really.  But it's not a trivial comparison.

             

            I don't know the layout of the 24 bit format you mentioned, but a 16 bit half-float value has 11 bits of precision.  Photoshop's 16 bits/color mode has 15 bits of precision.
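            One way to make those precision figures concrete (my own sketch, not part of the original post) is to count how many distinct levels each format can represent between 0.0 and 1.0, again using numpy's float16 as the half-float type:

              import numpy as np

              # Every non-negative half-float bit pattern from 0x0000 up to 0x3C00 (= 1.0)
              # is a distinct value in [0, 1]:
              half_levels = np.arange(0x3C00 + 1, dtype=np.uint16).view(np.float16)
              print(len(half_levels))   # 15361 levels, unevenly spaced (dense near 0, sparse near 1)

              # Photoshop's 16 bits/channel mode is effectively 15-bit, with evenly
              # spaced codes 0..32768:
              print(32768 + 1)          # 32769 levels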

             

            Integers and floating point values are manipulated differently during image editing, and a plus of the floating point format is that it retains precision consistently when you manipulate colors of any brightness.  Essentially this means very little chance of introducing posterization from extreme operations in the workflow.  If your images are substantially dark, you might actually have more precision in a half-float, and if your images are light you might have more precision in 16 bits/channel integer.
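            To put numbers on that last point, here is a rough sketch (mine, with numpy's float16 again standing in for half float) comparing the step between adjacent representable values in a dark tone and a bright one:

              import numpy as np

              def half_spacing(v):
                  """Distance from v to the next representable half-float above it."""
                  h = np.array([v], dtype=np.float16)
                  nxt = (h.view(np.uint16) + np.uint16(1)).view(np.float16)
                  return float(nxt[0] - h[0])

              int16_step = 1.0 / 32768.0      # even step size of a 15-bit integer scale

              print(half_spacing(0.01), int16_step)   # dark:   ~0.0000076 vs ~0.0000305 -- half float is finer
              print(half_spacing(0.9),  int16_step)   # bright: ~0.00049   vs ~0.0000305 -- half float is coarser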

             

            I'd be concerned over what is meant by "lossy" compression.  Can you see the compression artifacts?

             

            -Noel

            • 3. Re: 16-bit float vs. 16-bit integer
              Navarro Parker Level 3

              I'm assuming that "lossy" refers to tossing color data out the window, like GIF. I am unaware of any EXR compression artifacts.

               

              Are there any visual examples of  "16 bit float has more range than 16 bit integer, but less precision"? Trying to wrap my head around that.

              • 4. Re: 16-bit float vs. 16-bit integer
                Noel Carboni Level 8

                Don't assume; that sounds way too simplistic.  Do some tests yourself and see if you can spot the differences.  One good way is to open the exact same rendering saved two different ways, layer one over the other, set the blending mode to Difference, and add an adjustment layer over the top to brighten the result.  You may have to convert to 16 bits/channel to do that.
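                If you'd rather script that comparison, here is a rough equivalent of the layered Difference test (my sketch, not Noel's; the filenames are made up, and it assumes the imageio package can read your EXR files, or swap in your own I/O):

                  import imageio.v3 as iio
                  import numpy as np

                  # Hypothetical filenames for the same render saved two ways:
                  a = np.asarray(iio.imread("render_full_float.exr"), dtype=np.float32)
                  b = np.asarray(iio.imread("render_half_float.exr"), dtype=np.float32)

                  diff = np.abs(a - b)                       # what the Difference blend mode computes
                  boosted = np.clip(diff * 100.0, 0.0, 1.0)  # the "brightening adjustment layer"

                  print("max per-channel error:", float(diff.max()))
                  iio.imwrite("difference_x100.tif", (boosted * 65535.0).astype(np.uint16))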

                 

                I don't know of a good way to demonstrate the way the various data types work to a layman.  I am a career software engineer with a lot of experience with graphics software (as is Chris), so we have an inherent understanding of how floating point operations work on pixels, vs. integer operations.

                 

                As far as what format is good enough, that depends on your needs (and thus the definition of "good enough")...  You kind of need to work through the processes in your workflow several different ways, actually DOING what you intend to do to these images, and then compare the results.  It may be that whatever choices you make don't make perceptible differences in the final results.  Or maybe the differences will only come out when you do extreme operations.  It may be that you choose a lower quality method over a higher quality one because of resource issues (size, processing time, etc.).

                 

                I don't think this is a question someone is going to be able to answer for you without getting a great deal deeper into just what you're doing, and what your expectations for your results are.

                 

                -Noel

                • 5. Re: 16-bit float vs. 16-bit integer
                  fnordware Level 3

                  I don't know of any examples where converting from 16-bit integer to 16-bit float resulted in any noticeable loss, although theoretically there would be some. Really, 16-bit integer is kind of overkill anyway; 10 bits have proven to be plenty for broadcast-quality video. (Note that I'm talking about storing in 10-bit; it's still preferred to process in 16-bit integer or 32-bit float.)

                   

                  32-bit float is also overkill for digital images, so I wouldn't consider converting from 32-bit to 16-bit to be "lossy." The one exception is when you have very large values above 65504.0 (the half-float maximum), usually for a z-depth buffer or something. In those cases you can use PXR24 compression in EXR, which rounds each 32-bit float down to 24 bits: you keep the full range of 32-bit float, give up some mantissa precision, and it compresses much better than full 32-bit.
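                  Here is a back-of-the-envelope sketch of that rounding idea (mine, not fnordware's; the real PXR24 codec also zlib-compresses the rounded data):

                    import numpy as np

                    def pxr24_style_round(values):
                        """Keep each float32's sign and 8-bit exponent; round its 23-bit mantissa to 15 bits."""
                        bits = np.asarray(values, dtype=np.float32).reshape(-1).view(np.uint32)
                        bits = (bits + np.uint32(0x80)) & np.uint32(0xFFFFFF00)   # round, then drop the low 8 mantissa bits
                        return bits.view(np.float32)

                    z = np.float32(123456.789)                  # a z-depth value too large for half float
                    print(float(np.float16(z)))                 # inf -- half float overflows above 65504
                    print(float(pxr24_style_round([z])[0]))     # 123456.0 -- range kept, precision reduced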