Real world examples? If you prefer artifacts and crappy-looking color, use 8 bit/channel. If you want to avoid quantization artifacts and get smoother color, use 16 bit/channel.
Typically, images with gradients, or a wide view of sky that will undergo big tonal moves, would be the ideal candidates for working in 16 bits. I'm curious about your question: would working in high bit be a speed problem, or a workflow one?
Chris, I guess you worked a lot on the 16-bit support in Photoshop, and we all know that you know more about color accuracy than most human beings, but I don't think Bret was questioning the advantages of working in high bit; rather, he's searching for examples that best showcase the drawbacks of working in 8 bits, maybe to "sell" a workflow change towards better images...
There aren't many great real world examples because you can't know how someone else did their work.
If you see lots of artifacts: was it because of 8 bit/channel color adjustments, or JPEG, or recompressing?
But if you adjust the same high quality image in 8 and 16 bit/channel - you quickly see artifacts in 8 bit/channel, and rarely see artifacts in 16 bit/channel (if you do an extreme adjustment, you can see artifacts regardless of precision).
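To make that concrete, here's a minimal sketch (assuming NumPy; the gradient, the gamma value, and the `round_trip` helper are made up for illustration, and are not anything Photoshop does internally) of why the same pair of opposing edits destroys tonal levels at 8 bits but barely touches them at 16:

```python
import numpy as np

def round_trip(gradient, gamma):
    """Apply a strong darkening curve, quantize to the array's bit depth,
    then apply the inverse curve and quantize again - simulating two
    opposing edits performed at that precision."""
    info = np.iinfo(gradient.dtype)
    x = gradient.astype(np.float64) / info.max
    lifted = np.round(x ** gamma * info.max) / info.max       # first edit, quantized
    restored = np.round(lifted ** (1.0 / gamma) * info.max)   # "undo" edit, quantized
    return restored.astype(gradient.dtype)

# The same smooth sky-like gradient, stored at each precision.
grad8 = np.linspace(0, 255, 4096).round().astype(np.uint8)
grad16 = np.linspace(0, 65535, 4096).round().astype(np.uint16)

# Count the distinct tonal levels that survive the round trip.
levels8 = len(np.unique(round_trip(grad8, 3.0)))
levels16 = len(np.unique(round_trip(grad16, 3.0)))
print(levels8, levels16)  # 8-bit keeps far fewer distinct levels (visible banding)
```

The shadow values collapse onto each other in 8 bit because the intermediate quantization only has 256 slots to land in; at 16 bit there are 65,536 slots, so almost every original level survives the same pair of edits.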
Thanks Chris & PE.
First off. I love Chris. He rocks.
PE, yes, you are correct. I'm looking for the best examples to 'sell' this to my team. It's really not a matter of speed or workflow; I just want to see some specific examples. What might make good test images to play with to see the difference? For instance, with a smooth gradient in a sky, will I necessarily see less banding in 16 bit, or is it that I'll have more info and wiggle-room to work with in that sky when wrenching on the pixels?
Really low noise images of smooth things - e.g., sky or a smooth product under soft lighting or a logo that's been rasterized in an image - can show artifacts.
Another good example would be if you were to sharpen something heavily - oversharpening it really.
Keep in mind that any artifact you can cause with extreme operations is going to be there, just a bit smaller and a bit less visible, with normal operations.
I'm kind of surprised that in this day and age of ultra-powerful computers anyone is still considering using 8 bits/channel editing, frankly. There ARE some features of Photoshop (hint hint, Chris) that still only work on 8 bit data, but generally speaking those features do some pretty significant things to the image anyway, so perhaps ultimate data quality preservation isn't required.
I don't know whether it's applicable at all to what you're doing, but any astroimage processor will tell you that the kinds of extreme (and often numerous) operations needed to extract the maximum quality from raw/stacked astroimage data absolutely require a high bit depth. I believe Chris also does some astrophotography.
I think Chris is being a bit hard on 8 bit, because most of the time it is OK, especially if you will not be printing your image. In fact, John Doogan, who is an Adobe Ambassador, told me in a Photoshop seminar some years ago that he generally works in 8 bit until he encounters a problem, and then goes back to the RAW file and starts again in 16 bit. That's the way I work.
But if you have a large expanse of cloudless sky, or are working in B&W, it's a no-brainer. I'll add images that you intend to process the hell out of to that list.
Noel Carboni wrote:
I'm kind of surprised that in this day and age of ultra-powerful computers anyone is still considering using 8 bits/channel editing, frankly.
Yes. Totally. We've had a workflow we haven't changed in a while and it hit me after reading Mr. Schewe's "Digital Negative" to ask "aren't we working in 16-bit?". And...we weren't. Until now. Thanks, all!
The places I see the worst artifacts are sky, studio product shots that need color adjustment (i.e., they want the yellow box white, or the blue sweater to be fuchsia), and studio model shots that need cleanup (correction for bad makeup or mixed lighting).
Astrophotography is not as common - but a great example, because they're trying to pull out really faint details (sharp curves) while still maintaining blacks and detail in the highlights. They'll take many 16 bit images and average them to reduce the noise even further!
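As a rough sketch of why stacking works (assuming NumPy; the scene, noise level, and frame count here are invented for illustration, and real stacking software also aligns frames and rejects outliers): averaging N frames cuts random noise by about the square root of N, which is how a feature buried under per-pixel noise becomes visible.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "true" scene in a 16-bit range: a faint feature
# sitting only 600 counts above a dark background.
true_scene = np.full((64, 64), 2000.0)
true_scene[30:34, 30:34] = 2600.0  # fainter than the per-pixel noise below

def noisy_exposure():
    """One simulated exposure with heavy random sensor noise."""
    noise = rng.normal(0, 800, true_scene.shape)
    return np.clip(true_scene + noise, 0, 65535)

single = noisy_exposure()
stack = np.mean([noisy_exposure() for _ in range(64)], axis=0)

# Background noise drops by roughly sqrt(64) = 8x after stacking.
bg_single = single[:16, :16].std()
bg_stack = stack[:16, :16].std()
print(bg_single, bg_stack)  # stacked background is far quieter
```

Note the averaged result no longer fits in 8 bits without throwing the gain away: the whole point of the exercise is that the stacked data has finer tonal gradations than any single exposure, which is why astro workflows stay in 16 bit (or higher) end to end.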
Noel - hold your horses, I'm workin' on it.
And yes, my own photographic work is always in 16 bit/channel, unless I'm just trying to get a quick snapshot off my phone and onto a tiny web image (which is a tiny fraction of what I produce).
For me personally, it's simple: I generally work in 16 bits, unless I'm forced to convert to 8 bits in order to use something that absolutely cannot be done in 16 bits.
But I do understand that other workflows make other users more mindful of storage space and processing speed.