When I have an image in 8 bits per channel, its grayscale alpha channel is in 8 bits (256 levels).
But when I have an image in 16 bits per channel, is its grayscale alpha channel in 16 bits (32,768 levels)? That is a lot of levels to make a selection with...
Be aware that Selections are 8-bit.
Edit: This means that you will get different results depending on whether you
• load the Channel as a Selection and apply it as a Layer Mask, or
• copy the Channel’s content and paste it into a Layer Mask –
though the difference may seem imperceptible.
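To see why the two routes can disagree, here is a minimal sketch (hypothetical code, not Photoshop's actual internals) that simulates round-tripping a 16-bit channel value through an 8-bit selection versus keeping it as-is, assuming the commonly cited 0–32768 range for Photoshop's 16-bit mode:

```python
def via_selection(v16, max16=32768):
    """Quantize a 16-bit value (0..32768) to 8 bits and back,
    as happens when the channel is loaded as a Selection first."""
    v8 = round(v16 * 255 / max16)   # selection holds only 256 levels
    return round(v8 * max16 / 255)  # value as it lands in the 16-bit mask

def direct(v16):
    """Copy/paste keeps the full 16-bit value untouched."""
    return v16

# A mid-gray with no exact 8-bit representative:
v = 16000
print(via_selection(v), direct(v))  # prints "16063 16000"
```

The error is at most about half an 8-bit step, which is why the difference is usually invisible on screen but measurable in the numbers.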
grayscale alpha channel in 16 bits (32,768 levels)
Just to be numerically accurate, it's 0x0000 through 0x8000, so it's really 32,769 levels including pure black and max white.
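A quick arithmetic check of that count, assuming the 0x0000–0x8000 range stated above:

```python
# Photoshop's "16-bit" mode spans 0x0000..0x8000 inclusive
# (0..32768, i.e. 15 bits plus one extra value), so:
levels = 0x8000 - 0x0000 + 1
print(levels)  # prints 32769
```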
So, for example, an RGB image with 16 bits per channel will have 32,769 values of gray in each grayscale color information channel (red, green and blue), but only 256 values of gray in the grayscale alpha channel, is that true?
Thanks and sorry for my English.
So, in a 16-bit image, when I'm making a selection in Quick Mask mode, am I really working with 256 or 32,769 values of gray? :S
Why do 16-bit alpha channels exist if they are not real? Alpha channels are only for selections, and selections are 8-bit... :S
Why do 16-bit alpha channels exist if they are not real?
They are real: you can, for example, copy and paste them into a Layer Mask instead of loading them as a selection.