I have created a new layer mask of an image, but when I use the free transform tool to resize it so it is smaller, there is a faint grey outline left around the image. Why does this happen and how do I get rid of it?
I think you're probably seeing some transparency bleed in from the edge, but you've not been terribly specific. Do you see any hint of checkerboarding if you make all layers below invisible?
A screenshot would be helpful.
Not that easy to see but what I did was:
1. Made a selection of the image, then clicked Refine Edge and output a new file with a layer mask.
2. Then when I edit the new file to try and make the image smaller with the transform tool I get the grey outline which you can see when I add a white background layer below it...
I see those now, thanks.
It's kind of the opposite of what I said before.
I suspect you have information in the document that's beyond the visible edge, and the resize is pulling in part of a non-transparent pixel that's just out of view initially.
If you're not trying to retain that info out there, but it's just there because a prior crop left the pixels behind because you didn't have the [ ] Delete Cropped Pixels setting checked, then you can crop them away before doing your masking or resizing. An easy way to do that is to start the Crop Tool, check the [ ] Delete Cropped Pixels setting, move an edge and snap it back, then just complete the crop.
Not sure if that has worked. If I click on refine edge and output to selection and then use the transform tool I don't get the same problem. It only seems to be if I output to a new document and then use the transform tool that I run into problems...
I think it will be the bicubic resampling algorithm used when scaling that's making a 1- or 2-pixel border of grey along those edges of the mask, which lets some of the original image show slightly there.
A mask has a padding value of white or black which represents the region outside of a mask's pixels. When you downscale a mask, the padding value is brought into the region of new pixels in the mask and included in the resampling calculations. If a mask border was black and the padding is white, a gray will result.
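The effect can be sketched numerically. The snippet below is a toy 1-D downscale using a hypothetical two-tap box filter rather than Photoshop's real bicubic kernel, but the edge behaviour is the same: a black mask border averaged with white padding comes out gray.

```python
# Toy 1-D downscale-by-2 of a mask. 0 = black, 255 = white.
# A pixel whose neighbour falls outside the mask is averaged with
# the padding value instead. (Illustrative sketch only; Photoshop's
# bicubic kernel is wider, but the edge effect is the same.)

def downscale_with_padding(mask, pad):
    out = []
    for i in range(0, len(mask), 2):
        left = mask[i]
        # the neighbour of the last pixel is the virtual padding pixel
        right = mask[i + 1] if i + 1 < len(mask) else pad
        out.append((left + right) // 2)
    return out

solid_black = [0, 0, 0, 0, 0]
print(downscale_with_padding(solid_black, 255))  # [0, 0, 127] -> gray edge
print(downscale_with_padding(solid_black, 0))    # [0, 0, 0]   -> edge stays black
```

With white padding the last output pixel lands halfway between black and white; with black padding the edge is untouched, which is exactly the white-padding-on-a-black-mask case described above.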
Similarly, the transparency value of a pixel in the edge of an image is diluted by a transparency padding. Adobe do not want to fix this by making their bicubic algorithm less naive and giving the user a choice of whether a border is contaminated by padding; they say most people want bicubic resampling to give the results that Adobe's bicubic gives. They do not care that sometimes their customer does not want the contamination.
Here's an example. First there's a black layer with a new mask, over a white background. The empty mask appears white; the padding of the mask is white. Second, black is put in the mask so there's black up to the border. Third, the layer (and its mask) is downscaled. White padding has been brought into the mask and resampling has created gray pixels in the mask where black meets white. Fourth, a closeup of a gray line. Fifth is the same region of the mask. Gray in the mask allows the masked black layer to partially show against the white background as faint gray lines.
Wow, thanks for the reply! That is very in-depth. That definitely sounds like the problem. So I presume there is no fix as such? The way I got round it was to output a selection as a new document without any layer mask...
With the example that I posted, a mask with black padding would have avoided the dilution of the black of the original mask border and the resulting gray line in the image.
Black padding can be achieved by adding an empty black mask or by inverting a mask which has white padding. (According to an Adobe engineer, inverting a mask shouldn't invert the padding. Should and does are different things, though.)
An empty black mask can be added by Opt/Alt-clicking the Add Mask button at bottom of Layers panel. A mask can be inverted by targeting it and pressing Cmd/Ctrl+i.
Padding contamination when scaling can be avoided by not using bicubic resampling. You can choose to use bilinear or nearest neighbour resampling, but that generally gives poorer looking results in the image.
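Continuing the toy 1-D downscale idea, a quick sketch of why nearest neighbour is immune: each output pixel is a straight copy of exactly one real input pixel, so the padding value never enters any calculation.

```python
# Nearest-neighbour downscale-by-2: every output pixel is a straight
# copy of one real input pixel, so no values are mixed and the
# padding can never leak in. The trade-off is a blockier result.

def downscale_nearest(mask):
    return mask[::2]  # keep every second pixel, no averaging

print(downscale_nearest([0, 0, 0, 0, 0]))  # [0, 0, 0] -- no gray edge
```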
It's not that we don't want to change something - it's that we cannot change the nature of math.
This is not a configurable thing, just the consequence of the math involved because any resampling kernel wider than nearest neighbor will mix nearby values, and can result in edges mixing with what was outside those edges.
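That mixing can be made concrete with the Keys cubic kernel at a = -0.5 (often called Catmull-Rom; whether Photoshop's bicubic uses exactly this variant is an assumption here). A cubic resampler weights four input samples per output sample, and for an output position near the edge some of those four lie outside the image, so the padding value receives a non-zero weight.

```python
import math

# Keys cubic kernel, a = -0.5 ("Catmull-Rom"). A cubic resampler
# weights four input samples per output sample with this kernel.
def keys_cubic(x, a=-0.5):
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5*a * x**2 + 8*a * x - 4*a
    return 0.0

def resample_at(pos, sample):
    """Cubic-interpolate the value at fractional source position `pos`;
    `sample(i)` supplies the pixel value at integer index i."""
    base = math.floor(pos)
    return sum(keys_cubic(pos - i) * sample(i)
               for i in range(base - 1, base + 3))

# With a common pixel-centre convention, the first output pixel of a
# 2x upscale maps to source position -0.25, so the kernel footprint
# straddles the image edge at index 0.
black_with_white_pad = lambda i: 0 if i >= 0 else 255
black_with_black_pad = lambda i: 0

print(resample_at(-0.25, black_with_white_pad))  # ~51.8: white padding bleeds in as gray
print(resample_at(-0.25, black_with_black_pad))  # 0.0:  black padding keeps the edge black
```

The out-of-bounds samples get a combined weight of about 0.2 here, which is why a 255 padding against a 0 edge produces a mid-gray contaminated pixel.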
Is there something fundamentally wrong with the concept that there's nothing outside the edges? It seems under many conditions people just don't expect nor want "nearby values" beyond the edge of their visible canvas to influence their results.
Back when things were simple there just was nothing outside the edge of the image, and resampling didn't drag anything in. It still works that way with the Background.
Yes, I understand how nearby values are mixed to produce the new pixel, and that the padding value is used for virtual pixels outside of the mask. But I'm not as naive as your bicubic resampling algorithm is. The virtual pixels do not need to be limited to black or white. A virtual pixel can take the value of the nearest real pixel, or an average of the two or more real pixels nearest to it. I'm certainly not saying that would be best for all resampling circumstances, but it could be provided as an option for the user in order to prevent the problem that prompted this thread.
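The suggestion above can be sketched with the same toy 1-D downscale: replace the constant padding colour with "clamp to edge", where a virtual pixel takes the value of the nearest real pixel (the same idea as OpenGL's CLAMP_TO_EDGE texture wrap mode; the function name here is hypothetical).

```python
# Clamp-to-edge padding: a virtual pixel outside the mask takes the
# value of the nearest real pixel, so a solid edge resamples to
# itself instead of being diluted by a fixed white or black padding.

def downscale_clamped(mask):
    out = []
    for i in range(0, len(mask), 2):
        left = mask[i]
        right = mask[i + 1] if i + 1 < len(mask) else mask[-1]  # clamp
        out.append((left + right) // 2)
    return out

print(downscale_clamped([0, 0, 0, 0, 0]))        # [0, 0, 0]       -- black edge stays black
print(downscale_clamped([255, 255, 255, 255, 255]))  # [255, 255, 255] -- white edge stays white
```

Unlike a fixed padding colour, this works regardless of whether the mask border is black or white, which is the configurability the poster is asking for.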