Maybe you are wasting your time.
The essential thing is the effective resolution – often dependent on the scaling of an image in a layout application.
If you set the resolution to 300 ppi (it is ppi, by the way, for digital images) with »Resample Image« turned off, is the resultant size too small for your needs?
For my work I need high quality pictures.
So you upsample them?
I looked on Google and it said that for paper, 300+ ppi is considered Hi Res. For web usage, 72 ppi is considered Hi Res.
So I used that information.
I don't upsample Hi Res images, only the images that aren't Hi Res yet. I put them on 300 ppi because I don't want too much quality loss.
But if I understand your comment correctly I can put my work on 300 ppi without making the file extremely large?
I put them on 300 ppi because I don't want too much quality loss.
If you upsample and it is not actually necessary you are reducing the image’s quality unnecessarily.
Edit: Actually the title states »downsizing« – still, unless one knows the final requirements fairly exactly, neither down- nor upsampling is recommended in my opinion.
But if I understand your comment correctly I can put my work on 300 ppi without making the file extremely large?
Once again: The effective resolution is the relevant one!
And that is dependent on the output size – which often is not decided by the photographer but by customers, graphic designers … later on.
Basically the resolution of an image itself is close to irrelevant, the pixel dimensions are relevant.
A 10 px by 10 px image at 300 ppi is still fairly small, and a 4000 px by 3000 px image at 72 ppi is still OK for A4 (if the crop is not set too narrow).
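That arithmetic can be made concrete with a small sketch (plain Python; the A4 measurement is approximate):

```python
# Effective resolution is just pixels divided by printed inches; the ppi
# number stored in the file never enters into it.

def effective_ppi(pixels: int, print_inches: float) -> float:
    """Pixels along one dimension divided by the printed length in inches."""
    return pixels / print_inches

# 4000 px across the long edge of A4 (about 11.69 in) is well over 300 ppi:
print(round(effective_ppi(4000, 11.69)))  # 342

# A 10 px image printed one inch wide is only 10 ppi, whatever its tag says:
print(effective_ppi(10, 1.0))  # 10.0
```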
The first thing that came to mind when I read your question was that scene in Star Wars: The Empire Strikes Back where Yoda exclaims to Luke, "You must unlearn what you have learned."
1. My first and best advice to you is to put your thinking about ppi aside, and just concentrate on the actual pixel count in your images.
2. Secondly, do NOT try to minimize the size of your images while you're working on them or storing them. If anything, you want to raise it.
What I mean by pixel count is simply this: Keep in mind the number of horizontal pixels by number of vertical pixels in your image.
Trying to reduce the size will net you reduced quality. If your computer struggles with processing large images, get a better computer or just grin and bear it. You said you wanted quality. If you want speed in processing, prepare to sacrifice quality. There is no free lunch.
To take this explanation a bit further...
Your image capture equipment (e.g., camera, scanner, download, or whatever) delivers an image to you with a particular pixel count. This defines how detailed your image can be. For the sake of argument, let's say it's 3000 x 2000 pixels. While you're preparing that image for whatever use you have in mind - e.g., printing - if you are primarily concerned with image quality, consider upsampling that image to even double the pixel count in each dimension and working with it in 16 bits/channel. At 6000 x 4000 pixels and 48 bits per pixel, yes, that makes it HUGE to work with, but modern computers have been up to the task of working with large, deep images for quite a while now.
The advantage to working on upsampled images is that it gives you more room to work and it minimizes the data loss due to round-off error or other processing artifacts.
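The round-off argument can be illustrated without knowing anything about any particular converter: apply a tone curve and its inverse, quantizing the intermediate result at 8-bit versus 16-bit working precision, and count how many distinct levels survive. (A toy sketch; the gamma value is arbitrary and stands in for any chain of adjustments.)

```python
# Push 8-bit data through a gamma adjustment and its inverse, quantizing
# the intermediate step to the working depth. Deeper working precision
# preserves more distinct tonal levels -- the same reasoning the post
# applies to working on more pixels.
# (Illustrative only; this is not how any particular converter works.)

def gamma_roundtrip(levels_in: int, levels_work: int) -> int:
    """Apply gamma 2.0 then 0.5, rounding the middle step to the working
    depth; return how many distinct input levels survive the round trip."""
    top_in, top_work = levels_in - 1, levels_work - 1
    survivors = set()
    for v in range(levels_in):
        x = v / top_in
        mid = round((x ** 2.0) * top_work) / top_work  # quantized middle step
        survivors.add(round((mid ** 0.5) * top_in))    # back to 8-bit
    return len(survivors)

eight_bit = gamma_roundtrip(256, 256)      # shadows collapse into each other
sixteen_bit = gamma_roundtrip(256, 65536)  # all 256 levels survive
print(eight_bit, sixteen_bit)
```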
Now let's say you've got your big, beautiful image all prettied up and you're ready to print. Except in certain very specific instances where printer drivers are outdated or faulty, just set the image size in inches without resampling – and even at this point, ignore the ppi. This will quite possibly give you the best print you're going to get from your image. You might see Photoshop show you a ppi value of 600 or more when you do this. Not a problem. Just leave it.
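For what it's worth, the arithmetic behind that dialog is simple – with resampling off, only the ppi label changes (a sketch with made-up numbers):

```python
# With "Resample" unchecked, changing the print size only rewrites the
# ppi metadata; the pixel data is untouched. A hypothetical 3000 px wide
# image set to print 5 inches wide simply gets relabelled as 600 ppi.

width_px = 3000        # pixel count: unchanged by the operation
print_width_in = 5.0   # what you typed into the Image Size dialog
reported_ppi = width_px / print_width_in
print(reported_ppi)    # 600.0
```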
Now: Please go and actually try this and see how the quality comes out on a real print.
P.S., I shoot using raw format, and I set Camera Raw to deliver 6144 x 4096 pixels and 16 bits/channel in every case, even though my camera's "native" size is smaller than that.
When you consider how much confusion we see in this forum about image size, ppi, dpi, upsampling etc., then it stands to reason that people who are not interested in Photoshop, or serious about photography, are going to be even more confused – which is probably a nice way of saying 'clueless'! We even experience it in Camera Clubs right up to National Photographic Society levels, where digital competition entries are submitted too large or too small.
Some email clients have routines that automatically downsize attached images. Or at least, they might ask the first time you attach an image, but subsequently take your input as defaults for future attached image files.
Then there is scanning. I suspect that the multipurpose printer with built in scanner is the norm for the home office nowadays, but I bet not many people fully understand how to use them.
In summary, a lot of people – probably most people – don't have a clue about image size, and need a bit of careful nudging in order to coax a decent-sized image out of them. So keep on asking and explaining until you get images you don't need to resize.
May I respectfully suggest two things:
1. Try not to state things as oversimplified absolutes. Did it occur to you that performing editing steps on an image with a smaller number of pixels also introduces "destruction" that editing on upsampled data might minimize? It seems to me the world is too complicated to be boiled down into 4-word rules-of-thumb.
2. Please consider couching your statements in a little humility. As it is, terse posts such as those above come across as confrontational. Understand that there might actually be folks here with more experience than you, who understand this stuff better than you do, and who give advice that has merit.
Thanks for considering my thoughts.
Any resampling is destructive.
Back on subject, please provide any evidence that upsampling during raw conversion through Camera Raw is destructive. It is, in every way I can see, a creative process, producing an image where there was none.
I understand that you may feel that a "native-sized" conversion could be better than an upsampled conversion.
For my part, I did an extensive study two years ago, when Photoshop CS5 first came out, about how to get the maximum image detail and quality from raw images through the then-new Camera Raw 6.
After converting various images for a week, adjusting parameters, and objectively comparing results both on screen and in prints, I concluded that Camera Raw can be coaxed into producing both a higher level of detail and more realistic looking detail if set to upsample the data to a resolution up to about double the size of the camera's native output. With, for example, my Canon EOS-40D, that's 3888 x 2592 pixels upsampled to 6144 x 4096 (not quite double, but it's the biggest Camera Raw will produce). Now I get results I will put up against anyone's.
I realize that you are the inventor of sliced bread, but show me how upsampling recovers information from the demosaiced Bayer sensor layout of two greens, a blue, and a red. There is probably twice as much brightness information in the four original pixels, but it has been lost by the time you upsample.
I'm not taking part in this particular discussion, but I'd like to introduce a new variable: the lens.
I do a lot of work zoomed in to pixel level, and there is a big difference in how much useful detail my various lenses will capture. With the 10-20mm wide zoom, I estimate useful resolution at about 3000 x 2000, and that's stopped down to f/10 or 11. The primes are much better with their simpler construction. The 50mm could probably carry detail up to 6000 x 4000 at f/8, which is why I use that whenever possible.
So I'm not too concerned with the pixel race. My camera outputs 4288 x 2848, and that's plenty enough for a magazine or book spread. But I don't do gallery prints.
Aside from that, I think Noel's argument about doing detail work on an upsampled file has merit, assuming that the upsampling itself works as advertised.
It's not so much that information is added, but...
1. Less is lost in the conversion to an image when more pixels are available into which to put the data.
2. Less is lost during ongoing processing of the image when more pixels are available to work on.
And keep in mind that the ongoing processing starts right in Camera Raw. For example, it does a form of deconvolution sharpening and having more pixels to work with yields a better result. Do you dial any sharpening into your conversions?
If you want to try it yourself, go for it. The differences are subtle, but they're there. If not, that's fine - I'm sure you can make pretty pictures the way you do it as well. No one's trying to force anything on anyone here.
The demosaicing algorithm uses estimates of the brightness information in the red and blue pixels; I only remember that the blue pixel is assumed to have 18% of the brightness of a green pixel in the same location. As this is only a statistical data point, the overall effect is a loss of resolution compared to the number of Bayer sensor pixels. Upsampling can only make this worse, unless you can show how adding more pixels that are not an even multiple somehow improves the image.
Lundberg, I really don't have more to say on the subject, and I don't feel compelled to show anything more. If you don't want to believe working on upsampled data nets good results that's entirely up to you. No one's twisting your arm.
I will say this about your matter-of-fact "it works like this" statements, though: Nobody outside the development team knows the details of the Camera Raw implementation.
But I do know the results I've gotten by doing all manners of testing, and I stand by them. If you'd like to compare techniques on actual data some time, please by all means start a thread on it. I'm up for any processing challenge. Maybe we can get Dag to post one of his super sharp raw files, which would be perfect for such a thing.
The book "Color Constancy" by Marc Ebner tells anyone all they need to know about Bayer sensor processing and von Kries algorithms. Von Kries was one of those turn-of-the-century genius polymaths who was into all sorts of science. He was a physiologist by profession, but he developed the mathematics of color constancy a hundred years before the digital camera came to be based on his work. Amazing.

You simply can't get more information out of an image than the original sampling has in it. Any resampling that is not an even multiple will lose some.
I suspect the truth is somewhere in the middle. Methodology becomes very tricky here, and it's hard to know exactly what you are comparing by putting two examples next to each other.
Here are two instances of the same file opened from ACR. One is opened at native resolution, and the other going two steps up in resolution. Note the zoom levels in displaying the two. This is to get them to same scale.
Sharpening is a complicating factor too. Without any sharpening this would be too blurry to make anything out of it at this zoom level. But the same setting would produce different results in the two resolutions, so the higher resolution has slightly more sharpening at higher radius (perhaps a bit too high).
What this tells me is...very little, actually. I certainly don't see any quality loss, aside from a little edge softening. But that can be brought back by sharpening.
OTOH I don't really see any gain either - except for large format exhibition purposes, where the reduced pixelation would be a big plus.
It also tells me that resampling isn't nearly as destructive as one might think. The algorithms apparently work extremely well, so maybe it's nothing to be afraid of.
EDIT: I see the one on the right - the upsampled one - was a little oversharpened compared to the other. But I don't have time to make up new examples.
You've started to touch on it, Dag. In each case above does or does not the image with the higher pixel count look more real?
I can appreciate the desire to try to test theories A and B separately. You now see, as I did, that upsampling during conversion (which may not be upsampling at all, but just the placement of conversion results in a larger number of pixels) doesn't lose anything.
Now ask yourself, how many photos do I actually convert without using sharpening in ACR?
Try this methodology: Take each method - converting the image at "native" size and converting the image at upsampled size - and subject both to the processing you typically do when preparing images for actual use. In the converter, specifically use the 0.5 Radius setting for Sharpening, which calls up ACR's deconvolution sharpening. When all done, objectively compare the results. That will take you to another step I have already visited. At first you might feel there's little difference, then you might find - as I did - that with slight tweaks to the processing workflow you are able to achieve even better results from the upsampled conversion. After a while you consistently see results that are clearly superior to what you were getting with native sized conversions. After a while you figure out that it's a bit like getting a new camera with a few more megapixels.
Why? If you find it necessary to explain these results, I have already done so above... What folks sometimes don't realize is that in the process of preparing images for use, in the pixel manipulation (which starts in the converter), we actually stand to LOSE a lot of good information if it is already tightly packed into the pixels. Just look closely at the pixels of an oversharpened image if you doubt this.
Just so we're clear, then...we're really talking about damage control?
The underlying assumption is that by the time you're finished you have already thrown out a lot of information anyway, whether intentional or not, but this (working on an upsampled file) gives you better control over what you're throwing away, and what you're keeping?
OK, I could buy that...with one reservation: noise. If you do any noise reduction, you really want to keep the noise as crisp as possible. One single pixel standing out is just perfect. A diffuse 5 pixel blob, however...noise or detail?
And I think it's pretty clear that what happens here is an upsampling; there is such a thing as native resolution. The edge softening shows that unambiguously. Yes, I know that demosaicing from the Bayer filter is in itself an interpolation, but I also think that a good demosaicing algorithm takes all the pixels into consideration when reconstructing the color image, not just interpolating from one red pixel to the next while completely ignoring the green and blue pixels in between.
In any case, I certainly agree that you don't lose anything. So if for some reason you need a higher resolution file than what the camera delivers natively, there's little or no risk. It will look perfectly plausible.
(But I would of course shoot for a Photomerge if the subject doesn't move. That gives you the additional detail as well).
For ordinary image processing (improving already good images) I can't see any benefit in upsampling the images. But for morphing or warping I found it very useful to apply upsampling, followed by downsampling (please use zoom 200%).

The underlying idea: an area of pixels in an image has to be distorted according to a vector field. Each pixel has to be shifted in both directions. It's IMO quite clear that such a geometrical transformation will work much more smoothly in an upsampled copy. After the transform, the upsampled part will be downsampled and fit into the original image.

Best regards --Gernot Hoffmann
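Gernot's up-then-down idea can be sketched in miniature on a 1-D row of pixels (nearest-neighbour upsampling and an averaging downsample are assumptions for the sketch, not his actual tools): a half-pixel shift is impossible at native size, but it becomes a whole-pixel shift on a 2x copy, and averaging back down lands the edge exactly between pixels.

```python
# A geometric transform (here a 0.5 px shift) done at native size versus
# on a 2x upsampled copy that is averaged back down. The up/down route
# yields the correct in-between result; the native route can only jump
# by whole pixels.

def upsample2(row):    # nearest-neighbour 2x: duplicate every pixel
    return [v for v in row for _ in (0, 1)]

def downsample2(row):  # average adjacent pairs back down to native size
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]

def shift(row, px):    # integer shift right, zero fill on the left
    return [0] * px + row[:len(row) - px]

row = [0, 0, 10, 10, 0, 0]

# Native: half a pixel is not representable, so the best we can do is a
# whole-pixel jump.
native = shift(row, 1)

# Up/down route: 0.5 px at native scale is exactly 1 px at 2x.
via_2x = downsample2(shift(upsample2(row), 1))

print(native)   # [0, 0, 0, 10, 10, 0]
print(via_2x)   # [0.0, 0.0, 5.0, 10.0, 5.0, 0.0] -- edges sit between pixels
```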
D Fosse wrote:
Just so we're clear, then...we're really talking about damage control?
Nope. I'm just talking about results. I'm only vaguely trying to guess at why, after being prompted to do so.
And there's no question it would be no more than a guess because we simply don't know how ACR works under the covers. You may think they convert then upsample, but they may well consider each pixel in the output image in succession and derive its color/luminance from the nearby source photosites. I have no idea. Nor are we likely to learn how it actually works any time soon. It's proprietary.
All I know is that whatever I do, I get better results from working at upsampled resolutions and either use those results directly or downsample at the end to make the work products. I'm not talking about night and day better, but incrementally so.
People seem to want to reconcile their understanding of how they think things work, and that's great, but just remember it's the results that count.
OK. I'll keep looking into this. But as you know, I'm never happy until I know how and why stuff works.
At any rate, I have just gotten the go ahead for looking into digital backs for a Mamiya 67 (film) camera that we have (a beautiful piece). Resolutions range from 22 all the way up to 80 megapixels, but of course I can't go completely overboard. The problem is that the lower resolution sensors are also physically smaller - turning the lenses we have for it into extreme telephotos. Hopefully I can hit a sweet spot. The funny thing is that even though these backs are very expensive, getting the money for that is possible simply because we already have the camera. If I asked for a spanking new Nikon D4 at probably the same price it would be turned down flatly I'm sure. Psychology is funny stuff...anyway, fingers crossed.
I really don't know what you mean, Lundberg, but it doesn't sound like you meant that as a compliment.
I like to think this is about what we in the engineering business call "observed results" with a certain amount of "helping others" thrown in.
It would be nice if we could talk about it respectfully.
Please repeat silently to yourself "He is not trying to piss me off" when reading my posts.
Okay, I'll do that. I'll ask you in return to consider using enough words in your terse posts so that others can understand what you're saying. I might ask as well that you consider the consequences of starting into a conversation by saying "Why on earth...".
Yes, the demosaic takes all the pixels into its grasp, but it uses ESTIMATORS of the brightness of the blue and red.
I'll point out again that, unless you're actually on the Adobe Camera Raw design team, you simply cannot know what they're doing in the process of creating an image from a raw file. It seems you believe you do, and are trying to convince others of that by throwing about terminology.
I do understand the mechanics of debayering, by the way. If it were all done the same way, documented in some book somewhere, there wouldn't be significant differences between products, now would there? There are even differences within products, e.g., the ACR 2010 method vs. 2003 method. HOW it's being done in ACR is undocumented, defies being oversimplified, and it's simply up to us to discover the best ways to use it.
Noel Carboni wrote:
I'm just talking about results.
I get better results from working at upsampled resolutions
Just as an example, two results from the same file, the one on the left converted to a higher-than-native size of 6144 x 4096, and the one on the right to the native size of 3888 x 2592 pixels.
Which do you think would make a cleaner looking large print?