I'm not sure this is really the optimal forum for this, but it seems about as good as any.
In a few recent threads, including "Completely Embed Font (not Subset) in PDF several methods not working", Dov Isaacs has said things like:
Don't try to artificially upsample or downsample any image before placing into InDesign. Upsampling does not improve quality in any way whatsoever.
(I'm posting this separately because I don't want to confuse those other threads. And Dov's advice is so generally good.)
I'm finding this claim about upsampling to be rather confusing, as it is counter to my understanding.
I'm under the impression that upsampling can indeed interpolate new sample points and lead to apparently sharper images and finer detail. While it cannot restore data that was never there, it can be a marked improvement over not doing so.
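To make that concrete with a toy example (a 1-D stand-in for image rows, not anything from our actual workflow): linear interpolation invents plausible intermediate values between the existing samples. No new information appears, but a hard low-res step becomes a smooth ramp instead of a giant stair-step when enlarged.

```python
def upsample_linear(samples, factor):
    """Insert `factor - 1` linearly interpolated points between each
    pair of adjacent samples (a toy model of image upsampling)."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for i in range(factor):
            out.append(a + (b - a) * i / factor)
    out.append(samples[-1])
    return out

# A hard edge in the source stays an edge, but the upsampled version
# ramps through intermediate values (63.75, 127.5, ...) instead of
# jumping straight from 0 to 255 across one fat block of pixels.
edge = [0, 0, 255, 255]
ramp = upsample_linear(edge, 4)
```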
As a practical example, we screen at 220lpi and print at 1200dpi, and one of our regular sources of cartoons (line art) comes in at 72ppi (about 110ppi effective), despite our best efforts to get higher resolution. We use a Photoshop action to upsample it to 300ppi (about 360ppi effective).
Specifically, we start with a 72ppi/8bpp anti-aliased grayscale image, upsample it to 600ppi, threshold black at 128, convert it to a bitmap at 600ppi with a diffusion dither, and downsample to 300ppi/1bpp.
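For anyone curious what the dither step is doing, here is a minimal pure-Python sketch of Floyd-Steinberg error diffusion, the classic algorithm behind a "diffusion dither" (this is just an illustration, not our actual Photoshop action, and it omits the upsample/downsample steps around it):

```python
def floyd_steinberg(gray):
    """Convert a 2-D grid of 0-255 grayscale values to 1-bit (0/255)
    using Floyd-Steinberg error diffusion."""
    h, w = len(gray), len(gray[0])
    px = [list(row) for row in gray]        # working copy we can dirty
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = 255 if old >= 128 else 0  # threshold at 128
            out[y][x] = new
            err = old - new
            # Push the quantization error onto not-yet-visited neighbours,
            # so the local average brightness is preserved.
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return out
```

A flat mid-gray region comes out as a scattered mix of black and white dots whose density matches the original tone, which is why dithered 1-bit line art holds up so much better on a high-resolution imagesetter than a straight threshold of a low-resolution original.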
The result looks dramatically better in print than the original. Is this a bizarre corner case (because it is hand-drawn line art; because the original is anti-aliased; because we're dithering; because this starts to sound like "image processing" rather than just "upsampling")? Because I would tend to think it is not consistent with the idea that upsampling doesn't help.
I am less sanguine about the next part of this: on occasion, some of our photo editors have had images they wanted to run in print whose resolution was much too low to look acceptable, but which were important enough to run regardless. They have had some success with upsampling in Photoshop and perhaps applying some blurs. This can result in images that don't have the telltale artifacts ("huge jaggies") of low-resolution imagery, even though they may not really have more information density. But there is value in eliminating those extremely obvious artifacts, even if they are ultimately replaced with other artifacts.
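Again as a toy 1-D illustration (not the editors' actual recipe): plain pixel replication is what produces the jaggies, and even a crude blur after enlarging trades that staircase for a gradient. The photo gains no detail, but the most objectionable artifact is gone.

```python
def nearest_upsample(row, factor):
    """Pixel replication: what a low-res image looks like when naively
    scaled up, producing the blocky 'jaggies'."""
    return [v for v in row for _ in range(factor)]

def box_blur(row, radius):
    """Simple moving-average blur, standing in for a Photoshop blur."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

coarse = [0, 255]                       # one hard low-res transition
blocky = nearest_upsample(coarse, 4)    # [0, 0, 0, 0, 255, 255, 255, 255]
smooth = box_blur(blocky, 2)            # the step becomes a gradual ramp
```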
So, am I missing something? Am I just confused? Or do these just qualify as corner-cases?