Monday, July 28, 2014

How Much Do Focus Blending and Stitching Cost in Sharpness?

This one is hard to quantify, but I think I can say that the answer is: quite a lot. If I want to make a square picture, a 2 - 3 image stitch is definitely better than cropping a single image. But the truth is that every time we morph our images, whether to correct lens distortion or perspective, or to stitch two frames together, we lose a little resolution, perhaps as much as a third. That is roughly the equivalent of going from a prime to a zoom, or from a top zoom to a kit lens, and it really only matters in very large prints or on screen at 100% (as per the previous article about lens quality).
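To put rough numbers on the crop versus stitch comparison, here is a minimal back-of-envelope sketch in Python (my own illustrative figures for a hypothetical 24 MP, 3:2 sensor, not measurements from any real stitch):

# Square from a crop vs square from a two image stitch,
# assuming a hypothetical 6000 x 4000 px (3:2) sensor.
long_side, short_side = 6000, 4000

# Option 1: crop a single landscape frame to a square.
square_crop_mp = short_side * short_side / 1e6               # 4000 x 4000 = 16 MP

# Option 2: stitch two portrait frames side by side with ~1/3 overlap,
# then crop the result to a square on the 6000 px dimension.
stitched_width = 2 * short_side - short_side // 3            # ~6667 px wide
square_stitch_mp = min(stitched_width, long_side) * long_side / 1e6   # 6000 x 6000 = 36 MP

print(f"single-frame square crop : {square_crop_mp:.0f} MP")
print(f"two-frame stitched square: {square_stitch_mp:.0f} MP")
print(f"after losing ~1/3 to morphing, still roughly {square_stitch_mp * 2 / 3:.0f} MP")

Even after discounting a third of the stitched file for morphing losses, it comfortably out-resolves the straight crop, which is the point of the paragraph above.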

Remember that when stitching shots made with wide angle lenses, there is a lot more morphing and stretching of the image and its pixels than when stitching shots made with a longer lens.

My feeling is that focus blending also loses a fair bit of resolution. First the images have to be stretched to compensate for the size changes as you focus (true of all lenses, though oddly not always in the same direction in zooms). Then the sharpest areas have to be detected and gathered, and though I don't know for certain, that step may cost further resolution.

Think about the following worst case scenario:

You are using a wide angle lens to do an even wider stitched panorama. The lens has some distortion, so in processing the images ready for stitching, Lightroom corrects the barrel distortion common in wide angles. Then the stitching programme does its own pinching and stretching to make the images stitch properly, and then a whole lot more stretching to compensate for perspective distortion (unless you select a cylindrical projection and accept curved straight lines). Depending on how wide the lens is and how long the panorama, this might be as much as a 100% stretch, cutting resolution at the sides of the print by as much as 50%.
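For a sense of where that worst case figure comes from, here is a minimal sketch (my own illustration, assuming a simple rectilinear output projection; real stitching programmes may behave differently):

import numpy as np

# In a rectilinear ("flat") panorama projection, a ray at angle theta from
# the centre of view lands at x = f * tan(theta), so the local horizontal
# stretch relative to the centre of the frame is sec(theta)^2.
for theta_deg in (0, 20, 30, 40, 45):
    theta = np.radians(theta_deg)
    stretch = 1.0 / np.cos(theta) ** 2
    print(f"{theta_deg:2d} deg off-axis: {stretch:.2f}x stretch, "
          f"~{100 / stretch:.0f}% of the original linear resolution")

At 45 degrees off axis (a 90 degree wide rectilinear panorama) the stretch reaches 2x, which is the 100% stretch and roughly 50% resolution figure mentioned above.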

One way to avoid this cost in stitching is to use shift lenses, or a shifting back. Here the only limitation is lens quality at the edges of the larger image circle. My Canon 24 TS-E shifts beautifully, and a stitch that doesn't use image morphing can produce extremely high quality results. Lesser lenses lose more at the edges, and then you have to test whether a camera-swing stitch is better than a lens-shift stitch.

Bottom line: the wider the lens, the more difficult it is to stitch. Oddly, the same applies to focus blending, perhaps because of the complex optical designs wide angle lenses need to minimize distortion in the first place; simply scaling the images does not make them match all across the frame, and detail is lost.

So, with all of this, is there a role for stitching and focus blending? Absolutely. As I said at the start, the square image from a crop doesn't hold a candle to the two image stitch; it's just that the two image stitch isn't quite as good as you might think it should be.

The best way to think of it is that stitching is like switching to a larger format camera, with a poorer quality lens.

There is one situation where more pixels, even if not of the best resolution, still make better prints, and that's really big prints. This takes us back to the days of film, where as one enlarged an image it would gradually become less sharp, but there was no sudden cliff of lost quality. Now that most of us shoot digitally, the cliff is there, at the point where sharpening becomes visible and pixellation becomes problematic. This seems to be true regardless of the strategy used to make the really big print - enlarging before sharpening or after, using this plug-in or that. The greater the number of original low noise pixels, the better, and that usually means a larger sensor, not dividing the current sensor size into smaller but more numerous pixels.

This is one reason for my happiness with the Pentax 645Z: its files can take more editing, more stretching, more blending or stitching than those from any camera I have used before. It is also why, though micro 4/3 can make lovely images, those files don't hold up in editing as well as images made with larger formats. If you don't do a lot of blending and stitching and editing and manipulation and perspective correction and lens correction, then the results can be wonderful.

I guess you could say I tend to stress my image files, and the more information there is to start with, the better the result.

Whether anyone should be torturing images like this is a whole other matter and a subject for another day.

2 comments:

TJ said...

I really do like the idea of quantifying the loss of "resolution" here. Maybe to do this we need to take a shot of one of those charts that are usually used to profile the distortions of fisheye lenses (and others), then measure the amount of "aliasing" that appears on the edges of straight lines after stretching or distortion fixes. I think this would also involve the DPI resolution value as well. Just ideas; careful thinking (and scientific thinking too) is required here.
Stitching panoramas, by the way, doesn't have to be done in two steps. Stitching programs like PTGui do the whole job of distortion fixing. As for me, because I usually do full (spherical) panoramas, maybe the coincidence and overlap of the images taken compensates for such loss of resolution when fixing the distortion.
We should not ignore the fact that when we talk about large prints we have to talk about the proper viewing distance (even for small prints, in fact). If I remember the equation correctly, viewing distance is: Vd = 1.5 x D, where "D" is the diagonal of the print. NIK Sharpener gives the possibility to sharpen an image in accordance with the printing media and the viewing distance (besides the other options). Supposedly those guys did quantify a great deal about sharpness and proper viewing distance. Thus, generally speaking, maybe we are losing some resolution after fixing the distortion and so on, but the proper viewing distance would be just enough to view the image in reasonable sharpness.
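A quick worked instance of TJ's rule of thumb, with my own purely illustrative print size:

import math

# Viewing distance = 1.5 x print diagonal (TJ's equation above).
width_in, height_in = 20, 30                   # a hypothetical 20 x 30 inch print
diagonal_in = math.hypot(width_in, height_in)  # about 36 inches
print(f"diagonal ~{diagonal_in:.0f} in, suggested viewing distance ~{1.5 * diagonal_in:.0f} in")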

Tim said...

Well, I should hope Lightroom would allow one to disable correcting for lens aberrations and just make a simple RAW conversion without corrections - it's better to do it in Hugin where the distortions can be calculated precisely according to the images at hand rather than interpolated from whatever sample lens was used when they felt like profiling one.

I also have been in the game of making as big an image as I can for quite a while, and while sometimes you just need to do a panorama, by far my favourite method is hand-held superresolution using what NASA call the "Drizzle" algorithm (see the sketch after the list below):

1) forget the tripod; shoot at an adequate shutter speed to get 3-4 good sharp images, hand-held
2) enlarge each frame 1.33x bigger than native in RAW conversion (use Lanczos3) (for non-base ISO, shoot more frames or relax and use 1.25x, 1.2x, etc.)
3) use align_image_stack and enfuse --entropy-weight=1 --exposure-weight=1 as a selector
4) *now* correct for lens distortion, etc
5) rejoice: you have a much larger image to work from, where pixels that overlap are stacked (reducing noise) and those that interleave add resolution, and all pixels are closer to values received from real live photons - far more honest than that Genuine Fractals rubbish.
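A minimal scripted sketch of the five steps Tim lists, written by me for illustration: it assumes the Hugin command line tools (align_image_stack and enfuse) are installed, that the hand-held frames have already been RAW-converted to TIFFs named frame_*.tif (a hypothetical naming pattern), and that the exact tool flags may need adjusting for your versions:

import glob
import subprocess

from PIL import Image  # Pillow, used here for the Lanczos upscale (step 2)

SCALE = 1.33  # ~1.33x at base ISO; use 1.2x - 1.25x (or shoot more frames) at higher ISO

# Step 2: enlarge each frame before alignment, so interleaved pixels can add detail.
upscaled = []
for i, path in enumerate(sorted(glob.glob("frame_*.tif"))):
    im = Image.open(path)
    big = im.resize((int(im.width * SCALE), int(im.height * SCALE)),
                    resample=Image.LANCZOS)
    out = f"up_{i:02d}.tif"
    big.save(out)
    upscaled.append(out)

# Step 3a: align the hand-held frames; the small shifts between them are the point.
subprocess.run(["align_image_stack", "-a", "aligned_"] + upscaled, check=True)

# Step 3b: fuse the aligned frames, using entropy and exposure weighting as the selector.
aligned = sorted(glob.glob("aligned_*.tif"))
subprocess.run(["enfuse", "--entropy-weight=1", "--exposure-weight=1",
                "-o", "superres.tif"] + aligned, check=True)

# Step 4, correcting lens distortion, would then be done on superres.tif.

The key design point, as Tim says, is doing the lens correction last, so the fused pixels stay as close as possible to what the sensor actually recorded.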