Would you like to double the resolution of your camera, for free? I sure do.
1. What is it?
When I found articles about "super resolution" at DPReview[a] and Cambridge in Colour[b], I was initially overjoyed - throw a bit of computational photography at the problem and just like that you could in theory have a camera with a resolution as high as your lens could deliver.
The basic idea was to take multiple photos of the same scene and then utilize small shifts in sensor position between photos to create a higher resolution image.
Can this turn your entry-level camera into a medium-format challenger? For that, we would need something on the order of a doubling of the linear resolution, or four times the number of pixels. Turn a 24 megapixel camera into a 96 megapixel monster using nothing but a heavy shutter finger and a bit of Photoshop? Seems too good to be true.
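The arithmetic behind that claim is simple: doubling the linear resolution doubles both dimensions, so the pixel count quadruples. A quick sketch, assuming a typical 6000 x 4000 layout for a 24 megapixel sensor:

```python
# Doubling linear resolution doubles both width and height,
# so the pixel count quadruples. The 6000 x 4000 layout is an
# assumption - a common shape for a 24 MP sensor.
w, h = 6000, 4000
print(w * h / 1e6)              # 24.0 MP
print((2 * w) * (2 * h) / 1e6)  # 96.0 MP
```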
2. Why It Seems Like It Would Work
Many cameras offer sensor-shift super sampling[c], where the sensor is shifted slightly between captures in order to cancel out the Bayer color filter array and record a red, green and blue sample at each pixel location. This doubles your linear sampling frequency for red and blue, though not in the most important channel - luminance. Still, it does give you higher resolution, and it's not much of a stretch to think that maybe you could just sample even more and get even higher resolution.
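To see why four shifted exposures cancel the Bayer array, here's a toy simulation. The RGGB layout and one-pixel shifts are the standard scheme; the scene and helper names are made up for illustration, not any camera's actual pipeline:

```python
import numpy as np

# Toy model: an RGGB Bayer sensor records only one color per site.
# Four exposures, shifted by one pixel between shots, put every
# filter color over every scene position.

scene = np.random.rand(4, 4, 3)          # tiny made-up RGB scene

def bayer_channel(y, x):
    # RGGB pattern: 0 = R, 1 = G, 2 = B
    return [[0, 1], [1, 2]][y % 2][x % 2]

recorded = np.zeros(scene.shape, dtype=bool)
for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:  # the four shifts
    for y in range(4):
        for x in range(4):
            c = bayer_channel(y + dy, x + dx)    # filter over this site now
            recorded[y, x, c] = True             # direct sample, no guessing

print(recorded.all())  # True: R, G and B measured at every site
```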
3. Why It Doesn't Work
This kind of super-resolution is, however, based on a fallacy.
If each pixel in the camera's sensor received light only from an infinitesimally small point - or from an area substantially smaller than the pixel - we could re-sample the scene by moving the sensor. But this is not the case. Camera manufacturers have gone to great lengths to ensure that each pixel collects light from as much of its area as possible.
What this means is that we sample the scene with a so-called aperture error: each pixel represents the average of the light arriving over an area, not at a point. It can be shown that sampling a scene with an aperture error is equivalent to first low-pass filtering the scene and then sampling it without aperture error; and we know from the sampling theorem that once a band-limited signal is sampled at its Nyquist rate - twice its highest frequency - nothing is gained by increasing the sampling frequency further.
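The equivalence can be demonstrated numerically. A minimal one-dimensional sketch, assuming a box-shaped pixel aperture; all sizes are illustrative:

```python
import numpy as np

# Sketch of aperture error: each pixel averages light over its area,
# which equals low-pass filtering the scene and then point-sampling it.
# Shifting the sensor only re-samples that same filtered signal.

N, pixel = 1024, 8
x = np.arange(N)
scene = np.sin(2 * np.pi * 200 * x / N)   # fine detail in the "scene"

# Low-passed scene: a pixel-wide moving average centered on each point
lowpassed = np.convolve(scene, np.ones(pixel) / pixel, mode="same")

def capture(offset):
    """Pixel values with the sensor shifted by `offset` scene samples."""
    return np.array([scene[s:s + pixel].mean()
                     for s in range(offset, N - pixel, pixel)])

for offset in (0, pixel // 2):            # unshifted, then half-pixel shift
    samples = capture(offset)
    # identical to point-sampling the SAME low-passed signal; the detail
    # averaged away by the aperture is gone in both captures
    print(np.allclose(samples,
                      lowpassed[offset + pixel // 2::pixel][:len(samples)]))
```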
Strictly speaking this isn't entirely true, since the pixel aperture isn't a perfect low-pass filter, only close to one, but it's true enough to make this approach non-functional for obtaining anything close to twice the linear resolution.
3.1. Bayer-Canceling Doesn't Work Either
But what about shifting the sensor to capture all three primary colors at each pixel? Doesn't that give us three times the resolution, since the camera doesn't have to interpolate the colors?
Again, this is based on the fallacy that the interpolation is done without any additional information beyond the sampled pixels, which is not the case. First, the world is mostly gray, meaning that the color channels are far from independent: if you detect an edge in the green channel, there is very likely an edge in the red and blue channels as well. Second, the camera is smart enough not to interpolate across edges, using techniques such as Adaptive Homogeneity-Directed (AHD) interpolation.
The camera still has to guess the interpolated value, but it is very, very good at guessing - so good, in fact, that it is often spot on. The resolution increase is therefore far smaller than what a naive assumption about interpolation would predict.
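The core idea behind such edge-directed guessing can be shown in a few lines. This is a toy sketch of the principle only - real AHD does considerably more - and the function and test image are made up for illustration:

```python
import numpy as np

# Toy sketch of edge-directed interpolation: fill in a missing green
# value by averaging along the direction with the SMALLER gradient,
# so we never average across an edge.

def interp_green(g, y, x):
    dh = abs(float(g[y, x - 1]) - float(g[y, x + 1]))  # horizontal change
    dv = abs(float(g[y - 1, x]) - float(g[y + 1, x]))  # vertical change
    if dh <= dv:   # smoother horizontally -> interpolate horizontally
        return (g[y, x - 1] + g[y, x + 1]) / 2
    return (g[y - 1, x] + g[y + 1, x]) / 2             # else vertically

# A hard vertical edge: left half dark, right half bright
g = np.array([[0, 0, 100, 100]] * 3, dtype=float)
print(interp_green(g, 1, 1))    # 0.0  - interpolates along the edge
print((g[1, 0] + g[1, 2]) / 2)  # 50.0 - a naive average smears the edge
```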
4. What Does Work
Multi-shot super-resolution does work in the case of panorama stitching. By using a lens with approximately twice the focal length, capturing nine photos (a 3x3 grid with generous overlap) and then stitching them, you double the linear resolution and get approximately four times as many pixels.
You can also do a three-shot panorama with the camera in the "wrong" (portrait) orientation at one and a half times the focal length to get a 50% increase in linear resolution.
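The arithmetic behind both recipes, assuming a 6000 x 4000 (24 MP) base frame and a full-frame 36 x 24 mm sensor:

```python
# Panorama math sketch for an assumed 6000 x 4000 (24 MP) sensor.
# Pixels-per-degree scales with focal length, so 2x focal length
# doubles linear resolution once the original field of view is
# re-covered by stitching (a 3x3 grid of nine overlapping shots).
w, h = 6000, 4000
print(2 * w * 2 * h / 1e6)      # 96.0 MP at 2x linear resolution

# Three portrait shots at 1.5x focal length: the rotated long side
# covers exactly the original height (36 mm / 1.5 = 24 mm), so only
# the width needs stitching, giving 1.5x linear resolution.
print(1.5 * w * 1.5 * h / 1e6)  # 54.0 MP
```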
I've written a small app to help with this: Pano Aim.