Distortion Correction

Building a lens always involves tradeoffs. One important tradeoff is complexity versus price: the simpler a lens is, the cheaper it is to manufacture. The downsides are usually increased distortion, lower sharpness and so on. With the increased processing power of today's cameras, however, many of the disadvantages of simple lenses can be made up for in software. Some systems, like Micro Four Thirds[a], rely on in-camera image correction - for an example, see the excellent article In-camera distortion correction[b] over at Digital Photography Review. Packages like DxO Optics[c] have correction of lens defects as their main selling point.

So how can this be done?

Well, distortion is basically pixels ending up where they shouldn't. So we can, for each pixel in an image, define a vector that tells us where this pixel should have ended up. The correction process then involves simply moving each pixel to its right spot. Since the distortion is mostly dependent on the focal length of the lens, we can re-use the vectors for all photos taken with that lens at that particular focal length setting.
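In code, the correction itself is cheap. It is usually implemented as the inverse (gather) mapping: for each output pixel, look up where in the distorted source image to fetch it from. Here is a minimal nearest-neighbour sketch, assuming the field is stored as two float arrays dx and dy the same size as the image; a real implementation would use bilinear or better resampling:

```python
import numpy as np

def apply_correction(image, dx, dy):
    """Remap every pixel by its correction vector (nearest-neighbour).
    image is (H, W) or (H, W, C); dx and dy are (H, W) float arrays giving,
    for each output pixel, the offset to the source pixel to fetch."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Output pixel (x, y) is taken from source position (x + dx, y + dy),
    # rounded to the nearest pixel and clamped to the image borders.
    src_x = np.clip(np.rint(xs + dx).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys + dy).astype(int), 0, h - 1)
    return image[src_y, src_x]
```

Since the field only depends on the lens and focal length, dx and dy can be computed once and then applied to any number of photos.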

In order for us to define the vector field we have to have some reference image. First I tried taping graph paper to the wall, but for the 10mm lens I was using I realized I would end up taping up a lot of paper, and I couldn't get the lines on the paper to quite match up. Finally it struck me - I already had a device that could produce exactly straight lines: my monitor. So I sketched up a grid in Photoshop and snapped two shots of it.
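A grid like this can also be generated programmatically rather than drawn by hand - something like the sketch below, where the resolution, spacing and line width are arbitrary choices:

```python
import numpy as np
from PIL import Image

def make_line_target(width=1920, height=1080, spacing=120, thickness=4):
    """Generate a white image with evenly spaced vertical black lines,
    suitable as a horizontal-distortion target. Rotate it 90 degrees
    (or swap the axes) for the vertical target."""
    img = np.full((height, width), 255, dtype=np.uint8)
    for x in range(spacing // 2, width - thickness, spacing):
        img[:, x:x + thickness] = 0
    return Image.fromarray(img)

make_line_target().save("target_horizontal.png")
```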

The distortion is measured using two test targets - one for horizontal distortion and one for vertical:

Test target for horizontal distortion

Test target for vertical distortion

The measurement process is the same for both, and I'll describe how the horizontal distortion measurement happens. The vertical measurement is the same, except we rotate the test target 90 degrees and then rotate the vectors back 90 degrees.

  1. We scan the center scanline of the image. For each black line detected, we note its x-coordinate. This is the reference x-coordinate for the whole line. It is assumed that a perfect lens would have put all pixels of this black line at this x-coordinate, but that lens imperfections have caused it to end up elsewhere.

    1. Then we follow the black line outward from the center.

    2. For each scanline we re-acquire the black line. Since we know where the line ought to be (the reference x-coordinate), we now have the horizontal distortion for that line at that scanline.

    3. We now have distortion values for a full column - almost.

    4. For the scanlines where we couldn't find the black line (bad contrast, noise), we interpolate the values.

    5. The values are smoothed using a box blur filter with radius 8 to get rid of measurement noise.

    6. Store the distortion values for this column.

  2. We now have distortion values for selected columns in the image.

  3. These are interpolated across each scanline, to fill in the blanks between the columns. For this test, I used linear interpolation, but more complex interpolators can be used.

  4. The result is a vector field giving the distortion at each pixel of the image - the whole procedure is condensed into a code sketch below.
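Condensed into code, the measurement pass looks roughly like this. It assumes an 8-bit grayscale numpy image; the detection threshold and the 10-pixel tracking window are arbitrary choices, and real images would need contrast normalization:

```python
import numpy as np

def detect_dark_runs(scanline, threshold=128):
    """Return the center x-coordinate of each dark run on one scanline.
    (Naive detector; the threshold is an arbitrary choice.)"""
    dark = np.concatenate(([0], (scanline < threshold).astype(int), [0]))
    edges = np.diff(dark)
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return [(s + e - 1) / 2.0 for s, e in zip(starts, ends)]

def measure_horizontal_distortion(img, window=10):
    """Outer steps 1-4: find reference x-coordinates on the center scanline,
    trace each dark line outward, fill gaps, smooth, then interpolate the
    traced columns across every scanline. img is a (H, W) grayscale array."""
    h, w = img.shape
    refs = detect_dark_runs(img[h // 2])              # reference coordinates
    columns = np.full((h, len(refs)), np.nan)
    for i, ref in enumerate(refs):
        for direction in (range(h // 2, h), range(h // 2 - 1, -1, -1)):
            x = ref                                   # restart at the center
            for y in direction:
                candidates = [c for c in detect_dark_runs(img[y])
                              if abs(c - x) <= window]
                if candidates:                        # re-acquire the line
                    x = min(candidates, key=lambda c: abs(c - x))
                    columns[y, i] = x - ref           # horizontal distortion
        col, ys = columns[:, i], np.arange(h)
        good = ~np.isnan(col)
        col[:] = np.interp(ys, ys[good], col[good])   # step 4: fill gaps
        kernel = np.ones(17) / 17.0                   # step 5: box blur, r=8
        columns[:, i] = np.convolve(col, kernel, mode="same")
    field = np.empty((h, w))                          # outer step 3: linear
    xs = np.arange(w)                                 # interpolation between
    for y in range(h):                                # the traced columns
        field[y] = np.interp(xs, refs, columns[y])
    return field

def measure_vertical_distortion(img):
    """The vertical pass: rotate the shot 90 degrees, reuse the horizontal
    measurement, rotate the result back (one workable convention)."""
    return np.rot90(measure_horizontal_distortion(np.rot90(img)), k=-1)
```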

Measured distortion field

The red intensity indicates distortion along the X-axis, the green intensity distortion along the Y-axis. Brighter colors mean that the correction should move the pixel up or left, darker colors that it should be moved down or right. Midtone values mean no change.
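Producing such a visualization from the field is straightforward. A sketch, where the +/-32 pixel scale is an arbitrary choice:

```python
import numpy as np

def field_to_rgb(dx, dy, max_disp=32.0):
    """Encode a displacement field as RGB: red carries the X component,
    green the Y component, midtone gray meaning zero displacement.
    max_disp sets the (assumed) scale; values are clipped to +/- that."""
    def channel(d):
        # Positive displacement (move down/right) maps below the midtone,
        # negative (move up/left) above it, matching the description above.
        return np.clip(128 - d * 127.0 / max_disp, 0, 255).astype(np.uint8)
    rgb = np.zeros(dx.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = channel(dx)
    rgb[..., 1] = channel(dy)
    rgb[..., 2] = 128            # blue unused, kept at midtone
    return rgb
```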

How well does this work, then? I've used the simplest interpolation, the simplest test targets and more or less cheated at every step. The answer is: surprisingly well.

Correction of a Sigma 10-20mm @ 10mm (uncorrected vs. corrected)
Note the distortion in the lower- and upper-left corners. Also note that the distortion is not symmetrical: the left side of the image is more distorted than the right. Any mathematical model of lens distortion that assumes the lens is rotationally symmetric about its axis - and therefore that distortion is a function of distance from the image center only - will not be able to correct for this. Also note that the test target wasn't mounted completely straight: the correction also rotates the image slightly clockwise. I guess my monitor and my camera weren't 100% aligned.
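For comparison, the common radially symmetric model (the radial terms of the Brown-Conrady model) looks like the sketch below; since the displacement is a function of r alone, it cannot produce the left/right asymmetry seen here:

```python
def radial_model(x, y, cx, cy, k1, k2):
    """Radially symmetric distortion model (Brown-Conrady, radial terms
    only): the displacement of (x, y) depends solely on its squared
    distance r^2 from the optical center (cx, cy)."""
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    return cx + (x - cx) * scale, cy + (y - cy) * scale
```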

How much data, then? For this experiment I stored the vector field as a Portable FloatMap[d], which makes it about 70 MB. This is for one lens, mounted on one camera, at one focal length - and if you want to be really precise, at one focus distance. Obviously the data set size grows quickly. Luckily, however, the data can be compressed a lot. If we skip steps 4-6 in the inner loop, and step 3 of the outer loop, we get a lot less data: approximately 9 columns of about 2000 points each and 7 rows of about 3000 points each. Each line has a reference coordinate, and each point has a measured coordinate. Given 32-bit floats, that ends up at (9 * 2000 + 7 * 3000) * 4 = 156,000 bytes. I'm sure that can be compressed further. The interpolation and smoothing can then be done immediately prior to applying the correction. If a lot of photos are to be corrected, the "full" distortion field can be cached. In my experience I tend to use the few lenses I have at very few focal lengths, so I wouldn't expect the cache to grow too large too fast.
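As a rough sketch, the compact representation could look like this (the class layout and the expand_to_field helper are illustrations of the idea, not an actual file format):

```python
import numpy as np

class CompactDistortion:
    """Sketch of the compact storage: per traced line, one reference
    coordinate plus the measured coordinate at every scanline it crosses."""
    def __init__(self, refs, samples):
        self.refs = refs        # one reference coordinate per line
        self.samples = samples  # one float32 array of measurements per line
        self._cache = {}        # expanded full fields, keyed by image size

    def nbytes(self):
        # 32-bit floats for every measured point, as estimated above:
        # 4 * (9 * 2000 + 7 * 3000) = 156000 bytes per lens/focal length.
        return 4 * sum(len(s) for s in self.samples)

    def field(self, shape):
        """Interpolate and smooth on demand, caching the expensive result."""
        if shape not in self._cache:
            self._cache[shape] = expand_to_field(self, shape)  # hypothetical
        return self._cache[shape]
```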

2011-03-15
Moss