Having made a couple of VR panoramas I thought I'd jot down some lessons learned in the hope that it will make it easier for someone else to get to where I am.
So you want to make a VR panorama? I did. It was both easier and harder than I first thought. Easier, because I realized I could get away with far more than I had thought possible before anything became noticeable, and the software I used worked without a glitch. Harder, because it required me to change the way I chose subjects and then to run quickly through a very long list of steps during capture. It also meant spending much longer post-processing the images. Right now I'm at about two to three hours per panorama; compare that with two to three minutes of editing for a normal photo.
1.1. The Hardest Part
Let's start with the basic problem with VR panoramas: It is very difficult to find a spot where all 360 degrees are worth photographing. As any photographer knows, proper framing is difficult. Your photo should ideally be the subject and nothing but the subject. This is what makes wide-angle photography difficult. A wide-angle lens tends to pull in lots of junk from the sides of the frame, leaving the interesting stuff compressed into a narrow spot in the center. VR panoramas are worse. A lot worse. Suddenly, not only do you have to find something interesting to photograph - you have to find a spot where there isn't anything uninteresting.
Having solved that problem, though, let's talk about how to get from standing there to a panorama in the least amount of pain.
I use the following setup:
I'm sure it is possible to get just as good panoramas with other cameras and lenses. The list here is not intended to be authoritative, but merely an indication of what tools I have at my disposal.
2. The Equator
The VRwave Panoramic Photography Lens Database[j] lists camera lenses and suggested angular spacing for them. I would, however, add the observation that it is sometimes very difficult to get a good panorama using those settings.
The most interesting stuff is along the equator of the panorama. When you stand somewhere, most of the nice stuff is clustered along the horizon. If you're in a cathedral you may have some beautiful stuff above you, but otherwise you have a featureless ceiling or empty sky. Below you isn't much better. Even the most resplendent cathedral usually has a drab floor. The consequences of this are twofold: One, we should pay special attention to the band of photos at zero degrees pitch. Two, aligning images that don't include anything of the equator is difficult as they very often contain little but empty sky.
The settings listed on the VRwave website ignore this and assume that you will use the same lens for the equator as for the top and bottom poles. This causes two problems: you waste a lot of pixels on empty blue sky if you're outdoors, and you often end up with photos that are very difficult to align. Also, more photos take more time to shoot. Given that you have to finish shooting the full panorama before the conditions change too much, this is not to be ignored.
My recommendation is to use an 18mm lens for the equator and a 10mm lens for the top and bottom of the panorama. This gives you higher resolution along the equator, where the interesting stuff is. These photos cover everything from -33.5 degrees to 33.5 degrees, which is a few degrees more than the range of full-color vision in humans[k] (see section 17-12, figure b). Thanks to the 100-degree field of view of the 10mm lens, you can pitch the camera up or down 45 degrees and get the zenith/nadir and the horizon in a single photo.
When I shoot panoramas I do the following: 12 shots (30 degrees between shots) with the 18-55mm set to 18mm along 0 degrees pitch. Then 6 shots (60 degrees between) with the 10-20mm set to 10mm at 45 degrees pitch, followed by 6 more shots at -45 degrees for a total of 24 photos.
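The coverage numbers behind this pattern can be sketched out, assuming a Nikon DX sensor of roughly 23.6 x 15.7 mm, portrait orientation, and an ideal rectilinear lens (the exact sensor dimensions vary slightly between camera models):

```python
# Sketch of the coverage math behind the 12/6/6 shooting pattern.
import math

def fov_deg(sensor_mm, focal_mm):
    """Angular field of view for one sensor dimension at a given focal length."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

SENSOR_W, SENSOR_H = 23.6, 15.7  # mm, approximate Nikon DX dimensions

# In portrait orientation the long sensor side covers pitch.
vertical_cover = fov_deg(SENSOR_W, 18)    # ~67 degrees -> roughly +/-33.5
horizontal_cover = fov_deg(SENSOR_H, 18)  # ~47 degrees of yaw per shot

# 12 shots at 30-degree intervals leave ~17 degrees of overlap per pair.
overlap = horizontal_cover - 30

print(round(vertical_cover, 1), round(horizontal_cover, 1), round(overlap, 1))
```

That overlap of roughly 17 degrees between neighbouring equator shots is what gives the stitcher enough shared features to align on.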
This gives me a high-resolution band along the equator, complete coverage of the zenith point, and photos that are easy to align. Also, thanks to luck, the Nikon 18-55mm at 18mm and Sigma 10-20mm at 10mm have the exact same Nodal Ninja settings[l], so I never have to change them while shooting.
I crop away the bottom third of the "up" photos, and the top third of the "down" photos when assembling the images in Hugin. That way I keep Enblend from overwriting my high-resolution horizon with lower-resolution pixels from the up and down shots, while still being able to use those areas for alignment.
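A sketch of that crop geometry (the pixel dimensions here are just example values, and in practice Hugin's per-image Crop tab does the same job without touching the files):

```python
# Sketch of the crop for the "up" and "down" shots: keep two thirds of
# each frame so its remaining pixels still overlap the equator band for
# alignment, while its low-resolution horizon pixels are discarded.

def crop_box(width, height, shot):
    """Pixel box (left, top, right, bottom) to keep for an up/down shot."""
    if shot == "up":
        return (0, 0, width, 2 * height // 3)   # drop the bottom third
    elif shot == "down":
        return (0, height // 3, width, height)  # drop the top third
    raise ValueError(shot)

# With Pillow this could be applied as:
#   Image.open("up_01.tif").crop(crop_box(w, h, "up")).save(...)
box = crop_box(4288, 2848, "up")
print(box)  # (0, 0, 4288, 1898)
```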
Speed is of the essence when shooting a panorama in changing conditions. As Ken Rockwell[m] says, the downside of all this post-processing, stitching, stacking etc. is that nature doesn't wait for you to get all your shots done.
Still photographs need dynamic elements to be successful. Moving and living subjects have to be caught at the peak of the action, the decisive instant that says it all. Landscapes, nature and architecture needs to be caught in the right light. The right light isn't in the middle of the day: the right light is very short-lived at the ends of the day. Clouds come and go, and the best syrupy golden light of dawn often only lasts for seconds. The strongest photos are those that capture something in transition.
If you take more than a half-second to fire all the shots you need to stitch and stack, you cannot possibly create a photograph as powerful as can be captured in one snap of my Powershot.
The first time I shot a panorama it took me over ten minutes to get everything right. I bumped the tripod, I got the focus wrong, I got the shutter and aperture values wrong, the white balance was off, I forgot to level the tripod and suddenly couldn't find the lens release button on the camera. It was enough to make me doubt my mental health.
Practice makes perfect, though. My routine for panoramas now is:
Set up tripod.
Attach camera to Nodal Ninja plate.
Attach Nodal Ninja to tripod.
Mount 18-55mm lens.
Turn off VR (vibration reduction).
Set lens to 18mm.
Switch to Manual exposure.
Sample the surroundings and decide on shutter speed, aperture, ISO and white balance.
Mount the camera to the Nodal Ninja.
Rotate the camera to 0 degrees yaw, 0 degrees pitch.
Use autofocus to set focus at distant object on horizon.
Switch to manual focus on the camera.
Take a sample shot, inspect exposure and focus. If good, get ready to work fast.
Switch to remote release. Remote timer release if shutter speed requires it.
Shoot 12 photos at 30 degree intervals.
Switch to 10-20mm lens.
Set lens to 10mm.
Switch to continuous release.
Switch to autofocus. Focus the lens at the same distant object used for the 18-55mm lens. Since you're at the same yaw and pitch as then, it should be easy.
Switch to manual focus.
Quick sample shot to verify that settings aren't completely wrong.
Switch to remote release.
Pitch camera up 45 degrees.
Shoot 6 photos at 60 degree intervals.
Pitch camera down to 45 degrees below horizon.
Shoot 6 photos at 60 degree intervals.
As you can see, there are a number of steps to go through for each panorama. If conditions change during execution you may or may not have to redo everything. The most important part is to get the 12 shots for the equator done. That is where people will look, and that is where you want to spend your pixels and effort. The "up" and "down" shots can usually be frankensteined in without anyone noticing too much.
4. Getting the Zenith and Nadir Right
Two points on the panorama are special: the top and bottom poles. Since Enblend works with already-projected images, it has no idea that the top and bottom rows of pixels each represent a single point. The result is that the poles of the panorama look like someone pinched them when it is re-projected onto a VR cube. This happens even if you don't have any obstructions at these points. Panorama photographers are familiar with having to edit the nadir point - that is usually where the tripod blocks part of the panorama. But the zenith must also always be edited to fix the artifacts caused by blending projected images.
The solution is to create three images from the shots that were taken:
A 360-by-180 degree equirectangular map. This is the main panorama image.
A 90-by-90 degree equirectangular panorama centered on the bottom (nadir) point.
A 90-by-90 degree equirectangular panorama centered on the top (zenith) point.
To get images two and three, go to the panorama preview in Hugin and select "Num. Transform". For the zenith, adjust pitch by -90; for the nadir, by 90. Once you have rendered all three images, edit the zenith and nadir images to make them look good.
Then open up a new project in Hugin. Add the zenith and nadir images. Select "Equirectangular" for lens type and 90 degrees for field of view. Don't align them. Instead, go to the "Images" tab and input -90 degrees pitch for the nadir image and +90 degrees pitch for the zenith image.
Make sure that the output image size is the same as for image one above. Then select "output remapped images". Deselect the panorama output - we're going to paste these two images into the main panorama by hand. Render everything and you should have two images that, when pasted into the panorama, give you a result with no artifacts at the zenith and nadir points.
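If you prefer scripting the final paste-in over doing it in an image editor, something like the following Pillow sketch would work, assuming the remapped zenith/nadir renders are full-size images that are transparent everywhere except around their pole (the filenames are placeholders):

```python
# Sketch: composite the edited zenith and nadir renders over the main
# equirectangular panorama. Because Hugin's remapped output is the same
# size as the panorama with transparency outside the patch, a straight
# alpha composite is enough.
from PIL import Image

def paste_patches(pano_path, patch_paths, out_path):
    pano = Image.open(pano_path).convert("RGBA")
    for p in patch_paths:
        patch = Image.open(p).convert("RGBA")
        pano = Image.alpha_composite(pano, patch)  # patch wins where opaque
    pano.convert("RGB").save(out_path)

# paste_patches("pano.tif", ["zenith_fixed.tif", "nadir_fixed.tif"], "final.tif")

# Tiny in-memory demo of the composite behaviour:
base = Image.new("RGBA", (4, 2), (255, 0, 0, 255))    # opaque red base
patch = Image.new("RGBA", (4, 2), (0, 0, 0, 0))       # fully transparent
patch.putpixel((0, 0), (0, 255, 0, 255))              # one opaque pixel
merged = Image.alpha_composite(base, patch)
```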
5. Photoshop and Singularities
A VR panorama is usually stored as an equirectangular image map. This is a simple and convenient projection of a sphere onto a rectangle, but as we saw in the previous section, it distorts the image toward the poles.
When you edit the panorama in Photoshop (or any other image editing program), you must be aware that the program is completely unaware that the image is a projection. Photoshop, for example, does not know that all pixels along the top row will come together in a single point. The result is that you risk getting artifacts at zenith and nadir, not due to the stitching process, but due to post-processing the panorama.
The only solution I've been able to come up with is to be very careful when editing the panorama. When applying adjustments I either apply them equally across the whole breadth of the image, or make sure that the adjustment doesn't affect any pixels near the top or bottom.
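To see why the pole rows are so fragile, here is a small sketch of the equirectangular mapping: every pixel in the top row corresponds to the same physical point, and the real-world arc covered by a pixel shrinks with the cosine of the latitude:

```python
# Sketch of equirectangular geometry and why the poles distort.
import math

def pixel_to_lonlat(x, y, width, height):
    """Map equirectangular pixel (x, y) to (longitude, latitude) in degrees."""
    lon = (x + 0.5) / width * 360.0 - 180.0
    lat = 90.0 - (y + 0.5) / height * 180.0
    return lon, lat

def pixel_arc_width(y, width, height):
    """Degrees of great-circle arc actually covered by one pixel at row y.
    This shrinks toward the poles, where whole rows collapse to a point."""
    _, lat = pixel_to_lonlat(0, y, width, height)
    return 360.0 / width * math.cos(math.radians(lat))

W, H = 6000, 3000
equator_px = pixel_arc_width(H // 2, W, H)  # ~0.06 degrees per pixel
pole_px = pixel_arc_width(0, W, H)          # ~0.00003 degrees per pixel
```

Any edit that treats those 6000 top-row pixels as 6000 distinct locations - a local filter, a gradient, a spot adjustment - will paint different values onto what is really one point, which reappears as a smeared artifact once the map is wrapped back onto a sphere.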
Once you've done the capture, it is time to assemble the images in Hugin. The program actually does a very good job of getting this right all by itself. Two things that I've noticed, though:
The "up" and "down" images need not be connected to each other, only to the images in the equator band. Very often the up and down images contain nothing but empty sky or clouds, which make for lousy feature points. It is better to use the part of the horizon that is visible in them for alignment. I delete any control points between two "up" or two "down" images, as well as any between an "up" and a "down" image.
You need to optimize field of view and barrel distortion. The lens data isn't quite complete, so if you rely on it for the field of view you will get one image along the equator that just won't fit, no matter what. Let Hugin adjust the field of view and barrel distortion and you'll be fine.