Thursday, December 13, 2012

360cities Workflow

For a long time I kept this workflow to myself, but these days it seems like everyone and their brother (no offense to mine) can make panoramas, so here goes.

For this demonstration, I used a Nikon D90 and the Nikon 10.5mm fisheye lens. The first step is to set up the camera and fix its settings. I put it into manual mode and use autofocus on something at the horizon to focus to infinity. I then switch to manual focus so that the focus won't change. When shooting outside, I usually start from the "sunny 16 rule" by setting the aperture to f/16. Then, with the camera pointed at the horizon or at something interesting in the scene, I use the exposure meter in the viewfinder to balance the exposure. I then enable +/- 3EV exposure bracketing, so the camera shoots a nominal exposure, an underexposure, and an overexposure. This step is not strictly necessary, but it makes it possible to correct the exposure of specific regions later. Although I sometimes forget, you must also set the white balance and ISO while in manual mode. I usually set the white balance to "sunny" or "cloudy" and the ISO to 200. Recently, I've been using the high-quality JPG format for output. It is more economical than RAW, and in JPG mode the camera applies certain lens corrections to the output, such as for chromatic aberration.
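If you want to see the arithmetic behind that setup, here is a small sketch of the sunny-16 starting point and what the bracket offsets do to the shutter time. It is only an illustration of the exposure math, not part of my scripts, and the f/16, ISO 200, and +/-3 stop values are simply the ones mentioned above.

```python
from math import log2

def sunny16_shutter(iso):
    """Sunny 16: at f/16 in full sun, the shutter time is roughly 1/ISO seconds."""
    return 1.0 / iso

iso, aperture = 200, 16.0
nominal = sunny16_shutter(iso)                      # 1/200 s at f/16, ISO 200
for label, stops in (("under", -3), ("nominal", 0), ("over", +3)):
    t = nominal * 2 ** stops                        # each stop halves or doubles the light
    ev = log2(aperture ** 2 / t)                    # EV = log2(N^2 / t)
    print(f"{label:>7}: 1/{1 / t:.0f} s  (EV {ev:.1f})")
```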

Then I set up the Nodal Ninja with the 45-degree detent ring (8 positions), and I first shoot a row at a pitch of 30 degrees upward. Once I've gone all the way around, I adjust the pitch to 30 degrees downward. At each position, I press the shutter release three times because I am bracketing exposures. If there is something interesting below the tripod, I fold up the tripod and hold it out to take 3 additional pictures of the nadir using the remote shutter release. The video below demonstrates the sequence of positions and the 48 (16*3) pictures that result.
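As a sanity check on the count, this little sketch enumerates the shooting sequence just described: 8 yaw detents, two pitch rows, three bracketed frames per position. The handheld nadir shots are extra and not included.

```python
def shot_sequence(yaw_step=45, pitches=(30, -30), brackets=("nominal", "under", "over")):
    shots = []
    for pitch in pitches:                       # upper row first, then lower row
        for yaw in range(0, 360, yaw_step):     # 8 positions on the detent ring
            for bracket in brackets:            # three shutter releases per position
                shots.append((yaw, pitch, bracket))
    return shots

seq = shot_sequence()
print(len(seq))        # 48
print(seq[:3])         # [(0, 30, 'nominal'), (0, 30, 'under'), (0, 30, 'over')]
```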


The next step is to load the images onto the computer and to align them with each other. I have some scripts that automate this process, as demonstrated in the video below.

The script takes the images in the order in which they were taken and applies the appropriate pitch and yaw in the resulting Hugin PTO file. It also determines the EV value, focal length, and FOV of each exposure and sets those in the PTO. The script then runs a control point generator on each pair of adjacent images in the graph below. Essentially, the two rows (actually, cycles) of +/- 30 degrees are connected horizontally, each of the horizontal positions in these rows is connected vertically, and the additional under- and over-exposed images are connected to their nominal exposure image, as sketched after this paragraph.
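Here is a sketch of how that pair list could be built. My actual script differs in its details; the sketch just assumes images numbered in shooting order, with 16 positions (8 yaws in each of 2 pitch rows) and 3 bracketed frames per position, nominal frame first.

```python
def nominal(pos):
    return 3 * pos          # index of the nominal frame at position `pos`

def cp_pairs(n_yaw=8, n_rows=2, n_brackets=3):
    pairs = []
    for row in range(n_rows):
        for i in range(n_yaw):
            pos = row * n_yaw + i
            nxt = row * n_yaw + (i + 1) % n_yaw               # neighbour in the same row (a cycle)
            pairs.append((nominal(pos), nominal(nxt)))        # horizontal link
            if row + 1 < n_rows:
                below = (row + 1) * n_yaw + i
                pairs.append((nominal(pos), nominal(below)))  # vertical link between rows
            for b in range(1, n_brackets):                    # tie brackets to their nominal frame
                pairs.append((nominal(pos), nominal(pos) + b))
    return pairs

print(len(cp_pairs()))   # 16 horizontal + 8 vertical + 32 bracket links = 56 pairs
```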


At about 2:00 into the video, I launch the panorama previewer. This gives a coarse rendition of what the result will look like. The exposure is set to 0 by default, so I click the 'auto EV' button to set it to the average exposure. The pictures are shown in the positions set by the script. The Nodal Ninja and tripod can move slightly, so there will be some error from the planned angles. To correct this, we use the control point optimizer, which also sets the parameters of the camera model so that the images are rendered onto the equirectangular projection with low error. To set up the optimizer, we allow it to vary the yaw, pitch, and roll of all the frames except one anchor: the first frame, or any nominal exposure that is level. That photo's position stays fixed. For my camera, I also allow the tool to optimize the field of view (v), barrel distortion (b), and the additional distortion coefficient (c), and, if necessary, the x shift (d) and y shift (e).
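For reference, those choices end up as the optimizer-variable lines of the .pto project. The sketch below just prints what such lines typically look like; the real project file is generated by Hugin and contains much more, so treat the exact formatting as an assumption.

```python
def optimizer_lines(n_images, lens_vars=("v0", "b0", "c0", "d0", "e0")):
    lines = []
    for i in range(1, n_images):                 # image 0 stays fixed as the anchor
        lines.append(f"v y{i} p{i} r{i}")        # yaw/pitch/roll free for every other image
    lines.append("v " + " ".join(lens_vars))     # lens parameters are shared across images
    lines.append("v")                            # terminating 'v' line
    return "\n".join(lines)

print(optimizer_lines(4))
```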

An iterative process follows whereby I optimize, remove outlying control points, add additional control points, and (sometimes) tweak the optimizer settings until the maximum control point error is less than 0.7 pixels. In the fast panorama preview window, you can enable "Show control points", which lets you see where you have false control points or where objects moved during shooting. This is typical of clouds, as seen in the video. The tripod should also be ignored because it will be removed in the final result, so remove any control points on the tripod or panoramic head. Alternatively, you could apply a mask before optimizing, but I haven't yet integrated this into my tools. This iterative process is genuinely carried out in the video (no smoke and mirrors).
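I do this loop through the GUI, but Hugin ships command-line tools that cover one round of it. The sketch below is only an approximation of that loop, and the file names are placeholders.

```python
import subprocess

def optimize_round(pto_in, pto_out):
    # autooptimiser -n optimises the variables marked in the project file.
    subprocess.run(["autooptimiser", "-n", "-o", "tmp_opt.pto", pto_in], check=True)
    # cpclean removes control points whose error is a statistical outlier.
    subprocess.run(["cpclean", "-o", pto_out, "tmp_opt.pto"], check=True)

optimize_round("project.pto", "project_clean.pto")
# Repeat (adding control points by hand where needed) until the worst
# control-point error reported by the optimiser is below ~0.7 px.
```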

The next step is to choose a projection and output size. The Hugin tool does a great job of computing the optimal size for the output. I use the equirectangular projection. I will sometimes choose a small size (2000x1000 pixels) to check for major errors. Not shown in the video is the choice of output type. Under panorama outputs, check "Exposure fused from any arrangement", Format: "TIFF", Compression: "Packbits". Under remapped images, check "No exposure correction, low dynamic range". Under layers, check "Blended layers of similar exposure, without exposure correction". For the remapper I use Nona, with enfuse for image fusion and enblend for blending. To save space, I check "Save cropped images" under the remapper options.
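A rough command-line counterpart to the projection and size settings looks like the sketch below. I set these in the stitcher tab rather than on the command line, and the file names are placeholders.

```python
import subprocess

# Projection 2 is equirectangular; --canvas=AUTO asks Hugin to compute the
# optimal output size (a fixed 2000x1000 can be substituted for a quick check).
subprocess.run(["pano_modify", "--projection=2", "--canvas=AUTO",
                "-o", "final.pto", "project_clean.pto"], check=True)

# nona then writes the remapped (but not yet fused or blended) TIFFs that the
# enfuse/enblend steps consume.
subprocess.run(["nona", "-m", "TIFF_m", "-o", "remapped_", "final.pto"], check=True)
```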

The next step in the process is to make corrections to the remapped images. This is demonstrated in the video below.



In this image, I had a subject that moved around, and I wanted to fix that first. I use the extra outputs that I requested in the Hugin stitcher tab to put my subject back in one piece. In this particular output process, the exposure was slightly different in the resulting panorama than it was in any of the individual exposure layers. Ideally, one would create the three exposure layers as complete panoramas and then blend the results together. Unfortunately, my subject moved within the exposure bracket, so I had to blend the exposure myself with the GIMP. There is a slight aura in the result, which could be fixed with more care; I was hasty for the sake of keeping the video short.

The following video demonstrates how the remapped images are positioned. The tool can form a complete panorama for each exposure setting and then blend these together. The exposure blending process essentially starts with the nominal exposure, fills in highlights from the underexposure, and fills in shadows from the overexposure.
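To make that fill-in idea concrete, here is a toy illustration. It is not what enfuse actually does (enfuse uses a multi-resolution fusion with per-pixel well-exposedness weights); it only shows the highlight/shadow replacement described above, assuming three aligned float images in [0, 1].

```python
import numpy as np

def naive_fuse(under, nominal, over, lo=0.1, hi=0.9):
    """Start from the nominal frame and patch its extremes from the other two."""
    out = nominal.copy()
    out[nominal > hi] = under[nominal > hi]    # blown highlights: take the underexposure
    out[nominal < lo] = over[nominal < lo]     # crushed shadows: take the overexposure
    return out
```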


The next thing I do is remap the final blended image that we just fixed onto a cubic projection. This is done with the erect2cubic tool. The point of this is that it is easier to fix the zenith and nadir in the cubic projection: there is sometimes a dark spot at the zenith, and the tripod is visible at the nadir. The process to remove these is shown in the video. If you took additional pictures of what is below the tripod, they can be integrated into the output; I'll cover that in a tutorial to come. Usually, you can just replicate the ground around the tripod to cover it up.
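The round trip through the cubic projection looks roughly like the sketch below. The face size, file names, and the exact face numbering and ordering are assumptions on my part; check each tool's usage text before relying on them.

```python
import subprocess

# Build a .pto that maps the equirectangular panorama onto six cube faces...
subprocess.run(["erect2cubic", "--erect=fused.tif", "--ptofile=cube.pto",
                "--face=2048"], check=True)
# ...and render the faces with nona (cube0000.tif .. cube0005.tif).
subprocess.run(["nona", "-o", "cube", "cube.pto"], check=True)

# (Edit the zenith and nadir faces in the GIMP at this point.)

# Map the edited faces back to an equirectangular image.
subprocess.run(["cubic2erect", "cube0000.tif", "cube0001.tif", "cube0002.tif",
                "cube0003.tif", "cube0004.tif", "cube0005.tif",
                "fixed_erect.tif"], check=True)
```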

I then map these images back to the equirectangular projection. The final output is ready for uploading after a couple of checks. I look around it closely for fusion errors, and I check the histogram to make sure I am using most of the available dynamic range. Then I choose a JPEG quality that keeps the output under 25MB. A low-quality JPEG is shown below:
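If you want to automate the size check, something like the following works. It assumes Pillow is installed and an 8-bit RGB source image; the file names are placeholders, and the 25MB figure is simply the target mentioned above.

```python
from PIL import Image
import os

def save_under_limit(src, dst, limit=25 * 1024 * 1024):
    img = Image.open(src)
    for quality in range(95, 50, -5):            # step the quality down until it fits
        img.save(dst, "JPEG", quality=quality)
        if os.path.getsize(dst) <= limit:
            return quality
    raise RuntimeError("could not fit under the size limit")

print(save_under_limit("fixed_erect.tif", "pano.jpg"))
```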


Friday, May 25, 2012

ASUS P9X79 Wake on LAN (WOL)

I built a new system around an ASUS P9X79 motherboard. Wake on LAN (WOL) was just not working for me, and a setting that would enable it did not seem to exist in the BIOS (well, the EFI). The owner's manual has essentially no explicit documentation on how to enable this feature. Online, I found many people struggling with the same problem on older ASUS motherboards: they seemed to fix it by enabling the Intel LAN PXE Boot ROM (LAN PXE OPROM)---this is not necessary! The PXE ROM is for booting over the LAN, not for waking in response to a magic packet. There is also a distractor called ErP Ready, which seems to enable more power-saving features when the system is in a standby state, but it also seems to be mutually exclusive with other power-on options.

The only setting you need to enable is Advanced \ APM \ Power On By PCIE/PCI. This makes sense because the Intel LAN controller is almost certainly attached as a PCIE or PCI device. With this enabled, I was able to wake the system from the off state using the wakeonlan program (a Perl script) from MacPorts, even over WiFi.
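For the curious, the magic packet that wakeonlan sends is just a UDP broadcast whose payload is six 0xFF bytes followed by the target MAC address repeated 16 times. Here is a minimal Python sketch of the same thing (the MAC below is a placeholder), not the actual MacPorts script.

```python
import socket

def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
    # 6 x 0xFF, then the 6-byte MAC repeated 16 times (102 bytes total).
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

send_magic_packet("00:11:22:33:44:55")
```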