The 3D Toolbox
The eyes of a typical adult human are approximately 60mm to 70mm apart. To reproduce human perspective as closely as possible, Panasonic AG-3DA1 engineers adopted a fixed inter-axial distance of 60mm, ensuring that at normal operating distances of 10 to 100 meters, the roundness of objects and the placement of backgrounds appear lifelike. This is a key consideration for shooters looking to mimic, to the extent possible, the human experience on earth.
If we narrow the interocular (IO) distance in the camera to only 3mm, we produce a perspective suggestive of an insect with eyes 3mm apart. The narrow IO may make sense for a story about enterprising mosquitoes, or for shooting close-ups of smallish objects to reduce the severe convergence angle that leads to 3D headaches. Conversely, if we're shooting King Kong in 3D, we might increase the IO substantially, to 2 meters or more, to reflect the giant ape's perspective, the increased amount of 3D in the scene being motivated by the much wider spacing of the eyes in King Kong's skull.
Determining the proper amount of 3D in a scene is an artistic choice that goes to the core capabilities and sensibilities of the shooter. Several factors affect the appropriate IO setting, among them the lens focal length, which affects the perceived placement of objects within the 3D volume. Long lenses collapse 3D space and bring backgrounds closer; short lenses have the opposite effect, pushing backgrounds further into the distance.
The 3D shooter must always consider the anticipated display venue, a viewer's comfort zone being of utmost concern owing to image magnification and screen size. Audiences in a digital cinema in front of a 10-meter screen can tolerate only one-third as much parallax as viewers at home watching the same program on a 42-inch plasma TV.
The smart 3D shooter therefore acts with restraint when setting an appropriate interocular distance (IOD). For years the rule of thumb was the so-called "3 Percent Rule," which stipulates that foreground objects should be no closer to the camera than 30 times the IO distance. My experience and that of other 3D shooters suggest that 2 percent overall, or roughly 50 times the IOD, is much safer. Of course, momentary violations of this rule may be perfectly acceptable. What kind of 3D program would we have without spears or knives flying at the viewer from time to time?
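Both guidelines reduce to simple arithmetic. Here is a minimal sketch of the minimum comfortable foreground distance; the function name and structure are my own illustration, not taken from any published tool:

```python
def min_subject_distance_m(iod_mm, multiplier=50):
    """Nearest comfortable foreground distance, in meters.

    multiplier=30 corresponds to the traditional "3 Percent Rule";
    the default of 50 reflects the more conservative ~2 percent
    guideline described above.
    """
    return iod_mm * multiplier / 1000.0

# With the AG-3DA1's fixed 60mm interocular:
print(min_subject_distance_m(60, multiplier=30))  # 3 Percent Rule -> 1.8 meters
print(min_subject_distance_m(60))                 # 2 percent guideline -> 3.0 meters
```

Note how quickly the safe working distance grows with the interocular: doubling the IOD doubles the closest allowable foreground object.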
Like exposure and the cinematographer's light meter, the mathematical limits of parallax applicable to a scene are subjective and should therefore be interpreted creatively. Many scenes exhibiting what appears to be excessive parallax may in fact reproduce fine without inflicting pain or eliciting howls of protest from viewers. Still, the experienced stereographer is keenly aware of the relative impact of common left- and right-eye image disparities. Vertical misalignment errors, for example, can be especially disturbing to audiences, while color disparities may be readily overlooked.
While several IOD calculators are available at various price points, the iPhone/iPod touch/iPad app IOD calc ($49.99) may be the most comprehensive, straightforward, and easy to use. With it, you can assess the amount of parallax in a scene as a percentage, with areas of excessive divergence readily identified and thus easily addressed at the time of capture.
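The underlying check is easy to approximate. The sketch below is not IOD calc's actual method and the names are hypothetical; it simply expresses a measured left/right disparity as a percentage of image width and compares it against a display-dependent ceiling, roughly 2 percent for a living-room TV and about a third of that for a large cinema screen, per the comfort-zone discussion above:

```python
def parallax_percent(disparity_px, frame_width_px=1920):
    # Screen parallax expressed as a percentage of image width.
    return 100.0 * disparity_px / frame_width_px

def within_comfort(disparity_px, limit_percent=2.0, frame_width_px=1920):
    # limit_percent: ~2.0 for a 42-inch TV; roughly a third of that
    # (about 0.7) for a 10-meter digital cinema screen.
    return parallax_percent(disparity_px, frame_width_px) <= limit_percent

disparity = 38  # pixels between left- and right-eye images
print(within_comfort(disparity))                     # fine for TV -> True
print(within_comfort(disparity, limit_percent=0.7))  # too much for cinema -> False
```

The same shot can therefore pass for one venue and fail for another, which is why the anticipated display size must be settled before the cameras roll.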
It's always a question for 3D shooters whether to converge in-camera or later in post. The key advantages of converging in post are the avoidance of the keystoning that can occur with toed-in cameras and the ability to fine-tune convergence after the fact. Applying convergence in software may also allow for a closer placement of the screen plane; in the case of the 3DA1, this can accommodate converged objects nearer to the camera than approximately 85 inches.
Not all 3D camera disparities can be effectively rectified in post, however. Insufficient parallax during capture may fail to record critical detail around objects; that detail is effectively lost forever and cannot easily be introduced via software.
3D disparity adjustments in post may also lead to a loss of resolution because the left and right images must be enlarged to allow the required repositioning inside the 1920x1080 window. The loss of resolution is an inevitable consequence of shooting 1920x1080 in the first place. The alternative route of shooting at 2K with a slightly larger frame size (2048 horizontal pixels) allows much greater freedom to set convergence and otherwise correct 3D disparities in post.
Shooting 3D requires shooters to acquire a vastly different skill set. While we've always used monoscopic depth cues to create an illusion of depth, the 3D shooter no longer relies on such cues exclusively. 3D is, after all, a technical trick that requires the viewer's cooperation to fuse two distinct 2D images in the far reaches of the mind. In this sense, the stereographer pushes constantly up against the limits of what an audience can reasonably be expected to accommodate.
Simple tools like IOD calc and Dashwood Cinema Solutions' Stereo3D Toolbox LE ($99) can help the 3D shooter avoid inflicting unnecessary pain on the audience. For effective storytellers working in this new dimension, that has to be, at the very least, a worthwhile and noble goal.