Large-format single-sensor (LFSS) cameras are touted for their high resolution, their ability to mount cine lenses and their film-like depth of field. But LFSS cameras also present design challenges for manufacturers and face some natural disadvantages compared with the long-serving three-sensor camera design, a design that stretches back to the tube camera era and the birth of color television.
The Mosaic Effect
Ikonoskop A-Cam dII
In simple terms, a three-sensor camera employs a beam-splitting prism that separates the image from the lens into the three primary colors: red, green and blue. The resultant red, green and blue images are delivered discretely to the three separate sensors. Each of those three sensors has the same number of equally spaced pixels.
As its name suggests, a large-format single-sensor camera employs a single sensor that is considerably larger than the largest sensors (2/3-inch) used in three-sensor cameras. Rather than using a prism to separate colors, LFSS technology employs a mosaic color mask over the pixels on the chip so that some pixels receive only green light, some only red and some only blue.
Though there are some other mosaicing schemes, most LFSS cameras today employ a Bayer pattern in the mask over the individual pixels. The Bayer pattern allows twice the number of pixels to receive green light (50 percent) as receive either red or blue (25 percent each). This pixel count disparity is resolved by “debayering” processing to achieve the RGB output. (The term “debayering” is used in this article to describe the reconstruction processing for LFSS cameras, whether they use a Bayer pattern or some other pattern.)
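The 50/25/25 split described above follows directly from the Bayer tiling: every 2x2 block of pixels holds two green filters, one red and one blue. A minimal sketch (using NumPy, and assuming the common RGGB arrangement of the tile) makes the ratio concrete:

```python
import numpy as np

def bayer_mask(height, width):
    """Build an RGGB Bayer color mask: each cell names the single color
    that pixel's filter passes. (RGGB is one common tiling; any Bayer
    2x2 tile holds two greens, one red and one blue.)"""
    mask = np.empty((height, width), dtype="<U1")
    mask[0::2, 0::2] = "R"   # even rows, even columns
    mask[0::2, 1::2] = "G"
    mask[1::2, 0::2] = "G"
    mask[1::2, 1::2] = "B"   # odd rows, odd columns
    return mask

mask = bayer_mask(4, 4)
print((mask == "G").mean())  # 0.5  -> 50 percent of pixels see green
print((mask == "R").mean())  # 0.25 -> 25 percent see red
print((mask == "B").mean())  # 0.25 -> 25 percent see blue
```

Whatever the mask dimensions, the ratios hold, which is why debayering always has twice as much green data to work from as red or blue.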
Three-sensor cameras employ a beam-splitting prism that separates the image from the lens into the three primary colors: red, green and blue.
Most camera manufacturers agree that an LFSS camera is simpler to build than a three-sensor camera, because there’s no prism assembly to deal with. But the debayering process, unnecessary in a three-sensor camera, is critical.
“It takes a lot of math to achieve that,” says Juan Martinez, senior product manager, Sony Electronics. “You need to have an incredibly powerful digital signal processor.”
Larry Thorpe, senior fellow at Canon, agrees. “That’s probably the singular difference between the single-sensor camera and the three-chip camera, in that the three-chip camera has the beauty that it delivers immediately at 4:4:4,” he says. Each LFSS maker has its own proprietary algorithm that strives to deliver an image as close as possible to true 4:4:4 (full-resolution RGB, with no chroma subsampling).
Michael Bergeron, senior business development manager at Panasonic, points out that “using a mosaic filter, you don’t have co-sited pixels. In a three-imager camera, the red, green and blue subpixel of each pixel is in exactly the same spot.” Part of the debayering process involves divining what a blue pixel on an LFSS chip, for example, would capture in a location where there is actually a green or red pixel.
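That “divining” step is, at its simplest, interpolation from neighboring pixels of the wanted color. The toy sketch below (an assumption for illustration, not any manufacturer’s proprietary algorithm) shows the most basic bilinear version: estimating green at a red-filtered site, whose four edge neighbors in a Bayer mosaic are all green.

```python
import numpy as np

def green_at_red_site(raw, y, x):
    """Toy bilinear debayer step: estimate the green value at a
    red-filtered pixel (y, x). In a Bayer mosaic, the pixels directly
    above, below, left and right of a red site are all green, so the
    simplest estimate is their average. Real debayering algorithms are
    proprietary and far more sophisticated than this."""
    return (raw[y - 1, x] + raw[y + 1, x] +
            raw[y, x - 1] + raw[y, x + 1]) / 4.0

# A flat patch where every green-filtered neighbor read 100:
raw = np.zeros((3, 3))
raw[0, 1] = raw[2, 1] = raw[1, 0] = raw[1, 2] = 100.0
print(green_at_red_site(raw, 1, 1))  # 100.0 -> estimated green at the red site
```

On flat areas this guess is exact; it is precisely on fine color detail, where the neighbors disagree, that the estimation breaks down, which is the artifact problem Weber describes next.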
Klaus Weber, director of product marketing, cameras, at Grass Valley Germany, says this can be a problem when it comes to imaging fine color detail with an LFSS camera. “If you shoot very small, colored objects, like small flowers or a bee, in front of green grass, for example, I think the debayering process will generate a lot of artifacts, because the estimation will not work so well.”
As to the amount of power consumed by the two different designs, Weber rates the difference as a push. “I think the power consumption from a large, single-imager CMOS camera compared to a three-2/3-inch CMOS should not be very different.” Where the single imager itself might use marginally less power than three separate imagers, most of what you save from the imager “you most likely will consume on the larger processing power that is needed,” Weber says.
Another difficulty LFSS camera makers face is in the dyes used to create the mosaic mask over the single sensor. Thorpe says there are two aspects to this issue: “One is designing the material and shaping the spectral responses to do justice to what you’re seeking in terms of meeting colorimetric specs of high-definition television or, more challenging, the colorimetric specifications of DCI, the Digital Cinema Initiative. And then there’s the issue of the stability of those pigments with time. Some are better than others.”
LFSS camera designers are also forced to compromise in their use of a low-pass filter to prevent aliasing in the image. The lower the pixel count for each individual color, the more severe the aliasing problem can be.
The Bayer pattern allows twice the number of pixels to receive green light as receive either red or blue light.
“Since there are twice as many green pixels as red and blue pixels, you’re faced with a choice,” says Panasonic’s Bergeron. “Either make an anti-aliasing filter that’s very aggressive [to handle the red and blue channel], that filters out frequencies that the green pixels would have resolved ... or tune it to the green pixels, in which case you can get aliasing in the red and blue.”
In a three-sensor camera, because pixel count on each chip is the same, a single low-pass filter is optimal for all three colors.
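The trade-off Bergeron describes can be seen in a back-of-envelope calculation. For a hypothetical pixel pitch p, red and blue sites in a Bayer mosaic repeat only every 2p horizontally and vertically, so their Nyquist limit is half that of the full grid (the assumption here is simple one-dimensional sampling along a row or column):

```python
# Back-of-envelope Nyquist limits implied by Bayer sub-sampling.
# p is a hypothetical pixel pitch; units are arbitrary.
p = 1.0
nyquist_full = 1 / (2 * p)      # full pixel grid: what each chip of a
                                # three-sensor camera samples per color
nyquist_rb = 1 / (2 * 2 * p)    # red/blue sites repeat every 2p
print(nyquist_full)  # 0.5
print(nyquist_rb)    # 0.25 -> half the limit of the full grid
```

A single optical low-pass filter must cut somewhere between these two limits, so it is necessarily too aggressive for green or too permissive for red and blue, exactly the choice Bergeron lays out.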
There are also obstacles in the three-sensor design that LFSS cameras avoid. “It’s really complicated because you have an optical block and three sensors,” says Sony’s Martinez. When black balancing or compensating for drift due to heat, for example, on an LFSS camera, “you’re doing all the corrections on one sensor, not on three separate sensors.”
A prism design has optical considerations as well. “For the type of glass that’s used in the prism, it typically requires optical correction associated with it,” says Alan Keil, director of engineering at Ikegami. “That’s compared to just air, which is normally all there is [between the lens and sensor] in a single-sensor camera.”
One other thing to consider is the lens size required to cover the large sensor of an LFSS. While in a cine application an LFSS might be fitted with a prime lens or relatively short-range zoom lens, modern sports applications often employ a 100x zoom range lens. “A 100x zoom for a 35mm size imager would be maybe 100 kg [220 lb.],” says GV’s Weber.
But as Keil points out, the modern 100x zoom lenses are designed to deliver sharpness for the high-definition cameras they are mounted on. “If you increased resolution by a lot, those lenses might not be perfect enough.”