Sony’s announcement of the VENICE camera has naturally raised the sort of stir we’d expect from a new camera release from a major manufacturer. One of the most significant characteristics of the camera is its large-format sensor, Sony’s first full-frame sensor for a high-end digital motion picture camera.
VENICE’s 36mm x 24mm full-frame image sensor can capture images up to a maximum resolution of 6048 x 4032 (with firmware update). By switching imager modes, VENICE can natively support Super 35 24.9mm x 18.7mm 4096 x 3024 resolution (equivalent to 4-perf motion picture film) and Super 35 24.9mm x 14.0mm 4096 x 2160 resolution (equivalent to 3-perf film). In other words, VENICE’s full-frame sensor can capture in almost any format, including full 18mm-height Super 35 (anamorphic and spherical) and full-frame 24mm-height (anamorphic and spherical, with firmware update).
Sony’s VENICE 36mm x 24mm full-frame image sensor can capture images up to a maximum resolution of 6048 x 4032.
Of course, Sony has been manufacturing very respectable sensors in-house for quite a while, with the excellent a7S mirrorless camera distinguished by its great low-light capability on a sensor the size of an 8-perf stills frame, but the company has not released a big-chip motion picture camera before this.
We might reasonably have expected otherwise. Intuition would suggest that Sony’s F65 should have a large sensor, if the number after the F were intended to indicate results comparable to 65mm film in terms of depth of field. As it is, Sony has waited until VENICE to do that, but Sony is not the first. RED has put out big-chip cameras, the Vision Research Phantom 65 has a 65mm-sized sensor, as does ARRI’s Alexa 65, and that’s before we even consider the enormous numbers of super-affordable DSLRs that followed the trendsetting Canon EOS 5D Mk II. There are upsides and downsides to making sensors bigger, as most of us are aware, but it’s worth considering why manufacturers have started leaning toward them in the first place, because there are some crucial engineering realities at play.
Screen shot from the short film “The Dig,” which was shot with Sony’s VENICE camera. Watch video.
What we want out of a sensor is well known: high resolution, low noise, high sensitivity and wide dynamic range. Noise and sensitivity are largely two sides of the same coin, since amplification causes noise, and a more sensitive chip requires less amplification to produce a usable picture. The sensitivity of a sensor, its real-world usability in low light, is controlled by how reliably it records every photon that hits it (called its “quantum efficiency”) and by how much random noise it introduces into that recording.
Dynamic range is almost as easy to understand: as photons hit the sensor, each pixel (or more properly, photosite) has to store the electrons they liberate. When there’s no more room for electrons, the photosite becomes saturated, and we see clipping. Dynamic range is, in effect, the ratio between that saturation point and the noise floor, so improving the quantum efficiency and storage capacity of a sensor are, of course, targets for research and development.
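To put a number on that ratio, dynamic range is often expressed in stops: the base-2 logarithm of full-well capacity (how many electrons a photosite can hold before clipping) divided by the noise floor. A minimal sketch, with purely illustrative electron counts rather than published specifications for any real camera:

```python
import math

def dynamic_range_stops(full_well_electrons, read_noise_electrons):
    """Dynamic range in stops: the ratio of full-well capacity
    (the clipping point) to the noise floor, as a power of two."""
    return math.log2(full_well_electrons / read_noise_electrons)

# Hypothetical numbers for comparison, not any camera's real figures:
deep_well = dynamic_range_stops(60000, 2.5)   # large, quiet photosite
small_well = dynamic_range_stops(15000, 5.0)  # smaller, noisier photosite
```

With these made-up figures, the larger photosite yields roughly three stops more dynamic range, which is why storage capacity and noise floor are both R&D targets.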
“The Dig” director of photography Claudio Miranda, ASC (behind VENICE camera), and director Joseph Kosinski (white shirt)
Even if we can make a sensor that has perfect quantum efficiency, though, we have to deal with “shot noise,” a phenomenon caused by the fact that light is made up of photons, or tiny individual chunks of energy. We can’t have half a photon, so light isn’t actually continuously variable. To photograph an object, we count how many photons bounce off it during one particular video frame. Photons bounce off (matte, non-shiny) surfaces in random directions, so the number that reach the sensor varies slightly from frame to frame. If there are very few of them involved, those random variations can cause the total brightness to differ noticeably between frames, causing visible noise. That’s shot noise, and there’s absolutely nothing that can be done about it.
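Photon arrivals of this kind follow Poisson statistics, which is why signal-to-noise improves as the square root of the photon count: dim scenes are proportionally noisier. A small simulation sketch, using Knuth’s classic Poisson sampler (adequate for the modest means used here; the frame counts and photon means are arbitrary illustration values):

```python
import math
import random
import statistics

def poisson(rng, lam):
    """Knuth's algorithm: draw a Poisson-distributed photon count.
    Fine for modest means (exp(-lam) underflows above roughly 700)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(42)
for mean_photons in (10, 100, 500):
    # Count the photons "caught" in each of 5000 simulated frames.
    frames = [poisson(rng, mean_photons) for _ in range(5000)]
    snr = statistics.mean(frames) / statistics.stdev(frames)
    print(mean_photons, round(snr, 1))  # SNR grows roughly as sqrt(mean)
```

Doubling the light only improves the shot-noise SNR by a factor of about 1.4, which is why catching more photons per photosite matters so much at the low end.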
So we’re starting to hit the fundamental limits. One of the easiest ways around them is to make the photosites on the sensor bigger. A larger area of silicon will catch more photons and has more room to store the resulting electrons, improving both sensitivity and dynamic range, and alongside them resistance to shot noise. The problem is that this approach works directly against the modern tendency toward higher resolution. Put more pixels on a sensor of the same size, and they have to get smaller. To keep performance high while simultaneously increasing resolution, the sensor has to get bigger.
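The arithmetic behind that trade-off is straightforward. Using the sensor widths and horizontal pixel counts quoted earlier for VENICE’s full-frame and Super 35 modes (and ignoring real-world details like microlens gaps and the gaps between photosites), pixel pitch is simply sensor width divided by pixel count:

```python
def photosite_pitch_um(sensor_width_mm, horizontal_pixels):
    """Approximate pixel pitch in microns: sensor width over pixel count."""
    return sensor_width_mm * 1000 / horizontal_pixels

# Figures from the article: 36mm full frame at 6048 pixels wide,
# 24.9mm Super 35 at 4096 pixels wide.
ff = photosite_pitch_um(36.0, 6048)          # ~5.95 microns
s35 = photosite_pitch_um(24.9, 4096)         # ~6.08 microns
hypothetical = photosite_pitch_um(24.9, 6048)  # 6K on Super 35: ~4.12 microns
```

The full-frame 6K photosite ends up about the same size as the Super 35 4K one; squeezing 6K into a Super 35 width would shrink each photosite’s light-gathering area by more than half.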
The Canon EOS 5D Mk IV has a full-frame image sensor.
That’s a perfectly reasonable thing to do, especially as it means the central area of the sensor can be extracted to make cameras like VENICE usable with lenses intended for conventional Super 35mm shooting, so it doesn’t always have to be used as a big-chip device. The question is whether people will feel like they’re getting the most out of it when they do that, or whether Sony’s new camera is part of a broader move toward bigger-chip shooting becoming standard. Even in the photochemical world, with bits of the Hunger Games series, Dunkirk and other projects shooting 65mm film negative, there’s a detectable shift toward bigger formats.
And there’s a question to ask about that, on several levels. Let’s leave alone the issue of whether it’s actually a good idea to exhibit resolutions significantly beyond 2K. It’s important to realize we have never really done that: the traditional 35mm process—involving copying the camera negative to an interpositive, then to an internegative, then to the print—often struggled to resolve the equivalent of 1.5K. In terms of audience appreciation of conventional theatrical exhibition, there’s no need for it. There is, however, more than enough of a reason to do it for a large-screen presentation such as IMAX or a ride film, or for visual effects work; there’s no such thing as too much resolution at acquisition, at least outside the data wrangling challenges. The bigger issues arise from sheer practicality.
It might take something like the Qinematiq depth-sensing follow focus device to make full frame sensors more practical.
None of these concerns will inconvenience the upscale productions that Sony clearly has in mind for VENICE, but they inform us about the broader push for bigger chips. First, there’s a problem of simply finding lenses. A surprising number of modern cinema lenses cover chips bigger than Super 35mm, though the imager size will certainly reduce the choices available. Lenses built specifically for 5-perf 65mm work are rare and expensive, sometimes being based on medium-format stills glass—and there’s a difference between 5-perf 65mm and the format that was once called VistaVision, the 8-perf 35mm horizontal stills frame that is the basis of VENICE’s sensor size.
Anyway, once we’ve found lenses, we face the issue of long focal lengths. To achieve a given field of view on such a big chip, lenses need a much longer focal length than they would on Super 35, which makes them big, heavy and slow. With the requirement for long-focal-length lenses for even fairly moderate fields of view, focus pulling becomes tricky. Yes, we’re all aware of the near-universal love of very shallow depth of field at the moment, but there does come a point where sanity has to step in if we want to have both the near and far sides of someone’s eyeball in focus simultaneously.
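The focal-length penalty is easy to quantify. For an idealized rectilinear lens, horizontal angle of view depends only on focal length and sensor width, so matching a Super 35 framing (24.9mm wide) on a 36mm-wide full-frame sensor means multiplying the focal length by roughly 36 / 24.9, about 1.45x. A rough sketch under those simplifying assumptions:

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm):
    """Horizontal angle of view for an idealized rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

def matching_focal(focal_mm, from_width_mm, to_width_mm):
    """Focal length giving the same horizontal field of view on another format."""
    return focal_mm * to_width_mm / from_width_mm

# A 50mm on Super 35 (24.9mm wide) frames like a ~72mm on full frame (36mm).
equivalent = matching_focal(50, 24.9, 36.0)
```

That longer focal length also brings shallower depth of field at the same aperture and framing, which is exactly where the focus-pulling headache comes from.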
Cameras with really big imaging sensors are great, and they can offer performance that simply isn’t going to happen, absent Star Trek technology, any other way. They’re perfectly appropriate for upscale productions with access to all the right support gear and ninja focus pullers. They’re less ideal for very fast, run-and-gun, low-budget stuff, which, given the DSLR revolution, is the place they’re most often found. Let’s not assume that the best solution for every circumstance is the one that provides the shallowest depth of field.