Digital imaging has evolved considerably over the course of the last two decades. In video, we have graduated from analog to digital, standard definition to high definition, 30 frames per second to 24 fps, interlaced to progressive, small sensor to large sensor, fixed lens to interchangeable lenses, moderate light sensitivity to high ISOs, high definition to “cinema” resolution… All of these advancements have elevated the quality of digital motion picture images substantially.
Today we have a new evolutionary link in digital imagery: raw image recording. Raw has become the new buzzword. Note that it’s raw, not RAW or Raw—it is not an acronym, it’s not someone’s name, it is not a proper noun; it doesn’t deserve a chariot or Secret Service agents, nor does it deserve an unnecessary capital letter, let alone three. Just raw.
The top row of images shows what the Bayer sensor records: half of the photosites record green information, one-quarter record red and one-quarter record blue. On its own, this raw data doesn’t form a complete color image. The second row of images shows the individual color records after interpolation, with full information in green, red and blue. Combining those elements creates full pixel information and a full-color image. (Model Becka Adams)
Raw is actually just an adjective that describes the kind of image file being recorded. In its simplest definition, raw describes data that is recorded without any image processing or compression. After the analog-to-digital conversion happens at the camera’s sensor, you are recording the “raw” data from the sensor’s photosites.
Raw describes a state in the image-making process before any demosaicing (debayering), color processing or file processing has taken place. The raw image file itself is not really even an image—it’s purely data reported from each photosite on the sensor. That means that each “pixel” in the raw image contains information from only one color, either red, green or blue.
In order to describe what is happening when a camera shoots raw imagery, I have to go back and recap how single-sensor cameras work.
All of the photosites (often erroneously called “pixels”) on a CCD or CMOS sensor are colorblind; each records only a brightness value based on the number of photons that strike that particular photosite. In order to create a pixel in the final image, we also need red, green and blue color information.
The Bayer pattern color array, with its mosaic of blue, green and red filters, covers the sensor’s array of photosites.
In a full-raster three-chip camera, this is a simple process because there are three photosites gathering information for every pixel—one gathering green information, one blue and one red. That data is combined to create a pixel with full RGB information.
With a single-sensor camera, we have to find a way to gather three colors from one source. We do this by incorporating a color filter array (CFA), most commonly in the Bayer pattern. This method places a single-colored filter (R, G or B) over each photosite so that the photosite will collect photons in only that wavelength of color.
As every pixel in the final image requires red, green and blue color information, the sensor is providing only one-third of the data necessary to create that pixel. A debayering or demosaicing process (named for the mosaic pattern of the CFA) must take place to create the final image. The debayering algorithm interpolates (a fancy word for a mathematical guess based on given information) the missing two colors for each pixel.
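A minimal sketch of what this interpolation does, using an invented 4×4 RGGB mosaic of brightness values (real camera demosaicing algorithms are far more sophisticated than this simple neighbor-averaging):

```python
# Toy bilinear-style debayering on a tiny, made-up RGGB Bayer mosaic.
# Each photosite holds one brightness value; the other two colors per
# pixel are interpolated ("guessed") from same-colored neighbors.

def bayer_color(y, x):
    """Filter color over photosite (y, x) in an RGGB Bayer pattern."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def demosaic(mosaic):
    """Build full RGB per pixel by averaging same-color neighbors."""
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            pixel = {}
            for c in "RGB":
                if bayer_color(y, x) == c:
                    pixel[c] = mosaic[y][x]  # measured value
                else:
                    # average the neighboring sites filtered with color c
                    vals = [mosaic[j][i]
                            for j in range(max(0, y - 1), min(h, y + 2))
                            for i in range(max(0, x - 1), min(w, x + 2))
                            if bayer_color(j, i) == c]
                    pixel[c] = sum(vals) / len(vals)  # interpolated guess
            row.append((pixel["R"], pixel["G"], pixel["B"]))
        out.append(row)
    return out

mosaic = [
    [200,  60, 190,  60],   # R G R G
    [ 50,  30,  50,  30],   # G B G B
    [210,  65, 205,  60],   # R G R G
    [ 55,  35,  55,  30],   # G B G B
]
rgb = demosaic(mosaic)
```

Note that two-thirds of every output pixel is an estimate; only one channel per pixel was actually measured by the sensor.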
The camera applies the debayering algorithm after the image is captured by the sensor and before the data is recorded. This takes a considerable amount of processing power. Also, much like setting the camera on “auto,” the process relies on your camera to make imaging decisions for you. Most cameras do this extremely well, but there are always compromises.
There are several other steps that the image goes through before it is recorded:
– The white balance setting is applied
– Colorimetric interpretation algorithms are applied
– Gamma correction is applied
– Noise reduction is applied
– Antialiasing filters are applied
– Image sharpening (compensating for antialiasing) is applied
– Image compression algorithm is applied
– Chroma subsampling (dictated by the compression algorithm) is applied
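Two of the steps above, white balance and gamma correction, can be sketched schematically. The per-channel gains and the simple power-function gamma below are illustrative assumptions, not any real camera’s processing:

```python
# Schematic sketch of two in-camera processing steps on one pixel of
# normalized (0.0-1.0) linear RGB data. Gain and gamma values are
# invented for illustration only.

def white_balance(rgb, gains=(1.8, 1.0, 1.4)):
    # scale each channel by its gain; clip to the legal 0-1 range
    return tuple(min(1.0, v * g) for v, g in zip(rgb, gains))

def gamma_correct(rgb, gamma=1 / 2.2):
    # approximate display gamma with a simple power function
    return tuple(v ** gamma for v in rgb)

pixel = (0.18, 0.18, 0.18)  # middle-gray linear value from the sensor
processed = gamma_correct(white_balance(pixel))
```

Baking these choices in at record time is exactly what a raw workflow defers: with raw data, the gains and curve can be chosen (and re-chosen) in post.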
This diagram shows the inside of a three-sensor camera and the configuration of prisms that separates light into its component colors to send to each of the three sensors.
When you shoot in raw mode, you bypass all of these functions and simply record the raw data from the sensor. You’ll use specialized software in post to essentially set these image processing values yourself, providing greater control and flexibility over the look.
While most sensors are capable of capturing 12-bit data, many record formats are limited to 8-bit. This means that once the camera does the debayering, color processing and image sharpening, it discards four of the sensor’s twelve bits for every sample recorded. In a 12-bit system, there are 4,096 levels of information per photosite; in an 8-bit system, there are just 256 levels per photosite. That’s 16 times more color information in a 12-bit system than in an 8-bit one. Most of that information is not discernible to the human eye, but the higher bit depth allows the camera to capture more subtle gradations between colors and between highlight and shadow.
In short, there is more color information and more dynamic range in the raw data from the sensor than in your final image. If you record the raw information from the sensor, then you can use software in post to make your own choices about which 3,840 of those 4,096 levels you’ll “throw away” when you create the final image.
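The arithmetic behind those figures is simple: the number of levels a photosite can represent is two raised to the bit depth.

```python
# Levels of information per photosite as a function of bit depth.
levels_12 = 2 ** 12             # 4,096 levels in a 12-bit system
levels_8 = 2 ** 8               # 256 levels in an 8-bit system
ratio = levels_12 // levels_8
print(levels_12, levels_8, ratio)   # 4096 256 16
```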
There are some considerations before you adopt a raw workflow. First, the post process will be longer because the debayering and image processing happens in the studio instead of in the camera. Although post software performs these image processing functions very quickly, it’s still not as fast as specialized in-camera processors. So using raw footage will increase the ingest time.
Shooting raw on set takes away one of the greatest advantages of shooting digital (as opposed to film); namely, you no longer have What You See Is What You Get (WYSIWYG) on your monitors. With no image processing applied, raw footage does not resemble the final image, or even a very good image. Before it can be useful on an on-set monitor, raw imagery has to be debayered and run through a LUT (lookup table) that mimics the image processing that will happen later in post. That processing may take place in the camera (an in-camera processor outputs a simulated final image via HDMI or HD-SDI) or in a piece of hardware between the camera and the monitor. In either case, the LUT generates an approximate image, not the final one.
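In principle, a LUT is just a precomputed table that replaces per-pixel math with a lookup. The toy 1D table below maps 8-bit code values through an assumed gamma curve; real monitoring LUTs are usually vendor-supplied 3D tables, so this is only a sketch of the mechanism:

```python
# Toy 1D LUT: precompute an output value for each possible 8-bit input,
# then monitoring is a simple table lookup per pixel instead of math.
# The 1/2.2 gamma curve here is an illustrative assumption.
lut = [round(255 * (i / 255) ** (1 / 2.2)) for i in range(256)]

def apply_lut(frame):
    """Map each code value in a flat frame through the lookup table."""
    return [lut[v] for v in frame]

monitor_feed = apply_lut([0, 64, 128, 255])  # brightened for viewing
```

The lookup brightens the dark, flat-looking values toward something closer to a finished image, which is exactly the job of an on-set monitoring LUT.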
Further, if the person processing the image in post does not pay attention to the metadata associated with the raw files or the notes from the cinematographer, the intentions on set can be quickly and easily lost in the post process.
Each photosite on a single sensor has a color filter over it that allows light waves of only that color to strike the site. In a Bayer pattern color array, half of the sites are filtered with green, one-quarter are filtered with red and one-quarter with blue.
A final problem I’ve started to notice is a production “laziness” brought on by a “fix it in post” mentality. Since there is more room to manipulate data in post, setting image characteristics such as white balance, gamma range, and highlight and shadow protection on the set is slightly less of a concern when shooting raw imagery. Unfortunately this latitude tempts some filmmakers to ignore these aspects when shooting because they can “fix” them later. This can be a very dangerous habit. In my professional opinion, it is always better to work to get the final, polished image at the lens rather than in postproduction.
Some people get confused between raw interpolation and chroma subsampling. Keep in mind that raw records only one-third of the color information needed to create the final image. Each “pixel” has information on only one color; the other two colors have to be interpolated to complete the picture. You cannot have 4:4:4 and raw; these are mutually exclusive terms. If the image has been debayered and the color information has been interpolated—meaning it is no longer raw—then your camera and/or record format can choose to keep all of the interpolated (and captured) color information (4:4:4) or discard some of it (4:2:2, 4:1:1, 4:2:0, etc.).
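A toy sketch of what discarding chroma in a 4:2:2-style pattern means: every pixel keeps its luma (Y), but the color-difference values (Cb, Cr) are stored for only every second pixel. The pixel values here are invented for illustration:

```python
# Simplified 4:2:2-style chroma subsampling on one row of (Y, Cb, Cr)
# pixels: full luma is kept, half the chroma samples are discarded.

def subsample_422(row):
    luma = [y for y, cb, cr in row]              # every Y survives
    chroma = [(cb, cr) for y, cb, cr in row[::2]]  # chroma for every 2nd pixel
    return luma, chroma

row = [(16, 128, 128), (35, 120, 140), (60, 110, 150), (90, 100, 160)]
luma, chroma = subsample_422(row)
```

This happens after debayering, which is why subsampling notation and raw describe different stages of the chain.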
Some cameras provide uncompressed video via an alternate output. This should not be confused with raw. An uncompressed signal has been debayered, and white balance, colorimetric interpretation, gamma correction, antialiasing and sharpening have all been applied—you are merely getting the image before additional compression and/or chroma subsampling happens at the recording stage. This image feed has to be recorded via an external recorder, as no current in-camera recording system is robust enough to capture uncompressed HD video.
At the end of the day, raw, like any other option in capturing motion images, is a tool to be utilized, and in many cases it is a very beneficial workflow that allows more creative freedom in postproduction. It is not appropriate for every production situation, however. I would not want to use a raw workflow to shoot news or documentary projects or live events (for live broadcast).
I have heard people talk about raw as being the be-all and end-all, the ultimate future of all workflows, but it is not the only option. And it is most certainly not an excuse to be lazy with lighting, exposure and other camera functions that define the technical and artistic aspects of the final image.