
High Dynamic Range (HDR): Everything You Wanted to Know But Were Afraid to Ask

De-mystifying this ubiquitous term

Opening image from Lost in Space, courtesy of Netflix

Related Resources

The webcast “Working in HDR from Set to Screen — Lost in Space.”

If you’re creating video content of any kind, it’s likely that you’ve run across the term “HDR” on a regular basis, but if you aren’t actually making HDR content, the concepts behind the term can seem daunting and confusing. There are HDR cameras, HDR TVs (with a number of subtypes), and mastering for HDR display (theatrical or home). So it would be easy to wonder what the term means in these different contexts and how knowing this could help you think about your work.

Let’s start off with the fact that the term “HDR” (for high dynamic range) is used to mean two different things and sometimes those different meanings are combined indiscriminately. HDR cinematography refers to the use of one of several methods to capture an original scene that contains more extreme levels of contrast (the difference between lightest and darkest tones) than is normally possible from a sensor.

Whether you’re shooting HDR on an iPhone, another personal device, or a professional motion picture camera, this generally makes use of multiple differently calibrated photo sites (either on a single sensor or via a second sensor) so that one set can capture and hold detail in the highlights and the other in the mid-tones and shadows.

Webcast: HDR Essentials: Vocabulary and Workflow

Read more: Frame Rate, HDR and Creative Intent

Here are the five biggest things you need to know about HDR.

1. HDR photography and HDR display refer to two completely different things. This can be confusing for people new to HDR, and even some not so new: When a camera manufacturer (or smart phone maker for that matter) boasts “HDR,” they’re referring to the ability of their sensor(s) to capture a scene of greater dynamic range than an “SDR” sensor.

A. There aren’t specific light levels, but by way of example: if you’re outside with bright highlights from direct sun, deep shadows, and open shade, then with an “SDR” sensor you’d expect to expose your image so that you keep some detail in the shadows but blow out the sky, or so that you keep detail in the sky while the shadows go completely black with zero information retained. The greater your DR (dynamic range, or what is called “latitude” in film), the more detail you can hold onto in both.
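To put a rough number on that idea: dynamic range is usually counted in stops, where each stop is a doubling of light. Here is a minimal sketch, with made-up luminance values purely for illustration:

```python
import math

# Dynamic range in stops: each stop is a doubling of light.
# These luminance values are made up purely for illustration.
brightest_recordable = 10000.0   # arbitrary units, e.g. the sunlit sky
darkest_recordable = 2.5         # arbitrary units, e.g. detail in open shade

stops = math.log2(brightest_recordable / darkest_recordable)
print(f"Usable dynamic range: {stops:.1f} stops")  # roughly 12 stops in this made-up case
```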

Holding onto that greater range is accomplished in any number of ways. Film negative still has a very large amount of DR and has long been a benchmark for digital photography. Sensors (and their related image-processing hardware) continue to expand their dynamic range.

But manufacturers have also combined more than one sensor, or multiple sets of photo sites, to capture shadow and highlight detail separately: one part exposes for the shadows while another holds detail in the highlights, and the two are then merged into a single image, ideally with the blue sky and clouds defined and the figures lurking in the shadows defined as well.
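As a rough illustration of that merge step (a toy sketch only, not any manufacturer’s actual pipeline), here is how two differently exposed frames might be blended with per-pixel weights:

```python
import numpy as np

def merge_exposures(short_exp, long_exp):
    """Toy exposure merge: favour the long exposure in the shadows and
    the short exposure wherever the long exposure has blown out.

    Both inputs are float arrays scaled to [0, 1] and already aligned.
    """
    # Weight for the long exposure: near 1.0 in the mid-tones and shadows,
    # falling toward 0 where it approaches clipping.
    weight_long = 1.0 - np.clip((long_exp - 0.8) / 0.2, 0.0, 1.0)
    weight_short = 1.0 - weight_long
    return weight_long * long_exp + weight_short * short_exp

# Stand-in frames (real code would load two aligned captures):
short_exp = np.random.rand(1080, 1920, 3) * 0.25       # darker frame, highlights held
long_exp = np.clip(short_exp * 4.0, 0.0, 1.0)          # brighter frame, shadows opened up
merged = merge_exposures(short_exp, long_exp)
```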

Making all that information look believable on your screen, monitor, TV or a theater screen is all about manipulating all that picture information so it’s optimized for the DR of that display. That’s mastering, and that’s a whole other issue.

B. As with any photographic process, you are not literally capturing and recreating the real dynamic range of a scene (say, the one described above); you are capturing the information and re-interpreting it for whatever display method is being used to look at the image. The movie theater or your TV room is not actually as brightly illuminated as a desert at noon. This obvious fact is frequently not given enough thought in these discussions.

This is a vital concept: A properly calibrated HD monitor displays what’s called peak luminance at 100 nits. An explosion in a Mission: Impossible movie or super-bright environment experienced by Brad Pitt’s character in Ad Astra is obviously reduced to a tiny fraction of the real volume of light that would be present if you were actually there, watching the scene for yourself.

Brad Pitt in “Ad Astra”

The brightness scale from dark to light is vastly condensed to “fit” on the display. And when the brightness scale is condensed, it isn’t just the brightest highlights: the entire image has to be tone mapped to look appropriate for the particular specs of that display, from the amount of light output in the areas defined as the brightest whites of the image all the way down through the characteristics of the midtones and shadows. Those mappings are different for HDTV than they will be for an HDR TV that maxes out at the currently common 1000 nits, or, since there are many different models with many different characteristics, at 300 nits or 700 nits, and so on. There is no standard for monitors at this point.
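As a very rough sketch of what “mapping to a display peak” means, here is a generic Reinhard-style roll-off (not the curve any particular TV or mastering tool actually uses):

```python
import numpy as np

def tone_map_to_display(scene_nits, display_peak_nits):
    """Compress scene luminance so nothing exceeds the display's peak.

    A generic Reinhard-style curve: values far below the peak pass through
    nearly unchanged, while very bright values roll off smoothly toward the peak.
    """
    x = np.asarray(scene_nits, dtype=float) / display_peak_nits
    return display_peak_nits * (x / (1.0 + x))

scene = [5.0, 100.0, 1000.0, 10000.0]          # hypothetical scene luminances in nits
print(tone_map_to_display(scene, 100.0))       # mapped for a 100-nit SDR display
print(tone_map_to_display(scene, 1000.0))      # mapped for a 1000-nit HDR display
```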

When mastering motion picture images for HDR display (there are four “flavors” of HDR currently), the image information is mapped such that the brightest highlights sit at 1000 nits, or even 4000 nits, and the rest of the brightness scale is mapped accordingly. If there were a single standard, say if all HDR TVs were designed to show the brightest highlights at 1000 nits, that would be a bit simpler than the situation actually is. Given that there are so many different specs for these displays, the image information has to be adapted on the fly (through one of several methods) to work within those specs.
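For reference, those absolute nit levels are carried in the master via the PQ transfer function (SMPTE ST 2084), which converts luminance into signal values; a minimal sketch of the encoding side:

```python
def pq_encode(nits):
    """SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits -> signal in [0, 1].

    This is the curve used by HDR10 and Dolby Vision masters: 10,000 nits maps
    to 1.0, roughly 100 nits to about 0.5, and roughly 1,000 nits to about 0.75.
    """
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    y = min(max(nits, 0.0), 10000.0) / 10000.0
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

for level in (100, 1000, 4000, 10000):
    print(level, "nits ->", round(pq_encode(level), 3))
```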

This is an oversimplification but it explains the concept:

TL;DR: HDR image capture is about how much of the original scene’s dynamic range you record when shooting. HDR display is about how that image is mastered and processed in post and then handled within the specific display. None of this is determined by whether or not an HDR imager was involved in the initial cinematography.

2. Your movie can be finished in HDR at some point in the future, regardless of what you shoot it with today (but there are two big caveats):

Caveat #1: You need to have enough dynamic range in the original to do a successful HDR finish.

A lot of movies and TV shows that were shot on film are being remastered for HDR display. Movies such as Ad Astra, which were shot on film, were mastered for HDR theatrical and home viewing. And a large catalog of old classics, also shot on film, are being remastered for HDR display. Well-exposed film negative has enough information in it that if you remap the imagery in post such that the brightest highlights read on a properly calibrated HDR monitor at 1000, rather than 100, nits, the entire image can be mapped accordingly and all the tones between darkest black and that brightest white will look natural.

This is not the case with a lot of older shows that were shot in HD or even SD video. There often isn’t enough information to successfully “stretch” the tones out, and if you try, you’ll end up with an image so full of distracting artifacts that there won’t be any point.

Caveat #2: If you’re really expecting to do an HDR finish, you’ll want to think about that when lighting.

Without going too far down a rabbit hole about how imagery is both remapped and often re-graded for HDR display, it’s important to realize that picture information, especially in the brightest areas of your set, that might not make it into the SDR-mastered version because it would clip (look totally blown out) is going to show up in the HDR version, possibly in surprisingly clear detail, because there is more “room” for it in the displayed image.

Say you’re shooting an office interior and you want everything outside the window to simply blow out. The information might still be there in the raw file but in mastering for SDR, it just goes beyond what can “fit”. But the HDR display has that much more “room” to show what’s in that much brighter a space. So when the image information is re-mapped for HDR, you’ll now see things you don’t want to see: the gaffer out the window who wasn’t visible in SDR; the fact that it’s Cincinnati, not New York, outside that window. As HDR has become more popular, it’s frequently been in the colorists’ bay where these types of issues have really come into focus.
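A crude way to picture the difference, using made-up numbers (roughly 100 nits as the SDR ceiling and 1000 nits as the HDR ceiling):

```python
# Illustrative only: the same out-the-window highlight values shown on an
# SDR display (clipping around 100 nits) versus an HDR display (around 1000 nits).
scene_highlights = [80, 150, 400, 900]   # hypothetical nit values outside the window

sdr_view = [min(v, 100) for v in scene_highlights]    # everything above 100 flattens to white
hdr_view = [min(v, 1000) for v in scene_highlights]   # detail up to 1000 nits survives

print("SDR:", sdr_view)   # [80, 100, 100, 100] -> the gaffer vanishes into the blown-out window
print("HDR:", hdr_view)   # [80, 150, 400, 900] -> the gaffer (and Cincinnati) stay visible
```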

So when shooting, if you think there’s a likelihood your project will be mastered for HDR displays, be careful about what’s in the brightest portions of the image, because they will likely not look the same in HDR.

3. There are many flavors of HDR with many different specifications. There is no real standard at this point.

When high-end post houses grade for Dolby Vision, they often sit with a very expensive, professional Dolby-manufactured monitor capable of displaying 4000 nits and their color correction console (DaVinci Resolve, Filmlight Baselight, Autodesk Lustre) set up with scopes that measure accordingly. But at this time, consumer monitors that are being called “HDR” rarely actually display 1000 nits, and most top out at significantly less than that.

Regardless, there are several approaches from clever manufacturers like Dolby that use metadata to “see” exactly what the specific display is capable of showing and then re-map the picture information to look appropriate on that specific display.
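Conceptually, and this is only an illustration rather than Dolby’s actual algorithm, the idea is that the content carries metadata about how it was mastered, the display reports what it can reproduce, and the mapping is worked out per display:

```python
def display_map(pixel_nits, mastering_peak_nits, display_peak_nits):
    """Conceptual sketch of metadata-driven display mapping (not any vendor's real algorithm).

    The master's metadata says how bright the mastering display was; the TV
    reports its own peak; highlights are rolled off to fit that specific panel.
    """
    if display_peak_nits >= mastering_peak_nits:
        return pixel_nits                          # the panel can show the master as-is
    knee = display_peak_nits * 0.5                 # keep the lower half of the panel's range untouched
    if pixel_nits <= knee:
        return pixel_nits
    # Compress everything between the knee and the mastering peak into
    # the headroom the panel actually has left.
    excess = (pixel_nits - knee) / (mastering_peak_nits - knee)
    return knee + excess * (display_peak_nits - knee)

# Hypothetical numbers: a 4000-nit master shown on a 600-nit consumer TV.
for nits in (50, 300, 1000, 4000):
    print(nits, "->", round(display_map(nits, 4000, 600), 1))
```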

So the idea of an absolute standard look, as existed for so many years with film and SDR TV (“here is your master, and everybody everywhere who looks at this under the proper conditions is seeing this exact look”), is non-existent in the current wild west of HDR display.

This is one reason why a lot of filmmakers continue to think in terms of their SDR master as the absolute, final way of seeing their vision and HDR as an afterthought or a pesky deliverable requirement.

4. If you’re thinking about HDR, remember that you can use the greater highlight range and/or the apparent increase in shadow detail to make the audience feel something extra — something that can’t be displayed in SDR.

Some filmmakers have embraced the options of HDR, despite the lack of a standard. For the Brad Pitt feature Ad Astra, director James Gray, cinematographer Hoyte Van Hoytema and colorist Greg Fisher made very selective use of HDR for certain scenes.

And the same goes for darker shadows in HDR, which create the perception of darker darks with greater detail. If you’re among the small number of people who saw Alejandro Iñárritu’s The Revenant (shot by Emmanuel Lubezki and colored by Steven J. Scott) in Dolby Cinema format, you might recall the profound feeling of darkness in the sky during night scenes. One striking moment offered a bright campfire in a night so black that the proscenium seemed to disappear, leaving the campfire in a sea of black. If you experienced It Chapter Two (directed by Andy Muschietti, shot by Checco Varese and colored by Stephen Nakamura) in a Dolby Cinema theater, you also saw deeper into those massive dark shadows than would be possible in a traditional film or digital cinema presentation.

5. Unless you’re required by a distributor (feature film, network, OTT, etc.) to use the HDR finish in a certain way, there is no artistic requirement to use that additional dynamic range in your images.

A large number of directors and cinematographers specifically do not want to master for HDR, and if they are forced to (as is often the case, for streaming or archival purposes), they don’t want to actually use the increased dynamic range as described above, either because the lack of a standard makes it impossible to sign off on any exact version of the look, or because they have a strong fondness for the way images look and feel in the theater or on an SDR HDTV and don’t subscribe to the subjective notion that more DR is better DR.

Their argument always goes something like this: “The artistry of Citizen Kane, 2001: A Space Odyssey, Days of Heaven, Apocalypse Now, Blade Runner, Hateful Eight and pretty much all movies until about five years ago succeeded so spectacularly within the standards and limitations of celluloid capture and display that there’s no reason to think brighter whites and darker blacks are going to make images ‘better,’ more impactful, more impressive, or more artistic.”

Not much is certain about the future technology of HDR or the ways creative filmmakers will find to use that extra dynamic range. But one thing is for sure: the above debate will continue for a long time to come.
