The advent of high dynamic range content has brought with it the need for dynamic metadata, a concept that Lars Borg, principal scientist in Adobe’s Digital Video and Audio Engineering Group, explains to SMPTE Newswatch.
“Dynamic metadata is a new concept because it is something we didn’t need to worry about in the past,” Borg explains. “If you think about regular HDTV in the past and what we call the color volume—what colors could be applied [to content], the chromaticity, the brightness—that color volume for media was the same as the color volume for the display. The display could show any color [present] in the media, and the media could carry any color that could be [visible] on a display. There was a one-to-one mapping between the media encoding and the display, but that is no longer the case in the world of ultra-high-definition…Dynamic metadata says ‘I’m not even using the full range of the mastering display.’ Instead, for this clip, map these colors onto the target display, and do it this way. When you are trying to put a big plug through a small hole, you need to know exactly what part of the big plug to preserve and what part to shave off.”
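Borg’s point can be sketched in code. The example below is a toy tone-mapping curve, not the actual algorithm of any standard such as SMPTE ST 2094; the knee position and all the nit values are illustrative assumptions. It shows why knowing a clip’s actual peak luminance (dynamic metadata) lets a display compress highlights less than if it had to assume the mastering display’s full range (static metadata).

```python
# Illustrative sketch only -- not the algorithm of any HDR standard.
# A pass-through region below a "knee", then linear compression of
# [knee, source_peak] into [knee, display_peak].

def tone_map(nits, source_peak, display_peak):
    """Map a pixel luminance (in nits) into [0, display_peak].

    Values below the knee pass through unchanged; values above it are
    compressed so that source_peak lands exactly at display_peak.
    """
    knee = 0.8 * display_peak  # assumed knee position, for illustration
    if nits <= knee or source_peak <= display_peak:
        return min(nits, display_peak)
    t = (nits - knee) / (source_peak - knee)
    return knee + t * (display_peak - knee)

mastering_peak = 4000.0  # nits: the mastering display's capability
display_peak = 600.0     # nits: a hypothetical consumer display
clip_peak = 800.0        # nits: brightest pixel in *this* clip

pixel = 500.0  # a bright highlight in the clip

# Static metadata: assume the content uses the full mastering range.
static = tone_map(pixel, mastering_peak, display_peak)
# Dynamic metadata: use the clip's real peak, so compression is gentler.
dynamic = tone_map(pixel, clip_peak, display_peak)
print(round(static, 1), round(dynamic, 1))  # dynamic stays closer to 500
```

With static metadata the 500-nit highlight is squeezed almost to the knee; with per-clip metadata it keeps noticeably more of its original brightness, because the mapper knows exactly “what part of the big plug to preserve.”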