When done professionally, HDR results in a massive increase in brightness. The Dolby Vision™ HDR projection system, for example, provides up to 31 foot-lamberts of luminance, while a typical theater experience offers less than half that, at only 14 foot-lamberts. Here, Final Cut Pro X is used to create an HDR simulation with footage from a typical consumer camera.
If you’ve been following releases from manufacturers, as well as standards work in imaging industry organizations, you may be wondering whether all the increases in resolution, dynamic range, color gamut and capture quality are worth the effort and the cost.
It’s not just about 4K. It’s high dynamic range (HDR) and high frame rate (HFR), too—and it’s all getting so cheap! Even though prices for new equipment and software are falling, changes in technique and workflow demand extra time: time to learn new equipment and application features, to test new techniques in your particular shooting and editing environment, and to fumble through unfamiliar routines. You may also have to upgrade hardware and software, once again.
In addition to deciding whether to move to 4K or higher resolution, there are more decisions to make, like whether or not to get wrapped up in HDR imaging.
You may have noticed a trend here: I seem to spend a lot of time in this column trying to figure out whether each new biggest-and-best advance is worth the time it takes to integrate into my workflow, whether it will make the productions I deliver to clients and viewers any better, and, if it’s technically better, whether they will even notice. If they do, which exact improvements will “wow!” them?
MYTH: More Is Always Better
The quest for HDR has been around for decades in one form or another. Gamma correction was introduced with tube-type cameras and cathode ray tube (CRT) displays as a rudimentary way of representing video in a form to match the response of the eye and reproduce viewable scenes on available displays of the day.
As sensors improved, they captured more dynamic range; quieter video amplifications reproduced more shadow information; and, knee or shouldering circuits compressed more of the highlights into the analog video signal. Brighter displays brought wider dynamic range to viewers. The move to digital processing allowed creative mapping of light levels in order to optimize scenes for lighting conditions and moods.
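The gamma correction described above can be sketched numerically. This is the published Rec. 709 transfer curve, used here only as a representative example of how a nonlinear curve spends more code values on shadows (it is not any particular camera’s in-camera processing):

```python
def oetf_709(l):
    """Rec. 709 opto-electronic transfer function.

    Maps linear scene light l (0.0-1.0) to a nonlinear signal
    value (0.0-1.0), spending more code values on shadows to
    roughly match the eye's response.
    """
    if l < 0.018:
        return 4.5 * l                     # linear segment near black
    return 1.099 * l ** 0.45 - 0.099       # power-law (gamma) segment

# Mid-gray (18% scene reflectance) lands at about 41% of signal
# range, well above 18%: shadows get a disproportionate share of codes.
```

Knee or shouldering circuits do the analogous job at the other end of the curve, bending the slope above a set point so highlights compress instead of clipping.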
Linear light imaging would require more than 14 bits to reproduce the 14 stops of static illumination (roughly 15,000:1) that a healthy eye can handle, or 30 bits to reproduce the 30 stops of dynamic contrast (roughly 1,000,000,000:1) that can exist in nature.
But with clever level translation using a gamma or log function akin to processing in the human eye, plus additional tweaking of the blacks and slope-shifting above a “knee” point in the highlights, we’re now able to get a reasonable reproduction of 14 stops in about 10 bits—well within the reach of modern recorders.
If we want to preserve even more information, as is often required for post, our sensors need to be top-of-the-line, and we need to allocate more bits, perhaps 12 or even 16. Keep in mind that the human eye is capable of seeing up to roughly 24 stops, carefully adjusting to different levels of brightness, so dynamic range will vary by situation and even by person. The price of using fewer bits or weaker processing can be contouring in smooth gradations of brightness or color, such as skies or solid backgrounds. Areas of fine detail or coarse texture will often hide contouring.
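The stops-and-bits arithmetic above is just base-2 logarithms; a quick sketch (assuming the usual one-bit-per-stop rule for linear coding):

```python
import math

def stops(contrast_ratio):
    """Dynamic range in photographic stops (doublings of light)."""
    return math.log2(contrast_ratio)

def linear_codes_in_top_stop(bits):
    """With linear coding, the brightest stop alone consumes half of
    all code values -- the waste that gamma/log curves avoid by
    redistributing codes toward the shadows."""
    return 2 ** bits // 2

# stops(15_000) is about 13.9, so linear coding needs ~14 bits,
# while a log or gamma curve fits the same range in about 10 bits.
```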
Beware of consumer cameras that don’t allow you to get more than 8 bits out of the camera. Moving to HDR reproduction usually suggests increasing the color gamut as well, because brighter scenes allow for brighter colors; spreading the range of colors, in turn, requires more levels of color to prevent contouring in subtle shading, just as with brightness.
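The contouring risk in 8-bit output comes down to how few code values fall across a subtle gradient; a sketch quantizing a smooth 2% brightness ramp at different bit depths (the ramp values are illustrative, not measurements):

```python
def quantize(value, bits):
    """Round a 0.0-1.0 signal to the nearest code at a given bit depth."""
    levels = 2 ** bits - 1
    return round(value * levels) / levels

# A subtle 2% brightness span, like a patch of clear sky:
ramp = [0.50 + i * 0.0001 for i in range(200)]

def distinct_steps(bits):
    """How many distinct output codes survive across the ramp."""
    return len({quantize(v, bits) for v in ramp})

# 8-bit leaves only a handful of visible bands across that span,
# while 10-bit provides roughly four times as many intermediate
# levels, pushing the steps below the threshold of visibility.
```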
If you dig into techniques, you’ll often find that cameras with similarly named approaches produce very different results. That doesn’t necessarily mean a newer camera isn’t an improvement over older models, but it might explain why the better cameras still cost more. The different approaches often show up in different applications.
For example, across camera types, HDR in still cameras usually means capture of two or more frames in sequence, each of which is optimized for a different range of brightness. The frames are then merged into a single frame that reproduces a wider range than either of the original frames.
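The merge step above can be sketched in a few lines. This is a toy two-frame blend; the `merge_exposures` name, the 4:1 exposure gain and the clipping threshold are illustrative assumptions, not any camera maker’s actual algorithm:

```python
def merge_exposures(dark, bright, gain=4.0):
    """Toy merge of two exposures of the same scene.

    `dark` is exposed for the highlights; `bright` is the same
    scene exposed `gain` times longer to dig out the shadows.
    Both are linear-light pixel lists on 0.0-1.0.  Each bright
    pixel is scaled back to the dark frame's exposure, and is
    trusted except where it has clipped near 1.0, in which case
    the dark frame's value is used instead.
    """
    merged = []
    for d, b in zip(dark, bright):
        weight = 1.0 if b < 0.9 else 0.0   # distrust clipped pixels
        radiance = weight * (b / gain) + (1.0 - weight) * d
        merged.append(radiance)
    return merged
```

A shadow pixel comes from the longer exposure (less noisy after scaling), while a blown highlight falls back to the short exposure, so the merged frame spans a wider range than either original.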
In video, this technique is seldom used because of its impact on motion reproduction, motion blur and depth of field; artifacts that might pass unnoticed in a single frame become much more visible in a motion sequence. Some consumer cameras nevertheless use multi-frame techniques to get HDR. Others try to squeeze improved dynamic range out of tone mapping, which may run into noise problems.
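Tone mapping, in its simplest global form, squeezes a wide linear range into display range with a fixed curve. Here is a minimal sketch using the well-known Reinhard operator, chosen as a common textbook example rather than what any particular camera implements; the noise problem follows because the curve leaves shadows, and the sensor noise riding on them, nearly untouched while only the highlights are compressed:

```python
def reinhard(x):
    """Global Reinhard tone map: compresses any linear value
    x >= 0 into 0.0-1.0, passing shadows through nearly 1:1
    while rolling off highlights smoothly instead of clipping."""
    return x / (1.0 + x)

# A 16:1 highlight (x = 16) lands just under full scale instead
# of clipping, but a shadow value like x = 0.05 barely changes,
# so any noise in those shadows survives the mapping intact.
```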
Professional-level HDR generally emphasizes improved sensor performance (both in sensitivity and highlight overload handling), combined with new digital in-camera processing curves. In some cases, this can include dynamic curve application based on the output of each pixel or group of pixels.
If more flexibility in post is required, there are new record formats optimized for transferring that information, and recorders possessing greater capacity to maintain that range in postproduction. As always, there are various approaches available, and you have to decide which new format to choose.
As I study more, I find that HDR has distinctly different connotations on the image capture side than on the distribution side. You can develop an HDR “look,” emphasizing the preservation of detail in shadows and in highlights by applying tools in-camera, or record HDR video with more bits and create the look in post.
These “front-end” approaches are targeted toward delivering standard dynamic range (SDR) video to display devices that reproduce up to a few hundred nits of brightness in a living room, or viewing room, with relatively low light.
The other side of HDR involves capture in an HDR format, followed by a special workflow, with dedicated HDR monitors, that preserves more dynamic range throughout the distribution chain, all the way to HDR displays that can produce over 1,000 nits of brightness in a well-lit room.
Getting HDR signals to an HDR display requires new formats and devices that can handle several stops of highlight information. Of course, legacy displays won’t know what to do with the excess information and will need conversion back to SDR video at some point.
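Those “several stops of highlight information” can be quantified with the same log arithmetic used earlier; this sketch assumes a nominal 100-nit SDR reference white, a common round figure, though the exact reference level varies by standard and viewing environment:

```python
import math

def highlight_headroom_stops(display_peak_nits, reference_white_nits=100.0):
    """Extra highlight range, in stops, that an HDR display offers
    above a nominal SDR reference white level."""
    return math.log2(display_peak_nits / reference_white_nits)

# A 1,000-nit HDR display offers about 3.3 stops of highlight
# headroom over a 100-nit SDR reference -- information a legacy
# display has no way to show, hence the conversion back to SDR.
```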
Once again, it’s another transition, another product that may require multiple deliverables, and more confusing options for consumers. Are you ready for that?
Upgrading your workflow means weighing the impact of managing larger files from the camera: additional time to transfer them, potentially larger and faster servers to hold them, more powerful computers to ingest, convert, process and assemble the additional information, and larger archive pools to maintain both the finished products and the originals for future use.
HDR, HFR, sensitivity and color gamut advances appear almost daily at all levels of cameras. The high end and the low end are both improving, so either way you still have decisions to make. The performance or features that were acceptable last year might not cut it this year.
If you’re now experiencing a sense of déjà vu, it could be because you lived through a similar experience when you moved from standard-definition to high-definition video production. But if you’re entering a state of depression, it could be because you thought you had slain all these demons when you moved from film to digital video production.
Does it ever end? To be continued? You can bet on it; there’s always more misinformation and not enough time.
Charles “C.R.” Caillouet is a technical producer and video engineer who has worked in TV production, from preproduction through field acquisition to postproduction and presentation, as well as for NASA, Sony and Panasonic. He’s currently Technical Director of the Jackson Hole Wildlife Film Festival and Science Media Symposium.