Interlaced video saves a lot of bandwidth. What do you need to be concerned about with interlace capture?
Q Recently I was working on a project where I wanted to shoot some B-roll. I was thinking I’d like slo-mo, and the camera I had access to allowed shooting at 60p. My camera operator and I had differing opinions on how to work. He wanted to shoot everything at 60p and said I could just change the rate in post. I wasn’t sure that was the best idea. I was confident that if I were finishing at 24, I wouldn’t want everything at 60p—my experience has been that it would look too different. But for this particular project, I was going to be at 30. We ended up doing select shots at 60—I felt more confident that way. Right or wrong call?
A I think you made the right call. Feeling confident while shooting is a good thing, and there are other reasons why you may have chosen wisely. There are a lot of decisions to be made about how a project is shot: which camera, which lens, camera support, recording method, codec, frame rate, resolution—the list is never-ending. Some of these choices may be thought of as “creative” choices: 85mm or 25mm lens, gimbal or dolly. Others are “workflow” or “technical” choices: codec or recording method. And then there are those that are both—creative decisions that can dramatically affect workflow, as well. For me, frame rate falls into that category.
I’m reminded of a presentation I heard at the Hollywood Post Alliance Tech Retreat a couple of years ago. I forget who the presenter was (other than he was from a postproduction company), but he mentioned that some of his clients didn’t understand why their post costs increased as they moved to shooting in 4K. In short, he said they told him, “When I rent the camera, they don’t charge me more if I switch it from HD to 4K.” The thinking was that it shouldn’t cost more in post. Depending on the codec and recording method, however, going from 30 to 60 for all of your shots doubles the amount of data. (Of course, by 30 I mean 29.97, and by 60 I mean 59.94.)
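To put rough numbers on that doubling, here’s a back-of-envelope sketch, assuming an intraframe-style codec whose data rate scales linearly with frame rate (true of ProRes-type codecs; long-GOP codecs behave differently). The 220 Mb/s figure and the two-hour shoot are hypothetical, purely for illustration.

```python
# Rough storage estimate, assuming a codec whose data rate scales
# linearly with frame rate. All figures here are illustrative.
def storage_gb(mbps_at_2997, frame_rate, hours):
    """Approximate storage in (decimal) GB for a shoot of the given length."""
    bitrate = mbps_at_2997 * (frame_rate / 29.97)  # scale bitrate with fps
    seconds = hours * 3600
    return bitrate * seconds / 8 / 1000  # megabits -> megabytes -> gigabytes

# Hypothetical 220 Mb/s codec at 29.97, two hours of footage:
at_30 = storage_gb(220, 29.97, 2)
at_60 = storage_gb(220, 59.94, 2)
print(f"29.97p: {at_30:.0f} GB, 59.94p: {at_60:.0f} GB")  # 59.94 is double
```

Whatever the exact bitrate, the ratio is the point: everything downstream—cards, backups, transfer time—doubles with it.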
All that extra data needs to be offloaded from the card or drive on location, backed up (you’re backing up your data, right?) and then ingested. If you aren’t using the fastest drives, that means that although you’ve finished striking the location, you may still be waiting on data transfers. All that copying takes time and takes up space. If you shoot days’ or weeks’ worth, this can also have a real impact on whoever does the post. In going to 60p, will they have to use proxies instead of the original files? This might be a performance issue: can their system play 60p at full resolution? Or it might be a space issue: do they have twice the room?
From a performance perspective, if they can’t deal with 60p original files, then you have to figure on proxies being created, which might increase your total ingest time (though, in some cases, proxies can happen in the background). A quick start to editing might not be in the cards. So your decision, however you arrived at it, probably saved you time on location and in postproduction.
While we’re on the subject, let’s talk about paying attention to the amount of data you capture. I’ve been on location for several projects recently and have noticed habits creeping into shoots that can create workflow issues later on, mainly in terms of the amount of data captured each day. Regardless of whether you shoot at 24 or 60, how long the camera rolls is important. When film is shot, the director calls “cut” for a reason: she knows that film stock is expensive and that changing magazines takes even more time.
With digital capture, it appears people assume that since there’s no film stock to worry about and slating takes time (particularly with double-system sound), “cutting the roll” isn’t that critical. For instance, directors say “roll,” but then give more direction to talent before “action” is called. Or, after the take, they give comments while the camera continues to roll before finally calling “cut.” But if a little bit of the film mindset can work its way into a director’s habits, it can help to reduce the amount of unnecessary data captured. As we move to more resolution and more bit depth, file sizes aren’t getting smaller. Every “bit” helps.
INTERLACE: NOT DEAD YET
Q I thought interlace was a thing of the past. I’ve come up with an idea for a short TV series project and thought I should take a look at what I’ll need to end up with as a master. A friend sent me the delivery specs for a particular company. I read through it and was surprised to see that I needed to get them an interlaced master. I mostly work in 24p, so I’m not sure what I should be thinking about.
A Welcome to the reality of broadcast and cablecast television. While most televisions these days display progressive video—whether natively or converted from interlace—many broadcasters and cable networks still deliver in interlace. Let me step back a bit to review quickly. An interlaced video frame is made up of two parts, called fields, each consisting of every other horizontal line of pixels. The two fields are captured and presented to a display sequentially, one after the other. Depending on the frame rate, the fields might occur 1/60th of a second apart. With progressive video of the same image, the camera captures and presents the full frame at once—say, every 1/30th of a second. Interlace is a means of dealing with bandwidth: by sending only half the information at once, less bandwidth is needed to transmit a picture.
Our original analog television system was inherently interlaced. With the move to digital, the approved transmission standards included both interlace and progressive. One standard was 1080i: 1920×1080 interlaced at 29.97 frames per second (59.94 fields per second, since the fields alternate). Another was 720p: 1280×720 progressive at 59.94 frames per second. Actually, I should say “is,” not “was,” as over-the-air broadcasters and many cable networks use these formats today. You might see in the numbers how transmission bandwidth affects things: you can have higher resolution at a lower frame rate, or a higher frame rate at lower resolution. Standards are slow to change, but they’re still needed so that cable companies, broadcasters and display manufacturers know how to send and display content.
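You can see the bandwidth trade-off in a quick raw-pixel-rate comparison of the two formats. This is a simplification—real broadcast bandwidth depends on compression, not raw pixels—but it shows why the two formats were considered roughly equivalent loads.

```python
# Back-of-envelope pixel throughput for the two broadcast formats.
# 1080i sends one field (half the lines) every 1/59.94 s; 720p sends a
# full frame 59.94 times a second. Raw rates land in the same ballpark.
def pixels_per_second(width, height, rate, interlaced=False):
    lines = height // 2 if interlaced else height  # a field is half the lines
    return width * lines * rate

i1080 = pixels_per_second(1920, 1080, 59.94, interlaced=True)  # fields/s
p720 = pixels_per_second(1280, 720, 59.94)                     # frames/s
print(f"1080i: {i1080/1e6:.1f} Mpx/s, 720p: {p720/1e6:.1f} Mpx/s")
```

Higher spatial resolution at half the temporal rate, or the reverse—either way you spend a similar pixel budget.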
Deliverables are just that—what you deliver. The specifications don’t dictate what you capture (although some programming submission guidelines specify that, too). If you need to deliver 1080i at 59.94, you can still capture progressively. You just have to pay attention to how you create your deliverables and, more importantly, how you edit. Timing for television is critical: your total running time has to be accurate, and your acts probably have minimum and maximum lengths per the spec. If you’re doing everything in 23.98 but delivering in 59.94, you’ll need to properly calculate the running time. Most advanced editing applications handle this calculation for you, though some do not.
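The arithmetic behind that running-time calculation is worth seeing once. Here’s a minimal sketch using exact rational rates (the frame count is hypothetical); note that every 23.98 frame corresponds to exactly 2.5 frames at 59.94, which is where the 2:3 pattern comes from.

```python
from fractions import Fraction

# Exact NTSC-family rates; "23.98" and "59.94" are shorthand for these.
RATE_2398 = Fraction(24000, 1001)
RATE_5994 = Fraction(60000, 1001)

frames_2398 = 31680  # hypothetical act length, cut at 23.98
seconds = Fraction(frames_2398) / RATE_2398  # true running time, exact
frames_5994 = seconds * RATE_5994            # same duration at 59.94

print(float(seconds) / 60)  # about 22.02 minutes
print(frames_5994)          # 79200 -- exactly 2.5x the 23.98 frame count
```

Doing this with exact fractions rather than rounded decimals matters: over a long program, treating 23.98 as exactly 24 drifts the running time by seconds.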
It’s also not as simple as converting all your footage to 59.94, either before ingest or by using a 59.94 timeline. The conversion process involves adding 2:3 pulldown, which creates interlace fields from the progressive frames. In doing so, pulldown creates some new frames that are composed of fields from two different captured frames. Pulldown itself isn’t the issue; it has been done for as long as movies have been shown on television. But networks want to maximize compression to get the best signal into a limited program stream, so they remove pulldown rather than compress the extra frames.
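The field pattern itself is easy to sketch. In this toy model, four progressive frames (A–D) become ten fields paired into five interlaced frames; two of those frames mix fields from different source frames, which are the “new” frames mentioned above.

```python
# Minimal sketch of 2:3 pulldown: each source frame contributes
# alternately 2 then 3 fields; consecutive fields pair into frames.
def pulldown_23(frames):
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))  # 2,3,2,3,...
    # Pair consecutive fields into interlaced frames.
    return [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]

print(pulldown_23(["A", "B", "C", "D"]))
# [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
```

The ('B', 'C') and ('C', 'D') frames are the mixed ones a broadcaster’s inverse-telecine pass looks for—provided the cadence is consistent.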
Unfortunately, if you edit a 23.98 clip into a 59.94 timeline, your editing software restarts the pulldown sequence, called cadence, at each edit point, making it difficult for the compression algorithms to find the pulldown and remove it. The solution: Do the conversion on the final output so the cadence begins at the beginning of the show and stays consistent throughout. Deliverables don’t have to affect your capture, but you do need to consider them throughout the postproduction process.