Remote Editing, Part 3


Last time, I talked about streaming services’ understanding of low latency. Now it’s my turn.

I usually edit with clients in my suite, so I’m accustomed to near-zero latency (yes, near-zero—even displays add a few milliseconds). But as I thought about latency for a remote edit, I realized I had to be realistic about what I really need. I had to think about what actually happens in an edit suite, not what I’d like to happen or what I imagine happens.

During an edit session, sometimes I feel like I have to ring a bell to get people’s attention to watch the latest cut—to get them off their phones and laptops, or to interrupt their discussions. I honestly don’t think this is because they’re not interested in what I’m doing, but because they have so much on their plates and need to multitask while I edit. And if I’m honest, I also think it might be because sometimes the editing process is like watching paint dry.

So, I asked myself if I really need to have zero latency. Does it need to be exactly like sitting in the suite? While it would be great to have zero latency, a somewhat low latency of 2 to 5 seconds didn’t seem to be a deal-breaker. I envisioned what it would feel like to wait for two seconds after I hit play. It seemed reasonable.

But there’s another issue with latency, and that’s audio. If the client group all have their mics on and I have my mic on, my microphone will pick up my suite’s playback in real time—but their stream arrives 2 to 5 seconds later, so what they hear from my mic won’t line up with the audio (and picture) they see in the stream.

My solution was to mute my suite audio during client playback. I’ve heard the tracks over and over anyway—in fact, most of the time, if I mute the track and play my timeline, the tracks play back in my head regardless. So my process is: speakers muted for client playback; then, when I’m making adjustments where I need to follow the audio, speakers back on but my microphone muted.

Speaking of microphones, in my tests I found that using a good-quality conferencing provider via people’s cell phones worked better than a service run through each user’s computer. With computer-based services, there are audio configurations to set up, some people assume they should use the video portion, and muting, feedback and sound quality can all become issues.

In my experience, phones just work better. They also allow people to enter and leave the discussion without too much distraction. In short, they just work.

So now I had latency figured out, along with how to deal with audio. Next time: what I put on the stream and how it all worked.

Read Part 1 Here

Read Part 2 Here

Read Part 4 Here