Remote Editing, Part 4

In the first installment of this series, I wrote about getting the right connections for remote editing. In the second, I covered encoding so I could stream my session live. Last time, I examined the whole concept of latency: how delayed the stream would be by the time it arrived at a client’s location.

Now the questions are “Did it work?” and “What did I stream?”

Even though I had run various tests ahead of time, I ran one final check: what I call an “internal” test with some remote colleagues, just to ensure I hadn’t missed anything. At this point, I had two different methods of streaming. Plan A had very low latency, and Plan B had “only” low latency.

I had two plans not for redundancy but because of complexity. The very-low-latency option required installing an application. That’s not usually a problem, but some clients have very strict policies about installing applications on company equipment. With IT support under pressure from all the remote workers, I doubted we could get the application installed and running properly. The application also wasn’t the most user-friendly; even some of my tech-savvy colleagues ran into minor issues.

After my “internal” tests, I opted for Plan B. The stream ran in a browser with nothing to install. The trade-off was a little more delay, but as I mentioned in my discussion of latency, I could make it work.

What did I stream? Fortunately, I was able to make use of a Blackmagic Design ATEM Production Studio 4K switcher. While I wasn’t streaming 4K, the ATEM let me bring in the HD-SDI output of my suite and also connect an HDMI output from my workstation that mirrored my desktop. Via a software control panel on a laptop, I could switch between full-screen playback and the desktop to show the timeline and bins. I also set up a small monitor on the ATEM’s output so I could see exactly what was being sent to the encoder.
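For the technically curious, that same cut can also be driven from code instead of the control panel. Below is a minimal sketch using the open-source PyATEMMax Python library; it isn’t what I used in the session, and the IP address and input numbers are placeholders for whatever your own setup assigns.

```python
# Minimal sketch: switching ATEM program inputs from Python with the
# open-source PyATEMMax library. Not what I used in the session; the
# IP address and input numbers below are placeholder assumptions.
import PyATEMMax

switcher = PyATEMMax.ATEMMax()
switcher.connect("192.168.1.240")   # placeholder: your ATEM's IP address
switcher.waitForConnection()

# Assumed wiring for this example:
#   input 1 = HD-SDI program feed from the edit suite (full-screen playback)
#   input 2 = HDMI mirror of the workstation desktop (timeline and bins)
switcher.setProgramInputVideoSource(0, 2)  # cut M/E 1 to the desktop view

# ... edit while clients watch the desktop, then cut to playback ...
switcher.setProgramInputVideoSource(0, 1)  # cut M/E 1 to full-screen playback

switcher.disconnect()
```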

I felt it was important to show the desktop so that clients would be able to see what was going on and what I was doing. It was all about keeping them engaged. That isn’t easy to do if they’re only looking at a freeze-frame. (It also kept them from over-analyzing that still frame.)

Showing the desktop also allowed me to concentrate on the editing at hand instead of keeping up a running commentary on what I was doing. They could see the progress. And when I switched from the desktop view to full-frame, it signaled that they needed to pay closer attention, as we were about to watch playback.

To make my life easier, when I had everyone test their connections, I used an AJA Ki Pro recorder to play back a loop of footage so that I didn’t tie up my workstation. That also meant clients didn’t see any part of the project before it was ready.
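As a side note, Ki Pro recorders also expose a simple HTTP “config” interface, so the loop could in principle be started remotely. I ran playback from the deck itself, so treat the sketch below as illustrative only: the IP address is a placeholder, and the parameter name and value follow AJA’s REST-style convention but should be verified against AJA’s documentation for your unit and firmware.

```python
# Hedged sketch: sending a transport command to an AJA Ki Pro over its
# HTTP interface. The IP is a placeholder, and the parameter id and
# value mapping (1 assumed to mean Play) are assumptions to check
# against AJA's REST documentation for your firmware.
import requests

KIPRO = "http://192.168.1.50"  # placeholder: your Ki Pro's address

def transport(value: int) -> None:
    """Send a transport command via the Ki Pro's web config interface."""
    requests.get(
        f"{KIPRO}/config",
        params={
            "action": "set",
            "paramid": "eParamID_TransportCommand",
            "value": value,
        },
        timeout=5,
    )

transport(1)  # assumed: 1 = Play, with loop playback enabled on the deck
```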

Also, to make my life easier, I didn’t monitor the stream using the same network I streamed from. First, I wanted to save bandwidth. Second, I wanted to check the stream outside the comfort of my reliable connection, so I used a phone (and cellular data) to watch it.

So how did it all work? It worked well. All but one of the clients could see the stream and make comments. The one who couldn’t connect to the stream had neighborhood internet issues, a problem unrelated to what we were doing.

Overall, the experience was similar to a regular edit session, though I can’t say it was exactly the same. The intangibles of being in person, like picking up on visual cues and avoiding the start-and-stop rhythm of conference-call communication, make it different. But in the end, the session succeeded.

Read Part 1 Here

Read Part 2 Here

Read Part 3 Here
