If you attended last year’s Sundance Film Festival, you might have had the chance to glimpse the future of editing. The radical new technology on display occupied a small booth, presented by Oblong Industries, a venture-backed developer of a new kind of human/machine interface. The glimpse itself came courtesy of a conceptual editing interface known as TAMPER, which runs in what Oblong calls a spatial operating environment, or SOE. What exactly is an SOE? Some of its core ideas will already be familiar from the film Minority Report, whose characters performed forensic analysis on massive, gesturally driven displays; similar displays have since shown up in music videos and commercials. The resemblance of TAMPER to these fictional depictions is no coincidence: John Underkoffler, chief scientist at Oblong Industries, served as science advisor on Minority Report and based the design of those scenes directly on his earlier work at the MIT Media Lab.
What exactly is TAMPER? Imagine using your hands, instead of a mouse, to move and manipulate video in 3D space in real time. Sounds like science fiction? TAMPER is built on advanced science, but the impressive exhibition at Sundance proved the technology isn’t fiction. In his demonstration, Underkoffler uses nothing but his hands to engage transport controls, select clips, rearrange them and edit them. He even grabs individual video elements from precomposited scenes from several films, ranging from a 1967 Jacques Tati comedy to a Sergio Leone western, and rearranges them on a table in front of him into a real-time video collage in which Leone’s cowboy shares the mashup with Tati’s French vacationer as a car chase from 1971’s Vanishing Point tears through the background.
TAMPER runs on Oblong’s own proprietary g-speak platform. The SOE is one of the biggest steps forward in the human/computer interface since the keyboard and mouse became the standard way for humans to control computers with the first Macintosh in 1984. The g-speak platform is built around free-hand, three-space gestural input: applications are controlled by hand poses, movement and pointing. Finger and hand motion are tracked to 0.1mm at a 100 Hz scan rate, and pointing is pixel-accurate. Gestural input is measurably more efficient than a mouse at complex navigation, sorting and selection tasks, and the user interacts with the display by pointing rather than by touch (or mouse).
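The g-speak platform itself is proprietary, but the basic geometry behind pixel-accurate pointing is easy to illustrate: the tracker reports a hand position and a pointing direction in millimeters, and the system intersects that ray with the display plane to find a pixel. The sketch below is a hypothetical illustration only; the function name, screen dimensions and resolution are assumptions, not Oblong’s API.

```python
def ray_to_pixel(origin, direction, screen_z=0.0, screen_w_mm=1000.0,
                 screen_h_mm=562.5, res_x=1920, res_y=1080):
    """Hypothetical sketch: intersect a pointing ray with a flat display
    lying in the plane z = screen_z (all units mm, screen-centered axes),
    then convert the hit point to pixel coordinates."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if abs(dz) < 1e-9:
        return None  # ray is parallel to the display plane
    t = (screen_z - oz) / dz
    if t < 0:
        return None  # display is behind the pointing hand
    hit_x = ox + t * dx  # hit point in mm, origin at screen center
    hit_y = oy + t * dy
    # Map mm to pixels: x grows rightward, pixel y grows downward.
    px = int(round((hit_x / screen_w_mm + 0.5) * (res_x - 1)))
    py = int(round((0.5 - hit_y / screen_h_mm) * (res_y - 1)))
    if 0 <= px < res_x and 0 <= py < res_y:
        return (px, py)
    return None  # pointing off-screen
```

With 0.1mm tracking on a roughly meter-wide display, each tracker step is finer than a pixel, which is what makes this kind of pointing pixel-accurate in the first place.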
When asked about the impetus for designing TAMPER, Underkoffler replies, “The premise is that it’s time for something new. It’s necessary because the speed and capability of the machines is now far exceeding the speed at which humans can ‘speak’ in conversation with the machine because our ability to communicate with computers is limited by the mouse and keyboard.”