
Depth Perception

By Mel Lambert



With George Lucas' Star Wars: Episode I—The Phantom Menace 3D currently playing at a number of AMC multiplexes and Titanic 3D poised to repeat the success enjoyed by its director James Cameron with Avatar 3D, the audience for stereoscopic productions is expanding dramatically. With an eye on consumer dollars, a growing number of movie studios are looking through their film vaults and reissuing landmark productions in stereoscopic 3D.

Walt Disney Animation Studios' Stereoscopic Supervisor Robert Neuman.
The 2D-to-3D conversion process, in essence, involves generating a complementary right-eye image to accompany the existing visuals and thereby create a sense of added stereoscopic depth. But, as will be readily apparent, building visual planes within the resultant 3D production requires that specific elements be isolated and offset individually. The reason is straightforward: Nearer objects must be offset more than distant objects; to create a believable vanishing point within the stereoscopic production, the artist must shift each object according to the specific plane it occupies in 3D space. There are advantages to be gained from working with digitally generated images, primarily because the separate image components have already been generated as animation or CGI elements that define the various planes within the composition. (Think of a multilayered image in Adobe Photoshop or a similar pixel-based graphics program.)
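To make the mechanics concrete, here is a minimal sketch in Python of that layered approach, assuming each plane is available as a separate RGBA layer; the helper names and offsets are illustrative, not Disney's pipeline:

import numpy as np

# Build one eye's frame by shifting each 2D layer horizontally according
# to its depth plane (nearer layers shift more), then compositing
# back-to-front. Layer data and offsets here are hypothetical.

def shift_layer(layer, offset):
    """Shift an RGBA layer horizontally by `offset` pixels (positive = right)."""
    shifted = np.zeros_like(layer)
    if offset > 0:
        shifted[:, offset:] = layer[:, :-offset]
    elif offset < 0:
        shifted[:, :offset] = layer[:, -offset:]
    else:
        shifted[:] = layer
    return shifted

def composite_over(base, top):
    """Standard 'over' alpha compositing of two float RGBA images."""
    a = top[..., 3:4]
    rgb = top[..., :3] * a + base[..., :3] * (1.0 - a)
    alpha = a + base[..., 3:4] * (1.0 - a)
    return np.concatenate([rgb, alpha], axis=-1)

def build_eye(layers_with_offsets, height, width):
    """Composite (layer, offset) pairs, ordered far-to-near, into one frame."""
    frame = np.zeros((height, width, 4), dtype=np.float32)
    for layer, offset in layers_with_offsets:
        frame = composite_over(frame, shift_layer(layer, offset))
    return frame

The shift itself is the easy part; the gaps it opens along an object's edges (the occluded regions discussed below) still have to be filled.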

"There are two primary parameters we can use to generate a sense of depth within a stereoscopic image," explains Robert Neuman, stereoscopic supervisor at Walt Disney Animation Studios, who worked on The Lion King 3D and Beauty and the Beast 3D, both of which started life as conventional 2D productions. "Convergence controls the position between near and far within the scene, while interocular distance—sometimes referred to as interpupil distance or, more accurately, interaxial separation—defines the apparent depth of the stereoscopic image."

For the 2D-to-3D conversion, animators develop a relief map in which black marks objects located deeper within the image and white marks those that will appear closer. The map makes the depth assignments easy to visualize without wearing 3D glasses.

In other words, if an object is far away from us, the light from that object entering our eyes is parallel and both eyes are looking straight ahead. As that object moves closer to us, our eyes converge; the left eye points more and more to the right while the right points increasingly to the left. The amount of stereo effect is dependent upon the separation of the real or virtual camera lenses, which, in turn, defines the relative parallax differences between the left- and right-eye images. If an object in a stereoscopic image is located in the same space for both eyes, it will appear on the surface of the movie screen and will exhibit zero parallax; objects in front of the screen are in negative parallax, and background elements that appear to be behind the screen are said to be in positive parallax, with corresponding negative or positive offsets in the position of the object for left- and right-eye images. The depth of field or focus also provides valuable clues to the relationship between near and far objects within the frame.
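These sign conventions reduce to a simple relationship. Here is a small numeric sketch, using the common screen-parallax approximation for a converged stereo rig; all of the distances below are hypothetical example values:

# Sketch of the zero/negative/positive parallax convention described above.
# Uses the common screen-parallax approximation for converged stereo cameras:
#   parallax = interaxial * (1 - convergence_distance / object_distance)

def screen_parallax(interaxial, convergence_dist, object_dist):
    """Horizontal left/right offset: 0 at the screen plane,
    negative in front of it, positive behind it."""
    return interaxial * (1.0 - convergence_dist / object_dist)

interaxial = 0.065           # ~65 mm, a typical human interpupillary distance
convergence = 10.0           # screen plane set 10 m from the camera

for z in (5.0, 10.0, 40.0):  # in front of, at, and behind the screen plane
    p = screen_parallax(interaxial, convergence, z)
    where = ("negative (in front)" if p < 0
             else "positive (behind)" if p > 0
             else "zero (on screen)")
    print(f"object at {z:5.1f} m -> parallax {p:+.4f} m, {where}")

Note that as the object distance goes to infinity, the parallax approaches the full interaxial separation, which matches the parallel-eyes case described above.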

During preparation of the 3D releases of Disney Studios' iconic animations, Neuman recalls that the first step was to decide where each character and key object would sit in the depth field: "We mark up a rough version of the scene with numbers that indicate the image offset to increase/decrease parallax between the left and right images, from +5 to +10 pixels [on digital images that might be as large as 2,048 pixels wide]. Those depth maps indicate to our animators the parallax settings for each element within the scene. Using our offset data, we can then develop a relief map that shows in black where objects are located further within the image and in white for those that will appear closer—it's a useful way of visualizing the scene without having to don 3D glasses. The grayscale map shows us, in basic terms, what we need to know about the stereoscopic results."
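A toy version of that grayscale relief map is easy to sketch, assuming that larger pixel offsets mark nearer planes; the element masks and offset values below are invented for illustration:

import numpy as np

# Toy relief map: each element's pixel offset (+5 ... +10, per the range
# quoted above) is normalized so distant objects render dark and near
# objects render light. Masks and offsets here are hypothetical.
H, W = 1080, 2048                       # article cites ~2,048-pixel-wide frames
relief = np.zeros((H, W), dtype=np.float32)

elements = [
    # (row slice, col slice, pixel offset); larger offset = nearer to viewer
    (slice(0, 1080),   slice(0, 2048),    5),   # background plane
    (slice(400, 900),  slice(300, 900),   8),   # mid-ground character
    (slice(600, 1080), slice(1200, 1900), 10),  # foreground object
]

lo = min(off for _, _, off in elements)
hi = max(off for _, _, off in elements)
for rows, cols, offset in elements:
    # Map the offset range [lo, hi] -> gray [0.0 (far/black), 1.0 (near/white)].
    relief[rows, cols] = (offset - lo) / (hi - lo)

# `relief` can now be inspected as an ordinary grayscale image,
# no 3D glasses required.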


In essence, Disney's talented animators are creating a virtual space in front of the movie viewer. Done correctly, the illusion can be totally immersive, with objects placed at different distances within the stereoscopic volume, seeming to extend at least halfway from the screen toward the audience and to recede several hundred feet behind it.

Neuman explains that instead of creating a right-eye image from the 2D original, which, in turn, becomes the left-eye image, Disney Animation creates unique left- and right-eye elements. "That gives us far cleaner results," the stereoscopic supervisor notes. "Instead of displacing the right-eye image from the left by 10 pixels, for example, we can use a +5 pixel offset for the left eye and +5 for the right, which delivers higher clarity and more realistic results, with fewer unwanted artifacts. Also, since we might need to fill the occluded space [which remains in the background when an element is moved left or right], these smaller offsets are less noticeable."
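The symmetric split is easy to express with the hypothetical shift_layer helper from the earlier sketch:

# Instead of shifting one derived eye by the full disparity, shift both
# eyes by half the amount in opposite directions; each eye's occluded
# gap is then half as wide and easier to fill.

def make_stereo_pair(layer, total_disparity):
    half = total_disparity // 2
    left_eye  = shift_layer(layer, +half)   # e.g., +5 pixels
    right_eye = shift_layer(layer, -half)   # e.g., -5 pixels
    return left_eye, right_eye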
 
Motion tearing results when an object moves too quickly across the movie or TV screen, or within the 3D space; when the viewers' eyes can't track the resultant movement, any 3D illusion will be lost.

"We make sure the convergence and focus will track each object as it moves, and maintain a realistic sense of distance," Neuman states.

A conventional movie projector running at 24 fps introduces a delay of roughly 42 ms (1/24 second) between the two stereoscopic images; systems operating at 60 fps or higher produce correspondingly shorter delays. Stationary or slowly moving objects won't be affected, but for objects traveling quickly, audiences might see the left-eye image at one location and the right-eye image at another; the stereoscopic 3D effect will be corrupted.
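A back-of-the-envelope check makes the problem concrete, assuming sequential left/right presentation and a hypothetical on-screen object speed:

# With sequential presentation, a fast-moving object travels between the
# two eye images, adding a spurious parallax error on top of the intended
# one. The on-screen speed below is a hypothetical example.

def spurious_parallax(speed_px_per_s, frame_rate):
    """Extra left/right offset (pixels) caused by the inter-eye delay."""
    inter_eye_delay = 1.0 / frame_rate      # seconds between eye images
    return speed_px_per_s * inter_eye_delay

speed = 960.0                               # object crossing ~960 px/s
for fps in (24, 60, 120):
    err = spurious_parallax(speed, fps)
    print(f"{fps:3d} fps -> {1000/fps:5.1f} ms delay, ~{err:5.1f} px of false parallax")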

It's also fortunate that Disney Animation embraced digital animation technologies two decades ago. Explains Neuman, "That means that we have access to layered images that can be re-processed to create the 3D versions of The Lion King and Beauty and the Beast, but each pixel still needs to be created in the new versions. It's a long, intricate process."

Disney Animation has been using digital-image technologies, as opposed to traditional painted-cel animation, since 1991, having developed its proprietary Computer Animation Production System (CAPS) at its Burbank animation and postproduction HQ.


"We can use our in-house software to streamline a lot of the 2D-to-3D conversion processes," Neuman says, "and are able to see the results of parallax changes in close to real time [to monitor the degree of stereoscopic staging and blocking being re-created in each scene]. But the dimensionization project takes around 10 months, with as many as 30 artists working in parallel. It's an incredibly careful and labor-intensive process."

Katie Fico, Olun Riley and Dale Mayeda worked as sequence supervisors on The Lion King 3D and Beauty and the Beast 3D.

For more information, visit www.disneyanimation.com.

3D Toolkit
NewTek LightWave 11 offers interactive, real-time stereoscopic CGI production
By Mel Lambert

For 3D productions that will be CGI-based, with no live action, NewTek offers an affordable solution for creating 3D images on conventional Windows- or Mac-based workstations. LightWave 11 enables videographers and artists to develop CGI scenes in which objects can be moved freely within a virtual space, with separate viewing points, or "virtual cameras," used to generate the appropriate left- and right-eye outputs. The fully integrated application includes modeling, rendering and animation capabilities. LightWave 3D was used with great success during virtual production for such landmark movies as Avatar and The Adventures of Tintin, as well as for final work and renders on Titanic, Repo Men, V, Fringe, CSI: Crime Scene Investigation, Terra Nova and The Fairly OddParents, plus the in-progress James Bond movie, Skyfall. The software also was used to produce Abiogenesis, a five-minute animated short from New Zealand director Richard Mans.

"The industry is facing unprecedented budgeting and scheduling challenges," says Rob Powers, NewTek's VP of 3D software development. "LightWave includes an entire pipeline—from model to render—designed to support the creative process. The application provides artists with the ability to interact in real time with 3D content and to work seamlessly with the full range of software applications in production pipelines."

A new LightWave feature, Instancing, makes it possible to duplicate a large number of objects in a scene while minimizing memory overhead, with the ability to randomly scale, position, rotate and surface the cloned objects for realistic detail.
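Instancing itself is a general CGI technique: rather than copying mesh data, the renderer stores one shared mesh plus many lightweight transform records. A generic sketch of the idea, not LightWave's internals:

import random

# One shared mesh referenced by many lightweight transforms, so memory
# grows with the instance count, not the mesh size. The Instance fields
# mirror the scale/position/rotation controls described above; the names
# are illustrative, not LightWave's API.

class Instance:
    def __init__(self, mesh, position, rotation_deg, scale):
        self.mesh = mesh            # shared reference, never duplicated
        self.position = position
        self.rotation_deg = rotation_deg
        self.scale = scale

def scatter(mesh, count, area=100.0):
    """Randomly place `count` instances of one mesh across a square area."""
    return [
        Instance(
            mesh,
            position=(random.uniform(-area, area), 0.0,
                      random.uniform(-area, area)),
            rotation_deg=random.uniform(0.0, 360.0),
            scale=random.uniform(0.8, 1.2),
        )
        for _ in range(count)
    ]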

Adds Powers, "Another new function, Flocking, allows artists to animate realistic motion of grouped objects, such as birds, fish, insects, animals, aircraft and spaceships, using a new motion modifier, including crowd avoidance of neighboring objects. And Fracture lets the user pre-fracture objects that are ready for destruction with a new Modeler tool. Finally, Bullet Dynamics can be used to collapse buildings, create explosions or place objects in a natural-looking random pattern."

Model and texture data also can be imported/exported to Pixologic ZBrush software.

The Emmy® Award-winning 3D modeling and animation software enables users to set interocular distance and convergence to match the artist's intention of placing objects either in front of or behind the neutral visual plane and then move these CGI objects freely throughout the space.

"The main advantage of LightWave 11 is that the software doesn't require a separate rendering engine," says Powers. "It's built in. The end result is a much faster turnaround and an accelerated workflow."

NewTek's Virtual Preview Renderer (VPR) offers on-screen, real-time rendering, while Anaglyph Stereoscopic Preview enables real-time interocular, "red-blue" anaglyphic separations.
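The red-blue trick itself is simple: take the red channel from the left-eye render and the green and blue channels from the right. A generic sketch of that composite, not LightWave's internals:

import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Combine two HxWx3 images into one red-cyan anaglyph frame."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # red channel from the left eye
    return out                       # green and blue stay from the right eye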

Powers identifies a major paradigm shift in the way that CGI is used in current 3D productions. "Previously, you would use a wire frame model to assemble the elements, paint the scene, add textures, light the image and then add a virtual stereoscopic camera," he says. "Now, because of the real-time DSP power of programs like LightWave, stereographers can fully integrate with the process and see the result rendered instantly to create a fully operable working environment. Our latest modeling algorithms make the end results as photorealistic as possible, which helps create an enhanced depth of field."

LightWave 11 has a suggested retail price of $1,495; upgrades from earlier versions cost $695.

For more information, visit the NewTek website at newtek.com.
