
The Cameras Of Avatar

Posted on Apr 21, 2010 by Alex Fice

James Cameron wields the virtual camera that gave him the ability to actually see an actor’s CG character – and the CG environments – in camera, as he worked with actors on the stage. Photo credit: Mark Fellman


“Avatar is the most challenging film I’ve ever made,” says Avatar writer-director James Cameron. Avatar is an adventure film shot in stereoscopic 3D with photo-realistic 10-foot-tall CG-generated characters as the stars. Two new types of cameras were developed to make that possible: the 3D Fusion camera and the Simul-Cam virtual camera. Let’s take a look at both of those in detail, with Fusion camera co-developer and DP Vince Pace and with Avatar virtual production supervisor and Simul-Cam developer Glenn Derry.

The 3D Fusion Camera

“The cameras that we used were Sony F950,” says Fusion camera co-developer and DP Vince Pace. “They have a special front end, called a J-cam, that was developed with Sony starting about ten years ago. That was the foundation of the HD camera we used. Each camera had two Sony F950 processors and two J-cam optical blocks. The reason we did that was to keep it smaller on the front end, to keep the package as small as possible.”

Instead of using the typical Panavision camera, James Cameron and partner Vince Pace of Pace Technologies created the 3D Fusion stereo camera using two Sony F950 cameras. It took seven years of development to build the Fusion system, the world’s most advanced stereoscopic camera system. Before Avatar, Cameron used the new digital 3D camera to bring back the experience of deep-ocean exploration of the interior of the Titanic in the 3D IMAX film Ghosts of the Abyss, followed by Aliens of the Deep.

“When I was on the deep dive crew with him for Titanic, the first thing Jim wants to see is if this is something he can incorporate into the film and do correctly even though it’s 2 ½ miles under the ocean,” says Pace. “Jim is a forward thinker. There was a lot of development done on the film early on, before we had a green light.” For the live-action portion of Avatar, Cameron and Pace worked together for about two years, shooting tests, establishing some of the look of the jungle, evaluating the look of the 3D.

“When the New Zealand portion came along, I was working on Hannah Montana,” says Pace. Mauro Fiore ASC was picked as the live-action DP for the principal photography in New Zealand. When Cameron finished photography in New Zealand, he returned to L.A. A number of shots were still missing on the editorial side and had to be picked up by Cameron and Pace and finished in L.A. “What they were capturing in performance capture and what I was creating in the live action sequences needed to cohesively exist in one movie,” says DP Mauro Fiore. Most of Avatar’s live-action scenes were shot in Wellington, New Zealand, where enormous sets were erected.

“What I enjoy most is trying to execute Jim’s vision through a digital camera,” says Pace. “The qualifications for 3D are two cameras and duct tape. That’s the entry point. It’s much more complex than that, but we’ve got to make it transparent to production. The amount of stereo we induce is constant throughout shooting. The stereo in a scene is how far apart the cameras are and where they’re converging on a subject, the parallax. What’s important for stereo is having a consistent amount of stereo throughout the cut.”
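Pace’s point about a consistent amount of stereo comes down to a simple geometric relationship between interocular distance, convergence distance and subject distance. The sketch below is a rough, back-of-the-envelope illustration of that textbook relationship, assuming parallel cameras converged by a horizontal image shift; the numbers and function names are illustrative and are not the Fusion rig’s actual control math.

```python
# Illustrative sketch of the textbook stereo relationship Pace describes:
# parallax depends on interaxial separation and where the cameras converge.
# Assumes parallel cameras converged by horizontal image shift; the Fusion
# rig's real control math is more involved.

def sensor_disparity_mm(interaxial_mm, focal_mm, convergence_m, subject_m):
    """On-sensor disparity for a point at subject_m when converged at convergence_m."""
    return focal_mm * interaxial_mm * (1.0 / (convergence_m * 1000.0)
                                       - 1.0 / (subject_m * 1000.0))

def screen_parallax_mm(disparity_mm, sensor_width_mm, screen_width_mm):
    """Scale on-sensor disparity up to parallax on the projection screen."""
    return disparity_mm * (screen_width_mm / sensor_width_mm)

if __name__ == "__main__":
    d = sensor_disparity_mm(interaxial_mm=12.7,   # half an inch, as on the Fusion rig
                            focal_mm=21.0,        # illustrative focal length
                            convergence_m=3.0,    # converged on the actor
                            subject_m=10.0)       # a background element
    p = screen_parallax_mm(d, sensor_width_mm=9.6, screen_width_mm=12000.0)
    print(f"sensor disparity: {d:.3f} mm, screen parallax: {p:.0f} mm")
```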

Traditionally, 3D cameras employ a master-slave relationship: the operator tells one camera what to do and the other camera blindly follows the same command. Pace found early on that this approach produced far too many errors, because real lenses are never a perfect match. Commanding the lenses to 5mm, for example, didn’t send them to exactly 5mm; there were small deviations. It all had to be dialed in to make the stereo match, to have the two lenses performing exactly at 5mm.
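The fix Pace describes amounts to calibrating each physical lens against what it is commanded to do. Here is a minimal, hypothetical sketch of that idea: a per-lens correction table built from measured samples, so that asking both eyes for 5mm actually lands both lenses at 5mm. The class, table values and interpolation are assumptions for illustration, not Pace Technologies’ software.

```python
# Hypothetical sketch of per-lens calibration: commanding "5mm" does not put
# every lens at exactly 5mm, so each lens gets a measured correction table and
# the rig drives to the corrected command. Values are made up for illustration.

from bisect import bisect_left

class LensCalibration:
    def __init__(self, commanded_mm, measured_mm):
        # Paired samples: what was commanded vs where the lens actually landed.
        self.commanded = commanded_mm
        self.measured = measured_mm

    def corrected_command(self, target_mm):
        """Return the command that makes this lens actually sit at target_mm."""
        # Linear interpolation of the inverse mapping measured -> commanded.
        i = bisect_left(self.measured, target_mm)
        i = max(1, min(i, len(self.measured) - 1))
        m0, m1 = self.measured[i - 1], self.measured[i]
        c0, c1 = self.commanded[i - 1], self.commanded[i]
        t = (target_mm - m0) / (m1 - m0)
        return c0 + t * (c1 - c0)

left_eye = LensCalibration([4.0, 5.0, 6.0], [4.05, 5.12, 6.08])
right_eye = LensCalibration([4.0, 5.0, 6.0], [3.97, 4.94, 5.99])

# Both eyes are driven so that they *land* at 5mm, not merely commanded to 5mm.
print(left_eye.corrected_command(5.0), right_eye.corrected_command(5.0))
```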

The Fusion camera is also used for match-moving, where motion-capture CG characters have to be matched to the movement of an established shot so they can be composited in. Micro-motors on the Fusion rig adjust the separation and convergence angle of the stereo Sony F950 cameras. The standard three lens functions of zoom, focus and iris, plus interocular distance and convergence, are all under software machine control. Beam splitters make an interocular of half an inch possible even though the cameras are four inches wide. The cameras are mounted in the Fusion rig at a 90-degree angle.
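As a way of picturing that control architecture, the short sketch below groups the five axes the article lists (zoom, focus, iris, interocular, convergence) into a single state that is pushed to both eyes at once. The class and motor interfaces are hypothetical; the actual Fusion control software is proprietary.

```python
# Minimal sketch of the five rig axes under one software command, so both eyes
# stay in lockstep. Class and motor names are hypothetical, not the actual
# Fusion control software.

from dataclasses import dataclass

@dataclass
class FusionRigState:
    zoom_mm: float          # focal length, matched on both eyes
    focus_m: float          # focus distance, matched on both eyes
    iris_t_stop: float      # exposure, matched on both eyes
    interocular_mm: float   # separation between the two optical axes
    convergence_m: float    # distance at which the optical axes cross

def apply_state(state: FusionRigState, left_motors, right_motors, rig_motors):
    """Push one state to both eyes and the rig motors in a single update."""
    for motors in (left_motors, right_motors):
        motors.set_zoom(state.zoom_mm)
        motors.set_focus(state.focus_m)
        motors.set_iris(state.iris_t_stop)
    rig_motors.set_interocular(state.interocular_mm)
    rig_motors.set_convergence(state.convergence_m)
```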

“One of the things you’ll notice as you analyze Avatar is it’s a very organic, filmmaker’s movie,” says Pace. “When you travel from live-action, to live-action mixed with CG, to totally CG, the style of shooting is exactly the same. It’s put the camera here…dolly…I’m going to crane down. Jim didn’t care if he was working in a hybrid live-action shot of Pandora or totally CG with Weta. It took it out of the key frame animator’s world. It got executed through the movement of the filmmaker’s eyes, and then he handed it off to animation.”

“Avatar is the largest handmade movie ever made,” says Pace. “Jim manipulated every single frame in a very tactile and immediate way.”

Face Cameras and the Virtual Camera, CG Directing Goes Real Time

On the Volume, Avatar’s mo-cap stage, the production didn’t take the traditional approach of placing reflective markers on the actors’ faces to capture their expressions. Instead, the actors wore special headgear, like a football helmet with a tiny camera attached. The rig recorded facial expression and muscle movement to a degree never before possible.

Most importantly, the camera recorded eye movement, something prior systems lacked and a shortcoming that had made CG characters appear lifeless.

The Virtual Camera allowed James Cameron to direct scenes within his computer-generated world. Through this virtual camera, the director would not see actress Zoë Saldana, but her 10-foot-tall, blue-skinned CG character Neytiri in a CG environment. The in-camera CG imagery had the resolution of a video game, so after Cameron completed filming and editing a sequence, the Weta visual effects company would work on it for months to create the final, high-resolution, photo-realistic images.

The Virtual Camera resembles a videogame controller with a video monitor attached. It’s not really a camera at all because it doesn’t have a lens. It’s a camera monitor with no camera. Because it’s a virtual camera without physical limits, Cameron could set the device to create a five-to-one scale for vertical moves. Moving the camera three feet in this mode translates into a 15-foot crane move. “Long after the actors had gone home, I would still be in the Volume with the virtual camera, shooting coverage on the scene,” says Cameron. “Just by playing back the take, I can get the scene from different angles. We can re-light it. We can do all sorts of things.”
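The scaling trick is easy to see in code. Below is a minimal, illustrative sketch, assuming the tracked position arrives as feet on three axes: the virtual camera’s motion is simply multiplied by a per-axis factor before it drives the CG camera, so a 5:1 vertical scale turns a three-foot lift into a 15-foot crane move.

```python
# Toy illustration of the scale trick described above: the virtual camera has
# no physical limits, so its tracked motion can be multiplied before it is
# applied to the CG camera. Names and axis layout are illustrative only.

def scaled_cg_position(tracked_ft, scale_xyz=(1.0, 5.0, 1.0)):
    """Multiply the operator's tracked translation by per-axis scale factors."""
    return tuple(p * s for p, s in zip(tracked_ft, scale_xyz))

# Operator raises the device 3 ft; the CG camera cranes up 15 ft.
print(scaled_cg_position((0.0, 3.0, 0.0)))   # -> (0.0, 15.0, 0.0)
```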

However, the Virtual Camera was only the beginning.

The Simul-Cam, Match-move Directing Goes Real Time 

Glenn Derry is responsible for the technology at James Cameron’s production company Lightstorm. He oversaw the Avatar virtual production and headed the mo-cap and Simul-Cam integration.

Derry talked with Cameron early in 2005, while they were working with the Virtual Camera. Since the Virtual Camera is just an object in space, Cameron asked, couldn’t they track a live-action camera the same way, display a real-time composite, and see the Avatars in the shot? Derry answered that it could be done, but he’d have to think about how to do it with the systems they were using.

If you can already track CG characters against a CG background, it doesn’t sound like a big leap to track a real 3D camera into the same shot. It isn’t easy.

The way a motion capture system works is that it uses reflective markers and machine-vision cameras to track objects in 3D space. That works fine on a dark stage. Try to do it on a stage with intense live-action lighting, however, or shoot outside, and it doesn’t work. Existing mo-cap systems struggled with “thresholding”: a shiny object in the scene would spook the system, because it couldn’t tell the difference between a reflective marker and a shiny reflection or a light. The big weakness of motion capture systems was that they required a very controlled lighting setup.

Derry came up with a high-intensity phased LED system. “My company Technoprops built a series of high-intensity LED markers that sync into phase with the motion capture cameras,” says Derry. “I developed a whole syncing system to sync the live action photographic camera, the motion capture cameras, and these active LED markers on the Pace rigs. We’re doing live 3D comps.” What Derry created was a super-high-intensity LED lighting system that fires the LEDs in sync with the motion capture cameras, which run 20-microsecond exposures. The motion capture cameras would see the markers, but not the bright live-action stage lighting or even the sun.
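The principle behind the phased LEDs can be sketched in a few lines: the markers are strobed only while the mo-cap cameras’ very short exposures are open, so they dominate each captured frame while continuous stage lighting, or sunlight, contributes almost nothing. The frame rate and timing layout below are assumptions for illustration; the real system was custom sync hardware built by Technoprops.

```python
# Hedged sketch of the timing idea: active LED markers are lit only during the
# mo-cap cameras' very short exposures, so the markers are bright in every
# capture frame while continuous lighting contributes almost nothing.

FRAME_RATE_HZ = 120            # assumed mo-cap capture rate, for illustration
EXPOSURE_US = 20               # 20-microsecond exposure window, per the article
FRAME_PERIOD_US = 1_000_000 / FRAME_RATE_HZ

def led_schedule(num_frames):
    """Yield (turn_on_us, turn_off_us) pulses aligned to each exposure window."""
    for frame in range(num_frames):
        start = frame * FRAME_PERIOD_US          # exposure assumed to open at frame start
        yield (start, start + EXPOSURE_US)       # LED is lit only for those 20 us

# The LEDs are on for a tiny fraction of each frame, which is why they can be
# driven at very high intensity.
print(f"duty cycle: {EXPOSURE_US / FRAME_PERIOD_US:.4%}")
for pulse in led_schedule(3):
    print(pulse)
```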

“Jim could direct the actors in real time as CG characters,” says Derry. “The actors would wear motion-capture suits. We would track their position in 3D space and also this other object, which was a virtual camera that Jim held up in space. The virtual camera acted like a regular camera, but instead of seeing through a lens, he was seeing a CG representation of where that virtual camera was in space. He could see what these characters are doing.”

Six months out from principal photography, at the end of 2007, Derry started an intense R&D effort on many fronts with four different companies. It would be a race to complete the technology before shooting began in New Zealand.

“The Simul-Cam is a mish-mash of different technologies,” says Derry. “My company Technoprops developed a piece of hardware that allowed us to track a monitor in 3D space and show Jim’s virtual environments in real time through our motion capture system out to a CG viewing tool.”

“Two days before we’re shipping to New Zealand, I still haven’t tested Simul-Cam,” says Derry. “I know that all the individual parts work, but I’ve never had a chance to test them together. I go down to our set and set up a green screen to do Simul-Cam. We plug everything in. I hear one of the PAs say, ‘Jim’s coming down right now.’ I’m probably three hours from making it work and everything has to get in the truck to be shipped.” Derry told the crew to pull the plug; they weren’t doing this while Jim was there. Everything was thrown in a container and went to New Zealand. Derry didn’t arrive there for six weeks.

“One of the first Simul-Cam scenes we shot was one of the hardest scenes,” says Derry. “We fired it up. It worked out of the gate. Jim would break it every single day. Every day Jim would add characters to the system. We’d have to figure out how to reduce the poly counts on the characters so we could get it in real time. We were writing software every day.”

Derry’s company Technoprops specializes in this type of hardware and one-off syncing technology development. A system called Overdrive, made by Concept Overdrive, was coupled with the 9-axis motion control software of Vince Pace’s camera systems and made to talk to MotionBuilder software for the real-time CG representation. The mo-cap company was Giant. A set of C++ and Python plug-ins was written, along with a device driver, so Overdrive could talk to the Pace camera. Technoprops rolled its own FPGAs for some of the hardware and did a lot of low-level circuit design for the sync engineering.

“We created some hardware and wrote some software that allowed us to pull Vince’s Cam-Net data, which gave us focus, iris, zoom, interocular and convergence in real time,” says Derry. “We used that information to come up with a camera solve, knowing where it’s at in 3D space. As they would rack-focus, the convergence would move, and we’d see convergence on our 3D camera do the same thing.” While this was happening, Derry was collecting the extensive metadata captured on set in Gia, a custom visual asset management system they created.
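Conceptually, the data flow Derry describes looks something like the sketch below: lens metadata streams off the live-action rig in real time, is applied to the CG stereo camera so a rack focus on set moves the virtual convergence too, and is logged per shot. The packet format, class names and interfaces here are assumptions, not the actual Cam-Net protocol or Gia schema.

```python
# Hedged sketch of the real-time lens-metadata flow: each packet of focus,
# iris, zoom, interocular and convergence updates the CG stereo camera and is
# logged for the shot. All names and formats are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class LensPacket:
    timecode: str
    focus_m: float
    iris_t_stop: float
    zoom_mm: float
    interocular_mm: float
    convergence_m: float

class VirtualStereoCamera:
    """Stand-in for the real-time CG camera in the viewing tool."""
    def apply(self, pkt: LensPacket):
        # In the real pipeline this would update the CG camera rig in the
        # viewing tool; here we simply record the latest values.
        self.last = pkt

def run_take(packets, cg_camera):
    log = []                         # per-shot metadata, as collected on set
    for pkt in packets:
        cg_camera.apply(pkt)         # CG convergence follows the live rack focus
        log.append(pkt)
    return log
```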

“Every single day on Avatar was like a prototype,” says Derry. “We were going out with unproven technology. I was on it for five years, from February 2005 on. Jim had worked with Vince on the camera systems prior to Avatar. A lot of what we did on Avatar was grabbing off-the-shelf and making it work. We had a full Avid edit system on set and 2k projectors everywhere we went. We created our Avid Media with the color correction applied with the 3D comps all lined up as we were shooting. Every day there were issues. We were writing code as we were shooting. We were constantly improving. By the end we got down to a system that just worked.”

Avatar used Linux, Mac and Windows computers. The real-time CG ran on Windows machines. The Giant mo-cap systems and Overdrive ran on Linux. The producers liked Macs. Cameron’s company Lightstorm is primarily a Linux shop.

What’s Next?

Vince Pace says he now has 50 to 60 Fusion camera systems. When he shot the Hannah Montana movie he had 12 different cameras. Pace is working with ESPN on sports production with a 500mm lens on the camera. He’s doing handheld, Steadicam, Technocrane, underwater, all the usual camera moves. He also has cameras with large sensors. He says that for feature work it’s about 80% beam-splitter and 20% side-by-side rigs; in sports it’s the opposite. He now has two mobile units out shooting.

Pace just wrapped on the 3D films Yogi Bear and Sanctum last week. He’s doing the Masters golf coverage, which is going out live. Sports 3D is starting to arrive on a subscription basis for home TVs. Samsung and Panasonic have just launched their lines of home 3D HDTVs, which are available at Best Buy. DirecTV is about to launch a 3D channel. It’s exploded recently. Comcast is starting to set up a 3D HDTV subscription service.

Glenn Derry is working on a movie called Real Steel for DreamWorks. Unlike the prototype experience on Avatar, here the technology is plug-and-play: it just works. They’re generating their dailies in real time.

“The issue is not the tech,” says Derry. “It’s the interface between the tech and the people who use it. The actors can see themselves on a screen, but how do you give the actors proper eye lines? How do the actors know where to look without having a guy out there with a tennis ball on a stick moving it around?” Derry is developing technology to generate eye lines in real time, marks that the actors can see but the cameras cannot. He also sees the technology crossing over into gaming and for Delta Force-style military training.
