The world in shot
Posted on Jun 11, 2021
Aerial companies are helping productions capture 360° images – and it’s all thanks to mind-blowing camera arrays
Words Phil Rhodes / pictures various
If we’re going to the lengths of having a camera on a stabilised mount, then putting that stabilised mount on a helicopter, it’s probably best to take full advantage of the situation. So, why not more cameras? Why not six cameras?
The idea of putting an array of cameras on a helicopter goes back decades. And because they’re the preserve of upscale productions, those involved tend to have an impressive credit history. Jeremy Braben, founder of Helicopter Film Services, has credits on the Fantastic Beasts and Jurassic World series, and both Wonder Woman productions, to name just a few.
“The first array I recall was back in the early nineties,” he says. “It was three fixed Arriflex cameras on a Tyler nose mount. That could be considered an array. They wanted to acquire three plates in the same axis at the same time – and that’s the brief for shooting multiple-camera arrays.”
Stereo 3D naturally created a need for more than one camera. Braben says: “Quite often, there were just two cameras, then visual effects got on to the idea of multiple cameras. With that, they can use the six-camera stitch, which is often used on aerials to give a huge frame to work with.” Especially given the expense of aerial camerawork, productions are generally keen to get as much out of them as possible.
“The first arrays were built around Red cameras, but now we have developed an array for Alexa Mini LFs. It’s generally shooting Open Gate Raw, which means an awful lot of data!” Braben explains.
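Just how much data? A back-of-the-envelope sketch, assuming the Mini LF’s published open-gate resolution of 4448×3096 and uncompressed 12-bit raw at 24fps (real ARRIRAW file sizes differ slightly):

```python
# Rough data-rate estimate for a six-camera open-gate raw array.
# Assumes uncompressed 12-bit raw at 24fps; actual ARRIRAW framing differs.
width, height = 4448, 3096   # Alexa Mini LF open-gate resolution
bit_depth, fps, cameras = 12, 24, 6

bytes_per_frame = width * height * bit_depth / 8   # ~20.7 MB per frame
per_camera = bytes_per_frame * fps                 # ~496 MB/s per camera
array_total = per_camera * cameras                 # ~3 GB/s for the array

print(f"one camera : {per_camera / 1e6:,.0f} MB/s")
print(f"six cameras: {array_total / 1e9:.1f} GB/s, "
      f"~{array_total * 3600 / 1e12:.1f} TB per hour")
```

At roughly three gigabytes every second, media management becomes a department of its own.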
If an aircraft with an array is good, consider the multi-aircraft excellence from XM2 Pursuit on productions such as Star Wars: The Rise of Skywalker. The organisation is a collaboration between Pursuit Aviation and Melbourne-based drone specialists XM2 – CEO Stephen Oh has credits as drone camera operator on Mission: Impossible 7, Fast & Furious 9 and No Time to Die, as well as the television project Westworld. As Oh says, his job involves both technical and creative work. “We make our own drones and our own platforms. On The Rise of Skywalker, we had three drone teams and 21 different drones. We were shooting main unit, second unit and VFX unit, and the VFX unit was flying an Alexa 65.”
For that production, one of XM2 Pursuit’s drones flew a total of 749km of canyon photography. While Oh says there is inevitably some consideration over the size and weight of camera packages, various configurations are possible, with three- or six-camera set-ups common. “Now, we’re doing it with Alexa Mini LFs. On Westworld Season 3, there was a requirement from production to do aerials in downtown LA, and we developed the system we call ‘Hammerhead’, which is a three-camera array.”
John Marzano is an aerial director of photography with a similarly glittering credit history, with work on The Northman, and television including Downton Abbey and Bridgerton. Those huge frames, Marzano confirms, have grown with the available camera equipment. “I have two different six-camera arrays,” he explains. “One for Red Weapon, one for Alexa Mini, and I’ve just modified the Alexa Mini [array] to take Mini LFs. Then I have a three-camera Alexa Mini array and three-camera Red Weapon array.”
Operating an array requires monitoring it, which is slightly complicated. Some configurations simply place all six camera images in two rows on a monitor, but more sophisticated approaches are possible. Braben describes a system developed by array partners Brownian Motion. “We have a stitch box allowing us to look at one image, which is a composite of all six views. If you were looking at six individual monitors, it could have undesirable effects in a helicopter that’s moving around. We tend to look at one master, then use that for exposure and master framing.”
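The stitch box itself is proprietary hardware, but the principle – previewing one composite built from several overlapping views – can be sketched with OpenCV’s high-level stitcher (purely illustrative; the frame grabs and file names here are hypothetical):

```python
# Illustrative only: build one preview composite from six overlapping views.
# OpenCV's Stitcher handles feature matching, warping and blending internally.
import cv2

# Hypothetical frame grabs from the six cameras, left to right.
images = [cv2.imread(f"cam_{i}.png") for i in range(6)]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, composite = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("master_preview.png", composite)  # the single 'master' view
else:
    print(f"stitch failed with status {status}")
```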
Choosing that frame has, perhaps, not always been as easy as it could be, because the controls for a helicopter camera mount have not generally been set up as a camera operator might expect.
Oh explains: “Traditionally, these arrays are controlled by joystick. Your DOP can’t sit in the helicopter and operate the head, because they’re used to wheels. But we’re going to announce something in the coming weeks to get the head operating via wheels. Now, a DOP who’s used to operating on Russian Arms or Technocranes – with the traditional wheels – can sit inside our helicopter and operate the gimbal.”
Even as the user interface becomes more familiar, though, one traditional responsibility of a cinematographer is often left on the ground, because few productions can realistically light the sort of territory a helicopter shot encompasses. As such, choosing a time, a place and a frame becomes all-important. “Generally, VFX prefers to shoot in flatter light and that’s generally what we do,” Marzano explains. “Occasionally, we’ll shoot in hard light, mostly for a specific effect – particularly in back projection, where they’re using the images to put on an LED screen for actors to play in front of.”
With camera and mount in hand, the choice of lenses is often driven by the need to give VFX the cleanest possible data. “There’s an overlap between images,” Braben points out. “It can be tweaked, but it’s generally a 10% overlap. It changes between sensor sizes, but we usually use the Zeiss CP.3 21mm extended data lens. We can use 24mm, but you don’t want to go any wider than that, because you start to get issues with distortion.”
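The geometry behind that limit is simple: a rectilinear lens covers a horizontal field of view of 2·arctan(w/2f) for sensor width w and focal length f. A quick sketch, assuming the published 36.70mm width of the large-format Alexa sensor:

```python
# Horizontal field of view of a rectilinear lens: 2 * atan(w / (2f)).
import math

SENSOR_WIDTH_MM = 36.70   # published Alexa LF / Mini LF sensor width

def hfov_degrees(focal_mm: float) -> float:
    return math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_mm)))

for focal in (21, 24):
    print(f"{focal}mm lens: {hfov_degrees(focal):.1f} degrees horizontal")
# 21mm covers roughly 82 degrees per camera; going wider increases edge
# distortion, which works against a clean stitch.
```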
Actually attaching cameras to helicopters is a process complicated by both engineering and regulation. What’s more, big camera arrays are beginning to test the limits of the technology.
Braben describes the concerns of size, space and weight, as well as the performance of the stabilised camera system. “The Shotover – or whatever it may be – has to be able to stabilise that payload. Going from a single camera and lens (albeit a fairly weighty zoom) to six cameras and six lenses is getting to the limits of what these systems were designed to stabilise.”
However, helicopters are not the only platform for arrays. Oh was head drone operator on Fast & Furious 9, and operated all of the array work. This often involved mounting the array to a car, creating background images that were later used to place the principal cast at the heart of action scenes. Often, a scene was shot several times, with the array car driven in the position of one of the picture cars.
“The main unit would shoot, and while that’s resetting (or that shot’s over), we jumped on to the arrays and did all their runs. So, if there were five cars racing down the road, weaving and hitting one another, the camera vehicle would drive each car’s position. The stunt drivers are very accurate and match the positions very well.”
One thing that everyone involved seems to agree on is that the appetite for production work has, since the doldrums of 2020, become insatiable. “From October, it just went stratospheric and we’ve got so much,” Marzano enthuses. “I’ve only got five days off between now and the end of June!”
Single-camera solution in flight
When filming fixed-wing aircraft in flight, helicopters can struggle to keep up, making a fixed-wing camera platform necessary. However, the increased speed makes bulky arrays impractical, and XM2 Pursuit is currently testing a single camera called the ‘Nine by Seven’.
Stephen Oh says: “It’s 9.3K by 7K, and we can put Signature Primes on there. Up until now, you couldn’t put an array on a jet, but we believe it will beat a six-camera array on a helicopter.” A single-camera solution also makes life easier for the VFX people: “With an array, you’ve got to stitch – and a stitch point is a potential point of failure. With the Nine by Seven, the file size is huge, but you don’t have to stitch.”
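That claim is easy to sanity-check in megapixels. A rough sketch (the single-sensor dimensions are approximated from the quoted ‘9.3K by 7K’, and the array total is gross, before the stitch consumes the overlap):

```python
# Approximate pixel budgets: quoted '9.3K by 7K' single sensor versus six
# Alexa Mini LF open-gate frames (4448 x 3096). Illustrative figures only.
single = 9300 * 7000          # ~65 MP, approximated from '9.3K by 7K'
array = 6 * (4448 * 3096)     # ~83 MP gross, before stitching

print(f"single sensor   : {single / 1e6:.0f} MP")
print(f"six-camera array: {array / 1e6:.0f} MP (gross)")
# The ~10% overlap at each seam is consumed by the stitch, so the array's
# usable total lands closer to the single sensor's figure.
```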
Stitch it in post
John Moffatt is a visual effects supervisor with two decades of film credits, including many of the Harry Potter series, as well as movies such as Wonder Woman 1984, The Da Vinci Code and Atonement. Good results from arrays, he confirms, require planning.
“Each of the lenses needs to be grid tested, which means we shoot a black & white grid, mounted on a flat board, for each of the lenses on each of the cameras. That allows us to see what distortion and barrelling the lens is creating. Once we’ve analysed that, we’re able to remove the distortion, allowing us to put the six elements together in Nuke or the 3D software.”
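The compositing team would do this in Nuke or the 3D package, but the measure-and-remove workflow itself can be sketched with OpenCV’s standard checkerboard calibration (illustrative only; the grid dimensions and file paths are hypothetical):

```python
# Estimate lens distortion from grid photographs, then undistort a plate.
import glob
import cv2
import numpy as np

GRID = (9, 6)  # interior corners on the hypothetical test chart
# Known corner positions on the flat board (z = 0), in grid units.
objp = np.zeros((GRID[0] * GRID[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:GRID[0], 0:GRID[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("grids/*.png"):          # hypothetical grid shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, GRID)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]                # (width, height)

# Solve for the camera matrix and distortion coefficients (barrelling etc.).
_, mtx, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size,
                                         None, None)

# Remove the measured distortion from a plate before the stitch.
plate = cv2.imread("plate.png")
cv2.imwrite("plate_undistorted.png", cv2.undistort(plate, mtx, dist))
```

One grid pass per lens, per camera, and the six plates can be flattened into the same rectilinear space before they meet in Nuke.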
As the end user of the data created by camera arrays, Moffatt is a big fan. “This technology allows you to acquire more information than you need, to make choices later on. There’s many a producer who says, ‘Woah, why do you want that?’, because there’s a significant upfront cost. But, ultimately, it allows for the creative choices downstream, which is the thing everyone remembers. Whether or not the director and cinematographer were happy with the results, it’s easy to find people who comment on visual effects work, and it gets labelled as ‘bad CGI’. What you don’t see is the good CGI, because it doesn’t get noticed!”
Originally featured in the June 2021 issue of Definition magazine.