
Ironing out the wrinkles in VP
Posted on May 6, 2025 by Admin
Phil Rhodes considers how VP is evolving – and the role Zoe Saldaña’s nose plays
As new ideas evolve into established practices, they often become simpler, more affordable and less likely to bring cautious producers out in a nasty rash. Virtual production has been around for long enough to enter that phase, and it is certainly saving people both time and money. Whether it has become any easier, though, depends on what virtual means to any particular production.
If that sounds like a redundant question, bear in mind that live broadcast often treats terms like virtual production and virtual studio as interchangeable. If we’re broadcasting the Olympics and ask the presenters not to wear green, that’s fine. However, for single-camera drama, especially when shooting something like Guardians of the Galaxy, with characters who are both blue and green (and pink, yellow, white and furry) all in the same frame, it becomes much more complex.
In-camera VFX has become a catch-all term for photographing live-action scenes against a background image, but whatever we call it, ICVFX benefits the director, the actors and the people who no longer have to draw around Zoe Saldaña’s nose several thousand times. Don’t laugh – Dan Shor, who played the character Ram in TRON, reports having been accosted on the street by someone who had spent many months doing the same for him.
The thing is, all those benefits have long been a hallmark of back projection, a technique that first earned Oscars in – wait for it – 1930. In fact, there are back-projected scenes in Aliens that still withstand modern scrutiny. Now, Cameron’s classic may be a touchstone of the real-world, practical-effects filmmaking that audiences value, but comparing it to modern VP might seem just a bit far-fetched. Today’s set-ups might involve camera tracking, real-time rendering of custom-built virtual worlds, 5000 DMX channels of image-based lighting and a dozen other refinements that certainly didn’t exist in 1985.
What matters, perhaps, is that many of those technologies are also omitted from set-ups of today, and that’s okay.
A full-capability VP stage involves a lot of complexity. The video wall is a multi-ton, multimillion-pound piece of hardware comprising display panels, receiver cards, processors and a small town’s worth of power and data cables. Equipment choices influence brightness, colour, frame rate and resolution, which in turn influence how closely it can be seen before dissolving into dots or shimmering interference patterns. Some displays have more resolution than the images displayed on them, just to avoid moiré. Displays need calibrating so the panels don’t look like a chequerboard of almost-matching images; processor manufacturers such as Brompton provide devices to handle that.
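As a rough illustration of that resolution-versus-distance trade-off, the sketch below applies two common rules of thumb (the coefficients are general industry guidance, not figures from this piece) to estimate how close a camera or viewer can get to a wall of a given pixel pitch before the picture starts dissolving into dots. Moiré also depends on the camera, lens and focus distance, so treat these as starting points rather than answers.

```python
import math

def viewing_distances(pixel_pitch_mm: float) -> dict:
    """Rough viewing-distance guides for an LED wall of a given pixel pitch.

    These are common rules of thumb, not figures from the article:
    - the 'quick rule' puts the minimum distance in metres at roughly the
      pixel pitch in millimetres;
    - the 'acuity limit' is where a viewer resolving about one arcminute
      (roughly 20/20 vision) can no longer pick out individual pixels.
    """
    one_arcminute = math.radians(1 / 60)           # ~0.00029 radians
    quick_rule_m = pixel_pitch_mm                  # e.g. 2.6mm pitch -> ~2.6m
    acuity_limit_m = (pixel_pitch_mm / 1000) / math.tan(one_arcminute)
    return {"quick_rule_m": quick_rule_m, "acuity_limit_m": acuity_limit_m}

for pitch in (1.5, 2.6, 3.9):                      # illustrative wall pitches
    d = viewing_distances(pitch)
    print(f"{pitch}mm pitch: ~{d['quick_rule_m']:.1f}m quick rule, "
          f"~{d['acuity_limit_m']:.1f}m before pixels vanish")
```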

If we want the on-screen image to react to camera position, we need to involve camera tracking – whether that means markers on the main camera with tracking cameras around the studio, or the reverse. The GhostFrame system cleverly displays tracking markers (or other data) on the screen when the camera’s shutter is closed. Cranes can also be fitted with encoders to relay position data. Completely markerless systems provide convenience, though they might struggle in a particularly featureless corner.
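None of the real systems are this simple, but the core idea behind camera tracking can be sketched in a few lines: once the engine knows where the camera is, every virtual point has to be redrawn at the spot on the wall where it lines up from that viewpoint. A minimal Python sketch, with coordinate conventions chosen purely for illustration rather than taken from any particular tracking system:

```python
import numpy as np

def project_to_wall(camera_pos, virtual_point):
    """Find where a virtual scene point should be drawn on a flat LED wall
    so it lines up correctly from the tracked camera's position.

    Convention (an assumption for this sketch): the wall lies in the z = 0
    plane, the camera sits at z > 0 in front of it, and virtual scenery
    lives at z < 0 'behind' the wall.
    """
    c = np.asarray(camera_pos, dtype=float)
    p = np.asarray(virtual_point, dtype=float)
    # Intersect the line from camera to virtual point with the wall plane z = 0.
    t = c[2] / (c[2] - p[2])
    return c + t * (p - c)      # x, y give the wall position; z is ~0

# A virtual tree 5m behind the wall, seen from two camera positions:
tree = (1.0, 2.0, -5.0)
print(project_to_wall((0.0, 1.5, 4.0), tree))   # camera centred
print(project_to_wall((2.0, 1.5, 4.0), tree))   # camera dollied 2m right
```

Move the camera and the same virtual tree lands on a different part of the wall – which is exactly the parallax a static plate can't provide.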
With or without tracking, genlocking the display and camera might still involve a cable even now – it took the world a long time to work out wireless genlock. For more than one camera, sequential exposure systems are an option, but since these displays rely on pulse-width control, adding more cameras reduces the pulses per frame, ultimately leading to compromises in brightness control. Lenses may also need encoding for focus, iris and zoom. Some lenses have this built in (with at least two existing systems), while various bolt-on solutions are available.
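The multi-camera brightness problem is easy to see with a little arithmetic. The sketch below assumes an illustrative 3840Hz refresh rate and an equal split of each frame between cameras – real processors are considerably cleverer – but the trend it shows is the real one: every extra camera shrinks the pulse budget available to each exposure window.

```python
def pulses_per_camera(frame_rate_hz: float, led_refresh_hz: float, cameras: int) -> float:
    """Back-of-the-envelope look at sequential (time-multiplexed) exposure.

    Assumptions for this sketch: the wall's usable pulse budget scales with
    its refresh rate, and each camera gets an equal slice of the frame.
    Fewer pulses per slice means coarser control over brightness for each
    camera's view.
    """
    frame_time_s = 1.0 / frame_rate_hz          # e.g. 1/24 of a second
    slice_time_s = frame_time_s / cameras       # each camera's exposure window
    return led_refresh_hz * slice_time_s        # pulses available in that window

for n in (1, 2, 3, 4):
    print(f"{n} camera(s): ~{pulses_per_camera(24, 3840, n):.0f} pulses per slice")
```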
The most complex way to get an image onto a screen involves a full 3D graphic design effort. Since Unreal Engine isn’t ideal for modelling, assets are often created in software like Maya before being textured and lit to balance visual fidelity with server performance limits. Lighting the real space to match might require tools like Assimilate’s Live FX, which turns video images into lighting control data, then rigging lighting devices to suit. Calibrating lighting to match the screen image is another process that may still be somewhat manual.
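To make 'turns video images into lighting control data' a little more concrete, here's a conceptual Python sketch of the general idea – emphatically not Assimilate's actual implementation – which averages a frame into a few zones and expresses each as the 0-255 levels a DMX-style RGB channel would carry:

```python
import numpy as np

def frame_to_rgb_levels(frame: np.ndarray, zones: int = 4) -> list[tuple[int, int, int]]:
    """Reduce one video frame to per-zone RGB levels for image-based lighting.

    A conceptual sketch only: the frame is split into vertical zones (one per
    notional fixture), each zone is averaged to a single colour, and that
    colour is expressed as 8-bit values a DMX-style channel would carry.
    """
    height, width, _ = frame.shape
    levels = []
    for i in range(zones):
        zone = frame[:, i * width // zones:(i + 1) * width // zones]
        r, g, b = zone.reshape(-1, 3).mean(axis=0)
        levels.append((int(r), int(g), int(b)))
    return levels

# A fake 1080p frame that fades from warm to cool across its width:
x = np.linspace(0, 1, 1920)
fake_frame = np.zeros((1080, 1920, 3))
fake_frame[..., 0] = 255 * (1 - x)      # red falls off left to right
fake_frame[..., 2] = 255 * x            # blue rises left to right
print(frame_to_rgb_levels(fake_frame))  # warm zones on the left, cool on the right
```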
That description is inevitably incomplete, and it still sounds like a lot (because it is). VP facilities are built from a disparate stack of equipment mostly inherited from other industries. PlayStations and Xboxes have created a vast market for 3D rendering devices, but the appetite for VP studios is probably too small for anyone to pay for the R&D on a convenient, single-purpose box which does it all – even if such a thing were possible to imagine.
This all sounds a bit grim, but we also know that a lot of productions are having a wonderful time shooting car interiors against LED video walls. Clearly, not every episode of this season’s new police procedural is shouldering the VFX workload of a nine-figure superhero movie. How do we wrangle such a pile of equipment on a smaller show?
Well, to a great extent, we don't, because the world is realising that many applications of ICVFX only need a subset of the full arsenal.

Aliens didn’t use camera tracking, 3D rendering or even real-time colour correction. Interactive lighting involved waving flags in front of lights. Screen content came from a model unit on the adjacent stage, working under the gun to create backdrops around the main unit’s schedule (which makes it difficult to complain about the pre-production workload of preparing material for an LED wall). Better yet, LED walls wouldn’t exist for decades after Aliens, so it relied on 35mm projection. Black picture areas were still a white screen, so the slightest stray light would destroy contrast (that’s what compromises the least-successful shots). It was in-camera compositing on hard mode, and it worked.
That’s not to propose classic back projection as the right solution for 2025, but between those two extremes lies a huge range of options. Using a live-action plate rather than real-time rendering is standard procedure when it comes to convenient car interiors.
A couple of early ICVFX experiments utilised video walls rented from live-events companies, without even synchronising the screen to the camera. That demands a crew who knows what they're doing, to put it mildly. Fortunately, such crews are readily available.
Even at the high end, time and experience have smoothed out some of the early wrinkles. In particular, all that equipment is separate, but that also makes it highly configurable. High-contrast LED walls will always be easier to light around than a white back projection screen. Hybrid approaches such as 2.5D backdrops – where flat images are projected onto approximate geometry – can save time. There will always be skills to learn – shooting good plates is an art form of its own – and experienced professionals will concede that most set-ups rely on at least some manual adjustment.
Things may still change. AI promises to do some of the content-generation work, as it has promised so much (and sometimes delivered). What matters, though, is that the range of options which make ICVFX complex also make it flexible enough to cover many different scenarios. It’s probably that realisation, as much as any technology, which has made VP so much more approachable.
This story appears in the April 2025 issue of Definition