Revolutionary scanning work at Clear Angle Studios

Posted on Jun 2, 2025 by Admin

The CEO of Clear Angle Studios (CAS) discusses its revolutionary scanning work, the promise of radiance fields in VFX and why CAS is prioritising real capture over AI-driven generation

DEFINITION: CAS seems to be expanding around the world – how has building a global network of studios shaped the way you approach your projects?

DOMINIC RIDLEY: We’ve built our systems and locations in response to the needs of studios and VFX teams and now have facilities in London, Atlanta, Vancouver, Athens, Cape Town, Budapest and LA. Another layer of consideration is that our regional facilities are often located in studio lots, with our London facility housed within Pinewood Studios, and our Atlanta facility operating out of Trilith Studios.

Thanks to our roving scanning units (we call them scantainers, and each one has its very own industry-themed name, including Jean-Claude Van Scan and Scanny DeVito), we aren’t limited to our static facilities. As some of our recent LinkedIn posts show, we can pretty much scan anywhere that we can transport one of our scanning units – and that includes Jordan’s deserts!

Having teams on the ground in key production hubs worldwide means we can offer our clients unparalleled consistency and availability across time zones, and that we can embed deeply into the productions we work across, wherever they are on the globe. It also allows us to respond faster and maintain our high production standards regardless of location.

DEF: Your proprietary Dorothy head rigs have been critical to your scanning work. How has Dorothy evolved, and what difference does it make to high-res facial capture?

DR: Dorothy has undergone several evolutionary changes since its launch in 2018. From improvements in camera fidelity and synchronisation to enhanced rig calibration and portability, each version is designed to push the limits of realistic 3D scanning. One of Dorothy’s more exciting updates was the collaboration on 4D capture with DI4D and Texturing XYZ, which allowed us to push the boundaries of what is possible with 3D scanning.

What sets Dorothy apart, though, is its ability to capture incredibly nuanced expressions and micro-movements. These details are absolutely essential for creating the most believable digital doubles possible. The result is not just high-resolution scans, but emotionally resonant performances that translate seamlessly to the screen. Vitally, these micro-movements and micro-expressions are what bring an actor’s double to life on screen, preventing doubles from feeling unnatural. It’s not just about being technologically advanced; it’s about performance preservation.

DEF: You showcased a breakthrough in 4D scanning in collaboration with DI4D and Texturing XYZ. Can you share some more about this achievement and its significance for the future of facial performance capture?

DR: These collaborations represent a major leap forward for the whole industry. By combining our precise volumetric capture techniques with DI4D’s motion fidelity and Texturing XYZ’s unparalleled texture data, we’ve been able to create a 4D pipeline that delivers incredibly lifelike, production-ready facial assets. What we’re most excited about is the sheer accuracy and expressiveness that this can now achieve; we’re able to capture scans down to the pore level while in motion. For studios pushing for photorealistic characters and next-generation VFX, this is a game changer, shortening the path from scan to screen and significantly elevating the quality of digital performance.

DEF: Radiance fields and Gaussian Splatting are hot topics in VFX. How do you see these developments influencing the future of digital asset creation?

DR: These technologies are incredibly promising. Radiance fields and Gaussian splatting are redefining how we think about spatial representation and light capture. Together, they are pointing toward a future where complex geometry and photorealism could be derived from minimal data input.

This innovation could potentially reduce overheads and unlock faster workflows for artists across a wide range of disciplines. At the moment, they don’t appear to fit into a traditional visual effects workflow, but we’re looking forward to seeing how this evolves in the future – especially for environmental capture and look development.
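For readers new to the terminology: a radiance field is, at its core, a function that maps a 3D position and viewing direction to colour and density, while Gaussian splatting represents a scene as a cloud of 3D Gaussians that are blended together to form an image (in production renderers, by rasterising their projected footprints rather than marching rays). The toy Python sketch below illustrates only the underlying volume-blending idea; every name and value in it is hypothetical, and it does not represent Clear Angle Studios’ tooling.

```python
# Illustrative only: a toy volume-rendering pass over a few isotropic 3D
# Gaussians, showing the alpha-blending idea behind radiance fields and
# Gaussian splatting. All values below are hypothetical.
import numpy as np

def gaussian_weight(point, centre, sigma):
    """Unnormalised density of an isotropic 3D Gaussian at `point`."""
    d2 = np.sum((point - centre) ** 2)
    return np.exp(-0.5 * d2 / sigma ** 2)

def composite_ray(origin, direction, splats, step=0.05, t_max=5.0):
    """March one camera ray and alpha-blend the Gaussians it passes, front to back."""
    colour = np.zeros(3)
    transmittance = 1.0
    for t in np.arange(0.0, t_max, step):
        p = origin + t * direction
        for centre, sigma, rgb, opacity in splats:
            alpha = opacity * gaussian_weight(p, centre, sigma)
            colour += transmittance * alpha * rgb
            transmittance *= 1.0 - alpha
        if transmittance < 1e-3:  # the ray is effectively opaque; stop early
            break
    return colour

# Two hypothetical splats: (centre, sigma, RGB colour, per-sample opacity)
splats = [
    (np.array([0.0, 0.0, 1.0]), 0.2, np.array([1.0, 0.2, 0.2]), 0.05),
    (np.array([0.1, 0.0, 2.0]), 0.3, np.array([0.2, 0.2, 1.0]), 0.05),
]
pixel = composite_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), splats)
print(pixel)
```

The appeal Ridley describes, photorealism from minimal data input, comes from fitting the parameters of many such primitives to a set of photographs rather than hand-building geometry and shaders.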

DEF: You work across full-body scanning, facial capture, LiDAR, aerial surveys and props. How do these disciplines come together when building complete digital assets for productions?

DR: The core of our work at Clear Angle Studios is all about supplying high-quality data that can be implemented seamlessly into production workflows. For example, LiDAR ensures that our scans integrate flawlessly into virtual environments, while aerial surveys offer macro-level context for real-world environments. Our goal is to eliminate the guesswork by capturing every detail – from a wrinkle on a forehead to the shadow cast by a mountain range. When you bring all these elements together cohesively, productions can move faster with fewer surprises in post.

DEF: CAS has taken a firm stance on prioritising real capture over AI-driven generation. Why is maintaining a focus on real-world data key, and do you see this outlook changing?

DR: We believe that grounding digital assets in real-world data ensures authenticity, accuracy and, most importantly, creative integrity. AI tools are evolving rapidly and certainly have a place in the pipeline, but they’re only as good as the data they’re trained on. Our priority is to build a foundation of truth using reality. We believe that using real light, real geometry and real motion delivers a more realistic end product. Our focus on replicating reality is at the core of what we do. Ultimately, we could be open to integrating AI in the future – if it enhances quality or efficiency – but never at the cost of realism. The physical world remains our gold standard.

DEF: Finally, what is one recent project or technical achievement you are particularly proud of?

DR: Alongside our partners DI4D and Symbiote, we’ve developed a 4D performance-capture pipeline that produces per-frame texture maps, enabling us to capture and reproduce an actor’s performance with unparalleled fidelity. This technology, which we announced to the public at SIGGRAPH in 2024, was first used to capture the performance of Demi Moore for the climactic scenes of The Substance, where her face appears on part of the Monstro Elisasue character.

To create the shots, the production combined full-body and prop scans of the various prosthetic elements with 4D performance data from a session Moore did in our Dorothy rig. This approach means that, even when the audience is looking at a full VFX asset, they are still watching an authentic performance from Moore – recreated exactly as she gave it, down to the minutest detail.
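To make the idea of per-frame texture maps concrete, the hypothetical Python sketch below shows how such a capture might be consumed downstream: every tracked frame of the facial mesh is paired with its own texture map, so pore-level shading detail follows the performance rather than being baked once. The directory layout, file formats and render_frame placeholder are illustrative assumptions and are not part of the actual Clear Angle Studios, DI4D or Symbiote pipeline.

```python
# Hypothetical sketch: pairing a per-frame mesh sequence with per-frame
# texture maps, so shading detail changes frame by frame with the performance.
# Paths, file extensions and render_frame are placeholders, not a real API.
from pathlib import Path

def load_sequence(mesh_dir: str, texture_dir: str):
    """Pair each frame's tracked mesh with the texture map captured for that frame."""
    meshes = sorted(Path(mesh_dir).glob("frame_*.obj"))
    textures = sorted(Path(texture_dir).glob("frame_*.exr"))
    assert len(meshes) == len(textures), "every mesh frame needs its own texture"
    return list(zip(meshes, textures))

def render_frame(mesh_path: Path, texture_path: Path) -> None:
    # Placeholder: a real pipeline would hand these assets to the renderer here.
    print(f"render {mesh_path.name} with {texture_path.name}")

for mesh, texture in load_sequence("capture/meshes", "capture/textures"):
    render_frame(mesh, texture)
```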

The fact that her performance in this film earned her a best actress nomination at the Oscars speaks to the potential of our technology for capturing physical scenes and subjects in a way that allows visual effects to supplement – rather than replace – real-world elements.

This article appears in the May/June 2025 issue of Definition
