Andrew Marshall – Lead Motion Editor

In the second edition of DI4D Faces, we spoke with our Lead Motion Editor, Andrew George Marshall. After 9 years working at DI4D, Andrew shares his vast knowledge in the field of facial capture and discusses how the animation industry has evolved.  

What drove you to facial animation and facial performance capture?

I finished high school early to study Computing, before going on to study Computer Games Development at college. Getting my first taste of 3D animation and working on some very simple game projects was the realisation of a dream, and I never looked back. I’ve always really enjoyed 3D animation and modelling. For my Honours project, I compared 3D software packages, modelling and animating characters in each. While that doesn’t directly concern facial animation, it definitely laid the groundwork for my career at DI4D.

I’ve always enjoyed keyframe animation. Working to produce something exactly how you want it gives me a great sense of accomplishment. Facial performance capture was the perfect blend of my animation knowledge and my personal drive to animate models to the highest standard. With performance capture, each new project gives me that same sense of accomplishment when the result looks just like the real thing.

Can you tell us a little bit about your role and how it has evolved since you joined DI4D?

I joined DI4D in June 2012, almost immediately after I had completed and handed in my Honours dissertation at university. As Lead Motion Editor, I work with our animation team to make sure we produce the highest-fidelity performance capture data possible. I track data from both the PRO and HMC systems and handle the stages that follow tracking: reviewing data and post-processing sequences. Recently, I have been working on getting our data onto clients’ rigs so it works with their own processing pipelines.

After 9 years at DI4D, my role has changed quite significantly. I was one of the first Motion Editors the company ever hired. Back then, I mainly tracked the data. Now, I deal with tracking, post-processing, and machine learning, working with our team to ensure our data is of the highest standard we can achieve. I never dealt with any scripting when I first started; now a day doesn’t go by without having to script something!

Since you joined DI4D nine years ago, how has the technology evolved?  

When I started nearly a decade ago, the software was in its infancy. Now it is far more capable. Tracking data, setting up meshes, and post-processing have reached the point where we can work on very intense selects with high levels of motion, making them look as good as possible at very high resolution, and all with much greater ease and at a higher fidelity than was possible years ago. All the software used to process data for the PRO and HMC systems is DI4D’s own. Over the years, we’ve pushed the boundaries of our in-house work to adapt each stage of the facial performance capture pipeline.

A few years back, 20 minutes of captured data took the full team a few months to process. DI4D’s current technology can produce the same amount of data in much less time, with fewer motion editors working on it. It’s the perfect time for video games and blockbuster movies, as more and more projects require ever greater volumes of data to be captured and processed. With PURE 4D, we no longer have the same concerns about time constraints or the size of projects.

From your experience, how have you seen the industry change over the years?

The industry for facial motion capture has seen exponential growth, with next-generation consoles and massive blockbusters relying on huge amounts of CGI at the highest level that can be created. Facial capture has become a real go-to for the industry to take its characters to the next level, blurring the line between 3D animation and real life as best it can.

Regarding projects you’ve worked on, which one would you highlight and why?  

The first project that comes to mind as a highlight is Call of Duty: Modern Warfare. Working on a game that has become the highest-selling title in the franchise to date was simply spectacular. At the time, it was the most data we had produced for a game, and it feels like that opportunity was the catalyst for us to win bigger projects. Blade Runner 2049 was also a huge project. It was incredible to be part of a film that went on to win the Oscar and BAFTA for best visual effects. I was really proud to have worked on a project that was so acclaimed and captured people’s attention so strongly.

Additionally, a pivotal moment for me was when our CEO Colin Urquhart moved to LA. Having a system and someone there in the heartland of the motion capture industry, with almost immediate access to many big-time film and game productions, really gave us a boost in terms of clients and recognition.

How do you think DI4D’s technology is making an impact on facial performance capture and facial animation?

DI4D’s technology has made an impact on the number of companies turning to motion capture for the most realistic performances. For photoreal character animation, facial capture can make characters almost indistinguishable from real-life actors, and DI4D provides data that doesn’t require clean-up afterwards.

Can you anticipate what might be the next big thing in facial animation?

I believe that facial animation, and facial motion capture in particular, will only grow and reach levels beyond where it is now. More and more motion capture is being used to recreate performers on screen. Digital doubles are revolutionising the industry and will only become more popular, powering the future of animation. Incredible advances in machine learning and in the animation rigs being used will allow studios to capture entire projects directly.

Capturing the live performance of specific characters might be the next big thing. Companies have created motion capture pipelines that capture a performer’s work and animate a character in a live setting. However, the technology is still in its infancy. In the future, if performers could act in front of cameras, and the mesh could be tracked almost instantaneously in a live setting with little to no clean-up afterwards, the size and scale of the jobs that could be captured and produced would be virtually unlimited.

Get in touch with Andrew via LinkedIn. If you have any questions for the team, drop us a message here.

Sign up to our newsletter to stay up to date with DI4D.