Claire Copland – Head of Rigging
In DI4D Faces, we bring to the fore the people behind DI4D to celebrate their talent and their careers. In this edition, we speak to DI4D's Head of Rigging, Claire Copland, about her career evolution and the changing technology of facial animation.
What drove you to facial animation and facial performance capture?
I started with a background in the arts. After completing a six-month art course at college, I went into 3D Computer Animation at Caledonian University, where I later received a master's in 3D Design. I had a strong interest in visual effects and focused on 3D facial rigs and animation in my master's, so working with DI4D seemed like the perfect fit.
Faces have always been at the forefront for me. I drew them a lot in high school, and they've been a constant throughout all of my education. Facial capture is such an exciting area because it's an ever-growing field. We're seeing amazing things coming.
Can you tell us a little bit about your role and how it has evolved since you joined DI4D?
As Head of Rigging, I evaluate the client rigs we receive and work out how they will operate with our processes. Now that we're seeing a move away from traditional rigs towards our PURE4D pipeline, I'm often working with PURE4D rigs, or post-processes such as eye-tracking and jaw motion.
My career at DI4D started in the role of Motion Editor, where I also learned how to process data and began running the whole processing pipeline. From there, I moved into project management and then onto rigging. I also deal with data management, so I’ve worked across the board! There’s always a learning process: whatever the project, there is some creative problem-solving to be done. I find that the most enjoyable part is figuring out the little things that make everything look and work better.
Since you joined DI4D five years ago, how has the technology evolved?
We have a completely new pipeline for processing data, which is much more user-friendly. We're working on bigger projects, capturing data at higher resolution, and can now build 4D data at very high quality, so projects are ten times bigger than before. With DI4D's technology, we're able to capture micro-expressions on the face, little wrinkles around the eyes or twitches. It makes people a lot more human. That's just in the last five years – the increase in quality is incredible.
The most important development, for me, was when we started using our own HMC (head-mounted camera), which meant bigger, performance-based projects, where traditional rigs were being used and required rig solves.
From your experience, how have you seen the industry change over the years?
The demand for realistic faces has really increased. We’ve always been known for realism, but on a small scale. In recent years, we’ve moved more and more towards digital humans, rigging, and digital doubles – focusing on bigger and bigger projects. For me, the most interesting things going on at the moment are the attempts to de-age people and recreate dead celebrities; recent projects have shown incredible steps forward in this area.
Regarding projects you’ve worked on, which one would you highlight and why?
I loved working on ‘Helping Hand’, an episode featured in the Netflix animation anthology Love, Death & Robots. For that, we had to work really closely with Axis Animation’s rigging team and the actors – we were at the early stages of some new techniques we were developing, so there was a good collaboration with Axis to make the animation as perfect as possible.
The F1 project was also interesting as it was the first one where we used PURE4D, which includes a new rigging system for the characters' faces. We created PURE4D rigs for six characters, perfected the pipeline, and the final results were amazing. We're now working on an even bigger project using PURE4D and we've been able to apply what we learned on F1 – early results we've seen from the client are incredible!
How do you think DI4D’s technology is making an impact on facial performance capture and facial animation?
With PURE4D, you're getting a pipeline that brings together the best of DI4D's different technologies: it captures the smallest movements a person can make and ensures they are faithfully replicated by the digital version of the character. It's revolutionised how we build digital doubles. When you've got a system that tracks the smallest details on an actor's face and can be used on large-scale projects, you're opening up new levels of facial animation.
Outside of the tech itself, the most exciting part of DI4D’s work is hearing feedback from clients and fans. After our first project with Call of Duty, we read so many comments about how impressive the faces were. Even those who don’t play video games might see digital Einstein and wonder how that was done – it’s great knowing that people are seeing and responding to this progress in all forms of media.
Can you anticipate what might be the next big thing in facial animation?
While there's a lot of talk about developing rigs to make digital doubles as precise as possible, a next step is potentially having no rigs at all. I think the PURE4D approach is a step in that direction. There's also a push to see as much of the final product on set as possible: having a digital double on a screen while you're capturing it on film, even if you haven't fully constructed it yet.
Projects are growing and they’re going to continue doing so. The technology is going to expand too: to get better detail, the files will be bigger, and the quality is going to go up. We’re going to get all those little wrinkles – even the ones you don’t know you have!