Lewis Telfer – Facial Capture Supervisor
DI4D Faces is our new blog series where we spotlight the people behind DI4D to celebrate their talent and their careers. To kick off our DI4D Faces series, we spoke with Lewis Telfer, our very own Facial Capture Supervisor, about his role, his highlights working at DI4D and how he sees the future of facial animation.
What drove you to facial animation and facial performance capture?
I have always been interested in people’s faces and how they’re portrayed. My art portfolio at school was based on people’s faces and trying to portray emotion. While studying 3D Computer Animation at Glasgow Caledonian University, I spent a lot of time modelling and sculpting characters, which I really enjoyed. Since I was really interested in learning about how the face moves, facial animation seemed like the natural next step. What’s great about DI4D is its approach to facial animation – while there’s definitely still a place for traditional keyframe facial animation, I always felt that going down the performance capture route resulted in the most realistic facial animation and it’s that realism that I find the most impressive.
Can you tell us a little bit about your role and how it has evolved since you joined DI4D?
For the most part, my role involves overseeing any service projects that we are working on. I try to ensure that any data captured is up to the highest standard that our clients expect. I’m part of the team that operates the camera systems and supervises the performance capture on set. Oftentimes, I’m involved in liaising with clients to understand their needs and answer any technical or practical questions that they might have about the project. If we’re a little quieter, I try to spend time figuring out innovative ways to use the tools we have at our disposal and improve our pipeline and the data that we produce.
Over the years, my responsibilities have increased significantly, as have the techniques and tools we use to produce the animation. I’m more hands-on with the camera systems and data capture. I’ve also become a lot more involved with clients and the organisational aspects of the job, something I wasn’t involved in when I first started.
Since you joined DI4D four years ago, how has the technology evolved?
DI4D’s technology has definitely changed in a number of ways. We’re now able to capture more data and higher-resolution footage, processing higher-resolution raw data and tracking higher-resolution meshes, which lets us really raise the bar when it comes to attaining a high level of detail. When I first started, we were using a prototype HMC or a FoxVFX HMC; now we have our own DI4D HMC, which is really cool! Software-wise, the animators and engineers have a great relationship, so we’re always adding new tools or developing new techniques to improve our workflow or overcome obstacles.
With DI4D’s technology, we’re processing at the highest resolution and building a 3D mesh for every frame of a given performance. We produce animation that exactly matches the actor’s performance – creators know that what they see is what they get. We also tend to have a good relationship with our clients and often work with them to cater to the needs of their project, as opposed to offering a one-size-fits-all solution, which has been a catalyst in pushing the technology forward.
From your experience, how have you seen the industry change over the years?
Back when I started in 2017, we spent a lot of time solving our data to facial rigs that were constrained by controls, which I think can limit the quality of the final animation. The industry certainly seems to be steering more towards direct drive in some form or another. I think clients and directors are becoming more sympathetic to the idea of capturing the performance they want, as opposed to getting something good but having to spend time editing and adjusting the performance in post.
Regarding projects you’ve worked on, which one would you highlight and why?
That’s a difficult question; there are so many! It’s always really exciting to see what our clients are able to do with the animation data that they receive from us. I think a highlight for me would be The Kid Who Would Be King, as it was the first project where I was on set and involved with the performance capture. The Far Cry 5 trailers we worked on looked great, as did the Love, Death & Robots episodes. I really liked the style of them and thought the visuals were incredible. Another standout project for me is HOME. I was involved in the whole pipeline, from capture to delivery, and I think the final piece looks amazing. Finally, I have to highlight Rachael from Blade Runner 2049 – just mind-blowing. That project demonstrates how it’s possible to create believable, realistic animation to the point where you won’t even know it’s not a real person.
How do you think DI4D’s technology is making an impact on facial performance capture and facial animation?
I think DI4D’s technology is certainly making a positive impact on facial animation. We’re working really hard to produce the highest-quality facial animation at a scale that has traditionally meant a compromise in quality. Our service covers everything, from capturing the data to delivering the final animation, making it a reliable option for clients.
The launch of PURE4D this year was definitely a significant moment for everyone at DI4D. We had all spent a fair bit of time piecing the parts of the puzzle together, so seeing it come to fruition was incredible. It was definitely a turning point for us, as it really proved that we’re able to tackle much larger projects without compromising on the quality that we pride ourselves on.
Can you anticipate what might be the next big thing in facial animation?
It’s difficult to say. With all of the advances in virtual production and game engines that can support higher resolutions, I think we’re probably looking at more CG characters being captured and treated in the same way an actor or actress is captured for a movie. We’re going to rely less on traditional rigs, which allow for artistic input but have the potential to limit the quality of the animation, and rely more on capturing the best possible performance and using direct drive or unconstrained rigs that really only exist to compress the data into a format that’s easy to drop straight into the engine. Technologies such as machine learning will continue to develop and become more prevalent. We’re going to see a scale we never thought possible, and we’ll be able to produce much more data in a shorter space of time. With digital doubles becoming more common, there will be more powerful solutions available, and it’s only going to get better as time goes on.