Ewan Borland – Chief Software Engineer

In this edition of DI4D Faces, we speak to one of the longest-serving members of the team, Chief Software Engineer, Ewan Borland. Ewan has been instrumental in every aspect of the company’s development over the years, and we’re extremely proud to share this in-depth look at his career and how his work has helped shape the field of facial animation.

Background

Tell us a little about you and your background.

I studied Mathematics and Computing Science at the University of Glasgow. At that time they had a research group dedicated to computer vision and I was lucky enough to do a project in my final year working with data from an early structured light 4D capture system they had developed.

I had always been interested in computer graphics but that truly felt like a new and exciting area and I was keen to explore further. After graduating I was able to secure a place within that research group and work on a variety of topics covering everything from experimental animation for children’s TV to capturing accurate 3D models of live pigs!

Role

When and why did you join DI4D?

Towards the end of 2004 I had decided that I wanted to move out of academia and put my skills to use in industry. Both of DI4D’s founders, Colin Urquhart and Douglas Green, had strong ties to the research group at the University of Glasgow where I was working. As such, I was aware of their company and what they were hoping to achieve, and they had some knowledge of my expertise and the topics I had been working on.

As luck would have it, having done the hard work of getting the company up and running, they were at the point where they were ready to grow the team just as I was looking for my next challenge. We got together and it seemed like we would be a good fit, so I joined the team in January 2005 on a three-month contract – and I’m still here 16 years later!

What drove you to facial animation and facial performance capture?

I’m old enough that facial animation technology was in its infancy when I was making career choices, so it wasn’t an area I could set out to work in directly. What I was really interested in at the time was CGI (Computer Generated Imagery) in all its forms – particularly the degree of achievable realism. I pursued topics in that direction whenever I could. As the technology and tools matured, capturing, modelling, and rendering everyday objects became commonplace, and the real challenge in realism became portraying and animating the human face in a convincing manner.

Can you tell us a little bit about your role at DI4D and what you do on a daily basis? 

My work varies a lot from day to day depending on the projects we have running. Usually my time is split fairly evenly between R&D activities and providing support for existing tools in use on live projects. In both cases it involves working closely with our team of motion editors to understand where our tools work well, and where there is scope to make things even better. Sometimes I also get directly involved with clients if we need to make changes to our standard pipeline to accommodate their way of working. That variety certainly keeps things interesting.

Has your role changed since you started at DI4D?

When I joined DI4D, I was the junior member of a three-man team. Now I’m in charge of the team responsible for developing all of the proprietary software used by DI4D to deliver world-leading facial animation. That’s a big responsibility, but I think my long history with the company means I’m ideally placed to take it on – even if it does mean I’m not writing as much code as I used to!

DI4D has evolved from a technology company selling cutting-edge capture systems to one providing the very best facial animation as a service. That change in the company’s focus has been reflected in changes to my own role: whereas once I was heavily customer-facing, now I spend the majority of my time working with our talented teams to develop the best solutions for improving the fidelity and productivity of our tools.

Technology

Since you joined DI4D, how has the technology evolved?

When I first joined DI4D, we were focussed on the proprietary passive stereo photogrammetry algorithm that differentiated us from everyone else. We had a solution that allowed us to deliver accurate 3D models without any structured light, special makeup, or other onerous setup. However, much of the responsibility for taking those models to the next stage of the pipeline fell on the customer. Since then we have extended the scope of our service, delivering final-quality animation that customers can use directly. We now have an entire ecosystem of tools built around our PURE4D service that take the raw geometry produced by our original algorithms all the way through to the highest quality facial animation at scale, even for the biggest projects.
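For readers unfamiliar with the principle behind passive stereo photogrammetry (and emphatically not DI4D’s proprietary algorithm): once a point in one camera’s image has been matched to the same point in a second, calibrated camera’s image, its depth follows from simple triangulation of the horizontal offset (disparity) between the two matches. The sketch below shows that final step as a CUDA kernel; the buffer and parameter names are hypothetical.

```cuda
#include <cuda_runtime.h>

// Final triangulation step of passive stereo: given the disparity d (in
// pixels) between matched points in two rectified camera images, depth is
// Z = f * B / d, where f is the focal length in pixels and B is the
// baseline distance between the cameras. All names here are illustrative.
__global__ void disparityToDepth(const float* disparity, float* depth,
                                 int numPixels, float focalPx, float baselineM)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;

    float d = disparity[i];
    // Guard against unmatched pixels (zero disparity would divide by zero).
    depth[i] = (d > 0.0f) ? (focalPx * baselineM) / d : 0.0f;
}
```

The hard part, and where proprietary approaches differ, is producing a dense, accurate set of matches in the first place from plain video, with no structured light or markers to help.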

What is it about DI4D’s technology that makes creators want to use it to produce world-leading video games and blockbuster movies?

From an early stage in the company’s development, DI4D was ahead of the game in terms of the resolution and fidelity of the data we were able to produce. We have a history of delivering results that are verifiably accurate, not just plausible. That has enabled us to develop tools that capture and deliver all the nuances of an actor’s performance faithfully. I think content creators know that if you are capturing the best talent, it’s important not to lose any of their performance in the translation from video to animation, and that’s why they come to us.

Regarding projects you’ve worked on, which one would you highlight and why?

That’s a tough one. I’ve been involved in so many great projects that stand out for different reasons. Of the more recent projects, I think I’d probably have to single out the work we did on the Netflix anthology Love, Death & Robots. We worked on a number of different episodes, and it’s one of the few recent projects where I was involved with nearly every stage of the process, including supervising some of the filming using our HMCs (head-mounted cameras). Seeing all of the performances we worked on be so well received in their final form is very rewarding. I’m also a fan of Alastair Reynolds, and several of the episodes were based on his short stories, so that was an added bonus!

Past, Present & Future of Facial Animation

From your experience, how have you seen the industry change over the years?

For a long time there was a feeling that facial capture was good for creating rigs and other assets, but that the highest quality animation had to come from animators. I think there has been a realisation that, whilst there is nothing a talented artist can’t do with enough time and resources, technology has improved to the point where capturing a talented actor is a much more direct and efficient way to get a good performance.

This is also reflected in changing attitudes to capturing facial animation: we now see people capturing multiple takes, or even doing re-shoots, to get the performance they want from an actor, whereas historically they might have relied on editing or refining the final animation. I think the increasing prevalence of real-time applications will only further this trend.

Is there a particular moment or memory in your work history that really stands out, a pivot moment?

DI4D were one of the first adopters of NVIDIA CUDA technology, working with NVIDIA engineers to migrate our core algorithms to the GPU ahead of the very first NVIDIA GTC (GPU Technology Conference). That provided NVIDIA with an exciting visual demo and gave us the leap in performance necessary to make 4D a practical reality. Taking an algorithm that used to take minutes and running it in milliseconds transformed what was possible for us and paved the way for a host of other developments that have ultimately led us to PURE4D.
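For a flavour of why that migration was so transformative, consider the shape of a typical stereo-matching workload: the same small patch comparison repeated independently for every pixel of every frame. On a CPU that is a nested loop over the whole image; on the GPU, each pixel can be given its own thread. The sketch below is a generic illustration of that pattern, not DI4D’s actual code; all names and the similarity measure are assumptions.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// One candidate disparity's matching cost for every pixel, computed in
// parallel: each GPU thread sums absolute differences over a small window
// around its pixel in the left image versus the shifted window in the
// right image. Names and the SAD measure are hypothetical/illustrative.
__global__ void patchMatchCost(const float* left, const float* right,
                               float* cost, int width, int height,
                               int disparity, int radius)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    // Skip pixels whose comparison window would fall outside the images.
    if (x < radius + disparity || x >= width - radius ||
        y < radius || y >= height - radius)
        return;

    float sad = 0.0f;  // sum of absolute differences over the window
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            float l = left[(y + dy) * width + (x + dx)];
            float r = right[(y + dy) * width + (x + dx - disparity)];
            sad += fabsf(l - r);
        }
    }
    cost[y * width + x] = sad;
}
```

Because each output pixel is independent, the computation maps naturally onto thousands of GPU threads running at once, which is how a per-frame cost measured in minutes can drop to milliseconds.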

How do you think DI4D’s technology is making an impact on facial performance capture and facial animation?

With PURE4D, DI4D have created a service that provides an exceptional level of quality but that is still cost-effective and scalable enough for even the largest projects. What that means is that our technology is directly enabling the quality that was previously only seen in blockbuster films to be applied to in-game content at scale. This is raising the bar for facial animation in games and enabling our customers to further narrow the gap between film, TV, and games, helping them deliver next-level experiences on the next generation of consoles and hardware.

How do you visualise the future of facial animation? Can you anticipate what might be the next big thing in facial animation?

I think we are getting very close to the point where facial animation is indistinguishable from live action. Several high-profile films have showcased work that would have been inconceivable just a few years ago. We’re still skirting the boundaries of the uncanny valley for now, but as we bring more and more of the actor to the animation, we’re definitely getting closer to realising that goal. Of course, that’s most true for digital doubles, where the actor and the character look alike; bringing that same level of realism across to different characters, and being able to do all of that in real time, will likely keep us all busy for a while yet.

Get in touch with Ewan via LinkedIn. If you have any questions for the team, drop us a message here.

Sign up for our newsletter to stay up to date with DI4D.