Rich Cross – Senior Software Engineer

A few months into the role, DI4D Senior Software Engineer Rich Cross tells DI4D Faces about his move from game development and VFX to the world of facial animation and facial performance capture.

Tell us a little about yourself and your background?

I first became interested in computer graphics back in the 90s after seeing Toy Story, and started coding at home, writing raytracing software and experimenting with real-time rendering. From there, I started working as a graphics engineer for a game developer, and I’ve since worked across most fields in game development: networking, audio, AI, tools, gameplay. I’ve also worked in embedded systems, in virtual and augmented reality, and on pipeline development for a VFX studio, so I have a fairly eclectic development background.

Why did you join DI4D? 

Lots of reasons! I wanted to return to working with high-end graphics after a few years in the more restricted world of augmented reality, but mainly I was looking for more interesting and challenging work, and the role definitely delivers that. The opportunity to work with technologies I hadn’t previously had exposure to, such as CUDA and OptiX, was another factor. And finally, getting to work on projects for AAA games and big-budget films is incredibly cool!

What skills have proven most valuable from this background in terms of the work you’re focussed on at DI4D? 

Experience with modern C++ was obviously valuable, as writing C++ is part of day-to-day development in the role. Previous experience with graphics APIs and shaders was also useful for working on the in-house tools used by the motion editors. Above all, though, I think the experience I’ve gained across a range of disciplines has been most useful. Development at DI4D is varied, covering everything from VFX pipeline operations, through machine learning and shader development, to mathematical algorithms. Being adaptable is a plus!

What drove you to facial animation and facial performance capture? 

My previous experience with facial animation was all using traditional facial rigs, with lots of custom controls to generate expressions, so using performance capture was a totally new experience for me, and I was amazed at the quality of the animation that can be achieved.  Being a part of creating that was a big attraction.

Can you tell us about your role at DI4D and what you do on a daily basis? 

I work in the engineering team as a Senior Software Engineer. This involves both working on in-house software and supporting the team of motion editors to achieve the great results they get. Development is split between larger, longer-term projects, often with an R&D focus, and ongoing enhancements to existing software. The software engineering team at DI4D is growing, so there are also some great opportunities to help shape and evolve the work that DI4D do.

In terms of the areas you’ve worked in so far, which would you highlight and why? 

This is hard to choose! I’ve enjoyed working on research-oriented code, and developing the algorithms used in the machine learning in Pure4D has been especially interesting. The mathematics involved can be challenging, but it’s also incredibly rewarding.

You’re working hybrid, both remotely and on-site. How is that facilitated at DI4D, and what benefits has it brought you?

Hybrid working works really well at DI4D.  We make good use of Slack for communication between those in the office and those at home, and within the engineering department we have a daily video call to discuss current development, so it never feels like you’re working alone. On a practical level, our setup means that I can use the same PC both at work and at home, so I don’t need to install software on my home PC just for work.  The time saved by not commuting lets me spend more time with family, and I get to enjoy some fresh air and go for a run or walk at lunchtime.

What are you excited about in terms of the evolution and future of the facial capture industry and why?

I think the quality of animation generated from captures is now so high that facial capture will replace traditional rigs as the preferred method of creating facial animation. This increased demand will lead to a greater focus on using machine learning to create facial animation quickly and accurately, and I expect a lot more research dedicated to real-time facial animation driven by machine learning.
