Creator Profile: Territory Studio

Territory Studio used Depthkit to create the vision system of a robotic dog in the feature film A-X-L

August 6, 2019

Territory Studio, co-founded by Executive Creative Director David Sheldon-Hicks, is an award-winning motion graphics and visual effects studio shaping the cinematic language of future interfaces, science fiction holograms, and even the eyes of a robot dog.

Engaging the emotions we feel toward certain technologies, whether dystopian or comforting, Territory Studio crafts the visual language that drives technological storytelling in hit films like Guardians of the Galaxy, Blade Runner 2049, and Avengers.

Territory Studio used Depthkit to create the vision system of a robotic dog in the feature film A-X-L, directed by Oliver Daly and based on his short film MILES. Volumetric Capture Supervisor Andrew Gant captured Depthkit content of the two lead actors, which Territory’s team, including Art Director Nils Kloth, then composited into the vision system and HUD (heads-up display) of A-X-L, the robot dog. Through A-X-L’s encounters with two intrepid protagonists, the dog literally learns to see, with new resolution and meaning, and is able to bond with its human companions.

What does volumetric capture mean for storytelling, and how does it affect a cinematic production process? How do we relate to and tell stories about the increasing role of technology in our lives? What’s it really like to stare a depth sensor in the face and treat it like your new canine best friend? Scatter spanned time zones and teleconferencing hiccups to dive into these questions with Territory Studio.

This interview was conducted by Anna Henson, Producer at Scatter, and features Territory Studio Executive Creative Director and Co-Founder David Sheldon-Hicks, Art Director Nils Kloth, and Volumetric Capture Supervisor Andrew Gant.

When did Territory start, what kinds of projects do you work on, and what was the main dream behind the creation of Territory Studio?

David: We started nine years ago in 2010, with myself and two other business partners. It really started with a project for Electronic Arts in LA for the game Medal of Honor. I had a background in motion graphics for music videos and films, and always knew that we wanted to be in motion in some form or another with a focus on design. And then after about a year, we were approached by the art department on Prometheus, to help them with screen graphics and holograms. That was an amazing project to be a part of. From our involvement with that film, we started to be known for a kind of "future tech" visualization for science fiction and fantasy films, like Guardians of the Galaxy, Jupiter Ascending and Avengers: Age of Ultron. These led to our more recent films like Blade Runner 2049, and volumetric graphics for Ghost in the Shell, Ready Player One and Men In Black: International. We've also recently been working on Black Mirror and some other Netflix things. We often work on feature films, but we also work on computer game projects and TV commercials and all sorts of different content. But consistently we are using design and technology to tell stories.

You're creating the images of future technologies for contemporary audiences, with relevance to contemporary situations and insight into possible futures, which is really interesting. How do you tell the story of what a future technology will look or behave like?

David: Sometimes it's using that future idea as escapism, like in Blade Runner 2049 for example. You're telling a dystopian future, but you're hoping it's not really going to happen, you're hoping it's not really going to be like that. In other cases, it's helping brands imagine what their ambitions could be, and setting out where technology and user journeys could be going, which is really fascinating. We're doing a lot of work at the moment on future cities - what's going to happen to cityscapes when cars drive themselves? What's going to happen to architecture and urban planning when it's all automated?

Let’s talk about the short film MILES and the feature film A-X-L. How did MILES come to Territory? MILES was the precursor to A-X-L, correct?

A-X-L, Global Road Entertainment, Lakeshore Entertainment, Phantom Four Films



David: Yes. From Territory's perspective, we were approached by Oliver Daly, the director of MILES. He'd already shot all the live action. And I think he'd already worked with [Volumetric Capture Supervisor] Andrew [Gant] on shooting the Depthkit slugs and capturing all that data. Oliver had seen Guardians of the Galaxy and said, I really loved the aesthetic that you used in that film, would you be interested in a no-budget short film? I get those kinds of emails daily. We always try and support filmmakers, and I try to take on the really good ones. I said, just send me what you've got. And he showed me the edit, and my jaw was on the floor. Beautifully cast, amazing photography. Really great action. I mean, it was just brilliant - not at all on the level I was expecting to see, especially from a director doing his first project. I thought, ok yeah we'll do this for you.

We'd never worked with Depthkit before. We'd never experienced it, but because we had a long-running relationship with Microsoft, I kind of understood the principles behind it. We've worked on a number of Xbox game titles. So I kind of understood how the sensor worked in terms of infrared, and the resolution of the file that you get back. I understood the basic principles. So I said yes to it without really knowing how we would do it. And then what I loved about the whole concept is that you've got this robot dog, and you want to see the view of the world through that robot dog's eyes. The idea was, let's actually use a robotic sensor, let's image the world in the way a robot dog might actually view it. And the beauty of that is you get the odd aberration or imperfection in the digital imaging. It makes it even more authentic to the whole process. We celebrated those imperfections in some of our treatment, to add a more analog, real-time sense to it all. We didn't want to iron them out. We wanted to leave them in. I thought that was what was beautiful about it all. And that's where we got introduced to Andrew.


The idea was, let's actually use a robotic sensor, let's image the world in the way a robot dog might actually view it.

Video: Clip from A-X-L, actor Alex Neustaedter meets the robot dog. Courtesy Territory Studio.


So Andrew, you and Oliver had already shot with Depthkit. What material did you send to Territory when they first saw it?

Andrew: I introduced myself, told them I'd love to share assets and create a sort of behind-the-scenes thing, and that's how we got to start geeking out with each other, learning how they worked, learning how I worked. And from there on out, A-X-L was a pleasure to work on because we got to start over [after MILES] and take a look at everything and think, what can we do better? Where do we need to improve? How can we do this with a better workflow? And I think still to this day, Nils and I love talking about how we can do this better, how the workflow can improve.

Nils, from your perspective, what did you think when you first worked with Depthkit? Had you ever worked with volumetric capture at all before and what was your first impression of working in this way?

Nils: I think the first time I worked with Depthkit was sometime during my studies, actually, but A-X-L was the first project with Territory. David lured me in and said, do you want to work on this? This was my first time using Depthkit on a full-scale production.

How did you and Andrew interface together? What was that team or workflow like?

Nils: Well, I came to it when it was all shot. So we had all the elements delivered; Andrew supplied us with cleaned-up OBJ sequences at that point, which we then converted into Alembic sequences to use in Cinema 4D.

David: Because there were some 3D scenes that were needed up front, for on-set delivery. There's this moment where A-X-L projects this training film onto a building, explaining why the robot dogs exist. It tells this little story around that. So we were doing concept work and design work way before Oliver had even shot the film. Then by the time we got the Depthkit shots lined up, you [Nils] were then with us and ready to go.
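The OBJ-to-Alembic conversion Nils mentions can be done in a number of packages. As a rough, hypothetical sketch only, here is how that handoff might look using Houdini's Python API (Houdini is one environment covered in the Depthkit tutorials linked at the end of this article); the node names, file paths, and frame range are illustrative assumptions, not Territory's actual setup:

    import hou

    # Geometry container plus a File SOP that reads one OBJ per frame.
    # The path is a placeholder; $F4 expands to the padded frame number,
    # so Houdini loads frame_0001.obj, frame_0002.obj, and so on.
    geo = hou.node("/obj").createNode("geo", "depthkit_sequence")
    file_sop = geo.createNode("file", "read_obj_frames")
    file_sop.parm("file").set("$HIP/depthkit_export/frame_$F4.obj")

    # A ROP Alembic Output SOP bakes the per-frame meshes into a single
    # animated .abc cache (topology is allowed to change frame to frame).
    abc_rop = geo.createNode("rop_alembic", "write_alembic")
    abc_rop.setInput(0, file_sop)
    abc_rop.parm("filename").set("$HIP/export/depthkit_shot.abc")

    # Write out a placeholder frame range; the resulting Alembic cache can
    # then be referenced in Cinema 4D (or another DCC) for look development.
    abc_rop.render(frame_range=(1, 240))

Whatever tool performs the bake, the idea is the same: the per-frame scan geometry ends up in a single animated cache that the design team can treat and re-treat without returning to the raw capture.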

Video: Scene breakdown with Depthkit green screen shoot, look development, and final style frames.

I'm curious about the artistic or cinematic references you were thinking about when you were making A-X-L. Where were you getting your inspiration from? How were you thinking about integrating the HUD, the futuristic vision, and also the real capture?

David: I'm trying not to say Short Circuit. I think the reason I am saying Short Circuit is because there were definitely 1980s nostalgia references for me in this film. It has that retro vibe. I talk about this in some other things that I do, this kind of emotion that we attach to certain technologies; we feel a real warmth for certain technologies and we build this kind of real distrust towards others. You know, it's comparing Skynet and the Terminator, built on our suspicions of technology, with something like Short Circuit or the Millennium Falcon, which creates this real kind of charm and warmth.

I talk about this in some other things that I do, this kind of emotion that we attach to certain technologies; we feel a real warmth for certain technologies and we build this kind of real distrust towards others.


I'm so interested in that idea of the emotions we feel for certain technologies. With the work you do at Territory, you shape that emotion through visual storytelling, helping us feel things for whatever particular type of technology you're trying to depict.

David: And in A-X-L there was a bit of an arc in that. To start off with, the robot dog, he's military, so cold, very technical, but it's also actually an infant. It's learning. So there's this kind of simplicity to it all. As it learns from being around humans and develops, the machine learning takes on more layers of depth and understanding and becomes more intricate, complicated, and organic in some ways. So we wanted to play on those themes. We were definitely influenced by that idea of nostalgia and romance for technology and not letting it become too cold.


A-X-L style frame, featuring actor Becky G

With the sensors that Depthkit uses, what you actually realize is it doesn't feel too robotic. It doesn't feel too technical. There's quite an organic-ness to it - especially when you start tracing those 3D forms. And especially if you are imaging humans. Either hair flowing or the organic movements of it all - it can be quite graceful and a beautiful way to read an image. You kind of fill in the gaps, you know, your mind fills in the gaps, because it doesn't see everything. And it's a bit like cinema: with cinema we accept shallow depth of field because it kind of puts this beauty on the image by taking things out. I think Depthkit has a similar sort of sensibility to it sometimes. There's just a different way of seeing everything, and we all know that's what art is in a lot of ways. It's helping us see the world in different ways. And I think that Depthkit gives us another opportunity to do that, to see things from a slightly different angle.

With the sensors that Depthkit uses, what you actually realize is it doesn't feel too robotic. It doesn't feel too technical. There's quite an organic-ness to it - especially when you start tracing those 3D forms. And especially if you are imaging humans. 


How is using Depthkit different from working with avatars and 3D modeled humans?

Nils: Oh, quite different. When you 3D model and build a rig you have full control over it. Depthkit gives you a more limited, but in some ways better, way of working, because it makes things easier. What you shoot is what you get. You have a template that you have to work with, and then you build on that and push it to the maximum. When we were trying to develop the vision for A-X-L, we were trying to figure out what we could do with Depthkit and how we could integrate it into the environment that we had to build. So yeah, it's different, but it's more organic. It has a realism to it that you can't recreate easily.

Did the way people were represented over the course of the film follow an arc, similar to the way the robot learns?

David: Through the film, the visual treatment builds in definition. There are moments when the actual video playback becomes masked through some of the polygons, so you get RGB coming through as well, building up really lovely little transitions in what is typically a three-second burst of a shot. But what's lovely about the data coming through with Depthkit is that you can then interrogate it, and interpolate it, in lots of different ways. Once you've got the Depthkit data, the treatments that you can do with it are kind of unlimited, really. It means that you're not spending your whole time trying to build up the action. Because you've got action in the can - that's done. And it all becomes about the visual interpretation of it. So you can spend more time on that because you're spending less time on creating the action.


A-X-L style frame, featuring actor Alex Neustaedter


Once you've got the Depthkit data, the treatments that you can do with it are kind of unlimited, really. It means that you're not spending your whole time trying to build up the action. Because you've got action in the can - that's done. So you can spend more time on [the visual interpretation of the data] because you're spending less time on creating the action.


What was the process of communication like amongst the whole team? What workflows did you develop?

Andrew: On my end, it started in the editing room. I was able to sit with the editor and director, tweak the angles, and provide a visual reference that they could immediately put into their working edit. Using the Depthkit Visualizer, Oliver would sit with me and tell me to distort the scan a little here, or add this color during a certain story moment. These pre-vis exports would then be incorporated into their preliminary edit, which worked as a great way for Oliver to communicate his vision to Territory.

Nils: We would receive final edits with Andrew's mock-up Depthkit visualization, plus a treatment for the UI elements describing what the narrative was supposed to be.

David: These were all story-based, little tech slugs. So we immediately knew what the important story moment was for Oliver in each shot. We'd then receive an overall edit, which was a few shots of live action, then cut to robot vision, which would've been a Depthkit plate, and then back to live action, and then maybe back into a few Depthkit plates. And so we understood the overall story for that sequence and how each individual moment would tie into that. So that would be our reference edit, with Andrew's pieces in there. We would take the data for each shot and then work those up, treat them, and then slot them back in.

Andrew: Regarding the robot vision as a way to drive the story forward, A-X-L is broken when we first meet him, so his vision starts off very basic, very static, very blocky, and it evolves over time. When I was in the editing room, Oliver would pay really close attention to this, and wanted to provide references that illustrated this evolution. As the story progresses, A-X-L gets repaired, learns more about the human environment, and in turn his vision evolves and becomes more detailed. This storytelling arc was baked into his references.

So you're using the development of the technology to communicate the evolving nature of the relationship between the robotic dog and the boy he meets. Nils, how did you feel the three-dimensionality of the Depthkit footage played into the aesthetic? Did you leverage the different views from the captures?

Nils: There are a lot of shots where we’re suggesting that the dog can move his vision physically around a person, so that’s one of the most sci-fi elements about it. There’s a scene around a campfire and in the live action shots, they’re in a very static location, but you cut to the robot dog’s vision and it’s basically circling around them.

Andrew: Those are actually my favorite shots. I remember seeing those shots for the first time, when A-X-L starts using his 3D sense, and you’re moving the camera around in digital space.


That’s what separates the robot vision from the human vision, right?

Nils: Yeah, and that’s combined with the whole UI element [of the HUD], projecting information and finding your target.

Andrew, what was the process like shooting with the actors? How did they react to shooting with Depthkit?

Andrew: It was a lot of fun, because you can walk up to the actors and say, look, the Depthkit and camera are the robot's eyes. They immediately understand this. Next, the most important part of the learning process is teaching them the dead zones. I'm very attuned to how to obtain the most detailed data possible, which, for the most part, means keeping the talent as close as possible without cropping the important information. Walking through the boundaries and limits with the actors and telling them how close they can get is key. We also place markers on set to show the dead zones, because once they're acting, we need to make it as easy as possible so they don't have to think about all of these limitations. I think it's really important, on the operator's part, to make sure you're getting the best data possible. Overall, it is a very fun process; actors immediately get it, and they love interacting with the camera and Depthkit.


Depthkit green screen shoot for A-X-L, with actor Alex Neustaedter, shot by Andrew Gant


Were you able to show them live previews during the shooting process?

Andrew: Absolutely. So whenever you're on set with Depthkit, most people who haven't worked with it before are pretty skeptical and are thinking, "is this thing even going to work?" The producers on A-X-L, for the most part, had to take Oliver's word for continuing forward with Depthkit and volumetric capture for the vision. Their natural choice was to just shoot everything on green screen and do the vision in post. They just hadn't worked with volumetric capture before, so they had to trust us and work off the example of what we did on MILES together. I remember one of the producers coming up to me as I was calibrating the system on our first shoot day. He said, "Hey, just so you know, this is all on you. I hope this works." Once the Depthkit was set up, I brought everybody over to my monitor to show them how everything works, which is my favorite part of the day. Watching people's reactions the minute you show them the live capture view, while spinning the camera around the capture, is always a pure joy. After this, everybody is immediately on board, and it automatically motivates the actors.


Left: Depthkit green screen shoot featuring actors Alex Neustaedter and Becky G. Right: Depthkit visualization (legacy tool)



A-X-L final style frame 


David and Nils, from Territory’s perspective, how do you see volumetric capture as a part of your process? Are you continuing to get more projects with it? What do you see as the future of this medium? 

David: It is the future in terms of capture. It has to be. Everything that we do is now three-dimensional delivery, whether it's VFX for a film, VFX for a Netflix series, VR, MR, or some sort of live experience. Everything is about volumetric capture. I spend my life now going around to agencies and brands saying, just forget about the delivery mechanism, you have to capture the asset in the right way. Think about having a raw image from a camera that gives you the latitude for exposure. Volumetric capture is exactly the same for 3D. You have to have the 3D data from the source, and that needs to be captured there and then. And when you have that, you've got this scale of opportunity in what you can do with that media. If you really believe in VR and AR, and all of these headset technologies, computer games, immersive theatre, immersive storytelling, it all has to go this way. And there's still lots of work to be done with it. Topology, getting that right, getting the organic forms. Getting the resolution higher, more depth. How do you get this process production-ready when it comes to an array of sensors? Making it work for camera crews so they don't feel threatened by it. It's a whole project. But it's here to stay. I don't think people realize how much of an actual requirement there is for this technology. Right now it's just a missed opportunity to do more. But from the days of MILES I've been an utter convert.

Thanks, David, Nils, and Andrew for a wonderful conversation!

Territory’s team envisions and renders the future for today’s audiences. Depthkit is a tool for building multi-dimensional storytelling with the power of real human performances. We’re excited that Territory Studio’s work has leveraged this technology as a key to our cinematic dreams and emotional narratives. They not only tell stories about future technologies, they live by this value as well, experimenting and pushing the boundaries with the tools they use. We love being a part of their journey!

Want to try our Depthkit for VFX workflow yourself? 
Check out these tutorials, with a working project file and demo assets, to get started right away!


Learn how to bring Depthkit assets into Houdini

Depthkit + Houdini

Learn how to use Depthkit in your After Effects projects

Depthkit + After Effects + Plexus
