Creator Profile: Keijiro Takahashi

Making Waves with Depthkit + Unity VFX Graph

April 15, 2019

Pioneering creative coder and open-source trailblazer Keijiro Takahashi pushes volumetric filmmaking into new territory with Depthkit and Unity VFX Graph.

Read the Japanese language version here.

A series of visual experiments created with Depthkit and Unity VFX Graph featuring dancer GENDAI, courtesy of Keijiro Takahashi

We love Keijiro’s inspiring work and his ethos of sharing with the community, and if you’re reading this, you probably do, too. When we found out he was experimenting with Depthkit and Unity’s new Visual Effect Graph, we were thrilled and wanted to dig deeper into his process. How did he combine these tools? What did this enable for him creatively? Why did he want to use and manipulate volumetric footage?


Unity's VFX Graph is a game-changing tool. We believe it has special relevance for our Volumetric Filmmaking community as a powerful way to expand the aesthetic range of 3D-captured footage. Making our creative tools accessible to a wide range of skill sets and expertise is our passion. We want to empower you to build and inhabit all the worlds you can dream up - transitioning from photoreal capture to out-of-this-world effects seamlessly and dynamically.

So when Keijiro dropped his experiments on Twitter, we knew there was a conversation to be had and shared. Keijiro is always pioneering, pushing VFX and real time graphics processing to the next level. As a new Depthkit user, his experiments brought a fresh and dynamic perspective to the possibilities in 3D capture.


Of course, Keijiro is at the top of his game as a creative coder. He makes this stuff look easy. He makes waves, literally and figuratively.

Keijiro working in Unity's VFX Graph. Image Credit: Keijiro Takahashi


If you think you need to be a code wizard like Keijiro to create these looks, think again. While Keijiro publishes his work open source on his GitHub, Scatter is building the next generation of our Unity SDK to leverage the Visual Effect Graph so you can achieve Keijiro-like effects with no coding required!

Whether you’re a coder, filmmaker, artist, or VFX professional, Keijiro’s work is probably on your radar of inspiring industry leaders. He certainly inspires us, and we're excited to ask him a few questions to find out what inspired him to work volumetrically, and with Depthkit + Unity VFX Graph specifically.

Interview

Conducted in Japanese and translated into English by Matthew Fargo, an artist and creative coder.


Let’s start with a little introduction. You have a very broad and diverse creative practice, so how would you describe what you do?

Keijiro Takahashi: Well, my name is Takahashi Keijirō. I learned programming on an eight-bit computer in elementary school, and I’ve been at it ever since. After college I spent ten years as an engineer at a video game firm. Everything I know about creative tech basically breaks down into stuff I taught myself before starting work, and stuff I learned on the job afterwards.

After I quit and went freelance, I wanted to try all the stuff I couldn’t do as a working stiff—collaborating with motion graphic artists and musicians, and leveraging all the technical knowledge I’d gleaned from the gaming industry in new ways. I work at Unity now, but I continue to pursue this stuff independently.

Courtesy of Keijiro Takahashi, featuring GENDAI

Yeah, it seems like you’ve always got something new on the stove: one day it’ll be an FFmpeg capture library, the next day it’s a plugin for NDI, and then you’re working on one for HAP. Is there a particular method to how you choose your projects?

KT: Generally I just hack on things that I need for my own work. The NDI and HAP plugins were built for specific projects I’d done in the past. Unity was originally conceived of as a game engine, so there are a lot of missing features when you try to use it for video processing. A lot of the time I’m just trying to fill those gaps, both for myself and for anybody else who might be trying to do video with it—which is why I make a point of sharing my work.

So it’s less that new tools inspire new work, and more that your work inspires new tools. Is that how you ended up doing the pieces you did for BRDG/VRDG? Tell us a little about your process.

KT: This came from doing a volumetric shoot for the music video for this Nishikawa Takanori song, “Unbroken.” At the time, I hadn’t even heard of Depthkit, and we just kind of cobbled together our own workflow. Anyway, we got through the shoot just fine, but the experience left me wondering if there wasn’t a better, cleaner way to go about doing volumetric video.

Eventually somebody put me onto Depthkit, and I got the idea of using it in tandem with Unity’s Visual Effect Graph. This video was one of those early experiments.

Volumetric video has a couple big advantages over motion capture. First, there’s the sheer ease of use; and then there’s the whole range of expression it grants you.

That makes sense, because your earlier work, such as [this glitch can dance], seems to use traditional motion capture instead of volumetric capture. Can you talk a little more about what made you switch to a Kinect/Depthkit setup?

KT: Volumetric video has a couple big advantages over motion capture. First, there’s the sheer ease of use; and then there’s the whole range of expression it grants you. As for ease of use: I use motion capture a lot, and I always feel like it’s a tremendous hassle. It requires a lot of equipment and people—and booking a studio—just thinking about it makes your wallet hurt.

But with volumetric video you can just roll up with a Kinect and Depthkit and start shooting with really minimal amounts of equipment. All these experimental pieces, from shooting to completion, took only a few days. When you’re doing creative work, I can’t stress how important it is for everything to be lightweight like this—to let you have an idea and strike while the iron is hot.

As far as the range of expression it grants: this is really important when you’re collaborating with other artists. In [the tweet below], for example, the dancer’s costume was a vital part of what they were expressing. Capturing all the things outside of human bodies with traditional motion capture would require a humongous effort; but with volumetric video, you get all the hairstyles, clothing, and accessories of your dancers and musicians.

That dancer’s jacket is practically the protagonist. I noticed the title of the piece mentions VFX Graph — any particular reason you’re using a visual programming tool like VFX Graph, here, when you seem like you could just dash off shaders on a whim?

KT: Mostly I was just interested in learning VFX Graph. My day job is as an evangelist at Unity, so I’m sort of obligated to know these things. And I figured that volumetric data made for the perfect learning material, so that’s what I used to test out VFX Graph’s features.

The other reason is just the sheer usability of VFX Graph, as a framework. This became clear once I started tooling around with it, but VFX Graph is just really visually simple and easy to use. I can just throw all my ideas at it and see what sticks — it’s a really effective tool for taking vague creative impulses and making them concrete, quick. It definitely has its uses, even for seasoned programmers, and I’d take it a step further and say that a little knowledge of programming will make you a lot better at using it. For what it’s worth, I definitely see myself using it more and more in my own work, moving forward.
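To give a sense of how little glue code that workflow can involve, here is a minimal, hypothetical C# sketch of feeding Depthkit-style color and depth textures into exposed VFX Graph properties at runtime. This is not Keijiro’s published code, and the property names ColorMap, DepthMap, and Throttle are placeholders that would need to match whatever parameters your own graph exposes.

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Minimal sketch: pushes a color texture, a depth texture, and a throttle value
// into exposed properties on a Visual Effect Graph instance every frame.
// The property names ("ColorMap", "DepthMap", "Throttle") are placeholders and
// must match the exposed parameters defined in your own graph.
public class VolumetricVfxBinder : MonoBehaviour
{
    [SerializeField] VisualEffect _vfx;   // the VFX Graph instance in the scene
    [SerializeField] Texture _colorMap;   // e.g. the color half of a volumetric clip
    [SerializeField] Texture _depthMap;   // e.g. the encoded depth half
    [Range(0, 1)] [SerializeField] float _throttle = 1;

    void Update()
    {
        if (_vfx == null) return;

        // Only set a property if the graph actually exposes it, so a mismatch
        // between script and graph fails quietly instead of throwing errors.
        if (_colorMap != null && _vfx.HasTexture("ColorMap"))
            _vfx.SetTexture("ColorMap", _colorMap);

        if (_depthMap != null && _vfx.HasTexture("DepthMap"))
            _vfx.SetTexture("DepthMap", _depthMap);

        if (_vfx.HasFloat("Throttle"))
            _vfx.SetFloat("Throttle", _throttle);
    }
}
```

The heavy lifting of turning depth samples into particle positions and colors still happens inside the graph itself (or in the tools Keijiro publishes on his GitHub); the point of the sketch is simply that driving an effect from script comes down to a handful of SetTexture and SetFloat calls.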

So essentially you took two tools you were unfamiliar with, learned how to use them both, then figured out how to integrate them, and made some amazing creative work in the process. Do you find that using unfamiliar tools is actually conducive to creativity?

KT: Absolutely. Unfamiliar tools are actually better at creating the serendipity needed for fresh ideas, and the joy of simply getting my hands around the neck of a new medium always stimulates the creative in me. At work as well as in my side projects, I always make it a point to use new tools and technologies whenever possible.

Depthkit is really clean and intuitive — if you can figure out how to use a camera, you can probably figure out how to use Depthkit.

It looks like you were originally using the RealSense SDK. What was the reason for switching over to Depthkit?

KT: I still use RealSense for prototyping. Kinect and Depthkit are indispensable if you’re trying to create actual high-quality work, but Kinects are large, clunky objects, and I can’t see myself forfeiting the desk space. Meanwhile the RealSense is tiny and USB-powered, so it’s perfect for doing little experiments with. Especially in the prototyping phase, it’s nice to have a real-time stream of data rather than a recording, so I rely on the RealSense SDK for this.

So for now I tend to have these two different workflows going on, but the upcoming Azure Kinect should solve the problem of desk space, and I’ve heard rumors that you guys are working on streaming functionality for Depthkit, so in the future hopefully I can do everything in one place.


Yeah, that Azure Kinect is a good piece of news. With all the hardware, frameworks, and libraries that you use on a daily basis, what’s your overall impression of Depthkit?

KT: Well, the first thing that struck me about Depthkit was simply how refined it was, as a tool. Usually, frameworks built for highly-specialized technologies like these are pretty hairy and opaque, but Depthkit is really clean and intuitive — if you can figure out how to use a camera, you can probably figure out how to use Depthkit. This alone makes a huge difference in my workload. Before Depthkit, whenever we’d have shoots, I’d have to find someone who was a specialist in whatever tool we were using and have them embedded on site, and now none of that is necessary. It’s really simplified our workflow: the film crew goes and does their shoot, and then hands the data to a VFX artist.

The other thing that’s been really helpful is the way you guys mapped out the easiest way to get my hands on a Kinect. When Microsoft stopped producing the Kinect, they pulled the plug on the SDK and its documentation, as well, so it was hard to find reliable information on where to get one and how to set it up. The Depthkit documentation kind of lays it all out for you, so you can get up and running without any hassles. Depthkit goes out of its way to compensate for all of the convoluted messiness that is the Kinect.

Is there anything you’d like the Depthkit team to develop or expand on — anything that might help tighten up your workflow?

KT: More than anything, I would love to have the ability to live-stream Depthkit data in real-time. It would be huge to be able to take the VFX I’ve built for volumetric video and use them in live performances.

Also, in live settings, the operations area is often far away from the stage, so it would be amazing to get data in a format that was ready for SDI or Video-over-IP.

Keijiro Takahashi's GitHub

The videos you’ve put out are all really stunning, but I would be remiss not to mention how much people love using your libraries on GitHub, as well. I imagine this overlaps with your career as an evangelist, but can you talk a little about your relationship with open source and code sharing?

KT: Frankly, I don’t have particularly strong feelings about it. I grew up with the attitude that open-sourcing your code was simply what one does, and so I release everything that I can legally get away with. With me, it’s really just making my source code publicly available — I’ve never actually had to consider any of the difficulties of developing an open-source library with other people in a public forum, so I’m probably not the best person to speak to this.

One thing that is close to my heart, though — and this may be slightly tangential to the idea of “open source” — but when I was doing a lot of work with musicians, I was really inspired by the idea of the “remix.” Just the idea that the act of interpreting someone else’s work is, itself, a new act of creation. On some level, I think that I release all my source code with the hope that something akin to a remix might happen with it. I like seeing my work bleed into that of other people’s, sometimes diluted, sometimes amplified.

The better a technology can capture human motion, the greater its faculty for expression, and the larger of an emotional response it can evoke.

Any advice for young bloods just getting started with volumetric video?

KT: It has become incredibly easy to make volumetric video: all you need is a Kinect or a RealSense camera, Depthkit, and VFX Graph. And this ease of use is opening up all sorts of new ways to use it. So basically, if you find yourself thinking “Hmm, volumetric video might look cool here,” then you should go for it. It’s a no-brainer. Although, to be entirely frank, it might behoove you to wait until that Azure Kinect comes out!


Thanks, Keijiro and Matt, for a riveting conversation!

At Scatter, we are dedicated to ensuring that Depthkit will always work in tandem with the latest tools you love, and we know Unity is an important one. We want to empower you to be able to capture and create with the full aesthetic spectrum, combining volumetric footage and the latest and greatest tools for interactive media.

Stay tuned for more news from us soon!

- Scatter