Frequently Asked Questions


Q: Does DepthKit support multi-camera shoots like the footage in Blackout?
A: Not yet. Our strategy is to develop multi-camera support concurrently as we build Blackout, and we won't release it before it's ready.

Q: Does DepthKit currently include the filtering techniques used in Giant?
A: The public version does not. We're still working on those techniques, but we can selectively offer them as a service.

Q: When will you support [new depth sensor]?
A: We're constantly testing and working with new sensors. Currently we're excited about the Orbbec, ZED, Intel RealSense, PMD, and Tango, among others. We're not currently making any specific promises about when we'll add support for new sensors. If you've encountered a sensor that you're excited about (or make one), please get in touch!


Q: Does DepthKit work with X software product or Y workflow?
A: We're going to continue supporting common export formats (.OBJ, .ABC) and we'll be publishing tutorials as we go, but we are focusing on a particular workflow first: DepthKit > Unity for VR/AR. We encourage you to test and share workflows as you experiment with DepthKit footage in your favorite environments. If you are an expert in a particular toolset, please get in touch with suggestions for favorite import formats or other conventions.

Q: What framerate does DepthKit shoot?
A: The Kinect captures at a variable framerate of around 30 fps, and DepthKit down-samples the depth stream to match the framerate of the footage you provide. We recommend shooting at 23.976 fps (24p).
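To illustrate what this kind of down-sampling involves, here is a minimal nearest-frame sketch in Python. This is an assumption about the general technique, not DepthKit's actual implementation, and the `downsample` function name is hypothetical:

```python
# Illustrative sketch (not DepthKit's actual code): resample a variable
# ~30 fps capture to a fixed 23.976 fps target by picking, for each
# output timestamp, the index of the nearest captured frame.

def downsample(capture_times, target_fps=23.976):
    """Return, for each output frame, the index of the nearest capture."""
    if not capture_times:
        return []
    duration = capture_times[-1] - capture_times[0]
    n_out = int(duration * target_fps) + 1
    indices = []
    for k in range(n_out):
        t = capture_times[0] + k / target_fps  # ideal output timestamp
        nearest = min(range(len(capture_times)),
                      key=lambda i: abs(capture_times[i] - t))
        indices.append(nearest)
    return indices

# One second of evenly spaced ~30 fps capture timestamps:
times = [i / 30.0 for i in range(31)]
print(downsample(times))  # 24 indices into the captured frames
```

Because the capture rate is higher than the target, some captured frames are simply skipped; a real pipeline would also have to handle timestamp jitter from the sensor.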

Q: What is the range of DepthKit? 
A: Currently we're focused on the Kinect V2, which has a range of between 0.5m and 4.5m (about 1'8"-14'9").

Q: What is the Kinect V2 depth resolution?
A: The new Kinect has a depth resolution of 512 x 424 pixels with a FoV of 70.6° x 60°, resulting in an average of around 7 x 7 pixels per degree. Because it uses time-of-flight as the core mechanism for depth capture, each pixel in the depth image contains a real measured depth value. See Roland Smeenk's thorough breakdown for more information.
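The pixels-per-degree figure is just resolution divided by field of view on each axis; a quick check of the numbers quoted above:

```python
# Angular resolution = pixels / degrees of FoV, per axis.
depth_res = (512, 424)    # Kinect V2 depth resolution (px)
depth_fov = (70.6, 60.0)  # Kinect V2 depth FoV (degrees)

ppd = tuple(res / fov for res, fov in zip(depth_res, depth_fov))
print(f"{ppd[0]:.2f} x {ppd[1]:.2f} pixels per degree")
# → 7.25 x 7.07 pixels per degree
```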

Q: What is the optimal range for shooting with DepthKit?
A: The quality of the depth data falls off as you move away from the sensor, so the general recommendation is that you fill your depth frame with your subject whenever possible.

Q: Do I need to be plugged in to the wall to run DepthKit? 
A: With the Kinect V2 - which is the best sensor that we're supporting right now - the answer is yes. However, legacy versions of DepthKit support the Asus/PrimeSense sensors, which run off USB power. Alternatively, some users have built mobile rigs with a battery and a DC-to-AC inverter that work great for running a Kinect.

Q: Do I need an external camera to shoot with DepthKit if I only want depth data sequences? 
A: No, you do not. For more information, contact us or ask a question about this in the DepthKit Beta Slack channel.

Q: What is the field of view (FoV) Kinect V2?
A: The depth camera on the Kinect V2 has a FoV of 70.6° x 60°. The color camera has a resolution of 1920 x 1080 px and a FoV of 84.1° x 53.8°, resulting in an average of about 22 x 20 pixels per degree. There is a great interactive Kinect 1 vs Kinect V2 FoV comparison tool here.
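The color camera's pixels-per-degree figure follows from the same arithmetic as the depth camera's (resolution divided by FoV per axis):

```python
# The color camera packs more pixels into each degree of view than the
# depth camera, despite its wider horizontal FoV.
color_res = (1920, 1080)  # Kinect V2 color resolution (px)
color_fov = (84.1, 53.8)  # Kinect V2 color FoV (degrees)

ppd = tuple(res / fov for res, fov in zip(color_res, color_fov))
print(f"{ppd[0]:.1f} x {ppd[1]:.1f} pixels per degree")
# → 22.8 x 20.1 pixels per degree
```

This roughly 3x gap in angular resolution between the color and depth cameras is one reason depth edges look coarse against the color image.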

Have a question about the beta program?