Depthkit FAQ

General

Q: What platforms does Depthkit run on?
A: For now, Depthkit capture runs on Windows only, while processing footage works on both Windows and macOS. It is possible to capture on a Mac by running Windows via Boot Camp; virtualization solutions will not work.

Q: What sensors does Depthkit support?
A: Depthkit supports the Kinect v2 on Windows with the addition of a Kinect V2 Windows Adapter.

Q: What framerate does Depthkit shoot?
A: We recommend filming at or around 30p. The Kinect captures at a variable framerate of roughly 30fps, and Depthkit down-samples to match the framerate of the footage provided.
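
If you need to reason about that conform step yourself, the sketch below shows one way to think about it: for each tick of a fixed 30p timeline, pick the captured frame whose timestamp is nearest. This is an illustrative sketch only, not Depthkit's actual implementation; the function and timestamps are hypothetical.

```python
# Illustrative sketch (not Depthkit's actual code): conform variable-rate
# capture to a fixed 30p timeline by picking, for each output tick, the
# captured frame whose timestamp is nearest.
import bisect

def resample_to_fixed_rate(frame_times, duration, fps=30.0):
    """Map each output frame index to the nearest captured frame index.

    frame_times: sorted capture timestamps in seconds (variable rate)
    duration:    clip length in seconds
    fps:         target fixed framerate
    """
    mapping = []
    for i in range(int(duration * fps)):
        t = i / fps
        j = bisect.bisect_left(frame_times, t)
        # Compare the neighbors on either side of t and keep the closer one.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(frame_times)]
        mapping.append(min(candidates, key=lambda k: abs(frame_times[k] - t)))
    return mapping

# Hypothetical ~30fps capture timestamps with a little jitter:
times = [0.000, 0.031, 0.067, 0.099, 0.134, 0.166]
print(resample_to_fixed_rate(times, duration=0.2))  # [0, 1, 2, 3, 4, 5]
```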

Q: Do I have to use a Kinect v2 for Depthkit?
A: For now, yes. Microsoft has discontinued the Kinect V2, so prices are slowly rising and units are increasingly hard to find. It's our goal to always give people an easy-to-find option for a depth sensor, and as the Kinect goes away we're looking for the "next" sensor we can recommend. For now, a Kinect V2 plus Windows adapter is required, but we look forward to offering new sensors in the near future.

Q: Do I need an external camera to shoot with Depthkit? 
A: No, you do not. You can use our simple capture tool, Depthkit. For more information, contact us or ask a question in the Depthkit Beta Slack channel.

Q: Does Depthkit support multi-camera (outside-in) capture?
A: Not yet. We're working on a tool internally to support it that we can't wait to share with you. If you are interested in getting your hands on multi-camera capture as soon as it's available, you can sign up to be notified here.

Q: How long will Depthkit be in Beta? Should I wait until the "full" release comes out? 
A: We'll always have a beta version of Depthkit available for people to opt in to and try new features. However, we're hoping to release an initial public version by mid-to-late 2018 with a stable set of core features that users can leverage on their projects without the fear of a "beta" label.

Beta Program

Q: What happens after I apply for the beta?
A: Once you apply, we'll review your application and reach out to you if we think you're a good candidate to use Depthkit. If accepted, you will receive a link to register for a Depthkit license. Once you have registered, you'll be able to log in on the site using the "Beta Login" link in the bottom right of the site. Once you log in, you'll be redirected to the Beta Homepage, where you can access the software, tutorials, and more. The Beta Homepage can also be accessed through the "Beta Home" link in the website's footer.

Q: What do I get as a member of the Depthkit Beta program?
A: You get access to the latest versions of the tool and information about our workflows as we continue to develop and refine them. You also get access to support from the Depthkit team, all of whom are ready and willing to help you figure out any issues you may be having. This includes membership in an invite-only Slack channel for beta users where you are free to discuss the tool or make any recommendations/suggestions you may have.

Capabilities

Q: What is the range of Depthkit? 
A: Currently we're focused on the Kinect V2, which has a range of roughly 0.5m to 4.5m (about 1'8" to 14'9").
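
The metric/imperial figures are just a unit conversion, easy to double-check:

```python
# Double-check the meters-to-feet/inches conversion for the range limits.
def m_to_ft_in(m):
    total_inches = m / 0.0254          # 1 inch = 0.0254 m
    ft, inches = divmod(total_inches, 12)
    return int(ft), round(inches)

print(m_to_ft_in(0.5))  # (1, 8)   -> about 1'8"
print(m_to_ft_in(4.5))  # (14, 9)  -> about 14'9"
```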

Q: What is the Kinect V2 depth resolution?
A: The Kinect V2 has a depth resolution of 512 x 424 pixels with a FoV of 70.6° x 60°, resulting in an average of around 7 x 7 pixels per degree. Because it uses time-of-flight as the core mechanism for depth capture, each pixel in the depth image contains a real measured depth value. See Roland Smeenk's thorough breakdown for more information.
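
The pixels-per-degree figure is simply the resolution divided by the field of view on each axis; a quick back-of-the-envelope check:

```python
# Angular resolution of the depth camera: resolution / FoV per axis.
depth_res = (512, 424)    # pixels (width, height)
depth_fov = (70.6, 60.0)  # degrees (horizontal, vertical)

ppd = tuple(r / f for r, f in zip(depth_res, depth_fov))
print("depth px/deg: %.1f x %.1f" % ppd)  # depth px/deg: 7.3 x 7.1
```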

Q: What software can I use to work with Depthkit assets?
A: The current version of Depthkit is geared towards working with realtime game engines like Unity, but we also support export for use in tools like Maya, Cinema 4D, and Houdini. Depthkit can also export an image sequence that can be used in common non-linear editing and compositing tools like Adobe Premiere, Final Cut Pro, and After Effects.

Q: When will you support new depth sensors?
A: We're constantly testing and working with new sensors. Currently we're excited about Orbbec, ZED, Intel RealSense, and PMD, among others. We're not currently making any specific promises about when we'll add support for new sensors. If you've encountered a sensor that you're excited about (or make one yourself), please get in touch!

Q: Does Depthkit work with X software product or Y workflow?
A: We're going to continue supporting common export formats (.OBJ, .ABC) and we'll be publishing tutorials as we go, but we are focusing on a particular workflow first: Depthkit > Unity for VR/AR. We encourage you to test and share workflows as you experiment with Depthkit footage in your favorite environments.
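
If you want to sanity-check exported geometry outside of a DCC tool, .OBJ is a plain-text format that is easy to inspect; below is a minimal sketch (the filename capture.obj is hypothetical):

```python
# Minimal .OBJ inspection: collect the vertex lines from an exported mesh.
def read_obj_vertices(path):
    vertices = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):  # geometric vertex: "v x y z"
                _, x, y, z = line.split()[:4]
                vertices.append((float(x), float(y), float(z)))
    return vertices

verts = read_obj_vertices("capture.obj")  # hypothetical export
print(len(verts), "vertices")
```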

Q: What is the optimal range for shooting with Depthkit?
A: The quality of the depth data falls off as you move away from the sensor, so the general recommendation is that you fill your depth frame with your subject whenever possible.
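
To judge how close your subject can stand while still fitting in frame, you can estimate the physical extent of the depth frame at a given distance from the sensor's FoV; a quick sketch using the standard pinhole relation and the Kinect V2 depth FoV:

```python
# Physical size of the depth frame at a distance, from FoV geometry:
# extent = 2 * distance * tan(FoV / 2) per axis.
import math

def frame_extent(distance_m, fov_h=70.6, fov_v=60.0):
    w = 2 * distance_m * math.tan(math.radians(fov_h / 2))
    h = 2 * distance_m * math.tan(math.radians(fov_v / 2))
    return w, h

# At 1.5m the frame spans roughly 2.1m x 1.7m, a quick way to gauge
# how much of the frame your subject will fill at that distance.
print("%.1fm x %.1fm" % frame_extent(1.5))  # 2.1m x 1.7m
```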

Q: Do I need to be plugged in to the wall to run Depthkit? 
A: With the Kinect V2, the best sensor we currently support, the answer is yes. However, legacy versions of Depthkit supported the Asus/PrimeSense sensors, which run off USB power. Alternatively, some users have built mobile rigs with a battery and an AC/DC inverter that work great for running a Kinect.

Q: What are the specs of the Kinect V2?
A: The depth camera on the Kinect V2 has an FoV of 70.6° x 60°, and the color camera has a resolution of 1920 x 1080px with an FoV of 84.1° x 53.8°, resulting in an average of about 22 x 20 pixels per degree. There is a great interactive Kinect 1 vs Kinect V2 FoV comparison tool here.
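
Applying the same resolution-over-FoV check as in the depth resolution answer above:

```python
# Angular resolution of the color camera: resolution / FoV per axis.
color_res = (1920, 1080)  # pixels
color_fov = (84.1, 53.8)  # degrees

ppd = tuple(r / f for r, f in zip(color_res, color_fov))
print("color px/deg: %.1f x %.1f" % ppd)  # color px/deg: 22.8 x 20.1
```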


For all other inquiries or to report an issue with the site, please contact us at support@depthkit.tv.