DepthKit Calibration

After the color camera and depth camera are calibrated, you need to calibrate the relationship between the two of them in order to properly project the color information from the video camera onto the depth. To do this, we need to determine the physical position of the two cameras relative to one another. This allows us to combine the two data streams into one 3D scene.
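Conceptually, the output of this calibration is a rigid transform (a rotation R and translation t) between the two cameras, plus the color camera's intrinsics, which together let us look up a color pixel for every depth sample. A minimal numpy sketch of that projection, with purely illustrative values for R, t, and K (not real calibration output):

```python
import numpy as np

# Hypothetical extrinsics: rotation R and translation t (meters) mapping
# depth-camera coordinates into color-camera coordinates. Illustrative only.
R = np.eye(3)                      # cameras assumed parallel here
t = np.array([0.05, 0.0, 0.0])     # color camera 5 cm to the side

# Hypothetical color-camera intrinsics (focal lengths and principal point, in pixels).
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])

def project_depth_point(p_depth):
    """Map a 3D point from depth-camera space to a color-image pixel."""
    p_color = R @ p_depth + t          # rigid transform between the cameras
    u, v, w = K @ p_color              # perspective projection
    return np.array([u / w, v / w])    # normalize by depth

# A point 2 m straight ahead of the depth sensor:
px = project_depth_point(np.array([0.0, 0.0, 2.0]))
```

Every depth sample that projects inside the color image bounds can be textured this way; samples that fall outside are the 'wasted pixels' discussed below.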

Adjust Kinect Angle 

The DepthKit Calibration process captures the position & rotation of the two cameras. Before calibrating, check that the fields of view & alignment between the two cameras match.

Compare the live preview on your camera with the Kinect image in the capture app to make sure that the video camera and the Kinect are well aligned. To do this, we recommend picking an object in the room and aligning it to the center of the live-preview color image, then adjusting the Kinect so that the object appears in its preview in a location that matches the color camera's preview.

To adjust the Kinect with our mount, loosen the adjustment screw on the side of the mount and tilt the Kinect until the center point of both the video camera and Kinect views match. When they're matched, tighten the screw down and make sure that everything is secure. If they don't match perfectly, calibration will still work, but if either camera's field of view is significantly wider than the other's, you may have 'wasted pixels' in the areas not shared between both views, which won't be visible in DepthKit.

If you are using a custom mount, follow the above instructions to align the depth and color views to the same field of view, and be sure to "lock in" the camera alignment in whatever way your mount allows. Once the cameras are aligned, do not adjust them again at any point during the shoot, or the calibration will be off.

Calibrate Correspondence

Now that the camera views are matched, we'll take corresponding images of the checkerboard from the Depth & Infrared views on the Kinect and from the video camera to determine where the two cameras sit relative to each other. It is important to make sure that no screws are loose and nothing is wobbly in the mount at this stage. If the relationship between the camera and depth sensor changes after we create this calibration, you'll have to repeat this step.
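Under the hood, this step amounts to estimating the rigid transform between the two cameras from matched checkerboard observations. DepthKit's actual solver differs, but the core idea can be sketched with the Kabsch algorithm on corresponding 3D points:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Kabsch/Procrustes: find rotation R and translation t minimizing
    ||R @ src_i + t - dst_i|| over corresponding 3D points."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Synthetic check: recover a known 30-degree rotation and small offset.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.05, 0.10, 0.02])           # meters, illustrative
rng = np.random.default_rng(0)
corners = rng.uniform(-0.2, 0.2, size=(12, 3))  # fake checkerboard corners
R_est, t_est = estimate_rigid_transform(corners, corners @ R_true.T + t_true)
```

This is why a wobbly mount ruins the calibration: the solver assumes a single fixed transform, so any shift between capture sets makes the correspondences inconsistent.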

Navigate to the second tab in the Capture application, labeled Calibrate Correspondence. 

This is an example of a checkerboard that's too close to the Kinect such that the Kinect is over-illuminating the surface. In the depth image (left) the black areas represent regions without valid depth samples. Meanwhile, in the IR image (right) we don't see the red grid pattern that's needed for calibration. In this case either rotate the checkerboard away from the Kinect to reduce glare or move it back until the entire surface is all white/grey at left and the red checkerboard dots appear at right.

First, place the checkerboard 2.5'-3' in front of the camera and move it backwards until there are no black spots in the top-left window. Any pixels that appear black are not being sensed; typically this is caused by the board being too close to the Kinect, making the IR illumination too bright.

Without moving the camera or the checkerboard, capture one image each of the Depth and IR views (left and middle boxes), then a short video clip with the video camera to place in the right-hand slot:

  1. Hover your mouse over the left-hand box, double check that the surface of the checkerboard has no black spots on it and sample it by clicking inside the box.
  2. Without moving anything, hover over the middle box and make sure that you can see a grid of small red dots. If you can, click the box to sample the IR camera. 
  3. Without moving anything, check your focus on the video camera to ensure that the checkerboard is in focus and take a short (1-3 sec) clip with the video camera.

Move the checkerboard back 9", rotate it off axis a little bit, move it slightly off-center and repeat the steps above. Repeat these steps until you've filled all four rows. The goal is to sample a variety of areas around the frame with checkerboards at varying angles and varying depth.


You'll next need to ingest the color video you just recorded into the application itself.


Once you've captured all four sets, download the video clips from the camera and drop them into a new folder in the working directory you set before. One at a time, drag the video files into the empty rectangular tiles in the application next to the corresponding depth and IR thumbnails taken from the device, and visually confirm that you haven't mixed them up. Note that if you don't see a clearly defined checkerboard in the viewer on the right, your ink may be too glossy; reprint the checkerboard with a less glossy ink. This is a known issue.

With four sets of three images complete, click 'Generate RGB/Depth Correspondence'. If you get an error, it means the algorithm was unable to find an exact fit for the selected checkerboard pairs. Try clicking 'Ignore' to exclude one of the image sets; 'bad apples' may be throwing off the model. Typically it's advisable to start by discarding the furthest or most obliquely angled set, then click 'Regenerate'.
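The 'bad apple' heuristic can be sketched as ranking the sets by reprojection error and discarding the worst one first (the error values below are made up for illustration; DepthKit doesn't expose per-set errors this way):

```python
import numpy as np

def rank_bad_apples(per_set_errors):
    """Given per-corner reprojection errors (in pixels) for each checkerboard
    set, return set indices ordered worst-first, i.e. the first index is the
    best candidate to 'Ignore' before regenerating."""
    rms = [float(np.sqrt(np.mean(np.square(e)))) for e in per_set_errors]
    return sorted(range(len(rms)), key=lambda i: rms[i], reverse=True)

# Hypothetical errors for four captured sets; set 3 is the outlier here.
errors = [np.array([0.4, 0.5, 0.3]),
          np.array([0.6, 0.7, 0.5]),
          np.array([0.5, 0.4, 0.6]),
          np.array([4.8, 5.2, 6.1])]
order = rank_bad_apples(errors)   # set 3 ranks first: discard it, regenerate
```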

You can navigate in the window at right with the following controls:

Hotkey   Function
W        Move Forward
S        Move Backward
A        Move Left
D        Move Right
E        Move Up
C        Move Down
Q        Rotate Counterclockwise
R        Rotate Clockwise

Just like before, excluding images may help in this situation. Click 'Ignore' on all but one of the images, and attempt to Generate RGB/Depth Correspondence again. When you find an image that allows the process to complete successfully, try combining it with other images. There is some randomness in the algorithm, so it helps to try the same combinations a few times just to see if it 'guesses' a better starting place.
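This trial-and-error workflow can be sketched as a search over subsets of the captured sets, retrying each subset a few times to account for the solver's randomness (`solve` here is a hypothetical stand-in for 'Generate RGB/Depth Correspondence'):

```python
import itertools

def find_working_combo(num_sets, solve, retries=3):
    """Try subsets of calibration sets, largest first; retry each a few
    times because the solver has a random starting point. `solve` is a
    stand-in for the app's solver: it should return a reprojection error
    on success, or None on failure."""
    for size in range(num_sets, 0, -1):
        for combo in itertools.combinations(range(num_sets), size):
            for _ in range(retries):
                err = solve(combo)
                if err is not None:
                    return combo, err
    return None, None

# Toy solver: only succeeds when set 3 is excluded from the subset.
combo, err = find_working_combo(4, lambda c: 0.5 if 3 not in c else None)
```

Largest subsets are tried first because more well-distributed checkerboards generally constrain the calibration better; dropping sets is a fallback, not the goal.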

By pressing the left and right arrows you can cycle through previews of the four checkerboard calibration sets. If the calibration is correct, you'll see the checkerboard image pixels (in black and white) mapped cleanly onto the depth model of the same image.

You'll also see colored dots floating in space that correspond to the checkerboard depth planes. Some dots are likely to be missing from the grid pattern, as they were removed as outliers while generating the calibration. An ideal calibration will contain dots from at least three different sets of colors. As you cycle through all the test checkerboard sets, the checkerboard image should be visibly well aligned to the depth data.

One note about the Kinect V2: we've been seeing far fewer matching dots on this final view, but the resulting calibration typically turns out to be more than satisfactory. If you have at least a few dots from most, if not all, of the checkerboards in the scene, we recommend saving the calibration and shooting a test clip to visually assess whether the calibration is good.


Once you're here, congratulations! You're all calibrated and ready to go, with the hardest part of using DepthKit behind you.