pic: Year of Depth Cameras?

[attached image]

Taken from the Microsoft Kinect at ~30 fps on an i3 laptop. Next step is to see how this program fares with totes that are touching.

How far away was the tote (in something like feet)?

Derp. I didn’t adjust the algorithm to call for the distance at the center of the contour from the depth map.

Here is the image with it outputting real distance in inches (Dropbox link):

tl;dr: It’s 171.14 inches
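For anyone curious, looking up the distance at the center of the contour in the depth map can be sketched like this. This is a minimal pure-Python sketch, not the poster's actual code: the contour format (a list of pixel points), the depth map layout (row-major, millimeters), and the function names are all assumptions for illustration. With OpenCV you would typically get the centroid from `cv2.moments` instead.

```python
# Sketch: read the depth map at a contour's centroid and convert to inches.
# contour: list of (x, y) pixel points; depth_map: 2D list of raw depth in mm.
# (Both formats are assumed for this example.)

MM_PER_INCH = 25.4

def contour_centroid(contour):
    """Average the contour points to get an approximate center pixel."""
    n = len(contour)
    cx = sum(x for x, _ in contour) / n
    cy = sum(y for _, y in contour) / n
    return int(round(cx)), int(round(cy))

def depth_at_centroid_inches(contour, depth_map):
    """Look up the depth map at the contour center and convert mm to inches."""
    cx, cy = contour_centroid(contour)
    depth_mm = depth_map[cy][cx]
    return depth_mm / MM_PER_INCH

# Toy example: a square contour centered on a pixel reading 4347 mm,
# which works out to about 171.14 inches.
depth_map = [
    [4000, 4000, 4347, 4000, 4000],
    [4000, 4000, 4347, 4000, 4000],
    [4000, 4000, 4347, 4000, 4000],
]
contour = [(1, 0), (3, 0), (3, 2), (1, 2)]
print(round(depth_at_centroid_inches(contour, depth_map), 2))
```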

The Kinect does some unexpected depth processing that you may want to be aware of (or at least, it did in the past, and I don’t think it’s changed recently).

When you ask for the depth value of a specific pixel (and this is hard for me to put into words), it returns the distance to the closest point of the plane that:

- intersects the object at that pixel, AND
- is normal to the viewing axis of the Kinect.

This is easily observed by putting an object directly ahead of the Kinect, say, at 6 feet. Then, slide the object left and right. The Kinect will report that the object is still 6 feet away, even though as you slide the object to the left or the right, the absolute distance to the object increases!

This is very helpful for games, by the way - since the Kinect was originally intended to watch the human body, they wanted to report constant distance values for the depth of the player body even if the player body is not directly in front of the sensor.

However, if you’re trying to find objects and navigate to them, you should be aware of this particular quirk. This means as you rotate the sensor, the reported distance of objects will change, even though their real-world distance has remained constant.
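If it matters for navigation, the quirk above can be corrected with a pinhole-camera model: treat the reported value as the z (along-axis) distance, back-project the pixel to camera coordinates, and take the Euclidean norm. A rough sketch, assuming the commonly quoted ~575 px focal length for the Kinect v1's 640x480 depth image (that number is my assumption, not from this thread):

```python
import math

# Sketch: convert the Kinect's reported plane depth (z) into the true
# straight-line distance for a given pixel, using a pinhole model.
# FOCAL_PX is an assumed value (~575 px is often quoted for the 640x480
# depth image); calibrate your own sensor for real use.

FOCAL_PX = 575.0
CENTER_X, CENTER_Y = 320, 240  # optical center of the 640x480 depth image

def euclidean_from_plane_depth(z, px, py):
    """Recover straight-line distance from plane depth z at pixel (px, py)."""
    # Back-project the pixel to camera-space X/Y at depth z.
    x = (px - CENTER_X) * z / FOCAL_PX
    y = (py - CENTER_Y) * z / FOCAL_PX
    return math.sqrt(x * x + y * y + z * z)

# An object dead ahead at 72 inches reads the same both ways...
print(round(euclidean_from_plane_depth(72.0, 320, 240), 1))
# ...but slid toward the edge of the frame, the sensor still reports
# z = 72 while the true distance has grown.
print(round(euclidean_from_plane_depth(72.0, 620, 240), 1))
```

This is exactly the "slide the object left and right" experiment: the reported z stays constant while the Euclidean distance increases with the pixel's offset from center.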

Huh, I didn’t know that. That’s rather interesting. I don’t see it being a problem because most routines would turn to line up, then drive to pick it up. Who knows, though.

Have you gotten a chance to use the Kinect v2 (assuming you’re not using one in your example)? I’ve read that it uses time of flight to find distances instead of the IR pattern recognition the old one uses. It has a greater field of view, higher resolution and frame rate, independent color channels, and longer range. Is it something you’re thinking about developing with?

I would love to develop with it, but I don’t have access to one.

Note that Kinect v2 would require USB3 and lots of computation power, so integration of it on the robot would require a pretty powerful coprocessor or laptop.

Seems perfect for those Nvidia Jetson TK1 boards on FIRST Choice.