Kinect Video Mode and Resolutions

So, our team is using the Freenect library to interface with the Kinect, which will be mounted on the robot as a vision device. I spotted these lines in the library’s cameras.c:

    freenect_frame_mode supported_video_modes[video_mode_count] = {
        // reserved, resolution, format, bytes, width, height, data_bits_per_pixel, padding_bits_per_pixel, framerate, is_valid
        ...
        {MAKE_RESERVED(FREENECT_RESOLUTION_HIGH,   FREENECT_VIDEO_IR_8BIT), FREENECT_RESOLUTION_HIGH, {FREENECT_VIDEO_IR_8BIT}, 1280*1024, 1280, 1024, 8, 0, 10, 1 },
        ...
    };

This appears to be an IR video mode (there are corresponding RGB, Bayer, etc. modes) with a resolution of 1280x1024, which is higher than I’ve ever heard quoted for the Kinect.

Is this video mode usable? Is it that the Kinect’s camera is capable of this resolution, but the low framerate constrains its use, which is why most sources quote a lower resolution for the Kinect?
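
If it is usable, I assume requesting it through the public API would look something like this (just a sketch from reading the headers; I haven’t actually run it):

    #include <stdio.h>
    #include "libfreenect.h"

    // untested sketch: request the 1280x1024 IR mode from the table above
    static void ir_cb(freenect_device *dev, void *ir, uint32_t timestamp)
    {
        (void)dev; (void)ir;                     // ir is 1280x1024 bytes of 8-bit IR data
        printf("IR frame at %u\n", (unsigned)timestamp);
    }

    int main(void)
    {
        freenect_context *ctx;
        freenect_device *dev;
        if (freenect_init(&ctx, NULL) < 0 || freenect_open_device(ctx, &dev, 0) < 0)
            return 1;

        // the high-res IR entry; note the table only promises 10 fps
        freenect_frame_mode mode =
            freenect_find_video_mode(FREENECT_RESOLUTION_HIGH, FREENECT_VIDEO_IR_8BIT);
        if (!mode.is_valid)
            return 1;
        freenect_set_video_mode(dev, mode);
        freenect_set_video_callback(dev, ir_cb);
        freenect_start_video(dev);

        while (freenect_process_events(ctx) >= 0)
            ;                                    // spin until error or Ctrl-C

        freenect_stop_video(dev);
        freenect_close_device(dev);
        freenect_shutdown(ctx);
        return 0;
    }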

The MS drivers also support RGB images at 1280x1024, 640x480, and 320x240.

Greg McKaskle

No, that mode is actually quite unstable from what I’ve heard.

You’re not going to get much better performance with that; in fact, it will probably degrade your performance, since the image is over 4x bigger than 640x480, meaning more pixels to search through.

That mode is the infrared feed from the Kinect. Possibly useful. :wink: You cannot view the RGB and IR feeds at the same time per the Kinect’s firmware. You can, however, view the IR and depth feeds at the same time.
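
Roughly, starting both at once looks like this (fragment, untested; dev comes from the usual freenect_init()/freenect_open_device() setup, and ir_cb/depth_cb are your own callbacks):

    // IR video and depth streaming together; asking for RGB + IR at once is refused
    freenect_set_video_mode(dev,
        freenect_find_video_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_VIDEO_IR_8BIT));
    freenect_set_depth_mode(dev,
        freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
    freenect_set_video_callback(dev, ir_cb);
    freenect_set_depth_callback(dev, depth_cb);
    freenect_start_video(dev);                   // IR stream
    freenect_start_depth(dev);                   // depth stream runs alongside it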

Run glview and switch camera feeds until you find the IR one. It will be a grayscale video feed.

The OpenKinect library outputs a 640x480 RGB feed, a “640x480” depth feed (really 632x480), and a 640x488 IR feed.
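
If you want to double-check those numbers against what your build reports, you can just dump the mode table (untested sketch):

    #include <stdio.h>
    #include "libfreenect.h"

    // print every video mode libfreenect advertises (depth modes have a matching
    // freenect_get_depth_mode_count()/freenect_get_depth_mode() pair)
    int main(void)
    {
        int n = freenect_get_video_mode_count();
        for (int i = 0; i < n; i++) {
            freenect_frame_mode m = freenect_get_video_mode(i);
            printf("video mode %2d: %4dx%-4d  %2d bpp  %2d fps  valid=%d\n",
                   i, m.width, m.height, m.data_bits_per_pixel,
                   m.framerate, m.is_valid);
        }
        return 0;
    }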

There is a very useful hint in these posts. The Kinect has a high-accuracy IR camera with a great filter already installed… Hmmm, if only we were using the IR camera and high-power IR LEDs to illuminate the retro-reflective tape. Sure seems like that would really help us control our image in bad/variable lighting situations :wink:

-Mike

Hmmmm…

Hmmmmmmmmmmmmmmmmmmmmmmmmmmmm…

Hmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm…

::rtm::

Now you have me worried with the “read the manual” icon. The students worked really hard on this idea and none of the mentors saw a problem with it. Did we miss something?

Not really. You just need to figure out how to make it work. Fortunately, the Kinect does all the IR stuff for you already.

I would be careful with the assumption that IR is a better part of the spectrum to be in than visible. The bright lighting used at many FRC events has non-negligible IR content to it if I remember correctly. Not saying you’ll be worse off than in the visible spectrum (and you may be much better), but I wouldn’t assume you’re much better off.

I would also be very careful with “high power IR LEDs” as the human eye cannot see the output and therefore you will not know to blink or look away and could potentially damage your eyes if the power levels are high enough. Just make sure you are keeping things at safe levels (some research may be needed to determine what is safe).

Real-time performance is really not an issue here. We’re grabbing and processing single frames, and I can’t imagine what you’d need a live IR camera feed for.

The problem is getting the data off the Kinect (quite a bit of low-level stuff) and putting it through image processing. If you get past that programming challenge, all should be well.
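
If your build includes the sync wrapper (libfreenect_sync), grabbing a single frame to feed into processing is only a few lines; something like this (untested sketch):

    #include <stdio.h>
    #include <stdint.h>
    #include "libfreenect_sync.h"

    // grab one 8-bit IR frame via the sync wrapper and hand it off to processing
    int main(void)
    {
        void *frame;
        uint32_t ts;
        if (freenect_sync_get_video(&frame, &ts, 0, FREENECT_VIDEO_IR_8BIT) < 0) {
            fprintf(stderr, "no Kinect found\n");
            return 1;
        }
        const uint8_t *ir = frame;               // 640x488, one byte per pixel
        printf("got IR frame, first pixel = %d\n", ir[0]);
        // ...pass ir to your image processing here...
        freenect_sync_stop();
        return 0;
    }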

> I would also be very careful with “high power IR LEDs” as the human eye cannot see the output and therefore you will not know to blink or look away and could potentially damage your eyes if the power levels are high enough. Just make sure you are keeping things at safe levels (some research may be needed to determine what is safe).

The Kinect mounts an IR laser which projects a matrix of dots through a holographic diffuser; the dots show up incredibly bright (>200 luminance vs. <20 for the rest of the scene) on the IR feed. Considering that the Kinect is designed to be pointed towards people’s eyes for long durations, it follows that the IR laser is quite weak. You won’t need an LED with high enough power to be dangerous.
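
For what it’s worth, with intensities that far apart, picking the bright returns out of the 8-bit IR frame is basically just a threshold; a trivial illustration (not from any real codebase):

    #include <stddef.h>
    #include <stdint.h>

    // count pixels brighter than a threshold in a width x height 8-bit IR frame,
    // e.g. count_bright(ir, 640, 488, 200) on the feed described above
    static size_t count_bright(const uint8_t *ir, int width, int height, uint8_t thresh)
    {
        size_t n = 0;
        for (size_t i = 0; i < (size_t)width * height; i++)
            if (ir[i] > thresh)
                n++;
        return n;
    }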

> I would be careful with the assumption that IR is a better part of the spectrum to be in than visible. The bright lighting used at many FRC events has non-negligible IR content to it if I remember correctly. Not saying you’ll be worse off than in the visible spectrum (and you may be much better), but I wouldn’t assume you’re much better off.

The grayscale IR image will almost always be better than a color image.

In our experience the built-in Kinect laser is not powerful enough by itself to get good images in a light-noisy environment at any useful distance. We’ve tested that quite a bit, and even in the evening in an open area we started getting image-quality problems quite quickly. We also tried the LED ring light from AndyMark: it worked fine in a dark area out to about 20 feet, but in a really bright environment it was overpowered by ambient light very quickly, and I imagine it will become even less effective as conditions get worse. We’ve already ordered 850 and 830nm IR LEDs to see how far we can push the distance at which we get a usable initial image, and some visible LEDs as well to see which camera/light combo gives us the most reliable initial image for processing.

As for the rest of the stuff, we are using RoboRealm to handle image processing and edge detection. The students were able to get a great set of edges with that in literally less than an hour. It’s a really impressive piece of software and is clearly going to be the linchpin for our vision system.

-Mike

I was able to detect targets in a fairly bright room using the IR feed. The Kinect straight-up ignored bright overhead lights… they were fluorescent, though; issues arise with incandescents. Nothing we can’t deal with.

It totally worked; we just didn’t get the range we wanted.

Got the Kinect to autotrack the targets tonight.