Kinect is Legal for Both Drive Station and Field Use (Here’s why)

Team 494 (Bill Maxwell/Martians) has been working with the Kinect since its introduction.

One Handed Kinect FPS Hack - Better Than Mouse

This is a device driver designed to replace a controller.
It runs on the drive station and provides joystick info to the driver station program, which is sent on to the cRIO. Extra information can also be sent through the driver station's sensor-information packet field.
The minimum distance at which the sensor works is 17 inches; therefore the sensor must be elevated above the drive station and pointed downward.
This means that only the drivers can control the robot using the Kinect, and during autonomous the drivers' arms are not visible to the elevated Kinect. This makes the Kinect legal for the drive station.

The Kinect looking downward (maybe on a simple pole) gives very high-resolution depth information about the drivers' arms. Think of all the possible virtual controls that could be printed flat on your drive station control board.
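
As a rough sketch of what that could look like in code (the 640x480 frame size is the Kinect's depth resolution, the 2047 "unknown" code is its no-reading value discussed later in this thread, and everything else here is just illustrative): find the nearest point in the overhead depth frame, i.e. the raised hand, and map its position over the board to two joystick axes.

```cpp
#include <cstdint>

const int WIDTH  = 640;   // Kinect depth frame width
const int HEIGHT = 480;   // Kinect depth frame height
const uint16_t DEPTH_UNKNOWN = 2047;  // "no reading" code in the 11-bit stream

struct Axes { double x; double y; };

// depth: one raw depth frame, row-major, WIDTH*HEIGHT samples.
// Returns the hand position over the board scaled into [-1, 1] on each axis.
Axes handToAxes(const uint16_t* depth) {
    uint16_t best = DEPTH_UNKNOWN;              // smaller raw value = nearer
    int bestX = WIDTH / 2, bestY = HEIGHT / 2;  // center stick if nothing seen
    for (int y = 0; y < HEIGHT; ++y) {
        for (int x = 0; x < WIDTH; ++x) {
            uint16_t d = depth[y * WIDTH + x];
            if (d < best) {                     // valid and nearest so far
                best = d;
                bestX = x;
                bestY = y;
            }
        }
    }
    Axes a;
    a.x = 2.0 * bestX / (WIDTH - 1) - 1.0;   // left/right across the board
    a.y = 2.0 * bestY / (HEIGHT - 1) - 1.0;  // near/far across the board
    return a;
}
```

The same frame could just as easily be split into regions, one per virtual control printed on the board.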

Field use is possible, but might have some problems.
First, the sensor is very safe in its light output: eight million units have been sold for use by children standing in front of it and looking straight at the sensor.
A laptop can be used on the robot: the Kinect connects to the laptop's USB port, and its extra 12 V supply can come from the robot battery.
It is completely legal to connect a laptop to the cRIO's network port.
This gives you unlimited programming possibilities.

Now for the field use problems:
Based on our tests, the sensor works out to about 28 feet in low light.
Moved outside into sunlight, its range drops to less than 4 feet.
Different events will have different lighting; how much this affects the Kinect is unknown, and any modification of the Kinect's light output would make its safety unknown, and therefore very illegal.

The next problem is that multiple Kinects used on the field could interfere with each other. People testing this find the interference low with three units, but what if everyone on the field is using a Kinect?

The final problem is that FIRST may just say NO.

I hope everyone will work toward making depth sensors standard for FIRST, because they are the future of robotics.

I’m not up-to-speed on the whole ‘Kinect’ thing, but doesn’t it have some sort of laser? I think that’s the FIRST-illegal part.

Here’s an example of two Kinects working together; the poster commented on the distortion. Sorry for the indirect link; no YouTube at school.

It sends a bunch of IR dots (normal light, not lasers) all over the room that it is being used in, and uses an IR camera to detect the reflected position and intensity of each dot to determine the depth.

The Kinect depth sensor uses an IR laser to generate those dots. That runs afoul of the FRC robot rules.

If a non-reflective surface were put on the driver’s station, and the Kinect pointed down at that surface, I don’t think that the GDC would disallow a Kinect due to safety within that context. If a driver can manipulate the robot that way, more power to him/her! Actually, you could probably put a proper-scale representation of the robot under the Kinect and manipulate it to manipulate the robot (like 1731 did in 2007 – see the Behind the Design book).

As for use on the live robot, I think that there are still too many unknowns. If anything, Q&A it with very specific parameters in mind. If it’s approved and you stay within those parameters, you would have a very strong argument (but not a guarantee since the inspectors have the final say in anything contextual) that the Kinect is safe for use.

This would be a good question for the GDC. Because of the type of low-power IR laser, you may get an exception.

I checked my Kinect and found it to have a Class 1 safety rating:

Class 1 LASER PRODUCT
A class 1 laser is safe under all conditions of normal use.

I will ask the GDC.

But just in case there is a problem,

I checked on the Kinect's IR emitter and found it to run at the same wavelength as TV remotes, 830 nm, with a static image generated by a 30,000-point caustic-pattern grating. Would it be possible to replace the Kinect's IR emitter with a standard IR LED of the same power rating? FIRST uses IR LEDs everywhere.

What do you think?

Interesting. I (obviously) had no idea. I had seen the IR-camera YouTube videos and remembered the dots being so big that I figured they were generated by a non-laser IR emitter somehow.

Edit: Now having re-watched the videos, it makes a lot more sense that it’s a laser.

No. An incoherent light source would not work with the pattern grating. It must be a laser.

(Perhaps a “QWLED” device that I remember reading about a couple of decades ago would work. It produced coherent but highly divergent light, suitable for tabletop holography. I never heard about it again, so perhaps it never made it to an actual product.)

When you ask, you may want to differentiate between use on the ROBOT and use on the OPERATOR CONSOLE.

Even if it is legal, it shouldn't be. Too many Kinects on the field could definitely cause too much interference and unexpected results. The Kinect projects at an 830 nm wavelength (near the wavelength of a laser mouse), uniformly across ALL Kinects manufactured, so if at any point during the match one unit's 830 nm projection lands on top of another's, there will be interference. The more Kinects in one area, the higher the chance of interference. There are also other things to worry about, such as IR-absorbent material on the field, not even to mention the difficulties you will have in getting the Kinect to work through the cRIO.

All the information you will ever want: http://openkinect.org/wiki/Hardware_info

Even if it is legal, it shouldn't be. Too many Kinects on the field could definitely cause too much interference and unexpected results.

Because the Kinect's controller uses a pattern match against a non-repeating caustic pattern, it is very resistant to interference. When there is a problem, the affected area is marked with an "unknown" depth code (2047).

When we use a standard light camera, there are hundreds of interference artifacts each frame that must be filtered out; this is not needed with the Kinect.
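
As a rough illustration of how cheap that filtering is (a sketch; the raw 11-bit frame layout is assumed), you can gauge per-frame interference just by counting the 2047 codes:

```cpp
#include <cstdint>

// Fraction of a raw depth frame flagged with the "unknown" code (2047).
// A spike in this value would indicate interference or an IR-hostile surface.
double unknownFraction(const uint16_t* depth, int numSamples) {
    int unknown = 0;
    for (int i = 0; i < numSamples; ++i) {
        if (depth[i] == 2047) {
            ++unknown;
        }
    }
    return static_cast<double>(unknown) / numSamples;
}
```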

We have tried to use sonar sensors in the past, but everything interferes with them.

There are also other things to worry about, such as IR-absorbent material on the field

Our testing so far has found only mirrors to be a problem. Carpet, the logo tubes, and other robots (we have lots of them) all image very well.

not even to mention the difficulties you will have in getting the Kinect to work through the cRIO.

This is no problem at all; we can use laptops on our robots this year. The Kinect runs very well on both Windows 7 and XP. We can also connect the laptop to the cRIO's second network port.

This is my seventh year as a mentor in FIRST, and it is the first time it has been possible to use leading-edge tech in our robot. We may not make it to the field because of FIRST stopping us, but we will not stop trying to make the best robot possible. We will be able to use our new tech in off-season competitions and, most importantly, in demonstrations to new students and sponsors.

This is the beginning of a new age: the natural computer (robot) interface.

Oh, what would be the advantage of a Kinect over a regular old joystick?

Oh, and good luck with autonomy with a Kinect. Have fun trying to communicate with the cRIO efficiently enough that you can rotate the Kinect and not sacrifice precious clock cycles.

Oh, what would be the advantage of a Kinect over a regular old joystick?

This is a very good question.

We will program our drive station to use both an Xbox controller and the Kinect (not at the same time). Based on years of using a joystick (Xbox controller), we know how that works. The Kinect interface will be a totally new adventure. Our goal is to make a natural interface that anyone can walk up to and use to drive our robot. This may not be possible in five weeks; it may never be possible.
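
In outline (all the names here are placeholders, not our actual code), the drive station loop would poll exactly one source at a time, so the two inputs can never fight each other:

```cpp
struct DriveCommand { double x; double y; };

// Stubs standing in for the real pollers.
DriveCommand readXboxController() { return DriveCommand(); }
DriveCommand readKinectGestures() { return DriveCommand(); }

enum InputSource { XBOX, KINECT };

// Exactly one source is read per loop iteration.
DriveCommand readInput(InputSource source) {
    return (source == XBOX) ? readXboxController() : readKinectGestures();
}
```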

Oh, and good luck with autonomy with a Kinect. Have fun trying to communicate with the cRIO efficiently enough that you can rotate the Kinect and not sacrifice precious clock cycles.

The Kinect on the field may not be a good choice for this game. We will find out by trying both a standard camera and the Kinect (depth sensor).

As for the cRIO's precious clock cycles: if we offload the visual processing to a laptop and use C++ on the cRIO, there will be lots of unused cRIO cycles.
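
As a sketch of that split (the port number and packet layout are made up; 10.4.94.2 just follows the standard 10.TE.AM.2 robot-network addressing for our team number), the laptop crunches the depth frames and sends only a few bytes of results to the cRIO over UDP. Winsock is used since the laptop runs Windows:

```cpp
#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

int main() {
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s == INVALID_SOCKET) { WSACleanup(); return 1; }

    sockaddr_in crio = {};
    crio.sin_family = AF_INET;
    crio.sin_port = htons(1130);                    // hypothetical port
    crio.sin_addr.s_addr = inet_addr("10.4.94.2");  // cRIO, 10.TE.AM.2 scheme

    // Hypothetical result of the laptop's depth processing:
    // bearing to target (degrees) and range (inches).
    float result[2] = { -3.5f, 212.0f };
    sendto(s, reinterpret_cast<const char*>(result), sizeof(result), 0,
           reinterpret_cast<const sockaddr*>(&crio), sizeof(crio));

    closesocket(s);
    WSACleanup();
    return 0;
}
```

The cRIO side just listens on the same port and feeds the values into the drive code, so the vision loop never touches the cRIO's cycles.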

I feel like this is more cool factor than anything else, and in my opinion isn't worth hassling FIRST and wasting time over. Sure, you could use a Kinect, but you could also just program a camera or two more efficiently.

Yes, the laptop would get the image input and process it, but you would then have to transfer that data to the cRIO, and then the cRIO has to move the motors, etc. I honestly do not think one has to invest $100 in a Kinect; one could instead just invest $20 in a pair of low-end webcams.

I think there will be more interference than you expect. Current tests have revealed that two Kinects pointed at the same area at a 90-degree angle show little interference, but six Kinects pointed at varying angles, possibly all at each other, will render the data useless. Just think: if you could see the infrared pattern with six Kinects pointed at the same area (which you can, with an infrared camera or infrared goggles), all you would see is a big infrared blob, not a pattern. There's no way the computer could make sense of something like that and give you enough information to make your algorithms work.

It would be pretty easy for defender robots to use the Kinect, or similar allowed lasers, to mess with it if they wanted. And, not to be pessimistic, but even people in the audience or other team members could use such devices to interfere, and no one would know, because it's invisible to the eye. Even things like camera autofocus systems that emit IR wavelengths could interfere to a certain degree.

I think it’s great that there are people out there pushing to use this, but I simply don't think the technology is mature enough to use easily; it poses too many potential uncertainties in different situations for any algorithm you may throw at it. So far, everything it has been used for that has actually worked reasonably well (including by Microsoft) has been done in very, very controlled environments.

Just my 2 cents.

Sure, it's easy for me to stand in the stands and hold up a vision target, rendering anyone's auto-scoring system useless. But am I going to? No.

Best of luck with what you are trying to accomplish, but I think most people can use a joystick to drive just as well. I think joysticks provide more predictable, and therefore more effective and safer, results.