Kinect is Legal for Both Drive Station and Field Use (Here’s why)
MaxKinect
14-01-2011, 10:28
Team 494 (Bill Maxwell/Martians) has been working with the Kinect since its introduction.
One Handed Kinect FPS Hack - Better Than Mouse
http://www.youtube.com/watch?v=1j9UhxtmWmA
This is a device driver designed to replace a controller.
It will run on the driver station and provide joystick info to the driver station program, which will be sent to the cRIO. Extra information can also be sent through the driver station's sensor information packet field.
The minimum distance at which the sensor will work is 17 inches; therefore the sensor must be elevated above the drive station and pointed downward.
This means that only the drivers can control the robot using the Kinect. During autonomous, the drivers' arms are not visible to the elevated Kinect. This makes the Kinect legal for the drive station.
The Kinect looking downward (maybe on a simple pole) gives very high-resolution depth information about the driver's arms. Think of all the possible virtual controls that could be printed flat on your drive station control board.
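As a minimal sketch of the idea, here is how a downward-facing depth frame could be mapped to joystick-style axes on the driver station. This is illustration only: it assumes a hypothetical API that hands you each frame as a 2-D list of raw 11-bit depth values (2047 meaning "unknown"), and it simply treats the nearest valid point as the driver's hand.

```python
def depth_to_joystick(depth, invalid=2047):
    """depth: 2-D list of raw 11-bit Kinect depth values (2047 = unknown).
    Returns (x, y) joystick axes in [-1.0, 1.0] from the nearest valid
    point in the frame, assumed to be the driver's hand."""
    best = None  # (depth value, row, col) of the nearest valid point
    for r, rowvals in enumerate(depth):
        for c, v in enumerate(rowvals):
            if v != invalid and (best is None or v < best[0]):
                best = (v, r, c)
    if best is None:
        return 0.0, 0.0              # no hand visible: center the stick
    _, r, c = best
    rows, cols = len(depth), len(depth[0])
    x = 2.0 * c / (cols - 1) - 1.0   # left/right position -> steering axis
    y = 2.0 * r / (rows - 1) - 1.0   # near/far position -> throttle axis
    return x, y

# Toy 5x5 frame with the "hand" (lowest depth) at row 1, column 3
frame = [[900] * 5 for _ in range(5)]
frame[1][3] = 500
print(depth_to_joystick(frame))  # -> (0.5, -0.5)
```

Real code would want smoothing and a dead zone around center, but the mapping itself is that simple.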
Field use is possible, but might have some problems.
First, the sensor's light output is very safe: 8 million units have been sold for use by children standing in front of it and looking straight at the sensor.
A laptop can be used on the robot; the Kinect connects to the laptop's USB port, and its extra 12 volts can be supplied by the robot battery.
It is perfectly legal to connect a laptop to the cRIO's network port.
This gives you unlimited programming possibilities.
Now for the field use problems:
Based on our tests, the sensor will work to about 28 feet in low light.
If the sensor is moved outside into sunlight, its range drops to less than 4 feet.
Different events will have different lighting; how much this will affect the Kinect is unknown. Any modification of the Kinect's light output would make its safety unknown and therefore very illegal.
The next problem is that multiple Kinects used on the field could interfere with each other. People testing this find the interference low with 3 units, but what if everyone on the field is using a Kinect?
The final problem is that FIRST may just say NO.
I hope everyone will work toward making depth sensors standard for FIRST, because they are the future of robotics.
synth3tk
14-01-2011, 10:41
I'm not up-to-speed on the whole 'Kinect' thing, but doesn't it have some sort of laser? I think that's the FIRST-illegal part.
smileydude560
14-01-2011, 10:44
http://gizmodo.com/5701466/kinect-3d-video-capture-just-got-even-more-insane
Here's an example of two Kinects working together; the poster commented on the distortion. Sorry for the indirect link, no YouTube at school.
I'm not up-to-speed on the whole 'Kinect' thing, but doesn't it have some sort of laser? I think that's the FIRST-illegal part.
It sends a bunch of IR dots (normal light, not lasers) all over the room that it is being used in, and uses an IR camera to detect the reflected position and intensity of each dot to determine the depth.
http://en.wikipedia.org/wiki/Structured_Light_3D_Scanner
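To sketch the geometry: each projected dot shifts sideways in the IR camera image in proportion to how far the surface is from a calibrated reference plane, so depth falls out of simple pinhole triangulation. A toy model in Python (the focal length, baseline, and reference depth below are made-up illustration values, not the Kinect's real calibration):

```python
def depth_from_shift(focal_px, baseline_m, ref_depth_m, shift_px):
    """Recover depth Z (meters) from the pixel shift of one projected dot,
    using the structured-light relation  shift = f*b*(1/Z - 1/Zref),
    where f is the focal length in pixels, b the projector-camera
    baseline in meters, and Zref the calibrated reference-plane depth."""
    return 1.0 / (shift_px / (focal_px * baseline_m) + 1.0 / ref_depth_m)

# Toy numbers: f = 580 px, projector-camera baseline = 7.5 cm,
# reference plane at 2 m. A dot shifted 21.75 px sits at 1 m.
print(depth_from_shift(580.0, 0.075, 2.0, 21.75))  # -> 1.0
```

The real sensor does this per dot across a dense pseudo-random pattern, which is why it needs the pattern to come from a coherent (laser) source that stays sharp at range.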
Alan Anderson
14-01-2011, 11:15
It sends a bunch of IR dots (normal light, not lasers)...
The Kinect depth sensor uses an IR laser to generate those dots. That runs afoul of the FRC robot rules.
If a non-reflective surface were put on the driver's station, and the Kinect pointed down at that surface, I don't think that the GDC would disallow a Kinect due to safety within that context. If a driver can manipulate the robot that way, more power to him/her! Actually, you could probably put a proper-scale representation of the robot under the Kinect and manipulate it to manipulate the robot (like 1731 did in 2007 -- see the Behind the Design book).
As for use on the live robot, I think that there are still too many unknowns. If anything, Q&A it with very specific parameters in mind. If it's approved and you stay within those parameters, you would have a very strong argument (but not a guarantee since the inspectors have the final say in anything contextual) that the Kinect is safe for use.
This would be a good question for the GDC. Because of the type of low power IR laser you may get an exception.
MaxKinect
14-01-2011, 12:56
I checked my Kinect and found it to have a Class 1 safety rating.
http://en.wikipedia.org/wiki/Laser_safety#Class_1
Class 1 LASER PRODUCT
A class 1 laser is safe under all conditions of normal use.
I will ask the GDC.
But just in case there is a problem,
I checked on the Kinect IR emitter and found it to run at the same wavelength as TV remotes (830 nm), with a static image generated by a 30,000-point caustic patterned grating. Would it be possible to replace the Kinect IR emitter with a standard IR LED of the same power rating? FIRST uses IR LEDs everywhere.
What do you think?
The Kinect depth sensor uses an IR laser to generate those dots. That runs afoul of the FRC robot rules.
Interesting. I (obviously) had no idea. I had seen the IR-camera youtube videos and remembered the dots being so big that I figured they were generated via a non-laser IR emitter somehow.
Edit: Now having re-watched the videos, it makes a lot more sense that it's a laser.
Alan Anderson
14-01-2011, 15:20
Would it be possible to replace the Kinect IR emitter with a standard IR LED of the same power rating?
No. An incoherent light source would not work with the pattern grating. It must be a laser.
(Perhaps a "QWLED" device that I remember reading about a couple of decades ago would work. It produced coherent but highly divergent light, suitable for tabletop holography. I never heard about it again, so perhaps it never made it to an actual product.)
Joe Ross
14-01-2011, 15:32
I will ask the GDC.
When you ask, you may want to differentiate between use on the ROBOT and use on the OPERATOR CONSOLE.
Ryan Gordon
14-01-2011, 18:16
Even if it is legal, it shouldn't be. Too many Kinects on the field could definitely cause too much interference and unexpected results. The Kinect operates at an 830 nm wavelength (near the wavelength of a laser mouse) uniformly across ALL Kinects manufactured, so if at any point during the match one Kinect's projection lands on top of another's, there will be interference. The more Kinects in one area, the higher the chance of interference. There are also other things to worry about, such as IR-absorbent material on the field, not to mention the difficulties you will have in getting the Kinect to work through the cRIO.
All the information you will ever want: http://openkinect.org/wiki/Hardware_info
MaxKinect
14-01-2011, 22:19
Even if it is legal, it shouldn't be. Too many Kinects on the field could definitely cause too much interference and unexpected results.
Because the Kinect controller pattern-matches against a non-repeating caustic pattern, it is very resistant to interference. When there is a problem, the affected area is marked with an 'unknown' depth code (2047).
When we use a standard light camera, there are hundreds of interference artifacts in each frame that must be filtered out; this is not needed with the Kinect.
We have tried to use sonar sensors in the past, but everything interferes with them.
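As a concrete sketch of what that 'unknown' code means in practice, here is a minimal Python filter over a raw frame, treated here as a flat list of 11-bit values (the frame layout is assumed for illustration):

```python
UNKNOWN = 2047  # 11-bit sentinel the Kinect reports where depth is unresolved

def mean_valid_depth(frame):
    """Average depth over a frame, skipping pixels flagged as unknown;
    returns None if nothing in the frame was resolved."""
    valid = [v for v in frame if v != UNKNOWN]
    return sum(valid) / len(valid) if valid else None

print(mean_valid_depth([500, 600, 2047, 700]))  # -> 600.0
```

The point is that interference shows up as explicitly flagged pixels you can skip, rather than as plausible-looking garbage you have to detect.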
There are also other things to worry about such as IR-absorbent material on the field
Our testing so far has found only mirrors to be a problem. Carpet, the logo tubes and other robots (we have lots of them) image very well.
not even to mention the difficulties you will have in getting the Kinect to work through the cRIO.
This is no problem at all; we can use laptops on our robots this year. The Kinect runs very well in both Windows 7 and XP. We can also connect the laptop to the cRIO's second network port.
This is my seventh year as a mentor in FIRST, and it is the first time it has been possible to use leading-edge tech in our robot. We may not make it to the field because of FIRST stopping us, but we will not stop trying to make the best robot possible. We will be able to use our new tech in off-season competitions and, most importantly, in demonstrations to new students and sponsors.
This is the beginning of a new age: the natural computer (robot) interface.
davidthefat
14-01-2011, 22:57
Oh, what would be the advantage of a kinect over a regular old joystick?
Oh and good luck with autonomy with a kinect. Have fun trying to communicate with the cRio efficiently enough so that you can rotate the Kinect and not sacrifice precious clock cycles.
MaxKinect
15-01-2011, 08:37
Oh, what would be the advantage of a kinect over a regular old joystick?
This is a very good question.
We will program our drive station to use both an Xbox controller and the Kinect (not at the same time). Based on years of using a joystick (Xbox controller), we know how it works. The Kinect interface will be a totally new adventure. Our goal will be to make a natural interface with which anyone can walk up and drive our robot. This may not be possible in 5 weeks; it may never be possible.
Oh and good luck with autonomy with a kinect. Have fun trying to communicate with the cRio efficiently enough so that you can rotate the Kinect and not sacrifice precious clock cycles.
The Kinect on the field may not be a good choice for this game. We will find out by trying both a standard camera and the Kinect (depth sensor).
As for the cRIO's precious clock cycles: if we offload the visual processing to a laptop and use C++ on the cRIO, there will be lots of unused cRIO cycles.
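A rough sketch of that split, with a made-up address and message format (real driver/robot code would define its own protocol): the laptop does the heavy Kinect processing and pushes only a few bytes per frame to the cRIO.

```python
import json
import socket

# Hypothetical address for the cRIO on the robot network; the IP and port
# here are illustration values, not anything mandated by FRC.
CRIO_ADDR = ("10.4.94.2", 1130)

def encode_tracking_result(target_x, target_z):
    """Pack one vision result (target bearing and depth) into a tiny,
    fixed-key JSON datagram the cRIO-side code can parse cheaply."""
    return json.dumps({"x": target_x, "z": target_z},
                      sort_keys=True).encode("ascii")

def send_tracking_result(sock, target_x, target_z):
    """Fire-and-forget over UDP, so the laptop's vision loop never
    blocks waiting on the robot."""
    sock.sendto(encode_tracking_result(target_x, target_z), CRIO_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_tracking_result(sock, 0.25, 3.1)  # uncomment on a live robot network
```

UDP is a deliberate choice here: a dropped vision packet is harmless because the next frame replaces it anyway, and the robot-side loop never stalls.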
winglerw28
15-01-2011, 10:41
I feel like this is more cool-factor than anything else, and in my opinion isn't worth hassling FIRST and wasting time over. Sure you could use a Kinect, but you could also just program a camera or two more efficiently.
davidthefat
15-01-2011, 10:45
This is a very good question.
We will program our drive station to use both a XBOX controller and the Kinect. (Not at the same time) Based on years of using a joystick (Xbox controller) we know how it works. The Kinect interface will be a totally new adventure. Our goal will be to make a natural interface in which anyone can walk up and drive our robot. This may not be possible in 5 weeks, it may never be possible.
The kinect on the field may not be a good choice for this game. We will find out by trying both a standard camera and the Kinect (depth sensor).
As for the cRio precious clock cycle, if we off load the visual processing to a laptop and use C++ in the cRio there will be lots of unused cRio cycles.
Yes, the laptop would get the image input and process it, but you would then have to transfer that data to the cRIO, and then the cRIO has to move the motors, etc. I honestly do not think one has to invest $100 in a Kinect rather than just $20 in a pair of low-end webcams.
Ryan Gordon
15-01-2011, 15:16
Because the Kinect controller uses a pattern match on a non-repeating caustic pattern it is very resistant to interference. When there is a problem, the area of the problem is marked with a 'unknown' depth code (2047).
I think there will be more interference than you expect. Current tests have revealed that two Kinects pointed at the same area at a 90-degree angle show little interference, but six Kinects pointed at varying angles, possibly all at each other, will render it useless. Just think: if you could see the infrared pattern with 6 Kinects pointed at the same area (which you can with an infrared camera or infrared goggles), all you would see is a big infrared blob, not a pattern. There's no way the computer could make sense of something like that and give you enough information to make your algorithms work.
It would be pretty easy for defender robots to use the Kinect or similar allowed lasers to mess with it if they wanted. And not to be pessimistic, but even people in the audience or other team members could use items to interfere, and no one would know because it's invisible to the eye. Even things like camera auto-focusing that release IR wavelengths could interfere to a certain degree.
I think it's great that there are people out there pushing to use this, but I simply don't think the technology is mature enough to use easily, because it poses too many potential uncertainties in different situations for any algorithm you may throw at it. So far, everything it's been used for (including by Microsoft) that has actually worked reasonably well has been done in very, very controlled environments.
Just my 2 cents.
Grim Tuesday
16-01-2011, 13:17
Sure, it's easy for me to stand in the stands and hold up a vision target, rendering anyone's autoscoring system useless. But am I going to? No.
PayneTrain
16-01-2011, 13:53
Best of luck with what you are trying to accomplish, but I think most people can use a joystick to drive as well. I think using joysticks can provide more predictable, and therefore more effective and safer, results.
Billfred
16-01-2011, 14:04
Just one point to bring up:
The Kinect has a motorized pivot on the base. How do you pass the FRC motor rules, short of cracking the case and removing it?
GaryVoshol
20-01-2011, 21:49
The GDC has spoken - Not Legal: http://forums.usfirst.org/showthread.php?t=16240
Tristan Lall
21-01-2011, 02:43
The GDC has spoken - Not Legal: http://forums.usfirst.org/showthread.php?t=16240
On the robot, at least. (<R02> refers to "[i]tems specifically prohibited from use on the ROBOT".)
Arguably, you could enclose the laser completely to avoid the <R02> violation. (The laser is integral; the device causing it not to be exposed to the surroundings may not have to be integral, depending on the interpretation of the rule. However, as a practical matter, I can anticipate the Q&A response....) As for the motorized base, since there is significant signal processing and I/O on the Kinect, it might be considered a "COTS computing device" for the purposes of <R45>.
On the topic of the laser rule, while it's relatively easy to enforce—except for the sometimes-fluid definition of "exposed"—it's somewhat too conservative. These are obviously eye-safe lasers—the Kinect has a Class 1 (http://en.wikipedia.org/wiki/Laser_safety#Class_1) rating (for an exposed device, eye-safe on a continuous basis).
The rule ought to allow any number of unmodified Class 1 lasers for competition use, on the robot and operator console, subject to a gameplay rule about creating distractions. There should be a separate venue rule stating that if you're in possession of a class 3 or higher laser, or in possession of a laser modified so as to invalidate its rating, or using any laser whatsoever in a vexatious way, you'll be thrown out of the building.
Vikesrock
21-01-2011, 08:25
Just one point to bring up:
The Kinect has a motorized pivot on the base. How do you pass the FRC motor rules, short of cracking the case and removing it?
It isn't all that bad to do this once you have the right tools. We have two at work that we have removed the entire base from.
PrinceTyke
22-01-2011, 12:41
I'd just like to point out that, while it wouldn't necessarily be better, it would certainly be cooler to control it with Kinect.
Vikesrock
23-01-2011, 09:57
It isn't all that bad to do this once you have the right tools. We have two at work that we have removed the entire base from.
And now here's a video of those Kinects mounted on a robot to perform autonomous navigation.
http://www.youtube.com/watch?v=kn93BS44Das
The processing overhead for the Kinects is much lower than for any stereo vision implementation I have heard of. This entire system runs on a pair of dual-core 1.66 GHz Intel Atom motherboards mounted on the robot and running off an unfiltered 12 V DC input from the battery. It should be noted that the motors are MUCH smaller than FRC motors and the battery is much bigger, so voltage dip is nowhere near what is seen on an FRC robot.
Al Skierkiewicz
23-01-2011, 10:49
<R02> ROBOT parts shall not be made from hazardous materials, be unsafe, or cause an unsafe condition. Items specifically prohibited from use on the ROBOT include (but are not limited to):
C. Any devices or decorations specifically intended to jam or interfere with the remote sensing capabilities of another robot, including vision systems, acoustic range finders, sonars, infra-red proximity detectors, etc. (e.g. including imagery on your robot that, to a reasonably astute observer, mimics the VISION TARGET)
D. Exposed lasers of any type (COTS devices with completely enclosed integral lasers, such as a laser ring gyro, are permitted)
This kind of says it all doesn't it?
Vikesrock
23-01-2011, 11:16
This kind of says it all doesn't it?
Absolutely. I was not intending my post here to be a suggestion that anyone try using the Kinect on their FRC robot, as the GDC has explicitly ruled on it in the Q&A in addition to the rule you cited. I just thought some people here may be interested in what can be done with the device on a robot.
MaxKinect
23-01-2011, 14:33
Absolutely. I was not intending my post here to be a suggestion that anyone try using the Kinect on their FRC robot, as the GDC has explicitly ruled on it in the Q&A in addition to the rule you cited. I just thought some people here may be interested in what can be done with the device on a robot.
This is a very good point. The Kinect is a very safe sensor technology. The fact that it uses a single-frequency infrared light source makes it a laser device and therefore "this year" (note: these words were used by the GDC in its answer to the question) not allowed for field use.
The Martians (494) and More Martians (70) are still working with the Kinect and hope that other teams will join us in the adventure of discovery.
Tom Line
23-01-2011, 14:56
Yes, the laptop would get the image input and process it, but you would then have to transfer that data to the cRio and then the cRio has to move the motors and ect. I honestly do not think one has to invest a $100 for a kinect but rather just invest $20 on a pair of low end webcams.
David, I would be very interested in seeing you develop a system out of two webcams that provides the kinect's capabilities. I suspect the engineers at any of the large console suppliers like Sony, Microsoft, and Nintendo would be equally as interested.
Max, as I understand it, the IR lasers extend out in a spread pattern. They provide full depth information at thousands of points and combine that with full color video. The Kinect does not just take 2-dimensional pictures of things. It provides distances, widths, and heights, all at a speed that can record high-speed movement in real time and reconcile that with a full color picture. This system could be used in FIRST to emulate the lidar used on so many DARPA vehicles for full-field navigation.
Indeed, David, this system would be ideal for a fully autonomous robot. How's that going, by the way?
The software and hardware has already been developed by thousands of engineers over a number of years, using technology by many different countries. It works in ANY ambient light condition because of the IR - even in nearly pitch black. It's accurate from around 1.5 feet to 20 feet - 1/3 of one of our fields.
Max, that's really pretty exciting stuff. The year you put one of these on the robots and use it in autonomous is the year you win the innovation award available at every competition you attend. Awesome. I'm sorry to see FIRST rule it out this year, though I can't see that rule in place for long.
I don't think I'd use it for driver station control, but the Kinect is an incredibly exciting robotics sensor. I would really like to use it in future years. I really hope next year we can make this happen.
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.