Running the Kinect on the Robot.
I wanted to get some discussion going about the possibility of running the Kinect on the robot instead of on the DriverStation. I think this would open up some really cool possibilities for the robots on the field and off it.
So let's start with the obvious: the Kinect itself. It has a 640x480 RGB camera and a 640x480 depth camera, a motor to adjust it up and down about 90 degrees total, and internal accelerometers. The cameras have a field of view of 57 degrees horizontally by 43 degrees vertically. This is a very cool piece of technology that I hope we can use to its full potential, and I feel like using it only as a control mechanism for the drivers just isn't right. Either FIRST isn't telling us everything (shocker) or this really just isn't that thought out. But let's ignore all that for a second.

First, is this even legal? Yes. On http://www.usfirst.org/roboticsprograms/frc/kinect the question "Can I put the Kinect on my robot to detect other robots or field elements?" was asked and got this answer: "While the focus for Kinect in 2012 is at the operator level, as described above, there are no plans to prohibit teams from implementing the Kinect sensor on the robot." It seems like they're just leaving it open for those teams who are smart enough to figure out how. And that's the problem.

My first question is how to get it connected properly on the robot. The first thing we need is power. It's USB, so it should just be 5 volts, which won't be a problem. Next is connectivity. We need a USB host device, unless anybody here wants to re-implement the USB protocol from scratch. I'm also wondering if any rule-savvy people here know what kinds of things we can put on the robot. I was thinking it would be best to put something like an Arduino on the robot that would handle all the image manipulation or point detection and send just the results back to the DriverStation, either over the network or through a digital input. Does anybody know if that's legal? We will most likely have to write the USB communication code ourselves if we are to run on an embedded device. We can use the protocol documentation at http://openkinect.org/wiki/Protocol_Documentation to figure out how to handle everything.
If anybody has knowledge of low-level USB and drivers, it would be appreciated. And lastly, we need to be able to access this data in a timely fashion and react quickly. I don't know what's new this year with the DriverStation, but it would be cool if we could use the Kinect as our camera. I think this is legal if we don't modify the DriverStation code, so the device would have to act as an IP camera that responds to the same commands as the current cameras. It would also have to communicate all the data points needed for autonomous code. I think this potential route was left here on purpose so we could create some cool stuff with it. If anybody has any ideas, experience, or anything they think might help, contributions would be greatly appreciated. Happy New Year, Luke Young Programming Manager, Team 2264
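The "process on the robot, send only results back" idea above can be sketched in a few lines. This is a minimal illustration, not anything FIRST or WPILib defines: the packet layout, function names, and the destination address/port are all assumptions for the sake of the example (a real coprocessor loop would get its coordinates from the Kinect driver).

```python
# Hedged sketch: a coprocessor encodes one detection result as a small
# fixed-size packet and fires it at the DriverStation over UDP, instead
# of shipping raw images. All names and numbers here are illustrative.
import socket
import struct

def encode_target(x, y, depth_mm):
    """Pack one detected point as three signed 32-bit ints, network byte order."""
    return struct.pack("!iii", x, y, depth_mm)

def decode_target(payload):
    """Inverse of encode_target, for the receiving (DriverStation) side."""
    return struct.unpack("!iii", payload)

def send_target(sock, addr, x, y, depth_mm):
    """Fire-and-forget one result datagram at the given (host, port)."""
    sock.sendto(encode_target(x, y, depth_mm), addr)
```

A driver-station-side listener would just `recvfrom` and call `decode_target` on each 12-byte datagram.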
Re: Running the Kinect on the Robot.
From what I remember, the reason it's not easy is that the cRIO firmware does not include support for the USB card, which would be the only way of integrating it directly, as far as I know. So you would have to add that to the updates to be able to use it, and then run it on the cRIO... It'd be cool, but you might be spending a ton of time on it.
Re: Running the Kinect on the Robot.
I remember that there was a whole discussion of this last year... Assuming they don't change the rules regarding the legality of non-KOP motors, you would have to modify the Kinect to take the tilt/pan motors out. And I'm telling you, an Arduino does not have enough horsepower for image processing. You need a full-on FPGA or an ARM (Cortex-A9 or similar) processor on there. It's simply not feasible on an Arduino; it doesn't even have enough memory to pass the image on to the cRIO.
Also, from the sounds of it, you just want somebody else to do all the hard work for you. I don't know, man; if you want it, figure it out on your own. Just don't get your hopes up that someone will go out of their way to get this working, and do not plan on having it on the robot at all. The last thing I want to see is you designing your robot around this thing, which has never been done, and waiting for someone else to deliver it for you. I know how that feels; it happened to us last year: the shipment of pneumatic actuators came the week before competition, and we had already shipped the robot off. Here are the threads: http://www.chiefdelphi.com/forums/sh...ad.php?t=89101 http://www.chiefdelphi.com/forums/sh...ad.php?t=87803 If you are up for it, go for it. Just GO. That was my issue last year: I never just "did".
Re: Running the Kinect on the Robot.
Hi guys.
My team did beta testing for the Kinect. Here is the thread for our Kinect beta presentation: http://www.chiefdelphi.com/forums/sh...ight=team+2903 We discussed the idea of using the Kinect on the robot. It will most likely be used just as a driving mechanism (although the rules don't directly prohibit use on the robot). Getting it to work with the cRIO would be very difficult. Good luck with your ambitious endeavors!
Re: Running the Kinect on the Robot.
An Arduino was my first thought, to pass the values from USB to serial. Or could we pass the values from USB to Ethernet (and only process them on the DS side)?
Couldn't one put a small laptop on the robot to connect to the Kinect's USB, then connect the laptop to the cRIO?
Re: Running the Kinect on the Robot.
If parts-utilization rules remain similar to how they have been in the recent past, you could conceivably use a Gumstix processor and breakout board to interface with the Kinect, and then communicate with the cRIO over an Ethernet connection (through the switch). Since the Gumstix runs a fairly full-featured version of Linux, you can use the OpenNI driver to talk with the Kinect and get the RGB and/or depth images, and then send them over to the cRIO. I have heard that with the newer Gumstix, it is possible to do this in real time with ~70% (Gumstix) CPU usage.
This route would require that you (a) figure out how to power the Kinect (feasible), (b) write an application for the Gumstix that uses the OpenNI driver to get the data of your choice and serialize it over Ethernet, (c) write networking code on the cRIO side to receive your data, and (d) write image-processing code to do something with it. It is definitely doable, but (c) and (d) would require careful attention to make sure you aren't overwhelming your cRIO. It would also cost upwards of $400 for the Gumstix and the desired I/O breakout board.
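Steps (b) and (c) above boil down to agreeing on a wire format between the Gumstix and the cRIO. One common approach over TCP is length-prefixed framing, sketched below. To be clear, this is a hedged illustration: the 2-byte header and helper names are my own assumptions, not part of the OpenNI driver or WPILib (and the cRIO side would actually be written in LabVIEW/C++/Java, with the same logic).

```python
# Hedged sketch: length-prefixed framing so the receiver can find
# message boundaries in a TCP byte stream. The 2-byte big-endian
# length header is an arbitrary illustrative choice.
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a payload with its length as an unsigned 16-bit big-endian int."""
    return struct.pack("!H", len(payload)) + payload

def unframe(buf: bytes):
    """Return (payload, remaining_bytes), or (None, buf) if a full frame
    has not arrived yet."""
    if len(buf) < 2:
        return None, buf
    (n,) = struct.unpack("!H", buf[:2])
    if len(buf) < 2 + n:
        return None, buf
    return buf[2:2 + n], buf[2 + n:]
```

The receiver keeps appending received bytes to a buffer and calling `unframe` until it returns `None`, which handles TCP's tendency to split or merge sends.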
Re: Running the Kinect on the Robot.
The academic robotics group at NI did have a Kinect mounted and running on a SuperDroid chassis for a while. Here are some additional considerations.
Power: The Kinect is not a five-volt USB device. The cable that comes from the Kinect has an Xbox-shaped connector that will not plug straight into a laptop or other USB ports. Connecting to a PC or laptop requires an adapter cable that changes the shape of the connector and plugs into a 110 V AC outlet. I believe it provides about 12 watts at 12 volts DC to the Kinect. Not a huge deal, but not normal USB plug-and-play either. I have no experience to predict how the Kinect would behave in low-voltage situations.

Mounting: The Kinect mechanicals were intended to be mounted in a stationary position. Supporting the sensor bar to isolate it from shake and vibration is something to consider. The academic team mentioned above eventually mounted theirs upside down. Also, the servos that connect the bar to the base are not rated for continuous use.

Cameras: The color camera on the Kinect has resolutions of 1280x1024 compressed, 640x480, and 320x240; the lower resolutions are not compressed. The IR camera supports 320x240, 160x120, and 80x60, uncompressed. The color format, at least through the MS drivers, is often 32-bit xRGB, but there is some support for 16-bit YUV. Depth data has 13-bit resolution, and the drivers sometimes combine 3-bit player info into it. To transfer video to the DS, compression is likely needed.

Drivers and Control: Driver options are MS or OpenNI (not related to National Instruments, but to Natural Interface). The MS drivers require Win7.

Interference: The Kinect depth sensor works by projecting a patterned IR light image in front of the sensor bar, viewing the light patterns that return to the IR camera, and processing the data to map distortions in the pattern to 3D depth values. To work reliably, the IR camera needs to be able to measure the light dots. Other IR light projected onto the field, whether from other Kinects, spotlights, or other lighting, may cause interference.

Hope this info helps. Greg McKaskle
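The "13-bit depth combined with 3-bit player info" packing Greg describes can be unpacked with simple bit operations. The sketch below assumes the layout the MS drivers use (depth in the high 13 bits, player index in the low 3 bits of each 16-bit pixel); treat it as an illustration to verify against the driver docs, not a definitive decoder.

```python
# Hedged sketch: split one 16-bit Kinect depth pixel into its 13-bit
# depth value and 3-bit player index, assuming depth occupies the high
# bits and the player index the low 3 bits.
def split_depth_pixel(raw16):
    depth_mm = raw16 >> 3   # 13-bit depth reading
    player = raw16 & 0x7    # 3-bit player index (0 = no tracked player)
    return depth_mm, player
```

Whether the depth value is in millimeters, and whether player info is present at all, depends on the driver and stream mode you request.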
Re: Running the Kinect on the Robot.
Here are some on-robot Kinect resources that could also be helpful:
http://www.atomicrobotics.com/2011/1...12-frc-robots/ http://www.atomicrobotics.com/2011/10/link-more/ Also, here is a crazy Kinect application that is just cool: http://www.youtube.com/watch?v=pxoL4bnLp0g
Re: Running the Kinect on the Robot.
My Two Cents:
The Kinect is not plain USB; it requires a secondary power source. The cRIO's USB card is only compatible with the USB mass-storage protocol, for storing information to flash drives and the like.

My take on how to connect the Kinect to the cRIO: using a computer or netbook, take in the information from the Kinect and process out the necessary values (target x, target y, target depth). Then, using a USB-to-serial adapter, output the processed data directly into the cRIO, and the cRIO can control the motors. Sample string: "X:0,Y:0,Z:0". This way, the massive amount of data coming out of the Kinect does not have to be processed by the cRIO. Under this approach, the computer would be considered a Custom Circuit, and thus cannot control any actuators directly.
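Parsing the sample string above on the receiving side is straightforward. A hedged sketch follows; the cRIO would actually run LabVIEW/C++/Java, so the Python here is only to pin down the string format, and `parse_target` is a name I made up.

```python
# Hedged sketch: parse the "X:0,Y:0,Z:0" sample string from the post
# above into integer fields the robot code could act on.
def parse_target(line):
    """Parse e.g. 'X:120,Y:-45,Z:3000' into {'X': 120, 'Y': -45, 'Z': 3000}."""
    result = {}
    for field in line.strip().split(","):
        key, value = field.split(":")
        result[key] = int(value)
    return result
```

On a real serial link you would also want a terminator (e.g. newline) and a checksum, since bytes from a USB-serial adapter can arrive corrupted or mid-message.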
Re: Running the Kinect on the Robot.
I agree: doing the processing with a local coprocessor is the way to go. You need additional electronics no matter what to deal with pulling the data, so why not spend a bit more and throw a whole Linux system at it?
There are a bunch of low-cost ARM-based boards out there that can act as USB hosts. The PandaBoard, BeagleBoard, and BeagleBone are all TI OMAP (TI's mobile system-on-chip offering) dev boards. I assume they have enough horsepower to do the necessary CV on the depth maps, but I wouldn't use one without doing a bit more research.
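To give a feel for how lightweight the per-frame CV can be if you only need one answer out of the depth map, here is a hedged sketch of a "find the nearest object" pass. It is pure Python over a list-of-lists for illustration only; on a real board you would use an optimized library (e.g. OpenCV) over the raw depth buffer.

```python
# Hedged sketch: scan a depth frame for the closest valid reading.
# Pixels with value 0 are treated as "no reading", which is how the
# Kinect commonly reports unknown depth.
def nearest_point(depth_frame):
    """Return (row, col, depth) of the closest valid pixel, or None if
    the frame contains no valid readings."""
    best = None
    for r, row in enumerate(depth_frame):
        for c, d in enumerate(row):
            if d > 0 and (best is None or d < best[2]):
                best = (r, c, d)
    return best
```

Even this naive double loop is only ~300k comparisons per 640x480 frame, which is why a modest ARM board is plausible for result-only processing even if full image streaming is not.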
Re: Running the Kinect on the Robot.
Um, did anybody ask if it's the standard Kinect? We keep assuming standard, and therefore USB, but they might be special ones that connect directly to the cRIO. Then again, if it is meant for "the operator level", USB is fine for the driver station laptops. That could tell us a lot. But six weeks doesn't leave a lot of time for cRIO USB conversions. Whatever you'd do with a Kinect, you could probably do with something similar and easier to attach. It might not be worth the time.
Re: Running the Kinect on the Robot.
Rumor has it that a board with an only slightly less powerful CPU, the BeagleBoard, managed only single-digit frame rates using the Kinect. It's only a report on the interwebs, but it does back up the claim that you have to be careful. That said, I think that if it can be managed, the Kinect could be an awesome sensor on a FIRST robot (find a ball, find the floor, find a wall, find the corner... get ball, put into corner...). It is going to happen; I am just not sure it's this year (or, if it is, only a handful of teams will manage it, imho). Joe J.
Re: Running the Kinect on the Robot.
So it appears that I got some of the Kinect specs wrong. I apologize.
So I was thinking: from the looks of the beta stuff, the Kinect libraries are just wrappers for the official SDK, which runs on Windows... And that led me to the Classmate. Could we just throw our Classmate on the robot to act as a proxy between the cRIO and the Kinect? It could also handle the image processing. Assuming we could power it, keep it safe, and stay under the 120 lb limit, this may be the best option. It already runs Windows 7, so it will be compatible with the official SDK. We would then use our own laptop for driving. This is probably the cheapest (free) option for us. Does that sound like something FIRST would allow? I think it's legal now, but they may release an update to stop it if it becomes popular.
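The Classmate-as-proxy idea amounts to a small relay program: read processed results on the laptop, forward a compact message to the cRIO over the network. A hedged sketch is below. The official Kinect SDK targets C#/C++, so the Python here is purely illustrative; `format_joints` and the wire format are inventions for the example, and the joint data would really come from the SDK's skeleton stream.

```python
# Hedged sketch: laptop-side relay that serializes joint (or target)
# positions into one compact text line and pushes it to a listener on
# the cRIO side over TCP. All names and the format are illustrative.
import socket

def format_joints(joints):
    """Serialize {'head': (x, y, z), ...} as 'head:0.1,0.2,2.3;...',
    with joints in sorted order for a stable layout."""
    return ";".join(
        "%s:%g,%g,%g" % (name, x, y, z)
        for name, (x, y, z) in sorted(joints.items())
    )

def relay(host, port, joints):
    """Open a TCP connection to the cRIO-side listener and send one update."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall((format_joints(joints) + "\n").encode("ascii"))
```

A real relay would keep one connection open and send a line per SDK frame rather than reconnecting each time.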
Copyright © Chief Delphi