Running the Kinect on the Robot.

I wanted to get some discussion going about the possibility of running the Kinect on the robot instead of on the DriverStation. I think this would open up some really cool possibilities for the robots on the field and off it.

So let’s start with the obvious: the Kinect itself. It has a 640x480 RGB camera and a 640x480 depth camera. It has a motor to adjust up and down about 90 degrees total. It also has internal accelerometers. The cameras have a field of view of 57 degrees horizontally by 43 degrees vertically.

This is a very cool piece of technology that I hope we can use to its full potential. And I feel like using it only as a control mechanism for the drivers just isn’t right. Either FIRST isn’t telling us everything (shocker) or this really just isn’t that well thought out. But let’s ignore all that for a second.

First, is this even legal? Yes. From http://www.usfirst.org/roboticsprograms/frc/kinect the question “Can I put the Kinect on my robot to detect other robots or field elements?” was asked and got this answer: “While the focus for Kinect in 2012 is at the operator level, as described above, there are no plans to prohibit teams from implementing the Kinect sensor on the robot.” It seems like they’re just leaving it open for those teams who are smart enough to figure out how. And that’s the problem.

My first question is how to get it connected properly on the robot. The first thing we need is power. It’s USB, so it should just be 5 volts, which won’t be a problem. Next is connectivity. We need a USB host device, unless anybody here wants to re-implement the USB protocol from scratch. And I’m wondering if any rule-savvy people here know what kind of things we can put on the robot. I was thinking it would be best to put something like an Arduino on the robot that would handle all the image manipulation or point detection and would send the results back to the DriverStation, either over the network or through a digital input. Does anybody know if that’s legal?

We will most likely have to write the USB communication ourselves if we are to run on an embedded device. We can use the protocol documentation from http://openkinect.org/wiki/Protocol_Documentation to figure out how to handle everything. If anybody has knowledge of low-level USB and drivers, it would be appreciated.
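As a rough starting point, here is a minimal sketch of what talking to the Kinect over raw USB might look like with libusb-1.0 on an embedded Linux board. The vendor/product IDs are the ones the openkinect docs list for the camera device, and everything past opening the device is left as a stub, so treat this as an assumption-laden outline rather than a working driver.

```cpp
// Minimal libusb-1.0 sketch: find and open the Kinect camera device.
// VID 0x045e is Microsoft; PID 0x02ae is the camera (0x02b0 motor, 0x02ad
// audio) per the openkinect protocol docs -- verify against your unit.
#include <libusb-1.0/libusb.h>
#include <cstdio>

int main() {
    libusb_context *ctx = NULL;
    if (libusb_init(&ctx) != 0) {
        std::fprintf(stderr, "libusb init failed\n");
        return 1;
    }

    libusb_device_handle *cam =
        libusb_open_device_with_vid_pid(ctx, 0x045e, 0x02ae);
    if (!cam) {
        std::fprintf(stderr, "Kinect camera not found (is it powered?)\n");
        libusb_exit(ctx);
        return 1;
    }

    // From here you would claim interface 0 and start submitting the
    // isochronous transfers the protocol documentation describes.
    libusb_claim_interface(cam, 0);
    std::printf("Kinect camera opened\n");

    libusb_release_interface(cam, 0);
    libusb_close(cam);
    libusb_exit(ctx);
    return 0;
}
```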

And lastly, we need to be able to access this data in a timely fashion and react quickly. I don’t know what’s new this year with the DriverStation, but it would be cool if we could use the Kinect as our camera. I think this is legal if we don’t modify the DriverStation code. The device would have to act as an IP camera that responds to the same commands as the current cameras. And it would have to communicate all the data points that are needed for autonomous code.
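For the “act like an IP camera” idea, the existing Axis cameras serve motion JPEG over HTTP, so the coprocessor would have to speak that multipart format. Below is a rough, assumption-heavy sketch of just the framing; getJpegFrame() is a hypothetical helper that returns one JPEG-compressed Kinect frame, and I haven’t checked exactly which URL the dashboard requests, so request parsing is omitted.

```cpp
// Sketch of MJPEG-over-HTTP framing (the format the Axis cameras use).
// 'client' is an already-accepted TCP socket; getJpegFrame() is hypothetical.
#include <cstdio>
#include <cstring>
#include <sys/socket.h>
#include <vector>

std::vector<unsigned char> getJpegFrame();  // hypothetical: one compressed frame

void streamMjpeg(int client) {
    const char *header =
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: multipart/x-mixed-replace; boundary=frame\r\n\r\n";
    send(client, header, std::strlen(header), 0);

    for (;;) {
        std::vector<unsigned char> jpeg = getJpegFrame();
        char part[128];
        int n = std::snprintf(part, sizeof(part),
                              "--frame\r\n"
                              "Content-Type: image/jpeg\r\n"
                              "Content-Length: %zu\r\n\r\n", jpeg.size());
        send(client, part, n, 0);
        send(client, &jpeg[0], jpeg.size(), 0);
        send(client, "\r\n", 2, 0);
    }
}
```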

I think this potential route was left open on purpose so we could create some cool stuff with it. If anybody has any ideas, experience, or anything else they think might help, contributions would be greatly appreciated.

Happy New Year,
Luke Young
Programming Manager, Team 2264

From what I remember, the reason it’s not easy is that the cRIO firmware does not include support for a USB card, which would be the only way of integrating it as far as I know. So you would have to add that to the updates to allow you to use it, and then run it on the cRIO… it’d be cool, but you might be spending a ton of time on it.

I remember that there was a whole discussion of this last year… Assuming they will not change the rules regarding the legality of non-KOP motors, you would have to modify the Kinect to take the tilt motor out. I’m telling you, an Arduino does not have enough horsepower for image processing. You need a full-on FPGA or ARM (Cortex-A9 or something) processor on there. It’s mathematically not possible; an Arduino does not even have enough memory to pass the image on to the cRIO.

Also, from the sounds of it, you just want somebody else to do all the hard work for you. I don’t know, man; if you want it, you figure it out on your own.

Just don’t get your hopes up that someone will go out of their way to get this working. Do not plan on having it on the robot at all. The last thing I want to see is you designing your robot around this thing, which has not been done before, while waiting for someone else to deliver it for you. I know how that feels; this happened to us last year. The shipment of pneumatic actuators came the week before competition, and we had already shipped the robot off.

Here are the threads:

If you are up for it, go for it. Just GO. That was my issue last year; I never just “did”.

Hi guys.

My team did beta testing for the Kinect.

Here is the thread for our Kinect Beta Presentation:
http://www.chiefdelphi.com/forums/showthread.php?t=98473&highlight=team+2903

We discussed this idea about the Kinect being used on the robot. It will most likely be used just as a driving mechanism (although the rules don’t prohibit use on the robot directly).
Getting it to work with the cRIO would be very difficult. Good luck with your ambitious endeavors!

An Arduino was my first thought, to pass the values from USB to serial. Or could we pass the values from USB to Ethernet (and only process them on the DS side)?

Couldn’t one put a small laptop on the robot to connect to the Kinect’s USB, and then connect the laptop to the cRIO?

If parts utilization rules remain similar to how they have been in the recent past, you could conceivably use a Gumstix processor and breakout board to interface with the Kinect, and then communicate with the cRIO over an Ethernet connection (through the switch). Since the Gumstix runs a fairly full-featured version of Linux, you can use the OpenNI driver to talk with the Kinect and get the RGB and/or depth images, and then send them over to the cRIO. I have heard that with the newer Gumstix, it is possible to do this in real time with ~70% (Gumstix) CPU usage.

This route would require that you (a) figure out how to power the Kinect (feasible), (b) write an application for the Gumstix to use the OpenNI driver to get the data of your choice and serialize it over Ethernet, (c) write networking code on the cRIO side to receive your data, and (d) write image processing code to do something with it. It is definitely doable, but (c) and (d) would require careful attention to make sure you aren’t overwhelming your cRIO. It would also cost you upwards of $400 for both the Gumstix and the desired I/O breakout board.
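To make step (b) concrete, here is a hedged sketch of what the coprocessor side could look like: grab a depth frame through the OpenNI 1.x C++ wrapper, boil it down to one “nearest object” reading, and push a small text message to the cRIO over TCP. The OpenNI calls are from memory (check XnCppWrapper.h on your board), the 10.22.64.2 address just follows the 10.TE.AM.2 convention for our team number, and the port is a placeholder you would have to check against the allowed-ports rules.

```cpp
// Coprocessor-side sketch: OpenNI depth frame -> simple processing -> TCP.
// Assumes the default 640x480 depth output mode.
#include <XnCppWrapper.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    xn::Context context;
    context.Init();
    xn::DepthGenerator depth;
    depth.Create(context);
    context.StartGeneratingAll();

    // Plain TCP connection to the cRIO (address/port are placeholders).
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in crio = {};
    crio.sin_family = AF_INET;
    crio.sin_port = htons(1180);
    inet_pton(AF_INET, "10.22.64.2", &crio.sin_addr);
    connect(sock, (sockaddr *)&crio, sizeof(crio));

    while (true) {
        context.WaitOneUpdateAll(depth);
        const XnDepthPixel *map = depth.GetDepthMap();  // 16-bit, millimeters

        // Trivial "processing": find the nearest non-zero depth pixel.
        int bestIdx = 0;
        XnDepthPixel best = 0xFFFF;
        for (int i = 0; i < 640 * 480; i++) {
            if (map[i] != 0 && map[i] < best) { best = map[i]; bestIdx = i; }
        }

        char line[64];
        int n = std::snprintf(line, sizeof(line), "X:%d,Y:%d,Z:%d\n",
                              bestIdx % 640, bestIdx / 640, best);
        send(sock, line, n, 0);
    }
}
```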

The academic robotics group at NI did have a Kinect mounted and running on a SuperDroid chassis for a while. Here are some additional considerations.

Power:
The Kinect is not a five-volt USB device. The cable that comes from the Kinect has an Xbox-shaped connector that will not plug straight into a laptop or other USB ports. To connect to a PC or laptop requires an adapter cable that changes the shape of the connector and plugs into 110 V AC. I believe it provides about 12 watts at 12 volts DC to the Kinect. Not a huge deal, but not normal USB plug-and-play either. I have no experience to predict how the Kinect would behave in low-voltage situations.

Mounting:
The Kinect mechanicals were intended to be mounted in a stationary position. Supporting the sensor bar to isolate it from shake and vibration is something to consider. The academic team mentioned above eventually mounted theirs upside down. Also, the servos that connect the bar to the base are not for continuous use.

Cameras:
The color camera on the Kinect has resolutions of 1280x1024 compressed, 640x480, and 320x240. The lower resolutions are not compressed. The IR camera supports 320x240, 160x120, and 80x60 uncompressed. The color format, at least through the MS drivers, is often 32-bit xRGB, but there is some support for 16-bit YUV. Depth data is 13-bit resolution, and the drivers sometimes combine 3-bit player info into it. To transfer video to the DS, compression is likely needed.
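If the depth data does arrive in that packed layout (this is how the MS SDK lays it out when player tracking is enabled: player index in the low 3 bits, depth in millimeters in the upper 13), unpacking it is just a couple of bit operations, roughly like this:

```cpp
// Unpack a 16-bit "depth + player index" sample (MS SDK-style packing).
#include <cstdint>

struct DepthSample {
    uint16_t depth_mm;  // 13-bit depth in millimeters
    uint8_t  player;    // 0 = no player, 1-6 = tracked player index
};

inline DepthSample unpack(uint16_t raw) {
    DepthSample s;
    s.player   = raw & 0x0007;  // low 3 bits
    s.depth_mm = raw >> 3;      // upper 13 bits
    return s;
}
```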

Drivers and Control:
Driver options are MS or OpenNI (not related to National Instruments, but to Natural Interface). MS drivers require Win7.

Interference:
The Kinect depth sensor works by projecting an IR-wavelength patterned light image in front of the sensor bar, viewing the light pattern that returns to the IR camera, and processing the data to map distortions in the pattern to 3D depth values. To work reliably, the IR camera needs to be able to measure the light dots. Other IR light projected onto the field, by other Kinects, by spotlights, or by other lighting, may cause interference.

Hope this info helps.
Greg Mckaskle

Here are some on-robot Kinect resources that could also be helpful.

http://www.atomicrobotics.com/2011/11/kinects-2012-frc-robots/
http://www.atomicrobotics.com/2011/10/link-more/

Also, here is a crazy Kinect application that is just cool: http://www.youtube.com/watch?v=pxoL4bnLp0g

My Two Cents:
The Kinect is not directly USB; it requires a secondary power source.

The cRIO card is only compatible with the USB Mass Storage Protocol, for storing information to flash drives and the like.

My take on how to connect the Kinect to the cRIO:

Using a computer or netbook, take in the information from the Kinect and process it down to the necessary information (target x, target y, target depth).
Then, using a USB-Serial adapter, output the processed data directly into the cRIO, and then the cRIO can control the motors.
Sample string: “X:0,Y:0,Z:0”
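On the cRIO side, a string in that format is easy to pick apart; a minimal sketch (assuming something like WPILib’s SerialPort class hands you one complete line at a time) could be:

```cpp
// Parse one "X:..,Y:..,Z:.." message from the USB-serial adapter.
#include <cstdio>

bool parseTarget(const char *line, int &x, int &y, int &z) {
    return std::sscanf(line, "X:%d,Y:%d,Z:%d", &x, &y, &z) == 3;
}

// Example: parseTarget("X:320,Y:240,Z:1500", x, y, z) fills in the target's
// pixel coordinates and its depth, ready for the drive code to act on.
```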

This way, the massive amount of data being output from the Kinect does not have to be processed by the cRIO.

Under this style, the computer would be considered a Custom Circuit, and thus cannot control any other actuators.

I agree, doing the processing with a local coprocessor is the way to go. You need additional electronics no matter what to deal with pulling the data, so why not spend a bit more and throw a whole Linux box at it?

There are a bunch of low-cost ARM-based boards out there that can act as USB hosts. The PandaBoard, BeagleBoard, and BeagleBone are all TI OMAP (TI’s mobile system-on-chip offering) dev boards. I assume they have enough horsepower to do the necessary CV on the depth maps, but I wouldn’t use one without doing a bit more research.

Um, did anybody ask if it’s the standard Kinect? We keep thinking standard, and therefore USB, but they might be special ones that connect directly to the cRIO. Plus, if it is meant for “the operator level”, USB is fine for the driver station laptops. That could tell us a lot. But six weeks doesn’t leave a lot of time for cRIO USB conversions. Whatever you’d do with a Kinect, you could probably do with something similar and easier to attach. Might not be worth the time.

Our team was looking into the PandaBoard. I know some folks who are using the PandaBoard with the PrimeSense sensor in the Kinect. To say the PandaBoard has “enough horsepower” is a judgement call. From what I hear, yes, you can get the Kinect driver to work and you can (of course) get OpenCV to run under a Linux OS, but you have to be smart. It is easy to use up all of the processor’s horsepower.

Rumor has it that a board with only a slightly less powerful CPU, the BeagleBoard, managed only single-digit frame rates using the Kinect. Only a report on the interwebs, but it does back up the claim that you have to be careful.

That said, I think that if it can be managed, the Kinect could be an awesome sensor on a FIRST robot (find a ball, find the floor, find a wall, find the corner… …get ball, put into corner…). It is going to happen. I am not sure if it is this year, though (or if it is, it will be only a handful of teams that manage it, imho).

Joe J.

So it appears that I got some of the Kinect specs wrong. I apologize.

So I was thinking. From the looks of the Beta stuff, the Kinect libraries are just wrappers for the official SDK, which runs on Windows… And that led me to the Classmate. Could we just throw our Classmate on the robot to act as a proxy between the cRIO and the Kinect? It could also handle the processing of images. Assuming we could power it, keep it safe, and stay under the 120 lb limit, this may be the best option. It already runs Windows 7, so it will be compatible with the official SDK. We would then use our own laptop for driving. This is probably the cheapest (free) option for us. Does that sound like something FIRST would allow? I think it’s legal now, but they may release an update to stop it if it becomes popular.

I never really thought of this, and I doubt anyone else has: putting a laptop on the robot to act as a proxy. I’m not sure there are any rules regarding it, though I would look into that ASAP before acting on this. If you could create a simple TCP link between the machine on the bot and the control laptop, you could do anything. Though it could get complicated, and the link may get bogged down with data. I have a feeling it may still get too bogged down and sluggish during a competition, which is why I wouldn’t recommend it, though anything’s worth a try. You can do nothing but learn from the experience.

As far as I know, most of the depth perception is done on the Kinect itself; it is just transferring the data and images to the PC or 360. Now, you have to realize, you would have to find a way to power the laptop. Additional batteries are not allowed.

5 volts from the power distribution shouldn’t be a problem as far as I know. Though I’m not the guy wiring it all, so I don’t know the rules about that sort of thing.

-The Kinect we are getting is a standard Kinect, including the AC adapter and cable thingy to connect directly to USB (you would probably need a 12v regulator for the robot)

-I would go with a single-board computer running Linux, and send the data to the cRIO via IP. You could send debug data to the driver station while you’re at it, if you wanted to. I would probably get all of the useful information out of the image on the co-processor, and feed coordinates or other numerical data back to the robot at the highest rate possible.

-A laptop running Win7 will have (comparatively) high system requirements next to an embedded single-board Linux machine, where you aren’t running a GUI at all and you can trim the background processes to just what you need.

-A laptop is very heavy. Just throwing that out there.

-As for powering a laptop or other machine, I would probably get an automotive power supply and feed it off of a 12 V regulator, since the robot battery can sag fairly low. Laptop chargers usually run above 12 V anyway (the one in front of me is 18.5 V), so you would need a boost converter regardless.

-The FRC Kinect stuff wraps around the Microsoft Kinect SDK (which only runs on Win7) and feeds data to the robot via the Driver Station, including all 20 skeletal coordinates that are tracked. To use the Kinect on the DS, you do not have to write ANY Driver Station-side code; the data is all passed to the robot.

I was thinking about this as an option. We have some sponsors who could probably get us some custom devices to do this if we intend to run Linux. I was just suggesting the laptop because of the simplicity of setup. Though I agree the GUI, and Windows in general, are memory and CPU hogs. Linux would be best, but could prove to have issues since we would be using a non-official SDK.

Does anybody know what the Classmate’s power supply is? (Even running Ubuntu on it would be an improvement.)

I was only wiring for half a summer, but I believe there is a DC-DC step-up as part of the standard wiring board. It should be the 2.5"-ish square block covered with heatsink fins. I think it pulses the straight battery voltage through an inductor and regulates 24 V out. I do not know how much current you could pull from this thing, and I do not remember what it’s actually used for, but you should be able to solder up a step-down circuit to take this 24 V to laptop voltage (17-18ish?) in about 10 minutes with an LM317 and an outboard pass transistor (maybe the MJ2955 if you want overkill safety without heavy heatsinking).
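For reference, the resistor math for that LM317 step-down is just the standard datasheet formula (ignoring the small adjust-pin current); the resistor values below are only an example pair that lands near laptop voltage:

```latex
V_{\text{out}} = 1.25\,\text{V}\left(1 + \frac{R_2}{R_1}\right)
\qquad \text{e.g. } R_1 = 240\,\Omega,\ R_2 \approx 3.1\,\text{k}\Omega
\;\Rightarrow\; V_{\text{out}} \approx 17.4\,\text{V}
```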

On the topic of image recognition, is there any pre-existing software (especially Linux software?) to determine the shape of “color” (IR distance) blobs in an image? It seems like if you could see a blob and determine how far away it was on average (and therefore its actual height), you should be able to easily detect other robots/structures on the field.
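Yes, OpenCV handles exactly this and runs fine on Linux. A hedged sketch of the idea (assuming the depth frame is already sitting in a 16-bit-per-pixel cv::Mat of millimeter values, and using an arbitrary 0.4-1.5 m band as the example threshold):

```cpp
// Find blobs in a Kinect depth frame and report their size and avg distance.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

void findBlobs(const cv::Mat &depth) {  // depth: CV_16UC1, millimeters
    cv::Mat mask;
    cv::inRange(depth, cv::Scalar(400), cv::Scalar(1500), mask);  // 0.4-1.5 m

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); i++) {
        if (cv::contourArea(contours[i]) < 500) continue;  // ignore specks

        // Average depth inside the blob gives its distance; combined with its
        // pixel height you can estimate the real-world height of the object.
        cv::Mat blobMask = cv::Mat::zeros(depth.size(), CV_8UC1);
        cv::drawContours(blobMask, contours, (int)i, cv::Scalar(255), CV_FILLED);
        double meanDepth = cv::mean(depth, blobMask)[0];

        cv::Rect box = cv::boundingRect(contours[i]);
        std::printf("blob: %dx%d px, avg %.0f mm away\n",
                    box.width, box.height, meanDepth);
    }
}
```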

As to whether having your robot autonomously see other robots/tall game objects will be useful this year… that’s still up for grabs until Saturday. :smiley: