
Running the Kinect on the Robot


innoying
05-01-2012, 01:31
I wanted to get some discussion going about the possibility of running the Kinect on the robot instead of on the DriverStation. I think this would open up some really cool possibilities for the robots on the field and off it.

So let's start with the obvious: the Kinect itself. It has a 640x480 RGB camera and a 640x480 depth camera. It has a motor to adjust up and down about 90 degrees total. It also has internal accelerometers. The cameras have a field of view of about 57 degrees horizontally by 43 degrees vertically.

This is a very cool piece of technology that I hope we can use to its full potential. And I feel like using it as a control mechanism by the drivers just isn't right. Either FIRST isn't telling us everything (shocker) or this really just isn't that thought out. But let's ignore all that for a second.

First, is this even legal? Yes. From http://www.usfirst.org/roboticsprograms/frc/kinect the question "Can I put the Kinect on my robot to detect other robots or field elements?" was asked and got this answer: "While the focus for Kinect in 2012 is at the operator level, as described above, there are no plans to prohibit teams from implementing the Kinect sensor on the robot." It seems like they're just leaving it open for those teams who are smart enough to figure out how. And that's the problem.

My first question is how to get it connected properly on the robot. The first thing we need is power. It's USB, so it should just be 5 volts, which won't be a problem. Next is connectivity. We need a USB host device unless anybody here wants to re-implement the USB protocol from scratch. And I'm wondering if any rule-savvy people here know what kinds of things we can put on the robot. I was thinking it would be best to put something like an Arduino on the robot that would handle all image manipulation or point detection and would send the rest of the results back to the DriverStation, either over the network or a digital input. Does anybody know if that's legal?

We will most likely have to write the USB communication code ourselves if we are to run on an embedded device. We can use the protocol documentation from http://openkinect.org/wiki/Protocol_Documentation to figure out how to handle everything. If anybody has knowledge of low-level USB and drivers, it would be appreciated.

And lastly, we need to be able to access this data in a timely fashion and react quickly. I don't know what's new this year with the DriverStation, but it would be cool if we could use the Kinect as our camera. I think this is legal if we don't modify the DriverStation code. So the device would have to act as an IP camera that responds to the same commands as the current cameras. And it would have to communicate all data points that are needed for autonomous code.

I think this potential route was left open on purpose so we could create some cool stuff with it. If anybody has any ideas, experience, or anything they think might help, contributions would be greatly appreciated.

Happy New Year,
Luke Young
Programming Manager, Team 2264

Chexposito
05-01-2012, 01:46
From what I remember, the reason it's not easy is that the cRIO firmware does not include the USB card, which would be the only way of integrating it, from what I know. So you would have to add to the updates to allow you to use it and then run it on the cRIO... It'd be cool, but you might be spending a ton of time on it.

davidthefat
05-01-2012, 02:13
I remember that there was a whole discussion of this last year... Assuming they will not change the rules regarding the legality of non-KOP motors, you would have to modify the Kinect to take those tilt/pan motors out. I'm telling you, an Arduino is not enough horsepower for image processing. You need a full-on FPGA or ARM (A9 or something) processor on there. It's mathematically not possible; it does not even have enough memory to pass on the image to the cRIO.

Also, from the sounds of it, you just want somebody else to do all the hard work for you. Now, I don't know man, if you want it, you figure it out on your own.

Just don't get your hopes up that someone will go out of their way to get this working. Do not plan on having it on the robot at all. The last thing I want happening is that you design your robot around this thing, which has not been done before, and wait for someone else to deliver it for you. I know how that feels; this happened to us last year. The shipment of the pneumatic actuators came the week before competition, and we had already shipped the robot off.

Here are the threads:
http://www.chiefdelphi.com/forums/showthread.php?t=89101
http://www.chiefdelphi.com/forums/showthread.php?t=87803


If you are up for it, go for it. Just GO. That was my issue last year, I never just "did".

Dan Radion
05-01-2012, 05:08
Hi guys.

My team did beta testing for the Kinect.

Here is the thread for our Kinect Beta Presentation:
http://www.chiefdelphi.com/forums/showthread.php?t=98473&highlight=team+2903


We discussed this idea about the Kinect being used on the robot. It will most likely be used just as a driving mechanism (although the rules don't prohibit use on the robot directly).
Getting it to work with the cRIO would be very difficult. Good luck with your ambitious endeavors!

DavisC
05-01-2012, 07:04
Arduino was my first thought, to pass the values from USB to serial. Or could we pass the values from USB to Ethernet (and only process them on the DS side)?

Peyton Yeung
05-01-2012, 07:14
Couldn't one put a small laptop on the robot to connect the USB. Then connect the laptop to the cRIO?

Jared Russell
05-01-2012, 08:06
If parts utilization rules remain similar to how they have been in the recent past, you could conceivably use a Gumstix processor and breakout board to interface with the Kinect, and then communicate with the cRIO over an Ethernet connection (through the switch). Since the Gumstix runs a fairly full-featured version of Linux, you can use the openni driver to talk with the Kinect and get the RGB and/or depth images, and then send them over to the cRIO. I have heard that with the newer Gumstix, it is possible to do this in real-time with ~70% (Gumstix) CPU usage.

This route would require that you (a) figure out how to power the Kinect (feasible), (b) write an application for the Gumstix to use the openni driver to get the data of your choice and serialize it over Ethernet, (c) write networking code on the cRIO side to receive your data, and (d) write image processing code to do something with it. It is definitely doable, but (c) and (d) would require careful attention to make sure you aren't overwhelming your cRIO. It would also cost you upwards of $400 for both the Gumstix and the desired I/O breakout board.
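To make (b) and part of (c) concrete, here is a rough, untested sketch of the coprocessor side. The actual depth grab is left as a stand-in function (you'd fill its body in with the openni or libfreenect call of your choice); the part being illustrated is boiling a frame down to a few numbers and pushing them to the cRIO over UDP with plain BSD sockets. The IP address, port, and message format are made up for the example.

// Coprocessor-side sketch: summarize one Kinect depth frame and send the result
// to the cRIO over UDP. get_depth_frame() is a stand-in -- replace its body with a
// real grab via openni or libfreenect. IP, port, and message format are made up.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stdint.h>
#include <algorithm>
#include <cstdio>
#include <vector>

static const int W = 640, H = 480;            // assumed depth resolution
static const char* CRIO_IP = "10.22.64.2";    // placeholder cRIO address (10.TE.AM.2 convention)
static const int   CRIO_PORT = 1130;          // placeholder port

// Stand-in: pretends everything is a flat wall 2 m away (depth in millimeters, 0 = no reading).
bool get_depth_frame(std::vector<uint16_t>& depth) {
    std::fill(depth.begin(), depth.end(), (uint16_t)2000);
    return true;
}

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest = {};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(CRIO_PORT);
    inet_pton(AF_INET, CRIO_IP, &dest.sin_addr);

    std::vector<uint16_t> depth(W * H);
    while (get_depth_frame(depth)) {
        // Boil the frame down to one fact: the nearest valid reading and where it is.
        uint16_t best = 0;
        int bestX = -1, bestY = -1;
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                uint16_t d = depth[y * W + x];
                if (d != 0 && (best == 0 || d < best)) { best = d; bestX = x; bestY = y; }
            }
        // Tiny text packet -- trivial to parse on the cRIO end.
        char msg[64];
        int n = snprintf(msg, sizeof(msg), "NEAR:%d,%d,%d\n", bestX, bestY, (int)best);
        sendto(sock, msg, n, 0, (sockaddr*)&dest, sizeof(dest));
    }
    close(sock);
    return 0;
}

The cRIO side then only has to parse a short text line instead of a video stream.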

Greg McKaskle
05-01-2012, 09:18
The academic robotics group at NI did have a Kinect mounted and running on a superdroid chassis for awhile. Here are some additional considerations.

Power:
The Kinect is not a five-volt USB device. The cable that comes from the Kinect ends in an Xbox-shaped connector that will not plug straight into a laptop or other USB ports. Connecting to a PC or laptop requires an adapter cable that changes the shape of the connector and plugs into 110 VAC. I believe it provides about 12 watts at 12 volts DC to the Kinect. Not a huge deal, but not normal USB plug-and-play either. I have no experience to predict how the Kinect would behave in low-voltage situations.

Mounting:
The Kinect mechanicals were intended to be mounted in a stationary position. Supporting the sensor bar to isolate it from shake and vibration is something to consider. The academic team mentioned above eventually mounted theirs upside down. Also, the servos that connect the bar to the base are not for continuous use.

Cameras:
The color camera on the Kinect has resolutions of 1280x1024 compressed, 640x480, and 320x240. The lower resolutions are not compressed. The IR camera supports 320x240, 160x120, and 80x60 uncompressed. The color format, at least through the MS drivers, is often 32-bit xRGB, but there is some support for 16-bit YUV. Depth data is 13-bit resolution, and the drivers sometimes combine 3-bit player info into it. To transfer video to the DS, compression is likely needed.

Drivers and Control:
Driver options are MS or OpenNI (not related to National Instruments, but to Natural Interface). MS drivers require Win7.

Interference:
The Kinect depth sensor works by projecting an IR-wavelength patterned light image in front of the sensor bar, viewing the light patterns that return to the IR camera, and processing the data to map distortions in the pattern to 3D depth values. To work reliably, the IR camera needs to be able to measure the light dots. Other IR light projected onto the field, by other Kinects, by spotlights, or by other lighting, may cause interference.

Hope this info helps.
Greg Mckaskle

staplemonx
05-01-2012, 10:59
Here are some on-robot Kinect resources that could also be helpful.

http://www.atomicrobotics.com/2011/11/kinects-2012-frc-robots/
http://www.atomicrobotics.com/2011/10/link-more/

Also here is a crazy kinect application that is just cool http://www.youtube.com/watch?v=pxoL4bnLp0g

zaphodp.jensen
05-01-2012, 11:28
My Two Cents:
The Kinect is not directly USB, it requires a secondary power source.

The cRIO card is only compatible with the USB Mass Storage Protocol, for storing information to flash drives and the like.

My take on how to connect the Kinect to the cRIO:

Using a computer or netbook, take in the information from the kinect, and process the necessary information. (Target x, target y, target depth)
Then, using a USB-Serial adapter, output the processed data directly into the cRIO, and then the cRIO can control the motors.
Sample string: "X:0,Y:0,Z:0"

This way, the massive amount of data being output from the Kinect does not have to be processed by the cRIO.
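A rough sketch of the two ends of that exchange in plain C-style code (the field names just follow the sample string above; the serial-port I/O itself is not shown, and on the robot the line would arrive over the USB-serial adapter):

// Sketch of both ends of the "X:..,Y:..,Z:.." exchange.
#include <cstdio>

// Computer side: pack the processed target into one line of text.
int pack_target(char* buf, int len, int x, int y, int z) {
    return snprintf(buf, len, "X:%d,Y:%d,Z:%d\n", x, y, z);
}

// cRIO side: pull the three numbers back out of one received line.
// Returns true only if the whole expected format matched.
bool parse_target(const char* line, int& x, int& y, int& z) {
    return sscanf(line, "X:%d,Y:%d,Z:%d", &x, &y, &z) == 3;
}

int main() {
    char line[64];
    pack_target(line, sizeof(line), 120, -35, 2400);   // e.g. target at x=120, y=-35, 2400 mm away
    int x, y, z;
    if (parse_target(line, x, y, z))
        printf("target: x=%d y=%d depth=%d\n", x, y, z);
    return 0;
}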

Under this style, the computer would be considered a Custom Circuit, and thus cannot control any other actuators.

Tom Bottiglieri
05-01-2012, 11:41
I agree, doing the processing with a local coprocessor is the way to go. You need to have additional electronics no matter what to deal with pulling the data, so why not spend a bit more and throw a whole Linux at it?

There are a bunch of low cost ARM based boards out there that can act as USB hosts. The panda board, beagle board, and beagle bone are all TI OMAP (TI's mobile device system on chip offering) dev boards. I assume they have enough horsepower to do the necessary CV on the depth maps, but I wouldn't use it without doing a bit more research.

nssheepster
05-01-2012, 12:38
Um, did anybody ask if it's the standard Kinect? We keep thinking standard, and therefore USB, but they might be special ones that connect directly to the cRIO. Plus, if it is meant for "the operator level", USB is fine for the driver station laptops. That could tell us a lot. But six weeks doesn't leave a lot of time for cRIO USB conversions. Whatever you'd do with a Kinect, you could probably do with something similar and easier to attach. Might not be worth the time.

Joe Johnson
05-01-2012, 17:15
I agree, doing the processing with a local coprocessor is the way to go. You need to have additional electronics no matter what to deal with pulling the data, so why not spend a bit more and throw a whole Linux at it?

There are a bunch of low cost ARM based boards out there that can act as USB hosts. The panda board, beagle board, and beagle bone are all TI OMAP (TI's mobile device system on chip offering) dev boards. I assume they have enough horsepower to do the necessary CV on the depth maps, but I wouldn't use it without doing a bit more research.

Our team was looking into the Panda Board. I know some folks that are using the Panda Board with the PrimeSense sensor in the Kinect. To say the Panda Board has "enough horsepower" is a judgment call. From what I hear, yes, you can get the Kinect driver to work and you can (of course) get OpenCV to run under a Linux OS, but you have to be smart. It is easy to use up all the processor's horsepower.

Rumor has it that a board with only a slightly less powerful CPU, the Beagle Board, managed only single digit frame rates using the Kinect. Only a report on the interwebs, but it does back up the claim that you have to be careful.

That said, I think that if it can be managed, the Kinect could be an awesome sensor on a FIRST robot (find a ball, find the floor, find a wall, find the corner... ...get ball, put into corner...). It is going to happen. I am not sure if it is this year though (or if it is it will be only a handful of teams that manage it - imho)

Joe J.

innoying
05-01-2012, 17:29
So it appears that I got some of the Kinect specs wrong. I apologize.

So I was thinking. From the looks of the Beta stuff, the Kinect libraries are just wrappers for the official SDK, which runs on Windows... And that led me to the Classmate. Could we just throw our Classmate on the robot to act as a proxy between the cRIO and the Kinect? It could also handle the processing of images. Assuming we could power it, keep it safe, and keep under the 120 lb limit, this may be the best option. It already runs Windows 7, so it will be compatible with the official SDK. We would then use our own laptop for driving. This is probably the cheapest (free) option for us. Does that sound like something FIRST would allow? I think it's legal now, but they may release an update to stop that if it becomes popular.

DjMaddius
05-01-2012, 17:40
So it appears that I got some of the Kinect specs wrong. I apologize.

So I was thinking. From the looks of the Beta stuff, the Kinect libraries are just wrappers for the official SDK, which runs on Windows... And that led me to the Classmate. Could we just throw our Classmate on the robot to act as a proxy between the cRIO and the Kinect? It could also handle the processing of images. Assuming we could power it, keep it safe, and keep under the 120 lb limit, this may be the best option. It already runs Windows 7, so it will be compatible with the official SDK. We would then use our own laptop for driving. This is probably the cheapest (free) option for us. Does that sound like something FIRST would allow? I think it's legal now, but they may release an update to stop that if it becomes popular.

Never really thought of this, and I doubt anyone else has: putting a laptop on the robot to act as a proxy. I'm sure there aren't any rules regarding it, though I would look into it ASAP before acting on this. It would be a nice approach, though. If you could create a simple TCP link between the machine on the bot and the control laptop, you could do anything. Though it could get complicated, and the link may get bogged down with data. I have a good feeling that it may still get too bogged up and sluggish during a competition. This is why I wouldn't recommend it, though anything's worth a try. You can do nothing but learn from the experience.

davidthefat
05-01-2012, 17:45
As far as I know, most of the depth perception is done on the Kinect itself. It is just transferring the data and images to the PC or 360. Now, you have to realize, you would have to find a way to power the laptop. Batteries are not allowed.

DjMaddius
05-01-2012, 17:59
As far as I know, most of the depth perception is done on the Kinect itself. It is just transferring the data and images to the PC or 360. Now, you have to realize, you would have to find a way to power the laptop. Batteries are not allowed.

5 volts from the power distribution shouldn't be a problem as far as I know. Though I'm not the guy wiring it all, so I don't know the rules towards that sorta thing.

apalrd
05-01-2012, 18:19
-The Kinect we are getting is a standard Kinect, including the AC adapter and cable thingy to connect directly to USB (you would probably need a 12v regulator for the robot)

-I would go with a single-board computer running Linux, and send the data to the cRio via IP. You could send debug data to the driver station while you're at it, if you wanted to. I would probably get all of the useful information out of the image on the co-processor, and feed coordinates or other numerical data back to the robot at the highest rate possible.

-A laptop running Win7 will have (comparatively) high system requirements compared to an embedded single-board Linux machine, where you aren't running a GUI at all and can trim the background processes to just what you need.

-A laptop is very heavy. Just throwing that out there.

-As to powering a laptop or other machine, I would probably get an automotive power supply and feed it off of a 12v regulator, since the robot batteries can go down fairly low. Laptop chargers usually run above 12v anyway (the one in front of me is 18.5 V), so you need a boost converter anyway.

-The FRC Kinect stuff wraps around the Microsoft Kinect SDK (which only runs on Win7) and feeds some data to the robot via the Driver Station, including all 20 skeletal coordinates that are tracked. To use the Kinect on the DS, you do not have to write ANY driver-station-end code; the data is all passed to the robot.

innoying
05-01-2012, 19:34
-I would go with a single-board computer running Linux, and send the data to the cRio via IP. You could send debug data to the driver station while you're at it, if you wanted to. I would probably get all of the useful information out of the image on the co-processor, and feed coordinates or other numerical data back to the robot at the highest rate possible.


I was thinking about this as an option. We have some sponsors from whom we could probably get custom devices to do this if we intend to run Linux. I was just suggesting the laptop because of the simplicity of setup. Though I agree the GUI and Windows in general are memory and CPU hogs. Linux would be best, but could prove to have issues since we would be using a non-official SDK.


-As to powering a laptop or other machine, I would probably get an automotive power supply and feed it off of a 12v regulator, since the robot batteries can go down fairly low. Laptop chargers usually run above 12v anyway (the one in front of me is 18.5 V), so you need a boost converter anyway.

Does anybody know what the classmate power supply is? (Even running ubuntu on it would be an improvement)

bhaidet
05-01-2012, 20:09
I was only wiring for half a summer, but I believe there is a DC->DC step-up as part of the standard wiring board. It should be the 2.5"-ish square block covered with heatsink fins. I think it pulses the straight battery voltage through an inductor and regulates 24v out. I do not know how much current you could pull from this thing and I do not remember what it's actually used for, but you should be able to solder up a step-down circuit to take this 24v to laptop voltage (17-18ish?) in about 10 minutes with an LM317 and outboard pass transistor (maybe the MJ2995 if you want overkill safety without heavily heatsinking).

On the topic of image recognition, is there any pre-existing software (especially Linux software?) to determine the shape of "color" (IR distance) blobs in an image? It seems like if you could see a blob and determine how far away it was on average (and therefore its actual height), you should be able to easily detect other robots/structures on the field.

As to whether having your robot autonomously see other robots/tall game objects will be useful this year... that's still up for grabs until Saturday. :D

apalrd
05-01-2012, 20:21
I was only wiring for half a summer, but I believe there is a DC->DC step-up as part of the standard wiring board. It should be the 2.5"-ish square block covered with heatsink fins. I think it pulses the straight battery voltage through an inductor and regulates 24v out. I do not know how much current you could pull from this thing and I do not remember what it's actually used for, but you should be able to solder up a step-down circuit to take this 24v to laptop voltage (17-18ish?) in about 10 minutes with an LM317 and outboard pass transistor (maybe the MJ2995 if you want overkill safety without heavily heatsinking).

On the topic of image recognition, is there any pre-existing software (especially Linux software?) to determine the shape of "color" (IR distance) blobs in an image? It seems like if you could see a blob and determine how far away it was on average (and therefore its actual height), you should be able to easily detect other robots/structures on the field.

As to whether having your robot autonomously see other robots/tall game objects will be useful this year... that's still up for grabs until Saturday. :D

-The FRC PD board has regulated 12v and 24v supplies which are guaranteed down to 4.5v, but those are restricted to the cRio and bridge only
-There's also a 5v regulator, I don't think that one has any guarantee on it.

-The heat sink device of which you speak (which happens to weigh a whole 1/4lb, I weighed ours last season) reduced the (regulated) 12v down to 5v for the new radio. Confused yet?

-I would probably just find a single-board computer with either a 12v input or a car power supply, then a boost converter to 12v like the one on the PD board for the radio (guaranteed down to 4.5v or so)


-As for image data, the Kinect returns depth data as an image, so you could effectively process it for blobs like a normal 11-bit greyscale image. OpenCV has commonly been used for image processing, although I honestly haven't used it myself.
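For what it's worth, a minimal untested OpenCV 2.x sketch of that "treat depth as a greyscale image and find blobs" idea might look like this (the distance band and area cutoff are made-up numbers):

// Treat the depth data as a 16-bit greyscale image: mask a distance band, then find blobs.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

// depth16: single-channel CV_16U image of depths in millimeters.
void findBlobsInRange(const cv::Mat& depth16, int minMM, int maxMM) {
    cv::Mat mask;
    cv::inRange(depth16, cv::Scalar(minMM), cv::Scalar(maxMM), mask);   // 255 where in range

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); ++i) {
        if (cv::contourArea(contours[i]) < 200) continue;               // drop small noise blobs
        cv::Rect box = cv::boundingRect(contours[i]);
        printf("blob %d: x=%d y=%d w=%d h=%d\n", (int)i, box.x, box.y, box.width, box.height);
    }
}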

bhaidet
05-01-2012, 22:35
When I was on the programming team in the past, we were always limited to telling it a color and it telling us the blob. Does the software you are talking about detect the color if you tell it how big of a blob you would like to find? (assuming we want to know how far away the robot-sized blob is, not just find out how tall robots exactly 10 ft from the camera are.)

So the brick is a step down to 5v? Does that mean the step-up is embedded in the PDB? I remember looking up their inductive step-up circuit once and being very confused, but that was a long time ago. Do you know of any circuits that are a simple step-up? If we need less than double the battery voltage, we should be able to get away with a simple charge pump with a 555 or similar running the switching.

Joe Johnson
06-01-2012, 08:05
<snip>

-As for image data, the Kinect returns depth data as an image, so you could effectively process it for blobs like a normal 11-bit greyscale image. OpenCV has commonly been used for image processing, although I honestly haven't used it myself.


No not really. It returns the depth data, but not as an image. You can build an image out of the data, but there are a lot of reindeer games involved.

Which isn't to say that it can't be done, it can, but there is bit shifting and such involved. It is far from simply "get a distance image, ship it to an OpenCV routine, ... , here are all the interesting geometric shapes in the field of view"

By the way, I have been noodling on how I would find something interesting, say, I don't know, maybe the center of a ball of radius X and color Y.

I think I would first of all use a very rough color filter (say, everything "near enough" to the color of interest - where "near enough" is a very wide tolerance). Second, I think I would pass a best-fit sphere through the 3D points for each of these candidate points (providing center point and radius). Third, I would filter by radius (only looking for balls of radius X +/- tol). Finally, I would group and average the centers into logical individual balls (e.g. you can't have 2 red balls closer to each other than 2 radii).

It sounds like a lot but this is all integer math stuff for the most part. I think we could get a reasonable frame rate out of a board like the Panda Board.
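For what it's worth, the piece that makes the 3D-point approach go is converting a depth pixel into an (x, y, z) point. A rough sketch using a simple pinhole model (the focal length and principal point below are ballpark Kinect depth-camera numbers, not calibrated values):

// Convert one depth pixel (u, v, depth in mm) into a 3D point in meters with a pinhole model.
#include <cstdio>

struct Point3 { float x, y, z; };

Point3 depthPixelToPoint(int u, int v, int z_mm) {
    const float fx = 580.0f, fy = 580.0f;   // approximate focal length in pixels (640x480 depth)
    const float cx = 320.0f, cy = 240.0f;   // assume the principal point is at the image center
    Point3 p;
    p.z = z_mm / 1000.0f;
    p.x = (u - cx) * p.z / fx;
    p.y = (v - cy) * p.z / fy;
    return p;
}

int main() {
    Point3 p = depthPixelToPoint(400, 200, 2500);   // pixel (400, 200) reading 2.5 m
    printf("x=%.3f y=%.3f z=%.3f meters\n", p.x, p.y, p.z);
    return 0;
}

The sphere fit, radius filter, and grouping steps would then operate on these points.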

Cool stuff... ...there just are not enough hours in a day...

Joe J.

Gdeaver
06-01-2012, 08:08
Remember, you have six weeks to complete the programming projects. Do you really want to take on a low-level programming project during build?

Jared Russell
06-01-2012, 08:26
No not really. It returns the depth data, but not as an image. You can build an image out of the data, but there are a lot of reindeer games involved.

The openni Linux driver and C/C++ wrappers can do this for you pretty painlessly.


By the way, I have been noodling on how I would find something interesting, say, I don't know, maybe the center of a ball of radius X and color Y.

As long as Y = "a distinct color not found/illegal on robots", you could probably do this pretty well without even using the Kinect's depth image. (OpenCV has built-in Hough circle routines, for example: http://www.youtube.com/watch?v=IeLeMBU4yJk). For added robustness, you could use the Kinect depth image simply to help select the range of radii to look for. I think you'd get equivalent performance - and much more efficient computation - using this method than with 3D point cloud fitting.
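A minimal, untested sketch of that idea with OpenCV 2.x (the blur size and thresholds are guesses you would tune; the radius bounds are where the depth image could narrow the search):

// Find circular blobs (candidate balls) in an RGB frame with OpenCV 2.x Hough circles.
#include <opencv2/opencv.hpp>
#include <vector>

// minR/maxR are radii in pixels.
std::vector<cv::Vec3f> findBalls(const cv::Mat& bgr, int minR, int maxR) {
    cv::Mat gray;
    cv::cvtColor(bgr, gray, CV_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2.0);   // smooth before edge detection
    std::vector<cv::Vec3f> circles;                      // each result is (x, y, radius)
    cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                     1,              // accumulator resolution = image resolution
                     gray.rows / 8,  // minimum distance between detected centers
                     100, 40,        // Canny upper threshold, accumulator threshold
                     minR, maxR);
    return circles;
}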

Joe Johnson
06-01-2012, 08:48
<snip>

As long as Y = "a distinct color not found/illegal on robots", you could probably do this pretty well without even using the Kinect's depth image. (OpenCV has built-in Hough circle routines, for example: http://www.youtube.com/watch?v=IeLeMBU4yJk).

For added robustness, you could use the Kinect depth image simply to help select the range of radii to look for. I think you'd get equivalent performance - and much more efficient computation - using this method than with 3D point cloud fitting.

First regarding "As long as Y = "a distinct color not found/illegal on robots"" This is a pretty significant as long as.

Second, regarding using standard image processing, my experience with machine vision is that with controlled lighting, life is good; without it, life can be pretty crummy.

An FRC Robotics field is a pretty lousy lighting environment -- may be bright, may be dim, may be spots, may be colored lighting, ...

There were teams in the GA dome whose image processing algorithm ran fine during the day, but had fits after dark (and vice versa). Are you willing to live with the possibility that your algorithm runs fine on your division field but goes whacky on Einstein? Maybe but maybe not...

So... ...I think that the 3D points from the PrimeSense distance data are going to be more robust to ambient lighting conditions.

Joe J.

zaphodp.jensen
06-01-2012, 08:56
I have a feeling that if you keep trying to fit more and more information through the TCP/IP port, you will start having lag. If you have a second USB port, I would use a USB-to-serial converter to pass filtered data directly to the cRIO using a high baud rate. This would be easier to set up than a TCP/IP port, imho.

Joe Johnson
06-01-2012, 09:15
I have a feeling that if you keep trying to fit more and more information through the TCP/IP port, you will start having lag. If you have a second USB port, I would use a USB-to-serial converter to pass filtered data directly to the cRIO using a high baud rate. This would be easier to set up than a TCP/IP port, imho.

I have heard mixed reviews on this topic and I don't know who to believe.

In each case, usually reliable sources tell me that 640x480 data (image and distance) CAN and CANNOT be reliably sent at 20-30 fps via the wireless router during a robot competition. Both sides are equally adamant that they are correct.

My problem is that if I guess wrong, I potentially don't find out until the first regional. Yikes!

So... ...my plan is that if we use it at all (and I am leaning toward not using it, at least this year), I want to do all the processing on the USB host (e.g. a Panda Board running an embedded-friendly distro of Linux); we'd only be sending digested data via the TCP/IP link (e.g. the red ball is at coords X1,Y1,Z1, the blue ball is at coords X2,Y2,Z2, the floor is at Distance, Theta, Psi, a wall is at ..., ). It is hard to imagine that this would tax the link very much.
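The receiving end of that digested data can be pretty small. A rough, untested sketch using plain BSD-style sockets (the cRIO runs VxWorks, so the exact headers and setup differ a bit there; the port and message format are invented for the example):

// Robot-side sketch: receive the digested vision data over UDP and parse it.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(1130);              // made-up port; match whatever the coprocessor sends to
    bind(sock, (sockaddr*)&addr, sizeof(addr));

    char buf[128];
    while (true) {
        int n = recvfrom(sock, buf, sizeof(buf) - 1, 0, NULL, NULL);
        if (n <= 0) continue;
        buf[n] = '\0';
        float x, y, z;
        // Invented format: "REDBALL:1.20,-0.35,2.40" with coordinates in meters.
        if (sscanf(buf, "REDBALL:%f,%f,%f", &x, &y, &z) == 3) {
            printf("red ball at %.2f, %.2f, %.2f\n", x, y, z);
            // ...feed x/y/z into the drive or turret code here...
        }
    }
    close(sock);
    return 0;
}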

Joe J.

Jared Russell
06-01-2012, 09:26
First regarding "As long as Y = "a distinct color not found/illegal on robots"" This is a pretty significant as long as.

Second, regarding using standard image processing, my experience with machine vision is that with controlled lighting, life is good; without it, life can be pretty crummy.


The color threshold in this case would be used only to speed things up (throwing away pixels that are not conceivably part of the ball) and/or to differentiate between balls/spheres of different colors. As long as you can detect the color discontinuity at the edges of the ball with an edge detector (the first step of the Hough circle transform), you will still be able to detect the ball. The detection of color/intensity discontinuities is fairly robust to illumination (which is why it is under the hood of SIFT, SURF, etc., features).

There's no question that using the depth sensor would be even more robust, but I have performed reliable shape recognition using only RGB techniques in far less constrained environments than an FRC field (albeit with far more engineering time than I would be willing/able to devote to FRC programming :) )

sjspry
07-01-2012, 15:27
Firstly, it needs to be decided whether placing the kinect on the robot will in some way enhance the robot during the hybrid period. Seeing as I can't really think of a reason why it would help to give feedback to your robot during hybrid, we'll assume putting it on the robot is a better idea.

But so far, most of the discussion focuses on interfacing the kinect to the cRIO on the robot, directly or indirectly. Here's why this is not a good idea:


It will likely not be possible to interface the Kinect directly to the cRIO (reimplementing USB would not be possible without access to the FPGA, which we do not have; USB-serial communication would be too slow, uses a subset of the USB protocol, and is not compatible).
The cRIO is slow (300MHz), performing image processing on it is probably a bad idea in the first place (at least in my team's experience).
The additional co-computers (>1 GHz; less with an uncompressed stream) have a chance of working, but are fairly expensive (at least for some teams; I know we don't have a spare $100-200 or more).


The problem is that I don't have any counterpoints. The fact that the Kinect uses a USB interface is a huge issue. Last year our team worked out a system to have an application on the driver station grab images from the Ethernet camera, do the processing on the laptop, and send back commands, but this only worked because we were able to bypass the cRIO entirely when doing our image transmission. To do something similar this season with the Kinect, you would need to convert the USB image stream to Ethernet... and at this point (due to the hardware required to do this), you might as well put a computer directly on the robot, which is list item #3.

So this turns into an argument of smart cRIO vs. dumb cRIO (in the dumb/smart terminal sense). Last year, our team had a dumb cRIO with a command framework that worked pretty well, interpreting commands sent back from the computer. This year, a similar system would be doable, but only by shelling out for an integrated system and using that to do the image processing.

The deciding factor becomes cost. While you might be able to go cheaper than a Panda Board, someone already mentioned Beagle Boards and similarly powered boards being too slow. It really depends on how worthwhile you think the depth data from the Kinect's IR camera will be. Personally, I don't think it will be that game-changing, seeing as you should know your distance from the basket based on where you start.

As for using it in hybrid mode...? Still seems rather useless, seeing as anything you might want to tell it would be static, and could be accomplished through more orthodox means (like switches on the robot or something). Our team will probably forgo the kinect entirely, and might end up trying to sell it if we can't find an off-season project to put it in.

davidthefat
08-01-2012, 12:25
The best price per performance has to be a PS3: 6 SPEs (SIMD processors) and 1 dual-threaded PPE (RISC, PowerPC core) at 3.2 GHz open to your use. It can be picked up for around $250, nowhere near the $400 limit. Also, a laptop at that price will have significantly lower performance. A PS3 running Linux also has access to the Video4Linux drivers that support a variety of webcams, and the Kinect drivers were originally written for Linux. If you are even up for it, you can hook up 2-3 PS3s to have a mini distributed-memory cluster on your very own robot!

But again, good luck even trying to interface one. There also is a 1 minute boot up time for the PS3 into Linux.

But you probably can get better performance with an FPGA, but who's willing to do that?

fb39ca4
08-01-2012, 13:20
The best price per performance has to be a PS3: 6 SPEs (SIMD processors) and 1 dual-threaded PPE (RISC, PowerPC core) at 3.2 GHz open to your use. It can be picked up for around $250, nowhere near the $400 limit. Also, a laptop at that price will have significantly lower performance. A PS3 running Linux also has access to the Video4Linux drivers that support a variety of webcams, and the Kinect drivers were originally written for Linux. If you are even up for it, you can hook up 2-3 PS3s to have a mini distributed-memory cluster on your very own robot!

But again, good luck even trying to interface one. There also is a 1 minute boot up time for the PS3 into Linux.

But you probably can get better performance with an FPGA, but who's willing to do that?

But the PS3 has 200W power consumption or something like that.

As far as I know, most of the depth perception is done on the Kinect itself. It is just transferring the data and images to the PC or 360. Now, you have to realize, you would have to find a way to power the laptop. Batteries are not allowed.
[R36] The only legal source of electrical energy for the Robot during the competition is one MK ES17-12
12VDC non-spillable lead acid battery, or one EnerSys NP 18-12 battery, as provided in the 2012 KOP. This is the only battery allowed on the Robot.

Batteries integral to and part of a COTS computing device are also permitted (i.e. laptop batteries), provided they’re only used to power the COTS computing device.

davidthefat
08-01-2012, 13:25
But the PS3 has 200W power consumption or something like that.

Some of the new phat versions are 130 W. Not sure on their linux support.

Sparks333
08-01-2012, 19:58
Dunno about opencv support on arm6, but how about a raspberry pi? $35, USB, Ethernet, 700Mhz...

Sparks

RoboMaster
08-01-2012, 20:37
... raspberry pi? $35, USB, Ethernet, 700Mhz

Yeah, nobody had mentioned the Raspberry Pi yet. But isn't 700 MHz too slow? Isn't the cRIO 600 MHz, and that's too slow for the Kinect? Sorry if I'm totally wrong; I guess I don't know and I'm more asking than confirming. I couldn't find it in the thread.

Greg McKaskle
08-01-2012, 20:56
The cRIO is a 400 MHz PPC 603e. There is another Freescale numbering scheme for it too. It is typically rated at around 780 MIPS, I think. These numbers will give you some indication of performance, but keep in mind that benchmarks are often more of a marketing tool than an engineering tool. I really don't think that processing the 80x60 depth image on the cRIO would be any sort of issue, and for a virtual LIDAR or obstacle-avoidance tool, I think this resolution is more than enough for what you'd need. The most expensive portion of the Kinect is the skeleton tracking. If you aren't worried about that, you are basically using one of the two cameras on the Kinect, not unlike the Axis, but over USB.
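As a concrete illustration of the virtual-LIDAR idea (plain C++, not NI code; assumes a frame of millimeter depth values with 0 meaning "no reading"), you can collapse an 80x60 depth frame into an 80-element range scan by keeping the nearest valid reading in each column:

// Collapse an 80x60 depth frame into a crude horizontal range scan.
#include <stdint.h>
#include <cstdio>

static const int W = 80, H = 60;

void depthToScan(const uint16_t depth[H][W], uint16_t scan[W]) {
    for (int x = 0; x < W; ++x) {
        uint16_t nearest = 0;
        for (int y = 0; y < H; ++y) {
            uint16_t d = depth[y][x];
            if (d != 0 && (nearest == 0 || d < nearest)) nearest = d;
        }
        scan[x] = nearest;   // 0 means nothing was seen in this column
    }
}

int main() {
    static uint16_t depth[H][W];   // pretend frame: a wall 1.5 m away
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) depth[y][x] = 1500;
    uint16_t scan[W];
    depthToScan(depth, scan);
    printf("range straight ahead: %d mm\n", scan[W / 2]);
    return 0;
}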

As for using Raspberry Pi, I'd personally be a little worried about availability for this season, and then about how new the platform and tools are. I think it sounds amazing, and so does everyone else I know, which is why I'm a little nervous about availability.

Greg McKaskle

realslimschadey
08-01-2012, 21:08
Well, the USB on the Kinect is not the same as regular USB. It looks like you took a corner off of a regular USB plug. Is FIRST going to be giving us a switch or connector???

davidthefat
08-01-2012, 21:09
The cRIO is a 400 MHz PPC 603e. There is another Freescale numbering scheme for it too. It is typically rated at around 780 MIPS, I think. These numbers will give you some indication of performance, but keep in mind that benchmarks are often more of a marketing tool than an engineering tool. I really don't think that processing the 80x60 depth image on the cRIO would be any sort of issue, and for a virtual LIDAR or obstacle-avoidance tool, I think this resolution is more than enough for what you'd need. The most expensive portion of the Kinect is the skeleton tracking. If you aren't worried about that, you are basically using one of the two cameras on the Kinect, not unlike the Axis, but over USB.

As for using Raspberry Pi, I'd personally be a little worried about availability for this season, and then about how new the platform and tools are. I think it sounds amazing, and so does everyone else I know, which is why I'm a little nervous about availability.

Greg McKaskle
The depth image is 11 bits of data. With that in mind, I have no doubts about the cRIO handling it. What I am worried about are the color images. Also, the PS3 has SIMD processors, which means it can calculate 2 pixels in one register.


Well, the USB on the Kinect is not the same as regular USB. It looks like you took a corner off of a regular USB plug. Is FIRST going to be giving us a switch or connector???


No, you are on your own.

Greg McKaskle
08-01-2012, 22:03
NI-Vision supports 16 bit monochrome images natively, and that is how the depth images are buffered. If you are using color images, then it is similar to the Axis, but the images over USB aren't compressed except for the highest resolution. All of the Axis images are compressed.

If the camera is used as a virtual LIDAR, then the color isn't needed, just depth.

Greg McKaskle

Chris_Ely
09-01-2012, 10:54
Would it be possible to use a USB to Ethernet adapter like this (http://www.meritline.com/usb-female-ethernet-rj45-male-adapter---p-43543.aspx)? Send the Kinect information through the bridge to the computer.

RufflesRidge
09-01-2012, 10:57
Would it be possible to use a USB to Ethernet adapter like this? Send the Kinect information through the bridge to the computer.

That device changes the physical connector, but not the data format. It would be of little to no use in interfacing a Kinect with the cRIO or robot radio.

RoboMaster
09-01-2012, 15:51
Well, the USB on the Kinect is not the same as regular USB. It looks like you took a corner off of a regular USB plug. Is FIRST going to be giving us a switch or connector???

In the Kinect box we found that the Kinect came with a short adapter cable to change to normal USB.

Greg McKaskle
09-01-2012, 16:43
That cable also provides power, and the Kinect will not work without it plugged into 110 VAC.

Greg McKaskle

Jonie4
10-01-2012, 03:15
Well, assuming that you can regulate the power going into the Kinect, what if you used an adapter like this:

http://www.silexamerica.com/products/usb_device_connectivity/sx-3000gb.html

to forward data to the Driver's Station, then use the laptop there to do all of the image processing, and send instructions to the robot based on that?

Tom Bottiglieri
10-01-2012, 03:45
Well, assuming that you can regulate the power going into the Kinect, what if you used an adapter like this:

http://www.silexamerica.com/products/usb_device_connectivity/sx-3000gb.html

to forward data to the Driver's Station, then use the laptop there to do all of the image processing, and send instructions to the robot based on that?
It looks like amazon has a few other things like this. I wonder if you can specify which port it uses, as the field has a firewall.

1711Raptors
10-01-2012, 08:54
Does Rule 52 (cRIO control of the robot) even allow the use of a supplemental processing device on the robot platform?

If we use the Classmate on the robot itself (all vibration and power management considered) and integrate via a network connection to the cRIO plugged into the bridge, is this even legal in 2012? Agreed, the image processing is not 'controlling' the robot per se, but it is certainly 'influencing' the control mechanisms of the robot.

We have found that the Kinect shape game using the Classmate is very laggy (about 2 seconds), so we have freed up the little PC and gone to something with more power as the DS. This was post-reimage, so it's the best it's going to get.

AirRaptor

sjspry
10-01-2012, 21:22
The concern of the rule you mention is that all motor, solenoid, and peripheral control go through the cRIO exclusively so that the field management systems can shut it down and enable certain functions during the different match phases. Seeing as all of the discussion seems to assume the cRIO and digital breakout board will still be the source of all controlling signals, there shouldn't be any problem.

While I can't point you to an official ruling, this topic was brought up before and I am almost positive someone found an official comment/asked themselves.

mwtidd
10-01-2012, 21:55
One thing people should not forget about using the Kinect to auto-aim is that the Kinect cannot see the clear poly. So team-built fields will look very different from the actual field. Also, there is a good chance that the practice fields at comp will look different too.

Reference the field tour video to get an idea of what items you might pick up. Also remember there will be people behind that glass. :)

Right now I'm thinking of using the outside cross formed by the second row's red border and the top player station beam.

I am curious to see what object the kinect can pick up from the front of the key (12') and the top of the key (16').

http://www.youtube.com/watch?v=_JrLRGQ95_I&feature=player_detailpage#t=30s

Greg McKaskle
11-01-2012, 07:36
The range of the Kinect IR is 4' to 11' -- the read me refers to this as the optimal range, but in my experience this is the range.

The RGB camera is a camera. It has a fixed focus lens, and its range for processing is essentially limited by its resolution.

Greg McKaskle

rbellini
11-01-2012, 08:42
The classmate on the robot as an interface between the Kinect and the cRIO seems the best option to me. It has all of the interfaces and a full version of S/W that can communicate "processed" information to the cRIO. For example, although it may be slow, the vision processing S/W supplied in the FRC examples could run on the classmate and then just provide "pointing" information to the cRIO.

Since the cRIO already has packet-processing utilities to communicate with the Classmate (as a driver station), they could also be used to send the "pointing" information. Then the cRIO simply reads that information and controls the motors as needed.
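The last step, turning the "pointing" information into a motor command on the cRIO, can be very small. A rough sketch (plain C++, no WPILib specifics; the gain and deadband are made-up values that would need tuning on the real robot):

// Turn the "pointing" information from the vision side into a turret/drive correction.
// xOffsetPixels: horizontal offset of the target from image center (negative = target is left).
// Returns a motor command in [-1, 1]; wire it into whatever speed controller object you use.
#include <cstdio>

float aimCommand(int xOffsetPixels) {
    const float kP = 0.005f;    // made-up proportional gain; tune on the real robot
    const int deadband = 5;     // pixels of error we are willing to ignore
    if (xOffsetPixels > -deadband && xOffsetPixels < deadband) return 0.0f;
    float cmd = kP * xOffsetPixels;
    if (cmd > 1.0f) cmd = 1.0f;
    if (cmd < -1.0f) cmd = -1.0f;
    return cmd;
}

int main() {
    printf("target 80 px right -> command %.2f\n", aimCommand(80));
    return 0;
}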

Austinh100
12-01-2012, 09:31
Our team is going to attempt to get this working for the 2012 season, we are ordering a panda board and will keep you guys updated.

mwtidd
12-01-2012, 10:35
Our team is going to attempt to get this working for the 2012 season, we are ordering a panda board and will keep you guys updated.

Sounds great! I'm working on getting a pipeline working with the MS SDK and OpenCV. A couple of my notes from tests run with the MS depth view:

Depending on the thickness of the hoop netting, the kinect may be able to see it. I used a fairly thick net and it was able to see it fairly well.

It may have issues with the retroreflective tape. The depth sensor tends to throw errors with items that reflect.

My hope is to use a combo of RGB and depth to see the red and blue squares, and thus be able to filter out background colors. Remember, the depth sensor will see the rectangles and the support beams on the field roughly the same, and it will see straight through the poly.

Also, the Kinect's motor is limited to about 15 angle changes, so if you are planning to use it to pick up balls, you may want to take this into consideration. Also, at 5' and its max downward angle it can only see within about 3 feet of itself. I am considering the idea of 2 Kinects, one for balls and one for hoops. While one would work for both, I think it may be easier to capture using a second.

Hopefully my notes are somewhat helpful.

spying189
12-01-2012, 16:23
Honestly, our team is having issues with Java & Kinect implementation. Does anyone have a program that will convert Java code to C++????

jhersh
16-01-2012, 18:27
Honestly, our team is having issues with Java & Kinect implementation. Does anyone have a program that will convert Java code to C++????

The code is so similar it should be pretty trivial to just do it by hand. It shouldn't take much time. Just use the compiler's errors to guide you.

catacon
18-01-2012, 19:59
Preliminary testing with the Kinect.

http://www.youtube.com/watch?v=VQ5IupU1fpY

http://www.youtube.com/watch?v=g9h4F9Ay5-4

realslimschadey
23-01-2012, 16:13
Are there any drivers that we need to use the Kinect with the driver station? It says I need a server. Does the new driver station come with it? When I plug the Kinect into the Classmate, it doesn't recognize it.

RoboMaster
23-01-2012, 17:31
Preliminary testing with the Kinect.

http://www.youtube.com/watch?v=VQ5IupU1fpY

http://www.youtube.com/watch?v=g9h4F9Ay5-4

Wow, those are really interesting, thanks. I assume the Kinect is just hooked up to a computer. How did you program it? Microsoft Visual Studio with Kinect SDK? Any FRC resources or crossovers?

RufflesRidge
23-01-2012, 17:44
Wow, those are really interesting, thanks. I assume the Kinect is just hooked up to a computer. How did you program it? Microsoft Visual Studio with Kinect SDK? Any FRC resources or crossovers?

Looks more like Libfreenect, my guess would be hooked up to OpenCV.

realslimschadey
23-01-2012, 18:00
How do I get the driver station to recognize the Kinect?

RoboMaster
23-01-2012, 18:07
How do I get the driver station to recognize the Kinect?

realslimschadey, please look at some other threads, resources from FIRST, or start your own question thread. This thread is about using the Kinect on the robot, like a camera.

catacon
23-01-2012, 19:29
Looks more like Libfreenect, my guess would be hooked up to OpenCV.

At the time I was using the Code Laboratories NUI drivers with OpenCV. I have since switched to OpenKinect (libfreenect). It seems to be working very well. The Kinect is able to track the target, and I am working on getting the depth to the target.

shuhao
23-01-2012, 20:03
How are you doing the tracking/recognition?

What's your algorithm? It seems to be fairly fast..

My knowledge of CV is rather limited (the basics of edge and corner detection, a high-level understanding of stereo vision, scrolling windows, machine-learning-based pattern matching, etc.), and I haven't had much experience with OpenCV.

A basic flow of your algorithm would be nice if you're willing to share :D

catacon
24-01-2012, 01:39
How are you doing the tracking/recognition?

What's your algorithm? It seems to be fairly fast..

My knowledge of CV is rather limited (the basics of edge and corner detection, a high-level understanding of stereo vision, scrolling windows, machine-learning-based pattern matching, etc.), and I haven't had much experience with OpenCV.

A basic flow of your algorithm would be nice if you're willing to share :D


I will post a general outline of my algorithm once I get something more solid down. I have improved it greatly since those videos.

Depth measurements and angle based tracking have been locked in.

catacon
24-01-2012, 14:48
Another video. Yaaaaaaay....


http://www.youtube.com/watch?v=6M3MpksczlY

shuhao
28-01-2012, 00:13
Another video. Yaaaaaaay....


http://www.youtube.com/watch?v=6M3MpksczlY


Any explanations soon?

catacon
30-01-2012, 16:21
Yeah...maybe this week. I kind of want to get things perfected first.

I got our Pandaboard and will be working on getting things running on that this week.

spying189
31-01-2012, 11:36
Are there any drivers that we need to use the Kinect with the driver station? It says I need a server. Does the new driver station come with it? When I plug the Kinect into the Classmate, it doesn't recognize it.

realslimschadey, please look at some other threads, resources from FIRST, or start your own question thread. This thread is about using the Kinect on the robot, like a camera.


To use the Kinect on the robot, you would need either a computer ON the robot, or to have the Kinect USB stream somehow sent wirelessly to the Classmate/FIRST laptop for translation and then sent back to the bot for angle determination. FIRST provides all the resources through their "Technical Resources" webpage.
http://www.usfirst.org/roboticsprograms/frc/2012-kit-of-parts-driver-station/ This webpage provides you with all the links that you will need. First, you need to download the NI LabVIEW Update (http://joule.ni.com/nidu/cds/view/p/id/2261/lang/en) in order to have the Classmate up to date, along with an update for FIRST utilities (http://joule.ni.com/nidu/cds/view/p/id/2262/lang/en) and the Driver Station Update (http://joule.ni.com/nidu/cds/view/p/id/2263). These MUST be installed **IN LISTED ORDER** to run the supported version of the Driver Station. To give your Classmate/FIRST laptop the ability to support Kinect use, you must first ensure the computer meets the following system requirements.
It must have:
Microsoft Windows 7 Starter Edition and up
2.0 GHZ Processor or higher
1 GB of RAM or higher
3 GB or more of FREE HARD DRIVE space

To get the Kinect running after the Driver Station is installed, download & install the Microsoft Kinect SDK (http://www.microsoft.com/en-us/kinectforwindows/download/). After doing this, download/install the FRC Kinect Server software. (http://firstforge.wpi.edu/sf/frs/do/listReleases/projects.wpilib/frs.2012_frc_kinect_server) This will allow FIRST software cross-compatibility between the Kinect SDK and the FIRST software.
Finally, download the Kinect Kiosk software (http://firstforge.wpi.edu/sf/frs/do/listReleases/projects.wpilib/frs.2012_frc_kinect_kiosk) to enable viewing of what the Kinect sees on the Driver Station. (Skeleton or not.) If you would like it to act just as a camera on the robot, then you will have to somehow program that into the FRC Dashboard (Editing the Driver Station code isn't allowed).

Hopefully this will help those who needed it working!

shuhao
31-01-2012, 13:27
First of all, you don't need Windows for the Kinect on the robot. In fact, it is probably a bad idea, because libfreenect is just better than Microsoft's SDK.

azula369
31-01-2012, 17:25
We're a beginning team considering using a USB wireless extender (http://www.usbfirewire.com/Parts/rr-47-2022.html) to transfer the data collected by the Kinect to the Classmate, process it there, and then transfer it back through the radio. Does this sound feasible, and also legal, to the more experienced teams?

mwtidd
31-01-2012, 17:49
First of all, you don't need Windows for the Kinect on the robot. In fact, it is probably a bad idea, because libfreenect is just better than Microsoft's SDK.

Your first statement is definitely correct.

However I'd be curious as to what makes libfreenect better.

For me I think it would save money and weight, but the library itself isn't necessarily better.

The key function I utilize in the MS SDK that makes the vision processing for this game quite a bit easier is the GetColorPixelCoordinatesFromDepthPixel function. This makes finding the intersection of the RGB and depth images much easier.

As far as I know, achieving this in libfreenect takes a good deal of work and calibration. Also, I've found OpenKinect to be a pain to install, and watched it kill my machine the last two times I've tried to install it on Windows.

I don't believe one solution is "better" than the other, it all depends on the approach and the application.

cgmv123
31-01-2012, 18:40
We're a beginning team considering using a USB wireless extender (http://www.usbfirewire.com/Parts/rr-47-2022.html) to transfer the data collected by the Kinect to the Classmate, process it there, and then transfer it back through the radio. Does this sound feasible, and also legal, to the more experienced teams?

Feasible depending on range, but not legal. The only wireless communication allowed is the robot radio.

azula369
31-01-2012, 19:04
OK, but buying and connecting a Pandaboard is legal, right? So that's plan B. Are there any legality issues we should be aware of with that?

catacon
31-01-2012, 21:18
The key function I utilize in the MS SDK that makes the vision processing for this game quite a bit easier is the GetColorPixelCoordinatesFromDepthPixel function. This makes finding the intersection of the RGB and depth images much easier.

As far as I know, achieving this in libfreenect takes a good deal of work and calibration. Also, I've found OpenKinect to be a pain to install, and watched it kill my machine the last two times I've tried to install it on Windows.


That sure is a mouthful, haha. This is not that difficult with libfreenect, especially with OpenCV (for me, anyways). Besides, no real need for the RGB feed anyway. ;-)

libfreenect is a pain to install on Windows. It's cake on Linux, though.

catacon
31-01-2012, 21:58
Got the Kinect running on our Pandaboard tonight. It's a little slow, but I have a few ideas on how to speed it up.

RufflesRidge
31-01-2012, 22:16
Got the Kinect running on our Pandaboard tonight. It's a little slow, but I have a few ideas on how to speed it up.

I'd love to know what framerate you're seeing and what kind of processing you're doing. Are you using both feeds or just opening the depth stream?

catacon
31-01-2012, 23:00
Framerate isn't the best, but I think that mostly has to do with me displaying both video feeds onto a 1080p monitor. Obviously this won't be done on the robot. When I don't display the video feeds, the "framerate" or rather, the output, is much better.

I am using the IR and depth feeds.

mwtidd
01-02-2012, 07:57
Framerate isn't the best, but I think that mostly has to do with me displaying both video feeds onto a 1080p monitor. Obviously this won't be done on the robot. When I don't display the video feeds, the "framerate" or rather, the output, is much better.

I am using the IR and depth feeds.

I had the same results with the MS SDK. By disabling the video feed cpu usage dropped by 5%, which on a i7 quad core is a significant drop.

Thanks for the insight on linux vs windows with open kinect. Unfortunately for me right now I don't have a linux box at my disposal.

It seems using the reflective tape is definitely better for finding the center of the target, and I think I'm probably going to use the same strategy. I use the RGB and depth to find distance, because I believe the carpenter's tape is more reliable for the depth measurements. I'm curious, have you tried your vision tracking with other shiny aluminum objects in the field of view? That's what killed me last year: forgetting about the reflections on the legit field. Also, are you using a clear poly or smoked poly backboard? I'm trying to find someone who has taken a shot of the 1/2" smoked poly backboard with the Kinect. I have a feeling it will look closer to wood than clear poly.

catacon
01-02-2012, 11:25
I am currently just using clear poly.

Since I am using the IR feed, many "shiny" things are of no concern since they are reflecting (humanly) visible light. The biggest issue comes from light sources that produce IR light (e.g. incandescent bulbs). However, this is not hard to deal with, since you can easily filter out small objects and set up the algorithm to only look for rectangles.

I am using the retroreflective tape to find the target and then I look at the gaffers tape for the depth (the black stuff on the inside). It's not perfect yet, but I think I can sharpen it up a bit.
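Not my actual code, but the general shape of "filter out small objects and look for rectangles" in OpenCV 2.x is roughly this (untested sketch; the threshold, area cutoff, and polygon tolerance are made-up numbers):

// General shape of "filter out small objects and look for rectangles" on a bright IR image.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::RotatedRect> findTargets(const cv::Mat& ir8bit) {
    cv::Mat mask;
    cv::threshold(ir8bit, mask, 200, 255, CV_THRESH_BINARY);      // keep only very bright pixels

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    std::vector<cv::RotatedRect> targets;
    for (size_t i = 0; i < contours.size(); ++i) {
        if (cv::contourArea(contours[i]) < 300) continue;         // drop small specks and bulbs
        std::vector<cv::Point> poly;
        cv::approxPolyDP(contours[i], poly, 10, true);            // simplify the outline
        if (poly.size() == 4 && cv::isContourConvex(poly))        // keep 4-sided convex shapes
            targets.push_back(cv::minAreaRect(contours[i]));
    }
    return targets;
}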

I did get OpenKinect to work on Windows, but it took some doing. After I used CMake to generate a VisualStudio solution, I had to go through and build each project individually (skipping some since I didn't care about them). And there were some stupid errors like it would try to build a C++ project as a C project, so I would have to set the projects as C++ manually. But...it did finally work.

yash101
28-11-2012, 23:42
It's simple. Just buy a small ARM computer. They're cheap. I use the Raspberry Pi. Tons of developer resources. It's just $35. Edit a text file to overclock the CPU, GPU, or RAM. Get the Code::Blocks IDE and the libfreenect library.

Just open up terminal and type in:
sudo apt-get install codeblocks freenect openssh-server

Install 'openssh-server' so that you can log in remotely and shut it down with the command:

shutdown -h now

The Raspberry Pi should run on a 'wide' range of voltages. To power the Kinect, get a step-up converter/transformer to get 24 volts. Then, use a switching or LDO linear voltage regulator. Just note that the Kinect requires about 1 A of current. Someone posted that it requires 12 watts at 12 volts; P = V x I gives you the 1 A. That is all I know about this setup. I might use this, but because of my PHP knowledge, I am going to create a web point-and-click-to-attack protocol service so that we can do things more accurately than any other team, even if they have the best drivers!
Thank You!

Golto
30-11-2012, 16:06
One thing that I may think of:

Possibly using another small computer, such as a Raspberry Pi board; as far as I recall this is legal under the co-processor rules, so long as it doesn't interface with the bot directly. Then the two boards could communicate via I2C. The Pi board would allow for some VERY high-level tracking and analysis, and then I2C could send some of the tracking info back to the cRIO.

Just a thought.

sebflippers
30-11-2012, 19:28
Our team has been working on this for a while, and we have gotten a Kinect feed on our Pandaboard. Creating the 640x480 depthImage uses ~60% CPU (30 fps), but doing anything with the data (like OpenCV filtering) brought us down to <5 fps. 987 did this before, and they skipped 5 pixels at a time to achieve a reasonable framerate; a rough sketch of that trick is below. We were thinking of just sending the raw depthImage (using OpenNI) to the driver station dashboard (with OpenCV), and then back to the cRIO, all over Ethernet.
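Roughly what the pixel-skipping trick looks like (untested sketch; step = 5 turns 640x480 into 128x96 before any heavier OpenCV work):

// Decimate the depth image by keeping every Nth pixel before doing heavier OpenCV work.
// Drops any leftover partial row/column.
#include <opencv2/opencv.hpp>

cv::Mat decimateDepth(const cv::Mat& depth16, int step) {
    cv::Mat out(depth16.rows / step, depth16.cols / step, CV_16UC1);
    for (int y = 0; y < out.rows; ++y)
        for (int x = 0; x < out.cols; ++x)
            out.at<unsigned short>(y, x) = depth16.at<unsigned short>(y * step, x * step);
    return out;
}

For what it's worth, cv::resize with nearest-neighbor interpolation does essentially the same thing in one call.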

spi & i2c are unnecessary. Just use ethernet.

rPi is waaay too slow.

yash101
01-12-2012, 13:37
The Raspberry Pi should work, as I came up with the plan: an HTTPd/SSH/Telnetd server. The Pi contacts the controller computer, which validates the data and forwards it to the cRIO. If the terminal is running Windows 8, it is easily possible to create a JavaScript app that creates some sort of point-and-click-to-attack mechanism. In a competition that uses beanbags or balls, you could point and click with a mouse or touchscreen and it will automatically make the robot go for the ball or beanbag and automatically execute what needs to be done with it, for example, shooting it, placing it, tearing it, etc.

yash101
05-12-2012, 17:52
Doing that will damage the Kinect to the point where you would have to go to Microsoft and have them fix it with their robots, or buy a new one. Before jumping to the conclusion "it is USB, so 5 volts will work," shouldn't you read the AC adapter? It says '12 Volt 1.08A'. Powering it off 5 volts should do some nice amounts of damage to your Kinect.