Re: Running the Kinect on the Robot.
As far as I know, most of the depth perception is done on the Kinect itself; it just transfers the data and images to the PC or 360. Keep in mind, though, that you would have to find a way to power the laptop, and batteries are not allowed.
Re: Running the Kinect on the Robot.
-The Kinect we are getting is a standard Kinect, including the AC adapter and cable thingy to connect directly to USB (you would probably need a 12v regulator for the robot)
-I would go with a single-board computer running Linux and send the data to the cRIO via IP. You could send debug data to the driver station while you're at it, if you wanted to. I would probably get all of the useful information out of the image on the co-processor and feed coordinates or other numerical data back to the robot at the highest rate possible (see the sketch below).
-A laptop running Win7 will have (comparatively) high system requirements next to an embedded single-board Linux machine, where you aren't running a GUI at all and can trim the background processes to just what you need.
-A laptop is very heavy. Just throwing that out there.
-As for powering a laptop or other machine, I would probably get an automotive power supply and feed it off of a 12v regulator, since the robot batteries can sag fairly low. Laptop chargers usually run above 12v anyway (the one in front of me is 18.5v), so you need a boost converter either way.
-The FRC Kinect code wraps the Microsoft Kinect SDK (which only runs on Win7) and feeds data to the robot via the Driver Station, including all 20 skeletal coordinates that are tracked. To use the Kinect on the DS, you do not have to write ANY driver-station-side code; the data is all passed to the robot.
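To make that coordinate-streaming idea concrete, here is a minimal sketch of a co-processor-to-robot link. Everything in it is an assumption for illustration: the robot IP, the port, and the packet layout are hypothetical choices, not anything defined by the FRC libraries, and the robot-side code would need a matching listener.

```python
# Minimal sketch: stream digested vision results from a Linux co-processor
# to the cRIO over UDP. The IP, port, and packet layout are assumptions for
# illustration -- match them to whatever listener you write on the robot side.
import socket
import struct
import time

CRIO_IP = "10.0.1.2"   # hypothetical robot address
CRIO_PORT = 1130       # hypothetical port your robot code listens on

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(frame_id, x_mm, y_mm, z_mm):
    # Pack a frame counter plus one 3D target as little-endian 32-bit values.
    packet = struct.pack("<i3f", frame_id, x_mm, y_mm, z_mm)
    sock.sendto(packet, (CRIO_IP, CRIO_PORT))

frame = 0
while True:
    # In a real loop these numbers would come from the Kinect processing code.
    send_target(frame, 412.0, -87.5, 2310.0)
    frame += 1
    time.sleep(1 / 30.0)  # roughly 30 updates per second
```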
Re: Running the Kinect on the Robot.
I was only wiring for half a summer, but I believe there is a DC-DC step-up as part of the standard wiring board. It should be the 2.5"-ish square block covered with heatsink fins. I think it pulses the straight battery voltage through an inductor and regulates 24v out. I do not know how much current you could pull from this thing, and I do not remember what it's actually used for, but you should be able to solder up a step-down circuit to take this 24v to laptop voltage (17-18ish?) in about 10 minutes with an LM317 and an outboard pass transistor (maybe the MJ2995 if you want overkill safety without heavy heatsinking).
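As a rough illustration of sizing that regulator, the standard LM317 relation is Vout ≈ 1.25 V × (1 + R2/R1). The resistor values below are guesses picked to land near laptop voltage, not a tested design.

```python
# Back-of-envelope check of LM317 output voltage for a 24v-to-laptop-voltage
# step-down. Resistor values are illustrative guesses, not a verified design.
V_REF = 1.25   # LM317 reference voltage between OUT and ADJ, in volts
R1 = 240.0     # ohms, the usual value from the LM317 datasheet
R2 = 3000.0    # ohms, chosen here to land near ~17v out

v_out = V_REF * (1 + R2 / R1)
print(f"Estimated output: {v_out:.1f} V")  # about 16.9 V with these values
```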
On the topic of image recognition, is there any pre-existing software (especially Linux software?) to determine the shape of "color" (IR distance) blobs in an image? It seems like if you could see a blob and determine how far away it was on average (and therefore its actual height), you should be able to easily detect other robots/structures on the field. As to whether having your robot autonomously see other robots/tall game objects will be useful this year... that's still up for grabs until Saturday. :D
Re: Running the Kinect on the Robot.
-There's also a 5v regulator; I don't think that one has any guarantee on it.
-The heat-sink device of which you speak (which happens to weigh a whole 1/4 lb; I weighed ours last season) reduces the (regulated) 12v down to 5v for the new radio. Confused yet?
-I would probably just find a single-board computer with either a 12v input or a car power supply, then a boost converter to 12v like the one on the PD board for the radio (guaranteed down to 4.5v or so).
-As for image data, the Kinect returns depth data as an image, so you could effectively process it for blobs like a normal 11-bit greyscale image. OpenCV has commonly been used for image processing, although I honestly haven't used it myself; see the sketch below.
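For anyone curious what that would look like, here is a minimal sketch of blob detection on a Kinect-style depth frame with OpenCV. It assumes the 11-bit depth data has already been pulled into a NumPy array (for example via libfreenect's Python bindings) and OpenCV 4.x; the distance window and minimum blob area are arbitrary illustrative values, not tuned numbers.

```python
# Minimal sketch: find blobs within a distance window in an 11-bit depth frame.
# Assumes `depth` is a 480x640 uint16 NumPy array of raw Kinect depth values.
import cv2
import numpy as np

def find_depth_blobs(depth, near=600, far=1500, min_area=500):
    # Keep only pixels whose raw depth value falls inside [near, far).
    mask = ((depth >= near) & (depth < far)).astype(np.uint8) * 255

    # Knock out speckle noise before looking for connected regions.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    blobs = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        x, y, w, h = cv2.boundingRect(c)
        roi = depth[y:y + h, x:x + w]
        roi_mask = mask[y:y + h, x:x + w] > 0
        mean_depth = float(roi[roi_mask].mean())  # rough average raw distance of the blob
        blobs.append({"bbox": (x, y, w, h), "mean_depth": mean_depth})
    return blobs
```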
Re: Running the Kinect on the Robot.
When I was on the programming team in the past, we were always limited to telling the software a color and having it tell us where the blob was. Does the software you are talking about detect the blob if you instead tell it how big of a blob you would like to find? (Assuming we want to know how far away the robot-sized blob is, not just find out how tall robots exactly 10 ft from the camera are.)

So the brick is a step-down to 5v? Does that mean the step-up is embedded in the PDB? I remember looking up their inductive step-up circuit once and being very confused, but that was a long time ago. Do you know of any circuits that are a simple step-up? If we need less than double the battery voltage, we should be able to get away with a simple charge pump with a 555 or similar running the switching.
Re: Running the Kinect on the Robot.
No, not really. It returns the depth data, but not as an image. You can build an image out of the data, but there are a lot of reindeer games involved. Which isn't to say that it can't be done, it can, but there is bit shifting and such involved. It is far from simply "get a distance image, ship it to an OpenCV routine, ..., here are all the interesting geometric shapes in the field of view."

By the way, I have been noodling on how I would find something interesting, say, I don't know, maybe the center of a ball of radius X and color Y. I think I would first use a very rough color filter (say, everything "near enough" to the color of interest, where "near enough" is a very wide tolerance). Second, I would fit a best-fit sphere through the 3D points around each of these candidate points (providing a center point and radius). Third, I would filter by radius (only looking for balls of radius X +/- tol). Finally, I would group and average the centers into logical individual balls (e.g. you can't have 2 red balls closer to each other than 2 radii). It sounds like a lot, but this is all integer math stuff for the most part. I think we could get a reasonable frame rate out of a board like the Panda Board. A rough sketch of that pipeline is below.

Cool stuff...

...there just are not enough hours in a day...

Joe J.
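Here is roughly what that pipeline might look like in Python/NumPy. Everything in it is an assumption layered on the description above: the candidate clusters are arrays of 3D points that already passed the rough color filter, the sphere fit is an ordinary linear least-squares fit, and the grouping step is a naive greedy merge. It is meant only to show the shape of the algorithm, not a tested implementation.

```python
# Rough sketch of the ball-finding pipeline described above: fit a sphere to
# color-filtered 3D points, keep fits near the expected radius, then merge
# centers that are closer together than two radii. All thresholds are guesses.
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit. points is an (N, 3) array; returns (center, radius)."""
    # x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d  ->  solve for a, b, c, d linearly.
    A = np.column_stack([2 * points, np.ones(len(points))])
    rhs = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center.dot(center))
    return center, radius

def find_balls(candidate_clusters, expected_radius, radius_tol, merge_dist=None):
    """candidate_clusters: list of (N, 3) point arrays that passed the color filter."""
    if merge_dist is None:
        merge_dist = 2 * expected_radius   # two balls can't be closer than 2 radii

    centers = []
    for pts in candidate_clusters:
        if len(pts) < 10:                  # too few points to trust a fit
            continue
        center, radius = fit_sphere(pts)
        if abs(radius - expected_radius) <= radius_tol:
            centers.append(center)

    # Greedy merge: average centers that belong to the same physical ball.
    balls = []
    for c in centers:
        for i, existing in enumerate(balls):
            if np.linalg.norm(c - existing) < merge_dist:
                balls[i] = (existing + c) / 2
                break
        else:
            balls.append(c)
    return balls
```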
Re: Running the Kinect on the Robot.
Remember, you have 6 weeks to complete the programming projects. Do you really want to take on a low-level programming project during build?
Re: Running the Kinect on the Robot.
Second, regarding using standard image processing: my experience with machine vision is that with controlled lighting, life is good; without it, life can be pretty crummy. An FRC robotics field is a pretty lousy lighting environment: it may be bright, it may be dim, there may be spotlights, there may be colored lighting, and so on. There were teams in the GA dome whose image processing algorithm ran fine during the day but had fits after dark (and vice versa). Are you willing to live with the possibility that your algorithm runs fine on your division field but goes wacky on Einstein? Maybe, but maybe not...

So... ...I think that the 3D points from the PrimeSense distance data are going to be more robust to ambient lighting conditions.

Joe J.
Re: Running the Kinect on the Robot.
I have a feeling that if you keep trying to fit more and more information through the TCP/IP port, you will start having lag. If you have a second USB port, I would use a USB-to-serial converter to pass filtered data directly to the cRIO at a high baud rate. This would be easier to set up than a TCP/IP port, imho.
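For what that serial link might look like on the co-processor side, here is a minimal sketch using pySerial. The device path, baud rate, and line format are assumptions for illustration; the cRIO end would need matching serial-read code.

```python
# Minimal sketch: push filtered vision results out a USB-to-serial adapter.
# The device path, baud rate, and ASCII record format are illustrative choices;
# whatever reads this on the cRIO side has to agree on all three.
import time
import serial  # pySerial

port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.05)

def send_target(x_mm, y_mm, z_mm):
    # One newline-terminated ASCII record per processed frame.
    port.write(f"T,{x_mm:.0f},{y_mm:.0f},{z_mm:.0f}\n".encode("ascii"))

while True:
    # In a real loop these values would come from the Kinect processing code.
    send_target(412, -88, 2310)
    time.sleep(1 / 30.0)
```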
Re: Running the Kinect on the Robot.
Usually reliable sources tell me both that 640x480 data (image and distance) CAN and that it CANNOT be reliably sent at 20-30 fps via the wireless router during a robot competition. Both sides are equally adamant that they are correct. My problem is that if I guess wrong, I potentially don't find out until the first regional. Yikes!

So... ...my plan is that if we use it at all (and I am leaning toward not using it, at least this year), I want to do all the processing on the USB host (e.g. a Panda Board running an embedded-friendly distro of Linux). We'd only be sending digested data via the TCP/IP link (e.g. the red ball is at coords X1,Y1,Z1, the blue ball is at coords X2,Y2,Z2, the floor is at Distance, Theta, Psi, a wall is at ..., ). It is hard to imagine that this would tax the link very much; a quick back-of-envelope comparison is below.

Joe J.
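To put rough numbers on that, here is a back-of-envelope comparison of streaming raw 640x480 depth frames versus sending only digested coordinates. The 64-byte packet size in the digested case is an assumed figure for a handful of object coordinates, not a defined format.

```python
# Back-of-envelope bandwidth comparison: raw 640x480 depth frames versus a
# small digested packet per frame. The 64-byte packet is an assumed size.
WIDTH, HEIGHT = 640, 480
BYTES_PER_PIXEL = 2          # 11-bit depth values stored in 16-bit words
FPS = 30

raw_bps = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS * 8
digested_bps = 64 * FPS * 8  # ~64 bytes of coordinates per frame

print(f"Raw depth stream: {raw_bps / 1e6:.0f} Mbit/s")      # roughly 147 Mbit/s
print(f"Digested packets: {digested_bps / 1e3:.1f} kbit/s")  # about 15.4 kbit/s
```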
Re: Running the Kinect on the Robot.
There's no question that using the depth sensor would be even more robust, but I have performed reliable shape recognition using only RGB techniques in far less constrained environments than an FRC field (albeit with far more engineering time than I would be willing/able to devote to FRC programming :) ).
Re: Running the Kinect on the Robot.
Firstly, it needs to be decided whether placing the Kinect on the robot will in some way enhance the robot during the hybrid period. Seeing as I can't really think of a reason why it would help to give feedback to your robot during hybrid, we'll assume putting it on the robot is the better idea.

But so far, most of the discussion focuses on interfacing the Kinect to the cRIO on the robot, directly or indirectly. Here's why this is not a good idea:

The problem is that I don't have any counterpoints. The fact that the Kinect uses a USB interface is a huge issue. Last year our team worked out a system to have an application on the driver station grab images from the Ethernet camera, do the processing on the laptop, and send back commands, but this only worked because we were able to bypass the cRIO entirely when doing our image transmission. To do something similar this season with the Kinect, you would need to convert the USB image stream to Ethernet... and at this point (due to the hardware required to do this), you might as well put a computer directly on the robot, which is list item #3.

So this turns into an argument of smart cRIO vs. dumb cRIO (in the dumb/smart terminal sense). Last year, our team had a dumb cRIO with a command framework that worked pretty well, interpreting commands sent back from the computer. This year, a similar system would be doable, but only by shelling out for an integrated system and using that to do the image processing. The deciding factor becomes cost. While you might be able to go cheaper than a Panda Board, someone had already mentioned Beagle Boards and boards with similar processors being too slow.

It really depends on how worthwhile you think the depth data from the Kinect's IR camera will be. Personally, I don't think it will be that game-changing, seeing as you should know your distance from the basket based on where you start. As for using it in hybrid mode...? Still seems rather useless, seeing as anything you might want to tell it would be static and could be accomplished through more orthodox means (like switches on the robot or something). Our team will probably forgo the Kinect entirely, and might end up trying to sell it if we can't find an off-season project to put it in.