#1
The kinect on the Robot
For starters, no one can deny the Kinect's great capabilities, and most programmers would love to put one on their robot.

A few problems:
1. There is no good connection between the cRIO and the Kinect.
2. The Kinect can't use its IR depth sensor to find retro-reflective tape.
3. Programming it in Java, C++, or LabVIEW would mean reinventing the wheel, given the C# implementation of the Kinect SDK.

For the teams that want to use the Kinect anyway, I have a solution that might be a little hefty for some. Just a few items to pick up:
- ATX motherboard: http://www.newegg.com/Product/Produc...82E16813138293
- 2.6 GHz CPU: http://www.newegg.com/Product/Produc...82E16819103944
- a stick or two of RAM
- a DC-DC power supply: http://www.mini-box.com/s.nl/it.A/id.417/.f

Why a computer on the robot? USB and Ethernet connections: USB goes to the Kinect, and Ethernet goes to the cRIO for sending basic commands like "aim left" or "aim right."

What operating system, you might ask? Puppy Linux or Tiny Core, for their ability to load everything from a flash drive into RAM, so a cold shutdown doesn't mess up your hard disk. With Mono on the Linux machine, you can run an .exe developed in C# on your programming computer: transfer it from the bin folder, have it start when your Linux box boots, and send commands over Ethernet saying "aim left" or "aim right."

Complexity: 8/10. Benefits over a regular camera: 2/10. Bragging rights: 10/10.

The reason I posted this online is that I want it to be done, but it appears our team has decided not to do it. Does this sound like a good method, and what do you guys think?
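The "send basic commands over Ethernet" step above can be sketched in a few lines. This is a minimal illustration, not the poster's actual code: the cRIO address and port are hypothetical, and the commands are plain ASCII strings carried in UDP datagrams.

```python
import socket

# Hypothetical address for the cRIO on the robot network; adjust for your setup.
CRIO_ADDR = ("10.0.0.2", 1180)

def send_aim_command(direction, addr=CRIO_ADDR, sock=None):
    """Send a one-word aiming command such as "aim_left" or "aim_right" as a
    UDP datagram. Returns the raw payload that was sent."""
    own_sock = sock is None
    if own_sock:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        payload = direction.encode("ascii")
        sock.sendto(payload, addr)
        return payload
    finally:
        if own_sock:
            sock.close()
```

On the cRIO side, a matching receiver would read each datagram and map the keyword to a turret adjustment; UDP keeps the link stateless, so a dropped packet only costs one correction rather than stalling the robot.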
#2
Re: The kinect on the Robot
http://www.chiefdelphi.com/forums/sh...ad.php?t=99275
Read that. Tell me the benefits of a Kinect over a webcam.
#3
Re: The kinect on the Robot
There really aren't any.
You can use the camera to find how far away the target is; it's even easier to do with the camera, and you get even better results with the light ring. You can use the little mount FIRST gives you in the KOP to tilt the camera, and you can use the accelerometer in the KOP if you care to have one. Hence we are going to use the Axis camera. All you get with the Kinect is bragging rights.
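For concreteness, here is how the camera alone can measure range: under a pinhole camera model, distance is inversely proportional to the target's apparent width in pixels. This is my own illustration, not the poster's code; the 24 in. width matches the 2012 rectangular target, but the focal length in pixels is an assumed value you would calibrate for your own camera.

```python
def distance_to_target(pixel_width, real_width_in=24.0, focal_px=357.0):
    """Pinhole-model range estimate: distance = real_width * focal / pixel_width.
    focal_px is hypothetical -- measure it for your camera by imaging a target
    of known size at a known distance and solving for focal_px."""
    if pixel_width <= 0:
        raise ValueError("target not detected")
    return real_width_in * focal_px / pixel_width
```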
#4
Re: The kinect on the Robot
Quote:
#5
Re: The kinect on the Robot
Advantages?
The Kinect uses patterned light and a camera to produce a depth map. If environmental factors don't interfere with it, it gives data similar to that of very expensive LIDAR sensors. Used appropriately, it can simplify the image processing and provide independent measurements that improve the interpretation of the vision code. Of course, this is only helpful if you make it reliable and solve all of the issues listed in the previous posts.

Greg McKaskle
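To make the depth map concrete: the Kinect reports 11-bit raw disparity values per pixel, which must be converted to metric depth. The sketch below uses a first-order calibration popularized by the OpenKinect community; the constants are a rough starting point, since individual units vary.

```python
def raw_to_meters(raw_disparity):
    """Approximate depth in meters for an 11-bit Kinect raw disparity value,
    using a commonly cited OpenKinect first-order fit. Readings at or above
    2047 mean the IR projector got no return for that pixel (shadow,
    absorption, or out of range)."""
    if raw_disparity >= 2047:
        return None
    return 1.0 / (raw_disparity * -0.0030711016 + 3.3309495161)
```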
#6
|
||||
|
||||
|
Re: The kinect on the Robot
Quote:
2. You cannot see the retro tape, but you can see the border tape. I'm currently using a depth/color intersection and a couple of standard-deviation passes to find the backboard in less-than-ideal lighting situations.
3. Depending on your algorithm, this may actually be a good thing. The OpenCV libraries are a bit of overkill for this year's challenge.

The benefits over a regular camera are 7/10 in my book; see my notes below.

Quote:
In 2006 it was great: we had one illuminated target that required little calibration, and we averaged 8 for 10 in autonomous. http://video.google.com/videoplay?do...73997861101882

The Kinect helps you design a system with much better tolerance. I would recommend testing your system in a very dimly lit room at night and in a very bright room during the day. If your system works in both, you are much more likely to have a system that will work at competition. That flexibility is very difficult to achieve with just a camera and a light source, especially from 12' away with all of the other reflections and crowd colors on a legitimate FIRST playing field.

The Kinect gives me some key advantages. First, I can filter the crowd and drivers out of view. Second, it helps me separate the backboard from any other playing-field object by taking the intersection of depth and color. I don't know of any teams that were successful last year with the retro-reflective tape. I believe a few will be this year; however, I strongly believe teams would be much more successful with the Kinect, because there is a strong SDK backing it and it is less susceptible to environmental variables.
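The depth/color intersection with standard-deviation passes described above can be sketched roughly as follows. This is my reconstruction of the idea, not the poster's code; the depth window, brightness floor, and k value are illustrative thresholds, not tuned numbers.

```python
from statistics import mean, pstdev

def backboard_pixels(pixels, d_lo=1.0, d_hi=6.0, b_min=200, k=1.5):
    """pixels: list of (depth_m, brightness) samples for candidate pixels.
    First pass: intersect a depth window with a brightness threshold.
    Second pass: drop depth outliers beyond k standard deviations of the
    surviving pixels' mean depth (rejects crowd/background leakage)."""
    kept = [(d, b) for d, b in pixels if d_lo < d < d_hi and b >= b_min]
    if not kept:
        return []
    depths = [d for d, _ in kept]
    mu, sd = mean(depths), pstdev(depths)
    return [(d, b) for d, b in kept if abs(d - mu) <= k * sd]
```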
#7
|
||||
|
||||
|
Re: The kinect on the Robot
You guys talk of nice database solutions and full OS stacks. But if your autonomous mode has to depend on an OS that takes fourteen seconds to boot, then another 20-30 seconds for PostgreSQL, MySQL, or some other DB to fully initialize to a usable state, don't expect it to work in a timely manner. I deliver these types of systems as part of my projects at work, and quite honestly they never go on machines that must go from cold boot to full use in under several minutes. Even teams that are very organized sometimes get stuck with a match or two where they're rushed and have 30 seconds to set the robot down and get behind the wall.

My advice: hack the API and wrestle it into doing what you need it to do (point clouds). There's the real cred. Don't do it in the context of 'sticking it to Microsoft'; that's tacky and unproductive. Do it because it's quite literally the best technological way to do what you want to do. Of course, this assumes you have a full field setup so you can tune the point cloud into something that's remotely useful and non-noisy when facing a wall under sometimes-blinding light.

Last edited by JesseK : 27-01-2012 at 21:34.
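Turning the depth image into a point cloud boils down to back-projecting each depth pixel through the pinhole model. A minimal sketch, assuming rough Kinect-v1 intrinsics: the fx/fy/cx/cy constants below are community-published approximations, not values from this thread, and you should calibrate your own unit.

```python
# Approximate Kinect v1 intrinsics (assumed values; calibrate your own camera).
FX, FY = 594.21, 591.04   # focal lengths in pixels
CX, CY = 339.5, 242.7     # principal point in pixels

def pixel_to_point(u, v, depth_m):
    """Back-project image pixel (u, v) at metric depth depth_m into a 3-D
    point (x, y, z) in the camera frame, in meters."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)
```

Running this over every valid depth pixel in a 640x480 frame yields the point cloud; from there, facing a flat wall should produce a plane you can fit and sanity-check against the noise JesseK describes.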
#8
|
|||||
|
|||||
|
Re: The kinect on the Robot
One.
Why go through all this mess? Now, if I'm wrong, then so be it, but there are existing Arduino implementations that use the Kinect as a sensor. So how did they do it? What my electrical team has suggested is an Arduino USB shield wired to the I2C port on the Digital Sidecar; the USB host can be programmed to decode the Kinect data into serial data. Look up the OpenKinect project; there is even a quadcopter programmed to detect obstacles using the Kinect.
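If you go the microcontroller route, the Kinect frame has to be boiled down to something a slow serial/I2C link can carry. One approach is to reduce each frame to a tiny target-fix packet. The frame layout below is purely illustrative, not OpenKinect's or any team's actual protocol.

```python
import struct

START = 0xA5  # arbitrary start-of-frame marker for this sketch

def encode_target_packet(bearing_deg, distance_cm):
    """Pack a target fix as: start byte, signed bearing in degrees (1 byte),
    distance in cm (big-endian 16-bit), and a simple 8-bit sum checksum."""
    body = struct.pack(">bH", int(bearing_deg), int(distance_cm))
    checksum = (START + sum(body)) & 0xFF
    return bytes([START]) + body + bytes([checksum])

def decode_target_packet(frame):
    """Inverse of encode_target_packet; raises ValueError on a bad frame."""
    if len(frame) != 5 or frame[0] != START:
        raise ValueError("bad frame")
    if (sum(frame[:-1]) & 0xFF) != frame[-1]:
        raise ValueError("bad checksum")
    return struct.unpack(">bH", frame[1:-1])
```

Five bytes per fix is small enough to push over I2C or a UART many times per second, which is all the cRIO needs for aiming.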
#9
Re: The kinect on the Robot
The reason I chose Puppy Linux was that it can boot in 30 seconds and load its whole operating system into RAM.
We thought about using Windows, but it's big and clunky to boot, and we all know that if you simply cold-shutdown a Windows computer, it will want to check the disk for errors.

The Arduino was one of my first thoughts too, but I wasn't sure it could handle it. I also saw a team using the BeagleBone with successful results.
#10
Re: The kinect on the Robot
Quote:
I think that sort of information is very valuable to the teams trying to do a vision system. A list of tests that are likely to make your code fail is often more valuable than anything else you can be given.

Personally, I would want to see the Kinect solution work outside on a cloudy day, and with fluorescent and other lights behind the DS wall shining into the camera. Those are the environmental factors that will be challenging when using the Kinect. We can't see IR, so we don't really know how much IR pollution there is until we use a special camera.

Personally, I'd love to see teams succeed with the Kinect, with the camera, and with IMUs. Just make auto exciting to watch.

Greg McKaskle
#11
|
||||
|
||||
|
Re: The kinect on the Robot
Quote:
So the first piece of advice I can give is to mount your camera as high as you can. Second, look at this frame from the field over and over: http://www.youtube.com/watch?feature...GQ95_I#t=31 s

One question I have is how the half-inch smoked poly looks to a Kinect or a camera. The camera may pick up the supports of the baskets and the player-station framing.

Finally, even though you are using your own light source, the farther out you get, the more impact ambient light will have. I would test your vision system at every distance you intend to use it. If it can't be calibrated in 5 minutes, you'll probably have a tough time calibrating it at competition.
#12
|
|||||
|
|||||
|
Re: The kinect on the Robot
Unless you did this...
http://www.roborealm.com/tutorial/FIRST/slide010.php
#13
Re: The kinect on the Robot
If a team posts an easy enough way to do this, with detailed instructions for C++ (I'm good with computers, but not someone who could figure this stuff out on my own), I would be glad to give it a try, since I've got a Kinect at my house and we can't get a new camera right now...
#14
Re: The kinect on the Robot
The FIRST field is indeed shiny. It has aluminum diamond plate all along the wall and lexan up above. Lexan is both transparent and reflective. All polished surfaces, like glass, lexan, aluminum, and painted panels, will reflect to some degree, which means that overhead lights and lights from the robots will show up at reflection points (where the camera and light reflect about the surface normal). Furthermore, the lexan doesn't stop light from coming through its back surface, so lights in the stands, windows, and other sources can shine into the camera through the driver wall of lexan. And then there are the t-shirts, hats, and other gear worn by the drivers showing through the lexan. It all adds up to an image that is quite difficult for simple processing to deal with.

This is why I wouldn't recommend looking just for color. If you combine color and shape information, you will be far more robust; that combination will almost always reject glare, even from the diamond plate. If the shape information is robust enough, you don't even need to use color, just brightness. The camera will most definitely pick up the hoop, net, and supports. Camera placement is important in part because the hoop and net block the reflective tape; as shown on the targets in the paper, the lower edge will be the first to be affected. As for blinding drivers, I don't think the LEDs need to be very bright. They certainly aren't as bright as the BFL.

Greg McKaskle
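The "combine brightness with shape" advice can be made concrete with a simple geometric filter. The sketch below assumes the candidate blob has already been filled in (e.g. via a convex hull) and uses the 2012 target's 24 x 18 in. outer frame (aspect ratio about 1.33); the aspect and fill thresholds are illustrative starting points, not tuned values.

```python
def looks_like_target(w, h, area, aspect=(1.1, 1.6), fill_min=0.75):
    """Shape test for a candidate blob's bounding box (w x h, in pixels)
    containing 'area' filled pixels. Accepts only blobs whose width/height
    ratio is near the 24/18 = 1.33 target and that mostly fill their box,
    which rejects thin glare streaks and diamond-plate speckle."""
    if h <= 0 or w <= 0:
        return False
    ratio = w / h
    fill = area / float(w * h)
    return aspect[0] <= ratio <= aspect[1] and fill >= fill_min
```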
#15
Re: The kinect on the Robot
Has anyone tried putting two Kinects side by side?
If the Kinect projects a certain density of dots per square inch, would two Kinects double the number of dots, effectively making each Kinect think the distance is shorter than it really is?