The kinect on the Robot
For starters, no one can deny the Kinect's capabilities, and most programmers would love to put one on their robot.
A few problems:
1. Bad connection between the cRIO and the Kinect.
2. The Kinect can't use its IR depth sensor to find retroreflective tape.
3. Programming in Java, C++, or LabVIEW would be kind of reinventing the wheel, given the C# implementation of the Kinect SDK.

For the teams that want to use the Kinect, I have a solution that might be a little hefty for some. Just a few items to pick up:
- ATX motherboard: http://www.newegg.com/Product/Produc...82E16813138293
- CPU, 2.6 GHz: http://www.newegg.com/Product/Produc...82E16819103944
- A stick or two of RAM
- A DC-DC power supply: http://www.mini-box.com/s.nl/it.A/id.417/.f

What, a computer on the robot? Why? USB and Ethernet connections: USB goes to the Kinect, and Ethernet goes to the cRIO, sending basic commands like "aim left" or "aim right".

What operating system, you might ask? Puppy Linux or Tiny Core, for their ability to load the whole system from a flash drive into RAM, so a cold shutdown doesn't mess up your hard disk. With Mono on your Linux machine, you can run an exe developed in C# on your programming computer: transfer it from the bin folder, have it start when the Linux box boots, and send commands over Ethernet saying "aim left" or "aim right".

Complexity: 8/10. Benefits over a regular camera: 2/10. Bragging rights: 10/10.

The reason I posted this online is that I want it to be done, but it appears our team has decided not to do it. Does this sound like a good method, and what do you guys think?
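The "send commands over Ethernet" step could be as simple as one-word UDP datagrams. A minimal sketch in Python rather than C# (the command words, address, and receive loop are all made-up placeholders, not anything the cRIO understands out of the box — you'd have to write the matching receiver in your robot code):

```python
import socket

def send_aim_command(sock, command, addr):
    """Send a one-word aim command (e.g. "left", "right") as an ASCII UDP datagram."""
    sock.sendto(command.encode("ascii"), addr)

def recv_aim_command(sock, bufsize=64):
    """Receive one datagram and decode it back into a command string."""
    data, _ = sock.recvfrom(bufsize)
    return data.decode("ascii")

# Loopback demo; on the robot the receiver would run on the cRIO side.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                      # OS picks a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_aim_command(tx, "left", rx.getsockname())
print(recv_aim_command(rx))                    # prints "left"
```

UDP fits here because a stale "aim left" is worthless: you want the newest packet, not guaranteed delivery.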
Re: The kinect on the Robot
http://www.chiefdelphi.com/forums/sh...ad.php?t=99275
Read that. Tell me the benefits of a Kinect over a webcam.
Re: The kinect on the Robot
There really aren't any.
You can use the camera to find how far away the target is. It's even easier to do with the camera, and you get even better results with the light ring. You can use the little mount FIRST gives in the KOP to tilt the camera, and you can use the accelerometer in the KOP if you care to have one. Hence we are going to use the Axis camera. All you get from the Kinect is bragging rights.
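For reference, the "how far away is the target" measurement from a plain camera is just similar triangles: distance = real height × focal length (in pixels) ÷ height in pixels. The numbers below are assumed examples (the target height and focal length are placeholders; calibrate the focal length for your own camera):

```python
def distance_to_target(target_height_m, target_height_px, focal_length_px):
    """Pinhole-camera range estimate: distance = real_height * focal_px / pixel_height."""
    return target_height_m * focal_length_px / target_height_px

# Example: a 0.457 m tall target spanning 100 px, with an assumed 700 px focal length
d = distance_to_target(0.457, 100, 700.0)   # about 3.2 m
```

A quick sanity check on the model: halving the apparent pixel height should exactly double the estimated distance.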
Re: The kinect on the Robot
Advantages?
The Kinect uses patterned light and a camera to produce a depth map. If environmental factors don't interfere with it, it gives data similar to very expensive LIDAR sensors. Used appropriately, this is great for simplifying the image processing and making independent measurements that improve the interpretation of the vision code. Of course, this is only helpful if you make it reliable and solve all of the issues listed in the previous posts.

Greg McKaskle
Re: The kinect on the Robot
2. You cannot see the retro tape, but you can see the border tape. I'm currently using a depth/color intersection and a couple of standard-deviation passes to find the backboard in less-than-ideal lighting situations.

3. Depending on your algorithm, this may actually be a good thing. The OpenCV libraries are a bit of overkill for this year's challenge.

The benefits over a regular camera are 7/10 in my book; see my notes below.
In 2006 it was great: we had one illuminated target that required little calibration, and we averaged 8 for 10 in autonomous. http://video.google.com/videoplay?do...73997861101882

The Kinect helps you design a system with much better tolerance. I would recommend testing your system in a very dimly lit room at night and in a very bright room during the day. If your system works in both, you are much more likely to have a system that will work at competition. That flexibility is very difficult to achieve with just a camera and a light source, especially from 12' away with all of the other reflections and crowd colors on a legitimate FIRST playing field.

The Kinect gives me some key advantages. First, I can filter the crowd and drivers out of view. Second, it helps me separate the backboard from any other playing-field object by taking the intersection of depth and color. I don't know of any teams that were successful last year with the retroreflective tape. I believe a few will be this year; however, I strongly believe teams would be much more successful with the Kinect, because there is a strong SDK backing it and it is less susceptible to environmental variables.
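The depth/color intersection with standard-deviation passes described above could look roughly like this — a NumPy sketch with made-up thresholds, not the poster's actual code. It keeps pixels that are both bright and inside a plausible depth window, then rejects surviving pixels whose depth is far from the mean (e.g. a bright light behind the backboard):

```python
import numpy as np

def backboard_mask(depth_mm, gray, depth_window=(3000, 8000), bright_min=128, n_std=2.0):
    """Intersect a depth window with a brightness threshold, then reject
    surviving pixels more than n_std standard deviations from the mean depth."""
    mask = (depth_mm > depth_window[0]) & (depth_mm < depth_window[1]) & (gray > bright_min)
    if not mask.any():
        return mask
    d = depth_mm[mask]
    mu, sigma = d.mean(), d.std()
    return mask & (np.abs(depth_mm - mu) <= n_std * sigma + 1e-9)

# Synthetic frame: dark near background, a bright "backboard" patch at ~4 m,
# and one bright outlier pixel at 7 m that the std-dev pass should reject.
depth = np.full((10, 10), 1000.0)
gray = np.full((10, 10), 30.0)
depth[2:6, 2:7], gray[2:6, 2:7] = 4000.0, 200.0
depth[8, 8], gray[8, 8] = 7000.0, 200.0
m = backboard_mask(depth, gray)   # keeps the 4 m patch, drops the 7 m pixel
```

On a real frame you would run this per candidate blob rather than over the whole image, and tune the window and threshold to your lighting.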
Re: The kinect on the Robot
You guys talk of nice database solutions and full OS stacks. But if your autonomous mode has to depend on an OS that takes fourteen seconds to boot, then another 20-30 seconds for PostgreSQL, MySQL, or some other DB to initialize to a usable state, don't expect it to work in a timely manner. I deliver these types of systems as part of my projects at work, and quite honestly they never go on systems that must go from cold boot to full use in under several minutes. Even teams that are very organized sometimes get stuck with a match or two where they're rushed and have 30 seconds to set the robot down and get behind the wall.
My advice: hack the API and wrestle it into doing what you need it to do (point clouds). There's the real cred. Don't do it in the context of "sticking it to Microsoft"; that's tacky and unproductive. Do it because it's quite literally the best technological way to do what you want to do. Of course, this assumes you have a full field setup, so you can tune the point cloud into something that's remotely useful and non-noisy when facing a wall under sometimes-blinding light.
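For anyone who wants to take the point-cloud route: a depth image back-projects to 3-D points with the standard pinhole model. A minimal sketch — the intrinsics below are rough community values for the Kinect-v1 depth camera, assumed for illustration; calibrate your own unit:

```python
import numpy as np

# Rough Kinect-v1 depth-camera intrinsics (assumed values; calibrate your sensor)
FX = FY = 580.0
CX, CY = 320.0, 240.0

def depth_to_points(depth_m):
    """Back-project an (H, W) depth image in meters into an N x 3 point cloud,
    dropping zero-depth (invalid) pixels."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    pts = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```

A pixel at the optical center with 2 m of depth should map to (0, 0, 2), which makes a handy sanity check before pointing it at a real wall.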
Re: The kinect on the Robot
Why go through all this mess? Now, if I'm wrong, then so be it, but there are existing Arduino implementations that use the Kinect as a sensor. So how did they do it? What my electrical team has suggested is an Arduino USB host shield wired to the I2C port on the Digital Sidecar; the USB host can be programmed to decode the Kinect data into serial data. Look up the OpenKinect project — there is even a quadcopter programmed to detect obstacles using the Kinect.
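One gotcha if you go the OpenKinect route: the raw 11-bit values off the sensor aren't meters. A widely circulated empirical fit from the OpenKinect community converts them (approximate, and it varies a bit per unit), shown here in Python for clarity even though on an Arduino you'd do it in C:

```python
def raw_depth_to_meters(raw):
    """Approximate OpenKinect empirical fit for Kinect-v1 11-bit depth values.
    Raw values near 1084 and above mean "no return"; map them to infinity."""
    if raw >= 1084:
        return float("inf")
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

d = raw_depth_to_meters(500)   # roughly 0.56 m
```

Note the response is strongly non-linear: raw values climb quickly near the sensor's maximum range, which is why you convert before thresholding in meters.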
Re: The kinect on the Robot
The reason I chose Puppy Linux is that it can boot in 30 seconds and load its whole operating system into RAM.

We thought about using Windows, but it's big and clunky to boot, and we all know that if you simply cold-shutdown a Windows computer, it will want to check the disk for errors.

The Arduino was one of my first thoughts, but I wasn't sure it could handle it. I also saw a team using the BeagleBone with successful results.
Re: The kinect on the Robot
I think that sort of information is very valuable to the teams trying to do a vision system. A list of tests that are likely to make your code fail is often more valuable than anything else you can be given.

Personally, I would want to see the Kinect solution work outside on a cloudy day, and with fluorescent and other lights behind the DS wall shining into the camera. Those are the environmental factors that will be challenging when using the Kinect. We can't see IR, so we don't really know how much IR pollution there is until we use a special camera.

Personally, I'd love to see teams succeed with the Kinect, with cameras, and with IMUs. Just make auto exciting to watch.

Greg McKaskle
Re: The kinect on the Robot
If people would take a look at a scene containing retroreflective tape through the Kinect's IR feed, I think they would instantly understand why the Kinect is awesome. IR pollution can be an issue, but since we are looking for rectangles, it is easy to filter out other objects: most light sources aren't rectangles.

Another huge advantage over a single-camera system is being able to get a depth value for every pixel in the image. You can glean a LOT of information from this, and you need no reference object in the scene.
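The "most light sources aren't rectangles" filter can be as simple as a bounding-box fill ratio: a rectangular blob fills nearly all of its axis-aligned bounding box, while a round glare spot fills only about π/4 ≈ 0.79 of its box. A NumPy sketch — the 0.85 threshold is an assumption to tune, and a real pipeline would also handle rotated targets:

```python
import numpy as np

def is_rectangular(mask, fill_thresh=0.85):
    """True if the blob's pixel area fills at least fill_thresh of its bounding box."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return False
    bbox_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return xs.size / bbox_area >= fill_thresh

rect = np.zeros((9, 9), bool)
rect[2:6, 1:8] = True                                  # solid rectangle: ratio 1.0
diamond = np.zeros((9, 9), bool)
for r in range(9):
    for c in range(9):
        diamond[r, c] = abs(r - 4) + abs(c - 4) <= 4   # diamond blob: ratio ~0.5
```

Run it on each connected component that survives the brightness/depth thresholds, keeping only the rectangle-like ones.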
Re: The kinect on the Robot
http://www.roborealm.com/
Contact them with your team name, location, website, and number. The mentor of Team 443 will email you back about how they have already been using the Kinect ON their robot.
Re: The kinect on the Robot
So the first piece of advice I can give is: mount your camera as high as you can. Second: look at this frame from the field over and over: http://www.youtube.com/watch?feature...GQ95_I#t=31s

One question I have is how the half-inch smoked poly looks to a Kinect or a camera. The camera may pick up the supports of the baskets and the player-station framing.

Finally, even though you are using your own light source, the further out you get, the more impact ambient light will have. I would test your vision system from every distance you intend to use it at. If it can't be calibrated in 5 minutes, you'll probably have a tough time calibrating it at competition.
Re: The kinect on the Robot
Unless you did this....
http://www.roborealm.com/tutorial/FIRST/slide010.php
Re: The kinect on the Robot
If a team posts an easy enough way to do this, with detailed instructions for C++ (I'm good with computers, but not someone who could figure this stuff out on my own), I would be glad to give it a try, since I have a Kinect at my house and we can't get a new camera right now...
Copyright © Chief Delphi