Finding where you are in the field
Has anyone thought about doing robot localization this year to help with shooting?
It seems that the sensors we are allowed to use are limited to the ultrasonic range sensor, the camera/Kinect, and Class I laser sensors (does anyone have more information on Class I laser sensors?). Here are some problems with each:
Has anyone thought about solving these problems? Also, how are you guys thinking about programming the robot to localize itself? I'm thinking of a particle filter with the ultrasonic sensor, but I don't know about its effectiveness given interference from other teams' sensors. Ideas?
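A rough sketch of that particle filter idea in Java is below. Everything in it is assumed for illustration: the field dimensions, the noise levels, and the measurement model (comparing the ultrasonic reading to the distance each particle would expect to the wall it faces).
Code:
import java.util.Random;

// Minimal 2D particle filter: particles carry (x, y, heading), get moved by
// odometry with added noise, get weighted by how well the ultrasonic reading
// matches the wall distance each particle would predict, then get resampled.
public class ParticleFilter {
    static final int N = 500;             // particle count (tunable)
    static final double FIELD_W = 8.23;   // assumed field width, meters
    static final double FIELD_L = 16.46;  // assumed field length, meters

    double[] x = new double[N], y = new double[N], theta = new double[N];
    double[] w = new double[N];
    Random rng = new Random();

    public ParticleFilter() {
        for (int i = 0; i < N; i++) {     // start spread uniformly over the field
            x[i] = rng.nextDouble() * FIELD_L;
            y[i] = rng.nextDouble() * FIELD_W;
            theta[i] = rng.nextDouble() * 2 * Math.PI;
            w[i] = 1.0 / N;
        }
    }

    // Motion update: shift every particle by the odometry estimate plus noise.
    public void predict(double dist, double dTheta) {
        for (int i = 0; i < N; i++) {
            theta[i] += dTheta + rng.nextGaussian() * 0.05;
            x[i] += (dist + rng.nextGaussian() * 0.02) * Math.cos(theta[i]);
            y[i] += (dist + rng.nextGaussian() * 0.02) * Math.sin(theta[i]);
        }
    }

    // Measurement update: weight each particle by how close the range it
    // predicts (distance to the wall it faces) is to the sensor reading.
    public void correct(double measuredRange) {
        double sum = 0;
        for (int i = 0; i < N; i++) {
            double err = measuredRange - rangeToWall(x[i], y[i], theta[i]);
            w[i] = Math.exp(-err * err / (2 * 0.1 * 0.1)); // assume 10 cm sigma
            sum += w[i];
        }
        if (sum == 0) return;             // reading fit no particle; skip it
        for (int i = 0; i < N; i++) w[i] /= sum;
        resample();
    }

    // Ray-cast from (px, py) along heading t to the nearest field wall.
    double rangeToWall(double px, double py, double t) {
        double dx = Math.cos(t), dy = Math.sin(t), best = Double.MAX_VALUE;
        if (dx > 0) best = Math.min(best, (FIELD_L - px) / dx);
        if (dx < 0) best = Math.min(best, -px / dx);
        if (dy > 0) best = Math.min(best, (FIELD_W - py) / dy);
        if (dy < 0) best = Math.min(best, -py / dy);
        return best;
    }

    // Low-variance resampling: draw N new particles in proportion to weight.
    void resample() {
        double[] nx = new double[N], ny = new double[N], nt = new double[N];
        double step = 1.0 / N, u = rng.nextDouble() * step, c = w[0];
        int j = 0;
        for (int i = 0; i < N; i++) {
            while (u > c && j < N - 1) c += w[++j];
            nx[i] = x[j]; ny[i] = y[j]; nt[i] = theta[j];
            u += step;
        }
        x = nx; y = ny; theta = nt;
        for (int i = 0; i < N; i++) w[i] = 1.0 / N;
    }
}

Other robots' sonar pulses would show up as readings that fit nothing, so it may help to gate out measurements that disagree wildly with the current estimate before running the correction step.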
Re: Finding where you are in the field
I coded this using a combination of the ADXL345_I2C accelerometer in the KoP and ultrasonic rangefinders, but we're not going to use it because it's fairly useless to know where the robot is in this game. For shooting, all you need to know is the width/height of the rectangle on the backboard and the angle to it.
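For reference, a stripped-down sketch of that combination, assuming the 2012-era WPILibJ ADXL345_I2C and Ultrasonic classes. The module/channel numbers, the blend factor, and the geometry (sonar facing a wall along the accelerometer's X axis) are all made up for the example.
Code:
import edu.wpi.first.wpilibj.ADXL345_I2C;
import edu.wpi.first.wpilibj.Ultrasonic;

// Rough dead-reckoning sketch: double-integrate the accelerometer for a
// position estimate, then blend in the ultrasonic range (noisy but
// drift-free) to keep the integration from wandering off.
public class WallTracker {
    ADXL345_I2C accel = new ADXL345_I2C(1, ADXL345_I2C.DataFormat_Range.k2G);
    Ultrasonic sonar = new Ultrasonic(1, 2); // ping, echo channels (made up)
    double vel = 0;      // m/s along the axis facing the wall
    double pos = 3.0;    // meters from the wall; seed with a first reading

    public WallTracker() {
        sonar.setAutomaticMode(true);
    }

    // Call at a fixed rate, e.g. every 20 ms, with dt in seconds.
    public void update(double dt) {
        // Assumes the X axis points away from the wall the sonar faces.
        double a = accel.getAcceleration(ADXL345_I2C.Axes.kX) * 9.81; // G -> m/s^2
        vel += a * dt;
        pos += vel * dt;                    // drifts quickly on its own

        double range = sonar.getRangeInches() * 0.0254; // inches -> meters
        pos = 0.9 * pos + 0.1 * range;      // crude complementary blend
    }
}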
Re: Finding where you are in the field
Why would it be fairly useless to know where the robot is? You always know where the hoops are, so if you know where the robot is (and how it's oriented), you know where to shoot!
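Concretely: once you trust a pose estimate, aiming is one atan2 away. A minimal sketch, with field coordinates in meters, heading in radians, and the hoop position taken from whatever field model you use:
Code:
// Given a robot pose (x, y, heading) and a known hoop position, return how
// far to turn and how far away the hoop is.
public class Aim {
    public static double[] aimAt(double x, double y, double heading,
                                 double hoopX, double hoopY) {
        double dx = hoopX - x, dy = hoopY - y;
        double range = Math.sqrt(dx * dx + dy * dy);
        double bearing = Math.atan2(dy, dx);  // field-relative angle to hoop
        double turn = Math.atan2(Math.sin(bearing - heading),
                                 Math.cos(bearing - heading)); // wrap to [-pi, pi]
        return new double[] { turn, range };
    }
}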
Re: Finding where you are in the field
Personally, I would do this: we know the size of the rectangle to within a couple of millimeters, so we can calculate the relative distance to the basket from its apparent size and rotation in the image.
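For example, with a simple pinhole-camera model the distance falls straight out of the target's pixel width. The 2012 target rectangle is 24 in. wide; the image width and field of view below are assumed values, so substitute your own camera's calibration:
Code:
// Distance from apparent target size, using a pinhole-camera model.
public class RangeFromImage {
    static final double TARGET_WIDTH_M = 24 * 0.0254;  // 24 in. in meters
    static final double IMAGE_WIDTH_PX = 640;          // assumed resolution
    static final double HFOV_RAD = Math.toRadians(47); // assumed camera FOV

    public static double distance(double targetPixelWidth) {
        // The full image spans a physical width of 2 * d * tan(HFOV/2) at
        // distance d, so the target's share of the image pins down d.
        return TARGET_WIDTH_M * IMAGE_WIDTH_PX
                / (2 * targetPixelWidth * Math.tan(HFOV_RAD / 2));
    }
}

Using the target's height instead of its width makes this less sensitive to viewing the backboard at an angle, since horizontal rotation shrinks the apparent width but not the height.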
Re: Finding where you are in the field
Quote:
Second of all... oh come on, where's the fun in using their code! ;p (even though, if it works, I probably will)
Re: Finding where you are in the field
Stick this in whichever class handles your camera tracking:
Code:
import edu.wpi.first.wpilibj.image.*;

If you don't use Java, I can't help you.
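To flesh that out, here is a minimal sketch of one tracking pass, assuming the 2012 WPILibJ camera and image classes (AxisCamera, ColorImage, BinaryImage, ParticleAnalysisReport); the RGB threshold numbers are placeholders to tune for your own lighting:
Code:
import edu.wpi.first.wpilibj.camera.AxisCamera;
import edu.wpi.first.wpilibj.camera.AxisCameraException;
import edu.wpi.first.wpilibj.image.*;

// One pass of rectangle tracking: grab a frame, threshold for the bright
// color of the lit retroreflective target, clean up the mask, and report
// the largest particles' bounding boxes.
public class RectangleTracker {
    AxisCamera camera = AxisCamera.getInstance();

    public void processFrame() {
        try {
            ColorImage image = camera.getImage();
            // Keep pixels in an assumed green band (placeholder values).
            BinaryImage mask = image.thresholdRGB(0, 100, 150, 255, 0, 120);
            BinaryImage cleaned = mask.removeSmallObjects(false, 2);
            BinaryImage hull = cleaned.convexHull(false);

            ParticleAnalysisReport[] reports =
                    hull.getOrderedParticleAnalysisReports(4);
            for (int i = 0; i < reports.length; i++) {
                ParticleAnalysisReport r = reports[i];
                System.out.println("target at (" + r.center_mass_x + ", "
                        + r.center_mass_y + ") size " + r.boundingRectWidth
                        + "x" + r.boundingRectHeight);
            }

            // NIVision images wrap native memory; free them explicitly.
            hull.free();
            cleaned.free();
            mask.free();
            image.free();
        } catch (AxisCameraException e) {
            e.printStackTrace();
        } catch (NIVisionException e) {
            e.printStackTrace();
        }
    }
}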
Re: Finding where you are in the field
Quote:
Looking at the documentation... there's very little of it for that class.
Re: Finding where you are in the field
Check out this thread.
Re: Finding where you are in the field
I've been pondering: why would stereo vision be difficult? Is it the difficulty of connecting two cameras to the D-Link, or the difficulty of actually using the two cameras together?
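For what it's worth, the geometry side of stereo is the easy part: with two identical, parallel cameras a known baseline apart, depth comes straight from the pixel disparity. A sketch with assumed calibration numbers:
Code:
// Depth from stereo disparity: a feature at pixel columns xLeft and xRight
// in the two images sits at depth Z = f * B / (xLeft - xRight).
public class StereoDepth {
    static final double FOCAL_PX = 700;    // focal length in pixels (assumed)
    static final double BASELINE_M = 0.20; // camera separation (assumed)

    public static double depth(double xLeft, double xRight) {
        double disparity = xLeft - xRight;  // pixels; must be > 0
        return FOCAL_PX * BASELINE_M / disparity;
    }
}

The hard part is the plumbing: streaming two cameras through the D-Link, and keeping the pair of frames synchronized and calibrated.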
Re: Finding where you are in the field
Vision tracking using Java this year will be more difficult than in the other languages, unfortunately. Unless there is an update released soon, not all of the NIVision capabilities useful for rectangle tracking have been wrapped by WPILibJ (whereas in C++, they are all available).
It is unfortunate that despite the supposed "equal" capabilities of each officially supported language, in 2012 there are very unequal capabilities when it comes to vision processing. We are looking into offboard (Driver Station laptop or a separate single-board computer on the robot) solutions to vision processing, but these capabilities are very poorly documented at the moment. We have written/modified JavaCV/OpenCV code to reliably track the goals, and are now playing around to find a way to get that code to send its "answers" back to the cRIO, using either NetworkTables or a separate socket interface.
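As a sketch of the socket option (laptop side), the result can be pushed to the robot as a small UDP datagram. The address, port, and message format here are all made up for the example; a real cRIO sits at 10.TE.AM.2 for your team number:
Code:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// After OpenCV/JavaCV finds the target, push the answer to the robot as a
// tiny UDP datagram that the cRIO can parse.
public class VisionSender {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        InetAddress robot = InetAddress.getByName("10.0.0.2"); // your cRIO IP

        double angleDeg = 3.2, distanceM = 4.5; // stand-ins for real results
        byte[] msg = (angleDeg + "," + distanceM).getBytes();
        socket.send(new DatagramPacket(msg, msg.length, robot, 1130));
        socket.close();
    }
}

The robot side just blocks on a DatagramSocket bound to the same port and parses the two numbers back out.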
Re: Finding where you are in the field
Quote:
I'm trying very hard to figure out a way around this issue, because I have absolutely no idea how to do vision tracking in C++ this year.
Re: Finding where you are in the field
Quote:
You'll know that the barrier is in the center of the field, so you can start a new measurement from that reference!