Finding where you are in the field
Has anyone thought about doing robot localization this year to help with shooting?
It seems that the sensors we are allowed to use are limited to the ultrasonic range sensor, the camera/Kinect, and Class I laser sensors (does anyone have more information on Class I laser sensors?). Here are some problems with each:
Has anyone thought about solving these problems? Also, how are you all planning to program the robot to localize itself? I'm thinking of a particle filter with the ultrasonic sensor, but I don't know how effective it would be with interference from other teams' sensors. Ideas?
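To make the idea concrete, here is a rough sketch of the kind of filter I mean. Everything in it is made up (field length, noise values, a single wall as the only landmark); a real version would need the full 2-D field and some way to reject pings from other robots:
Code:
import java.util.Random;

/** Toy 1-D particle filter: estimate position along the field from a noisy
 *  ultrasonic range to the far wall. All constants are guesses. */
public class ParticleFilter1D {
    static final int N = 200;           // number of particles
    static final double FIELD = 16.46;  // field length in meters, approximately
    double[] x = new double[N];         // particle positions
    double[] w = new double[N];         // particle weights
    Random rng = new Random();

    ParticleFilter1D() {
        // Uniform prior: we could be anywhere on the field.
        for (int i = 0; i < N; i++) x[i] = rng.nextDouble() * FIELD;
    }

    /** Predict: shift every particle by the odometry delta plus motion noise. */
    void predict(double deltaMeters) {
        for (int i = 0; i < N; i++) x[i] += deltaMeters + rng.nextGaussian() * 0.05;
    }

    /** Update: weight particles by how well they explain the ultrasonic reading,
     *  then resample (simple roulette-wheel; low-variance resampling is better). */
    void update(double rangeMeters) {
        double sigma = 0.10; // assumed sensor noise in meters
        double sum = 0;
        for (int i = 0; i < N; i++) {
            double err = rangeMeters - (FIELD - x[i]); // expected range from particle i
            w[i] = Math.exp(-err * err / (2 * sigma * sigma));
            sum += w[i];
        }
        double[] nx = new double[N];
        for (int i = 0; i < N; i++) {
            double r = rng.nextDouble() * sum, c = 0;
            for (int j = 0; j < N; j++) { c += w[j]; if (c >= r) { nx[i] = x[j]; break; } }
        }
        x = nx;
    }

    /** Estimate: mean of the particle cloud. */
    double estimate() {
        double s = 0;
        for (int i = 0; i < N; i++) s += x[i];
        return s / N;
    }
}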
Re: Finding where you are in the field
I coded this using a combination of the ADXL345_I2C in the KoP and ultrasonic rangefinders, but we're not going to use it, because it's fairly useless to know where the robot is in this game. For shooting, all you need to know is the width/height of the rectangle on the backboard and the angle to it.
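For the "angle to it" part, a minimal sketch (the image width and FOV here are assumptions for an Axis camera; check your camera's actual specs):
Code:
public class TargetBearing {
    /** Bearing to the target from its horizontal pixel position. */
    static double bearingDegrees(double targetCenterX) {
        final double IMAGE_WIDTH = 640.0; // pixels -- assumption
        final double FOV_DEG = 47.0;      // horizontal field of view -- assumption
        double offset = targetCenterX - IMAGE_WIDTH / 2.0; // pixels right of center
        return offset / IMAGE_WIDTH * FOV_DEG; // linear approximation, fine near center
    }
}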
Re: Finding where you are in the field
Why would it be fairly useless to know where the robot is? You always know where the hoops are, so if you know where the robot is (and how it's oriented), you know where to shoot!
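To illustrate, the aiming math is just one atan2 (the pose convention and hoop coordinates here are made up):
Code:
public class AimFromPose {
    /** How far the robot/turret must turn, given its pose and the hoop position. */
    static double turnToHoopDegrees(double x, double y, double headingDeg,
                                    double hoopX, double hoopY) {
        double bearing = Math.toDegrees(Math.atan2(hoopY - y, hoopX - x));
        double error = bearing - headingDeg;
        while (error >= 180) error -= 360; // normalize to [-180, 180)
        while (error < -180) error += 360;
        return error;
    }
}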
Re: Finding where you are in the field
Personally, I would do this: we know the size of the rectangle to within a couple of millimeters. We can calculate the relative distance to the basket from the rectangle's apparent size and rotation in the image.
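As a rough sketch of that calculation, using a pinhole-camera model (the 24 in target width should be checked against the manual, and the image width/FOV numbers are guesses):
Code:
public class TargetDistance {
    /** Distance from the target's apparent width in the image. */
    static double distanceMeters(double targetWidthPx) {
        final double TARGET_WIDTH_M = 0.61;          // 24 in rectangle -- verify
        final double IMAGE_WIDTH_PX = 640.0;         // assumption
        final double FOV_RAD = Math.toRadians(47.0); // assumption
        // Apparent width shrinks linearly with distance:
        // targetWidthPx / IMAGE_WIDTH_PX = TARGET_WIDTH_M / (2 * d * tan(FOV / 2))
        return TARGET_WIDTH_M * IMAGE_WIDTH_PX
                / (2.0 * targetWidthPx * Math.tan(FOV_RAD / 2.0));
    }
}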
Re: Finding where you are in the field
Quote:
Second of all... oh come on, where is the fun in using their code! ;p (although if it works... I probably will)
Re: Finding where you are in the field
Stick this in whichever class does your camera tracking:
Code:
import edu.wpi.first.wpilibj.image.*;
If you don't use Java, I can't help you.
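For context, here is roughly how that package gets used — a sketch only: the HSL threshold values are guesses you would have to tune for your lighting, and the class names are from the 2012-era WPILibJ (double-check against the javadocs):
Code:
import edu.wpi.first.wpilibj.camera.AxisCamera;
import edu.wpi.first.wpilibj.image.*;

public class TargetTracker {
    /** Grab a frame, threshold for the lit retroreflective tape, and list
     *  the candidate particles. */
    public void findTargets() {
        AxisCamera camera = AxisCamera.getInstance();
        try {
            ColorImage image = camera.getImage();
            BinaryImage thresholded = image.thresholdHSL(60, 120, 90, 255, 20, 255);
            ParticleAnalysisReport[] reports =
                    thresholded.getOrderedParticleAnalysisReports();
            for (int i = 0; i < reports.length; i++) {
                System.out.println("particle at x=" + reports[i].center_mass_x
                        + " y=" + reports[i].center_mass_y);
            }
            thresholded.free(); // NIVision images must be freed explicitly
            image.free();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}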
Re: Finding where you are in the field
Quote:
Looking at the documentation... there is very little of it for that class.
Re: Finding where you are in the field
Check out this thread.
Re: Finding where you are in the field
I've been pondering: why would stereo vision be difficult? Is it the difficulty of connecting two cameras to the D-Link, or the difficulty of actually using the two cameras together?
Re: Finding where you are in the field
Vision tracking using Java this year will be more difficult than in the other languages, unfortunately. Unless there is an update released soon, not all of the NIVision capabilities useful for rectangle tracking have been wrapped by WPILibJ (whereas in C++, they are all available).
It is unfortunate that despite the supposed "equal" capabilities of each officially supported language, in 2012 there are very unequal capabilities when it comes to vision processing. We are looking into offboard (Driver Station laptop or separate single-board computer on the robot) solutions to vision processing, but these capabilities are very poorly documented at the moment. We have written/modified JavaCV/OpenCV code to reliably track the goals, and are now playing around to find a way to get that code to send its "answers" back to the cRIO, using either NetworkTables or a separate socket interface.
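For anyone trying the same thing, the separate-socket route can be as simple as this on the laptop side (the IP and port are placeholders; the cRIO end would read the two doubles back in the same order). A real version would keep the connection open rather than reconnecting for every frame:
Code:
import java.io.DataOutputStream;
import java.net.Socket;

public class VisionSender {
    /** Push one vision "answer" (distance and bearing) to the robot. */
    public static void send(double distance, double bearing) throws Exception {
        Socket sock = new Socket("10.0.0.2", 1180); // placeholder address/port
        DataOutputStream out = new DataOutputStream(sock.getOutputStream());
        out.writeDouble(distance);
        out.writeDouble(bearing);
        out.flush();
        sock.close();
    }
}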
Re: Finding where you are in the field
Quote:
I'm trying very hard to figure out a way around this issue, because I have absolutely no idea how to do vision tracking in C++ this year.
Re: Finding where you are in the field
Quote:
You'll know that the barrier is in the center of the field, so you can start a new measurement with that as your reference!
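In code, the idea is just a reset of your dead-reckoned coordinate at the landmark (crossedBarrier() and the constants here are hypothetical):
Code:
public class OdometryCorrector {
    static final double FIELD_LENGTH = 16.46; // meters, approximately
    double estimatedY; // dead-reckoned position along the field

    /** Snap the estimate to a known coordinate when we cross the barrier. */
    void correctAtBarrier() {
        if (crossedBarrier()) {
            estimatedY = FIELD_LENGTH / 2.0; // the barrier sits at mid-field
        }
    }

    /** Hypothetical detector, e.g. a spike on the accelerometer. */
    boolean crossedBarrier() { return false; }
}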
Re: Finding where you are in the field
Quote:
Basically, if you look at each of the sensors available to us, they are all useful for localization, but none of them is perfect:
* Gyros/accelerometers: Very fast response and good accuracy initially, but by integrating accelerations and velocities over time, they drift.
* Encoders: Very precise distance/speed measurements... as long as your wheels don't slip and you know the precise diameter of your wheels.
* Vision system: Seeing a known "landmark" like the goal tells you a great deal about your absolute position on the field, but you can't always see the goal, and you will sometimes get false alarms depending on the environment.
In robotics, we often find robots with this arrangement of sensors. By fusing their outputs together, you can get a system that compensates for the individual failings of each sensor. For example, you might use your gyro for most of your heading measurements, but if you get a good shot of the goal, you "reset" your gyro to reduce/eliminate drift. Common fusion techniques include Extended and Unscented Kalman Filters.
Unfortunately, getting these systems working is a Masters/PhD-level challenge, and it would be difficult for anyone to get working well in 6 weeks (especially since you won't have a testable robot for much of that time). That said, I am hoping that at least a few ultra-high-end teams take on this challenge (I'm looking at you, 254).
Jared
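P.S. To make the gyro/vision "reset" idea concrete, here is a toy sketch (the names are hypothetical, and it ignores angle wrap-around):
Code:
public class HeadingEstimator {
    private double headingDeg = 0.0;

    /** Called every loop with the gyro's heading change: fast and smooth, but drifts. */
    public void onGyro(double deltaDeg) {
        headingDeg += deltaDeg;
    }

    /** Called when vision produces an absolute heading from a goal sighting.
     *  A high-confidence sighting pulls the estimate harder (complementary blend). */
    public void onVision(double absoluteDeg, double confidence) {
        headingDeg += confidence * (absoluteDeg - headingDeg);
    }

    public double get() { return headingDeg; }
}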
Re: Finding where you are in the field
Quote:
Quote:
Even though it doesn't sound difficult, it is very tough to do in practice.
Re: Finding where you are in the field
I think the people here who believe they can rely solely on the vision target, which sits significantly behind the basket, do not understand what happens when you move off to either side rather than shooting straight on. THAT is why field awareness will be useful, unless you plan on shooting only from the key.
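A quick back-of-the-envelope shows why (the offset number is illustrative; check the actual hoop/backboard geometry in the manual):
Code:
public class ParallaxCheck {
    public static void main(String[] args) {
        // If the target plane sits behind the hoop center, then from an angle
        // theta off the centerline the target center appears shifted sideways
        // relative to the hoop by about offsetBehind * tan(theta).
        double offsetBehind = 0.38; // meters behind the hoop -- illustrative
        double thetaDeg = 30.0;     // shooting 30 degrees off-axis
        double shift = offsetBehind * Math.tan(Math.toRadians(thetaDeg));
        System.out.println("apparent shift ~ " + shift + " m"); // about 0.22 m
    }
}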