Finding where you are in the field

Has anyone thought about doing robot localization this year to help with shooting?

It seems that the sensors we are allowed to use are limited to the ultrasonic range sensor, the camera/Kinect, and Class I laser sensors. (Does anyone have more information on Class I laser sensors?)

Here are some problems with each:

  • Ultrasonic sensor: interference from other teams’ sensors
  • Camera/Kinect: stereo vision is tough to do, and setting up a Kinect on the robot is equally difficult.
  • Laser sensor: Class I may also be outlawed, as it’s not supposed to blind the drivers.

Has anyone thought about solving these problems?

Also, how are you guys thinking about programming the robot to localize itself? I’m thinking of a particle filter with the ultrasonic sensor, but I don’t know how effective it would be given other teams’ sensors.
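For concreteness, here is a minimal sketch of what I mean, assuming a 1-D problem (position along the 54 ft length of the field, measured by an ultrasonic pointed at the wall behind us). The particle count and the noise values are numbers I made up; this is plain Java, nothing WPILib-specific.

import java.util.Random;

/** Minimal 1-D particle filter sketch: estimate distance from the wall
 *  behind the robot using odometry plus noisy ultrasonic range readings.
 *  Particle count, noise sigmas, and field length are illustrative guesses. */
public class ParticleFilter1D {
    static final int N = 200;              // number of particles
    static final double FIELD_FT = 54.0;   // field length, feet
    double[] x = new double[N];            // particle positions (ft from the wall)
    double[] w = new double[N];            // particle weights
    Random rng = new Random();

    public ParticleFilter1D() {
        for (int i = 0; i < N; i++) { x[i] = rng.nextDouble() * FIELD_FT; w[i] = 1.0 / N; }
    }

    /** Motion update: shift every particle by the odometry delta plus noise. */
    public void predict(double deltaFt) {
        for (int i = 0; i < N; i++) x[i] += deltaFt + rng.nextGaussian() * 0.2;
    }

    /** Measurement update: weight each particle by how well the measured
     *  ultrasonic range to the wall matches the range that particle expects. */
    public void correct(double measuredRangeFt) {
        double sum = 0;
        for (int i = 0; i < N; i++) {
            double err = measuredRangeFt - x[i];            // expected range is just x[i]
            w[i] = Math.exp(-err * err / (2 * 0.5 * 0.5));  // Gaussian likelihood, sigma = 0.5 ft
            sum += w[i];
        }
        for (int i = 0; i < N; i++) w[i] /= sum;
        resample();
    }

    /** Low-variance resampling so high-weight particles survive. */
    private void resample() {
        double[] newX = new double[N];
        double r = rng.nextDouble() / N, c = w[0];
        int i = 0;
        for (int m = 0; m < N; m++) {
            double u = r + (double) m / N;
            while (u > c && i < N - 1) { i++; c += w[i]; }
            newX[m] = x[i];
        }
        x = newX;
        for (int j = 0; j < N; j++) w[j] = 1.0 / N;
    }

    /** Current position estimate: mean of the (equally weighted) particles. */
    public double estimate() {
        double e = 0;
        for (int i = 0; i < N; i++) e += x[i] / N;
        return e;
    }
}

The obvious weakness is exactly the interference problem: a reading corrupted by another team’s ultrasonic gets treated as truth, so you would want to reject ranges that disagree wildly with the prediction before running the correction step.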

Ideas?

http://www.frc.ri.cmu.edu/projects/colony/pdf_docs/gyropaper.pdf

I coded this using a combination of the ADXL345_I2C in the KoP and Ultrasonic rangefinders, but we’re not going to use it because it’s fairly useless to know where the robot is in this game. For shooting, all you need to know is the width/height of the rectangle on the backboard and the angle to it.
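For anyone curious what that combination looks like, here is a stripped-down sketch of the idea (not our actual code): double-integrate the KoP accelerometer for short-term motion, and snap back to the ultrasonic range whenever it looks sane. The channel numbers and constructor arguments are placeholders, so check the WPILibJ javadocs for your version.

import edu.wpi.first.wpilibj.ADXL345_I2C;
import edu.wpi.first.wpilibj.Ultrasonic;

/** Sketch only: dead-reckon from the KoP accelerometer and correct with an
 *  ultrasonic rangefinder. Module/channel numbers are placeholders. */
public class SimpleLocalizer {
    ADXL345_I2C accel = new ADXL345_I2C(1, ADXL345_I2C.DataFormat_Range.k2G);
    Ultrasonic sonar = new Ultrasonic(1, 2);   // ping, echo DIO channels (placeholders)

    double velocity = 0;   // ft/s along one axis
    double position = 0;   // ft from the wall behind us

    public SimpleLocalizer() {
        sonar.setAutomaticMode(true);          // keep the rangefinder pinging
    }

    public void update(double dtSeconds) {
        // Double-integrating acceleration drifts badly within seconds; that is
        // exactly why the later posts talk about fusing it with other sensors.
        double aFtPerSec2 = accel.getAcceleration(ADXL345_I2C.Axes.kX) * 32.17;
        velocity += aFtPerSec2 * dtSeconds;
        position += velocity * dtSeconds;

        // When the ultrasonic reading is plausible, trust it as the absolute fix.
        double rangeFt = sonar.getRangeInches() / 12.0;
        if (rangeFt > 0.5 && rangeFt < 54.0) {
            position = rangeFt;
        }
    }

    public double getPosition() { return position; }
}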

Why would it be fairly useless to know where the robot is? You always know where the hoops are, so if you know where the robot is (and how it’s oriented), you know where to shoot!

Personally, I would do this: we know the size of the rectangle to within a couple of millimeters, so we can calculate the relative distance to the basket from its apparent size and rotation angle in the image.
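Something along these lines, using a simple pinhole-camera model. The target width, image resolution, and field of view below are assumptions; measure your own camera’s FOV and check the target dimensions in the manual.

/** Rough distance-from-apparent-size sketch. All constants are assumptions. */
public class TargetDistance {
    static final double TARGET_WIDTH_FT = 2.0;   // outer width of the reflective rectangle (check the manual)
    static final int IMAGE_WIDTH_PX = 320;       // camera resolution in use
    static final double HORIZ_FOV_DEG = 47.0;    // approximate horizontal field of view

    /** Distance in feet, given the rectangle's width in pixels (head-on view). */
    public static double distanceFeet(double targetWidthPx) {
        // Angle subtended by the target, assuming pixels map linearly to angle.
        double thetaRad = Math.toRadians(HORIZ_FOV_DEG * targetWidthPx / IMAGE_WIDTH_PX);
        // tan(theta / 2) = (half the target width) / distance
        return (TARGET_WIDTH_FT / 2.0) / Math.tan(thetaRad / 2.0);
    }
}

When you move off to the side, the rectangle’s apparent width shrinks while its height does not, which is one way to estimate the rotation angle as well.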

Using the camera and the code supplied by National Instruments to find position and angle is much easier and more accurate than using the gyro (off by up to 7° in either direction) and ultrasonics, which can be interfered with by other ultrasonics. Also, this.

First of all, where is this code?

Second of all… oh come on, where is the fun in using their code! ;p (even though, if it works… I probably will)

Stick this in whichever class you have camera tracking:

import edu.wpi.first.wpilibj.image.*;

NI’s code is in NIVision.java
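To give an idea of what that import buys you, here is a bare-bones grab/threshold/analyze pass using the WPILibJ camera and image classes. The HSL threshold numbers are made up and will need tuning for your ring light and lighting.

import edu.wpi.first.wpilibj.camera.AxisCamera;
import edu.wpi.first.wpilibj.camera.AxisCameraException;
import edu.wpi.first.wpilibj.image.BinaryImage;
import edu.wpi.first.wpilibj.image.ColorImage;
import edu.wpi.first.wpilibj.image.NIVisionException;
import edu.wpi.first.wpilibj.image.ParticleAnalysisReport;

/** Sketch: grab a frame, threshold for the reflective tape, report particles. */
public class RectangleTracker {
    AxisCamera camera = AxisCamera.getInstance();

    public void processFrame() {
        try {
            ColorImage image = camera.getImage();
            // Keep only pixels in a hue/saturation/luminance band (placeholder values).
            BinaryImage thresholded = image.thresholdHSL(80, 140, 50, 255, 100, 255);
            // The largest few particles should be the backboard rectangles.
            ParticleAnalysisReport[] reports = thresholded.getOrderedParticleAnalysisReports(4);
            for (int i = 0; i < reports.length; i++) {
                ParticleAnalysisReport r = reports[i];
                System.out.println("particle " + i + ": center x = " + r.center_mass_x
                        + ", width = " + r.boundingRectWidth);
            }
            thresholded.free();   // these wrap native image memory, so free them
            image.free();
        } catch (AxisCameraException e) {
            e.printStackTrace();
        } catch (NIVisionException e) {
            e.printStackTrace();
        }
    }
}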

If you don’t use Java, I can’t help you.

You’re not thinking what I’m thinking. :)

I am using Java.

Looking at the documentation… there is very little of it for that class…

Check out this thread.

I’ve been pondering: why would stereo vision be difficult? Is it because of the difficulty of connecting two cameras to the D-Link, or the difficulty of simply using the two cameras?

Vision tracking using Java this year will be more difficult than in the other languages, unfortunately. Unless there is an update released soon, not all of the NIVision capabilities useful for rectangle tracking have been wrapped by WPILibJ (whereas in C++, they are all available).

It is unfortunate that despite the supposed “equal” capabilities of each officially supported language, in 2012 there are very unequal capabilities when it comes to vision processing.

We are looking into offboard (Driver Station laptop or a separate single-board computer on the robot) solutions to vision processing, but these capabilities are very poorly documented at the moment. We have written/modified JavaCV/OpenCV code to reliably track the goals, and are now playing around to find a way to get that code to send its “answers” back to the cRIO, using either NetworkTables or a separate socket interface.
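For the socket route, the offboard side can be as simple as a small UDP sender like the sketch below. The address, port, and message format here are all assumptions; the cRIO end would run a matching listener and parse the string, and you would need to pick a port the rules allow.

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

/** Sketch of an offboard-to-cRIO result sender over UDP. Address, port, and
 *  message format are placeholders. */
public class VisionSender {
    public static void send(double distanceFt, double angleDeg) throws IOException {
        String msg = distanceFt + "," + angleDeg;
        byte[] data = msg.getBytes();
        DatagramSocket socket = new DatagramSocket();
        // The cRIO is normally at 10.TE.AM.2; substitute your team number here.
        InetAddress crio = InetAddress.getByName("10.0.0.2");
        socket.send(new DatagramPacket(data, data.length, crio, 1180));
        socket.close();
    }
}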

I’ve been thinking about setting up a localization/field coordinate system for this game. The only problem I can see is the bump causing some of the wheels to be off the ground and the encoders giving inaccurate measurements.

I’m trying very hard to figure out a way around this issue, because I have absolutely no idea how to do vision tracking in C++ this year.

Could you use an accelerometer mounted along the same axis as your wheels to reset the encoder reference when the robot passes over the barrier?
You know the barrier is at the center of the field, so you can start a new measurement from that reference!
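Something like this sketch, maybe: watch the accelerometer for the jolt of the barrier and re-zero the drive odometry at the barrier’s known field coordinate. The spike threshold and the barrier position are guesses to measure and check against the field drawings, and the channel numbers are placeholders.

import edu.wpi.first.wpilibj.ADXL345_I2C;
import edu.wpi.first.wpilibj.Encoder;

/** Sketch: detect the barrier bump from an acceleration spike and reset the
 *  odometry to the barrier's known location. Thresholds/channels are guesses. */
public class BarrierReset {
    static final double SPIKE_G = 1.5;                // jolt threshold, in g (tune on the real bump)
    static final double BARRIER_POSITION_FT = 27.0;   // barrier at mid-field of a 54 ft field

    ADXL345_I2C accel = new ADXL345_I2C(1, ADXL345_I2C.DataFormat_Range.k4G);
    Encoder driveEncoder = new Encoder(1, 2);         // DIO channels (placeholders)
    double fieldPositionFt = 0;

    public void update(double encoderDeltaFt) {
        fieldPositionFt += encoderDeltaFt;
        // Use whichever axis shows the bump most clearly on your robot.
        double a = accel.getAcceleration(ADXL345_I2C.Axes.kX);
        if (Math.abs(a) > SPIKE_G) {
            fieldPositionFt = BARRIER_POSITION_FT;    // snap to the barrier's coordinate
            driveEncoder.reset();                     // restart the encoder reference
        }
    }
}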

In the “real world” of mobile robotics, you would typically deal with this problem by doing some sort of sensor fusion.

Basically, if you look at each of the sensors available to us, they are all useful for localization but none of them is perfect:

  • Gyros/Accelerometers: Very fast response and good accuracy initially, but by integrating accelerations and velocities over time, they drift.

  • Encoders: Very precise distance/speed measurements…as long as your wheels don’t slip and you know the precise diameter of your wheels.

  • Vision system: Seeing a known “landmark” like the goal tells you a great deal about your absolute position on the field, but you can’t always see the goal, and you will sometimes get false alarms depending on the environment.

In robotics, we often find robots with this arrangement of sensors. By fusing their outputs together, you can get a system that compensates for the individual failings of each sensor. For example, you might use your gyro for most of your heading measurements, but if you get a good shot of the goal, you “reset” your gyro to reduce/eliminate drift. Common fusion techniques include Extended and Unscented Kalman Filters. Unfortunately, getting these systems working is a Masters/PhD level challenge, and would be difficult to get working well in 6 weeks for anyone (especially since you won’t have a testable robot for much of that time).
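A much more modest version of that “reset on a good landmark” idea, well short of a Kalman filter, could look like the sketch below: trust the gyro between vision fixes, then blend toward the camera’s heading when you get a confident shot of the goal. The channel number and the blend factor are placeholders.

import edu.wpi.first.wpilibj.Gyro;

/** Sketch: gyro dead-reckoning with an occasional vision correction. */
public class HeadingEstimator {
    Gyro gyro = new Gyro(1);        // analog channel (placeholder)
    double headingDeg = 0;          // fused heading estimate
    double lastGyroDeg = 0;

    /** Call every loop: accumulate the gyro's change since last time. */
    public void update() {
        double g = gyro.getAngle();
        headingDeg += g - lastGyroDeg;
        lastGyroDeg = g;
    }

    /** Call when vision reports an absolute heading, with confidence in [0, 1]. */
    public void visionFix(double visionHeadingDeg, double confidence) {
        // Complementary blend: the better the fix, the harder we snap to it,
        // which caps how far the gyro drift can ever get.
        headingDeg = (1 - confidence) * headingDeg + confidence * visionHeadingDeg;
    }

    public double getHeading() { return headingDeg; }
}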

That said, I am hoping that at least a few ultra high end teams take on this challenge (I’m looking at you, 254).

Jared

Stereo vision? Distortions? This one seems somewhat difficult to integrate.

I’ve been pondering: why would stereo vision be difficult? Is it because of the difficulty of connecting two cameras to the D-Link, or the difficulty of simply using the two cameras?

Stereo vision has a couple of steps. First, you have to position the two cameras just right and get a very accurate measurement of the distance between them. Second, you need a number of entirely non-trivial algorithms to locate “things” in the two images and compare their locations between the images to get the distance to those objects.

Even though it doesn’t sound difficult, it is very tough to do in practice.
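Once the calibration and matching are done, the ranging step itself is the easy part: depth = f * B / d, where f is the focal length in pixels, B is the baseline between the cameras, and d is the disparity (how far the matched feature shifts between the two images). The numbers below are just an example.

/** Stereo ranging from disparity; the hard work is producing the disparity. */
public class StereoDepth {
    public static double depthMeters(double focalLengthPx, double baselineM, double disparityPx) {
        return focalLengthPx * baselineM / disparityPx;
    }

    public static void main(String[] args) {
        // e.g. f = 700 px, cameras 20 cm apart, feature shifted 35 px between images
        System.out.println(depthMeters(700, 0.20, 35));   // prints 4.0 (meters)
    }
}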

I think the group of people here who believe they can rely solely on the vision target, which sits significantly behind the basket, do not understand what happens when you move off to either side rather than shooting straight on. THAT is why field awareness will be useful, unless you plan on shooting only from the key.