# Our Algorithm/Idea for Target Tracking

So, let me preface this by saying that I think this sounds nearly ridiculous, but for some reason it makes logical sense if you sit and think about it. If you have any criticisms/ideas, please let us know, because it's a very uncommon idea.

So, the first thing we decided was that we had to somehow shoot the ball into the hoop. That part is simple projectile kinematics, and everything can be calculated if you treat each half of the field as a Cartesian plane: if you know your position on the field, you can figure out where to aim. We are going to cross the barrier by hitting it with our wheels, so using a z-axis accelerometer and a boolean we will keep track of which side of the field we are on.

If the robot is facing the hoop on the starting side of the field, we let the point (0, 0) be the left corner on the hoop wall.
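As a rough sketch of that aiming math in Java (field units in feet; the hoop x-position, the hoop height above the shooter, and a fixed launch angle are assumed numbers for illustration, not measured values):

```java
// Hedged sketch of aiming from a known (x, y) in the frame described above:
// (0, 0) is the left corner of the hoop wall, y increases away from the wall.
class Aiming {
    static final double G = 32.174;           // gravity, ft/s^2
    static final double HOOP_X = 13.5;        // assumed: hoop centered on a 27 ft wall
    static final double HOOP_HEIGHT = 8.2;    // assumed hoop height above the shooter, ft

    /** Turret heading to the hoop, in degrees, measured from straight-at-the-wall. */
    static double turretAngleDeg(double x, double y) {
        return Math.toDegrees(Math.atan2(HOOP_X - x, y));
    }

    /** Horizontal distance from the robot to the hoop. */
    static double range(double x, double y) {
        return Math.hypot(HOOP_X - x, y);
    }

    /** Launch speed (ft/s) needed at a fixed launch angle to reach hoop height at that range. */
    static double launchSpeed(double range, double launchAngleDeg) {
        double th = Math.toRadians(launchAngleDeg);
        // From y = d*tan(th) - g*d^2 / (2 v^2 cos^2(th)), solved for v at y = HOOP_HEIGHT.
        double denom = 2 * Math.cos(th) * Math.cos(th) * (range * Math.tan(th) - HOOP_HEIGHT);
        return Math.sqrt(G * range * range / denom); // only valid if range*tan(th) > HOOP_HEIGHT
    }
}
```

Note the domain condition in `launchSpeed`: below a minimum range (or too shallow a launch angle), no speed can reach the hoop and the formula breaks down.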

Sensors we use:

- 6 ultrasonics: 1 at the midpoint of each side of the robot, plus 2 more on the back corners facing backwards.
- 3 encoders: 2 on the driving Toughboxes, and 1 on the turret wheel that angles the shooter left and right.
- 1 gyro and 1 accelerometer.
- 3 touch sensors for detecting the number of balls in our clip.

Getting the initial position:

1. Drive backwards and use the difference between the back-left and back-right distance sensor readings to get the initial angle.
2. Rotate to 0 degrees; then 27 minus the rear distance gives the y location.
3. Drive backwards while continuously checking whether right + left + robot width adds up to 27. When it does, that gives the x value.
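Those three steps might look something like this in Java (a sketch only: sensor plumbing is omitted, the 27 ft square half-field is taken from the post, and the center-of-robot convention for x is an assumption):

```java
// Sketch of the initial-localization steps above (units: feet, degrees).
class InitialPose {
    static final double FIELD = 27.0; // each half of the field is 27 ft square (from the post)

    /** Step 1: heading from the two rear-corner ultrasonics, mounted `baseline` feet apart. */
    static double initialAngleDeg(double backLeft, double backRight, double baseline) {
        return Math.toDegrees(Math.atan2(backRight - backLeft, baseline));
    }

    /** Step 2: after rotating to 0 degrees, y is 27 minus the rear distance. */
    static double yFromRear(double rearDistance) {
        return FIELD - rearDistance;
    }

    /** Step 3: x is trusted only when left + right + width actually span the field. */
    static double xFromSides(double left, double right, double robotWidth, double tolerance) {
        if (Math.abs(left + right + robotWidth - FIELD) > tolerance) {
            return Double.NaN; // something is blocking a side sensor; reading is invalid
        }
        return left + robotWidth / 2.0; // x of robot center, measured from the left wall
    }
}
```

Returning `NaN` on a failed cross-check is one way to signal "fall back to the other sensors," which is the pattern the rest of the thread discusses.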

Keeping track of position:

- Constantly check whether the robot's position can be worked out from the distance sensors with trig. When it can't be, use the gyro value.
- Double-integrate the acceleration to get displacement in each direction while the distance sensors aren't giving proper values (since other robots can get in the way).
- Keep track of which side of the field we're on using the z-axis accelerometer, by finding the spike it has to hit when we cross the barrier.
- Encoders can compare distance traveled to get the angle and distance to check against the accelerometer (the least trusted value, though, because the robot can slide or be pushed).
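A minimal sketch of that fallback logic, assuming the rest of the code hands us either a validated ultrasonic fix or `null`. All names and the re-zeroing behavior are invented for illustration, not the team's actual pseudocode:

```java
// Trust the ultrasonic fix when it cross-checks; otherwise dead-reckon by
// double-integrating acceleration (which drifts quickly, as noted below).
class PositionTracker {
    double x, y;    // position estimate, feet
    double vx, vy;  // velocity estimate, ft/s

    void update(double dt, Double ultrasonicX, Double ultrasonicY, double ax, double ay) {
        if (ultrasonicX != null && ultrasonicY != null) {
            // Validated wall readings: snap the estimate to them.
            x = ultrasonicX;
            y = ultrasonicY;
            vx = 0; vy = 0;   // simplification: re-zero velocity on every good fix
        } else {
            // No valid fix: integrate acceleration twice (Euler integration).
            vx += ax * dt;
            vy += ay * dt;
            x += vx * dt;
            y += vy * dt;
        }
    }
}
```

Re-zeroing velocity on every fix is a crude choice; a real implementation would estimate velocity too, but it shows how each good ultrasonic reading bounds the accelerometer drift.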

We have also considered using 2 accelerometers and gyros to ensure we are getting good values.

Any input on the feasibility of this idea?

We are programming in Java.

One issue with this is the accelerometer: it is very noisy for the non-gravity components of the vector it gives you. Also, if anything gets between your ultrasonics and the walls, it will mess up your numbers. I would recommend tracking the targets via the camera; it will be a lot less error-prone and most likely easier to get working.

I would tend to agree with Dan on this one - knowing where your robot is at all times is extremely hard to pull off, especially because of the error inherent in all of the sensors.

Even if your error were almost negligible, as the match went on, your calculated position would be more and more off from your true position - the drift would accumulate, and you don’t have time to recalibrate mid-match.

Yeah, we realized those problems, and when we wrote the pseudocode we included things to compensate.

Let the front ultrasonic sensor reading be a, the right sensor b, the rear c, and the left d.

Since each half of the field is a square, a + c + length = b + d + width.

This can be used to validate our data. The accelerometer is used whenever the data from the ultrasonics is invalid.

Similarly, when that equality holds, the robot's angle is equal to 90° − arcsin(27 / (a + c)).
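Those two checks, written out as code (the tolerance is a made-up parameter, and the angle formula is taken directly from the post, including its neglect of robot length):

```java
// a = front, b = right, c = rear, d = left ultrasonic readings, in feet.
class UltrasonicCheck {
    static final double FIELD = 27.0;

    /** Readings are plausible when the two wall-to-wall sums agree. */
    static boolean valid(double a, double b, double c, double d,
                         double length, double width, double tol) {
        return Math.abs((a + c + length) - (b + d + width)) < tol;
    }

    /** Robot angle off square, in degrees: 90 - arcsin(27 / (a + c)). */
    static double angleDeg(double a, double c) {
        return 90.0 - Math.toDegrees(Math.asin(FIELD / (a + c))); // requires a + c >= 27
    }
}
```

Note the hidden domain requirement: `asin` needs a + c ≥ 27, which fails the moment another robot blocks either sensor, so the validity check has to run first.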

I imagine it like navigating with your eyes closed. Whenever the data validates, it is like opening your eyes for a second.

I’ve tried using the accelerometer a couple of years ago for just this purpose, and I can tell you that the noise will far exceed any change caused by movement. But if you are set on it, you should research some better methods for double integration; the traditional method has way too much error on its own.

The idea is to use a polynomial to approximate the function, rather than a flat rectangle or trapezoid as one learns in high school calculus. If you have any questions, I can help. I don’t see the accelerometer working without something like this, since the error from the integration alone will be too high, let alone the error in the reading itself (the sensor is really only meant to tell you the direction of gravity, nothing more!)
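One standard "polynomial instead of rectangles" method is composite Simpson's rule, which fits a parabola through each pair of sample intervals. A generic quadrature sketch for comparison (this is a textbook technique, not necessarily the exact method the poster has in mind, and it does nothing about sensor bias, which usually dominates the error anyway):

```java
// Compare a flat rectangle sum against Simpson's rule on evenly spaced samples.
class Integrate {
    /** Left-rectangle (Riemann) sum with sample spacing dt. */
    static double rectangles(double[] f, double dt) {
        double sum = 0;
        for (int i = 0; i < f.length - 1; i++) sum += f[i] * dt;
        return sum;
    }

    /** Composite Simpson's rule; needs an odd number of samples (even interval count). */
    static double simpson(double[] f, double dt) {
        int n = f.length - 1;                  // number of intervals, must be even
        double sum = f[0] + f[n];
        for (int i = 1; i < n; i++) sum += (i % 2 == 1 ? 4 : 2) * f[i];
        return sum * dt / 3.0;
    }
}
```

On a smoothly varying signal Simpson's rule is dramatically more accurate for the same sample rate, which is the whole argument being made here.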

Also, the mentor on my team who deals with such things has had trouble getting the kit ultrasonic sensors to go more than ~20 feet, iirc. You will need to find some high quality ones for that to work.

By no means do I have my heart set on this method. Has anyone gotten the camera working from ~30 ft, with detection of angle and distance? Is there any way I can measure or see the issues with the accelerometer?

If you want to measure distance from 30 ft, you could track the two center rectangles. To center the image, you could take the average of the x-axis centers of mass of the two rectangles. To measure distance, you can measure the difference between the centers of mass, since the real distance between the two rectangles is constant (they’ll appear closer together at farther distances and farther apart at closer distances).

This might be relatively accurate if you’re in the center of the field (middle of 27ft) or just a few feet from center. However, you will probably have distance and centering errors when you get too far off center.
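The two-rectangle ranging idea can be sketched with a simple pinhole-camera model. The focal length in pixels and the spacing between the targets below are assumed placeholder numbers, not measured values:

```java
// Range from the pixel separation of two targets a known distance apart:
// pinhole model gives range = focal_px * real_spacing / pixel_separation.
class VisionRange {
    static final double FOCAL_PX = 540.0;         // assumed camera focal length, pixels
    static final double TARGET_SPACING_FT = 8.0;  // assumed spacing between the rectangles

    /** Range falls off as 1 / pixelSeparation under the pinhole model. */
    static double rangeFt(double centroidX1, double centroidX2) {
        double pixelSep = Math.abs(centroidX2 - centroidX1);
        return FOCAL_PX * TARGET_SPACING_FT / pixelSep;
    }

    /** Steering error: average of the two centroids vs. the image center. */
    static double centerErrorPx(double centroidX1, double centroidX2, double imageWidth) {
        return (centroidX1 + centroidX2) / 2.0 - imageWidth / 2.0;
    }
}
```

This model also explains the off-center caveat: viewed at an angle, the targets foreshorten and their pixel separation shrinks, so the computed range reads long.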

If you are only varying the angle of your ball shooter, you probably won’t find a single shooter speed that works at both close and far distances (at least not consistently, I would think). You could also vary the firing speed and keep a constant firing angle.

• Bryce

I would imagine that the vision tracking would have difficulty when it is off center: off axis, the two targets appear closer together, so the robot would appear to be farther away than it really is.

So, there is no currently known way of tracking the center upper hoop from any point on the field?

My code finds the position of the robot and its orientation from a single target, at any position on the field. I am working on multiple target tracking, which will be more accurate.