# Tracking low goal in autonomous

I’m the head programmer for team 2197, and my team has given me the task of getting over a defense, then moving the robot in front of the low goal and putting the ball in the goal in auto. I was thinking about using a camera, but I’m not sure how I’d implement that, or if there’s a better way. I also have access to a gyro and an ultrasonic sensor, so I can use those in my code too. If anybody has any suggestions on how I should write this code, it’d help a lot.

Use the gyro to keep the robot pointed in the right direction while moving.
Use encoders on the wheel motors to estimate distance traveled.
Use the gyro to turn toward the castle wall.
Use the sonar to drive up to the wall.
Use the gyro to turn toward the low goal.
Use the sonar to close in on the low goal.
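The sensor math behind those steps can be sketched as plain helper methods. Everything here is a sketch with made-up names and gains (AutoNav, kP, tick counts, and so on are not from any library); you would wire these into your own drive and sensor classes and tune the constants on the real robot.

```java
public class AutoNav {
    // Proportional steering correction to hold a gyro heading while driving.
    // kP is a tuning gain you find experimentally; output is clamped to [-1, 1]
    // so it can feed a motor/drive command directly.
    public static double headingCorrection(double targetDeg, double currentDeg, double kP) {
        double error = targetDeg - currentDeg;
        double out = kP * error;
        return Math.max(-1.0, Math.min(1.0, out));
    }

    // Convert wheel encoder ticks to inches traveled, from the encoder's
    // ticks-per-revolution and the wheel diameter.
    public static double ticksToInches(int ticks, int ticksPerRev, double wheelDiameterIn) {
        return (ticks / (double) ticksPerRev) * Math.PI * wheelDiameterIn;
    }

    // Decide whether to keep driving toward the wall, based on the
    // ultrasonic range reading and the distance you want to stop at.
    public static boolean keepDriving(double sonarInches, double stopAtInches) {
        return sonarInches > stopAtInches;
    }
}
```

In the autonomous loop you would call `headingCorrection` every cycle while driving straight, check `ticksToInches` to know when you have cleared the defense, and use `keepDriving` with the sonar for the final approach.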

At various points, you can use the camera to fine tune your position and direction.

How exactly would I use the camera to fine-tune my position? I’ve never used vision processing before, so I have no idea where to start.

There is a tutorial on using vision to identify the target. Sorry I can’t look it up for you; I am on a limited tablet. Try searching for “FRC vision”.

Once you have identified the target, you can use its relative size to estimate distance, and the position of the target in the frame to estimate how much you have to turn to line up with the target.
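Both of those estimates come down to two small formulas. This is a sketch of the usual pinhole-camera math, not code from any FRC library; `focalLengthPx` is a constant you calibrate by measuring the target at a known distance, and the turn formula is a linear field-of-view approximation that is fine for small angles.

```java
public class VisionMath {
    // Pinhole-model distance estimate: a target of known real width
    // (inches) that appears widthPx pixels wide in the image is at
    // distance = actualWidth * focalLength / widthPx.
    public static double distanceFromWidth(double actualWidthIn, double focalLengthPx, double widthPx) {
        return actualWidthIn * focalLengthPx / widthPx;
    }

    // Degrees to turn, from how far off-center the target is.
    // Approximates the camera's horizontal FOV as linear across the image.
    public static double turnDegrees(double targetXPx, double imageWidthPx, double horizontalFovDeg) {
        double offset = targetXPx - imageWidthPx / 2.0;      // pixels off center
        return offset * (horizontalFovDeg / imageWidthPx);   // pixels -> degrees
    }
}
```

For example, with a 320-pixel-wide image and a 60-degree FOV, a target sitting at the right edge of the frame works out to roughly a 30-degree turn.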

Tutorial on vision processing
https://wpilib.screenstepslive.com/s/4485/m/24194

There is a vision example that implements the content in the white paper. It has code that will perform the vision processing on the DS laptop which can be integrated into your dashboard, and it has code that will perform the processing on the robot. I think it is tutorial 8 steps you through it. Tutorials are found on the Getting Started window.

I’ve looked through those tutorials, and I understand for the most part how to process the image and detect the objects/colors I want, but the problem I’m having is using that information to move the robot. How would I do that?

One of the last steps is to put the target into a coordinate system that goes from -1 to 1, much the way that a joystick does. So the math needed to turn the target results into an input to RobotDrive is relatively simple. It may need to be reversed or scaled, but that is why it is in that mapping.

I thought this was not an effective method for tracking, because the frame rate of the vision algorithm is usually too low to be a direct input to a control loop.

One of the big problems with visually tracking the low goals is that there does not seem to be a clear description of the details of what is behind the low goal. While it may sound rather roundabout at first, you may want to consider having a camera near the back of your robot pointed nearly vertically but forward with an LED ring to pick up on the high goal reflective tape.

Another tactic to lining up on the low goal may be to put an angled plate outside the frame perimeter down low in such a location that the castle wall and the partition help drive you in to the goal. You’d need one for each side as far as I can see at a casual look.

Caveat: my team is not planning to work the low goal, so neither of these ideas are tested at all.

You can use the picture to calculate angle and distance. You then need to tell the drive system how to accomplish that. Once you have done that, you take another pic to confirm. Get new angle/distance. Rinse and repeat.
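To see why the rinse-and-repeat approach converges even with imperfect turns, here is a toy simulation (entirely made up, not robot code): each pass the robot removes only a fraction of the measured angle error, then re-measures from a fresh picture.

```java
public class IterativeAim {
    // Simulate "measure, turn, re-measure, repeat": each iteration the
    // turn removes only turnEfficiency (0..1) of the current error, then
    // a new picture gives the remaining error. Returns how many pictures
    // it took to get within tolerance (capped at maxIters).
    public static int iterationsToAlign(double initialErrorDeg, double turnEfficiency,
                                        double toleranceDeg, int maxIters) {
        double error = initialErrorDeg;
        int iters = 0;
        while (Math.abs(error) > toleranceDeg && iters < maxIters) {
            error -= error * turnEfficiency;  // imperfect turn leaves some error
            iters++;                          // one more picture taken
        }
        return iters;
    }
}
```

Even a turn that only gets 80% of the way there each time settles to within a degree in a couple of pictures, which is why the slow vision frame rate is tolerable for this style of loop.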

Using the camera to close the loop is not as predictable as calculating an amount to turn and using a gyro to close the loop. But if your frame rate is reasonable and your robot can turn pretty well, this also works. In 2008, we programmed and demo’d the Toro robot over and over at champs. It stayed a fixed distance from, and followed, a colored piece of paper or a T-shirt. And yes, it was running on the 8-slot cRIO using an Axis camera. No coprocessor required.