How is vision tracking incorporated into robot movement in Deep Space?

I understand the basics of vision tracking but am trying to figure out how it is incorporated into the actual driving of the robot. I would love to hear a step by step description of what is going on from the point vision is activated (loading station identified) to the point the hatch panel is delivered.
I understand that the camera captures the image of the reflective tape and figures out where the target is and how to get there. But that’s where I get confused. I see these teams slamming into the loading station wall (perfectly aligned) and am wondering what is going on.
Are you calculating the distance to the hatch panel loading station or are you just telling the robot to keep driving into the target until you can’t go any further?
Same question with delivering the hatch panel.
Are you activating the mechanism to secure the hatch panel at a particular location? And for unloading as well? Are you using other sensors to assist in the process?
Does your process rely on the driver or do you just activate the process and it works without any further input from the driver?
I’m sure there are many ways to do this but I would appreciate some concrete step by step examples of how teams did this.
Thank you

Indeed, I’ve seen a lot of approaches this year, but here’s ours:

In teleop our driver engages vision by pushing the steering stick (right joystick) forward. Our Drive command then starts reading our Limelight instead of the X (side-to-side) axis of the steering stick to control our rotation.

We also estimate the distance to the target and slow the robot down as it approaches the target. That way picking up (and scoring) a hatch is a simple “push both sticks all the way forward” operation.
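A minimal sketch of that drive-assist idea, written as pure functions so the logic is clear. The Limelight’s horizontal offset (`tx`, in degrees) would normally come from NetworkTables; the gain and slow-down range here are made-up values, not the poster’s actual tuning:

```java
// Hypothetical sketch of "steer from vision, taper throttle near the target".
// KP_STEER and SLOW_RANGE_M are assumed constants, not real team values.
public class VisionDriveAssist {
    static final double KP_STEER = 0.03;    // proportional steering gain (assumed)
    static final double SLOW_RANGE_M = 1.5; // start slowing inside this range (assumed)

    // Rotation command: use the vision offset instead of the stick's X axis.
    static double rotation(double txDegrees) {
        double cmd = KP_STEER * txDegrees;
        return Math.max(-1.0, Math.min(1.0, cmd)); // clamp to motor range
    }

    // Throttle taper: scale the driver's forward command down near the target,
    // so "push both sticks forward" still ends in a gentle touch.
    static double throttle(double driverForward, double distanceMeters) {
        double scale = Math.min(1.0, distanceMeters / SLOW_RANGE_M);
        return driverForward * scale;
    }
}
```

With this shape, the driver keeps full speed beyond the slow-down range and the robot eases in as the estimated distance shrinks.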

Our hatch mech is on a spring-loaded plunger with a hall-effect sensor, so we can detect when it has engaged a disk. That triggers the pincer to grip the disk and rumbles the controllers so the driver knows to back out.
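The engagement logic above amounts to a simple edge detector: the first loop cycle where the hall-effect sensor reads "engaged", grip and rumble once. A hypothetical sketch (the sensor read and rumble call are stand-ins for the real hardware APIs):

```java
// Hypothetical sketch of the hatch-engagement sequence described above.
public class HatchGrabber {
    private boolean gripped = false;

    // Called each loop with the hall-effect plunger state.
    // Returns true only on the cycle the pincer closes, so the caller
    // can fire a one-shot action (rumble the controllers).
    boolean update(boolean plungerEngaged) {
        if (plungerEngaged && !gripped) {
            gripped = true; // close the pincer on the disk
            return true;    // signal: rumble, driver knows to back out
        }
        return false;
    }
}
```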

We don’t automatically release the hatch when we reach a target (that felt too risky), but we do rumble the controllers again so the operator knows to release. Lights then tell the driver it’s safe to back up.


We only really used our Jevois when getting and placing hatches. We have a command that only does anything when our intake is extended. If we have distance data from solvePnP, we use the x (lateral) distance from the centre of the vision targets and move our hatch intake to that position.
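A rough sketch of that idea: take the lateral (x) offset reported by solvePnP and use it directly as the setpoint for a side-to-side hatch intake, clamped to the mechanism’s travel. The travel limit here is an assumption, not the poster’s actual geometry:

```java
// Hypothetical: convert solvePnP lateral offset into an intake slide setpoint.
public class HatchSlide {
    static final double MAX_TRAVEL_M = 0.25; // assumed slide half-travel

    // Clamp the target's lateral offset to the slide's physical range.
    static double setpoint(double lateralOffsetMeters) {
        return Math.max(-MAX_TRAVEL_M, Math.min(MAX_TRAVEL_M, lateralOffsetMeters));
    }
}
```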

Our team uses a Limelight. We use the X and Distance values that the Limelight sends over NetworkTables. It’s a simple PID loop that tells the robot to drive to a certain distance and yaw so that the target is in the center of the image.
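The core of such a loop reduces to two proportional terms: one driving the distance error to zero, one driving the horizontal offset (`tx`) to zero. A minimal sketch; the gains and stop distance are illustrative, not the poster’s tuned values:

```java
// Hypothetical two-axis P loop: forward from distance error, rotate from tx.
public class LimelightAlign {
    static final double KP_DIST = 0.5;       // assumed distance gain
    static final double KP_TURN = 0.04;      // assumed yaw gain
    static final double TARGET_DIST_M = 0.5; // assumed stop distance

    // Returns {forward, rotate} commands for an arcade-style drive.
    static double[] calculate(double distanceMeters, double txDegrees) {
        double forward = KP_DIST * (distanceMeters - TARGET_DIST_M);
        double rotate = KP_TURN * txDegrees;
        return new double[] {
            Math.max(-1.0, Math.min(1.0, forward)),
            Math.max(-1.0, Math.min(1.0, rotate))
        };
    }
}
```

At the setpoint both terms go to zero, which is also why a loop like this naturally "stops" when the target distance is reached.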

The driver holds down a button which performs this task until he lets go.

Are you having trouble grasping closed-loop control?


I believe tuning the PID is our greatest weakness. If you use Java, would it be possible to refer us to a copy of your code?
We are using Limelight, Java and Talons.
Also, how does your driver know when to let go?
Thank you.

^This is the command that runs when the driver holds down a button on his gamepad.

You’ve basically got the same setup as us - Limelight, Java, Talons.

The driver just lets go when the robot places the hatch. He could keep holding the button, but the closed-loop control is configured to stop when it reaches the correct distance (hatch placed).


We do pretty much the same thing as Landoh12 as far as the driver holding down the button for automatic delivery and pickup. We added a final step of having the robot back up a specified distance after completing the delivery or pickup. The instruction to the driver was to hold the button down until the robot had backed up. The LED lights on the robot also change color once the game piece is delivered or picked up, so the driver can look at those as well.
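That hold-to-complete sequence is essentially a small state machine: approach until the piece is delivered, back up a set distance, then stop (and change the LEDs). A hypothetical sketch with made-up speeds and backup distance:

```java
// Hypothetical state machine for approach -> back up -> done.
public class AutoPlaceSequence {
    enum Phase { APPROACH, BACKUP, DONE }
    private Phase phase = Phase.APPROACH;

    // Called each loop while the driver holds the button.
    // Returns the forward command; negative means backing up.
    double update(boolean piecePlaced, double backupTravelMeters) {
        switch (phase) {
            case APPROACH:
                if (piecePlaced) phase = Phase.BACKUP;
                return 0.4;  // drive toward the target (assumed speed)
            case BACKUP:
                if (backupTravelMeters >= 0.6) phase = Phase.DONE; // assumed distance
                return -0.4; // reverse away from the target
            default:
                return 0.0;  // DONE: LEDs change, driver releases the button
        }
    }
}
```

The DONE state is where you would flip the LED color so the driver knows the sequence finished even if they are still holding the button.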

I am just wondering: what types of lights did you use, and what controllers did you use for rumbling? We had DualShock 4 controllers that would rumble, but it took downloading a program to make that work, and we lost three buttons in the process.