How to make a robot line up with retroreflective tape using Java vision

Hello all,

I’ve tried to use vision multiple times before, but to no avail. What I’m trying to do is make our robot adjust itself so that it lines up with retroreflective tape, using Java and GRIP. I have already created the GRIP file to identify the retroreflective tape as a contour and added the generated GRIP pipeline to my Java project in VS Code.

Here is what I have so far:

m_robotContainer = new RobotContainer();

// Drivetrain motor controllers (PWM channels 0-3)
leftFront = new VictorSP(0);
leftBack = new VictorSP(2);
rightFront = new VictorSP(1);
rightBack = new VictorSP(3);

leftGroup = new SpeedControllerGroup(leftFront, leftBack);
rightGroup = new SpeedControllerGroup(rightFront, rightBack);

drive = new DifferentialDrive(leftGroup, rightGroup);

// USB camera that feeds the GRIP pipeline
UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
camera.setResolution(IMG_WIDTH, IMG_HEIGHT);

// Vision thread: take the first contour GRIP finds and record its center X pixel
thread = new VisionThread(camera, new GripPipeline(), pipeline -> {
  if (!pipeline.filterContoursOutput().isEmpty()) {
    Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
    synchronized (imgLock) {
        centerX = r.x + (r.width / 2);
        // System.out.println(r.width);
    }
  }
});

thread.start();

Then, in autonomous I have:

double centerX;
synchronized (imgLock) {
    centerX = this.centerX;   // copy the latest value from the vision thread
}

// Proportional turn: the error is the target's offset from the image center
double turn = centerX - (IMG_WIDTH / 2);
drive.arcadeDrive(0.2, turn * 0.005);

Help?


What does the robot do incorrectly? I assume you tested the vision code to make sure centerX is calculated correctly.

I strongly encourage you to use a gyro. Once you have angle-alignment code working, you can simply use your target’s position relative to the center of the camera as the desired angle, and it will be easier to accurately center on the vision targets.
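Here is a rough sketch of that approach (my illustration, not code from this thread). It assumes CAMERA_FOV_DEG is your webcam's horizontal field of view in degrees, gyro is any WPILib gyro (for example an ADXRS450_Gyro, or your IMU's class), and centerX, imgLock, IMG_WIDTH, and drive are the fields from the code above:

// Once, when you start aligning: turn the latest centerX into a heading setpoint.
double pixelOffset;
synchronized (imgLock) {
    pixelOffset = centerX - (IMG_WIDTH / 2.0);
}
double angleToTarget = (pixelOffset / IMG_WIDTH) * CAMERA_FOV_DEG; // rough pixel-to-degree conversion
double targetHeading = gyro.getAngle() + angleToTarget;

// Then, every loop: close the loop on the gyro instead of the camera.
double error = targetHeading - gyro.getAngle();
drive.arcadeDrive(0.0, error * kP); // kP is a small tuning constant you find on the real robot

The camera only sets the setpoint; the gyro, which updates much faster, closes the loop.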

This is a worse solution than appropriately targeting using the X position of the target. It adds an extra element of error to the loop, and there is no conceivable reason you would need a gyro if you already have to know the angle from the vision pipeline in order to make use of the gyro.

Some pretty loaded wording here. I’m not convinced everyone is talking about the same thing.

Check out 254’s presentation on vision & mechanical integration.

Specifically:

Delays & framerate in a vision processing system often rule it out as being the only feedback sensor in a drivetrain positioning closed loop control system. Gyroscopes are usually the next-best answer for the sensor to close-loop on.

To be clear, I said that there is no conceivable reason you would NEED a gyro, not that there isn’t a reason you would use one. Lots of teams do vision targeting just fine without auxiliary sensors. For an OP already having difficulty with vision targeting, adding 254 level software complexity is far from a good solution.

We had a lot of problems (overshooting, undershooting) using only vision as feedback in my first year of FRC. I just wanted to encourage them not to make the same mistakes I did a while back. Implementing a gyro turn function with basic knowledge of a PID loop isn’t a really hard thing to do, even if you are new to programming.

Ok. I don’t quite agree with the wording - I hate to get nitpicky, but I want to ensure OP doesn’t go back to their team and say “ChiefDelphi says we definitely don’t need a Gyro, so that can’t be the problem!”.

Even if their code and robot are all bug-free, the system may not perform as desired.

What you say can be true, but is not universally guaranteed.

This is only true up to the point where drivetrain speeds exceed the ability of the vision system to keep up.

Yes: The reason you would use an auxiliary sensor (like a gyro) is that the drivetrain speeds have exceeded the vision system’s ability to provide up-to-date information.

From what I can see: OP has not constrained the problem enough to where I can say with certainty they do not need an auxiliary sensor.

OP: the usual symptom of hitting this system limit is that when tweaking your P gain (the 0.005 multiplying your turn command), there will be no value that simultaneously:

  1. Causes the drivetrain to zero on the target quickly.
  2. Doesn’t overshoot or oscillate around the setpoint for too long.

If you have a gyro or wheel encoders, use Shuffleboard to plot their values on the same graph as centerX. Start the plot running with the robot still and in view of the target, then rotate it a few degrees. Note how much time passes between when the encoders/gyro show the robot moving and when centerX changes. You’ll probably want this number to be under 100ms for most FRC applications without an auxiliary sensor…
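A minimal sketch of that test (my illustration), assuming a gyro object and the centerX/imgLock fields from the original post; the dashboard key names are arbitrary:

// import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
// In your Robot class (TimedRobot), so it runs in every mode.
@Override
public void robotPeriodic() {
    double x;
    synchronized (imgLock) {
        x = centerX;
    }
    // Graph both keys on the same Shuffleboard plot, rotate the robot a few
    // degrees, and compare when each trace starts to move.
    SmartDashboard.putNumber("Vision centerX", x);
    SmartDashboard.putNumber("Gyro angle", gyro.getAngle());
}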

For the record, b_marap’s solution is a very reasonable one, and is what our robot will be doing this year.

For our vision setup, we’re running vision on the roboRIO with a Microsoft webcam, using Java and GRIP. We’re also using an Analog Devices IMU gyro that came in the kit of parts one year.

Gotcha. FWIW: In previous years we were seeing ~300ms delay on some of our processing with the webcam+RIO solution (but, libraries have improved since then, and we probably weren’t doing everything quite right).

Could you suggest some code I could use to achieve my goal, or suggest changes to my code or approach? It is my first time writing vision code, but I have written code for the robot in general. I don’t have any other good resources for vision coding.

I don’t think I can sign up for providing exact code that will “just work” on your particular robot. My biggest recommendation would be to follow @b_marap’s suggestion. Broken down into a concrete set of steps:

  1. Implement logic that causes your robot to rotate a fixed number of degrees. It should do this by calculating the error between some desired angle (an input) and the actual heading of the drivetrain (measured from a gyro).
  2. Add logic to trigger that rotate-to-degrees functionality based on a button push. Hard-code the number of degrees to something like 10 or -35. Press the button, and make sure the robot accurately turns the correct number of degrees.
  3. Add logic to read the target's offset from your camera’s centerline (the x pixel offset, converted to degrees). Ensure via prints or Shuffleboard that the data shows up on the roboRIO correctly.
  4. Add logic that, on a button press, reads the most recent pixel offset from the camera and passes it to the rotate-to-degrees code. This becomes your “auto-align” button (a rough sketch of all four steps follows below).
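Here is a rough sketch of how those four steps could fit together (my illustration, not tested code). It assumes the fields from the original post (drive, centerX, imgLock, IMG_WIDTH), a gyro object such as an ADXRS450_Gyro or your Analog Devices IMU, a horizontal field-of-view constant CAMERA_FOV_DEG, and a tuning constant kP; the joystick port and button number are placeholders:

private final Joystick stick = new Joystick(0); // import edu.wpi.first.wpilibj.Joystick
private double targetHeading = 0.0;
private boolean aligning = false;

// Step 1: proportional rotate-to-angle, using the gyro as feedback.
private void rotateToHeading() {
    double error = targetHeading - gyro.getAngle();
    double turn = kP * error;                   // tune kP on the real robot
    turn = Math.max(-0.5, Math.min(0.5, turn)); // clamp so the robot can't spin wildly
    drive.arcadeDrive(0.0, turn);
}

@Override
public void teleopPeriodic() {
    // Steps 3 & 4: on a button press, latch the camera's current offset as a
    // heading setpoint (step 2 is the same flow with a hard-coded angle).
    if (stick.getRawButtonPressed(1)) {
        double pixelOffset;
        synchronized (imgLock) {
            pixelOffset = centerX - (IMG_WIDTH / 2.0);
        }
        targetHeading = gyro.getAngle() + (pixelOffset / IMG_WIDTH) * CAMERA_FOV_DEG;
        aligning = true;
    }

    if (aligning) {
        rotateToHeading();
        if (Math.abs(targetHeading - gyro.getAngle()) < 2.0) {
            aligning = false; // close enough; hand control back to the driver
        }
    } else {
        drive.arcadeDrive(-stick.getY(), stick.getX()); // normal driving
    }
}

Latching the setpoint once per button press keeps the slow camera out of the fast control loop, which is the point of the gyro-based approach discussed above.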

You’ll have to work with the drivers to ensure they stop and point the robot at the target for at least 300ms before hitting this auto-align button.

Note - there are a lot of moving pieces here. Guess-and-check rarely works when code gets this complex. The first goal should be to understand what the code ought to do. Only then should implementation start, and at every step, check that it is in fact doing what it ought to do. I generally don’t recommend taking on things like this during the build season; they’re easier to tackle in the offseason, when you have time to fiddle and learn.
