We’re using OpenCV code running on a Jetson. We have it successfully recognizing the target and calculating the exact center of the target. I’m a sophomore, but this is my first year being lead, and our previous lead went back to college. Like the title says, I successfully have it showing X and Y coordinates on the dashboard, but my question is: what’s the best way to go about making the robot move to get to the right X and Y?
Sounds like you’ve already done the hard part. Great job!
Next step is to get the movement working. That’s somewhat dependent on the design of your robot (camera placement, fixed vs. movable, etc.). The key concept is to find the “error” (the distance between the “current” and “desired” positions) and make the appropriate motion to drive that error to zero. There are a number of ways to do this, some more complicated than others, but a common approach is a closed-loop control system such as PID. There are several pages in the WPILib docs that explain it much better than I ever could.
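To give a rough picture of what that looks like, here’s a minimal sketch in Java using WPILib’s PIDController, assuming a differential drive and that your Jetson publishes the target’s X pixel coordinate to NetworkTables. The table/key names, image width, gain, and tolerance are all placeholders for whatever your setup actually uses:

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

// Hypothetical aiming routine: rotate until the target's X lines up with image center.
public class AimAtTarget {
    // kP is a tuning constant you find experimentally; 0.01 is just a starting guess.
    private final PIDController turnController = new PIDController(0.01, 0.0, 0.0);
    private final DifferentialDrive drive; // assume this is wired to your motors elsewhere

    private static final double IMAGE_CENTER_X = 160.0; // half of an example 320-px-wide image

    public AimAtTarget(DifferentialDrive drive) {
        this.drive = drive;
        turnController.setTolerance(2.0); // "close enough" band, in pixels
    }

    // Call this every loop iteration (e.g. from teleopPeriodic or a command's execute()).
    public void aim() {
        double targetX = NetworkTableInstance.getDefault()
                .getTable("vision")           // placeholder table name
                .getEntry("targetX")          // placeholder key published by the Jetson
                .getDouble(IMAGE_CENTER_X);   // default = "no error" if nothing is published

        // error = setpoint - measurement; PIDController computes that internally.
        double turn = turnController.calculate(targetX, IMAGE_CENTER_X);

        // No forward motion here, just rotation to reduce the error toward zero.
        drive.arcadeDrive(0.0, turn);
    }
}
```

This only handles the “aim” part; once you’re tracking the center, you can add a second controller on distance (or target area) for forward motion, like the Limelight example linked below does.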
Here is some example code from Limelight’s documentation:
http://docs.limelightvision.io/en/latest/cs_drive_to_goal_2019.html
They do exactly what you’re trying to do. The only difference between where you are and where you want to be is that their tx value returns the offset from center in degrees (negative is left, positive is right), while your pipeline is reporting pixels.
So you’d just need to convert from pixels to degrees, which should be simple: take your camera’s horizontal field of view, divide it by two to get the +/- range in degrees from center, and then figure out how many degrees a single pixel covers. A sketch of that conversion is below.
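Here’s a minimal sketch of both the linear version described above and a slightly more accurate pinhole-camera version. The 320-px width and 54.8° horizontal FOV are just example numbers (roughly a Lifecam HD-3000); swap in your camera’s actual specs:

```java
// Convert a target's pixel X coordinate to an angular offset from image center.
public final class PixelsToDegrees {
    // Placeholder camera specs: substitute your own resolution and horizontal FOV.
    private static final double IMAGE_WIDTH_PX = 320.0;
    private static final double HORIZONTAL_FOV_DEG = 54.8;

    // Linear approximation: assume a constant number of degrees per pixel.
    public static double linear(double targetXPx) {
        double centerPx = IMAGE_WIDTH_PX / 2.0;
        double degPerPx = HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX;
        return (targetXPx - centerPx) * degPerPx; // negative = left, positive = right
    }

    // Pinhole-camera version: derive the focal length (in pixels) from the FOV,
    // then angle = atan(pixel offset / focal length).
    public static double pinhole(double targetXPx) {
        double centerPx = IMAGE_WIDTH_PX / 2.0;
        double focalPx = centerPx / Math.tan(Math.toRadians(HORIZONTAL_FOV_DEG / 2.0));
        return Math.toDegrees(Math.atan((targetXPx - centerPx) / focalPx));
    }
}
```

The linear approximation is fine near the center of the image, which is where you care most about accuracy anyway; the atan version just behaves a bit better toward the edges of the frame.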