Semi-Auto Vision Best Practices - Week 1

I have been watching the ISR District quals and noticed that most teams have vision systems on their robots, but their shots seemed very inconsistent. I was wondering whether this is simply a lack of software implementation, error from estimating distance, the LEDs not being bright enough, or something else.

Basically, what are some of the things you noticed while integrating vision to decide your RPM? I am looking forward to hearing your best practices and the precision levels a good vision system can reach. (For example, can a good vision system line up precise shots from across the trench, and what are the best practices to achieve that?)


probably for shooters a lot of teams are using some kind of interpolating tree map, whether that's keyed to hood angle or flywheel rpm
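The interpolating-map idea can be sketched roughly like this (class and method names here are illustrative, not from any team's actual code; WPILib ships a similar `InterpolatingTreeMap`):

```java
import java.util.Map;
import java.util.TreeMap;

// Linearly interpolates between empirically tuned (distance, RPM) setpoints.
public class ShooterTable {
    private final TreeMap<Double, Double> rpmByDistance = new TreeMap<>();

    public void put(double distanceMeters, double rpm) {
        rpmByDistance.put(distanceMeters, rpm);
    }

    public double lookup(double distanceMeters) {
        Map.Entry<Double, Double> lo = rpmByDistance.floorEntry(distanceMeters);
        Map.Entry<Double, Double> hi = rpmByDistance.ceilingEntry(distanceMeters);
        if (lo == null) return hi.getValue();   // below the table: clamp to first entry
        if (hi == null) return lo.getValue();   // above the table: clamp to last entry
        if (lo.getKey().equals(hi.getKey())) return lo.getValue(); // exact hit
        double t = (distanceMeters - lo.getKey()) / (hi.getKey() - lo.getKey());
        return lo.getValue() + t * (hi.getValue() - lo.getValue());
    }
}
```

you'd populate it with setpoints measured at practice (e.g. `put(2.0, 3000)` and `put(4.0, 4200)`), then `lookup(vision distance)` every loop to get the flywheel target.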

i’m inclined to believe many of the inconsistent shots come from badly tuned control loops, badly tuned data trees, or a lack of either

well-implemented image processing alone should do you well for angular misalignment with the target, but if you want to determine distance you’ll likely have to dip your toes into solvePnP, and maybe even fuse those measurements with odometry for some magic

not to mention, once you can determine angle and distance you’ll need a flywheel that’s both mechanically capable and running on a control loop well-tuned enough to actually hold the speed your shot needs
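a typical flywheel velocity loop is feedforward plus feedback; this is a minimal sketch, and the gains (`kS`, `kV`, `kP`) are placeholders you'd get from characterizing your own mechanism:

```java
// Feedforward + proportional velocity controller for a flywheel.
// kS (static friction volts), kV (volts per rad/s), kP (volts per rad/s of
// error) are hypothetical gains -- characterize and tune them on your robot.
public class FlywheelController {
    private final double kS;
    private final double kV;
    private final double kP;

    public FlywheelController(double kS, double kV, double kP) {
        this.kS = kS;
        this.kV = kV;
        this.kP = kP;
    }

    // Returns a motor voltage for the desired and measured wheel speed (rad/s).
    public double calculate(double setpointRadPerSec, double measuredRadPerSec) {
        double feedforward = kS * Math.signum(setpointRadPerSec) + kV * setpointRadPerSec;
        double feedback = kP * (setpointRadPerSec - measuredRadPerSec);
        return feedforward + feedback;
    }
}
```

the feedforward does most of the work of holding speed; the feedback term only corrects the residual error, which is why a characterized feedforward usually matters more than an aggressive P gain.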

for super-precise shots from behind the trench your vision and flywheel control loops could even benefit from some kind of filtering (e.g. a Kalman filter) or state-space control (e.g. LQR), but things like that are usually out of scope for most frc programming teams


Those are some valuable insights on the topic, thanks! I have been thinking of estimating distance through the vertical angular offset, just as the LL documentation suggests. Can you tell me why solvePnP would be a better solution? Would the LL approach be precise as well?
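For reference, the fixed-camera trig estimate described in the Limelight docs boils down to one formula, `d = (targetHeight − cameraHeight) / tan(cameraPitch + ty)`; a sketch with illustrative units (meters and degrees):

```java
// Trig distance estimate from a camera mounted at a fixed pitch, per the
// approach in the Limelight docs: the camera height, target height, and
// mounting pitch are measured constants; targetPitchDeg is the vertical
// angle the vision pipeline reports to the target (LL's "ty").
public class TrigDistance {
    public static double estimate(double cameraHeightM, double targetHeightM,
                                  double cameraPitchDeg, double targetPitchDeg) {
        double angleRad = Math.toRadians(cameraPitchDeg + targetPitchDeg);
        return (targetHeightM - cameraHeightM) / Math.tan(angleRad);
    }
}
```

Its precision degrades as the total angle gets small (far shots), since a tiny pitch error turns into a large distance error; that is one reason solvePnP can be attractive at long range.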

The simple solution worked well for us last year in determining distance, but solvePnP gives you much more information than distance and angle. We used it to estimate the robot's position on the field, and therefore how far the robot is off the target's center line, which helps when shooting at the inner target from the side (trench). At the same time, OpenCV calibration also does the camera intrinsic and distortion math for you, which makes your life “easier”.
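Running solvePnP itself needs OpenCV and a calibrated camera, but once you have its translation vector (the target's position in the camera frame), getting distance, lateral offset, and aim angle is plain trig. A hedged sketch, assuming OpenCV's camera-frame axis convention (x right, y down, z forward) and a camera facing roughly square to the target plane:

```java
// Derives aiming quantities from a solvePnP translation vector
// (tx, tz) expressed in the camera frame: x right, z forward.
public class PnpOffsets {
    // Straight-line distance along the ground plane to the target.
    public static double groundDistance(double tx, double tz) {
        return Math.hypot(tx, tz);
    }

    // How far the camera sits off the target's center line (only exact when
    // the camera faces the target plane squarely; otherwise fold in rvec).
    public static double lateralOffset(double tx) {
        return tx;
    }

    // Yaw the robot must turn through to face the target head-on, in degrees.
    public static double yawToTarget(double tx, double tz) {
        return Math.toDegrees(Math.atan2(tx, tz));
    }
}
```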

Thanks. Have you had luck getting PCs into the inner port from the trench? If so, how would you use the position offset you receive from solvePnP to decide your robot's rotation and position? I know I am asking for a lot, but if you know of any teams that have done this, I would love to take a look at their code to see how they achieved it.

Getting a PC into the inner port from the trench is more a mechanical problem of how accurate your shooter is.
This is our code for target detection (quite messy). field_x, field_y, field_z give the position of the camera on the field, following the WPI convention that x+ is toward the target and y+ is toward the left of the target; the origin is at the target. yaw, pitch, roll give the orientation of the camera on the field.
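With a field position in that convention, the yaw needed to point at the inner port is one atan2; this is a sketch under stated assumptions (origin at the outer target, x+ toward the target so the robot sits at negative x, and the inner port a fixed depth past the target plane, roughly 0.74 m in 2020 but treat that number as an assumption):

```java
// Computes the yaw (degrees, measured from the x+ axis toward the target)
// the robot must face so its shot line passes through the inner port.
public class InnerPortAim {
    public static double aimYawDeg(double robotX, double robotY, double innerDepthM) {
        // Robot at (robotX, robotY) with robotX negative in front of the target;
        // inner port at (innerDepthM, 0), i.e. innerDepthM beyond the origin.
        double dx = innerDepthM - robotX;
        double dy = -robotY;
        return Math.toDegrees(Math.atan2(dy, dx));
    }
}
```

the useful consequence is that the farther off the center line you are (larger |robotY|), the more the inner-port aim line diverges from the outer-port aim line, which is exactly why trench-side inner shots are hard.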
