Vision Latency Compensation

Our major goal over the past week has been building a system that offsets goal tracking based on the robot's motion (correct the turret to the left if the robot is moving to the right, speed up the flywheel if it is moving backwards, etc.). However, this is mostly useless unless I can find a way to compensate for the fairly bad latency of our Limelight (close to 0.5 seconds).

I looked into using a PoseEstimator (Kalman filter) to fuse the camera data with odometry and then target off the estimated pose, but it seems this would be fairly slow (I'm already getting watchdog warnings, so I'm walking a fine line with loop time right now), and I'm not comfortable rewriting the entire targeting system with only three weeks to competition. I'm a freshman, so I barely understand the linear algebra behind it, and our Limelight is mounted on a moving turret, which adds a lot of extra variables.

I was hoping for a simpler solution that just compensates for the Limelight's latency: store previous Limelight values and use them to work out current ones, maybe taking robot and turret speed into account? A rough sketch of what I'm picturing is below. Alternatively, I read about one team using the odometry approach, but instead of trying to keep odometry accurate at all times, they reset odometry from a known Limelight value right before aiming and then targeted from that; there's a second sketch of that idea below as well.

I was wondering what approaches other teams have used, and what works best. Thanks!
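Here's roughly what I'm picturing for the "store previous values" idea (a minimal sketch, all class/method names made up, nothing here is our actual code). The idea is to buffer timestamped field-relative aim angles (gyro heading + turret angle), then when a frame arrives, look up where we were aiming when the image was actually *captured*. The latency would come from the Limelight's `tl` NetworkTables entry (pipeline latency in ms), plus I think the docs say to add roughly 11 ms of capture latency on top:

```java
import java.util.Map;
import java.util.TreeMap;

/**
 * Rough sketch: compensate for Limelight latency by remembering where the
 * turret/robot was pointing when each frame was actually captured.
 * All angles are in degrees and field-relative.
 */
public class LatencyCompensator {
    // timestamp (seconds) -> field-relative aim angle (gyro heading + turret angle)
    private final TreeMap<Double, Double> aimHistory = new TreeMap<>();
    private static final double HISTORY_SECONDS = 1.0; // a bit more than the ~0.5 s latency

    /** Call every loop with the current field-relative camera aim angle. */
    public void addSample(double timestampSeconds, double aimAngleDegrees) {
        aimHistory.put(timestampSeconds, aimAngleDegrees);
        // drop samples older than the history window
        aimHistory.headMap(timestampSeconds - HISTORY_SECONDS).clear();
    }

    /**
     * Convert a stale Limelight tx reading into a field-relative target angle.
     * latencySeconds = tl (ms) / 1000 + capture latency.
     */
    public double targetFieldAngleDegrees(double txDegrees, double latencySeconds,
                                          double nowSeconds) {
        double captureTime = nowSeconds - latencySeconds;
        // nearest stored sample at or before the capture time
        Map.Entry<Double, Double> sample = aimHistory.floorEntry(captureTime);
        if (sample == null) {
            return Double.NaN; // no history yet; caller should skip this frame
        }
        // The goal doesn't move, so the field angle computed from the old
        // frame is still valid now: old aim angle + old tx.
        return sample.getValue() + txDegrees;
    }
}
```

The turret setpoint each loop would then be `targetFieldAngleDegrees(...)` minus the current robot heading, so even while the vision data is stale the turret keeps tracking a fixed field angle instead of chasing an old error.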
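For the odometry-reset idea, this is roughly how I imagine recovering a field pose from one good Limelight reading, ignoring the turret-to-robot-center offset to keep it simple (the goal coordinates and names are placeholders, not real values):

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

/** Rough sketch: recover a field pose from a single Limelight reading. */
public class VisionPoseReset {
    // Known field position of the goal (meters) -- placeholder numbers.
    private static final Translation2d GOAL = new Translation2d(8.23, 4.11);

    /**
     * distanceMeters:  camera-to-goal distance (e.g. from ty + fixed-angle math)
     * aimFieldDegrees: field-relative angle the camera is pointing
     *                  (gyro heading + turret angle + tx)
     * headingDegrees:  current gyro heading
     */
    public static Pose2d poseFromVision(double distanceMeters,
                                        double aimFieldDegrees,
                                        double headingDegrees) {
        // Walk backwards from the goal along the camera's line of sight.
        Rotation2d aim = Rotation2d.fromDegrees(aimFieldDegrees);
        Translation2d robot = GOAL.minus(new Translation2d(distanceMeters, aim));
        return new Pose2d(robot, Rotation2d.fromDegrees(headingDegrees));
    }
}
```

You would then pass that `Pose2d` into your odometry's `resetPosition` (the exact signature depends on your WPILib version) right before aiming, and target from odometry afterwards.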
