#14
12-04-2016, 20:06
Fauge7
Head programmer
FRC #3019 (firebird robotics)
Team Role: Programmer
 
Join Date: Jan 2013
Rookie Year: 2012
Location: Scottsdale
Posts: 195
Re: How fast was your vision processing?

My team created and used Tower Tracker. Unfortunately, due to our robot's constraints we were not able to use it effectively, but we will try our hardest at competition.

Since Tower Tracker runs on the desktop, it gets the Axis camera feed, which is maybe 200 ms behind. It can then process frames in near real time, so maybe another 30 ms, and send the result to NetworkTables, which adds another 100 ms; by the time the vision data is ready, the robot itself can react essentially in real time. Robots can use snapshots of what they need to use vision processing effectively: when lining up, you only need one frame to do the angle calculation. Then use a gyro to turn 20 degrees (or whatever the angle is), and then find the distance. Multiple iterations help all of it, of course. TL;DR: a 400 ms max delay, snapshotted, gives us good enough target tracking.
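The snapshot idea above can be sketched in a few lines: because the vision angle is a few hundred milliseconds stale, you anchor it to the gyro heading recorded when the frame was taken, then turn until the gyro closes the gap. This is a minimal illustration, not Tower Tracker's actual code; the class and method names are hypothetical.

```java
// Hypothetical sketch of the "one snapshot + gyro" alignment described above.
// Assumed latency budget: ~200 ms camera + ~30 ms processing + ~100 ms NetworkTables.
public class SnapshotAligner {

    // The vision result is stale, so combine it with the gyro heading
    // that was recorded at the moment the snapshot frame was captured.
    public static double targetHeading(double gyroAtSnapshot, double visionAngleToTarget) {
        return gyroAtSnapshot + visionAngleToTarget;
    }

    // How far the robot still has to turn, given the current gyro reading.
    // Wraps the error into [-180, 180) so the robot takes the short way around.
    public static double remainingTurn(double targetHeading, double gyroNow) {
        double err = targetHeading - gyroNow;
        return ((err + 180.0) % 360.0 + 360.0) % 360.0 - 180.0;
    }

    public static void main(String[] args) {
        // Snapshot taken while gyro read 10 deg; vision said target is 20 deg to the right.
        double target = targetHeading(10.0, 20.0);
        // Robot has since turned to 25 deg, so 5 deg of turn remain.
        System.out.println(remainingTurn(target, 25.0)); // prints 5.0
    }
}
```

The point of anchoring to the snapshot-time gyro heading is that the 300-400 ms of pipeline delay stops mattering: the gyro is effectively real time, so the stale frame only has to be right about where the target *was*, not where it is now.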
__________________
Engineering Inspiration - 3019


Tower Tracker author (2016)
  • 1 regional finalist
  • 1 regional winner
  • 3 innovation in control awards