We have a vision target module in our library that takes care of the multi-threading stuff. It spawns a separate thread that processes each frame, looking for objects matching the specified criteria. When a new frame is processed, the new result overwrites the old one. In other words, we always have the latest result cached. When the main robot thread wants to look for "targets", it just grabs the cached result from the vision targeting thread.
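A minimal sketch of that "latest result wins" pattern, assuming an immutable result object and an `AtomicReference` cache (the class and field names here are hypothetical, not our actual library API; the frame loop is faked with a fixed count for the demo):

```java
import java.util.concurrent.atomic.AtomicReference;

public class VisionTargetSketch {
    // Immutable result record, so a reader never sees a half-written update.
    record VisionResult(int frameId, double angle, double distance) {}

    // The cache: each processed frame simply overwrites the previous result.
    private final AtomicReference<VisionResult> latest = new AtomicReference<>();
    private final Thread worker;

    VisionTargetSketch() {
        worker = new Thread(() -> {
            // Stand-in for the real frame loop; real code would grab camera
            // frames and run the target-detection pipeline here.
            for (int frame = 1; frame <= 5; frame++) {
                latest.set(new VisionResult(frame, frame * 1.5, 100.0 - frame));
            }
        });
        worker.start();
    }

    // Called from the main robot thread; never blocks on frame processing.
    VisionResult getLatest() {
        return latest.get();
    }

    public static void main(String[] args) throws InterruptedException {
        VisionTargetSketch vision = new VisionTargetSketch();
        vision.worker.join(); // demo only; robot code would just poll getLatest()
        VisionResult r = vision.getLatest();
        System.out.println("frame=" + r.frameId() + " angle=" + r.angle());
    }
}
```

The point of the `AtomicReference` plus an immutable record is that the robot thread can read at any time without locks and without ever seeing a torn (partially updated) result.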
We have it running as a separate program on the Driver Station. We use Network Tables to communicate with the RoboRio. It grabs the same picture that is being displayed on the DS for the driver.
When "picture" is 0, the vision program does nothing.
When "picture" is 1, it starts vision processing and changes the value to 2 to indicate it is working.
When it is done calculating, it sets "Angle" and "Distance", and changes "picture" back to 0.
On the RoboRio, we wait for "picture" to go back to 0. When it sees that, it takes "Angle" and "Distance" and drives there (using the NavX).
Rinse, repeat, until Angle and Distance are close enough to shoot.
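The handshake above can be sketched as two threads sharing the "picture", "Angle", and "Distance" values. To keep this self-contained, the shared atomics below stand in for the NetworkTables entries (real code would use the NetworkTables API instead), and the computed angle/distance are fake numbers:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

public class HandshakeSketch {
    // Stand-ins for the NetworkTables entries.
    static final AtomicInteger picture = new AtomicInteger(0);
    // Doubles stored as raw bits so they fit in atomics for this demo.
    static final AtomicLong angle = new AtomicLong();
    static final AtomicLong distance = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        // Driver Station side: wait for picture == 1, mark ourselves busy
        // with 2, "process" the frame, publish results, then signal done
        // by setting picture back to 0.
        Thread visionProgram = new Thread(() -> {
            while (picture.get() != 1) Thread.onSpinWait();
            picture.set(2);                               // working
            angle.set(Double.doubleToLongBits(12.5));     // fake result
            distance.set(Double.doubleToLongBits(84.0));  // fake result
            picture.set(0);                               // done
        });
        visionProgram.start();

        // RoboRio side: request processing, wait for the round trip,
        // then read the results and drive.
        picture.set(1);
        while (picture.get() != 0) Thread.onSpinWait();
        double a = Double.longBitsToDouble(angle.get());
        double d = Double.longBitsToDouble(distance.get());
        System.out.println("drive to angle=" + a + " distance=" + d);
        visionProgram.join();
    }
}
```

One design note on this protocol: because only the vision program ever writes 2 and 0, and only the RoboRio ever writes 1, each side can tell whose turn it is just by reading "picture", which is what makes the single-value handshake safe over NetworkTables.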