[Vision] Multithreading?

What is the best way to have a vision loop that constantly updates numbers, sleeping briefly between updates, without hindering the normal teleop loop?

If I want to update the numbers every half second or so, is there a way I can do that without making the whole teleop loop wait? I’ve looked at Runnables, but is there an easier way to do this?

As long as you’re sure that this is what you want to do…

new Thread("Vision Processing Thread") {
    public void run() {
        try {
            while (true) {
                // vision code here
                Thread.sleep(500);
            }
        } catch (Exception e) {}
    }
}.start();

(apologies if there are syntax errors, I wrote this in a text editor)

Is there a better way to do this? What is the normal way to include vision processing?

Something like this might work for you:

import java.util.Timer;
import java.util.TimerTask;

// Create a new Timer to schedule vision processing
Timer visionScheduler = new Timer("Vision Scheduler", true);

void doVisionProcessing() {
    // Your vision code here
}

public void autonomousInit() {
    // Perform vision processing every 50ms, starting in 0ms (i.e. *now*)
    visionScheduler.scheduleAtFixedRate(new TimerTask() {
        public void run() {
            doVisionProcessing();
        }
    }, 0, 50);
}

Keep in mind that you’ll need to do the normal threading stuff like locks/synchronization blocks and declaring shared variables volatile, but that applies for any multithreaded program.
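For the shared-variable part, one minimal sketch (the class and field names here are made up for illustration, not from any team’s library) is to publish an immutable result object through an `AtomicReference`, which gives the same visibility guarantee as a `volatile` field without needing explicit locks:

```java
import java.util.concurrent.atomic.AtomicReference;

public class VisionState {
    // Immutable result object, so a single reference swap publishes it safely
    public static class Result {
        public final double angle, distance;
        public Result(double angle, double distance) {
            this.angle = angle;
            this.distance = distance;
        }
    }

    // AtomicReference.set/get give the same visibility guarantee as volatile
    private final AtomicReference<Result> latest = new AtomicReference<>();

    // Called from the vision thread
    public void publish(Result r) { latest.set(r); }

    // Called from the teleop loop; may be null before the first frame
    public Result peek() { return latest.get(); }
}
```

Because `Result` is immutable, the teleop loop can never observe a half-written angle/distance pair.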

Otherwise you could put it in a periodic method.
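If you do keep it in a periodic method, a common trick is to throttle the vision work with a loop counter. A rough sketch, assuming TimedRobot’s default ~20 ms loop and a made-up `updateVisionNumbers()` helper:

```java
public class Robot /* extends TimedRobot in a real project */ {
    private int loopCount = 0;
    public int visionUpdates = 0; // counts how often vision actually ran
    private static final int VISION_PERIOD_LOOPS = 25; // 25 * 20 ms ≈ 500 ms

    // Called by WPILib roughly every 20 ms during teleop
    public void teleopPeriodic() {
        if (++loopCount >= VISION_PERIOD_LOOPS) {
            loopCount = 0;
            updateVisionNumbers(); // runs only every 25th loop
        }
        // ... normal teleop code runs every loop ...
    }

    void updateVisionNumbers() {
        visionUpdates++;
        // read the camera / update your numbers here
    }
}
```

This only works if each vision pass is fast enough to fit inside one loop iteration; if it isn’t, you’re back to needing a separate thread.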

We have a utility in our GitHub to do this. Send an e-mail to gixxy if you have any questions (he doesn’t check CD daily).

Thread cameraThread = new Thread() {
    public void run() {
        // camera code goes here
    }
};
cameraThread.start();

Then do…

cameraThread.stop(); // note: Thread.stop() is deprecated; prefer interrupt()

java.util.Timer myTimer = new java.util.Timer();

myTimer.schedule(new java.util.TimerTask() {
    public void run() {
        // vision code here
    }
}, 1000);

The difference between this and sleeping the thread is that the Timer runs its task on its own background thread, so scheduling it doesn’t suspend the rest of the program. I’m thinking that this is what you are looking for.

We have a vision target module in our library that will take care of the multi-threading stuff. It spawns a separate thread and will process each frame to look for objects for the specified criteria. If a new frame is processed, the new result will overwrite the old one. In other words, we always have the latest result cached. If the main robot thread wants to look for “targets”, it gets the cached result from the vision targeting thread.
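That latest-result-wins pattern could be sketched like this (the `Target` class, the 50 ms frame time, and all names here are placeholders for illustration, not the actual library):

```java
import java.util.concurrent.atomic.AtomicReference;

public class VisionTargetThread {
    // Placeholder for whatever the frame processing produces
    public static class Target {
        public final long frameNumber;
        public Target(long frameNumber) { this.frameNumber = frameNumber; }
    }

    private final AtomicReference<Target> cached = new AtomicReference<>();
    private final Thread worker;

    public VisionTargetThread() {
        worker = new Thread(() -> {
            long frame = 0;
            while (!Thread.interrupted()) {
                // stand-in for grabbing and processing a camera frame
                Target result = new Target(++frame);
                cached.set(result); // newest result overwrites the old one
                try { Thread.sleep(50); } catch (InterruptedException e) { return; }
            }
        }, "Vision Targeting");
        worker.setDaemon(true);
    }

    public void start() { worker.start(); }

    // Main robot thread: always sees the most recently processed result
    public Target getLatestTarget() { return cached.get(); }
}
```

The robot thread never blocks waiting for a frame; it just reads whatever result was cached last.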

We have it running as a separate program on the Driver Station. We use Network Tables to communicate with the RoboRio. It grabs the same picture that is being displayed on the DS for the driver.

When “picture” is 0, the vision program does nothing.
When “picture” is 1, it starts vision processing and changes the value to 2 to indicate it is working.
When it is done calculating, it sets “Angle” and “Distance”, and changes “picture” back to 0.

On the RoboRio, it waits for “picture” to go back to 0. When it sees that, it takes “Angle” and “Distance” and drives there (using the NavX).

Rinse, repeat, until Angle and Distance are close enough to shoot.
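The handshake above could be sketched as follows; here a `ConcurrentHashMap` stands in for the shared NetworkTables table, and the Angle/Distance values are placeholders:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class VisionHandshake {
    // Stand-in for the NetworkTables table shared by the DS and the RoboRio
    public final ConcurrentMap<String, Double> table = new ConcurrentHashMap<>();

    public VisionHandshake() { table.put("picture", 0.0); }

    // RoboRio side: request a new measurement
    public void requestVision() { table.put("picture", 1.0); }

    // Driver Station side: one pass of the vision program's loop
    public void visionStep() {
        if (table.get("picture") == 1.0) {
            table.put("picture", 2.0);   // 2 = "working"
            // ... process the driver-station camera frame here ...
            table.put("Angle", 4.2);     // placeholder results
            table.put("Distance", 36.0);
            table.put("picture", 0.0);   // back to 0 = results are valid
        }
    }

    // RoboRio side: results are ready once "picture" returns to 0
    public boolean resultReady() { return table.get("picture") == 0.0; }
}
```

The nice property of this scheme is that “picture” doubles as both the request flag and the done flag, so neither side needs any extra synchronization beyond the table itself.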