I’m a bit conflicted. I’m the primary developer of my team’s vision tracking, and I’m wondering: at what point would multithreading be beneficial compared to just having the minimum? Multithreading is much harder to debug and teach than otherwise relatively linear code.
Having the minimum what?
You are really close to us (we are 15 minutes away). Feel free to PM me and we can set up a time to meet at a library, or you would be welcome at one of our meetings (we are meeting Wednesday evenings until school ends) to discuss it, if you would like.
Regardless, there are a couple of different reasons and types of multithreading that can be beneficial.
- When you need to control the timing of a loop, you cannot rely on the periodic methods, which are called based on when DS packets arrive. By far the easiest way to handle this circumstance is to use the Notifier (see the first sketch after this list).
- The second main reason to use threading is to handle a process that may block or delay your periodic loop and cannot easily be sliced into a state machine. A good example of this is a communications loop speaking UDP or TCP/IP to an external device, where the code is easiest to implement using a blocking read. Put this in its own thread and have it write outputs to a global status class with synchronized methods to prevent surprises (see the second sketch below).
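Here is a minimal sketch of what I mean by the Notifier approach; the 10 ms period and the empty controlLoop() are just placeholders for whatever you actually need to run at a fixed rate:

```java
import edu.wpi.first.wpilibj.Notifier;

public class FixedRateLoop {
    // Runs controlLoop() every 10 ms regardless of when DS packets arrive.
    private final Notifier loopNotifier = new Notifier(this::controlLoop);

    public void start() {
        loopNotifier.startPeriodic(0.010); // period in seconds
    }

    public void stop() {
        loopNotifier.stop();
    }

    private void controlLoop() {
        // Time-critical work goes here (e.g. a fast control loop update).
        // Keep it short: this runs on the Notifier's own thread.
    }
}
```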
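And a rough sketch of the second case, assuming a device that pushes small text-encoded UDP packets; the port number and the parsing are made up, but the shape (blocking receive in its own thread, synchronized status object shared with the robot loop) is the point:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

// Shared status object: synchronized so the robot loop and the
// receive thread never see a half-updated value.
class DeviceStatus {
    private double value;
    private long lastUpdateMs;

    public synchronized void update(double newValue) {
        value = newValue;
        lastUpdateMs = System.currentTimeMillis();
    }

    public synchronized double getValue() {
        return value;
    }

    public synchronized boolean isFresh(long maxAgeMs) {
        return System.currentTimeMillis() - lastUpdateMs < maxAgeMs;
    }
}

class UdpReceiver implements Runnable {
    private final DeviceStatus status;

    UdpReceiver(DeviceStatus status) {
        this.status = status;
    }

    @Override
    public void run() {
        try (DatagramSocket socket = new DatagramSocket(5800)) { // example port
            byte[] buffer = new byte[256];
            while (!Thread.currentThread().isInterrupted()) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet); // blocks here, safely off the main loop
                String text = new String(packet.getData(), 0, packet.getLength());
                status.update(Double.parseDouble(text.trim()));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

// Started once from robotInit():
//   DeviceStatus status = new DeviceStatus();
//   new Thread(new UdpReceiver(status), "udp-receiver").start();
// The periodic code then just calls status.getValue().
```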
There are plenty of other reasons, many bad and a couple good. You are right to question complexity whenever possible; however, there are times when multithreading will make your code better or even simpler.
We’re pretty close to you guys also, and are starting to work on setting up threaded drive code - essentially, there will be a separate thread running more often than the main code to deal with closed-loop driving based on angular velocity (roughly along the lines of the sketch below). PM me if you have any questions about multi-threading.
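To give a rough idea of the shape of it (this is not our actual code; the gain, the 5 ms period, and the Gyro/DifferentialDrive classes are just placeholders), it looks something like this:

```java
import edu.wpi.first.wpilibj.Notifier;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;
import edu.wpi.first.wpilibj.interfaces.Gyro;

// Fast inner loop for closed-loop turning on angular velocity.
// The main robot code only sets targets; the 5 ms Notifier does the control.
public class RateDrive {
    private static final double kP = 0.02;       // placeholder gain
    private static final double kPeriod = 0.005; // 5 ms, faster than the 20 ms main loop

    private final DifferentialDrive drive;
    private final Gyro gyro;
    private final Notifier loop = new Notifier(this::update);

    // Targets written by the main robot loop, read by the Notifier thread.
    private volatile double targetForward = 0.0;
    private volatile double targetTurnRate = 0.0; // degrees per second

    public RateDrive(DifferentialDrive drive, Gyro gyro) {
        this.drive = drive;
        this.gyro = gyro;
        loop.startPeriodic(kPeriod);
    }

    /** Called from teleopPeriodic with joystick values. */
    public void set(double forward, double turnRateDegPerSec) {
        targetForward = forward;
        targetTurnRate = turnRateDegPerSec;
    }

    private void update() {
        // Simple P correction on measured angular velocity.
        double error = targetTurnRate - gyro.getRate();
        drive.arcadeDrive(targetForward, kP * error);
    }
}
```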
We’ve also had some success running vision tracking on the driver station - latency didn’t seem to be a huge problem, and the laptop CPU is much better than the roboRIO’s (i7 vs ARMv7).
Multithreading for vision processing makes good sense, period.
One thread does nothing but acquire images as fast as the camera is capable of capturing them.
The second thread processes the images it receives from the capture thread.
This approach allows the processing thread to run at full speed without having to wait for a capture to take place on the camera.
For example, with an RPi 3 we produced target info at about 20 times a second when single-threading, and more than 70 times a second when multithreading. Granted, that is faster than the camera can produce frames, so some of the target data is a repeat of the previous frame. What this gives us is the most current target data possible. When single-threading, you can miss image info because the camera sits idle while the previous frame is being processed.
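As a bare-bones sketch of that capture/process split (plain OpenCV in Java; the camera index and the empty processFrame are placeholders, not our actual pipeline):

```java
import java.util.concurrent.atomic.AtomicReference;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

public class TwoThreadVision {
    // Latest frame handed from the capture thread to the processing thread.
    private static final AtomicReference<Mat> latestFrame = new AtomicReference<>();

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // OpenCV natives must be on the path
        VideoCapture camera = new VideoCapture(0);    // camera index is an example

        // Capture thread: does nothing but pull frames as fast as the camera allows.
        Thread captureThread = new Thread(() -> {
            Mat frame = new Mat();
            while (!Thread.currentThread().isInterrupted()) {
                if (camera.read(frame)) {
                    // Swap in a copy; free any frame the processor never claimed.
                    Mat stale = latestFrame.getAndSet(frame.clone());
                    if (stale != null) {
                        stale.release();
                    }
                }
            }
        }, "capture");

        // Processing thread: always works on the most recent frame available.
        Thread processThread = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                Mat frame = latestFrame.getAndSet(null);
                if (frame == null) {
                    continue; // nothing new yet; real code could sleep or wait briefly
                }
                processFrame(frame);
                frame.release();
            }
        }, "process");

        captureThread.start();
        processThread.start();
    }

    private static void processFrame(Mat frame) {
        // HSV threshold, contour finding, target math, etc. would go here.
    }
}
```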
All that said, the actual gains in performance make very little difference when applied in FRC. There are techniques that let you compensate for the lag, and that lag is present whether you use multithreading or not.
Some interesting responses, so thanks! The reason I’m asking is that I personally want to tackle vision, but not too many people in my community are interested in programming. Out of my entire school, only around 4 are interested, and an even smaller number are serious. When you add the complexity of multithreading into the equation, I’m forced to ask the question: is it worth it?
If you’re doing vision on the roboRIO, then your vision processing should definitely be in a separate thread/process, and if I recall correctly, CameraServer/cscore has explicit support for doing that.
For most other problems in FRC, multithreading is only worth it in advanced use cases.
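For reference, a minimal sketch of what that support looks like with WPILib’s VisionThread; MyPipeline is a stand-in for a GRIP-generated or hand-written pipeline, and the exact package names vary a bit between WPILib versions:

```java
import org.opencv.core.Mat;

import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.vision.VisionPipeline;
import edu.wpi.first.vision.VisionThread;
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
    /** Stand-in pipeline; a GRIP-generated class would normally go here. */
    public static class MyPipeline implements VisionPipeline {
        public double targetCenterX;

        @Override
        public void process(Mat image) {
            // Threshold, find contours, compute targetCenterX, etc.
        }
    }

    private final Object targetLock = new Object();
    private double targetCenterX; // written by the vision thread, read from periodic code

    @Override
    public void robotInit() {
        // Streams the camera to the dashboard and returns it for processing.
        // (Older WPILib versions spell this CameraServer.getInstance().startAutomaticCapture().)
        UsbCamera camera = CameraServer.startAutomaticCapture();
        camera.setResolution(320, 240);

        // VisionThread runs the pipeline on its own thread and calls the
        // listener after every processed frame.
        VisionThread visionThread = new VisionThread(camera, new MyPipeline(), pipeline -> {
            synchronized (targetLock) {
                targetCenterX = pipeline.targetCenterX;
            }
        });
        visionThread.start();
    }
}
```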
Here is my quick jumble of multithreading and state machine support. I literally just threw this together in the last hour. I’m not even sure if it’s right; if anyone has time, I’d definitely appreciate a review.