We are on our six-week adventure trying to get some vision code working on our test chassis before our regional. Anyhow, our bot is ready for three cameras: two USB LifeCams and an Axis IP camera. We have a Kangaroo mini PC ready to co-process.
Our questions are:
Is GRIP ready to selectively switch between video sources? There seem to be functions related to this, but no way to supply the inputs.
We thought about running the USB cameras on the roboRIO just for the autonomous part of the match (gear placement), and the IP camera for fuel shots.
Maybe RoboRealm is the way? It too seems to have multiple vision pipelines that can be selected using NetworkTables variables, but my experience with it sending video to the driver station has been less than satisfactory.
For switching between cameras, cscore is the way to go. As for multiple pipelines, one way to do it is to use a single pipeline: in your processing class/file, check the contours to differentiate between boiler tape and gear tape, then calculate values depending on which target it saw.
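The differentiation step above can be sketched in plain Java. The aspect-ratio thresholds here are assumptions based on the 2017 targets (the gear tape strips are taller than wide, the boiler tape bands are much wider than tall); in a real robot program the width/height would come from `Imgproc.boundingRect()` on each contour in GRIP's `filterContoursOutput()`, and you would tune the cutoffs against real images.

```java
// Sketch: classify a contour as boiler tape or gear tape by its
// bounding-box aspect ratio. Thresholds are illustrative assumptions.
public class ContourClassifier {
    public enum TargetType { GEAR, BOILER, UNKNOWN }

    public static TargetType classify(double width, double height) {
        if (height <= 0) {
            return TargetType.UNKNOWN;
        }
        double aspect = width / height;     // >1 means wider than tall
        if (aspect > 2.0) {
            return TargetType.BOILER;       // wide horizontal band
        } else if (aspect < 0.8) {
            return TargetType.GEAR;         // tall vertical strip
        }
        return TargetType.UNKNOWN;
    }

    public static void main(String[] args) {
        System.out.println(classify(15, 4)); // prints BOILER
        System.out.println(classify(2, 5));  // prints GEAR
    }
}
```

Once you know which target you are looking at, you run the corresponding distance/angle math on the same contour list, so only one camera frame and one pipeline pass are needed per loop.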
Is it possible to create multiple pipelines/vision threads within one program? My team wanted two cameras, each using a GRIP-generated pipeline running on the roboRIO. The way I tried to do this was by having two separate vision threads using different camera sources, both running in a subsystem file (we are using command-based programming), but it seems to be causing issues. Is it not possible to run multiple threads at once on the roboRIO?
The roboRIO is a dual-core machine, so running three heavy threads (two vision threads plus the FRC user thread) could definitely be causing slowdowns, lag, or unresponsiveness if the thread scheduler can't keep up.
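To be clear, the roboRIO's Linux OS will happily run more threads than cores; the problem is contention, not a hard limit. One mitigation is to lower the vision threads' priority so they yield to the main robot loop. A minimal plain-Java sketch (names and loop counts are illustrative, not WPILib API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: two background "vision" threads running alongside the main
// thread, marked daemon and low-priority so they don't starve the
// main robot loop on a dual-core CPU.
public class TwoVisionThreads {
    static Thread visionThread(String name, AtomicInteger counter) {
        Thread t = new Thread(() -> {
            for (int i = 0; i < 100; i++) {
                counter.incrementAndGet(); // stand-in for "process one frame"
            }
        }, name);
        t.setDaemon(true);
        t.setPriority(Thread.MIN_PRIORITY); // yield to the main loop
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger gearFrames = new AtomicInteger();
        AtomicInteger boilerFrames = new AtomicInteger();
        Thread gear = visionThread("gearVision", gearFrames);
        Thread boiler = visionThread("boilerVision", boilerFrames);
        gear.start();
        boiler.start();
        gear.join();
        boiler.join();
        System.out.println(gearFrames.get() + " " + boilerFrames.get()); // 100 100
    }
}
```

Both threads complete even on a single core; the scheduler just time-slices them. The real cost with vision is that each pipeline pass is CPU-heavy, so two full-rate pipelines plus the robot loop can oversubscribe two cores.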
It seems to only run one thread. We have a thread for the boiler and a thread for the gear. The thread that is started first tends to update more, but it is inconsistent and slower than running a single thread, and sometimes neither thread updates at all. The images still stream to SmartDashboard. However, we have also had issues with vision that stemmed from running in the command-based framework; I have yet to find an example of a vision thread with a GRIP pipeline that is not running in an iterative robot program.
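One workaround for the starvation described above is to collapse the two vision threads into a single loop that services each camera/pipeline pair in turn, so neither target starves the other. A sketch under stated assumptions: `Pipeline` here is a hypothetical stand-in for a GRIP-generated pipeline, and the integer "frames" stand in for the `Mat` you would grab from cscore's `CvSink` in a real robot program.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: one vision loop round-robining over several pipelines
// instead of one thread per pipeline.
public class SingleVisionLoop {
    interface Pipeline { void process(int frame); } // stand-in for pipeline.process(mat)

    public static void run(List<Pipeline> pipelines, int iterations) {
        for (int frame = 0; frame < iterations; frame++) {
            // Round-robin: every pipeline sees every frame index in turn.
            for (Pipeline p : pipelines) {
                p.process(frame);
            }
        }
    }

    public static void main(String[] args) {
        List<Integer> gearSeen = new ArrayList<>();
        List<Integer> boilerSeen = new ArrayList<>();
        Pipeline gear = gearSeen::add;
        Pipeline boiler = boilerSeen::add;
        run(List.of(gear, boiler), 3);
        System.out.println(gearSeen + " " + boilerSeen); // [0, 1, 2] [0, 1, 2]
    }
}
```

Each pipeline runs at half the frame rate, but both update steadily, which is usually preferable for targeting than two threads where one may stall entirely.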