Strategy for displaying video?

We expect to have two cameras. One will be for general navigation for the driver. The other will be attached to a Pi, and its video will go through an image-processing detection step for the autonomous mode; we want the driver to be able to see the output of the image processing to make sure the detection is actually working.

What’s the best approach to displaying both these sources at once? Can we use separate monitors, or is there accommodation in the display system for one or both streams? It was not clear to me from this year’s manual how this can be done. Thanks.

The SmartDashboard and the new Shuffleboard both support displaying multiple camera feeds on the Driver Station laptop screen. Displaying the feed from the USB camera that does not go through the Pi, assuming it is plugged directly into the RoboRIO, is extremely easy. Link to the ScreenSteps guide on linking RoboRIO USB cameras to SmartDashboard.
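On the RoboRIO side it’s essentially one call. Here’s a minimal sketch using the 2018 WPILib C++ API (your robot base class and project layout may differ):

```cpp
#include <CameraServer.h>
#include <IterativeRobot.h>

class Robot : public frc::IterativeRobot {
 public:
  void RobotInit() override {
    // Captures the RoboRIO-attached USB camera and serves an MJPEG
    // stream that the dashboard camera widgets can subscribe to.
    frc::CameraServer::GetInstance()->StartAutomaticCapture();
  }
};

START_ROBOT_CLASS(Robot)
```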

Taking the frames through the Pi vision processing program and then sending them to the Driver Station is a little more complicated. No matter how you handle it, it will slow down your vision processing, and speed is key when it comes to vision. The way I see it, your best bet is to build the standalone CSCore library (the same one built into WPILib) on the Pi, write your vision code in C++ (the language CSCore is written in), and at the end of every vision loop push the new frame to the CameraServer. Your other option, if you want to write your vision code in Python, is to write your own UDP/TCP camera server that does the same thing, which is complicated and time-consuming.
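To give a sense of the CSCore route, here’s a rough sketch of that loop (the camera index, resolution, and port 8081 are placeholders; this assumes CSCore and OpenCV are already built on the Pi):

```cpp
#include <opencv2/core/core.hpp>
#include "cscore.h"

int main() {
  // Grab frames from the Pi's USB camera (device index 0 here)
  cs::UsbCamera camera{"picam", 0};
  camera.SetVideoMode(cs::VideoMode::kMJPEG, 320, 240, 30);

  cs::CvSink sink{"vision"};
  sink.SetSource(camera);

  // Publish processed frames as an MJPEG stream the dashboard can open
  cs::CvSource output{"processed", cs::VideoMode::kMJPEG, 320, 240, 30};
  cs::MjpegServer server{"server", 8081};
  server.SetSource(output);

  cv::Mat frame;
  for (;;) {
    if (sink.GrabFrame(frame) == 0) continue;  // 0 means timeout/error
    // ... run your detection here and draw overlays on `frame` ...
    output.PutFrame(frame);  // push the annotated frame to the stream
  }
}
```

The dashboard then subscribes to the Pi’s stream URL the same way it would subscribe to a RoboRIO camera.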

Honestly, though, you shouldn’t do this at all. Perhaps you could send a chart of the vision data to the SmartDashboard, and if your driver wants to see whether vision is working, they can look at that chart and verify that it makes sense. We used that method last year and it worked extremely well. Your best bet overall is to use separate cameras for the vision processing and driver feeds, especially now that we are in week 2 of build season and creating an efficient, reliable vision program usually takes much longer than expected.
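For example, the Pi could publish just a few values over NetworkTables instead of a whole stream. Here’s a sketch using the ntcore C++ client (the team number and the "TargetVisible"/"TargetOffsetDeg" key names are made up for illustration):

```cpp
#include <chrono>
#include <thread>

#include "networktables/NetworkTable.h"
#include "networktables/NetworkTableInstance.h"

int main() {
  auto inst = nt::NetworkTableInstance::GetDefault();
  inst.StartClientTeam(1234);  // connects to the robot by team number
  auto table = inst.GetTable("SmartDashboard");

  for (;;) {
    // ... vision loop runs here and produces these values ...
    bool targetVisible = false;  // replace with your detection result
    double offsetDeg = 0.0;      // replace with your computed offset

    // Keys show up on SmartDashboard/Shuffleboard automatically
    table->PutBoolean("TargetVisible", targetVisible);
    table->PutNumber("TargetOffsetDeg", offsetDeg);

    std::this_thread::sleep_for(std::chrono::milliseconds(20));
  }
}
```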

I agree with Max. The delay caused by sending the processed frames to the dashboard is not worth it. I don’t know of any way to send them without hurting the speed of the vision, either. Displaying a table would be a good idea, though.