Displaying video from USB camera on Pi

We are doing video processing on a Pi in preparation for autonomous mode, and we have OpenCV image/video processing working on the Pi with a USB camera. We now need to know how to display the processed video to the team so they can verify that the image processing is working. I’m helping with image processing but am new to FRC and don’t know anything about the driver station or SmartDashboard. How are people doing this? Thanks.

Set up an MJPEG server on the Pi. Then you can connect to it using the Raspberry Pi’s URL and the appropriate port (FRC reserves roughly 1180–1190 for camera data; 1181 is the usual default).
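If you’re doing the processing in Python with OpenCV, here’s a minimal sketch of that setup using robotpy-cscore (the camera index, port, and resolution are placeholders, adjust for your own setup):

```python
# Minimal MJPEG streaming sketch for a Raspberry Pi. Assumes robotpy-cscore,
# opencv-python, and numpy are installed; camera index, port, and resolution
# are placeholders.
import cscore
import cv2
import numpy as np

WIDTH, HEIGHT, FPS = 320, 240, 15

# Capture from the USB camera.
camera = cscore.UsbCamera("usbcam", 0)
camera.setVideoMode(cscore.VideoMode.PixelFormat.kMJPEG, WIDTH, HEIGHT, FPS)

# Sink to pull frames into OpenCV, and a source + server to publish processed frames.
sink = cscore.CvSink("grabber")
sink.setSource(camera)
output = cscore.CvSource("processed", cscore.VideoMode.PixelFormat.kMJPEG, WIDTH, HEIGHT, FPS)
server = cscore.MjpegServer("httpserver", 1181)
server.setSource(output)

frame = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
while True:
    t, frame = sink.grabFrame(frame)
    if t == 0:
        continue  # frame grab timed out; skip this iteration
    # ... your OpenCV processing on `frame` goes here ...
    output.putFrame(frame)
```

Anyone on the robot network can then open http://<pi-ip>:1181/ in a browser, or point the dashboard’s camera widget at that stream.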

There are a few different ways to approach this. I settled on the first with my team, though the second is also viable.

(1) Processing runs on one thread, and communications on another. All incoming frames are read, then simultaneously sent to the Driver Station over TCP and processed. This requires a static IP on the Driver Station. In competition this should be 10.TE.AM.5 with a class A subnet mask (255.0.0.0), but at home it’s different: connect to the radio with DHCP enabled, see what IP the DS is on, and use that. You’ll need a client script on the DS to display this data (a rough sketch of this layout is at the end of this post).

(2) NetworkTables, which is what the DS dashboard software already uses. The frames themselves don’t go through NetworkTables; I believe the stream’s URL gets published under a CameraPublisher entry so the dashboard knows where to find the MJPEG stream, though I’d double check the documentation for that (a sketch is just below).
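Here’s a minimal sketch of advertising an existing MJPEG stream over NetworkTables with pynetworktables, assuming that CameraPublisher layout is right (the team-number IPs, port, and table name are placeholders):

```python
# Sketch: advertise an MJPEG stream to the dashboard over NetworkTables.
# Assumes pynetworktables is installed; the IPs, port, and table name are
# placeholders for your own team number and setup.
from networktables import NetworkTables

NetworkTables.initialize(server="10.TE.AM.2")  # the roboRIO is the usual NT server

table = NetworkTables.getTable("CameraPublisher/ProcessedVideo")
# Dashboards look for a "streams" array of mjpg: URLs.
table.putStringArray("streams", ["mjpg:http://10.TE.AM.11:1181/?action=stream"])
```

In theory the dashboard’s camera chooser will then pick the stream up under that name, but I’d verify against the current docs.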

Code for the first approach can be found in our 2017 robot code under vision processing (https://github.com/SachemAftershock/2017-robot-code). Sorry I couldn’t elaborate more; I’m on mobile right now. If you have any questions feel free to reply here or PM me and I’ll get back to you whenever I can.
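To give a rough idea of the shape of that first approach, here is a minimal two-thread sketch in Python (not our actual robot code; the DS address, port, JPEG quality, and processing step are all placeholders):

```python
# Sketch of approach (1): one thread grabs and processes frames, another
# streams JPEG-compressed frames to the Driver Station over TCP.
# The DS address, port, and processing step are placeholders.
import queue
import socket
import struct
import threading

import cv2

DS_IP, DS_PORT = "10.TE.AM.5", 5800  # placeholder DS address / team-use port
frames = queue.Queue(maxsize=2)      # small buffer so we always send fresh frames

def capture_and_process():
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        # ... vision processing on `frame` goes here ...
        try:
            frames.put_nowait(frame)
        except queue.Full:
            pass  # drop the frame rather than fall behind

def send_to_ds():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((DS_IP, DS_PORT))
    while True:
        frame = frames.get()
        ok, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
        if not ok:
            continue
        data = jpg.tobytes()
        # Length-prefix each frame so the DS-side client can split the stream.
        sock.sendall(struct.pack(">I", len(data)) + data)

threading.Thread(target=capture_and_process, daemon=True).start()
send_to_ds()
```

On the DS side, a small client script would connect, read each length-prefixed JPEG, decode it with cv2.imdecode, and display it with cv2.imshow.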

Thank you for your response! I have a few questions:
-What Pi were you using, and did you ever overload it?
-How many images were you sending? If we wanted to send two video streams, would that be too much?
-What resolution were the camera(s) / what resolution did you send at?
-If the roboRIO restarted during the match, did that throw a wrench in your system? And if the sending thread went down, would the processing still work?
-Do you recommend this option for a team with limited multithreading experience? (I believe I understand your code.)

We used a Raspberry Pi 3 (Model B?) on Raspbian Jessie. It never got overloaded; the simple networking and color segmentation stuff generally isn’t too computationally expensive at our locked 15 FPS.
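For anyone following along, the “color segmentation” here is just the usual OpenCV HSV threshold plus contour finding; a minimal sketch (the HSV bounds are placeholders for whatever your target’s color is):

```python
# Sketch of the color segmentation mentioned above: threshold in HSV and find
# contours. The HSV bounds are placeholders for your target's color.
import cv2
import numpy as np

def segment(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = np.array([50, 100, 100])   # placeholder lower HSV bound
    upper = np.array([90, 255, 255])   # placeholder upper HSV bound
    mask = cv2.inRange(hsv, lower, upper)
    # [-2] keeps this working across the OpenCV 3 and 4 return conventions.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    return mask, contours
```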

We were sending one video stream at a time with 360 x 240 images at 15 FPS, with a button on our driver controller to toggle which camera was being streamed. I don’t think higher framerates would put much stress on the Pi itself, but this year there was a 7 Mbit/s cap on FMS communications. We used the lower framerate and resolution to avoid compression. You can easily do two camera feeds at 360 x 240 15 FPS with 30% compression, one camera feed at 640 x 480 10 FPS with 30% compression, or a number of other combinations; see the FMS whitepaper for the bandwidth tables.
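As a rough sanity check on numbers like those, you can estimate an MJPEG stream’s bandwidth by encoding one frame at your chosen JPEG quality and multiplying by the frame rate; a small sketch (the capture settings and quality are placeholders):

```python
# Rough bandwidth estimate for an MJPEG stream: encode one frame at your chosen
# JPEG quality and multiply by the frame rate. Capture settings are placeholders.
import cv2

FPS = 15
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 360)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

ok, frame = cap.read()
if ok:
    ok, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
    mbits_per_sec = len(jpg) * 8 * FPS / 1e6
    print(f"~{mbits_per_sec:.2f} Mbit/s at {FPS} FPS")  # compare against the 7 Mbit/s cap
```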

The roboRIO was never a problem for the video stream communication, since the comms went straight DS-to-Pi. The only big problem we had with this system was if FMS got restarted: the TCP socket would end up in a broken state, and the robot would have to be restarted. You could probably work around this with a UDP-based protocol (rough sketch below), but it wasn’t something we ever got around to.
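Here’s a minimal sketch of what a UDP-based sender could look like (the address, port, and JPEG quality are placeholders); since UDP is connectionless, an FMS restart doesn’t leave a stuck socket behind, you just lose some frames:

```python
# Sketch of a UDP-based frame sender. UDP is connectionless, so there is no
# socket state to get stuck if the link drops; frames too large for one
# datagram are simply skipped here. Address, port, and quality are placeholders.
import socket
import cv2

DS_ADDR = ("10.TE.AM.5", 5801)  # placeholder DS address / team-use port
MAX_DATAGRAM = 60000            # stay under the ~64 KB UDP datagram limit

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_frame(frame):
    ok, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 50])
    if ok and len(jpg) <= MAX_DATAGRAM:
        sock.sendto(jpg.tobytes(), DS_ADDR)  # a lost datagram just means a dropped frame
```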

As for the second part of your question, there are two answers. If the entire sender thread goes down, unfortunately the coordinates will not get updated; this never happened to us, though. If only the TCP socket fails, the coordinate updating will still work.

Sure! You could make it an excuse to gain experience with multithreading :). If you have any questions while attempting it, message me and I’ll try to help. I’m in the middle of writing a whitepaper on this system, which I’ll send you as well when it’s done.

Thank you for such a detailed response! This will really help us decide which route to go. If we decide on multithreading, I’m sure we’ll be in touch!