Our team has successfully implemented a method of streaming video from a coprocessor to the driver station. We first convert our OpenCV frame into a JPEG image via the PIL library, resizing it to reduce bandwidth consumption. We then use BytesIO to write the image into a byte array and use the putRaw method of NetworkTables to send it over the network. On the driver station laptop, a separate application we wrote receives, decodes, and displays incoming frames. The code on the Jetson caps the send rate at 20 FPS. Each byte array measures about 5 to 10 kilobytes per image, so at 20 FPS our maximum bandwidth consumption is about 1.6 megabits per second, far below the 6.9-megabit-per-second limit. Are there any other concerns or issues with our methodology?
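For anyone checking the numbers, here is the worst-case arithmetic behind the figure above as a quick sketch (the 10 kB frame size and 20 FPS cap are taken from the post; the variable names are just for illustration):

```python
# Worst-case bandwidth estimate for raw-byte video over NetworkTables.
frame_size_bytes = 10 * 1000   # upper bound: ~10 kB per resized JPEG frame
fps = 20                       # frame rate cap enforced on the Jetson

bits_per_second = frame_size_bytes * fps * 8
megabits_per_second = bits_per_second / 1_000_000

print(megabits_per_second)     # -> 1.6, comfortably under the 6.9 Mbit/s limit
```

Note this assumes a single NetworkTables client receives the stream; as discussed below, each additional client multiplies this figure.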
While it’s impressive that it works, NetworkTables isn’t really designed for this. In particular, it sends a full copy of the image to every NetworkTables client. You will also see significantly higher latency than with other methods, because the image must travel from the coprocessor to the roboRIO and then from the roboRIO to every other client, incurring a delay at each hop since the NT dispatcher runs only every 100 ms by default (so you’ll see at least 0.2 seconds of latency you wouldn’t see otherwise). Also, if you have more than one dashboard application running on your driver station (even one that isn’t actually viewing video), each will be sent its own copy of the image, linearly increasing the bandwidth consumption.
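The two costs described above can be made concrete with a little arithmetic (the 100 ms dispatch period and two-hop path are from the reply; the 1.6 Mbit/s per-client figure is from the original post):

```python
# Latency floor: each NT hop waits for the dispatcher, which runs every 100 ms
# by default. The image crosses two hops: coprocessor -> roboRIO -> dashboard.
dispatch_period_s = 0.1
hops = 2
min_added_latency_s = hops * dispatch_period_s
print(min_added_latency_s)           # -> 0.2 seconds before any network time

# Bandwidth fan-out: every NT client gets its own full copy of each frame.
per_client_mbps = 1.6                # worst-case figure from the post above
for clients in (1, 2, 3):
    print(clients, "clients:", clients * per_client_mbps, "Mbit/s")
```

With three dashboard applications open, the stream alone would consume roughly 4.8 Mbit/s, most of the 6.9 Mbit/s budget.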
Thank you for the informative reply! We resorted to these shenanigans after running into issues getting MJPEG working, but I guess I can take more stabs at it. I do agree what we have going is a bit shady.