Hello all, my coding team is trying to find out how much bandwidth is used by the roboRIO if a USB camera is directly attached to it for broadcast. We are weighing our options for vision. We want to have a camera on the chassis for the driver and a camera on the Ball/Hatch mechanism. We are also looking at two Raspberry Pis with Pi cameras attached. We want to make sure we don't go over the 4 Mbps cap that the rules state. Any help is greatly appreciated!
You can use 1 or 2 cameras that send a feed back to the driver station. Cameras used only for vision processing shouldn't take up any bandwidth as long as you aren't streaming them to the driver station.
You can always compress your camera feeds to comply with the limit.
@retrodaredevil do you know how much bandwidth is taken up by connecting a USB camera right to the roboRIO?
The answer is “it depends”. Bandwidth utilization is a function of resolution, FPS, and compression quality (which is why all of these are settable in the dashboard). You will need to test it yourself and make empirical measurements. Most, if not all, of the dashboards tell you how much bandwidth is being used by each camera stream.
If you want an idea, we got roughly 1–1.5 Mbps on the Rio streaming 144p at 30 fps with the compression slider on Shuffleboard around halfway. I'm not exactly sure what those slider values mean, I think they're between 0 and 100, but we're still messing with it. If you turn the compression up, you can get it to use even less bandwidth without much loss in quality.
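If you want a back-of-the-envelope number before you can measure on real hardware, you can estimate MJPEG bandwidth from resolution, frame rate, and an assumed JPEG compression ratio. The 20:1 ratio below is my assumption, not a spec; as noted elsewhere in this thread, real ratios swing with the quality setting and scene complexity, so treat this as a ballpark only.

```python
# Rough MJPEG bandwidth estimator. The 20:1 default compression ratio is
# an assumption; real JPEG ratios vary with quality and scene complexity.

def estimate_mjpeg_mbps(width, height, fps, compression_ratio=20):
    """Estimate MJPEG stream bandwidth in megabits per second."""
    raw_bytes_per_frame = width * height * 3             # 24-bit color
    jpeg_bytes_per_frame = raw_bytes_per_frame / compression_ratio
    return jpeg_bytes_per_frame * 8 * fps / 1_000_000    # bits -> Mbps

# 256x144 ("144p") at 30 fps lands around 1.3 Mbps with these assumptions,
# the same ballpark as the Shuffleboard numbers quoted above.
print(round(estimate_mjpeg_mbps(256, 144, 30), 2))  # -> 1.33
```

Plugging in your own resolution and frame rate gives a quick sanity check against the 4 Mbps cap before you ever touch the dashboard sliders.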
The bandwidth is also going to depend on the complexity of the images, e.g., a stream in your workshop will probably be easier to compress than a field image looking at complex reflections off a diamond plate.
So, you need to be able to adjust to the actual field at competition.
Here are some examples:
- One USB camera plus one IP camera, 320x240, ~50% compression, 15 fps: 2.4 Mbps total
- Single USB camera, 320x240, 43% compression, 15 fps: 0.6 Mbps
Just to be clear, the bandwidth has nothing to do with where you connect the cameras (Rio, coprocessor, etc) and everything to do with the image settings, as people have said.
You can also save bandwidth by making the stream switchable between the 2 cameras. So, instead of streaming both cameras all the time (eating twice as much bandwidth), use a button to switch which camera is sent. It does require a bit of programming on the processor (Rio or coprocessor), but it is not complicated.
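In WPILib the usual pattern is to register both cameras with CameraServer and point a single sink at whichever one you want via `server.setSource(camera)`; the selection logic itself is just a toggle with edge detection on the button. Here is a framework-free sketch of that toggle (class and method names are mine, not WPILib's):

```python
# Framework-free sketch of camera-switch logic. In a real robot program
# the "switch" step would be something like WPILib's server.setSource();
# here the selection is just tracked as an index so the toggle logic can
# be shown on its own.

class CameraSwitcher:
    def __init__(self, num_cameras=2):
        self.num_cameras = num_cameras
        self.active = 0            # index of the camera being streamed
        self._was_pressed = False  # for rising-edge detection on the button

    def update(self, button_pressed):
        """Call once per loop; switches cameras on each new button press."""
        if button_pressed and not self._was_pressed:
            self.active = (self.active + 1) % self.num_cameras
        self._was_pressed = button_pressed
        return self.active

switcher = CameraSwitcher()
print(switcher.update(True))   # press   -> switches to camera 1
print(switcher.update(True))   # held    -> stays on camera 1
print(switcher.update(False))  # release -> stays on camera 1
print(switcher.update(True))   # press   -> back to camera 0
```

The rising-edge check matters: without it, holding the button would flip cameras every loop iteration instead of once per press.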
Personally, I will be using a Tegra to broadcast a heavily compressed image to a socket server. I have a few reasons for this. Primarily, a Tegra can very quickly compress images to a laughable size through OpenCV and send them over the network to the DS (180kbps at 30fps), which simultaneously decreases latency and frees up RIO resources. Secondly, it allows me to do both vision processing calculations and driver vision on each camera, which is incredibly useful.
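For the socket-server approach, one detail worth nailing down is framing: TCP gives you a byte stream, not discrete messages, so each compressed frame needs a length prefix before it goes on the wire. The capture and JPEG-encode step (e.g. OpenCV's `cv2.imencode`) is omitted here; this is only a sketch of the framing, with helper names of my own invention:

```python
import struct
from io import BytesIO

# Length-prefixed framing for sending compressed frames over TCP.
# The JPEG bytes would come from something like cv2.imencode(".jpg", frame)
# on the Tegra; here we fake them with a placeholder payload.

def pack_frame(jpeg_bytes):
    """Prefix a frame with its length as a 4-byte big-endian int."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def read_frame(stream):
    """Read one length-prefixed frame back out of a byte stream."""
    (length,) = struct.unpack(">I", stream.read(4))
    return stream.read(length)

fake_jpeg = b"\xff\xd8 not a real jpeg \xff\xd9"
wire = BytesIO(pack_frame(fake_jpeg) + pack_frame(fake_jpeg))
assert read_frame(wire) == fake_jpeg  # frames come back intact, in order
assert read_frame(wire) == fake_jpeg
```

On the driver station side you would wrap the socket in a file-like object and call `read_frame` in a loop, decoding each returned payload back into an image.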