Is there any reason to not allow anything above 4 mbps?


Title. There shouldn’t be such a small bandwidth limitation. Our team has solutions for the problem, but I just wanted to discuss why it’s necessary to have such a low bandwidth limit. Is it the price of the FMS? The price of radios? Other factors? A massive conspiracy? Definitely not a massive conspiracy.

My opinion on the issue is that it would actually be cheaper to have a higher bandwidth limit, because teams wouldn’t have to buy specialized hardware to get better framerates out of higher-stress compression algorithms (e.g. HEVC). Higher bandwidth would also ease strain on programmers and make driving the robot smoother and easier (more cameras, higher frame rates, or even higher-quality video).


There are a lot of stray signals floating around at competition. 4 Mbps is about as high as you’ll get consistently without running into weird interference issues. It would be nice to get more, but it’s practically very difficult.


I like your reasoning. The only thing that irks me about this is that we’ve never seen this in action - from my perspective, the bandwidth limitation has always been artificial, imposed by the radios and the FMS. Maybe you are right, but still - why couldn’t we, for example, use the higher-numbered 5 GHz channels that are allotted but rarely used?

ninja edit: See here for information on what I mean.


Aside from the feasibility aspect, I see the limit as part of the constraints: it forces teams to come up with more elegant or complex solutions to work around the issue. Right now we live in a world of bloated code. In the early days of networking and processors, you had to write code that was not only functional but also elegant, in the sense that it had to be extremely memory-, computation-, and network-efficient, simply because the technology wasn’t there. In more modern times, the exponential growth in the power of processors and networks has brought us to the point where code no longer has to be elegant and efficient; it just has to work, because when in doubt you can throw more processing and network power at the problem.

Having worked in an industry that uses insane amounts of computers and networking to make the factory process pipeline work, I can say this is not always an option. As processors become harder to make more powerful, we must learn to write more complex programs with the same amount of power available; I know Dean specifically is a huge advocate for this. So while it is possible to remove this limitation, keeping it means FIRST doesn’t have to retrofit all the FMS systems at considerable expense, and it also forces teams to be creative in how they manage data, instead of just throwing more video streams and sensor data at the driver station to make their lives easier with no consequence.


Hmm… didn’t know Dean was an advocate of this. Interesting.

The only problem I see here is that eventually you’re going to hit an upper limit - unless the bandwidth limit is increased, or some magic piece of hardware able to do HEVC encoding of 1080p video at 30 fps in real time at low cost is released, I think this bandwidth limitation will eventually hinder everyone. Nobody can algorithm their way through a limitation - the limitation will almost always win. It took five years for a bunch of mathematicians and talented programmers to build a better algorithm to get past the limitations of an existing one, not to mention countless hours of work by lawyers to ensure that it fits within the law (patents specifically). What I’m saying is: we, as a group, won’t be able to beat the existing solution without a lot of money and resources, short of taking the easier route of raising or eliminating the bandwidth limit altogether.


At champs, there are 6 fields, plus wireless practice fields. To avoid co-channel interference, it’s important the fields all be on separate non-overlapping channels.

There are only 13 non-overlapping channels at 40 MHz bandwidth, and all but 4 of those are DFS channels (Dynamic Frequency Selection), which are non-ideal because those channels can be forced to dynamically frequency-hop when nearby radar is detected. At 20 MHz bandwidth, there are a total of 25 non-overlapping channels, but again the vast majority of these are DFS; only 9 are non-DFS.

With clearly not enough channels for every robot to be on its own 20 MHz channel at champs, that means you have 3-6 robots per channel. In the best case 802.11n in 20 MHz can get 72 Mbps, so divided by 6 = 12 Mbps. In the worst case (poor transmit environment), it only gets 6.5 Mbps, so divided by 6 = ~1 Mbps. The reality lies in between those extremes.
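The sharing math above can be sketched in a few lines of Python. The 72 and 6.5 Mbps figures are the best/worst-case 802.11n single-stream rates quoted above; 6 robots per channel is the worst-case split:

```python
# Rough per-robot throughput when 6 robots share one 20 MHz channel.
# 72 Mbps ~= best-case 802.11n single-stream rate; 6.5 Mbps ~= worst case.

def per_robot_mbps(channel_mbps, robots):
    """Evenly split a shared channel's usable throughput."""
    return channel_mbps / robots

best = per_robot_mbps(72.0, 6)    # strong signal, clean environment
worst = per_robot_mbps(6.5, 6)    # weak signal, poor environment
print(f"best case:  {best:.1f} Mbps per robot")   # 12.0
print(f"worst case: {worst:.1f} Mbps per robot")  # 1.1
```

With reality landing somewhere between those two numbers, a 4 Mbps cap sits comfortably inside the plausible range.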

However, both these numbers assume the best case in terms of airtime. This doesn’t account for the significant bandwidth loss caused by thousands of cell phones searching for WiFi (a single “ping” for nearby APs takes a huge amount of airtime), or the fact that radios mounted in noisy robots or buried in metal will significantly drop their modulation rates, not only decreasing their max bandwidth but also consuming airtime that other robots could use. The latter is a particular concern this year: if you have one robot with a badly positioned radio trying to use 4 Mbps, it could easily eat up the equivalent airtime of 2-3 other robots sending 4 Mbps in best-case wireless conditions. Plus, robots use a fair amount of airtime even when not using a commensurate amount of bandwidth, because each command and status packet takes airtime even when there’s no other data to send.
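To put a rough number on the airtime point, here is a minimal sketch. The PHY rates are the same illustrative 802.11n figures as above; real ratios shrink once per-packet overhead and contention are included, which is why the badly positioned radio costs "2-3 robots" in practice rather than the idealized figure below:

```python
# Airtime model: sending B bits at PHY rate R occupies B/R seconds of air,
# so a robot stuck at a low modulation rate burns far more airtime for the
# same 4 Mbps of application data.

def airtime_fraction(app_mbps, phy_mbps):
    """Fraction of channel airtime consumed, ignoring protocol overhead."""
    return app_mbps / phy_mbps

healthy = airtime_fraction(4, 72.0)   # well-placed radio, best-case rate
buried = airtime_fraction(4, 6.5)     # radio buried in metal, worst-case rate
print(f"healthy radio: {healthy:.1%} of airtime")
print(f"buried radio:  {buried:.1%} of airtime")
```

Same data, roughly an order of magnitude more airtime consumed.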


Fairly sure this already exists.

Honestly, I wonder why people aren’t utilizing H.265 (or whatever Google’s equivalent is; probably something obvious), as it is literally the solution you’re looking for. A good comparison can be found here.


Generally that’s because most teams are using COTS USB cameras that only provide MJPEG, and the Rio doesn’t have enough horsepower to compress H.264 at higher resolutions.


Many teams already use other solutions than the Rio (Jetson, RasPi; someone even used a 1050 Ti in the last couple of years), which should mitigate that problem.


The other reason is that the provided software and dashboard solutions don’t support H.264, so teams that choose to go that route have to do a lot more from scratch. This is something I’m looking at addressing for future years.


I’m hoping you mean H.265, otherwise I’m fairly confused by this.

The following ports are opened for communication between your Robot and Driver Station. All other ports are blocked. All ports are bidirectional unless otherwise stated.

  • UDP/TCP 1180 - 1190: Camera Data
  • TCP 1735: SmartDashboard
  • UDP 1130: DS-to-Robot control data
  • UDP 1140: Robot-to-DS status data
  • HTTP 80: Camera/web interface
  • HTTPS 443: Camera/web interface (secure)
  • UDP/TCP 554: Real-Time Streaming Protocol (RTSP) for H.264 camera streaming
  • UDP/TCP 5800-5810: Team Use


That’s just a list of allowed ports. You can use whatever compression scheme you want to, whether it be on port 80, 443, 554, 5800-5810, or 1180-1190. I’m talking about the CameraServer libraries and dashboards such as Shuffleboard, SmartDashboard, and the LabView dashboard, all of which currently only support MJPEG.


I’m amazed and saddened, then. I’d expect something more modern for something FIRST is trying so hard to push.


Why are you transmitting 1080p? My custom OpenCV camera server transmits video in real time without exceeding 200 kb/s per stream, and does so by significantly sacrificing image quality. Who needs it? The cameras are there to give a general idea of the robot’s perspective, not a glistening, high-detail desktop wallpaper.

I think that 4 Mb/s is completely reasonable if you go about the problem correctly. That is, if your image looks like this (average output of my system; this frame is 2.5 kB when converted to base64 - Snipping Tool made it massive):
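Working backwards from the 4 Mb/s cap gives a per-frame byte budget. A quick sketch; the 2.5 kB frame size is the figure quoted above, everything else is assumption:

```python
# Per-frame budget under a bandwidth cap: cap (bits/s) / 8 / fps = bytes/frame.

def frame_budget_bytes(cap_mbps, fps):
    """Max average bytes per frame that fit under the cap."""
    return cap_mbps * 1e6 / 8 / fps

budget = frame_budget_bytes(4, 30)
print(f"30 fps budget: {budget / 1024:.1f} KiB/frame")

# A ~2.5 kB frame, heavily compressed as described above, leaves room for
# several parallel streams at full frame rate.
streams = budget / (2.5 * 1024)
print(f"~{streams:.0f} such streams fit at 30 fps")
```

So the cap is tight for pretty video, but generous for driveable video.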


Neither HEVC nor AV1 is ready for mainstream use in FRC applications. There’s very little investment in HEVC due to the intractable rights issues, and it’s hard to get past a 0.1 fps encode rate with AV1 today on any sort of desktop CPU.

@Maxcr1 describes the path teams should follow if they want to improve the perf of their video pipeline.


I agree with @Maxcr1: you don’t need much resolution to drive a robot, just low latency. Under previous years’ 7 Mbps limit, we saw fields fall over when too many robots were streaming. I’m sure FIRST lowered the limit to be safe, since this game really encourages streaming! Sure, it will be nice when the bar is raised some day, but it’s pretty easy right now to just stream at low resolution with high compression.


We must also remember that the FMS runs its own data on the same network. Typically only 1 or 2 robots on the field used the old ~7 Mbps limit. If all 6 maxed out, you would use greater than 50% of the WiFi bandwidth; add that to the wired network along with the FMS, the Allen-Bradley controllers, and the I/O, and you have a really full network at 100 Mbps.

I suspect they are trying to do two things:

  1. Avoid rematches due to lag.
  2. Think like NASA: communications to outer space must be optimized and efficient.
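A rough sketch of that budget; all figures are ballpark assumptions for illustration (6 robots at the old ~7 Mbps cap, a ~72 Mbps best-case 802.11n channel, a 100 Mbps wired field network):

```python
# Field network budget at the old per-robot limit.

robots = 6
mbps_per_robot = 7
robot_total = robots * mbps_per_robot   # 42 Mbps of robot traffic

wifi_mbps = 72.2     # best-case single-stream 802.11n throughput
wired_mbps = 100     # wired field network link speed

print(f"robot traffic: {robot_total} Mbps")
print(f"share of WiFi capacity:  {robot_total / wifi_mbps:.0%}")
print(f"share of 100 Mbps wire:  {robot_total / wired_mbps:.0%}")
```

That leaves well under half the wired link for FMS, PLC, and I/O traffic before any safety margin.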


The 1080p was just an example; it gives a hypothetical sense of the required horsepower. HEVC is slow at much higher framerates, but at 1080p or lower, encoders are typically just as fast and produce just as good results.


Maybe I’m missing something, but I don’t see any reason FRC isn’t switching over to 5GHz. It would solve the interference issues and be pretty simple to implement.


The FRC field has been at 5GHz since 2009.