CHS Timonium Event Packet Loss

Throughout our whole event at Timonium, we’ve seen incredibly high packet loss when connected to FMS. After talking with many other teams and the FTAs, it seems almost everyone is encountering the issue. This is very frustrating as it is costing many teams ranking points. If you’re suffering the same issue, can you reply with a picture of your log?


Was this graph on the field in a match (I don’t see the auto/teleop colored bars at the top), or in your pit? If in the pit, were you connected via Ethernet or USB? (I presume not wifi, because event rules prohibit it.)

If every team is affected and it’s only when connected to the field over wifi, then it’s likely from a hostile wifi environment in the venue and the FTA should be working with their support contact and event management to coordinate a less congested channel.

If not every team is affected, or it happens when tethered (Ethernet or USB) in the pits, then you should be working with the CSA to troubleshoot. The most likely culprits are old software (RIO image, DS version, etc.), high CPU/RAM utilization on the RIO or DS, other software running in the background on the DS (e.g., Windows updates), or radio placement.
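If you want to quantify loss while tethered in the pit, one rough check is to ping the roboRIO (10.TE.AM.2 is its usual static address) and tally the results. A minimal sketch of the tallying step; the `summarize_loss` helper is hypothetical, not part of WPILib or any FRC tool, and the sample data is made up:

```python
# Hypothetical helper: summarize loss from a list of round-trip times
# in milliseconds, where None marks a dropped packet.
def summarize_loss(rtts):
    total = len(rtts)
    dropped = sum(1 for r in rtts if r is None)
    received = [r for r in rtts if r is not None]
    return {
        "loss_pct": 100.0 * dropped / total if total else 0.0,
        "avg_rtt_ms": sum(received) / len(received) if received else None,
        "max_rtt_ms": max(received) if received else None,
    }

# Made-up sample: 10 pings, 3 dropped -> 30% loss.
sample = [1.2, 1.1, None, 1.4, None, 1.0, 1.3, None, 1.2, 1.1]
print(summarize_loss(sample))
```

If the loss vanishes when tethered but appears on the field, that points away from the robot-side hardware.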


This was in a qualification match. It’s affected every robot since Quals 1.


Yeah… This was definitely a widespread and systemic issue. We are talking at least a dozen teams having this problem. While we (836) were part of the alliance that ultimately won the event, it certainly left a bad taste in our mouths. The entire event would have gone very differently if the fates of each team had been decided by robot/drive team skill rather than the whims of packet loss. Here’s a sample of the log files that were shown to me. I don’t have an exact count, but it seems like we had this issue in around half our qualification matches.


I’m a mentor of 614, and I stood in the question box with at least 6 teams over the course of the weekend as a silent observer for their packet loss problems. I can confirm that you weren’t the only ones to experience it. I can also confirm that some teams didn’t have any packet loss, as we were lucky enough to escape its randomness. Sorry for the wall of text, but I’m just going to lay it all out below.

tl;dr: Field had massive issues, event ran on time, never once was a replay granted for packet loss. The only pause was lunch before eliminations. Teams suffered, no one was happy.

Here’s the deal with this though: there wasn’t much on any scanner that anyone could see. The venue shut down any wifi near the gym. Along with that, there was an offseason competition held in the exact same space 5 months ago. No problems happened then (I was part of the planning committee, field crew, and had a team there). It doesn’t exactly line up that these issues would be specifically venue or team related.

The biggest issue that I have with this is that the event ran on time. As veterans of this program, we always joke that FIRST runs behind schedule. Heck, if anything breaks on the field mechanically, we stop everything until it’s fixed. We saw that numerous times week 1 with charge stations. In previous years, we’ve seen hours-long breaks to ensure that lights work. I have no doubt that our FTAs were doing absolutely everything they could to make things work. They are both among the most experienced and professional volunteers that I’ve had the pleasure to work with in my 10 years of mentoring in CHS. We know they reached out to FIRST HQ to help with the situation, and still nothing was working. But why not just stop and assess the situation for an hour?

Also, during none of these conversations was a field fault ever even entertained by the Head Ref. In fact, no qualification matches got replayed at all. The question box was a revolving door of teams experiencing the exact same issue. Every question box conversation ended with the same “I don’t knows”, and “it’s random and affecting other teams too”. We played 72 matches, and the first issue that arose was in the first ~10 matches.

The unfortunate nature of a field issue causing your robot to not work the way it can is absolutely devastating. Teams had no confidence that their robots would work at any point during the event. Strategy meetings started with shrugs and sad faces, then moved on to hypothetical scenarios about what might happen if “things actually worked”.

The perception from the team side of the event was not a positive one. When an event runs on time, it should be because everything is going smoothly: the field isn’t having issues, and teams are given every chance to get working on the field (within the rules, of course).

That was the exact opposite of what was happening. The Head Ref, FTAs, Event manager, FIRST HQ, and CHS Director of Programs could have stopped matches to take time to diagnose the problem. They could have run test matches to ensure the problems didn’t persist. At the very least periodic announcements should have been made so that all teams knew what was going on. They could have put the schedule aside, and made sure that teams felt heard, and supported. I know plenty of teams who would have stayed late on Saturday or come in early on Sunday if it meant there was time for field troubleshooting.

Instead, teams were changing out their entire control systems to see if that was a root cause. Mentors took it upon themselves to collect data on what hardware, sensors, and software other teams were using to find correlations. Teams were assisting each other in reimaging their RIOs and radios to see if that could solve things. Teams were taking time away from enjoying the competition to chase a problem they were seeing not just with their own robot, but with everyone there. While teams helping each other is a pillar of FIRST, that help should be for a team’s own internal mechanical, electrical, or code issues, not the field. This was the teams banding together against the field issues, trying to “fix” perfectly good robots to magically work around these weird external issues.

Heck, we even played eliminations with everyone closing out of their dashboards. Every single team hard-coded their autonomous modes on the fly, less than 5 minutes before the start of playoff matches, so they could close SmartDashboard and Shuffleboard, because that MAY have been the issue. Teams were playing with their robots below 100% functionality just to help rule out field issues.

Sorry for the wall of text, if you got to this point, please know that I was so proud of all of the teams at MDTIM. They persevered through some of the worst field issues I’ve seen in my years of this program, and still managed to compete to the best of their abilities. The volunteer crew in CHS is top notch, and I wouldn’t trade them for anyone else. But, we can do better. Our kids deserve better.


As I said over here… this is how it has to be. It’s an unfortunate reality of the current situation with the field wireless connection. I think we have to look out for each other.


I wonder how much investment in event location scouting involves analysis of the wifi environment. Are there any standards set by Manchester and communicated to program delivery partners? Is there anything teams who act as hosts in real or nominal capacities at district events can do in this process to help determine ambient conditions before an FTA ever arrives?

Wifi was obviously not purpose-built for simultaneous control of 6 amateur robots in such an environment. That being said, FRC robots have been running off wifi for over a decade, and there are dozens of volunteers who could contribute to a better knowledge base because of their professional experience with it. How much of that is leveraged by Manchester?


Thanks for sticking up for everyone. Having you watching the discussion made me feel much better about it when our student was there.


There’s absolutely nothing about it in the District Planning Guide or Appendix 1 - Venue Site Selection, only the requirement for 2 ethernet drops for the FMS and webcast.

FIRST Chesapeake distributes a document called “Robotics Competitions Venue Network Requirements” which sets the requirements for its event venues. Folks should ask FIRST Chesapeake for a copy. I’m unsure how much of the document’s content came from Manchester versus was created in-house.

Here’s what our offseason event communicates to our stakeholders each year:


This is the only section that deals with wifi in the “Venue Network Requirements” document sent out to venue contacts:

Wireless Requirements [Channel Availability]

FRC equipment will create and utilize: (will not be provided by venue)

  • 1 Channel, 5 GHz, 802.11n
    • 20 MHz channel width, Band 1 or 4
    • Channels 36, 40, 44, 48, 149, 153, 157, or 161
  • 1 Channel, 2.4 GHz, 802.11b
    • 20 MHz channel width
    • Channels 1, 6, or 11
  • The venue should ensure that at least one of the channels listed above is clear for event use and report this channel to the contact details at the bottom of this document.
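For what it’s worth, that last bullet’s channel check can be done mechanically against any wifi scan. A sketch of the idea; the permitted-channel sets come from the quoted requirements above, but the `clear_channels` helper and the scan data are made up for illustration (real input would come from a wifi analyzer):

```python
# FMS-permitted channels, per the venue requirements quoted above.
PERMITTED_5GHZ = {36, 40, 44, 48, 149, 153, 157, 161}
PERMITTED_24GHZ = {1, 6, 11}

def clear_channels(scan, permitted):
    """Return the permitted channels with no networks detected.

    `scan` maps channel number -> list of SSIDs seen on that channel;
    channels absent from the scan are treated as clear.
    """
    return sorted(ch for ch in permitted if not scan.get(ch))

# Made-up scan: one venue AP on channel 36, one office AP on channel 1.
scan = {36: ["VenueGuest"], 40: [], 44: [], 1: ["Office-AP"], 6: [], 11: []}
print(clear_channels(scan, PERMITTED_5GHZ))
print(clear_channels(scan, PERMITTED_24GHZ))
```

Of course, as noted later in the thread, a scan showing clear channels doesn’t rule out interference a consumer scanner can’t see.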

The last bullet seems to be the issue. From the two events I have experienced, the number of dead robots this year is surprisingly small. I think the PoE connections have a lot to do with it. I can’t comment on the field issues this year because I haven’t experienced them.

Regarding @PayneTrain’s suggestion about making some sort of venue requirement for wifi: the northern part of CHS already has a hard enough time finding teams/schools/venues to host local events. If one of the few venues even willing to host gets taken off the list, where would they play?

We did see the same. In the beginning we were told it had to be our setup, and it got to the point where we played with the RIO plugged into the radio directly, bypassing PoE, with no vision or network switch, and a network cable I borrowed (and forgot to return -ed) from the spare parts station.

Some matches were better than others. For some it was terrible right up until auto started and then it was normal-ish. Sometimes it was throughout. We saw disconnects in DS, sluggish response on the robot, and auto routines not running.


The last bullet was in fact not the issue. The venue had internet available for FMS, and they could communicate out perfectly fine. The connection from FMS to the robots was terrible.

The issues we saw had nothing to do with team error. This culture of assuming that every issue comes down to a team problem isn’t helpful. This was a systemic issue throughout the competition, affecting 30% or more of the teams at seemingly random intervals and with varying severity.

I’m not gonna derail this thread, but I know for a fact that there were two venues this season that CHS passed up instead of having a 7th event. If you’d like to talk more about that, I’d be happy to. I’m sure you have my contact information. But please don’t make assumptions about host sites up here.


I think Matt was saying that the missed step COULD have been ensuring that the wifi frequencies needed for FMS communication to robots are clear of interference from outside sources [1]. NOT that the internet connection wasn’t provided for FMS/streaming or that the teams/robots were doing anything wrong.

[1] EDIT But you provided evidence that points against this:


@PayneTrain’s cul-de-sac. Pits can be in the backyard.


Exactly. The field communicates to the interwebs through a LAN connection. The wifi requirement is directly related to communication between the FMS and the robots.

Sure. It would be great to hear your perspective.

The part I don’t get is why shutting down Shuffleboard / VS Code on the competing robots actually seemed to work. After the field staff figured this out just before elims, I didn’t notice any field issues. 836 still seemed to stutter a bit, but thankfully were able to compete at a high level. I’m hoping someone is still chasing this down.


They asked us to do it in the first match, and both alliances had tons of issues.

The problem was all the channels were clear. There was maybe a single network on the wifi list of any scanner throughout the event. No hotspots, no venue wifi, nothing. Also, we had run Battle O’ Baltimore at the exact same venue 5 months ago with none of the same issues.


Alright, that makes more sense than shuffleboard being the cause.