Communication Issues?

Watching some of these regionals, I’ve noticed an abnormally high number of robots remaining stationary for at least a portion of the match, if not all of it. Lone Star replayed at least two of its qualifying matches, from what I saw.

Has your team had any field communication issues, and if so, what was the cause/outcome?

Our team had some issues in Traverse City. I didn’t watch enough matches to draw many conclusions, but I saw our side have issues a number of times during the finals; our drive team said that the team in the Red #3 station would drop from the FMS. (The volunteers tend to cast that stuff aside, in our experience; I’m unsure whether they reported it or assumed no one would care and didn’t.)

From their reports it was not the Windows “media disconnected” message but “FMS not connected” on the Driver Station, and the robot was not off.

Here’s a video showing the issue in our second match of the finals (watch the middle robot on red)

I haven’t noticed many stationary robots at ONTO and ORPO. I’m sure they’re working on any issues as they come up.

To teams competing this week and at future events: if you’re unsure about a sudden loss of control of your robot, approach the FTA and ask questions. Take note of as much information as possible, such as what your FMS connection light was doing at the time of the issue (the blue/red blinking lights at the alliance station), what your dashboard/DS was saying, and any symptoms seen on the robot (complete loss of control, latency in command execution).

Most other volunteers don’t have anything to do with the field system and can’t give you a definitive answer. Only the FTA and FTAA(s) can really help you.

I hate to say it…but it seems that NYC is having some comm issues with the blue alliance station…they had to replay a few matches, and they had to replay one of them twice.

EDIT: They are about to replay the first quarterfinal match, but just made the announcement to all teams to “turn off your dashboard for all matches unless you really need it.”

NYC had all teams with cameras not use them in eliminations due to comm issues - even teams that said they needed them. Even then we had some matches with 10-30 minute delays, and many, many elimination match replays.

ONTO had only one apparent comms issue that I saw all weekend, 2809 in QF4.3.

I don’t know if the cause was determined, but the match was not replayed.

I was part of the backup team for the blue alliance in the first quarterfinal. I noticed some issues with one of our alliance’s robots, but I had figured it was a malfunction on that team’s part. I spoke to one of the volunteers at the FTA table, and they told me that the bandwidth usage on one of the robots was through the roof. They couldn’t figure out who was causing the problem, so they told everyone to shut off their Dashboard. No cameras; all of the teams were driving blind. We also had to restart an earlier match because of a bad battery given to an alliance member. The number of match resets in NYC was too high, and there were also problems with scoring.

Please keep in mind that stationary robots do not equal “no comms” or whatnot. 1114 lost their battery in a qual match, and funnily enough were stationary for the rest of the match. Was that a “comm issue”?

Gregor - I was sure to look for the telltale orange RSL on the robot(s) in question in the various webcasts I was watching. During the time I was watching, I didn’t see any unpowered stationary robots.

That was just an example. Unmoving robots do not equal a field fault. A dead DS laptop, seized gearboxes, loss of radio power, and a cRIO reboot are some of the hundreds of reasons that can stop a robot in its tracks (wheels?).

You cannot diagnose anything useful from a webcast.

Which was the purpose of this thread - to get a general feeling of how many of those non-moving robots were due to a communication issue.

My original post wasn’t directed at you, more so to the other posts.

Several teams at the Gull Lake district in Michigan had some issues, most notably team 858, the Demons. We tried to help them figure it out but could not. It was probably a bandwidth issue, as they experienced severe latency, i.e., the robot did not respond to driver input until seconds later.

Other teams such as 2767 had a break in communication as well. Our robot went “dead” for about 20 seconds in a semifinals match. I think it is a problem that needs attention.

THIS THIS THIS THIS THIS.

Thank you.

Stop screaming at field resetters/queuers when your robot doesn’t move. Whether it’s a comm issue, a robot issue, a field issue - whatever. Only the FTA (navy blue polo with ‘FTA’ on the back) and FTAAs (yellow vest with ‘FTAA’ on the back) can help with those sorts of issues. Approach them calmly with your notes on the observed issue and they should do their best to help you out.

San Diego had one really weird match. In match 88, every robot missed in autonomous. About 10 seconds into teleop, every robot started to lag, and the DSs would flash between “no connection” and “teleop enabled.” I had my driver take his hands off the controls, but other teams were spinning due to lag. We had to reset and turn our robots on one at a time. Of course my team was the suspicious one, with our full-resolution camera, SmartDashboard, and C++, so the field CSA came to stand behind our station and made us very nervous by joking that we looked like the problem. But no problems happened in the replay of the match, and nothing happened for my team prior to or after it.

Also, the Control System Advisors, who often wear an orange hat and help set up WPA keys, can be an invaluable source of in-pit help (since the FTA cannot leave the field for long). CSAs can talk with the FTA to get technical details about your issue and actually check/fix your control system. They can relay info back and forth to the FTA to help diagnose issues in future matches.

The field and robots at week 1 Hatboro had a lot of issues on Friday and Saturday. The two FTAs and the FTAAs worked tirelessly to resolve them. They proactively sent two CSAs and me (when I had time away from RI duties) to the pits of teams identified as having issues. Issues were diagnosed, and Sunday seemed to go pretty smoothly.

I have been officially or unofficially helping resolve these issues for years now. In my observation, it is almost always a robot issue. I think the Pareto principle applies: 20% of the robots cause 80% of the issues. Of course the field, comms, firmware, and libraries can all have their issues, most notably the mandatory C++ update released during week 1. If something like that is possibly the issue, the FTAs are eager to find it and share it with FRC engineering if necessary.

Although your robot may have issues, it only becomes your fault if you don’t work responsibly to resolve them. Get as much info as you can from the FTA. If you don’t have the experience to troubleshoot it yourself, please seek help from someone. Even if you are experienced, a second set of eyes can catch the things you glance over without really thinking.

Dear CD,
I believe the communication problems we were having on the field at the Michigan Waterford District were solved by eliminating unnecessary Dashboard variables and upgrading the ’09 Classmate to a newer laptop for the FRC Dashboard component used with the joysticks.
Here’s what happened, and my solution. Our student driver complained of lag on the field during the last minute of operation. Upon finishing the match I took the robot (with the same battery) to the practice field, tethered the system, and had the other student driver drive until the battery was dead; the system worked fine. I began to suspect a radio issue; however, the radio is mounted in a good position, so I didn’t think that was a good path to go down.
I went home for the evening and remembered that when I was experimenting with potentiometers at home with the Dashboard and the ’09 Classmate, the pot readings tracked closely on startup but would track poorly after a minute or so of operation. I had checked the ’09 Classmate’s CPU and it was at about 75% utilization. I had just made a mental note of it at the time (at home), but I decided this was the path to follow.
Saturday morning I talked to the field FTA and he confirmed that he had seen long “trip times” on our robot on the field the previous day (why didn’t he tell me?). I think he quoted 25 msec. I told him I would be changing laptops for the Dashboard and would like to be updated on my communication packet trip times.

Now, because I wanted my best operation on the field, I also changed the Dashboard program to eliminate four dashboard variables I did not need. So I changed both the FRC Dashboard laptop and my program. I had the same driver drive in the morning, and she reported smooth operation. The field FTA later said my packet trip times were down to 4 or 5 msec, which was “good.”

I am not using a camera (disabled) and the software I am using is “solid”.

So my recommendation to teams: certainly disable the camera if you are not using it, eliminate any SmartDashboard parameters that you aren’t using on the field, and update your FRC Dashboard laptop (faster CPU/more RAM?) for the field.
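For teams on the C++ side, here’s a rough sketch of what that trimming can look like in a 2013-era WPILib program. The USE_CAMERA flag and the dashboard key are made up for illustration; adapt it to whatever your own code actually sends:

```cpp
// Sketch only: keeping field-side traffic light in WPILib C++ (2013-era).
#include "WPILib.h"

// Flip this off for competition if you don't actually use the camera feed.
#define USE_CAMERA 0

class TrimmedRobot : public SimpleRobot {
public:
    TrimmedRobot() {
#if USE_CAMERA
        // If you must run the camera, keep resolution, compression, and
        // frame rate modest instead of streaming full-resolution frames.
        AxisCamera &camera = AxisCamera::GetInstance();
        camera.WriteResolution(AxisCamera::kResolution_320x240);
        camera.WriteCompression(30);
        camera.WriteMaxFPS(15);
#endif
    }

    void OperatorControl() {
        while (IsOperatorControl()) {
            // Send only the few values the drive team actually reads on the
            // field; every extra PutNumber() is more traffic per loop.
            SmartDashboard::PutNumber("Battery",
                DriverStation::GetInstance()->GetBatteryVoltage());
            Wait(0.05);  // don't update the dashboard faster than it's read
        }
    }
};

START_ROBOT_CLASS(TrimmedRobot);
```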

I hope this helps you… Please comment if you think this solution may have helped you so we can continue to make a better season for everyone.

PS: Talk to the CSA & FTA field folks if you are having what you think is a field communication issue. I am also adding that the possibility of a programming issue is always present in this competition environment.

As an addon to Marc’s comments, all team programmers should get in the habit of checking their Driver Station log for each match when the robot comes back from the playing field.
It contains a wealth of information recorded 50 times a second for the entire match:

  • Communication packet trip time
  • Lost packet count
  • Robot battery voltage
  • cRIO CPU utilization %
  • Markers for lost comms (and most importantly, the duration of lost comms: ~25 sec = radio reboot, ~40 sec = cRIO reboot)
  • When Auto/Teleop occurred
    Start -> All Programs -> FRC Driver Station Log Viewer
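Those lost-comms durations are worth memorizing. If you’d rather not, a trivial helper like this captures the heuristic (purely illustrative, with the thresholds taken from the list above):

```cpp
#include <cstdio>

// Illustrative only: classify a comms outage by its duration, using the
// rough rules of thumb above (~25 sec = radio reboot, ~40 sec = cRIO reboot).
const char *GuessOutageCause(double secondsLost) {
    if (secondsLost < 5.0)  return "transient dropout";
    if (secondsLost < 32.0) return "likely radio reboot (~25 sec)";
    if (secondsLost < 50.0) return "likely cRIO reboot (~40 sec)";
    return "extended outage - bring it to the FTA";
}

int main() {
    printf("%s\n", GuessOutageCause(26.0));  // "likely radio reboot (~25 sec)"
    return 0;
}
```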

Also, get in the habit of checking the built-in Windows Task Manager or Resource Monitor tools (Ctrl-Alt-Del) for runaway CPU or memory usage on your Driver Station laptop.

At Lone Star, there were a few matches with lag and comm issues. The FTA asked us, 1477, and possibly others to turn down our camera resolution/framerate, citing overloaded bandwidth as the cause of our problems. As a result, we turned our Axis camera feed down from approximately 5 Mb/s to 3 Mb/s. Our programming lead was a bit confused, as we thought the bandwidth was capped at 7 Mb/s. Anyway, we replayed two of our matches when the other team complained of lag or lack of comms, still losing one and winning the other.

Interestingly, we experienced jerkiness during these matches but were never as severely affected as other teams, some of whom did not move at all. Also, in eliminations, our alliance captain lost communication multiple times. Luckily, we were able to narrowly upset the second alliance, but we were defeated in the semifinals by a single autonomous frisbee in both matches.
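For anyone else trying to reason about camera bandwidth, a back-of-the-envelope estimate is just frames per second times average compressed frame size. The frame sizes below are rough guesses for illustration, not measured Axis numbers, but they show how a feed can land near the ~5 Mb/s and ~3 Mb/s figures mentioned above:

```cpp
#include <cstdio>

// Back-of-the-envelope MJPEG bandwidth: fps x average compressed frame size.
// Frame sizes are assumed values for illustration, not measured Axis data.
double EstimateMbps(double fps, double avgFrameKB) {
    return fps * avgFrameKB * 8.0 / 1000.0;  // kB/frame -> megabits/sec
}

int main() {
    // ~20 kB/frame (e.g. 640x480, light compression) at 30 fps ~ 4.8 Mb/s
    printf("high:  %.1f Mb/s\n", EstimateMbps(30, 20.0));
    // ~12.5 kB/frame (smaller/more compressed) at 30 fps ~ 3.0 Mb/s
    printf("lower: %.1f Mb/s\n", EstimateMbps(30, 12.5));
    return 0;
}
```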