FMS Affecting Performance In Autonomous

Hello all, I’m a programming lead from 604. I’m not sure if you saw any of our matches in Houston (we were in the Hopper division), but if you did, you may have noticed that, up until eliminations, our robot would consistently stop in the middle of its side-peg autonomous routine: it would move forward, turn to align, and then cut out when it was supposed to drive forward into the peg. This turned out to be the result of a rather interesting issue that I have not yet fully figured out. I’d like to know whether other teams have had similar problems, how they ended up resolving them, and whether they would consider it a field fault.

I’m sure you’re all familiar with autonomous performing differently on the field than in practice environments, whether because of sensors, distances, or various other calibration measurements. However, this is the first time when using dead reckoning that our robot has simply stopped moving, and only during actual matches. This never occurred when running with an FMS connection on the practice field, nor when running Practice Mode on the cart or on the floor. The logs show that the robot enters the initial forward step, completes it, enters the rotation step, completes it, and then enters the final forward step, where it does nothing and never exits. I tried a variety of methods to isolate the cause of the stop: I switched from PID-based driving to timer-based driving, I turned all of my steps into one big macro step, and the like. Each time the result was the same: the robot would refuse to drive the final forward distance.

As it turns out, the issue was with getting calibration values from the dashboard (thanks, 2848). For whatever reason, the robot would work perfectly fine in normal environments, but when connected to FMS on the Hopper division it was unable to pull the values we put on the SmartDashboard. This never happened at any regional or on the practice field FMS. Hard-coding the values seemed to resolve the issue, which is why it ended up working in elims and on the Einstein field. We were later told that the Hopper division network was also being used to host the displays for all of the other divisions, and that this could possibly have affected our performance (although it should be noted that we never tried to pull calibration values from the dashboard during Einstein, since we didn’t want to risk it, so we don’t know whether the issue was exclusive to the Hopper field).

As a summary: the robot worked perfectly fine on the practice field FMS, in Practice Mode, and when testing in the queue line. When connected to FMS on the Hopper field, it would cut out in the middle of auton. Hard-coding the calibration values in the robot code resolved the issue, implying that the problem was in retrieving values from the SmartDashboard (which uses NetworkTables, so the data travels over the field network that the FMS manages).
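
To be clear, the workaround was just falling back to constants in code; roughly something like the sketch below, where the dashboard read has a hard-coded default so the routine can still finish if the value never arrives. The key name and distance are made up for this example.

```java
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class AutoCalibration {
    // Hard-coded fallback (the key name and number here are made up for illustration).
    public static final double FINAL_DRIVE_DISTANCE_IN = 24.0;

    // Use the dashboard value if it actually arrived over NetworkTables;
    // otherwise fall back to the hard-coded constant so auton can still finish.
    public static double finalDriveDistance() {
        return SmartDashboard.getNumber("Auto/FinalDriveDistance", FINAL_DRIVE_DISTANCE_IN);
    }
}
```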

Do any teams have thoughts regarding similar issues, how they resolved them, and whether this should be considered a field fault? Either way, people should know about this problem, and that hard-coding the calibration values seemed to fix it.

Never rely on smartdashboard for setting values (at least, not in competition code) if you can possibly avoid it. Never, ever, ever. Bad idea.

It has never been reliable, and as long as FMS remains a total black box, it likely never will be.

Probably not the answer you’re looking for, but we had issues in 2016 with the Java SmartDashboard calibration value widgets. Similarly, we didn’t have enough time to fully debug both on real fields and in the pit/practice areas.

I never really liked the idea of storing calibration values on a driver station (what if you swap laptops? the robot behavior will change :frowning: ).

So, for 2017, we designed our own calibrations setup with three major components:

–Website-based interface for adjusting values
–Calibration values saved to file on the roboRIO and loaded at software boot time.
–Specific “calibration” object defining a number which can be tuned through that website interface, loaded/saved from file, etc. The user of a calibration can delay processing a number change until a specific event (ex: prevent changing a PID value while running to prevent instability).
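
For illustration, here is a stripped-down sketch of the third piece: a file-backed calibration value that defers applying changes until asked. This is just a hypothetical outline (made-up file path and class shape), not our actual released code, which is linked below.

```java
import java.io.*;
import java.util.Properties;

// Minimal sketch of a file-backed calibration value (not the released code linked below).
public class Calibration {
    private static final String CAL_FILE = "/home/lvuser/calibrations.properties"; // hypothetical path
    private static final Properties props = new Properties();

    static {
        // Load saved calibrations once at software boot; a missing file just means "use defaults".
        try (FileInputStream in = new FileInputStream(CAL_FILE)) {
            props.load(in);
        } catch (IOException e) {
            System.out.println("No calibration file found, using defaults");
        }
    }

    private final String name;
    private double value;        // value currently in use by robot code
    private double pendingValue; // value requested from the tuning interface
    private boolean pending;

    public Calibration(String name, double defaultValue) {
        this.name = name;
        this.value = Double.parseDouble(props.getProperty(name, Double.toString(defaultValue)));
        this.pendingValue = this.value;
    }

    // Called by the tuning interface; the change is not applied until acceptPending().
    public synchronized void requestValue(double newValue) {
        pendingValue = newValue;
        pending = true;
    }

    // Apply a pending change at a safe moment (e.g. not in the middle of a running PID loop).
    public synchronized void acceptPending() {
        if (pending) {
            value = pendingValue;
            pending = false;
            props.setProperty(name, Double.toString(value));
            save();
        }
    }

    public synchronized double get() {
        return value;
    }

    private static void save() {
        try (FileOutputStream out = new FileOutputStream(CAL_FILE)) {
            props.store(out, "robot calibrations");
        } catch (IOException e) {
            System.out.println("Could not save calibration file: " + e.getMessage());
        }
    }
}
```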

We’ve got the code released at:

and

I wasn’t at Houston and I haven’t seen that before, but I have a few questions.

  1. What language are you programming in? (You said SmartDashboard so I could assume Java but I want to confirm)
  2. Were you writing/posting any other values to NT? If so, did they work?
  3. What was your roboRIO CPU at?

Few more followup questions:

  • What’re all the software versions you’re running? Have you installed all the updates (including the optional ones)? I vaguely remember there being some sort of smartdashboard bug in the kickoff version, but I don’t remember what it was.
  • Can you post a driver station log file from a match where this happened?

We have seen sporadic issues with the dashboard sending data to the robot on FMS, but we have never seen an issue with the robot sending data to the dashboard. As a result, all of our data is stored robot-side and only sent out to the dashboard for display. We’ve seen this behavior for several years.
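
In practice that just means something like the sketch below: constants live in robot code, and the dashboard only ever receives values (the names and numbers are made up).

```java
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

// Sketch: constants live in robot code; data only ever flows robot -> dashboard for display.
public class Telemetry {
    public static final double SHOOTER_RPM_SETPOINT = 3200.0; // made-up robot-side constant

    // Call from a periodic method so the drive team can see what the robot is doing.
    public static void publish(double shooterRpmActual) {
        SmartDashboard.putNumber("Shooter RPM setpoint", SHOOTER_RPM_SETPOINT);
        SmartDashboard.putNumber("Shooter RPM actual", shooterRpmActual);
    }
}
```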

Yeah, there was a SmartDashboard bug where the autonomous mode chooser wouldn’t show up. This was fixed in the latest WPILib update, but I have noticed that SmartDashboard can still be very buggy and laggy in competition. For example, during one quals match on the Turing field, SmartDashboard had no camera feed from the roboRIO USB camera, so we had to restart SmartDashboard in the middle of the match.

4513 hasn’t had any issues with SmartDashboard this year. Our auto chooser checkboxes worked beautifully, and we were able to do what we needed within SmartDashboard (like resetting the gyro, watching limit switch press/unpress, gyro heading, etc.) directly from there.

We use C++ in case anyone is wondering.

As general advice, I’d recommend bouncing any critical signals (DS -> robot) back to the DS/SmartDashboard/etc. so the drive team can confirm what is actually selected. If there is a mismatch, at least they know to resolve it (reboot, restart, tell the FTA, etc.).
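
As a rough sketch of the bounce-back idea (2017-era SendableChooser calls; newer WPILib renamed these to setDefaultOption/addOption, and the key names here are made up):

```java
import edu.wpi.first.wpilibj.smartdashboard.SendableChooser;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

// Sketch of bouncing a DS -> robot selection back to the dashboard for confirmation.
public class AutoSelection {
    private final SendableChooser<String> chooser = new SendableChooser<>();

    public AutoSelection() {
        chooser.addDefault("Cross Line", "cross");  // 2017-era API; newer WPILib: setDefaultOption/addOption
        chooser.addObject("Side Peg", "sidePeg");
        SmartDashboard.putData("Auto Chooser", chooser);
    }

    // Call periodically while disabled so the drive team can confirm what the robot sees.
    public void echoSelection() {
        String selected = chooser.getSelected();
        SmartDashboard.putString("Auto the robot will run", selected == null ? "NONE RECEIVED" : selected);
    }
}
```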

Our team had some issues with the FMS all through hopper as well rip :frowning:

Exactly.

This is what we do. We’ve delayed a few matches rebooting our roboRIO when we saw values not being re-sent, but rebooting always fixed it.

We had this same issue at CVR. The funny thing is, we bounced back the chosen auto and it showed the correct selection, but the auto wouldn’t run. When we hard-coded it, the auto ran. At OCR, the selector worked with the exact same code.

I’ll just add that there is a way to take specific values and save them on the roboRIO. In the NetworkTables interface there is a function called setPersistent(): you say which key you want to keep, and it gets saved to an ini file on the roboRIO and reloaded automatically on the next boot.

This would be perfect to save PID values.

Note, I have never used this myself; it just sounded useful for your situation.
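
If I’m reading the docs right, using it would look roughly like the untested sketch below; the exact calls depend on your WPILib version (newer releases do the same thing through NetworkTableInstance entries), and the gain names and numbers are made up.

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

// Untested sketch: mark tuned gains as persistent so they survive roboRIO reboots.
// These are 2017-era calls; newer WPILib exposes this via NetworkTableInstance entries.
public class PersistentGains {
    private static final NetworkTable table = NetworkTable.getTable("SmartDashboard");

    // Seed a default only if the key has never been set, then flag it persistent so any
    // tuned value is written to networktables.ini on the roboRIO and reloaded on boot.
    private static void persist(String key, double defaultValue) {
        if (!table.containsKey(key)) {
            table.putNumber(key, defaultValue);
        }
        table.setPersistent(key);
    }

    public static void init() {
        persist("Drive kP", 0.05); // made-up starting gains
        persist("Drive kI", 0.0);
        persist("Drive kD", 0.0);
    }

    public static double kP() {
        // Read back with a hard-coded fallback in case the key is somehow missing.
        return SmartDashboard.getNumber("Drive kP", 0.05);
    }
}
```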

While our autonomous never did a full stop, it ran enormously differently on the practice field vs. the real field. The first portion of our side peg auto drove forward some number of inches with a PID. The distance we tested on the practice field had to be increased by about 5 inches to actually land on the real field. We noticed the same issue at the PNW championship. I think it has something to do with the delay caused by the radio (the robot turns after encoder feedback says it is past a certain point, so in theory, if there were delays, it could drift more or less afterwards?), but I just don’t see how that could cause such a large discrepancy.

Thank you so much! Our team had the exact same problem at MSC last week and it ended up killing our rankings. Sorry you had to deal with this at Houston but thanks for sharing your experience and your solution to the problem!

Bump, specifically the last part. Log files would be very helpful for us to find your issues.

I hate to say it, but we have avoided using SmartDashboard for sending anything to the robot due to past experience with problems like this. Our programmers are also taught to put all of our SmartDashboard ‘sends’ in functions that are called at a much lower rate than everything else in the robot (e.g. once per second).
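
Concretely, that just means gating the publishes on a counter, roughly like this (the signal names are made up):

```java
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

// Sketch of rate-limited dashboard sends: the ~20 ms robot loop only publishes every Nth pass.
public class SlowTelemetry {
    private static final int SEND_EVERY_N_LOOPS = 50; // roughly once per second at ~50 Hz
    private int loopCount = 0;

    // Call from teleopPeriodic()/autonomousPeriodic().
    public void periodic(double leftEncoderInches, double gyroHeadingDegrees) {
        if (loopCount++ % SEND_EVERY_N_LOOPS == 0) {
            SmartDashboard.putNumber("Left Encoder (in)", leftEncoderInches);
            SmartDashboard.putNumber("Gyro Heading (deg)", gyroHeadingDegrees);
        }
    }
}
```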

I actually have a theory on what is causing these issues. The period of the periodic functions in IterativeRobot in WPILib (and the Teleop VI in LabVIEW) is based entirely on the DS packet timing. This means those functions run at intervals that are nowhere near actually periodic. Tethered, this usually isn’t an issue, as there is nothing to slow a packet down. On the field with multiple robots, however, trip times usually do go up, even while staying within spec. If teams depend on their packet timing to run control loops, this can wreak havoc on those loops, as they most likely depend on a constant time base.
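
If you do need a steady time base for a control loop, one option is to run it off a Notifier instead of the DS-driven periodic methods. A rough sketch (the 10 ms period is just an arbitrary example):

```java
import edu.wpi.first.wpilibj.Notifier;

// Sketch: run a control loop on the Notifier's own timer so its period does not depend on
// when driver station packets happen to arrive over the field network.
public class FixedRateLoop {
    private final Notifier loop;

    public FixedRateLoop(Runnable controlStep) {
        loop = new Notifier(controlStep);
    }

    public void start() {
        loop.startPeriodic(0.01); // 10 ms period; pick whatever your loop actually needs
    }

    public void stop() {
        loop.stop();
    }
}
```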

Regarding SmartDashboard, we do know about some of these issues, especially those involving sendable choosers. We are looking heavily into solutions to fix this once and for all, as we know it bugs teams.

Note that most of these issues are with the field networking setup; it would be basically impossible for the FMS software itself to interfere with these things. The software that communicates with the robot is super simple, and completely separate from any DS-to-roboRIO comms. It’s most likely the field networking, or connection races between the roboRIO and DS, that are the root cause of most of the issues teams see.

I would take a look at the FMS whitepaper too:

https://wpilib.screenstepslive.com/s/fms/m/whitepaper/l/608744-fms-whitepaper