Robot Maintenance and Problem Prevention

What do teams do to make sure their robots are at top performance each match, beyond standard checklists (verifying sensor and motor functionality)?

Robot reliability was something that plagued us during the 2017 season, in part due to the larger number of matches that came with Ontario’s change to districts. In 2018 we put a large emphasis on making sure everything got checked every time, and this helped us tons going into matches knowing that our robot was good to perform (sensors functional with the right polarity, air pressure filled, motors moving, battery Beak-tested). While this caught dozens of issues that could have impacted our matches, we still suffered major electrical bugs at our district champs after making two finalist runs at our district events, roughly 30 matches into the robot’s cycle plus ~40 hours of run time pre-bag and in unbag sessions.

To me it seems like there are two main strands to making sure your robot performs well, and you need both.

  1. The robot is soundly made such that there are few points of failure.
  2. Unavoidable points of failure are mitigated through rigorous testing.

So a couple of questions for you all.

What does your team check to make sure your robot works every time?
How do you make your checks as efficient as possible (both time-wise and in finding all the issues)?
What common/particularly frustrating issues have you had that could have been prevented, and how did you prevent those from occurring?
How do you structure your pre-match checks? Who is in charge and responsible for them?
How are issues monitored/communicated between team members?

Thanks for the help!

Was inspired by 2767’s post here

As well as JVN’s blog (I can’t find the post right now, but the one where he talked about what 148 does to prep for events/championships)

What my team started doing a few years ago was assigning each drive team member parts of the robot to check every match or as often as possible.

I think one of the biggest things that helps is knowing the weak points of the robot, checking those between every match, and checking everything on the robot every few matches. I personally was in charge of checking electronics: you should always do pull tests on applicable wires, and double-check that the PWMs and similar connectors are fully seated. I can’t count how many times I pulled on a wire and it came out, and we fixed it before a match instead of the robot dying in a match.

As for making it time-efficient, you don’t have to check everything every match; just cycle through different parts of the robot between matches (though if, say, you rammed your intake into a wall, you should probably check that).
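The rotation idea above can be sketched in a few lines of code. This is a minimal illustration, not any team’s actual checklist: the subsystem names and rotation size are made up, and weak points are always included while the rest cycle so every subsystem still gets covered every few matches.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of rotating through robot subsystems between matches.
// Subsystem names and the per-match count are illustrative assumptions.
public class CheckRotation {
    // Known weak points get checked before every match.
    static final List<String> EVERY_MATCH = List.of("intake", "battery connector");
    // Everything else is cycled through over successive matches.
    static final List<String> ROTATION =
            List.of("drivetrain", "elevator", "pneumatics", "electronics board");

    /** Returns the subsystems to check before the given match number. */
    static List<String> checksFor(int matchNumber, int perMatch) {
        List<String> result = new ArrayList<>(EVERY_MATCH);
        for (int i = 0; i < perMatch; i++) {
            result.add(ROTATION.get((matchNumber * perMatch + i) % ROTATION.size()));
        }
        return result;
    }

    public static void main(String[] args) {
        for (int match = 1; match <= 4; match++) {
            System.out.println("Match " + match + ": " + checksFor(match, 2));
        }
    }
}
```

With two rotating items per match, a four-item rotation is fully covered every two matches while the weak points never leave the list.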

I would love to hear a presentation on this subject by 4607 C.I.S.

As seen at north half champs:

Before every match we do an ops test where we test all parts of our robot to make sure they are working. Our pit crew looks over the whole robot to make sure everything is as it should be, prioritizing the parts of the robot that we have had problems with before.

Interestingly, we were preparing to do a thread about our FMEA process this year, but we were waiting for a higher volume time of the year to post it to CD (early December or something like that). I posted a link to this thread on our Slack so hopefully that will help to generate a better response to this thread.

I can speak briefly about the process that we had, but others (the students) on our team would be able to do a much better job. Basically as a team we recounted all of the failures that we have had over our 5 year history leading up to the 2018 season… and there were a lot of them. Things like pneumatic leaks, forgetting to remove safety straps from the robot before a match, loose hardware etc. We compiled all the failures into a spreadsheet and that gave us a baseline of things to look out for. Some failure modes were more common than others and therefore we prioritized preventing them.

Throughout the 2018 season, every time we brought the robot back to the pit, we did a full systems check, starting with the high-risk systems and working our way down the list. As we did the systems check, any issues were entered into our FMEA database, which fed into the graphic that you saw in our pit. By the time we were at Detroit, that spreadsheet was populated with every failure we had experienced throughout the year, specific to the current robot. With all that information handy, we could easily check the most failure-prone systems every match to prevent future failures and also diagnose problems rapidly between matches.

To my understanding our hope is to publish a white paper that better describes our entire process, which will include a template of the Excel spreadsheets we used.

I’m definitely going to try to convince our team to do this kind of tracking.

Have you considered adding estimated occurrence and severity, then sorting by their weights? It is a common way to prioritize. In the case of FRC, detection can usually be left out, since most failures are “one-time” failures and detecting any of them in advance can be very difficult.

Pretty awesome that you’re giving the kids these tools now so they are familiar with them in the future! Now teach the fish bone diagrams, decision trees, p-boundary diagrams… reliability and maintainability is alive! :smiley:

-a newly appointed R&M Engineer

One problem I’d love to know how to mitigate is SmartDashboard errors, where my drive team selects the correct auto but then something else runs.

I’ve been told that “all you have to do is restart the driver station software before the match”, but that’s frustrating and in a rush can be easy to miss.

Is this an error other people see, and if so, how do you mitigate it? I know 340 uses physical switches on the robot as a solution, but I’d love it if the software we use just worked.

Also this is just awesome^^^ Could you send me a copy of the spreadsheet? Perhaps edited to blank with all your failures removed, if you don’t want me to see them. :smiley:

Oh man, welcome to our 2017 season. Our fix was to put another widget on the dashboard that shows the current auto as registered by the roboRIO, not what is in the SendableChooser. That way the drive team knows to keep messing with SmartDashboard until they finally see the proper auto displayed.

The better fix was to write our own chooser widget. In any event I sympathize. What a garbage thing to have to fix!

I’m not afraid to show our previous mistakes. They’re a reminder of how far we’ve come and how much further we have to go. I’d be happy to share the spreadsheet with you if I had it… I’ll see what I can do about speeding up the process of releasing everything. I know we’re having a meeting tomorrow to discuss the topic.

We use a SmartDashboard Preference to store a number that is used for our selection. That way it’s persistent, and we can set the auto mode in queue before the match. Another SmartDashboard text box displays the name of the auto mode currently selected by that preference.
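The core of this pattern can be sketched without any robot libraries. In a real robot the selection number would live in WPILib’s persistent Preferences and the resolved name would be echoed to the dashboard; here both are plain Java so the selection-and-echo logic can be seen on its own. The auto names and numbers are made up for illustration.

```java
import java.util.Map;

// Sketch of the "store a number, echo the resolved name back" auto-selection
// pattern. The mapping and names are hypothetical examples.
public class AutoSelector {
    static final Map<Integer, String> AUTOS = Map.of(
            0, "Do nothing",
            1, "Cross the line",
            2, "Center switch");

    /** Resolves the stored selection number to an auto name, defaulting safely. */
    static String resolve(int selection) {
        // Echoing the resolved name (not the raw number) lets the drive team
        // confirm what will actually run before the match starts.
        return AUTOS.getOrDefault(selection, "Do nothing");
    }

    public static void main(String[] args) {
        System.out.println("Selected auto: " + resolve(2));
        System.out.println("Selected auto: " + resolve(99)); // unknown value falls back
    }
}
```

The key design choice is the safe fallback: a garbled or stale number degrades to a do-nothing auto rather than running the wrong routine.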

Ahhhh yes. We have encountered this many, many times using the standard dashboard and trying to send data to the robot using combo boxes / radio buttons / numeric controls. So much so that we now set it ON the RIO using the controllers and just echo it to the dashboard. We’ll never do it any other way again.

Hi. Yes, on the main data page of the FMEA workbook (we only show the live checklist and that tournament’s failures with comparison graphs on the screen) we score the failure events by occurrence, severity, and detection to get a risk score. This allows us to focus our energies on the highest priorities. We have combined the design FMEA with a process FMEA for continuous review of a changing system.

We also do the 5 whys immediately in the pit if we have an error on the field, and the team has been taught the fishbone diagram for a more thorough investigation as well. We have seen a nice reduction in repeat issues this year overall, and the team seemed more relaxed being able to rely on our checklist (which incorporated several inspection criteria as a result of our investigations) rather than remembering to review items in the heat of the battle to get our robot back in queue.

After the tournament the team incorporates the lessons learned and reviews the short-term actions to decide and drive long-term actions. We are very excited to start the design phase again soon to see what we can identify and mitigate or eliminate before we get on the field again. I highly recommend a program like this!
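The scoring described above is the standard FMEA risk priority number (RPN): occurrence × severity × detection, with failure modes sorted highest-risk first. A minimal sketch, assuming 1–10 scales and made-up example failure modes (the actual workbook’s scales and entries may differ):

```java
import java.util.Comparator;
import java.util.List;

// Sketch of FMEA-style prioritization: each failure mode gets occurrence,
// severity, and detection scores, and their product (the RPN) drives the
// sort order. Example modes and scores are illustrative, not real data.
public class FmeaRanking {
    record FailureMode(String name, int occurrence, int severity, int detection) {
        /** Risk priority number: the product of the three scores. */
        int rpn() { return occurrence * severity * detection; }
    }

    /** Returns failure modes sorted with the highest-risk items first. */
    static List<FailureMode> prioritize(List<FailureMode> modes) {
        return modes.stream()
                .sorted(Comparator.comparingInt(FailureMode::rpn).reversed())
                .toList();
    }

    public static void main(String[] args) {
        var ranked = prioritize(List.of(
                new FailureMode("Pneumatic leak", 7, 5, 4),
                new FailureMode("Loose hardware", 8, 3, 6),
                new FailureMode("Safety strap left on", 2, 9, 2)));
        ranked.forEach(m -> System.out.println(m.name() + " RPN=" + m.rpn()));
    }
}
```

Sorting by the product rather than any single score is what surfaces failures that are individually unremarkable but risky in combination.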

We use a long checklist before each match. It takes 15-20 minutes to go through, and it has cut our problems down greatly. I will say, watching from a distance, the team members are very diligent in performing the checks and have found many things. Maybe next year we should have a post-match checklist as well.
This would help find hot motors and other problems even when the robot has no problems on the field, and would allow better preventive maintenance if we had a good set procedure.

In 2017 and 2018 we did this.

We have a test mode that goes through a long list of systems checks autonomously with the robot on blocks. Among other things we check for:

  1. Motors drawing roughly the amount of current we expect (and all motors in the same mechanism drawing ~equal current).
  2. Motors spinning the right direction and approximate speed (on mechanisms equipped with sensors).
  3. All pneumatics actuating, while we watch/listen for problems.


This is not hard to do and may well be the most valuable code you write all season. It’s not a total replacement for a manual inspection process, but it lets you knock out a whole bunch of items all at once.
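The current-draw portion of such a test mode can be sketched as below. The expected values and tolerances are made-up numbers; an actual test mode would read currents from the power distribution panel while commanding each mechanism in turn on blocks.

```java
// Sketch of the current-draw checks from an on-blocks systems test:
// (1) each motor draws roughly the expected current, and (2) all motors
// in one mechanism draw ~equal current. All numbers here are assumptions.
public class CurrentCheck {
    /** True if a measured current is within tolerance of its expected value. */
    static boolean nearExpected(double measuredAmps, double expectedAmps, double toleranceAmps) {
        return Math.abs(measuredAmps - expectedAmps) <= toleranceAmps;
    }

    /** True if all motors in one mechanism draw roughly equal current. */
    static boolean roughlyEqual(double[] amps, double maxSpreadAmps) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
        for (double a : amps) {
            min = Math.min(min, a);
            max = Math.max(max, a);
        }
        return (max - min) <= maxSpreadAmps;
    }

    public static void main(String[] args) {
        // A motor drawing far more than its siblings often means a dragging
        // mechanism or a failing gearbox; far less can mean a dead motor.
        double[] drivetrainAmps = {11.8, 12.4, 12.1, 12.0};
        System.out.println("Drivetrain balanced: " + roughlyEqual(drivetrainAmps, 2.0));
        System.out.println("Intake in range: " + nearExpected(14.9, 15.0, 3.0));
    }
}
```

The spread check is the more sensitive of the two: one motor drawing noticeably more or less than its siblings flags a mechanical problem before it becomes a match failure.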

We just did a major streamlining of the spreadsheet, with new links based on what worked for us this year. I’ll get an updated version prepared for sharing. I believe we can only post PDFs, but that will give you some ideas for your start.

This year we used Shuffleboard for our dashboard instead of SmartDashboard, which we used in previous years. We did not have any issues with our priorities being selected and sent (we only had issues with the FMS sending us the wrong plate assignments a few times).

(We program in Java)

This is the most useful thread I’ve seen on CD in a long time.

This seems like such an obvious idea now that I’m reading it, but it’s never something I’d thought to implement before. Thanks, Jared. (If you have any other poof reliability secrets please share :smiley: )

Kris @ FRC #4607, could you please send me some examples of your FMEA and 5 whys?

During the season I explained your test mode to one of our pit crew and she asked me why we haven’t implemented this yet.
…I didn’t really have an answer for her.

I almost think that these should be some of the first bits of code you develop for a mechanism.

Jared, do you mind sharing how 254 develops these tests, and how it integrates into your early development process?
For instance, do you develop these after you have already designed your state machine for a mechanism or are they developed earlier in the process?