Endgame Score Indicators

Since the endgame scoring tasks are typically both a significant percentage of the total match score and often needed to earn one of the bonus ranking points, getting proper credit for them can be very important, both in determining the outcome of a match (win vs. loss) and in earning valuable ranking points.

The endgame scores are typically recorded by the refs as the match comes to a close (or shortly after it ends) and are generally not reflected in the real-time scoring display, so the first time the teams see how the endgame was scored is when the final score is displayed. By that point, the robots are typically either in the process of being removed or have already been removed from the field.

Although the refs generally do a good job of getting these scores recorded correctly, occasionally scoring achievements get missed. When those mistakes do occur, the recourse for the teams affected is to go to the question box. But unfortunately, by this time, there is no way for the head ref to review how the endgame was scored and whether all the tasks were properly credited. If a task was missed, the explanation that is given is that the refs didn’t see it.

The idea of video replay has been discussed elsewhere, and if that were to happen, that would probably resolve the majority of these discrepancies. I understand the reasons why video replay would be challenging to implement and I don’t want to re-hash that debate here. Rather, I want to propose an alternative that might be easier to implement and would hopefully allow the referee team to catch and correct any missed scores while the robots are still in position.

My idea is to add score indicator lights to the endgame element in each scoring location. As the refs enter their scores, the lights would illuminate. This would provide a quick visual for both the teams and the head ref to see how things were scored. If there is a robot in a position without a corresponding light, the head ref can ask the ref that recorded the score why the discrepancy exists. If the ref did not score the task due to some rule violation, he can explain that to the head ref (so that the head ref can explain it to the teams if they question it). If the ref that entered the score missed something, he can correct it before the final score is entered.

I recognize that some games have implemented things similar to this. 2017 used automated endgame scoring (with the touchpads). And while that had its own set of technical issues, I think it was a valiant attempt by the game designers to remove the potential for human mistakes from score recording. With many games, automated scoring like this is technically challenging, so an indicator system would add a level of error-proofing to the manual scoring task. 2020 had an indicator for whether the switch was level during the endgame. This is a good example of what I am thinking of, but it only covered one aspect of the endgame scoring.

Here are a couple of examples of what this might look like:

2022 - Install light stacks (with 3 lights each) adjacent to each of the rungs of the hangar structure. If there were two robots on the Traversal Rung and one robot on the Mid Rung, as the ref scores these robots, two of the 3 lights on the Traversal Rung would light up and one of the lights on the Mid Rung would light up. This would give a very quick way for both the refs and teams to see how the endgame was being scored. If the lights did not match the apparent robot positions, then this would be quickly visible so that the referees could correct the scoring or confirm the scoring.

If, for example, the ref saw that a robot extended too early, and that robot was on the Traversal Rung when the match ended, then there would be one fewer light illuminated for the Traversal Rung than the number of robots hanging from that rung. The ref would be able to quickly explain why he had scored it this way so that everyone was clear on the scoring. Similarly, if a robot did not sustain its climb for the full 5-second period, the ref would be able to explain why the climb did not count.

2018 - A light stack could be added to the top of the scale structure, again with 3 lights per alliance. The ref’s scoring would illuminate the lights depending on how many robots completed the climbs. If the alliance scored the levitate power-up, one of the lights would illuminate in a different color indicating the credit for the climb from the power-up.

If there was a robot that climbed with a second robot on a buddy lift platform and that second robot did not reach the height needed, then the ref would only score one of the two robots and only one of the two lights would illuminate. That ref could explain the reason that the second robot was not scored.

It seems to me that such an indicator system would be technically possible to implement with the current technology that FIRST uses in the arena. Multi-colored light stacks are used in the drivers station area and lights are often used to indicate various things on the field. It also seems to me that such a system would give the referee team a simple way to cross check their scoring and avoid the unfortunate situation where they have to explain that they did not see something that occurred in the endgame.

Just to be clear, this is by no means meant to disparage the wonderful job that the referees do at each and every tournament. They have a challenging job. My goal with this idea is to make their job a little easier, reduce the possibility of mistakes, and improve the experience for the teams by giving them more visibility into the scoring that occurred in the final critical moments of the match. It might also improve the audience experience, since they could see the endgame scores being displayed.


First, I like the idea. If it’s possible to implement it, it’d add a great visual element to the games.

That said, there’s one issue:

Current standard procedure is that the ref scores the task based on what’s on the field and calls in a discrepancy (which triggers review notifications, etc.). The HR and ref(s) discuss the penalty(s), and the HR tells the scorekeeper to remove (or add) whatever needs to be removed (or added). The system you’re proposing would not work properly with that particular process, so either the current system would need to change, or the proposal might need some tweaks.

As an example: Hueneme Port Q15. Due to multiple penalties, various climbs didn’t count initially. (Take a look at the last 30 seconds of the match, then the score screen, then TBA’s site–they’re all different from each other.)

Actually, this could be integrated into the indications. The ref initially scores the task based on what he sees and the corresponding light turns green. Then, he notes the discrepancy and the same light turns yellow indicating that it is under review. If the referee team agrees on the discrepancy, the light is turned off (or turned red). The teams, audience and emcee would be able to see that there was a discrepancy being discussed and then the emcee could explain the result (as they do today).
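The review flow described above is essentially a small state machine: scored turns the light green, a called-in discrepancy turns it yellow, and the referee team's decision turns it back green or off/red. A minimal sketch in Python, with made-up names (nothing here reflects actual FMS software):

```python
from enum import Enum

class Light(Enum):
    OFF = "off"
    GREEN = "green"    # task credited as seen
    YELLOW = "yellow"  # discrepancy under review
    RED = "red"        # credit removed after review

class EndgameIndicator:
    """Hypothetical per-scoring-location indicator light."""

    def __init__(self):
        self.state = Light.OFF

    def score(self):
        # Ref credits the task based on what they saw.
        self.state = Light.GREEN

    def flag_discrepancy(self):
        # Ref calls in a possible penalty; light shows "under review".
        if self.state ==Ight.GREEN if False else Light.GREEN:
            self.state = Light.YELLOW

    def resolve(self, score_stands: bool):
        # Referee team agrees: either the score stands or it is removed.
        if self.state == Light.YELLOW:
            self.state = Light.GREEN if score_stands else Light.RED
```

In this sketch, a climb that survives review returns to solid green, while a removed score goes red rather than simply off, so the teams and emcee can see that a ruling was made (as opposed to the task never being scored at all).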


Yeah. Good idea. Let’s hope it gets done.


This year, the live scoring didn’t show any endgame points. This is a change from previous years, where endgame points that were entered roughly before T=0 would show up. My understanding is that this was a deliberate change to avoid inconsistency in the real-time scores coming from delays in entering endgame points.

That appears to line up with what you’re seeing in Hueneme Port Q15 (edit: except a mid rung climb on blue that presumably changed due to referee rulings/penalties). Live scores ended at 20-88, which matches the final scores excluding endgame. The 24-103 at the end of the video appears to omit a traversal climb (possibly from a G208 violation at around 0:29 from the end of the match) that was added on later - i.e. a deliberate change, not a FMS-caused delay.

Anyway, +1 for a way to confirm endgame points before robots are cleared, regardless of what it looks like. It could be as simple as keeping the endgame screen visible and read-only on the ref panels for teams to confirm (usually it disappears once refs submit scores).


This was specifically written into the rules. HANGAR points didn’t happen until at or after time T=0, so they couldn’t be counted before the end of the match.

From Section 6.4:

C. assessment of HANGAR points is made 5 seconds after the ARENA timer displays 0, or when all ROBOTS have come to rest following the conclusion of the MATCH, whichever happens first.

Marshall I think a strong case could be made that this topic falls within the scope of what the Timeout Process / Playoff Tournament committee is charged to dig into and offer recommendations about.

These indicators would reduce the likelihood that teams are surprised by what finally shows up on the score display. This in turn would reduce the time spent by team coaches parsing and checking the final scores against their impression of events. Sounds like it would be a solid improvement to the tournament process.

Maybe @wgorgen could present the proposal to the committee?


Could we add a camera to FMS that points at the end game area to take a picture 5 seconds after the match? Then when a team goes to argue there’s evidence of whether or not they were on the ground in 5 seconds or in one hour. This is consistently an issue year to year - custom sensors per game would work but just repackaging the same camera onto the field and not rewriting any software seems easier.


Easy with the rational solutions. We wouldn’t want to make the team experience better for these paying customers…



Technically, or logistically?

Any answer of “no” from the technical side should probably have a reason along the lines of “system hardware can’t handle a video/image input”, which will raise other questions because FMS can output images no problem.

Logistically, though, there’s an argument. Said camera needs to be consistently placed and in focus. Don’t get me wrong, it can be done–inside a corner truss for example–it’s just making sure it’s accident-resistant.

Taking a picture would be good, if it were possible. I’d actually like to see a picture at time = 0 as well as 5 seconds later. The technology to take the picture is probably fairly simple and should be easy to integrate with the FMS to capture the pictures automatically.

The biggest issue I see with this idea is how the pictures get presented to the refs and used. I am not sure whether this would be the primary means of determining the end game points or just a simplified video replay if there is a challenge. Don’t get me wrong, I would like to see video replays if they were possible and if this idea makes that possible, I would be in favor. But I can’t see how this doesn’t have a lot of the same objections that the video replay idea has run into.

My hope is that the indicators would allow a quick scan of the scoring by the head ref so that he could quickly assess whether there were discrepancies that needed a closer look. If the indicators are clear enough, the HR should be able to scan the field during the 5 seconds post match and make an assessment that the credited scores were matching what he was seeing and focus his attention on anything that he needed to pay more attention to. And my hope is that by simplifying this assessment enough, that the need for pictures would just kinda go away. But if we can add pictures as a level of redundancy or to make the scoring task even easier, I am for it.

Based on the discussion in the IRI Q59 topic, I have changed my mind regarding my hesitation about capturing pictures of the endgame configuration. I believe that this image should be captured and presented to either the ref who is scoring the endgame or the HR (or both). Heck, for this year, if you simply had a way to automatically freeze the live scoring view at T=+5 then the question box would be a simple matter of pointing at the live scoring screen and saying “look”.

I also want to capture the idea of adding some sort of tone or light indication to tell the refs when the post match time period for evaluating the scoring achievements has expired, if applicable. This too was an excellent idea that came out of that discussion.


This was in place this year, at least. Though it seems to me that pulsing then solid would be easier to distinguish than the other way around.


I have worked a decent amount with technology that uses lights to indicate things to the user. Studies have shown that for things like this you want solid, then pulse. The reasoning is that it takes longer for a human to realize “oh, the light is no longer going off” than it does to realize “oh, the light started blinking,” because they don’t want to be wrong and have the light turn back off.

Basically, if the light is “on,” the moment it is “off” you know the time has passed and to look at what you need to. If the light is “on”, “off”, “on”, “off”, “on”, you don’t know for sure whether it will stay “on” unless it is a set pattern and you were watching the whole thing.

The best approach would be lights that are yellow at t=0 and red at t=+5, as color differentiation is even easier than light recognition. It would be better if it were green and red, but since the field already uses purple and green for “field all clear,” we have to make do.
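That timing cue amounts to a simple mapping from post-match time to light color. A minimal sketch, using a hypothetical `endgame_light_color` helper and the 5-second assessment window quoted from Section 6.4 above (the color choices follow this post, not any official spec):

```python
def endgame_light_color(seconds_after_zero: float) -> str:
    """Illustrative mapping of post-match time to indicator color.

    Yellow from T=0 while the 5-second assessment window is open,
    red once it has expired. Green/purple are avoided because the
    field already uses them for "field all clear".
    """
    if seconds_after_zero < 0:
        return "off"     # match still running
    if seconds_after_zero < 5:
        return "yellow"  # climbs still being assessed
    return "red"         # assessment window closed
```

A field controller polling the match timer could call this each tick and drive the stack lights accordingly; the hard cutoff at +5 also gives refs the expiration signal suggested earlier in the thread.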


Yeah, I definitely like the color idea even better. It seems less prone to doubt.