Endgame scoring tasks are typically both a significant percentage of the total match score and a prerequisite for one of the bonus ranking points, so getting proper credit for them can determine the outcome of a match (win vs. loss) as well as earn valuable ranking points.
The endgame scores are typically recorded by the refs as the match comes to a close (or shortly after the end of the match) and are generally not reflected on the real-time scoring display, so the first time the teams see how the endgame was scored is when the final score is displayed. By this time, the robots are typically either in the process of being removed or have already been removed from the field.
Although the refs generally do a good job of recording these scores correctly, occasionally scoring achievements get missed. When those mistakes do occur, the recourse for the affected teams is to go to the question box. But unfortunately, by this time, there is no way for the head ref to review how the endgame was scored and whether all the tasks were properly credited. If a task was missed, the only explanation that can be given is that the refs didn't see it.
The idea of video replay has been discussed elsewhere, and if it were adopted, it would probably resolve the majority of these discrepancies. I understand the reasons why video replay would be challenging to implement, and I don't want to rehash that debate here. Rather, I want to propose an alternative that might be easier to implement and would hopefully allow the referee team to catch and correct any missed scores while the robots are still in position.
My idea is to add score indicator lights to the endgame element at each scoring location. As the refs enter their scores, the lights would illuminate, giving both the teams and the head ref a quick visual of how things were scored. If there is a robot in a position without a corresponding light, the head ref can ask the ref who recorded the score why the discrepancy exists. If the ref did not score the task due to some rule violation, he can explain that to the head ref (so that the head ref can explain it to the teams if they question it). If the ref who entered the score missed something, he can correct it before the final score is entered.
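To make the mechanics concrete, here is a minimal sketch in Python of the kind of logic involved. All of the names are hypothetical placeholders; the real FMS scoring interface is not public, and an actual implementation would live inside FIRST's field software:

```python
from dataclasses import dataclass

@dataclass
class ScoringLocation:
    """One endgame scoring location with its own stack of lights."""
    name: str               # e.g., "Traversal Rung" (hypothetical label)
    light_count: int        # number of lights installed at this location
    scored_robots: int = 0  # robots the ref has credited here so far

    def light_states(self) -> list[bool]:
        """One boolean per light: lit for each credited robot."""
        return [i < self.scored_robots for i in range(self.light_count)]

def ref_scores_robot(location: ScoringLocation) -> None:
    """Called each time a ref credits a robot at this location.

    In a real system this would also push the new light states out to
    the field hardware so the teams and head ref see them immediately.
    """
    if location.scored_robots < location.light_count:
        location.scored_robots += 1
```

The point of the sketch is that the lights are driven directly by the same score entries the refs already make, so no new judgment calls are added to their workload.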
I recognize that some games have implemented things similar to this. 2017 used automated endgame scoring (with the touchpads), and while that had its own set of technical issues, I think it was a valiant attempt by the game designers to remove the potential for human mistakes from score recording. With many games, automated scoring like this is technically challenging, so an indicator system would add a level of error-proofing to the manual scoring task. 2020 had an indicator for whether the switch was level during the endgame. That is a good example of what I am thinking of, but it only covered one aspect of the endgame scoring.
Here are a couple of examples of what this might look like:
2022 - Install light stacks (with three lights each) adjacent to each of the rungs of the hangar structure. If there were two robots on the Traversal Rung and one robot on the Mid Rung, then as the ref scored these robots, two of the three lights on the Traversal Rung and one of the lights on the Mid Rung would light up. This would give both the refs and the teams a very quick way to see how the endgame was being scored. If the lights did not match the apparent robot positions, the mismatch would be immediately visible so that the referees could confirm or correct the scoring.
If, for example, the ref saw that a robot extended too early, and that robot was on the Traversal Rung when the match ended, then there would be one fewer light illuminated for the Traversal Rung than the number of robots hanging from that rung. The ref would be able to quickly explain why he had scored it this way so that everyone was clear on the scoring. Similarly, if a robot did not sustain its climb for the full 5-second period, the ref would be able to explain why the climb did not count.
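Continuing the hypothetical sketch above, the two-on-Traversal, one-on-Mid scenario would play out like this:

```python
# Hypothetical 2022 hangar setup: four rungs, three lights each.
hangar = {name: ScoringLocation(name, light_count=3)
          for name in ("Low Rung", "Mid Rung", "High Rung", "Traversal Rung")}

ref_scores_robot(hangar["Traversal Rung"])   # first Traversal climb credited
ref_scores_robot(hangar["Traversal Rung"])   # second Traversal climb credited
ref_scores_robot(hangar["Mid Rung"])         # one Mid climb credited

print(hangar["Traversal Rung"].light_states())  # [True, True, False]
print(hangar["Mid Rung"].light_states())        # [True, False, False]
```

If three robots were visibly hanging but only these three lights were lit, everyone on the field would see the gap while the robots were still in position.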
2018 - A light stack could be added to the top of the scale structure, again with three lights per alliance. The ref's scoring would illuminate the lights according to how many robots completed their climbs. If the alliance scored the Levitate power-up, one of the lights would illuminate in a different color, indicating the climb credit from the power-up.
If a robot climbed with a second robot on a buddy lift platform and that second robot did not reach the required height, then the ref would score only one of the two robots and only one of the two lights would illuminate. That ref could then explain why the second robot was not scored.
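A similar sketch for the 2018 variant, where one light changes color to show the Levitate credit (the function name and the colors are my own assumptions, not anything from the actual game):

```python
# Hypothetical 2018 scale light stack: three lights per alliance, where a
# credited climb lights white and a Levitate credit lights a distinct color.
LIGHTS_PER_ALLIANCE = 3

def scale_light_colors(climbs_scored: int, levitate_used: bool) -> list[str]:
    """Return one color per light: 'white' per credited climb,
    'amber' for a Levitate credit, 'off' otherwise."""
    colors = ["off"] * LIGHTS_PER_ALLIANCE
    for i in range(min(climbs_scored, LIGHTS_PER_ALLIANCE)):
        colors[i] = "white"
    if levitate_used and climbs_scored < LIGHTS_PER_ALLIANCE:
        colors[climbs_scored] = "amber"
    return colors

# Two robots credited with climbs plus the Levitate power-up:
print(scale_light_colors(2, True))   # ['white', 'white', 'amber']
# Buddy lift where the second robot fell short: only one light.
print(scale_light_colors(1, False))  # ['white', 'off', 'off']
```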
It seems to me that such an indicator system would be technically possible to implement with the technology FIRST already uses in the arena. Multi-colored light stacks are used in the driver station area, and lights are often used to indicate various things on the field. It also seems to me that such a system would give the referee team a simple way to cross-check their scoring and avoid the unfortunate situation where they have to explain that they did not see something that occurred in the endgame.
Just to be clear, this is by no means meant to disparage the wonderful job that the referees do at each and every tournament. They have a challenging job. My goal with this idea is to make their job a little easier, reduce the possibility of mistakes, and improve the experience for the teams by giving them more visibility into the scoring that occurs in the final, critical moments of the match. It might also improve the audience experience, since they could watch the endgame scores being displayed.