Automated Scoring

Over the years, FIRST has used automated scoring to give accurate real-time points and to ease the workload on the refs. This year, FIRST made every scoring system automated (besides fouls, of course). I was hoping to get some other perspectives on whether this is a good thing, and whether it should continue in the coming years.

In 2016, only the boulders were automated. The endgame points and the defenses were scored manually (to my understanding, at least). Climbs not showing up on the scoreboard allowed for more suspense: sure, you could do the math, but fouls that may have been left out, and teams that were just barely over the line, added to the anticipation of the final score. Those who were at champs last year must remember Finals Match 3, when everyone was on the edge of their seats seeing how close the scores were. That excitement is lost when everyone has climbed with 7 seconds left and the scores are final. Of course, last-minute climbs and fourth rotors were still exciting, but they were lessened by the fact that placing a gear at 5 seconds left doesn’t help if the human players can’t get it on in time. It’s the same idea with fuel, due to the time it takes for the points to register. So climbing early was really the best thing to do, and it led to people knowing the outcome before the animation even played. I personally think that endgame points should not be counted during the match, and that we should only get to see them on the final score screen. (Some may prefer it this way, which is what I am curious about.)

When it comes to counting fuel, Cory mentioned it in the Lessons Learned thread:
https://www.chiefdelphi.com/forums/showthread.php?t=158283&highlight=the+negative

The boiler had some inaccuracies. This can be seen in Carson qm83, where we shot 10 balls of fuel (https://youtu.be/lUiFPrVV35s) but only had 9 count. Another is Carson qm67, where 33 was 1 ball away from 40 kPa and winning the match. If you count every ball, it turns out they actually had the right amount, or would have made it if they had been luckier with fuel getting scored in auto vs. teleop. This one ball took 33 from 1st seed to 2nd seed (and led to the scorching of the division.) Is this randomness okay since it is an issue for both alliances? Luck is always a factor in any competition, so is it okay to have a scoring system that relies on a little luck? I am torn on this, so I’m very interested in hearing what people think.
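
For context, here is roughly how the pressure math worked as I understand it from the manual. The ratios are from memory, so double-check them before relying on this, but they show why a single miscounted ball (or a ball landing in auto instead of teleop) could swing the 40 kPa threshold:

```python
# Sketch of the 2017 kPa math (ratios from memory of the game manual,
# treat as assumptions): auto high-goal fuel is worth 1 kPa each, teleop
# high-goal fuel 1/3 kPa each, low goal 1/3 and 1/9 kPa respectively,
# and only whole kPa count toward the 40 kPa bonus.
import math

def kpa(auto_high, tele_high, auto_low=0, tele_low=0):
    pressure = auto_high + auto_low / 3 + tele_high / 3 + tele_low / 9
    return math.floor(pressure)

# One ball shy: 119 teleop high-goal fuel is only 39 kPa...
print(kpa(auto_high=0, tele_high=119))   # 39
# ...while the same 119 balls, 10 of them scored in auto, clears 40.
print(kpa(auto_high=10, tele_high=109))  # 46
```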

Although I have never been a ref, I can’t imagine it’s an easy job. No one wants to make a judgment call that could win or lose a match. I think the fouls this year were pretty fair in that regard; pinning is always a gray area, but it didn’t come up often enough to be an issue. Was it easier for refs to focus on fouls since there wasn’t anything else to look for, and hopefully make better calls since that was their one job? (Not to say that refs did a bad job in past years, only that it’s easier to do one job than two.)

Should fully automated scoring be a continuing trend, or should it be an occasional luxury for the major aspects of the game? Is it okay for automated scoring systems to be a little random at times, or should FIRST design the game around a way to guarantee every point is exact (for example, keeping scored fuel out of play so that refs can count it at the end for an official score)? Is knowing the final score before the match is over a good way to keep people engaged? I am very curious to see how people feel about this and to hear their experiences with the fully automated scoring system. If I made an error in anything I said, please correct me. Thanks.

You have a lot of points here, so I’ll try to address them one by one.

As a spectator, automated scoring is one of the best things to happen to FRC events. Within a margin of a couple of seconds, the scoring is updated to match exactly what has happened on the field, ignoring penalties (I’ll address those when I bring up your point about referees).

The mechanisms used, of course, can’t be perfect. I’ll address each individually.

Touchpads:
I can’t tell you how many touchpads I saw get fixed or replaced at the end of matches because robots slammed into them on the climb and didn’t let up. But when they weren’t getting destroyed, they scored more or less exactly as they should. I think we might have granted one climb to a team that was up but whose touchpad didn’t trigger, but that would have been at least a month ago.

Airship / Rotors:
Again, generally consistent. I remember trouble in early weeks getting some of the rotors spinning when triggered, but the sensors themselves didn’t seem to fail. In those situations, the score would still be updated, and I’m pretty sure the light next to the rotor still turned on.

The one consistent failure case I remember was human players waving their hands back and forth to signal to the driver station, inadvertently triggering the sensors and giving extra rotors. This was pretty easy to catch. Sometimes human players would be confused about why they weren’t getting credit for turning a rotor; usually it was because they had forgotten to turn the earlier rotors.

Boiler:
Mostly consistent, but undoubtedly more issues than the other scoring mechanisms. It got the benefit of being the least-used automated scoring mechanism of the year. Unfortunately, it had some really strange design decisions: not having enough torque to spin both serializers at the same time at the right speed, and having a light sensor* to count balls leaving. I think this would have been a fine solution if the fuel wasn’t a wiffle ball, but it was, so they had to have some way to account for the additional complexity from potential false positives and negatives when counting fuel.

At each competition, the serializers needed to be calibrated within a certain margin of error. A small margin, but as with machining, some margin of error is unavoidable. As we knew this year from the gross number of jokes and memes about it, fuel mattered**. At least, getting just enough for a point or two did, which is why the calibration was so important.
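
To illustrate why counting wiffle balls with a beam-break sensor is fragile, here is a minimal sketch of the kind of debounced counting logic involved. This is my guess at the approach, not the actual field code, and the sample rate and debounce window are made-up numbers:

```python
# Hypothetical sketch of beam-break fuel counting (not the actual field
# code): count one ball per blocked->unblocked transition, ignoring blips
# shorter than a debounce window. A hole in a wiffle ball can momentarily
# unblock the beam mid-ball (double count), and two touching balls can
# read as one long break (undercount): exactly the failure modes people
# saw at the boiler.
def count_fuel(samples, sample_dt=0.001, min_break=0.010):
    """samples: iterable of booleans, True while the beam is blocked."""
    count = 0
    blocked_time = 0.0
    for blocked in samples:
        if blocked:
            blocked_time += sample_dt
        else:
            if blocked_time >= min_break:  # long enough to be a ball
                count += 1
            blocked_time = 0.0
    if blocked_time >= min_break:  # a ball still in the beam at the end
        count += 1
    return count

# Two balls passing back-to-back with no gap register as one break:
one_ball = [True] * 15 + [False] * 10
two_touching = [True] * 30 + [False] * 10
print(count_fuel(one_ball))      # 1
print(count_fuel(two_touching))  # 1 -- undercount
```

Calibration is basically tuning those thresholds so the miscounts mostly cancel out.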

Every year, teams get shafted by the scoring system missing something. Take MSC Finals 1 last year, where somewhere between the referee and the scoring table, 67’s final crossing was missed. (Head Refs ran the field, and of course they caught the crossing.) If I remember that match correctly, the referee recorded the crossing, but the packet with that update was dropped on the network and never recognized by the scoring system. That one was overturned, but there were other matches where mechanisms failed and consequently gave the other alliance the win. Take 2016 Carver QF116, where 27’s successful auton ball literally wasn’t counted. This didn’t get overturned because the proof required video, which can’t be accepted by the field crew in order to overturn calls. I don’t see this changing anytime soon, and I don’t think it should. These are both examples involving 27, but only because I happened to see those; I have no doubt there are others. Unfortunately, this is a consequence of more complicated games requiring better automated scoring. 2013 proved to be a struggle in this regard as the season went on, with a volunteer eventually dedicated to counting discs once they realized that the weighing system used to count discs that year didn’t play well with the chain draped to dampen the speed of flying frisbees.

I think the additional mechanisms to aid the referees in scoring were necessary this year. Take climbing. Last year, we judged climbing from the sides of the field and had to give the benefit of the doubt. That was in part due to the different requirements for a climb last year, which were a result of other design decisions made in that game.

This year, the referees didn’t have to worry about asserting that climbs happened. Our responsibility was to point out false positives (ropes with tension that caused the touchpad to trigger, etc.). It helped that there were penalties associated with these actions, because it meant that every one of our responsibilities was something we could address with a flag waved or a card raised.

On Reffing:
Reffing has always had its difficulty curve, and this year was no different. I think refs from before me can tell you all about how 2014 was not a good year for referees, but my memories of that year are as a spectator in my freshman year of college.

Each year the difficulty changes. In 2015, I remember the one tricky thing being balancing scoring at your platform with making sure human players weren’t committing penalties when they handled the pool noodles. For the most part, it wasn’t bad. That year had other issues :)

Last year, referees generally had larger areas to cover, with less overlap than this year. One referee on each side was dedicated to scoring crossings, the other two on the sides were responsible for the courtyards, and one in the center covered the center third of the field. You also have your head ref, who watches the entire field to provide additional eyes when lots of action is happening in a small area.

This year, there was a ref dedicated to each quarter of the field, with well-known hotspots for action. The two on the scoring-table side were masters of the retrieval zone, while the ones on the opposite side worked with the head ref to cover the central zone and their boilers. All refs kept an eye on their airship when they weren’t dealing with multiple robots in their zone. This year, the ref allocation felt fairly balanced, in part because each zone of the field had multiple eyes on it. That wasn’t as apparent last year with the defenses, which more or less gave us tunnel vision into the zones we watched.

Overall, I was very happy with how FIRST set up the referees. Having multiple sets of eyes on each area of the field increased consistency in calls, which is very important in every game. If FIRST can continue with redundancy in the referees’ responsibilities, then we can have games that are better reffed, which is to everyone’s benefit. More than that needs to happen for games to have better automated scoring systems, which depends on continuous consideration of rules design, overall game design, and the engineers working on automated scoring.

Automated scoring makes the referees’ lives easier. Games with anything more than basic scoring systems benefit from it, but I don’t think we’ll ever see a faultless system. It’s still not meant for every game. 2014 was a great game to watch as a spectator because of its simplicity, although the scoring system didn’t lend itself to being automated whatsoever.

I think referee involvement in scoring needs to be inversely proportional to how much regulation there is on other aspects of the field. 2014 was very taxing because there was so much for referees to track all across the field, both in scoring and in assessing penalties. 2017 was pretty decent on referees because we only had to worry about assessing penalties. I’m not terribly picky about FIRST going toward games with more or less automated scoring, but I don’t think we should have only one or the other in any given year.

*Not 100% sure I’m naming the right type of sensor
**Everything mattered; most teams were so proportionally bad at fuel that being able to do it at all put you over the top

Automated scoring should be used when it can be implemented easily, or when refs can’t physically count how many game pieces were scored (fuel, frisbees).

In those cases, though, people are going to have to accept that it will never be foolproof. The complainers will always complain about such systems not being perfect, but you’ll never get a perfect system. I’m ALMOST over the many, many bad calls and non-functional touchpads on minibots in 2011…

In 2013, didn’t they have someone count the frisbees at the end to get an accurate score? I feel like imperfect automated scoring during the match is okay, as long as there is some way to verify it at the end. It may take a long time to get final scores, but in a year like this, where they had to replace touchpads and pegs so often, there was some downtime already.

I prefer games that have automated scoring as an estimate, with a final count done at the end of the match. 2013 was probably the best game for this, since it gave a decent guess at how many frisbees were scored, and they could be accurately counted afterwards. I know it was meant to be fully automated scoring, but I think it worked out better this way. Game pieces being returned to the field after being scored are really the only thing preventing a final count at the end of a match. Accurate scoring could also work like 2014, where there is only one ball per alliance, so it’s a lot easier to track points.

I like that the scoreboards update in real time, but I wish refs didn’t automatically assume that the automated score is always correct. There were several times this year, between the boiler and the touchpads, where it looked like points should have been counted but weren’t, and many times the refs wouldn’t take more than a glance from a distance at the issue before releasing the final scores.
It should not be assumed that an automated system will work correctly 100% of the time, and when errors do occur, they should not be allowed to decide the outcome of matches.

I agree with you - nobody on the field trusts the automated scoring system blindly. But the referees are not responsible for making sure the touchpads and the automated scoring systems are always up to snuff; that’s the FTAs and, at least in Michigan, also our Field Supervisor. It was more or less habit for the field crew to check the touchpads and pegs between matches.

I assure you, you can find matches that were decided by scoring errors within the calibration’s margin of error. However, in any situation where a student identifies a piece of hardware on the field to show that a field fault occurred, if it affected the outcome of the match, there is a match replay.

Remember, the volunteers aren’t there to get through the 100 matches and be done; we’re there to make sure everyone is a part of, and gets fair treatment across, all matches.

Don’t get me wrong, I place far more of the blame on the game design than I do on the volunteers. Past games were designed so that they could be manually scored (albeit some more easily than others); this year’s game was not. There was no practical way for a person to determine on the fly how many balls were scored, or whether the touchpads were really supposed to be activated. But had the game been designed in a way that made automated scoring secondary, it would have been possible to use a different, perhaps more consistent metric to determine scores, instead of “limit switch triggers light to turn on” or “IR sensor counted this many breaks in the beam,” which are susceptible to mechanical and electrical failures or inconsistency.

As someone who has worked with the scoring system at events, I want to say that FIRST had the FTAs and/or field staff run calibration every morning of competition to make sure the boiler count was accurate. On the match in question where 10 fuel were shot but only 9 scored, there are three possible situations I can think of (note that I was not on Carson as a volunteer or field staff): one, two fuel were too close together, so one of them didn’t count; two, a fuel passed the sensor right as auto ended, so it was counted as part of the teleop score; or three, a fuel was stuck in the center of the high boiler. I can’t say much beyond that, but maybe an FTA or a field volunteer who worked Carson can give their point of view. In regards to automated scoring, I have to agree it is one of the best things for FIRST as far as audience members and the general public are concerned.

I think the automated scoring is a great way to take the load off the refs and let everyone see scores in real time; I only see one problem with the system. We competed in the Galileo Division in Houston. In Qualification Match 68 we were with 1538 and 4112. Before the match we were around 20th, and getting 3 RP would bump us up into the top 10. Toward the end of the match we needed one more gear to get 4 rotors. So with around 25 seconds left we grabbed another gear, got heavy defense played on us, delivered, and climbed in around 8 seconds. At the end of the match we spun the rotor crank 3 times; during the fourth spin, 1538’s pilot’s lanyard got under a gear and the gear popped off. We never got the fourth rotor. All refs agreed that we spun it three times, but the automated scoring system didn’t pick it up. We asked and asked and tried to get the fourth rotor, but they said they could not overrule the automated scoring system, even though the refs all agreed that we spun it.

Overall, they were told not to overrule the automated scoring system. We never got the fourth rotor, and even though they all agreed we spun it 3 times, even for the good of the game and the accuracy of the rankings they could not overrule the automated scoring system.

I think the refs need the power to overrule the automated scoring system. Absolutely no disrespect to the refs working that match, or to any refs that made a mistake, but they need that power.

At Hudson Valley, we had an instance where one of our alliance partners had one of the three “prongs” on the touchpad pushed all the way up, but the touchpad never came on. We contested it immediately. The FTA chalked it up to the delay associated with the touchpad (which doesn’t make sense, because you can still hold it through T=0). The head ref refused to change the score, even though you could look at the touchpad and see that it was still pushed up.

The point of this anecdote is that FIRST needs to have protocols in place for scores to be corrected, and needs to make that a point of emphasis to head refs. It lost us the match, and I would be lying if I said I wasn’t still a bit upset about it.

Understandably so. And are you 100% sure they DON’T have that protocol? Because if you are, I’d like to make a little bet with you that one exists. (I only take safe bets, BTW, defined as “ones I’ll win.” ;)) Now, whether any such protocol was followed is open for discussion. I wasn’t there, so I can’t say for sure.

The points are awarded by rotors turning, not crank handles turning. You’ll notice that 3 turns of the handle won’t count for anything if, say, there’s an 11-second gap between the second and third turn. Yes, you do need to turn the crank handle 3 times to start the rotor.
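
I don’t know the actual field logic, but the behavior described suggests something like this sketch: turns have to land close enough together, or the count resets. The 10-second window is a guess extrapolated from the 11-second-gap example, and none of the names here come from the real system:

```python
# Hypothetical sketch of the rotor-start logic as described above (not
# the actual field code): the sensor needs to see three crank
# revolutions close enough together; a long gap resets the count, so an
# 11-second pause between turns means starting over.
WINDOW_S = 10.0  # guessed timeout between successive turns

class RotorSensor:
    def __init__(self):
        self.turns = 0
        self.last_turn_time = None
        self.started = False

    def on_revolution(self, t):
        """Called each time the sensor gear completes a revolution at time t."""
        if self.last_turn_time is not None and t - self.last_turn_time > WINDOW_S:
            self.turns = 0  # too long since the last turn: start over
        self.last_turn_time = t
        self.turns += 1
        if self.turns >= 3:
            self.started = True  # rotor spins up, points awarded

r = RotorSensor()
for t in (0.0, 1.0, 12.5):  # an 11.5 s gap before the third turn
    r.on_revolution(t)
print(r.started)  # False -- the gap reset the count
```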

I should also point out that if the rotor had been seen turning before the gear popped off, someone would probably have gotten red-carded. Which was probably a fraction of a second from happening…

Having an overrule is great, and some automated scoring could be overruled this year (obvious malfunctions), but then you get the case of “But So-and-So got an overrule in this similar case, why can’t we get one?” (See also: “But it passed at X Regional!”)

Automated scoring is great when it makes sense to use it and the system is appropriately designed for the game. I remember back over a decade to the first use of automated scoring (’06). One part worked quite well (the high goal, when the goal wasn’t jammed solid), but the other part was prone to problems. I still remember an entire FINALS match from that year that was played wrong because a sensor had recorded too many balls in the low goal in auto, an impossible number given the starting quantities allowed, caused by too many balls in the catch corral. (Someone hadn’t followed manual instructions from the most recent update that year, IIRC.) Manual backups were implemented… Ever since, it’s been spotty. 2010 was great for scoring… but the automated penalty was a nuisance. 2012 worked well; in 2016 I don’t think there were ANY problems in the boulder-scoring system. 2011’s minibots, not so much.

Automation has its limits. It’s not the end-all (as I pointed out to some folks last year, correctly scoring crossings via automation would be rather difficult without a massive effort including color recognition), but it does take a big load off of the refs. (So does end-of-match, position-based object scoring, which doesn’t require automation at all. Too bad that, by my count, that’ll be the 2019 game, not the 2018 game.) On the other hand, it has to WORK. Sometimes it’s just easier to train a scorer. (2014 and 2016 should, IMO, have had dedicated scorers trained in possessions and crossings, respectively, in addition to the refs.)

Comparing this year’s FTC and FRC games, I can say I would gladly take automated scoring over human recording any day. I think future FRC games should be designed for purely automated scoring. Sure, automated scoring has its problems, but would you rather have another game like 2014, where refs have to watch every robot AND score at the same time?

I think a game designed for automated scoring will run much more smoothly than one that’s not (i.e., 2014 vs. 2017). If this game had a few tweaks (smaller airships, point balance, less fuel, a sturdier field), it would easily be one of the best games in recent history. Even though this game had some negatives, I hope we send a message about what improvements we would love to see next year.

That’s odd considering, as I recall, there were events in the early weeks where the sensors didn’t work at all and refs had to enter the scores manually. In fact, I seem to recall there was even a team update describing the proper way for pilots to get the refs’ attention so they could see them spin the rotors if the sensors didn’t work.

I think automated scoring is a good thing and is probably here to stay. If there is a fully automated scoring game again, however, there need to be more manual checks. At Wisconsin there were several matches where alliances were credited with 4 rotors even though they scored fewer than 12 gears (some were corrected, others were not). At Midwest there were at least two occasions where robots were credited with a climb even though they had just pulled the rope to the side, depressing the touchpad. Both of these errors could easily have been corrected with a quick 30-second manual check at the end of the match.

I would love to know what this protocol is. Our team was also negatively affected by gears falling out of the 4th rotor’s gear train while it was being spun up (without adjustments to the score/RP). I’d love to get some insight into that process, so that in the future I can understand why points are or aren’t being added.

How sure are you that a standoff was actually pressed in the 1/2" required before T=0? Remember that just being in contact with the touchpad does not satisfy the scoring rule; it requires at least one of the standoffs to be pushed in continuously. The stupidly simple design of the touchpad mechanism meant that any one of the standoffs needed to be pushed in 1/2", and you could switch between them as long as at least one of the three was engaged at all times for the duration (push 1 in with a swinging bot and engage 2 before 1 disengaged, and the count would continue toward lighting). Was the robot rocking violently during the climb, like many did this year, thus engaging and disengaging all the standoff detection switches?

I saw several times this year where they weren’t, but the robot’s momentum (chalk it up to 100:1 and similar gearboxes) moved it past that contact point after T=0, making it look like it was up when it really wasn’t by the time the system stopped looking. Some robots had a good amount of continued upward momentum on the rope after the disable due to those massive gear ratios.

I also saw several times where the light did illuminate after T=0, because a standoff was pushed up between T=1 and T=0 and the carryover worked as intended for the full second. I ran that test myself during field tests at my events to make sure I knew what was happening when I saw it, per the game manual.
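
Putting the behavior described above together (any one of three standoffs counts, switching between them is allowed, and a press begun before T=0 gets a one-second carryover), the detection logic would look something like this. This is my reconstruction from these posts, not the actual field code, and the hold and carryover durations are assumptions:

```python
# Hypothetical reconstruction of the touchpad logic described above (not
# the actual field code). Assumptions: a climb scores when any one of
# the three standoff switches has been engaged continuously for HOLD_S,
# switching between standoffs is fine as long as one is always in, and
# the system keeps evaluating for CARRYOVER_S after T=0 for presses that
# began before T=0.
HOLD_S = 1.0       # assumed continuous-hold requirement
CARRYOVER_S = 1.0  # grace window after T=0, per the posts above

def climb_scored(samples, t_end):
    """samples: time-ordered list of (t, [sw1, sw2, sw3]) boolean samples."""
    press_start = None
    for t, switches in samples:
        if t > t_end + CARRYOVER_S:
            break                      # the system stops looking here
        if any(switches):              # any one of the three standoffs counts
            if press_start is None:
                press_start = t        # a new continuous press begins
            if press_start < t_end and t - press_start >= HOLD_S:
                return True            # held long enough, started before T=0
        else:
            press_start = None         # all standoffs released: reset
    return False
```

Under this model, the momentum case above doesn’t score: a press that only begins after T=0 fails the press_start check, even though the pad looks engaged once the field is disabled.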

Having the head ref look at a touchpad after the match is over would be a bad precedent. Would you like a valid score removed because a robot backed down and disengaged after the T=0 disable? That would be asking the same thing.

The automated scoring for the touchpads worked incredibly well, one of the best systems I can remember in the 7 years now that I’ve been on the field for the duration of the competition (FTA & FTAA). Climbing under the airships to do the wiring was not the most fun, but I’ll take it over some other field elements I’ve had to deal with through the years.

The 3 turns need to happen at the gear closest to the rotor, where the sensor is. In the case of the 4th rotor, it takes almost a full 4th turn of the crank to actually get that sensor gear to turn 3 times, due to the accumulated lash in the gear train.

Also, don’t turn the crank like you are trying to start a Ford Model A. Turn it steadily, maybe 1/2 second per rev, and it’ll start the rotor quicker. Cranking it too fast can cause the gears to kind of ride up the axles and not trigger the sensor properly. Go slower to go faster.

Hate to say it, but if gears are falling out as rotors are being spun up, the protocol isn’t going to give you points. (If the rotor fails to respond after the spinning, you’ve got a chance.)

Let’s put it this way: It’s a way of avoiding replays due to obvious field faults by correcting them as seen, IF somebody sees the issue in time.

I can’t speak for the general case, but I was present for one instance in an early week where it took a call back to FIRST Headquarters, and a decision by them, to overrule the automated scoring system, when everyone watching had seen the error.