Why is Real Time Scoring So Bad in Destination Deep Space?

Real-time scoring has been very bad so far this year, and I know why: humans are scoring the game pieces in real time, and referees are sharing panels with scorers, which slows down the entry of penalties. Sandstorm scores other than driving off the platform aren't even entered until after the sandstorm period ends. All of this makes the game hard to follow for the average observer, and it makes it almost impossible to track what your opponent is doing.

My real question is not why real-time scoring is so bad, but why they chose to make it this way. Why isn't the score counted off of sensors? We've made such great advancements in the use of sensors for accurate real-time scoring over the last three years; why did that go out the window this year? It takes away from the fun of watching the game if you basically have to track the score yourself.


Every single time FIRST has attempted to use automatic sensors, we FIRSTers have found ways to break them, from scoring too fast to the sensors simply not working.

A scorer always works. A sensor on the field does not, and troubleshooting one takes time away from gameplay. I expect folks would really prefer not having to deal with an unreliable scoring system. See 2006, 2010, 2013, and 2017 for reference.

FIRST does continue to implement sensors where they make sense. But instrumenting the scoring locations in Deep Space would take a minimum of 40 sensors PER alliance, one for each game piece location. After that, you'd need to consider the wiring, the inputs to handle it all, and then the setup and troubleshooting time at each event.


I’ll ask you one really simple but extremely difficult question.

What sensors do you know of that could properly detect all scoring positions of hatches and cargo?


The reason I have heard for the hatch panel covers and the cargo scoring to be manual is that loading the field with 80 sensors would be cost and complexity prohibitive.


Consider some of the recent games:
2018 - Switch and Scale scoring needed 6 cheap limit switches per field, while vault scoring needed 6 more expensive proximity sensors
2017 - The rotors needed 8 light-break sensors per field, while the boilers needed 4 (maybe 8 due to the holes in the balls?) light-break sensors per field
2016 - 4 light-break sensors per field

This year, automated scoring would need 40 sensors, at minimum, just for the hatch panels, and it's not clear to me what sensors would guarantee accurate scoring in all situations. You would then need another 40 sensors, at minimum, for cargo scoring, plus some way to filter out false positives, like cases where a hatch panel is halfway inside the rocket or cargo ship. That's a lot of sensors, which means a lot of money, and a significant challenge to ensure accuracy of the scoring system. Humans, while slower, are both cheaper and more accurate in this situation.
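The false-positive filtering mentioned above could be as simple as a debounce: only count a sensor as "scored" once it has read blocked continuously for some dwell time, so a half-inserted hatch panel or a passing robot doesn't register. A minimal sketch (class name and threshold are hypothetical, not any actual FMS interface):

```python
class DebouncedScorer:
    """Counts a game piece as scored only after the sensor has read
    'blocked' continuously for `dwell` seconds, filtering out brief
    false positives such as a half-inserted hatch panel."""

    def __init__(self, dwell=0.5):
        self.dwell = dwell          # seconds the reading must hold steady
        self.blocked_since = None   # timestamp when blocking started
        self.scored = False         # whether this location is already counted

    def update(self, blocked, now):
        """Feed one sensor reading; returns True only when a score is
        newly confirmed."""
        if not blocked:
            self.blocked_since = None
            self.scored = False
            return False
        if self.blocked_since is None:
            self.blocked_since = now
        if not self.scored and now - self.blocked_since >= self.dwell:
            self.scored = True
            return True
        return False
```

The same filter works for beam-break, photo-eye, or proximity inputs; only the dwell time would need tuning per game piece.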


Given the allowable variations for placing a HATCH PANEL, particularly on the rockets, it’s pretty hard to figure how to reliably score this with one sensor (unless it’s a camera backed by vision processing).


As someone who works in an automated manufacturing plant as a software engineer, the answer isn't difficult.

A photo-eye or camera per individual cargo/rocket bay.

Did I say it would be cheap? No…but it’s possible.
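Aggregating one photo-eye per bay into a live score is the easy part; a sketch (the sensor interface, bay names, and the 3-point cargo value are assumptions here, not the official FMS):

```python
CARGO_POINTS = 3  # assumed point value per scored cargo

def live_cargo_score(eye_readings):
    """eye_readings: dict mapping bay id -> True if that bay's
    photo-eye is blocked. Returns (points, list of scored bays)."""
    scored = [bay for bay, blocked in eye_readings.items() if blocked]
    return len(scored) * CARGO_POINTS, sorted(scored)
```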


Actually, the switch and scale in 2018 used beam-break sensors (https://youtu.be/kA6eSv4l280), but your point still stands.

Eh, close enough 🙂


My biggest problem with the scoring this year is that sometimes they’ll score the HAB points before the audience display goes away and sometimes they won’t. I would seriously prefer if they just didn’t count HAB points until 5 seconds after the match (when they should be scored) and after the audience display goes away.

When they score HAB points for Red but not for Blue it really screws things up.


Agreed. We got used to this after a while and would do the mental math, but it’s very confusing for audiences. They could even have the announcers say “And here are our scores, with the climb bonuses added!”

Sensors shouldn’t be added after the game is designed, they need to be included in the design process. Obviously it would take a crazy amount of sensors to give this field, as-is, automated scoring. But, if the GDC valued automated scoring, it should drive design decisions instead of being tacked on.

For example, the cargo bays could each fit only one ball, and a single weight sensor could span all 8 bays. Same for the rockets. Design changes focused on automated scoring drop you from 40 cargo sensors to 6.
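That single-weight-sensor idea could work by rounding the net load to the nearest whole-ball weight and rejecting ambiguous readings (say, a robot leaning on the structure). A sketch under assumed numbers; `BALL_WEIGHT` and the tolerance are illustrative values, not field specs:

```python
BALL_WEIGHT = 0.45  # kg, assumed mass of one CARGO ball (illustrative)

def cargo_count(net_load_kg, tolerance=0.25):
    """Infer balls scored from one load cell spanning all eight bays.
    Returns None when the reading isn't close to a whole-ball multiple,
    so the system can fall back to a human scorer."""
    balls = round(net_load_kg / BALL_WEIGHT)
    if abs(net_load_kg - balls * BALL_WEIGHT) > tolerance * BALL_WEIGHT:
        return None  # ambiguous: something besides balls is on the sensor
    return max(0, balls)
```

The fallback-to-human path matters: it keeps the manual scorer as the backstop the earlier posts argue for, while letting the sensor handle the common case.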


I think that this game, even in its core elements, is ridiculously difficult to score. What if a robot dies on top of a cargo ball? What if a ball pops in the scoring position? What if, what if, and so on and so forth. In particular, sensing hatches is really difficult if we want the velcro solution. A more easily sensed solution would most likely require much more precision from teams in placing hatches, and it's clear that the GDC (with good reason) wants a relatively low floor of play.

Real time scoring is great and I would love for every FIRST game to have it, but that would limit the types of games the GDC could make. RTS would not work for 2011, 2014, and 2016 defenses, just to name a few. I think 2013 had the best RTS for what the game was (massive quantities of frisbees), but then was verified afterwards to ensure an accurate score.

Simply put, designing a game around RTS limits what kinds of games you can make, which would lessen the "challenge" portion of FIRST. It should be implemented where it can be, but should not be a dealbreaker for a challenge.


2011 had automated minibot scoring. 2016 had automated boulder scoring. 2018 scales and switches were automated. Some of those could have been manual. That would’ve sucked. You need a balance, and this year’s game is too far to the side of manual scoring when it didn’t need to be.

I proposed a way to automate one scoring aspect. I’m sure there are more and better ways to do it. Also, only automating cargo scoring but having manual hatch scoring would still be better than what we have now.

This. We lost a few matches where it looked like we were in the lead and then the final score popped up ten points behind. It seems like robots in questionable states (still using a manipulator to hold themselves up rather than being firmly parked) don't get scored until it's clear they'll stay on HAB 3. Some consistency would be nice, because climbing points are often worth a quarter or more of an alliance's score.

I don’t think the issue is how real-time scoring could be done for Deep Space, but instead, why the GDC would design a game that obviously can’t be automated for real-time scoring. I think that if FRC really wants to be seen as a competitive “sport”/competition this needs to be prioritized. Imagine watching a basketball game or something similar where the score isn’t accurate until after the game. Real-time scoring makes close matches so much more exciting and provides feedback to drive teams.


Forgive my ignorance… but is basketball scored automatically, or manually? What about Football, Soccer, Baseball? Is there any major sport that actually scores using sensors, with no manual input required, or are they all done by humans?

We like automated scoring, yes. But in the interest of game variety and developing challenging, interesting games, I’m willing to forego it once in a while.


Please tell me how many of those games have more than one ball/game piece/etc.

Those games are fundamentally simpler and thus allow for easy manual scoring.

It is not an unreasonable expectation to want accurate real-time scoring, especially in a game where you can't see every location where the opposing alliance is scoring or has scored.

At the higher levels of play, I would imagine mistakes may end up being made all because the real-time score didn't accurately show how close things were.

Doesn’t bowling have automated scoring?