Week 2 - Wednesday
This post is all about how we estimated how many points we will need to win an event in Week 1.
During the offseason we spent time trying to figure out how you can predict scores for a new game. The problem is complicated: there are a lot of variables you can't pin down before you see the first matches. So the first step is to make assumptions! We decided to take a look at the historical data and see whether it could help us.
Historical data is available; we have about 30 years' worth, so at least there is plenty to cherry-pick from. But the reality is that games aren't what they used to be, and that isn't just me being nostalgic for games I never even played: it's also a fact that games have evolved over time. Notable revolutions in this evolution were Double Trouble and Triple Play, when all of a sudden you didn't even have the same number of robots working toward objectives. Every year brings another step in the game's evolution, so it is important to decide which games are still relevant to the modern game.
After a lot of debate and research we decided to start with what is often called the sports game era (2010-2014). A decision was made, and we started scouring The Blue Alliance for game data. Almost immediately we ran into trouble: 2010, 2011 and 2012 often have only the match results, not the detailed scoring breakdowns, so unfortunately they aren't useful to us. Easy call: we're no longer considering these games modern FRC (in my head there are now two ages of FRC: the Data Glory Days… and the Dark Ages). Another cut had to be made for 2020: great game, great data sets, but it was only played consistently in Week 1, and its last events were played in Week 3.
So we've narrowed it down to 2013-2019: still seven games, so not bad, plenty of data to go from. I already had some scripting that downloads data from The Blue Alliance, calculates OPR (explained better than I could, by people that are smarter than me, in places like The Math Behind OPR — An Introduction on The Blue Alliance Blog) and puts it in an Excel spreadsheet. We ran that for those years, not only for the match results but also for individual scoring objectives. After taking a look at the data we had another epiphany: not all scoring is the same. Some games score linearly (the same action repeats for the same score: think scoring cargo in 2019) and some games score stepped (the same action doesn't always score the same points: think gears in 2017). As great as some of the stepped games are, they aren't very useful for predicting future score development, so unfortunately they bite the dust as well: so long, 2014 and 2017. We had two other dropouts for other reasons: 2015 had to go for being played on a half field without any chance of defense, and 2018 because its scoring mechanic hardly depends on robot quality (in theory an alliance could score far fewer cubes in one match than another alliance in another match and still end up with insanely more points).
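For readers curious what the OPR calculation in that scripting actually involves, here is a minimal sketch. The function name and data shapes are our own simplifications, not the real Blue Alliance API response format: each entry is one alliance's teams and its final score, and OPR is the least-squares solution that best explains all alliance scores as sums of per-team contributions.

```python
import numpy as np

def calc_opr(alliance_results, team_ids):
    """Estimate each team's Offensive Power Rating by least squares.

    `alliance_results` is a list of (teams, score) pairs, one entry per
    alliance per match (a simplified stand-in for TBA match data).
    """
    idx = {t: i for i, t in enumerate(team_ids)}
    A = np.zeros((len(alliance_results), len(team_ids)))
    s = np.zeros(len(alliance_results))
    for row, (teams, score) in enumerate(alliance_results):
        for t in teams:
            A[row, idx[t]] = 1.0  # team t contributed to this alliance score
        s[row] = score
    # Solve A @ opr ≈ s in the least-squares sense
    opr, *_ = np.linalg.lstsq(A, s, rcond=None)
    return dict(zip(team_ids, opr))
```

In a real season the system is overdetermined and noisy, so OPR is only an estimate of contribution, which is exactly why we look at it in percentile bands rather than trusting individual values.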
And then there were three: 2013's Ultimate Ascent, 2016's Stronghold and 2019's Destination: Deep Space. We went to work with that data. In the following three charts you can see the OPR progression from Week 1 through the championships for each of these games, broken down into percentile groups, from the lowest-scoring robots in the 0-10% group to the highest-scoring teams in the 90-100% group.
These graphs tell us a lot about the scoring progression. Basically, every week teams get better in every quality group, except in Week 7, when the bottom 60% outperforms the bottom 60% of Week 8. This is not too hard to explain: the district championships happen in Week 7.
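A sketch of how those percentile bands can be computed from a week's OPR list (`decile_means` is a name we made up; it assumes you already have one OPR per team):

```python
import numpy as np

def decile_means(oprs):
    """Average OPR within each 10% quality band, lowest to highest."""
    sorted_oprs = np.sort(np.asarray(oprs, dtype=float))
    # Split the sorted teams into ten equal-sized groups (0-10% ... 90-100%)
    bands = np.array_split(sorted_oprs, 10)
    return [float(band.mean()) for band in bands]
```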
Scores increase every week, by relatively the same amount. So if we can predict any point that is on this scale, or something that correlates with it, we can predict the scores for every week! So we set to work on estimating what a single top-1% robot could score in a single match by the end of the season, and then used those predicted scores to normalize the three years in the analysis. The predicted scores are generally based on the autonomous period, the maximum expected number of completed cycles, the point values of those cycles, and the value of the endgame. Since we're trying to predict how to win events we only used the top 10% for the graph below:
Not a bad curve fit, right? All three games progress relatively similarly, so we decided our assumptions were at the very least reasonable. Based on this, we're saying the following: you have to score 60% of the ultimate match to win events in Week 1. The average of the three games is about 55%, but we chose 60% to be on the safe side.
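The normalization step described above is simple division: each week's top-10% OPR is expressed as a fraction of that game's predicted perfect-match score, which is what puts all three games on the same 0-to-1 scale. A minimal sketch (the function name and inputs are illustrative):

```python
def normalize_opr(weekly_top10_opr, perfect_match_score):
    """Express each week's top-10% OPR as a fraction of the predicted
    perfect-match score, so different games share one scale."""
    return {week: opr / perfect_match_score
            for week, opr in weekly_top10_opr.items()}
```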
Rapid React Prediction
So if we need 60% of the perfect match, what do we think the perfect match is going to be? We discussed this a lot, checked out other opinions, and arrived at roughly the following.
(Table: Auto CARGO Points, Teleop CARGO Points, Perfect Match Top Tier, Week 1 Top Tier)
You might notice that 12.5 points is not something you can actually score: after all, you get 10 points for the high rung and 15 for the traversal rung. The 12.5 reflects that we don't really know whether the traversal is worth it. Yes, 15 points is a lot, but you shouldn't compare it to not climbing at all; you should compare it to climbing from the high rung to the traversal rung. That takes a while, time you could also spend scoring cargo in the upper hub. If you can't make the climb in less time than it takes to run 1.5 cycles, it's not worth it (at least not in the playoffs). Still, some matches at a high level might make it worthwhile, and again it's better to score higher than we need than to fall short, so we settled on making the traversal half of the time and stopping at the high rung the other half: (10 + 15) / 2 = 12.5 points.
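The arithmetic behind that 12.5 can be written out explicitly. The 10- and 15-point rung values come straight from the reasoning above; the functions and the `points_per_cycle` parameter are our own illustrative framing of the trade-off:

```python
HIGH_RUNG = 10       # points for ending on the high rung
TRAVERSAL_RUNG = 15  # points for ending on the traversal rung

def expected_climb_points(p_traversal):
    """Expected climb value if the traversal succeeds with probability
    p_traversal, falling back to the high rung otherwise."""
    return p_traversal * TRAVERSAL_RUNG + (1 - p_traversal) * HIGH_RUNG

def break_even_cycles(points_per_cycle):
    """How many cargo cycles the extra high-to-traversal time may cost
    before the 5 extra points stop being worth it."""
    return (TRAVERSAL_RUNG - HIGH_RUNG) / points_per_cycle
```

With a 50/50 traversal assumption, `expected_climb_points(0.5)` gives the 12.5 used in the table; plugging your own cycle value into `break_even_cycles` tells you how much climb time you can afford.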
And now we've arrived at the ultimate question: how do we get the 44.7 points we project to need? Without trying to emulate a terrible soap opera, we're going to have to leave you on a cliffhanger for that one. As of now we're finishing up the story on the cycle analysis, which is valuable input for what you have to be able to do to score a given number of points; when it is fully written down it will be released on this blog as well.