paper: Miscellaneous Statistics Projects

#21

For nearly all statistics that can be obtained from official data, one of our biggest issues is separating out individual team data from data points that actually represent something about the entire alliance. However, there was one statistic last season that was genuinely granular to the team level, and that data point was auto mobility. Referees were responsible for marking mobility points for each team individually, so these data points should have little to no dependence on other teams. Unfortunately, auto mobility was a nearly negligible point source in this game, which, combined with the extremely high average mobility rates, made it a generally unimportant characteristic for describing teams. Even so, I thought it would be interesting to take a deeper look into these data to see if we can learn anything interesting from them.

I have uploaded a workbook titled “auto_mobility_data” which provides a few different ways of understanding mobility points. The first tab of this book contains raw data on mobility for every team in every match of 2017. The second tab contains a breakdown by team, listing each team’s season-long auto mobility rate as well as the first match in which they missed mobility (so you can check, if you don’t believe your team ever missed auto mobility). Overall, about 25% of teams never missed their mobility points in auto, and another 18% had mobility rates above 95%. The teams with the most successful mobilities without a single miss (ties included) are:

Team	Successful Mobilities
2337	86
195	85
4039	85
27	84
3663	82
2771	73
3683	73
1391	72
1519	71
2084	71
4391	71
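
If anyone wants to recreate the team breakdown from the raw tab, the aggregation is straightforward. Here is a minimal sketch in Python/pandas, assuming a hypothetical flat export with `team`, `match`, and `mobility` (0/1) columns rather than the workbook's exact layout:

```python
import pandas as pd

# Hypothetical flat export of the raw tab: one row per team per match,
# with a 0/1 "mobility" column indicating whether that team got auto mobility.
raw = pd.read_csv("auto_mobility_data_raw.csv")  # columns: team, match, mobility

per_team = raw.groupby("team")["mobility"].agg(
    successes="sum",   # total successful mobilities
    attempts="count",  # matches played
)
per_team["rate"] = per_team["successes"] / per_team["attempts"]

# Teams that never missed, sorted by total successful mobilities
never_missed = per_team[per_team["rate"] == 1.0].sort_values("successes", ascending=False)
print(never_missed.head(11))
```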

As another point of investigation, I wanted to see if these “mobility rates” would provide more predictive power over future performance than the comparable metric I used in my workbooks last year, calculated contribution to auto Mobility Points. I compared each team’s qual mobility rate, total mobility rate (including playoffs), and calculated contribution to auto Mobility Points at their first event to the same metrics at their second event. Strong correlations imply that the metric at the first event could have been used as a good predictor of second event performance. Here are the correlation coefficients:

The total mobility rate at event 1 had the strongest correlation with all three of qual rate, total rate, and calculated contribution at event 2, meaning it would likely be the strongest predictor. However, this is a little unfair, since the total rate metric incorporates information unavailable to qual rate or calculated contribution. Qual rate and cc at event 1 have roughly even correlation with qual rate at event 2. Qual rate at event 1 has a much stronger correlation with cc at event 2 than does cc at event 1. Overall, this tells me that, if there is a scoring category comparable to auto mobility in 2018, I can probably get better results by using the robot-specific data rather than a cc based on the entire alliance's score. There might also be potential to combine these metrics somehow, but I have yet to look into this.
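
For reference, the comparison itself is just a cross-correlation of the event 1 and event 2 columns. A minimal sketch, assuming a hypothetical per-team table with these six columns (the names are mine, not the workbook's):

```python
import pandas as pd

# Hypothetical table with one row per team that played two events:
# qual_rate_1, total_rate_1, cc_1 for event 1; qual_rate_2, total_rate_2, cc_2 for event 2.
df = pd.read_csv("two_event_teams.csv")

event1 = ["qual_rate_1", "total_rate_1", "cc_1"]
event2 = ["qual_rate_2", "total_rate_2", "cc_2"]

# Pearson correlation of each event 1 metric with each event 2 metric
corr = df[event1 + event2].corr().loc[event1, event2]
print(corr.round(3))
```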

My last way to slice the data is by event. I found every event’s total auto mobility rate, as well as a correlation coefficient between each team’s qual auto Mobility Rate and calculated contribution for that event. I was specifically looking to see if I could identify any events which had an unexpectedly low correlation between auto mobility rates and ccs. This might indicate that one or more referees were not associating the correct robots with mobility points (although points for the alliance would be unaffected). Below you can see each event’s mobility rate versus the correlation at the event between mobility rate and cc for each team. I threw out events at which the mobility rate was higher than 90% since events with extremely high auto mobility rates do not provide a reasonable sample size of individual teams doing unique things.
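
The per-event screening can be sketched like this, assuming a hypothetical per-team, per-event table and approximating each event's mobility rate as the mean of its teams' qual rates:

```python
import pandas as pd

# Hypothetical per-team, per-event table: event, team, qual_mobility_rate, cc_auto_mobility
teams = pd.read_csv("event_team_mobility.csv")

by_event = teams.groupby("event").apply(
    lambda g: pd.Series({
        # approximating the event's mobility rate as the mean of its teams' rates
        "event_mobility_rate": g["qual_mobility_rate"].mean(),
        "rate_cc_corr": g["qual_mobility_rate"].corr(g["cc_auto_mobility"]),
    })
)

# Throw out events with very high mobility rates, then look at the lowest correlations
flagged = by_event[by_event["event_mobility_rate"] <= 0.90].sort_values("rate_cc_corr")
print(flagged.head())  # candidates for referee data-entry mix-ups
```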

Four events in this graph stood out to me for having unexpectedly low correlation coefficients: the Southern Cross Regional, ISR District Event #1, ISR District Event #2, and the IN District - Tippecanoe Event. Of these events, only Tippecanoe has a reasonable number of match videos, so I decided to watch the first 10 quals matches at this event. I discovered numerous inconsistencies between the published data and what I could see in the videos. Here are the ones I saw:
Quals 1: 2909
Quals 2: 234
Quals 7 (good music this match :slight_smile: ): 3147
Quals 10: 3940

My best explanation for these data is that one or more of the referees at this event (and potentially at the other low-correlation events) did not realize that their inputs corresponded to specific teams. Overall, the mobility rate data seem to be better than the calculated contribution data, so I'm not complaining, and I have no desire to call out specific referees; it is just interesting to me that I could track down discrepancies with this methodology.

That's about it for now. I might soon adapt some of these efforts to look at touchpad activation rates.


#22

After trying a couple of different changes to my Elo model, I have found one that has good predictive power, is general enough to apply to all years, and is straightforward to calculate. What I have done is to adjust each team's start-of-season Elo to be a weighted average of their previous two years' end-of-season Elos. The previous year's Elo has a weight of 0.7, and the Elo from two years prior has a weight of 0.3. This weighted Elo is then reverted to the mean by 20%, just as in the previous system, which took only the last season's Elo into consideration. Second-year teams have their rookie rating of 1320 (1350 before mean reversion) used as their end-of-season Elo from two years prior.
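
As a concrete sketch of the seeding step (the mean value reverted toward and the exact handling of the rookie stand-in are my assumptions, not values stated above):

```python
def start_of_season_elo(elo_prev1, elo_prev2=None, mean=1500.0):
    """Seed a team's start-of-season Elo from its last two end-of-season Elos.

    elo_prev1: last season's end-of-season Elo (weight 0.7)
    elo_prev2: end-of-season Elo from two seasons ago (weight 0.3); pass None
               for second-year teams to use the rookie stand-in.
    mean:      the rating the blend is reverted toward by 20% -- 1500 is an
               assumed placeholder, not a value quoted in the post.
    """
    ROOKIE_STAND_IN = 1350.0  # pre-reversion rookie rating from the post
    if elo_prev2 is None:
        elo_prev2 = ROOKIE_STAND_IN
    blended = 0.7 * elo_prev1 + 0.3 * elo_prev2
    return blended + 0.20 * (mean - blended)  # revert 20% of the way to the mean

# e.g. start_of_season_elo(1600, 1550) blends to 1585, then reverts toward the mean
```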

This adjustment provides a substantial improvement in predictive power, particularly at the start of the season. Although it causes larger Elo jumps for some teams between seasons, Elos during the start of the season are generally more stable. As an indirect consequence of this adjustment, I also found the optimal k value for playoff matches to be 4, instead of the 5 it was under the previous system. This means that playoff matches have slightly less of an impact on a team's Elo rating under the new system.

I have attached a file called “2018 start of season Elos” that shows what every team’s Elo would have been under my previous system, as well as their Elo under this new system. Sometime before kickoff, I will publish an update to my “FRC Elo” workbook that contains this change as well as any other changes I make before then.


#23

With this change, Elo actually takes a razor-thin edge over standard OPR in terms of predictive power for the 2017 season (season-long total Brier score = 0.211 vs 0.212 for OPR). However, it should be noted that this isn't really a fair comparison, since OPR's predictive power could probably be improved with many of these same adjustments I have been making to Elo. Even so, I think it's pretty cool that we now have a metric that provides more predictive power than conventional OPR, which has been the gold standard for at least as long as I have been around in FRC.
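
For anyone unfamiliar with Brier scores, the metric here is just the mean squared error of the predicted win probabilities (lower is better). A minimal sketch, with tie handling left as an assumption since it isn't spelled out above:

```python
import numpy as np

def brier_score(predicted_red_win_prob, red_won):
    """Mean squared error between predicted red-win probabilities and outcomes.

    predicted_red_win_prob: one model probability per match
    red_won: 1 if red won, 0 if red lost; how ties are scored is not spelled
             out in the post (0.5 is one common convention).
    """
    p = np.asarray(predicted_red_win_prob, dtype=float)
    y = np.asarray(red_won, dtype=float)
    return float(np.mean((p - y) ** 2))

# e.g. brier_score([0.7, 0.4, 0.9], [1, 0, 1]) -> ~0.087; lower is better
```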


#24

Not a huge change, but I have uploaded a sheet called “2018 start of season Elos v2” which incorporates all of the changes I have put into my Elo model. Since the original “2018 start of season Elos” sheet already had the 2-season weighted average built into it, Elo ratings are, for the most part, just 70ish points higher in this sheet.


#25

I am investing some of my effort now into improving calculated contribution (OPR) predictions. The first thing I really want to figure out is what the best "seed" OPR is for a team going into an event. We have many choices for how to calculate this seed value based on past results, so I'd like to narrow my options down before building a formal model. To accomplish this, I investigated the years 2011-2014 to find which choices of seed correlate best with teams' calculated contributions at the championship. I used this point in the season because it is the spot where we have the most data on teams before the season is over. The best seeds should have the strongest correlation with the team's championship OPR, and by using correlations instead of building a model, I can ignore linear offsets in seed values.

When a team has only a single event in a season, my choices of metrics for generating seed values are basically restricted to either their OPR at their only event or their pre-champs world OPR. There is potential for using normalized OPRs from previous seasons as seeds, but I chose not to investigate this since year-to-year variation in team performance is quite drastic.

When a team has attended 2+ events, I have many more options for metrics that can be used to determine their seed value:
The team’s OPR at their first event of the season
The team’s OPR at their second event of the season
The team’s OPR at their second to last pre-champs event of the season
The team’s OPR at their last pre-champs event of the season
The team’s highest pre-champs OPR of the season
The team’s second highest pre-champs OPR of the season
The team’s lowest pre-champs OPR of the season
The team’s pre-champs world OPR

Many of these metrics will overlap for teams, but they are all distinct metrics.

Using each of these seed options, I found correlation coefficients between the metric and championship OPR across every championship-attending team. I did this for each year 2011-2014, as well as an average correlation for all four years (I didn't weight by number of teams since there were ~400 champ teams in each of these years). The results are summarized in this table, and can also be found in the "summary" tab of the "OPR seed investigator.xlsx" spreadsheet. Raw data can be found in the year sheets of the workbook as well.
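
The computation behind the table is a straightforward column-wise correlation. Here is a minimal sketch, assuming hypothetical per-season files with one row per championship-attending team (the column names are mine):

```python
import pandas as pd

SEED_COLS = ["first", "second", "second_to_last", "last",
             "highest", "second_highest", "lowest", "world"]

def seed_correlations(df):
    """Correlate each pre-champs seed metric with championship OPR.

    df is a hypothetical one-season table with one row per championship-attending
    team, the seed columns above, and a 'champs_opr' column.
    """
    return df[SEED_COLS].corrwith(df["champs_opr"])

# One file per season is an assumption about how the data is stored.
years = {yr: pd.read_csv(f"seeds_{yr}.csv") for yr in (2011, 2012, 2013, 2014)}
per_year = pd.DataFrame({yr: seed_correlations(d) for yr, d in years.items()})
per_year["average"] = per_year.mean(axis=1)  # unweighted average across the four years
print(per_year.round(3))
```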

As can be seen in the table, we roughly have from most correlation to least correlation:
highest > world > last > second >>> second highest > second to last >>> lowest > first
Going into this analysis, I had anticipated that the top three seed metrics would be highest, world, and last, but my expected ordering probably would have been something like last > highest >> world.

I was actually hoping that there would be a clearer difference between these top three metrics so that I could throw out one or two of these options going into my model creation. I had always been pretty skeptical of world OPR; it seemed to me that, although it has a larger sample size than conventional single-event OPR, it would perform worse since it incorporates early-season matches that may not reflect teams accurately by the time champs rolls around. However, world OPR was better correlated with champs performance than my previous metric of choice, last event OPR, so my fears about world OPR are probably not very justified.

I also tried combining metrics with a weighted average. The optimal weightings I found, and their correlation coefficients, can also be found in the "summary" tab. For example, when combining first OPR and second OPR, the optimal weighted average would be 0.3*(first OPR) + 0.7*(second OPR). I did not find much that was interesting in this effort. Highest OPR is consistently the best predictor of champs OPR no matter which other metric it is paired with. Some of the optimal weightings are mildly interesting, particularly the negative weightings given to poor metrics paired with world OPR.
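
The weighted-average search can be done with a simple scan over candidate weights. Here is a sketch; the search range and step size are my choices, not values from the workbook:

```python
import numpy as np

def best_pair_weight(metric_a, metric_b, champs_opr, step=0.05):
    """Scan weights w and return the one maximizing the correlation of
    w*metric_a + (1-w)*metric_b with championship OPR.

    The range below allows mildly negative weights, since some poor metrics
    ended up with negative weightings when paired with world OPR.
    """
    a, b, y = map(np.asarray, (metric_a, metric_b, champs_opr))
    best_w, best_r = None, -np.inf
    for w in np.arange(-1.0, 2.0 + step, step):
        r = np.corrcoef(w * a + (1 - w) * b, y)[0, 1]
        if r > best_r:
            best_w, best_r = w, r
    return best_w, best_r

# e.g. best_pair_weight(df["first"], df["second"], df["champs_opr"])
# should land near (0.3, ...) for the first/second OPR pairing
```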

Moving forward, I will probably have to try to use all three of highest OPR, world OPR, and last OPR when building a predictive model. I will also have to determine the best linear offsets to use for these metrics, and determine if the best seed metrics remain the same throughout the season, since this effort only looked at a single point in the season.

#26

I am interested in predicting teams’ win probabilities for events before events even start. To do this, I need a metric for each team’s ability before the event starts. I decided to use Elo just because it is easily accessible to me and because OPR is not technically defined for a new event before the event starts, although many choices of OPR seeds could be used (see above post) to achieve the same effect.

One of the first questions I would like to answer before predicting event winners is whether each team's probability of winning the event strictly increases with their pre-event Elo rating. At a first pass, this seems like it should clearly be true. However, when you think deeper into how FRC events choose winners, there is one huge exception to this rule: the second pick of a high-seeded alliance. These teams are generally agreed to be "worse" teams than low-seeded alliance captains and first picks, yet they generally have an easier path to winning the event. Put another way, clearly the highest-ranked Elo team going into the event will have the highest probability of winning, and the second-highest Elo team will have the second-highest probability of winning, but the question is whether there exists some "valley" of Elo ranks for which teams are less likely to win the event than some teams at ranks below theirs. A hypothetical distribution of this kind is shown in this image. Here, there is a "serpentine valley" stretching from about rank 10 to rank 16. Note that these ranks are the teams' start-of-event Elo ranks, not their qualification seeding ranks.

To investigate this, I compiled every team's pre-event Elo rank for all 2008-2017 regional/district events, along with the winners of these events. Full data can be found in the "serpentine_valley.xlsx" workbook, although I did unfortunately lose much of the data. I apologize for that; if anyone is actually interested, I wouldn't mind re-creating it, but I got what I needed from it. The summary graph is shown here. A 2-rank moving average is also shown; this graph just smooths out the preceding one a bit for easier interpretation. It is difficult to say definitively whether a "serpentine valley" exists based on these data. If it does exist, it is likely centered at about Elo rank 10 and has a width of no more than 3 ranking positions. The top of the hill, if it exists, is probably at rank 11 or 12. For reference on the magnitude of the serpentine valley, the 10th-ranked Elo team has won a total of 47 events in my data set of 788 events, and the 11th-ranked Elo team has won a total of 69 events.
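
The tally behind these graphs can be reproduced roughly like this, assuming a hypothetical table with one row per team per event (the column names are mine):

```python
import pandas as pd

# Hypothetical table: one row per team per event, with the team's pre-event Elo
# and a flag for whether it ended up on the winning alliance.
df = pd.read_csv("event_entries.csv")  # columns: event, team, pre_event_elo, won_event

# Rank teams within each event by pre-event Elo (1 = highest Elo at that event)
df["elo_rank"] = df.groupby("event")["pre_event_elo"].rank(ascending=False, method="first")

wins_by_rank = df[df["won_event"] == 1].groupby("elo_rank").size()
smoothed = wins_by_rank.rolling(2).mean()  # the 2-rank moving average from the graph
print(pd.DataFrame({"wins": wins_by_rank, "2_rank_avg": smoothed}).head(20))
```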

I also performed a similar analysis using each team's end-of-season Elo and each team's end-of-quals Elo at the current event, but these were more peripheral and did not yield anything noteworthy.

I will have to see moving forward if this effect is large enough to merit inclusion in a pre-event winner prediction model, but my guess at this point is that no such adjustment will be needed. Note that this does not in any way absolve the serpentine model of its known weaknesses. It does however provide reasonably good evidence that teams are only rarely (if ever) incentivized by the current system to start out the event pretending to be worse than they are in order to drop a few ranks in apparent ability.

#27

I’m a little bit confused as to why almost all of my papers have 40+ downloads, but only 5 unique people besides me have commented on this thread.

My theories are:

  1. People want more complete whitepapers before commenting or don’t like this format
  2. My analysis is so rigorous and easily digestible that hardly anyone has questions
  3. People are downloading my data and just glancing at it without actually understanding what I am describing
  4. My analyses are going over most people’s heads and they are afraid to ask questions

I find 1 and 2 unlikely. 3 is perhaps the most likely, and I don't necessarily think that it is a problem. However, if 4 is the case, I want to strongly encourage anyone to ask me questions or speculate on things I post. Indeed, the one serious challenge I have received directly led to me retracting my original analysis, so I really do value feedback.


#28

I think it's just interesting to many of us, and if we truly got what all of these meant then maybe we would respond more.

I find all the statistics fascinating, though I have yet to find a statistic better than my own eyes.
OPR is fairly straightforward; some of the others, like mCA, I have no clue what they are.

Maybe a small snippet of what's being compared would be good, so a 5-year-old could understand?
I appreciate what you do and have used your analysis here and there when I can understand it.


#29

mCA stands for milli-Chairman’s Awards. A team with a rating of 500 mCA has an awards history strength equivalent to winning half of a Chairman’s Award in the current season. Higher ratings indicate teams have a stronger award-winning history, and are thus more likely to win the Chairman’s Award.

I appreciate the feedback, let me know if I can help make anything clearer.


#30

With the interesting new dynamic this year of random plate assignments, I decided to look back at previous years to see if team color assignment had any meaningful impact on scores. To do this, I found the optimal Elo “bonus” to give to either the red or the blue alliance in quals matches in order to maximize my Elo model’s predictive power. I call this addition the “red Elo advantage.” Since Elos can be difficult to interpret, I also included the equivalent point value impact for each year.
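
The optimization itself is just a one-dimensional scan. Here is a sketch, assuming a logistic Elo win probability on the usual 400-point scale and Brier score as the predictive-power measure; the scale and the way alliance Elos are aggregated are assumptions about the underlying model:

```python
import numpy as np

def optimal_red_elo_bonus(red_elo, blue_elo, red_won, bonuses=np.arange(-30, 31)):
    """Scan a flat Elo bonus added to the red alliance in quals matches and
    return the value that minimizes the Brier score of the win predictions.

    red_elo / blue_elo: one aggregate alliance Elo per quals match (assumed)
    red_won: 1 if red won the match, 0 otherwise
    """
    red, blue, y = map(np.asarray, (red_elo, blue_elo, red_won))
    best_bonus, best_brier = None, np.inf
    for bonus in bonuses:
        diff = (red + bonus) - blue
        p_red = 1.0 / (1.0 + 10.0 ** (-diff / 400.0))  # logistic Elo win probability
        brier = np.mean((p_red - y) ** 2)
        if brier < best_brier:
            best_bonus, best_brier = bonus, brier
    return best_bonus, best_brier
```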

Going into this, I expected to see minimal impact of color assignment in any given year, since FIRST tries to make the games as symmetric as possible. After reviewing the games, the largest asymmetries (relative to the drivers) that I found occurred in 2005 (human loading stations all on one side) and 2017 (gear loading all on one side). There were minor asymmetries in 2012 (Kinect stations) and 2015 (unloaded yellow totes). I also expected that, in an average year, the blue alliance would receive a slight bonus due to red being penalized more frequently, because red would be perceived as a more aggressive color by the referees.

Here is a table summarizing the results by year. The largest advantage by far comes from 2005, with the blue alliance receiving an Elo advantage of 14 points. This is followed by 9-point Elo advantages for blue in both 2007 and 2008. I'm unsure why 2007 and 2008 have such large advantages, but 2005 was one of the years in which I had anticipated seeing the biggest differences due to asymmetry. Also note that, with fewer matches in 2005-2008 relative to later years, these results might arise purely from chance. I might run a significance test for each year later, but I don't really care, because all effects are so minimal.

From 2009-2017, the year with the largest impact was 2017, with a blue Elo advantage of 6, corresponding to 1.7 match points. This was also expected due to the nature of the arena last year. Other years in this time period look to have nearly negligible Elo impact.

Aggregate Elo advantages can be seen in this table. During 2009-2017 (years with more matches and the modern era of bumper colors), blue on average receives a 1.2 Elo point advantage. This is probably nowhere near statistically significantly different from 0, but it is in line with my prediction about red being penalized more. Unfortunately, we only have good penalty data since 2015, so it won’t be feasible for a few more years to see if red is actually penalized more than blue in the average game.

Another thing I realized in this process is that, even if the field is symmetric for the drivers, it will not be symmetric for the referees. The head ref's side of the field will almost certainly receive either more or fewer penalties depending on the game dynamics (as should happen, otherwise the head ref wouldn't be doing her job). Thus, it shouldn't be surprising to anyone to see red or blue receive at least a slight edge in every year.

As I predicted, all of these effects are minimal. However, this gives us a good reference point for seeing whether plate color assignment this year has an impact as large as or larger than team color assignment in prior years.


#31

I ran significance tests for the years that were most likely to have significant advantages for one color over the other. The results are in this table. Of the years that I tested, 2017 was the only one that was significant; none of the others were even very close. Even 2017 should be viewed with caution, since I essentially ran 13 significance tests, so there was a 30% chance that at least one of those tests would produce a p-value at least as low as that of 2017 purely by chance.
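
The 30% figure is the usual multiple-comparisons adjustment, under the (assumed) independence of the 13 tests:

```python
# Chance that at least one of 13 independent tests produces a p-value at least
# as low as 2017's purely by chance. The per-test value below is roughly what
# the quoted ~30% figure implies; it is not a number reported above.
p_single = 0.027
p_family = 1 - (1 - p_single) ** 13
print(round(p_family, 2))  # ~0.3
```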

Basically, the only year for which we have reasonable evidence against the null hypothesis is 2017, and I would still be wary of rejecting the null hypothesis for 2017.


#32

I find this spreadsheet to be extremely interesting. I’m wondering what the logic is in not capping the number of years that contribute to mCA?

From the FIRST Inspires Website:

> The criterion for the Chairman’s Award has special emphasis on recent accomplishments in both the current season, and the preceding two to five years. The judges focus on teams’ activities over a sustained period, as distinguished from just the robot design and build period.

Given that judges are instructed to emphasize the most recent 2-5 years, I would think it would make sense to ignore accomplishments made prior to 2013 when calculating mCA. Obviously you have the 19% regression to 0, but there is still a residual effect from an award that was won in 2009 when realistically that probably doesn’t mean much.

I’m of the opinion that keeping the entire body of work for a team is a better representation for their standing as a Hall of Fame contender, while keeping just the most recent 5 years would be a better representation of a team’s standing at a local event.

Additionally, I’m curious as to why Rookie All Star isn’t factored in for mCA? My understanding is that the Rookie All-Star is essentially the rookie team that best fits the mold of a future Chairman’s Award team. I would think that a team that has won RAS is more likely to win CA in the future than a team that didn’t win RAS.

Edit:

In terms of event predictions, I’m wondering if it would make sense to have some sort of cutoff after the top X number of teams. Realistically, you won’t have 60/60 teams at an event present for Chairman’s Award, so it doesn’t make sense for the 60th ranked team in terms of mCA to have a .5% chance of winning CA. I don’t know what percent of teams at an event typically submit for Chairman’s Award… my guess would be 1/3 of teams submit, but that’s probably high.


#33

All good points. I’ll go back and test out some of these thoughts with my model. Capping for only the past 5 years especially intrigues me.

> Additionally, I’m curious as to why Rookie All Star isn’t factored in for mCA? My understanding is that the Rookie All-Star is essentially the rookie team that best fits the mold of a future Chairman’s Award team. I would think that a team that has won RAS is more likely to win CA in the future than a team that didn’t win RAS.

I did build a RAS value into the model, which is why it shows up in the "Model Parameters" tab. However, I found that the optimal value for this award in terms of predictive power was 0 (±50ish). I was originally surprised by this, and a little bit disappointed to be honest. I'll try optimizing my model again to make sure this was not done in error, but I doubt it was. All of the weightings I use are those that maximize the predictive power of my model; it has nothing to do with personal preference.

> In terms of event predictions, I’m wondering if it would make sense to have some sort of cutoff after the top X number of teams. Realistically, you won’t have 60/60 teams at an event present for Chairman’s Award, so it doesn’t make sense for the 60th ranked team in terms of mCA to have a .5% chance of winning CA. I don’t know what percent of teams at an event typically submit for Chairman’s Award… my guess would be 1/3 of teams submit, but that’s probably high.

My concern with this line of thought is that, although only some proportion of teams at an event submit for Chairman's, we don't know which teams those are. Obviously, teams with stronger awards histories are more likely to submit for Chairman's than teams without such histories, but we can never definitively say which teams are and are not presenting. As an example, I ran through the weakest mCA teams to win Chairman's last year, and team 4730 won at PCH Albany despite having a negative mCA, never having won a judged award before, and having the lowest mCA of any team at their event. You can check this using my "2017 Chairman's predictions.xlsm" workbook. Going from 0.5% to 0.1%, for example, is a deceptively huge jump. We would expect about one 0.5% team to win a Chairman's Award each season (since there are around 200 events), but we would only expect to see a 0.1% team win Chairman's about once in a 5-year period.

I’ll try adding a “weak team” penalty into the model that subtracts some mCA amount from the lowest X% of teams at the event to see if that improves the predictive power at all, but I’m pretty skeptical since the model seemed to be well-calibrated when I built it.


#34

I completely understand that all of your decisions were based off of predictive power. All of my suggestions were based on my impressions about the Chairman’s Award, and what I know about teams that have won it. Obviously not too scientific on my end :smiley:

I’m looking forward to future postings on the subject!


#35

Caleb
I found your scouting system to be a great source of strategy and scoring trends in 2017. I hope you are producing a new one for 2018.


#36

Glad to hear it. :slight_smile: I’m always happy to hear that people find my work useful.

I’m working on a 2018 scouting database and event simulator right now. They’ll definitely be out before week 1 competitions start, but I can’t promise a specific date, hopefully no later than next Monday.


#37

Can someone post the pics, by chance? School blocks imgur.


#38

Here you go, I’ve reuploaded all imgur pictures here:
Post 21: 21_1, 21_2
Post 25: 25_1
Post 26: 26_1, 26_2, (unlabeled image)
Post 30: 30_2
Post 31: (unlabeled image)