For nearly all statistics that can be obtained from official data, one of our biggest issues is separating individual team data from data points that actually represent something about the entire alliance. However, there was one statistic last season that was actually granular to the team level: auto mobility. Referees were responsible last season for marking mobility points for each team individually, so these data points should have little to no dependence on other teams. Unfortunately, auto mobility was a nearly negligible point source in this game, and that, combined with the extremely high average mobility rates, made it a generally unimportant characteristic for describing teams. Still, I thought it would be interesting to take a deeper look into these data to see if we can learn anything from them.
I have uploaded a workbook titled “auto_mobility_data” which provides a few different ways of understanding mobility points. The first tab of this book contains raw data on mobility for every team in every match of 2017. The second tab contains a breakdown by team, listing each team’s season-long auto mobility rate as well as each team’s first match where they missed mobility (so you can check if you don’t believe your team ever missed auto mobility). Overall, about 25% of teams never missed their mobility points in auto, and another 18% had mobility rates above 95%. The top 10 teams (11 listed due to a tie at 71) with the most successful mobilities and not a single miss are shown below, followed by a sketch of how these per-team numbers can be derived from the raw data:
Team Successful Mobilities
2337 86
195 85
4039 85
27 84
3663 82
2771 73
3683 73
1391 72
1519 71
2084 71
4391 71
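If you would like to reproduce the second tab’s numbers yourself, here is a minimal Python/pandas sketch. The file name, sheet layout, and column names (team, match, mobility) are my assumptions about the raw tab, not the workbook’s actual headers:

```python
import pandas as pd

# Assumed layout of the raw tab: one row per team per match, in
# chronological order, with a 1/0 flag for whether that team earned
# its auto mobility point. All names here are hypothetical.
raw = pd.read_excel("auto_mobility_data.xlsx", sheet_name=0)

# Season-long auto mobility rate for each team.
rates = raw.groupby("team")["mobility"].mean().rename("mobility_rate")

# First match in which each team missed mobility; relies on the rows
# being in chronological order.
misses = raw[raw["mobility"] == 0]
first_miss = misses.groupby("team")["match"].first().rename("first_missed_match")

summary = pd.concat([rates, first_miss], axis=1)
print(summary.sort_values("mobility_rate", ascending=False).head(15))
```

Teams that never missed will show an empty first_missed_match, which is one way to sanity-check the 25% figure above.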
As another point of investigation, I wanted to see if these “mobility rates” would provide more predictive power over future performance than the comparable metric I used in my workbooks last year, calculated contribution to auto Mobility Points. I compared each team’s qual mobility rate, total mobility rate (including playoffs), and calculated contribution to auto Mobility Points at their first event to the same metrics at their second event. Strong correlations imply that the metric at the first event could have been used as a good predictor of second event performance. Here are the correlation coefficients:
The total mobility rate at event 1 had the strongest correlation with all three of qual rate, total rate, and calculated contribution at event 2, meaning it would likely be the strongest predictor. However, this is a little unfair, since the total rate metric incorporates information unavailable to the qual rate or calculated contribution metrics. Qual rate and cc at event 1 have roughly even correlations with qual rate at event 2, but qual rate at event 1 has a much stronger correlation with cc at event 2 than cc at event 1 does. Overall, this tells me that, if there is a scoring category comparable to auto mobility in 2018, I can probably get better results by using the robot-specific data rather than running cc on the entire alliance’s score. There might also be potential to combine these metrics somehow, but I have yet to look into this.
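For concreteness, here is a rough sketch of how these event-to-event correlations can be computed, assuming a hypothetical table with one row per team that attended two events and a column for each metric at each event (the file and column names are my inventions):

```python
import pandas as pd

# Hypothetical table: one row per team that played two 2017 events,
# with each metric computed separately at event 1 and event 2.
df = pd.read_csv("two_event_teams.csv")

event1 = ["qual_rate_1", "total_rate_1", "cc_1"]
event2 = ["qual_rate_2", "total_rate_2", "cc_2"]

# Pearson correlation of every event-1 metric with every event-2
# metric; stronger values suggest a better predictor.
corr = df[event1 + event2].corr().loc[event1, event2]
print(corr.round(3))
```

The .corr() call builds the full 6x6 correlation matrix, and the .loc slice keeps just the event-1 rows against the event-2 columns, which is the 3x3 grid of predictor-versus-outcome correlations discussed above.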
My last way to slice the data is by event. For each event, I found the total auto mobility rate, as well as the correlation coefficient between each team’s qual auto mobility rate and calculated contribution at that event. I was specifically looking for events with an unexpectedly low correlation between auto mobility rates and ccs, which might indicate that one or more referees were not associating mobility points with the correct robots (although the alliance’s points would be unaffected). Below you can see each event’s mobility rate plotted against that event’s correlation between each team’s mobility rate and cc. I threw out events with mobility rates above 90%, since events where nearly every robot moves do not provide a reasonable sample of individual teams doing unique things.
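Here is a sketch of the per-event version, again with hypothetical file and column names. Note that the event-level rate here is approximated as the mean of team rates rather than a match-weighted total:

```python
import pandas as pd

# Hypothetical table: one row per team per event, with that team's
# qual auto mobility rate and calculated contribution (cc) there.
per_event = pd.read_csv("event_team_metrics.csv")

stats = per_event.groupby("event").apply(
    lambda g: pd.Series({
        # Rough proxy for the event's overall auto mobility rate.
        "event_mobility_rate": g["qual_mobility_rate"].mean(),
        # How well cc tracks the robot-specific data at this event.
        "rate_cc_correlation": g["qual_mobility_rate"].corr(g["cc"]),
    })
)

# Drop near-universal-mobility events: with almost no misses, there is
# too little team-to-team variation for the correlation to mean much.
stats = stats[stats["event_mobility_rate"] <= 0.90]
print(stats.sort_values("rate_cc_correlation").head())
```

The events that sort to the top of this list (lowest correlation) are the ones worth spot-checking against match video, which is how I ended up looking at Tippecanoe below.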
Four events in this graph stood out to me for having unexpectedly low correlation coefficients: the Southern Cross Regional, ISR District Event #1, ISR District Event #2, and the IN District - Tippecanoe Event. Of these events, only Tippecanoe has a reasonable number of match videos, so I decided to watch the first 10 qualification matches from this event. I found numerous inconsistencies between the published data and what I could see in the videos. Here are the discrepancies I saw:
Quals 1: 2909
Quals 2: 234
Quals 7 (good music this match): 3147
Quals 10: 3940
My best explanation for these data is that one or more of the referees at this event (and potentially at the other low-correlation events) did not realize that their inputs corresponded to specific teams. Overall, the mobility rate data seem to be better than the calculated contribution data, so I’m not complaining, and I have no desire to call out specific referees; it is just interesting to me that I could track down discrepancies with this methodology.
That’s about it for now. I might soon adapt some of these efforts to look at touchpad activation rates.