Miscellaneous Statistics Projects

By: Caleb Sykes
New: 07-16-2017 12:00 AM
Updated: 11-03-2017 12:43 PM
Total downloads: 451 times


A collection of small projects that will be explained in the associated thread.

I frequently work on small projects that I don't believe merit entire threads on their own, so I have decided to upload them here and make a post about them in an existing thread. I also generally want my whitepapers to have instruction sheets so that anyone can pick them up and understand them. However, I don't want to bother with this for my smaller projects.

Attached Files

  • IRI seeding projections.xlsx
    uploaded: 07-16-2017 12:00 AM
    filetype: xlsx
    filesize: 5.13MB
    downloads: 78

  • Elo and OPR comparison.xlsx
    uploaded: 07-24-2017 08:57 PM
    filetype: xlsx
    filesize: 2.84MB
    downloads: 42

  • 2017 Chairman's predictions.xlsm
    uploaded: 07-28-2017 07:28 PM
    filetype: xlsm
    filesize: 738.37kb
    downloads: 68

  • 2018_Chairman's_predictions.xlsm
    uploaded: 07-29-2017 08:32 PM
    filetype: xlsm
    filesize: 266.15kb
    downloads: 97

  • Historical mCA.xlsx
    uploaded: 08-04-2017 02:17 PM
    filetype: xlsx
    filesize: 428.25kb
    downloads: 37

  • Greatest upsets.xlsx
    uploaded: 08-20-2017 05:36 PM
    filetype: xlsx
    filesize: 6.12MB
    downloads: 49

  • surrogate_results.xlsx
    uploaded: 09-26-2017 04:47 PM
    filetype: xlsx
    filesize: 53kb
    downloads: 13

  • 2017 rest penalties.xlsx
    uploaded: 10-02-2017 02:58 PM
    filetype: xlsx
    filesize: 5.79MB
    downloads: 9

  • 2018_Chairman's_predictions v2.xlsm
    uploaded: 10-31-2017 11:02 AM
    filetype: xlsm
    filesize: 379.14kb
    downloads: 19

  • auto_mobility_data.xlsx
    uploaded: 10-31-2017 11:10 PM
    filetype: xlsx
    filesize: 6.1MB
    downloads: 16

  • 2018 start of season Elos.xlsx
    uploaded: 11-03-2017 12:43 PM
    filetype: xlsx
    filesize: 136.58kb
    downloads: 21


Discussion


07-16-2017 12:10 AM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

I frequently work on small projects that I don't believe merit entire threads on their own, so I have decided to upload them here and make a post about them in an existing thread. I also generally want my whitepapers to have instruction sheets so that anyone can pick them up and understand them. However, I don't want to bother with this for my smaller projects.



07-24-2017 09:12 PM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

In this post, Citrus Dad asked for a comparison of my Elo and OPR match predictions for the 2017 season. I have attached a file named "Elo and OPR comparison" that does this. Every qual match from 2017 is listed. Elo projections, OPR projections, and the average of the two are also shown for each match. The squared errors for all projections are shown, and these squared errors are averaged together to get Brier scores for the three models.

Here are the Brier score summaries of the results.

Code:
Total Brier scores		
OPR	Elo	Average
0.212	0.217	0.209
		
Champs only Brier scores		
OPR	Elo	Average
0.208	0.210	0.204
The OPR and Elo models have similar Brier scores, with OPR taking a slight edge. This is directly in line with results from other years. However, matches this year were much harder to predict than in any year since at least 2009. This is likely due to a combination of the non-linear and step-function-esque aspects of scoring for the 2017 game. My primary prediction method last season actually used a raw average of the Elo predictions and the OPR predictions, which provided more predictive power than either method alone.
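If anyone wants to check these numbers against the spreadsheet, the calculation is nothing fancy. Here is a minimal Python sketch (with made-up match data, not the workbook's actual layout) of how the Brier scores and the averaged model are computed:

Code:
# Toy sketch of the Brier score comparison; the real data set is every 2017 qual match.
def brier_score(predictions, outcomes):
    # Mean squared error between predicted red-win probabilities and actual outcomes.
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# (OPR prediction, Elo prediction, outcome): 1 = red won, 0 = red lost, 0.5 = tie
matches = [(0.65, 0.60, 1.0), (0.40, 0.55, 0.0), (0.72, 0.70, 1.0)]

opr_preds = [m[0] for m in matches]
elo_preds = [m[1] for m in matches]
avg_preds = [(o + e) / 2 for o, e in zip(opr_preds, elo_preds)]  # raw average of the two models
outcomes = [m[2] for m in matches]

print("OPR:", brier_score(opr_preds, outcomes))
print("Elo:", brier_score(elo_preds, outcomes))
print("Avg:", brier_score(avg_preds, outcomes))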



07-25-2017 06:02 PM

Citrus Dad


Re: paper: Miscellaneous Statistics Projects

Quote:
Originally Posted by Caleb Sykes View Post
In this post, Citrus Dad asked for a comparison of my Elo and OPR match predictions for the 2017 season. I have attached a file named "Elo and OPR comparison" that does this. Every qual match from 2017 is listed. Elo projections, OPR projections, and the average of the two are also shown for each match. The squared errors for all projections are shown, and these squared errors are averaged together to get Brier scores for the three models.

Here are the Brier score summaries of the results.
Code:
Total Brier scores		
OPR	Elo	Average
0.212	0.217	0.209
		
Champs only Brier scores		
OPR	Elo	Average
0.208	0.210	0.204
The OPR and Elo models have similar Brier scores, with OPR taking a slight edge. This is directly in line with results from other years. However, matches this year were much harder to predict than in any year since at least 2009. This is likely due to a combination of the non-linear and step-function-esque aspects of scoring for the 2017 game. My primary prediction method last season actually used a raw average of the Elo predictions and the OPR predictions, which provided more predictive power than either method alone.
Thanks



07-28-2017 08:01 PM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

I am currently working on a model which can be used to predict who will win the Chairman's Award at a regional or district event. I am not covering district championship Chairman's or Championship Chairman's because of their small sample sizes. The primary inputs to this model are the awards data of each team at all of their previous events, although previous season Elo is also taken into account.

The model essentially works by assigning value to every regional/district award a team wins. I call these points milli-Chairman's Awards, or mCA points. I set the value of a Chairman's win in the current season at a base event of 50 teams to be 1000 mCA. Thus, all award values can be interpreted as the percentage of a Chairman's Award they are worth. Award values and model parameters were the values found to provide the best predictions of 2015-2016 Chairman's wins. At each event, a logistic distribution is used to map a team's total points to their likelihood of winning the Chairman's Award at that event. Rookies, HOF teams, and teams that won Chairman's earlier in the season are assigned a probability of 0%.

I have attached a file named 2017_Chairman's_predictions.xlsm which shows my model's predictions for all 2017 regional and district events, as well as a sheet which shows the key model parameters and a description of each. The model used for these predictions was created by running over the period 2008-2016, with tuning specifically on the period 2015-2016, so the model did not know any of the 2017 results before "predicting" them.
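For anyone curious about the mechanics of that last step, here is a rough Python sketch of mapping mCA totals to event probabilities. The logistic location/scale numbers below are placeholders (the real parameter values are on the workbook's parameter sheet), and normalizing across the event is my own assumption about how the per-team likelihoods are combined:

Code:
import math

def chairmans_probabilities(team_mca, location=1000.0, scale=500.0):
    # Map each eligible team's total mCA points to a raw likelihood with a
    # logistic curve; ineligible teams (rookies, HOF, winners earlier this
    # season) get 0. Location/scale here are placeholders, not the fit values.
    raw = {}
    for team, mca in team_mca.items():
        raw[team] = 0.0 if mca is None else 1.0 / (1.0 + math.exp(-(mca - location) / scale))
    total = sum(raw.values())
    # Assumption: normalize so the event's probabilities sum to 1 (exactly one winner).
    return {team: p / total for team, p in raw.items()}

print(chairmans_probabilities({"1718": 9496, "9999": 150, "1234": None}))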

Key takeaways:

  • The mean reversion value of 19% is right in line with the 20% mean reversion value I found when building my Elo model. It intrigues me that two very different endeavors led to essentially equivalent values.
  • It was no surprise to me that EI was worth 80% of a Chairman's Award. I was a bit surprised, though, to find that Dean's List was worth 60% of a Chairman's Award, especially because two are given out at each event. That means that the crazy teams that manage to win 2 Dean's List Awards at a single event are better off, in terms of future Chairman's performance, than a team that won Chairman's.
  • I have gained more appreciation for certain awards after seeing how strongly they predict future Chairman's Awards, particularly the Team Spirit and Imagery awards.


More work to come on this topic in the next few hours/days.



07-29-2017 08:35 PM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

I have added another workbook named "2018 Chairman's Predictions." This workbook can be used to predict Chairman's results for any set of teams you enter. The model used here has the same base system as the "2017 Chairman's Predictions" model, but some of the parameter values have changed. These parameters were found by minimizing the prediction error for the period 2016-2017.

Also in this book is a complete listing of teams and their current mCA values. The top 100 teams are listed below.

Code:
team	mCA
1718	9496
503	9334
1540	9334
2834	9191
1676	8961
1241	8941
68	8814
548	8531
2468	8112
2974	8092
27	8047
1885	7881
1511	7786
1023	7641
1305	7635
2614	7568
245	7530
1629	7381
2486	7100
66	7027
3132	6748
1816	6742
1086	6551
1311	6482
1710	6263
2648	6241
125	6223
558	6155
141	6083
1519	6082
1983	6060
4039	5985
33	5851
2771	5780
1902	5582
624	5578
1011	5496
118	5470
2137	5461
1218	5424
2169	5390
910	5382
3284	5353
3478	5344
771	5321
75	5306
2557	5291
233	5287
987	5224
1868	5215
3309	5175
1714	5158
932	5147
1986	5144
537	5138
597	5077
604	5068
2056	5059
2996	5054
4613	5042
399	5029
1477	5010
2220	4994
2337	4955
3618	4896
4125	4823
217	4816
1730	4803
359	4784
2655	4714
2500	4706
694	4695
1923	4667
708	4662
1622	4661
1987	4655
2642	4655
1671	4630
4013	4627
772	4626
2415	4622
4063	4604
540	4501
433	4440
4525	4426
384	4412
3476	4384
2485	4333
3008	4325
303	4307
1711	4288
2590	4266
3142	4264
3256	4260
836	4251
3880	4250
1678	4244
2471	4237
230	4230
78	4224
If I make an event simulator again next year, I will likely include Chairman's predictions there.



08-04-2017 02:24 PM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

I got a question about historical mCA values for a team, so I decided to post the start of season mCA values for all teams since 2009. This can be found in the attached "Historical_mCA" document.



09-26-2017 04:45 PM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

I was wondering if alliances with surrogates were more or less likely to win than comparable alliances without surrogates. To investigate this, I found all 138 matches since 2008 in which opposing alliances had an unequal number of surrogates. I threw out the 5 matches in which one alliance had two more surrogates than the other alliance.

I started by finding the optimal Elo rating to add to the alliance that had more surrogates in order to minimize the Brier score of all 133 matches. This value was 25 Elo points. The Brier score improved by 0.0018 with this change. This means that, in a match between two otherwise even alliances, the alliance with the surrogate team would be expected to win about 53.5% of the time. This potentially implies that it is advantageous to have a surrogate on your alliance.

To see if this was just due to chance, I ran 10 trials in which I randomly either added or subtracted 25 Elo points for each alliance. The mean Brier score improvement with this method was -0.00005, and the standard deviation of Brier score improvement was 0.0028. Assuming the Brier score improvements to be normally distributed, we get a z-score of -0.62, which provides a p-value of 0.54. This is nowhere near significant, so we lack any good evidence that it is either beneficial or detrimental to have a surrogate team on your alliance.
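For reference, here is a small Python sketch of that significance check on toy data. It assumes the standard logistic Elo win probability with a 400-point scale, which is consistent with the 53.5% figure above, but the match data and the bookkeeping are stand-ins rather than the contents of the spreadsheet:

Code:
import random

def win_prob(elo_diff):
    # Standard logistic Elo win probability (400-point scale); +25 Elo -> ~53.5%.
    return 1.0 / (1.0 + 10 ** (-elo_diff / 400.0))

# Toy data: (Elo difference toward the surrogate-heavy alliance, outcome for that alliance)
matches = [(-30.0, 1.0), (120.0, 1.0), (15.0, 0.0), (60.0, 1.0)]

def brier(bonuses):
    return sum((win_prob(d + b) - o) ** 2 for (d, o), b in zip(matches, bonuses)) / len(matches)

baseline = brier([0.0] * len(matches))
real_improvement = baseline - brier([25.0] * len(matches))

# Null distribution: randomly add or subtract the 25 Elo points in each match instead.
trials = [baseline - brier([random.choice((25.0, -25.0)) for _ in matches]) for _ in range(10)]
mean = sum(trials) / len(trials)
sd = (sum((t - mean) ** 2 for t in trials) / (len(trials) - 1)) ** 0.5
print(real_improvement, (real_improvement - mean) / sd)  # improvement and its z-score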

Full data can be found in the "surrogate results" spreadsheet. Bolded teams are surrogates.



09-26-2017 11:26 PM

Bryce2471


Re: paper: Miscellaneous Statistics Projects

Quote:
Originally Posted by Caleb Sykes View Post
I frequently work on small projects that I don't believe merit entire threads on their own, so I have decided to upload them here and make a post about them in an existing thread. I also generally want my whitepapers to have instruction sheets so that anyone can pick them up and understand them. However, I don't want to bother with this for my smaller projects.
If you have not read The Signal and the Noise by Nate Silver (the guy who made FiveThirtyEight), I highly recommend it. I have no affiliation with the book, other than that I read it and liked it. I would recommend it to anyone who is interested in these statistics and prediction related projects.



09-26-2017 11:50 PM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

Quote:
Originally Posted by Bryce2471 View Post
If you have not read The Signal and the Noise by Nate Silver (the guy who made FiveThirtyEight), I highly recommend it. I have no affiliation with the book, other than that I read it and liked it. I would recommend it to anyone who is interested in these statistics and prediction related projects.
Definitely this.

I actually read that book quite a while back. At the time, I thought it was interesting, but quickly forgot much of it. It was only relatively recently that I realized that the world is full of overconfident predictions, and that humans are laughably prone to confirmation bias. I now have a much stronger appreciation for predictive models, and care very little for explanatory models that have essentially zero predictive power.



10-02-2017 11:29 PM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

I decided to investigate how important breaks between matches were for team performance. If the effect of rest is large enough, I thought I might add it into my Elo model. I was originally going to use the match start times as the basis, but after finding serious problems with this data set, I switched to using scheduled start times.

Essentially, what I did was to give each team on each alliance an Elo penalty determined by how much "rest" they had had since their last match. I tried both linear and exponential fits, and found that exponential fits were far better suited to this effort. I also used the scheduled time data to build two different models. In the first, I looked at the difference in scheduled start times for each team between their last scheduled match and the current match. In the second, I sorted matches within each event by start time and gave each match an index corresponding to its placement on this list (e.g. Quals 1 has index 1, Quals 95 has index 95, quarterfinals 1-1 has index 96, quarterfinals 2-2 has index 101, etc...).

The best fits for each of these cases were the following:
Time difference: Elo penalty per team = -250*exp(-(t_current_match_scheduled_time - t_previous_match_scheduled_time)/(5 minutes))
Match index difference: Elo penalty per team = -120*exp(-(current_match_index - previous_match_index)/0.9)

Both of these models provide statistically significant improvements to my general Elo model. However, the match index method provides about 7X more of an improvement than the time difference method (Brier score improvement of 0.000173 vs 0.000024). This was surprising to me, since I would have expected the finer resolution of the times to provide better results. My guess as to why the indexing method is superior involves the time differences between quals and playoff matches. I used the same model for both of these cases, and perhaps the differences in start times are not nearly as important as the pressure of playing back-to-back matches in playoffs.

I have attached a table summarizing how large of an effect rest has on matches (using the match index model).


Playing back-to-back matches clearly has a strong negative impact on teams. This generally only occurs in playoff matches between levels. However, its effect is multiplied by 3 since all three alliance members experience the penalty. A 3-team alliance that just played receives an 80 Elo penalty relative to a 3-team alliance that played 2 matches ago, and a 108 Elo penalty relative to a 3-team alliance that played 3 matches ago. 108 Elo points corresponds to 30 points in 2017, and the alliance that receives this penalty would only be expected to win 35% of matches against an otherwise evenly matched opposing alliance.
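To make those numbers concrete, here is a quick Python sketch (not taken from the workbook) that evaluates the match index fit above and roughly reproduces the alliance-level figures, assuming the usual 400-point Elo win probability curve:

Code:
import math

def rest_penalty(index_gap):
    # Match index fit: the per-team penalty decays rapidly as the gap between a
    # team's previous match and the current match grows.
    return -120.0 * math.exp(-index_gap / 0.9)

def alliance_penalty(index_gap):
    return 3 * rest_penalty(index_gap)  # all three alliance members are penalized

def win_prob(elo_diff):
    return 1.0 / (1.0 + 10 ** (-elo_diff / 400.0))

print(alliance_penalty(1) - alliance_penalty(2))  # ~ -80 Elo vs. an alliance that played 2 matches ago
print(alliance_penalty(1) - alliance_penalty(3))  # ~ -106 Elo vs. 3 matches ago (quoted above as 108)
print(win_prob(alliance_penalty(1) - alliance_penalty(3)))  # ~ 0.35 expected win rate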

The match index method ended up providing enough improvement that I am seriously considering adding it into future iterations of my Elo model. One thing holding me back is that it relies on the relatively new scheduled time data. At 4 years old, this data isn't nearly as dubious as the actual time data (1.5 years old), but it still has noticeable issues (like scheduling multiple playoff replays at the same time).

You can see the rest penalties for every 2017 match in the "2017 rest penalties" document. The shown penalties are from the exponential fit of the match index model.



10-02-2017 11:37 PM

Basel A


Re: paper: Miscellaneous Statistics Projects

I'm a bit skeptical, because there are some effects of alliance number on amount of rest during playoffs (e.g. #1 alliances that move on in two matches will always have maximal rest, and are typically dominant). Not sure if you can think of a good way to parse that out, though.



10-03-2017 09:52 AM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

Quote:
Originally Posted by Basel A View Post
I'm a bit skeptical, because there are some effects of alliance number on amount of rest during playoffs (e.g. #1 alliances that move on in two matches will always have maximal rest, and are typically dominant). Not sure if you can think of a good way to parse that out, though.
I don't quite follow. My rest penalties are an addition onto my standard Elo model, which already accounts for general strength of alliances. 1 seeds were already heavily favored before I added rest penalties because the 1 seed almost always consists of highly Elo-rated teams. In my standard Elo model, the red alliance (often 1 seed, but not always) was expected to win the first finals match 57% of the time on average. With my rest penalties added in, the red alliance is expected to win the first finals match 62% of the time on average.



10-03-2017 10:16 AM

Basel A


Re: paper: Miscellaneous Statistics Projects

Quote:
Originally Posted by Caleb Sykes View Post
I don't quite follow. My rest penalties are an addition onto my standard Elo model, which already accounts for general strength of alliances. 1 seeds were already heavily favored before I added rest penalties because the 1 seed almost always consists of highly Elo-rated teams. In my standard Elo model, the red alliance (often 1 seed, but not always) was expected to win the first finals match 57% of the time on average. With my rest penalties added in, the red alliance is expected to win the first finals match 62% of the time on average.
Because the first seed is so often in the SF/finals with maximum rest, you could be quantifying any advantage the first seed has (beyond how good they are based on quals), as opposed to just rest. To use a dumb example, if the top alliance is favored by referees, that would show up here.



10-03-2017 10:36 AM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

Quote:
Originally Posted by Basel A View Post
Because the first seed is so often in the SF/finals with maximum rest, you could be quantifying any advantage the first seed has (beyond how good they are based on quals), as opposed to just rest. To use a dumb example, if the top alliance is favored by referees, that would show up here.
Got it, that is an interesting take. Let me think for a little bit on how/if it is possible to separate alliance seeds from rest.



10-03-2017 10:53 AM

GeeTwo


Re: paper: Miscellaneous Statistics Projects

Another factor beyond what will be recognized from Elo is nonlinear improvement due to good scouting, alliance selection, and strategy. I would expect these to affect playoffs far more than quals.



10-03-2017 01:19 PM

Basel A


Re: paper: Miscellaneous Statistics Projects

Quote:
Originally Posted by Caleb Sykes View Post
Got it, that is an interesting take. Let me think for a little bit on how/if it is possible to separate alliance seeds from rest.
A first pass could be to compare cases where Alliance #X would be advantaged by rest versus disadvantaged. That would give you an idea of the relative strength of the rest effect as compared to the various other things. Gus's examples are definitely important effects.



10-04-2017 01:48 PM

microbuns


Re: paper: Miscellaneous Statistics Projects

I love the upsets paper - it's fun to look at these games and see the obviously massive disadvantage the winning side had. I'm looking back at games I had seen/participated in, and remembering the pandemonium those games created on the sidelines and behind the glass. Super cool!



10-26-2017 11:36 PM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

Quote:
Originally Posted by Caleb Sykes View Post
I decided to investigate how important breaks between matches were for team performance...
I've spent a fair bit of time off and on for the past month looking into this more, and since I have other things I would prefer to work on, I'm going to stop working on this for the foreseeable future. I would like to retract all the information in the quoted post. I'm undecided on whether I should delete the spreadsheet.

Essentially, my rest penalty model actually decreased my Elo prediction performance for the year 2016 when I applied the same methodology to that year. This probably means one of the following:
  • 2016 and 2017 rest penalties were drastically different
  • My 2017 rest penalties were an overfitting of the data, and do not actually represent any real phenomenon
  • Scheduled time data are unreliable for 2016 and/or 2017
  • There is a bug in my code somewhere that I am completely unable to find

If any of the first three are true, I'm not that interested in pursuing rest penalties more, and I have given up looking for bugs for the time being. This also means that I will not be looking at alliance seed affecting playoff performance for now.

When I originally created the rest penalties, I never really applied them to years other than 2017 (for which I was optimizing). This meant that I made the mistake I often criticize others for: not keeping training and testing data separate. I incorrectly believed that my statistical significance test would be sufficient in place of testing against other data, and I am still baffled as to how my model could so easily pass a significance test without having predictive power in other years.
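For what it's worth, the discipline I should have followed is easy to sketch: tune on one set of seasons, then insist on an improvement on seasons the fit never saw before adopting the change. A rough outline (the evaluate function is a stand-in for running the full Elo model on a season and returning its Brier score):

Code:
def tune(candidate_params, train_seasons, evaluate):
    # Pick the parameters with the lowest total Brier score on the training seasons.
    return min(candidate_params, key=lambda p: sum(evaluate(p, s) for s in train_seasons))

def holdout_improvements(params, baseline_params, test_seasons, evaluate):
    # Positive values mean the tuned model still beats the baseline on held-out seasons.
    return [(s, evaluate(baseline_params, s) - evaluate(params, s)) for s in test_seasons]

# e.g. tune the rest penalties on 2017, then require holdout_improvements(...) on
# 2015-2016 to be positive before adding them to the main Elo model.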

So anyway, sorry if I misled anyone; I won't make this same mistake again.



10-31-2017 11:11 AM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

Now that we actually have team lists for events, I thought I would revisit my 2018 Chairman's Predictions workbook since it is the most popular download of mine. It turns out that I did not have support for 2018 rookies, resurrected teams, or new veterans in these predictions.

I have attached a new workbook titled "2018_Chairman's_predictions_v2" which provides support for these groups. I have also added an easy way to import team lists for events simply by entering the event key. If you have additional knowledge of events (or if you want to make a hypothetical event), you can still add teams to the list manually. I have also switched to using the TBA API v3, so this should hopefully still work after Jan 1.
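For anyone curious what the import is doing under the hood, it boils down to a single TBA API v3 request per event. The workbook does this in VBA, but here is an equivalent Python sketch (you would need your own Read API key from your TBA account page):

Code:
import json
import urllib.request

def event_team_keys(event_key, auth_key):
    # Equivalent of the workbook's team list import: fetch the team keys for one event.
    url = "https://www.thebluealliance.com/api/v3/event/" + event_key + "/teams/keys"
    req = urllib.request.Request(url, headers={"X-TBA-Auth-Key": auth_key})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. ["frc254", "frc1678", ...]

# print(event_team_keys("2018mndu", "YOUR_TBA_READ_KEY"))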

Let me know if you notice any bugs with this book.



10-31-2017 10:25 PM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

For nearly all statistics that can be obtained from official data, one of our biggest issues is separating out individual team data from data points which actually represent something about the entire alliance. However, there was one statistic last season that was actually granular to the team level, and that data point was auto mobility. Referees were responsible this year for marking mobility points for each team individually, so these data points should have little to no dependence on other teams. Unfortunately, auto mobility was a nearly negligible point source for this game, which, combined with the extremely high average mobility rates, made it a generally unimportant characteristic for describing teams. However, I thought it would be interesting to take a deeper look into these data to see if we can learn anything interesting from them.

I have uploaded a workbook titled "auto_mobility_data" which provides a few different ways of understanding mobility points. The first tab of this book contains raw data on mobility for every team in every match of 2017. The second tab contains a breakdown by team, listing each team's season-long auto mobility rate as well as each team's first match where they missed mobility (for you to check if you don't believe your team ever missed auto mobility). Overall, about 25% of teams never missed their mobility points in auto, and another 18% had mobility rates of >95%. The top 10 teams with the most successful mobilities without a single miss are:

Code:
Team	Successful Mobilities
2337	86
195	85
4039	85
27	84
3663	82
2771	73
3683	73
1391	72
1519	71
2084	71
4391	71
As another point of investigation, I wanted to see if these "mobility rates" would provide more predictive power over future performance than the comparable metric I used in my workbooks last year, calculated contribution to auto Mobility Points. I compared each team's qual mobility rate, total mobility rate (including playoffs), and calculated contribution to auto Mobility Points at their first event to the same metrics at their second event. Strong correlations imply that the metric at the first event could have been used as a good predictor of second event performance. Here are the correlation coefficients:
https://imgur.com/a/XttUk

The total mobility rate at event 1 had the strongest correlation with all three of qual rate, total rate, and calculated contribution at event 2, meaning it would likely be the strongest predictor. However, this is a little bit unfair, since the total rate metric incorporates information unavailable to qual rate or calculated contribution. Qual rate and cc at event 1 have roughly even correlations with qual rate at event 2. Qual rate at event 1 has a much stronger correlation with cc at event 2 than does cc at event 1. Overall, this tells me that, if there is a comparable scoring category to auto Mobility in 2018, I can probably get better results by using the robot-specific data rather than using cc on the entire alliance's score. There might also be potential to combine these metrics somehow, but I have yet to look into this.
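Both halves of that comparison are simple to compute; here is a short Python sketch (with illustrative names, not the workbook's column headers) of a team's mobility rate and the Pearson correlation used to compare event 1 metrics against event 2 metrics:

Code:
def mobility_rate(match_results):
    # One team's auto mobility outcomes at one event (True = mobility achieved).
    return sum(match_results) / len(match_results)

def pearson(xs, ys):
    # Correlation coefficient between two equal-length lists of team metrics.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(mobility_rate([True, True, False, True]))        # 0.75
# r = pearson(event1_total_rates, event2_qual_rates)   # one cell of the correlation table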

My last way to slice the data is by event. I found every event's total auto mobility rate, as well as a correlation coefficient between each team's qual auto Mobility Rate and calculated contribution for that event. I was specifically looking to see if I could identify any events which had an unexpectedly low correlation between auto mobility rates and ccs. This might indicate that one or more referees were not associating the correct robots with mobility points (although points for the alliance would be unaffected). Below you can see each event's mobility rate versus the correlation at the event between mobility rate and cc for each team. I threw out events at which the mobility rate was higher than 90%, since events with extremely high auto mobility rates do not provide a reasonable sample size of individual teams doing unique things.
https://imgur.com/a/HNdPb

4 events in this graph stood out to me for having unexpectedly low correlation coefficients. Those events were the Southern Cross Regional, ISR District Event #1, ISR District Event #2, and the IN District - Tippecanoe Event. Of these events, only Tippecanoe has a reasonable number of match videos, so I decided to watch the first 10 quals matches at this event. I discovered numerous inconsistencies between the published data and what I could see on the video. Here are the ones I saw:
Quals 1: 2909
Quals 2: 234
Quals 7 (good music this match): 3147
Quals 10: 3940

My best explanation for these data is that one or more of the referees at this event (and potentially at the other low-correlation events) did not realize that their inputs corresponded to specific teams. Overall, the mobility rate data seem to be better than the calculated contribution data, so I'm not complaining, and I have no desire to call out specific referees; it is just interesting to me that I could track down discrepancies with this methodology.

That's about it for now. I might soon adapt some of these efforts to look at touchpad activation rates.



11-03-2017 12:44 PM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

After trying a couple of different changes to my Elo model, I have found one that has good predictive power, is general enough to apply to all years, and is straightforward to calculate. What I have done is to adjust each team's start-of-season Elo to be a weighted average of their previous two years' end-of-season Elos. The previous year's Elo has a weight of 0.7, and the Elo from two years prior has a weight of 0.3. This weighted Elo is then reverted to the mean by 20%, just as in the previous system (which took only the last season's Elo into consideration). Second-year teams have their rookie rating of 1320 (1350 before mean reversion) used as their end-of-season Elo from two years prior.
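A compact way to write the new start-of-season calculation (a Python sketch, not code from the workbook; it assumes the reversion target is whatever mean my Elo model already reverts toward, and that the pre-reversion 1350 rookie figure is what enters the average for second-year teams):

Code:
def start_of_season_elo(elo_last_year, elo_two_years_ago, mean, reversion=0.20):
    # 0.7/0.3 weighted average of the last two end-of-season Elos, then
    # reverted 20% of the way toward the mean, as before.
    if elo_two_years_ago is None:     # second-year team: use the rookie rating
        elo_two_years_ago = 1350.0    # pre-reversion rookie figure (assumption; see above)
    weighted = 0.7 * elo_last_year + 0.3 * elo_two_years_ago
    return weighted + reversion * (mean - weighted)

# e.g. start_of_season_elo(1650.0, 1580.0, mean=...)  # mean = the model's reversion target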

This adjustment provides a substantial improvement in predictive power, particularly at the start of the season. Although it causes larger Elo jumps for some teams between seasons, Elos during the start of the season are generally more stable. As an indirect consequence of this adjustment, I also found the optimal k value for playoff matches to be 4 instead of the 5 it was under the previous system. This means that playoff matches have slightly less of an impact on a team's Elo rating under the new system.

I have attached a file called "2018 start of season Elos" that shows what every team's Elo would have been under my previous system, as well as their Elo under this new system. Sometime before kickoff, I will publish an update to my "FRC Elo" workbook that contains this change as well as any other changes I make before then.



11-03-2017 01:11 PM

Caleb Sykes


Re: paper: Miscellaneous Statistics Projects

With this change, Elo actually takes a razor-thin edge over standard OPR in terms of predictive power for the 2017 season (season-long total Brier score = 0.211 vs 0.212 for OPR). However, it should be noted that this isn't really a fair comparison, since OPR's predictive power could probably be improved with many of the same adjustments I have been making to Elo. Even so, I think it's pretty cool that we now have a metric that provides more predictive power than conventional OPR, which has been the gold standard for at least as long as I have been around in FRC.



