[C^3] Predicting Offseason Performance
The first regionals are still 163 days away, but Chezy Champs is coming up this weekend, marking the first of California's offseasons. Offseasons provide the opportunity for a large amount of prescouting, since each team has an entire season's worth of data behind them. However, this raises a question: how good is competition season data at predicting offseason performance?

To illustrate the trend, I graphed competition season OPRs against Chezy Champs OPRs from 2014 and 2015. The data includes all teams, including non-CA teams. OPR calculations include division data but exclude Einstein. Without further delay, here are the graphs:

![2014: max vs CC, avg vs CC]()
![2014: min vs CC, last vs CC]()
![2015: max vs CC, avg vs CC]()
![2015: min vs CC, last vs CC]()

The trends are interesting: in 2014, teams almost always underperformed their season expectations, while 2015 split clearly between teams that qualified for champs and those that didn't. In both years, teams tended to score above their min OPR but below their max. Average and last OPRs were fairly decent indicators, especially in 2015 for teams that qualified for champs. In general, teams that qualified for champs had offseason performances in line with their season ones, while those that didn't tended to score under expectations.

A quick explanation of my naming:

- All calculations include regionals, district events, DCMPs, and division data
- Max, min, avg, and last OPRs are the highest/lowest/average/most recent event OPRs for that team across all events (excluding Einstein)
- cmp includes only data from teams that attended champs
- no_cmp includes only data from teams that didn't attend champs

It is also worth noting that the trendlines can be misleading, especially for data sets with few or heavily grouped points (e.g. the 2015 no_cmp data). For those sets, and perhaps even for everything else, counting the teams above the 1:1 line (i.e. the teams that outperformed their season data) versus those below it might be more accurate; a rough sketch of that count is at the end of this post.

The variation among teams that attended champs surprised me, so I colored them by qualification type. However, I hit my image limit here, so I've included those graphs in the post below. The categories used are as follows:

- Captain/1st pick: team won (or received a wildcard as part of the finalist alliance) as the captain or 1st pick of the alliance. DCMP winners were also put here, even though they also qualified via points
- 2nd pick: same as above, but with 2nd picks
- Awards: EI, RCA, RAS
- Waitlist: qualified via the waitlist (or I couldn't figure out how else they qualified)
- Teams that qualified via multiple means were colored according to the method highest in this list. Pre-qualified teams were not colored differently, since they all qualified again through one of these methods.

Raw data: offseason_vs_season_opr_data.csv
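For anyone who wants to poke at the raw data, here is a minimal sketch of how the summary OPRs and scatter plots could be reproduced with pandas/matplotlib. The column names (team, event_order, opr, cc_opr) are assumptions, not necessarily the actual CSV schema.

```python
import pandas as pd
import matplotlib.pyplot as plt

# One row per team per season event. Column names (team, event_order, opr,
# cc_opr) are assumptions -- check them against the actual CSV header.
df = pd.read_csv("offseason_vs_season_opr_data.csv")

# Collapse each team's season into the four summary statistics. Sorting by
# event_order first lets "last" pick up the OPR from the team's final event.
season = (
    df.sort_values("event_order")
      .groupby("team")["opr"]
      .agg(max_opr="max", min_opr="min", avg_opr="mean", last_opr="last")
      .reset_index()
)

# Chezy Champs OPR, assumed to be repeated on every row for a given team.
cc = df.drop_duplicates("team")[["team", "cc_opr"]]
merged = season.merge(cc, on="team")

# Scatter one season summary against the CC OPR, with a 1:1 reference line;
# points above the line are teams that outperformed their season data.
fig, ax = plt.subplots()
ax.scatter(merged["avg_opr"], merged["cc_opr"])
lim = float(max(merged["avg_opr"].max(), merged["cc_opr"].max()))
ax.plot([0, lim], [0, lim], color="gray")
ax.set_xlabel("season avg OPR")
ax.set_ylabel("Chezy Champs OPR")
plt.show()
```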
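The above/below count from the trendline caveat is then a couple of lines on top of the same frame (attended_cmp is another assumed column, a boolean for whether the team made champs):

```python
# Continuing from the sketch above: count teams on each side of the 1:1
# line, split by champs attendance (attended_cmp is an assumed boolean).
flags = df.drop_duplicates("team")[["team", "attended_cmp"]]
for attended, group in merged.merge(flags, on="team").groupby("attended_cmp"):
    over = int((group["cc_opr"] > group["avg_opr"]).sum())
    label = "cmp" if attended else "no_cmp"
    print(f"{label}: {over} above the 1:1 line, {len(group) - over} at or below")
```

Unlike a fitted trendline, this count doesn't get dragged around by a few outliers in a small sample.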
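And the multiple-qualification coloring rule is just a priority lookup; a sketch with made-up labels:

```python
# Priority order from the category list above: a team that qualified several
# ways is colored by the first matching entry. Labels are shorthand, not
# taken from any real data set.
PRIORITY = ("captain/1st pick", "2nd pick", "awards", "waitlist")

def coloring_category(methods):
    """Pick the highest-priority qualification method a team achieved."""
    for method in PRIORITY:
        if method in methods:
            return method
    return "waitlist"  # fallback for teams whose route I couldn't identify

# e.g. a DCMP winner that also earned RCA gets the captain/1st pick color
print(coloring_category({"awards", "captain/1st pick"}))  # captain/1st pick
```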