Chief Delphi > FIRST > General Forum
#1 | 09-20-2016, 12:14 AM
Rachel Lim
Registered User, FRC #1868 (Space Cookies), Team Role: Student
Join Date: Sep 2014 | Rookie Year: 2014 | Location: Moffett Field | Posts: 239
[C^3] Predicting Offseason Performance

Predicting Offseason Performance

The first regionals are still 163 days away, but Chezy Champs is coming up this weekend, marking the first of California's offseason events. Offseasons provide a good opportunity for prescouting, since each team has an entire season's worth of data behind it. However, the question arises: how well does competition-season data predict offseason performance?

To illustrate the trend, I graphed competition-season OPRs against Chezy Champs OPRs from 2014 and 2015. The data includes all teams, including non-CA teams. OPR calculations include division data but exclude Einstein.
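For anyone new to OPR: it is conventionally estimated by least squares, modeling each alliance's score as the sum of its members' contributions. A minimal toy sketch of that idea (made-up match data and a `compute_opr` helper of my own naming; this is not the exact calculation behind these graphs):

```python
import numpy as np

def compute_opr(matches, teams):
    """Estimate OPR by least squares: each alliance score is modeled
    as the sum of its member teams' contributions."""
    idx = {t: i for i, t in enumerate(teams)}
    A = np.zeros((len(matches), len(teams)))  # one row per alliance score
    b = np.zeros(len(matches))
    for row, (alliance, score) in enumerate(matches):
        for t in alliance:
            A[row, idx[t]] = 1.0  # team t played on this alliance
        b[row] = score
    opr, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solution
    return dict(zip(teams, opr))

# Toy data: three teams, three two-team alliance scores
teams = [111, 222, 333]
matches = [((111, 222), 30), ((222, 333), 50), ((111, 333), 40)]
oprs = compute_opr(matches, teams)
```

With enough matches the system is overdetermined and the least-squares fit smooths out noise; in this toy case it solves exactly.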

Without further delay, here are the graphs:

- 2014: max vs CC, avg vs CC
- 2014: min vs CC, last vs CC
- 2015: max vs CC, avg vs CC
- 2015: min vs CC, last vs CC

The trends are interesting: 2014 teams almost always underperformed their season expectations, while 2015 split clearly between teams that qualified for champs and those that didn't. In both years, teams tended to score above their min OPR but below their max. Average and last OPRs were fairly decent indicators, especially in 2015 for teams that qualified for champs. In general, teams that qualified for champs had offseason performances in line with their season ones, while those that didn't tended to score under expectations.


A quick explanation of my naming:
- All calculations include regionals, district events, DCMPs, and division data
- Max, min, avg, and last OPRs are the highest, lowest, average, and most recent event OPRs for a team across all its events (excluding Einstein)
- cmp includes only data from teams that attended champs
- no_cmp includes only data from teams that didn't attend champs

It is also probably worth saying that the trendlines can be misleading, especially for small or heavily clustered data sets (e.g., the 2015 no_cmp data). For those sets, and perhaps for the rest as well, counting the number of teams above the 1:1 line (i.e., teams that outperformed their season data) versus those below it might be more accurate.
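That above/below-the-line count is easy to script; a quick sketch (helper name is mine, and it assumes the season/offseason OPR pairs have already been loaded from the CSV):

```python
def count_vs_line(pairs):
    """Count teams above/below the 1:1 line, given (season_opr, offseason_opr)
    pairs. Above the line means the team outperformed its season data."""
    above = sum(1 for season, offseason in pairs if offseason > season)
    below = sum(1 for season, offseason in pairs if offseason < season)
    return above, below

# Made-up example pairs, not the thread's data
example = [(10, 12), (20, 15), (30, 30), (5, 9)]
```

Ties (exactly on the line) fall into neither count, which is worth deciding explicitly for real data.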

The variation in teams that attended champs surprised me, so I colored them by qualification type.

However, I hit my image limit here, so I've included them in the post below.

The categories used are as listed below:
- Captain/1st pick: team won (or received a wildcard as the finalist alliance) as the captain/1st pick of the alliance. DCMP winners were also put here even though they also qualified via points
- 2nd pick: same as above, but with 2nd picks
- Awards: EI, RCA, RAS
- Waitlist: qualified via the waitlist (or I didn't figure out how else they qualified)
- Teams that qualified via multiple means were colored according to the method highest in this list. Pre-qualified teams were not colored differently since they all qualified again through one of these methods.


Raw data: offseason_vs_season_opr_data.csv
#2 | 09-20-2016, 12:16 AM
Rachel Lim (FRC #1868, Space Cookies)
Re: [C^3] Predicting Offseason Performance

Here are the graphs I wasn't able to fit into the previous post:



#3 | 09-20-2016, 02:10 AM
SoftwareBug2.0 (AKA Eric)
Registered User, FRC #1425 (Error Code Xero), Team Role: Mentor
Join Date: Aug 2004 | Rookie Year: 2004 | Location: Tigard, Oregon | Posts: 485
Re: [C^3] Predicting Offseason Performance

Have you considered statistical tests for goodness of fit? They would allow you to compare the results in an objective way.
#4 | 09-20-2016, 11:43 AM
Francis-134
Lifer, FRC #0190 (Gompei and the Herd), Team Role: Mentor
Join Date: Jan 2003 | Rookie Year: 2003 | Location: Worcester, MA | Posts: 592
Re: [C^3] Predicting Offseason Performance

This is very interesting! Do you think the rule changes in 2015 caused the increase in scores relative to in-season performance? Or perhaps the rule changes benefited the better teams (the teams that could handle cans best) more than the average team.
#5 | 09-21-2016, 05:38 PM
Rachel Lim (FRC #1868, Space Cookies)
Re: [C^3] Predicting Offseason Performance

Quote:
Originally Posted by SoftwareBug2.0 View Post
Have you considered statistical tests for goodness of fit? They would allow you to compare the results in an objective way.
I couldn't find an easy way to do that in Excel, but I'll try to do it next time. The data is noisy, but it'd still be nice to have an objective measure to compare.

Quote:
Originally Posted by Francis-134 View Post
This is very interesting! Do you think the rule changes in 2015 caused the increase in scores relative to in-season performance? Or perhaps the rule changes benefited the better teams (the teams that could handle cans best) more than the average team.
Thanks!

I totally blanked on that, but rule changes could definitely have affected the scores. I'm not sure there's a good way to analyze those effects, but I'd guess you're right that the changes benefited teams that were previously hitting the limit on recycling containers (i.e., the better teams to begin with).
#6 | 09-21-2016, 06:33 PM
SoftwareBug2.0 (FRC #1425, Error Code Xero)
Re: [C^3] Predicting Offseason Performance

Quote:
Originally Posted by Rachel Lim View Post
I couldn't find an easy way to do that in Excel, but I'll try to do it next time. The data is noisy, but it'd still be nice to have an objective measure to compare.
If you look on page 3 of this PDF:

http://dataprivacylab.org/courses/po.../ExcelLine.pdf

there's a picture showing a checked box for "Display Equation on chart". Click the box below it, labeled "Display R-squared value on chart".
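Outside Excel, the same R-squared value can be computed directly from the data and the fitted trendline y = slope*x + intercept; a minimal sketch (plain Python, function name is mine):

```python
def r_squared(xs, ys, slope, intercept):
    """Coefficient of determination for a fitted line against the data."""
    mean_y = sum(ys) / len(ys)
    ss_tot = sum((y - mean_y) ** 2 for y in ys)      # total variance
    ss_res = sum((y - (slope * x + intercept)) ** 2  # residual variance
                 for x, y in zip(xs, ys))
    return 1 - ss_res / ss_tot
```

R-squared is 1 for a perfect fit and falls toward 0 (or below, for a line worse than the mean) as the residuals grow.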
The Chief Delphi Forums are sponsored by Innovation First International, Inc.
Powered by vBulletin® Version 3.6.4. Copyright ©2000-2017, Jelsoft Enterprises Ltd. Copyright © Chief Delphi.