#31 - 01-26-2018, 02:05 PM
Caleb Sykes
Re: paper: Miscellaneous Statistics Projects

I ran significance tests for the years that were most likely to have a significant advantage for one color over the other. The results are in this table. Of the years I tested, 2017 was the only one that came out significant; none of the others were even close. Even 2017 should be viewed with caution, since I essentially ran 13 significance tests, so there was about a 30% chance that at least one of them would produce a p-value at least as low as 2017's purely by chance.
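
One way to arrive at a number like that 30% is the standard familywise-error calculation for independent tests. A minimal sketch (the p-value of 0.027 below is an illustrative placeholder, not the actual 2017 value from the linked table):

Code:
# Chance that at least one of n independent tests produces a p-value at
# least as small as p_obs purely by chance.
# p_obs = 0.027 is a placeholder; the real 2017 p-value is in the table.
n_tests = 13
p_obs = 0.027
p_at_least_one = 1 - (1 - p_obs) ** n_tests
print(f"P(at least one p <= {p_obs} across {n_tests} tests) = {p_at_least_one:.2f}")  # ~0.30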

Basically, the only year for which we have reasonable evidence against the null hypothesis is 2017, and even for 2017 I would still be wary of rejecting it.

#32 - 01-26-2018, 03:54 PM
Ginger Power
Re: paper: Miscellaneous Statistics Projects

Quote:
Originally Posted by Caleb Sykes View Post
Now that we actually have team lists for events, I thought I would revisit my 2018 Chairman's Predictions workbook since it is the most popular download of mine. It turns out that I did not have support for 2018 rookies, resurrected teams, or new veterans in these predictions.

I have attached a new workbook titled "2018_Chairman's_predictions_v2" which provides support for these groups. I have also added an easy way to import team lists for events simply by entering the event key. If you have additional knowledge of events (or if you want to make a hypothetical event), you can still add teams to the list manually. I have also switched to using the TBA API v3, so this should hopefully still work after Jan 1.

Let me know if you notice any bugs with this book.
I find this spreadsheet extremely interesting. I'm wondering what the logic is behind not capping the number of years that contribute to mCA.

From the FIRST Inspires Website:

Quote:
The criterion for the Chairman’s Award has special emphasis on recent accomplishments in both the current season, and the preceding two to five years. The judges focus on teams’ activities over a sustained period, as distinguished from just the robot design and build period.
Given that judges are instructed to emphasize the most recent 2-5 years, I would think it makes sense to ignore accomplishments made prior to 2013 when calculating mCA. Obviously you have the 19% regression to 0, but an award won in 2009 still has a residual effect even though it realistically doesn't mean much anymore.
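
To put a rough number on that residual: a minimal sketch, assuming the 19% regression is applied as a simple multiplicative decay of 0.81 per year of age (an assumption about how the model works, not something stated in the workbook):

Code:
# Assumed: each year of age multiplies an award's contribution by 0.81
# (the 19% regression toward 0 applied once per year).
decay = 0.81
years_old = 2018 - 2009
residual = decay ** years_old
print(f"Residual weight of a {years_old}-year-old award: {residual:.2f}")  # ~0.15

Under that assumption, a 2009 award would still carry roughly 15% of its original weight in 2018, which is the residual effect being described.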

I'm of the opinion that keeping a team's entire body of work is a better representation of their standing as a Hall of Fame contender, while keeping just the most recent 5 years would better represent a team's standing at a local event.

Additionally, I'm curious as to why Rookie All-Star isn't factored into mCA. My understanding is that the Rookie All-Star is essentially the rookie team that best fits the mold of a future Chairman's Award team. I would think that a team that has won RAS is more likely to win CA in the future than a team that didn't.

Edit:

In terms of event predictions, I'm wondering if it would make sense to have some sort of cutoff after the top X teams. Realistically, you won't have 60/60 teams at an event presenting for Chairman's Award, so it doesn't make sense for the 60th-ranked team by mCA to have a 0.5% chance of winning CA. I don't know what percentage of teams at an event typically submit for Chairman's Award... my guess would be 1/3, but that's probably high.

Last edited by Ginger Power : 01-26-2018 at 04:15 PM.

#33 - 01-26-2018, 07:55 PM
Caleb Sykes
Re: paper: Miscellaneous Statistics Projects

Quote:
Originally Posted by Ginger Power View Post
snip
All good points. I'll go back and test out some of these thoughts with my model. Capping to only the past 5 years especially intrigues me.

Quote:
Additionally, I'm curious as to why Rookie All-Star isn't factored into mCA. My understanding is that the Rookie All-Star is essentially the rookie team that best fits the mold of a future Chairman's Award team. I would think that a team that has won RAS is more likely to win CA in the future than a team that didn't.
I did build an RAS value into the model, which is why it shows up in the "Model Parameters" tab. However, I found that the optimal value for this award in terms of predictive power was 0 (±50 or so). I was originally surprised by this, and a little disappointed to be honest. I'll try optimizing my model again to make sure this was not done in error, but I doubt it was. All of the weightings I use are those that maximize the predictive power of my model; they have nothing to do with personal preference.
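
For anyone curious what "optimal in terms of predictive power" means mechanically, here is a toy sketch of fitting a single award weight on synthetic data. The fake events, the exp(mCA/100) win model, and all numbers here are assumptions for illustration only; this is not how the actual workbook is implemented:

Code:
import math
import random

# Toy sketch: choose the RAS weight that maximizes the mean log-probability
# assigned to each event's actual Chairman's winner.
random.seed(0)

def fake_event(n_teams=40):
    teams = [{"base_mca": random.gauss(100, 80), "has_ras": random.random() < 0.1}
             for _ in range(n_teams)]
    weights = [math.exp(t["base_mca"] / 100) for t in teams]  # synthetic "truth"
    winner = random.choices(range(n_teams), weights=weights)[0]
    return teams, winner

events = [fake_event() for _ in range(200)]

def mean_log_prob(ras_weight):
    total = 0.0
    for teams, winner in events:
        scores = [t["base_mca"] + ras_weight * t["has_ras"] for t in teams]
        exps = [math.exp(s / 100) for s in scores]
        total += math.log(exps[winner] / sum(exps))
    return total / len(events)

best_weight = max(range(-100, 101, 10), key=mean_log_prob)
print("Best RAS weight on this toy data:", best_weight)

Since the synthetic winners here ignore RAS entirely, the fitted weight should come out close to 0 (up to noise), loosely mirroring the result described above.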

Quote:
In terms of event predictions, I'm wondering if it would make sense to have some sort of cutoff after the top X teams. Realistically, you won't have 60/60 teams at an event presenting for Chairman's Award, so it doesn't make sense for the 60th-ranked team by mCA to have a 0.5% chance of winning CA. I don't know what percentage of teams at an event typically submit for Chairman's Award... my guess would be 1/3, but that's probably high.
My concern with this line of thought is that, although only some proportion of teams at an event submit for Chairman's, we don't know which teams those are. Obviously, teams with stronger awards histories are more likely to submit for Chairman's than teams without such histories, but we can never definitively say which teams are and are not presenting. As an example, I ran through the weakest mCA teams to win Chairman's last year, and team 4730 won at PCH Albany despite having a negative mCA, never having won a judged award before, and having the lowest mCA of any team at their event. You can check this using my "2017 Chairman's predictions.xlsm" workbook. Going from 0.5% to 0.1%, for example, is a deceptively huge jump: we would expect about one 0.5% team to win Chairman's each season (since there are around 200 events), but we would only expect a 0.1% team to win Chairman's about once in a 5-year period.
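
The arithmetic behind that comparison, as a quick sketch (200 events per season is the figure from the post; the two probabilities are the hypothetical per-team win chances being compared):

Code:
# Expected number of such winners per season at a given per-team win
# probability, assuming roughly 200 events per season.
events_per_season = 200
for p in (0.005, 0.001):
    expected = events_per_season * p
    print(f"p = {p:.1%}: ~{expected:.1f} winners per season "
          f"(about one every {1 / expected:.0f} seasons)")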

I'll try adding a "weak team" penalty into the model that subtracts some mCA amount from the lowest X% of teams at the event to see if that improves the predictive power at all, but I'm pretty skeptical since the model seemed to be well-calibrated when I built it.
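
For concreteness, a minimal sketch of that penalty idea; the cutoff fraction, penalty size, and team numbers below are made-up placeholders, not values from the model:

Code:
# Subtract a fixed mCA penalty from the bottom X% of teams at an event.
# bottom_frac and penalty are illustrative placeholders.
def apply_weak_team_penalty(mca_by_team, bottom_frac=0.25, penalty=50.0):
    ranked = sorted(mca_by_team, key=mca_by_team.get)  # weakest mCA first
    penalized = set(ranked[:int(len(ranked) * bottom_frac)])
    return {team: mca - penalty if team in penalized else mca
            for team, mca in mca_by_team.items()}

# Hypothetical 4-team event
print(apply_weak_team_penalty({"254": 900.0, "1678": 850.0, "8888": 5.0, "9999": -10.0}))
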

#34 - 01-27-2018, 09:49 AM
Ginger Power
Re: paper: Miscellaneous Statistics Projects

Quote:
Originally Posted by Caleb Sykes View Post
snip
I completely understand that all of your decisions were based on predictive power. All of my suggestions were based on my impressions of the Chairman's Award and what I know about teams that have won it. Obviously not too scientific on my end.

I'm looking forward to future postings on the subject!

#35 - 02-19-2018, 08:02 PM
Jacob Plicque
Scouting Statistics

Caleb
I found your scouting system to be a great source of strategy and scoring trends in 2017. I hope you are producing a new one for 2018.

#36 - 02-19-2018, 08:33 PM
Caleb Sykes
Re: Scouting Statistics

Quote:
Originally Posted by Jacob Plicque View Post
Caleb
I found your scouting system to be a great source of strategy and scoring trends in 2017. I hope you are producing a new one for 2018.
Glad to hear it. I'm always happy to hear that people find my work useful.

I'm working on a 2018 scouting database and event simulator right now. They'll definitely be out before week 1 competitions start; I can't promise a specific date, but hopefully no later than next Monday.