19-04-2015, 08:15
MechEng83
Lead Mentor/Engineer
AKA: Mr. Cool
FRC #1741 (Red Alert)
Team Role: Coach
 
Join Date: May 2011
Rookie Year: 2011
Location: Indiana
Posts: 617
Re: 2015 Championship division simulated rankings

Quote:
Originally Posted by Spoam
I had the exact same idea actually. The problem with the OPR residual, however, is that it gives you information about the accuracy of the regression with regard to each match, not each robot.

If you take the set of residuals from the matches a robot played in, it makes intuitive sense that that data should contain some information about that robot's deviation from their OPR. But is a data set of only 8-12 elements enough for this value to dominate the noise generated by their alliance partners' deviations (and therefore produce a meaningful standard deviation itself)? I dunno.

If some statistics wiz would like to chime in on this, I'd love to hear it.
Quote:
Originally Posted by themccannman
The problem with trying to use variance or standard deviation with OPR is that the number it spits out pretty much just tells you what that team's match schedule was like. OPR is already a calculation of how much an alliance's score tends to change when certain teams are playing, so calculating a standard deviation from it is basically just going backwards. OPR tries to determine how one robot affects an alliance's score, whereas SD (with unique alliances) would tell you how each alliance affected that robot's score.

Unfortunately it's not very useful unless you have actual scouted data for each team, in which case you can make much more accurate predictions about rankings. Our scouting system had a little less than an 80% success rate guessing the winners of each match in our division the last two years, and those games were very defense-heavy. I would bet on this system approaching a 95% success rate guessing match results this year, since the game is much more consistent.
Good points. Thanks for pointing out the flaw in my idea. What I surmise is that this calculation of stdev would be marginally useful at best. This reminds me of a mantra I hear at work quite often: "All models are wrong. Some models are useful."
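
For anyone who still wants to poke at the idea, here's a rough sketch of the calculation we're talking about: build the standard OPR least-squares fit from the alliance scores, then group the per-alliance residuals by team and take a standard deviation. The match-data format and function name below are made up for illustration; this isn't anyone's actual scouting code.

```python
# Rough sketch only. Assumes `matches` is a list of tuples built from scouted
# or FMS data in the (made-up) form:
#   (red_teams, blue_teams, red_score, blue_score)
import numpy as np
from collections import defaultdict

def opr_and_residual_spread(matches):
    # Flatten each match into two observations: (teams on the alliance, alliance score).
    obs = []
    for red, blue, red_score, blue_score in matches:
        obs.append((red, red_score))
        obs.append((blue, blue_score))

    teams = sorted({t for alliance, _ in obs for t in alliance})
    idx = {t: i for i, t in enumerate(teams)}

    # Design matrix: A[row, team] = 1 if that team played on that alliance.
    A = np.zeros((len(obs), len(teams)))
    y = np.zeros(len(obs))
    for row, (alliance, score) in enumerate(obs):
        for t in alliance:
            A[row, idx[t]] = 1.0
        y[row] = score

    # OPR is the least-squares solution of A @ opr ~= y.
    opr, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Per-observation residuals, credited to every team on that alliance.
    residuals = y - A @ opr
    per_team = defaultdict(list)
    for (alliance, _), r in zip(obs, residuals):
        for t in alliance:
            per_team[t].append(r)

    opr_by_team = {t: float(opr[idx[t]]) for t in teams}
    spread_by_team = {t: float(np.std(per_team[t], ddof=1)) if len(per_team[t]) > 1
                      else float("nan") for t in teams}
    return opr_by_team, spread_by_team
```

Note that every residual a team "owns" is shared with its two partners, and each team only has 8-12 of them, so the per-team spread comes out noisy and schedule-dependent, which is basically the point made above.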
__________________

2016 INWLA GP | INWCH Entrepreneurship | INPMH DCA | INCMP Team Spirit | CAGE Match Winner (w/ 1747 & 868), Finalist (1471 w/ 1529 & 1018), Best Fans
2015 ININD Judges Award, Proud "Phyxed Red Card" alliance partners of 1529 & 1720 | INWLA EI | INCMP GP
2014 Boilermaker Creativity | Chesapeake Finalist, Safety, GP, Entrepreneurship | IN State Championship Winner (w/ 868 & 1018) | CAGE Match Winner (w/ 1024, 5402 & 1646)
2013 Boilermaker RCA, Innovation in Controls, Finalist | Crossroads Entrepreneurship | Newton Semi-finalist
2012 Boilermaker Entrepreneurship | Queen City EI | Curie Semi-finalist
2011 Boilermaker RCA, Entrepreneurship
Red Alert Robotics