#9   19-04-2015, 05:16
themccannman
registered lurker
AKA: Jake McCann
FRC #3501
Team Role: Mentor
 
Join Date: Feb 2013
Rookie Year: 2011
Location: San Jose, CA
Posts: 432
Re: 2015 Championship division simulated rankings

Quote:
Originally Posted by Spoam
I had the exact same idea actually. The problem with the OPR residual, however, is that it gives you information about the accuracy of the regression with regard to each match, not each robot.

If you took the set of residuals from the matches a robot played in, it makes intuitive sense that that data should contain some level of information about that robot's deviation from their OPR. But is a data set of only 8-12 elements enough for this value to dominate the noise generated by their alliance partners' deviations (and therefore produce a meaningful standard deviation itself)? I dunno.

If some statistics wiz would like to chime in on this, I'd love to hear it.
The problem with trying to use variance or standard deviation with OPR is that the number it spits out pretty much just tells you what a team's match schedule was like. OPR is already a calculation of how much an alliance's score tends to change when certain teams are playing; calculating a standard deviation on top of that is basically just going backwards. OPR tries to determine how one robot affects an alliance's score, whereas SD (with unique alliances) would tell you how each alliance affected that robot's score.
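To put it concretely, here's a rough sketch of what I mean (the team numbers and scores are made up for illustration): OPR is the least-squares solution over a binary alliance matrix, the residuals come out per alliance-match, and grouping them by team drags the partners' deviations in with your team's.

Code:
import numpy as np

# Toy data: each entry is one alliance in one match (made-up teams/scores).
alliances = [
    (("1678", "254", "971"), 145), (("118", "148", "1114"), 132),
    (("1678", "118", "971"), 151), (("254", "148", "1114"), 128),
    (("1678", "148", "1114"), 140), (("254", "118", "971"), 138),
    (("1678", "254", "148"), 149), (("971", "118", "1114"), 125),
]

teams = sorted({t for lineup, _ in alliances for t in lineup})
idx = {t: i for i, t in enumerate(teams)}

# Design matrix: one row per alliance, 1 where a team played.
A = np.zeros((len(alliances), len(teams)))
b = np.zeros(len(alliances))
for row, (lineup, score) in enumerate(alliances):
    for t in lineup:
        A[row, idx[t]] = 1.0
    b[row] = score

# OPR is the least-squares estimate of each team's contribution.
opr, *_ = np.linalg.lstsq(A, b, rcond=None)

# Residuals are per alliance-match, not per robot. Grouping each team's
# residuals and taking their SD mixes in the partners' deviations too.
residuals = b - A @ opr
for t in teams:
    rows = A[:, idx[t]] == 1.0
    print(t, "OPR %.1f" % opr[idx[t]],
          "residual SD %.1f" % residuals[rows].std())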

Unfortunately it's not very useful unless you have actual scouted data for each team, in which case you can make much more accurate predictions about rankings. Our scouting system had a little under an 80% success rate at picking the winner of each match in our division over the last two years, and those games were very defense-heavy. I would bet on this system approaching a 95% success rate at predicting match results this year, since the game is much more consistent.
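For what it's worth, the success-rate check is nothing fancy: predict each winner from the scouted per-team averages and count how often you're right. Something like this sketch (the averages and schedule below are made up):

Code:
# Predict each match winner from scouted per-team averages, then
# measure the hit rate against the actual results.
scouted_avg = {"1678": 52.0, "254": 61.5, "971": 48.0,
               "118": 55.0, "148": 50.5, "1114": 58.0}

matches = [
    # (red alliance, blue alliance, actual winner)
    (("1678", "254", "971"), ("118", "148", "1114"), "red"),
    (("1678", "118", "148"), ("254", "971", "1114"), "blue"),
]

correct = 0
for red, blue, actual in matches:
    red_pred = sum(scouted_avg[t] for t in red)
    blue_pred = sum(scouted_avg[t] for t in blue)
    predicted = "red" if red_pred > blue_pred else "blue"
    correct += (predicted == actual)

print("success rate: %.0f%%" % (100.0 * correct / len(matches)))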
__________________
All posts here are purely my own opinion.
2011-2015: 1678
2016: 846
2017 - current: 3501