Quote:
Originally Posted by Spoam
I had the exact same idea actually. The problem with the OPR residual, however, is that it gives you information about the accuracy of the regression with regard to each match, not each robot.
If you take the set of residuals from the matches a robot played in, it makes intuitive sense that that data should contain some information about that robot's deviation from their OPR. But is a data set of only 8-12 elements enough for this value to dominate the noise generated by their alliance partners' deviations (and therefore produce a meaningful standard deviation itself)? I dunno.
If some statistics wiz would like to chime in on this, I'd love to hear it.
The problem with trying to use variance or standard deviation with OPR is that the number it spits out pretty much just tells you what that team's match schedule was like. OPR is already a calculation of how much an alliance's score tends to change when certain teams are playing; computing a standard deviation from it is basically just going backwards. OPR tries to determine how one robot affects an alliance's score, whereas the SD (with unique alliances) would tell you how each alliance affected that robot's score.
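For anyone who wants to try it anyway, here's a minimal sketch in Python/numpy of both calculations: the OPR least-squares fit, and the per-team residual standard deviation the quoted post describes. The team numbers and scores are made up for illustration; a real version would feed in a full event's match data.

```python
import numpy as np

# Hypothetical match data: (red alliance, blue alliance, red score, blue score).
# Team numbers and scores are placeholders, not real results.
matches = [
    ((1114, 2056, 4334), (254, 968, 1538), 78, 85),
    ((254, 1114, 1538), (968, 2056, 4334), 92, 70),
    ((968, 4334, 1538), (254, 2056, 1114), 65, 88),
]

teams = sorted({t for red, blue, _, _ in matches for t in red + blue})
idx = {t: i for i, t in enumerate(teams)}

# One row per alliance appearance: A[row, i] = 1 if team i was on that alliance.
rows, scores = [], []
for red, blue, red_score, blue_score in matches:
    for alliance, score in ((red, red_score), (blue, blue_score)):
        row = np.zeros(len(teams))
        for t in alliance:
            row[idx[t]] = 1.0
        rows.append(row)
        scores.append(score)

A = np.array(rows)
b = np.array(scores, dtype=float)

# OPR is the least-squares solution to A @ opr ~= b.
opr, *_ = np.linalg.lstsq(A, b, rcond=None)

# Per-alliance residuals: how far each actual score fell from the sum
# of that alliance's OPRs.
residuals = b - A @ opr

# For each team, take the residuals of the alliances it played on and
# compute a standard deviation. As argued above, this mostly reflects
# the whole alliance (i.e. the schedule), not the robot alone.
team_sd = {t: residuals[A[:, idx[t]] == 1].std(ddof=1) for t in teams}

for t in teams:
    print(f"{t}: OPR {opr[idx[t]]:.1f}, residual SD {team_sd[t]:.1f}")
```

With only a handful of matches per team, those per-team SDs are exactly as noisy as the quoted post worries they'd be.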
Unfortunately it's not very useful unless you have actual scouted data for each team, in which case you can make much more accurate predictions about rankings. Our scouting system guessed the winners of each match in our division with a little under an 80% success rate over the last two years, and those games were very defense-heavy. I would bet on this system approaching a 95% success rate this year, since the game is much more consistent.
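As a toy illustration of that kind of scouting-based prediction (not our actual system, and the averages are made up), the simplest possible predictor just sums each alliance's scouted average contributions:

```python
# Toy match-winner predictor from scouted per-team averages. The numbers
# are hypothetical placeholders, not anyone's real scouting data.
scouted_avg = {1114: 32.0, 2056: 30.5, 254: 29.0,
               968: 24.5, 1538: 22.0, 4334: 18.5}

def predict_winner(red, blue):
    """Pick whichever alliance's scouted averages sum higher."""
    red_total = sum(scouted_avg[t] for t in red)
    blue_total = sum(scouted_avg[t] for t in blue)
    return "red" if red_total > blue_total else "blue"

print(predict_winner((1114, 2056, 4334), (254, 968, 1538)))  # red (81.0 vs 75.5)
```

A real system would obviously track more than one number per team, but even this naive version shows why per-robot scouted data beats anything you can squeeze back out of alliance-level OPR.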