Quote:
Originally Posted by Ether
So what I'm trying to do is this: have a discussion about what "the" standard error might mean in the context of OPR.
Let us assume that the model OPR uses is a good description of FRC match performance - that is, each alliance's match score is the sum of its teams' performance values, and each team's performance in a given match is an independent draw from a distribution that is identical between matches.
OPR (the least-squares solution of that linear system) should then yield an estimate of the mean of each team's distribution. An estimate of the standard deviation can be obtained, as mentioned, by taking the RMS of the residuals.
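As a concrete sketch of what that computation looks like (a toy example assuming NumPy; the team count, two-team alliances, and scores below are made up for illustration - real FRC matches have 3-team alliances, but the algebra is the same):

```python
import numpy as np

# Hypothetical toy data: 3 teams, 4 matches. Row i of A marks which
# teams played in match i; y holds the alliance scores.
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]], dtype=float)
y = np.array([40.0, 35.0, 30.0, 44.0])

# OPR is the least-squares solution of A @ opr ~= y
opr, *_ = np.linalg.lstsq(A, y, rcond=None)

# The RMS of the residuals estimates the per-match standard deviation
residuals = y - A @ opr
sigma_hat = np.sqrt(np.mean(residuals ** 2))
print(opr, sigma_hat)
```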
To approximate the standard deviation of the mean (which is what is usually meant by the "standard error" of these sorts of measurements), one would then divide this by sqrt(n), where n is the number of matches used in the team's OPR calculation. (For those interested in a proof, simply consider the fact that variances add when summing independent random variables; the one-line version is spelled out below.)
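Spelling that out: if a team's per-match performances X_1, ..., X_n are independent with common variance sigma^2, then

```latex
\operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)
  = \frac{1}{n^{2}}\sum_{i=1}^{n}\operatorname{Var}(X_i)
  = \frac{\sigma^{2}}{n},
\qquad\text{so}\qquad
\operatorname{SE} = \frac{\sigma}{\sqrt{n}}.
```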
This, of course, fails if the assumptions we made at the outset don't hold (e.g. if a linear sum of fixed team contributions is not a good model of match scores). Moreover, even if the assumptions hold, if the distribution of a team's per-match performance is sufficiently wonky that the distribution of the mean is not particularly Gaussian, then one is fairly limited in the conclusions one can draw from the standard deviation of the mean anyway.
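To illustrate that last caveat with a hypothetical example (the breakdown probability and score values below are invented): consider a team whose per-match output is strongly bimodal. With only a handful of matches, the distribution of its mean score is visibly lumpy rather than bell-shaped:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "wonky" team: it either works (scores ~30) or breaks
# down (scores ~0) with equal probability, plus a little noise.
def match_scores(shape):
    works = rng.random(shape) < 0.5
    return np.where(works, 30.0, 0.0) + rng.normal(0.0, 2.0, shape)

n = 8  # matches per event: small, so the CLT has little room to work
means = match_scores((100_000, n)).mean(axis=1)

# The sampled means cluster around 30*k/8 for k = 0..8: nine lumps,
# not a bell curve, so "mean +/- 2 standard errors" intervals would
# not have the coverage a Gaussian assumption suggests.
print(np.percentile(means, [2.5, 50.0, 97.5]))
```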