Quote:
Originally Posted by Ether
From this and other prior statements, I had the very strong impression you were seeking a separate error estimate for each team's OPR.
Such estimates would certainly not be virtually identical for every team!
The approach I described does find a separate error estimate for each team, and with this approach those estimates turn out to be virtually identical. Why do you think they would "certainly not be virtually identical"?
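In case it helps to see what I mean concretely, here is a minimal sketch of one way to get such per-team error estimates, assuming OPR is computed as the ordinary least-squares solution of the alliance-score system. The matrix A, the synthetic team contributions, and the noise level below are all illustrative assumptions, not real event data:

```python
import numpy as np

rng = np.random.default_rng(0)
num_teams, num_alliances = 30, 160

# Incidence matrix: A[i, j] = 1 if team j played on alliance i.
A = np.zeros((num_alliances, num_teams))
for i in range(num_alliances):
    A[i, rng.choice(num_teams, size=3, replace=False)] = 1.0

# Synthetic alliance scores: sum of (made-up) team contributions plus noise.
true_opr = rng.uniform(5.0, 40.0, size=num_teams)
scores = A @ true_opr + rng.normal(0.0, 10.0, size=num_alliances)

# OPR as the ordinary least-squares solution of A * opr ~= scores.
opr, *_ = np.linalg.lstsq(A, scores, rcond=None)

# One pooled residual variance for the whole fit.
dof = num_alliances - num_teams
sigma2 = np.sum((scores - A @ opr) ** 2) / dof

# Covariance of the estimates is sigma^2 * (A^T A)^-1; each team's
# standard error is the square root of the corresponding diagonal entry.
std_err = np.sqrt(sigma2 * np.diag(np.linalg.inv(A.T @ A)))

print(np.round(std_err, 2))
```

Because the schedule gives every team roughly the same number of matches, the diagonal entries of (A^T A)^-1 are nearly equal, which is exactly why the per-team error bars come out virtually identical.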
Note that this is computing the confidence of each team's OPR estimate. That is different from trying to compute the match-to-match variance of each team's score contribution, which is a separate (and also very interesting) question. I think it would be reasonable to hypothesize that the match-to-match variance of score contribution could differ from team to team, possibly substantially.
For example, it might be interesting to know that team A scores 50 +/- 10 points with 68% confidence while team B scores 50 +/- 40 points with 68% confidence. At the very least, seeing that one team had a particularly large score variance might prompt you to investigate that robot and find the underlying root cause (for example, an autonomous routine that is awesome half the time and completely messes up the other half).
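Just to spell out the arithmetic behind that example: if we did have per-match contribution estimates for a team (which we don't, directly; that's the hard part of this question), the +/- at 68% confidence is roughly one standard deviation of those values. A toy sketch with made-up numbers:

```python
import numpy as np

# Hypothetical per-match contribution estimates for two teams (made-up
# numbers chosen to mimic the example above; real per-match contributions
# are not directly observable, which is what makes this question hard).
team_a = np.array([38, 63, 55, 36, 61, 44, 52, 51])
team_b = np.array([95, 8, 90, 5, 92, 10, 88, 12])

# Treating the contributions as roughly normal, the 68% confidence window
# is about the mean plus or minus one sample standard deviation.
for name, contrib in (("A", team_a), ("B", team_b)):
    print(f"Team {name}: {contrib.mean():.0f} +/- {contrib.std(ddof=1):.0f} "
          f"points (~68% confidence)")
```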
Hmmm....