Quote:
Originally Posted by wgardner
To be as clear as I can about this: This says that if we compute the OPRs based on the full data set, compute the match prediction residuals based on the full data set, then run lots of different tournaments with match results generated by adding the OPRs for the teams in the match and random match noise with the same match noise variance, and then compute the OPR estimates for all of these different randomly generated tournaments, we would expect to see the OPR estimates themselves have a standard deviation around 11.4.
This sounds very similar to bootstrap resampling (http://www.stat.cmu.edu/~cshalizi/40...lecture-08.pdf), which measures the variation of the estimated OPRs around the "true" OPR values rather than how consistently individual teams perform. That may be why the values are virtually identical.
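As a concrete illustration of the procedure wgardner describes (a parametric bootstrap), here is a minimal Python sketch. The function names, the number of simulations, and the toy alliance matrix are all illustrative assumptions, not anything from the original posts: fit OPRs by least squares, estimate the residual noise, simulate many tournaments by adding that noise to the fitted predictions, and look at the spread of the re-fitted OPRs.

[code]
import numpy as np

rng = np.random.default_rng(0)

def fit_opr(A, scores):
    # Least-squares OPR fit: scores ~= A @ opr, where A has a 1 for each
    # team on the alliance that produced that score.
    opr, *_ = np.linalg.lstsq(A, scores, rcond=None)
    return opr

def parametric_bootstrap_opr_sd(A, scores, n_sims=1000):
    # OPRs and residual noise level from the full (real) data set.
    opr_hat = fit_opr(A, scores)
    resid = scores - A @ opr_hat
    sigma = resid.std(ddof=A.shape[1])  # residual standard deviation

    # Re-run "tournaments": predicted scores plus fresh noise, re-fit OPRs.
    sims = np.empty((n_sims, A.shape[1]))
    for i in range(n_sims):
        fake_scores = A @ opr_hat + rng.normal(0.0, sigma, size=len(scores))
        sims[i] = fit_opr(A, fake_scores)

    # Per-team standard deviation of the simulated OPR estimates.
    return sims.std(axis=0)

# Toy example: 4 teams, 6 alliance score rows (purely made-up numbers).
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
scores = np.array([80.0, 55.0, 70.0, 60.0, 75.0, 62.0])
print(parametric_bootstrap_opr_sd(A, scores))
[/code]

Note that because the simulated scores are generated from the fitted OPRs themselves, the resulting standard deviation reflects uncertainty in the OPR estimates, not match-to-match variability of individual teams, which is the distinction being made above.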