Quote:
Originally Posted by Ether
Can we all agree that 0.1 is real-world meaningless?
There is without a doubt far more variation in consistency of performance from team to team.
Manual scouting data would surely confirm this.
Sure. To reiterate for others on the thread: under the model, the OPR estimates for a tournament like the one in the data provided have a one-standard-deviation confidence range of roughly +/- 11.4 points for nearly all teams (some teams might be 11.3, some 11.5, depending on their match schedules, but as Ether says these slight variations are essentially meaningless).
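For anyone curious where a number like +/- 11.4 comes from, here is a rough Python sketch of the calculation (the attached script is Scilab, so treat this as an analog rather than the actual code; the names A, b, and opr_with_stderr are my own). It takes the 0/1 match matrix and the score vector and pulls the per-team standard error out of the usual least-squares formulas:

```python
import numpy as np

def opr_with_stderr(A, b):
    """Least-squares OPR fit plus a 1-sigma standard error per team.

    A : (n_scores, n_teams) 0/1 matrix; row i marks the teams on the
        alliance that produced score b[i].
    b : (n_scores,) vector of alliance scores.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n, k = A.shape

    # OPR = least-squares solution of A @ x ~= b
    opr, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Estimate the score noise from the residuals (n - k degrees of freedom)
    residuals = b - A @ opr
    sigma2 = residuals @ residuals / (n - k)

    # Standard error of each team's OPR: sqrt of the diagonal of
    # sigma^2 * (A^T A)^{-1}. This is the part that varies slightly
    # from team to team depending on the match schedule.
    cov = sigma2 * np.linalg.inv(A.T @ A)
    stderr = np.sqrt(np.diag(cov))
    return opr, stderr
```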
For example, if a team had an OPR of 50 and played another identical tournament with the same match schedule but fresh randomness in the match results, the OPR computed from that tournament would probably land between 39 and 61 (if you're being picky: about 68% of the time, provided the errors are reasonably normal or Gaussian).
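If you want to check that 68% figure yourself, here is a quick sanity check under the same normality assumption (the numbers are just the ones from the example above, nothing pulled from the attachment):

```python
from scipy.stats import norm

sigma = 11.4    # 1-sigma confidence range from the fit
opr = 50.0      # example team

low, high = opr - sigma, opr + sigma                     # about 38.6 to 61.4
coverage = norm.cdf(high, opr, sigma) - norm.cdf(low, opr, sigma)
print(low, high, coverage)   # coverage ~0.683, i.e. about 68% of the time
```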
So picking a team for your alliance that has an OPR of 55 over a different team that has an OPR of 52 is silly. But picking a team that has an OPR of 80 over a team that has an OPR of 52 is probably a safe bet.
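To put rough numbers on why 55 vs. 52 is noise but 80 vs. 52 probably isn't, here is a back-of-the-envelope comparison. It treats the two OPR estimates as independent, which isn't exactly true (the estimates are correlated through the schedule), so take it as a rough guide only:

```python
import math

sigma = 11.4  # 1-sigma uncertainty on a single team's OPR

def opr_gap_in_sigmas(opr_a, opr_b, sigma=sigma):
    """How many standard deviations separate two OPR estimates?

    Treats the two estimates as independent, which ignores any
    covariance between teams, so this is only a rough guide.
    """
    sigma_diff = math.sqrt(2) * sigma      # std dev of the difference
    return (opr_a - opr_b) / sigma_diff

print(opr_gap_in_sigmas(55, 52))   # ~0.19 sigma: pure noise
print(opr_gap_in_sigmas(80, 52))   # ~1.7 sigma: probably a real difference
```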
In response to the latest post, this could be run on any other tournament for which the data is available. Ether made that particularly easy by providing the match matrix A and the vector of match results in nice csv files.
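Something like the following would do it in Python once you've saved the two csv files (the file names below are placeholders, not whatever Ether actually called them):

```python
import numpy as np

# Placeholder file names -- substitute the names of Ether's csv files.
A = np.loadtxt("match_matrix_A.csv", delimiter=",")   # 0/1 match matrix
b = np.loadtxt("match_scores_b.csv", delimiter=",")   # alliance scores

# Same least-squares fit as in the sketch above, on whichever tournament's data you have
opr, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(opr, 1))
```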
BTW, the code is attached and Scilab is free, so anybody can do this for whatever data they happen to have on hand.