#69   19-05-2015, 21:44
Ether
systems engineer (retired)
no team
 
Join Date: Nov 2009
Rookie Year: 1969
Location: US
Posts: 8,101
Re: "standard error" of OPR values

Quote:
Originally Posted by wgardner
Why do you think they would "certainly not be virtually identical"?
Because there's no reason whatsoever to believe there's virtually no variation in consistency of performance from team to team.

Manual scouting data would surely confirm this.


Consider the following thought experiment.

Team A gets actual scores of 40, 40, 40, 40, 40, 40, 40, 40, 40, 40 across its 10 qual matches.

Team B gets actual scores of 0, 76, 13, 69, 27, 23, 16, 88, 55, 33 across its 10 qual matches. Both teams average exactly 40 points per match.

The simulation you described assigns virtually the same standard error to their OPR values.
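A quick check of the numbers in the thought experiment, using only the score lists above: both teams average exactly 40 points per match, so an averages-based rating sees them as identical, yet their match-to-match spreads could hardly be more different.

```python
from statistics import mean, stdev

# Actual qual-match scores from the thought experiment above
team_a = [40] * 10
team_b = [0, 76, 13, 69, 27, 23, 16, 88, 55, 33]

print(mean(team_a), stdev(team_a))  # mean 40, stdev 0.0 -> perfectly consistent
print(mean(team_b), stdev(team_b))  # mean 40, stdev ~30 -> wildly inconsistent
```

Manual scouting data captures exactly this difference; a standard error that comes out the same for both teams does not.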

If what is being sought is a metric that correlates with the real-world trustworthiness of each individual team's OPR (which I believe is what Citrus Dad was seeking), then the standard error coming out of that simulation is not that metric.


My guess is that the 0.1 number is just measuring how well your random number generator is conforming to the sample distribution you requested.
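Here is a minimal sketch of that guess. It assumes (my assumption, not a description of wgardner's actual code) that the simulation perturbs every team's scores with the same requested noise distribution and re-estimates repeatedly; the noise level `REQUESTED_SIGMA` and the use of a simple per-team average in place of the full OPR least-squares solve are both simplifications for illustration. Under those assumptions the reported "standard error" comes out near sigma/sqrt(n) for every team, no matter how consistent the team actually is.

```python
import random
from statistics import mean, stdev

random.seed(1)
REQUESTED_SIGMA = 10.0  # hypothetical noise spec fed to the simulation
N_TRIALS = 2000

def simulated_se(base_scores):
    """Re-estimate a team's average many times after adding the
    *same* requested noise to every match score, and return the
    spread of those estimates (the simulation's 'standard error')."""
    estimates = []
    for _ in range(N_TRIALS):
        noisy = [s + random.gauss(0, REQUESTED_SIGMA) for s in base_scores]
        estimates.append(mean(noisy))
    return stdev(estimates)

team_a = [40] * 10
team_b = [0, 76, 13, 69, 27, 23, 16, 88, 55, 33]

# Both come out near REQUESTED_SIGMA / sqrt(10) ~ 3.16, regardless of
# how consistent each team's actual scores are.
print(simulated_se(team_a), simulated_se(team_b))
```

The output depends only on the requested noise and the number of matches, not on the teams' score histories, which is why the resulting number measures the random number generator rather than any real-world trustworthiness.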



Last edited by Ether : 19-05-2015 at 22:05.