#44   17-05-2015, 17:15
Citrus Dad
Business and Scouting Mentor
AKA: Richard McCann
FRC #1678 (Citrus Circuits)
Team Role: Mentor
 
Join Date: May 2012
Rookie Year: 2012
Location: Davis
Posts: 994
Re: "standard error" of OPR values

Quote:
Originally Posted by wgardner View Post
I guess my uncertainty is not about what "standard error" means but what you mean by "the OPRs."

But I'm guessing that what you're really interested in is: if the same tournament were run multiple times and if the match results varied randomly as we modeled (yeah, yeah, and if everybody had a can opener), what would be the standard error of the OPR estimates? Or in other words, what if the same teams with the same robots and the same drivers played in 100 tournaments back-to-back and we computed the OPR for each team for all 100 tournaments, what would be the standard error for these 100 different OPR estimates?

Too much for a Sunday morning. Thoughts?
The OPR measures the expected contribution per MATCH; computed over a tournament, it represents a team's average contribution per match. So if we reran the same set of matches over and over, we would expect to see similar OPR estimates. The SE is the standard deviation of that sampling distribution: it tells us how much the OPR estimate would bounce around across those reruns. A confidence interval (e.g. 95%) means that if we reran the tournament over and over (with complete amnesia by the participants) and built the interval each time, about 95% of those intervals would contain the team's true expected contribution per match.
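To make that concrete, here is a minimal sketch of how OPRs and their standard errors come out of the usual least-squares setup. The team labels, alliance rows, and scores below are entirely made up for illustration; real tournaments have many more teams and matches, and this assumes the standard iid-error model behind OPR.

```python
# Hypothetical sketch: least-squares OPR with standard errors.
# All match data below is invented for illustration.
import numpy as np

# Each row is one alliance's appearance: 1 if the team was on it.
# Columns are teams T1..T4; b holds that alliance's score.
A = np.array([
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)
b = np.array([60.0, 40.0, 55.0, 45.0, 52.0, 48.0])

# OPR = least-squares solution of A @ opr ≈ b
opr, _, _, _ = np.linalg.lstsq(A, b, rcond=None)

# Residual variance estimate: SSE / (n - p) degrees of freedom
n, p = A.shape
residuals = b - A @ opr
sigma2 = residuals @ residuals / (n - p)

# SE of each OPR: sqrt of the diagonal of sigma^2 * (A'A)^-1
cov = sigma2 * np.linalg.inv(A.T @ A)
se = np.sqrt(np.diag(cov))

# Normal-approximation 95% CI: opr ± 1.96 * se
for i, (o, s) in enumerate(zip(opr, se), start=1):
    print(f"Team {i}: OPR={o:.1f}  SE={s:.2f}  "
          f"95% CI=({o - 1.96*s:.1f}, {o + 1.96*s:.1f})")
```

The CI printed for each team is exactly the "set range" described above: under the model, roughly 95% of intervals built this way across repeated tournaments would cover the team's true per-match contribution.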