13-05-2015, 10:49
Ed Law
Registered User
no team (formerly with 2834)
 
Join Date: Apr 2008
Rookie Year: 2009
Location: Foster City, CA, USA
Posts: 752
Re: "standard error" of OPR values

I agree with most of what people have said so far. I would like to add my observations and opinions on this topic.

First of all, it is important to understand how OPR is calculated and what it means from a mathematical standpoint: it is the least-squares solution to a system of equations in which each alliance's score is modeled as the sum of its member teams' contributions. Next, it is important to understand all the reasons why OPR does not perfectly reflect what a team actually scores in a match.
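
For anyone who wants to see that calculation in code form, here is a minimal sketch using numpy. All team numbers and scores below are made up purely for illustration, and the schedule is simplified to one equation per alliance.

Code:
import numpy as np

teams = [111, 222, 333, 444, 555, 666]
t_idx = {t: i for i, t in enumerate(teams)}

# One entry per alliance: (member teams, alliance score). Made-up data.
alliances = [
    ([111, 222, 333], 110), ([111, 444, 555],  82),
    ([222, 444, 666],  78), ([333, 555, 666],  66),
    ([111, 222, 444],  95), ([333, 444, 555],  74),
    ([222, 555, 666],  73), ([111, 333, 666],  84),
]

# Design matrix A: one row per alliance, with a 1 in the column of
# each team on that alliance. b holds the alliance scores.
A = np.zeros((len(alliances), len(teams)))
b = np.zeros(len(alliances))
for row, (members, score) in enumerate(alliances):
    for t in members:
        A[row, t_idx[t]] = 1.0
    b[row] = score

# OPR is the least-squares solution of A x = b, i.e. the solution of
# the normal equations A^T A x = A^T b. lstsq solves this stably.
opr, *_ = np.linalg.lstsq(A, b, rcond=None)
for t, x in zip(teams, opr):
    print(f"{t}: {x:.1f}")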

To put things in perspective, I would like to sort all the reasons into two bins: things that are beyond a team's control, and things that reflect the actual "performance" of the team. I consider anything that is beyond a team's control to be noise. This is something that will always be there. Some examples, as others have also pointed out, are bad calls by refs, compatibility with partners' robots, non-linearity of scoring, accidents that are not due to carelessness, field faults not being recognized, robot failures that are not repeatable, etc.

The second bin holds things that truly reflect the "performance" of a team. This measures what a team can potentially contribute to a match, and it takes into account how consistent the team is. The variation here includes factors like how careful they are in not knocking stacks down, getting fouls, or having the robot fail because of avoidable wiring problems. The problem is that this measure is meaningful only if no team is allowed to modify its robot between matches, meaning the robot is in the exact same condition in every match. In reality, however, there are three scenarios. 1) The robot keeps getting better as the team works out the kinks or tunes it better. 2) The robot keeps getting worse as things wear down quickly due to an inappropriate choice of motors, bearings (or the lack of them), or the design or construction techniques used. Performance can also get worse when a team keeps tinkering with its robot or its programming without fully validating the changes. 3) The robot stays the same.

I understand what some people are trying to do. We want a measure of the expected variability around each team's OPR number, some kind of confidence band. If we had that information, we could put a max and a min on the predicted score of each alliance. Mathematically, this can be done relatively easily. However, the engineer in me tells me that it is a waste of time. Given the noise factors I listed above, and the fact that robot performance may change over time, this becomes just a mathematical exercise and does not contribute much to predicting the outcome of the next match.
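
For anyone who wants that confidence band anyway, the textbook version is the ordinary least-squares standard error, s^2 * diag((A^T A)^-1). A sketch, continuing the toy example above (it needs more alliance rows than teams and a full-rank schedule, and it only captures scatter around the linear model, not the noise sources I listed):

Code:
# Continues the toy example above (reuses A, b, opr).
n, p = A.shape
res = b - A @ opr                      # per-alliance residuals
s2 = (res @ res) / (n - p)             # residual variance estimate
cov = s2 * np.linalg.inv(A.T @ A)      # covariance of the OPR estimates
se = np.sqrt(np.diag(cov))             # per-team standard error
for t, x, e in zip(teams, opr, se):
    print(f"{t}: {x:.1f} +/- {1.96 * e:.1f}")  # rough 95% band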

However, I do support the publication of the R^2 coefficient of determination. It would give an overall number for how well the actual outcomes fit the statistical model.
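
Continuing the same sketch, R^2 falls straight out of the fit:

Code:
# R^2 = 1 - SS_res / SS_tot: the fraction of the variance in the
# alliance scores that the OPR model explains (1.0 = perfect fit).
ss_res = res @ res
ss_tot = ((b - b.mean()) ** 2).sum()
print(f"R^2 = {1.0 - ss_res / ss_tot:.3f}")
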
__________________
Please don't call me Mr. Ed, I am not a talking horse.