30-06-2015, 15:00
Ether
systems engineer (retired)
no team
 
Join Date: Nov 2009
Rookie Year: 1969
Location: US
Posts: 8,104
Re: "standard error" of OPR values

Quote:
Originally Posted by Citrus Dad
But based on this response, the OPR estimates themselves should not be reported because they are not statistically valid either.
Sez who? They are the valid least-squares fit to the model. That is all they are. According to what criteria are they then not valid?
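For concreteness, here is a rough sketch of the kind of least-squares fit I'm talking about, using numpy and a made-up mini-event (the team indices, match schedule, and scores below are purely hypothetical, chosen only so the example runs):

Code:
import numpy as np

# Hypothetical mini-event: 9 teams (0..8), 6 qualification matches,
# each match producing two alliance scores (red and blue, 3 teams each).
alliances = [
    (0, 1, 2), (3, 4, 5),   # match 1: red, blue
    (6, 7, 8), (0, 3, 6),   # match 2
    (1, 4, 7), (2, 5, 8),   # match 3
    (0, 4, 8), (1, 5, 6),   # match 4
    (2, 3, 7), (0, 5, 7),   # match 5
    (1, 3, 8), (2, 4, 6),   # match 6
]
scores = np.array([52., 61., 47., 70., 55., 49., 66., 58., 44., 63., 51., 57.])

# Design matrix: A[i, j] = 1 if team j played on alliance i, else 0.
A = np.zeros((len(alliances), 9))
for i, teams in enumerate(alliances):
    A[i, list(teams)] = 1

# The OPR vector is nothing more than the least-squares solution of A @ x ~= scores.
opr, *_ = np.linalg.lstsq(A, scores, rcond=None)
print(np.round(opr, 1))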

Quote:
Instead by not reporting some measure of the potential error, they give the impression of precision to the OPRs.
Who is suggesting not to report some measure of the potential error? Certainly not me. Read my posts.

Quote:
I just discussed this problem as a major failing for engineers in general--if they are not fully comfortable in reporting a parameter, e.g., a measure of uncertainty, they often will simply ignore the parameter entirely.
I do not have the above failing, if that is what you were implying.


Quote:
ALWAYS, ALWAYS, ALWAYS is to report the uncertain or unknown parameter with some sort of estimate and all sorts of caveats.
You are saying this as if you think I disagree. If so, you would be wrong.


Quote:
Instead what happens is that decisionmakers and stakeholders much too often accept the values given as having much greater precision than they actually have.
Exactly. And perhaps more often than you realize, the values they are given shouldn't have been reported in the first place, because the data do not support them. Different (more valid) measures of uncertainty should have been reported instead.


Quote:
While calculating the OPR really is of no true consequence, because we are working with high school students who are very likely to be engineers, it is imperative that they understand and use the correct method of presenting their results.
Well, I couldn't agree more, and that is why we are having this discussion.

Quote:
So, the SEs should be reported as the best available approximation of the error term around the OPR estimates
Assigning a separate standard error to each OPR value computed from the FIRST match results data is totally meaningless and statistically invalid. As you said above, "it is imperative that they understand and use the correct method of presenting their results".
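For reference, and emphatically not as an endorsement: the per-OPR standard errors under discussion come from the textbook OLS formula (residual variance times the diagonal of (A^T A)^-1). Here's a minimal sketch of that calculation, in the same hypothetical numpy setup as the sketch above, just so we're all clear about exactly which computation I'm saying is invalid here:

Code:
import numpy as np

def per_opr_standard_errors(A, scores):
    """Textbook OLS standard error for each OPR coefficient.

    se_j = sqrt(sigma_hat^2 * [(A^T A)^{-1}]_{jj}), where sigma_hat^2 is the
    residual variance.  These numbers are only meaningful if the OLS error
    assumptions hold (independent, identically distributed, zero-mean noise
    on every alliance score) -- which is exactly what is in dispute for
    FIRST match data.
    """
    n_obs, n_teams = A.shape                                 # needs n_obs > n_teams
    opr, *_ = np.linalg.lstsq(A, scores, rcond=None)
    residuals = scores - A @ opr
    sigma2_hat = residuals @ residuals / (n_obs - n_teams)   # residual variance
    cov = sigma2_hat * np.linalg.inv(A.T @ A)                # coefficient covariance
    return opr, np.sqrt(np.diag(cov))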

Let's explore alternative ways to demonstrate the shortcomings of the OPR values.

Quote:
the caveats about the properties of the distribution can be reported with a discussion about the likely biases in the parameters due to the probability distributions
"Likely" is an understatement. The individual (per-OPR) computed standard error values are obviously and demonstrably wrong (this can be verified with manual scouting data). And what's more, we know why they are wrong.

As I've suggested in my previous two posts, let's explore alternative, valid ways to demonstrate the shortcomings of the OPR values.

One place to start might be to ask whether the average value of the vector of standard errors of the OPRs is meaningful at all, and if so, what exactly it means.
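Continuing the hypothetical sketches above (this snippet reuses the made-up A, scores, and per_opr_standard_errors from those sketches, so it is not standalone), the quantity I have in mind is simply:

Code:
# Mean of the per-OPR standard error vector -- the quantity whose meaning
# (if any) is the open question here.
opr, se = per_opr_standard_errors(A, scores)
print("mean per-OPR standard error:", round(se.mean(), 2))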


