Quote:
Originally Posted by Citrus Dad
The standard errors for the OPR values can be computed, but they are in fact quite large relative to the parameter values. Which is actually my point--the statistical precision of the OPR values are really quite poor because there are so few observations, which are in fact not independent. Rather than ignoring the SEs because they show how poor the OPR estimators are performing, the SEs should be reported to show how poorly the estimators perform for everyone's consideration.
+1, +/- 0.3.
Quote:
Originally Posted by Ether
There are better metrics to report to show how poorly the estimators perform.
It would be great if standard error could be used as a measure of a team's consistency, but that's not its only function. I agree with Richard that one of the benefits of an error value is that it indicates how much of a difference is (or is not) significant. If the error bars on the OPRs are all (for example) about 10 points, then a 4-point difference in OPR between two teams probably means less in sorting a pick list than a qualitative difference in a scouting report does.
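To make that concrete, here's a minimal sketch in Python with made-up numbers (the OPRs and the +/- 10-point standard errors are assumptions, not real data): if two independent OPR estimates each carry a standard error of about 10 points, the standard error of their difference is sqrt(10^2 + 10^2), roughly 14 points, so a 4-point gap is well inside the noise.

    import math

    # Made-up numbers for illustration: each team's OPR estimate and its standard error.
    opr_a, se_a = 52.0, 10.0
    opr_b, se_b = 48.0, 10.0

    diff = opr_a - opr_b                    # 4-point OPR difference
    se_diff = math.sqrt(se_a**2 + se_b**2)  # SE of the difference, treating the estimates as independent
    z = diff / se_diff                      # about 0.28 standard errors

    print(f"difference = {diff:.1f} +/- {se_diff:.1f}  (z = {z:.2f})")
    # z is nowhere near 2, so the 4-point gap can't be distinguished from zero --
    # exactly the situation where a scouting report should outweigh the OPR sort.

And per Citrus Dad's point, the match observations aren't really independent, so even treating the two SEs as independent here is optimistic.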
As it turns out, I was recently asked for the average time it takes members of my branch to produce environmental support products. Because we get requests that range from a 10-mile-square box for a single day to seasonal variability across a whole ocean basin, the (requested) mean production time means nothing. For one class of product, the standard deviation of production times was greater than the mean. Without the scatter info, the reader would probably have assumed that we were making essentially identical widgets and that the scatter was +/- 1 or 2 in the last reported digit.
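For what it's worth, here's a rough sketch of that situation with entirely invented turnaround times (in hours); the specific numbers are made up, but they reproduce the pattern where the standard deviation exceeds the mean:

    import statistics

    # Invented turnaround times, in hours: small one-day boxes are quick,
    # basin-scale seasonal studies are not.
    times = [2, 3, 4, 5, 6, 8, 40, 72, 120]

    mean = statistics.mean(times)
    sd = statistics.stdev(times)

    print(f"mean = {mean:.1f} h, standard deviation = {sd:.1f} h")
    # mean is about 28.9 h, standard deviation about 41.6 h: quoting "about 29 hours"
    # with no scatter would suggest we crank out identical widgets, which the raw
    # numbers plainly contradict.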