Quote:
Originally Posted by Citrus Dad
This is not a mathematical exercise--it is a statistical one. And statistical analysis requires inference about the validity of the estimated parameters.
One of the two textbooks for my intermediate mechanics lab (sophomores and juniors in physics and engineering) was How to Lie with Statistics. Chapter 4 is titled "Much Ado about Practically Nothing." For me, the takeaway sentence from that chapter is:
Quote:
Originally Posted by Huff, How to Lie with Statistics
You must always keep that plus-or-minus in mind, even (or especially) when it is not stated.
Unfortunately, not many high schoolers have been exposed to this concept.
Finally, if standard errors could be validly produced for each team as a measure of its consistency/reliability, that would be outstanding. Given that teams change strategy and modify their robots between matches (and given this year's nonlinear scoring), it is not surprising that per-team standard error calculations are not valid. (And by the way, Ether's finding that the numbers could be calculated but did not communicate variability is at least qualitatively similar to Richard's argument concerning OPR.)
This does not negate the need for a "standard error" or "probable error" for the whole data set. OPR is ultimately a measurement, and anyone using OPR to drive a decision needs to understand its accuracy. That is, does a difference of 5 points in OPR mean that one team is better than the other with 10% confidence, 50% confidence, or 90% confidence?
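To make that last question concrete, here is a minimal sketch. It assumes, purely for illustration, that the two teams' OPR estimates are independent and normally distributed with known standard errors (an assumption the thread itself suggests may not hold per-team); the 10-point standard error used in the example is hypothetical, not a measured value. Under those assumptions, the confidence that team A is truly better is the standard normal CDF evaluated at the observed difference divided by the standard error of the difference.

```python
# Hedged sketch: how much confidence does an OPR difference d buy you,
# if each team's OPR has a known standard error and errors are independent
# and normal? (Both assumptions are illustrative, not established here.)
from math import erf, sqrt

def confidence_a_better(d: float, se_a: float, se_b: float) -> float:
    """P(true difference > 0) given observed OPR difference d = OPR_A - OPR_B."""
    se_diff = sqrt(se_a**2 + se_b**2)        # standard error of the difference
    z = d / se_diff                          # standardized difference
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF at z

# Example: a 5-point OPR edge with a hypothetical 10-point standard error
# on each estimate gives only about 64% confidence -- barely better than
# a coin flip, which is exactly the "keep that plus-or-minus in mind" point.
print(round(confidence_a_better(5.0, 10.0, 10.0), 3))
```

Note the design point: the confidence depends on the ratio of the difference to the combined standard error, so the same 5-point gap could mean almost nothing or almost certainty depending on how noisy the OPR estimates are.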