#1
Re: "standard error" of OPR values
This year may be an anomaly, but for some teams this seems like a reasonable model. Teams have built robots that are very predictable and task-oriented. For example: grab a bin, drive to the feeder station, stack, stack, stack, push, stack, stack, stack, push, and so on. Knowing how fast our human player and stacking mechanism are, we can predict with reasonable accuracy how many points we will typically score in a match, with the only real variance coming from when things go wrong.
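As a back-of-the-envelope illustration of that kind of estimate, here is a minimal Python sketch; the teleop length, cycle time, and point values are assumed placeholders for illustration, not any team's measured numbers:

```python
# Rough match-score estimate from cycle times. All numbers are assumptions.

TELEOP_SECONDS = 135      # assumed teleop length for Recycle Rush
CYCLE_SECONDS = 20        # assumed time to add one tote to a stack
POINTS_PER_TOTE = 2       # assumed scoring value of one tote

cycles = TELEOP_SECONDS // CYCLE_SECONDS
expected = cycles * POINTS_PER_TOTE
print(f"~{cycles} cycles -> ~{expected} points, barring breakdowns")
```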
#2
Re: "standard error" of OPR values
I have to strongly agree with what Ed had to say above. Errors in OPR arise when its assumptions go unmet: partner or opponent interaction, team inconsistency (including improvement), and so on. If one of these factors caused significantly more variation than the others, then the standard error might be a reasonable estimate of that factor; however, I don't believe that is the case.
Another option is to treat this measure the same way we treat OPR. We know that OPR is not a perfect depiction of a team's robot quality, or even of a team's contribution to its alliance, but we use it anyway. In the same way, we know the standard error is an imperfect depiction of the variation in a team's contribution. People constantly bring up the same example when discussing consistency in FRC: a low-seeded captain choosing between two similarly contributing teams is generally better off selecting the inconsistent one, since a high-variance partner gives a better chance of upsetting a stronger alliance. Standard error could be a reasonable measure of this inconsistency (whether due to simple variation or to improvement), and at a scouting meeting, a high standard error could flag "teams to watch" for improvement. But without having tried it, I suspect a team's standard error will ultimately be mostly unintelligible noise.
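For anyone who wants to experiment with this, here is a minimal Python/NumPy sketch of how OPR and a per-team standard error both fall out of the same least-squares fit; the match schedule and scores below are toy placeholders, not real event data:

```python
import numpy as np

teams = ["A", "B", "C", "D", "E"]
# (alliance members, alliance score) -- one row per alliance per match.
matches = [
    (["A", "B", "C"], 78), (["A", "B", "D"], 68),
    (["A", "C", "E"], 61), (["A", "D", "E"], 54),
    (["B", "C", "E"], 57), (["B", "D", "E"], 48),
    (["C", "D", "E"], 46),
]

idx = {t: i for i, t in enumerate(teams)}
A = np.zeros((len(matches), len(teams)))
b = np.array([score for _, score in matches], dtype=float)
for row, (members, _) in enumerate(matches):
    for t in members:
        A[row, idx[t]] = 1.0

# OPR is the least-squares solution of A @ opr ~= b.
opr, *_ = np.linalg.lstsq(A, b, rcond=None)

# Standard error of each coefficient via the usual OLS formula:
# se_i = sqrt(sigma^2 * [(A^T A)^-1]_ii), sigma^2 estimated from residuals.
resid = b - A @ opr
dof = len(matches) - len(teams)   # 7 rows - 5 teams = 2 here
sigma2 = resid @ resid / dof
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(A.T @ A)))

for t in teams:
    print(f"Team {t}: OPR {opr[idx[t]]:5.1f} +/- {se[idx[t]]:.1f}")
```

Whether those error bars mean anything is exactly the question: they lump partner interaction, opponent interaction, and real team variance into one number.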
#3
Re: "standard error" of OPR values
Has anyone ever attempted a validation study comparing "actual contribution" (based on scouting data or a review of match video) to OPR values? It seems like this would be fairly easy and accurate for Recycle Rush (and very difficult for Aerial Assist). I did this with our own performance at one district event and found the two to be very close (OPR = 71 vs. "actual" = 74).
In some ways, OPR is probably more relevant than "actual contribution". For example, a good strategist in Aerial Assist could extract productivity from partner teams that might otherwise just drive around aimlessly. That contribution would show up in OPR, but a scout wouldn't attribute it to the strategist's team as an "actual contribution". It would also be interesting to see whether the OPR error is the same (in magnitude and direction) for low-, medium-, and high-OPR teams.
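That last check is easy to script once you have both numbers per team. A Python sketch of the idea, with all team labels and values invented as placeholders (real inputs would come from scouting data and an OPR calculation for the same event):

```python
# Compare OPR to a scouted "actual contribution" per team,
# then look at the mean error within each OPR band.

data = {                       # team -> (OPR, scouted "actual contribution")
    "team_1": (71.0, 74.0), "team_2": (55.0, 52.0),
    "team_3": (38.0, 41.0), "team_4": (22.0, 19.0),
    "team_5": (12.0, 15.0), "team_6": ( 8.0,  6.0),
}

ranked = sorted(data.items(), key=lambda kv: kv[1][0], reverse=True)
third = max(len(ranked) // 3, 1)
bands = {
    "high":   ranked[:third],
    "medium": ranked[third:2 * third],
    "low":    ranked[2 * third:],
}

for band, members in bands.items():
    errors = [opr - actual for _, (opr, actual) in members]
    print(f"{band:>6} OPR band: mean (OPR - actual) = "
          f"{sum(errors) / len(errors):+.1f}")
```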
#4
Re: "standard error" of OPR values
Someone did a study like that for Archimedes this year. I would say it is similar to 2011, where three really impressive scorers would put up a really great score, but if you expected 3X (the sum of their individual contributions), you would instead get more like 2.25X to 2.5X.
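Putting rough numbers on that observation (X here is an arbitrary placeholder, not a measured value):

```python
# If each of three scorers is worth X alone, summing OPRs predicts 3X,
# but the observed alliance score reportedly lands around 2.25X-2.5X.
X = 60.0
naive = 3 * X
low, high = 2.25 * X, 2.5 * X
print(f"predicted {naive:.0f}, observed roughly {low:.0f}-{high:.0f} "
      f"({low / naive:.0%}-{high / naive:.0%} of the naive sum)")
```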