Weeks 1 thru 7 World OPR based on qual matches only

**I threw this together because it was so easy to extract the necessary data from this database:**

Weeks 1 thru 7 World OPR based on qual matches only.

**Links to corrected versions, if necessary, will be posted in this thread.**

Thank you for all the work you do. You, sir, are awesome.

Thanks, Russ.

For convenience I have plotted the distribution; thumbnail attached.

Median is about 65, and 90th percentile is about 102.

As usual, there is a sharp inflection upward around the 99th percentile, which is about 125. Top FUEL teams are the outliers.
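For anyone who wants to reproduce those percentile figures, here is a minimal sketch. It assumes the OPR column from the linked spreadsheet has been exported to a one-number-per-line text file (the file name is hypothetical):

```python
import numpy as np

# Hypothetical export: one world OPR value per line, pulled from the linked sheet.
oprs = np.loadtxt("world_opr_weeks1-7.txt")

median, p90, p99 = np.percentile(oprs, [50, 90, 99])
print(f"median={median:.0f}  90th={p90:.0f}  99th={p99:.0f}")
```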





Can I request a version with max (or last) event OPR, rather than an average of all events?

So a robot that could climb consistently (50 pts) and score the reserve gear (40/3 ≈ 13 pts) would be about median. That actually seems like more than a median robot in past years.
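Spelling out that arithmetic (the point values are the ones quoted in the post above, not a full rules breakdown):

```python
# Rough value of a "climb + reserve gear" robot, per the figures above.
climb = 50              # consistent climb every match
reserve_gear = 40 / 3   # one gear toward a 40-point rotor, split three ways
print(climb + reserve_gear)   # ~63.3 points, right around the ~65 median OPR
```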

Yes, please?

Been busy with other things today.

I whipped [this](https://www.chiefdelphi.com/media/papers/download/5041) up quickly. I think it's correct. Haven't vetted it very thoroughly, though.

If the link is broken, check for an updated link in the most recent post.

Thanks again, Russ.

Max OPR distribution is similar to average OPR, just shifted upward.

For max OPR, the mean is ~74 and the 90th percentile is ~114.
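For reference, max (or last) event OPR is just a different per-team reduction over the same event-level numbers. A minimal sketch, assuming a hypothetical CSV of team, event, OPR rows (not the exact layout of the linked paper):

```python
import csv
from collections import defaultdict

# Hypothetical input: one row per (team, event) with that event's qual-match OPR.
best = defaultdict(lambda: float("-inf"))
with open("event_oprs.csv") as f:
    for team, _event, opr in csv.reader(f):
        best[team] = max(best[team], float(opr))

# `best` now maps each team to its highest single-event OPR -- the quantity
# summarized in the max-OPR distribution above.
```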





What beautiful curves. Surely there’s a lesson in that symmetry?

I would suspect it has to do with the internal structure of OPR, and not anything about actual robot performance.

I will opine, based on comparing OPR to our team's scouting data this year, that OPR is a rather poor robot metric for Steamworks.

I think "total points" OPR is a poor metric, but OPR computed along individual segments of the game (e.g. auto, fuel, gears, climbing) has been pretty much in line with our scouting data at both of 254's events.
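For anyone unfamiliar with how those segment-level numbers are produced: component OPR uses exactly the same least-squares setup as total-points OPR, just with a different score column on the right-hand side. A sketch (the alliance/score data structures here are assumptions, not any particular API):

```python
import numpy as np

def component_opr(alliances, scores):
    """Least-squares per-team contribution for one score component.

    alliances : list of 3-team tuples, one entry per alliance appearance
    scores    : matching list of that alliance's points in the component
                (auto points, fuel points, gear/rotor points, climb points, ...)
    """
    teams = sorted({t for a in alliances for t in a})
    col = {t: i for i, t in enumerate(teams)}

    # Design matrix: one row per alliance appearance, 1 where a team played.
    A = np.zeros((len(alliances), len(teams)))
    for row, alliance in enumerate(alliances):
        for t in alliance:
            A[row, col[t]] = 1.0

    x, *_ = np.linalg.lstsq(A, np.asarray(scores, dtype=float), rcond=None)
    return dict(zip(teams, x))
```

Running this once per score column on the same alliance list gives the auto, fuel, gear, and climb contributions; because least squares is linear in the score vector, those components add back up to the total-points OPR.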

Even gears?

It certainly seemed like it for us at Iowa.

gear contribution comparison.PNG

[Weeks 1 thru 7 World Component OPR](https://www.chiefdelphi.com/media/papers/download/5048) based on qual match data, all on one sheet.

This isn’t what I’d call good, to be honest. Two teams scoring 2.5 gears have zero calculated contribution?

That’s kind of what I was thinking…

I did an experiment with OPR earlier this year that may be relevant to this discussion.
A 65 OPR robot could be thought of as a robot that runs 2 gears every match and climbs 50% of the time.
Put three of those robots together over enough matches and they will get 3 rotors (120 points) plus climb on average 1.5 times (75 points). (120 + 75)/3 = 65.
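In code, that back-of-the-envelope calculation is just:

```python
rotor_points = 3 * 40    # three rotors at 40 points each
climb_points = 1.5 * 50  # three robots climbing half the time
print((rotor_points + climb_points) / 3)   # 65.0 points credited per robot
```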

I took the Northern Lights schedule and replaced all the scores with randomized results assuming every robot was that 65 OPR robot.
I then ran a bunch of randomized tournaments.
The result was that the calculated OPRs ranged from roughly 25 to 105 at the end of a tournament, across the 60 identical 65 OPR robots.
So I kind of think the error bar this year on OPR is in the ±40 range for a lot of robots.
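A rough sketch of that kind of experiment is below. It is not the original script: the schedule is a crude random one rather than the actual Northern Lights schedule, and only the climb outcomes are randomized (rotor points are held at their average), so it will understate the spread reported above.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TEAMS, MATCHES_PER_TEAM = 60, 12

def simulated_opr_spread():
    # Crude random schedule: each alliance appearance is 3 random teams.
    n_rows = N_TEAMS * MATCHES_PER_TEAM // 3
    alliances = [rng.choice(N_TEAMS, size=3, replace=False) for _ in range(n_rows)]

    # Every robot is the same "65 OPR" robot.  Rotor points are fixed at the
    # average (120); each robot climbs with probability 0.5 for 50 points.
    scores = np.array([120 + 50 * rng.binomial(3, 0.5) for _ in range(n_rows)],
                      dtype=float)

    # Standard OPR solve: least squares on the team-appearance matrix.
    A = np.zeros((n_rows, N_TEAMS))
    for row, alliance in enumerate(alliances):
        A[row, alliance] = 1.0
    opr, *_ = np.linalg.lstsq(A, scores, rcond=None)
    return opr.min(), opr.max()

print(simulated_opr_spread())   # spread of calculated OPRs among identical robots
```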

I think it's worse than that, given that not all robots are created equal. For example, at or near the end of quals at our first event, we had gotten at least auto movement plus a climb or an auto gear every match (plus we gear cycled). That gives a minimum of 55 points per match. However, our OPR was ~42. If we say that gears average around 20 points each, and we got about 3 teleop gears per match (I should really ask our scouting team for this data), OPR was 75 points too low.
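Making that rough math explicit (the per-gear value and gear count are the poster's own estimates):

```python
auto_move, climb = 5, 50      # baseline mobility + climb: the 55-point floor above
teleop_gears = 3 * 20         # ~3 gears per match at ~20 points each (estimate)
actual = auto_move + climb + teleop_gears   # ~115 points actually contributed
print(actual - 42)            # ~73: roughly the ~75-point gap vs. the ~42 OPR
```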

Here's the same chart with y=x drawn. It helps to show the undercounting of gears, which is inevitable. From the regression equation, the undercount is about 0.5 gears on average, but it looks like for 13 teams it's a gear or more.

I suspect if you did the same thing for a district championship, it would be even more pronounced.

gear contribution comparison y=x.png

Thanks again, Russ. I added the auton + teleop gear OPRs and plotted the distribution. The mean is ~32, and the 90th percentile is ~50.

Another sharp inflection upward at the high end – 99th percentile is ~63.





You are looking at the graph wrong. The linear offsets can be easily adjusted out. Here is the same data, just with the axes flipped and a linear adjustment performed on the calculated contributions. Notice that the R^2 value hasn’t changed, so I haven’t changed the data. The standard deviation of errors is 0.6 gears, which gives us a 95% confidence interval of ±1.2 gears. ±1.2 gears seems perfectly reasonable to me, especially when you consider that some teams (like ours) often chose to stop scoring gears when we reached 3 rotors and other teams chose to continue scoring worthless gears.
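A sketch of the kind of adjustment being described, interpreting it as an ordinary least-squares line from calculated gear contribution to scouted gears (the example numbers are made up, not the data behind the attached plot):

```python
import numpy as np

def adjusted_fit(calculated, scouted):
    """Fit scouted ≈ a*calculated + b and report R^2 and residual spread.

    A linear rescaling like this removes the constant undercount without
    changing R^2; two residual standard deviations is the ~95% error band
    (e.g. a 0.6-gear standard deviation gives roughly ±1.2 gears).
    """
    calculated = np.asarray(calculated, dtype=float)
    scouted = np.asarray(scouted, dtype=float)
    a, b = np.polyfit(calculated, scouted, 1)          # slope, intercept
    residuals = scouted - (a * calculated + b)
    r2 = 1.0 - residuals.var() / scouted.var()
    return a, b, r2, residuals.std()

# Made-up example values, just to show the shape of the calculation.
calc  = [1.2, 0.0, 2.1, 1.5, 0.4, 2.8]
scout = [2.0, 1.1, 2.9, 2.3, 1.4, 3.4]
a, b, r2, sd = adjusted_fit(calc, scout)
print(f"slope={a:.2f} intercept={b:.2f} R^2={r2:.2f} 95% band ≈ ±{2 * sd:.1f} gears")
```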

gear contribution comparison 2.PNG