[TBA Blog] What is the Maximum OPR?

What is the Maximum OPR?
By Caleb Sykes

Background

The inspiration for this post comes from this Chief Delphi thread asking what the maximum possible Offensive Power Rating (OPR) was in 2019. The easy answer is that there is no upper bound on 2019 OPRs, because there was no upper bound on the score due to penalties. Innocuous questions can lead down very intriguing paths though, and it got me interested enough to do a more detailed analysis of what the maximum unpenalized OPR in 2019 would have been. Since the maximum unpenalized score for a 2019 match was 154, a first-pass guess would say the max OPR would also be 154 (assuming a team scores 154 in all their matches and all other scores are 0). However, just like negative OPRs are theoretically possible, so too are OPRs greater than the max score.
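For intuition on how a least-squares fit can wander outside the range of real scores, here's a minimal sketch (my own toy example with hypothetical two-team alliances, not a real FRC schedule) that produces a negative OPR:

```python
import numpy as np

# Toy schedule with two-team alliances. Each row of A flags the teams on
# one alliance; b holds that alliance's score.
A = np.array([
    [1, 1, 0],  # teams 1+2 scored 10
    [1, 0, 1],  # teams 1+3 scored 10
    [0, 1, 1],  # teams 2+3 scored 30
], dtype=float)
b = np.array([10.0, 10.0, 30.0])

# Least-squares solve (the definition of OPR).
opr, *_ = np.linalg.lstsq(A, b, rcond=None)
print(opr)  # team 1 comes out around -5, below any score it ever posted
```

Teams 2 and 3 look great without team 1, so the fit "blames" team 1 for the lower scores; flip the same construction around and you can push a team's OPR above the maximum score.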

Check out the rest of the article here: https://blog.thebluealliance.com/2019/10/26/what-is-the-maximum-opr/


@Caleb_Sykes

Since there is a negative trend for [Max OPR vs. Matches per Team], and positive trends for [Max OPR vs. Total Teams] and [Max OPR vs. Total Matches], is there an upper limit associated with the hypothetical event with infinite teams and infinite matches such that each team plays twice, or is that non-convergent and the answer is an infinite OPR?

(Two matches per team makes sense because you can’t reap secondary benefits without a second match, can you? Can tertiary benefits still happen with only two matches played, or do you need three matches for those?)

Also, what makes some schedules stronger than others in this regard? Is there a way to optimize a schedule to maximize the maximum possible OPR?

Is there a way to answer these questions without accidentally writing a Master’s thesis in statistics?

I’m going to work up the chain of difficulty here on your questions. This is the hardest one:

So I’ll answer it last :slight_smile:

Your first one is easy:

OPR is undefined if every team plays only 1 match or only 2 matches. You need at least as many half-matches played as there are teams (or, equivalently, at least half as many matches as teams) in order to obtain an overdetermined set of equations. It helps me to think of each team’s OPR as an “unknown” variable, and each half-match as an “equation”.

I’d move on to the other questions, but I suspect you’ll ask next about 3 matches per team. Our definition of OPR doesn’t work so well in this range. You see, having 3 matches per team is necessary to obtain an overdetermined system, but it is not sufficient. If there are linear dependencies (e.g. half of the teams only play with other teams in their half), then OPR becomes undefined. Technically, this can happen with any number of matches or teams, but in practice it never does because we have enough matches per team and the scheduler is designed to avoid schedules like these.
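That "two halves that never cross over" failure mode is easy to see numerically. A sketch (again a toy example with hypothetical two-team alliances) where there are as many equations as teams but the design matrix is still rank-deficient:

```python
import numpy as np

# Teams 1+2 only ever play together, as do teams 3+4: two disconnected halves.
# Four equations, four unknowns, but the matrix only has rank 2, so the
# least-squares system has infinitely many solutions and OPR is undefined.
A = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

rank = np.linalg.matrix_rank(A)
print(rank)  # 2, fewer than the 4 teams
```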

Assuming there are no linear dependencies, then I strongly suspect that in the case of 3 matches/team, pushing teams -> infinity also pushes the max OPR -> infinity without bound. I’ll need to look more into it though.


I love that you’re actually willing and able to answer these types of questions.

Thanks for the clarification, that makes a lot of sense. You mentioned that “half matches” thing in the blog post but I didn’t quite absorb it.


Disclaimer: this is not a proof (I only got a math minor, not a math major, so I can get away with this kind of stuff :stuck_out_tongue:). Go get @Karthik to write up a proof if you want one.

I made a match scheduler using 3 matches per team and created 10 different schedules for each of the team counts below. I then found the max normalized OPR for each of these schedules. Here are the results:

Team Count max generated normalized OPR
6 7
12 200
24 371
48 205
96 227
192 1626
384 16433

I also managed to get a proven max OPR of 89824 when generating thousands of 60-team schedules. So if there is an upper bound, it is at least 89824. Since this is higher than the max value of a 2-byte integer, I’ve decided to stop so that I don’t report false results created by variable overflows. Also, it takes my computer on the order of minutes to generate schedules with more teams than this. If there is a bound, it’s far too high for me to care about anymore, and it would be difficult to verify anyway.
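For anyone who wants to replicate this kind of "proven" max OPR: since OPR is linear in the vector of alliance scores, a team's maximum OPR over all normalized score assignments in [0, 1] is the sum of the positive entries in its row of the design matrix's pseudoinverse. That's my reading of the method from the blog post, sketched here on a tiny hypothetical round robin with two-team alliances rather than the full 60-team schedule:

```python
import numpy as np

def max_normalized_oprs(A):
    """Max OPR per team when each alliance score can be any value in [0, 1].

    OPR = pinv(A) @ b is linear in the score vector b, so each team's max
    is reached by setting b to 1 wherever its pseudoinverse coefficient is
    positive and 0 elsewhere, i.e. by summing the positive coefficients.
    """
    P = np.linalg.pinv(A)
    return np.clip(P, 0.0, None).sum(axis=1)

# Hypothetical 4-team round robin: every pair is allied exactly once.
A = np.array([
    [1, 1, 0, 0], [0, 0, 1, 1],  # match 1: {1,2} vs {3,4}
    [1, 0, 1, 0], [0, 1, 0, 1],  # match 2: {1,3} vs {2,4}
    [1, 0, 0, 1], [0, 1, 1, 0],  # match 3: {1,4} vs {2,3}
], dtype=float)

print(max_normalized_oprs(A))  # perfectly balanced, so every team maxes at ~1.0
```

A schedule this symmetric gives no second-order leverage; the huge numbers above come from lopsided schedules where one team's positive pseudoinverse coefficients pile up.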

Here's the schedule if anyone wants to verify
red 1 red 2 red 3 blue 1 blue 2 blue 3
5 31 41 1 57 4
56 50 10 21 27 30
44 54 8 16 9 32
22 25 52 18 2 28
11 14 33 6 3 47
15 29 7 26 13 17
23 51 60 59 55 45
37 42 35 48 36 40
43 49 24 34 46 58
39 19 20 12 53 38
22 51 58 53 34 36
56 29 26 19 8 11
41 39 6 38 33 46
54 48 5 10 20 27
12 49 44 24 4 16
25 23 1 50 7 28
21 52 55 14 3 42
47 60 43 40 15 59
30 13 57 45 35 2
31 17 32 37 9 18
55 54 27 29 48 18
59 19 49 45 52 11
53 13 58 23 8 34
4 5 6 15 42 47
51 28 56 3 57 37
9 36 10 1 35 33
21 46 12 7 2 38
22 17 20 26 16 24
60 30 40 14 41 50
39 25 31 44 43 32

Where’s the reddit gold option on this thing?

Thanks for going the extra mile on this. That series definitely looks non-convergent (also not a math major :wink:). That means that most likely there is no upper bound on OPR, even in a game with a maximum possible score. It helps to explain the wonky OPRs that TBA spits out early in events (when you’re about 3 matches in).

Alright, let’s take a crack at this one

Here’s Caleb’s guide for how to give yourself the highest max OPR possible:
Step 1: Try to convince as many teams as possible to register for your event, beg the event organizers for smaller pit spaces and tell them you don’t need a practice field. If you’re in a district system, get out. More teams at events correlates very strongly with high max OPRs as shown in the blog post, so doing this will be one of the most effective ways to maximize your max OPR.

Step 2: Once you’re at the event, try to cause as many delays as possible. Try to load in late and register as late as you can so that schedule generation is pushed back as much as possible. Don’t turn on your robot before going onto the field. Keep your thumbs down even after you’ve connected to the field. Bring a ton of wifi hotspots right up next to the field to jam signals and force match replays. Burn some pizza to set off the fire alarms. Get creative! I’m sure you can knock a few matches per team off the schedule if you work hard enough at it. Fewer matches per team also correlates well with high maximum OPR.

Step 3: Don’t be a surrogate team. Delaying inspection as long as possible may slightly reduce your chance of ending up as a surrogate, since some inspectors will push for schedules that give non-inspected teams a late first match, and teams with late first matches are probably also less likely to be surrogates due to the match turnaround criteria in the scheduler. Here I have listed the maximum OPRs for all the teams at ausp, mndu, and mnmi, as well as whether they were a surrogate or not:

ausp max OPRs
team max OPR is surrogate
6508 1.749 FALSE
5876 1.719 FALSE
5451 1.717 FALSE
5985 1.704 FALSE
6579 1.698 FALSE
6813 1.690 FALSE
5648 1.689 FALSE
7146 1.687 FALSE
6007 1.685 FALSE
6035 1.675 FALSE
6996 1.668 FALSE
5663 1.668 FALSE
6432 1.665 FALSE
4614 1.664 FALSE
6386 1.657 FALSE
6997 1.650 FALSE
4537 1.646 FALSE
7844 1.645 FALSE
5449 1.645 FALSE
7838 1.644 FALSE
6520 1.641 FALSE
4739 1.633 FALSE
4270 1.632 FALSE
5988 1.630 FALSE
6083 1.630 FALSE
3132 1.630 FALSE
6304 1.628 FALSE
7741 1.628 FALSE
7433 1.624 FALSE
5516 1.623 FALSE
7884 1.618 FALSE
7586 1.618 FALSE
6575 1.616 FALSE
7129 1.613 FALSE
4774 1.611 FALSE
7601 1.608 FALSE
7074 1.606 FALSE
7593 1.605 FALSE
5308 1.604 FALSE
6524 1.602 FALSE
6441 1.598 FALSE
7113 1.597 FALSE
7561 1.597 FALSE
7719 1.595 FALSE
7709 1.594 FALSE
7780 1.594 FALSE
4613 1.593 FALSE
7588 1.591 FALSE
5584 1.581 FALSE
6986 1.581 FALSE
4802 1.576 FALSE
7707 1.576 FALSE
7551 1.575 FALSE
5333 1.572 TRUE
3881 1.570 FALSE
6063 1.567 FALSE
2437 1.564 FALSE
6476 1.549 FALSE
6510 1.544 TRUE
7047 1.537 FALSE
5522 1.494 FALSE
4729 1.473 FALSE
mndu max OPRs
team max OPR is surrogate
2531 1.675 FALSE
5991 1.654 FALSE
1816 1.634 FALSE
4539 1.630 FALSE
6318 1.628 FALSE
4207 1.619 FALSE
4230 1.614 FALSE
5653 1.614 FALSE
6047 1.611 FALSE
3008 1.608 FALSE
2574 1.602 FALSE
3381 1.599 FALSE
5464 1.597 FALSE
6045 1.596 FALSE
4009 1.596 FALSE
3102 1.595 FALSE
4217 1.595 FALSE
3750 1.592 FALSE
5542 1.592 FALSE
6022 1.591 FALSE
7864 1.589 FALSE
7893 1.586 FALSE
3276 1.586 FALSE
5348 1.582 FALSE
167 1.576 FALSE
2526 1.576 FALSE
6146 1.576 FALSE
3275 1.575 TRUE
3740 1.574 FALSE
6628 1.571 FALSE
7235 1.566 FALSE
3755 1.564 FALSE
6217 1.562 FALSE
5253 1.562 FALSE
6160 1.557 FALSE
4728 1.556 FALSE
4845 1.555 FALSE
7797 1.553 FALSE
4674 1.553 TRUE
6453 1.551 FALSE
5690 1.550 FALSE
5913 1.548 FALSE
93 1.546 FALSE
2977 1.543 FALSE
3134 1.537 FALSE
4480 1.536 FALSE
4741 1.535 FALSE
7068 1.532 FALSE
2503 1.532 FALSE
3294 1.531 FALSE
5299 1.530 FALSE
5638 1.530 FALSE
2264 1.530 FALSE
4511 1.529 FALSE
3291 1.529 FALSE
5290 1.524 FALSE
4238 1.514 FALSE
2506 1.514 FALSE
7041 1.514 TRUE
3277 1.511 FALSE
4166 1.499 FALSE
5999 1.491 FALSE
5586 1.482 FALSE
mnmi max OPRs
team max OPR is surrogate
3026 1.673 FALSE
3102 1.670 FALSE
2513 1.665 FALSE
3407 1.655 FALSE
5996 1.655 FALSE
2823 1.654 FALSE
2825 1.647 FALSE
2509 1.637 FALSE
2529 1.628 FALSE
2515 1.627 FALSE
2846 1.627 FALSE
3871 1.626 FALSE
4536 1.623 FALSE
3184 1.620 FALSE
2500 1.618 FALSE
7038 1.611 FALSE
2052 1.598 FALSE
2498 1.598 FALSE
7068 1.594 FALSE
3299 1.594 FALSE
3134 1.589 FALSE
2501 1.581 FALSE
3202 1.579 FALSE
3023 1.577 FALSE
3058 1.577 FALSE
5913 1.575 FALSE
2232 1.575 FALSE
3454 1.573 FALSE
5232 1.572 FALSE
3038 1.571 FALSE
6709 1.570 FALSE
4664 1.563 FALSE
5434 1.560 FALSE
7028 1.560 FALSE
5637 1.557 FALSE
5464 1.557 FALSE
2518 1.552 FALSE
2532 1.549 FALSE
5541 1.549 FALSE
3007 1.546 FALSE
4207 1.545 FALSE
3751 1.544 FALSE
3298 1.542 FALSE
7180 1.542 FALSE
3630 1.541 TRUE
2508 1.538 FALSE
7850 1.538 FALSE
4229 1.537 FALSE
5172 1.535 FALSE
2879 1.534 FALSE
2530 1.529 FALSE
2470 1.527 FALSE
4607 1.526 FALSE
2855 1.524 FALSE
3745 1.522 TRUE
2987 1.519 TRUE
4549 1.517 FALSE
2502 1.512 FALSE
3055 1.499 FALSE
4198 1.498 FALSE
2480 1.498 FALSE
2181 1.493 FALSE
3244 1.493 FALSE

You can see the surrogates are all pretty low relative to their non-surrogate competitors. Being a surrogate drops your max OPR by about 0.04 normalized points on average, according to these data.

Step 4: Assuming you have no control over team count or matches per team, what kinds of schedules allow for the highest maximum OPRs? Well, the most important factor I found is the number of unique partners you have. Here are the North Carolina State Champs max OPRs by team, along with the number of unique partners each team had:

nccmp max OPRs
team max OPR unique partners
7890 1.213 20
7763 1.168 21
3459 1.167 21
2640 1.156 21
1533 1.146 22
7265 1.145 22
4291 1.144 22
4561 1.143 21
5190 1.142 21
900 1.141 22
3196 1.139 22
4534 1.124 22
6500 1.124 23
587 1.122 22
2655 1.120 23
6729 1.120 23
2682 1.117 22
4828 1.116 23
2642 1.113 23
7443 1.109 23
3229 1.109 23
4290 1.091 24
4795 1.089 24
7671 1.087 24
6502 1.086 24
4935 1.085 24
3506 1.084 24
2059 1.082 24
5607 1.081 24
5762 1.080 24
3737 1.079 24
5511 1.077 24

There’s an extremely strong correlation between these. Here is a graph of the data:
[graph: max OPR vs. unique partners at nccmp]
My impression is that unique partners is a good proxy for the magnitude of the “second-order effects” I describe in my blog post. Intuitively, if you have fewer unique partners, then their performance in matches without you on average reflects more heavily on you than if you had more partners and this impact was distributed out over more teams. If we looked at unique/duplicate partners of partners (third-order effects) or some kind of related metric, I assume that would cover much of the remaining variance in the above graph.
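To put a number on that correlation, here's a quick sketch that recomputes Pearson's r from the nccmp table above (values transcribed from the table):

```python
import numpy as np

# (max OPR, unique partners) pairs transcribed from the nccmp table above.
max_opr = [1.213, 1.168, 1.167, 1.156, 1.146, 1.145, 1.144, 1.143,
           1.142, 1.141, 1.139, 1.124, 1.124, 1.122, 1.120, 1.120,
           1.117, 1.116, 1.113, 1.109, 1.109, 1.091, 1.089, 1.087,
           1.086, 1.085, 1.084, 1.082, 1.081, 1.080, 1.079, 1.077]
partners = [20, 21, 21, 21, 22, 22, 22, 21,
            21, 22, 22, 22, 23, 22, 23, 23,
            22, 23, 23, 23, 23, 24, 24, 24,
            24, 24, 24, 24, 24, 24, 24, 24]

# Pearson correlation between max OPR and number of unique partners.
r = np.corrcoef(max_opr, partners)[0, 1]
print(round(r, 2))  # strongly negative: more unique partners, lower max OPR
```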

So we can bring it back to this question:

I think the answer is yes.


An OPR above the max score is not actually a real reflection of a team’s ability; it really represents an error bar on what OPR is trying to measure.
The range of possible max OPRs between the teams at an event strikes me as fairly significant.

So thinking of this the other way, doesn’t that imply that a team’s schedule has a bearing on how sensitive their OPR will be to variances in the performance of other teams?

This isn’t really true…

It seems to me that you’re assuming teams will be capable of, at most, 154 pts a match. This would mean the highest their OPR can be is 154.

In reality, teams can be capable of greater than 154 pts a match, it’s just that the FMS has a ceiling of 154. OPR does not share that ceiling.

A team with an OPR of 154, when paired with a team with negative OPR, would end the match with a score less than 154. A team with an OPR greater than 154, when paired with a team with negative OPR, could end the match with a score of 154.

Just because the FMS doesn’t give you credit for doing extra cycles, doesn’t mean scouting metrics can’t give you credit for doing extra cycles.

This was a big deal in 2017, when teams were able to get 4 rotors and there was no motivation to do extra cycles. Teams that were very good and were paired with very good partners were punished in scouting for maxing out scoring early and having nothing useful left to do (besides play defense). Some teams would run gear cycles that didn’t count for points just to prove to scouts that they could do more than FMS would allow. OPR kind of accounts for this by awarding them a larger-than-max-score OPR for that match, but make sure you really understand OPR and its limitations before you try to use it to justify anything.


If I am following right, the general situation where a team will maximize their OPR is if:

  1. they max out their score every match. (first order)
  2. their partners minimize their scores in matches not with them (second order)
  3. every other match score is maximized (third order)

Over the course of every event there is going to be noise in each team’s OPR due to variances in other teams’ performance. I was thinking this might be a good way to measure how sensitive each team’s OPR is to variances due to schedule.

I don’t get your 2017 example. When two teams that could run 8 gears got paired, both of those teams had their OPR suffer. As a thought experiment, think of a 2017 event where every team could run 10 gears a match. OPR would decide everyone could run 4 gears, because every alliance would end with 12 gears. That was part of the reason 2017 was considered a poor year for OPR.


We would probably need a whole thread to get on the same page about “what OPR is trying to measure” and what an “error bar” on OPR actually means. I’m going to skip over this point for now because both of those concepts are pretty ill-defined in my mind. I’ll try to address your other points though.

Yes, I think this is true.

Point 1 is true in all but constructed cases, point 2 is true maybe 90%ish of the time, and point 3 is probably more often true than not. These are good rough heuristics, but they won’t get you all the way there every time. I’m pretty sure the only way to do that is with the method described in my blog post (or computationally equivalent methods).

I think that max OPR is a very good proxy for your OPR’s sensitivity to matches which your team is not in. In my blog post I talk about “match impacts”: a high max OPR indicates larger-magnitude match impacts from the matches your team does not participate in, and a low max OPR indicates weaker ones.

Not sure if I agree with classifying this kind of sensitivity as “noise” though, but I’ll put that up there with the other terms I think we’d need a whole thread to clear up.

EDIT: Here’s a thread from a while back trying to give some of these terms meaning. It’s been a while so I’ll read through it again.

I would define “noise” as things that contribute to OPR but are largely unrepeatable. Things like an opponent getting a bunch of penalty points, or a partner breaking, that are unrelated to you.

My understanding is that OPR is an attempt to measure how many points a robot will contribute to an alliance in any given match (predictive). That can be by scoring points or by aiding their partners in scoring points. Obviously that’s really a histogram for a given robot, so I was thinking of OPR as a median. I would define “error bar” as how far that median is from reality.

The other way I could see OPR being defined is as a measure of how many points a team has contributed to their alliances (historical). In which case “error bar” would be how much that number varies from what the team will do in the future.

It confuses me when you talk about multiple possible definitions of OPR. OPR has a single clear definition: the linear least squares solution to the set of equations derived from a set of match scores and the teams in those matches. Within this definition, there is no uncertainty and there are no error bars, as the solution is unique (provided the system is overdetermined, which will always be the case for reasonable schedules). Put another way, OPR does not intrinsically have any uncertainty, just like 4+5=9 does not have any uncertainty. The only way it could intrinsically have uncertainty would be if there were uncertainty in the schedule or uncertainty in the match scores. If either of these had uncertainty, then it would make sense to calculate the uncertainty of OPR, and I would make sure to publish that uncertainty in all of my work.

With that said, there are plenty of different meanings you can try to assign to this mathematical construct we call OPR, some of which are probably more reasonable meanings than others, and each of which will have its own uncertainty/error bars. I don’t think any of these can reasonably be interpreted as the uncertainty or the error bars to OPR though, as there is no single usage case for OPR. An OPR value means different things to different people, and just as there’s no inherently right or wrong meaning, so too is there no inherent uncertainty in the meaning of OPR. I guess if there was some kind of “true ability” for teams then we could compare OPR against that, but I don’t think such a thing could even be represented by a single number. Even if it could, we have no way of knowing what it is, so it’s all moot.

That said, let’s look at the forward-looking score prediction case and the backward looking historical results comparison that you mention and find those uncertainties.

Case 1 - Backwards looking match result differences from in-event OPR:
Question: After an event has completed, what uncertainty on post-event OPRs would be required to make 95% of in-event match scores fall within the uncertainty range of the alliance’s post-event OPR sum?

Results: Here are the final match scores, pre-event OPR sums (score predictions), and post-event OPR sums (Drive link because dataset is too large to put in post).

The residuals between the post-event OPR sums and the actual scores have an average of 0.0 (by definition of OPR) and a stdev of 9.7. If we assume independence of teams and equal variance of teams, we divide this by sqrt(3) to get a team’s OPR stdev of 5.6. Multiplying the stdev by 2 gives us a 95% confidence interval for each team’s OPR according to the above question at ±11.2. Since looking backwards is always far easier than looking forwards, I think this is about as low as any reasonable definition of “OPR uncertainty” could go. Comparing post-event OPRs to scores is really just “fitting” the OPRs to the scores and finding how good the best possible fit is.

Case 2 - Forward looking match predictions:
Question: Before an event has started, what uncertainty on pre-event OPRs would be required to make 95% of future in-event match scores fall within the uncertainty range of the alliance’s pre-event OPR sum?

Result: We’ll use the same dataset as above, but this time we’re going to sum up each team’s maximum pre-event OPR (you could also look at the average or most recent event, but I have found max to be the best predictor). The residuals from the predicted scores to the actual scores have an average of 3.5, meaning that the predicted scores were on average 3.5 points higher than the actual scores (the positive value is unsurprising considering we predicted using max OPRs). The residuals also have a standard deviation of 13.6. Multiplying by 2 and dividing by sqrt(3) as above gives us a 95% confidence interval according to question 2 on OPR of ±15.7 (plus a constant offset of 3.5/3 = 1.2). This is probably about the worst-case scenario for any definition of “OPR uncertainty”, as we are combining OPRs from a whole season’s worth of different events and have no in-event information to go off of.
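The arithmetic in Cases 1 and 2 can be sketched directly from the summary statistics quoted above (not the raw Drive dataset):

```python
import math

def opr_ci_halfwidth(residual_stdev, alliance_size=3, z=2.0):
    """Rough 95% CI half-width per team: split the alliance-level residual
    stdev across the alliance (divide by sqrt(3), assuming independent teams
    with equal variance), then take ~2 stdevs for a two-sided 95% interval."""
    return z * residual_stdev / math.sqrt(alliance_size)

print(round(opr_ci_halfwidth(9.7), 1))   # Case 1, post-event OPRs: 11.2
print(round(opr_ci_halfwidth(13.6), 1))  # Case 2, pre-event max OPRs: 15.7
```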

Case 3 - Match W/L predictions using predicted contributions:
I currently use “predicted contributions” to generate W/L match predictions in my event simulator. Predicted contributions are a hybrid of max pre-event OPR and in-event match results which are used to make live-updating match predictions. These predictions are generated according to the formula Red WP = 1/(1+10^((blue_PC-red_PC)/(2*stdev2019)))
Where:
Red WP = red alliance win probability
blue_PC = the sum of the blue alliance’s predicted contributions
red_PC = the sum of the red alliance’s predicted contributions
stdev2019 = The standard deviation of 2019 week 1 qual and playoff scores = 17.1.

As this is a logistic function, there’s no exact mapping to a normal distribution although they are very similar. The closest Gaussian fit gives a stdev for each team’s PC of 7.2, which implies a 95% confidence interval on PCs of ±14.4. Fittingly, this value is right between the values derived from case 1 and case 2, which makes sense as PCs are a hybrid between pre-event and post-event OPRs. I was a little worried before doing this analysis that this value would be way outside of those bounds, but it actually fits in nicely, which means I probably at least vaguely understand what I’m doing. :smile:
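For reference, here's the prediction formula above as a small function; stdev2019 = 17.1 comes from the post, while the alliance sums passed in below are made-up numbers:

```python
# Elo-style logistic win probability from summed predicted contributions (PCs).
STDEV_2019 = 17.1  # stdev of 2019 week 1 qual and playoff scores

def red_win_probability(red_pc, blue_pc, stdev=STDEV_2019):
    return 1.0 / (1.0 + 10.0 ** ((blue_pc - red_pc) / (2.0 * stdev)))

print(red_win_probability(60.0, 60.0))        # evenly matched: 0.5
print(red_win_probability(77.1, 60.0) > 0.5)  # red favored: True
```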

Generalizing for the future:
Depending on what your usage case is for OPR, I think you could reasonably say the uncertainty on OPR is anywhere from ±11.2 to ±15.7 points in 2019. Again, depending on the usage case, these uncertainties should generally scale proportionally to score standard deviations each year, so if we normalize these ranges to the 2019 week 1 score stdev of 17.1, we can obtain general uncertainties to use in future years. If we call a year’s week 1 score stdev s, this gives us 95% confidence intervals of ±0.65s on the lower end to ±0.92s on the higher end. Here’s what those ranges would have looked like for the past few years:

Estimated 'OPR uncertainties'
year week 1 score stdev 0.65*s 0.92*s
2010 2.7 1.76 2.48
2011 28.4 18.46 26.13
2012 15.5 10.08 14.26
2013 31.1 20.22 28.61
2014 49.3 32.05 45.36
2015 33.2 21.58 30.54
2016 27.5 17.88 25.30
2017 70.6 45.89 64.95
2018 106.9 69.49 98.35
2019 17.1 11.12 15.73
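The table above can be regenerated from the week 1 stdevs; a quick sketch, with the stdevs transcribed from the table:

```python
# Week 1 score stdevs by year, transcribed from the table above.
week1_stdev = {2010: 2.7, 2011: 28.4, 2012: 15.5, 2013: 31.1, 2014: 49.3,
               2015: 33.2, 2016: 27.5, 2017: 70.6, 2018: 106.9, 2019: 17.1}

# Apply the 0.65*s and 0.92*s multipliers derived above.
for year, s in sorted(week1_stdev.items()):
    print(f"{year}: +/-{0.65 * s:.2f} to +/-{0.92 * s:.2f}")
```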

One last thought: another easy interpretation of “OPR uncertainty” would be the range in which a team’s OPR is expected to change at an event. Here are the pre-event max OPRs and the post-event OPRs for teams in Roebling, Turing, and Tesla:

roe, tur, and tes pre and post event OPRs
team pre-event max OPR post-event OPR
27 31.74 28.27
203 19.51 25.65
229 25.20 12.64
333 25.25 17.58
341 24.12 23.21
346 38.64 36.45
379 29.77 24.02
525 36.17 33.07
548 30.56 28.26
610 33.22 37.23
834 31.38 26.70
977 31.12 28.55
1086 18.44 21.57
1153 33.65 33.28
1155 18.95 30.06
1259 33.53 16.91
1262 32.53 25.31
1285 26.38 25.19
1577 34.27 26.95
1629 30.59 29.52
1747 29.92 30.67
2095 17.24 21.96
2168 38.57 31.37
2177 24.25 29.77
2358 19.63 24.37
2451 24.69 27.39
2534 30.20 32.45
2576 28.99 33.31
2626 16.09 27.07
3026 17.85 10.26
3197 32.12 23.46
3313 30.84 24.61
3324 37.57 26.78
3357 30.64 30.72
3572 27.74 33.46
3620 30.31 28.73
3986 28.44 32.27
4122 24.45 26.95
4145 19.76 19.06
4329 28.55 21.21
4338 32.96 26.42
4476 29.73 27.60
4590 17.08 28.63
4817 11.13 14.81
4907 24.31 18.27
4909 27.78 26.89
4917 34.56 28.80
4939 19.00 16.67
4967 29.09 15.75
5205 29.90 28.74
5401 36.84 27.75
5422 20.06 12.15
5585 9.05 13.51
5618 30.30 29.55
5667 20.65 34.19
5934 20.75 19.21
5992 23.85 17.92
6328 17.57 17.05
6569 31.82 37.85
6574 25.65 22.78
6909 12.82 20.80
6964 21.36 20.73
7457 32.56 28.98
7460 16.37 24.75
7531 14.96 9.15
7554 9.34 17.75
7626 17.34 18.58
7850 13.25 16.73
148 43.38 31.75
231 28.12 33.74
294 28.42 34.38
359 38.92 39.23
568 8.30 15.62
585 11.07 9.45
972 23.12 23.50
1251 15.25 23.40
1296 23.45 21.33
1318 29.24 26.89
1369 28.80 25.03
1410 30.66 27.23
1533 28.79 23.46
1566 9.26 9.44
1658 29.22 37.19
1771 30.85 36.02
1912 24.55 22.97
2341 30.46 20.11
2412 25.39 27.27
2415 12.92 7.74
2429 12.52 13.60
2485 23.79 23.41
2557 24.94 27.09
2642 29.94 20.43
2655 28.72 18.82
2813 15.13 16.87
2898 23.86 14.68
2907 23.62 14.48
3243 19.87 17.40
3366 11.25 11.86
3647 35.75 29.76
3737 23.27 30.01
3847 34.65 29.43
3859 23.78 24.44
3881 13.44 21.69
3991 18.36 12.28
4005 27.79 29.32
4276 24.56 21.73
4336 27.47 24.90
4610 24.53 29.47
4639 11.17 17.25
4941 25.20 19.26
4965 23.31 19.35
5059 16.92 16.63
5190 35.71 35.81
5332 17.84 21.76
5411 30.74 32.54
5427 19.14 16.11
5550 14.46 12.19
5842 13.16 9.09
6026 6.86 16.86
6304 32.55 23.78
6364 17.38 21.78
6647 25.39 18.21
6695 15.81 24.21
6803 20.74 24.16
6829 21.80 25.73
6901 22.93 21.22
7403 13.59 5.92
7425 15.63 14.56
7458 16.35 29.91
7459 6.59 15.63
7492 22.28 22.45
7498 25.17 28.73
7525 13.93 24.00
7565 7.38 13.51
7906 7.43 12.53
254 38.30 41.56
589 18.80 17.61
649 37.73 37.12
948 23.66 26.06
1094 23.46 24.33
1421 34.51 22.40
1619 45.12 34.80
1622 27.77 22.21
1683 29.37 24.73
1700 27.29 23.94
1726 36.12 22.41
1730 33.16 20.14
1868 29.59 22.19
2383 32.04 27.43
2403 29.26 36.93
2556 16.10 27.00
2582 25.78 25.66
2733 22.68 29.89
2839 30.03 21.44
2930 29.82 24.20
3035 24.71 21.20
3196 26.52 31.51
3284 31.76 29.68
3310 38.53 40.02
3478 38.97 25.67
3933 27.00 16.16
3937 35.35 34.82
3966 20.28 15.17
4125 29.18 17.19
4401 20.71 11.49
4451 30.35 31.17
4513 22.02 31.43
4522 26.83 25.13
4613 31.66 31.99
5012 31.39 29.95
5199 34.22 39.04
5468 18.76 21.77
5716 12.65 6.22
5754 16.54 13.28
5805 19.16 22.01
5857 12.97 12.65
5872 23.82 17.66
5930 13.24 20.21
6025 20.19 21.07
6072 16.25 23.09
6144 21.40 25.15
6315 18.55 10.69
6321 27.82 22.51
6377 31.17 32.92
6391 27.86 29.34
6579 21.27 20.27
6672 32.65 32.02
6831 16.21 14.57
6957 11.42 5.39
6986 28.12 21.80
6995 16.21 11.58
7277 14.23 5.48
7308 11.24 17.90
7464 16.23 14.46
7494 15.81 13.43
7528 12.31 6.48
7621 16.08 12.46
7661 14.81 18.26
7724 23.27 22.73
7753 14.82 21.50
7840 27.23 19.84

The differences between these have an average of 1.0 (unsurprising due to the use of max OPRs) and a stdev of 6.0. Multiplying by 2 gives us a 95% confidence interval of ±12.1 (and you could go a bit higher by accounting for the average offset if you wanted). So that’s another reasonable value that falls between my expected lower and upper bounds.