Like many others, I do not put a large amount of emphasis on a team’s OPR, because it is hard to know whether a game is well suited to OPR (unless the game is composed of completely independent or completely dependent tasks). In AP Statistics, my partner (who is also on 587) and I decided to do a linear regression test between OPR and rank for all of FIRST.

While we know OPR is not meant to have much to do with rank, we wanted to see how strongly the two are associated. We ran a linear regression test on the slope of OPR vs. rank, built two confidence intervals for the slope, and calculated the r-value. We also tried to generalize to all of FIRST by taking a randomly generated sample of teams from across every competition; to minimize the bias that comes from every competition being different, we used a sample size of 300.

I attached the PowerPoint we presented — it outlines everything we did and has our conclusions! We thought it was interesting and wanted to share! (This is my first time posting a link, so hopefully it works!)

I would like to see OPR as an indicator of playoff success compared to rank as an indicator of playoff success. The traditional selling point of OPR is that it measures robot quality better than RP or QP or whatever.

Great presentation — I learned some things from it, being only in Grade 10 :P. One thing I noticed is on slide 8 (Independence): you stated that 300 × 10 = 3,000, which is less than 6,000 teams × 2 comps per team. While that might come out to roughly the right number of total rankings for the year, only about 3,400 registered teams actually competed in 2017 — as you know, many teams shut down for various reasons, and FIRST leaves some numbers unused as extras every year, etc.

So it would be closer to 3,400 × 2 — also assuming the average number of comps attended per team is 2, which you stated it is.

To piggyback on what Connor said above, the main point of calculating OPR is that it’s supposed to be better at predicting on-field success than ranking points. I would definitely expect to see a correlation, but I would be upset if there were a very large one. If the correlation were too high, that would mean OPR gives pretty much the same data as ranking points, and is therefore pretty much useless. I would also like to see correlation data between OPR and some metric of competition performance (e.g. # of playoff matches played).

I would be interested in seeing the r and r^2 values for this correlation — i.e., how much of the variation in rank is explained by variation in OPR. If we take OPR to be a perfect measurement of robot ability (which, as good as it is, it is not), we can use that to calculate how good a measure rank is of robot ability for each year. In that way, we could theoretically quantify the amount of randomness in each game’s scoring.

We calculated the r-value in order to get the t-stat: it was .635. We would also have liked to do something with playoff matches, but we only had a week for this project (and had other exams, projects, etc.).
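For reference, with r = .635 and n = 300, the r^2 and the t statistic for the slope follow directly — a quick sketch of the arithmetic:

```python
from math import sqrt

r = 0.635  # the r-value reported above
n = 300    # the sample size used

# r^2: the share of the variation in rank accounted for by OPR
r_squared = r * r

# t statistic for the slope of the regression line, with df = n - 2
t_stat = r * sqrt(n - 2) / sqrt(1 - r_squared)

print(f"r^2 = {r_squared:.3f}")  # about 0.40, i.e. ~40% of rank variance
print(f"t   = {t_stat:.1f}")
```

So by the r^2 reading, roughly 40% of the variation in rank lines up with variation in OPR, and the t statistic is far into significant territory.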

As for the point about calculating randomness — that does sound pretty freakin’ cool. However (again, I personally don’t look that much at OPR because we can usually get raw data from the field), I don’t really see a way to check how perfect OPR is. You could assume it is, but I wouldn’t feel super comfortable doing that for this year personally.

One thing I have been playing with for checking OPR is to randomly create a bunch of robots, feed them into a tournament, and see how rank/OPR compare with the actual robot profile.

So for example, I used the Northern Lights schedule.
I took the 60 teams and randomly created robots in excel using the following criteria:

1. Robots place 0 to 5 gears, even odds of each number.

2. If a robot places 1 or more gears, there is a 60% chance it does so in auto.

3. If a robot places a gear in auto it is mobile; otherwise it has a 90% chance of mobility.

4. If a robot places 0 to 2 gears, there is a 5% chance it shoots 42 fuel.

5. If a robot places 3 to 5 gears, it shoots 0 to 4 fuel, even odds (mostly to minimize ties).

6. There is a 90% chance a robot is capable of climbing. If capable, a robot has a 50% chance of climbing in any given match.

So climbing (criterion 6) was the only match-to-match random event; criteria 1 through 5 were fixed properties a robot exhibited in every match.
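A sketch of that robot generator (in Python rather than Excel; the probabilities are the ones listed above, but the structure and names are my own):

```python
import random

random.seed(2017)

def make_robot():
    """One robot profile per the criteria above; all fields fixed per robot."""
    gears = random.randint(0, 5)                        # 0-5 gears, even odds
    auto_gear = gears >= 1 and random.random() < 0.60   # 60% chance a gear robot scores in auto
    mobile = auto_gear or random.random() < 0.90        # auto gear implies mobility, else 90%
    if gears <= 2:
        fuel = 42 if random.random() < 0.05 else 0      # low-gear robots: 5% chance of 42 fuel
    else:
        fuel = random.randint(0, 4)                     # high-gear robots: 0-4 fuel, even odds
    can_climb = random.random() < 0.90                  # 90% of robots can climb at all
    return {"gears": gears, "auto_gear": auto_gear, "mobile": mobile,
            "fuel": fuel, "can_climb": can_climb}

def climbs_this_match(robot):
    """Climbing is the only per-match random event: 50/50 if capable."""
    return robot["can_climb"] and random.random() < 0.50

# e.g. a 60-team field like the Northern Lights schedule
roster = [make_robot() for _ in range(60)]
```

From there you can walk the actual match schedule, roll `climbs_this_match` for each robot, and total up scores per alliance.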

Basically, in just about every randomization, I would see a robot running 5 gears a match and climbing end up with a .500 record or worse and an OPR in the 60s.

The next step was to do an alliance draft using various selection routines and see how many points the 8 resulting alliances would score.

Typically, what I found was that drafting on straight rank or straight OPR came out about 10% to 30% lower on total points than if I created alliances using the underlying robot performance.
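The shape of that comparison can be sketched like this — a serpentine draft run twice, once sorting on a noisy observed metric and once on true ability. Everything here is hypothetical (the numbers and noise level are made up, and this is not the original routine), but by construction the noisy draft can never beat the truth-based one:

```python
import random

random.seed(1)

def draft(teams, key):
    """Serpentine draft: top 8 by `key` are captains, then two greedy pick rounds."""
    pool = sorted(teams, key=key, reverse=True)
    alliances = [[pool[i]] for i in range(8)]
    remaining = pool[8:]
    for order in (range(8), range(7, -1, -1)):   # round 1: 1->8, round 2: 8->1
        for i in order:
            alliances[i].append(remaining.pop(0))  # always take best available by `key`
    return alliances

# Hypothetical field: each team has a true per-match contribution, plus a
# noisy observed metric standing in for OPR (or rank, if you sort on that).
teams = [{"true": random.uniform(20, 120)} for _ in range(60)]
for team in teams:
    team["opr"] = team["true"] + random.gauss(0, 25)

def alliance_points(alliances):
    return sum(member["true"] for a in alliances for member in a)

by_metric = alliance_points(draft(teams, key=lambda t: t["opr"]))
by_truth = alliance_points(draft(teams, key=lambda t: t["true"]))
print(f"noisy-metric draft scores {by_metric / by_truth:.0%} of the truth-based draft")
```

How far below 100% the ratio lands depends entirely on how noisy the metric is relative to the spread of true ability.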

My general conclusion was that OPR is, at its heart, a weighted-average score metric. Rank is also correlated with total points. By assigning a 50% chance of a robot climbing in a given match, I strongly tied points to how lucky a team was with those 50/50 rolls, and that luck showed up in both OPR and rank. Moving that knob would change the rate of outliers, but they still existed.
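That “weighted average” flavor comes straight from how OPR is computed: a least-squares solve of alliance-score equations, so each team gets credit for a share of the scores its alliances posted, luck included. A toy sketch (the field size, noise level, and numpy solve are my choices, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# OPR solves A @ x ≈ s in least squares: each row of A flags the 3 robots
# on one alliance, and s holds that alliance's match score.
n_teams, n_alliances = 12, 40
true_contrib = rng.uniform(10, 60, n_teams)  # hypothetical per-robot scoring ability

A = np.zeros((n_alliances, n_teams))
s = np.zeros(n_alliances)
for i in range(n_alliances):
    alliance = rng.choice(n_teams, size=3, replace=False)
    A[i, alliance] = 1
    s[i] = true_contrib[alliance].sum() + rng.normal(0, 15)  # score + match-to-match luck

# The least-squares solution is the vector of OPRs.
opr, *_ = np.linalg.lstsq(A, s, rcond=None)
print(np.round(opr, 1))
```

The recovered OPRs track `true_contrib` only up to the injected noise — which is exactly why a lucky run of 50/50 climbs bleeds into a team’s OPR.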

Generally speaking, this year OPR and rank both rewarded teams for having a higher percentage of their alliance partners climbing versus the climbing rate of the tournament. That’s why you would see teams with a component OPR of 75 on touchpad points.

I like this approach, but I’d like this approach better with more detailed modeling of robot abilities. Generate a set of 60 robots of varying ability, meaning different underlying distributions for gear delivery, climbing, fuel scoring, etc. Tune said distributions (and distribution of distributions!) to match actual observed scouting data. Run a bunch of simulated tournaments, and see what happens.

One thing to note is that OPR is less skewed than pure rank when a team faces a harder schedule. In our division, we faced an extremely difficult schedule (literally the hardest in our division according to frc.divisions.co) and many matches were lost by a single missed climb. This dropped us out of the top 50 in pure rankings, but we were ranked pretty respectably in OPR (top-20) and got picked as a 2nd pick.