OPR after Week Seven Events

The OPR/CCWM numbers through the Week 7 events have been posted; please see


If you find any error or have any questions, please let me know.

I will post this again after the division data of the World Championship is announced.

So after 7 weeks, what is the final verdict on OPR in 2013? I feel like it’s a pretty good measurement, since scoring is mostly linear.

No one’s totally sure until Karthik either praises or denigrates it in his strategy presentation this year. :smiley:

I’m not sure exactly how it’s calculated, but I feel like it’s a decent way to scout as long as it’s followed up by actual scouting of a team. It’s going to sound like I’m whining here, but it’s true. In all but one match, our team scored above 50% (in some cases above 65%) of our alliance’s total points during the competition (including elims). Our average was 40 points per match, and that includes a match where we jammed and only scored 25 points; without that match, I believe we averaged over 44 points per match. Yet our team won only two matches during the entire competition. So, I mean, it’s a good measurement, but some teams are going to be missed by it.
Generally, however, I do think it’s an invaluable tool for smaller teams who don’t have the resources to do a bunch of scouting themselves. For the most part, robots can be measured by those metrics and it’s mostly accurate.

I can think of other issues with OPR that are unchanging from year to year:
-How valuable is a scouting system that everyone else is using? How do you differentiate your picks from other teams’ picks if you’re all picking from the same OPR/CCWM-derived list?
-OPR doesn’t take into account a robot’s type and how it interacts with other robots. You can get a high OPR in qualifying simply by supplying discs to a floor loader that then scores them for you. But an alliance of disc-suppliers would score very poorly in eliminations.
-OPR doesn’t compare well at all across events. Waterloo had a ridiculous average OPR (even excluding 1114/2056), but that was probably because nearly every robot there was offensive, and so little defense was played. If you added 10 defensive robots to the pool, everyone else’s OPR would have fallen drastically. You can compute a “global OPR” across all events, but I doubt it corrects for that much.

OPR seemed to do very well this year as far as its ability to reflect a robot’s effectiveness within a regional, but there are broader issues with it than just that. IIRC, Karthik’s main complaint about OPR was that people were using it for scouting and comparing robots without really understanding it or understanding its shortcomings, like those I listed above.

^It was just a joke, bro. If a team bases its scouting on a single statistical value (one they may not even fully understand), they’re asking for trouble.

I would say OPR is really good at figuring out what level a robot can play at, but without the context of knowing the “class” of the robot, the number is meaningless. Teams 11 and 245 are some of the better-known “cyclers” that may have a high OPR, but would a cycler pick another cycler over a floor loader just because the cycler has a higher OPR?

I guess they always could, but that probably means scouting wants to take the rest of the day off and not worry about scouting any elims.

But maybe the cycler also played some impeccable defense and was very agile in transition, so they could easily switch between offense and defense… and the rabbit hole gets deeper.

Totally agree with all of your other points except this one. Why do you need to “differentiate your picks” for the sake of differentiation? You need to find the best robots available to complete your alliance. If there were a single, universal system to do this, it wouldn’t be any less valuable just because everyone else was using it.

OPR (specifically CCWM) is better than I expected at separating teams, and seems to be ~85% accurate based on a comparison to a few events’ worth of data.

That said, you have to remember that OPR is an approximation of skill that is better than average score but no replacement for a real scouting system. It’s a decent “sanity check” to see if you missed an outlier or two but if you have actual data I wouldn’t even touch it.

During and after the season, it’s a very good approximation of skill that is very useful for Internet debates. :stuck_out_tongue:

OPR is an excellent indicator this year.

What OPR does not clarify, however, is autonomous and climbing.

Autonomous is indefensible. I would much, much rather have a robot with an OPR of 55 because it scores 7 discs every single time in auton and then climbs quickly at the end, even if it has a zero teleop score, because that’s the perfect robot to have play defense.

OPR will never be an end-all measurement, and you really have to understand the mechanics of the game.

Another place that OPR fails is defense. There were several extremely well-scoring machines at MSC that had low OPRs because they spent half their matches playing defense. Unless you scouted and knew that, you’d assume they were just poor robots.

That’s the kind of third round sleeper pick that high seeded teams dream of.

OPR in 2013 as a predictor of a robot’s contribution to an alliance = Thumbs Up
OPR in 2013 as a primary or solitary scouting tool = Thumbs Down
OPR in 2013 as a tool to complement effective match scouting = Thumbs Way Up
OPR in 2013 as a stat that is blindly quoted by those without a fundamental understanding of what it means = Thumbs Way Down

For a more detailed analysis, come check out my seminar in St. Louis.

How many thumbs do you have?

Somewhere, deep in the bowels of 1114’s shop, there exists a room full of nothing but students training to be Karthik’s hand doubles. They practice thumbs up and thumbs down for hours a day for many years until they are needed in an occasion such as this.

On top of that, it is not terribly difficult to actually match scout and get much more accurate, much more useful data. Fundamentally, OPR has far less information to rank teams with than a match scouter does, so it should come as a surprise to no one that it doesn’t do as good a job.

In other words, a good strategy would be to use traditional scouting to find robots that complement your robot and strategy, and then look at their calculated contributions alongside average individual scores to help narrow down and rank those robots.
Though at some point, if there is a redundant robot that is a certain amount better than a complementary robot, you definitely have to consider the redundant robot over the complementary one. How much of a difference must there be to justify that? That is a great subject for a team meeting on Friday night.

Had to

To be clear: Ed’s spreadsheet has separate per-team least-squares estimates of Auto, Climb, and TeleOp.
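In other words (a sketch with invented numbers, not Ed’s actual data): the same regression used for OPR can be run against any per-alliance sub-score, giving a separate Auto, Climb, and TeleOp estimate for each team:

```python
import numpy as np

teams = ["A", "B", "C"]
# (alliance, auto pts, climb pts, teleop pts) -- hypothetical 2-robot alliances
matches = [
    (("A", "B"), 12, 20, 50),
    (("B", "C"),  6, 10, 40),
    (("A", "C"), 18, 30, 60),
]

# One participation row per alliance; the same matrix is reused for
# every score component, only the right-hand side changes.
A = np.array([[1.0 if t in m[0] else 0.0 for t in teams] for m in matches])

component_opr = {}
for name, col in (("auto", 1), ("climb", 2), ("teleop", 3)):
    b = np.array([m[col] for m in matches], dtype=float)
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    component_opr[name] = {t: round(float(v), 1) for t, v in zip(teams, est)}

print(component_opr)
```

The three component estimates for a team should sum (roughly) to its overall OPR, which is what makes this decomposition useful for spotting climbers vs. pure shooters.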

Another place that OPR fails is defense. There were several extremely well-scoring machines at MSC that had low OPRs because they spent half their matches playing defense. Unless you scouted and knew that, you’d assume they were just poor robots.

In general if a “well-scoring” machine is assigned the task of defending, it’s often because the other two machines are higher scorers. In the least-squares analysis that is used to compute OPR, the defending machine gets credit for the alliance score.
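Since the least-squares mechanics come up a lot in this thread, here’s a minimal sketch of the computation (invented scores and 2-robot alliances to keep the matrix small; real events use 3-robot alliances and every qualification match):

```python
import numpy as np

# Hypothetical event: 4 teams, 2-robot alliances (real FRC uses 3).
teams = ["A", "B", "C", "D"]
matches = [                 # (alliance members, alliance score) -- made up
    (("A", "B"), 100),
    (("C", "D"),  60),
    (("A", "C"),  90),
    (("B", "D"),  70),
    (("A", "D"),  85),
]

# Participation matrix: one row per alliance, one column per team,
# with a 1 wherever that team played on that alliance.
A = np.zeros((len(matches), len(teams)))
b = np.zeros(len(matches))
for i, (alliance, score) in enumerate(matches):
    for t in alliance:
        A[i, teams.index(t)] = 1.0
    b[i] = score

# OPR is the least-squares solution of A x = b: each team's estimated
# per-match contribution to its alliance's total score.
vec, *_ = np.linalg.lstsq(A, b, rcond=None)
opr = {t: round(float(v), 1) for t, v in zip(teams, vec)}
print(opr)  # {'A': 57.5, 'B': 42.5, 'C': 32.5, 'D': 27.5}
```

Note that the solver only ever sees alliance totals, which is exactly why a robot told to play defense on a strong alliance still “earns” a share of that alliance’s points.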

I guess another way of thinking of it is imagine this scene:
A: “How can we make our paper-based scouting better?”
B: “Let’s use this OPR thing. It’ll make us better than the opposition for sure!”

The problem in this scene is that nearly every other quality team is probably also augmenting their scouting with OPR. So in order to be scouting smarter than the opposition, you need to be thinking further than the numbers you can pull off the internet.

Essentially, the rising tide of more-available analytical tools has lifted everyone’s scouting boats: everyone is scouting better because of it, but a given team is probably in about the same position relative to its competitors as it was before the advent of easy-to-use OPR data. So you need to use it, but assuming that it is the ticket to better-than-your-opposition picking may be a bad assumption.

Another hobby of mine is triathlons, where you see something similar: a new, faster bike will come out, and you’ll buy it and use it just to keep up with your competitors. The speedier bike or technology doesn’t necessarily move you up the rankings, since everyone else is using it - moving up still requires hard work and getting better.

Shh, everyone knows that’s a special Canadian trade secret.

This. No matter how much we improve our automated analyses, humans still have to not only be in the loop, but control it. It might be interesting to find out whether building your pick list strictly on OPR is better than just picking the highest-ranked available robot. I’m pretty sure it is, but without some actual numerical analysis I won’t say that I believe it. :wink:

Preposterous! This is 1114; they’re not wasting time practicing with humans to stand in for Karthik’s thumbs. They’re building something.

Tom, did you see the numbers on 2337? They had a respectable OPR of 44 but ranked 33rd out of 64 teams, so by OPR alone they wouldn’t be considered a top offensive robot. Perhaps that’s because they played a lot of defense. But did you see their CCWM number? It was 17.6, ranked 11th out of 64 teams. 2054 and 67 made a wise choice. I doubt they made it because of the CCWM number, but you cannot deny that the CCWM number supports their scouts’ decision. I have always advocated using CCWM as part of the sanity check for the second-round pick. It does take defense into consideration.
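To make the OPR/CCWM distinction concrete, here’s a sketch with made-up 2v2 matches: CCWM uses the same participation matrix as OPR but regresses on winning margin (own alliance score minus opponent score) instead of the raw alliance score, so suppressing the opponent counts for as much as scoring yourself:

```python
import numpy as np

teams = ["A", "B", "C", "D"]
# (red alliance, blue alliance, red score, blue score) -- invented data
matches = [
    (("A", "B"), ("C", "D"), 100, 60),
    (("A", "C"), ("B", "D"),  90, 70),
    (("A", "D"), ("B", "C"),  85, 75),
]

# Each match contributes two rows: one for each alliance, with the
# winning margin from that alliance's point of view as the target.
rows, margins = [], []
for red, blue, red_score, blue_score in matches:
    for alliance, own, opp in ((red, red_score, blue_score),
                               (blue, blue_score, red_score)):
        rows.append([1.0 if t in alliance else 0.0 for t in teams])
        margins.append(own - opp)   # margin, not raw score

vec, *_ = np.linalg.lstsq(np.array(rows), np.array(margins, dtype=float),
                          rcond=None)
ccwm = {t: round(float(v), 1) for t, v in zip(teams, vec)}
print(ccwm)  # {'A': 35.0, 'B': 5.0, 'C': -15.0, 'D': -25.0}
```

A robot that shuts down the opposing alliance’s scoring shows up here (the opponent’s score drops), which is why CCWM is a fairer sanity check for potential defensive second picks than plain OPR.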