OPR after Week Seven Events
The OPR/CCWM numbers up to Week 7 events have been posted; please see
http://www.chiefdelphi.com/media/papers/2174
If you find any errors or have any questions, please let me know.
I will post this again after the division data of the World Championship is announced.
Grim Tuesday
14-04-2013, 00:46
So after 7 weeks, what is the final verdict on OPR in 2013? I feel like it's a pretty good measurement as scoring is mostly linear.
PayneTrain
14-04-2013, 03:53
So after 7 weeks, what is the final verdict on OPR in 2013? I feel like it's a pretty good measurement as scoring is mostly linear.
No one's totally sure until Karthik either praises or denigrates it in his strategy presentation this year. :D
gabrielau23
14-04-2013, 07:43
I'm not sure how it's calculated, but I feel like it's a decent way to scout as long as it's followed up by actual scouting of a team. It's going to sound like I'm whining here, but it's true. Except for one match, our team scored above 50% (in some cases we scored above 65%) of our alliance's total points during the competition (including elims). Our average points per match was 40 (and this is including a match where we jammed and only scored 25 points). If we take that match out, I believe that we averaged over 44 points per match. Yet our team only won two matches during the entire competition. So, I mean, it's a good measurement, but teams are going to be missed.
Generally, however, I do think it's an invaluable tool for smaller teams who don't have the resources to do a bunch of scouting themselves. For the most part, robots can be measured by those metrics and it's mostly accurate.
No one's totally sure until Karthik either praises or denigrates it in his strategy presentation this year. :D
I can think of other issues with OPR that are unchanging from year to year:
-How valuable is a scouting system that everyone else is using? How do you differentiate your picks from other teams' picks if you're all picking from the same OPR/CCWM-derived list?
-OPR doesn't take into account a robot's type and how it interacts with other robots. You can get a high OPR in qualifying simply by supplying discs to a floor loader that then scores them for you. But an alliance of disc-suppliers would score very poorly in eliminations.
-OPR doesn't compare well at all across events. Waterloo had a ridiculously high average OPR (even excluding 1114/2056), but that was probably because nearly every robot there was offensive, and so little defense was played. If you added 10 defensive robots to the pool, everyone else's OPR would have fallen drastically. You can do a "global OPR", but I doubt it corrects for that much.
OPR seemed to do very well this year as far as its ability to reflect a robot's effectiveness within a regional, but there are broader issues with it than just that. IIRC, Karthik's main complaint about OPR was that people were using it for scouting and comparing robots without really understanding it or its shortcomings, like those I listed above.
PayneTrain
14-04-2013, 09:37
^It was just a joke, bro. But it is a problem if a team bases its scouting off a singular statistical value (that they may not even fully understand).
I would say OPR is really good at figuring out what level a robot can play at, but without the context of knowing the "class" of the robot, the number is meaningless. Teams 11 and 245 are some of the better-known "cyclers" that may have a high OPR, but would a cycler pick another cycler over a floor loader because the cycler has a higher OPR?
I guess they always could, but that probably means scouting wants to take the rest of the day off and not worry about scouting any elims.
But maybe the cycler also played some impeccable defense and was very agile in transition, so they could easily switch between offense and defense... and the rabbit hole gets deeper.
GearsOfFury
14-04-2013, 10:33
I can think of other issues with OPR that are unchanging from year to year:
-How valuable is a scouting system that everyone else is using? How do you differentiate your picks from other teams' picks if you're all picking from the same OPR/CCWM-derived list?
Totally agree with all of your other points except this one. Why do you need to "differentiate your picks" for the sake of differentiation? You need to find the best robots available to complete your alliance. If there were a single, universal system to do this, it wouldn't be any less valuable just because everyone else was using it.
Chris is me
14-04-2013, 10:38
OPR (specifically CCWM) is better than I expected at separating teams, and seems to be ~85% accurate based on a comparison to a few events of data.
That said, you have to remember that OPR is an approximation of skill that is better than average score but no replacement for a real scouting system. It's a decent "sanity check" to see if you missed an outlier or two but if you have actual data I wouldn't even touch it.
During and after the season, it's a very good approximation of skill that is very useful for Internet debates. :P
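To put a number like "~85% accurate" in context, here is one simple way such a check could be scored (this is just my own sketch; every team number and score in it is invented): sum each alliance's ratings, predict the higher sum to win, and count hits.

```python
# One possible "accuracy" check for OPR/CCWM: predict each match winner by
# summing alliance ratings and compare against the actual result.
# All ratings and scores below are invented, not real 2013 data.
ratings = {1114: 62.0, 2056: 58.5, 254: 55.0, 987: 49.2, 4039: 31.0, 1538: 44.7}
matches = [
    # (red teams, blue teams, red score, blue score)
    ((1114, 2056, 4039), (254, 987, 1538), 120, 95),
    ((254, 1114, 1538), (2056, 987, 4039), 88, 102),
]

correct = 0
for red, blue, red_score, blue_score in matches:
    predicted_red_win = sum(ratings[t] for t in red) > sum(ratings[t] for t in blue)
    actual_red_win = red_score > blue_score
    correct += predicted_red_win == actual_red_win

print(f"predicted {correct}/{len(matches)} winners correctly")
```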
Tom Line
14-04-2013, 11:24
OPR is an excellent indicator this year.
What OPR does not break out, however, is autonomous and climbing.
Autonomous is indefensible. I would much, much rather have a robot that has an OPR of 55 because it scores 7 discs every single time in auton, then climbs quickly at the end, even if it has a zero teleop score. Because that's the perfect robot to have play defense.
OPR will never be an end-all measurement, and you really have to understand the mechanics of the game.
Another place that OPR fails is defense. There were several extremely well-scoring machines at MSC that had low OPRs because they spent half their matches playing defense. Unless you scouted and knew that, you'd assume they were just poor robots.
That's the kind of third-round sleeper pick that high-seeded teams dream of.
No one's totally sure until Karthik either praises or denigrates it in his strategy presentation this year. :D
OPR in 2013 as a predictor of a robot's contribution to an alliance = Thumbs Up
OPR in 2013 as a primary or solitary scouting tool = Thumbs Down
OPR in 2013 as a tool to complement effective match scouting = Thumbs Way Up
OPR in 2013 as a stat that is blindly quoted by those without a fundamental understanding of what it means = Thumbs Way Down
For a more detailed analysis, come check out my seminar in St. Louis (http://www.chiefdelphi.com/forums/showthread.php?t=115843).
OPR in 2013 as a predictor of a robot's contribution to an alliance = Thumbs Up
OPR in 2013 as a primary or solitary scouting tool = Thumbs Down
OPR in 2013 as a tool to complement effective match scouting = Thumbs Way Up
OPR in 2013 as a stat that is blindly quoted by those without a fundamental understanding of what it means = Thumbs Way Down
For a more detailed analysis, come check out my seminar in St. Louis (http://www.chiefdelphi.com/forums/showthread.php?t=115843).
How many thumbs do you have?
Andrew Schreiber
14-04-2013, 11:51
How many thumbs do you have?
Somewhere, deep in the bowels of 1114's shop, there exists a room full of nothing but students training to be Karthik's hand doubles. They practice thumbs up and thumbs down for hours a day, for many years, until they are needed on an occasion such as this.
Ian Curtis
14-04-2013, 11:53
OPR in 2013 as a stat that is blindly quoted by those without a fundamental understanding of what it means = Thumbs Way Down
On top of that, it is not terribly difficult to actually match scout and get much more accurate, much more useful data. Fundamentally, OPR has far less information with which to rank teams than a match scouter does, so it should come as a surprise to no one that it doesn't do as good a job.
The best model of a cat is another, or preferably the same, cat.
All models are wrong, some are useful.
OPR in 2013 as a predictor of a robot's contribution to an alliance = Thumbs Up
OPR in 2013 as a primary or solitary scouting tool = Thumbs Down
OPR in 2013 as a tool to complement effective match scouting = Thumbs Way Up
OPR in 2013 as a stat that is blindly quoted by those without a fundamental understanding of what it means = Thumbs Way Down
For a more detailed analysis, come check out my seminar in St. Louis (http://www.chiefdelphi.com/forums/showthread.php?t=115843).
In other words, a good strategy would be to use traditional scouting to find robots that complement your robot and strategy, and then look at their calculated contributions along with their average individual scores to help narrow down and rank those robots.
Though at some point, if there is a redundant robot that is a certain amount better than a complementary robot, you definitely have to consider the redundant robot over the complementary one. How much of a difference must there be to do that? That is a great subject for a team meeting on Friday night.
Akash Rastogi
14-04-2013, 11:55
OPR in 2013 as a predictor of a robot's contribution to an alliance = Thumbs Up
OPR in 2013 as a primary or solitary scouting tool = Thumbs Down
OPR in 2013 as a tool to complement effective match scouting = Thumbs Way Up
OPR in 2013 as a stat that is blindly quoted by those without a fundamental understanding of what it means = Thumbs Way Down
For a more detailed analysis, come check out my seminar in St. Louis (http://www.chiefdelphi.com/forums/showthread.php?t=115843).
Had to
http://weknowgifs.com/wp-content/uploads/2013/03/downvoting-roman-gif.gif
What OPR does not break out, however, is autonomous and climbing.
To be clear: Ed's spreadsheet has separate per-team least-squares estimates of Auto, Climb, and TeleOp.
Another place that OPR fails is defense. There were several extremely well-scoring machines at MSC that had low OPRs because they spent half their matches playing defense. Unless you scouted and knew that, you'd assume they were just poor robots.
In general, if a "well-scoring" machine is assigned the task of defending, it's often because the other two machines are higher scorers. In the least-squares analysis that is used to compute OPR, the defending machine gets credit for the alliance score.
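For anyone who hasn't seen the "least-squares analysis" spelled out, here is a minimal NumPy sketch of the idea (every team number and score below is invented; the spreadsheet does the equivalent in its macros):

```python
import numpy as np

# Each row of the design matrix corresponds to one alliance in one match:
# a 1 marks each team on that alliance. Solving least-squares against the
# alliance scores gives OPR; against the winning margins, CCWM.
matches = [
    # (red teams, blue teams, red score, blue score) - invented data
    ((1114, 2056, 4039), (254, 987, 1538), 120, 95),
    ((254, 1114, 1538), (2056, 987, 4039), 88, 102),
    ((987, 4039, 2056), (1114, 254, 1538), 64, 131),
]

teams = sorted({t for red, blue, _, _ in matches for t in red + blue})
idx = {t: i for i, t in enumerate(teams)}

A, scores, margins = [], [], []
for red, blue, red_score, blue_score in matches:
    for alliance, own, opp in ((red, red_score, blue_score), (blue, blue_score, red_score)):
        row = np.zeros(len(teams))
        for t in alliance:
            row[idx[t]] = 1.0
        A.append(row)
        scores.append(own)         # OPR target: the alliance's own score
        margins.append(own - opp)  # CCWM target: the winning margin

A = np.array(A)
opr, *_ = np.linalg.lstsq(A, np.array(scores), rcond=None)
ccwm, *_ = np.linalg.lstsq(A, np.array(margins), rcond=None)

for t in teams:
    print(f"{t}: OPR {opr[idx[t]]:6.1f}   CCWM {ccwm[idx[t]]:6.1f}")
```

The separate Auto, Climb, and TeleOp estimates mentioned above come from the same solve with the corresponding component scores on the right-hand side.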
Totally agree with all of your other points except this one. Why do you need to "differentiate your picks" for the sake of differentiation? You need to find the best robots available to complete your alliance. If there were a single, universal system to do this, it wouldn't be any less valuable just because everyone else was using it.
I guess another way of thinking of it is to imagine this scene:
A: "How can we make our paper-based scouting better?"
B: "Let's use this OPR thing. It'll make us better than the opposition for sure!"
The problem in this scene is that nearly every other quality team is probably also augmenting their scouting with OPR. So in order to be scouting smarter than the opposition, you need to be thinking beyond the numbers you can pull off the internet.
Essentially, the rising tide of more-available analytical tools has lifted everyone's scouting boats: everyone is scouting better because of it, but a given team is probably in relatively the same position vis-à-vis its competitors as it was before the advent of easy-to-use OPR data. So you need to use it, but to assume that it is the ticket to better-than-your-opposition picking may be a bad assumption.
Another hobby of mine is triathlons, where you see something similar: a new, faster bike will come out, and you'll buy it and use it just to keep up with your competitors. The speedier bike or technology doesn't necessarily move you up the rankings, since everyone else is using it - moving up still requires hard work and getting better.
PayneTrain
14-04-2013, 12:30
How many thumbs do you have?
Shh, everyone knows that's a special Canadian trade secret.
Essentially, the rising tide of more-available analytical tools has lifted everyone's scouting boats: everyone is scouting better because of it, but a given team is probably in relatively the same position vis-à-vis its competitors as it was before the advent of easy-to-use OPR data. So you need to use it, but to assume that it is the ticket to better-than-your-opposition picking may be a bad assumption.
This. No matter how much we improve our automated analyses, humans still have to not only be in the loop, but control it. It might be interesting to find out whether building your pick list strictly on OPR is better than just picking the highest-ranked available robot. I'm pretty sure it is, but without some actual numerical analysis I won't say that I believe it. ;)
Somewhere, deep in the bowels of 1114's shop, there exists a room full of nothing but students training to be Karthik's hand doubles. They practice thumbs up and thumbs down for hours a day, for many years, until they are needed on an occasion such as this.
Preposterous! This is 1114; they're not wasting time practicing with humans to stand in for Karthik's thumbs. They're building something.
Another place that OPR fails is defense. There were several extremely well-scoring machines at MSC that had low OPRs because they spent half their matches playing defense. Unless you scouted and knew that, you'd assume they were just poor robots.
That's the kind of third-round sleeper pick that high-seeded teams dream of.
Tom, did you see the numbers on 2337? They had a respectable OPR of 44, but it ranked 33rd out of 64 teams, so they would not be considered the best offensive robot. Perhaps that is because they played a lot of defense. But did you see their CCWM number? It was 17.6, ranked 11th out of 64 teams. 2054 and 67 made a wise choice. I doubt they did it because of the CCWM number, but you cannot deny that the CCWM number supports their scouts' decision. I have always advocated using CCWM as part of the sanity check for the 2nd round pick. It does take defense into consideration.
Another place that OPR fails is defense. There were several extremely well-scoring machines at MSC that had low OPRs because they spent half their matches playing defense. Unless you scouted and knew that, you'd assume they were just poor robots.
OPR fails at defense this year because defense in general hasn't been very effective. The well-scoring robots that have low OPRs due to playing defense would have been better off playing offense in all their matches, because they are better at offense than defense. For playing defense to be a net gain, you have to stop more points than you would normally score.
Though the fact that teams play defense when they really shouldn't is why traditional scouting is still important.
OPR (specifically CCWM) is better than I expected at separating teams, and seems to be ~85% accurate based on a comparison to a few events of data.
That said, you have to remember that OPR is an approximation of skill that is better than average score but no replacement for a real scouting system. It's a decent "sanity check" to see if you missed an outlier or two but if you have actual data I wouldn't even touch it.
During and after the season, it's a very good approximation of skill that is very useful for Internet debates. :P
I don't know why there is this constant argument about the shortcomings of OPR. I don't think anybody, including myself, advocates using only OPR/CCWM to scout, decide on match strategy, or select alliances.
To me, OPR/CCWM is very useful if you were not at the event, because it gives you a general idea of what each robot is good at based on Auto OPR, Climb OPR and Tele OPR. Like Chris said, if you have actual data, why do you need OPR? I don't remember who it was, but I was once asked, if I had the match data of each robot for every match, would I be able to create a better model to increase the prediction ability of OPR? I thought it was a trick question.
But we need to keep in mind that some teams are very small. One person cannot watch 6 robots at the same time. He/she can try to take some notes, but it is very difficult to rank teams based on subjective measures from select matches. In those cases, I think it is better to use OPR/CCWM as a guide rather than selecting the next highest-seeded team.
Take a look at what teams actually select, based on human scouts (I assume), at events: compare the first-round picks with OPR and the second-round picks with a combination of OPR/CCWM. It is amazing how good the correlation is, and it explains why some teams that seeded high were not selected. There are always exceptions, because a team may be looking for a very specific attribute in a supporting robot. But in some cases I have to scratch my head at what looks like a poor choice, and very often the data is confirmed when they end up as quarterfinalists.
I don't know why there is this constant argument about the shortcomings of OPR. I don't think anybody, including myself, advocates using only OPR/CCWM to scout, decide on match strategy, or select alliances.
Quoted for truth.
I was once asked, if I had the match data of each robot for every match, would I be able to create a better model to increase the prediction ability of OPR? I thought it was a trick question.
Given that data, you could do better than just reporting each team's high/low/mean/median scores, etc., and using that for prediction.
You could take into account which teams they were playing with, and against, for each of their alliance scores, and so on, recursively for those teams. Computer rankings of football teams do something similar.
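As a rough illustration of the football-ranking idea (my own toy sketch; all the margins and schedules in it are invented): start from each team's raw average margin, then repeatedly adjust it by the average rating of the opposition it faced.

```python
# Toy "Simple Rating System" in the spirit of computer football rankings:
# rating = own average margin + average rating of opponents faced,
# solved by fixed-point iteration. All numbers below are invented.
avg_margin = {1114: 20.0, 2056: 15.0, 254: 10.0, 987: -5.0, 4039: -18.0, 1538: -22.0}
opponents = {
    1114: [254, 987, 1538], 2056: [987, 4039, 1538], 254: [1114, 4039, 987],
    987: [1114, 2056, 254], 4039: [2056, 254, 1538], 1538: [1114, 2056, 4039],
}

ratings = dict(avg_margin)  # start from the raw margins
for _ in range(100):
    ratings = {t: avg_margin[t] + sum(ratings[o] for o in opponents[t]) / len(opponents[t])
               for t in ratings}
    mean = sum(ratings.values()) / len(ratings)          # re-center each pass so
    ratings = {t: r - mean for t, r in ratings.items()}  # the ratings don't drift

for t, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(t, round(r, 1))
```

A full FRC version would also adjust for partners, since alliance scores are shared three ways, which is essentially what the least-squares OPR/CCWM solve already does.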
Andrew Schreiber
14-04-2013, 13:17
I don't know why there is this constant argument of the shortcomings of OPR. I don't think anybody including myself advocate only using OPR/CCWM as a way to scout, decide on match strategy or alliance selection.
It's occasionally beneficial to remind people that OPR/CCWM/the metric du jour are not magic bullets. You more than likely know it, but freshmandood519 (which I'm hoping is NOT an actual user name on here) probably doesn't.
Joe Ross
14-04-2013, 13:59
freshmandood519 (which I'm hoping is NOT an actual user name on here)
That one's safe, but stupidfreshman is taken.
Grim Tuesday
14-04-2013, 14:23
I don't know why there is this constant argument about the shortcomings of OPR. I don't think anybody, including myself, advocates using only OPR/CCWM to scout, decide on match strategy, or select alliances.
To me, OPR/CCWM is very useful if you were not at the event, because it gives you a general idea of what each robot is good at based on Auto OPR, Climb OPR and Tele OPR. Like Chris said, if you have actual data, why do you need OPR? I don't remember who it was, but I was once asked, if I had the match data of each robot for every match, would I be able to create a better model to increase the prediction ability of OPR? I thought it was a trick question.
But we need to keep in mind that some teams are very small. One person cannot watch 6 robots at the same time. He/she can try to take some notes, but it is very difficult to rank teams based on subjective measures from select matches. In those cases, I think it is better to use OPR/CCWM as a guide rather than selecting the next highest-seeded team.
Take a look at what teams actually select, based on human scouts (I assume), at events: compare the first-round picks with OPR and the second-round picks with a combination of OPR/CCWM. It is amazing how good the correlation is, and it explains why some teams that seeded high were not selected. There are always exceptions, because a team may be looking for a very specific attribute in a supporting robot. But in some cases I have to scratch my head at what looks like a poor choice, and very often the data is confirmed when they end up as quarterfinalists.
I think one of the best cases of this OPR/human-scout difference was actually at the Buckeye Regional, when we picked you guys. In my opinion, the biggest issue with OPR (and I was talking about this in the week 6 thread) is that it doesn't take improvement into consideration; it's an average. So if a team has shooter troubles on Friday (like 2834) and then gets their game on Saturday morning, their OPR will not be significantly improved. But human scouts can see that and snag a second-round pick that scores 60+ in a match.
On the other hand, at Buckeye, our first pick, 2252, was ranked 3rd in OPR but in the 20s seeding-wise, and they were clearly one of the three best scorers at the event. There often isn't the correlation between rank and OPR that I'd like to see, so OPR does a very good job of quickly highlighting which teams to watch carefully and whose schedules to check to see if they were particularly hard.
I think one of the best cases of this OPR/human-scout difference was actually at the Buckeye Regional, when we picked you guys. In my opinion, the biggest issue with OPR (and I was talking about this in the week 6 thread) is that it doesn't take improvement into consideration; it's an average. So if a team has shooter troubles on Friday (like 2834) and then gets their game on Saturday morning, their OPR will not be significantly improved. But human scouts can see that and snag a second-round pick that scores 60+ in a match.
On the other hand, at Buckeye, our first pick, 2252, was ranked 3rd in OPR but in the 20s seeding-wise, and they were clearly one of the three best scorers at the event. There often isn't the correlation between rank and OPR that I'd like to see, so OPR does a very good job of quickly highlighting which teams to watch carefully and whose schedules to check to see if they were particularly hard.
I am very glad you did not use OPR/CCWM for your second pick at Buckeye. OPR/CCWM does not take the order of matches into consideration, so you do not know if a team is improving or if the robot is starting to have problems. I use trendlines in Excel to fit a straight line through the data points of each team's previous match scores from match scouting. Then I look at the OPR numbers and the trendline predictions and decide what I think each team will likely score in the next match. Using this to look for teams in alliance selection is better than just using OPR/CCWM, and you will not miss a team that showed big improvement on Saturday, like at Buckeye. :)
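For anyone who wants the same trendline outside of Excel, here is a quick NumPy equivalent (the match scores below are invented for illustration):

```python
import numpy as np

# Invented per-match scores for one team, in match order, as a match scout
# would record them. A straight-line fit picks up Friday-to-Saturday
# improvement that a plain average (or OPR) smears out.
match_scores = [18, 22, 20, 31, 35, 42, 40, 55]

x = np.arange(1, len(match_scores) + 1)
slope, intercept = np.polyfit(x, match_scores, 1)  # degree-1 trendline

next_match = len(match_scores) + 1
prediction = slope * next_match + intercept
print(f"trend: {slope:+.1f} pts/match, predicted next score: {prediction:.1f}")
```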
I think OPR is a success, but if there were a way to capture a team's continuous improvement as the season progresses, say through elims and other events, that could give a more accurate preview of a team's performance.
I know that if you made each team's next event substantially more important than the first, the single- and double-regional teams would get shortchanged compared to the three-regional teams.
The point, however, is that each team would be accurately compared in offensive power rating to other teams, consistency included. In that case, the more accurate average for a team should reflect more of their later regional(s) than their first.
OPR is here to stay, but improvement should never be ignored.
Scott_4140
17-04-2013, 16:20
OPR/CCWM look like they would be a fantastic asset to FTC teams at the World Championships.
FIRST doesn't currently maintain a data set of matches for FTC the way they do for FRC. There are a few data sets available for FTC Championship tournaments, but not all. Generating a usable World ranking is probably not practical at this late date. Maybe next season.
It does seem that it would be possible to adapt this spreadsheet to FTC for use at just the FTC World Championships. Division lists are out for FTC.
I tried creating a new tab and entering a set of schedule & match data that I found posted from Alaska; I figured it would be a good test set. I'm not sure if I got the data entered in the proper format. CTRL-SHIFT-R and CTRL-SHIFT-O both produce a "subscript out of range" error. Unfortunately my Visual Basic is a bit rusty. It probably requires more than just a new tab; the macros probably need to be tweaked for 2-team alliances as well.
What would it take to create an FTC version?
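For whoever takes a crack at it: the math side looks like a small change. In the least-squares sketch earlier in the thread, an FTC alliance row would just carry two 1s instead of three (all match data below is invented):

```python
import numpy as np

# Hypothetical FTC variant of the earlier OPR sketch: the solve is identical,
# but each alliance has two teams, so each design-matrix row has two 1s.
ftc_matches = [
    # (red teams, blue teams, red score, blue score) - invented data
    ((4211, 5037), (3864, 4997), 150, 95),
    ((3864, 4211), (5037, 4997), 88, 132),
    ((4997, 4211), (3864, 5037), 110, 76),
]

teams = sorted({t for red, blue, _, _ in ftc_matches for t in red + blue})
idx = {t: i for i, t in enumerate(teams)}

A, b = [], []
for red, blue, red_score, blue_score in ftc_matches:
    for alliance, score in ((red, red_score), (blue, blue_score)):
        row = np.zeros(len(teams))
        for t in alliance:
            row[idx[t]] = 1.0
        A.append(row)
        b.append(score)

opr, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print({t: round(float(opr[idx[t]]), 1) for t in teams})
```

The VBA errors are harder to diagnose from here; "subscript out of range" would be consistent with the macros assuming a fixed sheet layout or three teams per alliance, but that's a guess without seeing the code.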