OPR after Week Seven Events
The OPR/CCWM numbers up through the Week 7 events have been posted; please see
http://www.chiefdelphi.com/media/papers/2174. If you find any errors or have any questions, please let me know. I will post this again after the division data for the World Championship is announced.
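For anyone wondering how these numbers are produced: OPR and CCWM come from a least-squares fit that models each alliance's score (or winning margin, for CCWM) as the sum of its members' contributions. Below is a minimal sketch of that calculation; the match-list format and team numbers are made up for illustration, and the spreadsheet linked above may organize its data differently.
Code:
import numpy as np

# Hypothetical match data: (red alliance, blue alliance, red score, blue score)
matches = [
    (("254", "971", "649"), ("1114", "2056", "3683"), 120, 131),
    (("254", "1114", "3683"), ("971", "2056", "649"), 98, 105),
    # ... one entry per qualification match
]

teams = sorted({t for red, blue, _, _ in matches for t in red + blue})
idx = {t: i for i, t in enumerate(teams)}

rows, scores, margins = [], [], []
for red, blue, red_score, blue_score in matches:
    for alliance, own, opp in ((red, red_score, blue_score),
                               (blue, blue_score, red_score)):
        row = [0.0] * len(teams)
        for t in alliance:
            row[idx[t]] = 1.0              # 1 for each alliance member
        rows.append(row)
        scores.append(float(own))          # OPR target: alliance score
        margins.append(float(own - opp))   # CCWM target: winning margin

A = np.array(rows)
opr, *_ = np.linalg.lstsq(A, np.array(scores), rcond=None)
ccwm, *_ = np.linalg.lstsq(A, np.array(margins), rcond=None)

for t in teams:
    print(f"{t}: OPR {opr[idx[t]]:6.1f}   CCWM {ccwm[idx[t]]:6.1f}")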
Re: OPR after Week Seven Events
So after 7 weeks, what is the final verdict on OPR in 2013? I feel like it's a pretty good measurement as scoring is mostly linear.
Re: OPR after Week Seven Events
I'm not sure exactly how it's calculated, but it seems like a decent way to scout as long as it's followed up by actual scouting of each team. This is going to sound like I'm whining, but it's true: except for one match, our team scored above 50% (in some cases above 65%) of our alliance's total points during the competition, including elims. We averaged 40 points per match, and that includes a match where we jammed and only scored 25; take that match out and I believe we averaged over 44 points per match. Yet we only won two matches the entire competition. So it's a good measurement, but some teams are going to be missed.
Generally, however, I do think it's an invaluable tool for smaller teams that don't have the resources to do a lot of scouting themselves. For the most part, robots can be measured by these metrics with reasonable accuracy.
Re: OPR after Week Seven Events
Quote:
- How valuable is a scouting system that everyone else is using? How do you differentiate your picks from other teams' picks if you're all picking from the same OPR/CCWM-derived list?
- OPR doesn't take into account a robot's type or how it interacts with other robots. You can get a high OPR in qualifying simply by supplying discs to a floor loader that then scores them for you, but an alliance of disc-suppliers would score very poorly in eliminations.
- OPR doesn't compare well at all across events. Waterloo had a ridiculous average OPR (even excluding 1114/2056), but that was probably because nearly every robot there was offensive and very little defense was played. If you added 10 defensive robots to the pool, everyone else's OPR would have fallen drastically. You can do a "global OPR", but I doubt it corrects for that much.
OPR seemed to do very well this year at reflecting a robot's effectiveness within a regional, but there are broader issues with it than just that. IIRC, Karthik's main complaint about OPR was that people were using it for scouting and comparing robots without really understanding it or its shortcomings, like those listed above.
Re: OPR after Week Seven Events
^It was just a joke, bro. If a team bases its scouting on a single statistical value (one they may not even fully understand), that's their own problem.
I would say OPR is really good at figuring out what level a robot can play at, but without the context of knowing the "class" of the robot, the number is meaningless. Teams 11 and 245 are some of the better-known "cyclers" that may have a high OPR, but would a cycler pick another cycler over a floor loader just because the cycler has a higher OPR? I guess they always could, but that probably means scouting wants to take the rest of the day off and not worry about scouting any elims. But maybe the cycler also played some impeccable defense and was very agile in transition, so they could easily switch between offense and defense... and the rabbit hole gets deeper.
Re: OPR after Week Seven Events
OPR (specifically CCWM) is better than I expected at separating teams, and seems to be ~85% accurate based on a comparison against a few events' worth of data.
That said, you have to remember that OPR is an approximation of skill: better than average score, but no replacement for a real scouting system. It's a decent "sanity check" to see if you missed an outlier or two, but if you have actual scouting data I wouldn't even touch it. During and after the season, it's a very good approximation of skill that is very useful for Internet debates. :P
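On the "~85% accurate" point, one way to arrive at a number like that is to predict each match winner from the sum of alliance OPRs and count how often the prediction matches the real outcome. A rough sketch, assuming a hypothetical `opr` dict mapping team number to rating and matches in the (red, blue, red_score, blue_score) form sketched earlier:
Code:
def winner_prediction_accuracy(matches, opr):
    """Fraction of non-tied matches where the higher summed-OPR alliance actually won."""
    hits = total = 0
    for red, blue, red_score, blue_score in matches:
        if red_score == blue_score:
            continue  # skip ties
        predicted_red_win = sum(opr[t] for t in red) > sum(opr[t] for t in blue)
        actual_red_win = red_score > blue_score
        hits += predicted_red_win == actual_red_win
        total += 1
    return hits / total if total else float("nan")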
Re: OPR after Week Seven Events
OPR is an excellent indicator this year.
What OPR does not clarify, however, is autonomous and climbing. Autonomous is indefensible. I would much, much rather have a robot with an OPR of 55 because it scores a 7-disc autonomous every single time and then climbs quickly at the end, even if it has a zero teleop score, because that's the perfect robot to have play defense. OPR will never be an end-all measurement, and you really have to understand the mechanics of the game.
Another place OPR fails is defense. There were several extremely well-scoring machines at MSC that had low OPRs because they spent half their matches playing defense. Unless you scouted and knew that, you'd assume they were just poor robots. That's the kind of third-round sleeper pick that high-seeded teams dream of.
Re: OPR after Week Seven Events
Quote:
OPR in 2013 as a primary or solitary scouting tool = Thumbs Down
OPR in 2013 as a tool to complement effective match scouting = Thumbs Way Up
OPR in 2013 as a stat that is blindly quoted by those without a fundamental understanding of what it means = Thumbs Way Down
For a more detailed analysis, come check out my seminar in St. Louis.
Re: OPR after Week Seven Events
Quote:
Though at some point, if a redundant robot is a certain amount better than a complementary robot, you definitely have to consider the redundant robot over the complementary one. How much of a difference must there be to do that? That is a great subject for a team meeting on Friday night.
Re: OPR after Week Seven Events
Quote:
A: "How can we make our paper-based scouting better?" B: "Let's use this OPR thing. It'll make us better than the opposition for sure!" The problem in this scene is that nearly every other quality team is probably also augmenting their scouting with OPR. So in order to be scouting smarter than the opposition, you need to be thinking further than the numbers you can pull off the internet. Essentially, the rising tide of more-available analytical tools has lifted everyone's scouting boats: everyone is scouting better because of it, but a given team is probably relatively in the same position vis a vis its competitors as it was before the advent of easy-to-use OPR data. So you need to use it, but to assume that it is the ticket to better-than-your-opposition picking may be a bad assumption. Another hobby of mine is triathlons, where you see something similar: a new, faster bike will come out, and you'll buy it and use it just to keep up with your competitors. The speedier bike or technology doesn't necessarily move you up the rankings, since everyone else is using it - moving up still requires hard work and getting better. |
Re: OPR after Week Seven Events
Quote:
Though the fact that teams play defense when they really shouldn't is why traditional scouting is still important.
Re: OPR after Week Seven Events
Quote:
To me, OPR/CCWM is very useful if you were not at the event, so you have a general idea of what each robot is good at based on Auto OPR, Climb OPR, and Tele OPR. Like Chris said, if you have actual data, why do you need OPR? I don't remember who it was, but I was once asked whether, if I had the match data of each robot for every match, I would be able to create a better model and improve OPR's predictive ability. I thought it was a trick question.
But we need to keep in mind that some teams are very small. One person cannot watch six robots at the same time. He or she can try to take some notes, but it is very difficult to rank teams based on subjective measures from selected matches. In those cases, I think it is better to use OPR/CCWM as a guide rather than simply selecting the next-highest-seeded team.
Take a look at what teams actually select based on human scouts (I assume) at events, and compare the first-round picks with OPR and the second-round picks with a combination of OPR/CCWM; it is amazing how good the correlation is, and it explains why some teams that seeded high were not selected. There are always exceptions, because a team may be looking for a very specific attribute in a supporting robot. But in some cases where I had to scratch my head at what looked like a poor choice, very often the data was confirmed when that alliance ended up as a quarterfinalist.
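On the Auto/Climb/Tele OPR breakdown mentioned above: the alliance matrix only has to be built once, and each component is just a different right-hand side in the same least-squares solve. A sketch, assuming the per-alliance component scores are available in the same row order as the matrix (FIRST's published score breakdowns would have to be mapped into that form):
Code:
import numpy as np

def component_oprs(A, components):
    """A: alliance matrix (one row per alliance per match, 1.0 for each member).
    components: dict mapping a name ("auto", "climb", "tele", ...) to the list
    of that score component for each alliance row. Returns per-team ratings."""
    return {name: np.linalg.lstsq(A, np.asarray(values, dtype=float), rcond=None)[0]
            for name, values in components.items()}

# e.g. component_oprs(A, {"auto": auto_pts, "climb": climb_pts, "tele": tele_pts})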
Re: OPR after Week Seven Events
Quote:
You could take into account which teams they were playing with, and against, for each of their alliance scores, and so on, recursively for those teams. Computer rankings of football teams do something similar.
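To make the "recursive" idea concrete: one simple version re-estimates each team as the average, over its matches, of (alliance score minus its partners' current estimates), and repeats. That happens to be a Jacobi-style pass over the same equations OPR solves, so if it settles it lands on the usual least-squares OPR; swapping the alliance score for the winning margin pulls opponents in, which is essentially CCWM. A sketch, using the same hypothetical (red, blue, red_score, blue_score) match format as above:
Code:
from collections import defaultdict

def iterative_opr(matches, passes=50):
    # team -> list of (partners, alliance score) for every match it played
    appearances = defaultdict(list)
    for red, blue, red_score, blue_score in matches:
        for alliance, score in ((red, red_score), (blue, blue_score)):
            for t in alliance:
                partners = [p for p in alliance if p != t]
                appearances[t].append((partners, float(score)))

    # crude starting guess: an equal share of the team's average alliance score
    rating = {t: sum(s for _, s in apps) / len(apps) / 3.0
              for t, apps in appearances.items()}
    for _ in range(passes):
        # each pass uses the previous pass's estimates for the partners
        rating = {t: sum(score - sum(rating[p] for p in partners)
                         for partners, score in apps) / len(apps)
                  for t, apps in appearances.items()}
    return rating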
Re: OPR after Week Seven Events
Quote:
On the other hand, at Buckeye our first pick, 2252, was ranked 3rd in OPR but in the 20s seeding-wise, and they were clearly one of the three best scorers at the event. There often isn't the kind of correlation between rank and OPR that I'd like to see, and OPR does a very good job of quickly highlighting which teams to watch carefully and whose schedules to check to see if they drew a particularly hard one.
Re: OPR after Week Seven Events
I think OPR is a success, but if there is a way to capture a team's continuous improvement as the season progresses, say through elims and additional events, that could give a more accurate preview of a team's performance.
I know that if you make each team's later events substantially more important than the first, the single- and double-regional teams would get nicked compared to the three-regional teams. The point, however, is that each team is accurately compared in offensive power rating to other teams, consistency included. In that case, a more accurate average for a team should reflect more of their later regional(s) than their first. OPR is here to stay, but the push to improve it should never stop.
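If someone wanted to experiment with weighting later events more heavily, the standard least-squares OPR fit extends naturally to a weighted fit: scale each alliance row and its score by the square root of a weight that grows with the event week. A sketch; the rows/scores/weeks inputs and the 1.25-per-week growth factor are made up for illustration:
Code:
import numpy as np

def recency_weighted_opr(rows, scores, weeks, growth=1.25):
    """rows: alliance matrix rows; scores: alliance scores; weeks: event week
    for each row. Later weeks get larger weights, so they pull the fit harder."""
    A = np.asarray(rows, dtype=float)
    b = np.asarray(scores, dtype=float)
    w = np.sqrt(np.power(growth, np.asarray(weeks, dtype=float)))
    x, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return x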
Re: OPR after Week Seven Events
OPR/CCWM look like they would be a fantastic asset to FTC teams at the World Championships.
FIRST doesn't currently maintain a data set of matches for FTC the way they do for FRC. There are a few data sets available for FTC Championship tournaments, but not all. Generating a usable world ranking is probably not practical at this late date; maybe next season. It does seem that it would be possible to adapt this spreadsheet to FTC for use at just the FTC World Championships. Division lists are out for FTC.
I tried creating a new tab and entering a set of schedule and match data that I found posted from Alaska; I figured it would be a good test set. I'm not sure if I got the data entered in the proper format. CTRL-SHIFT-R and CTRL-SHIFT-O both produce an error for a subscript being out of range. Unfortunately, my Visual Basic is a bit rusty. It probably requires more than just a new tab; the macros probably need to be tweaked for 2-team alliances as well.
What would it take to create an FTC version?
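For what it's worth, the underlying math doesn't care about alliance size: each alliance row just gets two 1s instead of three, and the same least-squares solve applies. A sketch using the hypothetical match-list format from earlier in the thread, with made-up FTC team numbers; this says nothing about what the spreadsheet's VBA macros expect, which would still need the tweaks described above.
Code:
# Two-team FTC alliances: (red alliance, blue alliance, red score, blue score).
ftc_matches = [
    (("3717", "4042"), ("5029", "5037"), 63, 48),
    (("3717", "5029"), ("4042", "5037"), 55, 71),
    # ... one entry per qualification match, two teams per alliance
]
# Feeding these into the same row-building / lstsq code sketched near the top
# of the thread produces per-team OPR and CCWM for the two-team format.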