Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   General Forum (http://www.chiefdelphi.com/forums/forumdisplay.php?f=16)
-   -   looking at OPR across events (http://www.chiefdelphi.com/forums/showthread.php?t=127679)

Michael Hill 10-03-2014 20:05

Re: looking at OPR across events
 
Quote:

Originally Posted by David8696 (Post 1356875)
My mistake. But I feel like the message holds true—OPR has predicted with fairly good accuracy the success of a team.

I already said it will work for the top teams, but for the midrange, where OPR would actually be useful, it falls apart. So I ask... what is OPR actually USEFUL for?

sailer99 10-03-2014 20:11

Re: looking at OPR across events
 
Quote:

Originally Posted by David8696 (Post 1356873)
In defense of OPR this year, I'd like to look at the data—that is, how accurately OPR has predicted teams' performances in the competitions we've seen thus far. Here are the top 10 teams as ranked by OPR, accompanied by their finishes in the competition they competed in.

1. 1114—Semifinalist at Greater Toronto East regional (lost in semifinals partly due to technical foul by alliance partner)
2. 2485—Finalist at San Diego regional (also lost finals partly due to tech foul)
3. 3683—Semifinalist at Greater Toronto East (alliance partner with 1114)
4. 33—Winner of Southfield district event
5. 987—Finalist at San Diego (alliance partner with 2485)
6. 624—Winner of Alamo regional
7. 16—Winner of Arkansas regional
8. 254—Winner of Central Valley regional
9. 3147—Finalist at Crossroads regional
10. 3393—Finalist at Auburn Mountain district

In fact, the top OPR team that didn't make at least finals in their respective regional/district event is 3494 at 12th, followed by 3476 in 13th (whose pickup broke during semifinals at San Diego). Overall, only seven of the top 30 teams failed to make the finals, and only two failed to make the semifinals of at least one event. As a robotics and statistics nerd, I think these numbers speak for themselves: OPR may not be a perfect metric, but it seems pretty dang accurate at predicting whether a team will finish well.

EDIT: 1114 and 3683 lost in the semifinals. Thanks to Kevin Sheridan for pointing that out.

Please watch the matches before judging that 1114 and 3683 lost due to a technical foul. One of the problems with OPR this year is that it won't accurately predict how alliance partners mesh. I agree it is a good stat for getting a general idea of how teams are going to do, but when it comes around to eliminations OPR is no longer very valid. I would be interested in how DPR does as a predictor this year, since defense is being played heavily. I don't know much about DPR, but I do know it uses your opponents' score rather than your own.
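
For reference, OPR, DPR, and CCWM all come out of the same least-squares fit; only the target column changes (your alliance's score for OPR, the opposing alliance's score for DPR, and CCWM is their difference). Here is a minimal sketch of the computation, assuming you have per-match alliance scores; the team numbers and scores below are made up for illustration.

Code:

import numpy as np

# Hypothetical match data: (red teams, blue teams, red score, blue score).
matches = [
    (["1114", "3683", "4039"], ["2056", "610", "1241"], 145, 152),
    (["2056", "1114", "771"], ["610", "3683", "1241"], 130, 118),
    # ... one entry per qualification match
]

teams = sorted({t for red, blue, _, _ in matches for t in red + blue})
idx = {t: i for i, t in enumerate(teams)}

# One equation per alliance appearance: the alliance's score is modeled
# as the sum of its members' individual contributions.
rows, own, opp = [], [], []
for red, blue, red_score, blue_score in matches:
    for alliance, score, opp_score in ((red, red_score, blue_score),
                                       (blue, blue_score, red_score)):
        row = np.zeros(len(teams))
        for t in alliance:
            row[idx[t]] = 1.0
        rows.append(row)
        own.append(score)      # target column for OPR
        opp.append(opp_score)  # target column for DPR

A = np.array(rows)
opr = np.linalg.lstsq(A, np.array(own, dtype=float), rcond=None)[0]
dpr = np.linalg.lstsq(A, np.array(opp, dtype=float), rcond=None)[0]
ccwm = opr - dpr  # calculated contribution to winning margin

for t in teams:
    print(t, round(opr[idx[t]], 1), round(dpr[idx[t]], 1), round(ccwm[idx[t]], 1))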

efoote868 10-03-2014 20:15

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356886)
but for the midrange, where OPR would actually be useful, it falls apart.

What is your proof / how else would you differentiate middle tier teams / Do you have a concrete example of ranking teams by abilities that OPR fails to mirror?

My point is, everyone in the know will quickly acknowledge OPR or CCWM's shortcomings, but until there is a widely available, game independent, repeatable formula to rank all teams in all competitions relatively quickly, you'll find that OPR will remain the gold standard in scouting (especially for the championship event, where it is very difficult to watch / find video for every team attending).

Michael Hill 10-03-2014 20:16

Re: looking at OPR across events
 
I mean, yes, it does produce some interesting results, but if you go into a match looking at OPRs and realize the alliance you're going against has a really low rating, does that mean you're not going to try as hard? No! That's when you get whooped. So OPR shouldn't affect your strategy. We've also seen that you shouldn't even rely on OPR for your pick list. So I ask again, what are you actually getting out of OPR? What advantage do you have over others by looking at OPR vs just the qualification rankings?

Karthik 10-03-2014 20:22

Re: looking at OPR across events
 
Quote:

Originally Posted by Bryce2471 (Post 1356837)
Karthik,
Again, I completely agree. But I'm just wondering: in general, could a team get a high OPR more easily by going to a more competitive event?

In short, absolutely. Hence the danger of cross event comparison of OPR.

Karthik 10-03-2014 20:28

Re: looking at OPR across events
 
Quote:

Originally Posted by Jared Russell (Post 1356855)
The first statement is of course true, but the coupling of catching, assisting, and defense with other robots' actions means that very capable facilitators may not frequently and consistently play on high scoring alliances, regardless of the number of matches. (How many triple assist cycles have you seen remain uncompleted because the scorer kept missing and ran out of time?) It all depends on the distribution of robot capabilities at the event. A given team's event OPR/CCWM is conditioned based on this distribution, a result of OPR's assumed linear scoring model being fit to the game's non-linear and dependent scoring functions. (Hence, OPRs between events are not really comparable unless the distribution of robot capabilities is similar).

Agreed. Simply put, a team's OPR is partially a function of the teams it plays with; if the subset of teams the team is drawing partners from (i.e. the event) is limited in functionality, any team's OPR will be bottlenecked. However, the relative rankings within the event (e.g. number of deviations from the mean) should still be a sufficient metric for those unable to watch the event.
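
(One way to make the "deviations from the mean" idea concrete is to standardize each event's OPR table before comparing across events. A minimal sketch, with hypothetical team numbers and values:)

Code:

import numpy as np

def event_z_scores(opr_by_team):
    """Express each team's OPR in standard deviations from its
    event's mean, so relative standing is comparable across events."""
    values = np.array(list(opr_by_team.values()), dtype=float)
    mean, std = values.mean(), values.std()
    return {team: (opr - mean) / std for team, opr in opr_by_team.items()}

# Hypothetical OPR tables from two different events:
gtre = {"1114": 95.0, "3683": 72.0, "4039": 41.0, "771": 35.0}
sdr = {"2485": 88.0, "987": 80.0, "3476": 52.0, "4161": 30.0}

print(event_z_scores(gtre))
print(event_z_scores(sdr))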

efoote868 10-03-2014 20:31

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356895)
We've also seen that you shouldn't even rely on OPR for your pick list. So I ask again, what are you actually getting out of OPR? What advantage do you have over others by looking at OPR vs just the qualification rankings?

In short, filtering. I can use OPR to rank teams, and then I can scrutinize the teams on the threshold of the top tier to save myself effort from having to look at every team.

This is significant for individuals or small teams that lack man hours to devote to scouting.
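
As a sketch of that filtering workflow (team numbers and OPRs below are invented): rank by OPR, trust the obvious top tier, and spend your limited scouting time only on the teams straddling the cut line.

Code:

# Hypothetical event OPRs; a real list would come from your own data.
opr = {"254": 98.5, "1114": 95.2, "3476": 61.0, "3494": 58.7,
       "4039": 40.3, "771": 28.1, "1503": 25.0, "610": 24.2}

ranked = sorted(opr, key=opr.get, reverse=True)

clear_picks = ranked[:3]      # obvious top tier, little scouting needed
threshold_band = ranked[3:6]  # teams near the cut: watch these by hand
print("scout closely:", threshold_band)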

Michael Hill 10-03-2014 20:38

Re: looking at OPR across events
 
Quote:

Originally Posted by efoote868 (Post 1356909)
In short, filtering. I can use OPR to rank teams, and then I can scrutinize the teams on the threshold of the top tier to save myself effort from having to look at every team.

This is significant for individuals or small teams that lack man hours to devote to scouting.

Unfortunately, when you do that, you'll be setting yourself up for failure. You want to choose the robots that will best pair with your particular set of skills, and those robots could very well be misrepresented by OPR. As mentioned previously, look at 254's selections at CVR: they chose robots that are very compatible with their style of play even though those robots were near the very bottom in OPR.

ErvinI 10-03-2014 20:39

Re: looking at OPR across events
 
My opinion on this:

-   CCWM within a regional: theoretically excellent. With the amount of opportunistic defense this year, along with the frequency of fouls, this stat is stronger than OPR at measuring your strength, since CCWM takes into account both offense and defense. However, looking at the GTRE rankings at a glance, CCWM seems to be all over the board, most likely due to the somewhat random nature of fouls.
-   Adj. OPR at a regional: theoretically excellent. There are some very large variations in how many foul points a team gains; in fact, I am getting a standard deviation of 11.7 at GTRE. The issue is that some teams are more natural foul magnets than others due to their reputation (1114 has the highest foul OPR at GTRE).
-   OPR comparing robots at one event: decent. It's like Adj. OPR with even more error.
-   OPR/CCWM across regionals: horrible. Too many regionals have alliances that barely function, while others have too many alliances that function only defensively.

Has anyone done an Adjusted Contribution to Winning Margin analysis? It would be calculated as ACCWM = AOPR - DPR, if I recall the CCWM formula correctly. It should be able to take into account any fouls you incur while eliminating opposing-team fouls.

Before anyone says it, I also agree that this stat will have to be taken with a grain of salt, due to alliance synergies, luck, and whatnot.
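
(For anyone who wants to try it: ACCWM drops out of the same least-squares machinery sketched earlier in the thread, with foul points subtracted from each alliance's score before fitting. A toy example with invented numbers and two-team alliances for brevity:)

Code:

import numpy as np

def rating(A, targets):
    """Least-squares fit of per-team contributions to alliance totals."""
    return np.linalg.lstsq(A, np.array(targets, dtype=float), rcond=None)[0]

# Membership matrix: each row marks the teams on one alliance appearance.
A = np.array([
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
], dtype=float)

scores = [120, 95, 110, 100]      # each alliance's own score
foul_points = [20, 0, 10, 0]      # foul points awarded TO that alliance
opp_scores = [95, 120, 100, 110]  # the opposing alliance's score

aopr = rating(A, [s - f for s, f in zip(scores, foul_points)])  # fouls removed
dpr = rating(A, opp_scores)
accwm = aopr - dpr  # the proposed ACCWM = AOPR - DPR
print(accwm)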

Ether 10-03-2014 21:02

Re: looking at OPR across events
 
Quote:

Originally Posted by ErvinI (Post 1356917)
Has anyone done an Adjusted Contribution to Winning Margin analysis?

Yes, but it's with Twitter data, and across all events in weeks 1 & 2

http://www.chiefdelphi.com/forums/sh...70&postcount=7



efoote868 10-03-2014 21:07

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356915)
Unfortunately, when you do that, you'll be setting yourself up for failure. You want to choose the robots that will best pair with your particular set of skills, and those robots could very well be misrepresented by OPR. As mentioned previously, look at 254's selections at CVR: they chose robots that are very compatible with their style of play even though those robots were near the very bottom in OPR.

No one is arguing against that strategy, but not every team has the 'Poofs' resources to implement it.

If you recognize that OPR is a tool, just like a hammer is a tool, it isn't hard to see that it has practical uses with definite limitations. I wouldn't use it strictly to pick an alliance, and neither would pretty much anyone who understands OPR; but when you need to quickly fasten two pieces of wood together, it's hard to beat a hammer and a nail.

David8696 10-03-2014 23:26

Re: looking at OPR across events
 
Quote:

Originally Posted by sailer99 (Post 1356888)
Please watch the matches before judging that 1114 and 3683 lost due to a technical foul.

Thus "partly due to." There can be no debate that the tech foul contributed to the loss—whether it caused it is much less clear-cut; I don't necessarily think so.

Also, I never thought one should base alliance choices or opinions of teams solely on OPR. I just think it can be a useful metric.

David8696 10-03-2014 23:27

Re: looking at OPR across events
 
Quote:

Originally Posted by efoote868 (Post 1356938)
No one is arguing against that strategy, but not every team has the 'Poofs' resources to implement it.

If you recognize that OPR is a tool, just like a hammer is a tool, it isn't hard to see that it has practical uses with definite limitations. I wouldn't use it strictly to pick an alliance, and neither would pretty much anyone who understands OPR; but when you need to quickly fasten two pieces of wood together, it's hard to beat a hammer and a nail.

Very well put. If you're looking to drive in a nail, use the hammer; but if you're trying to saw a board in half, you'll have a lot better luck with a saw.

stuart2054 11-03-2014 00:15

Re: looking at OPR across events
 
I think the "tool concept" is correct. OPR or CCWM are a piece the scouting puzzle. It gives you an "idea" of which teams that you may not be know well to scout in particular to see if their skills compliment yours. I think that finding the right compliments to your teams skill set and strategy is always the key to winning in eliminations. You can't get that from a statistic but it helps focus your scouting, it is not a substitute for it. At the end of the day it is about synergy and co-operation. You cannot get that from a statistic but the statistic can help you manage your scouting effort. if your scouts see something that goes against the statistic, go with it. follow up on it and see if more team members agree. Performance can change radically between competitions due to learning and correction of previous errors. A change in strategy alone can make a big difference. At the end of the day the teams that can work seamlessly together (and have a good skill set) will usually win.

Ed Law 11-03-2014 01:16

Re: looking at OPR across events
 
Okay, I have avoided giving my opinion on OPR for a long time. Some people may assume that since I publish the data, my team must use it a lot. I certainly understand its limitations, and I understand in which situations it will really screw up the data.

We use OPR/CCWM very similarly to other teams who understand its limitations. At events we attend, we do not use OPR, because we have enough resources to capture every statistic we need, plus extra scouts to watch and collect qualitative data for pre-match decisions and alliance selections. We have a very sophisticated Android program, written by a student, that is now close to 20,000 lines of code and collects data on 6 tablets. However, I do look at OPR at the events we attend, to see what other teams see. For competitions I did not attend, and especially for pre-scouting at Championship, OPR, sub-OPR, and CCWM are very useful tools. Just like in any building project, you need to know which tool best accomplishes each task.

