Re: looking at OPR across events
Quote:
My point is, everyone in the know will quickly acknowledge OPR's or CCWM's shortcomings, but until there is a widely available, game-independent, repeatable formula to rank all teams in all competitions relatively quickly, you'll find that OPR will remain the gold standard in scouting (especially for the championship event, where it is very difficult to watch or find video for every team attending).
Re: looking at OPR across events
I mean, yes, it does produce some interesting results, but if you go into a match looking at OPRs and realize the alliance you're going against has a really low rating, does that mean you're not going to try as hard? No! That's when you get whooped. So OPR shouldn't affect your strategy. We've also seen that you shouldn't even rely on OPR for your pick list. So I ask again, what are you actually getting out of OPR? What advantage do you have over others by looking at OPR vs just the qualification rankings?
Re: looking at OPR across events
Quote:
This is significant for individuals or small teams that lack the man-hours to devote to scouting.
Re: looking at OPR across events
My opinion on this:
CCWM within a regional: theoretically excellent. With the amount of opportunistic defense this year, along with the frequency of fouls, this stat is stronger than OPR at estimating your strength, since CCWM takes into account both offense and defense. However, looking at the GTRE rankings at a glance, CCWM seems to be all over the board, most likely due to the somewhat random nature of fouls.
Adj. OPR at a regional: theoretically excellent. There are some very large variations in how many fouls a team draws; in fact, I am getting a standard deviation of 11.7 at GTRE. The issue with this is that some teams are more natural foul magnets than others due to their reputation (1114 has the highest foul OPR at GTRE).
OPR comparing robots at one event: decent. It's like Adj. OPR with even more error.
OPR/CCWM across regionals: horrible. Too many regionals have alliances that barely function, while others have too many alliances that function only defensively.
Has anyone done an Adjusted Contribution to Winning Margin analysis? If I recall the CCWM formula correctly, it should be calculated as ACCWM = AOPR - DPR. It should be able to take into account any fouls you incur while eliminating opposing-team fouls. Before anyone says it, I also agree that this stat will have to be taken with a grain of salt, due to alliance synergies, luck, and what-not.
Re: looking at OPR across events
Quote:
http://www.chiefdelphi.com/forums/sh...70&postcount=7
Re: looking at OPR across events
Quote:
If you recognize that OPR is a tool, just like a hammer is a tool, it isn't hard to see that it has practical uses with definite limitations. I wouldn't, and pretty much everyone who understands OPR wouldn't, use it strictly to pick an alliance, but when you need to quickly fasten two pieces of wood together it's hard to beat a hammer and a nail.
Re: looking at OPR across events
Quote:
Also, I never thought one should base alliance choices or opinions of teams solely on OPR. I just think it can be a useful metric.
Re: looking at OPR across events
I think the "tool concept" is correct. OPR and CCWM are pieces of the scouting puzzle. They give you an idea of which teams you may not know well and should scout in particular, to see whether their skills complement yours. Finding the right complements to your team's skill set and strategy is always the key to winning in eliminations. You can't get that from a statistic, but the statistic can help you focus your scouting effort; it is not a substitute for it. At the end of the day it is about synergy and cooperation. If your scouts see something that goes against the statistic, go with it: follow up on it and see if more team members agree. Performance can change radically between competitions due to learning and correction of previous errors, and a change in strategy alone can make a big difference. In the end, the teams that can work seamlessly together (and have a good skill set) will usually win.
Re: looking at OPR across events
Okay, I have avoided giving my opinion on OPR for a long time. Some people may assume that since I publish the data, my team must use it a lot. I certainly understand its limitations, and I understand in what situations it will really screw up the data.
We use OPR/CCWM much like other teams who understand its limitations. At events we attend, we do not use OPR, because we have enough resources to capture every statistic we need, plus extra scouts to watch and collect qualitative data for pre-match decisions and alliance selections. We have a very sophisticated Android program, written by a student, that is now close to 20,000 lines of code and collects data on 6 tablets. However, I do look at OPR at the events we attend to see what other teams see. For competitions I did not attend, and especially for pre-scouting at Championship, OPR, sub-OPR, and CCWM are very useful tools. Just like any building project, you need to know which tool is best for each task.
Copyright © Chief Delphi