looking at OPR across events
Is it easier for a given team to get a high OPR at a stacked event than at a softer one?
I think that assist points being the first sort in the standings after qualification points forces good teams to work with their partners, even if it means that the alliance won't score as many points. I also think that assist points being awarded in the 0 / 10 / 30 progression gives an advantage to teams who go to highly competitive events. I'm curious to hear what other CD users think.
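For what it's worth, the assist ladder described above can be written down directly. The point values (0 / 10 / 30 for one, two, and three robots) are taken from the post; the marginal-value calculation is just a small illustrative sketch:

```python
# Assist scoring ladder as described in the post:
# 0 / 10 / 30 points for 1 / 2 / 3 robots contributing to a scored ball.
def assist_points(robots_involved: int) -> int:
    ladder = {1: 0, 2: 10, 3: 30}
    return ladder.get(robots_involved, 0)

# Marginal value of each additional assisting robot. The jump from two
# to three robots (20 points) is twice the jump from one to two (10),
# which is the nonlinearity that rewards deep, well-coordinated alliances.
marginal = [assist_points(n) - assist_points(n - 1) for n in (2, 3)]
# marginal == [10, 20]
```

The convexity is the whole argument: a third capable partner is worth twice what the second one was, which is easier to find at a stacked event.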
Re: looking at OPR across events
Quote:
BTW, congrats last weekend! Looking forward to seeing you at District Champs.
Re: looking at OPR across events
I think OPR is a worthless statistic in this game. It works OK in games where there is a lot of individual contribution, not so much in games built on teamwork. 2013 was a great year for OPR, and 2012 wasn't bad either, but 2014 is not a good year for OPR.
Re: looking at OPR across events
Quote:
Why not take a look at the database here? I think you'd be surprised at how good a job it's doing as a whole; I certainly am. Take a look at some of the other events. If you still disagree, I understand and respect your opinion. That said, I'd love to see any other quantitative metrics you are using to rank and evaluate teams; I'm rather obsessed with these kinds of things. :D
Re: looking at OPR across events
If you haven't looked, how do you know it's bad?
Quote:
In almost every FIRST game, a high-seeded captain is looking for a strong offensive robot. This is exactly what OPR can help you find: you can almost always look at a list of the top teams by OPR and pick one of those. The second pick is not something you'd want to determine by OPR. Often you are lucky to find a robot that can play offense at all, especially at small events or districts, so things like pit scouting become extremely important. I'd often look for teams with multi-CIM gearboxes, strong 6WD or 8WD bases, and drivers who know how to use them effectively in matches.
Quote:
Since you haven't taken a look at it yet, here is some information from your event, Central Illinois: Spoiler for Qualification Rankings:
Spoiler for Rank by OPR:
Spoiler for Rank By CCWM:
They look pretty accurate to me, especially at the upper end. I would say both of these do a much better job of ranking robot performance than the seeding does, wouldn't you? Since you haven't had a chance to look at OPR for the events or your team (I posted the link earlier), I made a page for you with data just from Central Illinois here: Central Illinois OPR/CCWM with Filtering. Enjoy!
Re: looking at OPR across events
On the subject of OPR/CCWM: has anyone had good success using them this year as a forecasting tool, predicting "end of qualifications" standings from an earlier point in time (i.e., the end of the last full day of qualification matches)? I know many people feel the statistic is trash as a performance indicator, but I have heard of several elite teams using these statistics for forecasting in prior years.
Re: looking at OPR across events
Which OPR are we talking about, the one with or without penalties? The one with penalties, I feel, is hyper-inflated, as some of the top OPRs have more than 25% of their score coming from penalty points. While it matches well with seeding and the other metrics (because penalty points are big this year), I don't think it shows the actual power of a team. (The Blue Alliance uses OPR with penalties included, I believe.)
Re: looking at OPR across events
Quote:
http://www.chiefdelphi.com/forums/sh...d.php?t=127619
Re: looking at OPR across events
Quote:
The problem with using OPR in a game like Aerial Assist is that some machines can contribute greatly to an alliance's performance but don't necessarily score a lot of points themselves. Take, for example, the "ideal" third robot on an alliance: one that can inbound and collect effectively but may not score often, if ever. If this robot is rarely paired with someone who can score, its strengths aren't necessarily reflected in the match score, which trickles down to its OPR.
Re: looking at OPR across events
Ah wow, thanks. Fantastic data you have there.
Re: looking at OPR across events
Quote:
What version of LibreOffice? What Linux distro? You said the spreadsheet opens fine; do the macros work too? I know OpenOffice and LibreOffice are not identical, but since I have OpenOffice 3.3-5 installed here under Windows XP Pro SP3, I tried it. It's been loading for about 5 minutes and, according to the status bar, is only about 15% complete. Not looking good. I'll put LibreOffice in my Slacko Linux partition and give it a try.
Re: looking at OPR across events
Quote:
You need three things in this game: a really good front-court robot that makes 90%+ of the shots it takes, has a good intake, and isn't afraid to play some defense; a midfielder that can transition from assisting to defense on the fly; and an inbounder that can quickly pass the ball to the midfielder, as well as play a bit of D or throw some picks so the midfielder can get away from defense to execute the pass. Two high-scoring robots is the flashy alliance that looks great on paper, but unless they can transition into those defensive roles on the fly, they're not going to stand up to the alliance above. I don't think you will see more brutal defense than what we saw in the CVR finals. If you can't counter that with equally effective defense of your own, or counter-defense to free up your front-court robot, you're in a world of hurt.
Re: looking at OPR across events
Quote:
I know you do everything humanly possible to coax the truth out of the Twitter data, and the time and effort you invest are much appreciated. For the Twitter stats reports that I generate, I include the Twitter data even if it is not 100% complete. http://www.chiefdelphi.com/forums/sh....php?p=1355882
Re: looking at OPR across events
Quote:
1. A great defensive robot doesn't have a big impact in matches where the other alliance wasn't going to score many points anyway.
2. Catching robots require someone to provide a controlled truss shot.
3. Inbounders and assisters require someone to inbound to and assist.
In each case, OPR/CCWM will tend to systematically underrate these attributes, since they are only exercised in a subset of qualification matches, while (in most cases) their dependencies will be met in elims. High-goal scorers and truss shooters can manufacture pretty good scores by themselves and are therefore comparatively overrated by OPR/CCWM.
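The underrating effect in points 1-3 is easy to reproduce with synthetic data. The sketch below (all numbers invented) builds random qualification schedules in which one "feeder" robot contributes 25 points only when paired with a capable scorer, then solves the standard OPR least-squares system; the feeder's OPR comes out well below its conditional value:

```python
# Synthetic demonstration: OPR underrates a robot whose contribution
# depends on its partners. All data here is made up.
import numpy as np

rng = np.random.default_rng(0)
n_teams = 12
# Points each team scores on its own per match (team 11 is a pure feeder).
solo_pts = np.array([40, 40, 30, 30, 20, 20, 10, 10, 10, 10, 10, 0])

matches = []
for _ in range(40):  # 40 rounds of two random 3-team alliances
    perm = rng.permutation(n_teams)
    for alliance in (perm[:3], perm[3:6]):
        score = solo_pts[alliance].sum()
        # The feeder adds 25 points, but only when a real scorer is present.
        if 11 in alliance and solo_pts[alliance].max() >= 20:
            score += 25
        matches.append((alliance, score))

# Standard OPR setup: one row per alliance, 1s for participating teams.
A = np.zeros((len(matches), n_teams))
b = np.zeros(len(matches))
for i, (alliance, score) in enumerate(matches):
    A[i, alliance] = 1.0
    b[i] = score

opr, *_ = np.linalg.lstsq(A, b, rcond=None)
# opr[11] lands noticeably below 25: the bonus only appears in some
# matches, and part of the credit leaks to the feeder's partners.
```

The same mechanism applies to catchers and defenders: any attribute that only pays off under certain pairings gets its credit smeared across the schedule.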
Re: looking at OPR across events
Quote:
I will continue to agree to disagree with your general premises. I will also continue to use the systems in place until someone provides a better quantitative metric as a substitute.
Re: looking at OPR across events
Anyone know of an app I can get to view this on Android? I've tried OfficeSuite, but it never loads all the way. Any suggestions?
Re: looking at OPR across events
Quote:
I agree that, as a general rule, OPR does not correctly represent teams with certain strategies. I also agree that it is a highly interesting and useful metric most of the time. I was just wondering whether it is another general rule that it's easier for a team with good alliance partners to get a high OPR than for a team with not-so-good partners. Quote:
Again, I completely agree. But I'm just wondering: in general, could a team get a high OPR more easily by going to a more competitive event?
Re: looking at OPR across events
Quote:
EDIT: just realized I posted this in the wrong thread, sorry
Re: looking at OPR across events
Quote:
I assert that the distribution of robot capabilities on elimination alliances tends to be very different from qualifications. In total, more robot capabilities tend to be present, so partner- or opponent-dependent attributes will contribute more to an alliance than they did on average in qualifications.
Re: looking at OPR across events
In defense of OPR this year, I'd like to look at the data—that is, how accurately OPR has predicted teams' performances in the competitions we've seen thus far. Here are the top 10 teams as ranked by OPR, accompanied by their finishes in the competition they competed in.
1. 1114—Semifinalist at Greater Toronto East regional (lost in semifinals partly due to a technical foul by an alliance partner)
2. 2485—Finalist at San Diego regional (also lost the finals partly due to a tech foul)
3. 3683—Semifinalist at Greater Toronto East (alliance partner of 1114)
4. 33—Winner of Southfield district event
5. 987—Finalist at San Diego (alliance partner of 2485)
6. 624—Winner of Alamo regional
7. 16—Winner of Arkansas regional
8. 254—Winner of Central Valley regional
9. 3147—Finalist at Crossroads regional
10. 3393—Finalist at Auburn Mountain district
In fact, the top OPR team that didn't make at least the finals of their regional/district event is 3494 at 12th, followed by 3476 in 13th (whose pickup broke during the semifinals at San Diego). Overall, only seven of the top 30 teams failed to make the finals, and only two failed to make the semifinals of at least one event. As a robotics and statistics nerd, I think these numbers speak for themselves: OPR may not be a perfect metric, but it seems pretty dang accurate at predicting whether a team will finish well.
EDIT: 1114 and 3683 lost in the semifinals. Thanks to Kevin Sheridan for pointing that out.
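One way to push this a step further is a rank correlation between OPR rank and playoff finish. A minimal sketch, coding the ten finishes listed above ordinally (1 = winner, 2 = finalist, 3 = semifinalist); the hand-rolled Spearman is just to keep the example dependency-free:

```python
# OPR rank 1-10 vs. playoff finish for the ten teams listed above
# (1 = event winner, 2 = finalist, 3 = semifinalist).
import numpy as np

opr_rank = np.arange(1, 11)
finish = np.array([3, 2, 3, 1, 2, 1, 1, 1, 2, 2])

def spearman(x, y):
    """Spearman rank correlation, with average ranks for ties."""
    def ranks(v):
        order = np.argsort(v, kind="stable")
        r = np.empty(len(v), dtype=float)
        r[order] = np.arange(1, len(v) + 1)
        for val in np.unique(v):  # average the ranks of tied values
            mask = v == val
            r[mask] = r[mask].mean()
        return r
    return np.corrcoef(ranks(x), ranks(y))[0, 1]

# rho > 0 would mean a better OPR rank tracks a better (lower-numbered)
# finish within this top-10 slice; rho < 0 means the opposite.
rho = spearman(opr_rank, finish)
```

Note this only measures ordering within the top 10; the post's stronger claim is about the top 30 versus the rest of the field, which would need the full event data.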
Re: looking at OPR across events
Quote:
My point is: everyone in the know will quickly acknowledge OPR's and CCWM's shortcomings, but until there is a widely available, game-independent, repeatable formula that ranks all teams in all competitions relatively quickly, you'll find that OPR remains the gold standard in scouting (especially for the championship event, where it is very difficult to watch or find video of every team attending).
Re: looking at OPR across events
I mean, yes, it does produce some interesting results, but if you go into a match looking at OPRs and realize the alliance you're facing has a really low rating, does that mean you're not going to try as hard? No! That's when you get whooped. So OPR shouldn't affect your match strategy. We've also seen that you shouldn't rely on OPR alone for your pick list. So I ask again: what are you actually getting out of OPR? What advantage do you have over others by looking at OPR versus just the qualification rankings?
Re: looking at OPR across events
Quote:
This is significant for individuals or small teams that lack the man-hours to devote to scouting.
Re: looking at OPR across events
My opinion on this:

CCWM within a regional: theoretically excellent. With the amount of opportunistic defense this year, along with the frequency of fouls, this stat is stronger than OPR at estimating your strength, since CCWM takes both offense and defense into account. However, looking at the GTRE rankings at a glance, CCWM seems to be all over the board, most likely due to the somewhat random nature of fouls.

Adj. OPR within a regional: theoretically excellent. There are very large variations in how many fouls a team draws; in fact, I am getting a standard deviation of 11.7 at GTRE. The issue is that some teams are bigger foul magnets than others due to their reputation (1114 has the highest foul OPR at GTRE).

OPR comparing robots at one event: decent. It's like Adj. OPR with even more error.

OPR/CCWM across regionals: horrible. Too many regionals have alliances that barely function, while others have too many alliances that function only defensively.

Has anyone done an Adjusted Contribution to Winning Margin analysis? That should be calculated as ACCWM = AOPR - DPR, if I recall the CCWM formula correctly. It should take into account any fouls you incur while eliminating opposing-team fouls. Before anyone says it, I also agree that this stat will have to be taken with a grain of salt, due to alliance synergies, luck, and what-not.
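For anyone who wants to experiment with the ACCWM idea above, the underlying computation is the same least-squares system used for OPR. A minimal sketch with made-up match data; in practice you would feed in real alliance scores (penalized or unpenalized, depending on which variant you want) from FMS or The Blue Alliance:

```python
# Least-squares OPR / DPR / CCWM on toy data. Match scores are invented;
# swap in real (or foul-adjusted) alliance scores to get the Adj. variants.
import numpy as np

teams = [1, 2, 3, 4, 5, 6]
# (red alliance, blue alliance, red score, blue score)
matches = [
    ([1, 2, 3], [4, 5, 6], 120, 95),
    ([1, 4, 5], [2, 3, 6], 110, 130),
    ([2, 4, 6], [1, 3, 5], 90, 140),
]

idx = {t: i for i, t in enumerate(teams)}
rows, scored, allowed = [], [], []
for red, blue, rs, bs in matches:
    for alliance, pts_for, pts_against in ((red, rs, bs), (blue, bs, rs)):
        row = np.zeros(len(teams))
        for t in alliance:
            row[idx[t]] = 1.0
        rows.append(row)
        scored.append(float(pts_for))
        allowed.append(float(pts_against))

A = np.array(rows)
# OPR: per-team offensive contribution; DPR: points allowed attributed
# to each team; CCWM = OPR - DPR, the contribution to winning margin.
opr, *_ = np.linalg.lstsq(A, np.array(scored), rcond=None)
dpr, *_ = np.linalg.lstsq(A, np.array(allowed), rcond=None)
ccwm = opr - dpr
```

The CCWM here matches the formula in the post; the "adjusted" variants only change which score column you regress on (e.g., subtracting opponent foul points before solving).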
Re: looking at OPR across events
Quote:
http://www.chiefdelphi.com/forums/sh...70&postcount=7
Re: looking at OPR across events
Quote:
If you recognize that OPR is a tool, just like a hammer is a tool, it isn't hard to see that it has practical uses with definite limitations. I wouldn't use it strictly to pick an alliance, and pretty much everyone who understands OPR wouldn't either, but when you need to quickly fasten two pieces of wood together, it's hard to beat a hammer and a nail.
Re: looking at OPR across events
Quote:
Also, I never thought one should base alliance choices or opinions of teams solely on OPR. I just think it can be a useful metric.
Re: looking at OPR across events
I think the "tool concept" is correct. OPR and CCWM are one piece of the scouting puzzle. They give you an idea of which teams you may not know well, so you can scout them in particular and see whether their skills complement yours. Finding the right complements to your team's skill set and strategy is always the key to winning in eliminations. You can't get that from a statistic, but the statistic can help you focus your scouting effort; it is not a substitute for it. At the end of the day it is about synergy and cooperation. If your scouts see something that goes against the statistic, go with it: follow up and see if more team members agree. Performance can change radically between competitions due to learning and correction of previous errors, and a change in strategy alone can make a big difference. The teams that can work seamlessly together (and have a good skill set) will usually win.
Re: looking at OPR across events
Okay, I have avoided giving my opinion on OPR for a long time. Some people may assume that, since I publish the data, my team must use it a lot. I certainly understand its limitations, and I understand in which situations it will really screw up the data.
We use OPR/CCWM much like other teams who understand its limitations. At events we attend, we do not use OPR, because we have enough resources to capture every statistic we need, plus extra scouts to watch and collect qualitative data for pre-match decisions and alliance selections. We have a very sophisticated Android program, written by a student and now close to 20,000 lines of code, to collect data on 6 tablets. However, I do look at OPR at the events we attend to see what other teams see. For competitions I did not attend, and especially for pre-scouting at Championship, OPR, sub-OPRs, and CCWM are very useful tools. Just like on any building project, you need to know the best tool for each task.
Re: looking at OPR across events
Quote:
Our OPR is so darn low because we didn't meaningfully play in any quals.
Re: looking at OPR across events
FWIW, I just posted an analysis of unpenalized alliance score residuals. http://www.chiefdelphi.com/forums/sh....php?p=1358276 |
Copyright © Chief Delphi