Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   General Forum (http://www.chiefdelphi.com/forums/forumdisplay.php?f=16)
-   -   looking at OPR across events (http://www.chiefdelphi.com/forums/showthread.php?t=127679)

Bryce2471 10-03-2014 02:19

looking at OPR across events
 
Is it easier for a given team to get a high OPR at a stacked event than at a softer one?

I think that assist points being the first sort in the standings after qualification points forces good teams to work with their partners, even if it means the alliance won't score as many points.

I also think that the 0/10/30 scaling of assist points gives an advantage to teams going to highly competitive events.

I'm curious to hear what other CD users think.

Navid Shafa 10-03-2014 04:39

Re: looking at OPR across events
 
Quote:

Originally Posted by Bryce2471 (Post 1356289)
Is it easier for a given team to get a high OPR at a stacked event than at a softer one?

Is it easier? I'm not sure we can say definitively, because there are quite a few variables in this game. The big dynamic trade-off:
  • You can score more points with less defense on you.
  • You can score more points when you have strong partners.
It becomes very event dependent, as well as schedule dependent.
Quote:

Originally Posted by Bryce2471 (Post 1356289)
I think that assist points being the first sort in the standings after qualification points forces good teams to work with their partners, even if it means the alliance won't score as many points.

Absolutely. Many of the powerhouses could complete quick single-bot cycles. However, a combination of seeding rules, point maximization, and slow ball returns certainly promotes more assisting in general.
Quote:

Originally Posted by Bryce2471 (Post 1356289)
I also think that the 0/10/30 scaling of assist points gives an advantage to teams going to highly competitive events.

A good match schedule that allows for more triple assists can certainly affect a team's seeding and tie-breaking ability. It also makes a notable difference in World OPR/CCWM.

BTW, congrats last weekend! Looking forward to seeing you at District Champs.

Michael Hill 10-03-2014 06:30

Re: looking at OPR across events
 
I think OPR is a worthless statistic in this game. It works OK in games where there is a lot of individual contribution, not so much in games built on teamwork. 2013 was a great year for OPR, and 2012 wasn't bad either, but 2014 is not a good year for it.

Navid Shafa 10-03-2014 06:49

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356328)
I think OPR is a worthless statistic in this game. It works OK in games where there is a lot of individual contribution, not so much in games built on teamwork. 2013 was a great year for OPR, and 2012 wasn't bad either, but 2014 is not a good year for it.

I'd have to strongly disagree. OPR and CCWM are certainly better some years than others; that I won't argue. After watching Central Illinois, I'm guessing you are disappointed in the numbers you saw at the event, or in the rating your team has right now. A lot of Central Illinois teams saw rather low OPR/CCWM values because defense was brutal there; I'm sure you can attest to that. If you are trying to compare teams from CI to other events, you are going to be disappointed.

Why not take a look at the database here?
I think you'd be surprised at how good a job it's doing as a whole; I certainly am. Take a look at some of the other events.

If you still disagree, I understand and respect your opinion. That being said, I'd love to see any other quantitative metrics you are using to rank and evaluate teams; I'm rather obsessed with these kinds of things. :D

Michael Hill 10-03-2014 08:21

Re: looking at OPR across events
 
Quote:

Originally Posted by Navid Shafa (Post 1356329)
I'd have to strongly disagree. OPR and CCWM are certainly better some years than others; that I won't argue. After watching Central Illinois, I'm guessing you are disappointed in the numbers you saw at the event, or in the rating your team has right now. A lot of Central Illinois teams saw rather low OPR/CCWM values because defense was brutal there; I'm sure you can attest to that. If you are trying to compare teams from CI to other events, you are going to be disappointed.

Why not take a look at the database here?
I think you'd be surprised at how good a job it's doing as a whole; I certainly am. Take a look at some of the other events.

If you still disagree, I understand and respect your opinion. That being said, I'd love to see any other quantitative metrics you are using to rank and evaluate teams; I'm rather obsessed with these kinds of things. :D

To be honest, I haven't even looked at where my team is. The data just doesn't lend itself to being useful in this type of game. If you're using OPR for anything like a picklist, I think you'll find yourself disappointed. I would also say that trying to validate OPR against ranks at regionals is a useless exercise (why not just go by rank, then, if your OPR matches?). When you have a game that hinges on another team's ability to complete tasks, OPR will not be a good indicator of performance.

Navid Shafa 10-03-2014 09:17

Re: looking at OPR across events
 
If you haven't looked, how do you know it's bad?

Quote:

Originally Posted by Michael Hill (Post 1356339)
If you're using OPR for anything like a picklist, I think you'll find yourself disappointed.

First and most importantly, I would never make a picklist solely off of quantitative metrics, nor would I want to pick teams based solely on pit scouting and my assessment after watching a robot play. A good combination of both is what makes holistic scouting so valuable.

In almost every FIRST game, a high-seeded captain is looking for a strong offensive robot. This is exactly what OPR can assist you in finding. You could almost always look at a list of the top teams in OPR and pick one of those.

The second pick is not something you'd want to determine by OPR. Often you are lucky to find another robot that can play offense at all, especially at small events or districts, so things like pit scouting become extremely important. I'd often look for teams with multi-CIM gearboxes, strong 6WD or 8WD bases, and drivers that know how to use them effectively in matches.

Quote:

Originally Posted by Michael Hill (Post 1356339)
I would also say that trying to validate OPR against ranks at regionals is a useless exercise (why not just go by rank, then, if your OPR matches?).

I've never done this, nor would I attempt to. A team's seeding rank quite often does not directly correlate with robot performance; there are many factors that impact tournament rankings, and I certainly would not expect OPR to exactly match the seeding. Look back at the 2010 and 2012 seeding systems: seeding was greatly impacted not only by your partners but also by the opposing alliance. Perhaps if we had an extremely large sample size these kinds of ranking comparisons would become relevant...

Quote:

Originally Posted by Michael Hill (Post 1356339)
When you have a game that hinges on another team's ability to complete tasks, OPR will not be a good indicator of performance.

This is exactly why I brought up CCWM before. In my opinion it still does a reasonable job of isolating individual performance across a mixture of different alliances and compositions. If we notice a distinct disparity between a team's OPR and CCWM, we know that team has done more or less of the scoring for its alliance, and it can also help flag teams that ranked extremely high or low due to really strong or really poor match schedules. For example, a team with a large CCWM value is generally doing the bulk of the scoring for its alliance.
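To make the math concrete, here is a minimal sketch of how both numbers come out of the same least-squares solve. This is illustrative Python/numpy only; the match-tuple format is an assumption for the example, not any particular site's API.

Code:

import numpy as np

# Minimal OPR/CCWM sketch. Matches are assumed to be available as
# (red_teams, blue_teams, red_score, blue_score) tuples, with team IDs
# already mapped to indices 0..n_teams-1.
def opr_ccwm(matches, n_teams):
    rows, own_scores, margins = [], [], []
    for red, blue, red_score, blue_score in matches:
        for alliance, us, them in ((red, red_score, blue_score),
                                   (blue, blue_score, red_score)):
            row = np.zeros(n_teams)
            row[list(alliance)] = 1.0   # 1 for each team on this alliance
            rows.append(row)
            own_scores.append(us)       # OPR target: the alliance's own score
            margins.append(us - them)   # CCWM target: the winning margin
    A = np.array(rows)
    opr, *_ = np.linalg.lstsq(A, np.array(own_scores), rcond=None)
    ccwm, *_ = np.linalg.lstsq(A, np.array(margins), rcond=None)
    return opr, ccwm

Comparing the two outputs side by side is exactly the OPR-vs-CCWM disparity check described above.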

Since you haven't taken a look at it yet, here is some information from your event, Central Illinois:

Spoiler for Qualification Rankings:
1 525
2 1736
3 1986
4 1806
5 1756
6 1747
7 171
8 2081
9 167
10 2481
11 967
12 2704
13 2451
14 4256
15 1208
16 4143
17 2039
18 4212
19 3138
20 2022
21 2164
22 1764
23 3352
24 4196
25 3284
26 1091
27 4786
28 292
29 1288
30 1094
31 648
32 2040
33 4213
34 4296
35 4655
36 4330
37 5041
38 1739
39 4329
40 2194


Spoiler for Rank by OPR:
1 525
2 1986
3 1806
4 1756
5 4256
6 1747
7 1736
8 167
9 2081
10 171
11 2451
12 4143
13 967
14 1208
15 2039
16 1288
17 3284
18 292
19 648
20 5041
21 2481
22 4212
23 3138
24 1094
25 2704
26 1764
27 4213
28 2040
29 1091
30 2164
31 3352
32 4330
33 2022
34 4196
35 4655
36 4786
37 2194
38 4329
39 1739
40 4296


Spoiler for Rank By CCWM:
1 1986
2 525
3 1806
4 1747
5 1736
6 1756
7 171
8 2081
9 2481
10 3284
11 2451
12 967
13 167
14 1208
15 4143
16 2704
17 4256
18 2039
19 4213
20 4330
21 3138
22 1091
23 1288
24 3352
25 4212
26 2164
27 2022
28 5041
29 4196
30 1764
31 4786
32 4655
33 1094
34 2040
35 4296
36 292
37 2194
38 4329
39 648
40 1739


They look pretty accurate to me, especially at the upper end. I would say both of these do a much better job of ranking robot performance than the seeding does, wouldn't you?

Since you haven't had a chance to look at OPR for the events or your team, I posted the link earlier, but I made a page for you with data just from Central Illinois here: Central Illinois OPR/CCWM with Filtering. Enjoy!

Ben Martin 10-03-2014 09:31

Re: looking at OPR across events
 
On the subject of OPR/CCWM--has anyone had good success using them this year as a forecasting tool, predicting "end of qualifications" standings from an earlier point in time (e.g., the end of the last full day of qualification matches)? I know many people feel the statistic is trash as a performance indicator, but I have heard of several elite teams using these statistics for forecasting in prior years.

Lil' Lavery 10-03-2014 09:32

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356328)
I think OPR is a worthless statistic in this game. It works OK in games where there is a lot of individual contribution, not so much in games built on teamwork. 2013 was a great year for OPR, and 2012 wasn't bad either, but 2014 is not a good year for it.

Ideally, this is exactly the type of game you would want to use an OPR or CCWM metric for. With a variety of potential roles within an alliance, a team's contribution to offense or to winning a match isn't always obvious to more traditional scouting. However, I don't think a single event comes anywhere near the sample size required to normalize the data, especially given the alliance-driven nature of the game. Games with very discrete roles make it much easier to get a normalized/accurate OPR, but OPR is less useful in those years because it's also much easier to get meaningful data from scouting. District-model teams, or any teams who compete in 3+ events and 50+ matches, may have enough input data for OPR to be useful, but in-season improvement (of both the team and the average level of play) once again throws a wrench into how predictive OPR will be of future contributions.

Matt_Boehm_329 10-03-2014 09:58

Re: looking at OPR across events
 
Which OPR are we talking about, the one with or without penalties? The one with penalties, I feel, is hyper-inflated, as some of the top OPRs have more than 25% of their scores in penalty points. While it matches well with seeding and the other metric (because penalty points are big this year), I don't think it shows the actual power of a team. (thebluealliance uses OPR with penalties included, I believe.)

Ether 10-03-2014 10:18

Re: looking at OPR across events
 
Quote:

Originally Posted by Matt_Boehm_329 (Post 1356386)
Which OPR are we talking about, the one with or without penalties? The one with penalties, I feel, is hyper-inflated, as some of the top OPRs have more than 25% of their scores in penalty points. While it matches well with seeding and the other metric (because penalty points are big this year), I don't think it shows the actual power of a team. (thebluealliance uses OPR with penalties included, I believe.)

Unfortunately, removing awarded foul points from the final score requires using Twitter data, which is incomplete and may contain errors or redundancies. With that caveat in mind, I've posted the Twitter-data-based unpenalized final scores in the thread linked below. I think Ed Law's OPR spreadsheet also uses Twitter data to remove awarded foul points, but I haven't yet been able to find a viewer that can open xlsm files with Excel 2000. Ed graciously attempted to create an xls version of his spreadsheet but was unsuccessful.

http://www.chiefdelphi.com/forums/sh...d.php?t=127619



RyanShoff 10-03-2014 10:43

Re: looking at OPR across events
 
Quote:

Originally Posted by Ether (Post 1356407)
I haven't yet been able to find a viewer that can open xlsm files with Excel 2000.

LibreOffice opens his spreadsheet fine under Linux. I assume it would work for Windows too.

thefro526 10-03-2014 10:43

Re: looking at OPR across events
 
Quote:

Originally Posted by Ben Martin (Post 1356369)
On the subject of OPR/CCWM--has anyone had good success using them this year as a forecasting tool, predicting "end of qualifications" standings from an earlier point in time (e.g., the end of the last full day of qualification matches)? I know many people feel the statistic is trash as a performance indicator, but I have heard of several elite teams using these statistics for forecasting in prior years.

Ben, I saw some OPR-based calculations at our first district event. I wouldn't trust them 100% for all matches, but as a guide to how difficult a match is going to be, and/or a comparison of one alliance's strength to another, they're definitely useful.

The problem with using OPR in a game like Aerial Assist is that there are some machines that can contribute greatly to an alliance's performance but don't necessarily score a lot of points. Take, for example, the 'ideal' third robot on an alliance: one that can inbound and collect effectively but may not score often, if ever. If this robot is rarely paired with someone who can score, its strengths aren't necessarily reflected in the match score, which trickles down to its OPR.
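As a concrete illustration of the "how difficult is this match going to be" use, a minimal sketch reusing an opr vector like the one computed earlier in the thread; the numbers here are invented:

Code:

import numpy as np

# Rough match forecast: an alliance's predicted score is just the sum of
# its members' OPRs. A gauge of match difficulty, not a calibrated predictor.
def forecast(opr, red, blue):
    return opr[list(red)].sum(), opr[list(blue)].sum()

opr = np.array([62.1, 55.0, 48.7, 33.2, 21.9, 10.4])  # invented event OPRs
red_pred, blue_pred = forecast(opr, (0, 3, 5), (1, 2, 4))
print(f"predicted: red {red_pred:.0f} - blue {blue_pred:.0f}")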

Matt_Boehm_329 10-03-2014 10:45

Re: looking at OPR across events
 
Ah wow, thanks. That's fantastic data you have there.

Ether 10-03-2014 11:07

Re: looking at OPR across events
 
Quote:

Originally Posted by RyanShoff (Post 1356424)
LibreOffice opens his spreadsheet fine under Linux. I assume it would work for Windows too.

Thanks Ryan.

What version of LibreOffice? What Linux distro? You said the spreadsheet opens fine. Do the macros work too?

I know OpenOffice and LibreOffice are not identical, but since I have OpenOffice 3.3-5 installed here under Windows XP Pro SP3, I tried it. It's been loading for about 5 minutes and, according to the status bar, is only about 15% complete. Not looking good.

I'll put LibreOffice in my Slacko Linux partition and give it a try.



Ed Law 10-03-2014 11:20

Re: looking at OPR across events
 
Quote:

Originally Posted by Ether (Post 1356407)
Unfortunately, removing awarded foul points from the final score requires using Twitter data, which is incomplete and may contain errors or redundancies.

http://www.chiefdelphi.com/forums/sh...d.php?t=127619



My spreadsheet automatically ignores the redundancies: if a match is replayed, it uses the later match and ignores the original. Also, I only use the Twitter data for a district/regional event if no match data are missing from that event.
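For anyone rolling their own Twitter-data cleanup, the replay rule Ed describes amounts to keeping only the last result seen for each event/match key. A minimal sketch; the record format is assumed for illustration:

Code:

# Keep only the most recent result for each (event, match) pair, so a
# replay silently supersedes the original. Records are assumed to arrive
# in chronological order as dicts with "event" and "match" keys.
def dedupe_replays(records):
    latest = {}
    for rec in records:
        latest[(rec["event"], rec["match"])] = rec
    return list(latest.values())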

Michael Hill 10-03-2014 11:54

Re: looking at OPR across events
 
Quote:

Originally Posted by Navid Shafa (Post 1356357)
If you haven't looked, how do you know it's bad?

[... full post quoted above, including the Central Illinois OPR/CCWM ranking tables ...]

I'd like to see an OPR justification for why 254 chose 973 and 2135 at CVR. Until then, I'd much rather rely on actual scouting data for real performance evaluation. This game has more facets than usual, and there are more ways of scoring points: previously, points were mostly scored by putting something into a goal, but this year many more things have to be done to get points (and a win). This year is all about choosing compatible robots, not just ones that can put points on the board by themselves. 973 was second to last in OPR at Central Valley, but does that mean they didn't perform well? Sure, the top teams that can put up sick points are going to bubble to the top, but after about the top 20-30 teams (overall), it's a big jumble. And aren't the teams after the top 20-30 the ones you really care about?

Cory 10-03-2014 12:56

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356474)
I'd like to see an OPR justification for why 254 chose 973 and 2135 at CVR. Until then, I'd much rather rely on actual scouting data for real performance evaluation. This game has more facets than usual, and there are more ways of scoring points: previously, points were mostly scored by putting something into a goal, but this year many more things have to be done to get points (and a win). This year is all about choosing compatible robots, not just ones that can put points on the board by themselves. 973 was second to last in OPR at Central Valley, but does that mean they didn't perform well? Sure, the top teams that can put up sick points are going to bubble to the top, but after about the top 20-30 teams (overall), it's a big jumble. And aren't the teams after the top 20-30 the ones you really care about?

OPR is a BS metric this year. This game isn't about scoring. Pretty much every other FIRST game has required more than one good offensive robot, so as the #1 seed you are forced to pick the second-best scoring bot at the event. Not this one.

You need three things in this game: a really good front-court robot that is going to make 90%+ of the shots it takes, has a good intake, and isn't afraid to play some defense; a midfielder that can transition from assisting to defense on the fly; and an inbounder that can quickly pass the ball to the midfielder as well as play a bit of D or throw some picks so the midfielder can get away from defense to execute the pass.

Two high-scoring robots make the flashy alliance that looks great on paper, but unless they can transition into those defensive roles on the fly, they're not going to stand up to the alliance above. I don't think you will see examples of more brutal defense than what we saw in the CVR finals. If you can't counter that with equally effective defense of your own, or counter-defense to free up your front-court robot, you're in a world of hurt.

Ether 10-03-2014 13:12

Re: looking at OPR across events
 
Quote:

Originally Posted by Ed Law (Post 1356451)
My spreadsheet automatically ignores the redundancies: if a match is replayed, it uses the later match and ignores the original. Also, I only use the Twitter data for a district/regional event if no match data are missing from that event.

Ed,

I know you do everything humanly possible to coax the truth out of the Twitter data, and the time and effort you invest is much appreciated.

For the Twitter stats reports that I generate, I include the Twitter data even if that data is not 100% complete.

http://www.chiefdelphi.com/forums/sh....php?p=1355882



Lil' Lavery 10-03-2014 13:20

Re: looking at OPR across events
 
Quote:

Originally Posted by Cory (Post 1356523)
OPR is a BS metric this year. This game isn't about scoring. Pretty much every other FIRST game has required more than one good offensive robot, so as the #1 seed you are forced to pick the second-best scoring bot at the event. Not this one.

You need three things in this game: a really good front-court robot that is going to make 90%+ of the shots it takes, has a good intake, and isn't afraid to play some defense; a midfielder that can transition from assisting to defense on the fly; and an inbounder that can quickly pass the ball to the midfielder as well as play a bit of D or throw some picks so the midfielder can get away from defense to execute the pass.

Two high-scoring robots make the flashy alliance that looks great on paper, but unless they can transition into those defensive roles on the fly, they're not going to stand up to the alliance above. I don't think you will see examples of more brutal defense than what we saw in the CVR finals. If you can't counter that with equally effective defense of your own, or counter-defense to free up your front-court robot, you're in a world of hurt.

This is also exactly why OPR, or more specifically CCWM, could be a great metric this year, given an infinitely large sample of non-changing machines. Because so much of this game relies on esoteric maneuvers and complementary play, much of a robot's value to an alliance cannot be determined simply by recording how many times it scores or assists. Some other concrete metrics, like shooting percentage, possessions, inbounding percentage, lines crossed, or time spent in contact with other robots, could shed more light, but many teams don't track those reliably (or at all), and they still leave gaps in true value. Over a sufficiently large sample, a team's value to an average alliance should become clear through OPR/CCWM. I don't think we'll see a large enough sample to get meaningful results for most middle-tier robots, though; there's too much variability during qualification matches in terms of partners' capabilities, opponents' capabilities, strategy selected, and robot improvement. Not to mention that value to an average alliance is often not identical to value to your particular alliance.

Jared Russell 10-03-2014 13:56

Re: looking at OPR across events
 
Quote:

Originally Posted by Lil' Lavery (Post 1356552)
Over a sufficiently large sample, a team's value to an average alliance should become clear through OPR/CCWM.

Maybe for an average qualifications alliance, but not for an average eliminations alliance. Many useful robot attributes cannot be demonstrated in every qualification match due to random alliance pairings. Examples:

1. A great defensive robot doesn't have a big impact in matches where the other alliance would not have been scoring a lot of points anyway.

2. Catching robots require someone to provide a controlled truss shot.

3. Inbounders and assisters require someone to inbound to/assist to.

In each case, OPR/CCWM will tend to systematically underrate these attributes, since they are only utilized in a subset of qualification matches but will (in most cases) have their dependencies met in elims. High-goal scorers and truss shooters can manufacture pretty good scores by themselves and are therefore comparatively overrated by OPR/CCWM.

Navid Shafa 10-03-2014 16:22

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356474)
I'd like to see an OPR justification for why 254 chose 973 and 2135 at CVR.

Although I appreciate and use OPR for forecasting and comparing teams, I would have to be stupid to use OPR alone to pick an alliance. Any quantitative metric is going to put 973 at the bottom; how can you expect any quantitative system to account for robots that don't function properly, or don't function to their full potential, in a large portion of their qualification matches?

I will continue to agree to disagree with your general premises. I will also continue to utilize the systems in place until someone provides a better quantitative substitute.

Karthik 10-03-2014 17:40

Re: looking at OPR across events
 
Quote:

Originally Posted by Jared Russell (Post 1356591)
Maybe for an average qualifications alliance, but not for an average eliminations alliance. Many useful robot attributes cannot be demonstrated in every qualification match due to random alliance pairings. Examples:

1. A great defensive robot doesn't have a big impact in matches where the other alliance would not have been scoring a lot of points anyway.

2. Catching robots require someone to provide a controlled truss shot.

3. Inbounders and assisters require someone to inbound to/assist to.

In each case, OPR/CCWM will tend to systematically underrate these attributes, since they are only utilized in a subset of qualification matches but will (in most cases) have their dependencies met in elims. High-goal scorers and truss shooters can manufacture pretty good scores by themselves and are therefore comparatively overrated by OPR/CCWM.

While I do agree that OPR can underrate these types of teams, I think we're all forgetting exactly how OPR works. Teams that consistently play on high-scoring alliances will continue to get a higher OPR; thus, teams that are very good at facilitating for alliance partners (strong inbounding, passing, catching, etc.) will eventually rise to the top. As Sean mentioned earlier, the sample size may not be large enough to filter out all the issues that Jared mentioned above, but it's still a pretty decent metric, and definitely a useful tool when analyzing the results of an event where you have no or limited access to match video, or no time to watch and break down all the video.
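A toy Monte Carlo illustrates both points at once: Karthik's (facilitators do rise) and Jared's (they're still underrated). All numbers here are invented for illustration; a robot that never scores on its own, but adds points whenever a partner can finish, earns a clearly nonzero OPR over enough matches, yet one below its value to an alliance that always meets its dependency.

Code:

import numpy as np

rng = np.random.default_rng(42)
N = 40                              # teams at a hypothetical event
base = rng.uniform(5.0, 45.0, N)    # each team's "true" solo contribution
base[0] = 0.0                       # team 0 never scores on its own...
BONUS = 25.0                        # ...but adds 25 pts when a partner can finish

def alliance_score(teams):
    score = base[teams].sum() + rng.normal(0.0, 8.0)
    # team 0's bonus only materializes if some alliance-mate is a scorer
    if 0 in teams and any(base[t] > 30.0 for t in teams):
        score += BONUS
    return score

def facilitator_opr(n_matches):
    rows, scores = [], []
    for _ in range(n_matches):
        six = rng.choice(N, size=6, replace=False)   # random schedule
        for alliance in (six[:3], six[3:]):
            row = np.zeros(N)
            row[alliance] = 1.0
            rows.append(row)
            scores.append(alliance_score(alliance))
    opr, *_ = np.linalg.lstsq(np.array(rows), np.array(scores), rcond=None)
    return opr[0]

for m in (30, 120, 1000):
    print(f"{m:4d} matches: facilitator OPR = {facilitator_opr(m):5.1f}")
# Team 0's OPR converges to roughly BONUS times the chance that a random
# alliance-mate can finish: well above zero, but below the full 25.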

JTEarley 10-03-2014 18:19

Re: looking at OPR across events
 
Anyone know of an app I can get to view this on Android? I've tried OfficeSuite, but it never loads all the way. Any suggestions?

Navid Shafa 10-03-2014 18:20

Re: looking at OPR across events
 
Quote:

Originally Posted by JTEarley (Post 1356816)
Anyone know of an app I can get to view this on Android? I've tried OfficeSuite, but it never loads all the way. Any suggestions?

Spyder has both OPR and OPR predictions per event. It's nothing fancy, but you might like that.

Bryce2471 10-03-2014 18:35

Re: looking at OPR across events
 
Quote:

Although I appreciate and use OPR for forecasting and comparing teams, I would have to be stupid to use OPR alone to pick an alliance. Any quantitative metric is going to put 973 at the bottom; how can you expect any quantitative system to account for robots that don't function properly, or don't function to their full potential, in a large portion of their qualification matches?

I will continue to agree to disagree with your general premises. I will also continue to utilize the systems in place until someone provides a better quantitative substitute.
Navid,
I agree that, as a general rule, OPR does not correctly represent teams with certain strategies. I also agree that it is a highly interesting and useful metric most of the time. I was just wondering if it's another general rule that it's easier for a team with good alliance partners to get a high OPR than for a team with not-so-good partners.

Quote:

While I do agree that OPR can underrate these types of teams, I think we're all forgetting exactly how OPR works. Teams that consistently play on high-scoring alliances will continue to get a higher OPR; thus, teams that are very good at facilitating for alliance partners (strong inbounding, passing, catching, etc.) will eventually rise to the top. As Sean mentioned earlier, the sample size may not be large enough to filter out all the issues that Jared mentioned above, but it's still a pretty decent metric, and definitely a useful tool when analyzing the results of an event where you have no or limited access to match video, or no time to watch and break down all the video.
Karthik,
Again, I completely agree. But I'm just wondering: in general, could a team get a high OPR more easily by going to a more competitive event?

JTEarley 10-03-2014 18:39

Re: looking at OPR across events
 
Quote:

Originally Posted by Navid Shafa (Post 1356819)
Spyder has both OPR and OPR predictions per event. It's nothing fancy, but you might like that.

I like that, but it is nice to see every team's OPR at once, instead of just within each event.

EDIT: Just realized I posted this in the wrong thread, sorry.

Jared Russell 10-03-2014 18:51

Re: looking at OPR across events
 
Quote:

Originally Posted by Karthik (Post 1356788)
Teams that consistently play on high-scoring alliances will continue to get a higher OPR; thus, teams that are very good at facilitating for alliance partners (strong inbounding, passing, catching, etc.) will eventually rise to the top.

The first statement is of course true, but the coupling of catching, assisting, and defense with other robots' actions means that very capable facilitators may not frequently and consistently play on high-scoring alliances, regardless of the number of matches. (How many triple-assist cycles have you seen go uncompleted because the scorer kept missing and ran out of time?) It all depends on the distribution of robot capabilities at the event. A given team's event OPR/CCWM is conditioned on this distribution, a result of OPR's assumed linear scoring model being fit to the game's non-linear and interdependent scoring functions. (Hence, OPRs from different events are not really comparable unless the distributions of robot capabilities are similar.)

I assert that the distribution of robot capabilities in eliminations tends to be very different from qualifications, across all alliances. In total, more robot capabilities tend to be present, so partner- or opponent-dependent attributes will contribute more to an alliance than they did on average in qualifications.

David8696 10-03-2014 19:19

Re: looking at OPR across events
 
In defense of OPR this year, I'd like to look at the data—that is, how accurately OPR has predicted teams' performances in the competitions we've seen thus far. Here are the top 10 teams as ranked by OPR, accompanied by their finishes in the competition they competed in.

1. 1114—Semifinalist at Greater Toronto East regional (lost in semifinals partly due to technical foul by alliance partner)
2. 2485—Finalist at San Diego regional (also lost finals partly due to tech foul)
3. 3683—Semifinalist at Greater Toronto East (alliance partner with 1114)
4. 33—Winner of Southfield district event
5. 987—Finalist at San Diego (alliance partner with 2485)
6. 624—Winner of Alamo regional
7. 16—Winner of Arkansas regional
8. 254—Winner of Central Valley regional
9. 3147—Finalist at Crossroads regional
10. 3393—Finalist at Auburn Mountain district

In fact, aside from the two semifinalists above, the top OPR team that didn't make at least the finals of their regional/district event is 3494 at 12th, followed by 3476 in 13th (whose pickup broke during the semifinals at San Diego). Overall, only seven of the top 30 teams failed to make the finals, and only two failed to make the semifinals of at least one event. As a robotics and statistics nerd, I think these numbers speak for themselves: OPR may not be a perfect metric, but it seems pretty dang accurate at predicting whether a team will finish well.

EDIT: 1114 and 3683 lost in the semifinals. Thanks to Kevin Sheridan for pointing that out.

Kevin Sheridan 10-03-2014 19:20

Re: looking at OPR across events
 
Quote:

Originally Posted by David8696 (Post 1356873)
In defense of OPR this year, I'd like to look at the data—that is, how accurately OPR has predicted teams' performances in the competitions we've seen thus far. Here are the top 10 teams as ranked by OPR, accompanied by their finishes in the competition they competed in.

1. 1114—Finalist at Greater Toronto East regional (lost finals partly due to technical foul by alliance partner)
2. 2485—Finalist at San Diego regional(also lost finals partly due to tech foul)
3. 3683—Finalist at Greater Toronto East (alliance partner with 1114)
4. 33—Winner of Southfield district event
5. 987—Finalist at San Diego (alliance partner with 2485)
6. 624—Winner of Alamo regional
7. 16—Winner of Arkansas regional
8. 254—Winner of Central Valley regional
9. 3147—Finalist at Crossroads regional
10. 3393—Finalist at Auburn Mountain district

In fact, the top OPR team that didn't make at least finals in their respective regional/district event is 3494 at 12th, followed by 3476 in 13th (whose pickup broke during semifinals at San Diego). Overall, only five of the top 30 teams failed to make the finals, and only two failed to make the semifinals of at least one event. As a robotics and statistics nerd, I think these numbers speak for themselves: OPR may not be a perfect metric, but it seems pretty dang accurate at predicting whether a team will finish well.

1114 and 3683 lost in the semifinals

David8696 10-03-2014 19:21

Re: looking at OPR across events
 
Quote:

Originally Posted by Kevin Sheridan (Post 1356874)
1114 and 3683 lost in the semifinals

My mistake. But I feel like the message holds true—OPR has predicted with fairly good accuracy the success of a team.

Michael Hill 10-03-2014 20:05

Re: looking at OPR across events
 
Quote:

Originally Posted by David8696 (Post 1356875)
My mistake. But I feel like the message holds true—OPR has predicted with fairly good accuracy the success of a team.

I already said for the top teams it will work, but for the midrange, where OPR would actually be useful, it falls apart. So I ask...what is OPR actually USEFUL for?

sailer99 10-03-2014 20:11

Re: looking at OPR across events
 
Quote:

Originally Posted by David8696 (Post 1356873)
In defense of OPR this year, I'd like to look at the data—that is, how accurately OPR has predicted teams' performances in the competitions we've seen thus far. Here are the top 10 teams as ranked by OPR, accompanied by their finishes in the competition they competed in.

1. 1114—Semifinalist at Greater Toronto East regional (lost in semifinals partly due to technical foul by alliance partner)
2. 2485—Finalist at San Diego regional (also lost finals partly due to tech foul)
3. 3683—Semifinalist at Greater Toronto East (alliance partner with 1114)

[...]

Please watch the matches before judging that 1114 and 3683 lost due to a technical foul. One of the problems with OPR this year is that it won't accurately predict how alliance partners mesh. I agree it is a good stat for getting a general idea of how teams are going to do, but when it comes around to eliminations, OPR is no longer very valid. I would be interested in how DPR does as a predictor this year, with defense being heavily played. I don't know much about DPR, but I do know it uses your opponents' score rather than your own.

efoote868 10-03-2014 20:15

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356886)
but for the midrange, where OPR would actually be useful, it falls apart.

What is your proof / how else would you differentiate middle-tier teams / do you have a concrete example of ranking teams by abilities that OPR fails to mirror?

My point is, everyone in the know will quickly acknowledge OPR's or CCWM's shortcomings, but until there is a widely available, game-independent, repeatable formula to rank all teams in all competitions relatively quickly, you'll find that OPR will remain the gold standard in scouting (especially for the championship event, where it is very difficult to watch or find video for every team attending).

Michael Hill 10-03-2014 20:16

Re: looking at OPR across events
 
I mean, yes, it does produce some interesting results. But if you go into a match looking at OPRs and realize the alliance you're going against has a really low rating, does that mean you're not going to try as hard? No! That's when you get whooped. So OPR shouldn't affect your strategy. We've also seen that you shouldn't even rely on OPR for your pick list. So I ask again: what are you actually getting out of OPR? What advantage do you have over others by looking at OPR versus just the qualification rankings?

Karthik 10-03-2014 20:22

Re: looking at OPR across events
 
Quote:

Originally Posted by Bryce2471 (Post 1356837)
Karthik,
Again, I completely agree. But I'm just wondering: in general, could a team get a high OPR more easily by going to a more competitive event?

In short, absolutely. Hence the danger of cross-event comparisons of OPR.

Karthik 10-03-2014 20:28

Re: looking at OPR across events
 
Quote:

Originally Posted by Jared Russell (Post 1356855)
The first statement is of course true, but the coupling of catching, assisting, and defense with other robots' actions means that very capable facilitators may not frequently and consistently play on high-scoring alliances, regardless of the number of matches. (How many triple-assist cycles have you seen go uncompleted because the scorer kept missing and ran out of time?) It all depends on the distribution of robot capabilities at the event. A given team's event OPR/CCWM is conditioned on this distribution, a result of OPR's assumed linear scoring model being fit to the game's non-linear and interdependent scoring functions. (Hence, OPRs from different events are not really comparable unless the distributions of robot capabilities are similar.)

Agreed. Simply put, a team's OPR is partially a function of the teams it plays with; if the subset of teams the team is drawing partners from (i.e., the event) is limited in functionality, any team's OPR will be bottlenecked. However, the relative rankings within the event (e.g., number of standard deviations from the mean) should still serve as a reasonable metric for those unable to watch the event.
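Karthik's "deviations from the mean" idea is straightforward to operationalize. A minimal sketch, with invented event OPRs standing in for real data:

Code:

import numpy as np

# Z-score OPRs within each event so teams from different events can be
# compared on "deviations from the mean", per the post above.
def zscore(oprs):
    oprs = np.asarray(oprs, dtype=float)
    return (oprs - oprs.mean()) / oprs.std()

stacked = [95, 80, 72, 60, 55, 41, 30, 22]  # invented high-scoring event
soft    = [48, 40, 35, 30, 26, 20, 14,  9]  # invented defense-heavy event

# The raw OPRs aren't comparable across the two events, but a team sitting
# two deviations above its own event's mean stands out equally in either.
print(zscore(stacked)[:2], zscore(soft)[:2])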

efoote868 10-03-2014 20:31

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356895)
We've also seen that you shouldn't even rely on OPR for your pick list. So I ask again: what are you actually getting out of OPR? What advantage do you have over others by looking at OPR versus just the qualification rankings?

In short, filtering. I can use OPR to rank teams and then scrutinize the teams on the threshold of the top tier, saving myself the effort of looking at every team.

This is significant for individuals or small teams that lack the man-hours to devote to scouting.

Michael Hill 10-03-2014 20:38

Re: looking at OPR across events
 
Quote:

Originally Posted by efoote868 (Post 1356909)
In short, filtering. I can use OPR to rank teams and then scrutinize the teams on the threshold of the top tier, saving myself the effort of looking at every team.

This is significant for individuals or small teams that lack the man-hours to devote to scouting.

Unfortunately, when you do that, you'll be setting yourself up for failure. You want to choose the robots that will best pair with your particular set of skills, and those robots could very well be misrepresented by OPR. As mentioned previously, look at 254's selections at CVR: they chose robots that are very compatible with their style of play even though those robots were near the very bottom in OPR.

ErvinI 10-03-2014 20:39

Re: looking at OPR across events
 
My opinion on this:

CCWM within a regional: theoretically excellent. With the amount of opportunistic defense this year, along with the frequency of fouls, this stat is stronger than OPR at measuring your strength, since CCWM takes into account both offense and defense. However, looking at the GTRE rankings at a glance, CCWM seems to be all over the board, most likely due to the somewhat random nature of fouls.
Adj. OPR at a regional: theoretically excellent. There are some very large variations in how many foul points your team gains; in fact, I am getting a standard deviation of 11.7 at GTRE. The issue with this is that some teams are more natural foul magnets than others due to their reputation (1114 has the highest foul OPR at GTRE).
OPR comparing robots at one event: decent. It's like Adj. OPR with even more error.
OPR/CCWM across regionals: horrible. Too many regionals have alliances that barely function, while others have too many alliances that function only defensively.

Has anyone done an Adjusted Contribution to Winning Margin analysis? It would be calculated as ACCWM = AOPR - DPR, if I recall the CCWM formula correctly. It should be able to take into account any fouls you incur while eliminating opposing teams' fouls; a quick sketch is below.

Before anyone says it: I also agree that this stat will have to be taken with a grain of salt, due to alliance synergies, luck, and what-not.
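For anyone who wants to try ErvinI's proposal, a minimal sketch reusing the usual alliance design matrix from the OPR example earlier in the thread. Exactly which foul points to strip from the "adjusted" scores is the judgment call, and the unpenalized scores are assumed to come from something like the Twitter data discussed above:

Code:

import numpy as np

# ErvinI's proposed ACCWM = AOPR - DPR. A is the usual alliance design
# matrix (one row per alliance per match, 1.0 for each member team).
# own_unpenalized: each alliance's score with awarded foul points removed.
# opp: the opposing alliance's score for the same row.
def accwm(A, own_unpenalized, opp):
    aopr, *_ = np.linalg.lstsq(A, np.asarray(own_unpenalized, float), rcond=None)
    dpr, *_ = np.linalg.lstsq(A, np.asarray(opp, float), rcond=None)
    return aopr - dpr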

Ether 10-03-2014 21:02

Re: looking at OPR across events
 
Quote:

Originally Posted by ErvinI (Post 1356917)
Has anyone done an Adjusted Contribution to Winning Margin analysis?

Yes, but it's with Twitter data, and across all events in weeks 1 & 2

http://www.chiefdelphi.com/forums/sh...70&postcount=7



efoote868 10-03-2014 21:07

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356915)
Unfortunately, when you do that, you'll be setting yourself up for failure. You want to choose the robots that will best pair with your particular set of skills, and those robots could very well be misrepresented by OPR. As mentioned previously, look at 254's selections at CVR: they chose robots that are very compatible with their style of play even though those robots were near the very bottom in OPR.

No one is arguing against that strategy, but not every team has the 'Poofs' resources to implement it.

If you recognize that OPR is a tool, just like a hammer is a tool, it isn't hard to see that it has practical uses with definite limitations. I wouldn't use it strictly to pick an alliance, and pretty much everyone who understands OPR wouldn't either, but when you need to quickly fasten two pieces of wood together, it's hard to beat a hammer and a nail.

David8696 10-03-2014 23:26

Re: looking at OPR across events
 
Quote:

Originally Posted by sailer99 (Post 1356888)
Please watch the matches before judging that 1114 and 3683 lost due to a technical foul.

Thus "partly due to." There can be no debate that the tech foul contributed to the loss—whether it caused it is much less clear-cut; I don't necessarily think so.

Also, I never thought one should base alliance choices or opinions of teams solely on OPR. I just think it can be a useful metric.

David8696 10-03-2014 23:27

Re: looking at OPR across events
 
Quote:

Originally Posted by efoote868 (Post 1356938)
No one is arguing against that strategy, but not every team has the 'Poofs' resources to implement it.

If you recognize that OPR is a tool, just like a hammer is a tool, it isn't hard to see that it has practical uses with definite limitations. I wouldn't use it strictly to pick an alliance, and pretty much everyone who understands OPR wouldn't either, but when you need to quickly fasten two pieces of wood together, it's hard to beat a hammer and a nail.

Very well put. If you're looking to drive in a nail, use the hammer; but if you're trying to saw a board in half, you'll have a lot better luck with a saw.

stuart2054 11-03-2014 00:15

Re: looking at OPR across events
 
I think the "tool concept" is correct. OPR or CCWM are a piece the scouting puzzle. It gives you an "idea" of which teams that you may not be know well to scout in particular to see if their skills compliment yours. I think that finding the right compliments to your teams skill set and strategy is always the key to winning in eliminations. You can't get that from a statistic but it helps focus your scouting, it is not a substitute for it. At the end of the day it is about synergy and co-operation. You cannot get that from a statistic but the statistic can help you manage your scouting effort. if your scouts see something that goes against the statistic, go with it. follow up on it and see if more team members agree. Performance can change radically between competitions due to learning and correction of previous errors. A change in strategy alone can make a big difference. At the end of the day the teams that can work seamlessly together (and have a good skill set) will usually win.

Ed Law 11-03-2014 01:16

Re: looking at OPR across events
 
Okay, I have avoided giving my opinion on OPR for a long time. Some people may assume that since I publish the data, my team must use it a lot. I certainly understand its limitations, and I understand in what situations it will really screw up the data.

We use OPR/CCWM much like other teams who understand its limitations. At events we attend, we do not use OPR, because we have enough resources to capture every statistic we need, plus extra scouts to watch and collect qualitative data for pre-match decisions and alliance selections. We have a very sophisticated Android program, written by a student, that is now close to 20,000 lines of code and collects data on 6 tablets. However, I do look at OPR at the events we attend to see what other teams see. For competitions I did not attend, and especially for pre-scouting at Championship, OPR, sub-OPR, and CCWM are very useful tools. Just like any building project, you need to know the best tool for each task.

Alan Anderson 11-03-2014 11:48

Re: looking at OPR across events
 
Quote:

Originally Posted by David8696 (Post 1356873)
...(lost in semifinals partly due to technical foul by alliance partner)...

If you're going to put that sort of disclaimer on data that doesn't support the thesis, you probably ought to mention those teams whose wins were partly due to fouls committed by opponents. My preference would be to just leave off the commentary and let people look at the raw data without prejudice.

AdamHeard 11-03-2014 17:12

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356474)
I'd like to see an OPR justification for why 254 chose 973 and 2135 at CVR. Until then, I'd much rather rely on actual scouting data for real performance evaluation. This game has more facets to it than usual, and there are more ways of scoring points. Previously, points were mostly scored by putting something into a goal. This year, many more things have to be done to get points (and a win). This year is all about choosing compatible robots, not just ones who can put points on the board by themselves. 973 was second to last in OPR at Central Valley, but does that mean that they didn't perform well? Sure, the top teams that can put sick points up are going to bubble to the top, but after about 20-30 teams (overall), it's a big jumble. Isn't it the teams after 20-30 the ones you really care about?

Well, since OPR only reflects quals, and we essentially couldn't play in quals, we're not the greatest counterexample for OPR here.

Our OPR is so darn low because we didn't meaningfully play in any quals.

David8696 11-03-2014 19:06

Re: looking at OPR across events
 
Quote:

Originally Posted by Alan Anderson (Post 1357308)
If you're going to put that sort of disclaimer on data that doesn't support the thesis, you probably ought to mention those teams whose wins were partly due to fouls committed by opponents. My preference would be to just leave off the commentary and let people look at the raw data without prejudice.

Point taken. I was simply trying to give some perspective. Sorry for any misunderstanding.

Ether 12-03-2014 16:32

Re: looking at OPR across events
 

FWIW, I just posted an analysis of unpenalized alliance score residuals.

http://www.chiefdelphi.com/forums/sh....php?p=1358276



