Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   General Forum (http://www.chiefdelphi.com/forums/forumdisplay.php?f=16)
-   -   looking at OPR across events (http://www.chiefdelphi.com/forums/showthread.php?t=127679)

Michael Hill 10-03-2014 11:54

Re: looking at OPR across events
 
Quote:

Originally Posted by Navid Shafa (Post 1356357)
If you haven't looked, how do you know it's bad?



First and most importantly, I would never make a picklist solely off of quantitative metrics, nor would I pick teams based solely on pit scouting and my assessment after watching a robot play. A good combination of both is what makes holistic scouting so valuable.

In almost every FIRST game, a high-seeded captain is looking for a strong offensive robot. This is exactly what OPR can help you find: you can almost always look at a list of the top teams by OPR and pick one of those.

The second pick is not something you'd want to determine by OPR. Often, you are lucky to find another robot that can play offense at all, especially at small events or districts. This is where things like pit scouting become extremely important. I'd often look for teams with multi-CIM gearboxes, strong 6WD or 8WD bases, and drivers who know how to use them effectively in matches.


I've never done this, nor would I attempt to. A team's seeding rank quite often does not correlate directly with robot performance; many factors impact the tournament rankings. I certainly would not expect OPR to exactly match the seeding. Look back at the 2010 and 2012 seeding systems, where seeding was heavily influenced not only by your partners but also by the opposing alliance. Perhaps if we had an extremely large sample size these kinds of ranking comparisons would become relevant...



This is exactly why I brought up CCWM before. In my opinion, it still accurately captures individual performance across a mixture of different alliances and compositions. If we notice a distinct disparity between a team's OPR and CCWM, we know that the team has done more or less of the scoring for their alliance. It can also help filter out teams that ranked extremely high or low due to really strong or really weak match schedules. E.g., a team with a large CCWM value is generally doing the bulk of the scoring for their alliance.
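For anyone who wants to see the mechanics, OPR and CCWM fall out of the same least-squares fit: one row per alliance per match, with a 1 for each team on that alliance, where OPR fits the alliance score and CCWM fits the winning margin. Here's a minimal sketch in Python; all team numbers and scores below are made up, and alliances are two teams for brevity (real alliances have three):

Code:

import numpy as np

# One row per alliance per match: 1.0 for each team on that alliance.
# OPR solves A x = alliance score in the least-squares sense;
# CCWM solves A x = winning margin (own score minus opponent score).
# Teams, pairings, and scores below are invented for illustration.
teams = [525, 1986, 1806, 1736]
matches = [  # (red alliance, blue alliance, red score, blue score)
    ((525, 1986), (1806, 1736), 120, 85),
    ((525, 1806), (1986, 1736), 140, 110),
    ((525, 1736), (1986, 1806), 95, 130),
]

idx = {t: i for i, t in enumerate(teams)}
rows, scores, margins = [], [], []
for red, blue, red_score, blue_score in matches:
    for alliance, own, opp in ((red, red_score, blue_score),
                               (blue, blue_score, red_score)):
        row = np.zeros(len(teams))
        for t in alliance:
            row[idx[t]] = 1.0
        rows.append(row)
        scores.append(own)         # OPR target
        margins.append(own - opp)  # CCWM target

A = np.array(rows)
opr, *_ = np.linalg.lstsq(A, np.array(scores), rcond=None)
ccwm, *_ = np.linalg.lstsq(A, np.array(margins), rcond=None)
for t in teams:
    print(f"{t}: OPR = {opr[idx[t]]:6.1f}   CCWM = {ccwm[idx[t]]:6.1f}")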

Since you haven't taken a look at it yet, here is some information from your event, Central Illinois:

Spoiler for Qualification Rankings:
1 525
2 1736
3 1986
4 1806
5 1756
6 1747
7 171
8 2081
9 167
10 2481
11 967
12 2704
13 2451
14 4256
15 1208
16 4143
17 2039
18 4212
19 3138
20 2022
21 2164
22 1764
23 3352
24 4196
25 3284
26 1091
27 4786
28 292
29 1288
30 1094
31 648
32 2040
33 4213
34 4296
35 4655
36 4330
37 5041
38 1739
39 4329
40 2194


Spoiler for Rank by OPR:
1 525
2 1986
3 1806
4 1756
5 4256
6 1747
7 1736
8 167
9 2081
10 171
11 2451
12 4143
13 967
14 1208
15 2039
16 1288
17 3284
18 292
19 648
20 5041
21 2481
22 4212
23 3138
24 1094
25 2704
26 1764
27 4213
28 2040
29 1091
30 2164
31 3352
32 4330
33 2022
34 4196
35 4655
36 4786
37 2194
38 4329
39 1739
40 4296


Spoiler for Rank By CCWM:
1 1986
2 525
3 1806
4 1747
5 1736
6 1756
7 171
8 2081
9 2481
10 3284
11 2451
12 967
13 167
14 1208
15 4143
16 2704
17 4256
18 2039
19 4213
20 4330
21 3138
22 1091
23 1288
24 3352
25 4212
26 2164
27 2022
28 5041
29 4196
30 1764
31 4786
32 4655
33 1094
34 2040
35 4296
36 292
37 2194
38 4329
39 648
40 1739


They look pretty accurate to me, especially at the upper end. I would say both of these do a much better job of ranking robot performance than the seeding does, wouldn't you?

Since you haven't had a chance to look at OPR for the events or your team, I posted the link earlier, but I made a page for you with data just from Central Illinois here: Central Illinois OPR/CCWM with Filtering. Enjoy!

I'd like to see an OPR justification for why 254 chose 973 and 2135 at CVR. Until then, I'd much rather rely on actual scouting data for real performance evaluation. This game has more facets than usual, and there are more ways of scoring points. Previously, points were mostly scored by putting something into a goal. This year, many more things have to be done to earn points (and a win). This year is all about choosing compatible robots, not just ones that can put points on the board by themselves. 973 was second to last in OPR at Central Valley, but does that mean they didn't perform well? Sure, the top teams that can put up sick points are going to bubble to the top, but after about 20-30 teams (overall), it's a big jumble. Aren't the teams after that top 20-30 the ones you really care about?

Cory 10-03-2014 12:56

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356474)
I'd like to see an OPR justification for why 254 chose 973 and 2135 at CVR. Until then, I'd much rather rely on actual scouting data for real performance evaluation. This game has more facets than usual, and there are more ways of scoring points. Previously, points were mostly scored by putting something into a goal. This year, many more things have to be done to earn points (and a win). This year is all about choosing compatible robots, not just ones that can put points on the board by themselves. 973 was second to last in OPR at Central Valley, but does that mean they didn't perform well? Sure, the top teams that can put up sick points are going to bubble to the top, but after about 20-30 teams (overall), it's a big jumble. Aren't the teams after that top 20-30 the ones you really care about?

OPR is a BS metric this year. This game isn't about scoring. Pretty much every other game in FIRST has required more than one good offensive robot, so as the #1 seed you are forced to pick the second-best scoring bot at the event. Not this one.

You need three things in this game: a really good front-court robot that will make 90%+ of the shots it takes, has a good intake, and isn't afraid to play some defense; a midfielder that can transition from assisting to defense on the fly; and an inbounder that can quickly pass the ball to the midfielder as well as play a bit of D, or throw some picks to free the midfielder from defense so the pass can be executed.

Two high-scoring robots make for a flashy alliance that looks great on paper, but unless they can transition into those defensive roles on the fly, they're not going to stand up to the alliance above. I don't think you will see examples of more brutal defense than what we saw in the CVR finals. If you can't counter that with equally effective defense of your own, or counter-defense to free up your front-court robot, you're in a world of hurt.

Ether 10-03-2014 13:12

Re: looking at OPR across events
 
Quote:

Originally Posted by Ed Law (Post 1356451)
My spreadsheet automatically ignores the redundancies. If a match is replayed, it will use the later match and ignore the original. Also, I only use the Twitter data for a district/regional event if no match data are missing from that event.

Ed,

I know you do everything humanly possible to coax the truth out of the Twitter data, and the time and effort you invest is much appreciated.

For the Twitter stats reports that I generate, I include the Twitter data even if that data is not 100% complete.

http://www.chiefdelphi.com/forums/sh....php?p=1355882



Lil' Lavery 10-03-2014 13:20

Re: looking at OPR across events
 
Quote:

Originally Posted by Cory (Post 1356523)
OPR is a BS metric this year. This game isn't about scoring. Pretty much every other game in FIRST has required more than one good offensive robot, so as the #1 seed you are forced to pick the second-best scoring bot at the event. Not this one.

You need three things in this game: a really good front-court robot that will make 90%+ of the shots it takes, has a good intake, and isn't afraid to play some defense; a midfielder that can transition from assisting to defense on the fly; and an inbounder that can quickly pass the ball to the midfielder as well as play a bit of D, or throw some picks to free the midfielder from defense so the pass can be executed.

Two high-scoring robots make for a flashy alliance that looks great on paper, but unless they can transition into those defensive roles on the fly, they're not going to stand up to the alliance above. I don't think you will see examples of more brutal defense than what we saw in the CVR finals. If you can't counter that with equally effective defense of your own, or counter-defense to free up your front-court robot, you're in a world of hurt.

This is also exactly why OPR, or more specifically CCWM, could be a great metric this year, given an infinitely large sample size of non-changing machines. Because so much of this game relies on esoteric maneuvers and complementary play, much of a robot's value to an alliance cannot be determined simply by recording how many times it scores or assists. Other concrete metrics, like shooting percentage, possessions, inbounding percentage, lines crossed, or time spent in contact with other robots, could shed more light, but many teams don't track those reliably (or at all), and they still leave gaps in true value. Over a sufficiently large sample, a team's value to an average alliance should become clear through OPR/CCWM. I don't think we'll see a large enough sample to get meaningful results for most middle-tier robots, though. There's too much variability during qualification matches in terms of partners' capabilities, opponents' capabilities, strategy selected, and robot improvement. Not to mention that value to an average alliance is often not identical to value to your particular alliance.
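To put a rough number on "sufficiently large", here's a toy Monte Carlo (every parameter is invented) that assumes the best case for OPR: purely linear, additive scoring with random noise, random 3v3 schedules, and non-changing machines. Even under those idealized assumptions, the fitted OPRs are still quite noisy at a realistic ~10 qualification matches per team:

Code:

import numpy as np

# Toy Monte Carlo: how fast does OPR converge to each team's "true"
# contribution under an idealized linear, additive scoring model?
rng = np.random.default_rng(0)
n_teams = 40
true = rng.normal(30, 10, n_teams)  # invented "true" average contributions

def opr_rmse(matches_per_team, noise_sd=15.0):
    n_matches = n_teams * matches_per_team // 6  # 6 teams play per match
    rows, scores = [], []
    for _ in range(n_matches):
        six = rng.choice(n_teams, 6, replace=False)  # random 3v3 pairing
        for alliance in (six[:3], six[3:]):
            row = np.zeros(n_teams)
            row[alliance] = 1.0
            rows.append(row)
            scores.append(true[alliance].sum() + rng.normal(0, noise_sd))
    est, *_ = np.linalg.lstsq(np.array(rows), np.array(scores), rcond=None)
    return np.sqrt(np.mean((est - true) ** 2))

for m in (10, 40, 160, 640):
    print(f"{m:3d} matches/team -> OPR error (RMSE) ~ {opr_rmse(m):.1f} points")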

Jared Russell 10-03-2014 13:56

Re: looking at OPR across events
 
Quote:

Originally Posted by Lil' Lavery (Post 1356552)
Over a sufficiently large sample, a team's value to an average alliance should become clear through OPR/CCWM.

Maybe for an average qualifications alliance, but not for an average eliminations alliance. Many useful robot attributes cannot be demonstrated in every qualification match due to random alliance pairings. Examples:

1. A great defensive robot doesn't have a big impact in matches where the other alliance would not have been scoring a lot of points anyway.

2. Catching robots require someone to provide a controlled truss shot.

3. Inbounders and assisters require someone to inbound to/assist to.

In each case, OPR/CCWM will tend to systematically underrate these attributes, since they are only utilized in a subset of qualification matches but (in most cases) will have their dependencies met in elims. High-goal scorers and truss shooters can manufacture pretty good scores by themselves and are therefore comparatively overrated by OPR/CCWM.
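A toy simulation of that underrating (every number below is invented): give one "catcher" a 25-point bonus that only materializes when one of a handful of truss shooters lands on its alliance, then fit OPR over a random schedule. The least-squares fit can only credit the bonus in proportion to how often the dependency is met, and it splits even that credit with the truss shooters:

Code:

import numpy as np

# Toy model of a partner-dependent attribute. Team 0 ("catcher") is worth
# +25 points, but only when a truss shooter (teams 1-4) is on its alliance.
rng = np.random.default_rng(1)
n_teams = 24
base = rng.uniform(10, 40, n_teams)  # invented baseline scoring per team
catcher, trussers = 0, {1, 2, 3, 4}

rows, scores = [], []
for _ in range(400):  # random 3v3 schedule
    six = rng.choice(n_teams, 6, replace=False)
    for alliance in (six[:3], six[3:]):
        row = np.zeros(n_teams)
        row[alliance] = 1.0
        pts = base[alliance].sum()
        if catcher in alliance and trussers & set(alliance.tolist()):
            pts += 25.0  # catch bonus only when the dependency is met
        rows.append(row)
        scores.append(pts + rng.normal(0, 5))

opr, *_ = np.linalg.lstsq(np.array(rows), np.array(scores), rcond=None)
print(f"catcher OPR above its base scoring: {opr[catcher] - base[catcher]:.1f}")
print("vs. its value on an elims alliance that always feeds it: 25.0")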

Navid Shafa 10-03-2014 16:22

Re: looking at OPR across events
 
Quote:

Originally Posted by Michael Hill (Post 1356474)
I'd like to see an OPR justification for why 254 chose 973 and 2135 at CVR.

Although I appreciate and use OPR for forecasting and comparing teams, I would have to be stupid to use just OPR to pick an alliance. Any quantitative metric is going to put 973 at the bottom. How can you expect any quantitative system to account for robots that don't function properly, or to their full potential, in a large number of their qualification matches?

I will continue to agree to disagree with your general premises. I will also continue to utilize the systems in place until someone provides a better quantitative substitute.

Karthik 10-03-2014 17:40

Re: looking at OPR across events
 
Quote:

Originally Posted by Jared Russell (Post 1356591)
Maybe for an average qualifications alliance, but not for an average eliminations alliance. Many useful robot attributes cannot be demonstrated in every qualification match due to random alliance pairings. Examples:

1. A great defensive robot doesn't have a big impact in matches where the other alliance would not have been scoring a lot of points anyway.

2. Catching robots require someone to provide a controlled truss shot.

3. Inbounders and assisters require someone to inbound to/assist to.

In each case, OPR/CCWM will tend to systematically underrate these attributes, since they are only utilized in a subset of qualification matches but (in most cases) will have their dependencies met in elims. High-goal scorers and truss shooters can manufacture pretty good scores by themselves and are therefore comparatively overrated by OPR/CCWM.

While I do agree that OPR can underrate these types of teams, I think we're all forgetting exactly how OPR works. Teams that consistently play on high-scoring alliances will continue to get a higher OPR. Thus, teams that are very good at facilitating alliance partners (strong inbounding, passing, catching, etc.) will eventually rise to the top. As Sean mentioned earlier, the sample size may not be large enough to filter out all the issues that Jared mentioned above, but it's still a pretty decent metric, and definitely a useful tool when analyzing the results of an event where you have limited or no access to match video, or no time to watch and break down all of it.

JTEarley 10-03-2014 18:19

Re: looking at OPR across events
 
Anyone know of an app I can use to view this on Android? I've tried Office Suite, but it never loads all the way. Any suggestions?

Navid Shafa 10-03-2014 18:20

Re: looking at OPR across events
 
Quote:

Originally Posted by JTEarley (Post 1356816)
Anyone know of an app I can use to view this on Android? I've tried Office Suite, but it never loads all the way. Any suggestions?

Spyder has both OPR and OPR predictions per event. It's nothing fancy, but you might like that.

Bryce2471 10-03-2014 18:35

Re: looking at OPR across events
 
Quote:

Although I appreciate and use OPR for forecasting and comparing teams, I would have to be stupid to use just OPR to pick an alliance. Any quantitative metric is going to put 973 at the bottom. How can you expect any quantitative system to account for robots that don't function properly, or to their full potential, in a large number of their qualification matches?

I will continue to agree to disagree with your general premises. I will also continue to utilize the systems in place until someone provides a better quantitative substitute.
Navid,
I agree that, as a general rule, OPR does not correctly represent teams with certain strategies. I also agree that it is a highly interesting and useful metric most of the time. I was just wondering if it is another general rule that it's easier for a team with good alliance partners to get a high OPR than for a team with not-so-good partners?

Quote:

While I do agree that OPR can underrate these types of teams, I think we're all forgetting exactly how OPR works. Teams that consistently play on high-scoring alliances will continue to get a higher OPR. Thus, teams that are very good at facilitating alliance partners (strong inbounding, passing, catching, etc.) will eventually rise to the top. As Sean mentioned earlier, the sample size may not be large enough to filter out all the issues that Jared mentioned above, but it's still a pretty decent metric, and definitely a useful tool when analyzing the results of an event where you have limited or no access to match video, or no time to watch and break down all of it.
Karthik,
Again, I completely agree. But I'm just wondering: in general, could a team get a high OPR more easily by going to a more competitive event?

JTEarley 10-03-2014 18:39

Re: looking at OPR across events
 
Quote:

Originally Posted by Navid Shafa (Post 1356819)
Spyder has both OPR and OPR predictions per event. It's nothing fancy, but you might like that.

I like that, but it would be nice to see every team's OPR at once, instead of just within each event.

EDIT: Just realized I posted this in the wrong thread, sorry.

Jared Russell 10-03-2014 18:51

Re: looking at OPR across events
 
Quote:

Originally Posted by Karthik (Post 1356788)
Teams that consistently play on high-scoring alliances will continue to get a higher OPR. Thus, teams that are very good at facilitating alliance partners (strong inbounding, passing, catching, etc.) will eventually rise to the top.

The first statement is of course true, but the coupling of catching, assisting, and defense with other robots' actions means that very capable facilitators may not frequently and consistently play on high-scoring alliances, regardless of the number of matches. (How many triple-assist cycles have you seen go uncompleted because the scorer kept missing and ran out of time?) It all depends on the distribution of robot capabilities at the event. A given team's event OPR/CCWM is conditioned on this distribution, a result of OPR's assumed linear scoring model being fit to the game's non-linear and interdependent scoring functions. (Hence, OPRs between events are not really comparable unless the distributions of robot capabilities are similar.)

I assert that the distribution of robot capabilities in eliminations tends to be very different: across elimination alliances, more robot capabilities tend to be present in total, so partner- or opponent-dependent attributes will contribute more to an alliance than they did on average in qualifications.
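The same kind of toy model (all numbers invented) shows why cross-event comparisons break down: take one "focal" robot whose 15-point bonus depends on having an assister partner, and fit its OPR at two simulated events that differ only in how common assisters are. The fitted value is conditioned on the field, not the robot:

Code:

import numpy as np

# Same focal robot, two events with different capability distributions.
# Its +15 bonus requires at least one assister partner on the alliance.
rng = np.random.default_rng(2)

def focal_opr(assister_frac, n_teams=24, n_matches=400):
    focal = 0
    assister = rng.random(n_teams) < assister_frac
    assister[focal] = False
    base = np.full(n_teams, 20.0)  # everyone else identical, to isolate the effect
    rows, scores = [], []
    for _ in range(n_matches):
        six = rng.choice(n_teams, 6, replace=False)
        for alliance in (six[:3], six[3:]):
            row = np.zeros(n_teams)
            row[alliance] = 1.0
            pts = base[alliance].sum()
            if focal in alliance and assister[alliance].any():
                pts += 15.0  # dependent bonus
            rows.append(row)
            scores.append(pts + rng.normal(0, 5))
    opr, *_ = np.linalg.lstsq(np.array(rows), np.array(scores), rcond=None)
    return opr[focal]

print(f"focal OPR, assister-rich event: {focal_opr(0.5):.1f}")
print(f"focal OPR, assister-poor event: {focal_opr(0.1):.1f}")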

David8696 10-03-2014 19:19

Re: looking at OPR across events
 
In defense of OPR this year, I'd like to look at the data—that is, how accurately OPR has predicted teams' performances in the competitions we've seen thus far. Here are the top 10 teams as ranked by OPR, accompanied by their finishes in the competitions they competed in.

1. 1114—Semifinalist at Greater Toronto East regional (lost in semifinals partly due to technical foul by alliance partner)
2. 2485—Finalist at San Diego regional (also lost finals partly due to tech foul)
3. 3683—Semifinalist at Greater Toronto East (alliance partner with 1114)
4. 33—Winner of Southfield district event
5. 987—Finalist at San Diego (alliance partner with 2485)
6. 624—Winner of Alamo regional
7. 16—Winner of Arkansas regional
8. 254—Winner of Central Valley regional
9. 3147—Finalist at Crossroads regional
10. 3393—Finalist at Auburn Mountain district

In fact, the highest-OPR team that didn't make at least the finals of its regional/district event is 3494 at 12th, followed by 3476 in 13th (whose pickup broke during the semifinals at San Diego). Overall, only seven of the top 30 teams failed to make the finals, and only two failed to make the semifinals of at least one event. As a robotics and statistics nerd, I think these numbers speak for themselves: OPR may not be a perfect metric, but it seems pretty dang accurate at predicting whether a team will finish well.

EDIT: 1114 and 3683 lost in the semifinals. Thanks to Kevin Sheridan for pointing that out.

Kevin Sheridan 10-03-2014 19:20

Re: looking at OPR across events
 
Quote:

Originally Posted by David8696 (Post 1356873)
In defense of OPR this year, I'd like to look at the data—that is, how accurately OPR has predicted teams' performances in the competitions we've seen thus far. Here are the top 10 teams as ranked by OPR, accompanied by their finishes in the competitions they competed in.

1. 1114—Finalist at Greater Toronto East regional (lost finals partly due to technical foul by alliance partner)
2. 2485—Finalist at San Diego regional (also lost finals partly due to tech foul)
3. 3683—Finalist at Greater Toronto East (alliance partner with 1114)
4. 33—Winner of Southfield district event
5. 987—Finalist at San Diego (alliance partner with 2485)
6. 624—Winner of Alamo regional
7. 16—Winner of Arkansas regional
8. 254—Winner of Central Valley regional
9. 3147—Finalist at Crossroads regional
10. 3393—Finalist at Auburn Mountain district

In fact, the top OPR team that didn't make at least finals in their respective regional/district event is 3494 at 12th, followed by 3476 in 13th (whose pickup broke during semifinals at San Diego). Overall, only five of the top 30 teams failed to make the finals, and only two failed to make the semifinals of at least one event. As a robotics and statistics nerd, I think these numbers speak for themselves: OPR may not be a perfect metric, but it seems pretty dang accurate at predicting whether a team will finish well.

1114 and 3683 lost in the semifinals

David8696 10-03-2014 19:21

Re: looking at OPR across events
 
Quote:

Originally Posted by Kevin Sheridan (Post 1356874)
1114 and 3683 lost in the semifinals

My mistake. But I feel like the message holds true: OPR has predicted teams' success with fairly good accuracy.

