Looking at OPR across events

Is it easier for a given team to get a high OPR at a stacked event than at a softer one?

I think that assist points being the first sort in the standings after qualification points forces good teams to work with their partners, even if it means the alliance won’t score as many points.

I also think that assist points being awarded on the 0/10/30 progression gives an advantage to teams attending highly competitive events.
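
To put rough numbers on that, using the 0/10/30 values: five cycles finished as triple assists are worth 5 × 30 = 150 assist points, while the same five cycles completed with single-robot possession are worth 5 × 0 = 0. Alliances full of capable partners compound the tiebreaker advantage very quickly.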

I’m curious to hear what other CD users think.

Is it easier? I’m not sure we can say definitively, because there are quite a few variables in this game. The big dynamic trade-off:

  • You can score more points with less defense on you.
  • You can score more points when you have strong partners.

It becomes very event-dependent, as well as schedule-dependent.

Absolutely. Many of the powerhouses could complete quick single-bot cycles. However, a combination of seeding rules, point maximization, and slow ball returns certainly promotes more assisting in general.

A good match schedule that allows for more triple assists can certainly affect a team’s seeding and tie-breaking ability. It also makes a notable difference in World OPR/CCWM.

*BTW, congrats last weekend! Looking forward to seeing you at District Champs.

I think OPR is a worthless statistic in this game. It works OK in games where there is a lot of individual contribution, not so much with teamwork. 2013 was a great year for OPR, and 2012 wasn’t bad either, but 2014 is not a good year for OPR.

I’d have to strongly disagree. OPR and CCWM are certainly better some years than others; that I won’t argue. After watching Central Illinois, I’m guessing you are disappointed by the numbers you saw at the event or the rating your team has right now. A lot of Central Illinois teams saw rather low OPR/CCWM values because defense was brutal there. I’m sure you can attest to that. If you are trying to compare teams from CI to other events, you are going to be disappointed.

Why not take a look at the database here?
I think you’d be surprised at how good a job it’s doing as a whole; I certainly am. Take a look at some of the other events.

If you still disagree, I understand and respect your opinion. That being said, I’d love to see any other quantitative metrics you are using to rank and evaluate teams; I’m rather obsessed with these kinds of things. :smiley:

To be honest, I haven’t even looked at where my team is. The data just doesn’t lend itself to being useful in this type of game. If you’re using OPR for anything like a picklist, I think you’ll find yourself disappointed. I would also say that trying to validate OPR against regional rankings is useless (why not just go by rank, then, if your OPR matches?). When you have a game that hinges on another team’s ability to complete tasks, OPR will not be a good indicator of performance.

If you haven’t looked, how do you know it’s bad?

First and most importantly, I would never make a picklist solely off quantitative metrics, nor would I pick teams based solely on pit scouting and my own assessment after watching a robot play. A good combination of both is what makes holistic scouting so valuable.

In almost every FIRST game, a high-seeded captain is looking for a strong offensive robot. This is exactly what OPR can assist you in finding: you can almost always look at a list of the top teams in OPR and pick one of those.

The second pick is not something you’d want to determine by OPR. Often you are lucky to find another robot that can play offense at all, especially at small events or districts. Things like pit scouting become extremely important. I’d often look for teams with multi-CIM gearboxes, strong 6WD or 8WD bases, and drivers who know how to use them effectively in matches.

I’ve never done this, nor would I attempt to. A team’s rank in seeding quite often does not directly correlate with robot performance; there are many factors that impact tournament rankings. I certainly would not expect OPR to exactly match the seeding. Look back to 2010 and 2012: those seeding systems were greatly impacted not only by your partners but by the opposing alliance. Perhaps if we had an extremely large sample size, these kinds of ranking comparisons would become relevant…

This is exactly why I brought up CCWM before. In my opinion it still accurately estimates individual performance across a mixture of different alliances and compositions. If we notice a distinct disparity between OPR and CCWM, we know that a team has done more or less of the scoring for their alliance. It can also help weed out teams that ranked extremely high or low due to really strong or really poor match schedules. I.e., a team with a large CCWM value is doing the bulk of the scoring for their alliance in general.
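
For anyone who wants to reproduce these numbers, OPR and CCWM fall out of the same least-squares setup: model each alliance’s score (or its winning margin, for CCWM) as the sum of its three members’ ratings and solve the overdetermined system. Here is a minimal sketch in Python/NumPy with a made-up match log; I don’t know exactly what pipeline the database linked above uses, so treat this as illustrative:

```python
import numpy as np

# Hypothetical match log: (red_teams, blue_teams, red_score, blue_score).
matches = [
    (["525", "1736", "1986"], ["1806", "1756", "1747"], 145, 120),
    (["171", "2081", "167"], ["2481", "967", "2704"], 90, 105),
    # ... one entry per qualification match
]

def solve_ratings(matches, margin=False):
    """Least-squares ratings: OPR when margin=False, CCWM when margin=True."""
    teams = sorted({t for red, blue, _, _ in matches for t in red + blue})
    idx = {t: i for i, t in enumerate(teams)}
    rows, b = [], []
    for red, blue, red_score, blue_score in matches:
        for alliance, own, opp in ((red, red_score, blue_score),
                                   (blue, blue_score, red_score)):
            row = np.zeros(len(teams))
            for t in alliance:
                row[idx[t]] = 1.0          # each alliance member appears once
            rows.append(row)
            b.append(own - opp if margin else own)
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(b), rcond=None)
    return dict(zip(teams, x))

opr = solve_ratings(matches)                    # estimated score contribution
ccwm = solve_ratings(matches, margin=True)      # contribution to winning margin
disparity = {t: opr[t] - ccwm[t] for t in opr}  # big gap = scores a lot but
                                                # doesn't move the margin much
```

With a full event’s worth of matches the system is overdetermined, and the least-squares fit is what averages out schedule luck.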

Since you haven’t taken a look at it yet, here is some information from your event, Central Illinois:

Qualification Rankings

1 525
2 1736
3 1986
4 1806
5 1756
6 1747
7 171
8 2081
9 167
10 2481
11 967
12 2704
13 2451
14 4256
15 1208
16 4143
17 2039
18 4212
19 3138
20 2022
21 2164
22 1764
23 3352
24 4196
25 3284
26 1091
27 4786
28 292
29 1288
30 1094
31 648
32 2040
33 4213
34 4296
35 4655
36 4330
37 5041
38 1739
39 4329
40 2194

Rank by OPR

1 525
2 1986
3 1806
4 1756
5 4256
6 1747
7 1736
8 167
9 2081
10 171
11 2451
12 4143
13 967
14 1208
15 2039
16 1288
17 3284
18 292
19 648
20 5041
21 2481
22 4212
23 3138
24 1094
25 2704
26 1764
27 4213
28 2040
29 1091
30 2164
31 3352
32 4330
33 2022
34 4196
35 4655
36 4786
37 2194
38 4329
39 1739
40 4296

Rank by CCWM

1 1986
2 525
3 1806
4 1747
5 1736
6 1756
7 171
8 2081
9 2481
10 3284
11 2451
12 967
13 167
14 1208
15 4143
16 2704
17 4256
18 2039
19 4213
20 4330
21 3138
22 1091
23 1288
24 3352
25 4212
26 2164
27 2022
28 5041
29 4196
30 1764
31 4786
32 4655
33 1094
34 2040
35 4296
36 292
37 2194
38 4329
39 648
40 1739

They look pretty accurate to me, especially at the upper end. I would say both of these do a much better job of ranking robot performance than the seeding does, wouldn’t you?

Since you haven’t had a chance to look at OPR for the events or your team, I posted the link earlier, but I made a page for you with data just from Central Illinois here: Central Illinois OPR/CCWM with Filtering. Enjoy!

On the subject of OPR/CCWM: has anyone had good success using them this year as a forecasting tool, predicting “end of qualifications” standings from an earlier point in time (i.e., the end of the last full day of qualification matches)? I know many people feel the statistic is trash as a performance indicator, but I have heard of several elite teams using these statistics for forecasting in prior years.

Ideally, this is exactly the type of game you would want to use an OPR or CCWM metric for. With a variety of potential roles within an alliance, a team’s contribution to offense or to winning a match isn’t always obvious to more traditional scouting. However, I don’t think a single event comes anywhere near the sample size required to normalize the data, especially given the alliance-driven nature of the game. Games with very discrete roles are much easier to get a normalized/accurate OPR for, but OPR is less useful in those years because they’re much easier to get meaningful data from scouting. District-model teams, or any teams who compete in 3+ events and 50+ matches, may have enough input data for OPR to be useful, but in-season improvement (of both those teams and the average level of play) once again throws a wrench into how predictive OPR will be for future contributions.
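
For what it’s worth, the naive version of that forecast is easy to sketch: fit OPR on the matches played so far, then call each remaining match for whichever alliance has the larger summed OPR and tally projected win/loss points. This reuses the hypothetical solve_ratings helper from earlier in the thread, and it ignores assist-point tiebreakers and in-event improvement, which is exactly where it falls down:

```python
from collections import defaultdict

def forecast_standings(played, remaining):
    """Project end-of-quals win/loss points. `played` uses the hypothetical
    match-log format from the earlier sketch; `remaining` holds the unplayed
    schedule as (red_teams, blue_teams) pairs."""
    opr = solve_ratings(played)
    points = defaultdict(float)
    for red, blue, red_score, blue_score in played:   # credit actual results
        if red_score == blue_score:
            for t in red + blue:
                points[t] += 1.0                      # tie
        else:
            for t in (red if red_score > blue_score else blue):
                points[t] += 2.0                      # win
    for red, blue in remaining:    # predict the rest by comparing summed OPRs
        red_pred = sum(opr.get(t, 0.0) for t in red)
        blue_pred = sum(opr.get(t, 0.0) for t in blue)
        for t in (red if red_pred >= blue_pred else blue):
            points[t] += 2.0
    return sorted(points.items(), key=lambda kv: -kv[1])
```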

Which OPR are we talking about, the one with or without penalties? The one with penalties feels hyper-inflated: some of the top OPRs have more than 25% of their score coming from penalty points. While it matches well with seeding and the other metrics (because penalty points are big this year), I don’t think it shows the actual power of a team. (The Blue Alliance uses OPR with penalties included, I believe.)

Unfortunately, removing awarded foul points from the final score requires using Twitter data, which is incomplete and may contain errors or redundancies. With that caveat in mind, I’ve posted the Twitter-data-based unpenalized final scores in the thread linked below. I think Ed Law’s OPR spreadsheet also uses Twitter data to remove awarded foul points, but I haven’t yet been able to find a viewer that can open xlsm files with Excel 2000. Ed graciously made an attempt to create an xls version of his spreadsheet but was unsuccessful.

http://www.chiefdelphi.com/forums/showthread.php?t=127619
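
If you do manage to line the Twitter foul data up with the match log, stripping the awarded foul points before the fit is straightforward. A sketch, assuming the fouls have been parsed into a list parallel to the hypothetical match log from earlier in the thread:

```python
def strip_fouls(matches, fouls):
    """fouls[i] = (red_foul_pts, blue_foul_pts) awarded in matches[i],
    per the (incomplete, possibly redundant) Twitter data."""
    return [(red, blue, red_score - red_f, blue_score - blue_f)
            for (red, blue, red_score, blue_score), (red_f, blue_f)
            in zip(matches, fouls)]

# unpenalized_opr = solve_ratings(strip_fouls(matches, fouls))
```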

LibreOffice opens his spreadsheet fine under Linux. I assume it would work for Windows too.

Ben, I saw some OPR-based calculations at our first district event, and I wouldn’t trust them 100% for all matches, but as a guide to how difficult a match is going to be, and/or as a comparison of one alliance’s strength to another, they’re definitely useful.

The problem with using OPR in a game like Aerial Assist is that there are some machines that can contribute greatly to an alliance’s performance but don’t necessarily score a lot of points themselves. Take for example the ‘ideal’ third robot on an alliance: one that can inbound and collect effectively but may not score often, if ever. If this robot is rarely paired with someone who can score, its strengths aren’t necessarily reflected in the match score, which trickles down to its OPR.

Ah wow thanks, fantastic data you have there.

Thanks Ryan.

What version of LibreOffice? What Linux distro? You said the spreadsheet opens fine. Do the macros work too?

I know OpenOffice and LibreOffice are not identical, but since I have OpenOffice 3.3-5 installed here under Windows XP Pro SP3, I tried it. It’s been loading for about 5 minutes and according to the status bar is only about 15% complete. Not looking good.

I’ll put LibreOffice in my Slacko Linux partition and give it a try.

My spreadsheet automatically ignores the redundancies. If a match is replayed, it uses the later match and ignores the original. Also, I only use the Twitter data for a district/regional event if no match data are missing from that event.
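
That replay rule is simple to replicate for anyone else parsing the feed. A sketch, assuming each Twitter record has been parsed into a dict with hypothetical event and match keys, and that records arrive in broadcast order so a replay naturally overwrites the original:

```python
def dedupe_replays(records):
    """Keep only the latest record per (event, match) key."""
    latest = {}
    for rec in records:                             # in broadcast order
        latest[(rec["event"], rec["match"])] = rec  # replay overwrites original
    return list(latest.values())
```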

I’d like to see an OPR justification for why 254 chose 973 and 2135 at CVR. Until then, I’d much rather rely on actual scouting data for real performance evaluation. This game has more facets than usual, and there are more ways of scoring points. Previously, points were mostly scored by putting something into a goal; this year, many more things have to be done to get points (and a win). This year is all about choosing compatible robots, not just ones who can put points on the board by themselves. 973 was second to last in OPR at Central Valley, but does that mean they didn’t perform well? Sure, the top teams that can put sick points up are going to bubble to the top, but after the first 20-30 teams (overall), it’s a big jumble. Aren’t the teams after that the ones you really care about?

OPR is a BS metric this year. This game isn’t about scoring. Pretty much every other game in FIRST has required more than one good offensive robot, so as the 1 seed you were forced to pick the second-best scoring bot at the event. Not this one.

You need three things in this game:

  • A really good front-court robot that makes 90%+ of the shots it takes, has a good intake, and isn’t afraid to play some defense.
  • A midfielder that is able to transition from assisting to defense on the fly.
  • An inbounder that can quickly pass the ball to the midfielder, as well as play a bit of D or throw some picks so the midfielder can get away from defense to execute the pass.

Two high-scoring robots is the flashy alliance that looks great on paper, but unless they can transition into those defensive roles on the fly, they’re not going to stand up to the alliance above. I don’t think you will see examples of more brutal defense than what we saw in the CVR finals. If you can’t counter that with equally effective defense of your own, or with counter-defense to free up your front-court robot, you’re in a world of hurt.

Ed,

I know you do everything humanly possible to coax the truth out of the Twitter data, and the time and effort you invest is much appreciated.

For the Twitter stats reports that I generate, I include the Twitter data even if that data is not 100% complete.

http://www.chiefdelphi.com/forums/showthread.php?p=1355882

This is also exactly why OPR, or more specifically CCWM, could be a great metric this year, given an infinitely large sample size of non-changing machines. Because so much of this game relies on esoteric maneuvers and complementary play, much of a robot’s value to an alliance cannot be determined simply by recording how many times it scores or assists. Some other concrete metrics, like shooting percentage, possessions, inbounding percentage, lines crossed, or time spent in contact with other robots, could shed more light, but many teams don’t track those reliably (or at all), and they still leave gaps in true value. Over a sufficiently large sample, a team’s value to an average alliance should become clear with OPR/CCWM. I don’t think we’ll see a large enough sample to get meaningful results for most middle-tier robots, though. There’s too much variability during qualification matches in terms of partners’ capabilities, opponents’ capabilities, strategy selected, and robot improvement. Not to mention that value to an average alliance is often not identical to value to your particular alliance.

Maybe for an average qualifications alliance, but not for an average eliminations alliance. Many useful robot attributes cannot be demonstrated in every qualification match due to random alliance pairings. Examples:

  1. A great defensive robot doesn’t have a big impact in matches where the other alliance would not have been scoring a lot of points anyway.

  2. Catching robots require someone to provide a controlled truss shot.

  3. Inbounders and assisters require someone to inbound to/assist to.

In each case, OPR/CCWM will tend to systematically underrate these attributes, since they are only utilized in a subset of qualification matches and (in most cases) will have their dependencies met in elims. High-goal scorers and truss shooters can manufacture pretty good scores by themselves and are therefore comparatively overrated by OPR/CCWM.
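
That underrating effect is easy to see in a toy simulation. Below, one made-up team is a pure inbounder worth an extra 25 points, but only when the random schedule happens to pair it with one of two scorers; its fitted OPR lands well under the ~30 points it would contribute alongside a scorer in elims. This reuses the hypothetical solve_ratings helper from earlier in the thread, and all the numbers are invented:

```python
import random

random.seed(0)

# Toy event: 12 teams. "T12" is a pure inbounder whose 25-point bonus only
# materializes when it is paired with one of the two "scorer" teams.
teams = [f"T{i}" for i in range(1, 13)]
base = {t: random.uniform(10, 40) for t in teams}
base["T12"] = 5.0                                # barely scores on its own
scorers = {"T1", "T2"}

def alliance_score(alliance):
    pts = sum(base[t] for t in alliance)
    if "T12" in alliance and scorers & set(alliance):
        pts += 25.0                              # conditional assist value
    return pts

sim_matches = []
for _ in range(60):                              # 60 random qual pairings
    six = random.sample(teams, 6)
    red, blue = six[:3], six[3:]
    sim_matches.append((red, blue, alliance_score(red), alliance_score(blue)))

opr = solve_ratings(sim_matches)
# T12's OPR reflects only the fraction of matches where its dependency was
# met, not the ~30 points it is worth next to a scorer in elims.
print(round(opr["T12"], 1), "fitted OPR vs. ~30 alongside a scorer")
```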