OPR after Week Two Events

The OPR/CCWM numbers have been posted, please see

http://www.chiefdelphi.com/media/papers/2174

There are a number of points I would like to make:

  1. Central Valley data is now complete.
  2. One match in New York regional is missing. Based on time of match, it looks like it is a replay match but there is no score. I put in 0-0 for now so I can post the results. I will recalculate the OPR/CCWM once that match data is available.
  3. Toronto East regional is missing the finals results. However, this does not affect the posted data at all, since I know the outcome of the finals from other CD threads.
  4. Northern Lights regional data is missing.
  5. FIRST changed the headings of the Team Standings page this week. Last week they were still using last year’s headings (HP, BP, TP and CP). This week they changed them to AP, CP and TP (with CP now meaning Climb Points) and dropped the Coopertition Points column. Unfortunately they did not repost the Week One events with the new headings, which creates problems in my spreadsheet and unnecessary work.

Enjoy the data!

If you find any error or have any questions, please let me know.

Wow, thank you so much for this! We are 27th… And great job 987 on being 2nd!

So… OPR vs CCWM. I understand how they’re calculated, I’m just not sure what the meaning of the distinction is.

Many high-end teams have equal or nearly equal OPR and CCWM ranks (2056, 1114, 1986, 610, 987, 118), but other strong teams, like 4343 and 1241, have significantly higher CCWM ranks than OPR ranks. I wonder what that means.

Anybody have an idea of why this phenomenon occurs?

That said, even MORE proud of the kids on 4343 for being 8th in the world for CCWM after 2 weeks of play.
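
Since the question is about what the distinction means, here’s a rough sketch of the two calculations as I understand them; this is just the standard least-squares setup, not necessarily Ed’s exact spreadsheet, and the teams and scores below are made up. OPR is fit to each alliance’s own score, while CCWM is fit to the winning margin, so a team whose alliances win by a lot despite modest scores can rank noticeably higher in CCWM than in OPR.

```python
# A minimal sketch, not Ed's actual spreadsheet: standard least-squares OPR and
# CCWM on made-up match data. Each alliance-score row gets a 1 for each member.
import numpy as np

teams = [111, 222, 333, 444, 555, 666]
# Each match: (red alliance, blue alliance, red score, blue score) -- all invented
matches = [
    ((111, 222, 333), (444, 555, 666), 80, 55),
    ((111, 444, 555), (222, 333, 666), 62, 70),
    ((333, 555, 111), (222, 444, 666), 90, 48),
    ((222, 444, 111), (333, 555, 666), 51, 66),
]

idx = {t: i for i, t in enumerate(teams)}
rows, scores, margins = [], [], []
for red, blue, red_score, blue_score in matches:
    for alliance, own, opp in ((red, red_score, blue_score),
                               (blue, blue_score, red_score)):
        row = np.zeros(len(teams))
        for t in alliance:
            row[idx[t]] = 1.0          # team t was on this alliance
        rows.append(row)
        scores.append(own)             # OPR is fit to the alliance's own score
        margins.append(own - opp)      # CCWM is fit to the winning margin

A = np.array(rows)
opr, *_ = np.linalg.lstsq(A, np.array(scores), rcond=None)
ccwm, *_ = np.linalg.lstsq(A, np.array(margins), rcond=None)

for t in teams:
    print(f"{t}: OPR {opr[idx[t]]:6.1f}   CCWM {ccwm[idx[t]]:6.1f}")
```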

Ed,

At Waterford, 4 of our 12 matches were played with only a two-team alliance. Do you apply any factor for that in the OPR and CCWM calculations?

Northern Lights regional is missing, but I’ve also noticed that the FIRST website hasn’t been updated either.

Are the Auto, Climb and Teleop OPRs on the WorldRank sheet supposed to match the individual events teams have played so far? It looks like they did in last week’s version, but this week, for instance, has 2056’s OPR-T at 2.13. (Crazy enough on its own, but GTR-E has it at 46.)

EDIT: It looks like columns A-T got sorted after everything was calculated. 1640’s columns AA-AC look to be listed with team 4557 at rank 290. 2590’s are with 3755 at 409th. Is there a simple way to fix it?

Yes, I made a mistake. I sorted the teams by world OPR and forgot to include the other columns. I reposted the corrected file. If you don’t want to download it again, all you have to do is first resort by Team number, then sort again with those extra columns included. Sorry for the inconvenience. It was very late at night after an exhausting weekend of competition.

OPR and CCWM are calculated whether a team shows up on the field or not; it is no different than if a team showed up but the robot did not work at all during the match. Yes, it will hurt you a little in OPR/CCWM, but not that much as long as you do well in the other matches where you have two alliance partners. It hurts teams much more in OPR/CCWM when they never have a functioning robot or never get on the field at all.

If a team’s score is only high when there is a strong partner and it is low without a strong partner, then their OPR/CCWM will be low and rightly so.

Are individual event OPRs supposed to match the world rank ones, even if a team has only competed once? They seem to be off by a bit; not much, but a bit.

At this point, there are 16 teams that have played in 2 events. Even if your team hasn’t played in more than one event, if you’ve played at an event with a team that played twice, all of that team’s matches are taken into account when calculating your OPR. This provides a minimal connection between events, and explains why the world rankings are slightly different from the individual event rankings.

At events where no team had played twice, the OPRs should match. Those events are Central Valley, Hatboro Horsham, San Diego, Lake Superior, Northern Lights, and Oregon.
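
To make the cross-event link concrete, here’s a small sketch using the same least-squares OPR as above; the teams and scores are invented. Hypothetical team 999 plays at both events, and that single overlap is what ties the two systems together when every match is solved at once for the world OPR.

```python
# Sketch only: event OPR solves each event on its own, world OPR solves all
# matches together. Team 999 playing at both (made-up) events is the only link.
import numpy as np

def opr(matches, teams):
    idx = {t: i for i, t in enumerate(teams)}
    A, b = [], []
    for alliance, score in matches:            # one row per alliance score
        row = np.zeros(len(teams))
        for t in alliance:
            row[idx[t]] = 1.0
        A.append(row)
        b.append(score)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return {t: round(float(v), 1) for t, v in zip(teams, x)}

event1 = [((111, 222, 999), 70), ((111, 333, 999), 55),
          ((222, 333, 999), 64), ((111, 222, 333), 41)]
event2 = [((444, 555, 999), 80), ((444, 666, 999), 47),
          ((555, 666, 999), 73), ((444, 555, 666), 52)]

print(opr(event1, [111, 222, 333, 999]))                 # event OPRs for event 1
print(opr(event1 + event2,                               # "world" OPRs: one combined solve,
          [111, 222, 333, 444, 555, 666, 999]))          # so 999's second event shifts them slightly
```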

You can be a judge of whether it’s worth the effort (I’m not sure if the information is available to do so), but teams’ OPR would be slightly more accurate if DQed teams were not included in their alliance score equation. It’s an additional given that’s being ignored. Also, the DQed team gets an OPR for only the matches they played, which I’d consider more fair.

If the information is available by match (which I’m not sure it is), does it inherently increase accuracy? The only DQ I’ve seen so far is from G27. Not that DQs are common in any sense, but a robot that G27s could well have been a major contributor to the match score. (We were once red-carded at an off-season event when our robot went haywire at the end of a match we’d helped win.)

This scoring contribution is not true of other potential DQs, for instance the entire team no-showing or playing without clearing inspection, but it does apply to some. It could work if you had Disable information or no-show robots (vs. 5.5.6 no-show teams), but DQs might be a wash this year.

The reason the data for Northern Lights is missing is the special match they played between the Northern Lights winners and the Lake Superior winners. When they tried to sync the robots to the field, they couldn’t because of the FMS, and somehow in that process the match data was also wiped from the FRC system. Hopefully they will repost the data sometime this week.

What do you guys think of OPR this year? Comparing the OPR rankings at WPI against the actual scouting data we took, it seems better than using qualification ranking to sort teams, but still pretty noisy. I know our team’s OPR was a bit low; I bet that was due to drawing a lot of matches with other good teams where the score was lower than you’d expect (robot failures, etc.). The other teams are ranked at least in the right ballpark, but not in a very solid order. 1100’s OPR rank of #6 in particular is criminally low.

OPR is a good representation of how good a team is this year. Without a coopertition award, coopertition points, or a weird ranking system, every team is simply trying to score as much as possible. However, for regionals with lots of teams and not that many matches, there is still plenty of luck of the draw, and the qualification ranking becomes close to meaningless. In that case, OPR will still tell the truth about a team, because data does not lie; only people do.

I did a little work comparing against our actual scouting data, and it seemed that while teams like ours had a reasonably accurate OPR, it’s really easy for a team’s OPR to balloon this year, so in a regional-ranking sense it’s less accurate overall. Technical fouls and fouls aren’t removed from the data, so teams with a playstyle that draws fouls, or that simply played worse opponents, get an advantage in OPR.

At WPI, a few teams happened to have their non-functional matches paired with other good teams, and OPR doesn’t really know how to separate that out. Additionally, defense is huge this year, making this game less separable than other games. So while our average contribution to a match might be close to our OPR, other teams were a ways off in one direction or the other due to scheduling oddities or hella technical fouls.

Sorry, I meant to refer specifically to no-show DQs (which were specifically mentioned by the previous poster). I don’t think there’s match-by-match information on DQs, let alone reason-for-DQ information… Basically, you’re right.

As for how well OPR is doing, the metric I typically use is the percentage of qualification matches predicted correctly. I have 2013 OPR nearly a percentage point ahead of where 2012 OPR was at this point last year (81.5% vs. 80.6%), though the result isn’t statistically significant (fwiw, p = 0.29).
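
For anyone who wants to run that kind of comparison themselves, a two-proportion z-test is one way to do it; the match counts below are placeholders I made up, not the actual totals behind the 81.5% and 80.6% figures.

```python
# Two-proportion z-test sketch; the counts are placeholders, not the real data.
from math import sqrt
from scipy.stats import norm

correct_2013, total_2013 = 815, 1000   # hypothetical: matches predicted right in 2013 so far
correct_2012, total_2012 = 806, 1000   # hypothetical: same point in 2012

p1, p2 = correct_2013 / total_2013, correct_2012 / total_2012
p_pool = (correct_2013 + correct_2012) / (total_2013 + total_2012)
se = sqrt(p_pool * (1 - p_pool) * (1 / total_2013 + 1 / total_2012))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))          # two-sided p-value
print(f"z = {z:.2f}, p = {p_value:.2f}")
```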

BaselA,

So do you sum the three OPRs on each alliance and use the higher total to predict the match winner?

Yes. There are some problems with this method (e.g. in 2011, when 3 teams with great minibots were on an alliance), but I don’t think there’s a better way to do it.
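
In code, that rule is only a few lines. A minimal sketch, with an invented OPR table and match outcomes, and with unknown or rookie teams defaulting to 0 (as mentioned below):

```python
# Minimal sketch of the prediction rule: sum each alliance's OPRs, pick the
# higher total. The OPR table and match outcomes here are invented.
def predict_winner(red, blue, opr):
    red_total = sum(opr.get(t, 0.0) for t in red)    # missing/rookie teams count as 0
    blue_total = sum(opr.get(t, 0.0) for t in blue)
    return "red" if red_total > blue_total else "blue"

opr = {111: 42.0, 222: 18.5, 333: 9.0, 444: 35.5, 555: 21.0, 666: 12.5}
matches = [((111, 222, 333), (444, 555, 666), "red"),
           ((111, 555, 666), (222, 333, 444), "blue")]

hits = sum(predict_winner(r, b, opr) == actual for r, b, actual in matches)
print(f"predicted {hits} of {len(matches)} correctly")
```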

Also, just for fun, 2012 OPR (using each team’s average across all of its 2012 event OPRs, though they’re all pretty similar) is predicting 2013 matches at about 61% (counting any rookies as OPR = 0). Thanks to Ed for the OPRs and Ether for the Twitter Match Data. Not sure what I’d do without you two.

Edit: I don’t want to post too many times, but there are a couple of different things here. One is OPR as a tool to predict what will happen. Ed’s reply below is pretty much exactly what I do for predicting matches (except real-time OPR; that’s something I’d like to do in the future). In this case, I’m talking about how well OPR evaluates teams this year vs. other years, for which I used post-event OPRs. You can’t hit 80% predicting matches without real-time data.

I actually use a few ways to predict match results. When there is sufficient data, like in Weeks 5-7 and at the World Championship, I use the historical world OPR and each team’s highest event OPR. I also use the OPR calculated in real time for that event. Using one of them, I predict the rest of the matches and the final ranking. It is useful to have some idea ahead of time.
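
For anyone curious what the real-time piece might look like, here is a rough sketch under assumptions of my own (invented teams, schedule, and scores): re-solve the least-squares OPR from the matches played so far, then call the remaining schedule with alliance OPR sums.

```python
# Sketch of a "real-time" event OPR: solve OPR from the matches played so far,
# then use alliance OPR sums to predict the rest of the schedule. All data invented.
import numpy as np

def opr_from(played, teams):
    idx = {t: i for i, t in enumerate(teams)}
    A, b = [], []
    for red, blue, red_score, blue_score in played:
        for alliance, score in ((red, red_score), (blue, blue_score)):
            row = np.zeros(len(teams))
            for t in alliance:
                row[idx[t]] = 1.0
            A.append(row)
            b.append(score)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return dict(zip(teams, x))

def predict(red, blue, opr):
    return "red" if sum(opr[t] for t in red) > sum(opr[t] for t in blue) else "blue"

teams = [111, 222, 333, 444, 555, 666]
played = [((111, 222, 333), (444, 555, 666), 80, 55),     # scores already known
          ((111, 444, 555), (222, 333, 666), 62, 70),
          ((333, 555, 111), (222, 444, 666), 90, 48),
          ((222, 444, 111), (333, 555, 666), 51, 66)]
upcoming = [((111, 666, 222), (333, 444, 555)),           # scores not yet known
            ((444, 222, 555), (111, 333, 666))]

live_opr = opr_from(played, teams)                        # re-run this after every completed match
for red, blue in upcoming:
    print(red, "vs", blue, "->", predict(red, blue, live_opr))
```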