A True 2010 World Ranking

I have been thinking about doing this for a long time and I finally did it.

I took all the district, regional (including Israel) and championship data, including qualifying and elimination round matches, and assembled it into a giant 1799 x 1799 matrix. Then I solved for OPR and CCWM. I was not sure whether my laptop could handle it, but it ran without any problem in about 5 minutes.
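For anyone who wants to reproduce the math, here is a rough sketch of the setup (a minimal NumPy version with an assumed match format; my actual implementation differs in detail). Each alliance in each match contributes one equation: the sum of the teams' OPRs equals the alliance score, and for CCWM the right-hand side is the winning margin instead.

```python
import numpy as np

def solve_opr_ccwm(matches, teams):
    """Season-wide least-squares OPR and CCWM.

    matches: list of (red_teams, blue_teams, red_score, blue_score)
    teams:   list of every team number seen all season (1799 in 2010)
    """
    idx = {t: i for i, t in enumerate(teams)}
    rows, scores, margins = [], [], []
    for red, blue, red_score, blue_score in matches:
        for alliance, own, opp in ((red, red_score, blue_score),
                                   (blue, blue_score, red_score)):
            row = np.zeros(len(teams))
            for t in alliance:
                row[idx[t]] = 1.0       # team t played on this alliance
            rows.append(row)
            scores.append(own)          # OPR target: alliance score
            margins.append(own - opp)   # CCWM target: winning margin
    A = np.array(rows)
    # The normal-equations matrix A.T @ A is the 1799 x 1799 system
    # described above; lstsq solves the same problem stably.
    opr = np.linalg.lstsq(A, np.array(scores), rcond=None)[0]
    ccwm = np.linalg.lstsq(A, np.array(margins), rcond=None)[0]
    return opr, ccwm
```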

The purpose of this is to avoid comparing OPR/CCWM between different events and the argument that some events are stronger than others. It also eliminates the question of which OPR/CCWM data to use: latest, average, weighted average, etc. Since all the matches are included, all interactions between teams are taken into account. This is the ultimate way of calculating a world ranking.

The data can be found in the following white paper: http://www.chiefdelphi.com/media/papers/2174

The file name is Team_2834 2010_Scouting_Database Championship v6.zip

Enjoy reading through the data. If you find any errors, please let me know.

Eh… is this really necessary? It’s not like the NFL or NCAA football or something, but good job on your efforts, I guess.

edit: LOL, I think NCAA football should replace the BCS with this system

It’s actually a lot like both! Think of teams as players and alliances as teams and it’s very analogous.

Woah. This is pretty amazing… Indeed, I’d say it’s the ultimate scouting resource. You sure put a lot of work into this- good job!

Is this supposed to have 2010 data? I looked up my team and it returned our 2009 records (which we honestly were horrible in)

There is a file further down that has 2010 data. I actually made the same mistake.

Great data.

Why not make it into a website? I personally think it would be 10x easier to update and distribute than an Excel spreadsheet.

That’s basically what we did with FRC-DB :smiley:
ahem Sorry, shameless plug, but I couldn’t help it.

The data is mainly interesting to people involved with scouting and alliance selection. It may be a good idea to put the data on a website, but there are a couple of reasons I don’t:

(1) If the purpose is only to search past events from static data, then a website makes sense. However, my spreadsheet can be used in real time during a competition with real-time data. Unfortunately, not every venue gives us internet access, so a spreadsheet works better.
(2) Many teams use my data and customize/enhance it to fit their needs. It is easiest for them if I give them the data in a spreadsheet.

Thanks, Ed, for posting this up. Is there any chance you could put a Word file in the file tree that explains (ever so basically) OPR vs. CCWM? I know it is in the thread of the original file structure, but a stand-alone Word doc would be interesting.
My special interest in this is how strategy may affect those metrics. Certain strategies affected OPR and CCWM differently.

There is a PDF that explains the difference.

These rankings are interesting. It seems that OPR is the better metric this season (in my opinion); however, I think this is mainly due to this year’s ranking algorithm. It also seems that the Israel regional greatly skews the CCWM data: there are multiple teams in the top 20 (by CCWM) with only 4 matches. I checked a couple, and they competed only in Israel.

It is already there, on pages 16 and 17 of the updated presentation file. The file name is “Team 2834 Scouting Database Presentation (Updated in 2010)”. Those two pages discuss the interpretation of OPR and CCWM. I will try to expand on that in the future.

I agree with you regarding OPR being the better metric this year. That is what I used. I did not use CCWM at all. I computed it just for completeness.

I don’t quite understand why CCWM was affected by the Israel regional while OPR was not. My only interpretation of the data is that the very high OPR numbers put up by some teams were greatly reduced when the few Israel teams went to Atlanta and did not put up those numbers. The effect is that all Israel teams get a lower OPR than they got at the Israel regional because of that. However, perhaps the Israel teams that went to Atlanta contributed to the winning margin about the same as they did at the Israel regional, so the Israel teams’ CCWM data did not get reduced. This is just a guess, and I did not check the actual match results of those teams.
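To illustrate the mechanism with made-up numbers: a team whose alliances scored around 20 and won by 10 in Israel, but then scored around 8 while still winning by 10 in Atlanta, would see its season-wide OPR pulled down (the score equations changed) while its CCWM held roughly steady (the margin equations did not).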

Thanks, I forgot about that. Pretty simple ID10T error on my part.

I was a bit shocked when comparing our OPR vs. CCWM rank until I re-read the paper. Basically, our strategy this year was to have as low a winning margin as possible while still securing the win (since loser points were worth double).

That’s very cool! :smiley:

I just found it curious to look at the championship finalists and see how they fell out in OPR. Based on this, I would say the finalist alliance was not a surprise, but the championship alliance was if you look at the straight numbers. Looking at average ranking, however, the championship alliance was clearly the highest ranked.

What does this mean? Probably nothing other than numbers don’t tell the whole story, which is just something I like to remind people occasionally.

Championship Winners
67 = 4
294 = 41
177 = 37
Avg = 27

Championship Finalists
1114 = 1
469 = 3
2041 = 168
Avg = 57

Galileo Champs
2056 = 2
1625 = 56
3138 = 30
Avg = 29
(EDIT: Originally had 88 from copying the total cell not the average.)

Archimedes Champs
254 = 5
233 = 22
3357 = 151
Avg = 59
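If anyone wants to double-check my averages, here is a quick sketch (plain Python, ranks copied from the lists above):

```python
# Average world OPR rank for each Atlanta alliance listed above.
alliances = {
    "Championship Winners":   [4, 41, 37],
    "Championship Finalists": [1, 3, 168],
    "Galileo Champs":         [2, 56, 30],
    "Archimedes Champs":      [5, 22, 151],
}
for name, ranks in alliances.items():
    print(f"{name}: avg = {sum(ranks) / len(ranks):.0f}")
```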

I am going to sound defensive here, although I am not defending my data; this is not my data, so I am just defending data in general.

Numbers don’t tell the whole story if you randomly pull some numbers out and manipulate them without understanding where the numbers come from. You can make the data look bad or look good depending on how you use it.

First of all, the published OPR number takes the whole season into account. Team 294, which had an awesome robot in Atlanta, only ranked 41 in the world because their OPR was only 1.5 at the San Diego regional. They then improved their OPR to 4.0 at the Los Angeles regional. On the Newton field, their OPR was a whopping 7.1; they were the #1 alliance captain and picked team 67, with an OPR of 7.3, in the first round. However, over the whole season their OPR was 3.9, which is why they were ranked 41. And team 177, a #16 pick on one of 4 fields, is ranked 37 worldwide. You can’t find a better alliance than this.

I am not sure what you are trying to prove when you say the championship alliance was a surprise.

I was referring to whether the mean was more important or having 2 of the 3 highest-ranking teams. Which would you be more likely to predict to win? All I was getting at is that most people would think it would go the other way. I wasn’t trying to find deeper meaning, just to propose a question I found interesting while looking at the data.

I also agree with you that looking at data from just one event versus the whole season is different when taken out of context. Trending up or down can be an aberration based on the competition at an event rather than a meaningful performance change. I believe that to have very reliable data we all need to play more matches.

Ed, I’m glad you posted this, because, as I intended to get across, while most people thought the championship was an upset, this says the winners should have been obvious. I wonder if someone with time on their hands could find an alliance with a higher average OPR, or the highest combined OPR, and see what that data might show.

Sorry Peter, I did not look at what team you are from. Otherwise I would not have misinterpreted your intention.

It was not an upset. My spreadsheet, based on the Atlanta qualifying rounds in each division, predicted you to win 17 to 16. And it was a good prediction compared to the actual results. And yes, there was a higher combined OPR: 233/254/3357 had a combined OPR of 18. However, we are comparing OPRs obtained from different fields, so we have to be careful.
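For what it’s worth, the basic idea behind that kind of prediction is just summing each alliance’s OPRs (a minimal sketch; the function and the dict of OPRs below are placeholders, and my spreadsheet does more bookkeeping than this):

```python
def predict_match(red, blue, opr):
    """Predict a match by summing each alliance's OPRs.

    red, blue: lists of team numbers on each alliance
    opr:       dict mapping team number -> OPR (e.g. from division quals)
    """
    return sum(opr[t] for t in red), sum(opr[t] for t in blue)

# e.g. predict_match([67, 294, 177], [1114, 469, 2041], einstein_oprs)
# where einstein_oprs is a placeholder for the qualifying-round OPRs
```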