View Full Version : Week 4 OPR
Navid Shafa
29-03-2011, 00:58
Since a comparative OPR isn't really available on the global scale at the moment, I was curious to see how we ranked amongst other teams. I thought that we might have a chance this year of "playing with the big boys," so I took all* of the Week 4 regionals' OPR data and merged it to create this list for myself. I was absolutely shocked to see that we are even relatively close to some of these powerhouses.
Rank..Team #.......OPR
1.......1114...........71.88
2.......2363...........57.9409
3.......359.............51.8483
4.......2056...........51.65
5.......1983...........49.2999
6.......180.............49.1931
7.......768.............47.8643
8.......3098...........45.6004
9.......63...............45.5159
10.....1056...........42.4613
Bear in mind this was meant for personal use only; I'm just sharing in case anyone was interested. Sorry if I'm treading on other OPR organizers' feet.
As of right now, all I have shown are the top 10 teams from week 4 regionals. I plan on taking OPR data from all of the regionals and ranking the top 25 so far in the same manner, but it will take me a bit of time...
This list was generated using OPRNET, and I plan on using all of the regionals to include every team that has played so far. When I do, I will count each team's best OPR from any one regional when I rank the top 25 teams by OPR so far this season.
Enjoy! :D
*EDIT [Clarification]: Some data was unavailable via OPRNET, so this may not be an entirely accurate representation. It's rather close, however; I apologize if I upset anyone because of it. As I said above, it was originally created for personal use.
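The merge described above — keep each team's best single-event OPR across every regional, then rank — can be sketched roughly as follows. The event names and numbers here are made up purely for illustration, not real Week 4 results.

```python
# Hypothetical per-event OPR tables; real data would come from OPRNET output.
event_oprs = {
    "Midwest": {111: 73.9, 118: 51.2},
    "Niles":   {74: 60.7, 111: 68.0},
    "LA":      {330: 60.0, 1717: 53.4},
}

# Keep each team's best OPR from any one event.
best = {}
for event, oprs in event_oprs.items():
    for team, opr in oprs.items():
        if opr > best.get(team, float("-inf")):
            best[team] = opr

# Rank by best single-event OPR, highest first.
top = sorted(best.items(), key=lambda kv: kv[1], reverse=True)
print(top[:3])  # → [(111, 73.9), (74, 60.7), (330, 60.0)]
```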
Nathan Streeter
29-03-2011, 01:45
Although OPRNET is quite nice, may I recommend 2834's scouting database? Ed Law posts the updated spreadsheet soon after the completion of each week... It has a great UI and provides a bunch more information than just OPR! The CCWM metric is neat also, but I find it less reliable for predicting the strength of a team.
I looked for the Week 4 regionals without OPRNET results and with higher OPRs: LA, Midwest, and Niles. Below are the higher OPRs from these events.
Team...OPR....Event
111.....73.9....Midwest
74.......60.7....Niles
330.....60.0....LA
1717...53.4....LA
968.....52.7....LA
118.....51.2....Midwest
At any rate, congratulations on Skunkworks' success this year, you guys definitely are competing amongst the top of FRC teams!
Since a comparative OPR isn't really available on the global scale at the moment, I was curious to see how we ranked amongst other teams.
Your team is ranked 29 out of 1548 teams who have competed so far. That is very good. You are one of the better teams. This ranking is what I call the True World Ranking, which is based on every qualifying and elimination match of every district/regional so far. All of that data is assembled into a giant matrix and solved so that all interactions between teams are considered. Please see my old post here.
http://www.chiefdelphi.com/forums/showthread.php?p=964609#post964609
The data can be found here.
http://www.chiefdelphi.com/media/papers/2174
Navid, there are no OPR organizers here on CD. Nobody owns it. I didn't invent it. I think the person who did is no longer on this forum, as I have not seen his posts in a long time. So don't worry about treading on somebody's feet. I am doing this as a service to others; it is my way of giving back to the CD community, which has helped our young team a lot in the last few years.
Nathan, thanks. You beat me to it, and I am not the only one staying up late.
waialua359
29-03-2011, 03:03
Ed,
did you include Hawaii?
I don't see it when I type in our team #.
-Glenn
Navid, there are no OPR organizers here on CD. Nobody owns it. I didn't invent it. I think the person who did is no longer on this forum, as I have not seen his posts in a long time. So don't worry about treading on somebody's feet. I am doing this as a service to others; it is my way of giving back to the CD community, which has helped our young team a lot in the last few years.
Nathan, thanks. You beat me to it, and I am not the only one staying up late.
OPR is actually the product of collaborative work done by many individuals who are still on CD (I know Jim Zondag worked on it, and he is still very active in the CD community).
OPR is standard matrix algebra and is a published mathematical formula, so when I see different OPRs it makes me a bit nervous. OPRNet may be computing across multiple events rather than just the latest events, which would account for the different values.
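For anyone wondering what "standard matrix algebra" means here, below is a minimal, self-contained sketch of the usual least-squares formulation. The match data is made up, and for brevity it uses two-team alliances (real 2011 alliances had three teams); the scores are exactly additive, so the fit recovers each team's contribution.

```python
# OPR sketch: each row of A marks which teams were on an alliance, and b holds
# that alliance's score. OPR is the least-squares solution of A x ≈ b, found
# here via the normal equations AᵀA x = Aᵀb.

def solve(M, v):
    """Solve the square system M x = v by Gaussian elimination with pivoting."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def opr(alliances, scores, n_teams):
    """alliances: list of team-index tuples; scores: matching alliance scores."""
    A = [[1.0 if t in a else 0.0 for t in range(n_teams)] for a in alliances]
    AtA = [[sum(A[m][i] * A[m][j] for m in range(len(A)))
            for j in range(n_teams)] for i in range(n_teams)]
    Atb = [sum(A[m][i] * scores[m] for m in range(len(A))) for i in range(n_teams)]
    return solve(AtA, Atb)

# Hypothetical schedule: every pair of 4 teams plays once, and alliance scores
# are exactly the sum of per-team contributions 30, 20, 10, 5.
matches = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
scores = [50, 40, 35, 30, 25, 15]
print([round(x, 3) for x in opr(matches, scores, 4)])  # → [30.0, 20.0, 10.0, 5.0]
```

Given identical match data, any correct implementation of this solve returns the same numbers, which is why differing OPRs usually mean differing input data rather than differing math.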
Ed,
did you include Hawaii?
I don't see it when I type in our team #.
-Glenn
With ETC I have you at 52 (top for Hawaii).
The most notable thing about your team is that in qualifying matches, someone on your alliance got 1st place in every race.
With regards to the regionals I have full data on, you are the only team who has achieved this.
By pairing up with 368 you created a minibot monopoly.
This is another example where picking the best robot may not be the best pick.
It looks like 2439 had a great robot, but a less valuable minibot.
They picked 1056, who had the most valuable robot and a decent minibot.
However, after looking at 359's and 368's EMCs, the only way to beat them would have been to compete against them in the race. It would have been a gamble, but it looks like 2439 should have considered 2090, who looks like the only team with a minibot that could really go toe to toe with 359's and 368's.
Here is just another example where minibot monopolies win...
Congrats guys on the great job!
OPR is standard matrix algebra and is a published mathematical formula, so when I see different OPRs it makes me a bit nervous. OPRNet may be computing across multiple events rather than just the latest events, which would account for the different values.
OPRNet just does the event that you specify. If you want just the most recent events, you'd have to figure out what their event codes are and enter them, one regional at a time.
Example: here's an excerpt from the bat file I use to generate the total list:
oprnet hi 2011 opr r q >> allranks.txt
oprnet is 2011 opr r q >> allranks.txt
oprnet dmn 2011 opr r q >> allranks.txt
oprnet nv 2011 opr r q >> allranks.txt
oprnet tx 2011 opr r q >> allranks.txt
oprnet ca 2011 opr r q >> allranks.txt
oprnet il 2011 opr r q >> allranks.txt
oprnet mn 2011 opr r q >> allranks.txt
oprnet mn2 2011 opr r q >> allranks.txt
Here's a list of every team at every regional so far.
OPRNet just does the event that you specify. If you want just the most recent events, you'd have to figure out what their event codes are and enter them, one regional at a time.
Example: here's an excerpt from the bat file I use to generate the total list:
oprnet hi 2011 opr r q >> allranks.txt
oprnet is 2011 opr r q >> allranks.txt
oprnet dmn 2011 opr r q >> allranks.txt
oprnet nv 2011 opr r q >> allranks.txt
oprnet tx 2011 opr r q >> allranks.txt
oprnet ca 2011 opr r q >> allranks.txt
oprnet il 2011 opr r q >> allranks.txt
oprnet mn 2011 opr r q >> allranks.txt
oprnet mn2 2011 opr r q >> allranks.txt
Here's a list of every team at every regional so far.
Oh okay! Thanks for clarifying this. I have a Mac, so I can't use it.
The Lucas
29-03-2011, 09:32
Can someone please post Midwest OPR? The data up on the FIRST site cannot be parsed because it is in a different format. It is probably due to the field issues they had during elims. The same thing happened at Chesapeake, and I posted the data.
[EDIT] Nevermind, Bongle fixed it in v14, here is IL
0 OPR 111 73.9248
1 OPR 118 51.1772
2 OPR 1625 45.6884
3 OPR 16 41.7728
4 OPR 2410 41.7174
5 OPR 1732 41.1471
6 OPR 135 37.2743
7 OPR 71 33.2081
8 OPR 2194 28.6006
9 OPR 2949 28.1996
10 OPR 2358 27.9021
11 OPR 2338 27.6187
12 OPR 2826 27.1633
13 OPR 1987 25.7962
14 OPR 2041 23.3367
15 OPR 1675 20.4011
16 OPR 3494 18.6332
17 OPR 45 16.518
18 OPR 2171 12.9152
19 OPR 1781 12.3256
20 OPR 2704 9.65644
21 OPR 2022 9.45619
22 OPR 3352 9.08313
23 OPR 2115 8.96639
24 OPR 101 8.44625
25 OPR 3779 8.22969
26 OPR 3595 6.91017
27 OPR 3067 6.79252
28 OPR 3197 6.37619
29 OPR 2151 4.5455
30 OPR 3612 4.44737
31 OPR 1739 4.27756
32 OPR 2432 4.04017
33 OPR 1367 3.10731
34 OPR 3488 2.54644
35 OPR 2769 2.35486
36 OPR 1091 1.68945
37 OPR 3646 1.40082
38 OPR 896 1.12364
39 OPR 3177 1.03324
40 OPR 3061 -0.270196
41 OPR 3695 -0.280014
42 OPR 2781 -2.29811
43 OPR 1850 -3.42507
44 OPR 2462 -3.43421
45 OPR 3110 -4.91922
46 OPR 3135 -7.98799
47 OPR 2803 -8.05401
48 OPR 2709 -8.28509
49 OPR 3416 -10.3558
I just fixed the OPRNet parser to parse the "fat"-style regional results.
Congratulations to 111 for their performance this weekend!
EagleEngineer
29-03-2011, 10:12
Although OPRNET is quite nice, may I recommend 2834's scouting database? Ed Law posts the updated spreadsheet soon after the completion of each week... It has a great UI and provides a bunch more information than just OPR! The CCWM metric is neat also, but I find it less reliable for predicting the strength of a team.
I looked for the Week 4 regionals without OPRNET results and with higher OPRs: LA, Midwest, and Niles. Below are the higher OPRs from these events.
Team...OPR....Event
111.....73.9....Midwest
74.......60.7....Niles
330.....60.0....LA
1717...53.4....LA
986.....52.7....LA
118.....51.2....Midwest
At any rate, congratulations on Skunkworks' success this year, you guys definitely are competing amongst the top of FRC teams!
You mean 968.....52.7....LA
Nathan Streeter
29-03-2011, 10:22
OPR is standard matrix algebra and is a published mathematical formula, so when I see different OPRs it makes me a bit nervous. OPRNet may be computing across multiple events rather than just the latest events, which would account for the different values.
Yes, it would be disconcerting if people were getting different OPR numbers... But that really isn't the case: Ed's and Bongle's OPRs do match up for each robot for each tournament. The reason Ed's OPR *World Rank* numbers differ from taking the highest OPR from each team is that he solves a single world-wide matrix. This is great in many ways, but it has the (single?) downside of solving for one OPR spanning a team's Week 1 performance through its Championship performance, which is somewhat flawed. That said, I'm not sure what'd be a better way... :-/
Lineskier (Mike?), would it be possible to use the matrices to compute the EMC and ERC? I really like that EMC and ERC use the twitter feeds to break down a Robot's contribution into Minibot and Hostbot, but without the matrices, they just don't have the same accuracy.
Thanks, EagleEngineer for pointing out the typo, I'll try to fix that!
Lineskier (Mike?), would it be possible to use the matrices to compute the EMC and ERC? I really like that EMC and ERC use the twitter feeds to break down a Robot's contribution into Minibot and Hostbot, but without the matrices, they just don't have the same accuracy.
It seems like it should be possible to do this.
Let's say you had two variables per team:
OPRMini
OPRMain
Right now, the right-side vector of data for OPR is just a team's total scores because that's all I get out of the FRC match results. Since Lineskier has all the scores divided into points + bonus, he could just solve using minibot-only and main-only. The downside of this would be that you'd have teams with nonzero minibot scores who never had a minibot or never even went near the tower. You could say those teams "helped get their alliance minibot scored", but really it'd be mathematical noise.
Ed,
did you include Hawaii?
I don't see it when I type in our team #.
-Glenn
Hi Glenn,
Thanks for catching the mistake. It has been updated in version 5.
http://www.chiefdelphi.com/media/papers/2174
I think what happened was when I first tried to get the data for all Week 4 events, your regional was not done yet due to time difference. I later went back to get the data, ran the True World Ranking macros and published the results and forgot to run the macros for the regional itself. So the True World Ranking was valid and did not change. The only difference in version 5 is that it includes the Hawaii regional results for the Query, CCWM results and OPR results tab.
Sorry about the error.
Yes, it would be disconcerting if people were getting different OPR numbers... But that really isn't the case: Ed's and Bongle's OPRs do match up for each robot for each tournament. The reason Ed's OPR *World Rank* numbers differ from taking the highest OPR from each team is that he solves a single world-wide matrix. This is great in many ways, but it has the (single?) downside of solving for one OPR spanning a team's Week 1 performance through its Championship performance, which is somewhat flawed. That said, I'm not sure what'd be a better way... :-/
The World Rank is a season-long overall score of how a team did. If a team went to two events and scored low at the first one and high at the second one, the single World Rank OPR will reflect that. It is not meant to show how strong a team is going into Championship. For that I would recommend using the latest score, which my spreadsheet will calculate in the OPR Results tab by putting -2 in the G2 cell and clicking the Calc box to run a simple macro.
Lineskier (Mike?), would it be possible to use the matrices to compute the EMC and ERC? I really like that EMC and ERC use the twitter feeds to break down a Robot's contribution into Minibot and Hostbot, but without the matrices, they just don't have the same accuracy.
Nathan, this has already been done by Team 33. But it was done manually as a one-time thing. The data looks interesting. I was going to automate it using the Twitter feed, but when I heard that there are missing matches and maybe even whole events, I changed my mind. I don't want to spend time on something if the data is not reliable. If anyone is interested in pursuing it, you are welcome to modify my macros. None of the macros in my spreadsheet are protected, and they include all the code needed to assemble and solve the matrix for you.
Navid Shafa
29-03-2011, 20:30
Your team is ranked 29 out of 1548 teams who have competed so far. That is very good. You are one of the better teams.
:ahh: :ahh: :ahh:
WOW. That is truly astonishing. I knew there were going to be some strong competitors popping up from the Midwest regional, as I recognized some of the Alamo powerhouses...
I want to thank everybody for following up on this thread, filling in the holes and painting a clear picture of the current world standings.
Ed: I was just talking to a team-mate about this exact database yesterday. I remembered using it in 2008 and 2009, but I forgot who made it and where to find it. I'm glad that I have access to it once again.
I am extremely excited for St. Louis and I figure that with a strong alliance we could go far this year!
Thanks for all your input and advice!
Nathan, this has already been done by Team 33. But it was done manually as a one-time thing. The data looks interesting. I was going to automate it using the Twitter feed, but when I heard that there are missing matches and maybe even whole events, I changed my mind. I don't want to spend time on something if the data is not reliable. If anyone is interested in pursuing it, you are welcome to modify my macros. None of the macros in my spreadsheet are protected, and they include all the code needed to assemble and solve the matrix for you.
Nathan was asking about ETC, ERC and EMC, which have nothing to do with matrices or OPR. And they are all calculated automatically; I just haven't published the latest version of my site.
Lineskier (Mike?), would it be possible to use the matrices to compute the EMC and ERC? I really like that EMC and ERC use the twitter feeds to break down a Robot's contribution into Minibot and Hostbot, but without the matrices, they just don't have the same accuracy.
Actually, a matrix approach inherently has an equal potential for inaccuracy. Digging through my threads, I talk about how my algorithm is actually executed and the resolution it gains over OPR. Matrix algebra looks at the big picture, but it doesn't look match to match. ETC does look match to match. Point inflation affects both in different ways. Also, I can tell which alliances finished in which place with minibots. So if someone has an EMC of 30, I KNOW that in every qualifying match their alliance put up a winning minibot.
Also, I calculate your team's contribution to each match for the robot. OPR does as good a job as it can using traditional mathematics; I use CS algorithms to calculate my numbers. That being said, there are still issues with point inflation that are inherent in any system. Some teams will have the luck of the draw. I have an idea of how to fix this for the ERC, but it will take a bit to implement.
If someone had a Java function for calculating OPR, I could quickly implement it. ETC and OPR look at a team's contribution from two separate directions. I believe that by utilizing both, the values will home in on a team's true contribution.
OPR is actually the product of collaborative work done by many individuals who are still on CD (I know Jim Zondag worked on it, and he is still very active in the CD community).
The origin of OPR was not a collaborative effort. I was talking about who coined the term and first proposed it. From what I can find, it was Scott Weingart from Team 293 in this post dated 4/6/2006, http://www.chiefdelphi.com/forums/showpost.php?p=484220&postcount=19. He is no longer active on CD as his last post was 4/13/2007.
I was not talking about who uses it. Many people use it as a supplement to match scouting. Yes, it is very simple matrix algebra, and many have written their own programs to calculate it. If implemented correctly, the OPR numbers for each regional/district should be identical.
I am glad to see you proposing a new way of ranking teams. I always welcome innovation. However, I do have serious concerns about your method. This is not a personal attack, so please don't be defensive. I just want to point out a few things so you can improve your algorithm if you choose to. I am not saying OPR is better, and I am not defending OPR, as I don't own it. In the end, you look at how the numbers are calculated and choose what you think will work for you.
1) One of my concerns is what happens if a team has a bad first match for whatever reason and that team's alliance scores zero points. Then the next time this team plays, according to your algorithm, you will assume that this team does not contribute much to whatever the score was for their second match. Please correct me if I misunderstood your algorithm.
2) I also read that someone suggested to you that it should be iterative, i.e. loop back and have a second pass, third pass, etc. I don't know if you tried this or not. An iterative method is fine as long as it converges. If it diverges or oscillates, then there is something wrong. In your method, does it always converge, and if yes, how many iterations does it typically take? Have you compared the converged value to OPR? Are they close to each other?
3) One of the points you advocate about your method was considering match-to-match effects rather than the big picture like OPR. When you iterate, it is no longer just match to match. This is somewhat analogous to a finite difference method: you are actually getting the effect of all the matches when you loop back and iterate.
4) If you argue that it should not iterate, then the final number is too dependent on your starting value and the method will not be mathematically valid.
I hope you will find a way to improve it so we will all benefit from a better way to rank teams.
Nathan, this has already been done by Team 33. But it was done manually as a one-time thing. The data looks interesting.
When comparing this data to some scouting data, it was reasonably accurate. It turned into a bit of a "tube scoring" version of OPR. I have also seen this done with penalty data and some other interesting factors. We should have some good scouting data from Troy that hopefully we can use to tune the algorithms to be even more representative, and then test them at the State Championship.
Like the robot, iterating and progressively improving your scouting and metric-based scouting as the season continues helps a bunch.
I have written an HTML screen scraper that collects seeding data from all the regionals. Just raw data for all of you to play with. It beats copying and pasting from all the web screens. It is interesting to note that there are three different styles used on the web displays; I would have thought all the web scripts would be the same.
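For the curious, a bare-bones version of such a scraper using only the standard library might look like the sketch below. The table layout is illustrative only, since (as noted above) the real FIRST pages used several different formats, each of which would need its own handling.

```python
from html.parser import HTMLParser

# Minimal scraper sketch: walk a rankings <table> and collect each row's
# cell text into a record. Real pages would be fetched with urllib and would
# need per-format tweaks; here we just parse an inline sample fragment.
class RankingScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.row = []
        self.rows = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True
        elif tag == "tr":
            self.row = []          # start collecting a fresh row

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
        elif tag == "tr" and self.row:
            self.rows.append(self.row)  # keep completed, non-empty rows

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.row.append(data.strip())

# Illustrative fragment in the style of a seeding/OPR table.
sample = """
<table>
<tr><td>1</td><td>111</td><td>73.9</td></tr>
<tr><td>2</td><td>118</td><td>51.2</td></tr>
</table>
"""
scraper = RankingScraper()
scraper.feed(sample)
print(scraper.rows)  # → [['1', '111', '73.9'], ['2', '118', '51.2']]
```

Handling the three display styles would then come down to detecting which layout a page uses and mapping its columns accordingly.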
DMC Mentor team 3234
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.