#1
Re: Week 4 OPR
Quote:
http://www.chiefdelphi.com/forums/sh...609#post964609

The data can be found here: http://www.chiefdelphi.com/media/papers/2174

Navid, there are no OPR organizers here on CD. Nobody owns it. I didn't invent it, and I think the person who did is no longer on this forum, as I have not seen his posts in a long time. So don't worry about treading on somebody's feet. I am doing it as a service to others, and it is my way of giving back to the CD community that has helped our young team a lot over the last few years.

Nathan, thanks. You beat me to it, and I am not the only one staying up late.
#2
Re: Week 4 OPR
Ed,
did you include Hawaii? I don't see it when I type in our team #.
-Glenn
#3
Re: Week 4 OPR
Quote:
The most notable thing about your team is that in qualifying matches, someone on your alliance got 1st place in every race. With regard to the regionals I have full data on, you are the only team that has achieved this. By pairing up with 368 you created a minibot monopoly.

This is another example where picking the best robot may not be the best pick. It looks like 2439 had a great robot, but a less valuable minibot. They picked 1056, who had the most valuable robot and a decent minibot. However, after looking at 359's and 368's EMCs, the only way to beat them would have been to compete against them in the race. It would have been a gamble, but it looks like 2439 should have considered 2090, which looks like the only team with a minibot that could really go toe to toe with 359's and 368's.

Here is just another example where minibot monopolies win... Congrats guys on the great job!

Last edited by mwtidd : 29-03-2011 at 07:27.
#4
Re: Week 4 OPR
Quote:
Thanks for catching the mistake. It has been updated in version 5: http://www.chiefdelphi.com/media/papers/2174

I think what happened was that when I first tried to get the data for all Week 4 events, your regional was not done yet due to the time difference. I later went back to get the data, ran the True World Ranking macros, published the results, and forgot to run the macros for the regional itself. So the True World Ranking was valid and did not change. The only difference in version 5 is that it includes the Hawaii regional results in the Query, CCWM results, and OPR results tabs. Sorry about the error.
#5
Re: Week 4 OPR
Quote:
OPR is standard matrix algebra and a published mathematical formula, so when I see different OPRs it makes me a bit nervous. OPRnet may be taking data across multiple events rather than only the latest event, which would account for the different values.
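For reference, here is the usual least-squares formulation behind these calculations; the notation below is my own and is not taken from any particular OPR spreadsheet or tool. Build a membership matrix $A$ with one row per alliance-in-a-match ($A_{m,i} = 1$ if team $i$ played on alliance $m$, else $0$) and let $s_m$ be that alliance's final score. OPR is the contribution vector $x$ that best explains the scores:

$$
x = \arg\min_{x} \lVert A x - s \rVert^{2}
\quad\Longleftrightarrow\quad
(A^{\mathsf{T}} A)\, x = A^{\mathsf{T}} s .
$$

Any correct implementation of this solve should return identical numbers for the same event data, so differing OPRs usually point to differing input data rather than differing math.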
#6
Re: Week 4 OPR
Quote:
Example: here's an excerpt from the bat file I use to generate the total list:
Code:
oprnet hi 2011 opr r q >> allranks.txt
oprnet is 2011 opr r q >> allranks.txt
oprnet dmn 2011 opr r q >> allranks.txt
oprnet nv 2011 opr r q >> allranks.txt
oprnet tx 2011 opr r q >> allranks.txt
oprnet ca 2011 opr r q >> allranks.txt
oprnet il 2011 opr r q >> allranks.txt
oprnet mn 2011 opr r q >> allranks.txt
oprnet mn2 2011 opr r q >> allranks.txt
#7
Re: Week 4 OPR
Quote:
#8
Re: Week 4 OPR
Can someone please post the Midwest OPR? The data up on the FIRST site cannot be parsed because it is in a different format, probably due to the field issues they had during Elims. The same thing happened to Chesapeake, and I posted the data.
[EDIT] Never mind, Bongle fixed it in v14. Here is IL:
Code:
0 OPR 111 73.9248
1 OPR 118 51.1772
2 OPR 1625 45.6884
3 OPR 16 41.7728
4 OPR 2410 41.7174
5 OPR 1732 41.1471
6 OPR 135 37.2743
7 OPR 71 33.2081
8 OPR 2194 28.6006
9 OPR 2949 28.1996
10 OPR 2358 27.9021
11 OPR 2338 27.6187
12 OPR 2826 27.1633
13 OPR 1987 25.7962
14 OPR 2041 23.3367
15 OPR 1675 20.4011
16 OPR 3494 18.6332
17 OPR 45 16.518
18 OPR 2171 12.9152
19 OPR 1781 12.3256
20 OPR 2704 9.65644
21 OPR 2022 9.45619
22 OPR 3352 9.08313
23 OPR 2115 8.96639
24 OPR 101 8.44625
25 OPR 3779 8.22969
26 OPR 3595 6.91017
27 OPR 3067 6.79252
28 OPR 3197 6.37619
29 OPR 2151 4.5455
30 OPR 3612 4.44737
31 OPR 1739 4.27756
32 OPR 2432 4.04017
33 OPR 1367 3.10731
34 OPR 3488 2.54644
35 OPR 2769 2.35486
36 OPR 1091 1.68945
37 OPR 3646 1.40082
38 OPR 896 1.12364
39 OPR 3177 1.03324
40 OPR 3061 -0.270196
41 OPR 3695 -0.280014
42 OPR 2781 -2.29811
43 OPR 1850 -3.42507
44 OPR 2462 -3.43421
45 OPR 3110 -4.91922
46 OPR 3135 -7.98799
47 OPR 2803 -8.05401
48 OPR 2709 -8.28509
49 OPR 3416 -10.3558

Last edited by The Lucas : 29-03-2011 at 09:46.
#9
Re: Week 4 OPR
I just fixed the OPRNet parser to parse the "fat"-style regional results.
Congratulations to 111 for their performance this weekend!
#10
Re: Week 4 OPR
Quote:
Lineskier (Mike?), would it be possible to use the matrices to compute the EMC and ERC? I really like that EMC and ERC use the Twitter feeds to break down a robot's contribution into Minibot and Hostbot, but without the matrices they just don't have the same accuracy. Thanks, EagleEngineer, for pointing out the typo; I'll try to fix that!
#11
Re: Week 4 OPR
Quote:
Let's say you had two variables per team: OPRMini and OPRMain. Right now, the right-hand-side vector of data for OPR is just total match scores, because that's all I get out of the FRC match results. Since Lineskier has all the scores divided into points + bonus, he could just solve using minibot-only and main-only scores. The downside of this would be that you'd have teams with nonzero minibot scores who never had a minibot or never even went near the tower. You could say those teams "helped get their alliance minibot scored", but really it'd be mathematical noise.
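In the notation of the least-squares sketch earlier in the thread, this suggestion keeps the alliance matrix $A$ fixed and only swaps the right-hand side: split each alliance score into $s = s^{\text{mini}} + s^{\text{main}}$ and solve twice,

$$
(A^{\mathsf{T}} A)\, x^{\text{mini}} = A^{\mathsf{T}} s^{\text{mini}},
\qquad
(A^{\mathsf{T}} A)\, x^{\text{main}} = A^{\mathsf{T}} s^{\text{main}} .
$$

By linearity the two solutions add back up to the ordinary OPR, and since $A^{\mathsf{T}} A$ is unchanged the second solve is cheap. The noise Bongle describes (a nonzero minibot OPR for a team that never fielded a minibot) comes from the fit itself, which spreads an alliance's minibot points across all three partners, not from the split.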
#12
Re: Week 4 OPR
Quote:
Quote:
#13
Re: Week 4 OPR
Quote:
Quote:
Also, I calculate your team's contribution to each match for the robot. OPR does as good a job as it can using traditional mathematics; I use CS algorithms to calculate my numbers. That being said, there are still issues with point inflation that are inherent in any system; some teams will have the luck of the draw. I have an idea of how to fix this for the ERC, but it will take a bit to implement. If someone had a Java function for calculating OPR, I could quickly implement it. The ERC and OPR look at a team's contribution from two separate directions. I believe that by utilizing both, the values will hone in on a team's true contribution.

Last edited by mwtidd : 29-03-2011 at 21:56.
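Since a Java function for calculating OPR was requested above, here is a minimal, self-contained sketch of the standard least-squares calculation: build the normal equations from alliance membership, then solve them with Gaussian elimination. The class name, method names, and input format (parallel arrays of alliance team lists and alliance scores) are my own assumptions, and there is no handling of singular systems, so treat it as a starting point rather than a drop-in library.
Code:
import java.util.*;

public class OprCalculator {

    /**
     * allianceTeams[m] holds the team numbers on one alliance in one match;
     * allianceScores[m] is that alliance's final score.  Returns team -> OPR.
     */
    public static Map<Integer, Double> calculateOpr(int[][] allianceTeams,
                                                    double[] allianceScores) {
        // Index every team that appears in the data.
        List<Integer> teams = new ArrayList<>();
        Map<Integer, Integer> index = new HashMap<>();
        for (int[] alliance : allianceTeams) {
            for (int t : alliance) {
                if (!index.containsKey(t)) {
                    index.put(t, teams.size());
                    teams.add(t);
                }
            }
        }
        int n = teams.size();

        // Build the normal equations (A^T A) x = A^T s directly.
        // ata[i][j] = number of alliances teams i and j shared;
        // ats[i]    = sum of the scores of the alliances team i was on.
        double[][] ata = new double[n][n];
        double[] ats = new double[n];
        for (int m = 0; m < allianceTeams.length; m++) {
            for (int ti : allianceTeams[m]) {
                int i = index.get(ti);
                ats[i] += allianceScores[m];
                for (int tj : allianceTeams[m]) {
                    ata[i][index.get(tj)] += 1.0;
                }
            }
        }

        double[] x = solve(ata, ats);
        Map<Integer, Double> opr = new LinkedHashMap<>();
        for (int i = 0; i < n; i++) {
            opr.put(teams.get(i), x[i]);
        }
        return opr;
    }

    // Gaussian elimination with partial pivoting (no check for singularity).
    private static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        for (int col = 0; col < n; col++) {
            int pivot = col;
            for (int r = col + 1; r < n; r++) {
                if (Math.abs(a[r][col]) > Math.abs(a[pivot][col])) pivot = r;
            }
            double[] rowTmp = a[col]; a[col] = a[pivot]; a[pivot] = rowTmp;
            double bTmp = b[col]; b[col] = b[pivot]; b[pivot] = bTmp;

            for (int r = col + 1; r < n; r++) {
                double factor = a[r][col] / a[col][col];
                for (int c = col; c < n; c++) a[r][c] -= factor * a[col][c];
                b[r] -= factor * b[col];
            }
        }
        // Back substitution.
        double[] x = new double[n];
        for (int r = n - 1; r >= 0; r--) {
            double sum = b[r];
            for (int c = r + 1; c < n; c++) sum -= a[r][c] * x[c];
            x[r] = sum / a[r][r];
        }
        return x;
    }
}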
#14
Re: Week 4 OPR
Quote:
Like the robot itself, iterating on and progressively improving your scouting and metric-based scouting as the season continues helps a bunch.
#15
Re: Week 4 OPR
Quote:
I was not talking about who uses it. Many people use it as a supplement to match scouting. Yes, it is very simple matrix algebra, and many have written their own programs to calculate it. If implemented correctly, the OPR numbers for each regional/district should be identical.

I am glad to see you proposing a new way of ranking teams. I always welcome innovation. However, I do have serious concerns about your method. This is not a personal attack, so please don't be defensive. I just want to point out a few things so you can improve your algorithm if you choose to. I am not saying OPR is better, and I am not defending OPR, as I don't own it. In the end, you look at how the numbers are calculated and choose what you think will work for you.

1) One of my concerns is what happens if a team has a bad first match for whatever reason and that team's alliance scores zero points. The next time this team plays, according to your algorithm, you will assume that this team does not contribute much to whatever the score was for their second match. Please correct me if I misunderstood your algorithm.

2) I also read that someone suggested to you that it should be iterative, i.e. loop back and have a second pass, a third pass, etc. I don't know if you tried this or not. An iterative method is fine as long as it converges. If it diverges or oscillates, then something is wrong. In your method, does it always converge, and if so, how many iterations does it typically take before it converges? Have you compared the converged values to OPR? Are they close to each other?

3) One of the points you advocate about your method was considering match-to-match effects rather than the big picture like OPR. When you iterate, it is no longer just match to match. This is somewhat analogous to a finite difference method: you are actually getting the effect of all the matches when you loop back and iterate.

4) If you argue that it should not iterate, then the final number is too dependent on your starting value and the method will not be mathematically valid.

I hope you will find a way to improve it so we will all benefit from a better way to rank teams.
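On point 2, one way to sanity-check a converged iterative ranking against OPR: a plain Gauss-Seidel sweep over the OPR normal equations is itself an iterative, match-data-driven update, and for typical event data (where $A^{\mathsf{T}} A$ is symmetric positive definite) it is guaranteed to converge to exactly the direct OPR solution. The sketch below is offered only as a reference point for that comparison, reusing the normal-equation arrays from the Java example earlier in the thread; it is not a description of the ERC method.
Code:
// Gauss-Seidel on the OPR normal equations (A^T A) x = A^T s.
// ata and ats are the same arrays built in the calculateOpr sketch above.
static double[] gaussSeidelOpr(double[][] ata, double[] ats, int sweeps) {
    int n = ats.length;
    double[] x = new double[n];            // start every team at 0
    for (int sweep = 0; sweep < sweeps; sweep++) {
        for (int i = 0; i < n; i++) {
            double sum = ats[i];
            for (int j = 0; j < n; j++) {
                if (j != i) sum -= ata[i][j] * x[j];
            }
            x[i] = sum / ata[i][i];        // update team i using latest estimates
        }
    }
    return x;
}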