2015 Championship division simulated rankings
Using the preliminary match schedules and teams' best OPRs, I've simulated the rankings using the Monte Carlo method.
With every iteration of the match schedule, matches were simulated by summing each team's OPR and adding pseudo-random terms corresponding to the "randomness" in a team's actual performance. I'm terrible at explaining with words, so here it is in pseudocode: Code:
Total score = (OPR1 + random1) + (OPR2 + random2) + (OPR3 + random3) + random4
Each match schedule is simulated 10,000 times and the ranks are averaged. Also shown are the minimum and maximum ranks each team hit during the entire simulation. Here are the results:
Archimedes division Code:
Division: arc
Curie division Code:
Division: cur
Galileo division Code:
Division: gal
Newton division Code:
Division: new
Carson division Code:
Division: cars
Carver division Code:
Division: carv
Hopper division Code:
Division: hop
Tesla division Code:
Division: tes |
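The simulation loop described above can be sketched in Python like this. Everything here is invented for illustration (team numbers, OPRs, and the mini-schedule), and the uniform +/-10 noise simply mirrors the naive random model in the pseudocode:

```python
import random
from collections import defaultdict

# Hypothetical data: team -> best OPR (names and values are made up).
oprs = {"111": 60.0, "222": 45.0, "333": 80.0, "444": 30.0, "555": 55.0, "666": 70.0}

# Hypothetical qualification schedule: each entry is one alliance's three teams.
schedule = [
    ("111", "222", "333"),
    ("444", "555", "666"),
    ("111", "444", "555"),
    ("222", "333", "666"),
]

NOISE = 10.0        # assumed +/- range of match-to-match variation
ITERATIONS = 10000

rank_sums = defaultdict(int)
for _ in range(ITERATIONS):
    totals = defaultdict(float)
    counts = defaultdict(int)
    for alliance in schedule:
        # Total score = sum of (OPR + per-team noise) + one alliance-wide noise term
        score = sum(oprs[t] + random.uniform(-NOISE, NOISE) for t in alliance)
        score += random.uniform(-NOISE, NOISE)
        for t in alliance:
            totals[t] += score
            counts[t] += 1
    # 2015 quals rank by average match score, best first
    ranked = sorted(totals, key=lambda t: totals[t] / counts[t], reverse=True)
    for rank, t in enumerate(ranked, start=1):
        rank_sums[t] += rank

avg_ranks = {t: rank_sums[t] / ITERATIONS for t in oprs}
```

Since 2015 qualification seeding is by average alliance score, every team on an alliance is credited with the full match score here.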
Re: 2015 Championship division simulated rankings
Thanks, this is awesome.
|
Re: 2015 Championship division simulated rankings
Very interesting. Thanks for the data!
Comparing the top of Tesla to the top of the other divisions is intriguing. It's the only division with no clear frontrunner by this metric. Should be fun! |
Re: 2015 Championship division simulated rankings
This is awesome!
I do have a suggestion (which may also not convey easily in words) that could make this even better, but it requires more input data. Rather than using a +/-10 range on OPR, it would be good to use each team's standard deviation of OPR, which I guess would be related to the residuals from the OPR calculation. I don't know if this information is readily available, but it could narrow or expand the possible range based on a team's consistency. Just a thought. |
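For what it's worth, those residuals fall straight out of the OPR least-squares fit. A rough Python sketch of the idea, where the four teams, five-alliance design matrix, and scores are all invented:

```python
import numpy as np

# Hypothetical match data: rows = alliances, columns = teams (design matrix A),
# y = alliance scores. All numbers are made up for illustration.
teams = ["111", "222", "333", "444"]
A = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 1],   # same alliance as row 1, different score -> nonzero residuals
], dtype=float)
y = np.array([140.0, 155.0, 120.0, 150.0, 150.0])

# OPR is the least-squares solution of A @ opr ~= y
opr, *_ = np.linalg.lstsq(A, y, rcond=None)

# Alliance-level residuals: how far each score fell from the OPR prediction
residuals = y - A @ opr

# A (very rough) per-team spread: std of residuals over the matches it played.
# With only 8-12 matches per team, this mixes in partners' noise, as noted above.
team_sigma = {
    t: residuals[A[:, i] == 1].std(ddof=1)
    for i, t in enumerate(teams)
}
```

Note that these are alliance residuals, not per-robot ones, so the per-team spread is only a crude proxy for consistency.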
Re: 2015 Championship division simulated rankings
In 10,000 out of 10,000 iterations, 254 ranks 1st. That's crazy.
|
Re: 2015 Championship division simulated rankings
Quote:
If you took the set of residuals from the matches a robot played in, it makes intuitive sense that that data should contain some level of information about that robot's deviation from their OPR. But is a data set of only 8-12 elements enough for this value to dominate the noise generated by their alliance partners' deviations (and therefore produce a meaningful standard deviation itself)? I dunno. If some statistics wiz would like to chime in on this, I'd love to hear it. |
Re: 2015 Championship division simulated rankings
Quote:
Unfortunately it's not very useful unless you have actual scouted data for each team to use, in which case you can make much more accurate predictions about rankings. Our scouting system had a little less than an 80% success rate guessing the winners of each match in our division the last two years, and those games were very defense heavy. I would bet on this system approaching a 95% success rate guessing match results this year since the game is much more consistent. |
Re: 2015 Championship division simulated rankings
Very interesting, I like this idea. One problem I can see is that some teams' last regional was early in the season (weeks 1-3), and I think the OPR of those teams won't represent the number of points they will score at the Championship (they have had a lot of time to practice, but not in an official competition, so there is no recorded data of their improvement).
|
Re: 2015 Championship division simulated rankings
Quote:
Our data for TORC is from Week 7 (MSC), so their random term is the standard +/-10. But team X's data is from Week 3, and we know that since then OPR overall saw a 20% increase (for example; not actual data), so let the random term for team X range from -8 to +12... Or we could just play the match next week. :) |
Re: 2015 Championship division simulated rankings
I think that your calculation method, which is essentially the following:
Red score = Red1_OPR + Red2_OPR + Red3_OPR
greatly overestimates qual scores. I think it might be more accurate to separate the co-op and auto scores from OPR. In a single match, only one team can do co-op, and only one team can do auto (not entirely true, but pretty close). By counting all three teams' auto and co-op scores, you're triple-weighting those scores.
Example: Qual 24 has three red teams, each of which has a co-op OPR of 20 (100% consistent co-op) and an auto OPR of 40 (100% consistent auto stack). However, their tote, RC, and litter OPRs are each zero, for a total OPR of 60 per team. The score for this match would be 60, as they would get one auto stack and complete co-op; your method, however, predicts a score of 180 points. That's an extreme example, but it illustrates the issue well.
I think a better method would be the following:
Red Score = Red1_(toteOPR + binOPR + litterOPR)
          + Red2_(toteOPR + binOPR + litterOPR)
          + Red3_(toteOPR + binOPR + litterOPR)
          + MAX(Red1_autoOPR, Red2_autoOPR, Red3_autoOPR)
          + MAX(Red1_coopOPR, Red2_coopOPR, Red3_coopOPR)
That method, while slightly more complex, should give more accurate results. |
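Here's a quick Python sketch of that max-based estimate, using the extreme three-team example from this post (all component OPR numbers are the made-up ones above):

```python
def alliance_score(teams):
    """Predicted qual score for a 3-team alliance from component OPRs.

    Tote/bin/litter contributions add across teams, but only one auto stack
    and one co-op set count per alliance, so take the max instead of the sum.
    """
    additive = sum(t["tote"] + t["bin"] + t["litter"] for t in teams)
    auto = max(t["auto"] for t in teams)
    coop = max(t["coop"] for t in teams)
    return additive + auto + coop

# The extreme example: three identical teams that only do auto + co-op
team = {"tote": 0, "bin": 0, "litter": 0, "auto": 40, "coop": 20}
red = [dict(team) for _ in range(3)]

naive = sum(sum(t.values()) for t in red)   # plain OPR sum triple-counts
better = alliance_score(red)                # max-based estimate
```

With these numbers the plain OPR sum gives 180 while the max-based estimate gives 60, matching the argument above.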
Re: 2015 Championship division simulated rankings
What probability distribution did you use for the random terms?
|
Re: 2015 Championship division simulated rankings
I see our team is ranked last in Newton. :-( Well, with 118, 1671, and 1678 in Newton, we are definitely a division to watch. All these powerful alliances are going to be looking for a match-proven, cheesecaked can burglar (I hope).
Top alliances with a landfill stacker, a human-side stacker, and a can burglar are going to be fun to watch. A bonus is a can burglar that can add functionality by stacking or manipulating flipped-over totes and cans, or fill in if a top seed malfunctions. |
Re: 2015 Championship division simulated rankings
Can I just say, thanks to Jeremy for the simulation, 955 for the really nice applet and breakdown OPR for every team and event, and to Evan for his awesome Championship website. All this stuff is really cool to look at, and we appreciate it.
|
Re: 2015 Championship division simulated rankings
Quote:
R1 has a "platform" OPR of 40 and a co-op OPR of 30.
R2 has a "platform" OPR of 40 and a co-op OPR of 28.
R3 has OPRs of 0 for the sake of argument.
By the above method, the red alliance would be predicted to score just 110 points, since we would use R1's co-op average but not R2's. But if R2 can usually score 40 platform points *and* do co-op most of the time, surely they wouldn't put up their 40 and then spend the rest of the match twiddling their thumbs. They would use the time they normally spend on co-op to score more platform points! (There's also the issue of whether the opposing alliance has a high enough co-op OPR to ensure co-op will be successful, but now we're getting really complicated.) |
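For concreteness, here is how the 110-point prediction above works out under the max-based method (numbers straight from the example):

```python
# R1, R2, R3: "platform" OPR = summed tote/bin/litter contribution
platform = [40, 40, 0]
coop = [30, 28, 0]

# Additive terms sum across the alliance; co-op takes only the best team,
# which is exactly what drops R2's co-op contribution on the floor.
predicted = sum(platform) + max(coop)
```

That yields 80 + 30 = 110, ignoring the extra platform points R2 could score by skipping co-op.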
Re: 2015 Championship division simulated rankings
On the co-op OPR, I think you need to take maxBlue(coopOPR, 20) + maxRed(coopOPR, 20) as the closest approximation and apply the total to both alliances.
And watching the regionals, co-op points increased along with overall scores, so at a given event the max co-op OPRs should be close to the max OPR. |
Re: 2015 Championship division simulated rankings
I would like to get a copy of this source code before CMP if possible, so I can play with it while on the plane.
|
Re: 2015 Championship division simulated rankings
1 Attachment(s)
Here is the source NetBeans project that I used to run the simulations, along with all of the data files I used. Thanks to Evan Forbes and his website for providing me with a source for the best-OPR data.
I may rework the randomization model code tonight, as the model used for these simulations was very naive. Feel free to PM me with any questions. |
Re: 2015 Championship division simulated rankings
So, statistically speaking, it is proven that never in 10,000 years will 254 not finish 1st in their division. I don't think that would have happened with any other game under the exact same algorithm.
Does anyone know how to run the same algorithm for, say, last year, and see how close it is to actual results? |
Re: 2015 Championship division simulated rankings
Quote:
If someone did want to run the same algorithm, the Galileo 2008 schedule is here: http://www2.usfirst.org/2008comp/Eve...eduleQual.html and best OPRs are here: http://www.chiefdelphi.com/forums/sh...&postcount=152 |
Re: 2015 Championship division simulated rankings
For kicks and giggles, here is the simulation for the Carson division rerun with the new schedule:
Code:
Division: cars
Good luck to everyone tomorrow! |
Re: 2015 Championship division simulated rankings
Jeremy,
Are you using Excel to do this analysis, or SPSS, or something else? If something else, can you let us know? Pretty fun data, thanks for sharing! |
Re: 2015 Championship division simulated rankings
As mentioned on Gamesense, here's a spreadsheet comparing the simulated rankings against the actual rankings:
https://docs.google.com/spreadsheets...z-Mx20/pubhtml |
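If anyone wants a single number out of that comparison, Spearman's rank correlation is an easy one to compute by hand. A Python sketch with invented ranks (don't read anything into the specific numbers):

```python
# Hypothetical simulated vs. actual ranks for a few teams (values made up).
simulated = {"254": 1, "1678": 2, "118": 3, "971": 4, "2056": 5}
actual    = {"254": 1, "1678": 3, "118": 2, "971": 5, "2056": 4}

teams = list(simulated)
n = len(teams)

# Spearman's rho via the no-ties shortcut: 1 - 6*sum(d^2) / (n*(n^2 - 1))
d2 = sum((simulated[t] - actual[t]) ** 2 for t in teams)
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A rho near 1 would mean the simulation ordered the division almost perfectly; near 0 would mean it did no better than chance.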
Re: 2015 Championship division simulated rankings
Pretty cool. We were simulated to come in 4th (3.8) and actually did come in 4th.
|
Copyright © Chief Delphi