2015 Championship division simulated rankings
Using the preliminary match schedules and teams' best OPRs, I've simulated the rankings using the Monte Carlo method.
With every iteration of the match schedule, matches were simulated by summing each team's OPR and adding in pseudo-random terms corresponding to the "randomness" in a team's actual performance. I'm terrible at explaining with words, so here it is in pseudocode: Code:
Total score = (OPR1 + random1) + (OPR2 + random2) + (OPR3 + random3) + random4

Each match schedule is simulated 10,000 times and the ranks are averaged. Also shown here are the minimum and maximum ranks for each team during the entire simulation. Here are the results:

[Simulated ranking tables for the Archimedes (arc), Curie (cur), Galileo (gal), Newton (new), Carson (cars), Carver (carv), Hopper (hop), and Tesla (tes) divisions]
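If anyone wants to tinker with it, here is roughly what I mean in Python. The team numbers, OPRs, and two-match schedule below are placeholders, the uniform ±10 draw is just one choice for the random terms, and ranking by total simulated points is a stand-in for 2015's average-qual-score ranking (same ordering when every team plays the same number of matches): Code:
import random
from collections import defaultdict

# Placeholder inputs -- real data would be the preliminary match schedule
# and each team's best OPR.
oprs = {"frc1": 55.0, "frc2": 40.0, "frc3": 62.0,
        "frc4": 35.0, "frc5": 48.0, "frc6": 30.0}
schedule = [(("frc1", "frc2", "frc3"), ("frc4", "frc5", "frc6")),
            (("frc1", "frc4", "frc6"), ("frc2", "frc3", "frc5"))]

NOISE = 10.0          # assumed +/-10 uniform random term
ITERATIONS = 10_000

def alliance_score(teams):
    """Sum of OPRs, plus a per-team random term, plus one alliance-wide term."""
    return (sum(oprs[t] + random.uniform(-NOISE, NOISE) for t in teams)
            + random.uniform(-NOISE, NOISE))

rank_totals = defaultdict(int)
for _ in range(ITERATIONS):
    points = defaultdict(float)
    for red, blue in schedule:
        red_score, blue_score = alliance_score(red), alliance_score(blue)
        for t in red:
            points[t] += red_score
        for t in blue:
            points[t] += blue_score
    # Rank teams by simulated qual points, highest first, and accumulate ranks.
    for rank, team in enumerate(sorted(points, key=points.get, reverse=True), 1):
        rank_totals[team] += rank

for team in sorted(rank_totals, key=lambda t: rank_totals[t]):
    print(team, rank_totals[team] / ITERATIONS)
The minimum and maximum ranks fall out of the same loop if you also keep a running min/max per team.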
Re: 2015 Championship division simulated rankings
Thanks, this is awesome.
Re: 2015 Championship division simulated rankings
Very interesting. Thanks for the data!
Comparing the top of Tesla to the top of the other divisions is intriguing. It's the only division with no clear frontrunner by this metric. Should be fun!
Re: 2015 Championship division simulated rankings
This is awesome!
I do have a suggestion (which also may not convey easily in words) which could be even better, but also requires more input data. Rather than using a +/- 10 range in OPR, it would be good to use a team's standard deviation of OPR, which I guess would be related to the residuals from the OPR calculation. I don't know if this information is readily available, but it could narrow or expand the range possible, based on a team's consistency. Just a thought.
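For what it's worth, one way to dig those numbers out, if the raw match data is available, is to redo the OPR least-squares fit and take the spread of each team's match residuals. A rough sketch with synthetic placeholder data (in reality the participation matrix A and alliance scores b would come from the event results): Code:
import numpy as np

# A: one row per alliance appearance (1 if the team was on that alliance),
# b: that alliance's score. Synthetic placeholders stand in for real data.
rng = np.random.default_rng(0)
num_alliances, num_teams = 120, 60
A = np.zeros((num_alliances, num_teams))
for row in A:
    row[rng.choice(num_teams, size=3, replace=False)] = 1.0
true_opr = rng.uniform(10, 70, size=num_teams)
b = A @ true_opr + rng.normal(0, 12, size=num_alliances)

# Standard OPR: least-squares solution of A x = b.
opr, *_ = np.linalg.lstsq(A, b, rcond=None)

# Per-team spread: stdev of the residuals of the matches each team played in,
# falling back to the flat +/-10 when a team has too few matches.
residuals = b - A @ opr
team_sigma = np.array([
    residuals[A[:, j] == 1].std(ddof=1) if (A[:, j] == 1).sum() >= 2 else 10.0
    for j in range(num_teams)])

# Draw team j's random term from +/- team_sigma[j] instead of a flat +/-10.
print(opr[:5].round(1), team_sigma[:5].round(1))
Caveat: each alliance residual mixes all three robots' deviations, so this overstates any single team's spread, which is basically the concern raised a few posts down.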
Re: 2015 Championship division simulated rankings
In 10,000 out of 10,000 iterations, 254 ranks 1st. That's crazy.
Re: 2015 Championship division simulated rankings
Quote:
If you took the set of residuals from the matches a robot played in, it makes intuitive sense that that data should contain some information about the robot's deviation from its OPR. But is a data set of only 8-12 elements enough for this value to dominate the noise generated by its alliance partners' deviations (and therefore produce a meaningful standard deviation itself)? I dunno. If some statistics whiz would like to chime in on this, I'd love to hear it.
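One quick way to get a feel for it is a toy simulation, with every sigma below invented purely for illustration: Code:
import random
import statistics

# Hypothetical robots with different true "consistency" levels.
true_sigmas = {"steady": 3.0, "average": 8.0, "wild": 15.0}
partner_sigma = 8.0    # assumed typical deviation of a random alliance partner
matches_per_team = 10  # roughly what a team plays in a division

def observed_residual_sigma(own_sigma):
    # Each match residual is roughly the robot's own deviation
    # plus two partners' deviations.
    residuals = [random.gauss(0, own_sigma)
                 + random.gauss(0, partner_sigma)
                 + random.gauss(0, partner_sigma)
                 for _ in range(matches_per_team)]
    return statistics.stdev(residuals)

for name, sigma in true_sigmas.items():
    estimates = [observed_residual_sigma(sigma) for _ in range(1000)]
    print(f"{name:8s} true={sigma:5.1f}  estimated residual stdev "
          f"{statistics.mean(estimates):5.1f} +/- {statistics.stdev(estimates):4.1f}")
In runs like this the "steady" and "average" robots tend to come out nearly indistinguishable with only ~10 residuals, while a genuinely wild robot does separate from the pack.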
Re: 2015 Championship division simulated rankings
Quote:
Unfortunately it's not very useful unless you have actual scouted data for each team to use, in which case you can make much more accurate predictions about rankings. Our scouting system had a little less than an 80% success rate guessing the winners of each match in our division the last two years, and those games were very defense heavy. I would bet on this system approaching a 95% success rate guessing match results this year since the game is much more consistent.
Re: 2015 Championship division simulated rankings
Very interesting, I like this idea. One problem I can see is that some teams played their last regional early in the season (weeks 1-3), and I think the OPR of those teams won't represent the number of points they will score at the Championship (they have had a lot of time to practice, but it wasn't at an official competition, so there isn't any recorded data of their improvement).
Re: 2015 Championship division simulated rankings
Quote:
Our data for TORC is from the Week 7 MSC, so their random term is the standard +/-10; but team X's data is from Week 3, and we know that since Week 3 overall OPR saw a 20% increase (for example, not actual data), so let team X's random term range from -8 to +12... Or we could just play the match next week. :)
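One way that could look in code, where the growth factors and the example OPR are made up for illustration and the 20% is interpreted as scaling with the team's own OPR (just one way to read the suggestion): Code:
# Hypothetical adjustment: skew a team's random term based on how much
# league-wide OPR has grown since the week of that team's last event.
opr_growth_since_week = {3: 1.20, 4: 1.15, 5: 1.10, 6: 1.05, 7: 1.00}

def random_term_range(opr, last_event_week, base_spread=10.0):
    """Return (low, high) bounds for a team's random term.

    The expected improvement since the team's last event is added as an
    offset, so an older OPR gets a range skewed toward higher scores."""
    expected_gain = opr * (opr_growth_since_week[last_event_week] - 1.0)
    return (-base_spread + expected_gain, base_spread + expected_gain)

# A team with OPR 50 whose last data is from Week 3, assuming 20% growth:
print(random_term_range(50.0, 3))   # -> (0.0, 20.0)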
Re: 2015 Championship division simulated rankings
I think that your calculation method, which is essentially the following:
Red score = Red1_OPR + Red2_OPR + Red3_OPR

greatly overestimates qual scores. I think it might be more accurate to separate the co-op and auto scores from OPR. In a single match, only one team can do co-op, and only one team can do auto (not entirely true, but pretty close). By counting all three teams' auto and co-op scores, you're triple-weighting those scores.

Example: Qual 24 has three red teams, each of which has a co-op OPR of 20 (100% consistent 3 tote stack) and an auto OPR of 40 (100% consistent co-op). However, their tote, RC, and litter OPRs are each zero, for a total OPR of 60 for each team. The score for this match would be 60, as they would get one auto stack and complete co-op. However, your method predicts a score of 180 points. That's an extreme example, but it illustrates the issue well.

I think a better method would be to use the following:

Red Score = Red1_(toteOPR + binOPR + litterOPR)
          + Red2_(toteOPR + binOPR + litterOPR)
          + Red3_(toteOPR + binOPR + litterOPR)
          + MAX(Red1_autoOPR, Red2_autoOPR, Red3_autoOPR)
          + MAX(Red1_coopOPR, Red2_coopOPR, Red3_coopOPR)

I think that method, while slightly more complex, will give more accurate results.
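In code, that rule could slot into the simulation's scoring step something like this. The component OPR values below are placeholders; real tote/bin/litter/auto/co-op component OPRs would have to be computed separately from the match data rather than taken from total OPR: Code:
import random

# Hypothetical per-team component OPRs (tote, bin/RC, litter, auto, co-op).
component_oprs = {
    "frc1": {"tote": 30.0, "bin": 8.0, "litter": 4.0, "auto": 20.0, "coop": 20.0},
    "frc2": {"tote": 22.0, "bin": 6.0, "litter": 2.0, "auto": 14.0, "coop": 20.0},
    "frc3": {"tote": 18.0, "bin": 4.0, "litter": 0.0, "auto":  8.0, "coop":  0.0},
}

NOISE = 10.0  # same assumed +/-10 random term as before

def alliance_score(teams):
    """Teleop components summed per team; auto and co-op credited only once,
    from the best robot on the alliance."""
    c = component_oprs
    teleop = sum(c[t]["tote"] + c[t]["bin"] + c[t]["litter"] for t in teams)
    auto = max(c[t]["auto"] for t in teams)
    coop = max(c[t]["coop"] for t in teams)
    noise = sum(random.uniform(-NOISE, NOISE) for _ in teams)
    return teleop + auto + coop + noise

print(alliance_score(("frc1", "frc2", "frc3")))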
Re: 2015 Championship division simulated rankings
What probability distribution did you use for the random terms?