2015 Championship division simulated rankings

What probability distribution did you use for the random terms?

I concur that this is the correct method. However, it’s also important this year to use the max OPR, not the average, as teams improved dramatically through the season. I’ve sent a message to Ed Law about a method to extract the max auto and co-op OPRs from this database.

We have all this data available here if anyone wants to use it. Click the download button to get a local copy in CSV format. We’ll run our own Monte Carlo analysis later today using various models and post it here, along with some regression analysis on which model gives the most accurate predictions, just for fun.
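For anyone curious what such a model can look like: here’s a minimal Monte Carlo sketch, not the code the analysis above actually uses. The OPR numbers are made up, and the Gaussian random term and its standard deviation are assumptions of mine:

```python
import random

# Hypothetical max OPRs for one match's six teams (all numbers made up).
red_oprs = [60.0, 45.0, 30.0]
blue_oprs = [55.0, 50.0, 25.0]

def simulate_match(red, blue, noise_sd=15.0, trials=10_000, seed=1):
    """Model each alliance's score as the sum of its teams' OPRs plus a
    Gaussian random term, and return red's win rate across many trials."""
    rng = random.Random(seed)
    red_base, blue_base = sum(red), sum(blue)
    red_wins = 0
    for _ in range(trials):
        if red_base + rng.gauss(0, noise_sd) > blue_base + rng.gauss(0, noise_sd):
            red_wins += 1
    return red_wins / trials

print(simulate_match(red_oprs, blue_oprs))
```

Repeating this over a full qualification schedule and tallying average qualification score gives a simulated seeding; the choice of noise distribution is exactly the open question asked earlier in the thread.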

Thanks!

Wow, thanks!

Just to verify: the “Highest OPR (no co-op)” column includes auto?

I see our team is ranked last in Newton. :frowning: Well, with 118, 1671, and 1678 in Newton, it’s definitely a division to watch. All these powerful alliances are going to be looking for a match-proven, cheesecaked can burglar (I hope).

Top alliances with a landfill stacker, a human-player-side stacker, and a can burglar are going to be fun to watch. A bonus is a can burglar that can add functionality by stacking or manipulating flipped-over totes and cans, or fill in if a top seed malfunctions.

Yes, it does. If you’d like to do what CVR suggested, you’d take the “Highest OPR (no co-op)” and subtract the auto OPR.

Perfect. I updated my website to use this new algorithm: championship.evanforbes.net

Can I just say, thanks to Jeremy for the simulation, 955 for the really nice applet and breakdown OPR for every team and event, and to Evan for his awesome Championship website. All this stuff is really cool to look at, and we appreciate it.

There’s an issue with this method that may skew predictions for alliances with more than one team that does a lot of co-op. Let’s say:

R1 has a platform OPR of 40 and a co-op OPR of 30
R2 has a platform OPR of 40 and a co-op OPR of 28
R3 has OPRs of 0, for the sake of argument

By the above method, the red alliance would be predicted to score just 110 points, since we would use R1’s co-op OPR but not R2’s. But if R2 can usually score 40 platform points and do co-op most of the time, surely they wouldn’t put up their 40 and then spend the rest of the match twiddling their thumbs. They would use the time they normally spend on co-op to score more platform points!

(There’s also the issue of whether the opposing alliance has a high enough Co-op OPR to ensure co-op will be successful, but now we’re getting really complicated.)
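To make the double-counting concrete, here’s a sketch that reproduces the 110-point prediction from the example above (the record layout and function name are mine):

```python
# Numbers from the example above: platform and co-op OPR per robot.
alliance = [
    {"team": "R1", "platform": 40.0, "coop": 30.0},
    {"team": "R2", "platform": 40.0, "coop": 28.0},
    {"team": "R3", "platform": 0.0, "coop": 0.0},
]

def predicted_score(robots):
    """Sum the platform OPRs, but count only the single best co-op OPR,
    since the co-op bonus can only be earned once per alliance."""
    platform = sum(r["platform"] for r in robots)
    best_coop = max(r["coop"] for r in robots)
    return platform + best_coop

print(predicted_score(alliance))  # 110.0 -- R2's co-op scoring is dropped entirely
```

As the post argues, 110 is probably an underestimate, since R2 would convert its usual co-op time into extra platform points.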

Is the max co-op OPR from the same event as the max OPR? If not, I see a problem where teams may appear to score more than they really do: if they cooped more at some events and stacked more at others, they would be getting the max points from both.
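One way to avoid mixing events, sketched below with made-up numbers and a record format of my own, is to pick the event with the highest total OPR and take the co-op OPR from that same event:

```python
# Hypothetical per-event records for one team (all numbers made up).
team_events = [
    {"event": "Week 2 regional", "total_opr": 55.0, "coop_opr": 18.0},
    {"event": "Week 5 regional", "total_opr": 70.0, "coop_opr": 6.0},
]

def max_opr_event(records):
    """Take the event with the highest total OPR and use that same
    event's co-op OPR, rather than mixing maxima across events."""
    return max(records, key=lambda r: r["total_opr"])

best = max_opr_event(team_events)
print(best["total_opr"], best["coop_opr"])  # 70.0 6.0, not 70.0 and 18.0
```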

On the co-op OPR, I think you need to take a sum of maxBlue(coopOPR, 20) + maxRed(coopOPR, 20) as the closest approximation and apply the total to both alliances.

And watching the regionals, co-op points increased along with overall scores, so a team’s max co-op OPR should come from an event close to its max OPR event.

I think you would want to take a sum of min(max(BluecoopOPRs), 20) + min(max(RedcoopOPRs), 20), as what you wrote would just give every alliance 40 points.
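A sketch of that formula (the function and variable names are mine, and the example OPRs are made up; the “apply to both alliances” rule is from the post above):

```python
def coop_points(red_coop_oprs, blue_coop_oprs, cap=20.0):
    """min(max(co-op OPRs), 20) for each alliance, summed. Per the
    suggestion above, the total is applied to BOTH alliances'
    predicted scores, since co-op scoring benefits both."""
    red = min(max(red_coop_oprs), cap)
    blue = min(max(blue_coop_oprs), cap)
    return red + blue

# Example: red's best co-op OPR (30) is capped at 20; blue's best is 10.
print(coop_points([30.0, 28.0, 0.0], [10.0, 5.0, 0.0]))  # 30.0
```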

I would like to get a copy of this source code before CMP if possible, so I can play with it while on the plane.

Here is the source NetBeans project that I used to run the simulations, along with all of the data files I used. Thanks to Evan Forbes and his website for providing me with a source for the best OPR data.

I may rework the randomization model code tonight, as the model used for the simulations was very naive.

Feel free to PM me with any questions.

ScheduleSimulator.zip (59.8 KB)



So, statistically speaking, it is proven that never in 10,000 years will 254 not finish 1st in their division. I don’t think that would have happened with any other game under the exact same algorithm.

Does anyone know how to run the same algorithm for, say, last year and see how close it comes to the actual results?
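One rough way to score such a backtest, once you have a predicted and an actual seeding order, is a rank correlation. This sketch uses the standard Spearman formula; the team lists are made up:

```python
def spearman(predicted, actual):
    """Spearman rank correlation between two rankings, each a list of
    team numbers in seeding order (1.0 = identical rankings)."""
    n = len(predicted)
    pred = {t: i for i, t in enumerate(predicted)}
    act = {t: i for i, t in enumerate(actual)}
    d2 = sum((pred[t] - act[t]) ** 2 for t in predicted)
    return 1 - 6 * d2 / (n * (n * n - 1))

# Made-up four-team example: a perfect prediction scores 1.0,
# and swapping adjacent pairs drops the score.
print(spearman([254, 1114, 118, 1678], [254, 1114, 118, 1678]))  # 1.0
print(spearman([254, 1114, 118, 1678], [1114, 254, 1678, 118]))  # 0.6
```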

Only if some very rough assumptions hold. It’s not guaranteed.

I think a more accurate statement would be: if their division played the qualification rounds 10,000 times, 254 would seed first in all of them.

Because of the change from W-L to average score, it’s easier for the top teams to stay at the top. 1114 in 2008 was similarly dominant (scoring 50% more than the next best team in the division); however, under a W-L system they wouldn’t always seed first (see http://www.chiefdelphi.com/forums/showpost.php?p=735425&postcount=165)

If someone did want to run the same algorithm, the Galileo 2008 schedule is here: http://www2.usfirst.org/2008comp/Events/galileo/ScheduleQual.html and best OPRs are here: http://www.chiefdelphi.com/forums/showpost.php?p=733568&postcount=152