2015 Championship division simulated rankings

Wow, thanks!

Just to verify: the “Highest OPR (no co-op)” column includes auto?

I see our team is ranked last in Newton. :frowning: Well, with 118, 1671, and 1678 in Newton, we are definitely a division to watch. All these powerful alliances are going to be looking for a match-proven, cheesecaked can burglar (I hope).

Top alliances with a landfill stacker, a human-side stacker, and a can burglar are going to be fun to watch. A bonus is a can burglar who can add functionality by stacking or manipulating flipped-over totes and cans, or fill in if a top seed malfunctions.

Yes, it does. If you’d like to do what CVR suggested, you’d take the “Highest OPR (no coop)” and subtract the auto OPR.
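As a rough sketch of that subtraction (the function and argument names are mine, not from the actual spreadsheet or data files):

```python
# Sketch of the adjustment described above: start from a team's
# "Highest OPR (no coop)" and remove the autonomous contribution,
# leaving an estimate of teleop-only scoring.

def teleop_opr(highest_opr_no_coop, auto_opr):
    """Estimate a team's teleop-only OPR by subtracting its auto OPR."""
    return highest_opr_no_coop - auto_opr

# Example: a team with a best no-coop OPR of 72 and an auto OPR of 12.
print(teleop_opr(72.0, 12.0))  # -> 60.0
```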

Perfect. I updated my website to use this new algorithm: championship.evanforbes.net

Can I just say, thanks to Jeremy for the simulation, 955 for the really nice applet and breakdown OPR for every team and event, and to Evan for his awesome Championship website. All this stuff is really cool to look at, and we appreciate it.

There’s an issue with this method that may skew predictions for alliances with more than one team that does a lot of co-op. Let’s say:

R1 has a platform OPR of 40 and a co-op OPR of 30
R2 has a platform OPR of 40 and a co-op OPR of 28
R3 has OPRs of 0, for the sake of argument

By the above method, the red alliance would be predicted to score just 110 points, since we would use R1’s co-op average, but not R2’s. But if R2 can usually score 40 platform points and do co-op most of the time, surely they wouldn’t put up their 40 and then spend the rest of the match twiddling their thumbs. They would use the time they normally use on co-op to score more platform points!

(There’s also the issue of whether the opposing alliance has a high enough Co-op OPR to ensure co-op will be successful, but now we’re getting really complicated.)
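The method being critiqued can be sketched in a few lines (numbers taken from the R1/R2/R3 example above; the function name is hypothetical):

```python
# Sketch of the prediction method under discussion: every team's
# platform OPR counts toward the alliance total, but only the single
# highest co-op OPR on the alliance is added, since co-op points can
# only be earned once per alliance.

def naive_alliance_score(platform_oprs, coop_oprs):
    """Sum platform OPRs, then add only the alliance's best co-op OPR."""
    return sum(platform_oprs) + max(coop_oprs)

red_platform = [40, 40, 0]   # R1, R2, R3
red_coop = [30, 28, 0]

print(naive_alliance_score(red_platform, red_coop))  # -> 110
```

Note how R2’s co-op OPR of 28 simply vanishes from the prediction, which is exactly the skew described above.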

Is the max co-op OPR taken from the same event as the max OPR? If not, I see a problem: teams could appear to score more than they really do if they co-oped more at some events and stacked more at others, since they would be credited the max points from both.

On the co-op OPR, I think you need to take maxBlue(coopOPR, 20) + maxRed(coopOPR, 20) as the closest approximation and apply the total to both alliances.

And watching the regionals, co-op points increased along with overall scores, so the max co-op OPRs at an event should be close to the max OPR.

I think you would want to take min(max(BlueCoopOPRs), 20) + min(max(RedCoopOPRs), 20), since what you wrote would just give every alliance 40 points.
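A minimal sketch of that capped estimate (function name and example numbers are mine):

```python
# Sketch of the corrected co-op estimate: take each alliance's best
# co-op OPR, cap it at the 20-point co-op bonus, and award the total
# to both alliances, since successful co-op scores for everyone.

def coop_points(blue_coop_oprs, red_coop_oprs, cap=20):
    """min(max(BlueCoopOPRs), cap) + min(max(RedCoopOPRs), cap)."""
    return min(max(blue_coop_oprs), cap) + min(max(red_coop_oprs), cap)

# Example: blue's best co-op OPR is 30 (capped to 20), red's is 12.
print(coop_points([30, 5, 0], [12, 8, 0]))  # -> 32
```

The cap matters: without the min(…, 20), an alliance whose best co-op OPR exceeds 20 would be credited more co-op points than the game can actually award.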

I would like to get a copy of this source code before CMP if possible, so I can play with it while on the plane.

Here is the source Netbeans project that I used to run the simulations along with all of the data files I used. Thanks to Evan Forbes and his website for providing me with a source for the best OPR data.

I may rework the randomization model code tonight as the model used for simulations was a very naive version.

Feel free to PM me with any questions.

ScheduleSimulator.zip (59.8 KB)

So, statistically speaking, it is proven that never in 10,000 years will 254 not finish 1st in their division. I don’t think that would have happened with any other game under the exact same algorithm.

Does anyone know how to run the same algorithm, for say last year, and see how close it is to actual results?

Only if some very rough assumptions hold. It’s not guaranteed.

I think a more accurate statement would be: if their division played the qualification rounds 10,000 times, 254 would seed first in all of them.

Because of the change from W-L to average score, it’s much easier for top teams to stay ranked high. 1114 in 2008 was similarly dominant (scoring 50% more than the next best team in the division); however, under a W-L system they wouldn’t always seed first (see http://www.chiefdelphi.com/forums/showpost.php?p=735425&postcount=165)
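A tiny Monte Carlo toy illustrates the point (the model and all numbers here are my own crude assumptions, not the thread’s actual simulator): a dominant team’s average score stays far above the field even on schedules where noise costs it a match or two, whereas under W-L each such loss would directly cost ranking spots.

```python
import random

# Toy model: a dominant team contributes ~90 points per match while
# typical partners/opponents contribute ~30, over a 10-match schedule.
# We track both ranking metrics: average alliance score and W-L record.

random.seed(1)

def run_schedule(dominant_opr=90.0, field_opr=30.0, matches=10, noise=15.0):
    """Simulate one schedule; return (avg score, wins) for the dominant team."""
    total, wins = 0.0, 0
    for _ in range(matches):
        partners = sum(random.gauss(field_opr, 10.0) for _ in range(2))
        opponents = sum(random.gauss(field_opr, 10.0) for _ in range(3))
        our_score = dominant_opr + partners + random.gauss(0.0, noise)
        their_score = opponents + random.gauss(0.0, noise)
        total += our_score
        wins += our_score > their_score
    return total / matches, wins

avg, wins = run_schedule()
print(f"avg score {avg:.1f}, record {wins}-{10 - wins}")
```

Even when the record comes out short of 10-0 on an unlucky schedule, the average score barely moves, which is why the average-based ranking pins a dominant team to the top seed so reliably.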

If someone did want to run the same algorithm, the Galileo 2008 schedule is here: http://www2.usfirst.org/2008comp/Events/galileo/ScheduleQual.html and best OPRs are here: http://www.chiefdelphi.com/forums/showpost.php?p=733568&postcount=152

For kicks and giggles, here is the simulation for the Carson division rerun with the new schedule:

Division: cars
Teams found: 75
Iterations: 10000
Number	AvgRank	MaxRank	MinRank
254	1.0	1.0	1.0
1519	2.9238	12.0	2.0
225	3.2303	12.0	2.0
1325	4.6833	18.0	2.0
2085	6.234	23.0	2.0
4488	6.2526	21.0	2.0
67	7.6632	24.0	2.0
85	7.7193	26.0	2.0
1730	9.4748	28.0	2.0
5406	10.9487	32.0	2.0
5254	12.0255	38.0	3.0
4587	12.127	32.0	2.0
3478	13.2271	37.0	3.0
1296	16.084	40.0	3.0
399	17.4508	42.0	5.0
5122	17.8394	41.0	5.0
973	18.2159	41.0	4.0
16	18.674	44.0	5.0
4980	19.0676	42.0	4.0
236	22.354	45.0	6.0
1501	22.3839	45.0	7.0
3604	23.095	49.0	7.0
5338	24.7953	52.0	6.0
60	25.2075	54.0	8.0
3339	25.5461	52.0	8.0
999	27.2118	53.0	9.0
1711	27.5821	56.0	10.0
558	28.4328	62.0	9.0
203	29.5784	60.0	10.0
3547	29.9607	57.0	11.0
1629	30.3261	55.0	10.0
467	31.0024	60.0	10.0
1058	31.6473	59.0	9.0
2471	32.785	59.0	12.0
1511	33.265	66.0	12.0
1510	34.8757	64.0	11.0
2377	37.0282	61.0	13.0
246	38.782	71.0	15.0
1885	38.8871	68.0	14.0
5053	39.0219	67.0	16.0
3506	39.5788	67.0	17.0
4215	43.708	70.0	21.0
3481	44.9965	72.0	20.0
5659	45.4461	74.0	21.0
3946	46.4022	73.0	20.0
20	47.1695	75.0	23.0
2534	48.0525	73.0	22.0
2075	49.9531	74.0	18.0
173	50.0761	75.0	22.0
375	50.4154	73.0	23.0
5510	51.8697	74.0	27.0
2521	51.8814	75.0	26.0
4028	54.6548	75.0	29.0
5416	54.7641	75.0	27.0
1241	54.8783	75.0	27.0
4499	55.5049	75.0	28.0
93	57.2367	75.0	32.0
418	57.4458	75.0	31.0
2905	59.3855	75.0	35.0
5625	60.0594	75.0	30.0
1458	60.5968	75.0	35.0
4818	61.0299	75.0	34.0
5655	61.7911	75.0	35.0
5549	62.3484	75.0	37.0
4574	62.4298	75.0	34.0
2601	62.8756	75.0	36.0
5059	63.1868	75.0	36.0
5719	63.4183	75.0	38.0
3256	67.3898	75.0	41.0
1710	67.797	75.0	42.0
1306	69.3932	75.0	46.0
3880	70.5046	75.0	43.0
2283	71.0979	75.0	48.0
3728	71.4733	75.0	48.0
4953	72.5791	75.0	48.0

This was run with the same obviously naive assumptions made by my initial randomization model. I don’t have too much faith in it being terribly accurate, but it is always fun to see what it spits out.

Good luck to everyone tomorrow!

Jeremy,

Are you using Excel to do this analysis, or SPSS, or something else? If something else, can you let us know? Pretty fun data, thanks for sharing!

As mentioned on Gamesense, here’s a spreadsheet comparing the simulated rankings against the actual rankings:
https://docs.google.com/spreadsheets/d/1n4OxmEnZwwLtjqdz7z0EJQQQlHFBzcDFEuqx_z-Mx20/pubhtml

Pretty cool. We were simulated to come in 4th (3.8) and actually did come in 4th.