Chezy Champs Ranking Projection Contest


#1

Announcing the Chezy Champs Ranking Projection Contest! I've previously made threads where I just put up my own ranking projections for events, but I thought this could be much more fun. Although I would love it if people submitted a full probability distribution for each team, that seems a touch unreasonable, so instead I'm going to look at each team's average predicted rank (although if you do have a full probability distribution I'd love to see it). What I like about average rank is that it gives you a way to quantify uncertainty. If you think two teams will seed about equally well, that's fine; just give them the same predicted rank. If you don't like dealing with uncertainty, just submit a list of teams from best to worst with predicted ranks 1 to 44. The winner of this contest will receive a $30 AndyMark gift card from me. I'll be entering as well, so if no one beats me I save $30. #rigged

**How to submit:**

  1. Go to this link
  2. Click on File->Make a Copy…
  3. Change the name of the file to be “[Your Name or Username]’s CC Predictions” (e.g. “Bob’s CC Predictions”)
  4. Edit the values in the last column as you see fit
  5. In the top-right corner click SHARE -> Get shareable link
  6. Paste the link into a post on this thread.

**Rules:**
• You can make changes to your sheet as frequently as you want; I'm going to wait until 9/27 to copy all predictions, so just make sure your final predictions are done by then. If you want to take a first pass and share the link but update later, that's cool.
• The winner must reply to my PM to them asking for contact info within one week of the completion of Chezy Champs, otherwise the award is forfeited.
• Submission deadline is Thursday 9/26 at 11:59PM central time.
• I reserve the right to remove any submission for any reason (e.g. if I find out you are throwing matches to improve your predictions).
• I reserve the right to edit or add any rules at any time.

**Scoring:**
After qual matches are completed at Chezy Champs, I'll take the Root Mean Square Error (RMSE) of each submission compared to the actual rankings; the submission with the lowest RMSE wins. For example, here is a sheet showing how I would have calculated the RMSE for my pre-event IRI projections. If your submission is missing a team, I will use a predicted rank of 0 for them. If you include a team that doesn't end up competing, they will be ignored in the score.
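
In code terms the scoring works out to roughly this (a sketch; `predicted` and `actual` here are dicts mapping team number to rank):

```python
import math

def score_submission(predicted, actual):
    """RMSE of predicted vs. actual ranks, per the rules above.

    Teams missing from the submission count as a predicted rank of 0;
    extra predicted teams that didn't compete never appear in `actual`,
    so they are ignored automatically.
    """
    squared_errors = [
        (predicted.get(team, 0) - rank) ** 2
        for team, rank in actual.items()
    ]
    return math.sqrt(sum(squared_errors) / len(squared_errors))
```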

**Where to view results:**
I'll leave an open spreadsheet here since the open spreadsheet for the Detroit Prediction Contest worked so well. Feel free to make cool graphs/analysis and add in new submissions as they come in (note this is not official, though; you must still submit using the above method). If people abuse the editing power I'll restrict access to people I trust. I'll have a separate private sheet I'll use for the official evaluation.

Well, have at it. Keep in mind the team list could still change, so keep an eye on that if you submit early. I'm really interested in how people derive their predictions, so while it's not a rule, I'd personally appreciate it if you explained your reasoning/process in your submission, and whether you were building off someone else's work or starting from scratch.


#2

I’ve updated my predictions to use the most recent team list.

I’d be fine taking an easy victory, but I hope others submit since that’ll be more fun. :wink:


#3

I don’t think I’m smart enough to make a full submission, but 254 seems to win Chezy Champs each year they win world champs, so I’m gonna go with that.


#4

What the heck. Just for fun, I used your data, found the average rankings for several categories I considered the most influential, and took my own weighted average.

Predictions


#5

Here are my initial predictions. I may change them in the future if I spend more time with my program.

I made my predictions like this:

  1. Collect average win, auto, and climb RPs for all teams across all 2018 events
  2. Randomize the CC team list, take the teams 1-3 for the red alliance and 4-6 for the blue alliance
  3. Whichever alliance has the higher avg win RP gets 2 RPs
  4. Pick a random number 0-1; if the random number is less than the red alliance’s average auto RP, give those teams 1 RP
  5. Pick a random number 0-1; if the random number is less than the red alliance’s average climb RP, give those teams 1 RP
  6. Repeat 4-5 for the blue alliance
  7. Repeat 2-6 for the number of matches per event (I chose 70, 10 matches/team)
  8. Sort teams by their average RP, and assign ranks for that simulation
  9. Repeat 2-8 for each simulated event (I chose ~~10,000~~ 50,000 now)
  10. Average each team’s rank across all simulations

The biggest failing of this program is probably the match assigner: nothing ensures each team plays the same number of matches. Since the assignment is completely random, in a given simulation one team could have as few as one match (if a team has no matches I rerun the simulation) while another could have upwards of 20. Hopefully this is made up for by running ~~10,000~~ 50,000 simulations, but in reality probably not.
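
The core of it looks roughly like this (a simplified sketch of steps 2-8; names like `avg_rp` are illustrative, and the real program linked below differs in the details):

```python
import random
from collections import defaultdict

def simulate_event(teams, avg_rp, matches=70):
    """One simulated event (steps 2-8); returns {team: rank}.

    avg_rp[team] holds season-average 'win', 'auto', and 'climb' RPs.
    """
    total_rp = defaultdict(float)
    played = defaultdict(int)
    for _ in range(matches):
        random.shuffle(teams)                      # step 2
        red, blue = teams[:3], teams[3:6]
        # Step 3: alliance with the higher average win RP gets the 2 RPs
        red_win = sum(avg_rp[t]['win'] for t in red)
        blue_win = sum(avg_rp[t]['win'] for t in blue)
        winner = red if red_win >= blue_win else blue
        for t in winner:
            total_rp[t] += 2
        # Steps 4-6: roll against each alliance's average auto/climb RP
        for alliance in (red, blue):
            for bonus in ('auto', 'climb'):
                if random.random() < sum(avg_rp[t][bonus] for t in alliance) / 3:
                    for t in alliance:
                        total_rp[t] += 1
            for t in alliance:
                played[t] += 1
    # Step 8: rank by average RP per match played
    order = sorted(played, key=lambda t: total_rp[t] / played[t], reverse=True)
    return {t: i + 1 for i, t in enumerate(order)}
```

Steps 9-10 then just call this 50,000 times and average each team's rank across the runs.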

EDIT: If anyone is interested, you can find the python program here

EDIT 2: At first I misread the instructions and posted my predictions in the results sheet. My bad, all fixed now.


#6

It’s fine if you put your predictions in the “CC Prediction Summary” book, just know that official predictions need to be made by making a new link. The “CC Prediction Summary” is unofficial, which is why I made it publicly editable.


#7

So I made a full probability distribution for fun:

[![full probability distribution](https://i.imgur.com/0Q74yq9l.jpg)](https://i.imgur.com/0Q74yq9.jpg)
(click for enlarged view)

Also, I ran the program on the IRI data and got an RMSE of 19.1. Not really sure how that stacks up in the scheme of things.


#8

When you get it finalized, please share the spreadsheet with me. I’d love to compare it to mine after the event.

19.12 is a solid RMSE for IRI. I got 18.33 for that event, and jtrv got 18.32 using an algorithm that incorporated scouting data; he could have gotten as low as 17.26 if his model had been perfectly calibrated.


#9

You can use the Chezy Arena schedule and randomize the team numbers.
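
In sketch form, that would be something like this (the slot-number format of the published schedule is an assumption):

```python
import random

def randomized_schedule(slot_schedule, teams):
    """Map the real (slot-numbered) schedule onto a shuffled team list.

    slot_schedule: one tuple of six 1-based slot numbers per match,
    (red1, red2, red3, blue1, blue2, blue3), from the published schedule.
    This keeps its even match distribution while still randomizing which
    team lands in which slot.
    """
    shuffled = random.sample(teams, len(teams))
    return [tuple(shuffled[s - 1] for s in match) for match in slot_schedule]
```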


#10

That was actually easier than expected; I got it integrated in under an hour. When I reran the numbers for IRI I got a slightly worse RMSE of 19.3, but that could also be down to unlucky random numbers. This is still a sounder method, so I'm updating my predictions.

Caleb,
I’m putting my updated probability distribution table here. I’ll update it whenever (if ever) I update my predictions, so you can assume it’ll be final on Thursday if you only want the final version.


#11

I’ve updated my predictions based on the preliminary schedule. Here’s who I think got good and bad schedules:
good:
1072
5012
4911
604

bad:
3538
1538
2557

| Team | Old  | New  | Change |
|------|------|------|--------|
| 8    | 29.9 | 30.6 | 0.7  |
| 115  | 28.2 | 30.3 | 2.1  |
| 254  | 6.2  | 8.2  | 2.0  |
| 604  | 24.6 | 20.9 | -3.7 |
| 649  | 29.3 | 26.6 | -2.7 |
| 687  | 25.8 | 22.6 | -3.2 |
| 696  | 33.2 | 35.6 | 2.4  |
| 842  | 22.2 | 20.0 | -2.2 |
| 846  | 24.5 | 22.1 | -2.4 |
| 968  | 26.8 | 23.4 | -3.4 |
| 971  | 15.6 | 16.8 | 1.2  |
| 973  | 16.3 | 14.6 | -1.7 |
| 1072 | 31.4 | 26.3 | -5.1 |
| 1323 | 7.9  | 9.6  | 1.7  |
| 1538 | 15.4 | 19.1 | 3.7  |
| 1678 | 6.1  | 6.9  | 0.8  |
| 1983 | 26.1 | 23.0 | -3.1 |
| 2046 | 12.8 | 14.5 | 1.7  |
| 2471 | 12.9 | 13.9 | 1.0  |
| 2557 | 16.6 | 19.9 | 3.3  |
| 2659 | 26.3 | 26.5 | 0.2  |
| 2910 | 18.3 | 19.5 | 1.2  |
| 2990 | 28.7 | 27.3 | -1.4 |
| 3250 | 25.0 | 23.0 | -2.0 |
| 3309 | 9.1  | 11.2 | 2.1  |
| 3310 | 12.4 | 15.1 | 2.7  |
| 3476 | 14.2 | 15.0 | 0.8  |
| 3478 | 17.7 | 16.9 | -0.8 |
| 3512 | 15.9 | 18.1 | 2.2  |
| 3538 | 17.1 | 24.8 | 7.7  |
| 3647 | 27.4 | 25.4 | -2.0 |
| 4159 | 32.0 | 29.8 | -2.2 |
| 4183 | 31.9 | 31.3 | -0.6 |
| 4388 | 24.9 | 25.7 | 0.8  |
| 4488 | 14.0 | 12.5 | -1.5 |
| 4911 | 22.8 | 18.4 | -4.4 |
| 5012 | 23.7 | 18.9 | -4.8 |
| 5026 | 26.9 | 30.0 | 3.1  |
| 5499 | 30.6 | 32.2 | 1.6  |
| 5803 | 17.6 | 20.4 | 2.8  |
| 5818 | 18.8 | 18.8 | 0.0  |
| 5924 | 34.7 | 31.7 | -3.0 |


#12

The math behind this isn't coherent enough for me to understand or explain, but the results seem kind of reasonable, so I'm sticking with it.


#13

Well, it correlates fine with the rest of the predictions so far, so you’re probably doing something right.


#14

I too updated my predictions with the preliminary schedule.

I also modified my program to assign win RPs slightly differently. Rather than always giving the win to the alliance with the higher average win RP, the program now calculates a win probability from the difference in alliance average win RPs and a scaling factor calculated from season data*. The winning alliance is then assigned by comparing that win probability with a random number. This makes my predictions a bit less confident, but that's probably better given the unpredictability of off-season events. The new program is here for anyone who's interested.
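
Roughly, the new assignment step looks like this (a sketch; the logistic link is just one plausible shape, and the exact fitted scaling lives in the linked program):

```python
import math
import random

def red_wins(red_avg_win_rp, blue_avg_win_rp, scale):
    """Probabilistic win assignment, replacing 'higher average always wins'.

    The logistic form and the name `scale` are illustrative; `scale`
    stands in for the scaling factor fitted from season data.
    """
    diff = red_avg_win_rp - blue_avg_win_rp
    p_red = 1 / (1 + math.exp(-diff / scale))
    return random.random() < p_red
```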

Edit: I just reran both the with- and without-schedule programs with the new win probability calculator for IRI, and got RMSEs of 18.31 and 18.67, respectively.


#15

DGB’s CC Predictions

I wrote a simple script to calculate each team's average rank for the season from the TBA API [Column E]. Then I ordered the teams 1-42 based on that average rank [Column D]. My current predictions are just the weighted average (D*3+E)/4 [Columns C, F].
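
For example (made-up numbers): a team ordered 10th by average rank (D = 10) with a season average rank of 14 (E = 14) gets a predicted rank of (10*3 + 14)/4 = 11.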

There wasn’t any reasoning behind the weighted average besides the fact that it created a data set with a larger spread. I was also too lazy/busy to remove offseason events, weigh champs and recent performances more heavily, or to do anything else logical…


#16

I meant to say Wednesday 9/26, but apparently I can’t look at calendars. In case anyone was planning on submitting tomorrow, I’m changing the deadline to Thursday 9/27 at 11:59PM central.


#17

BobbyVanNess’s CC Predictions:


#18

I know no one asked for it, but I'm also sharing my individual match predictions that led to the ranking predictions. I'd be interested in comparing them to someone else's if anyone has them and is willing to share.


#19

Here are my detailed match predictions. They might be a little different from the ones I used for my ranking projections, but they should be reasonably close. They aren't exactly what you would pull directly from my simulator; for example, I capped the climb percentage at 80%, and the win probabilities use more Elo and more mean reversion than my simulator normally does. I did this because those settings were found to have more predictive power at IRI, which I figure is the best point of comparison for CC.

I need to do a better job of predicting RPs in the future. Summing predicted contributions, while simple, has some very clear drawbacks. I’m working on some alternatives that I might have ready for next season.
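
To illustrate the kind of drawback I mean (toy numbers, not my actual model): alliance RPs are threshold events, so combining individual contributions can badly overstate them.

```python
# Toy example: suppose the climb RP needs all three robots to climb,
# and each robot on an alliance climbs 60% of the time, independently.
rates = [0.6, 0.6, 0.6]

naive = sum(rates) / 3   # averaging contributions suggests ~0.60 per match
actual = 1.0
for p in rates:
    actual *= p          # but P(all three climb) = 0.6**3 = 0.216

print(naive, actual)     # 0.6 vs ~0.22 -- the simple combination overshoots
```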


#20

Predictions Entry: https://docs.google.com/spreadsheets/d/11objjWsRzR8t6kKufgHAISrgrNiTp2z7kx1rdARjylI/edit?usp=sharing

Match Predictions:

I predicted from scratch without looking at any other predictions, and I didn't predict auto RPs. Some teams are lower than I'd expect, particularly 3538, but that's probably because I was too optimistic when predicting climb RPs.