2019 Chezy Champs Ranking Projection Contest

Alright, predictions have locked. We have 11 contestants this year, which is almost double what we had last year.

Here are the teams with the biggest discrepancies between predictions:

Team Predictions

| Team | min | mean | max | stdev |
| --- | --- | --- | --- | --- |
| 1197 | 5.0 | 28.7 | 38.0 | 8.6 |
| 2928 | 7.0 | 19.0 | 27.0 | 7.5 |
| 846 | 5.7 | 16.1 | 28.0 | 7.3 |
| 2557 | 14.7 | 24.0 | 37.0 | 6.5 |
| 5507 | 12.0 | 25.1 | 35.0 | 6.3 |
| 3476 | 10.0 | 19.9 | 28.0 | 5.4 |
| 3218 | 21.0 | 31.2 | 37.0 | 5.3 |
| 1868 | 18.0 | 24.6 | 33.0 | 5.2 |
| 2910 | 1.0 | 9.3 | 22.0 | 5.0 |
| 2733 | 7.0 | 17.8 | 25.0 | 4.9 |
| 5026 | 14.0 | 22.4 | 30.0 | 4.8 |
| 696 | 24.0 | 31.6 | 38.0 | 4.7 |
| 114 | 15.4 | 22.6 | 29.0 | 4.6 |
| 604 | 11.0 | 16.7 | 27.0 | 4.6 |
| 4183 | 17.8 | 26.2 | 33.0 | 4.5 |
| 1072 | 26.5 | 35.9 | 41.0 | 4.5 |
| 2659 | 16.6 | 22.9 | 30.1 | 4.4 |
| 972 | 26.4 | 34.7 | 40.6 | 4.4 |
| 2102 | 19.0 | 25.7 | 35.0 | 4.3 |
| 3309 | 9.0 | 16.7 | 23.0 | 4.2 |
| 5940 | 26.8 | 36.1 | 40.0 | 4.0 |
| 5818 | 15.0 | 24.6 | 30.0 | 4.0 |
| 5199 | 3.0 | 10.2 | 15.2 | 3.7 |
| 6443 | 13.0 | 17.9 | 23.0 | 3.7 |
| 973 | 3.0 | 7.5 | 15.3 | 3.4 |
| 649 | 7.0 | 14.1 | 20.1 | 3.4 |
| 1671 | 25.9 | 31.2 | 35.0 | 3.3 |
| 2046 | 5.0 | 10.4 | 16.0 | 3.3 |
| 3647 | 12.0 | 16.4 | 23.5 | 3.3 |
| 498 | 27.4 | 33.3 | 37.0 | 3.1 |
| 1710 | 30.7 | 36.3 | 41.0 | 3.1 |
| 4414 | 5.0 | 10.5 | 15.0 | 3.1 |
| 1983 | 9.0 | 13.5 | 17.9 | 3.0 |
| 115 | 19.0 | 22.5 | 28.0 | 2.8 |
| 2930 | 15.0 | 21.1 | 27.0 | 2.8 |
| 254 | 2.0 | 5.1 | 11.3 | 2.7 |
| 971 | 1.5 | 4.3 | 10.0 | 2.7 |
| 5700 | 27.0 | 32.7 | 35.0 | 2.3 |
| 1678 | 1.0 | 3.4 | 7.2 | 2.3 |
| 1619 | 2.0 | 5.4 | 8.7 | 2.2 |
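For anyone curious how a table like this comes together, here's a minimal sketch in Python. The contestant names and predicted ranks below are made-up placeholders, not the actual submissions:

```python
from statistics import mean, stdev

# predictions[contestant][team] = that contestant's predicted rank (hypothetical data)
predictions = {
    "A": {1197: 5.0, 2928: 27.0},
    "B": {1197: 38.0, 2928: 7.0},
    "C": {1197: 33.0, 2928: 23.0},
}

teams = {t for p in predictions.values() for t in p}
rows = []
for team in sorted(teams):
    ranks = [p[team] for p in predictions.values() if team in p]
    rows.append((team, min(ranks), mean(ranks), max(ranks), stdev(ranks)))

# Sort by stdev, descending, to surface the biggest disagreements first
rows.sort(key=lambda r: r[-1], reverse=True)
for team, lo, avg, hi, sd in rows:
    print(f"{team}\t{lo:.1f}\t{avg:.1f}\t{hi:.1f}\t{sd:.1f}")
```
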

1197 tops the list, with an enormous spread from a minimum of rank 5 to a maximum of rank 38. Looks to me like Jace is an 1197 fangirl or something.
The next two teams on the list, 2928 and 846, are the teams that I estimate have the best and worst schedules, respectively. Since I think GaryH, IanH, Jago, and I are the only ones who factored in the unofficial schedules, that is probably the cause of the discrepancy.

Finally, here’s a summary of how confident everyone is. A higher standard deviation means bolder predictions; a lower standard deviation means more conservative predictions:

| Contestant | stdev |
| --- | --- |
| Xand10 | 11.9 |
| lcraig910 | 11.9 |
| Teddy | 11.7 |
| Jace | 11.7 |
| GaryH | 11.7 |
| BordomBeThyName | 10.7 |
| Daniel | 10.6 |
| Kaitlyn | 10.4 |
| Jago | 10.3 |
| Average | 9.6 |
| Evan | 9.3 |
| Caleb | 7.5 |
| IanH | 7.3 |
| Ignoramus | 0.0 |

@Ian_H, the defending champ, has the lowest prediction confidence. My suspicion is that he’ll win again and that the rest of you are overconfident, but we’ll have to see.

To all the entrants: I’d love to hear how you arrived at your predictions, so please share. Best of luck!


I’m not sure that’s realistic. Remember, the hab climb points necessary for the RP are increased at CC.


I did a match-by-match manual win-percentage prediction off of the preliminary schedule, then looked for any matches where either ranking point would be more or less likely and filled that in. Then I ran a small number (<20) of event simulations and averaged the results. I also saw that everyone else predicted 2557 would rank worse than I did, so I artificially tweaked their rank by a few spots.
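A rough sketch of the simulate-and-average idea, for anyone who wants to try it. The teams and win probabilities below are placeholders, and a real version would also account for bonus RPs and tiebreakers:

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the sketch is repeatable

# win_prob[team] = chance of winning each of its matches (hypothetical numbers)
win_prob = {
    114:  [0.8, 0.7, 0.9],
    254:  [0.9, 0.8, 0.85],
    2910: [0.3, 0.4, 0.2],
}

def simulate_once():
    # 2 RP per win (bonus RPs ignored for brevity), then rank by total RP
    rp = {t: sum(2 for p in probs if random.random() < p)
          for t, probs in win_prob.items()}
    ordered = sorted(rp, key=rp.get, reverse=True)
    return {t: i + 1 for i, t in enumerate(ordered)}

# Run a small number of event simulations and average each team's rank
sims = [simulate_once() for _ in range(20)]
avg_rank = {t: mean(s[t] for s in sims) for t in win_prob}
print(avg_rank)
```
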

I’m not feeling very confident, so I would predict that @Caleb_Sykes will win.


To be honest, I don’t think I had realized that at the time…

I win: 254 and 114 are the only 4-RP teams remaining, and they play against each other in q55. Thanks for the $5 @Clint_Ott :blush:

Well, day 1 is over. Using the current ranks, here’s how everyone stands:

Current Standings
| Contestant | Current RMSE |
| --- | --- |
| Caleb | 7.9 |
| GaryH | 8.3 |
| Jago | 8.8 |
| IanH | 9.0 |
| Average | 9.0 |
| Evan | 10.2 |
| Daniel | 10.3 |
| Kaitlyn | 10.3 |
| Teddy | 10.6 |
| BordomBeThyName | 10.7 |
| Xand10 | 10.8 |
| Jace | 10.9 |
| lcraig910 | 11.5 |
| Ignoramus | 11.5 |

Everyone’s beating the baseline predictions of Ignoramus, which is great! The average of everyone’s predictions has an RMSE of 9.0, which is solid. The four of us who used the preliminary schedule have a noticeable lead over those who didn’t, which shows how much schedules matter in determining teams’ outcomes.
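For reference, the scoring metric is just root-mean-square error between each contestant's predicted ranks and the actual ranks. A minimal sketch, with made-up ranks for illustration:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between predicted and actual ranks."""
    errors = [(predicted[t] - actual[t]) ** 2 for t in actual]
    return math.sqrt(sum(errors) / len(errors))

# Hypothetical example: three teams, one contestant's predictions
actual = {254: 1, 971: 2, 1678: 5}
guess = {254: 2.0, 971: 4.3, 1678: 3.4}
print(round(rmse(guess, actual), 2))
```

Lower is better, and large misses are penalized disproportionately, which is why one badly wrong prediction can sink an otherwise solid entry.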

Currently, GaryH is leading the pack at 8.3, but with plenty of matches to go, that’s far from a lock. If we plug in the predicted results from my simulator, here is what I would expect the final scores to look like (take my own predicted score with a grain of salt, since the simulated results come from the same model as my predictions):

Estimated Final Standings
| Contestant | Predicted RMSE |
| --- | --- |
| Caleb | 7.2 |
| GaryH | 7.7 |
| Jago | 8.2 |
| Average | 8.3 |
| IanH | 8.5 |
| Evan | 9.3 |
| Daniel | 9.4 |
| Kaitlyn | 9.5 |
| Xand10 | 10.0 |
| Teddy | 10.0 |
| BordomBeThyName | 10.1 |
| Jace | 10.5 |
| lcraig910 | 10.6 |
| Ignoramus | 11.2 |

I also want to look at the teams that are the biggest overperformers relative to the consensus expectations.

114 is the biggest overperformer by far. They had an average predicted rank of 22.6, and their best ranking prediction came from Evan, who had them at rank 15.4. Well, 114 is doing far better than anyone guessed, with a current rank of third. I’m happy about that since they were one of my pick-em teams. :eagle:

Next we have 5507, who had an average predicted rank of 25.1 and a minimum of 12. They also managed to exceed everyone’s expectations to reach their current seed of 9. Something something quit underestimating eagle teams.

Finally we have 2930, who had an average predicted rank of 21.1 and a minimum of 15. Their current seed of 8 is also well above all expectations.

Honorable mentions to 3218, 1983, 6443, 2102, 1671, and 498, who all managed to exceed every single predictor’s expectations.

We’ll see how it all shakes out tomorrow!


Don’t worry, 2102 is dropping to a comfortable mid-20s spot.

Edit: there we go.


Final results:

| Contestant | RMSE |
| --- | --- |
| Caleb | 7.51 |
| GaryH | 8.31 |
| IanH | 8.32 |
| Average | 8.74 |
| Jago | 8.90 |
| Daniel | 9.55 |
| Evan | 9.55 |
| Kaitlyn | 10.12 |
| Xand10 | 10.26 |
| BordomBeThyName | 10.55 |
| Teddy | 10.74 |
| lcraig910 | 10.92 |
| Jace | 11.28 |
| Ignoramus | 11.54 |

Since I’m a non-competitor, the winner of the contest is GaryH! He edged out IanH by a mere 0.01! Gary, I don’t know your CD username, so I can’t message you. Please PM me by October 6 to receive your WCP gift card; otherwise, it will be forfeited.

Thanks for competing, everyone! We all beat a blind prediction, and the average of all the predictions was a solid contender for first place. Since I forgot to mention it before, my predictions were just the ranking projections from my simulator, reverted 20% toward the mean rank (20.5). The big differences compared to my submissions from last year are that I tweaked Elo a bit at the start of this season to use max Elos instead of end-of-season Elos to create start-of-season Elos, and that I used ILSs instead of predicted contributions for the bonus RPs. Gary, if you would also care to share your methodology, I’d love to hear it.

I’ll probably do this again next year provided there is interest.


In case anyone is looking for an easy way to improve your predictions next year, I highly recommend mean reversion. Essentially, after making your raw predictions, combine them with the average rank (20.5 this year) in a weighted average. How strongly to weight your predictions depends on how confident you are: if you are very overconfident, you should use a lot of mean reversion to compensate; if you are underconfident, you can actually use a negative weight. The formula is just: (new rank) = (old rank) × (1 − (mean reversion weight)) + (average rank) × (mean reversion weight)
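In Python, that formula is a one-liner; the ranks and weights below are just illustrative:

```python
def mean_revert(rank, weight, average_rank=20.5):
    """Blend a raw predicted rank with the average rank (20.5 this year)."""
    return rank * (1 - weight) + average_rank * weight

print(mean_revert(5.0, 0.5))   # a bold rank-5 pick gets pulled halfway toward 20.5
print(mean_revert(5.0, -0.1))  # a negative weight pushes it away from the mean
```
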

Here are the weights that would have given each contestant the best score:

| Contestant | RMSE w/o Mean Reversion | RMSE w/ Mean Reversion | Improvement | Best Weight |
| --- | --- | --- | --- | --- |
| Jace | 11.28 | 9.85 | 1.43 | 50% |
| lcraig910 | 10.92 | 9.58 | 1.34 | 40% |
| Teddy | 10.74 | 9.51 | 1.22 | 40% |
| Xand10 | 10.26 | 9.11 | 1.15 | 40% |
| BordomBeThyName | 10.55 | 9.67 | 0.88 | 40% |
| Kaitlyn | 10.12 | 9.46 | 0.66 | 40% |
| Daniel | 9.55 | 8.98 | 0.56 | 30% |
| GaryH | 8.31 | 7.77 | 0.54 | 30% |
| Jago | 8.90 | 8.57 | 0.33 | 20% |
| Evan | 9.55 | 9.27 | 0.28 | 30% |
| Average | 8.74 | 8.56 | 0.17 | 20% |
| Caleb | 7.51 | 7.37 | 0.14 | -20% |
| IanH | 8.32 | 8.28 | 0.03 | -10% |
| Ignoramus | 11.54 | 11.54 | 0.00 | 0% |

Almost everyone could stand to improve by adding in some mean reversion. I actually shouldn’t have done the 20% mean reversion that I did, as I overcompensated and made my score slightly worse, although it wasn’t a large effect.

I would recommend to almost everyone that you mean-revert your predictions by 20% next year.
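If you want to find your own best weight after the fact, a simple grid search works. A minimal sketch; the predictions and final ranks below are made up, and a real run would use all 42 teams:

```python
import math

def rmse_after_reversion(preds, actual, w, avg=20.5):
    """RMSE after blending each raw prediction w of the way toward the average rank."""
    errs = [((p * (1 - w) + avg * w) - a) ** 2 for p, a in zip(preds, actual)]
    return math.sqrt(sum(errs) / len(errs))

preds = [5.0, 38.0, 12.0, 30.0]  # one contestant's raw predicted ranks (hypothetical)
actual = [10, 25, 18, 26]        # the final ranks (hypothetical)

# Try weights from -50% to +50% in 10% steps, like the table above
weights = [w / 10 for w in range(-5, 6)]
best = min(weights, key=lambda w: rmse_after_reversion(preds, actual, w))
print(best)
```
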


GaryH is ThunderChief on CD, also known as Gary Hedge, the “Predictions Mentor” for team 3476, Code Orange. (ThunderChief comes from being a main mentor for team 980, ThunderBots, when my son, Andrew Hedge, was in high school.)

I am so happy to win this contest and validate my prediction methodology. I correctly predicted Code Orange would win 3 and lose 7. Sorry, team, but our schedule was really hard. I also correctly predicted the win/loss outcome of 46 of the first 53 matches. Predicting the bonus RPs was harder.

I am curious who “IanH” is. Will you reveal yourself? P.S. This is the second time we have had the honor of helping warm up the Cheesy Poofs for the semifinals by being a member of the #8 alliance. The #8 alliance was the only underdog that did not pull off an upset in the QFs. :cry: Congratulations to 5507, 2733, 498, and 2557 for making the finals and outperforming all predictions.


Ian H is @Ian_H, a 5026 mentor who does a little bit of everything, including strategy.

Not really a part of the contest, but I charted the ranking progression of all teams through their 10 qualification matches. I think it’s pretty interesting. I split the charts into two groups for better legibility.

For example, we (5818) had a rough start with two robot system failures in our first two matches. As of lunchtime Saturday, we had 0 RP and were ranked dead last. But the team rallied hard, and we managed to climb to 17th by the end of quals.


And other teams had a great start, but then struggled toward the end:

Here is the sheet if you want to make a copy and play with it:


Hi GaryH! Congratulations on winning the contest. I’m glad you won this year. We’ve got to find a way to beat Caleb next year!

I’ve been the lead technical mentor of 5026 since 2014, though I really enjoy the strategy side. As there are more strategic possibilities with 5026, I’m very happy to be able to give more strategy advice and hopefully support the growth of that side of the team. We have a lot to learn this next year (any advice is appreciated).

It was pretty fun interacting with Code Orange this weekend, sorry for the rough match we played together.
