Announcing the Houston Match Prediction Contest! Anyone can submit predictions for the matches in any Houston Championships division using the preliminary schedule. For each division, whoever makes the best predictions will receive a $15 AndyMark gift certificate from me, which means I could be giving out up to $90! I have a job now though, so I should be fine. If you are not part of a team, I can alternatively send the gift card to a team of your choice. Rules and regulations below.
Change the name of the file to be “[Your Name or Username]’s [Event] Predictions” (e.g. “Bob’s Turing Predictions”)
Edit the values in the last column as you see fit
In the top-right corner click SHARE -> Get shareable link
Paste the link into a post on this thread.
Limit one submission per division per person, but I get 3 entries (my Elo, max OPR, and Elo/OPR average). Sorry if you think that's not fair; make your own contest if you want multiple entries.
When comparing submissions to mine, your predictions must differ from each of my 3 prediction sets by at least 100 cumulative percentage points over all matches. That is, you can't submit a copy of one of my prediction sets with one match changed by a single percentage point. If you start from scratch on your predictions, this requirement is no problem; it mainly applies to anyone modifying my existing predictions. Since each division has over 100 matches, a simple way to achieve the difference would be to shift all of my match predictions up or down by one percentage point. Another would be to find at least 10 matches and change their predictions by at least 10 percentage points each.
You are free to make submissions for multiple divisions.
I will make copies of your predictions when you publish them, so don’t share the link until your predictions are finalized. I’m not going to go back and re-copy anything if you make changes.
Don’t directly copy anyone else’s predictions. I’m sure everyone is fine with you using their predictions as a starting point, but you should really strive to be at least 100 percentage points different from everyone else as well. I may or may not enforce this.
If the final schedule for a division does not end up matching the preliminary schedule, consider the contest cancelled for that division unless I say otherwise.
Submission deadline is Tuesday 4/17 at 11:59 PM Central Time.
I reserve the right to remove any submission for any reason (e.g. if I find out you are throwing matches to improve your predictions).
I reserve the right to change or add any rules at any time.
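For anyone unsure whether their modified predictions clear the 100-percentage-point rule above, here is a minimal sketch of the check. The function names and the prediction lists are illustrative, not part of the official contest tooling:

```python
# Hypothetical check of the cumulative-difference rule: the sum of
# absolute per-match differences between two prediction sets (in
# percentage points) must be at least 100.

def cumulative_difference(yours, host):
    """Sum of absolute per-match differences, in percentage points."""
    return sum(abs(a - b) for a, b in zip(yours, host))

def is_distinct_enough(yours, host, threshold=100.0):
    return cumulative_difference(yours, host) >= threshold

# Example: shifting every prediction by 1 point across 120 matches
host_preds = [50.0] * 120
your_preds = [51.0] * 120
print(cumulative_difference(your_preds, host_preds))  # 120.0
print(is_distinct_enough(your_preds, host_preds))     # True
```

As the rules note, either many small shifts or a few large ones will satisfy the threshold.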
Determining the Best Predictions:
As people submit, I will maintain a master book of all submissions here. When Houston matches are finished, I will compare these predictions to the results of the Houston matches. The prediction accuracy metric I will use is the Brier score, or mean squared error. Essentially, your prediction is subtracted from the actual match result (red win = 0, blue win = 1, tie = 0.5), and the difference is squared. Your total Brier score is the average of these squared differences over all matches. I like Brier scores because they punish both over-confident and under-confident predictions.
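To make the scoring concrete, here is a small sketch of the Brier score as described above (function name and sample numbers are mine, for illustration only):

```python
def brier_score(predictions, outcomes):
    """Mean squared error between blue-win probabilities (0-1) and
    actual results (red win = 0, blue win = 1, tie = 0.5).
    Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Example: three matches — blue win, red win, tie
preds = [0.8, 0.3, 0.5]    # predicted blue win probabilities
results = [1.0, 0.0, 0.5]  # actual outcomes
print(round(brier_score(preds, results), 4))  # 0.0433
```

Note the symmetry: predicting 0.8 on a blue win costs the same as predicting 0.2 on a red win, which is why both over- and under-confidence are punished.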
There is no second place; if no one beats my predictions for a division, I save $15.
Gift Certificate Information:
I will post the winner for each division on this thread within 1 week of the Houston Championships concluding. After my post, winners have 1 week to PM me their contact information, and we will coordinate delivery of the AndyMark gift card from there. If you do not contact me within this timeframe, you forfeit your prize.
Why Are You Doing This?
Mostly for fun. However, if you wouldn’t mind posting your reasoning along with your predictions I would be very interested. If the reasoning is “insider information” that’s cool, but I’m particularly interested in which (if any) universal metrics you are basing your predictions on. To the best of my knowledge, my Elo and OPR average is the best general predictive model out there, but I would genuinely love to be proven wrong. I guess I’m also trying to be a discount Nate Silver and/or Dr. Joe, so there’s that.
If you have any questions, please ask before submitting. Also, make sure to check the entire thread in case I have posted updates or clarifications. Let’s see who can make the best predictions!
Here is my approach. Note I am writing this in a rush, so if anything is unclear or you have questions feel free to ask.
I’m a big fan of ML and have been using it for my match predictions so far this season. I first generate component OPRs for a team and feed them, along with the values for the other two teams on the alliance, into an LSTM that estimates the score for each component of the match. The second part is a lot more fun. Since kickoff, I’ve been training a bot (I have named him Hal) to play Power Up. I can elaborate a lot more on him later, but here is the general idea. Hal is an NN that takes a game state as input and outputs a continuous value of that state from the bot’s perspective, along with a policy: a probability vector over all possible actions. Given a game state, Hal uses MCTS to improve these estimates over time. After thousands of iterations of self-play, Hal learned to play Power Up. One option I have with Hal is that I can input certain characteristics like climb rate, etc. Using the component OPRs from before, I give Hal his characteristics and play him against another Hal described by the characteristics of the other alliance. After 1000 match simulations, I calculate the probability of winning for each alliance.
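The final step described above — turning repeated agent-vs-agent simulations into a win probability — can be sketched in a few lines. This is only a toy stand-in with made-up strength/noise parameters, not Hal itself:

```python
import random

# Toy sketch of the Monte Carlo step only: estimate a win probability
# by repeatedly simulating a noisy match between two parameterized
# alliances. The Gaussian score model here is a placeholder for the
# real agent-vs-agent simulations.

def simulate_match(red_strength, blue_strength, noise=30.0):
    """One noisy match; returns the result from blue's perspective
    (red win = 0, blue win = 1, tie = 0.5)."""
    red_score = random.gauss(red_strength, noise)
    blue_score = random.gauss(blue_strength, noise)
    if blue_score > red_score:
        return 1.0
    if red_score > blue_score:
        return 0.0
    return 0.5

def blue_win_probability(red_strength, blue_strength, n=1000):
    """Average result over n simulated matches."""
    results = [simulate_match(red_strength, blue_strength) for _ in range(n)]
    return sum(results) / n

random.seed(0)
print(blue_win_probability(300, 400))  # blue heavily favored, so well above 0.5
```

With 1000 simulations, the sampling error on the estimated probability is small (on the order of 1–2 percentage points), which is plenty for match predictions.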
Quick update, I had 2 extra matches from Hopper at the bottom of the Newton, Roebling, and Turing sheets. I have removed them now.
This is super cool! We’ll see how it works out. This is exactly the kind of thing I was hoping to fish out with this contest, since I’m sure lots of people make cool predictive models on their own, but don’t bother to post them for one reason or another. I’m always skeptical of ML models since it seems like many people who use them don’t seem to understand things like separating training and testing data. Now with this contest, we’ll get to see whose models are actually predictive, and whose are just explanatory.
Your probability predictions look identical to mine, but it seems you have made W/L/T predictions for each match in column I. Please convert these predictions into blue win probabilities and resubmit. For example, you could set all of your blue wins to 100%, red wins to 0%, and ties to 50%. I would recommend using the full range of probabilities, but you can do what you want.
If you do not change these before the deadline I will use the transformation described above for your entry.
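The transformation described above is just a fixed mapping. Here it is as a quick sketch, assuming the column contains "W"/"L"/"T" picks from blue's perspective (the labels and list are illustrative):

```python
# Map W/L/T picks (blue's perspective) to blue win probabilities,
# per the default transformation: win = 100%, loss = 0%, tie = 50%.
OUTCOME_TO_PROB = {"W": 1.0, "L": 0.0, "T": 0.5}

picks = ["W", "L", "T", "W"]
probs = [OUTCOME_TO_PROB[p] for p in picks]
print(probs)  # [1.0, 0.0, 0.5, 1.0]
```

Under Brier scoring, hard 0%/100% picks are maximally punished when wrong, which is why using the full probability range is usually the better strategy.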
No, this isn’t TBA’s algorithm, though it shares some components with it. It’s something I’ve been playing around with this season, but I haven’t had time to get it running on TBA’s infrastructure.
Side note: my prediction model seems to be doing alright in general, but I haven’t analyzed how well it extrapolates to events with no matches played yet. In other words, my model may adapt well to an event after a few matches, but could be complete trash at the very start of one. Some of my predictions are VERY different from the other submissions… either my model knows something the others don’t, or it’s just very wrong.