2019 Houston Match Prediction Contest

#1

This is the official thread for the $50 Houston Match Prediction Contest sponsored by VEXPRO/WCP. Here is the contest from last year for reference. Anyone can submit their predictions for whichever division(s) they choose, using the preliminary schedule that FIRST released a few days ago. The entry with the lowest Brier score and the entry with the most matches correctly predicted will each receive a $25 VEXPRO/WCP gift card.

Scoring:

  • The master spreadsheet is here (it’s currently a copy of last year’s; please make edits to get it set up for this year).
  • I will use Brier scores to determine the best model. Your prediction will be subtracted from the actual result (red win = 0, blue win = 1, tie = 0.5) and then squared. So if you predict an 85% blue win probability and blue does win, your Brier score for that match would be 0.15^2 = 0.0225 (see the worked sketch after this list).
  • The model with the lowest average Brier score across all matches from all eligible divisions (see the 5th bullet point in “Other Rules”) will win the contest.
  • You are welcome to not enter all divisions, but your predictions for all divisions that you do not enter will be considered to be 0.5 (tie) for every match, for the sake of finding the best overall model.
  • Winners must send me a PM on Chief Delphi within 1 week of the end of Houston half-champs to claim the gift card.
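
For anyone who wants to sanity-check their own scoring, here is a minimal Python sketch of the Brier calculation described in the list above. The function names and example numbers are mine, not part of the official sheet; the master spreadsheet remains the source of truth.

```python
def brier_score(prediction: float, actual: float) -> float:
    """Squared error between a blue-win probability and the actual result
    (red win = 0, blue win = 1, tie = 0.5)."""
    return (actual - prediction) ** 2


def average_brier(predictions, actuals):
    """Average Brier score over a list of matches. Per the rules, matches in
    divisions you did not enter are scored with a 0.5 prediction."""
    scores = [brier_score(p, a) for p, a in zip(predictions, actuals)]
    return sum(scores) / len(scores)


# The example from the rules: an 85% blue-win prediction, and blue wins.
print(brier_score(0.85, 1.0))  # ~0.0225, i.e. 0.15 squared
```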

Other Rules:

  • Limit one submission per division per person
  • You are free to make submissions for multiple divisions.
  • All submissions are due by 11:59 PM Central time on 4/17. I’ll make my own private copy of the public sheet at that time for official scoring.
  • Don’t directly copy anyone else’s predictions. I’m sure everyone is fine with you using their predictions as a starting point, but you should strive to be at least 100 percentage points different from everyone else. I may or may not enforce this.
  • If the final schedule for a division does not end up being equivalent to the preliminary schedule, that division will be ignored in the calculation of the best overall model. So if the schedule for one division changes, the rankings will be determined only by the Brier scores for the other 5 divisions.
  • I reserve the right to remove any submission for any reason (e.g. if I find out you are throwing matches to improve your predictions).
  • I reserve the right to change or add rules at any time. Also, I’m not going to be a great organizer since I’m on a busy Australia trip, so I’m counting on all of you to self-moderate.

Good luck, have fun, and I look forward to seeing what everyone comes up with.

7 Likes

#2

Update, we just doubled the prize pool! Thanks to VEXPRO/WCP and @R.C for sponsoring this competition. There will now be 2 separate $25 winners, one for the entry with the best Brier score, and the other for the entry with the most matches correctly predicted. Winners will receive VEXPRO/WCP gift certificates.

I will still enter, but as a non-contestant since I don’t want to be a host and a competitor if I’m handling someone else’s money.

3 Likes

#3

Is the submission process the same as last year’s?

0 Likes

#4

Just manually enter your submission in the shared spreadsheet for now.

0 Likes

#5

I entered my submissions and started cleaning up the sheet. It’s hard to tell if everything will work correctly without any matches played and only my predictions entered.

If we plan on reusing this sheet year after year, we may want to make it easier to modify things like the number of matches and the names of each sub-division, and to add new people to the contest. Right now, you have to manually modify them in a thousand places all over the spreadsheet.

0 Likes

#6

Predictions entered! Thanks for setting this up again Caleb.

0 Likes

#7

Just entered mine. I’ll be keeping last year’s goal of beating the coin flip column.

Edit: I just ran my model through the PNY DCMP and ended up with a Brier score of 0.182. Not great, but better than last year.

1 Like

#8

Just added mine!

0 Likes

#9

Mine are in (I think). If I win, I’ll give 50% of the winnings to Caleb, and 50% to Eugene.

1 Like

#10

I wonder what @kmehta used for his algorithm…

3 Likes

#11

The best part is that this year I just stuck the AVERAGE function in the spreadsheet, so whenever you decide to update your predictions, I don’t have to do anything to change mine :slight_smile:
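
For anyone curious, the idea is nothing fancier than taking the mean of everyone else’s blue-win probabilities for each match. A rough sketch of that consensus entry (the example numbers are made up):

```python
def consensus_prediction(other_predictions):
    """Average everyone else's blue-win probabilities for one match."""
    return sum(other_predictions) / len(other_predictions)


# Hypothetical example: three other entries on the same match.
print(consensus_prediction([0.70, 0.55, 0.85]))  # ~0.70
```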

1 Like

#12

ponders entering random noise just to crash Kunal’s winning chances

2 Likes

#13

The fatal flaw of AVERAGE has been exposed. Maybe you should freeze your values, @kmehta, before @Caleb_Sykes changes his.

0 Likes

#14

What if that was the strategy all along? Get Caleb to throw the competition so you would win outright? 604 wins :slight_smile:

1 Like

#15

I have added my predictions in an extra sheet called “John Bottenberg’s Predictions”. However, I am trying to figure out the best way to add them to the sheets with the rest of the predictions. Columns S, T, and U seem hard to move to the right so I can add my predictions in with everyone else’s.

EDIT: I think I successfully added my predictions to the correct sheets, so perhaps no outside action is required.

May the best predictor win! I personally have somewhat bastardized @Caleb_Sykes’s Event Simulator in order to create my predictions, so, Caleb, I have you to thank if I end up winning.

0 Likes

#16

With my predictions finally submitted, I’m curious about others’ methodology and their expected results. I personally used a Keras ML model on component OPR data (I’ve been developing it for a while; whitepaper coming soon??), trained on ~10,000 matches, with a separate test set of about 1,000 matches (I keep it kosher :slight_smile:), giving a Brier score of 0.13 and 80% accuracy. How does that stack up against your predictions?
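
For the curious, a stripped-down sketch of this kind of approach looks roughly like the following; the layer sizes, feature layout, and placeholder training data are illustrative only, not the actual model described above.

```python
import numpy as np
from tensorflow import keras

# Illustrative feature layout: for each of the six teams in a match,
# a handful of component OPRs (e.g. hatch, cargo, climb contributions).
NUM_COMPONENTS = 3
NUM_FEATURES = 6 * NUM_COMPONENTS

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(NUM_FEATURES,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # blue-win probability
])

# Mean squared error on the output probability is exactly the Brier
# score used in this contest, so it makes a convenient extra metric.
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", keras.metrics.MeanSquaredError(name="brier")])

# Placeholder arrays standing in for thousands of historical matches and
# their 0/1 blue-win labels; real training would use historical match data.
X_train = np.random.rand(256, NUM_FEATURES).astype("float32")
y_train = np.random.randint(0, 2, size=(256, 1)).astype("float32")
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)
```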

0 Likes

#17

If you actually get a Brier score of 0.13, that would easily be the best model I’ve ever seen.

I’ve thought for a while that a score around there is theoretically achievable using component data, but I haven’t gotten around to actually building a model that uses it.

0 Likes

#18

Since the Turing schedule has changed, Turing will not be scored for the purposes of this competition.

0 Likes

#19

0.13 is amazingly good. I have a big spreadsheet pulling all of the component OPR data together and building each alliance’s score piece by piece. It’s almost certainly a bad model, but it was fun to build. I ran a bunch of different DCMPs through it and got Brier scores ranging from 0.15 to 0.20 depending on the event.
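
In code form, the idea amounts to something like the sketch below; the team numbers, component values, and the logistic scale factor are all made-up placeholders, not the spreadsheet’s actual numbers.

```python
import math

# Placeholder per-team component OPRs (points contributed per match).
team_components = {
    1: {"hatch": 12.0, "cargo": 18.0, "climb": 10.0},
    2: {"hatch":  8.0, "cargo": 14.0, "climb":  6.0},
    3: {"hatch":  5.0, "cargo":  9.0, "climb":  3.0},
    4: {"hatch": 11.0, "cargo": 16.0, "climb": 12.0},
    5: {"hatch":  7.0, "cargo": 10.0, "climb":  6.0},
    6: {"hatch":  4.0, "cargo":  8.0, "climb":  3.0},
}

def alliance_score(teams):
    """Build an alliance's predicted score piece by piece from its
    teams' component contributions."""
    return sum(sum(team_components[t].values()) for t in teams)

def blue_win_probability(blue, red, scale=10.0):
    """Map the predicted score margin to a win probability with a logistic
    curve; the scale factor is a made-up placeholder, not a fitted value."""
    margin = alliance_score(blue) - alliance_score(red)
    return 1.0 / (1.0 + math.exp(-margin / scale))

print(blue_win_probability(blue=[1, 2, 3], red=[4, 5, 6]))  # ~0.69
```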

0 Likes

#20

Were you using earlier event data or post-event data from the DCMPs? The big caveat I forgot to mention about the number above is that it was measured using each event’s OPRs after that event was completed, so accuracy drops when using OPRs from each team’s most recent event, which is what I’m doing here.

Side note: Can we add Turing back? Please? :pleading_face: My predictions were :fire: for Turing.

0 Likes