https://drive.google.com/drive/folders/1fchYTI-fHdOUowamuMS3wk33Dt_1r5fv?usp=sharing

Here’s my little gamble, just simple coin flipping.

Is TBA’s algorithm going to be outputting any predicted results/rankings? I don’t recall seeing it this year…

Caleb or Eugene, would it be possible to add TBA’s algorithm to this contest?

The TBA algorithm is very bad this year since I didn’t have the time to update it. I don’t think it would be a contender in this contest.

Lol, I kind of figured someone would do this. You should be getting a Brier score of about 0.5 for each division. Guessing 50% for all matches would get you a better Brier score of around 0.25.
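Those two numbers fall straight out of the Brier score definition (mean squared difference between your predicted probability and the 0/1 outcome). A quick sketch with simulated match results, just to show the arithmetic:

```python
import random

def brier(preds, outcomes):
    """Mean squared difference between predicted win probabilities and 0/1 results."""
    return sum((p - o) ** 2 for p, o in zip(preds, outcomes)) / len(preds)

random.seed(42)
outcomes = [random.randint(0, 1) for _ in range(10_000)]  # simulated match results

coin_flip = [random.randint(0, 1) for _ in outcomes]  # predict 0% or 100% at random
coin_score = brier(coin_flip, outcomes)   # ~0.5: half the guesses miss, each miss costs 1.0

fifty_fifty = [0.5] * len(outcomes)       # predict 50% for every match
fifty_score = brier(fifty_fifty, outcomes)  # exactly 0.25: every match costs 0.25
```

Always answering 50% caps your loss at 0.25 per match, while a hard 0/1 coin flip pays full price for every miss.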

If someone wanted to grab all of the TBA predictions (or Spyder), I’ll let the predictions in as a non-competing participant. I’d even let it in as a competitor if someone wanted to use it as their one entry, so long as they make clear where they got the predictions from.

TBA’s predictions won’t be up until the schedules are finalized.

https://drive.google.com/open?id=1kGpM2x_WFXkLe0NkzpI1siRMc83qRAvW

Here is a folder containing all of my predictions for each division.

Sounds about right to me! Someone has to throw in a control sample to weigh the other methods against.

Are your Newton predictions correct? They don’t line up with the other entries the way your predictions for the other divisions do.

My predictions for Roebling are here:

This is just the difference in summed component OPRs (from Ether’s data) between the red and blue alliances. The percentage comes from comparing each match’s difference against the rest of the sheet. My closest match was about 50% confidence, the match most skewed toward red was around 1%, and the most skewed toward blue was around 99%.
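If I’m reading that right, each match’s percentage is its percentile within the sheet’s distribution of blue-minus-red OPR differences. A rough sketch of that idea with made-up OPR sums (the real numbers would come from Ether’s dataset):

```python
# Made-up summed component OPRs per alliance; real values would come from Ether's data.
matches = {
    "QM1": (55.0, 42.0),   # (red_opr_sum, blue_opr_sum)
    "QM2": (38.5, 61.0),
    "QM3": (47.0, 46.5),
    "QM4": (70.0, 30.0),
}

diffs = {m: blue - red for m, (red, blue) in matches.items()}  # raw blue advantage

def blue_win_pct(match):
    """Fraction of the rest of the sheet with a smaller blue advantage than this match."""
    others = [d for m, d in diffs.items() if m != match]
    return sum(d < diffs[match] for d in others) / len(others)

predictions = {m: blue_win_pct(m) for m in matches}
```

With this scheme the most blue-skewed match on the sheet lands near 100% and the most red-skewed near 0%, which matches the ~1%/~99% extremes described above.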

My predictions: https://drive.google.com/open?id=1Neo__RUdJmHx8Sm8nbj31WcVaux8LfZd

These are a hybrid of Caleb’s stats and the prediction system 604 has been using this year.

And thanks to Eugene for giving me the idea to enter

Turns out something was very wrong with my model, due to the way I hacked in the preliminary data instead of pulling from TBA.

Caleb, can you please update my predictions? Sorry for the extra work.

I updated the sheets at my old link: https://drive.google.com/drive/folders/1_HKXhKWhXs-QZNzAsXpKSP4IW5xOZoSs

Kunal, you might want to update yours too…

Recommended reading for those trying to really win:

I am curious to see how good (or bad) team number is at predicting the outcome of matches. My predictions are here. Instead of just setting “if sumBlue<sumRed, P_BlueWin=1”, I attempted to create percentage likelihoods of winning. You can see all of the calculations in my spreadsheets, on the “Calculations” tab.

- Calculate red and blue sums (duh :p)
- Red - Blue = raw blue advantage (the smaller sum is more likely to win; if red has the smaller sum, this value will be negative).
- Rank the absolute values of step 2: the smallest difference (aka spread) gets a 0, the largest spread gets a 1, and everything in between is interpolated based on ordered rank (not z-score). This step is done on a per-division basis. Let’s call this value x.
- If sumBlue<sumRed, blueWin% = 0.5+x/2
- If sumRed<sumBlue, blueWin% = 0.5-x/2

The result is basically that the match with the largest difference of alliance sums in each division will be 100% (or 0%). The one with the smallest difference of alliance sums will be 50%.
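The steps above can be sketched in a few lines (the team numbers here are made up, not real schedule data):

```python
def team_number_predictions(matches):
    """matches: list of (red_teams, blue_teams); returns P(blue win) per match."""
    sums = [(sum(red), sum(blue)) for red, blue in matches]
    spreads = [abs(r - b) for r, b in sums]            # |raw blue advantage|
    order = sorted(range(len(spreads)), key=spreads.__getitem__)
    x = [0.0] * len(spreads)
    for rank, i in enumerate(order):                   # smallest spread -> 0, largest -> 1
        x[i] = rank / (len(spreads) - 1)
    return [0.5 + x[i] / 2 if blue < red else 0.5 - x[i] / 2
            for i, (red, blue) in enumerate(sums)]

# Three hypothetical matches (red alliance, blue alliance):
matches = [
    ([254, 1678, 604], [148, 118, 33]),     # blue sum far smaller: largest spread
    ([2056, 1114, 4613], [195, 5190, 971]),
    ([33, 16, 25], [254, 1114, 118]),       # red sum smaller: blue is the underdog
]
probs = team_number_predictions(matches)    # [1.0, 0.75, 0.5]
```

As described, the largest-spread match pins to 100% (or 0%) and the smallest-spread match sits at exactly 50%.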

This isn’t going to do as well as the more sophisticated models, but we will see if it’s better than random chance :).

Your model actually has a reasonably high (0.5ish) correlation coefficient with the other models, so it’s a pretty good bet that it’ll do better than chance.

Some fun facts for the team number-based predictions:

- No match at CMPTX has a single-digit spread. The closest is Q13 on Roebling with a spread of 11 (16833-16822).
- The largest spread is on Q113 on Hopper with the blue alliance sum being 16725 more than the red alliance sum.
- The average spread is approximately 4268.

I will add a few more in a little bit.

Ya, I fed in the wrong team data by accident. It should be corrected now.

Here are my predictions for Turing. I’ll hopefully have them all finished by tomorrow.

https://docs.google.com/spreadsheets/d/1rGq0DFcCEEZhSpH4eAxdFq5WrJjBnErUjFzPQAP9MG0/edit?usp=sharing

My model uses a penalized logistic regression to get the estimated probability of the blue alliance winning in response to the blue alliance’s component EPRs minus the red alliance’s EPRs. Thanks to Ether for his dataset, which made this much easier than hammering the TBA API on each run.
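For anyone curious what that setup looks like, here’s a minimal sketch: an L2-penalized logistic regression fit by plain gradient descent, on synthetic stand-in features (the real inputs would be the blue-minus-red component EPRs from Ether’s dataset, so the numbers below are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per past match, columns are blue-minus-red
# component EPRs (e.g. auto, teleop, endgame). Labels: 1 = blue won.
X = rng.normal(size=(500, 3))
y = (X.sum(axis=1) + rng.normal(scale=1.5, size=500) > 0).astype(float)

def fit_penalized_logistic(X, y, lam=1.0, lr=0.5, steps=3000):
    """L2 (ridge) penalized logistic regression via gradient descent."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # current P(blue win) per match
        w -= lr * (X.T @ (p - y) + lam * w) / n  # penalized gradient step on weights
        b -= lr * np.mean(p - y)                 # intercept left unpenalized
    return w, b

w, b = fit_penalized_logistic(X, y)

# Estimated win probability for an upcoming match's EPR differences (made-up numbers).
upcoming = np.array([0.8, -0.2, 0.5])
p_blue = 1.0 / (1.0 + np.exp(-(upcoming @ w + b)))
```

The penalty term shrinks the weights toward zero, which keeps a small number of noisy component EPRs from producing wildly overconfident probabilities.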

I would be interested to see data on what matches seem to have the most debate between all the submissions.