2018 IRI Bracket Challenge

They’re defending IRI champions. I wouldn’t underestimate them.

Just basing off mkm performances!

I mean, robots are different every year, but I’m still not completely counting them out. A high-level team doesn’t come in unprepared.

I personally feel like 1741, 1720, and 1024 got slept on HARD.

1024 balled out today. I expect them to make elims tomorrow.

Congrats to BrennanB on the win and jtrv on silver! DM me with your email addresses and you’ll receive your prizes! Thank you to everyone who entered and I hope you all had fun trying to predict IRI 2018!

*Username		Points*
BrennanB		360
jtrv			359
DGB			338
Golfer4646		331
nomythicalbeast		326
Rory Lippert		319
mman1506		312
forbes			311
TrevorC			304
kingbrandon14		298
rkap51			298
191jmh25		296
andrewthomas		296
Kevin Leonard		295
NonStopScouting		291
alex.richards48		291
WillNess		286
Tomithy			284
MrMARVINMan		284
cobbie			282
lynca			281
pchild			279
efoote868		279
tmpoles			275
saintblaze4639		275
Kaleb Dodd		274
GearLOading		274
Brian Maher Fan Club	266
Rangel			262
ZhengQi			262
chuteDoor		256
Paperclips		255
TDav540			254
Attention		250
Thayer McCollum		249
fishing_cat		249
Tired Scout		248
Maximillian		248
microbuns		245
Csherm			244
niklas674		239
CurtisB42		238
natejo99		218
Super84			190
Caleb Sykes		150

Feel free to share what surprises you called or where you think you missed the mark.

insert 5254 silver meme here

mr statistics man in last place. Also huge props to alliance 7 for making it to the finals. That run was so fun to watch from home.

He literally had every alliance as a winner so he didn’t get points for playoff performance. No surprise there for me.

I feel like I made some rather questionable decisions, but I see I still did alright. This was a ton of fun, and I look forward to participating again!

I actually would have been in first place, but I got hit hard by the completely unfair and unexpected mid-season rules update.

42% chance 2056 doesn’t seed top 8 btw

I still stand by that prediction.

I’m surprised you stood by it in the first place

I trust my model, and that was my model’s prediction, so I’m standing by it. Not saying my model doesn’t have flaws, because there’s a dozen I could list, but I don’t see anything that tells me 2056 was dramatically undervalued.

If your goal is to get me to doubt my model, your best bet would probably be to use calibration curves with a large sample size or to beat me in some kind of a prediction contest with a large sample size. I try not to let anecdotes sway my opinions much. Anecdotally, I also had 118 at a 38% chance of seeding in the top 8, and that turned out pretty well for me, since I imagine most people would have put their top 8 chances much higher than that.
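For anyone curious, the calibration-curve check mentioned here is only a few lines of code. This is an illustrative sketch, not anyone’s actual model (the function name and the data shape are my own): bin the predicted probabilities, then compare each bin’s average prediction to how often those events actually happened.

```python
# Hypothetical sketch of a calibration check. Data would be
# (predicted probability, actual 0/1 outcome) pairs from past events.

def calibration_bins(predictions, outcomes, n_bins=10):
    """Group (probability, outcome) pairs into equal-width bins and
    return (mean predicted prob, observed frequency, count) per bin."""
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(predictions, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, o))
    results = []
    for pairs in bins:
        if not pairs:
            continue
        mean_pred = sum(p for p, _ in pairs) / len(pairs)
        observed = sum(o for _, o in pairs) / len(pairs)
        results.append((mean_pred, observed, len(pairs)))
    return results
```

A well-calibrated model’s 40% predictions should come true roughly 40% of the time over a large sample, so the two numbers in each bin should be close.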

This is what happens when you part time mentor for 5254, Justin

First things first - I think your model is super cool and the associated coding goes way over my head. So I have massive respect for you there!

If you’re interested, a few of us on the SLFF discord server occasionally do predictions (per event, and it’s a pretty casual setting) and put them up against yours to see how they stack up. For IRI, the best we got manually was a 71.43% average, compared to the 60.95% shown in your model. Here’s an invite link :slight_smile:

That’s a neat idea. How many different manual predictions were there? It seems somewhat unfair to have multiple different predictions and then pick the best one later for comparison. What is the accuracy if you average all of the manual predictions together? It would be neat to see what a “wisdom of the crowd” model would say.

My submission was a “wisdom of the crowd” model using the data available up to when I posted, which was about 36 serious entries. I gave each team a score based on where they were picked (the process was automated via Google Sheets):
Captain of 1 = 32 Points
1st Pick of 1 = 31 Points
Captain of 2 = 30 Points
1st Pick of 2 = 29 Points
Then made my bracket based on who had the most points.
As I expected, my bracket scored about in the middle.
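For anyone who wants to reproduce that kind of aggregation outside of Sheets, here’s a rough sketch in Python. The slot values follow the pattern listed above (captain of 1 = 32, 1st pick of 1 = 31, captain of 2 = 30, …); extending the pattern past the 1st pick of alliance 2 is my assumption, and the function names and example team numbers are hypothetical.

```python
from collections import defaultdict

def slot_points(alliance, pick):
    """Points for a draft slot, following the posted pattern:
    captain of 1 = 32, 1st pick of 1 = 31, captain of 2 = 30, ...
    `pick` is 0 for the captain, 1 for the 1st pick.
    Extending below the 1st pick of alliance 2 is an assumption."""
    return 32 - 2 * (alliance - 1) - pick

def crowd_scores(entries):
    """entries: list of brackets, each a dict mapping
    (alliance, pick) -> team number.
    Returns total crowd points per team across all entries."""
    totals = defaultdict(int)
    for bracket in entries:
        for (alliance, pick), team in bracket.items():
            totals[team] += slot_points(alliance, pick)
    return dict(totals)
```

You would then build the consensus bracket by sorting teams by their totals, exactly as described above.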

There were just three of us that did manual predictions, it was kind of a rushed thing before quals matches started. The manual predictions were at 66.67, 67.72, and 71.53%, so average of 68.57%.

I was curious what would happen if you did exploit the loophole, and scored your bracket with 1 as W, 2 as F, 3/4 as SF, 5/6/7/8 as QF, and you only scored 240. Don’t blame me for this.

Have you done any more validation of your model compared to other simple prediction metrics (OPR, average score, etc.)?

You post very, very confidently about your model very often, and while it seemed to do an okay job of predicting the results of specific matches in Houston, it’s not clear how much better or worse it did than other metrics, or how accurate your stated odds of success actually are. It would be cool to see more of that, and fewer assertions of high confidence in the model.
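For what it’s worth, one standard way to make that kind of comparison is a proper scoring rule like the Brier score, computed for each predictor over the same set of matches. This is only a sketch with invented inputs, not a claim about any particular model’s numbers:

```python
def brier_score(predictions, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; always guessing 50% scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Hypothetical comparison: score the model's probabilities and a naive
# 50/50 baseline against the same observed outcomes, then compare.
# Any metric (OPR-based win probability, average score, etc.) can be
# converted to probabilities and scored the same way.
```

Whichever predictor achieves the lower Brier score over a large sample made the better-calibrated probability claims.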