Alright, I did some work yesterday to get predicted rankings up and running. Those are now available in v1.1.
Changes since 2019.1.0:
Miscellaneous small bug fixes
Added predicted rankings
Added predicted rankings to team lookup
Added current rankings
Added current rankings to team lookup and match lookup sheets
Added continuous ranking projection sheet
Hid the "seed values" sheet by default. Predicted Contributions will show seed values before the event starts, and if you want to see them again you can unhide the sheet or just run an import starting from before match 1
I’m always terrified of releasing the first round of ranking projections because I’m afraid of bugs, but I have to make the leap sometime, so here I go. If you want to help me out and look for bugs, there are a few good tools you can use:
First, you can check the “rankings” sheet to verify all the data lines up with official results. The data in my rankings sheet is not directly pulled from TBA, but rather constructed based on match data, so any error there is likely to carry into the rankings projections.
Second, I have given you the option to simulate the event from any point in time by adjusting a cell in “data import”. One simple way to use this is to simulate consecutive matches and see if the ranking projections update in a way you expect for teams that competed in that match.
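To make the simulation idea above concrete, here is a minimal sketch of how a Monte Carlo ranking projection works: from a chosen point in time, repeatedly simulate the remaining matches and tally where each team finishes. This is illustrative only, with simplified 1v1 matches and made-up teams, RP totals, and win probabilities; it is not the spreadsheet's actual model.

```python
import random

def simulate_event(current_rps, remaining_matches, win_prob, n_sims=1000, seed=0):
    """Estimate each team's probability of seeding 1st via Monte Carlo.

    current_rps: {team: ranking points earned so far}
    remaining_matches: list of (red, blue) pairs (simplified to 1v1)
    win_prob: {(red, blue): probability that red wins}
    """
    rng = random.Random(seed)
    first_seed_counts = {team: 0 for team in current_rps}
    for _ in range(n_sims):
        rps = dict(current_rps)
        # Play out every remaining match by coin flip weighted by win_prob
        for red, blue in remaining_matches:
            if rng.random() < win_prob[(red, blue)]:
                rps[red] += 2  # a win was worth 2 RPs under 2019 rules
            else:
                rps[blue] += 2
        # Pick the simulated 1 seed, breaking exact RP ties randomly
        leader = max(rps, key=lambda t: (rps[t], rng.random()))
        first_seed_counts[leader] += 1
    return {team: count / n_sims for team, count in first_seed_counts.items()}

# Two teams tied on RPs with one head-to-head match left: each should
# end up with roughly a 50% shot at the 1 seed.
probs = simulate_event(
    current_rps={"2052": 20, "525": 20},
    remaining_matches=[("2052", "525")],
    win_prob={("2052", "525"): 0.5},
    n_sims=2000,
)
```

Re-running the simulation from consecutive points in time, as described above, is exactly what lets you check whether the projections move in the direction you expect after each match.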
Lastly, building on #2, I have a sheet called “continuous ranking projections” which can make cool graphs telling the story of a team’s ranking throughout the event. Everything on those graphs should make intuitive sense if you think about it. For example, here is 2052 at mndu2:
They have some very good matches in q7 and q17, in which the model learns they are competitive and can get the HAB RP consistently. Winning but not getting the HAB RP in q26 doesn’t really affect their top 4, top 8, or top 15 chances since they still won, but their chances of seeding first take a hit. Through match 51, they lock in high odds of seeding near the top, but not much changes in their 1 seed race as both they and 525 continue winning and getting the HAB RP. Everything comes to a head in q52 though, where 2052 comes in as an underdog but manages to get BOTH bonus RPs and win, and not against just any opponent either, but against 525, who they were racing for the 1 seed. After that, 2052 is 2 full RPs up on 525 and slowly locks in the 1 seed through the end of the event.
For another story, here’s 1410 at okok:
Unlike 2052, who the model would have guessed would rank in the top 8 going into the event, 1410 comes in not even expected to make the top 15. They don’t have as many flashy matches as 2052, but instead you see a slow but steady climb up the rankings as they win matches but don’t get bonus RPs. There’s a solid jump in their top 15 chances at q31, where they get their sole bonus RP. There’s also a bump to the top 4, top 8, and top 15 at q59, where they win an underdog match. You can see their top 4 chances getting reasonably high until they lose q74. Their top 4 chances then drop to 0 in q87, when 2996 and 4523 lock in more RPs than 1410, which locks 1410 out of the top 4. One last note: the noisiness of the continuous ranking projections will go down as you select more simulations to run. The above graph for 1410 was created using 400 simulations at each point in time, and the one below was created using 1000:
This is clearly the same graph, but it’s much smoother and 1410’s matches stand out a little better. So give this a shot for your team and make sure the “story” being shown matches up with what you would expect. I really love these graphs, so I hope you all do as well.
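For the curious, the smoothing effect of more simulations has a simple statistical explanation: each plotted probability is a Monte Carlo estimate, and its standard error shrinks like 1/sqrt(n). A quick sketch with the simulation counts mentioned above (the worst-case p = 0.5 is my illustrative choice, not a number from the spreadsheet):

```python
import math

def standard_error(p, n):
    """Standard error of a probability p estimated from n simulated trials."""
    return math.sqrt(p * (1 - p) / n)

# Noise at 400 simulations vs 1000, for a 50/50 ranking outcome
se_400 = standard_error(0.5, 400)    # → 0.025
se_1000 = standard_error(0.5, 1000)  # ≈ 0.0158
```

So going from 400 to 1000 simulations cuts the per-point noise by roughly a third, which is why the second graph reads more cleanly.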
Other good spots to look for errors if you’re hunting:
High-order ranking sorts
Outlier matches (unicorn matches, high score difference matches, matches with no-shows)
Teams’ min and max possible ranks (check that they are correct bounds)