In 2011 on Galileo, we faced 111, 254, and 1114 in separate matches, along with 610, a division finalist, and 399, 2137, 2337, and 967 - four teams ranked in the top 8 and top 15 in OPR. Toss in 359 for good measure. We were never paired with any of them except 399 and 2337.

All this *random* awesomeness in an **88-team division**. Based on regular-season stats, 4 of our 10 matches were essentially guaranteed losses save for the flukiest of on-field outcomes. But hey, we went 5-1 otherwise! I haven't been a fan of the *random* algorithm since that holy terror of a schedule. It likely was purely random. Doesn't mean I have to like it. Doesn't mean I can't irrationally ascribe some sort of sentient vindictiveness to the construct.

I’ve often wondered why someone couldn’t come up with an algorithm that uses regular-season metrics to make championship schedules more balanced. An Algorithm of Death based only on real performance metrics instead of the presumption that higher-numbered teams are worse than lower-numbered ones. At least construct and publish one as an exercise for the mind. Who knows what kind of change that might spark down the road? Professional and college sports leagues seed their playoff teams based on regular-season performance all the time. FIRST presumes to model FRC after professional sports. So…why not?

At the very least, seek to guarantee that the championship matchups the algorithm generates don’t exceed some performance-gap threshold based on regular-season metrics. Or, if such a gap has to exist for a team in one match, counter it with an equally lopsided benefit in that team’s favor in another match.
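As a thought exercise, that gap-threshold-plus-compensation idea could be prototyped with something as naive as rejection sampling: draw random schedules, score each match's alliance-OPR gap, and only accept a schedule where every team's lopsided matches roughly net out. A minimal sketch, with made-up OPR values, an arbitrary threshold, and hypothetical function names (none of this is an actual FIRST scheduler):

```python
import random

# Hypothetical OPR table: team number -> regular-season OPR (invented values).
OPRS = {111: 62.0, 254: 71.5, 1114: 74.2, 610: 55.3, 399: 48.0,
        2137: 50.1, 2337: 52.4, 967: 49.8, 359: 58.6, 27: 30.0,
        1023: 25.5, 2056: 68.0}

GAP_THRESHOLD = 25.0  # max tolerable net alliance-OPR gap per team (assumed)

def match_gap(red, blue):
    """Signed alliance-OPR gap, from red's point of view."""
    return sum(OPRS[t] for t in red) - sum(OPRS[t] for t in blue)

def schedule_ok(schedule):
    """Accept a schedule only if each team's cumulative signed gap
    (advantage in some matches, disadvantage in others) nets out
    to within the threshold - the 'compensating lopsidedness' idea."""
    debt = {t: 0.0 for t in OPRS}  # cumulative signed advantage per team
    for red, blue in schedule:
        g = match_gap(red, blue)
        for t in red:
            debt[t] += g   # red teams gain (or suffer) the gap directly
        for t in blue:
            debt[t] -= g   # blue teams see the opposite sign
    return all(abs(d) <= GAP_THRESHOLD for d in debt.values())

def random_schedule(teams, n_matches):
    """The plain *random* algorithm: six teams drawn per match."""
    sched = []
    for _ in range(n_matches):
        six = random.sample(teams, 6)
        sched.append((six[:3], six[3:]))
    return sched

def balanced_schedule(teams, n_matches, tries=10000):
    """Rejection sampling: redraw until a schedule passes the balance check."""
    for _ in range(tries):
        s = random_schedule(teams, n_matches)
        if schedule_ok(s):
            return s
    return None  # no balanced draw found within the budget
```

Real schedulers also have to honor turnaround times, surrogate slots, and partner-repetition limits, so a production version would swap rejection sampling for something like simulated annealing over pairwise swaps; this is just the performance-gap constraint in its simplest form.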