I am impressed. Very impressed.
This algorithm blows the 2007 version out of the water. It's not even close. I'm actually giddy right now, because it really exposes what a farce last year's algorithm was. I just ran the 2007 Waterloo Regional schedule through "MatchRater". Here's what I got.
Waterloo 2007 was a 30-team event, with 11 matches per team. I chose Waterloo because of the inherent difficulty of minimizing duplication at an event that small with that many matches. Also, I knew exactly what the chosen delta was for the event.
Data Format
Code:
Schedule Statistics
-------------------
#: number of matches played, a '+' after the number
indicates one additional round as a surrogate
d: minimum delta between matches (e.g. '1' means back-to-back)
part: number of distinct partners followed by most frequent repeat count
opp: number of distinct opponents followed by most frequent repeat count
both: number of distinct teams seen as partner or opponent
followed by most frequent combined repeat count
r/b: balance between red and blue alliance appearances
eg, 3b means team appeared as blue 3 times more than as red
4+ repeats: any teams seen four or more times as partners or opponents
team # d part opp both r/b 4+ repeats
---- -- -- ----- ----- ----- --- ------------
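To make the columns above concrete, here is a hypothetical sketch of how per-team statistics like these could be computed. This is not MatchRater's actual code; the schedule representation (a list of (red, blue) alliance tuples) and the `team_stats` helper are assumptions for illustration.

```python
# Hypothetical sketch of per-team schedule statistics in the spirit of the
# columns above. Not MatchRater's actual implementation.
from collections import Counter

def team_stats(schedule, team):
    """schedule: list of (red_alliance, blue_alliance) tuples of team numbers."""
    matches = []          # indices of matches this team plays in
    partners = Counter()  # how often each team appears as a partner
    opponents = Counter() # how often each team appears as an opponent
    red = blue = 0
    for i, (r, b) in enumerate(schedule):
        if team in r:
            matches.append(i)
            red += 1
            own, other = r, b
        elif team in b:
            matches.append(i)
            blue += 1
            own, other = b, r
        else:
            continue
        for t in own:
            if t != team:
                partners[t] += 1
        for t in other:
            opponents[t] += 1
    combined = partners + opponents
    # 'd': minimum gap between consecutive matches (1 = back-to-back)
    min_delta = min((j - i for i, j in zip(matches, matches[1:])), default=0)
    return {
        "#": len(matches),
        "d": min_delta,
        "part": (len(partners), max(partners.values(), default=0)),
        "opp": (len(opponents), max(opponents.values(), default=0)),
        "both": (len(combined), max(combined.values(), default=0)),
        "r/b": red - blue,  # positive = more red than blue appearances
    }
```

The "most frequent repeat count" is read here as the maximum number of times any single team recurs, which matches how entries like "15 2" (15 distinct partners, worst repeat of 2) appear to be reported.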
Actual Waterloo 2007 Schedule Data
Code:
best: 11 4 | 15 2 | 23 3 | 27 4 | 1
worst: 11 3 | 11 5 | 15 5 | 21 7 | 9 (1)
Now, I ran the exact parameters we used at Waterloo (a delta of 3) through the new algorithm.
Waterloo 2007 Schedule, run through new algorithm
Code:
best: 11 3 | 22 1 | 25 2 | 29 3 | 1
worst: 11 3 | 20 2 | 19 4 | 26 4 | 1 (30)
Hmm, let's see. In the best case we have 7 more partners, 2 more opponents, and 2 more total teams seen (29 means a team plays with or against every other team at the regional). In addition, the most frequent repeat count drops by one in each category. In the worst case we have 9 more partners, 4 more opponents, and 5 more total teams seen, and the most frequent repeat count drops drastically as well. There's no comparison. In fact, the best case from the 2007 algorithm is somewhat poorer than the worst case from the new algorithm.
My early opinion is that this new algorithm is a drastic improvement over what we saw last year. Kudos to the Saxtons for creating this.