Match Scheduling Algorithm Competition

Thanks for the history lesson Joe. Very interesting.

Four years have passed since 2007, and a whole new student class is now in place. Maybe it’s time for FIRST to put this out there again.

One point of clarification:

a. Maximum time (in number of matches) between each match played for all teams

I assume that means “maximize the minimum time”


From memory, and it was a while ago…

Compute all possible (2-team) alliances.
Use a simple arbitrary scheme for pairing alliances in 1st-round matches.
For 2nd-round and later matches:
    Disregard possible alliances that were used in previous matches.
    Assign each possible alliance the shortest off-the-field time associated with the two teams in that alliance.
    Pick the possible alliance that has been off the field for the longest (break ties arbitrarily).
    Pick an opponent from among the remaining alliances. Choose one that:
        Doesn’t include a team that was allied before with the teams of the already-selected alliance
        Doesn’t include a team that previously opposed the teams of the already-selected alliance
        Has been off the field the longest

        Continue
    Continue
I also think that, in order to preserve a minimum inter-match time interval, I put in a heuristic that would permit the first alliance put into a match to face opponents who had been allies earlier.
[Edit] Thanks to Joe I was reminded of this long post that I wrote a while ago - It contains a better description of the heuristics I used - Look near the bottom of it. Link [/Edit]

As you know, looking for the “best” solution to problems in algorithmic domains like this one is often fraught with confusion, peril, and frustration. Looking for “good enough” is often the best choice :wink:

I should have been more clear. I was willing to accept minor separation between my results and perfection. Creating a good enough scheduling implementation isn’t all that hard.

Blake
PS: Joe’s info surprised me - Either I totally forgot or totally missed the solicitation and evaluation process he described, so I sent him a PM asking if he has any still-valid links or other info I can look at to learn about it.

At this point, I dunno why we aren’t simply creating our own open-sourced algorithm and associated interface. The software engineering prowess of the CD community as a whole could probably out-do a single contestant in any contest, regardless of how long the contest ran. Multiple small tasks are very easily delegated and worked on with a good group.

When I find some time (heh, my fiancee moves in this weekend) I’ll resurrect some old stuff and make a Google Code project out of it. It’ll start off in Java since that’s what I made it in at the time.

I’ll contribute.

If it is possible, the software should have an option to take each team’s performance so far in the season into account in order to generate a “fair” match schedule for the FiM or MAR Region Championship or the World Championship. Let me define what team performance and fairness mean here.

Team’s performance can be highest OPR, weighted average OPR or season world ranking.

Fair does not mean trying to make every match as close in score as possible so that any alliance would have a fair chance of winning. Fair means no teams should have a heavier “load” than other teams. Mathematically, it will mean that the average OPRs of the opposing alliance that each team will have to face will be about the same. In this way, teams that have higher OPRs will still have a higher chance of winning their matches. I like this way because it does not actually look at the predicted score of each match based on OPR and artificially modify it to force the outcome. There is still some inherent randomness to it.
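As a sketch of that definition of “fair,” here is one way to measure each team’s opposition load. The function name and the schedule format are hypothetical, and I'm using a plain average of opponents' OPRs as the load measure:

```python
def opponent_load(schedule, opr):
    """Average OPR of the opposing alliances each team faces.

    schedule: list of (red_teams, blue_teams) tuples
    opr:      dict mapping team -> OPR
    """
    totals, counts = {}, {}
    for red, blue in schedule:
        for side, other in ((red, blue), (blue, red)):
            # Average strength of the alliance this side is facing.
            opp_avg = sum(opr[t] for t in other) / len(other)
            for t in side:
                totals[t] = totals.get(t, 0.0) + opp_avg
                counts[t] = counts.get(t, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}
```

A scheduler built on this idea would try to minimize the spread between the largest and smallest of these per-team averages, without touching the predicted outcome of any individual match.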

What about years where OPR/metric du jour are not really applicable?

If FIRST wants to follow the sports model, name a sport with scoring to which OPR is not applicable. The only recent game I know of where it does not work is 2007’s Rack ’N’ Roll, where the score goes up exponentially if you have a few tubes forming a chain. What can I say? The game may be good but the scoring is bad from the spectators’ viewpoint. It is too complicated. We need games and scoring that are simple to explain to visitors. I think I am going off topic.

I guess we can use win/loss record, being an alliance captain or being drafted in the elimination rounds, and how far teams advance, which is what the ranking-point system is for in the Michigan district model.

If you don’t like using OPR, do you have any suggestions what we can use?

I’m not saying OPR is a wrong metric. I’m just saying that algorithms are determined before the game is played, what if it is found that the metric chosen is not a good indicator of performance?

I’ll also come out and say that any attempt to make schedules “fair” is doomed to failure in my opinion due to differing opinions on what metrics to categorize on. I claim that the only fair way to do scheduling is to not look at the team numbers at all. That way every team has the same chance of getting an easy schedule or an impossible one.

Weighting matches based on the current year’s ranking is inherently unfair. You are punishing a team and decreasing their chance of future success based on previous success, and rewarding teams with little success with a higher chance of future success.

In FRC, efforts to make things fair should end after registration; every team has an equal opportunity to compete, build, find mentors, fundraise, etc. No competitive boost should be given to under-performing teams. They had just as much of a chance to make amazing happen as the teams that routinely do; don’t punish the teams that work hard for that.

OPR has biases in each and every year. In general, given the nature of FIRST scoring (finite number of game pieces and/or scoring locations), as the quality of teams at an event trends up, OPR for teams near the top and bottom flattens out - at some point, there are simply not enough tubes, or balls, or whatever for each team to score to their maximum ability. Compare the OPRs from the Michigan State Championship with those from a middle-tier Regional and tell me that they make sense.

Regardless, I think there is no need to balance scheduling with any consideration of robot performance or capability. Random is at least as “fair” as any other method. If you draw an easy schedule, enjoy your cakewalk but have fun getting bounced out of the playoffs if your bot isn’t up to the task. Likewise, if you draw a tough schedule but the robot performs up to its abilities, you will be drafted and have a great chance to go deep into elims. IMO, the best way to be “fair” about it is to maximize the number of matches (random draws) that each team plays.

What about a feature called ‘Titan Match’:

  • Consenting teams could ask to face each other, picking their partners as well.
  • It would only be applicable to offseason events, and ethically only if all 6 teams consented.
  • Most likely, it’d only be able to happen 1-2 times overall during the event so it didn’t throw the rest of the schedule off.
  • It would be easy to implement into the stitching algorithm, and would simply eat more processor time in most other algorithms since it’s an additional constraint.

I certainly wouldn’t argue against adding “Titan Matches” to any of the feature lists I have put in other scheduling posts/threads (for the moment I’m being too lazy to go find and copy all of them into this discussion).

One big reason for saying that, is my belief that Jesse’s excellent suggestion for developing an open source scheduler is best implemented by developing a “scheduler”, not a the-way-we-always-run-FRC-matches scheduler, or a VRC scheduler, or an FLL scheduler, or a league-play scheduler, or an FiM scheduler, or etc.

If the team that implements an open source scheduler is mostly drawn from Chief Delphi participants, they will sometimes have to force themselves to lift their heads up out of the STEM-robotics weeds (FIRST and VRC in particular) and look at the project as a scheduling project, not a FIRST (or BotBall, or VRC or …) task.

The result should be something useful for the STEM Robotics world, and for other uses.

Blake

Why do performance metrics have to completely dominate the algorithm’s decision making process? Why do we all have to view the inclusion of OPR or similar into the algorithm as an attempt to completely balance the competitive scales in each and every match?

Why couldn’t we use OPR or some other performance ranking method in the scheduling algorithm as a “sanity check” during the scheduling process to ensure the net strength of an alliance relative to the proposed opposing alliance doesn’t skew past some ultra-extreme, ridiculously-impossible-to-overcome threshold? Matches of this nature are likely to lead to an utter slaughter on the field. I would like to think that the slaughterers would not particularly enjoy such a monumental cakewalk, as it doesn’t really challenge them, the slaughterees would definitely not enjoy it, and most importantly, the crowd would find such matchups boring as hell. So why not try to eliminate or minimize these outlying extremes in the schedule without trying to perfectly balance each and every match?
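A sanity check of that sort is simple to express. This is only a hypothetical sketch; the function name is mine and the threshold would be left to the tournament organizer, and it could gate any proposed pairing without otherwise steering the schedule:

```python
def too_lopsided(red, blue, opr, threshold):
    """Flag a proposed match whose alliances' net OPRs differ by
    more than `threshold` - the 'sanity check' described above."""
    red_strength = sum(opr[t] for t in red)
    blue_strength = sum(opr[t] for t in blue)
    return abs(red_strength - blue_strength) > threshold
```

A scheduler would call this on each candidate pairing and re-draw (or swap one team) only when the check trips, leaving every other match fully random.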

Not only do such matchups as described above happen a lot at events, they sometimes happen to the same team multiple times at an event. When that occurs, such teams begin to feel that they are not getting enough of a positive experience and equal opportunity for success relative to other teams for the money they invest in a competition. That should never happen.

There were people involved with writing earlier FIRST scheduling algorithms who stated overconfidently that certain scheduling goals were impossible to achieve. People like Jim Zondag and the Saxtons took it upon themselves to prove those people wrong. They sought to create their own algorithm implementations, and they were successful in proving that “impossible” criteria could in fact be successfully implemented into a match scheduling algorithm.

So for all the newly doubtful, I propose there is someone out there who DOES have the ability to create an algorithm that mixes in some reliance on ranking metrics to balance out the match lists, at least enough to avoid frequent “slaughterhouse” matchups that no one really enjoys. I fully support anyone bold enough to take that challenge on and prove to everyone it CAN be done!

Once the metric is defined, and prioritized (a task that can become tricky) in the list of metrics to be used, a scheduler is certainly do-able.

If the metric only depends on the 3 teams in a single alliance you can imagine easily pre-computing it for all possible alliances and then it just becomes one more attribute to consider along with all the others (time since last match, etc.). Who cares if there are a zillion possible alliances. Computer CPUs are fast and RAM storage is cheap.
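To put a number on “a zillion”: with 40 teams there are only C(40,3) = 9,880 possible 3-team alliances, so precomputing is cheap. A sketch, where the sum-of-OPRs metric is just an illustrative stand-in for whatever metric gets chosen:

```python
import itertools

def precompute_alliance_metric(teams, opr, size=3):
    """Precompute a strength metric for every possible `size`-team
    alliance. Sum of OPRs is a stand-in for the real metric."""
    return {frozenset(a): sum(opr[t] for t in a)
            for a in itertools.combinations(teams, size)}
```

The resulting table can then be consulted alongside the other per-alliance attributes (time since last match, prior partners, etc.) as the schedule is built.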

If you need to use all 6 (or 4, or …) possible teams to compute the metric, things change a bit but it is still do-able.

Blake

I’ll completely admit that programming isn’t my forte. With that said, please bear with my questions.

Would such an algorithm give the same schedule every time, or would it vary greatly?
If we made it open source, would I be able to use this knowledge to best-guess my alliances for a given regional?

Next, as for the stacking-the-deck debate that has come up: could they use such pre-ranking systems to judge the opposing alliances for a given team and ensure each team has approximately the same difficulty? Completely ignore your own team and alliance and just look at the opposition. Would this lead to the same issues? I wouldn’t be opposed to going against an alliance made up of teams 1, 2, and 3 as long as my next match was against teams 5000, 5001, and 5002. That would allow for alliances to vary greatly during individual matches but give a general expected difficulty for the overall regional. Again, is this possible? And if so, would it give the desired results?

Jason

While I want to say that, in general, most programs will produce the same answer every time they are given the same initial conditions, there are enough exceptions to that rule-of-thumb to make it something you don’t want to rely on.

However, generally, whether an algorithm that attempts to use random numbers can be used to get identical answers during separate uses is almost completely up to the programmer who writes the instructions the computer executes.

If the author gives users the ability to kick off an algorithm’s (pseudo)random number generator with an initialization value the user controls, and if they weed out a few other sources of possible variation, then users can get repeatable results by using repeatable initial conditions, or they can get unpredictable (far, far less predictable) results by choosing unpredictable initial conditions.
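A seedable generator is all it takes. A minimal sketch using Python’s standard library (the function name is mine, and a real scheduler would seed all of its random choices this way, not just one shuffle):

```python
import random

def make_schedule_order(teams, seed=None):
    """Shuffle the team list with a seedable PRNG. The same seed
    always yields the same order; omitting it (or drawing the seed
    from something unpredictable) gives a hard-to-predict one."""
    rng = random.Random(seed)   # independent generator, not the global one
    order = list(teams)
    rng.shuffle(order)
    return order
```

Run it twice with seed=42 and you get the identical order; run it with no seed and the order changes from run to run.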

Because of these and other ways of randomizing how the schedule is applied to the teams involved, the degree of predictability in each match can be placed completely into the hands of the algorithm’s users. There is no need to worry that they will be forced into giving anyone any useful knowledge before they want to purposefully release it.

Blake
PS: This is true even if all 2012 FRC matches, for example, were played using a fixed schedule (or set of schedules) published online tomorrow. Knowing that Alliance AQB will face Alliance SRK in the first match of every FRC tournament does you no good on the field if you don’t know which of the teams in the tournament will get assigned to each of those letters (A, B, Q, S, R, and K) (that would happen the morning of the event).

But… knowing that AQB will face SRK would be a nice help for scouts. Once they (at the start of the day) matched up the real teams with the fake teams in the well-known “schedule”, their scouting software, spreadsheets, etc. would know the complete actual schedule.

Entering the data necessary to correlate the fake and real team IDs means entering (by hand at the event) only a small fraction of the data necessary to describe the dozens of matches in a typical FRC/VRC/FTC tournament.

PPS: Of course, I realize that pre-publishing a schedule filled with bogus place-holder team IDs, so that scouts can easily correlate its entries with the real teams at the start of a tournament, “isn’t the way FRC works” …

Why is this a programming challenge? It seems like creating the specification is the hard part, not the actual coding.

Well…

You have variable numbers of teams.
You have a number of different parameters that the user needs to input. (Match cycle time, number of matches per team, break times–and all of those will vary slightly.)
You have the 7 items in the challenge statement, which will have different weightings.
You need to make it user-friendly.

Your program needs to handle every one of those items, and handle it cleanly and effectively.

That’s why it’s a programming problem (aside from the standard “Everything’s a programming problem” jokes.)

The other thing is that the original contest had a 12-hour time limit, which greatly adds to the difficulty.

IMO, the only restriction for team age should be that rookies change their team color less. I’m speaking from personal experience that changing bumpers can be time-consuming and it would be easier if they changed them less- easier on the team and they would have more time to iron out the kinks in their system.

Ah bumpers, my favorite topic. Do you know that the rules for bumpers are three and a half pages long? You can easily use up 4 weeks just to design and build bumpers. That was how long it took us last year. Fortunately it was done in parallel with doing the robot.

About changing bumpers: in our rookie year, our bumper would take one person about 5-10 minutes to remove and re-install. Fortunately that was Lunacy and we didn’t need to change bumpers yet. We only had to do that to pass through a standard 30" wide door. In our second year, we had a better design but it would still take one person 1-2 minutes to change to a different color bumper. Last year, our latest bumper design only takes one person 20-30 seconds to change. With 3 people, we can do it easily in 5-10 seconds. We really like the design so we probably will not change it unless FIRST comes up with new bumper rules that are 6 pages long.