Suggestion to improve the alliance choosing program

We’re a rookie team this year and after competing in our first regional, one of our team members made what I thought was an astute observation. When an alliance consists of 2 rookie teams and 1 non-rookie team, it seems really unfair to the non-rookie team - especially if there aren’t any rookies on the opposing alliance.

My suggestion would be to add a “rookie factor” to the algorithm, so that rookies are not put on the same alliance if at all possible. Also, when an alliance has a rookie team, the alliance it plays against would also have a rookie team.

I’d suggest reading the threads about last year’s pairing algorithm to see the general consensus on it. While your suggestion isn’t exactly the same, it does have some themes in common.
Here’s one thread dealing with it that can be used as a starting point.

I searched for “alliance algorithm” because I didn’t want the threads on algorithms used for robot programming. I would have posted to one of the threads that came up from that search, but the thread was closed.

I also looked under “Rules/Strategy” because I figured that was the logical place for a thread on how alliances are chosen. I didn’t think to look under “Championship Event” because alliances are chosen for more than just the championship.

Besides, there’s always someone who will point out a new thread belongs somewhere else :wink: and a moderator can always move it.

A quick review of the “Algorithm of Death”, as it was known:
Step 1: divide all teams at the event evenly into three tiers by number.
Step 2: Take the first unmatched team from each tier and place them on one alliance. Do the same with the second unmatched team from each tier to form a second alliance. Have those two alliances face each other.
Step 3: Repeat Step 2 until all tiers are out of unmatched teams.
Step 4: Apply other factors. This is where the variation comes from.
Step 5: Distribute to teams.
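
In case it helps to see it concretely, here’s a rough sketch of what those five steps amount to in code. To be clear, this is my reconstruction from the step list above, not FIRST’s actual implementation; the tier boundaries, the shuffle standing in for Step 4’s “other factors”, and the leftover-team handling are all assumptions.

```python
import random

def algorithm_of_death(team_numbers, seed=None):
    """Rough reconstruction of the five steps above. NOT FIRST's code:
    tier boundaries, the shuffle standing in for Step 4, and leftover-team
    handling are guesses."""
    rng = random.Random(seed)
    teams = sorted(team_numbers)

    # Step 1: cut the sorted list into three roughly equal tiers by number.
    n = len(teams)
    tiers = [teams[:n // 3], teams[n // 3:2 * n // 3], teams[2 * n // 3:]]

    # Stand-in for Step 4: vary the pairings within each tier.
    for tier in tiers:
        rng.shuffle(tier)

    # Steps 2-3: one team from each tier per alliance; consecutive
    # alliances face each other, until a tier runs out of teams.
    matches = []
    while all(len(tier) >= 2 for tier in tiers):
        red = [tier.pop() for tier in tiers]
        blue = [tier.pop() for tier in tiers]
        matches.append((red, blue))
    return matches  # Step 5: hand the list out to the teams

# 12 teams -> tiers of 4 -> 2 matches
print(algorithm_of_death(range(100, 112), seed=1))
```

Note that because each alliance takes exactly one team from each tier, two teams in the same tier can never be partners, only opponents.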

This resulted in some lousy rankings for good teams. It’s hard to get even a 50% win record if you’re against 1114 for 5 out of 9 matches and never with them.

The algorithm was based on the (mistaken) assumption that rookies (and second and sometimes third year teams) are inherently worse on the field than veteran teams.

It is incredibly hard to make any alliance-sorting program based on team skill, simply because teams don’t perform according to any known pattern. Past performance is no indication of current success (mentors leave, students graduate, etc.), and there are some rookie teams which absolutely shine. Also, the more variables you feed into a sorting program, the more likely teams are to be paired with or against each other again and again, because there are fewer and fewer “fair” combinations.

With this year’s game I am against the alliance system completely: penalties could prevent a powerhouse team from winning, which makes alliance partners a risk when many teams have negative average scores. I can’t think of a remedy, though, so I’ll have to live with it.

I crunched numbers on this in 2006 based on 2 weeks of regionals, and here’s an image that breaks down scores by ‘average alliance number’. Average alliance number was the average of the 3 team numbers that made up an alliance. Note that rookies that year were numbered about 1700+, so an alliance with an average alliance number higher than that was probably all rookies.

There is definitely some correlation between team number and scores, but it is a fairly weak correlation, and more importantly, there is a LOT of variation in each group. There are rookies who can dominate regionals (2056 in 2007), and there are rookies who can barely get their robot to move. However, there are also older teams like this.

http://www.chiefdelphi.com/forums/attachment.php?attachmentid=4111&d=1142127063

Edit: Going through all my old statistics threads is fun.

Here’s another relevant one. Given two alliances, find their average alliance numbers (AAN1 and AAN2). The x-axis on this graph is the difference between the opposing alliances’ AANs. If an alliance like (1114, 1503, 1680) faced (25, 48, 71), the difference would be something like 1400. This graph shows the win rate for the higher-numbered alliance.

Basically, it says that in 2006, if the AANs differed by 1200, then the alliance with the higher AAN had about a 20% win rate.
http://www.chiefdelphi.com/forums/attachment.php?attachmentid=4138&d=1142707377
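
If anyone wants to reproduce these graphs from raw match results, the bookkeeping is pretty simple. Here’s a minimal sketch; the match-record format (red teams, blue teams, red score, blue score) and the 200-point bucket width are assumptions for illustration, not whatever spreadsheet I actually used at the time.

```python
from collections import defaultdict

def aan(alliance):
    """Average alliance number: the mean of the team numbers on one alliance."""
    return sum(alliance) / len(alliance)

def win_rate_by_aan_gap(matches, bucket=200):
    """Bucket matches by |AAN1 - AAN2| and report how often the
    higher-numbered alliance won.  `matches` is a list of
    (red_teams, blue_teams, red_score, blue_score) tuples (format assumed).
    Ties count against the higher-numbered alliance here."""
    wins, total = defaultdict(int), defaultdict(int)
    for red, blue, red_score, blue_score in matches:
        gap = aan(red) - aan(blue)
        higher_won = (red_score > blue_score) if gap > 0 else (blue_score > red_score)
        key = int(abs(gap) // bucket) * bucket      # 0, 200, 400, ...
        total[key] += 1
        wins[key] += int(higher_won)
    return {k: wins[k] / total[k] for k in sorted(total)}

# The example from the post: (1114, 1503, 1680) vs (25, 48, 71)
print(aan([1114, 1503, 1680]) - aan([25, 48, 71]))   # roughly 1384
```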

Time to go back and increase your data set size, add 2007 and 2008 data, and include a Z-axis with number of teams in each band. Good work!

You guys are overlooking something. There’s another reason it’s beneficial to not have rookie teams on the same alliance. As rookies, we’re learning about all aspects of FIRST. We learn the most from experienced teams. It seems to me the mentoring aspect FIRST promotes throughout the build phase, would be appropriate for the competition phase too. On an alliance with 1 rookie team, there are 2 experienced teams to offer help, guidance, strategy, etc. I don’t see a down side to this.

While it’s certainly possible for a rookie team to outperform many experienced teams, I think it’s still in the best interest of the organization overall for rookie teams to get the benefit of what more experienced teams can teach them during that first year. The more experienced alliance partners a rookie team has, the more information it receives on how to be even better next year.

I understand what you are saying, but nearly any psychologist will tell you that past behavior is the best predictor of future behavior. If there were no correlation or indication, then we would expect teams like 71, 111, 233, and 1114 to have a normal distribution of results (i.e., winning 3 regionals in one year just as often as not getting picked for the eliminations in another). As we know, however, these teams are consistently among the top teams. I think you mean that the correlation is not strong enough to be used. If so, I agree.

I don’t see how that will help. Remember, FIRST is not about the competition or the robots; it’s about the people.

Also, I can think of at least one veteran team off the top of my head that could use some on-field mentoring themselves. They aren’t exactly in a position to give advice. You wouldn’t know it to look at their number–and the number is what the algorithm uses.

Personally, I’d rather see the return of the design books.

I am also unsure how something like this would work for some of the younger regionals. Look at Minnesota this year: over half of the field is rookie teams.

Ok, let me explain it this way: If a rookie team were on 10 alliances with 20 different experienced teams, that’s 20 sets of data. The rookie team can decide for itself which advice is useful and which is not, but the more times the same advice is given, the more likely it is to be valid. More information is better than less.

I also am well aware it’s not about the competition and winning, which is exactly why I’m suggesting the rookie teams be paired with 2 experienced teams during the competition. If I were promoting a better winning strategy, I’d suggest teams be seeded by individual performance, but I personally don’t care about that, except to the extent of keeping track of our individual performance so we know how our design and strategy worked.

If the algorithm were changed to pair rookie teams together as little as possible, and to balance the rookie distribution between the competing alliances, it wouldn’t matter what percentage of teams at the competition were rookies. It would only mean there wouldn’t be alliances of experienced teams competing against alliances of inexperienced ones.

I don’t know how easy that would be to implement for next year. (It’s too late for this year.) I think it won’t be that easy.

Take a look at the 2007 match lists, if you can find any. (The Blue Alliance probably has them.) You will see almost exactly that situation. The hard part will be keeping the other teams from facing each other more than once or twice. Last year’s algorithm was the most hated in FRC history. So you effectively want a “rookie tier” made up only of rookies, with everyone else in a single other tier. That can’t be easy to do. If you think it is, then I invite you to come up with an algorithm and submit it to FIRST for their use.
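
Just to make the difficulty concrete, here’s the kind of check such an algorithm would have to satisfy on top of everything the scheduler already juggles (equal match counts, turnaround time, partner and opponent variety). The rookie cutoff number and the naive rejection-sampling approach are mine, purely for illustration, not anything FIRST uses.

```python
import random

ROOKIE_CUTOFF = 2000   # assumed: rookie team numbers start around here

def rookie_constraints_ok(red, blue):
    """The two rules proposed in this thread: no two rookies on the same
    alliance, and an alliance with a rookie must face an alliance with one."""
    red_rookies = sum(t >= ROOKIE_CUTOFF for t in red)
    blue_rookies = sum(t >= ROOKIE_CUTOFF for t in blue)
    if red_rookies > 1 or blue_rookies > 1:
        return False
    return (red_rookies > 0) == (blue_rookies > 0)

def try_schedule(teams, num_matches, attempts=100000):
    """Naive rejection sampling: draw random 3-vs-3 matches and keep the
    ones that pass the rookie check.  A real scheduler also has to balance
    match counts, turnaround time, and partner/opponent variety at the
    same time, which is exactly where this gets hard."""
    rng = random.Random(0)
    schedule = []
    for _ in range(attempts):
        if len(schedule) == num_matches:
            break
        picked = rng.sample(list(teams), 6)
        red, blue = picked[:3], picked[3:]
        if rookie_constraints_ok(red, blue):
            schedule.append((red, blue))
    return schedule
```

It also makes the Minnesota problem obvious: a match only has room for two rookies among six teams, so once rookies are more than a third of the field you can’t enforce “at most one rookie per alliance” and still give everyone the same number of matches.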

I respectfully disagree. In some matches the teams will be evenly balanced. In other matches they won’t be balanced at all. That isn’t a bad thing. You have to be able to adapt and learn from each match, whether you have helpful pairings or not.
The match scheduling has been pretty good this year. Sure, some teams will end up with somewhat easier schedules than others. That happens in every other sport too.

I know at the MN Regional this weekend at least half, if not more, of the teams were rookies. Also, a lot of the rookie teams had a better robot and drivers than 5-year-old teams. It all depends on the team, not its experience. This isn’t true all the time, but there were a good number of strong rookie teams at the MN Regional.

Again to go back to the stats I did a few years back, a team’s seeding performance in one year is a very poor predictor of its performance in the year following.

Here’s a graph: http://www.chiefdelphi.com/forums/attachment.php?attachmentid=5406&d=1175831673

On the X axis is a team’s seeding performance in 2005. Further left is better. On the Y axis is a team’s seeding performance in 2006, lower is better. You’ll see about the only thing you can predict is that teams who were top seeds in 2005 tended to not be dead last in 2006. Likewise, teams who did very poorly in 2006 tended to not win the following year (but some did). Past behavior predicting future behavior may work well in humans, but not so much in robotics teams.

The teams that do well year after year are very special cases. Out of the 1500ish active teams in FIRST, people can probably only name 50ish ‘power houses’ who win year after year after year and never hiccup.

On the main topic:
Keep in mind that a team’s next-year performance will probably be modified FAR more by who they communicate in the pits with, rather than who they play with on the field. If you play 8 games, you’re only on the field for 16-20 minutes, but you’re at the regional in the proximity of other robotics teams (whether in the hotel, pits, stands, or fields) for 72 hours. The 71 hours and 40 minutes that you’re not on the field are where your entire team can learn from vets, not just on-field.

There seems to be some confusion about what I actually mean when I say it would be better to have the rookie teams more evenly distributed on alliances.

What I am NOT saying is rookies are worse than experienced teams.

What I AM saying is rookies are less experienced than experienced teams.

Sometimes the student is better than the teacher, but that doesn’t mean the student can’t still learn from the teacher because experience does count for something.

Having said that, though, if the data provided is correct, there does seem to be a correlation between the number of rookie teams on an alliance and how the alliance performs. So if it’s possible to reduce the “unfairness” to the non-rookie teams even a little bit by distributing the rookie teams more evenly, why in the world would anyone not want to?

Excepting, of course, the difficulty in writing the program that can do this and all the other functions too. A valid and reasonable argument against my suggestions, btw. If good programmers say it’s too hard to factor rookies into the algorithm and still have it work as well as it does now, then it’s too hard.

Kimberly may have a point - they got a really unlucky schedule at GLR. In their first 4 matches, they were paired with another rookie. They were paired with a “powerhouse” veteran team just once, 469 - and in that match they came up against 494 and 67.

To contrast, I looked at Rush’s schedule (lowest team number at GLR). They saw rookies on the field only twice - in the same match, one on each alliance. 33 played with a rookie once, never against one.

503, in the middle of the list of team numbers, played with and against rookies and very low-number veterans in many of their matches. 573 had a similar schedule.

2676, the highest team number at GLR, played with and against rookies in most of their schedule.

I looked at the West Michigan schedule and saw a similar clustering of rookie teams, and again of low-number teams.

We’ve seen that the “maximize time between matches” constraint often results in a team playing with another team in one match and against them in the next. That’s not unreasonable.

Does the schedule algorithm shuffle the teams before slotting them into the schedule? Or does it start in numeric order? As we saw last year, strict team order doesn’t equate to team strength. But did the schedule inadvertently create semi-tiers?

Hmmm… I’ve even seen griping from a low-numbered team (I think it was 48) mentioning that they were in the first match of a regional multiple times.

Wait 10 minutes, I’ll re-jigger my OPR calculator to get an idea of the average team # that a team plays with.
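
For anyone who wants to run the same check on their own regional’s schedule, the calculation is roughly this (the schedule format is assumed, and this is not the actual OPR calculator):

```python
from collections import defaultdict

def average_partner_number(matches):
    """For each team, the average team number of its alliance partners.
    `matches` is a list of (red_teams, blue_teams) pairs; format assumed."""
    partners = defaultdict(list)
    for red, blue in matches:
        for alliance in (red, blue):
            for team in alliance:
                partners[team].extend(t for t in alliance if t != team)
    return {team: sum(nums) / len(nums) for team, nums in partners.items()}
```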

I never said it was a fantastic predictor. I said it was the best. Predicting behavior in humans is very challenging.

The graph that you provide may very well prove that past performance does predict future behavior. It appears the correlation coefficient would be around .2 or .3, and with a large sample size (around 1,000) I would not be surprised to see the correlation turn out statistically significant. That would mean the observed relationship is not due to random variation, but exists because the two samples (results from 2005 and 2006) are in fact related.

The fact that it is not a perfect correlation or relationship (nearly no relationship in life is perfect) doesn’t mean there is none. It may not be a very strong relationship, but it appears there is one. I cannot think of another predictor (team number, funding…) that works better than past performance.
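
For what it’s worth, checking that claim would be straightforward if someone had paired seeding results for the teams that competed in both 2005 and 2006; the function and data layout below are hypothetical.

```python
from scipy import stats

def year_to_year_correlation(ranks_2005, ranks_2006):
    """Pearson correlation between a team's seeding result in 2005 and 2006,
    for teams that competed in both years (hypothetical paired lists).
    With ~1,000 pairs, even r around 0.2-0.3 should come back with a
    p-value far below 0.05, i.e. statistically significant."""
    r, p = stats.pearsonr(ranks_2005, ranks_2006)
    return r, p
```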

To tie this back into the original topic, there is no perfect predictor of team performance. Unless someone does a huge multiple-regression study and finds a way to predict how teams will perform (I don’t think there is one), the best approach is to just randomly assign teams within the existing parameters (such as time between matches) to ensure the fairest pairings possible.