paper: Relative Success of Championship Teams based on Qualification Method

Thread created automatically to discuss a document in CD-Media.

Relative Success of Championship Teams based on Qualification Method
by: Basel A

Compares methods of championship qualification based on relative metrics of success

For each year from 2004-2013, compared various metrics of success for teams based on those teams’ methods of qualification. Metrics included:

- % of elimination teams that were of each method
- % of each method that were in eliminations
- Representation Index (how over- or under-represented each method was in eliminations)

Four-year moving averages were included for the latter two metrics. Raw data included.

I personally believe the Representation Index is the best metric out of the four. It indicates the number of elimination spots the method took divided by the number of elimination spots it would’ve been expected to take. A method with an index at or around 1 did about as well as expected. Any method with an index near or above 1 is arguably worth sending to the Championship based on competitiveness alone.
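
To make the definition concrete, here is a minimal sketch of the calculation under one reasonable reading of "expected": elimination berths handed out in proportion to each method's share of the Championship field. The function name and the example counts are made up for illustration, not taken from the spreadsheets.

```python
def representation_index(teams_at_cmp, teams_in_elims, total_at_cmp, total_elim_spots):
    """Elimination spots a method actually took, divided by the spots it would be
    expected to take if berths were proportional to its share of the CMP field."""
    expected = total_elim_spots * (teams_at_cmp / total_at_cmp)
    return teams_in_elims / expected

# Hypothetical example: a method with 60 of 400 CMP teams (15% of the field) would be
# expected to take 15% of 96 elimination spots (~14.4). If 20 of its teams actually
# made elims, its index is ~1.39, i.e. noticeably over-represented.
print(representation_index(60, 20, 400, 96))  # ~1.39
```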

Jim Zondag’s FIRST Championship History Results were very useful
The various 1114 Scouting Databases were also super useful
Team 358’s Award History Database was helpful as well
Other data derived from my previous work.

Feel free to post or PM me to suggest other metrics of success or to ask questions about the data.

Summary.xlsx (47.3 KB)
Summary 2.0.xlsx (38.5 KB)
Summary.xlsx (39.8 KB)
Summary2015.xlsx (41.4 KB)

Data seems to suggest that districts produce more competitive teams at championships. Why do you think that is? More matches/events?

More playing time would be a huge factor, as most teams coming out of the districts have played a minimum of 36 qualifying matches (two district events and a district championship) plus all of their elimination matches.

As Jim Zondag has stated in the past, the more hands-on time with the robot, the better the performance.

You’re also not qualifying (for instance) the 1st, 2nd and 16th best robot at a sign-up-to-get-in event. District teams that qualify on points show consistently strong performance over at least 3 events. The guys that points-qualified from MAR this year were:
222 – Qtr8, WinC1, FINC6
225 – Win2, SmiC2, SMI7
316 – Fin4, WinC3, QTR3
293 – Win3, QtrC1, QTR9
303 – Smi5, Smi2, (QtrC5), QTRC3
193 – QtrC4, Fin3, QtrC2
Win = Winner, Fin = Finalist, Smi = Semifinalist, Qtr = Quarterfinalist. Number indicates draft pick (C = Captain); capitalized entries indicate MAR Championship performance.

We qualified these teams instead of those that won at our sign-up-show-up events (districts). In such competitive regions, that’s a very big difference. All 3 of the Region Champions also finished within the points cutoff (i.e. we skipped past them to invite the full 6 teams on points), with 2729 and 2590 being the top 2. (We were right above 293.) Regionals don’t have the setup to yield such a high concentration.

I think we’ve all seen/expected the district correlation for a while, but it’s definitely interesting to see it in the numbers.

I am actually more interested in the lines that show the HoF teams vs the RCA winners… Most of us know the HoF teams, and all of them are great teams with great robots. What I find interesting is that the CA line is trending downward, but the HoF line is not.

This brings up a few interesting questions…

  1. What is the correlation between CA teams and “good robots”?
  2. Do the incredibly strong CA teams have strong all-around programs, thus resulting in good robots? Or are CCA judges picking teams that have strong robots in addition to strong CA criteria?
  3. Why is the RCA line dipping? Is it because there are more RCAs given out, so the pool has to be spread more thinly? Is it because RCA teams are concentrating more on their RCA performance than robot performance?

Any theories?

I have a couple thoughts about this. However, since this issue is complex, and I tend to be verbose, this note is likely to be a “tl;dr” for many.

First off, I really like the “Representation Index” as a way to compare the “relative success” of teams from these different qualification categories. Essentially, the “Representation Index” indicates whether or not a disproportionate number of the teams in a given category are making it into the CMP elimination rounds. Very high numbers mean that teams in that category make CMP elims at a much higher rate than their share of the field would predict. Very low numbers mean that almost none of the teams in that category end up in CMP elims.

Looking at the graph, one will see that the bigger categories have smoother trend lines, since the larger numbers of teams in these groups average out noise. Regional Winners are about 40% of the teams at CMP, while about 15% of the teams belong to each of the RCA, EI, and RAS categories. As basic statistics would predict, these categories have much more stable trend lines than the smaller categories, since whether or not one or two teams end up making eliminations doesn’t change the percentages much for a bigger category.

On the other hand, with the categories for “Original & Sustaining”, “Last Year’s Winners”, and “Hall of Fame” each being only a few percent of teams, those trend lines are much more volatile, despite being helped by the 4-year moving average.
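
The point about category size and noise can be illustrated with a quick simulation. The category sizes and the 10% elimination rate below are made-up numbers chosen only to show the effect, not figures from the paper: two categories with the same underlying rate, where the smaller one's observed "% in elims" bounces around far more from year to year.

```python
import random

def simulated_percent_in_elims(category_size, p_elims=0.10, years=10, seed=1):
    """Observed '% of category in CMP elims' over several simulated years, assuming
    each team independently makes elims with the same probability p_elims."""
    rng = random.Random(seed)
    return [sum(rng.random() < p_elims for _ in range(category_size)) / category_size
            for _ in range(years)]

# A ~160-team category (like regional winners) vs. a ~10-team category (like HoF):
# same true rate, but the small category's yearly percentages swing wildly.
print(simulated_percent_in_elims(160))
print(simulated_percent_in_elims(10))
```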

Now, on to Kim’s specific questions:

In general, I think that there is a pretty strong correlation between CA teams and “good robots.” However, the criterion of being “in CMP elims” (the criterion that matters for the “Representation Index”) is a pretty stringent restriction. The “CMP elims” robots aren’t just “good robots” but consist of only 96 robots in the world, out of around 2550 FIRST teams in 2013. Ignoring the fact that the “CMP elimination” teams aren’t really the “top 96” in the world, due to some excellent robots not attending CMP or not making it to the elimination rounds for other reasons, the “CMP elimination caliber” robots are about the top 4% of teams worldwide.

This top 4% of teams worldwide aren’t just “good robots” but are “exceptional robots.” So, the question really is, “What is the correlation between CA teams and exceptional robots?”

Well, what the statistics show is that being an RCA team isn’t as well correlated with being an exceptional robot as things such as winning CMP last year, qualifying by rank from a district CMP, being a HoF team, winning a regional, or being one of the handful of original & sustaining teams.

That said, I think that RCA teams generally do have “good robots”; they just have proportionately fewer “exceptional robots” than some of the other categories.

I think what we’re seeing is that some teams focus more on the robot, some teams focus more on CA activities, and some teams try to excel at both. Teams that focus more on the robot are likely to have a bit of an edge over similarly capable teams that focus on CA activities or teams that balance both. It requires a lot more team effort to build both an “exceptional robot” and be an RCA team than it does to only build an “exceptional robot.”

In general, I think the correlation between CA teams and “good robots” is primarily due to the strength of the program from those teams. From what I’ve seen from the outside, the RCA judges seem to make their decisions without really considering the strength of the team’s robot. The CCA judges are picking the very best RCA team each year – the level of competition for the CCA is so high that only elite programs are even in the running. Such elite programs are excellent not only in CA qualities, but also in the robot design, build, and operation, meaning they are very likely to have “exceptional robots.” I think this is why the HoF teams are so highly represented in the CMP elimination rounds.

Well, everything else being equal, one actually would expect the RCA line to dip a little bit every year, as the very best CA team from the prior year becomes a Hall-of-Fame team for the next year, and thus the RCA category loses one of its best teams, which then gets back-filled by a lesser team. This new HoF team was surely an “elite program” or they wouldn’t have won the CCA; accordingly, that team is very likely to produce an “exceptional robot.” However, the downward trend is moving faster than just one team shifting to the HoF category each year.

This is speculation on my part, but I think the primary reason for the downward trend in the RCA line is the arrival on the scene of the new “By Rank from District CMP” category. Since these are all at least “very good robots” within their district, these teams have earned higher representation in the elimination rounds, which has tended to displace RCA and RAS teams from elimination round berths.

However, I also find it interesting that EI teams have not seen the downward trend experienced by RAS and RCA teams. It would seem that over the past 6 years, EI teams have become just as strongly correlated with “exceptional robots” as RCA teams, although that was not the case back in the 2007 time frame.

I would also note that the fast drop in the RAS trend line makes sense, too, as with more and more veteran teams each year bringing up the average level of capability, it gets harder and harder for a rookie team to build an “exceptional robot” in their first year.

Having said all of the above, I would be really interested in seeing the “Regional Winner” trend line broken out into ***three separate categories*** for “Regional Winner Captain”, “Regional Winner 1st-Pick”, and “Regional Winner 2nd-Pick”, as I think at least one of those lines would be significantly different from the others! However, that data may be very difficult to add to the spreadsheet that was used to generate these charts, if it isn’t already there.

Actually, just a breakout of Captain / 1st Pick would be very interesting, and I think (predict) it would more closely match the District lines.

Exactly. Actually, it is because I think the lines would be different from one another that I asked about separating them out.

Like you, I predict the Captain/1st-Pick category would more closely match the district line, and the 2nd-Pick category would more closely match the RCA/EI lines.

Are you ready for a graph with way too many lines? The new summary has been uploaded.

I would have broken out the alliance positions of the Regional Winners in the first place if I had known that I could at the time. I thought it’d be impossible to do as someone without any web-scraping experience, until I remembered that 1114’s Scouting Databases include all alliance selection results (thanks Karthik et al.!). Using that as a base, I cross-referenced with the All Awards Database and online FIRST data to fill in the blanks and correct for errors. I don’t think it’s 100% accurate, but I’m sure it’s pretty close.
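
For anyone curious what that cross-referencing looks like in practice, here is a rough sketch of the idea using pandas. The file names and column names are invented placeholders; the actual databases are laid out differently, so this only shows the general merge-and-fill-gaps approach, not the exact process used.

```python
import pandas as pd

# Placeholder file and column names, not the real spreadsheet layouts.
selections = pd.read_excel("alliance_selections.xlsx")  # year, event, team, role (Captain/1st/2nd)
awards = pd.read_excel("award_history.xlsx")            # year, event, team, award

# Attach award history to each alliance-selection row so gaps and conflicts
# between the two sources can be spotted and corrected by hand.
merged = selections.merge(awards, on=["year", "event", "team"], how="left")
print(merged.head())
```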

I also decided to break out veteran Chairman’s Teams and first-time Chairman’s Teams. Over time, the ratio has swung from mostly first-timers to mostly veterans, but there’s enough of each to make both lines pretty smooth. It does show that the bigger Chairman’s Teams (Hall of Fame and, to a lesser degree, multiple RCA winners) tend to do better robot-wise.

In 2004, Regional Chairman’s Teams outperformed the Hall of Fame, but as teams like 254, 67, and 111 (among others) won CCAs, the RCA line dipped significantly and the Hall of Fame line rose. Nevertheless, as excellent as the Hall of Fame teams are, I don’t think the downward trend can be wholly attributed to the steady expansion of the HoF. Each year, a larger share of the CMP field qualifies competitively (Champion, wildcard, etc.), so it makes sense that teams that qualify by other means perform worse relative to the rest of the field.

It’s possible we’d see a different trend if we were using absolute measures of success, rather than relative measures.

Well, the Captain and 1st-pick lines do make sense. Before 2008 it was rare to have 12 matches at an event, and the seeding would rarely yield a true number 1. But as most know, the first pick is usually the first- or second-best robot at each event.

Wow, that is very interesting.

I had expected that the Regional Winners Captains / 1st-picks would be high-performing, but still below the District CMP teams.

What I had not expected was that the “Regional Winner 2nd-picks” would underperform RCA, EI, and basically every other category of robot other than the Rookie All-Stars.

However, when thinking about it, if the ranking system is doing a reasonably good job of sorting robots by capability and the draft is doing a reasonably good job of ordering the remaining teams, then the 2nd-picks of a regional winner are likely about the 17th to 24th best robots at the regional. At a 40-team regional, these are about the middle of the pack. Thinking of that in worldwide terms: there are about 2500 teams worldwide, so the middle of the pack would be robots ranked about 1200-1300 worldwide. It really shouldn’t be surprising that very few of these robots appear in the elimination rounds at CMP, as the CMP eliminations would be expected to contain 96 of the best robots in the world.
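
The back-of-envelope scaling in that last step can be written out explicitly. The numbers below are just the assumptions stated in this post (an 8-alliance bracket, a 40-team regional, ~2500 teams worldwide), and they bracket the ~1200-1300 estimate:

```python
teams_at_regional = 40
teams_worldwide = 2500

# With 8 alliances, captains are roughly ranks 1-8, 1st picks ~9-16, 2nd picks ~17-24.
for local_rank in (17, 24):
    worldwide_rank = local_rank / teams_at_regional * teams_worldwide
    print(local_rank, round(worldwide_rank))  # 17 -> 1062, 24 -> 1500
```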

Updated these numbers. New chart.

A few notes:

I only used the first 24 teams in elims from each division. I’m not deriding the role that 4th teams played on their alliances; I just wanted to keep the number consistent from year to year. I can’t pick out who the 25th through 32nd teams would have been for 2004-2013, so I chose to leave them out for 2014.

There’s a downward trend in the district teams’ success rate. Possible explanations include the increase in qualifying teams from Michigan, the addition of the NE and PNW district systems, regression to the mean, and random year-to-year variations. I personally think it’s mostly the last one. If there’s interest, I could break out the different district systems’ success rates, which could shed some light.

This was a historically bad year for Rookies at the Championship. Only two made it to elims, and they were not in the top 24. It was also bad for registered teams, none of whom made elims, but that can mostly be explained by the very small number of registered teams at the CMP this year (four).