#1
Re: paper: Relative Success of Championship Teams based on Qualification Method
First off, I really like the "Representation Index" as a way to compare the "relative success" of teams from these different qualification categories. Essentially, the Representation Index indicates whether a disproportionate number of the teams in a given category are making it into the CMP elimination rounds. Very high numbers mean that almost all teams in that category make it to CMP elims; very low numbers mean that almost none of them do.

Looking at the graph, you'll see that the bigger categories have smoother trend lines, since the larger number of teams in those groups averages out noise. Regional Winners are about 40% of the teams at CMP, while about 15% of the teams belong to each of the RCA, EI, and RAS categories. As basic statistics would predict, these categories have much more stable trend lines than the smaller categories, since one or two teams making or missing eliminations doesn't change the statistics much for a bigger category. On the other hand, with the "Original & Sustaining", "Last Year's Winners", and "Hall of Fame" categories each being only a few percent of teams, those trend lines are much more volatile, despite being helped by the 4-year moving average.
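To make that concrete, here's a minimal sketch of how such an index could be computed. The exact formula isn't spelled out in this thread, so I'm assuming it's a category's share of elimination berths divided by its share of the CMP field, with the 4-year moving average applied to smooth the trend lines:

```python
# Hypothetical sketch of a "Representation Index" (assumed definition, not
# taken from the paper): a category's share of CMP elimination berths
# divided by its share of all CMP entrants. An index of 1.0 means the
# category reaches elims exactly in proportion to its size; >1.0 means
# over-represented, <1.0 under-represented.

def representation_index(elim_count, cmp_count, total_elim, total_cmp):
    elim_share = elim_count / total_elim  # fraction of elim berths held by the category
    cmp_share = cmp_count / total_cmp     # fraction of CMP teams in the category
    return elim_share / cmp_share

def moving_average(values, window=4):
    """Trailing moving average used to smooth the year-to-year trend lines."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Example: a category with 40% of the CMP field holding 50% of elim berths
print(representation_index(48, 160, 96, 400))  # -> 1.25 (over-represented)
```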
Now, on to Kim's specific questions:

This top 4% of teams worldwide aren't just "good robots" but "exceptional robots." So the question really is, "What is the correlation between CA teams and exceptional robots?" What the statistics show is that being an RCA team isn't as well correlated with having an exceptional robot as winning CMP last year, qualifying by rank from a district CMP, being a HoF team, winning a regional, or being one of the handful of Original & Sustaining teams.

That said, I think RCA teams generally do have "good robots" -- they just have proportionately fewer "exceptional robots" than some of the other categories. I think what we're seeing is that some teams focus more on the robot, some focus more on CA activities, and some try to excel at both. Teams that focus more on the robot are likely to have a bit of an edge over similarly capable teams that focus on CA activities or balance both. It takes a lot more team effort to build an "exceptional robot" and be an RCA team than it does to build an "exceptional robot" alone.
This is speculation on my part, but I think the primary reason for the downward trend in the RCA line is the arrival of the new "By Rank from District CMP" category. Since these teams are all at least "very good robots" within their district, they have earned higher representation in the elimination rounds, which has tended to displace RCA and RAS teams from elimination berths.

However, I also find it interesting that EI teams have not seen the downward trend experienced by RAS and RCA teams. It would seem that over the past 6 years, EI teams have become just as strongly correlated with "exceptional robots" as RCA teams, although that was not the case back in the 2007 time frame. I would also note that the fast drop in the RAS trend line makes sense: with more and more veteran teams each year raising the average level of capability, it gets harder and harder for a rookie team to build an "exceptional robot" in their first year.

Having said all of the above, I would be really interested in seeing the "Regional Winner" trend line broken out into three separate categories -- "Regional Winner Captain", "Regional Winner 1st-Pick", and "Regional Winner 2nd-Pick" -- as I think at least one of those lines would be significantly different from the others! However, that data may be very difficult to add to the spreadsheet used to generate these charts, if it isn't already there.
#2
Re: paper: Relative Success of Championship Teams based on Qualification Method
#3
Re: paper: Relative Success of Championship Teams based on Qualification Method
Like you, I predict the Captain/1st-Pick category would more closely match the district line, and the 2nd-Pick category would more closely match the RCA/EI lines.
#4
Re: paper: Relative Success of Championship Teams based on Qualification Method
Are you ready for a graph with way too many lines? The new summary has been uploaded.
I would have broken out the alliance positions of the Regional Winners in the first place had I known I could at the time. I thought it would be impossible for someone without any web-scraping experience, until I remembered that 1114's Scouting Databases include all alliance selection results (thanks, Karthik et al.!). Using that as a base, I cross-referenced with the All Awards Database and online FIRST data to fill in the blanks and correct errors. I don't think it's 100% accurate, but I'm sure it's pretty close.

I also decided to break out veteran Chairman's Teams and first-time Chairman's Teams. Over time, the ratio has swung from mostly first-timers to mostly veterans, but there are enough of each to make both lines pretty smooth. It does show that the bigger Chairman's Teams (Hall of Fame and, to a lesser degree, multiple RCA winners) tend to do better robot-wise. In 2004, Regional Chairman's Teams outperformed the Hall of Fame, but as teams like 254, 67, and 111 (among others) won CCAs, the RCA line dipped significantly and the Hall of Fame line rose.

Nevertheless, as excellent as the Hall of Fame teams are, I don't think the downward trend can be wholly attributed to the steady expansion of the HoF. Each year, more of the CMP qualifies competitively (Champion, wildcard, etc.), so it makes sense that the teams that qualify otherwise perform worse relative to the rest of the field. It's possible we'd see a different trend if we were using absolute measures of success rather than relative measures.
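For anyone curious what that cross-referencing might look like, here's a rough sketch. The file names and column layout are my guesses for illustration, not the actual structure of 1114's database or the All Awards Database:

```python
# Rough sketch of cross-referencing alliance-selection data with award data.
# File names and columns are hypothetical -- the real 1114 Scouting Database
# and All Awards Database exports will differ.
import pandas as pd

# One row per (year, event, team), with a boolean won_event flag and an
# alliance_role column such as "captain", "first_pick", or "second_pick".
selections = pd.read_csv("alliance_selections.csv")

# One row per (year, event, team, award), e.g. "Regional Chairman's Award".
awards = pd.read_csv("all_awards.csv")

# Tag each regional-winning team with its alliance position and any awards.
winners = selections[selections["won_event"]]
merged = winners.merge(awards, on=["year", "event", "team"], how="left")

# Count regional winners by alliance position, per year.
print(merged.groupby(["year", "alliance_role"])["team"].count())
```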
#5
Re: paper: Relative Success of Championship Teams based on Qualification Method
Well, the captain and 1st-pick lines do make sense. Before 2008, it was rare to have 12 matches at an event, and the seeding would rarely yield a true number 1. But as most know, the first pick is usually the first- or second-best robot at each event.
#6
Re: paper: Relative Success of Championship Teams based on Qualification Method
I had expected that the Regional Winner captains and 1st-picks would be high-performing, but still below the District CMP teams. What I had not expected was that the Regional Winner 2nd-picks would underperform RCA, EI, and basically every other category of robot except the Rookie All-Stars.

However, thinking it through: if the ranking system does a reasonably good job of sorting robots by capability, and the draft does a reasonably good job of ordering the remaining teams, then the 2nd-picks of a regional winner are likely about the 17th- to 24th-best robots at the regional. At a 40-team regional, that is about the middle of the pack. In worldwide terms, with roughly 2500 teams, the middle of the pack means robots ranked about 1200-1300 worldwide. It really shouldn't be surprising that very few of these robots appear in the elimination rounds at CMP, since CMP eliminations should comprise 96 of the best robots in the world.
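As a sanity check on that arithmetic (my own back-of-the-envelope numbers, assuming eight alliances and that ranking and draft order track robot capability):

```python
# Back-of-the-envelope check on where a Regional Winner's 2nd-pick sits.
# With 8 alliances, captains roughly occupy event ranks 1-8, 1st picks
# ranks 9-16, and 2nd picks ranks 17-24 (assumed, not from the paper).
regional_size = 40
world_size = 2500
second_pick_ranks = (17, 24)

# Scale the event-rank band to a worldwide rank band.
low, high = (r / regional_size * world_size for r in second_pick_ranks)
print(low, high)  # 1062.5 1500.0 -- midpoint ~1280, i.e. roughly the
                  # "1200-1300 worldwide" middle of the pack cited above
```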
#7
Re: paper: Relative Success of Championship Teams based on Qualification Method
Updated these numbers. New chart.
A few notes:

I only used the first 24 teams in elims from each division. I'm not deriding the role that 4th teams played on their alliances; it was just to keep the number consistent from year to year. I can't pick out who the 25th through 32nd teams would have been for 2004-2013, so I chose to leave them out for 2014.

There's a downward trend in the district teams' success rate. Possible explanations include the increase in qualifying teams from Michigan, the addition of the NE and PNW district systems, regression to the mean, and random year-to-year variation. I personally think it's mostly the last one. If there's interest, I could break out the different district systems' success rates, which could shed some light.

This was a historically bad year for rookies at the Championship. Only two made it to elims, and neither was in the top 24. It was also bad for registered teams, none of whom made elims, but that can mostly be explained by the very small number of registered teams at the CMP this year (four).
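If it helps anyone reproduce the cutoff, here's a minimal sketch of the "first 24 per division" rule, with a hypothetical data layout:

```python
# Sketch of the consistency rule above: keep only the first 24 elim teams
# per division each year, so 2014's 4-team alliances compare cleanly with
# 2004-2013. Column names are hypothetical, not from the actual dataset.
import pandas as pd

elims = pd.read_csv("cmp_elims.csv")  # year, division, team, pick_order

top24 = (elims.sort_values(["year", "division", "pick_order"])
              .groupby(["year", "division"])
              .head(24))
```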