#1
Re: paper: Relative Success of Championship Teams based on Qualification Method
Quote:
Like you, I predict the Captain/1st-Pick category would more closely match the district line, and the 2nd-Pick category would more closely match the RCA/EI lines.
#2
Re: paper: Relative Success of Championship Teams based on Qualification Method
Are you ready for a graph with way too many lines? The new summary has been uploaded.
I would have broken out the Alliance positions of the Regional Winners in the first place had I known at the time that I could. I thought it would be impossible for someone without any web-scraping experience, until I remembered that 1114's Scouting Databases include all alliance selection results (thanks, Karthik et al.!). Using that as a base, I cross-referenced with the All Awards Database and online FIRST data to fill in the blanks and correct errors. I don't think it's 100% accurate, but I'm sure it's pretty close.

I also decided to break out veteran Chairman's Teams and first-time Chairman's Teams. Over time, the ratio has swung from mostly first-timers to mostly veterans, but there are enough of each to make both lines pretty smooth. The data do show that the bigger Chairman's Teams (Hall of Fame and, to a lesser degree, multiple RCA winners) tend to do better robot-wise. In 2004, Regional Chairman's Teams outperformed the Hall of Fame, but as teams like 254, 67, and 111 (among others) won CCAs, the RCA line dipped significantly and the Hall of Fame line rose.

Nevertheless, as excellent as the Hall of Fame teams are, I don't think the downward trend can be wholly attributed to the steady expansion of the HoF. Each year, more of the CMP field qualifies competitively (Champion, wildcard, etc.), so it makes sense that the teams that qualify otherwise perform worse relative to the rest. It's possible we'd see a different trend if we used absolute measures of success rather than relative ones.
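The cross-referencing step can be sketched roughly like this. The (team, year) keys, role labels, and database shapes below are invented for illustration and are not the actual schema of the 1114 Scouting Databases or the All Awards Database:

```python
# Hedged sketch: fill gaps in a primary dataset (alliance selections)
# from a secondary source (an awards database). All values are placeholders.

selections = {  # primary source: scraped selections (None = missing entry)
    (254, 2004): "captain",
    (67, 2004): None,
    (111, 2004): "1st_pick",
}
awards_db = {  # secondary source used to fill blanks
    (67, 2004): "2nd_pick",
}

def cross_reference(primary, secondary):
    """Prefer the primary record; fall back to the secondary source for gaps."""
    merged = dict(primary)
    for key, role in merged.items():
        if role is None and key in secondary:
            merged[key] = secondary[key]
    return merged

merged = cross_reference(selections, awards_db)
# merged[(67, 2004)] is now "2nd_pick", filled from the secondary source
```

A third source (the online FIRST data mentioned above) would be another fallback layer applied the same way.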
#3
Re: paper: Relative Success of Championship Teams based on Qualification Method
Well, the captain and 1st-pick lines do make sense. Before 2008, it was rare to have 12 matches at an event, and the seeding would rarely yield a true number 1. But as most know, the first pick is usually the first- or second-best robot at each event.
#4
Re: paper: Relative Success of Championship Teams based on Qualification Method
Quote:
I had expected that the Regional Winner Captains / 1st-picks would be high-performing, but still below the District CMP teams. What I had not expected was that the Regional Winner 2nd-picks would underperform RCA, EI, and basically every other category of robot other than the Rookie All-Stars.

However, thinking about it: if the ranking system does a reasonably good job of sorting robots by capability, and the draft does a reasonably good job of ordering the remaining teams, then the 2nd-picks of a regional winner are likely about the 17th- to 24th-best robots at the regional. At a 40-team regional, that is about the middle of the pack. In worldwide terms, with about 2500 teams, the middle of the pack would be robots ranked about 1200-1300 worldwide. It really shouldn't be surprising that very few of these robots appear in the elimination rounds at CMP, where the eliminations should comprise 96 of the best robots in the world.
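That rank arithmetic can be sanity-checked with a quick back-of-envelope calculation. The 40-team regional and ~2500-team world are figures from the post; the linear scaling from event rank to world rank is only a rough assumption of mine:

```python
# Rough model: scale an event rank to a world rank by population ratio.
# REGIONAL_SIZE and WORLD_TEAMS come from the discussion above; the
# proportional-scaling model itself is a back-of-envelope assumption.

REGIONAL_SIZE = 40
WORLD_TEAMS = 2500

def world_rank(event_rank, event_size=REGIONAL_SIZE, total=WORLD_TEAMS):
    """Approximate world rank for a robot ranked event_rank at its event."""
    return event_rank * total // event_size

# A winning alliance's 2nd pick is roughly the 17th- to 24th-best robot
# at the event (after 8 captains and 8 first picks have been accounted for).
low, high = world_rank(17), world_rank(24)
print(low, high)  # 1062 1500 -- bracketing the ~1200-1300 midpoint cited above
```

The midpoint of that pick range, `world_rank(20)`, lands at 1250, consistent with the 1200-1300 estimate in the post.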
#5
Re: paper: Relative Success of Championship Teams based on Qualification Method
Updated these numbers. New chart.
A few notes:

I only used the first 24 teams in elims from each division. I'm not deriding the role that 4th teams played on their alliances; it was just to keep the number consistent from year to year. I can't pick out who the 25th through 32nd teams would have been for 2004-2013, so I chose to leave them out for 2014.

There's a downward trend in the district teams' success rate. Possible explanations include the increase in qualifying teams from Michigan, the addition of the NE and PNW district systems, regression to the mean, and random year-to-year variation. I personally think it's mostly the last one. If there's interest, I could break out the different district systems' success rates, which could shed some light.

This was a historically bad year for Rookies at the Championship. Only two made it to elims, and neither was in the top 24. It was also a bad year for registered teams, none of whom made elims, but that can mostly be explained by the very small number of registered teams at the CMP this year (four).

Last edited by Basel A : 20-05-2014 at 13:07.
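The proposed per-district breakout could be computed along these lines. The records and field names below are invented placeholders, not real results; "success" here means reaching one of the first 24 elimination slots in a division, matching the convention above:

```python
from collections import defaultdict

# Sketch of a per-district success-rate breakout. Sample records are
# hypothetical: (year, district_system, made_top24_elims).
records = [
    (2014, "FiM", True), (2014, "FiM", False),
    (2014, "MAR", True), (2014, "NE", False),
]

totals = defaultdict(lambda: [0, 0])  # (year, system) -> [successes, teams]
for year, system, success in records:
    totals[(year, system)][0] += int(success)
    totals[(year, system)][1] += 1

success_rates = {key: s / n for key, (s, n) in totals.items()}
# e.g. success_rates[(2014, "FiM")] == 0.5 with the placeholder data above
```

Plotting one such rate series per district system per year would give the separate lines suggested in the post.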