Compares methods of championship qualification based on relative metrics of success.
For each year from 2004 to 2013, compared various metrics of success for teams based on those teams' methods of qualification (see the sketch after the attachments below). Metrics included:

Summary.xlsx
Summary 2.0.xlsx
Summary2015.xlsx
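A minimal sketch of the kind of per-year comparison described above, assuming the spreadsheet data were exported to a CSV with one row per Championship team per year; the file name and columns (`year`, `team`, `method`, `made_elims`) are hypothetical stand-ins, not the actual layout of the attached workbooks:

```python
import pandas as pd

# Hypothetical export: one row per Championship team per year, with the
# team's qualification method and whether it reached eliminations.
df = pd.read_csv("cmp_teams.csv")  # assumed columns: year, team, method, made_elims

# Share of each qualification method's teams that reached elims, computed
# per year so the metric stays relative to that year's field.
success = (
    df.groupby(["year", "method"])["made_elims"]
      .mean()
      .unstack("method")
)
print(success)  # one row per year, one column per qualification method
```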
21-05-2013 20:51
z_beeblebrox
Data seems to suggest that districts produce more competitive teams at championships. Why do you think that is? More matches/events?
21-05-2013 21:04
nikeairmancurry
> Data seems to suggest that districts produce more competitive teams at championships. Why do you think that is? More matches/events?
21-05-2013 21:55
Siri
More playing time would be a huge factor, as most teams coming out of the districts have played a minimum of 36 qualifying matches (two district events and a district championship) and all the elimination matches.
As Jim Zondag has stated in the past, the more hands-on time with the robot, the better the performance.
22-05-2013 09:40
Kims Robot
I think we've all seen/expected the district correlation for a while, but it's definitely interesting to see it in the numbers.
I am actually more interested in the lines that show the HoF teams vs. the RCA winners. Most of us know the HoF teams, and all of them are great teams with great robots. What I find interesting is that the CA line is trending downward, but the HoF line is not.
This brings up a few interesting questions:
1. What is the correlation between CA teams and "good robots"?
2. Do the incredibly strong CA teams have strong all-around programs, thus resulting in good robots? Or are CCA judges picking teams that have strong robots in addition to strong CA criteria?
3. Why is the RCA line dipping? Is it because there are more RCAs given out, thus the pool has to be spread more thinly? Is it because RCA teams are concentrating more on their RCA performance than robot performance?
Any theories?
22-05-2013 18:39
Ken Streeter
> ... What I find interesting is that the CA line is trending downward, but the HoF line is not.
> This brings up a few interesting questions... Any theories?

> ... 1. What is the correlation between CA teams and "good robots"?

> ... 2. Do the incredibly strong CA teams have strong all-around programs, thus resulting in good robots? Or are CCA judges picking teams that have strong robots in addition to strong CA criteria?

> 3. Why is the RCA line dipping? Is it because there are more RCAs given out, thus the pool has to be spread more thinly? Is it because RCA teams are concentrating more on their RCA performance than robot performance?
23-05-2013 07:16
Chris Fultz
Having said all of the above, I would be really interested in seeing the "Regional Winner" trend line broken out into three separate categories, "Regional Winner Captain", "Regional Winner 1st Pick", and "Regional Winner 2nd Pick", as I think at least one of those lines would be significantly different from the others! However, that data may be very difficult to add to the spreadsheet that was used to generate these charts, if it isn't already there.
23-05-2013 10:20
Ken Streeter
Actually, just a breakout of Captain vs. 1st Pick would be very interesting, and I think (predict) it would more closely match the District lines.
27-05-2013 15:13
Basel A
Are you ready for a graph with way too many lines? The new summary has been uploaded.
I would have broken out the alliance positions of the Regional Winners in the first place if I had known at the time that I could. I thought it would be impossible for someone with no web-scraping experience, until I remembered that 1114's Scouting Databases include all alliance selection results (thanks, Karthik et al.!). Using that as a base, I cross-referenced with the All Awards Database and online FIRST data to fill in the blanks and correct for errors. I don't think it's 100% accurate, but I'm sure it's pretty close.
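A rough sketch of that cross-referencing step, assuming the alliance-selection results and the Regional Winner list were exported to CSVs; all file and column names here are hypothetical stand-ins for the real database layouts:

```python
import pandas as pd

# Hypothetical exports of the two sources mentioned above.
selections = pd.read_csv("alliance_selections.csv")  # year, event, team, slot
winners = pd.read_csv("regional_winners.csv")        # year, event, team

# Attach each Regional Winner's alliance slot (captain / 1st pick / 2nd pick)
# from the selection data.
merged = winners.merge(selections, on=["year", "event", "team"], how="left")

# Winners with no matching selection record are the blanks and errors that
# have to be resolved by hand against the online FIRST results.
print(merged[merged["slot"].isna()])
```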
I also decided to break out veteran Chairman's Teams and first-time Chairman's Teams. Over time, the ratio has swung from mostly first-timers to mostly veterans, but there's enough of each to make both lines pretty smooth. It does show that the bigger Chairman's Teams (Hall of Fame and, to a lesser degree, multiple RCA winners) tend to do better robot-wise.
In 2004, Regional Chairman's Teams outperformed the Hall of Fame, but as teams like 254, 67, and 111 (among others) won CCAs, the RCA line dipped significantly and the Hall of Fame line rose. Nevertheless, as excellent as the Hall of Fame teams are, I don't think the downward trend can be wholly attributed to steady expansion of the HoF. Each year, a larger share of the Championship field qualifies competitively (Champion, wildcard, etc.), so it makes sense that teams that qualify by other routes perform worse relative to the rest of the field.
It's possible we'd see a different trend if we were using absolute measures of success, rather than relative measures.
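One way to make that relative-vs-absolute distinction concrete, again with hypothetical file and column names: an absolute measure uses raw results directly, while a relative one ranks each team within its own year's field, so growth of the field shifts the two trends differently:

```python
import pandas as pd

df = pd.read_csv("cmp_teams.csv")  # assumed columns: year, team, method, wins

# Absolute measure: mean raw Championship win count per method per year.
absolute = df.groupby(["year", "method"])["wins"].mean()

# Relative measure: each team's win-count percentile within its own year,
# averaged per method, so a larger or stronger field moves the baseline.
df["pct"] = df.groupby("year")["wins"].rank(pct=True)
relative = df.groupby(["year", "method"])["pct"].mean()
```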

29-05-2013 08:54
nikeairmancurry
Well, the Captain and 1st Pick lines do make sense. Before 2008 it was rare to have 12 matches at an event, and the seeding would rarely yield a true number 1. But as most know, the first pick is usually the first- or second-best robot at each event.
30-05-2013 10:25
Ken Streeter
> Are you ready for a graph with way too many lines? The new summary has been uploaded.
> I ... have broken out the Alliance positions of the Regional Winners ...
20-05-2014 03:49
Basel A
Updated these numbers. New chart.
A few notes:
I only used the first 24 teams in elims from each division. I'm not deriding the role that 4th teams played on their alliances; it was just to keep the number consistent from year to year. I can't pick out who the 25th through 32nd teams would have been for 2004-2013, so I chose to leave them out for 2014.
There's a downward trend in the district teams' success rate. Possible explanations include the increase in qualifying teams from Michigan, the addition of the NE and PNW district systems, regression to the mean, and random year-to-year variations. I personally think it's mostly the last one. If there's interest, I could break out the different district systems' success rates, which could shed some light.
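That breakout would be a small extension of the same grouping, assuming each district team's row also recorded which district system it came from; the `system` column is hypothetical:

```python
import pandas as pd

df = pd.read_csv("cmp_teams.csv")  # assumed: year, team, method, system, made_elims

# Keep only district-qualified teams; 'system' names the district system
# (MI, MAR, NE, PNW, ...).
districts = df[df["method"] == "district"]

# Per-year success rate of each district system's Championship teams.
by_system = (
    districts.groupby(["year", "system"])["made_elims"]
             .mean()
             .unstack("system")
)
print(by_system)
```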
This was a historically bad year for Rookies at the Championship. Only two made it to elims, and they were not in the top 24. It was also bad for registered teams, none of whom made elims, but that can mostly be explained by the very small number of registered teams at the CMP this year (four).