Are you ready for a graph with way too many lines? The new summary has been uploaded.
I would have broken out the Alliance positions of the Regional Winners in the first place if I had known I could at the time. I thought it would be impossible for someone with no web-scraping experience, until I remembered that 1114's Scouting Databases include all alliance selection results (thanks Karthik et al.!). Using that as a base, I cross-referenced with the All Awards Database and online FIRST data to fill in the blanks and correct for errors. I don't think it's 100% accurate, but I'm sure it's pretty close.
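For anyone curious, the cross-referencing step is basically just a join between the two datasets. Here's a minimal sketch of how that could look, assuming the databases have been exported to CSV with hypothetical file and column names (team, year, event, alliance, award); the actual spreadsheets are laid out differently, so treat this as an illustration rather than the exact process.

```python
# Sketch of cross-referencing alliance selections with Chairman's Award wins.
# File names and column names here are assumptions, not the real exports.
import pandas as pd

# Alliance selection results: one row per team per event, with the alliance
# slot that team ended up in (blank if it wasn't picked).
selections = pd.read_csv("alliance_selections.csv")

# Award history: one row per award given, including Regional Chairman's Awards.
awards = pd.read_csv("all_awards.csv")
chairmans = awards[awards["award"].str.contains("Chairman", case=False, na=False)]

# Attach each Chairman's win to that team's alliance position at the same event.
merged = chairmans.merge(
    selections[["team", "year", "event", "alliance"]],
    on=["team", "year", "event"],
    how="left",  # keep wins even where selection data is missing
)

# Rows with no alliance info are the blanks to fill in manually from FIRST data.
missing = merged[merged["alliance"].isna()]
print(f"{len(missing)} Chairman's wins still need manual cross-checking")
```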
I also decided to break out veteran Chairman's Teams from first-time Chairman's Teams. Over time, the ratio has swung from mostly first-timers to mostly veterans, but there's enough of each to make both lines pretty smooth. The split does show that the bigger Chairman's Teams (Hall of Fame and, to a lesser degree, multiple RCA winners) tend to do better robot-wise.
In 2004, Regional Chairman's Teams outperformed the Hall of Fame, but as teams like 254, 67, and 111 (among others) won CCAs, the RCA line dipped significantly and the Hall of Fame line rose. Still, as excellent as the Hall of Fame teams are, I don't think the downward trend can be wholly attributed to the steady expansion of the HoF. Each year, a larger share of the CMP field qualifies competitively (Champion, wildcard, etc.), so it makes sense that the teams that qualify through other routes perform worse relative to the rest of the field.
It's possible we'd see a different trend if we were using absolute measures of success, rather than relative measures.
