paper: FRC Event Comparison 2009-2013

Thread created automatically to discuss a document in CD-Media.

FRC Event Comparison 2009-2013
by: Jim Zondag

Comparison of all FRC events played 2009-2013.
May help you to choose events to attend.


I began using this method in 2009 when we launched the District system in Michigan. The original goal was to have a method for grading FRC events relative to one another, so that we could see how the new-format District events compared to traditional Regionals on performance metrics. This is the best method I have devised so far for this type of comparison. It works very well for measuring event capability growth within a season, as well as for comparing events across multiple seasons.

FRC_Event_Comparison_2009-2013 v2.xlsx (155 KB)

Since registration opens tomorrow and everyone is working to pick events, I figured I should post this again.
This sheet plots all Events played in the 2013 season on a 2-D scatter-plot.
What this plot shows is overall competitive strength of the event vs. the competitive balance of the event.

Higher on the chart means an overall stronger event.
Further right on the chart means a more balanced event.
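If you want to see roughly how the two axes are built, here is a quick Python sketch of the idea (not the exact spreadsheet formulas; it assumes “strength” is the mean OPR of the event’s teams and “balance” is that mean divided by the standard deviation of the OPRs, with made-up event data):

```python
# Sketch only: assumed definitions, not the workbook's exact math.
# "Strength" = mean OPR of the event; "balance" = mean OPR / std dev of OPR.
import statistics
import matplotlib.pyplot as plt

# Hypothetical per-event OPR lists, keyed by event name.
event_oprs = {
    "EventA": [55.0, 48.0, 40.0, 35.0, 30.0, 28.0, 22.0, 15.0],
    "EventB": [90.0, 70.0, 25.0, 20.0, 18.0, 15.0, 12.0, 10.0],
}

for name, oprs in event_oprs.items():
    strength = statistics.mean(oprs)             # vertical axis: higher = stronger event
    balance = strength / statistics.stdev(oprs)  # horizontal axis: further right = more balanced
    plt.scatter(balance, strength)
    plt.annotate(name, (balance, strength))

plt.xlabel("Balance (mean OPR / std dev of OPR)")
plt.ylabel("Strength (mean OPR)")
plt.show()
```

Every event becomes one labeled point, which is how the whole 2013 season fits on a single scatter plot.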

All of the raw data is available in tabular form in the OPR Event Data tab. The graph is pretty cluttered with the labels in the center regions.
I include the IRI as a reference point to set an upper limit to the “art of the possible” in event quality.

Due to the co-dependency of the variables used to generate this type of chart, events will trend up and to the right as they get better.

Fun with math! :slight_smile:
Good luck teams with registration. I hope everyone gets into the events they want.

Very cool data Jim, thanks for posting this.

Really cool stuff to look at!
Just a note- the Smoky Mountain Regional is listed pre-2013 as SmokyMountain, and in 2013 as SmokyMountains, and when you enter the latter into the Query Box, it returns an error. Might want to fix that.

EDIT: That same kind of bug occurs for a few other events too, including North Star which for 2013 is listed as MNNorthStar, and not Query-able.

Thanks! This is great work and very useful information.

I’m familiar with SNR when it comes to “Signal-To-Noise Ratio” in Radar, but what does SNR mean here?

EDIT: Nevermind this. My monitor wasn’t wide enough to notice the tab.

Amazing chart - I love this data… can’t wait to see how New England stacks up! It looks like in 2013 & 2011 the NE events were in the top 50%, but over the years they have spanned the range (and it’s interesting to see CT in the top 4 regular-season events in all but 2013). I’d be interested to see this correlated with how many NE vs non-NE teams attended each event… maybe I’ll tackle that one of these days :slight_smile:

It’s also nice to see that, aside from the District CMPs, the CMP is actually still pulling “the best of the best”. Not quite IRI style, but it’s good to see it still on the upper end of the spectrum. Makes me feel a bit better about some of the deviation charts we’ve seen, but as we all move towards districts, I expect to see that get better.

The only thing I was thinking was that powerhouses tend to skew some of this, but SNR looks like it covers that. Am I reading it right that a more negative/lower SNR would indicate a much broader OPR spread (and thus the likely presence of powerhouses and struggling teams)? It might be that I’m looking at this before having caffeine… :slight_smile:

Thanks for the great data!

You are correct, Kim. What a couple of powerhouses tend to do is push up the average, but if they are the only strong teams, that results in a poor SNR value.
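As a rough illustration (treating SNR as simply the event’s mean OPR divided by its standard deviation, which is close enough for the point; the numbers are made up):

```python
# Toy numbers, assuming SNR = mean OPR / std dev of OPR (not necessarily
# the spreadsheet's exact formula). Two powerhouses raise the average
# but widen the spread, so the SNR drops.
import statistics

balanced = [40.0, 38.0, 35.0, 33.0, 30.0, 28.0]    # everyone scores reasonably well
top_heavy = [95.0, 85.0, 20.0, 18.0, 15.0, 12.0]   # two powerhouses, rest struggling

for label, oprs in [("balanced", balanced), ("top-heavy", top_heavy)]:
    mean = statistics.mean(oprs)
    snr = mean / statistics.stdev(oprs)
    print(f"{label:9s}  mean OPR = {mean:5.1f}   SNR = {snr:4.2f}")
```

The top-heavy event has the higher average but the much lower SNR, which matches the broader-spread reading above.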

To cherry-pick a few neat examples from the 2013 data: in general, the more events teams play, the better they get. Bedford was about 75% 3rd-event teams and had an amazing average. It also had a very good SNR, as most teams could score reasonably well. While this is neat, check out Crossroads. It had a ton of overlap with Boilermaker. Crossroads also had a ton of second-event teams. You see similar trending between Traverse City and West Michigan (which had very similar overlap and timing). I did a quick check on Pine Tree, and there was a lot of overlap with BAE, and only 1 of the top 10 ranked teams was competing at their first regional of the year…

Fun data Jim. It is neat to see events “grow”, and get more competitive.

Hey Jim,

That is an excellent way to graphically present the data. Thanks for posting it!

I’m wondering how much the graphs would change, particularly near the top right corner, if you used the median absolute deviation from the median, divided by the median (MADM/M), instead of an SNR based on sigma. Sigma assumes a Gaussian distribution (which OPRs tend not to follow); MADM/M is more robust to the shape of the distribution.
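For what it’s worth, the calculation I have in mind is just this (with made-up OPR numbers):

```python
# MADM/M: median absolute deviation from the median, divided by the median.
# A robust spread measure; unlike sigma, a couple of outliers barely move it.
import statistics

def madm_over_median(oprs):
    med = statistics.median(oprs)
    madm = statistics.median([abs(x - med) for x in oprs])
    return madm / med

# Hypothetical OPR list with one powerhouse and one struggling team.
print(madm_over_median([95.0, 40.0, 38.0, 35.0, 33.0, 30.0, 28.0, 5.0]))
```

Lower values mean a tighter spread, so on the balance axis it would read in the opposite direction from SNR.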

If you don’t have the time for that, I’ll volunteer to take a crack at it if you’ll post the OPR data you used.

Ike & Jim,

Bedford, 75% were 3rd event players, YIKES! I knew it was high, I did not realize it was that high… Were there any of the 25% 2nd event teams that made it through Bedford and on to MSC? (I know Bedford was on the bubble and got there by Chairman’s)

Is the 75% “playing for nothing” district an anomaly? I seem to remember that Bedford may not have been on the original district list when registration opened last year.

Or do you typically see this in the last event before MSC?

I believe there were a couple that made it through that would not have otherwise. Last year, there were just barely enough teams to call for the extra district, which resulted in a lot of 3rd event teams. The 75% is an anomaly; in previous years there were not a lot of lottery slots, and they were more evenly distributed. By its very nature, though, this is most likely to occur at a late-season event. Bedford was also a late addition, so most teams had already signed up for and planned on attending two earlier events.