How do the 2014 Regionals and Districts stack up?

Here are some fun things that interest me:

Jim Zondag’s history spreadsheets
Ed Law’s scouting spreadsheets
Team 358’s awards database
The new common district points system
“What’s the Toughest Regional” threads
This graph, also from Jim Zondag

Drawing inspiration (and data) from the items above, I’ve crafted some giant spreadsheets over the past few years that combine the available performance data, with the twin goals of 1) assigning a performance index score to each team, and 2) using those scores to figure out which events have the toughest competitive fields. The spreadsheets are too large to upload to ChiefDelphi, unfortunately (I’ve tried). Nevertheless, here are a couple of fun charts I made with my latest incarnation.

2014 Competitions Comparison.pdf (188 KB)
2014 Competitions Comparison 2.pdf (190 KB)

These graphics attempt to describe all district and regional competitions in terms of the strength of their top-tier teams as well as the depth of the teams further down their rosters. To clarify, “Performance Index of Teams 7-16” means the average calculated performance index of the teams at that event who rank 7th through 16th by that calculated index.
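For anyone who wants to see the mechanics, the metric is just an average over a slice of the event’s sorted index list. Here’s a quick sketch; the index values and the helper name are made up for illustration:

```python
def event_metric(indices, first_rank, last_rank):
    """Average calculated performance index of the teams ranked
    first_rank through last_rank (1-based) by that index."""
    ranked = sorted(indices, reverse=True)       # best team first
    window = ranked[first_rank - 1:last_rank]    # e.g. ranks 7..16
    return sum(window) / len(window)

# Hypothetical 20-team event:
indices = [52.0, 48.5, 47.0, 41.2, 38.0, 35.5, 33.0, 31.8, 30.0, 28.4,
           27.1, 25.0, 24.2, 22.9, 21.0, 19.5, 18.0, 16.3, 15.0, 14.1]
print(event_metric(indices, 7, 16))   # "Performance Index of Teams 7-16"
print(event_metric(indices, 1, 6))    # top-tier average, "Teams 1-6"
```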

This calculated performance index for each team is based on their competition results (wins, alliance selections, elimination rounds) and awards since 2008. The system is partly based on the new district points system, but it’s not identical. It includes an OPR component (so shoot me). For the most part, the same teams end up on top no matter what system one uses.
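For a feel of how such an index might be assembled, here’s a purely illustrative sketch. The weights and category names below are invented for this example and are not the spreadsheet’s actual recipe:

```python
# All weights and category names here are hypothetical, for illustration only.
HYPOTHETICAL_WEIGHTS = {
    "district_style_points": 1.0,  # quals record, selection, elim advancement
    "awards_since_2008": 0.5,      # award points, perhaps decayed with age
    "opr": 0.3,                    # the OPR component
}

def performance_index(components):
    """Weighted sum of a team's per-category points."""
    return sum(HYPOTHETICAL_WEIGHTS[name] * pts
               for name, pts in components.items())

print(performance_index({"district_style_points": 55,
                         "awards_since_2008": 20,
                         "opr": 40}))  # 77.0
```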

Enjoy!

Edit: Here’s an attempt at hosting the spreadsheet:
Spreadsheet Download (30+ MB)

Watching the team list grow, I was pretty sure Wisconsin would be a slugfest this year. Thanks for confirming :slight_smile:

Not surprising, but interesting to see that the 1-6/7-16 chart is much more linear than the 1-4/21-24 chart. That said, any “all-in-one” metric is bound to have some significant flaws. Good to see Hatboro getting the respect it deserves, though; it’s one of the deepest events there is. Despite it being week 1, pretty much every team fields something resembling a functional robot and can score a game piece.

I’m surprised Sacramento ranks so high. I always hear the people who play at it call it a “week 1 regional in week 4.”

Its official nickname has been “The Defensive Regional.”

Do you think you could post .pdfs of these same charts, but with Districts and Regionals on separate charts?

I realized that event size is probably skewing the data significantly… (#20-#24 is below the middle of the pack for a district, while it’s a step above the middle of the pack at most regionals).

As a side note, why do Waterford and Bedford have so few teams registered? It’s killing their rankings…

Thank you very much for all your data work and for posting these charts… they’re very interesting! Any shot you could upload your .xls to another location so we can download it and tinker?

What do you mean by that?

It’ll be a huge offensive light show… I guess “slugfest” probably has its roots as a baseball idiom: a slugger is a hitter who hits a lot of home runs, so a “slugfest” is a game in which tons of runs get scored.

Sure, here are a couple with regionals and districts separated.
Districts depth vs top tier.pdf (182 KB)
Regionals depth vs top tier.pdf (184 KB)

After thinking about this a bit more, I modified my methods again. In baseball we have Wins Above Replacement (WAR), and in FRC we have Minimum Competitive Concept (MCC). They are similar concepts. I subtracted a baseline amount from each team’s score in an attempt to represent the value a team adds above a bare level.

Defining “replacement level” is somewhat arbitrary, but I defined it as attending one event, having a 5-7 record, getting picked late or not being selected, going down in the quarterfinals or not playing in elims, having an OPR of 10 (about 10% of the season’s max OPR), and not winning awards. That amounts to about 15 points on the scale. Teams with negative performance index defaulted to 0. About 40% of teams competing in 2013 had an index of 15 or less before this adjustment. The idea of this adjustment is to try to quantify the “value” that teams bring above and beyond the most basic level of competitive achievement. I think that produced slightly better numbers for gauging how exciting and competitive elimination rounds will be at a particular event.
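In code terms, the adjustment is just a subtract-and-floor. A minimal sketch; the ~15-point replacement level is from the definition above, and the function name is mine:

```python
REPLACEMENT_LEVEL = 15.0  # one event, 5-7 record, late/no pick, out in the
                          # quarters or no elims, OPR ~10, no awards

def index_above_replacement(raw_index):
    """Subtract the replacement baseline; teams at or below it default to 0."""
    return max(raw_index - REPLACEMENT_LEVEL, 0.0)

print(index_above_replacement(42.0))  # 27.0
print(index_above_replacement(12.0))  # 0.0 -- about 40% of 2013 teams land here
```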

I also went with teams 1-4 for “top tier” and 5-24 for “depth.” I figure it’s really hard to win a regional if there are already four super teams signed up - it means you’d likely have to either be better than one of those teams or win through a super alliance of two of them. And I did 5th-24th to include all of the teams that would seem, on paper, most likely to reach elims. I think the 5th-24th average in particular is less misleading with the small baseline adjustment described above.

I’ll see if I can figure out how to host it on my website. Beware; the spreadsheet is a sprawling mess in some ways. But it does have a bunch of knobs to turn for people who are so inclined.

Wisconsin, Orlando, and Las Vegas look a step above the rest.

So, if I’m understanding this correctly, you’re using/developing a statistic called MCC, which you intend to make similar to WAR? This sounds like a great idea (IMHO), and it should certainly be very interesting. What are you trying to have the metric include? Just on-field performance? General team greatness (basically adding the Chairman’s/spirit/GP dimension)?

I like your definition of “replacement level”… it seems to work pretty well. Calling a team on the cusp of making elims “replacement level” makes a lot of sense, since that’s the level at which you’d replace a team on your alliance with a backup robot. If I understand correctly, you then found that to be equal to about 15 points in this statistic you’ve developed (are you calling it Performance Index, currently?). 40% of FRC teams have a Performance Index of less than 15 points. You then subtracted 15 points from everyone’s Performance Index to determine their Performance Index Above Replacement. I assume you allowed teams to have a negative Performance Index Above Replacement (but with a minimum of -15, since you capped the Performance Index at 0), correct?

I really love this idea, and depending on how you answer the questions in my first paragraph, I’d like to propose some different names… MCC describes what the “replacement-level team” is, but it doesn’t really describe what the stat does. The idea of wins above replacement level is very useful and easy to scale, so it seems like it’d make sense to name the stat something like WAR.

I also like the idea of separate stats for on-field performance and for the Chairman’s/spirit/GP dimension, so here’s my suggestion: perhaps rWAR (robot Wins Above Replacement), scaled so that it corresponds to the number of qualifying wins at a 12-match event. An elite-level team would probably have an rWAR of about 6 (6 wins above the replacement-level record, which you gave as about 5-7)… fortunately for those of us who are into baseball stats, that scales somewhat similarly to baseball WAR. :slight_smile: If we have a separate stat for the Chairman’s/spirit/GP dimension, perhaps it could be called cWAR (for “character” or “chairman’s”). An all-around stat could be some statistical combination of both… probably called aWAR, for “aggregate Wins Above Replacement.”
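Just to make the proposed scaling concrete, here’s a rough sketch assuming a simple linear map; the elite-index calibration point is made up, not anything from the actual spreadsheet:

```python
ELITE_RWAR = 6.0     # elite team: ~11-12 quals wins vs. a 5-7 replacement record
ELITE_INDEX = 60.0   # hypothetical index-above-replacement of an elite team

def rwar(index_above_replacement):
    """Map index-above-replacement onto quals wins at a 12-match event."""
    return ELITE_RWAR * index_above_replacement / ELITE_INDEX

print(rwar(60.0))  # 6.0 -- elite
print(rwar(15.0))  # 1.5 -- solidly above replacement
```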

Now I’m even more interested in seeing this spreadsheet… seeing how you go about calculating various things and trying (or not trying) to compensate for different elements! :slight_smile:

I don’t know if you can call this a good or a bad thing, but the well-known outliers stand out, like Waterloo (a morsel of the global elite matched against an admittedly below-average majority) or Mexico City (a lot of newer teams that will naturally have lower or nonexistent values across the board). In the 1-4 vs. 21-24 chart, the cluster of points also corroborates the common notion that the top 10% perform miles better than the median team at a competition (assuming a 40-50 team event).

The data seem to confirm the opinions some have about event makeup, but with 2,500 teams competing in anywhere from 8 to 58 matches before Champs, there’s also likely a way to shape the data around a totally different opinion.

Not to sound needy, but I’d be curious to see matchups of 1-4 vs. 37-40 and 21-24 vs. 37-40.

Nice work, Dan!
Judging by any of the scales you generated, it looks like teams will have their work cut out for them at LVR this year. There are some great teams competing, and the vast majority of them will have already competed at an earlier regional.

I’ve edited the first post with a link to the spreadsheet. Let me know if the link doesn’t work.

As far as I know, the MCC idea originated in this thread, started by Ike, formerly of Team 33. I don’t think of MCC as a statistic; those threads talk about the minimum requirements for a robot to be competitive on Saturday afternoon. But I do think that type of thinking could lead to some sort of “replacement level” definition for FRC that could be used in a stat similar to WAR.

I guess the basic idea is to predict which teams will be generally good in 2014, especially in terms of on-field performance. Awards presumably have some predictive ability; I’m assuming that based on the fact that Zondag and the others on the district points team did some regression analysis to put their system through its paces.

I didn’t give out any negative values for performance index, and I did give it a minimum of zero after subtracting. Mainly I think the point values at those levels are mostly noise, and I also wasn’t in the mood to assign a negative contribution to any teams.

I’ve said for the past couple of years that Wisconsin is one of the deepest events, and now here is some proof!

We’ve come a long way since 2006 when 111, 1625, 70 and 494 put on a show for the 30 other teams just struggling to score.