How do the 2014 Regionals and Districts stack up?
2 Attachment(s)
Here are some fun things that interest me:
- Jim Zondag's history spreadsheets
- Ed Law's scouting spreadsheets
- Team 358's awards database
- The new common district points system
- "What's the Toughest Regional" threads
- This graph, also from Jim Zondag

Drawing inspiration (and data) from the items above, I've crafted some giant spreadsheets in the past few years to combine the available performance data with the twin goals of 1) assigning a performance index score to each team, and 2) using those scores to figure out which events have the toughest competitive fields. The spreadsheets are too large to upload to ChiefDelphi, unfortunately (I've tried). Nevertheless, here are a couple of fun charts that I made using my latest incarnation.

Attachment 15431 Attachment 15432

These graphics attempt to describe all district / regional competitions in terms of the strength of their top-tier teams as well as the depth of teams further down their team lists. To clarify, "Performance Index of Teams 7-16" means the average calculated performance index of the teams at that event who rank 7th through 16th in terms of the calculated index. The performance index for each team is based on their competition results (wins, alliance selections, elimination rounds) and awards since 2008. The system is partly based on the new district points system, but it's not identical. It includes an OPR component (so shoot me). For the most part, the same teams end up on top no matter what system one uses. Enjoy!

Edit: Here's an attempt at hosting the spreadsheet: Spreadsheet Download (30+ MB)
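For anyone who wants to experiment before grabbing the spreadsheet, here's a minimal Python sketch of a performance index along these lines. The record format, point values, and weights are illustrative assumptions, not the values actually used in the spreadsheet:

```python
# Illustrative sketch only -- the weights and record format are assumptions,
# not the actual values from the spreadsheet discussed in this thread.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EventResult:
    year: int
    qual_wins: int
    qual_losses: int
    alliance_pick: Optional[int]  # 1 = captain/first pick, 2 = second pick, None = not selected
    elim_round: str               # "none", "qf", "sf", "f", or "winner"
    opr: float                    # offensive power rating at the event
    award_points: float           # points for awards won at the event (assumed scale)

ELIM_POINTS = {"none": 0, "qf": 5, "sf": 10, "f": 15, "winner": 20}
PICK_POINTS = {None: 0, 1: 12, 2: 8}

def performance_index(results: list[EventResult], since: int = 2008) -> float:
    """Sum a team's event contributions since `since` (all weights are placeholders)."""
    total = 0.0
    for r in results:
        if r.year < since:
            continue
        total += 2.0 * (r.qual_wins - r.qual_losses)   # qualification record
        total += PICK_POINTS.get(r.alliance_pick, 0)   # alliance selection
        total += ELIM_POINTS.get(r.elim_round, 0)      # elimination progress
        total += 0.5 * r.opr                           # OPR component
        total += r.award_points                        # awards
    return total
```

Ranking an event then amounts to averaging the indices of, say, its 7th-16th strongest registered teams.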
Re: How do the 2014 Regionals and Districts stack up?
Watching the team list grow, I was pretty sure Wisconsin would be a slugfest this year. Thanks for confirming :)
Re: How do the 2014 Regionals and Districts stack up?
Not surprising, but interesting to see that the 1-6/7-16 chart is much more linear than the 1-4/21-24 chart. That said, any "all-in-one" metric is bound to have some significant flaws. Good to see Hatboro getting the respect it deserves, though. It's one of the deepest events there is. Despite being a week 1 event, pretty much every team can score a game piece and fields something resembling a functional robot.
Re: How do the 2014 Regionals and Districts stack up?
I'm surprised Sacramento ranks so high. I always hear the people who play there call it a "week 1 regional in week 4."
Re: How do the 2014 Regionals and Districts stack up?
Quote:
Re: How do the 2014 Regionals and Districts stack up?
Do you think you could post .pdfs of these same charts, but with Districts and Regionals on separate charts?
I realized that event size is probably skewing the data significantly... (#20-#24 is below the middle of the pack for a district, while it's a step above the middle of the pack for most regionals).

As a side note, why do Waterford and Bedford have so few teams registered for them? It's killing their rankings...

Thank you very much for all your data work and for posting these charts... they're very interesting! Any shot you could upload your .xls to another location so we can download it and tinker?
Re: How do the 2014 Regionals and Districts stack up?
Quote:
Re: How do the 2014 Regionals and Districts stack up?
Quote:
Re: How do the 2014 Regionals and Districts stack up?
2 Attachment(s)
Quote:
Attachment 15443 Attachment 15444

After thinking about this a bit more, I modified my methods again. In baseball we have Wins Above Replacement (WAR), and in FRC we have the Minimum Competitive Concept (MCC). They are similar concepts. I subtracted a baseline amount from each team's score in an attempt to represent the value a team adds above a bare level. Defining "replacement level" is somewhat arbitrary, but I defined it as attending one event, having a 5-7 record, getting picked late or not being selected, going down in the quarterfinals or not playing in elims, having an OPR of 10 (about 10% of the season's max OPR), and not winning awards. That amounts to about 15 points on the scale. Teams with a negative performance index defaulted to 0. About 40% of teams competing in 2013 had an index of 15 or less before this adjustment. The idea of this adjustment is to quantify the "value" that teams bring above and beyond the most basic level of competitive achievement. I think it produced slightly better numbers for gauging how exciting and competitive elimination rounds will be at a particular event.

I also went with teams 1-4 for "top tier" and 5-24 for "depth." I figure it's really hard to win a regional if there are already four super teams signed up - it means you'd likely either have to be better than one of those teams, or win through a super alliance of two of those teams. And I used 5th-24th to include all of the teams that would seem, on paper, most likely to reach elims. I think the 5th-24th average in particular is less misleading with the small baseline adjustment described above.
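To make the baseline adjustment concrete, here's a small Python sketch using the 15-point replacement level and the 1-4 / 5-24 groupings described above; the data shapes and everything else are assumptions, not the spreadsheet's actual formulas:

```python
# Minimal sketch, not the actual spreadsheet logic. Raw indices are assumed to
# already be capped at 0, as described in the post.
REPLACEMENT_LEVEL = 15.0  # ~ one event, 5-7 record, late/no pick, out in quarters, OPR ~10, no awards

def index_above_replacement(raw_index: float) -> float:
    """Subtract the replacement-level baseline; can go as low as -15 for a 0-index team."""
    return max(raw_index, 0.0) - REPLACEMENT_LEVEL

def event_strength(team_indices: list[float]) -> dict:
    """Average the adjusted indices of an event's 1st-4th ('top tier') and 5th-24th ('depth') teams."""
    adjusted = sorted((index_above_replacement(x) for x in team_indices), reverse=True)
    top_tier, depth = adjusted[:4], adjusted[4:24]
    return {
        "top_tier_1_4": sum(top_tier) / len(top_tier) if top_tier else 0.0,
        "depth_5_24": sum(depth) / len(depth) if depth else 0.0,
    }
```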
Re: How do the 2014 Regionals and Districts stack up?
Wisconsin, Orlando, and Las Vegas look a step above the rest.
Re: How do the 2014 Regionals and Districts stack up?
Quote:
I like your definition of "replacement level..." it seems to work pretty well. Someone on the cusp of making elims makes a lot of sense being called "replacement level," since that's the level at which you'd replace a team on your alliance with a backup robot. If I understand correctly, you then found that to be equal to about 15 points in this statistic you've developed (are you calling it Performance Index, currently?). 40% of FRC teams have a Performance Index of less than 15 points. You then subtracted 15 points from everyone's Performance Index to determine their Performance Index Above Replacement. I assume you allowed teams to have a negative Performance Index Above Replacement (but with a min of -15, since you capped the Performance Index at 0), correct?

I really love this idea, and depending on how you answer the questions in my first paragraph, I'd like to propose some different names... MCC describes what the "replacement-level team" is, but doesn't really describe what the stat does. The idea of wins above replacement level is very useful and easy to scale, so it seems like it'd make sense for it to be named something like WAR. I also like the idea of separate stats for on-field performance and for the Chairman's/spirit/GP dimension, so here's my suggestion: perhaps rWAR (robot Wins Above Replacement), which would then be scaled so that it corresponds to the number of qualifying wins at a 12-match event. An elite-level team would probably have an rWAR of about 6 (6 wins above replacement level, which you gave as about 5-7)... fortunately for those of us who are into baseball stats, that scales somewhat similarly to baseball WAR. :-) If we have a separate stat for the Chairman's/spirit/GP dimension, perhaps it could be called cWAR (for "character" or "Chairman's"). An all-around stat could be some statistical combination of both... probably called aWAR for "aggregate Wins Above Replacement."

Now I'm even more interested in seeing this spreadsheet... seeing how you go about calculating various things and trying (or not trying) to compensate for different elements! :-)
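If it helps to see the scaling idea written out, here's a hypothetical Python sketch; the points-per-win conversion and the aggregate weighting are pure assumptions, not anything actually proposed in the thread:

```python
# Hypothetical naming/scaling sketch -- the conversion factor and the aggregate
# weight are assumptions chosen only so that an elite team lands near rWAR ~ 6.
POINTS_PER_WIN = 10.0  # assumed: index points corresponding to one qualification win at a 12-match event

def r_war(robot_index_above_replacement: float) -> float:
    """robot Wins Above Replacement: the on-field component scaled to qualification wins."""
    return robot_index_above_replacement / POINTS_PER_WIN

def c_war(award_index_above_replacement: float) -> float:
    """'character'/Chairman's Wins Above Replacement: the awards/GP component, same scaling."""
    return award_index_above_replacement / POINTS_PER_WIN

def a_war(robot_component: float, award_component: float, award_weight: float = 0.25) -> float:
    """aggregate Wins Above Replacement: a weighted combination of the two."""
    return r_war(robot_component) + award_weight * c_war(award_component)
```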
Re: How do the 2014 Regionals and Districts stack up?
I don't know if you can call this a good or a bad thing, but the well-known outliers stand out, like Waterloo (a small slice of the globally elite teams matched against an admittedly below-average majority) or Mexico City (a lot of newer teams that will naturally have lower or nonexistent values across the board). In the 1-4 vs. 21-24 chart, the cluster of points also corroborates the common notion that the top 10% perform miles better than the median team at a competition (median assuming you have a 40-50 team event).

The data seems to confirm the opinions some have about event makeup, but there's also likely a way to shape the data around a totally different opinion when 2500 teams compete in anywhere from 8 to 58 matches before Champs. Not to sound needy, but I'd be curious to see matchups of 1-4 vs. 37-40 and 21-24 vs. 37-40.
Re: How do the 2014 Regionals and Districts stack up?
Nice work Dan!
Judging by any of the scales you generated, it looks like teams will certainly have their work cut out for them at LVR this year. There are some great teams competing, and the vast majority of teams will have already competed at an earlier regional.
Re: How do the 2014 Regionals and Districts stack up?
I've edited the first post with a link to the spreadsheet. Let me know if the link doesn't work.
Quote:
Quote:
Quote:
Re: How do the 2014 Regionals and Districts stack up?
Quote:
We've come a long way since 2006, when 111, 1625, 70, and 494 put on a show for the 30 other teams just struggling to score.