Strongest regional competitions
Alright, all you data people: what do you feel are the top 20 strongest/toughest regional competitions? And what are the weakest?
Re: Strongest regional competitions
Quote:
Also off the top of my head, some regionals with good rosters: Arkansas, Dallas, Silicon Valley.
Re: Strongest regional competitions
Since these threads discussing the "most competitive" or "strongest" or "toughest" events are asking about robot performance, and BBQ or Sauce has been pointed to as the appropriate performance metric, I have been curious why a judged award (Chairman's) should be included in the conversation. I think it would be more insightful to examine the data for event champions/winners, normalized like a batting average by the number of events entered in a team's history.
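As a minimal sketch of that batting-average idea (all team numbers and histories below are made-up placeholders, not real data):
Code:
# "Batting average" for event wins: wins normalized by events entered.
def win_average(events_entered: int, events_won: int) -> float:
    """Fraction of entered events that a team won (0.0 to 1.0)."""
    return events_won / events_entered if events_entered else 0.0

# Hypothetical histories: team number -> (events entered, events won)
history = {148: (60, 25), 118: (55, 20), 9999: (3, 0)}

for team, (entered, won) in sorted(
        history.items(), key=lambda kv: -win_average(*kv[1])):
    print(f"Team {team}: {win_average(entered, won):.3f}")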
Re: Strongest regional competitions
I can't remember a week one as loaded as Dallas is this year...everyone has their work cut out for them for sure.
Re: Strongest regional competitions
Finger Lakes.
Not the most star-studded of line-ups, but every single team is solid: 1511, 340, 229, 191, 1507, 1126.... Feel free to add on to this; the Rochester area is incredible. Most notable moment: one tube hang or minibot away from beating 2056/217 in 2011.
Re: Strongest regional competitions
To account for competitiveness in terms of the robot competition alone (no other awards), I propose a new ranking system: WORldS last yEar ranking, or WORSE. (I really wanted that acronym.)
Since comparing teams within their own region is generally not going to tell you which region is strongest, we can instead use each team's ranking at the last world championship. Each team that attended worlds is given a score of (100 - division ranking from last year). This isn't perfect, as great teams can miss worlds and, as we all know, the rankings can be messed up - but I think it'll work better than BBQ. Anyone with access to the data and better computer skills than me want to try it?
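A minimal sketch of what WORSE could look like, assuming last year's division rankings are on hand. The post doesn't say whether event scores should be summed or averaged, so summing here is my own assumption, and all data below is invented:
Code:
# WORSE: each worlds attendee scores (100 - division ranking); teams that
# missed worlds score 0. Event score = sum over its roster (assumption).
division_rank = {254: 1, 1114: 2, 2056: 5, 118: 12}  # team -> rank

def worse_score(team: int) -> int:
    """(100 - division ranking) for worlds attendees, else 0."""
    return 100 - division_rank[team] if team in division_rank else 0

event_rosters = {
    "Silicon Valley": [254, 118, 9999],
    "Waterloo": [1114, 2056],
}
for event, teams in event_rosters.items():
    print(event, sum(worse_score(t) for t in teams))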
Re: Strongest regional competitions
Quote:
Personally, I want to see some kind of standardized z-score system that takes last year's OPRs of events and compares them with the mean score of that week of competition. Of course, all the usual qualms about OPR would apply, but I think it's a little more numerical and detailed than BBQ, and it reduces rankings to a minimal factor.
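One possible reading of that z-score idea, as a sketch (all OPR values below are placeholders):
Code:
# Standardize a team's OPR against the mean/stdev of all OPRs from its
# week of competition, so teams from different weeks can be compared.
import statistics

week_oprs = [42.0, 55.5, 38.2, 61.0, 47.3, 30.1]  # all OPRs from one week
mu = statistics.mean(week_oprs)
sigma = statistics.stdev(week_oprs)

def opr_z(opr: float) -> float:
    """Standard score of an OPR relative to its week of competition."""
    return (opr - mu) / sigma

print(f"z-score of a 61.0 OPR in this week: {opr_z(61.0):.2f}")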
Re: Strongest regional competitions
Maybe we should use a standardized OPR, where we give teams an OPR percentage per year based on their OPR divided by the max OPR for the season/week. We can average the percentages to give a quantitative estimate of how well one team does against the other teams that year. This should also account for teams that perform abnormally well/badly.
I feel like WORSE and BBQ won't give a representative ranking because, as mentioned, teams who almost won will be ignored.
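A quick sketch of that standardized-OPR percentage (all OPR values are placeholders):
Code:
# Normalized OPR: each year's OPR divided by that season's max OPR,
# then averaged across years for a single per-team estimate.
def opr_percentages(team_oprs_by_year, max_opr_by_year):
    """OPR as a fraction of the season max, one value per year."""
    return {yr: opr / max_opr_by_year[yr]
            for yr, opr in team_oprs_by_year.items()}

team = {2013: 48.0, 2014: 80.0, 2015: 31.0}
season_max = {2013: 96.0, 2014: 160.0, 2015: 62.0}

pcts = opr_percentages(team, season_max)
print(sum(pcts.values()) / len(pcts))  # average percentage across years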
Re: Strongest regional competitions
Quote:
LSR 2013   LSR 2014   Dallas 2015
  57         57          57
 118        118         118
 148        148         148
 192        231         457
 231        418         624
 418        441         647
 441        457         704
 457        624         932
 624        653         987
Above are the oldest ten teams listed from each regional. They have much in common (simply a side-by-side comparison in Excel). Been there to play with them. Makes for an exciting competition no matter what week it occurs :yikes:
Re: Strongest regional competitions
MAR Hatboro-Horsham looks to once again be stacked, with a lineup that includes all previous MAR champions, a few of which have made multiple Einstein appearances in the last few years, and a good number of MAR's biggest players.
Re: Strongest regional competitions
This is an interesting topic that can be filled with a lot of opinion. :)
To look at a team's previous rankings is a start, but you must know the game before you can say the competition will be harder at one location rather than another. Also, looking at an overall OPR ranking is not always a complete understanding of a team. How many members graduated? How strong is their student base? Did they lose/gain any mentors that would affect the team? All of these are determining factors in the overall strength of a particular team. :confused:
Also, remember that a team that attends multiple events gets better, so an event later in the year, with many teams that have already competed, will likely be a harder event. An example of that is to look at previous years' Alamo and Lonestar events. Many of the same teams compete at both, and it doesn't matter which one came first: the second event is the tougher event. There are many pairs of events similar to this example around the country. ::rtm::
I love that everyone is already looking at the competition, but without knowing the game it is a difficult call. If the game is something that no one has seen, how can you tell how a team will do? Maybe a rookie team will step up and design the ultimate robot that can accomplish everything the game allows... there are a lot of unknowns at this time, and until everyone has competed in this year's game, who knows. :eek:
Re: Strongest regional competitions
Quote:
While I expect most FRC teams to be more stable than college football programs, all we can do before these regionals play out is the same thing the media does before the college football teams play: speculate, hype, and try to use previous years' stats to justify our predictions. However, I'd say FRC can be more predictable than football 9 times out of 10 ;)
Re: Strongest regional competitions
Quote:
Preseason top 5 teams: Florida State, Alabama, Oregon, Oklahoma, Ohio State.
AP Poll Week 16 top 5 teams: Alabama, Florida State, Oregon, Baylor, Ohio State.
So the AP Poll went from Oklahoma being #4 to unranked, but other than that the preseason polls did a pretty decent job: four of the preseason top five were still in the top five at the end. So sure, you can't account for every factor (like student, mentor, and sponsor loss), but preseason predictions can still do a fairly decent job. Good teams tend to be good from year to year. What are the chances that every team at a regional will be worse than the year before? Pretty slim. Some teams will get worse, but others will get better.
The Waterloo regional will be one of the more competitive ones next year. Only two of the teams attending were not in eliminations at any event last year.
EDIT: I decided to expand on the method of ranking events by the percentage of teams that made eliminations at their first event the previous year. It is skewed toward areas that had smaller event sizes the previous year (like PNW), which probably makes it a poor indicator of regional competitiveness overall. However, it still gives insights, especially within areas of similar event sizes. For example, Hatboro is ranked much higher than other MAR districts. Here are the results:
Code:
PNW   Mt. Vernon   80.65%
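A sketch of how that percentage could be computed, assuming you already know each team's elims result from its first event the previous year (rosters and results below are invented; in practice they would come from real event data):
Code:
# For each event, the percentage of attending teams that made elims
# at their first event the previous year.
made_elims_last_year = {254, 1114, 118, 148, 2056}  # at their first event

event_rosters = {
    "Mt. Vernon": [254, 1114, 118, 148, 2056, 9991, 9992],
    "Hatboro-Horsham": [225, 341, 1114, 9993],
}

for event, teams in event_rosters.items():
    pct = 100 * sum(t in made_elims_last_year for t in teams) / len(teams)
    print(f"{event}: {pct:.2f}%")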
Re: Strongest regional competitions
I don't think anybody is trying to look at the future in a fatalistic way. It's just fun to look at the different events and see how they compare. Any way of doing it has significant flaws, but it's still a fun exercise for some people.
|
Re: Strongest regional competitions
The hardest/strongest regional(s) is (are) always the one(s) you attend. Remember, you can only really understand how good a team is if you see them live! :)
Re: Strongest regional competitions
I finally got around to the tedious task (for me, without scripting capabilities) of putting all the current team lists for the events into Excel and looking up aWAR for each team.
If you're new to aWAR, it is essentially a year-by-year statistic describing how good/successful a team was in a certain year. It is primarily based on the district point system (i.e., teams accrue points based on how they do in quals, picking, elims, and awards), plus an addition from OPR. District and CMP points have a 1.7x multiplier. This raw point value is then adjusted and scaled so that a 'replacement level' team (I determined a replacement-level team to be one on the cusp of making elims at a 40-team district event, with a 5-7-0 record, who may or may not win some awards) has an aWAR around 0 and a dominant team that year has an aWAR around 7. aWAR stands for 'aggregate Wins Above Replacement,' explaining why it is scaled as such... an aWAR of 0 suggests a team capable of being on the cusp of making elims; an aWAR of 7 suggests a team capable of a perfect record in quals. For more information about aWAR, see the thread where I first introduced it (http://www.chiefdelphi.com/forums/sh...8&postcount=1).
At any rate, for this ranking I used teams' multi-season aWAR... the weighted average of the last 4 years (40%, 32%, 20%, 8%). For each event I took the average of all teams, the average of the top 8 teams by aWAR, the average of the top 16, and the average of the top 24. Each is given below. The average aWAR could theoretically evaluate how good the qualification matches will be, whereas the average of the top 8 may be better for determining which events have the most 'star power', and the average of the top 24 may be better for evaluating how competitive elims will be. Note that the averages of the upper subsets (particularly the top 24) will favor large events, and that the average of all teams may be pretty foreign to how folks usually evaluate how good a tournament is (looking at the top few teams).
As is starting to be discussed in this thread, no statistical description of team performance (whether over the team's history, recent years, or last year) will perfectly describe the team's success... however, you do see teams generally performing around similar levels (i.e., perennial powerhouses, perennial contenders, teams that sometimes build contenders, teams that have never made elims, etc.). The idea is that you can predict roughly how competitive events will be relative to others... obviously this can only go so far.
I'm attaching a spreadsheet that allows you to sort/rank the events yourself (perhaps filter out categories?) and also to sort/rank teams by aWAR... I recommend this particularly for those of you who are intrigued by or skeptical of aWAR. Filter down to teams from your area and see what you think... I hope to upload my full aWAR spreadsheet, which lets you use the lookup feature to 'study' teams' recent years and individual seasons (similar to the lookup sheet on Ed Law's spreadsheet).
As a last note, almost all the groundwork to calculate aWAR was laid by Dan Niemitalo (Nemo), who made the spreadsheet for the same reason but to generate a statistic he called Performance Index.
Sorted by Top 8 aWAR:
Code:
Rank  Event Code  Week  Type  Event Name  Average  top8  top16  top24
Sorted by Top 16 aWAR:
Code:
Rank  Event Code  Week  Type  Event Name  Average  top8  top16  top24
Sorted by Top 24 aWAR:
Code:
Rank  Event Code  Week  Type  Event Name  Average  top8  top16  top24
Sorted by Average aWAR:
Code:
Rank  Event Code  Week  Type  Event Name  Average  top8  top16  top24
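A minimal sketch of the multi-season weighting and per-event averages described above, assuming per-year aWAR values are already computed (all numbers here are invented):
Code:
# Multi-season aWAR: weighted average of the last four seasons
# (40/32/20/8%, most recent first), then per-event averages over all
# teams and over the top 8/16/24 teams.
WEIGHTS = [0.40, 0.32, 0.20, 0.08]

def multi_season_awar(yearly):
    """Weighted average of up to the last four seasons of aWAR."""
    pairs = list(zip(WEIGHTS, yearly))  # truncates if fewer than 4 seasons
    return sum(w * a for w, a in pairs) / sum(w for w, _ in pairs)

def event_summary(team_awars):
    """Average aWAR over all teams and over the top 8/16/24 at an event."""
    ranked = sorted(team_awars, reverse=True)
    def top(n):
        return sum(ranked[:n]) / min(n, len(ranked))
    return {"average": sum(ranked) / len(ranked),
            "top8": top(8), "top16": top(16), "top24": top(24)}

teams = [multi_season_awar(h) for h in
         [[5.1, 4.2, 3.0, 6.0], [1.0, 0.5], [7.0, 6.5, 7.2, 5.9]]]
print(event_summary(teams))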
Re: Strongest regional competitions
Quote:
I've been toying with concepts like this in my head as well. I'd like to create an index that tries to compare performance at different events more fairly, similar to the way baseball stats can adjust for things like park effects, league effects, different levels of offense in different years, and so on.
Rather than the average OPR for the event, for evaluating replacement level I'd probably favor something like the average OPR of teams 20-28 on the OPR list. And for evaluating the difficulty of winning the event, I'd probably look at the average OPR of the top 4-8 teams, or maybe even just the top 2-3. It depends what one is looking for. If you want to know how hard an event is to WIN, then you mainly need to look at the strength of the top two teams other than your own to know how high the bar is. If you want to gauge how hard it is to make the semifinals, on the other hand, you're probably looking at the strength of the top 10 or so teams, because you want to be in that group to have a good shot at getting on one of the top 5 or so alliances and avoiding being the underdog in the quarterfinals.
I'm just spitballing here, but I think it might make sense to weight the value of an event win by the average of the top ~3 teams, compared to the average of the top ~3 across ALL events, then assign more or less value to a win based on how the event stacks up. And that's for the 1st and 2nd robots on the alliance - I'd probably want to do something different for the 2nd pick or a backup robot. For finalists (robots 1+2) I'm probably looking at the average of robots ~3-5 compared to that average for all events. And so on.
This has issues and it's a loose idea in my mind so far, but I think it would provide a bit more of a basis for comparing a team's win at Event A to another team's semifinalist finish at Event B.
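A loose sketch of that win-value idea under the assumptions above (top ~3 average per event versus the top ~3 average across all events; all OPRs are placeholders):
Code:
# Value an event win by comparing the mean OPR of that event's top ~3
# teams with the same top-3 mean across all events. A factor above 1.0
# marks a win there as tougher than an average event win.
def top_n_mean(oprs, n=3):
    """Mean OPR of the n strongest teams at an event."""
    return sum(sorted(oprs, reverse=True)[:n]) / n

events = {
    "Event A": [90.0, 85.0, 80.0, 60.0, 55.0, 40.0],
    "Event B": [60.0, 55.0, 52.0, 50.0, 45.0, 30.0],
}

global_top3 = sum(top_n_mean(o) for o in events.values()) / len(events)

for name, oprs in events.items():
    print(f"{name}: win value factor {top_n_mean(oprs) / global_top3:.2f}")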