#16
Re: Best in your State/Region
Hello again, all! I've missed you. Let's take a little detour.
I was wondering this same question myself right after the championships: "What mountain teams were the best this year, and in years past?" However, I got carried away and took my question further, instead asking, "Which team would have been the best if the entire Rocky Mountain Region had been in the district format during 2016?" So, I calculated by hand the number of district points each team would have received if the district system had been present in the region this year. (I'm a mountain man; I don't know how to make software do things for me.) I have organized the list by the tiebreaker rules and have come up with a very interesting result. You can find that document here. A few important things about the document:
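For anyone who'd rather not do the arithmetic by hand, the qualification-round portion of the district points math is mechanical enough to script. Below is a minimal Python sketch of that formula as given in the FRC Administrative Manual (an inverse-error-function curve with tuning constant α = 1.07); the event sizes and ranks in the example are made up for illustration, and alliance selection, playoff, and award points would still need to be added on top.

```python
import math
from statistics import NormalDist

ALPHA = 1.07  # tuning constant from the FRC district points formula

def erfinv(x):
    # inverse error function via the standard normal quantile:
    # erfinv(x) = Phi^{-1}((x + 1) / 2) / sqrt(2)
    return NormalDist().inv_cdf((x + 1) / 2) / math.sqrt(2)

def qual_points(rank, n_teams):
    """Qualification-round district points for a team finishing
    `rank` out of `n_teams` at one event."""
    raw = (erfinv((n_teams - 2 * rank + 2) / (ALPHA * n_teams))
           * (10 / erfinv(1 / ALPHA)) + 12)
    # round away floating-point noise before taking the ceiling
    return math.ceil(round(raw, 9))

# Hypothetical 40-team event: the top seed earns the maximum 22 points,
# and a mid-pack team lands near the middle of the curve.
print(qual_points(1, 40))   # 22
print(qual_points(20, 40))  # 13
```

The curve is deliberately flat in the middle and steep at the top, which is why seeding well at an event matters so much for district rankings.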
#17
Re: Best in your State/Region
Quote:
OPR: 1) 4451, 2) 1876, 3) 1102, 4) 3490, 5) 343

Results at various levels/overall:
1) 4451 (won Palmetto, Palmetto EI, Orlando Innovation in Control and QF, Carver Imagery Award and QF)
2) 3490 (won Rocket City with Robonauts and Bomb Squad--but hey, they did enable 100% capture along the way)
3) 343 (Palmetto finalists, Rocket City QFs)
4) 1876 (Palmetto QFs and 8 seed, Orlando semis and 7 seed)
5) 1758 (Palmetto #3 seed and QFs)

Truth be told, that list is pretty close for all the other ones too. 343 has a Championship subdivision finalist finish in 2015 that would push them up a bit more (fourth robot, never played, but it counts on the scoreboard), though they were also off the pace for a few years before that. 4451 has been on a hot streak where they've been head and shoulders above everyone else in the state--one of three teams to win Palmetto back to back, plus WFFA, EI, and RAS literally everywhere they went--and with them helping to start a team in the next county over, I'm keeping an eye on them. 3490 is always a threat in the state, especially at SCRIW, but only just broke through by winning the last-pick lottery (which I have absolutely zero room to hate on). 1876 never gets any press or buzz, but somehow they pull a rabbit out of their hat and get in contention even at overstuffed events like Palmetto and Orlando.
#18
Re: Best in your State/Region
Quote:
I have a very North-specific worldview, being from lower NorCal myself.
#19
Re: Best in your State/Region
You're correct. I recalled that 229 got picked, but I assumed it was much later in the draft. Updated as such.
#20
Re: Best in your State/Region
I'll post my analysis on MAR and New York sometime in the next day. In the meantime, I have one question about a response.
Quote:
Your analysis is pretty spot-on. T/Z-scores are a nifty stat for this. I look forward to seeing how it compares to my own analysis.
#21
Re: Best in your State/Region
Quote:
When I was working on it initially, I realized that consistency was overvalued by the standard error statistic*: a team that was consistently slightly better than the average team came out far ahead of everyone else, even teams with small inconsistencies that were generally better (a significantly higher average). This is because the t-distribution isn't precisely telling us how good a team is, but rather how unlikely their performance is if we assume they are an average team.

The way I solved this was by restricting analyses of individual fields to a select set of teams rather than all teams at the competition, which raised the comparison average and reduced the overvaluing of absurdly consistent teams. I kept the versions with all teams analyzed, but honestly, reality-checking the latter made me realize that restricting the number of teams for more specific fields could be helpful. For example, I did not include 1712's data in the high goal t-score calculations. This changed the averages and thus produced some of the strange ordering results you see in the final sort.

So, in the example of 708 and 1257: 708 has a higher average and a higher standard error than 1257. In the low-goal-specific analysis, eliminating teams that aren't competitive low goalers raises the comparison average, which makes standard error matter more; as a result, 1257 does better relative to 708 in the low-goal-specific ranking than in the boulder-volume one.

* at least, overvalued for my purposes in picklisting.

Last edited by GKrotkov : 23-08-2016 at 08:37. Reason: I didn't actually answer the question the first time.
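To make the consistency effect concrete, here's a small self-contained Python sketch (the match data is invented, not from anyone's scouting). It computes the one-sample t statistic described above, i.e. team average minus the comparison average, divided by the team's own standard error, and shows how restricting the comparison population flips the ordering:

```python
import math
from statistics import mean, stdev

def t_score(scores, population_mean):
    """One-sample t statistic: how far is this team's average above the
    comparison average, in units of the team's own standard error?"""
    se = stdev(scores) / math.sqrt(len(scores))
    return (mean(scores) - population_mean) / se

# Invented boulders-per-match lines for two hypothetical teams:
steady = [5, 5, 5, 5, 5, 6]   # barely above average, extremely consistent
strong = [9, 6, 8, 10, 7, 8]  # clearly better on average, but noisier

# Against the whole field (say the average team scores 4 boulders),
# the ultra-consistent team edges out the genuinely stronger one:
print(t_score(steady, 4.0))  # ~7.0
print(t_score(strong, 4.0))  # ~6.93

# Restrict the comparison to competitive scorers (average ~6) and the
# ordering flips -- the stronger team now comes out well ahead:
print(t_score(strong, 6.0))  # ~3.46
print(t_score(steady, 6.0))  # ~-5.0
```

The tiny standard error of the consistent team inflates its score against a weak baseline; raising the baseline penalizes a merely-average mean far more than a noisy-but-high one, which is exactly the fix described above.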
#22
Re: Best in your State/Region
Quote:
Also, is this data from just quals, just eliminations, or both?

Last edited by Karibou : 23-08-2016 at 09:55. Reason: Fixed question about a wider spread in low goal ability - I'm bad at statistics/phrasing
#23
Re: Best in your State/Region
Quote:
Do you have stats for climbs?

Thanks,
Z
#24
Re: Best in your State/Region
One more list I played with: average OPR rank over the past five years, with each team's worst year removed:

*Now, 5254 only has two years to calculate from, but whatever.
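For anyone who wants to reproduce this kind of list, here is a tiny Python sketch of the "average rank, dropping the worst year" metric (the team numbers and yearly ranks below are placeholders, not the real data):

```python
def avg_rank_drop_worst(yearly_ranks):
    """Average a team's OPR ranks across years, discarding the single
    worst (numerically largest) year -- unless only one year exists."""
    if len(yearly_ranks) >= 2:
        yearly_ranks = sorted(yearly_ranks)[:-1]
    return sum(yearly_ranks) / len(yearly_ranks)

# Placeholder data: {team: [OPR rank per year]}
ranks = {
    1111: [1, 3, 2, 10, 4],  # the outlier 10 gets dropped
    2222: [5, 4, 6],
    3333: [2, 7],            # only two years, like 5254's situation
}
for team, yrs in sorted(ranks.items(),
                        key=lambda kv: avg_rank_drop_worst(kv[1])):
    print(team, avg_rank_drop_worst(yrs))
```

Dropping the worst year is a crude but effective way to keep one bad build season from sinking an otherwise strong program in the rankings.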
#25
Re: Best in your State/Region
Quote:
#26
Re: Best in your State/Region
Quote:
The question about 25 makes perfect sense, and it's a really good one, too. It has a multifaceted answer.

For one, the t-distribution flattens out near the extremes, so you need a relatively larger increase in t-score to gain the same amount of area under the curve. That is, t-scores don't scale linearly: a team with a t-score of 4 isn't twice as good (or even twice as unlikely) as a team with a t-score of 2.

As for the spread of teams, eliminating teams with a <1 low goal average actually tightened the spread rather than widening it. I haven't tried to prove it, but I imagine this could help 25 by reducing the margins between everyone else's averages and the population average. Even with those mitigating factors, though, I think 25's margin over everyone else is still remarkable.

This is from qualifications only. Dawgma reduced our scouting to a watchlist after we got 9 matches for each team, but I've filled out some of the scouting from recordings since then.

I have the data that Dawgma & 708 collected from MAR Champs, but it'd be kind of pointless to use the t-distribution on scales, since you won't do more than one per match. It's probably just as good to look at the ratio of successful scales to attempted scales. That gives us:
1/2/3) 708 [7/7], 341 [5/5], and 869 [6/6] tie with a perfect record
4) 25 with 8/9
5) 365 with 6/7

Major caveat there, though: I didn't do the non-boulder scouting to fill out the scales, so we only have a limited number of matches to draw that data from. Also, since 25 was on our watchlist, we watched them more, so there was more chance for us to catch them in a bad match. If someone else has different data, I'd go with that.
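The "not twice as unlikely" point is easy to see numerically. Here's a quick sketch using a standard normal approximation to the t curve (close enough for illustration with ~9 matches of data per team):

```python
from statistics import NormalDist

def upper_tail(score):
    """Probability of a result at least this extreme, under a
    standard normal approximation to the t curve."""
    return 1 - NormalDist().cdf(score)

p2, p4 = upper_tail(2.0), upper_tail(4.0)
print(p2)       # ~0.023
print(p4)       # ~0.000032
print(p2 / p4)  # a score of 4 is hundreds of times less likely, not 2x
```

With an actual t distribution at low degrees of freedom, the tails are fatter and that ratio shrinks, which is exactly the "flattening at the extremes" described above.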
#27
Re: Best in your State/Region
Here's what 1257's data says on scale rate (9-12 matches observed for all teams):
Here's the top five for average endgame points (scales + challenges):
If anyone else has stats requests from MAR CMP, feel free to ask and I'll see what I can do.

Last edited by Brian Maher : 23-08-2016 at 13:42.
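For reference, the arithmetic behind an "average endgame points" stat is simple. A hedged sketch using Stronghold's endgame values (15 points for a scale, 5 for a challenge) with invented scouting counts:

```python
SCALE_PTS = 15      # 2016 Stronghold: successful scale
CHALLENGE_PTS = 5   # 2016 Stronghold: successful challenge

def avg_endgame_points(scales, challenges, matches_observed):
    """Average endgame points per match from scouted counts of
    successful scales and challenges."""
    return (scales * SCALE_PTS
            + challenges * CHALLENGE_PTS) / matches_observed

# Invented example: 8 scales and 1 challenge across 9 observed matches
print(avg_endgame_points(8, 1, 9))  # ~13.9
```

Note this averages over observed matches only, so teams with partial scouting coverage get noisier numbers.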
#28
Re: Best in your State/Region
Are you a meme, or is this a real post?
#29
Re: Best in your State/Region
Quote:
Quote is edited to add 2987.

Last edited by Whatever : 23-08-2016 at 14:54. Reason: Clarification of the quote since I edited it
#30
Re: Best in your State/Region
Quote: