Quote:
Originally Posted by Karibou
How did you determine the cutoffs for which teams to include for each analysis? I imagine it was pretty clear-cut for high goal scoring since that was an "either you can do it or you can't" ability for the most part, but where did you draw the line for low goal scorers? I know 25 was good, but is their t-score so dominant compared to the rest of the teams because they were so good, or because there was a wider spread in low goal scoring ability, lowering the average compared to how well 25 was performing? (does that question make sense? Statistics really isn't my strong suit)
Also, is this data from just quals, just eliminations, or both?
|
I threw out any team that had an average low goal score below 1. I don't have a deeply principled reason for that cutoff, but I think it's fair to assume that a competitive low goaler at MAR Champs will score at least one boulder per match, on average. I also would have had to throw out any team with a standard error of 0 (there weren't any), simply because the formula for t-scores divides by the standard error.
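In case it helps, here's a rough Python sketch of that filter and the t-score calculation. It assumes the standard form, the team's average minus the population average divided by the standard error of the team's average; my spreadsheet may be arranged a little differently, but the "divides by the standard error" part is the piece that matters here.

```python
from math import sqrt
from statistics import mean, stdev

def low_goal_t_score(team_scores, population_mean):
    """t-score for a team's low goal average, or None if the team is
    thrown out (average below 1, or a standard error of 0)."""
    avg = mean(team_scores)
    se = stdev(team_scores) / sqrt(len(team_scores))  # standard error of the mean
    if avg < 1 or se == 0:
        return None
    return (avg - population_mean) / se
```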
The question about 25 makes perfect sense, and it's a really good one. The answer is multifaceted. For one, the t-distribution flattens out in the tails, so you have to add more and more to a t-score to gain the same amount of area under the curve. In other words, t-scores don't scale linearly: a team with a t-score of 4 isn't twice as good (or even just twice as unlikely) as a team with a t-score of 2. As for the spread of teams, eliminating teams with a low goal average below 1 actually tightened the spread rather than widening it. I haven't tried to prove it, but I imagine that could help 25 by shrinking the gaps between everyone else's averages and the population average, which makes 25's gap stand out more. Even with those mitigating factors, though, I think 25's margin over everyone else is still remarkable.
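If you want to see how non-linear it gets, here's a quick scipy sketch. The 8 degrees of freedom is just an assumption based on 9 qual matches per team; the exact tail probabilities shift a bit with a different df, but the shape of the comparison doesn't.

```python
from scipy.stats import t

df = 8  # assuming 9 qual matches per team -> 8 degrees of freedom
for score in (2, 4):
    # survival function = area under the curve beyond this t-score
    print(f"t = {score}: tail probability = {t.sf(score, df):.4f}")
# t = 4 is far more than twice as unlikely as t = 2, even though
# the t-score itself only doubled.
```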
This is from qualifications only. Dawgma reduced our scouting to a watchlist after we got 9 matches for each team, but I've filled out some of the scouting via recordings since then.
Quote:
Originally Posted by ezygmont708
Do you have stats for climbs?
|
I have the data that Dawgma & 708 collected from MAR Champs, but it'd be kind of pointless to use the t-distribution on scales, since a team won't do more than one per match. It's probably just as good to look at the ratio of successful scales to attempted scales. That gives us:
1/2/3) 708 [7/7], 341 [5/5], and 869 [6/6] tie with a perfect record.
4) 25 [8/9]
5) 365 [6/7]
One major caveat, though: I didn't do the non-boulder scouting needed to fill out the scales, so we only have a limited number of matches to pull that data from. Also, since 25 was on our watchlist we watched them more, which gave us more chances to catch them in a bad match. If someone else has different data, I'd go with that.
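For what it's worth, the ranking above is nothing fancier than sorting on made/attempted. A quick sketch with the numbers from the list (each pair is made, attempted):

```python
# (made, attempted) scales per team, straight from the list above
scales = {"708": (7, 7), "341": (5, 5), "869": (6, 6),
          "25": (8, 9), "365": (6, 7)}

ranked = sorted(scales.items(),
                key=lambda kv: kv[1][0] / kv[1][1],
                reverse=True)
for team, (made, attempted) in ranked:
    print(f"{team}: {made}/{attempted} = {made / attempted:.2f}")
```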