Best in your State/Region

Gabe,

Do you have stats for climbs?

Thanks,

Z

One more list I played with: sorted by average OPR rank over the past 5 years, dropping each team’s worst year:

  1. 5254* [1]
  2. 1507 [4.75]
  3. 3015 [7.75]
  4. 340 [8]
  5. 20 [9]

Now 5254 only has two years to calculate from, but whatever.

Okay dude now you’re just purposefully creating ranking algorithms that make 5254 look good :stuck_out_tongue:

I threw out any team with an average low-goal score of <1. I don’t have any insightful reason for that, but I think it’s fair to assume that a competitive low goaler at MAR Champs will score one boulder per match, on average. There weren’t any of these, but if I found a team with a standard error of 0, I’d have to throw them out too. Not for any great reason, but because the formula for t-scores divides by the standard error.

The question about 25 makes perfect sense, and it’s a really good one, too. It has a multifaceted answer. For one, the t-distribution flattens out near the extremes, so you have to gain relatively more t-score to add the same amount of area under the curve. That is, t-scores don’t scale linearly: a team with a t-score of 4 isn’t twice as good (or even twice as unlikely) as a team with a t-score of 2. As for the spread of teams, eliminating teams with a <1 low goal average really tightened the spread, rather than widening it. I haven’t tried to prove it, but I imagine this could help 25 by reducing the margins between everyone else’s averages and the population average. I do think that even with those mitigating factors, 25’s margin over everyone else is still remarkable.
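A quick numeric sketch of that nonlinearity, assuming 8 degrees of freedom (e.g. from 9 observed matches; the actual df here is an assumption). The tail area is computed by integrating Student’s t pdf directly, so only the standard library is needed:

```python
import math

def t_pdf(x, df):
    """Student's t probability density function."""
    coef = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return coef * (1 + x * x / df) ** (-(df + 1) / 2)

def t_sf(t, df, upper=60.0, steps=200_000):
    """One-sided tail area P(T > t), by trapezoidal integration up to `upper`."""
    h = (upper - t) / steps
    total = 0.5 * (t_pdf(t, df) + t_pdf(upper, df))
    for i in range(1, steps):
        total += t_pdf(t + i * h, df)
    return total * h

df = 8
p2 = t_sf(2.0, df)  # roughly 0.04
p4 = t_sf(4.0, df)  # far smaller than half of p2
print(p2, p4, p2 / p4)
```

The ratio p2/p4 comes out well above 2 (closer to 20 for df=8), which is the point: doubling the t-score shrinks the tail area by far more than a factor of two.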

This is from qualifications only. Dawgma reduced our scouting to a watchlist after we got 9 matches for each team, but I’ve filled out some of the scouting via recordings since then.

I have the data that Dawgma & 708 collected from MAR Champs, but it’d be kind of pointless to use the t-distribution on scales, since you won’t do more than one per match. Probably just as good to look at the ratio of successful scales to attempted scales. That gives us:
1/2/3) 708 [7/7], 341 [5/5], and 869 [6/6] tie with a perfect record.
4) 25 with 8/9
5) 365 with 6/7

Major caveat there, though. I didn’t do the non-boulder scouting to fill out the scales, so we only have a limited number of matches to draw that data from. Also, since 25 was on our watchlist, we watched them more, which meant more chances for us to catch them in a bad match. If someone else has different data, I’d go with that.
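The ratio ranking above is trivial to reproduce. A minimal sketch, using the made/attempted counts quoted in this post (Python’s `sorted` is stable, so the three perfect-record teams keep their listed order):

```python
# Scouting tallies from the post: team -> (successful scales, attempted scales)
scales = {708: (7, 7), 341: (5, 5), 869: (6, 6), 25: (8, 9), 365: (6, 7)}

ranked = sorted(scales.items(),
                key=lambda kv: kv[1][0] / kv[1][1],
                reverse=True)

for team, (made, tried) in ranked:
    print(f"{team}: {made}/{tried} = {made / tried:.0%}")
```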

Here’s what 1257’s data says on scale rate (9-12 matches observed for all teams):

  1. 5401 (100.0%)
  2. 25 (78.5%)
  3. 4573 (70.0%)
  4. 708 (66.7%)
  5. 341 (64.3%)

Here’s the top five for average endgame points (scales + challenges):

  1. 5401 (15.0)
  2. 25 (12.9)
  3. 869 (11.4)
  4. 341 (11.1)
  5. 4573 (11.0)

If anyone else has stats requests from MAR CMP, feel free to ask and I’ll see what I can do.

Are you a meme or is this a real post.


2987 is flying under your radar.
Quote is edited to add 2987

I think the list would be more accurate if you took the average over the events each team attended, or doubled the total points for a team that only attended one event. As it stands now, a team that only attended one event is ranked below where it should be.
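That normalization is a one-liner. A sketch with hypothetical point totals (the team numbers and totals below are made up for illustration): averaging per event and doubling a single-event total are the same adjustment, just scaled differently.

```python
# Hypothetical data: team -> (total district points, events attended)
totals = {1234: (120, 2), 5678: (70, 1)}

def normalized(points, events):
    """Scale every team to a two-event basis (double a one-event total)."""
    return points / events * 2

for team, (pts, ev) in sorted(totals.items(),
                              key=lambda kv: normalized(*kv[1]),
                              reverse=True):
    print(team, normalized(pts, ev))
```

With these numbers the one-event team (70 points -> 140 normalized) now ranks above the two-event team (120), which is the reordering the post is arguing for.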

I can’t remember if this originally came from Antonio or Scott, but this district point spreadsheet is pretty good at reflecting the most successful teams competition-wise in MAR over the past 5 years. 341 might be a couple spots low, since they had a great robot in 2013 and didn’t attend MAR Champs that year (they were the #1 ranked team in MAR going into the event).

Oh shoot, thanks for correcting me. I just remembered they were alliance captains on Galileo!

It was more that I noticed that a bunch of good teams had one bad year pulling them down, like 20’s 2012, 340’s 2016, 639’s 2015, and I wondered what it would be like if you removed the one bad year for each team. I considered not removing a year for 5254 as well, but I thought that might be a weird exception to make. Although I understand why it makes it look like I’m padding 5254’s stats.

For the record, over the past 5 years, 1507 was the #1-ranked team by OPR twice, 5254 twice, and 20 once.
1507 has maintained an awesome amount of consistency, in that even in their worst years they find a way to be a contender, and in their good years they dominate.

No need to hype 5254. :rolleyes: Their performance speaks for itself.

If hype you must, ask, “Which NY team would have earned the most points at IRI, if IRI were a district competition?”

I hadn’t thought about that. I’ll see what it looks like.

Kevin knows I have a deep love for 5254; they’re like 2791’s best friend! Lots of information sharing between the two teams this year. I just think, let the performances speak for themselves, you know? Kevin, everyone who has their finger on the pulse of competitive FRC (the kind of people who read summer CD) knows that 5254 is the real deal. We get it, you iterate, etc.

Back on topic, I’ve been meaning to compile stats for CT, but I don’t know what kind of convenient OPR calculator everyone is using these days. Could anyone supply some links to resources I could use to gather this information? Or is it all in the cloud / online somewhere now?

Top teams that consistently choke in eliminations in Western NY:

  1. 5254

Top teams that have seeded first at Finger Lakes twice and failed to take home the banner:

  1. 5254

Sure there are a lot of lists I could make where 5254 comes in first. :rolleyes:

I can join the club too :cool:
Teams that worked for a grand total of 1 match before elims, but still got picked in the top 4, and then broke down again during elims:

  1. 2502

Alliance captains in CHS that won an event and did not go to champs.

  1. 2537

I ranked the top 10 in Indiana over the three district events as well as the championship event. I only counted teams that ranked in the top 15 at each one, and considered OPR relative to the event. Also, the #1 team by OPR was given a slight bump. In order…

1501- By far the most consistent, got OPR 1 in 3 events
1024- Consistently got 2nd in OPR
4103- Very consistent in top 5 OPR
4982- Had a bit of a bad event with the State Championship but performed well at Walker Warren getting OPR 1
234- Consistent top OPR
135- Under-the-radar team this year that did very well
1747- Suffered due to their State Championship performance
868- Great team this year, had a poor 1st event for their standards
71- Suffered due to a bad State Championship performance
461- Didn’t make the top 15 at one of their events, but they were top 10 at State and Walker Warren, and went to the finals at Walker Warren (kind of a writer’s pick)

I’ll echo his question, since I’m curious as well. Are you getting this through BlueAlliance, and if so, how can I get ahold of that data as well?

Here’s more data! (since you can never have too much data)

Instead of just looking at OPR rank, I graphed it according to standard deviations above the mean. 254 once again dominates, with several records: the only team with an average above 4, 1 of 4 teams to break 4 in a single year, 1 of 2 teams to break 4 in both categories (max/avg OPR), and the only team to break 5 (and even 6, in avg OPR).

Each category includes the 10 teams with the highest average (not the highest rank, as previously used). For the averages, I used only CA teams and the relevant stats (i.e. for max OPR, the average was the average of all max OPRs).

I think this is somewhat more representative than just rank (since it adjusts for how far above average you are), but it does make the graphs messier:

The top team per year in CA, in standard deviations above average for max OPR (first value: CA data set only; second value: all teams):
2012: 1717 (4.51) (4.77)
2013: 1538 (3.97) (4.71)
2014: 254 (4.01) (3.52)
2015: 254 (5.86) (6.57)
2016: 971 (4.06) (4.15)

Top teams in the world (data set now includes all teams):
2012: 2056 (5.20)
2013: 987 (4.93)
2014: 1114 (4.21)
2015: 254 (6.57)
2016: 148 (4.35)

Teams for 2008 to 2011 are 1114, 71, 67, and 111, which gives Ontario a third of the top spots in the past 9 years.
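The "standard deviations above the mean" metric used above is just a z-score over each season’s OPR data set. A minimal sketch with hypothetical OPR values (the real computation would use every team’s OPR for the season, per data set):

```python
import statistics

# Hypothetical single-season OPRs: team -> OPR
oprs = {254: 120.0, 1678: 95.0, 971: 90.0, 9999: 40.0, 8888: 35.0}

mean = statistics.mean(oprs.values())
stdev = statistics.pstdev(oprs.values())  # population st. dev. of the data set

# z-score: how many standard deviations each team sits above the mean
z_scores = {team: (opr - mean) / stdev for team, opr in oprs.items()}

for team, z in sorted(z_scores.items(), key=lambda kv: kv[1], reverse=True):
    print(team, round(z, 2))
```

Whether to use the population or sample standard deviation (`pstdev` vs `stdev`) is a judgment call; the ranking it produces is the same either way, only the scale of the numbers changes.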