World Qualification Ranks

A thread/Google doc that is updated erratically. It's pretty simple: it just merges all of the qualification data into one place.

This is only done for fun; it's not meant to be an actual best-to-worst ranking of robots, just a fun thing to look at.

Current Top 25

Rank	Team	Qual Ave
1	1114	141.5
2	148	116.1
3	2481	113.28
4	987	106.3
5	67	96.33
6	3130	95.66
7	2451	94.57
8	525	92.22
9	1519	90.41
10	744	88.9
11	4488	88.58
12	4048	84.66
13	3620	84
14	1706	83.85
15	118	83.8
16	176	82.55
17	1025	82.37
18	4330	80.5
19	1403	80.33
20	2386	79.33
21	1209	79.28
22	2607	78.91
23	5172	77.66
24	3824	77.33
25	1208	77.28

WEEK 1

Rank	Team	Qual Ave
1	148	116.1
2	987	106.3
3	3130	95.66
4	525	92.22
5	1519	90.41
6	744	88.9
7	4488	88.58
8	118	83.8
9	1403	80.33
10	2607	78.91
11	5172	77.66
12	3824	77.33
13	1024	75.5
14	4623	75.22
15	1983	75.08
16	1640	73.33
17	1477	73.3
18	2342	72
19	4859	70.55
20	179	70.4
21	1592	69.8
22	4539	69.55
23	348	69.2
24	4624	68.55
25	3242	68.3

WEEK 2 (In Progress)

Rank	Team	Qual Ave
1	1114	141.5
2	2481	113.28
3	67	96.33
4	2451	94.57
5	4048	84.66
6	3620	84
7	1706	83.85
8	176	82.55
9	1025	82.37
10	4330	80.5
11	2386	79.33
12	1209	79.28
13	1208	77.28
14	2609	76.28
15	857	71.42
16	931	71.33
17	3612	68.85
18	4256	68.42
19	5053	68
20	1732	67.71
21	4522	67.71
22	610	67.62
23	4779	67.44
24	16	67.42
25	5478	67.37

A 47.8-point difference from #1 to #25; that is amazing.

I ran some general stats:

Mean: 43.238
Median: 41.2
Standard Deviation: 13.624
Range: 119.28
Minimum: 22.22
Maximum: 141.5
Count: 955
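
For anyone curious how those numbers fall out, here's a minimal sketch using Python's `statistics` module. The five qual averages below are made up for illustration; the real dataset had 955 entries.

```python
# Sketch: the summary statistics above, computed from a list of every
# team's qual average. The sample values here are hypothetical.
import statistics

qual_averages = [22.22, 35.0, 41.2, 58.9, 141.5]  # made-up sample

stats = {
    "mean": statistics.mean(qual_averages),
    "median": statistics.median(qual_averages),
    "stdev": statistics.stdev(qual_averages),  # sample standard deviation
    "range": max(qual_averages) - min(qual_averages),
    "min": min(qual_averages),
    "max": max(qual_averages),
    "count": len(qual_averages),
}
```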

With Week 3 events started/starting, here are the updated ranks.

Worldwide

Rank	Team	Qual Ave
1	1114	142.9
2	2481	117.8
3	148	116.1
4	1678	109.6
5	987	106.3
6	2451	104.8
7	254	104.4
8	67	97.41
9	3130	95.66
10	525	92.22
11	1519	90.41
12	744	88.9
13	4488	88.58
14	1025	88.08
15	1208	86.4
16	118	83.8
17	2386	81.8
18	1403	80.33
19	4048	79.83
20	2607	78.91
21	4330	78.4
22	2137	77.83
23	5172	77.66
24	1706	77.5
25	4522	77.5

Week 2 Only

Rank	Team	Qual Ave
1	1114	142.9
2	2481	117.8
3	1678	109.6
4	2451	104.8
5	254	104.4
6	67	97.41
7	1025	88.08
8	1208	86.4
9	2386	81.8
10	4048	79.83
11	4330	78.4
12	2137	77.83
13	1706	77.5
14	4522	77.5
15	3616	77
16	314	76.83
17	3612	75.4
18	176	74.83
19	217	74.33
20	3620	74.16
21	1986	74.1
22	5053	73.83
23	701	73.8
24	379	73.66
25	4946	73.6

That is an incredible list - and I love seeing 3130 in the top 10!

I was able to watch them at Northern Lights and they are incredible! The Errors have had great robots for the last three years that 4607 has been around and we always look to them for great ideas! We actually adopted some of their ideas from the Week Zero event (hosted by 2472 and 2052)…

My goodness I love this time of year!

Your list doesn’t filter out teams that aren’t qualified for worlds (I think). I don’t believe 1706 is qualified for worlds (hopefully not yet at least).

It’s not actually supposed to filter out anyone.

Any chance this would be updated with week 3 events? Pretty please with sugar on top? :slight_smile:

I would love to see this updated for week three as our QA this week was 98.58 :smiley:

Our qual average was 126.3 or something like that, which would have placed us at number 2 if we had competed in Week 2. Eager to see who's moved around.

Updated as all the events just finished tonight :slight_smile:

Both the median and the average jumped by 3 points, probably because some teams are starting to compete in their second event and people are watching matches and learning how to play the game.

Of the top 100 co-op-points-per-match scores, here is the breakdown by week:

Week 1: **29%**
Week 2: **37%**
Week 3: **34%**

Co-op scores in the top 100 are broken down here:

Max: **440**  (Co-op stack in 11/12 matches) (Week 2)
Average: **321.2** (Co-op stack in 8.03/11.67 matches) 
Min: **280**   (Co-op stack in 7/12 matches) (Week 3)

And your most co-opiest teams are:

4539	94.4%
5782	91.7%
1649	90.0%
348	90.0%
2386	85.0%
1519	83.3%
78	83.3%
624	80.0%
2481	80.0%
744	80.0%
456	80.0%
4471	80.0%
348	79.2%
176	79.2%
4906	79.2%
178	79.2%
3130	77.8%
4859	77.8%
4564	75.0%
4048	75.0%
1024	75.0%
217	75.0%
4381	75.0%
133	75.0%
2079	75.0%
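
The percentages above can be derived from per-match records, assuming each record notes whether the team's alliance made a co-op stack. A minimal sketch (the match records below are invented for illustration, not real data):

```python
# Sketch: co-op percentage per team = matches with a co-op stack / matches
# played. The (team, coop_stack_made) records here are hypothetical.
from collections import defaultdict

match_records = [
    (4539, True), (4539, True), (4539, False),
    (2386, True), (2386, False),
]

played = defaultdict(int)
cooped = defaultdict(int)
for team, made_stack in match_records:
    played[team] += 1
    if made_stack:
        cooped[team] += 1

coop_pct = {t: 100.0 * cooped[t] / played[t] for t in played}
```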

Litter, however, has been on the up and up, and teams are using it more now than before.

Of the top 100 litter-score-per-match values, here is the breakdown by week:

Week 1: **14%**
Week 2: **39%**
Week 3: **47%**

Seems like more and more litter is getting put in cans.

Finally the rankings:

Worldwide:

Rank	Team	Qual Ave
1	1114	142.9
2	1519	136.08
3	1730	126.3
4	2056	124.5
5	624	124
6	1983	121
7	2481	117.8
8	148	116.1
9	1619	115.3
10	1023	115.25
11	1678	109.6
12	987	106.3
13	2451	104.8
14	254	104.4
15	1523	102.7
16	3663	102.41
17	2974	101.3
18	234	100.33
19	225	98.58
20	2122	98.5
21	2996	97.9
22	67	97.41
23	3230	97.4
24	744	97.1
25	1501	96.91

Week 3:

Rank	Team	Qual Ave
1	1519	136.08
2	1730	126.3
3	2056	124.5
4	624	124
5	1983	121
6	1619	115.3
7	1023	115.25
8	1523	102.7
9	3663	102.41
10	2974	101.3
11	234	100.33
12	225	98.58
13	2122	98.5
14	2996	97.9
15	3230	97.4
16	744	97.1
17	1501	96.91
18	1806	96.8
19	4451	96.7
20	135	95.75
21	1720	95.41
22	2852	95.3
23	662	95.1
24	192	94.6
25	246	94.33

Woo! 4 highest seeds from Kokomo are all on the week 3 list, with the top 2 being in the overall list!:smiley:

Those co-op points really help! We had a weird weekend: seeded first and won Alamo with zero co-op points all weekend… arghhh.

I don’t mean to take away from the teams on these lists, but qualification scores are highly dependent on the depth of the particular event, and I think that it misrepresents the abilities of teams. Personally, I think it’d be more valuable to compare teams on a metric that better identifies their individual performance, like OPR.

That being said, the well deserving are rising to the tops of these lists anyway.

Agreed that the depth of a particular event will impact these (although so much less than in 2014 that it really isn't funny).

Unfortunately, to my knowledge, there's no way to determine how much was scored in each category in each match this year… the Twitter feed used to supply this data, enabling "category OPR." Without that, this is the best that can be done without a ton of effort.

(Edit: just realized I thought this was in Ether’s thread with the category averages… guess the second bit is pretty much irrelevant. Ether’s thread with top 25 in each average of category is quite interesting; look out for Ed Law’s week 3 update for his OPR/CCWM spreadsheet)

Correct.

enabling ‘category OPR.’

"Category" (or, as some call it, "Component") OPR is still possible by using the category (component) scores in the Team Rankings table. The problem is that DQs muddy the waters, because the FRC API that is supposed to provide DQ data is broken.
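
For reference, here's a minimal sketch of how component OPR can be computed from alliance-level component scores. Everything below (team names, scores, and the helper functions) is made up for illustration: each alliance's component score is modeled as the sum of its teams' unknown contributions, and the overdetermined system is solved by least squares via the normal equations.

```python
# Sketch of "component OPR": model each alliance's component score (e.g.
# litter points) as the sum of its teams' unknown contributions, then solve
# the least-squares system A x = b via the normal equations (A^T A) x = A^T b.
# Pure Python, no numpy; all match data is invented.

def gauss_solve(matrix, vector):
    """Solve a square linear system by Gaussian elimination with pivoting."""
    n = len(matrix)
    aug = [row[:] + [vector[i]] for i, row in enumerate(matrix)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][c] * x[c] for c in range(r + 1, n))) / aug[r][r]
    return x

def component_opr(matches, teams):
    """matches: list of (alliance_team_list, component_score) tuples."""
    idx = {t: i for i, t in enumerate(teams)}
    rows, b = [], []
    for alliance, score in matches:
        row = [0.0] * len(teams)
        for t in alliance:
            row[idx[t]] = 1.0
        rows.append(row)
        b.append(float(score))
    m, n = len(rows), len(teams)
    # Normal equations: (A^T A) x = A^T b
    ata = [[sum(rows[k][i] * rows[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    atb = [sum(rows[k][i] * b[k] for k in range(m)) for i in range(n)]
    return dict(zip(teams, gauss_solve(ata, atb)))

# Tiny worked example: three two-team alliances with exactly consistent
# scores, so least squares recovers the contributions exactly.
oprs = component_opr(
    [(["A", "B"], 30), (["B", "C"], 20), (["A", "C"], 26)],
    ["A", "B", "C"],
)
# oprs -> {"A": 18.0, "B": 12.0, "C": 8.0}
```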

Without that, this is the best that can be done without a ton of effort.

That’s where Ed Law comes in :slight_smile: He devotes a lot of effort to make the most out of the data that is available.

Exactly. OPR is a better guess at what teams can individually do than qual average scores. And obviously, watching their matches is an even better guess as to how they will perform in their next match.

What makes the list interesting is how weird seeding is this year. It's far more indicative of a team's strength than in previous years with win/loss records, and the list is only a data source for looking at how seeding changes.

The rankings themselves are no more useful than people keeping track of world high scores or many of the other things we look at. Lists made of averages, OPR, or CCWM are ultimately for fun. You can't watch every single match to know who is better than whom, but you can make educated guesses.

This data, however, can be used to quantify game trends to some extent, like the massive increase in litter in cans from Week 1 to 2, and from 2 to 3 to a lesser extent. More interesting things are in the data; you just have to look.

Is it possible to do a similar ranking for finals averages?

It says my team was at Orlando? We were at Week 1 Dallas; the rest of the stats are correct, however.

YES, they certainly are… and then capping them with littered RCs over and over.

How can you possibly use qual points average to compare a middling robot at an 8-qual-match regional (see Week 4 Virginia) against one at a 13-qual-match regional (see Week 4 Waterloo), and now add 1114, 2056, and 5406 to that mix? You cannot. More chances at success for 1114 and company, more chances at failure for many, many others. Or the opposite: fewer chances to stub your toe (or even have others do it for you in blind-draw qualifying; I'm talking about non-movers, not the attempters).

The whole thing is percentage-based down to the very last few matches. How is it that the disparity in the number of qual matches was left in?

Per regional, I understand: a level playing field for everyone competing there is OK (each team plays the same number of matches). Comparing qual point average at Virginia vs. Waterloo is not OK, even without the super magic dream-team alliance partners. (Outlier(s)… LOL, umm, different planet or solar system maybe.)

Does the OPR calculation somehow correct for that major disparity? If so, how? Inquiring minds want to know.