In 2020 I ran some numbers for FIRST Canada on possible ways to scale teams' points if they missed an event due to the pandemic. (The season ended up being cancelled after I did this exercise, but whatever.) This data looked back at past performances from teams who earned a given number of points at their first event, and then looked at the number of points earned at their second event. Essentially, it was trying to find a projection of improvement from event 1 to event 2, based on event 1 performance. To avoid small sample size issues, I looked at buckets of data with a width of 8 district points. Here's what I found:
| Bucket Min | Bucket Max | Bucket Mid | Bucket Count | First Event Average | Second Event Average | Bucket Change Factor |
|---|---|---|---|---|---|---|
| 4 | 12 | 8 | 122 | 8.52 | 18.10 | 2.13 |
| 5 | 13 | 9 | 125 | 9.42 | 19.18 | 2.04 |
| 6 | 14 | 10 | 128 | 10.09 | 19.05 | 1.89 |
| 7 | 15 | 11 | 122 | 10.88 | 19.28 | 1.77 |
| 8 | 16 | 12 | 126 | 11.61 | 20.02 | 1.72 |
| 9 | 17 | 13 | 127 | 12.57 | 20.52 | 1.63 |
| 10 | 18 | 14 | 119 | 13.72 | 21.78 | 1.59 |
| 11 | 19 | 15 | 113 | 14.64 | 22.55 | 1.54 |
| 12 | 20 | 16 | 107 | 15.60 | 23.24 | 1.49 |
| 13 | 21 | 17 | 93 | 16.53 | 24.75 | 1.50 |
| 14 | 22 | 18 | 85 | 17.49 | 23.96 | 1.37 |
| 15 | 23 | 19 | 86 | 18.71 | 24.64 | 1.32 |
| 16 | 24 | 20 | 91 | 19.79 | 25.33 | 1.28 |
| 17 | 25 | 21 | 92 | 21.02 | 26.27 | 1.25 |
| 18 | 26 | 22 | 87 | 22.18 | 26.76 | 1.21 |
| 19 | 27 | 23 | 85 | 23.34 | 26.55 | 1.14 |
| 20 | 28 | 24 | 85 | 24.29 | 26.53 | 1.09 |
| 21 | 29 | 25 | 88 | 25.38 | 27.40 | 1.08 |
| 22 | 30 | 26 | 95 | 26.09 | 27.33 | 1.05 |
| 23 | 31 | 27 | 95 | 26.66 | 28.34 | 1.06 |
| 24 | 32 | 28 | 94 | 27.76 | 30.35 | 1.09 |
| 25 | 33 | 29 | 85 | 28.58 | 30.88 | 1.08 |
| 26 | 34 | 30 | 79 | 29.65 | 31.85 | 1.07 |
| 27 | 35 | 31 | 75 | 30.44 | 32.17 | 1.06 |
| 28 | 36 | 32 | 68 | 31.19 | 33.99 | 1.09 |
| 29 | 37 | 33 | 64 | 32.09 | 35.02 | 1.09 |
| 30 | 38 | 34 | 54 | 33.00 | 35.70 | 1.08 |
| 31 | 39 | 35 | 49 | 34.41 | 36.94 | 1.07 |
| 32 | 40 | 36 | 51 | 35.69 | 37.14 | 1.04 |
| 33 | 41 | 37 | 46 | 37.26 | 36.72 | 0.99 |
| 34 | 42 | 38 | 44 | 37.86 | 36.30 | 0.96 |
| 35 | 43 | 39 | 40 | 38.93 | 36.30 | 0.93 |
| 36 | 44 | 40 | 42 | 40.24 | 37.21 | 0.92 |
| 37 | 45 | 41 | 42 | 40.88 | 34.86 | 0.85 |
| 38 | 46 | 42 | 38 | 41.53 | 35.95 | 0.87 |
| 39 | 47 | 43 | 45 | 42.78 | 36.78 | 0.86 |
| 40 | 48 | 44 | 44 | 43.89 | 37.27 | 0.85 |
| 41 | 49 | 45 | 38 | 44.97 | 38.11 | 0.85 |
| 42 | 50 | 46 | 34 | 45.97 | 39.15 | 0.85 |
| 43 | 51 | 47 | 35 | 46.63 | 41.91 | 0.90 |
| 44 | 52 | 48 | 34 | 47.26 | 40.74 | 0.86 |
| 45 | 53 | 49 | 28 | 48.29 | 42.39 | 0.88 |
| 46 | 54 | 50 | 25 | 48.68 | 44.08 | 0.91 |
| 47 | 55 | 51 | 26 | 49.27 | 43.62 | 0.89 |
| 48 | 56 | 52 | 17 | 50.47 | 44.06 | 0.87 |
| 49 | 57 | 53 | 15 | 52.60 | 48.47 | 0.92 |
| 50 | 58 | 54 | 16 | 54.06 | 47.38 | 0.88 |
| 51 | 59 | 55 | 18 | 55.61 | 48.50 | 0.87 |
| 52 | 60 | 56 | 18 | 57.11 | 48.22 | 0.84 |
| 53 | 61 | 57 | 18 | 58.11 | 50.78 | 0.87 |
| 54 | 62 | 58 | 17 | 58.41 | 50.65 | 0.87 |
| 55 | 63 | 59 | 21 | 59.29 | 54.00 | 0.91 |
| 56 | 64 | 60 | 19 | 59.74 | 54.32 | 0.91 |
| 57 | 65 | 61 | 20 | 60.00 | 55.15 | 0.92 |
| 58 | 66 | 62 | 19 | 61.11 | 55.21 | 0.90 |
| 59 | 67 | 63 | 18 | 62.28 | 56.94 | 0.91 |
| 60 | 68 | 64 | 15 | 63.53 | 56.73 | 0.89 |
| 61 | 69 | 65 | 12 | 64.42 | 57.25 | 0.89 |
| 62 | 70 | 66 | 10 | 65.10 | 58.10 | 0.89 |
| 63 | 71 | 67 | 11 | 65.64 | 59.00 | 0.90 |
| 64 | 72 | 68 | 10 | 68.60 | 53.80 | 0.78 |
| 65 | 73 | 69 | 21 | 70.90 | 59.00 | 0.83 |
| 66 | 74 | 70 | 20 | 71.20 | 58.40 | 0.82 |
| 67 | 75 | 71 | 19 | 71.95 | 58.89 | 0.82 |
| 68 | 76 | 72 | 17 | 72.53 | 60.41 | 0.83 |
| 69 | 77 | 73 | 17 | 73.06 | 62.65 | 0.86 |
| 70 | 78 | 74 | 18 | 73.33 | 63.33 | 0.86 |
This could be a potential way to scale points for teams who only play one event. I'd love it if someone wants to pull data from a broader set and see what these bucket change factors look like. I'm pretty sure my data set was only Ontario from 2017-2019.
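If anyone wants to experiment with this kind of scaling, here's a minimal sketch of how the bucket change factors could be applied to a single-event score. The function name is hypothetical, and only a handful of (midpoint, factor) pairs from the table are included as sample data; a real implementation would carry the full table or rebuild it from raw data, and probably interpolate between midpoints rather than snapping to the nearest one.

```python
from bisect import bisect_right

# A few (bucket_mid, change_factor) pairs abbreviated from the table above.
BUCKETS = [
    (8, 2.13), (16, 1.49), (24, 1.09), (32, 1.09),
    (40, 0.92), (48, 0.86), (56, 0.84), (64, 0.89), (74, 0.86),
]

def projected_second_event(first_event_points: float) -> float:
    """Scale a single-event score by the change factor of the
    nearest bucket midpoint (no interpolation, for simplicity)."""
    mids = [mid for mid, _ in BUCKETS]
    # Locate the midpoints bracketing the score and pick the closer one.
    i = bisect_right(mids, first_event_points)
    candidates = mids[max(0, i - 1):i + 1]
    nearest = min(candidates, key=lambda m: abs(m - first_event_points))
    factor = dict(BUCKETS)[nearest]
    return first_event_points * factor

print(round(projected_second_event(8), 2))   # 8 points * factor 2.13
print(round(projected_second_event(40), 2))  # 40 points * factor 0.92
```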
I would like to know how many teams based in continental North America only compete in one regional and attend champs. I think requiring teams to participate in 2 events to receive a non-AQ merit qualification to champs sounds reasonable, but I am often wrong.
Winning an event is more than 4x as valuable as ranking first (and about 2.5x as valuable as the combo of ranking first + being an alliance captain), and regardless of what others think of it, district championship winners auto-qualify to CMP. The rest of the elements of your post have been breathlessly litigated for the decade-plus of the district model and I don't really care to engage with them; I just found this point in particular funny.
Why not just tell the teams in CA directly that they should just quit? Seems faster than such a roundabout way of distributing wildcards.
On the main topic, I love the proposal, and I think some of the concerns could be addressed by actually figuring out point values. I also think that it would be difficult to do worse than the current system.
Making Chairman's and EI worth points is good IMO because (I've heard) there's already an implicit robot performance factor in Chairman's. I also think that one could make it worth a ton of points so that as long as you just make it into elims, you'll qual for champs.
Finally, making all awards worth some points and taking the best event is probably the best way to decide quals, and it won't benefit the teams that already win anyway. It benefits mid-tier teams with the budget to go to champs but only decent performance.
I’m not sure what you’re saying here. Six Indiana teams would advance before the next Texas team, and seven Indiana teams would qualify before the next Michigan team.
While the Chairman’s Award is about “more than robots”, teams often leverage their robots to enhance their impact on the broader community. For this reason, it is expected teams in contention for the Chairman’s Award will have built a robot appropriate to the game’s challenges for the season. This does not require the team to have ranked at a certain level during the event but does require teams to put in more than just the minimal effort necessary to field a drivable robot.
I read this as give the culture awards and district champs winners a slot and then points slots based on all of FIRST points.
So Indiana would have 6 guaranteed. Then the rest are hoping that they have more points than all the other areas of FIRST to get a points slot. If so, regions that have more higher performing teams (ie CA, MI, TX) would potentially take up those points slots instead. Was this an incorrect interpretation?
Not at all. I was illustrating that Indiana is stronger than you suggested.
Another way to say it is: the 10th best team in Indiana, who did not qualify for CMP*, outperformed 15 teams from Michigan who did attend CMP.
Math is off somewhere. Playoff points for district events are 10 per level you advance to (technically 5 per win, but then you don't get the 5 if you go 1-2, so I don't know why they say it that way?), and the max a team can earn at a district event is 78 (1 seed, win, Chairman's).
Might be thinking of DCMP points, which are all 3x, and get you to the 90 points for winning.
This language basically only exists for sorting out backup teams. Say team A breaks halfway through quarters (after winning one match), team B is called in, and the alliance places finalist. Then team A gets their 5 points, B gets 15, and the other two on the alliance get 20.
Barring any edge cases like that, it is essentially 10 per Bo3 win.
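The backup-team accounting above can be sketched as a toy model: 5 points per Bo3 playoff match win, credited only to the teams that were actually on the field for that win. The function and team names here are hypothetical, just to reproduce the example numbers.

```python
# Toy model of district playoff points: 5 points per Bo3 playoff match win,
# credited only to teams on the field for that win.
def playoff_points(wins_played: dict) -> dict:
    """wins_played maps team -> playoff match wins it was on the field for."""
    return {team: 5 * wins for team, wins in wins_played.items()}

# The example from above: team A wins one quarterfinal match then breaks,
# backup team B comes in, and the alliance goes on to place finalist
# (2 QF wins + 2 SF wins = 4 total match wins for the alliance).
points = playoff_points({
    "A": 1,  # on the field for 1 QF win before breaking
    "B": 3,  # on the field for 1 QF win + 2 SF wins
    "C": 4,  # full-time alliance members, all 4 wins
    "D": 4,
})
print(points)  # A: 5, B: 15, C and D: 20 each
```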
This is great, Rachel! I’d absolutely love to see this implemented. It’s a shame a bunch of people didn’t read your post, because I think there’s some really clever and well thought out solutions in there (as well as from some of the other posters in this thread).
This equation is only used to distribute ranking points. The reason it exists is that the game whose name cannot be mentioned didn't have wins and losses to award ranking points. In that case, ranking was determined by scores, and this equation gives a ranking point distribution that more or less mirrors the point distribution of the W-L system.
I believe it was continued since it also equalizes the distribution across different event sizes.
Not saying it isn't without problems, since in the W-L era it was possible for two (or more) teams to earn 24 ranking points, while with this only 1 team can earn 22. On the flip side, it was possible for a team to have 0 wins and thus end up with 0 ranking points, while this equation gives those teams a minimum of 4 points (and with a small enough event, 5 points).
However, using that equation does mean that teams that don't rank #1 have a better shot at moving on to DCMP due to the compression of points that it creates.
It also allows the bonus RPs to be factored in without changing the relative weights of qualification performance, captain/picking performance, and finals performance.
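For reference, the equation being discussed is an inverse-error-function curve over rank. Here's a sketch of it as I understand the published district points formula (the 1.07 spread parameter and the 22-point top value are from that formula; the `erfinv` helper is a stdlib-only bisection stand-in for a library call):

```python
from math import ceil, erf

def erfinv(y: float, tol: float = 1e-12) -> float:
    """Inverse error function via bisection on math.erf (stdlib only)."""
    lo, hi = -6.0, 6.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

ALPHA = 1.07  # spread parameter in the published formula

def qual_points(rank: int, num_teams: int) -> int:
    """District qualification ranking points for a team finishing at
    `rank` out of `num_teams`, per the formula as I understand it."""
    # Dividing by num_teams first makes rank 1 yield exactly 1/ALPHA,
    # so the top seed lands on exactly 22 despite floating-point noise.
    x = (num_teams - 2 * rank + 2) / num_teams / ALPHA
    return ceil(erfinv(x) * 10 / erfinv(1 / ALPHA) + 12)

print(qual_points(1, 40))   # top seed always gets 22
print(qual_points(40, 40))  # last place bottoms out around 4
```

The compression mentioned above falls out of the erfinv shape: points change quickly near the top and bottom ranks and slowly through the middle of the pack.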
I don’t like that it is over complicated. Can you explain it to a 5 yr old? Can you imagine Major League Baseball using something like that to pick playoff teams?