While I have long been grateful for the comprehensive data provided by FIRST and made accessible by The Blue Alliance, there is one area where FRC data is lacking: declines during alliance selection. Information about declines is buried in countless threads across CD, so in my quarantine boredom I have compiled the in-season declines listed in the 2010-2020 alliance selection results threads into one spreadsheet: https://docs.google.com/spreadsheets/d/1K0L5OEUhxrGXfIhEz3Dbvei59XT36nPf7ZEdmWUGGx0/edit?usp=sharing
Here is a brief overview of the spreadsheet:
Disclaimer
The further back in time you go, the spottier and more questionable the data gets. You’ve been warned. I make no promises that any of this is accurate, but I will do my best to sanity check and correct/add data.
Schema
My primary goal in compiling this data was to provide complete information about declines without duplicating TBA and official data. This is meant to be a reference and a starting point for further analysis. The spreadsheet has the following columns:
Year: year of the event
Event key: the TBA event key associated with the event (the thing at the end of the URL for an event on TBA, e.g. “2019nyut” for the 2019 Central New York Regional)
Declining team: the team number of the team who declined to join a captain’s alliance
Picking team: the team number of the alliance captain whose invitation was declined
Decline number: for the first time a particular captain is declined, this column is 1; if another team declines them, that row will have a 2, etc
Round: Either “First”, “Second”, or “Third” to indicate which round of the alliance selection the decline happened during. Almost all rows have “First”, because declines from the second/third round are incredibly rare but do occasionally happen.
Date added: the date (in MM/DD/YYYY format) the decline was added to the spreadsheet
Source: A link to either a CD post, YouTube video, etc to verify the decline
As an example of how to read the rows: at the 2019 Darwin Division (2019dar), team 3707 declined captain 1676 in the first round, and then 225 also declined 1676 in the first round.
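If you want to poke at the data programmatically, here’s a minimal pandas sketch assuming you’ve downloaded the sheet as a CSV (the filename declines.csv and the exact header strings are my assumptions, so adjust them to match your export):

```python
import pandas as pd

# Assumes the sheet was exported via File > Download > CSV as "declines.csv"
# with the column names described above.
df = pd.read_csv("declines.csv")

# Each row is one decline: `Declining team` turned down an invitation from
# the captain in `Picking team` at the event in `Event key`.
print(df[df["Event key"] == "2019dar"][
    ["Declining team", "Picking team", "Decline number", "Round"]])
```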
Corrections
I would be immensely grateful if you could take a moment and review some of the events you’ve attended and let me know if there are inaccuracies.
I suspect that there are issues with this data, especially further back in time. The accuracy of the data is only as good as the posts it was compiled from (minus any mistakes I may have made). PLEASE suggest corrections or additions on this thread. Changes are more likely to be included if you can provide a source or reference to verify them. Please let me know if you were in attendance or on one of the involved teams.
Insights
Here are a few fun lists based on the data:
Teams who have declined >= 4 times

| Team # | # declined | # rejected | # events rejected |
|---|---|---|---|
| 27 | 7 | 0 | 0 |
| 225 | 6 | 0 | 0 |
| 3683 | 5 | 0 | 0 |
| 234 | 5 | 0 | 0 |
| 1732 | 4 | 0 | 0 |
| 51 | 4 | 0 | 0 |
| 1678 | 4 | 4 | 1 |
| 217 | 4 | 0 | 0 |
| 1625 | 4 | 0 | 0 |
| 3314 | 4 | 2 | 2 |
| 365 | 4 | 1 | 1 |
| 2826 | 4 | 0 | 0 |
Teams who have been rejected >= 4 times

| Team # | # declined | # rejected | # events rejected |
|---|---|---|---|
| 3072 | 0 | 6 | 3 |
| 2137 | 2 | 5 | 4 |
| 303 | 1 | 5 | 3 |
| 1123 | 0 | 5 | 1 |
| 1678 | 4 | 4 | 1 |
| 1676 | 2 | 4 | 2 |
| 1481 | 1 | 4 | 2 |
| 1089 | 1 | 4 | 3 |
| 5847 | 0 | 4 | 1 |
| 5224 | 0 | 4 | 2 |
| 5654 | 0 | 4 | 1 |
| 2509 | 0 | 4 | 1 |
| 3512 | 0 | 4 | 3 |
| 3302 | 0 | 4 | 1 |
| 2403 | 0 | 4 | 1 |
| 2180 | 0 | 4 | 2 |
| 3056 | 0 | 4 | 1 |
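For what it’s worth, these counts fall out of a couple of groupbys on the raw rows. A rough sketch of how I’d reproduce them (same declines.csv assumption as above, untested against the live sheet):

```python
import pandas as pd

df = pd.read_csv("declines.csv")

# Times each team said no to a captain.
declined = df.groupby("Declining team").size().rename("# declined")

# Times each captain was turned down, and at how many distinct events.
rejected = df.groupby("Picking team").agg(**{
    "# rejected": ("Event key", "size"),
    "# events rejected": ("Event key", "nunique"),
})

stats = pd.concat([declined, rejected], axis=1).fillna(0).astype(int)
print(stats[stats["# declined"] >= 4].sort_values("# declined", ascending=False))
```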
Teams who have been rejected at >= 3 separate events
81.05% of recorded declines were the first time that team had been declined at that event
13.31% were the second time
3.63% were the third time
1.81% were the fourth time
0.2% were the fifth time
No team was declined 6 times in this data
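These percentages are just the normalized distribution of the Decline number column; with the pandas setup sketched earlier:

```python
# Share of declines that were the captain's 1st, 2nd, ... decline at that event.
print((df["Decline number"].value_counts(normalize=True).sort_index() * 100).round(2))
```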
This is pretty fun to look through. Probably one of the better ways to determine which games had good ranking metrics.
A small correction I noticed for ilch2014: the picking and declining teams are inverted. 1675 should be the declining team, 695 should be the picking team (this is made obvious by the rankings). On a personal note, I remember in this particular discussion we were concerned about the legality of one of 695’s mechanisms at the time, which contributed, in part, to the decline.
It should also be noted that the structure of the district system inherently reduces a team’s inclination to decline a pick because of district points, so the more districts there are, the more the data can skew.
Does the data really show district teams are less inclined to decline? The list of teams with 4+ declines seems to have pretty good district team representation.
I would think trying to get the best chance to win is the motivating factor when declining. Advancing an additional round is worth more points than the difference in draft position, so the point trade-off isn’t obvious.
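One way to actually check would be to tag each decline as district vs. regional using TBA’s event data and compare. A quick sketch (untested; assumes your own TBA read key and the declines.csv export mentioned upthread, and you’d still want to normalize by the total number of each event type):

```python
import pandas as pd
import requests

TBA = "https://www.thebluealliance.com/api/v3"
HEADERS = {"X-TBA-Auth-Key": "YOUR_TBA_AUTH_KEY"}  # your own read key

df = pd.read_csv("declines.csv")

# Look up each event once; the Event Simple model's `district` field is
# null for regionals and populated for district events.
is_district = {}
for key in df["Event key"].unique():
    event = requests.get(f"{TBA}/event/{key}/simple", headers=HEADERS).json()
    is_district[key] = event.get("district") is not None

df["District event"] = df["Event key"].map(is_district)
# Raw decline counts by event type; divide by per-year totals of each
# event type to get a real rate.
print(df["District event"].value_counts())
```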
The bigger question is what happened between 2016 and 2017 to roughly double the declines. Is that due to better record keeping or did something fundamentally change?
One guess that I would make is that the game had a large amount to do with this. In 2017 it was possible at many events for the 8th seed to be in a better picking position than the 1st seed alliance. This was because while the 1st seed could in theory scoop up the second best robot at the event, they would then have slim pickings come their 2nd pick. Meanwhile the 8th seed could pick up two relatively evenly matched teams back to back.
I could see this playing a large role in a team’s decision to decline the 1st seed in an attempt to get a better 2nd pick.
If I had to guess, teams who were able to place cubes on the switch in auto almost every time could rank highly, and then have their invitations declined if they weren’t a top scale scoring robot as well.
If you look just at worlds, I think the data makes a lot more sense. I’m not sure if this is the cause, but one thing that definitely skews the data is that every year there are more events than the last (except 2020). Here’s the year-by-year breakdown accounting for how many events there were:
| Year | Events | Declines | Declines per Event |
|---|---|---|---|
| 2020 | 52 | 28 | 0.538 |
| 2019 | 303 | 74 | 0.244 |
| 2018 | 278 | 79 | 0.284 |
| 2017 | 255 | 76 | 0.298 |
| 2016 | 203 | 40 | 0.197 |
| 2015 | 179 | 34 | 0.190 |
| 2014 | 165 | 37 | 0.224 |
| 2013 | 128 | 52 | 0.406 |
| 2012 | 81 | 39 | 0.481 |
| 2011 | 66 | 24 | 0.364 |
| 2010 | 57 | 14 | 0.246 |
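If anyone wants to regenerate or extend this table, the per-year event counts can be pulled straight from the TBA API. A sketch (untested; assumes the declines.csv export, your own TBA read key, and my guess that the counts above cover official in-season events only):

```python
import pandas as pd
import requests

TBA = "https://www.thebluealliance.com/api/v3"
HEADERS = {"X-TBA-Auth-Key": "YOUR_TBA_AUTH_KEY"}

df = pd.read_csv("declines.csv")
declines_per_year = df.groupby("Year").size()

rows = []
for year in range(2010, 2021):
    events = requests.get(f"{TBA}/events/{year}/simple", headers=HEADERS).json()
    # Types 0-6 are official in-season events (regionals, districts,
    # championship divisions/finals); offseasons use higher type codes.
    official = [e for e in events if 0 <= e["event_type"] <= 6]
    n = int(declines_per_year.get(year, 0))
    rows.append({"Year": year, "Events": len(official), "Declines": n,
                 "Declines per Event": round(n / len(official), 3)})

print(pd.DataFrame(rows).sort_values("Year", ascending=False).to_string(index=False))
```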
It’s really interesting how many declines per event 2020 had. My hypothesis is that early-season events have more declines, and 2020 only got through its early weeks.
The data seems to correlate pretty well with how well each game’s ranking system sorted teams by actual strength: years with worse ranking systems saw more declines (see 2012, 2017), and years with better ranking systems (2010, 2015, 2016, 2019) saw fewer.
The thing that throws a weird wrench in that correlation is 2013/2014. I think most people would agree that the 2013 game tended to produce better teams at the top than the 2014 game, but 2013 has significantly more declines than 2014.
I’m not sure what to make of that. One hypothesis would be that the data from earlier years is noisier due to the smaller sample size, and so more prone to chance. Another hypothesis would be that more teams being in districts in 2014 caused a decrease in declines (NE and PNW went to districts in 2014).