2023 AUG 08 | Written by Fiona Hanlon, FIRST Robotics Competition Team Experience Specialist
As we round out the end of summer, we wanted to share some information & updates on the Double Elimination playoff structure that we implemented for the 2023 season. When we announced that we would be using this model for the 2023 season, we noted that it was a pilot, and we would be soliciting feedback to understand how it went and if any changes were needed. We want to share some of that feedback and some of the changes we are making for the 2024 season.
In the weekly event surveys, we asked teams “Did you enjoy having awards between final matches more, the same, or less than having all awards after the final matches?” The responses were 66.4% positive, 20.7% neutral, and 12.9% negative overall. The chart below shows the responses week-by-week.
In the end of season survey, we also asked teams how they felt about the new tournament structure: “Did you enjoy the new Double Elimination tournament style more, the same, or less than the previous best 2 out of 3 style?” 80% of mentors and 70% of students responded positively to this question, compared to 7% of mentors and 9% of students that responded negatively.
We also solicited feedback from our Program Delivery Partners on the new tournament type as well as awards between matches, and the responses were overwhelmingly positive. Based on all of this feedback, I am happy to share that we will be continuing with the Double Elimination Playoff Tournament and will be keeping awards between matches.
We also took a look at the free-response feedback from our surveys as well as data from events to see what improvements (if any) could be made. Our findings are below:
We noticed that for most events, playoffs took an average of 3 hours and 45 minutes, when we had intended them to take just over 3 hours. In looking at the data, this is likely due to a few factors, including the format being new for everyone.
One of the main causes of this was that we had scheduled playoffs to do 7-minute cycles when on average, events were doing 8-minute cycles.
We also noticed that the breaks took longer than planned.
Additionally, in some of the survey responses, people noted that the breaks without awards felt long and not very engaging. Some wished we had explicitly noted which awards were being given out during each break.
So, taking all of the above into consideration, we are mapping out a new schedule. This schedule should help events better stick to the ~3-hour tournament while still guaranteeing that teams have at least 15 minutes between matches. The new schedule makes the following modifications:
Increases the cycle time to 9 minutes
Removes the first two breaks entirely
Shrinks the third break to only 5 minutes
We are also working on ways to make it clearer to event attendees when awards will be handed out.
I wanted to end this blog with my favorite piece of data from the 2023 season, which shows that Double Elims worked as intended. One of the things many people liked about this new playoff style was that it would better showcase which alliances were actually the best at the event, since it was entirely possible that Alliance 8 was the second-best alliance there. In the old model, for example, Alliances 1 and 8 could never both end up as winner and finalist, because they met each other in the first round. The chart below gives data on which alliances ended up as winners or finalists at events in 2022 (using the old playoff style) and in 2023 (using Double Elimination). It’s important to note that while Double Elimination likely contributed to these changes, it is not the only factor at play.
Finally, we want to again thank those who helped us with implementing the Double Elimination tournament for the 2023 season and to everyone who helped provide feedback on their experiences. Looking forward to the 2024 season!
Love to see the data on this and that they’re iterating quickly to make changes in response to feedback.
I’m curious how they’re achieving some of the results, specifically removing the first two breaks. The break after Round 1 (between Matches 4 and 5) ensured that the alliance that lost Match 4 still had 15 minutes between playing in Matches 4 and 6. Removing the break means there’s only a single 9-minute match cycle for that alliance to turn around. You could resolve that issue by swapping Matches 6 and 7, but then you have the same issue going from new_7 → 9 and 8 → 10, because the second break is gone as well.
The only thing that comes to mind is that maybe they’re not measuring from the end of the 9-minute match cycle, but from when the match itself ends. A match being 2.5 minutes means there are 6.5 minutes left in that cycle after the match, which, combined with one full 9-minute cycle for the intervening match, barely provides the required 15 minutes (totaling 15.5). But that’s a different (and much tighter) turnaround than we asked teams to do this year. By the same math, teams going from 4 → 6 had (7 - 2.5) minutes at the end of the cycle + an 8-minute break + a 7-minute cycle for Match 5, or 19.5 minutes.
Edit: And to emphasize the above, the 15.5 minutes there would include the time for a team to get off the field as well as the time for them to be back on the field, connected, and ready to play in order to keep the schedule. So that isn’t 15 minutes off-field repairing.
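To make that arithmetic concrete, here’s a small sketch (the match length and cycle/break durations are the numbers quoted above; the function name is just for illustration):

```python
MATCH_MINUTES = 2.5  # approximate on-field match duration


def turnaround(cycle_minutes, break_minutes, cycles_between):
    """Minutes from the end of one match to the start of the alliance's next:
    the remainder of the current cycle, plus any break, plus the full
    cycles for the matches played in between."""
    return (cycle_minutes - MATCH_MINUTES) + break_minutes + cycles_between * cycle_minutes


# Proposed schedule: 9-minute cycles, no break between Matches 4 and 6
print(turnaround(9, 0, 1))  # 15.5
# 2023 schedule: 7-minute cycles with an 8-minute break after Match 4
print(turnaround(7, 8, 1))  # 19.5
```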
Edit 2: @s3529 updated the match timing table below to better demonstrate what I’m trying to say.
Having said all of that, I agree that those short breaks were awkward at times and often teams were ready to play. As field staff it was nice to get a breather, but running 9 minute cycles will alleviate some of the pressure there.
** Usually, the gap between matches is equal to:
total duration of the breaks + [number of matches * cycle time] + 2 minutes
For those cells, the value from the manual doesn’t include the 2-minute buffer, so I don’t either.
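That footnote formula can be sketched directly (the example values below are hypothetical, just to show the calculation):

```python
def gap_between_matches(total_break_minutes, matches_between, cycle_minutes, buffer_minutes=2):
    # gap = total duration of the breaks + (number of matches * cycle time) + 2-minute buffer
    return total_break_minutes + matches_between * cycle_minutes + buffer_minutes


# e.g. one 5-minute break and two 9-minute cycles between an alliance's matches:
print(gap_between_matches(5, 2, 9))  # 25
# without the 2-minute buffer, as in the cells noted above:
print(gap_between_matches(5, 2, 9, buffer_minutes=0))  # 23
```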
Edit: Found a mistake in my math for Match 11 and corrected it.
I am intrigued to see how the percentage of finalists and winners coming from each alliance changes next year. The one thing they don’t acknowledge in this data is that game design can influence how dominant the number 1 alliance is.
The 2022 game favored incredibly strong individual robots, which pushed the #1 alliance to be dominant.
The 2023 game favored having three good robots over two great robots, which let lower-seeded alliances play more competitive matches.
Using the FIM District Championship as an example, only one of the four alliances on FIMstein was a number one alliance on its field. Compare that to 2022, when three of the alliances on FIMstein were the number one alliance on their field. This is just one event, though, so it’s a small sample size.
All this is just a long way to say, I was very happy with the change to double elimination this year, but I am excited to see if the variety of alliances making it to finals was more of a byproduct of the format of eliminations or the game for the 2023 season.
I love that we got real data from HQ. They didn’t give us any of the “most people like it” crap. Just the raw percentages.
Alliances 4-6, and to a lesser extent, alliances 3, 7, and 8, are the real winners of this system. And that’s awesome. ±1-2 spots in the qualification rankings used to be a death sentence. Not anymore. That’s a win.
Just as a survey/human behavior/data/program evaluation nerd, I find the chart about public opinion interesting. It seems that the small mid-season changes really didn’t have an effect on people and they either liked the system or didn’t. Until week 6, that is. Maybe people just got more used to the system? Smaller sample size later in the season? People more adjusted from their previous events? Or maybe things really just did get better, but not till the end.
I’m guessing that could be due to a number of district championships that week, and no district qualifiers. There were also fewer events in general that week, so a smaller sample size could also play a part.
After week 6, more folks have more time to sit down and fill out surveys. These surveys are due weekly and are just not a priority when there are fires to put out.
I wish they wouldn’t have included that graph for this reason. 2022 was historically one of the best games for top seeds while 2023 was historically one of the best games for triple offense. I struggle drawing any meaningful conclusions from it.
That being said, that 80% of mentors and 70% of students preferring Double Elimination is super impressive and a more important metric.
I hope HQ continues to share results of the surveys like this.
Selection bias. Week 6 is District Championship week, so the survey respondents will be more heavily weighted toward teams that qualified for their district championships. That imposes a selection bias: week 6 respondents did well enough in the new award/playoff format (over the course of the full season) to earn a spot at their district championship, and they had at least three competitions’ worth of experience with the new format to become accustomed to it.
This isn’t to say their data isn’t still valuable, only that selection bias helps explain the bump from a mid-60s approval rating to over 70%.
A better chart for indicating overall satisfaction with the DE format would likely be something like “average number of alliance matchups for each seeded alliance” or “average number of playoff matches played per alliance”: metrics which indicate how deep a run each alliance made and how dynamic their playoff experience was.
This is an interesting research methods question. If you give a small window of time, response rates will likely be lower. But, if you give a larger window, response quality will probably diminish as memories grow more distant. And, as an added incentive for a shorter window, FIRST can use data from earlier weeks to improve the experience at later week events.
I make a point to write all of my written feedback no later than the next day after an event, because I know the salient details will leave my mind. But program feedback and evaluation is super important to me, and I understand that it’s not everyone’s priority.
In my (admittedly limited) experience with these sorts of surveys, people who don’t care much also tend to have more neutral opinions (is there causation there??? time to write a survey!), and they’re less likely to bother with the survey at all. Lovers and haters tend to go out of their way to make their voices heard more than the “meh” crowd.
If we somehow forced or incentivized all teams to respond to this survey, we might see the “negative” percentage drop a bit as more neutrals weighed in.