Noticing improvements toward the end of quals

How do you ensure your team is aware of last-minute improvements from teams at the end of quals? Obviously, watching the last few matches is very helpful, but with everything going on at that point, how do you make sure things don’t slip through the cracks?

At Tech Valley, 2601 was one of the best defenders at the event, could score a bunch of low goals, and had a traversal climb they never quite got working… until their final match. My scouts and I saw it and talked about picking them for our #7 alliance, but it could easily have been just another data point in the spreadsheet if we hadn’t been watching so closely, or if we’d had to talk to another team during that match. 2601 ultimately fell to the #2 alliance, who went on to win the event.

11 Likes

With scouts, it may make sense to have a “Need to See” list - teams in an “If they score [metric] or achieve [challenge], and demonstrate it convincingly, they move up a tier” bucket. I think a lot of teams have their list solidified the night before Elims and then don’t really reconsider it throughout the next day. It’s still worth having a morning crew on the last day dedicated to figuring out the stragglers/third bots.
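
In code, that list could be as simple as the sketch below. This is a minimal Python example; the teams (other than 2601 from the OP), the conditions, and the tier moves are all made up for illustration:

```python
# Hypothetical "Need to See" list: team -> (what to verify in their remaining
# matches, where they move on the pick list if they show it convincingly).
need_to_see = {
    2601: ("completes a traversal climb in a real match", "move up to 2nd-pick tier 1"),
    9999: ("scores 5+ high goals in teleop", "move up to 2nd-pick tier 2"),
}

def morning_review(team, condition_met):
    """Resolve one entry after watching the team's morning matches."""
    condition, promotion = need_to_see[team]
    if condition_met:
        return f"Team {team}: {promotion}"
    return f"Team {team}: keep watching ({condition})"

print(morning_review(2601, condition_met=True))
```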

15 Likes

You may want to look into time-weighted formulas (like a time-weighted average). When you apply something like that to a dataset and compare it to a straight average, it can help show teams that have improved or declined over the course of an event. One team may have 5 traverse climbs in an event and another 3 - but if the first team did all of theirs in their first 5 matches, while the second did theirs in the last 3, then a time-weighted average would show the decline of the first and the improvement of the second. It’s great for leading you to ask questions when picking between teams. But remember, it doesn’t mean everything: a team may have a different strategy for their last match or two based on the matchups, with nothing wrong with their robot, so an apparent decline may not be indicative of a robot issue!
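
Here’s a minimal sketch of one such formula in Python: an exponentially decaying average, where the decay constant and the 15-point climb values are purely illustrative assumptions, not anyone’s actual scouting system:

```python
def time_weighted_average(scores, decay=0.8):
    """Weighted average where recent matches count more.

    scores is ordered oldest match first; decay < 1 discounts older
    matches, and decay = 1.0 reduces to a straight average.
    """
    weights = [decay ** (len(scores) - 1 - i) for i in range(len(scores))]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Illustrative numbers: 15 points per traversal climb, 8 quals matches each.
team_a = [15, 15, 15, 15, 15, 0, 0, 0]  # 5 climbs, all early in quals
team_b = [0, 0, 0, 0, 0, 15, 15, 15]    # 3 climbs, all at the end

print(sum(team_a) / len(team_a), sum(team_b) / len(team_b))          # straight: A ahead
print(time_weighted_average(team_a), time_weighted_average(team_b))  # weighted: B ahead
```

With any decay below 1, the second team comes out ahead, which is exactly the “go ask questions” signal the straight average hides.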

7 Likes

If your data is robust enough, you should be able to spot trends (i.e. a team improving as matches go on, or a team that is falling apart as things break on their robot) as well as anomalies (a low-scoring match because another team drove into them and opened their main breaker). If you are collecting match-by-match data, then you can plot each metric versus match number (or just scan that row in your spreadsheet) to spot these trends.
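
For example, a quick matplotlib plot like the sketch below makes a trend obvious at a glance. The match numbers and cargo counts here are made up for illustration:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical match-by-match data for one team: cargo scored per match.
match_numbers = np.array([4, 12, 19, 27, 35, 44, 52, 60])
cargo_scored = np.array([2, 3, 3, 5, 4, 6, 7, 8])

plt.scatter(match_numbers, cargo_scored, label="cargo per match")

# Least-squares trend line: a positive slope suggests improvement over quals.
slope, intercept = np.polyfit(match_numbers, cargo_scored, 1)
plt.plot(match_numbers, slope * match_numbers + intercept,
         linestyle="--", label=f"trend ({slope:+.2f} per match)")

plt.xlabel("Qualification match number")
plt.ylabel("Cargo scored")
plt.legend()
plt.show()
```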

If you are relying on just averages or “I saw them do this in match x” then you are certainly going to miss these trends.

Often, when we prepare our alliance selection strategy, we will characterize a team as “inconsistent 2 ball auto, averages x balls per match and improving, consistent mid climb with potential trav”. The words “inconsistent”, “consistent”, “improving” and “potential” are used to indicate trends or inconsistencies that should be assessed in the final matches. You can also spend some time with these teams in the pits to find out if they have made any changes that you might need to assess during those final matches. If they perform their 2 ball auto 3 times in their final 3 matches and you find out that they spent time on the practice field making adjustments, then you can remove “inconsistent” from that aspect of their assessment.
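
As one possible sketch of generating those labels straight from scouting data (the 75% cutoff and the sample numbers are illustrative assumptions, not our actual criteria):

```python
def label_consistency(successes, attempts, threshold=0.75):
    """Label 'consistent' if the success rate clears the threshold."""
    if attempts and successes / attempts >= threshold:
        return "consistent"
    return "inconsistent"

def label_trend(per_match_scores):
    """Label a trend from a simple first-half vs. second-half comparison."""
    half = len(per_match_scores) // 2
    early = sum(per_match_scores[:half]) / max(half, 1)
    late = sum(per_match_scores[half:]) / max(len(per_match_scores) - half, 1)
    if late > early:
        return "improving"
    return "declining" if late < early else "steady"

auto = label_consistency(successes=4, attempts=9)   # 2 ball auto worked 4 of 9 times
balls = [3, 4, 4, 5, 6, 7, 6, 8]                    # teleop balls per match
print(f"{auto} 2 ball auto, averages {sum(balls) / len(balls):.1f} "
      f"balls per match and {label_trend(balls)}")
```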

These last-minute changes are often hard to spot, and you rarely have time to fully re-analyze the data to see if your prior evening’s strategy assessment needs to be revised. In 2017, we missed the fact that a robot had added a climb overnight and passed over them in our alliance selection, only to be beaten by the alliance that picked them after us.

If I were on a team that had made a marked improvement, it would not be a bad idea to advertise that to the top teams that are likely to become alliance captains. Dropping some hints to their scouts that they might want to watch your team’s performance in the final matches would likely get you noticed, especially if the event is not particularly deep and you want to separate yourself from the crowd of potential 2nd picks (or even potential 1st picks).

6 Likes

This is more or less what we usually do, and I think it works out well. We try to end the strategy meeting the night before playoffs with a list of a handful of teams to watch the next day - usually teams that did poorly early in quals and seemed to improve a lot by the end, or vice versa, or teams with something confusing in their data. On the morning before playoffs we don’t normally do full scouting; we only scout the teams on that list.

1 Like

Interesting. I’ve observed and fallen victim to the opposite phenomenon, where you focus too much on a team’s last couple of matches (especially at events where teams get 2-3 matches in the morning right before alliance selection), recency bias sets in, and you end up ignoring the previous 8-10 matches.

11 Likes
