Multi Seasonal Analysis: Project Starpath

For easy reference, this project’s official name is Project Starpath. I am attempting an analysis of a supposed paradigm shift in FRC, specifically in team performance. I have heard of a growing performance gap between lower-, mid-, and elite-level teams: the claim is that in recent years, top teams’ performance has accelerated much faster than mid-level teams’, widening the overall gap. My main goals are to:

  1. Gather data on performance of elite, mid, and lower teams across multiple seasons
  2. Validate existence of growing performance gap (based on data)
  3. Identify root causes of performance changes (groups of teams, e.g., lower, mid, and elite, as well as case studies of specific teams)
    a. Quantitative and qualitative information is welcome
    b. Any changes that could contribute to performance changes (technology, knowledge, team culture, etc.)
  4. Draw clear, comprehensive conclusions based on the data and reasonable extrapolations
  5. Compile all information mentioned above into a dossier
  6. Publish dossier to the FRC community to help teams efficiently improve
  7. Understand ramifications of an entire community understanding what separates lower teams and elite teams

This is a big project, and I am reaching out to the entire FIRST community. I will also make an effort to reach out to individual teams for their input.
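As a rough sketch of how goal 2 could be tested: given a per-team performance metric for each season (for example, OPR or EPA pulled from The Blue Alliance or Statbotics), one could measure the gap between the elite tier and the middle of the pack each year and fit a trend. The data and the 95th-percentile/median cutoffs below are assumptions for illustration, not an established methodology:

```python
# Sketch of goal 2: is the elite/mid performance gap growing?
# All data here is synthetic; real inputs could be per-team season
# metrics such as OPR or EPA.
import numpy as np

def elite_mid_gap(scores):
    """Gap between the 95th percentile (elite) and the median (mid)."""
    s = np.asarray(scores, dtype=float)
    return float(np.percentile(s, 95) - np.percentile(s, 50))

def gap_trend(season_scores):
    """Least-squares slope of the gap across seasons (points per season).

    season_scores: dict mapping season year -> list of team scores.
    A positive slope is consistent with a widening gap.
    """
    years = sorted(season_scores)
    gaps = [elite_mid_gap(season_scores[y]) for y in years]
    slope, _ = np.polyfit(years, gaps, 1)
    return float(slope), dict(zip(years, gaps))
```

A positive slope by itself only shows association with time; scoring scales change every game, so the metric would need normalization (e.g., dividing by the season median) before comparing across years.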

10 Likes

Curious to see what statistics you find to back this up. My subjective observation has been that if anything, the gap between the top tier of teams and the 2nd tier has been narrowing, especially in areas that have switched to districts.

14 Likes

I will second this. I don’t think the difference between a mid-level team’s and an elite-level team’s performance has ever been closer in modern FRC. The first question I have is why, and I know it’s been described ad nauseam, but I’ll add a few thoughts.

The rise of COTS played a huge role in recent years in raising that bar. I would love to see some math there, though, because I think this year’s game exacerbated this due to the relative simplicity of building a high-scoring robot. I also think this past game was perfectly designed to raise the floor, and I’m interested in seeing whether that will be a trend going forward.

8 Likes

I kind of feel like the 2023 game made it seem like teams are closer than they actually are. When the very top teams are scoring 8-9 game pieces per match in teleop, it’s easy to look at a team scoring 7-8 game pieces per match and attribute the difference to small sample sizes, role differences, or scouting errors. I’m not ready to say the gap between the elite teams and 2nd-tier teams is smaller than it’s been historically.

8 Likes

From a research point of view, this would require access to first-hand data: interviewing team members from each competition year, mining technical binders, etc. This would be a daunting task, but there may be workarounds.

I’m thinking you’ll run into a large number of confounding variables that will make it difficult to figure out “why” a team’s performance changed. So it might be better to look at the correlation between ONE variable (e.g., swerve drive, in-house fabrication, or budget) and perceived team performance. Keep it as close to classic scientific methodology as possible.
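The one-variable idea above could be sketched as a point-biserial correlation: a 0/1 flag for the factor (here a hypothetical "uses swerve drive" flag) against a numeric performance metric. The data and variable names below are made up purely for illustration:

```python
# Sketch of the single-variable approach suggested above: correlate
# one binary factor with a performance metric. Illustrative only.
import numpy as np

def point_biserial(flag, score):
    """Pearson correlation between a 0/1 flag and a numeric score.

    For a binary variable this equals the point-biserial correlation.
    Returns a value in [-1, 1]; it measures association, not causation.
    """
    flag = np.asarray(flag, dtype=float)
    score = np.asarray(score, dtype=float)
    return float(np.corrcoef(flag, score)[0, 1])

# Hypothetical example: swerve flag per team vs. a season OPR-like score.
swerve = [1, 1, 1, 0, 0, 0]
opr = [60, 55, 50, 40, 35, 30]
r = point_biserial(swerve, opr)
```

Even a strong correlation here would not untangle the confounders the post warns about (budget likely predicts both swerve adoption and performance), so this is a screening step, not an answer to “why.”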

4 Likes

Anecdotally, I think the top tier is about where it has been (a couple game pieces ahead of Tier 2 on average, with usually a handful of elite standouts that are 3-4 ahead), and that has been consistent for a long time. It manifests in different scoring results depending on the game (e.g., being ahead on gears in 2017 could still be high variance and result in a loss, based on how gears were step-scored).

The game has changed a lot since 2018, with even more COTS options than before, brushless motors making robots faster, and teams no longer required to build a second robot and develop the skills to transfer performance between robots. Losing ~700 mostly low-to-mid-tier teams during COVID has also raised the median team’s performance level. It makes things feel more normalized, even though there is still a group of teams performing a step better.

Institutional knowledge and experience with how to manage your build, season, and competition play are the primary things that maintain top-team performance. To their credit, this group is very open about what they do to pull it off, but it’s typically the small, nuanced decisions they make on a day-to-day basis or during match play where they have better knowledge assets. In recent years, they have also done the legwork to have extremely consistent software relative to what we see in Tier 2.

7 Likes

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.