Yearly Improvement Survey

Many times this year I heard something along the lines of “it’s a shame the 2020 season was cancelled; this was the best robot we’ve ever built.” I know personally that my team’s robot this year is the most advanced and competitive robot we’ve ever built, and I’d say my high school team also built a more competitive robot than they have in a number of years. As a CSA at our first two competitions, I fielded about half as many requests for help as in the past two years. Teams generally seemed well put together, and far fewer showed up to competition with their robots half finished or missing bumpers or code.

I’d like to see if we can get some data to quantify whether this observation holds more generally across FRC. Below is a quick survey asking you to compare your robot’s competitiveness (however you define that) to your robots from a few years prior. If your team’s rookie year was 2010 or later, mark which year was your rookie year and mark the years prior as not competing. By combining everyone’s responses, we can get a general measure of relative robot competitiveness for each year. I will post a link to the analysis once enough people have responded.

SURVEY


Part of me was a little glad our competition was cancelled. We got the news right when we were in the middle of feverishly cobbling together a robot that could do 90% of the game at a 20% level. We didn’t feel ready for competition at all!

That said, our less-than-perfect planning aside, we’ve been consistently learning and improving our engineering skills every year, and that makes me prouder than the competitiveness of any one robot.

With 21 unique teams responding, here are some preliminary results:

Each response is translated into a score from -3 to +3 for each year after the team’s rookie year, and the scores are then averaged by year. If more than one response is recorded for a single team, that team’s answers are averaged together first so that it isn’t weighted more heavily than teams with a single respondent. From this (preliminary) data, it does seem that the past three years have seen a lot of improvement, with 2020 second only to 2018 and 2019 coming in third.
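For anyone curious, here’s roughly what that aggregation looks like in Python. This is a simplified sketch, not my exact script: the choice labels and data shapes below are stand-ins, not the survey’s actual wording.

```python
from collections import defaultdict

# Hypothetical mapping from survey choice to score; the real survey's
# labels almost certainly differ.
SCORES = {
    "much less competitive": -3,
    "less competitive": -2,
    "slightly less competitive": -1,
    "about the same": 0,
    "slightly more competitive": 1,
    "more competitive": 2,
    "much more competitive": 3,
}

def year_averages(responses):
    """responses: list of (team, {year: choice}) tuples.

    Averages within each team first, so teams with multiple respondents
    aren't weighted more heavily, then averages across teams per year.
    """
    # team -> year -> scores from that team's respondents
    by_team = defaultdict(lambda: defaultdict(list))
    for team, answers in responses:
        for year, choice in answers.items():
            if choice in SCORES:  # "did not compete" rows are skipped
                by_team[team][year].append(SCORES[choice])

    # year -> list of per-team average scores
    by_year = defaultdict(list)
    for years in by_team.values():
        for year, scores in years.items():
            by_year[year].append(sum(scores) / len(scores))

    return {year: sum(s) / len(s) for year, s in sorted(by_year.items())}
```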

I did notice a trend that may be skewing the data a bit, though. After a strongly negative year there seems to be a much higher chance of a strongly positive year. That makes sense: each rating compares against the previous year, and teams generally revert to the mean. But if this year’s robot only looks much better because last year’s robot was bad, not because it beats the team’s historical average, the score overstates the improvement. As I get more responses I will try to quantify this trend and model it out of the final results.
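One simple way to quantify that reversion would be the lag-1 correlation between consecutive years’ scores within each team; a strongly negative correlation would confirm the bad-year-then-good-year pattern. A sketch of that idea (this is a hypothetical helper, not something already in my script):

```python
def lag1_correlation(team_year_scores):
    """team_year_scores: {team: {year: score}}.

    Collects (previous year's score, this year's score) pairs across all
    teams and returns their Pearson correlation.
    """
    xs, ys = [], []
    for years in team_year_scores.values():
        for year, score in years.items():
            prev = years.get(year - 1)
            if prev is not None:
                xs.append(prev)
                ys.append(score)
    n = len(xs)
    if n < 2:
        return float("nan")
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)
```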


30 teams have replied now. I’ve also fixed a bug so that replies without team numbers are counted as separate teams rather than being lumped together.
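For anyone following along, the fix amounts to giving each blank team number its own unique key instead of letting them all collapse into one “team.” Roughly (a sketch, not my exact code):

```python
import itertools

_anon = itertools.count()

def team_key(team_number):
    """Blank team numbers each get a distinct key so anonymous replies
    aren't averaged together as if they came from one team."""
    if team_number:
        return str(team_number).strip()
    return f"anon-{next(_anon)}"
```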

The more teams that respond, the better the data we’ll get. And if you want to share your personal experiences in this thread, that would also be helpful.


I wonder how many teams will be able to respond to this. We started in 2013, but I’m the 3rd coach in that small(?) time frame.

I know how we felt about our robot this year compared to years past, but I can only go back a couple of years on this team, so I couldn’t fill out the “required” questions on the survey.


That’s a fair point. On the one hand, if your team was around for certain years but you don’t know much about those years’ robots, you can choose “my team did not compete” as essentially a “don’t know” option. That’s basically what the analysis does with those responses anyway. On the other hand, if you really don’t know how well those years’ robots did then you don’t have a baseline against which you can compare the robots you were involved with. I’ll leave it up to each individual to decide whether or not they can provide meaningful data if they don’t know their team’s full robot history.

2017 (rookie year): very good and simple robot, finalist
2018: we don’t talk about this robot
2019: we made it workable for the offseason
2020: our best and most advanced project so far, but some bad design decisions and our lack of experience in a few areas left it with shortcomings

2018 as rookies, we basically got RNG’d into the highest rookie seed. Our robot had a nonfunctional auto (my fault) and could only pick up boxes from straight on.
2019 we won a regional as a defense bot. High point of our history.
2020 we prioritized climb, perhaps too much. We didn’t have a functional intake when we went to pick up our pit from the canceled regional.

I wanted to provide this update for anyone interested in the results.

With 51 teams responding, 2020 pulled into the lead (though 2018-2020 are probably all too close to be significantly different). There is still a large difference between the improvements from 2018-2020 and the rest of the years. I’m open to theories for why this might be.
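To put a rough error bar on “too close to be significantly different,” one could compute the standard error of each year’s mean from the per-team scores. A minimal sketch, assuming the per-team averages described earlier:

```python
def mean_and_stderr(scores):
    """Return (mean, standard error) for one year's list of per-team scores."""
    n = len(scores)
    mean = sum(scores) / n
    if n < 2:
        return mean, float("inf")
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    return mean, (var / n) ** 0.5

# Two years whose means differ by less than roughly two combined standard
# errors are plausibly "too close to call".
```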


Availability bias.

I have no doubt that’s why there are more responses for recent years than for older ones. But I don’t see why that would make the more recent responses skew toward more improvement, especially since the jump at 2018 is sharp rather than a gradual increase.

For the record, here’s what the relationship between number of years responded and average response score looks like:
[image: scatter plot of number of years responded vs. average response score]

Since I was in high school from 2017 to 2020, I could only accurately judge those years. I based 2013 (rookie) to 2016 on TBA data, and then went from there. Yes, I compared to previous years, not to an average (saw something about that earlier).

One thing I notice is that the survey doesn’t distinguish between teams that competed in 2020 and teams that didn’t. “We thought we had an amazing robot until we got to comp and lost comms in half our matches/broke a bunch of bolts/couldn’t actually line up with the goals/etc” could describe a lot of years for the teams I’ve worked with. So I can imagine that many teams probably think more highly of their robots before they’re actually put to the test, which could be skewing respondents’ perception toward having built way better robots in 2020.


That is certainly a fair criticism. I don’t really have any way of adjusting for that short of throwing out data from the majority of teams that didn’t get to compete in 2020, so you’ll just have to take the results with a grain of salt. That said, each respondent was asked to define “competitiveness” for themselves, so they could judge how well they think their robot would have played the game as planned, rather than relying on some more objective metric that may not capture the whole picture.

