Improving the Autonomous Award

I believe that FIRST should expand the criteria for the Autonomous Award to factor in each team’s contribution to the wider FRC programming community. Currently, teams are only “required” to share any pre-kickoff code they reuse. Because this is the sole “requirement” or reward tied to code, it creates several perverse incentives for teams looking to maintain their competitive advantage:

  1. To share as little code as possible, sometimes even none, since it can all be “retyped.”
  2. To put very little emphasis, when sharing code, on making that code easy for others to use.

I think sharing code and working to improve the programming of other teams should be rewarded and encouraged! While I believe it is encouraged and in line with many of FIRST’s values, it is not rewarded or recognized.

I am not trying to stifle cutting-edge development or new programming techniques; quite the opposite! I am not saying a team should be penalized for not immediately sharing every new advancement they make. I do think we should recognize the teams that do share, and encourage those that still want to maintain their yearly competitive advantage to at least share what they have learned at the end of each season. I want everyone to build off of what others have done and continue to make improvements year in and year out.

A team should not be able to make an advancement, keep how it works a secret, and continue winning awards year after year based on that same advancement. There are numerous cases of FRC teams sharing knowledge with the community and lifting all boats with it. Some might argue this was even to the detriment of the competitive advantage they had over other teams. That is what every FRC team should strive for: code that is copied hundreds of times over and serves to empower the entire FRC programming community.

There are teams that are already doing amazing work on this! I would like to specifically commend the Open Alliance for making much of that possible! There are still many other teams that:

  1. Answer questions on CD and Discord
  2. Post not only their code but the logic behind it and a full account of everything they tried that was unsuccessful
  3. Actively assist other teams at their competitions.
  4. Open-source community tools like PathPlanner, AdvantageKit, and so many more that have come before.

Let’s use the Autonomous Award to recognize not only teams that score a bunch of points and win events (you will find an EXTREMELY strong correlation between auto award winners and event performance) but also the teams that use the knowledge they gain to lift everyone up.

35 Likes

So you want to change it to be more like Engineering Inspiration and be a team attribute award instead of a robot award? I don’t object, but I also wouldn’t object to an Information Systems Inspiration award either.

2 Likes

I’ve always appreciated when I am intrigued by how a historical robot worked and I can just find its source code online - for example, 254 is great for this. On the other hand, I’ve actively searched for other teams’ code (won’t name any names) and could not, for the life of me, find any sort of code release. I can understand not releasing code until after the season, but hiding your source code indefinitely afterwards feels antithetical to FIRST’s mission.

4 Likes

How do you propose judging this? What happens if you have 2 teams that both shared code? Who would win? (asking as a judge)
And it sounds like this would be similar to the GP award?

If anything, I think an award around growing the FRC Community would be good… But then again, how do you judge it?

It would probably end up becoming the Programming version of the Engineering Inspiration Award.
Maybe call it something like the Software Engineering Inspiration Award.

As a part of the “no work before kickoff” rules, teams already have to post publicly any code they wrote before kickoff and want to use for the season. So no teams should be hiding code that they’re using year after year. As far as documenting that code and making it easy for other teams to duplicate, I’d argue that would unfairly benefit teams with large programming teams who can dedicate the student-hours to something that doesn’t actually improve robot performance.

You can get around that restriction by rewriting the code in question after the season starts. If it’s new, you don’t have to release it. Imo, it satisfies the letter but not the spirit of the rule.

3 Likes

No, this would still be a robot-focused award. I would say that an impressive auto, as the award implies, is the bare minimum. What is used to separate teams after that? This year I am not entirely sure what separates them: is it accuracy? Consistency? The interview?

I am proposing a clear second pillar for the award. Looking around Champs, there were many teams with a 5+ ball auto. That is why I think, when on-field success is the same or extremely similar, the off-field advancement of the entire software community should be taken into account.

What does the Autonomous Award currently celebrate? Many teams had the same autonomous, and almost all of the winners also won or did extremely well at their event. This award seems to be just one additional award given to a team that is already winning. It should be about something more. It should inspire teams to be better!

You judge this the same way you currently do: by talking to the teams and asking them about the things they do to meet the criteria of the award. They will be forthcoming with all of the ways they have helped inspire other teams. I hope the bare minimum of just “sharing code” is not all a team has to offer. This will also encourage teams to do and share more than they currently do, to help differentiate themselves for this award.

Teams are supposed to do this, but there is no way to verify that it is being done. So rather than add more rules that can’t be enforced, I would like to encourage teams to “do the right thing” with a reward.

3 Likes

The auto award already celebrates more than just having a good 5-ball auto. Other factors count as well, such as tele-op automation, localization, path following (especially custom), sensor usage above the bare minimum, and resistance to disturbances.

There were teams at Worlds who ran 5-ball autos using a common path follower and nothing more than drive encoders, a gyro, and a Limelight. There were also teams who automated more actions and used more advanced software and sensors. The latter teams clearly stand apart from the former group and have done more to earn the auto award. This is how the auto award is currently judged, and I don’t see any problems with that.

1 Like

I have a problem with it. I think the award should require more than it does now. The award description doesn’t specify what its intended effect is, but my guess is it’s to encourage innovation in autonomous control. If teams aren’t required to release their source code as part of that, it stifles innovation by requiring a lot of reverse engineering and wheel-reinventing by everyone else.

The pre-kickoff design release rule, when properly enforced, provides a 1-year “patent” protection to the previous season’s designs. That’s enough time for the team to benefit from their work, but lets other teams learn from it afterward and build upon it.

Engineering, as well as the arts, is fundamentally a collaborative endeavor, not a competitive one (i.e., “everything’s a remix”). Letting teams keep their designs under wraps indefinitely (through a loophole) runs counter to this and makes FRC worse off in the long term (and society worse off if you apply the same principles there). The stick didn’t work for enforcement, so @jdaming is suggesting a carrot.

6 Likes

Have the teams that you’re advocating should be winning the Autonomous Award explicitly mentioned these efforts to the judges while being judged for the Machine, Creativity, and Innovation Awards? I think the judges may be more receptive to these efforts already than this thread may be letting on.

That being said, I understand and empathize with your concerns regarding technical awards, with particular emphasis on software and controls-oriented awards. All of the technical awards are as much about the sales pitch you give to the judges in the pits as they are about the technical reality. Technical achievement has to be coupled with students who know how to pitch it to the judges, and often that pitch can outweigh the reality of the technical achievement. My team has won awards for features that were barely implemented at the time (although they would come to maturation later in the season). There’s no code review associated with any award, and even the field observer judges aren’t likely to know if your beam break sensor automated your indexer or if your operator cleared that jam manually.

The ease of sharing software makes it particularly hard to judge. The very things you want to celebrate cast a fog over how judges should evaluate these awards. The judges don’t know who wrote the underlying aspects of your code, only how your students pitch their knowledge of it (and, conversely, judges can sometimes be skeptical of claims about newly written code even when it genuinely is new). They don’t know if it’s just WPILib and COTS implementations; they only know what’s been told to them and what they can observe from field side. And it’s not clear how judges should treat this in general. Should the team that took publicly shared software and built upon it to new heights be awarded, or the team that created that original baseline? Both teams have a right to be salty to see the other one get awarded (and I know my own personal salt, at the time, after watching a team win a Championship award using a code baseline my team developed).

1 Like

Imma gonna be honest here: I do not see the primary focus of the Autonomous Award as just “software.” Sure, it involves code, but over the years it has seemed to me like an award deeply rooted in integration, planning, and implementation, transcending mechanical design, control tuning, and code. So focusing on code transparency, availability, etc. does little for the award as written.

As far as the argument of “keeping things secret” goes: I can put all my super amazing code up for viewing in public repos, but very few will see it unless I am 254 (who have a superb culture built around their tech binder; not picking on them <3). Clean, easy-to-read code is one thing, but the key, to me at least, lies in the implementation bridge. When your code is written by high school students (albeit sometimes managed by mentors with software careers), you are almost never going to reach that readability and “massively open source” feel, particularly when it is all being written for different hardware. Just my $0.02.

As the award stands:

Description

Celebrates the team that has demonstrated consistent, reliable, high-performance robot operation during autonomously managed actions. Evaluation is based on the robot’s ability to sense its surroundings, position itself or onboard mechanisms appropriately, and execute tasks.
Guidelines

  - The award is based on the performance of the robot’s autonomous (non-operator guided) operations during matches.
  - Consistent and reliable operation is weighted more heavily than the ability to score maximum points during any specific autonomously managed actions.
  - A team must be able to explain:
      - How the robot understands its surroundings, navigates on the field or positions onboard mechanisms, and then executes tasks.
      - The factors the teams considered that could interfere with success during autonomously managed actions.
      - The design, development, and testing that was done for the robot’s autonomously managed actions.

Maybe I am being a grumpy old alum, but it seems to me that software is an extremely difficult thing to judge, and the award for good software is often touted as a blue banner (just like scouting). If you take a look at the top teams in the world, all of the machines are mechanically amazing in their own right; however, the difference between a great team and a truly world-class one lies in software.

2 Likes

Yeah, you gotta request the mailed shredded copy from Marshall.
Or there’s the audiobook version.

2 Likes

How can you tell?

Question for those who keep their code private throughout the season: What do you believe you’re gaining from the secrecy, especially with more and more advanced control features becoming available to all teams through WPILib?

1 Like

At least in Eastern PA, the initial set of judges are typically split between Team Attribute Judges and Machine, Creativity, and Innovation (Technical) Judges. This isn’t 100% the case, but it is more often than not for events 1712 attends. We will get two judge interviews in the pit, one about the robot and one about team activities and attributes. Even in events where we’ve only received one initial set of judges, they have usually made it clear when they are changing the questions up from the robot-related awards to the team attribute-related awards.

I genuinely haven’t been able to figure out the process used for our events this year. We often had the same judges asking technical questions as well as culture/attribute questions. I’d love to be able to preference the awards somehow, and I know our students would too. We train students not to ask directly which award is being judged, but to try to figure it out.

I judged for the first time this year, it was an interesting experience.

One thing that I came away with (that I’ve been meaning to send as feedback to someone) is definitely related to this:

I’ve known this to be true for a long time as a mentor and hadn’t really thought through it much, but when I was faced with making decisions as a judge based primarily on what the “pitchmen” said, it definitely rubbed me the wrong way (and in some cases I tried to go talk to the actual people doing the work afterwards). In particular, what if a team had a really cool technical thing, but the pitchman forgot about it because “oh, that’s just some programming thing”? Or what if a team isn’t organized enough to even have pitchmen?

One thing that I think would really help a lot (from a judging perspective, and probably would help less resourced teams as well) is if teams could submit a couple of bullet points (definitely with a word limit otherwise it’d be way too much) before the competition. This would make it a lot easier for judges to approach a team and ask them about their robot.

4 Likes

Starting with this season, we’re syncing immediately to GitHub and plan to continue to do so. Though it would be quite a feat to transplant any of our code into another codebase, the hope is that by actively publishing our software, some techniques might be explored by other teams or implemented in WPILib in the future. It also serves as a model for a well-architected and thoroughly tested C++ realtime system.

And, as you note, with advanced controls being made more accessible, I don’t see any need for secrecy.

5 Likes

I was thinking about how one of our sponsors has teams submit their documentation after the season for a chance at more cash. It would be interesting if Ford would sponsor a second-tier competition, open to any auto award winner, that includes documenting your software features using standard public means (e.g., GitHub). They could possibly team up with WPILib and other community software contributors to pick the winners.

2 Likes