Award for Scouting Systems

My point being that I would support one as long as it would give small teams and paper scouting teams a fair chance to win the award.

How this would actually happen… I’m not sure.

I think the important part of the award would not be overall effectiveness, but the ability to demonstrate an effective part of your scouting system for your team.

If you have a small team, that might be something like "we have x system to be able to scout all the teams while only having 3 people."

For a big team, that might be "we set up this algorithm that can scan through all the data for us and find the very best hatch bot to complement our cargo-only bot."

Ultimately, each team has an equal chance to win because it's not based on who can perform the best, but on who has developed something neat that works for their team.


Has an engineering award ever actually been given for a scouting system? I understand that existing award criteria language would technically allow this, but I’m not convinced that it really happens.

I know sharing scouting data or collaborative scouting is a common factor in GP and related awards, but that isn’t about the technical or functional merit of the systems.


The more I think about it, a scouting award has the potential to be one of the most inclusive awards.

Because it would be judged on a unique part of your system that is effective for your team, a team of any size has an equal opportunity to win it.

Combine that with the fact that you don't need fancy water jets, smart mentors, or brilliant students to develop a unique and effective Excel sheet that works for your team.

The barrier to entry for an effective scouting system is probably the lowest of all the awards, next to Spirit, so this would be a great award to encourage teams to spend time on an often underrepresented part of FIRST, despite its importance.

I see a well-written and well-executed scouting award as a magnificent addition to FRC.


The thing is that a good scouting system isn't an effective Excel sheet or fancy data analysis. Good scouting is almost 100% dependent on the culture and ideals of each individual team. If you try to judge teams based on their scouting culture or Excel sheets, then you end up with just another award that has the same arbitrariness as awards like GP, Spirit, and Entrepreneurship, except with the implication that this team's fancy Excel sheet is something you should emulate.


Perhaps you and I have very different experiences with what makes good scouting. Sure, team culture has a lot to do with it, but team culture has just as much to do with scouting as it does with quality robots or anything else. Teams can develop unique systems to keep students from getting fatigued, which would in turn help team culture around scouting. (I mean, isn't FIRST as a whole about changing the culture around robotics?) Isn't that what we're going for?

I for one have never noticed an arbitrary nature in which teams win those awards. Up until this year, GP was judged based on nominations from other teams, which seems like the exact opposite of arbitrary. It's very easy to tell which teams have spirit and which ones don't, and yeah, it's a culture thing, but why is that so bad? Finally, I have never found Entrepreneurship to be arbitrary. 4131 is well known for their excellence in entrepreneurship, so you of all people should understand how important a well-built business plan, strong financial planning, and the like are to winning.

I honestly forgot about this. If this is the primary “system” that judges judge then I would be a lot more willing to accept this award. However, I still feel it would turn into teams trying to conceive of convoluted systems for the purpose of being unique and special. I am really not a fan of encouraging teams to do things for the award they might get from it.

I don’t mean to morph this thread into an argument about which awards are reasonable/well judged and which aren’t. To my knowledge, each of those awards has its own issues with consistency in judging and with how reliably deserving teams win.


Good anything is 100% determined by team culture. Therefore, the only award we should give out is the Spirit award.


I think finding judges would be very difficult. Our team has won several awards specifically calling out our scouting system (usually GP or Judges’ award). But it’s mostly because of the collaborative effort we lead, and all of our data is public.

If you really wanted qualified judges, you should look toward people who do business intelligence, analytics, or people in the Intelligence Community.

The award for good scouting is being able to build a winning alliance.
Also, having your data sought out by other teams is rewarding.

I mean, not directly to my knowledge. However, you could likely roll your scouting into the Autonomous or Innovation in Control awards fairly easily by discussing how you use it to help drive autonomous goals.

Use it as a thing to spice up your discussion and set it apart from others.

In 2017, we won the “Creativity Award” at an event for our scouting system, so there’s a personal anecdote to add to the discussion.

Also, small pushback on the “good scouting wins blue banners” argument. It obviously can help a lot, but other key parts of the team must be functional before a lot of teams see the practical benefits of scouting.


Honest question: is high correlation a good thing?
I have always said you end up drafting whoever you overrate. In theory, high correlation means avoiding overrating someone, but it only takes one radically mis-rated team to create a bad pick.

On top of that, a “top 24” list has circumstantial dependencies. In a year like 2019, where the desired features of a 1st- and 2nd-round pick can be very different, the assumption of where you are drafting matters.

Wanna win the award? Submit the top-24 rankings, because I’d estimate over 50% of teams at a competition don’t do proper scouting and that’s their pick list anyway.

Just asking for teams to submit OPR rankings.

How it would be determined? That’s a tough one to say.

However, I would say it wouldn’t be the worst thing to add. I know that on my team the scouting leads work extremely hard to scout well, so giving them a chance to earn an award for their efforts seems well deserved.

I think I personally would prefer that my team’s list have a high correlation with other teams’ lists. I wouldn’t mind at all if it was a bit different from other lists, but if your list is drastically different from everyone else’s, then I think it’s a safe bet you did something wrong. YMMV

But ultimately, a draft list is about single teams.
When it’s your turn to pick, it’s all about how accurate the team at the top of your list actually is; the rest of your list doesn’t matter.
If you rate a single team way too high, the flow of the draft makes your list bad.
I would argue a list of 40, 1, 2, 3, 4, 5, 6, etc. is a bad list despite being highly correlated with everyone else’s lists.

The penalty for rating a team too high is asymmetric to the penalty for rating one too low.


Note that my original suggestion was made primarily in jest and has a whole host of other issues. If we were to build a scoring methodology for a top 24 pick list, I would certainly prefer the teams higher on the list be worth more than the teams lower on the list. Raw correlations don’t really do this.

I don’t know exactly what that scoring methodology would look like but I also don’t really care. :panda_face:
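To make the "raw correlations don't really do this" point concrete, here is a toy sketch (entirely my own illustration, not an established FRC metric; the 1/position weighting is an arbitrary assumption): two pick lists can have identical Spearman rank correlation with a consensus list, even though one botches the 1st pick and the other only botches the 24th. A top-weighted error metric separates them.

```python
def spearman(ref, cand):
    """Spearman rho between two pick lists (each a permutation of the same teams)."""
    n = len(ref)
    ref_rank = {team: i for i, team in enumerate(ref, 1)}
    d2 = sum((ref_rank[team] - i) ** 2 for i, team in enumerate(cand, 1))
    return 1 - 6 * d2 / (n * (n * n - 1))

def top_weighted_error(ref, cand):
    """Rank error at each slot, divided by slot number so top-of-list mistakes cost more."""
    ref_rank = {team: i for i, team in enumerate(ref, 1)}
    return sum(abs(ref_rank[team] - i) / i for i, team in enumerate(cand, 1))

consensus = list(range(1, 25))          # 24 teams, labeled by consensus rank
swap_top = [2, 1] + consensus[2:]       # swaps the 1st and 2nd picks
swap_bottom = consensus[:22] + [24, 23] # swaps the 23rd and 24th picks

# Both lists displace two teams by one slot, so their Spearman rho is identical...
print(spearman(consensus, swap_top) == spearman(consensus, swap_bottom))  # True
# ...but the top-of-list mistake is far more costly under 1/position weighting.
print(top_weighted_error(consensus, swap_top))     # 1.5
print(top_weighted_error(consensus, swap_bottom))  # ~0.087
```

Any real scoring scheme would need to pick its weights deliberately; the point is only that a plain correlation treats slot 1 and slot 24 as equally important.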

Would it be better to provide an award for a unique strategy? I know this would not directly reward scouting, but it would highlight teams that optimize their position even if they don’t win the competition. It would highlight not just scouting, but also team strategy. It could be for a qualification match or elims, but the idea would be to recognize a unique approach to the game. There are a couple of problems with this, like deciding which of the 3 alliance teams deserves it more, but it may be easier to judge than explaining convoluted scouting systems.