Award for Scouting Systems

The thing is that at least the first two awards go on with more criteria, as does the third: maybe not within their one-line descriptions, but in their more detailed ones.

If I’m a judge whose background is BattleBots and who is highly unfamiliar with FIRST (which likely won’t happen), I might have a totally different opinion on a good Chairman’s Award candidate than, say, a judge who’d been on 191 back in the ’90s or on a Hall of Fame team from the last 5 years.

Judges’ Award, OTOH, is one of the biggest mysteries in FRC, and usually the judges describe why it’s being given to a particular team.


@LukeB I’d probably agree with that, especially on the second and third sentences. I can’t write that either. The problem is that when you write a metric that’s going to affect something, the people who care about what the metric rates will alter their behavior to improve their numbers so they get the good stuff, or what have you. See also: the Safety Award. But in order to make the judging more effective, you do want some sort of guidelines. You probably want to provide minimal examples and let the judges duke it out.

I wasn’t aware of QR codes being used in FRC before 115, although it might have happened. Has 5137 tried using digital scouting, and if so, what do you use for data transfer?
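For anyone unfamiliar with the QR approach: the usual idea is that each scout’s offline device serializes a match record into a compact string, and a central laptop scans it in. Here’s a minimal Python sketch of the serialization side; the field names are purely hypothetical, and the actual QR rendering would be handed off to a library such as the `qrcode` package.

```python
import json

# Hypothetical match record a scout fills in on an offline tablet.
record = {
    "team": 5137,
    "match": 42,
    "hatches": 3,
    "cargo": 5,
    "climb_level": 2,
}

def encode_record(rec: dict) -> str:
    """Serialize a record to a compact string; the result is what
    you'd feed into a QR generator (e.g. the `qrcode` package)."""
    # Sorted keys and trimmed separators keep the QR payload small.
    return json.dumps(rec, sort_keys=True, separators=(",", ":"))

def decode_record(payload: str) -> dict:
    """Invert encode_record at the central scouting laptop."""
    return json.loads(payload)

payload = encode_record(record)
assert decode_record(payload) == record
```

The appeal of this scheme is that no Wi-Fi or Bluetooth is needed at the venue; the QR code itself is the transport.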


We have sampled many digital solutions as they have come and gone. None had exactly what we found useful, so they were set aside.

Communication takes several forms:

From drive team to scouts and vice versa: this is best done verbally, in person, as there is a chance to get more clarity in discussions of teams and tendencies.

From scouts to head scout: once again, verbal in-person communication saves time and enhances clarity.

From head scout to other teams: there again, a verbal discussion can be highly beneficial for gaining understanding and trust.

One of the biggest issues with apps: they are limited by design, while the human brain is not limited by those constraints. No amount of notes will convey what a simple conversation can, since humans take visual cues from each other.

The other aspect is that scouting is only as good as the data collected, and in an app there is usually an abundance of data entry. This tends to allow for bad data entered “just to get through it,” sort of like privacy policy acknowledgements: most just hit YES to get to the game or whatever. As they say, “garbage in, garbage out.”

In conversation, important metrics can be reaffirmed and trends on teams can be reconfirmed, since the scouts know the bots they are tracking.

In the end you have 30-60 teams to track, most of whom you play, so you have in-game stats and tendencies for most of them. The rest, whom you don’t play but may meet in eliminations, can easily be evaluated overnight and on day 2 of competition; some simply won’t help your team in eliminations enough.

So in the end we tend to collect only the data we deem necessary to find partners or to help mitigate threats from other alliances. This tends to produce cleaner data to use in conversations between humans.

This is what I needed to know. Thank you for clarifying.


I’ve read this thread, and several posts seem to lean toward awarding a technological scouting aid. My concern with a scouting award based on technology aids is that, as a society, we already assume more technologized means better. Sometimes paper and pencil are the more elegant solution to a problem and the best use of resources for the desired results. There are also applied art forms (cooking, brewing, certain types of data analysis, etc.) where the art supersedes the science, although the science may be significantly relevant. Technology can’t necessarily replace the “art” in everything; the human element can be exceedingly important.

I feel that with scouting, which is ultimately about team selections, the art of the task is very relevant and the data is a supporting player. How the data is acquired for a team to be successful is only relevant to each individual team. Let’s say team A scouts on paper and manually tallies its data, but puts together a finalist alliance and can deliberate its selection lists in 45-ish minutes. And let’s say team B has a sophisticated app with data sheets and graphs, deliberates for 1.5-ish hours, and also puts together a finalist alliance.

As a judge, how would I distinguish between the two when it comes to an award? How does an award for scouting account for both the art and the science of scouting? I just can’t get my head around criteria for judges to apply universally in evaluation.

I am not opposed to recognition for scouting or other aspects.


2729 used QR codes in their scouting app as far back as 2015.

2015 was the same year 115 did it. I don’t remember if we had it in season or if it was done in the off-season only.

I knew about 1072’s system and sent 7667 at CVR over to consult with 1072 on draft picks (and another rookie team that could be an alliance captain as well).


Rather than a scouting specific award, I’d prefer to see an award for implementing technology on the team in an interesting way. It could be an innovative scouting application, or could be a great system for tracking progress on making parts, or an inventory system, whatever.


Wow, thank you for comparing the work of complex programming and organizing a large group of individuals to collect data to being a janitor. So let’s continue to ignore a growing contingent of students who develop scouting systems while we give 5 awards to the robot design and fabrication group, 2 to the robot programmers and 1 to the business/media group. That’s always an inspiring choice to make. And the criteria for all of those awards are so absolutely clear cut that no one has any disputes about who might be the best in each of those categories.

Or we can take your statements as being as ridiculous as they appear.

As for your statement about rewards, apparently you work for free and never ask for a promotion, because those are the rewards in real life. The awards at competitions, if they are not simply about participation (which does happen too much now), are the equivalent, since students aren’t gaining the monetary or responsibility rewards handed out in the workplace. And just as in the workplace, awards serve the same purpose as pay raises and promotions, alongside achieving goals, to inspire students.


I can discuss with you separately about how we use the data from our scouting system to inform our decision making process, which is “human based.” We do not simply take a quantitatively based list and use it directly as our draft list, nor do we rely entirely on quantitative information to develop match strategies. We also developed a method for converting qualitative assessments such as driving skill and defensive ability into relative quantitative measures. That data told us that 6443 was the best defensive robot on Carver and we weren’t surprised to see your alliance in the final.
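As an illustration of the kind of qualitative-to-quantitative conversion described here, one common approach is per-rater normalization: score each scout’s 1-5 ratings as z-scores so harsh and generous raters land on the same scale, then average across scouts. This is a sketch of that general technique, not necessarily the poster’s actual method, and all team numbers and ratings below are made up.

```python
from statistics import mean, pstdev

# Hypothetical 1-5 defense ratings from three scouts for four teams.
ratings = {
    "scout_a": {"254": 2, "1678": 3, "6443": 5, "971": 4},
    "scout_b": {"254": 1, "1678": 2, "6443": 4, "971": 3},
    "scout_c": {"254": 3, "1678": 3, "6443": 5, "971": 5},
}

def zscores(scores: dict) -> dict:
    """Normalize one scout's ratings so a harsh rater and a
    generous rater become directly comparable."""
    mu = mean(scores.values())
    sigma = pstdev(scores.values()) or 1.0  # guard against a flat rater
    return {team: (s - mu) / sigma for team, s in scores.items()}

# Average each team's z-score across scouts for a relative measure.
teams = next(iter(ratings.values())).keys()
defense = {
    team: mean(zscores(scores)[team] for scores in ratings.values())
    for team in teams
}

best = max(defense, key=defense.get)  # "6443" with this toy data
```

A nice property of this normalization is that each scout’s scores sum to zero, so no single rater can inflate the whole field.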

As for paper vs. computer scouting, the issue simply is the overwhelming mass of data at Champs. We used paper up to 2012 and found that we couldn’t scout every team and that we were missing key draft picks. That’s why we switched to a computer system in 2013, and we’ve never looked back. I will categorically say that a team that relies solely on paper, with no computer tallying of data, will have an inferior draft list. (BTW, 6443 was in a very unique position on Carver, being an alliance captain and the top defensive robot. You only had to focus on the top dozen offensive robots for your draft list, whereas the other offensive robots had to sort out both the other offensive robots plus the defensive robots, and screen off the bottom set of robots that don’t make either list: in other words, the entire field.)

The award is for the scouting SYSTEM, not the scouting results. I think describing the data collection process and its incorporation into the decision-making process would be the basis of the award. This is no different than the Entrepreneurship Award, or even the Chairman’s or EI award. And even the various engineering awards go beyond the simple mechanism or routine to how it’s implemented.

Finally, the mission of FIRST and FRC is not to stage a competition for the sole sake of competition. This isn’t the NFL or NBA (or even NCAA). It is a STEM education program that uses competition as motivation. We are training students for real-world occupations. Paper/pen exercises might have real-world applications, but the truth is that those are declining, and more importantly, graduates straight out of college (never mind high school) are rarely put into positions where they make key qualitative judgments, simply because they don’t have the experience and associated wisdom. To get to those positions, they first need to develop the quantitative analytic tools and skills and pay their dues. That’s what the computer-based scouting systems do. As a mentor, I’m less concerned about the effectiveness of the scouting system in competition than about what the students learned in developing and managing the system. I encourage innovations in our system, even if it seems like it’s working just fine. I want my students to learn, and that might even mean the system crashes sometimes.


I disagree with your assessments here. We do add a layer of observational scouts who supplement our quantitative data, which is useful, but that data is our starting point. In addition, we’ve found that we can communicate with the drive team much more quickly with real-time updates of our scouting database. We also use multiple scouts on each robot to ensure data accuracy. Regardless, we’ve found that the anecdotal assessments of individual scouts are often inaccurate when compared to the quantitative data we collect.
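As a toy example of the multi-scout redundancy mentioned above: assuming several scouts each record simple per-match counts (the field names here are hypothetical), taking the per-field median keeps a single mis-entry from skewing the stored value.

```python
from statistics import median

# Hypothetical counts from three scouts watching the same robot
# in the same match; one scout miscounted cargo.
observations = [
    {"hatches": 3, "cargo": 5},
    {"hatches": 3, "cargo": 5},
    {"hatches": 3, "cargo": 8},  # outlier entry
]

def reconcile(obs: list[dict]) -> dict:
    """Take the per-field median across scouts so one bad entry
    can't distort the value written to the database."""
    fields = obs[0].keys()
    return {f: median(o[f] for o in obs) for f in fields}

assert reconcile(observations) == {"hatches": 3, "cargo": 5}
```

With only two scouts per robot a median can’t break ties, so teams using this pattern typically assign an odd number of scouts or flag disagreements for review.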

We design our scouting app to limit the data collected to what’s needed, and make the interface as intuitive as possible. This is a great educational opportunity for our students. If your scouts are entering GIGO then you have a motivational problem.

We don’t limit ourselves to just the output from the scouting data. We use it as an input to our decision-making process, which we have developed to work in a fairly rigorous way, based on having worked with other top seeds at Champs since the schedule was shifted to allow night-before selection.

One other point: we have made a series of presentations at our Fall Workshops about how we set up our scouting system, so you can see more about how this works.

I think our record of what our scouting system has done for our team over the last 7 years speaks for itself.


What assessments are you referring to? I don’t remember challenging any scouting methods; rather, I described my own experience with apps and finding them limited in capability (the ones we tried). I do believe that trying to come up with scouting award criteria is very nebulous.

As for GIGO, that is usually a factor of WIIFM: if the scouts believe in whatever system a team uses, they will do their best at entering data correctly, whether via pen, voice recorder, or app.

Sure, 1678 has a great record and would not have been “as great” without scouting doing its job. I also credit the amazing engineering, the ability to think outside the box, and pushing the state of the art in competition. It also helps being around so many great teams that make each team up there better.

Not ignore, just not highlight a single team’s scouting system per competition with an award. Can you imagine the blowback from other teams’ scouts who also worked hard for three days straight? That would not be “inspiring,” especially if a pick of theirs that no one else wanted helped win the competition.

A Scouting Award would be mainly “strong alliance formation” based, and who is to say what formed that strong alliance? Was it the team that won the award, or perhaps the captain calling the shots? Maybe they used both sets of data. Hence it is nebulous what effect a single team’s scouting system had. What if they were a second pick?

For this award to gain traction, I think someone needs to come up with the criteria a SCOUTING AWARD would use, then let CD decide the value of it. Would that inspire students? Or would it cause bad feelings?

No one is arguing that scouting is not beneficial, or takes a lot of work and ingenuity. The whole premise of this post is “Award for Scouting Systems” not scouting itself.

Chairman’s is pretty popular too, from what I hear.

For those keeping score at home who had no idea what this means…

Definition of Nebulous from Google:


  1. in the form of a cloud or haze; hazy.
    “a giant nebulous glow”

synonyms: indistinct, indefinite, unclear, vague, hazy, cloudy, fuzzy, misty, lacking definition, blurred, blurry, out of focus, foggy, faint, shadowy, dim, obscure, shapeless, formless, unformed, amorphous; rare nebulose
“the figure was still nebulous—she couldn’t quite see it”

  2. (of a concept or idea) unclear, vague, or ill-defined.
    “nebulous concepts like quality of life”

synonyms: vague, ill-defined, unclear, hazy, uncertain, indefinite, indeterminate, imprecise, unformed, muddled, confused, ambiguous, inchoate, opaque, muddy
“his nebulous ideas about salvation”

Regarding the thread topic itself… I think an award for scouting would be fantastic. It wouldn’t motivate the students on my team to scout (blue banners do that), but the recognition from winning a scouting award, and the experience the students would gain from explaining the system to judges both seem really valuable!


Maybe I’m mistaken, but paper and pencil is a system. As a low-resource team, we don’t have the student or mentor numbers to do much development beyond the robot. Our big summer project is for the robot to “understand” its position on the field so we can start building autonomous functions in 2020.

If you’re only rewarding tech systems, then I feel you’re creating additional recognition for higher-resource teams.

Due to our minimal resources (two part-time, <50% programming mentors), paper and pencil fits us at this time. During the season our “system” did work in our favor. It identified that the best partner for us in Houston was 1746 Otto, who was ranked 44th. Same for 2990 Hot Wire (R10) and 5085 Laker Bots (R13) at Wilsonville (we rode that all the way to a blue banner). I’m sure some watching thought we didn’t know what we were doing when we made these picks. At every event where we were an alliance captain (four times), we never lost to a lower seed.

We did have the luxury (if you want to call it that) of not having to find a defensive robot in the selection process.

I’ll send you a message offline as well. I love the Circuits’ robots and team from afar. I would welcome the opportunity to learn more about and from your team at Chezy Champs.


You made a number of assessments in your post. I was addressing each in turn.

This is the most important one. I have no idea what you’re talking about with “blowback.” Is this about certain teams not winning the award? You mean there’s currently blowback from teams not winning engineering awards? Or not winning Chairman’s? Do you really believe that giving an award will create bad feelings, right after you put down the idea that awards should be given willy-nilly? Which is it: do we give awards on merit, with many potential losers, or do we give participation awards to everyone? We can’t take that concern seriously.

You keep going back to the evaluation being based on the quality of the alliance chosen and the success in the competition. That is not what this award is for. As I said, it would be for developing an effective data collection and decision-making process. Many teams wouldn’t even have an opportunity to show the quality of the alliance they would choose (see the side conversation about 1072 in this thread). In addition, scouting data is also useful for drive-team strategy; how would the outcome of that ever be evaluated? And this is no different than the engineering awards. At almost every competition, a team wins an engineering award for a mechanism that worked only sporadically on the field while the team did poorly in the competition. But did that team deserve not to win that award? Perhaps not, if the mechanism was awesome when it worked.

I’m wondering if you keep rolling back to this singular criterion for evaluating scouting systems because you have limited experience with developing an electronic scouting system. If you had that experience, you would see that it’s possible to evaluate the relative merits of different data analysis approaches and systems. It really is no different than the engineering award evaluations. (I once compared the statistical evaluation methods and results from 71 journal articles and reports against a set of criteria. That was really no different than what is being proposed here.)


I can see the merit of pencil & paper systems with limited resources. However, many/most of the awards tend to go to higher resource teams. The Entrepreneurship Award is about creating higher resource teams! The same goes for the winners of many of the engineering awards, and even the Chairman’s tends to go that way.

But if someone can demonstrate that their paper/pencil system is effective, coherent, well organized, and rigorous, then they should be in the running for the award. If the system sits in the head of a mentor (which is how a poster in this thread described their system a couple of years ago), then that “system” wouldn’t really fit the award, no matter what the competitive outcome. This would be about creating a system that could be duplicated by another team with equal output.

1746 was high on our list; I think they may have been our next pick after 7172 at Houston. We rarely look at quals ranking in our draft list, but their on-field performance was obvious. And 7426 also was high on our first-pick list, so that pick was no surprise to us. You built the perfect alliance for what you could get. That’s why you ended up in the final and pushed us so hard. But those teams also were standouts from early on that could be easily identified with a narrowed scouting scope.

We also know 2990–they were on our 2016 Champs alliance. They also were high on our list this year, but since they could only score low, we pushed them down. They were very good in the PNW this year. (Don’t know anything about 5085). In the small PNW Districts, it’s much easier to use paper/pencil systems. We were fine in 2012 until we went to Champs and were overwhelmed by the field size (100 teams then).

So we get back to the uniqueness of your situation this year, which you did a great job of exploiting. That ability is a strong indicator that you will be continually successful in the future. But it also means that it’s difficult to extrapolate your experience to other teams who are typically in a very different situation.

I think where I have a major issue with a “Scouting Award” is as follows:

  1. Scouting is never mentioned by FIRST; it’s something teams do to help win, and as such deserves no special FIRST merit, which tends to talk down the competition aspect. There’s no need for scouting if a team is NOT competing or trying to win. Other items, such as Gracious Professionalism (Chairman’s), robot design (EI), and safety (Safety), are all mentioned in the FIRST materials, repeatedly; there is a desire there for excellence.

Scouting is OPTIONAL and at a team’s discretion to do or not do. As such, I see no value in an award for something optional; it’s not in the spirit of enhancing the competition itself. This is why they award team spirit: it makes the audience happier. Scouting? Not so much.

  2. It seems to me most of the pro-scouting-award folks look at rewarding innovative scouting technology solutions over actual eyes-on-bots scouting. This seems to reward kids in team programs that value programming a scouting system. That will certainly be divisive and put at odds the many valid ways teams compete. This, of course, only applies to teams that scout and choose to do so.

  3. It’s sort of silly, as there will be no way to agree on what criteria would be involved. Is it just having the technology-driven system, without winning? Is it winning? Is it trying hard? Is it having the fanciest interface? Is it using new ways to share data?

I really don’t care if FIRST wants to do a scouting award. It’s their call. It would be interesting to see how the community would react to yet another award, especially one based on optional stuff.

Only for teams with limited resources??? Technology does not a scouting system make; observations can. Some teams choose how they want to do it.

Back to horse racing: if it were easy to pick a winner, don’t you think that, with all the money involved, someone would have created a program to pick a winner every time? Hasn’t happened. So thinking some HS students can game FIRST with a secret-sauce scouting program is also unlikely to happen regularly. Much of any scouting program’s perceived effectiveness, app or paper/highlighters, comes from many other factors. You have to be somewhat lucky too (match schedule, etc.).

Paper and highlighters are every bit as valid as coding a solution when it comes to scouting proper; there is no inequity there. Both involve high-level critical thinking. Neither should be considered lesser, or only for “low-resource teams,” to have merit.

With the cost of a FIRST competition entry… there are no low-resource teams.

What I am reading is that they want to reward process and quality over results or method. If paper and pencil were used to create an effective scouting process, then it deserves to be rewarded. But a system that takes twice as long to get the same information is not a good one, whether that is because of an unintuitive UI, needing to physically write and process data by hand, or having scouts recite all their results in iambic pentameter.

I don’t think anyone is saying “programmers don’t get enough love, let’s make another award for them,” but rather “let’s give an award to the people who have spent many hours and days developing a system to fairly evaluate performance and then use that information to make decisions about what to do next,” which is a very valuable skill in almost any walk of life.

It just so happens that the majority of those effective systems use computers, just like the majority of teams that win design awards use CAD, and the business plan probably uses Excel or similar. There can be effective solutions that don’t involve computers, for instance when it isn’t feasible for a team to take on the up-front opportunity cost of setting up and training on a computerized system, but that doesn’t mean that digitized systems aren’t often superior.

Either way, if you can describe why your paper-and-highlighter scouting system is the best choice for your team, then I see no reason why it wouldn’t be eligible for a scouting award.