Qualitative Scouting?

Hey y’all,

For the past 3 years, I’ve been part of the scouting team, and we have always done quantitative scouting, similar to most teams.

One detail I’ve always noticed is that the data can tell you “these teams are really good” and “these teams are really bad,” but there’s a group of teams in the middle that are pretty jumbled in terms of how good they are.

“Mid-range” team rankings can be pulled from sites like The Blue Alliance.

I am wondering if having people write out answers, rather than just circle numbers, would give a better idea of which “mid-range” teams are closer to the high end.

Any thoughts or past experiences?

I think having people write out answers in order to elaborate on the robot’s strengths and weaknesses is a solid plan, especially since you’ll want to know those qualities come alliance selection.

Additionally, if you want to stick with the circling-numbers approach (perhaps in combination with written comments), we’ve used an Elo rating system that can yield some decent insights into how teams relate to one another, beyond just an average rating.
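For anyone curious what that looks like in practice, here is a minimal sketch of an alliance-based Elo update. This isn’t necessarily the exact system described above: the K-factor, starting rating, averaging of team ratings into an alliance rating, and the team numbers are all illustrative assumptions.

```python
# Minimal Elo sketch for FRC matches: each alliance's rating is taken
# as the average of its three teams' ratings (an assumption), and every
# team on the alliance gains or loses the same amount.

def expected(r_a: float, r_b: float) -> float:
    """Probability that alliance A beats alliance B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, red: list, blue: list, red_won: float, k: float = 32.0):
    """red/blue are lists of team numbers; red_won is 1, 0, or 0.5 for a tie."""
    r_red = sum(ratings[t] for t in red) / len(red)
    r_blue = sum(ratings[t] for t in blue) / len(blue)
    delta = k * (red_won - expected(r_red, r_blue))
    for t in red:
        ratings[t] += delta
    for t in blue:
        ratings[t] -= delta

# Example team numbers, purely for illustration.
ratings = {t: 1500.0 for t in (254, 118, 910, 1678, 2056, 3310)}
update(ratings, red=[254, 910, 3310], blue=[118, 1678, 2056], red_won=1)
```

After enough matches, sorting `ratings` gives a relative ordering that can separate those jumbled mid-range teams better than a single average rating does.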

I do a lot of qualitative scouting, and I find it helpful to focus on specific criteria. Here are a few things I like to watch, especially for sorting the middle-tier teams:

  • Driving ability - both skill and decision-making
  • Cycle times - do they cycle efficiently and use their time well?
  • Patterns - do they do the same thing every match? This can help your drive team out a lot when you play against them
  • Specific strengths and weaknesses - do they do one type of shot really well?

Qualitative scouting is something that gets easier and better the more you do it and think about it.

We are qualitative first, quantitative second…

I look for bots that help us, ones I can envision as part of a strong alliance of ours IF we are hopefully captain (those that complement what we can do and/or add uniqueness). I also look at our paired partners and competitors and any recent past matches. All scouted bots receive HIGH and LOW score range estimates for individual contribution, plus a consistency rating <– our main metrics.
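The post above doesn’t spell out how those HIGH/LOW estimates and the consistency rating are computed, but here is one simple interpretation, assuming per-match contribution scores; the 0-to-1 consistency scale is my own assumption:

```python
# Hypothetical take on HIGH/LOW range + consistency: HIGH/LOW are the
# best and worst observed contributions, and consistency penalizes
# spread relative to the average (1.0 = identical score every match).
from statistics import mean, pstdev

def range_and_consistency(scores: list[float]) -> tuple[float, float, float]:
    high, low = max(scores), min(scores)
    avg = mean(scores)
    consistency = 1.0 - (pstdev(scores) / avg if avg else 0.0)
    return high, low, consistency

print(range_and_consistency([12, 15, 14, 13]))  # -> (15, 12, ~0.92)
```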

Along with that, be honest about your own capabilities: we started out a jack-of-all-trades and ended up a 100% topper/noodle filler as the matches went on. Play to your strengths, then find others to fill in; that’s how you pick the strongest team. Figure out what you do best and find the rest. We then knew we needed stackers to win. It’s OK to admit your weaknesses and switch up strategy midstream.

This dramatically shrinks the pool I need to “active” scout with my team. We go about 22 deep in our final list.

I find that one can over-scout, so it’s best to pare down to the basics: do they help or not, and what are their tendencies? Also, is there an HP (human player) advantage? Is there too much overlap?

I think you can tell a lot just by getting to know some teams visually and looking for consistency. I’m not a huge numbers believer; it’s more about how you play the game, and consistency, with eyes on the bot.

Since we pre-rank the entire pool … we see certain trends as matches go on.

During quals we carefully watch two to three matches ahead for all partners and competitors, taking copious notes and strategizing for the next match. We also identify non-partners/competitors and place them on the watch list; IF one is very special, we make time to see another of their matches.

I see how “strong/veteran” teams get matched up together and “weak/rookie” teams get matched up together in quals alliances. That is why I don’t trust the numbers alone: the matching seems to put similar strength levels together, sometimes making weak teams score artificially low and strong teams score artificially high, so I take scores/stats with a grain of salt. Instead, I look for individual bots that stand out no matter who their partners are, then confirm my observation with stats as we make our list.

It got to a point last year where it was comical with a top-two team: “you have to be kidding, they’re partnered with them too? Must be nice.” In one match, all three top teams were partners, against rookies and lower-tiered teams. Needless to say, it was a blowout.

Take any #1 or #2 ranked team going in, or a low-## “older” team, and look at their partners in all matches. Then do the same for a low-ranked/rookie team; you’ll see the trend. Not saying it’s on purpose, just that I notice it. Could be the algorithm they use to schedule matches.

We got pretty good at guessing the score of each match before it was played this way. Not many surprises. We would tell our drive team which matches they would win and which they would struggle in and have to be on point to win.

For instance, one low-## team that was basically a drive base was ranked ultra-high for most of a regional due to great partners throughout (= artificially high).
Another with a characteristic we needed, a “great HP rookie stacker,” was ranked low… but was high on our list (artificially low). Again, due to their partners.

With only 8-10 matches, this limited set of assigned partners can sway results.
Are they a fit? That’s the most important question. Trust your eyes, not just the #'s on a stat sheet. Eyes do not lie after a few matches, even with a 60-deep pool.

My two cents on the matter: use quantitative, or at least empirical, measures as much as possible. You can track more than score with numbers.

All of this can be tracked with numbers, trust me! You just have to think outside the box. Instead of asking how good a driver is, ask how bad they are (do they constantly get fouls, fail tasks, etc.?). Instead of asking how well a team cycles, ask how many cycles they complete per match. Think through your measures and decide what’s important to you and what’s not. Just be sure your scouts are recording data, not opinion pieces.
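As a concrete illustration of recording countable events rather than opinions, here is a small sketch; the field names (cycles, fouls, failed tasks) are illustrative, not a standard schema:

```python
# Track countable events per match, then compute per-match averages.
from collections import defaultdict

match_records = [
    {"team": 910, "match": 12, "cycles": 7, "fouls": 0, "failed_tasks": 1},
    {"team": 910, "match": 18, "cycles": 5, "fouls": 2, "failed_tasks": 0},
]

totals = defaultdict(lambda: defaultdict(int))
matches_played = defaultdict(int)
for rec in match_records:
    matches_played[rec["team"]] += 1
    for field in ("cycles", "fouls", "failed_tasks"):
        totals[rec["team"]][field] += rec[field]

for team, counts in totals.items():
    n = matches_played[team]
    print(team, {f: c / n for f, c in counts.items()})  # per-match averages
```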

Scouting can be such a buzzkill. We did paper scouting for years: an arduous task that can become a useless nightmare. Penmanship and writing skills can be a hindrance to this approach, an effect of the digital age.

Last year, we moved to a Google Forms process where students use their smartphones to check boxes or tally the actions of the robot they are observing.
At the end of the form, there is a text box where students “speak” their opinions and speech-to-text converts them.
This has given us better qualitative statements that are easier to read and organize over the course of a competition.
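If you want to crunch those Form responses afterward, one approach (assuming the Form is linked to a Google Sheet that you download as CSV; the column names here are placeholders for whatever your form actually uses) is a short script like this:

```python
# Collect every free-text comment per team from an exported responses CSV.
# "Team Number", "Match", and "Comments" are assumed column names.
import csv
from collections import defaultdict

comments = defaultdict(list)
with open("scouting_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["Comments"].strip():
            comments[row["Team Number"]].append(
                f'Match {row["Match"]}: {row["Comments"]}'
            )

for team, notes in sorted(comments.items()):
    print(team)
    for note in notes:
        print("  -", note)
```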

The students enjoy this process more, are able to watch more of the match, and we end up with less scouting team burnout.

I agree that there’s definitely a benefit in qualitative information, especially given the limited number of matches in a tournament, and the development of teams during each tournament. Some teams get markedly better at the tournament, as they work through their issues or get with the meta-game. Other teams get markedly worse at the tournament as their robot gets worn down by the full contact or drivers lose their composure. Try to provide a sufficiently specific rubric (set of criteria) to reduce variation among your observers.

My team uses an app for our scouting which is highly quantitative, so we just have 6 students doing this every match; it’s super easy for anyone to do. We have found the qualitative data a little more difficult to gather, so we usually put a few of the more experienced students on that job. They gather information that would be helpful for playing against a team, or things that might be important going into alliance selection. Both types of data are super important for a team to be successful.

I should clarify that we do plenty of quantitative scouting as well, and I agree that hard data is really useful. The things I listed are things that I find it’s helpful to take notes on qualitatively, although they certainly can be tracked quantitatively. Scouting can definitely vary a lot from team to team, and we like to combine our data with more subjective opinions.

I agree that they can be tracked with numbers, but I personally don’t think they should be. While it’s easy to point out driver faults, a driver who isn’t a bad driver isn’t necessarily a good driver. Things like driver skill and driving style are very hard to place into a value system accurately. I would trust a scouter/strategist who clearly understands the difference between a bad, decent, or good driver more than a database listing fouls, especially since what makes a bad or good driver is really just an opinion itself. Failure to accomplish tasks may also be a robot error, not necessarily a reflection of the driver. As for the other qualitative measures: cycles per match doesn’t necessarily work if a team decided not to focus on cycles in some matches. Same with patterns. How many possible patterns would you need to capture every strategy a team can run?

Oh yes, lots of them. We added qualitative scouting into our system after a first-year scouter voluntarily started taking notes during matches he wasn’t doing quantitative for. We didn’t even know he was doing it until he walked up to the lead scout with a stack of 20-30 pieces of notebook paper covered in notes. In fact, his notes were so helpful that we added formal qualitative scouting to our system at the next event we attended.

When done properly, qualitative scouting can have a massive impact on your ability to make a strong picklist. Of course, quantitative data and strategic needs always come first, but when two or three teams seem almost exactly the same, the notes provide an amount of clarity that nothing else can.

Here’s my advice to you:
- Either use an app or a laptop to enter notes. Transcribing handwritten notes can be painful.
- If you’re doing what I described above, make sure you use CSV formatting so the data can be imported into an Excel spreadsheet for easy organization. I’ll post a few of my formulas for doing lookups on this kind of data (a rough stand-in sketch follows this list).
- Make sure the scouters know what you want them to write about. I’ve had qualitative scouters write (and I quote) “YAS LUV” as a qualitative data entry. I’m not kidding.
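The poster’s actual Excel formulas aren’t included here, but a rough Python stand-in for that kind of lookup, against a hypothetical notes.csv with Team, Match, and Note columns, might look like:

```python
# Look up all qualitative notes recorded for one team.
import csv

def notes_for(team: str, path: str = "notes.csv") -> list[str]:
    with open(path, newline="") as f:
        return [
            f'Q{row["Match"]}: {row["Note"]}'
            for row in csv.DictReader(f)
            if row["Team"] == team
        ]

print("\n".join(notes_for("910")))
```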

Excel works well.

I usually just take notes and get eyes on bots a minimum of 2 matches ahead for future partners and one ahead for competitors (sometimes they are the same), and keep an eye out for others we do not face that can “help”.

I think there is so much to track this year, so I’m keeping it really simple so as not to get stat-overloaded.

Can they help us? What uniqueness do they offer? Would we want them on our alliance? What do they get stuck with? Any Auto? Any Scale/challenge?

The rest, to me, is fluff. I like to simplify. KISS.

I think reliability, teamwork and consistency will be huge this year…even more than most years.

Evan: " In fact, his notes were so helpful that we added formal qualitative scouting to our system at the next event we attended."

“When done properly, qualitative scouting can have a massive impact on your ability to make a strong picklist.”

Agree on both counts. We use a website with a mobile-friendly interface for match scouting and pit scouting. Notes are a big part of both. When we train for scouting, we stress taking notes.

Qualitative information is 100% more useful than unused or misused quantitative information.

So many teams try to scout too many numbers. You have to be careful not to mis-record the data, underuse it, or draw strange conclusions from it.

Add me to the qualitative scouting hype train. If given a choice between only a qualitative or only a quantitative system for an average team, I would choose the qualitative one 9 times out of 10.

In my experience heading up our team’s scouting department, both qualitative and quantitative scouting can be VERY helpful, but you need to address the inevitable issues before the competition:

Generally speaking, the problem with qualitative information is that different scouts have different reactions to the same event, so some form of training/rubric is extremely helpful, even 100% necessary if your scouts are, like ours, younger team members.

If you go quantitative, you need to make sure that you can accurately interpret the data you get. If you don’t attach any external meaning to a statistic you collect (shots, cycles, etc.), it isn’t really a statistic any more, and it loses its value as a result. It’s also a good idea to add a “comments” area: sometimes it’s not obvious whether action X should be counted in statistic Y, so allowing the individual scout to write it down is quite helpful.

You’ve brought up a couple of really good points. Any team looking to do good qualitative scouting should record who is taking which notes. Then you can go back and see, “okay, this person said this about this team, and this person said that about the same team.” This helps mitigate the effects of scouter bias, and discussion between disagreeing qualitative scouters can bring out amazing details about a team’s strengths and weaknesses. Also, track the match numbers corresponding to specific notes; this lets you easily explain anomalies in quantitative performance.
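A sketch of what note records with scouter attribution and match numbers might look like, plus a loop that surfaces disagreements for discussion (the structure is illustrative, not any particular team’s format):

```python
# Notes carry team, match, and scouter so bias and anomalies can be traced.
from dataclasses import dataclass

@dataclass
class Note:
    team: int
    match: int
    scouter: str
    text: str

notes = [
    Note(910, 23, "Alex", "Drove around defense well"),
    Note(910, 23, "Sam", "Hesitant under defense"),
]

# Group notes by (team, match) and flag entries where scouters disagree.
by_key: dict = {}
for n in notes:
    by_key.setdefault((n.team, n.match), []).append(n)

for (team, match), group in by_key.items():
    if len(group) > 1:
        print(f"Team {team}, match {match}:")
        for n in group:
            print(f"  {n.scouter}: {n.text}")
```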

Remember, qualitative data is helpful alone, and so is quantitative data. However, when the two are combined efficiently and effectively, you can derive much more from the data than you could with either one alone.

As many people have said, both quantitative and qualitative (or as we call it on 910, objective and subjective) scouting are incredibly useful in different ways. Both have their strengths and their weaknesses. On 910, we do both but the jobs are handled by different people.

The objective/quantitative data is gathered via paper and pencil scouting on sheets mainly involving tally marks in boxes. These sheets contain a box for notes so that these scouts can record things they notice that are not accounted for on the sheet, but this is not their primary task. This is what the majority of scouts are doing at a competition. (Anywhere from 6 to 24 people depending on how many of our team members are able to attend competitions)

The subjective/qualitative data is the responsibility of a much smaller group. These scouts use a legal pad to simply take notes on robot features, driving ability, general strategies, strengths, weaknesses, etc. These scouts earn their position: they take a test before our first competition, and the head scouts and I look over their answers to determine who will be given the position. We also factor in our observations of their abilities as objective scouts. In the past this group has been as small as two people, but as our scouting system has become more refined and our team has grown, we are lucky enough to have 6 subjective scouts this year, allowing them to each take a position (R1, R2, etc.) and focus on only one robot per match.

While I am lucky enough to be working on a fairly large team, our overall system can be scaled down to as few as 4 people if absolutely necessary.

tl;dr In my experience both Qualitative and Quantitative scouting are important components of a successful scouting system.

How many scouts do you have available?

Our system uses 8 scouts; we started it in 2013. We believe our standard scouts are too inexperienced to give good relative qualitative information, and we’ve found that asking them to track it distracts them too much from inputting the key quantitative data we use. So we have 6 scouts, one per robot, tracking scoring and other measurable metrics. This is the basis for our 1st pick list.

We then have 2 other “superscouts” who each watch one alliance. They rank the teams within each alliance 1 to 3 across 4 qualitative parameters such as evasion, blocking, speed, and pushing ability. Across many matches, these rankings provide fairly good measures of relative ability. We use this data primarily for our 2nd pick list.
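Here is a minimal sketch of how such superscout rank data could be aggregated; averaging each team’s 1-to-3 ranks per parameter across matches is my assumption about the math, and the team numbers are examples:

```python
# Average within-alliance ranks (1 = best, 3 = worst) per parameter.
from collections import defaultdict

PARAMS = ("evasion", "blocking", "speed", "pushing")
rank_sums = defaultdict(lambda: defaultdict(int))
matches_seen = defaultdict(int)

def record(alliance_ranks: dict) -> None:
    """alliance_ranks: {team: {param: rank}} from one alliance in one match."""
    for team, ranks in alliance_ranks.items():
        matches_seen[team] += 1
        for p in PARAMS:
            rank_sums[team][p] += ranks[p]

record({254: {"evasion": 1, "blocking": 2, "speed": 1, "pushing": 3},
        910: {"evasion": 2, "blocking": 1, "speed": 2, "pushing": 1},
        118: {"evasion": 3, "blocking": 3, "speed": 3, "pushing": 2}})

for team in rank_sums:
    avg = {p: rank_sums[team][p] / matches_seen[team] for p in PARAMS}
    print(team, avg)  # lower average rank = stronger relative showing
```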

We also have mentors in the stands taking some added notes on peculiar or particular aspects that we factor in, but those are not extensive.

This is the system we’ve found to work best. We have level I scouts, level II scouts, and defense coordinators.

Level I scouts track solely quantitative data on Windows tablets, 1 scout per tablet, 6 scouts per match. Level II scouts track qualitative data on laptops, usually set up behind the level I scouts in the stands, 1 scout per laptop, 3 scouts per match, each watching 2 robots. Level II scouts are normally pre-selected, as they must demonstrate the ability to watch 2 robots at a time and still record accurate qualitative data. Level I scouts can be any team member not on the drive team/pit crew.

We’ve worked out a system this season where we’re hardwired together in the stands, so all the data collected on each tablet/laptop updates in real time across the system. Then, 2 or 3 matches prior to the one we’re competing in, we print out strategy sheets, which compile all the data collected throughout the day about the teams we’re with and against in our match. This gives our drive team an overview of what they’re in for and lets us help them decide on a strategy.

Both our qualitative and quantitative data are pieced together to make up the strategy sheets, so our strategists get a complete picture of every robot in the match.
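A toy sketch of what assembling such a strategy sheet could look like (the data structures, stats, and team numbers are purely illustrative):

```python
# Merge quantitative averages and qualitative notes for the six teams
# in an upcoming match into one printable sheet.
stats = {254: {"avg_cycles": 6.5, "avg_fouls": 0.2}}
notes = {254: ["Fast cycler, struggles under heavy defense"]}

def strategy_sheet(red: list, blue: list) -> str:
    lines = []
    for label, alliance in (("RED", red), ("BLUE", blue)):
        lines.append(f"== {label} ==")
        for team in alliance:
            lines.append(f"{team}: {stats.get(team, {})}")
            for note in notes.get(team, []):
                lines.append(f"  - {note}")
    return "\n".join(lines)

print(strategy_sheet(red=[254, 910, 3310], blue=[118, 1678, 2056]))
```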

I think I had a post about this last year. Qualitative scouting can be good if it’s detailed and if scouters can properly recognize their own biases. If it’s not detailed, like our team’s qualitative notes last year… well… war flashbacks of our team captain standing silent at alliance selection for 10 minutes.