Qualitative Scouting

Hello all,
With the release of the 2017 game FIRST SteamWorks and all of the excitement that goes along with it I wanted to bring something to the forefront on the Scouting Forum.

There has been a lot of buzz on this thread about the seemingly impossible task of deducing the fuel scored by a single robot. There are plenty of great ideas over on that thread so I encourage you to give that a read before navigating any further.

The purpose of this thread is to address a sort of ‘mental block’ that I am seeing with regard to qualitative scouting: many teams seem to want to function purely in the quantitative space. In 2012, team 876 (re)introduced qualitative scouting into the mix as a simple ‘yes’ or ‘no’ reaction to a team’s performance in a match. It has since evolved into one of the most valuable statistics we track for a team over the course of a match. Not only does it serve as a sort of initial filter when evaluating teams on Friday night, it also acts as a safeguard that easily catches all the little details not captured by more traditional quantitative methods. What I’m trying to get at here is: don’t be afraid to work with approximations. **There are very few statistics over the course of the last several years that a team can have 100% certainty in, and this year is no different.**

I would rather keep track of 10 data points over the course of a match and have greater certainty in them than gather data that is more in-depth than necessary but carries greater uncertainty. Know what you need to look for in a team and how in-depth that data needs to be. Approximation can be an incredibly powerful tool; you just need to know your degree of certainty. Qualitative methods can cut down dramatically on complexity and offer an equally stable basis of comparison for scouting and strategy purposes.

There are some seasons where qualitative scouting is the best way to judge team performance, and this year seems like one of them. The problem I have always had with qualitative scouting is that two people watching the same robot might have different definitions of variables like speed, accuracy, and effectiveness.

In cases where variability is inevitable, it may be best to have a hybrid scouting system. Give your scouts parameters and options to choose from. For example, instead of asking “Are they an effective shooter?” ask “What percentage of shots did they make? Less than half, half, more than half, all…”
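To make the hybrid idea concrete, here is a minimal sketch of turning preset answer buckets into comparable numbers. The bucket labels and their numeric values are illustrative assumptions, not anything from a specific team's system.

```python
# Hypothetical mapping from a scout's preset choice to a number we can
# average later. The labels/values below are illustrative.
SHOT_BUCKETS = {
    "none": 0.0,
    "less than half": 0.25,
    "about half": 0.5,
    "more than half": 0.75,
    "all": 1.0,
}

def record_shot_rating(choice: str) -> float:
    """Convert a scout's bucket choice into a comparable number."""
    return SHOT_BUCKETS[choice.lower()]

# Averaging bucket values across matches gives a rough hit-rate estimate
# without asking scouts to count individual shots.
ratings = [record_shot_rating(c) for c in ["about half", "more than half", "all"]]
avg = sum(ratings) / len(ratings)
print(round(avg, 2))  # 0.75
```

Because every scout picks from the same buckets, two people watching the same robot can disagree by at most one bucket rather than by an unbounded free-form judgment.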

We always leave space for “notes” that perhaps don’t fit a particular question or area but are still important to know. These notes are great for observations about strengths, weaknesses, potential strategies for future matches, etc.

We also practice using our scouting sheet by watching other regionals, scrimmages, and practice matches. Handing someone a sheet to fill out without discussing what they are supposed to observe is a big mistake. We want to make sure that if we use notes and qualitative comments, we at least have scouts who all understand what we are looking for.

I agree with everything you say here. Having preset ranges means you can link numbers to qualitative judgments for comparison, and notes sections are always important; they can be the most valuable comparison tool at your disposal. The human variable needs to be taken into account with whatever system you may be using. I preach “Eyes-on-Field”: if you are not watching the match and are just inputting data, your degree of data certainty goes right out the window. Practice and familiarity with a system will only increase the certainty of your data.

To hop on what Catherine was saying, in the past we have found it very difficult to provide scouters with qualitative criteria and expect to get back useful information. What kinds of things have you measured like this in the past, and how do you use it on Friday night to help generate a pick list?

Additionally, qualitative data requires scouters to be 100% invested in the match. How do you encourage scouters to think critically during a match, noting important points and tossing out what’s not important?

Throughout the build season, we talk about what we think are important qualities to have in an alliance partner and (maybe more importantly) what are we scared of seeing in an opponent. I encourage my scouters to watch robots initially to place them into categories: fuel dumper, fuel shooter, gear gatherer, climber, or a hybrid combo of multiple areas. Then when we know what categories the teams fit in, watch to see how well they compare to others in that group.

I also try to tap into their competitive nature by saying, if team A is GREAT how can you still beat them? What do they do not so well? Or what can we or an alliance partner do to disrupt what they do so well? This is where those notes come into play. I like to have a mentor or two sitting with the students to point out things like, “Did you see how they pick up from there? Would that be helpful for our partner to do? What if we don’t get them though?”

If you encourage your students to constantly think about eliminations and assume they are going to be picking, it helps them focus on taking the notes you need Friday night. Then on Friday night we decide what kind of partner we need. For example, in 2015 we decided we had to have a team who could pick up totes from the floor since we used the chute. So that really narrowed our selection of teams.

Assume you’re going to be an alliance captain from the beginning. That’s the best way to stay motivated. And even if you aren’t a captain, you may still get picked for your scouting info. That has actually happened more than you would think.

I agree with this; there are certain criteria you know prior to the event that you would like an alliance partner to fill. These criteria have a tendency to evolve as the event progresses, but everyone seems to end up on the same page naturally. I have several students I place a lot of trust in to keep everything running smoothly. They are the ones who define how something should be scored, and they are often sitting next to a less experienced member of the team, guiding them. Another useful tool is a 3- or 4-match ‘floating average’ to examine a team over their most recent matches. If what your team is looking for in an alliance partner changes over an event, it will often be expressed as an ‘uptick or downtick’ in the qualitative data for a particular robot category. For example, if your team suddenly sees it needs a gear-scoring robot instead of a fuel-scoring one after an amazing qualification match #41, gear-placing robots all of a sudden become more valuable to you, and are ranked higher midway through the event because of this “need a gear placer” bias. You just need to know what you need in an alliance partner. A lot of this comes down to the analysis; it’s not actually the data itself.
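The floating average described above can be sketched in a few lines. The window size of 4 and the sample per-match ratings are assumptions for illustration:

```python
from collections import deque

def floating_average(ratings, window=4):
    """Average of the last `window` match ratings for a team.

    `ratings` is the team's per-match qualitative scores in match order;
    deque(maxlen=window) silently drops everything but the most recent.
    """
    recent = deque(ratings, maxlen=window)
    return sum(recent) / len(recent)

# A team that surges late in qualifications: the floating average reacts
# quickly, while a full-event average would dilute the improvement.
ratings = [2, 2, 3, 5, 5, 5]        # hypothetical 1-5 qualitative scores
print(floating_average(ratings))    # 4.5 (average of the last four: 3,5,5,5)
print(sum(ratings) / len(ratings))  # ~3.67 over the whole event
```

This is exactly why the recent window catches an ‘uptick’ that the whole-event number hides.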

As robochick said, it helps if you assume you are going to be in some sort of picking position, and it definitely helps if your team has a history of being in that position; it’s a different mindset. When we hold our scouting meetings, we can often guess quite closely how the first round will go. This is not based entirely on data; observation on the field and in the pits plays a big role. The real challenge with scouting is how the second round goes. We often have many sublists to fill the niches of that particular game. These lists are often derived from qualitative rankings of driver skill, overall shooter performance, tube placement, minibot line-up time, etc., and then adjusted based on a key quantitative stat for the particular category. (WARNING: this often results in many lists on the whiteboard, which can become overwhelming for students/mentors; be sure not to overstimulate the senses :smiley: )

I would like to think that our team functions just like that: a team. Everyone is invested in some way or another and is addicted to success. Sure, there is some variance in how excited team members are about the game, but as long as morale is high and the scouting shifts are not too long, everything seems to work out. Everyone wants to watch the matches, so as long as it is engaging there isn’t much of a problem.

As the scouting/strategy mentor, we have had solid success with qualitative scouting over the past two seasons; we started the scouting department two years ago (about 8-10 students). I highly recommend this approach to any team thinking of scouting the matches; it works. The scouting info helps the whole drive team greatly in gameplay and in per-game custom tactic discussions with partner teams. We use the numbers too, but only to verify what our actual eyes saw, and to re-evaluate teams on the numbers sheet that missed our radar. The students come up with the game-by-game strategy and talk to the drive team, and the drive team uses that info to partner with other teams. I try to scout all the games, and then on Friday night we compare notes and get the pick list ready. We try to do well enough on day 1 to be in the top 12, which gets you on every scout’s radar Friday night, higher if we can; any lower and we did not do our jobs/roles well somewhere along the line. Durability was a huge issue last year; an engineering design issue made it hard. A good learning experience to try to overcome.

So on Friday night, we (scouts) discuss what a perfect alliance with us would look like given the list, being realistic about likely pick order and looking for groupings each round, sort of like the NFL draft. Sometimes #38/60 has just what you need; we trust our eyes regardless of final position. Know your own strengths and weaknesses. We as scouts scout our own team like any other and try to estimate our finishing position. We then design a playoff strategy based on whether we can assemble that team. Our play in eliminations is usually different than in qualifications; try to find efficiencies with certain bot types.

Saturday is mostly “match strategy” to maintain/increase rank, plus a quick re-verification of our list to make sure no one is having serious issues, looking for upward (or downward) movers that might have missed (or made) our list and interviewing those teams to find the story. We then target certain teams as elimination partners, the ones that mesh to create a strong alliance, whether we are captain or not. This helped us as the #8 seed beat the #1 seed in two games; we were much stronger than our selection position last year, and the stories jibed. Part of it is marketing and having a plan for how to win if paired with your preferred bot type to execute the strategy. Be brutally honest in these discussions about what you can and cannot do, and catch stated “embellishments” using your scouting notes. Many teams overestimate certain things; eyes don’t lie.

We look for attributes of teams that would “help us” throughout the entire lead-up and competition, and take a lot of notes on tendencies, strengths, weaknesses, etc.

We watch any posted video, practice, or game of any competitor or partner, then look for the rest in-game.
We did video scouting for weeks ahead of our competitions in 2014, so we knew over half the field’s bot/drive team tendencies in both competitions well before we saw them live. Last year our events were back to back and early, so there was not much video.

From other scouting activities like picking a horse in a horse race: “class” is a huge indicator, and the same holds for robotics. Teams have histories, or “class,” which typically indicates how they will do on average each year. It is pretty easy to see this if you look at histories on The Blue Alliance. Recent performance is also a huge indicator; quick turnarounds usually mean a jump in placement at the second competition due to having already played the game. We enjoyed that last year with back-to-back events, and the same holds this year.

We look at it as a three prong approach for gameplay:

Engineering… Driving… Scouting/Strategy
All of equal importance in a season.

The issue I have with numbers-only or software solutions is sample size and uneven match pairings. Eyes and good notes don’t lie, especially if you have a team of eyes and limit what they evaluate. Focus on what traits are important. KISS. Avoid information overload and noise.

Our tools: notebook, pen, eyes, and highlighter, plus some Internet stat verification.
Knowing the desired attributes going in helps a lot. Then it’s a matter of finding those and assembling the list.

We typically go 24 deep on the ordered list, plus 6 outliers/specialty picks (8 alliances × 3 is 24); at regionals almost half the field goes. It’s a matter of finding the half that helps you. There are always really good bots that don’t pair well with us. With so many tasks, you alone cannot do them all, and trait duplication is sometimes not good. We look to fill in gaps, which tends to work well. Plus, it’s fun to watch all the action.

This season especially, qualitative seems like a good way to go due to the fuel deal.

Find the Citrus Circuits whitepaper on our scouting system. We divide our scouts into one group that focuses on individual robots to collect purely countable/quantitative data, and two scouts who watch alliances to record the qualitative assessments that take time to develop.

This, this is what I am trying to embody; very well put. The only difference is we are trying to use electronic DAQ to pick up a few quantitative data points, mostly for verification and some strategy (such as cycle time optimization for the alliance). The electronic method also helps with report generation; data tabulation is annoying to do by hand.
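The tabulation that is tedious by hand is trivially automated once match records are electronic. Here is a hypothetical sketch of rolling raw per-match records into a per-team report; the field names (`gears`, `climbed`) are made-up for illustration:

```python
from collections import defaultdict

# Hypothetical raw match records as a scout app might store them.
matches = [
    {"team": 876, "gears": 2, "climbed": True},
    {"team": 876, "gears": 3, "climbed": False},
    {"team": 254, "gears": 4, "climbed": True},
]

def tabulate(matches):
    """Roll raw per-match records up into per-team totals."""
    report = defaultdict(lambda: {"matches": 0, "gears": 0, "climbs": 0})
    for m in matches:
        row = report[m["team"]]
        row["matches"] += 1
        row["gears"] += m["gears"]
        row["climbs"] += int(m["climbed"])
    return dict(report)

print(tabulate(matches))
# {876: {'matches': 2, 'gears': 5, 'climbs': 1},
#  254: {'matches': 1, 'gears': 4, 'climbs': 1}}
```

A report generator can then render these totals per team instead of someone re-adding paper sheets every evening.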

Our team has for years utilized a hybrid scouting system in which we collect both qualitative and quantitative data. Our scouting veterans are paired up with newer scouters in order to help train them on what makes a good qualitative observation: smoothness/efficiency of driving, any obvious physical deficiencies that affect robot performance, efficiency of subsystems (e.g., noting that even though somebody can collect a gear off the ground, if they do it much more slowly than we can, then maybe we need to be the gear robot this match). However, quantitative scouting takes precedence for us when it comes to personnel and resources, because hard data isn’t up to interpretation, no matter how good your qualitative scouters may be.

This is the exact issue we have had with qualitative data. In the past, when we have put qualitative sections in our scouting sheet, we have gotten a wide range of very good data mixed with students messing around and hugely varied opinions.

While our team always tries to incorporate qualitative data such as driver skill level and speed, we focus on quantitative data. When our team analyzes our data, we also calculate the standard deviation in addition to our other metrics, which helps account for quantitative data discrepancies. In addition, having our scouts watch other regionals has also helped with our quantitative data collection. This year we are fairly confident that we can record everything except the number of balls scored low, which we are going to substitute with “low cycles.”
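The value of tracking standard deviation alongside the mean is easy to show. In this sketch the per-match gear counts are made-up: two teams with identical averages but very different reliability, which only the standard deviation exposes:

```python
import statistics

def summarize(counts):
    """Mean and sample standard deviation of per-match counts."""
    return statistics.mean(counts), statistics.stdev(counts)

consistent = [3, 3, 4, 3, 3]   # hypothetical per-match gear counts
streaky    = [0, 6, 1, 7, 2]   # same mean, wildly different matches

for name, data in [("consistent", consistent), ("streaky", streaky)]:
    m, s = summarize(data)
    print(f"{name}: mean={m:.1f} stdev={s:.2f}")
# Both teams average 3.2 gears, but the streaky team's stdev is ~7x larger,
# so the consistent team is the safer pick.
```

This is one reason a mean alone can mislead a pick list: it hides exactly the match-to-match variance that decides eliminations.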

Basically, while in our experience qualitative data can be valuable, words are far more subjective than numbers.

I think the varied data is not a result of qualitative vs. quantitative; it’s how invested people are in the match. When you have too many categories for a scout to gather info on, you are potentially subject to any number of distractions, especially from the data entry. I believe this is best avoided by collecting only what is necessary during the match. Some stats can be evaluated before or at the end of a match, mainly the standard qualitative ones (driver skill, time management, etc.); this keeps eyes-on-field time at a maximum.
Utilizing the concepts I just stated, we will only be keeping track of 9 different things over the course of teleop, and several of them are simple true/false (e.g., climbed). A simplified UI has been one of my main foci this season, as it preserves that all-valuable eyes-on-field time. It is my hope that by having only a few categories subject to opinion, we can glean the benefits of a qualitative category with the consistency of a quantitative one. This, coupled with the ability to filter out data from a specific scouter if necessary, should put us in a strong position to make an accurate pick list.

A simplified UI, coupled with the ability to filter out any known discrepancies, allows a highly qualitative system to operate efficiently and reliably.
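Filtering out a known-bad scouter before building rankings is a one-liner once each record carries the scouter's name. This is a hypothetical sketch; the record layout, names, and ratings are invented for illustration:

```python
# Hypothetical per-match qualitative ratings, tagged with who entered them.
records = [
    {"team": 876, "scouter": "alex",  "rating": 4},
    {"team": 876, "scouter": "sam",   "rating": 1},  # known-bad vantage point
    {"team": 876, "scouter": "casey", "rating": 4},
]

def team_rating(records, team, exclude=()):
    """Average rating for a team, skipping any excluded scouters."""
    vals = [r["rating"] for r in records
            if r["team"] == team and r["scouter"] not in exclude]
    return sum(vals) / len(vals)

print(team_rating(records, 876))                   # 3.0 with all data
print(team_rating(records, 876, exclude={"sam"}))  # 4.0 once filtered
```

Tagging every entry with its scouter is the cheap upfront cost that makes this kind of after-the-fact cleanup possible at all.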

Sorry, I may not have made this clear. The quantitative data I got during the same matches was spot on. The variation problems were not intrinsic to the qualitative data collection methods or the categories being collected, but more so to things like low visibility (the seating that included power outlets was pretty bad). The metrics we use eliminate the small variability that resulted.

We similarly use few, simple categories in order to maximize time watching the match, and the small amount of additional notes and other qualitative data we collect is very important. In some categories, though, I have had significant issues with qualitative data, especially as it relates to things like driver skill and efficiency. In the last two years especially, I saw that the quantitative data would often tell a much different story than the qualitative: because a robot moved more cleanly, scouts would perceive it to be more efficient at scoring points when in reality it was not.

I’m curious how you plan to analyze the qualitative data; do you have a specific process that you use? I have found that our team’s z-score metrics give a very stable basis of comparison and are completely objective. Even this year, I am fairly confident scouts can count things like high shot numbers, and by simplifying other categories such as low boiler scores to low boiler cycles, I believe that qualitative data will not hold a significant advantage.
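For readers unfamiliar with the z-score approach mentioned above: it expresses each team's stat relative to the field's mean and standard deviation, so different stats land on one comparable scale. The team numbers and fuel-cycle counts below are made-up, not event data:

```python
import statistics

# Hypothetical fuel cycles per match for five teams at one event.
field = {"254": 9, "876": 6, "118": 7, "2056": 8, "33": 5}

mean = statistics.mean(field.values())    # field average
stdev = statistics.stdev(field.values())  # field spread

# z = (value - mean) / stdev: positive means above the field average,
# and the magnitude says how far, measured in standard deviations.
z_scores = {team: (v - mean) / stdev for team, v in field.items()}

for team, z in sorted(z_scores.items(), key=lambda kv: -kv[1]):
    print(team, round(z, 2))
```

Because every stat is normalized the same way, a team's gear z-score and fuel z-score can be summed or weighted directly when building a pick list.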

TLDR: In my experience qualitative data has hidden fluctuations and human error. I do not expect any possible fallacy in quantitative data to make it less valuable than qualitative this year.

Qualitative scouting is something that is super important if you want to assemble a good alliance when you get into alliance selections. My team finds a couple students/mentors that truly love scouting and strategy and have them focus more on the qualitative things. It’s hard for a first or even second year student to come in and know what is being sought after perfectly.

I truly believe qualitative data can make or break a team when they’re picking an alliance. It’s important to know things like ability to play defense, ability to get around defense, shooting locations, driver habits etc.

These are some of the things that can help you find two other robots that work well with your robot. This is also some of the most important information when it comes to beating a tough alliance. Having qualitative data on their drivers’ habits regarding shooting/defending can allow you to completely counter even an elite robot.