Scouting: Quantitative vs. Qualitative Poll

I am leading our team’s scouting operation this year, and although we have a system in place, I was wondering how other teams weigh quantitative scouting (numerical data) against qualitative scouting (personal impressions and observed tactics). Please answer the poll and feel free to discuss the results.

Currently we weight them about equally, if anything leaning slightly toward qualitative.

The problem with quantitative scouting is that it’s prone to “garbage in, garbage out” while simultaneously looking trustworthy and reliable.

We humans have a tendency to give undue weight to numerical values simply because they are numerical values, without really putting much thought into where they came from and how meaningful they actually are. In my experience, I’ve found that, at least in FRC, qualitative impressions of a robot are quite a bit better than attempts to quantify performance.

Granted, there are some useful numerical scouting tools. OPR works reasonably well for offense in many games, for example. However, there’s simply so much information that is clear to an astute observer that can’t be easily quantified, and throwing it out simply because you can’t quantify it is unwise.
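For anyone unfamiliar with it, OPR is typically computed as a least-squares fit: each alliance’s score is modeled as the sum of its members’ contributions. Here is a minimal sketch in Python; the alliances, scores, and team numbers are made up for illustration, and this isn’t meant to represent any particular team’s scouting system.

```python
# Minimal OPR sketch: solve, in the least-squares sense, for per-team
# contributions x such that (sum of x over an alliance) ~ alliance score.
# The alliances and scores below are invented purely for illustration.
import numpy as np

alliance_results = [
    ([254, 1114, 118], 95),   # (teams on one alliance, that alliance's score)
    ([971, 2056, 610], 88),
    ([254, 971, 610], 102),
    ([1114, 2056, 118], 79),
]

teams = sorted({t for alliance, _ in alliance_results for t in alliance})
col = {team: i for i, team in enumerate(teams)}

# A[i, j] = 1 if team j played on alliance i; b[i] is that alliance's score.
A = np.zeros((len(alliance_results), len(teams)))
b = np.zeros(len(alliance_results))
for i, (alliance, score) in enumerate(alliance_results):
    for team in alliance:
        A[i, col[team]] = 1.0
    b[i] = score

# With a full event's worth of matches this system is overdetermined,
# and the least-squares solution is the usual OPR estimate.
opr, *_ = np.linalg.lstsq(A, b, rcond=None)

for team in teams:
    print(f"Team {team}: OPR ≈ {opr[col[team]]:.1f}")
```

The fit only sees alliance scores, which is part of why it does a reasonable job describing offense and says little about everything else an observer can pick up on.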

…and from a statistical perspective, the sample sizes often aren’t large enough either.

This is a big part of what constitutes “garbage in,” yes.

Along these lines, there are some very interesting cognitive science results that suggest that people make better judgments with extremely limited data sets when they rely on heuristics and intuition rather than explicit utility calculations. This is unsurprising, as the act of formalizing one’s reasoning in that way often greatly restricts the body of evidence that you’re actually using (since you only end up taking into account those things which you explicitly quantify), even if you’re not aware of it. While in situations where you have large amounts of data this drawback is hugely outweighed by the benefits, when you’re stuck with little information to begin with it can (and does) do more harm than good.

If you know what sort of robots you are looking for, quantitative data is very useful. As with most things, though, a mix of the two will probably lead to the best results.

It depends on whose qualitative impression, though. It is easy to find people who can record simple statistics for one robot per match. It is much harder to find people who can identify the strengths and weaknesses of robots and remember them for a big chunk of the teams at an event. Outside the top handful of teams at an event, I’d bet the average scout couldn’t tell you much of anything about around 50% of the teams.

My team uses quantitative results mostly to determine what defenses we should put out against any given robot(s). Qualitative and quantitative combine for our internal rankings of the robots heading into eliminations.
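For illustration, a blend along these lines is roughly what I mean by “combine”; the weights, scales, and function name here are made up and are not our actual formula.

```python
# Rough sketch of blending a quantitative average with a qualitative scout
# rating into a single internal ranking score. Weights and scales are made up.
def internal_score(avg_points: float, scout_rating: float,
                   quant_weight: float = 0.5, qual_weight: float = 0.5) -> float:
    quant_norm = avg_points / 100.0   # assume ~100 points is a very strong match
    qual_norm = scout_rating / 5.0    # assume a 1-5 qualitative rating
    return quant_weight * quant_norm + qual_weight * qual_norm

# Example: a robot averaging 62 points with a 4/5 scout rating
print(round(internal_score(avg_points=62, scout_rating=4), 3))
```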

Make sure that if you are focusing on quantitative data, you also keep track of each team’s partners and opponents. A team’s trend can be skewed if they are a middle-of-the-pack team doing all the work for two bottom-quarter robots, if they are the “cheesecake” alongside two extremely efficient teams, or if they face the “best” defensive team at the event.
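One way to keep that context attached to the numbers is to record it with every entry; the field names, team numbers, and scores below are invented for illustration, not any particular team’s schema.

```python
# Sketch of recording alliance context with each quantitative entry,
# so a team's trend can be read against who they played with and against.
# All names and numbers here are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class MatchRecord:
    team: int
    match: int
    points_scored: int                                  # whatever metric you track
    partners: list[int] = field(default_factory=list)   # alliance partners
    opponents: list[int] = field(default_factory=list)  # opposing alliance

records = [
    MatchRecord(team=3310, match=12, points_scored=40,
                partners=[254, 1678], opponents=[118, 971, 2056]),
    MatchRecord(team=3310, match=27, points_scored=22,
                partners=[4488, 5012], opponents=[1114, 2767, 610]),
]

# When reviewing a trend, show the context right next to the number.
for r in sorted(records, key=lambda rec: rec.match):
    print(f"Match {r.match}: {r.points_scored} pts "
          f"(with {r.partners}, against {r.opponents})")
```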