Your scouts hate scouting and your data is bad: Here's why

Looking through this thread, your data collection caught my eye. I would love to learn more about ZipGrade + Scantron sheets:

  • What are the biggest “gotchas”?
  • How difficult is it to set up the data pipeline?
  • How difficult is it to create the sheets?
  • How easy/hard is it to do on-the-fly metrics?

Hi Katie,

Zipgrade is a teacher grading system that is very inexpensive (about $10 a year). Our lead scout, Chip, spent around 150 hours last year developing our Scantron in conjunction with Zipgrade. He now can make a new season’s Scantron in just a few hours. He is willing to share this with others. There are aspects of the Scantron sheet that remain the same every year and parts that can be customized for the current year’s competition.

The wonderful thing about the Scantron is that it is easy, and the scouters much prefer it to paper and pencil. The database manager loves it because it is extremely easy to upload (you scan each sheet with the phone’s camera; we see 99.94% accurate transfer of data). It has an advantage over a pure app: you can go back to the hard copy if the data looks unreasonable (say, 55 cargo in a match). Also, electronic data sometimes gets corrupted or lost; if necessary, there is a hard copy from which to re-enter the data.

We use Tableau to analyze our data. Chip takes the Zipgrade data and transfers it to an Excel file. Tableau reads the Excel file. Tableau is wonderful. It’s extremely easy to do “on the fly” metrics. However, there is a learning curve with Tableau. We have trained other teams in our Scantron system. Some use Tableau to analyze the data, others stick with Excel.
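For anyone curious what the ZipGrade-to-spreadsheet step can look like, here is a minimal sketch. The column names and answer encoding below are invented for illustration — ZipGrade’s actual CSV export will differ, and the pipeline described above lands in Excel for Tableau rather than staying in Python:

```python
import csv
import io

# Hypothetical ZipGrade-style export: one row per scanned sheet.
# Column names and the A-D encoding are assumptions; adjust them to
# match your team's actual sheet design and export.
SAMPLE = """StudentID,Q1,Q2,Q3
1234,A,B,C
5678,B,B,D
"""

# Bubbled answers map back to scouting values (0-3 encoded as A-D).
DECODE = {"A": 0, "B": 1, "C": 2, "D": 3}

def decode_rows(text):
    """Turn raw ZipGrade-style rows into numeric scouting records."""
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        records.append({
            "team": int(row["StudentID"]),  # team number stored in the ID field
            "hatches": DECODE[row["Q1"]],
            "cargo": DECODE[row["Q2"]],
            "climb_level": DECODE[row["Q3"]],
        })
    return records

records = decode_rows(SAMPLE)
```

From here, writing `records` out to a CSV or Excel file gives Tableau (or plain Excel) something it can read directly.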

As in any system, there is a learning curve. We learned that you must have proper lighting to photograph the Scantron. It’s not difficult to get that lighting, but it’s best to practice in different environments so that you know what will be sufficient for the task. It’s also critically important to do scouter training. We use the various Zero Weeks to train our scouters, check for errors, and give them a chance to ask their questions.

By “data pipeline,” do you mean how the data gets from the match scouter through the various processes until it reaches the Drive Coach? Chip has set up his database and a “Mission Control” system that addresses this. If there are no errors (as determined by our quality control person), data is available in Tableau about 2 minutes after the end of the match. If a sheet has to be sent back to a scouter, it’s about 5 minutes. Analysis and preparation for match strategy is variable.

We love the system. Let us know if you have any additional questions.


For the record, I also have an open source “scantron” scouting program here if you’re interested. It doesn’t use proprietary scantron software, so all you need is a scanner (or a phone scanner app) and a laptop that can view csv files. We chose not to use it this year for various reasons, but in testing it worked very accurately and was fairly quick for scouting and entering data for practice matches.


Marxist scouting philosophy? I’ll bite. I wanna hear the rest.


Two years ago we completely changed the scouting culture of our team.
The scouting leader/captain became part of our member leadership structure. It also became a competitive position (students apply for the position).
They receive training and practice to ensure consistency in data collection, so I disagree with the idea that everyone should scout.
The senior members on the scouting team have a voice in the layout of the scouting sheets (yes, we still do paper/pen scouting).
Scouts only scout during regional - they are members with specialized training.
Their breaks are always as long as their scouting time (sometimes longer, if we run 3 scouting teams).
They are invited to participate in the early stages of the “pick list” meeting (naming teams that stood out).
They have water and light snacks available to them during scouting.
They are always thanked for their data.
“BAD” data will sometimes be collected - just as a drive team may have a bad match. It happens. We try to determine why it was collected and discuss whether it can be corrected (or whether it was a one-time event).

Something I’ve been contemplating and looking for people’s thoughts/feedback…
Most teams have a handful of scouters sit in the stands all day and record data match after match. This tends to be boring and not overly engaging.

What if instead you took teams of 1-2 scouters each and said “You are going to be the expert for 5 teams at the event”? Their job would be to pit scout and match scout those 5 teams. Then, when your team played with/against one of those 5, that mini scout team would brief the drive team/lead strategist. The main idea is to go from a broad/basic knowledge of the teams at the event, with students sometimes blindly inputting data, to a more hyper-focused view of each team. I think it would give more ownership to the individual scouters and provide some accountability: if there’s a question about team 1234, then that scouter/mini scout team should be expected to provide some information. Hopefully they would be able to tell you if that team was getting better/worse, had comms issues, etc.

Thoughts? Do any of your teams do something similar to this?


We’ve been doing this since 2014 and it’s been fantastic for us. Before having “SuperScouts”, our scouts really struggled to remember teams or have a good idea of what their strengths or weaknesses were outside of what you could learn from the data.

We’ve found that ~15 robots per SuperScout is a good number, ideally with each robot being covered by two or three SuperScouts to promote discussion about each robot. Too few robots per scout and the scouts get bored watching the exact same robots every match; too many and it becomes difficult to watch enough matches of each team. (These numbers are based on 50-60 team regional events.)
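The arithmetic behind those numbers is worth seeing: with 60 teams, 8 SuperScouts, and each robot covered twice, a simple round-robin assignment lands at exactly 15 robots per scout. This is just an illustrative sketch, not anyone’s actual tooling:

```python
from itertools import cycle

def assign_superscouts(teams, scouts, coverage=2):
    """Assign each robot to `coverage` distinct SuperScouts, round-robin,
    so workloads stay even. Assumes coverage <= len(scouts)."""
    assignments = {s: [] for s in scouts}
    scout_cycle = cycle(scouts)
    for team in teams:
        chosen = set()
        while len(chosen) < coverage:
            chosen.add(next(scout_cycle))
        for s in chosen:
            assignments[s].append(team)
    return assignments

# Example: a 60-team regional with 8 SuperScouts, double coverage.
teams = list(range(1, 61))
scouts = [f"S{i}" for i in range(8)]
assignments = assign_superscouts(teams, scouts)
```

With 60 teams x 2 scouts each = 120 slots across 8 scouts, every scout ends up watching 15 robots — right at the number mentioned above.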


@XaulZan11 I don’t know if team size affects anything, but how many students do you have scouting? What do you think is the minimum number of scouts for this to be viable?

I think it’s a really interesting and different way to go about scouting.

We’re lucky in that we have a large team to make this type of scouting viable. At each of our regionals this year we had 6-7 SuperScouts, 7 rotating normal scouts (6 data collection, 1 data entry), 1 Match Strategy Scout (watches partners/opponents in upcoming qualification matches to determine match strategy) and myself.

Most teams obviously do not have that many people to devote to scouting. If you had to, you could get away with 9 scouts (6 normal scouts and 3 SuperScouts), but I’d look into partnering with another team for the quantitative data while employing some SuperScouts.


Yes, this is more or less the approach we take. Scouts are assigned a set of teams to become experts on. This works well, and the only hitch is when two of the teams you assigned to a given scout are playing in the same match. Your idea of having teams of scouts would help solve that problem.

We are heading to CHS DCMP later this week, and I just found out earlier today that 5 of our normal scouters will not be able to make the trip with us. This leaves us with 6 or 7 people on the team who are not on the drive team or safety captain.

What would your advice be for scouting with a skeleton crew? I really do not want to make the same 6 students scout all day for 3 days.

The first thing I’d do is watch as many matches as possible of those teams before you get to the event. Teams can improve or add new features, but especially after two district events, most teams are what they are. I try to watch the final 3-5 qualification matches from each team’s previous event.

With 6 students available to scout, I’d try to find another team (or teams) to scout with. Hopefully you can find a group of teams so you can devote only a couple of scouts at a time and still get full data on every team.

If you can’t join other teams, the top priority is making sure you are watching your upcoming opponents/partners. I’d probably put your most experienced scout in this role and have them work closely with the drive team to relay the information.

For the other 2-4 scouts, I think it comes down to their experience level. If they are experienced scouts, I’d probably have them focus on qualitative notes for each team. If they may struggle with distinguishing a “very good” vs. “average” vs. “below average” team, I’d probably have them stick with quantitative data (maybe the blue alliance in every match?). While you won’t be able to get data on every team in every match, you should be able to get a random sample of half of each team’s matches. If you go this route, I’d take advantage of the sandstorm points and climbing data from the TBA API to enhance your data.
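Pulling that sandstorm/climb data out of a TBA match object is straightforward. Here is a sketch against a hand-built sample match; the `sandStormBonusPoints` and `habClimbPoints` field names come from the 2019 score breakdown, but double-check them (and your event key) against the TBA API v3 docs before relying on this:

```python
# Minimal slice of a TBA v3 match object (2019 season), hand-built
# for illustration. Real objects have many more fields.
SAMPLE_MATCH = {
    "alliances": {
        "red":  {"team_keys": ["frc1", "frc2", "frc3"]},
        "blue": {"team_keys": ["frc4", "frc5", "frc6"]},
    },
    "score_breakdown": {
        "red":  {"sandStormBonusPoints": 15, "habClimbPoints": 12},
        "blue": {"sandStormBonusPoints": 9,  "habClimbPoints": 3},
    },
}

def extract_auto_and_climb(match):
    """Return {team_key: (sandstorm pts, climb pts)}. These are
    alliance-level totals, credited to every robot on that alliance."""
    out = {}
    for color in ("red", "blue"):
        bd = match["score_breakdown"][color]
        pts = (bd["sandStormBonusPoints"], bd["habClimbPoints"])
        for team in match["alliances"][color]["team_keys"]:
            out[team] = pts
    return out

# Fetching real matches needs a free TBA read key, roughly:
#   import requests
#   r = requests.get(
#       "https://www.thebluealliance.com/api/v3/event/EVENT_KEY/matches",
#       headers={"X-TBA-Auth-Key": "YOUR_KEY"})
#   matches = r.json()

data = extract_auto_and_climb(SAMPLE_MATCH)
```

Since these are alliance totals, they are best used to supplement your own per-robot observations rather than replace them.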



Unfortunately, most of the available scouts we will have are relatively inexperienced. Our scouting method is almost entirely quantitative.

I had considered using 3 scouts at a time scouting only one end of the field (red or blue) for the entire competition. This should give us about half of all the teams’ matches and allow us to rotate scouts so no one is stuck doing it the entire time.
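A quick way to sanity-check that “about half” estimate is to count red-alliance appearances in the match schedule. The toy schedule below is made up; the real one comes from FIRST (or the TBA API) once it’s posted:

```python
from collections import Counter

# Toy qualification schedule: (red teams, blue teams) per match.
SCHEDULE = [
    (["frc1", "frc2", "frc3"], ["frc4", "frc5", "frc6"]),
    (["frc4", "frc1", "frc5"], ["frc2", "frc6", "frc3"]),
    (["frc6", "frc3", "frc2"], ["frc5", "frc1", "frc4"]),
    (["frc2", "frc5", "frc6"], ["frc3", "frc4", "frc1"]),
]

def red_only_coverage(schedule):
    """How many times would we see each team if we only scout red?"""
    seen = Counter()
    for red, _blue in schedule:
        seen.update(red)
    return seen

coverage = red_only_coverage(SCHEDULE)
```

In this 4-match toy schedule each team is seen 1-3 times scouting red only, averaging exactly half its matches — though an unlucky schedule can leave some teams under-sampled, which is worth checking before committing to the plan.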

Pre scouting is a good idea. I’ve been watching our team Google Drive and today our scouts put together a spreadsheet of everyone going to our DCMP and listed some qualitative and quantitative observations from their previous events including: can they do hatches, cargo, rocket levels, climb levels, what sort of cycle time they’re getting, and other notes (“really fast intake”, “won both their district events”, “kept falling off the hab” etc).

This will give you a good head start on observing the teams at DCMP.


Can the ones who can’t attend scout from the livestream and send info to the scouts at the event?


That’s a good idea!


This is probably very obvious and I’m certain that many teams do this.
During our last regional, the drive team scouted the teams during the practice matches in the simplest of terms (hatch vs cargo, rocket vs cargo ship, and climb level). A few of the teams were pre-scouted during week 1 & 2 regionals in which they participated.

It helped us determine strategy for each match during the qualification matches.

One thing you could do is to determine from online info what teams are in the middle of the pack and focus on them. It is probably obvious who the top teams are, and they don’t need to be scouted much. In my estimation, knowing a bit more about the middle teams’ capabilities will have the best returns for match strategy and alliance selection.

If you can make some sort of pre-competition ranking list based on online data, then instead of scouting every robot, you can shift the rankings according to teams that stand out or perform a lot differently than you expected. That doesn’t require a close eye on every robot.

Having said all of that… I’d actually recommend scouting with a partner team instead.


Nemo’s idea is pretty solid if you’re concerned about picking.

Another strategy is to only scout teams in your next match. You won’t get every team at the event, but you’ll have information relevant to your matches, and you’ll likely get ~half. Additionally, the scouting doesn’t need to be super rigid - it can be more qualitative.

I do think forming or joining a scouting alliance is a good idea. If you do find yourself in a situation where you need scouting data on every team (that your team was unable to collect), ask around - while not every team is keen on sharing (understandably), there will likely be at least one team willing to share.


Our team has tried many different ways of preparing for Worlds. We used to look at our schedule and only pre-scout (video scout) the teams that were going to be our partners and opponents in our matches. This was great for the matches but it was insufficient for being able to choose a team in alliance selections. Sure, we had all of the quantitative data on every robot at the end of qualifications, but in an unfamiliar environment, the qualitative information was key to discerning between two robots with equal stats. Now, we qualitatively video scout ALL of the teams in our subdivision prior to arriving at Worlds. I should add that our qualitative scouting involves some quick and dirty quantitative scouting in it so that it puts the qualitative scouting into perspective. It’s time consuming, but well worth it if you think you might be a captain.
