Calling all scouting data!

I'm currently working on figuring out the best way to scout: what data worked for teams this season, and the best way to take in and analyze that data. I understand this season is basically over, but understanding previous strategies will help me break down next year's game even quicker.

If you're willing, I'd love to take a look at any match data you've collected over the season. Raw data is great. PMing me or posting the data here works, whichever you prefer.

If you share, or even if you don’t, I’m curious:

  1. How did your team scout? (Paper, electronic, etc)
  2. What was the main data you looked for/found most important? (Cargo, climb, etc)
  3. Did you change your scouting at all throughout the season?
    3a. What did you change?
    3b. Why?
  4. What program did you use to take in and look at your data? (Google sheets, excel, Tableau, etc)
    4a. Opinions on this system?
  5. How did you use this data to make a pick list for alliance selections?

Any and all responses I get on this will be helpful. I'm constantly looking for ways to improve our team, and right now I'm focused on data analysis and on understanding how and why certain data matters more to some teams than to others.

Thank you in advance :)


  1. Paper into Google Sheet. Major key: Make it so scouts circle and tally as much as possible, then make your data entry person only have to hit tab.
  2. Relative performance in key categories (do they favor the cargo ship, are they hitting L3 reliably, etc).
  3. We did add the ability to scrape The Blue Alliance for starting and ending Hab levels, which eliminated three things per match for our scouts to scout.
  4. The Google Sheet. Here’s ours from Smoky Mountains.
  5. First, we’d do a chop of teams that weren’t putting up at least 8-9 points per match on average. (Remember, if you drive off Level 1 in Sandstorm and are back on at the buzzer you get six.) Then we’d look at the top points-per-match and work downward, adjusting for things like “oh wait, they’re also cargo ship” or Does Not Play Well With Others.
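To make that chop concrete, here's a rough Python sketch. The row layout (`hatches`, `cargo`, `crossed_line`, `endgame_level`) is an assumption; the point values are from the 2019 game: hatch = 2, cargo = 3, Level 1 line cross in Sandstorm = 3, Hab climb = 3/6/12 for Levels 1/2/3.

```python
# Points for an endgame climb to Hab Level 0 (none) through 3.
HAB_CLIMB = {0: 0, 1: 3, 2: 6, 3: 12}

def match_points(row):
    """Estimate one team's point contribution in one match from scouting tallies."""
    pts = 2 * row["hatches"] + 3 * row["cargo"]
    pts += 3 if row["crossed_line"] else 0   # Sandstorm bonus, assuming a Level 1 start
    pts += HAB_CLIMB[row["endgame_level"]]   # endgame climb points
    return pts

def shortlist(matches_by_team, cutoff=8.5):
    """Drop teams averaging under ~8-9 points per match; sort the rest, best first."""
    avgs = {
        team: sum(match_points(m) for m in rows) / len(rows)
        for team, rows in matches_by_team.items()
    }
    return sorted(
        ((team, avg) for team, avg in avgs.items() if avg >= cutoff),
        key=lambda pair: pair[1],
        reverse=True,
    )
```

The qualitative adjustments ("also cargo ship," "Does Not Play Well With Others") would still happen by hand after this first pass.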

While I’m not sure we made the right strategic choices with the data (we ended up being far more defensive at Smoky, so we could’ve had more cargo ship love), the information we had was great for this year. Hopefully, we won’t have to create a new model for 2020.
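For anyone wanting to replicate the TBA scrape from point 3, here's a minimal sketch against the real TBA v3 REST API. The 2019 `score_breakdown` key names used below (`habLineRobotN`, `endgameRobotN`) are from memory, so verify them against the official apidocs before relying on them.

```python
import json
import urllib.request

def fetch_matches(event_key, auth_key):
    """Pull all matches for an event from The Blue Alliance API v3."""
    url = f"https://www.thebluealliance.com/api/v3/event/{event_key}/matches"
    req = urllib.request.Request(url, headers={"X-TBA-Auth-Key": auth_key})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def hab_levels(match, alliance, slot):
    """Read one robot's Sandstorm line cross and endgame Hab level from a match.

    Assumed 2019 breakdown keys: habLineRobot1..3, endgameRobot1..3.
    """
    bd = match["score_breakdown"][alliance]
    crossed = bd[f"habLineRobot{slot}"] == "CrossedHabLineInSandstorm"
    endgame = bd[f"endgameRobot{slot}"]  # e.g. "HabLevel3" or "None"
    return crossed, endgame
```

Pulling these two fields from TBA is exactly what removes three items per robot from the paper form.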


Out of curiosity, have you tried electronic forms before? If so, is there a reason you chose paper over that? If not, why not? Our team currently uses an electronic form, but we found it difficult to get detailed game-piece placement inputs with something like that.

I vaguely remember 2815 doing it once with another team, but it was that team’s RasPi-based setup.

Why paper for us?

  • Low startup costs–black and white printing, mechanical pencils or pens, clipboards.
  • Low training overhead, which is important as we’ve split scouting with other teams at every event this year.
  • Low sync needs–even if the group watching the other alliance is across the arena for a better view, we send a runner every couple matches to scoop up forms. We don’t have to figure out how to set up some network or USB drive swapping party that complies with venue rules.
  • Low maintenance–our stuff just sits in the box between events with nothing to charge or update. The most we have to do is grab an old robot battery to power the inverter that charges the laptop, and maybe switch it out for another one at lunch.
  • High review ability–the data entry person can GroupMe the scouts to ask what something meant.
  • High handoff ability–you’re not loaning someone your phone or school-issue Chromebook, you just hand them the clipboard when you need to hit the bathroom.
  • High inclusion–sometimes, our kids just don’t have a phone. Or it’s dead. Or it doesn’t have updates. Or whatever. And since we split with other teams, we can’t rely on all of them having devices either.
  • Our data entry can keep pace with the incoming sheets, so we get insights fast enough.
  1. Our team scouted with tablets. Every year, we write a game-specific web app which scouts use to enter data. This allows us to automate team distribution and have a shorter lag between data entry and analysis. Our tablets are set up to upload through LTE to comply with venue rules. We also have the more experienced scouts note down qualitative data on clipboards.

  2. Game piece placements are always important, but especially so this year as the game pieces had synergy. As our robot could do panels very well, we looked for competitive cargo scores in an alliance partner. As the season progressed, we considered climb points less and less, because most teams had begun to do them.

  3. Throughout the regular season, our scouting stayed close to what we envisioned at the beginning. This postseason, I really changed things up: my summer project (Skunk-stats) was a web app to reduce analysis turnaround even further. Not only did Skunk-stats re-introduce pit scouting (which our team hadn't done for years), but it also made it easier for the drive team to pull up stats to plan match strategy and to look at pit-scouting data (one of the reasons we ranked high at Chezy Champs).

  4. During the regular season we used Tableau, of course. Sadly, the licenses expire after Worlds, so for Skunk-stats I used the JS charting library Chart.js (much less powerful than Tableau). The upside was that the drive team no longer had to wait for me to run down with a Tableau printout to plan for the match.

  5. For alliance selection, we mainly used the qualitative data to pick a defense bot, since first-pick bots are normally pretty easy to choose. Secondarily, we opted for the bot with the better cargo scores as a tiebreaker, because of our hatch play.
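A toy version of the kind of aggregation a tool like Skunk-stats might do (the field names are invented for illustration): collapse raw scouting rows into per-team averages the drive team can read at a glance, with no analyst in the loop.

```python
from collections import defaultdict

def team_averages(rows):
    """Collapse raw scouting rows into per-team per-match averages.

    rows: list of dicts like {"team": "1678", "hatches": 5, "cargo": 4, "climb": 12}
    """
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for row in rows:
        counts[row["team"]] += 1
        for key, value in row.items():
            if key != "team":
                sums[row["team"]][key] += value
    return {
        team: {key: total / counts[team] for key, total in stats.items()}
        for team, stats in sums.items()
    }
```

A web app would just serve the output of something like this per team, which is what cuts the turnaround compared to walking a printout down to the drive team.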

  1. We used paper scouting and input it into Excel. Very basic. We found the downtime for analysis to be negligible, because teams playing in our match have at least a three-match turnaround, which ensures that, at the absolute latest, all data is up to date before we head to the queue. Most of the time our strategists have several matches to prepare.

  2. The only data we cared about was net points. We really only looked at points for sandstorm and cycles, then points for endgame. With endgame there's an opportunity cost associated with Level 3 climb spacing. We also never recorded what level game pieces were placed on this year, because the points were the same at every level.

  3. The only thing we changed was adding robot weight to our pit scouting which was left off at our first event.

  4. We use a simple Excel spreadsheet. It produces a pretty simple match prediction using the opponents' best-case scenario vs. our most likely scenario. It is not meant to be accurate, but it ensures we never take unnecessary risks with RP.

  5. We would pretty much just look at cycle points for first picks, and their differential when playing against defense. For second picks it was mostly pit scouting or qualitative data. We would watch match tape on every robot we were considering, or were guaranteed to play against: usually their best match, their worst-functioning match, and an average match.
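The best-case-vs-most-likely prediction from point 4 can be sketched like this. Treating "most likely" as a median, and the per-team score histories as plain lists, are both assumptions on my part:

```python
import statistics

def best_case(team_scores):
    """A team's best observed match score."""
    return max(team_scores)

def most_likely(team_scores):
    """A team's typical match score, taken as the median."""
    return statistics.median(team_scores)

def prediction(our_alliance, their_alliance):
    """Compare our typical output against the opponents' ceiling.

    Each argument is a list of per-team score histories (lists of points).
    Returns (our likely total, their best-case total, margin).
    """
    ours = sum(most_likely(team) for team in our_alliance)
    theirs = sum(best_case(team) for team in their_alliance)
    return ours, theirs, ours - theirs
```

Because the opponents are scored at their ceiling and we're scored at our median, a positive margin is a conservative signal that chasing an extra RP is safe.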

Care to explain this?

Since this year had a flat point value for game elements, we noticed that a lot of teams were making scouting more difficult by trying to record where game pieces were placed. It may have been common practice; I noticed a lot of teams had it in their data.

Same as others: one scout per team on the field, one person compiling data in a Google Sheet.
Instead of a buncha papers, we gave each scout a clipboard with our scouting sheet, a blank transparency, and a dry-erase marker. We had a dozen such clipboards set up. Before the match, the lead scout would write the team and driver-station position on the transparency; the match scouting was all based on tally-mark-style bubbles; after the match, they'd all turn the clipboards in to the lead scout, who would enter the data in the Google Sheet and prepare the transparencies for the next match.

  1. This year, our team used paper. Next year we plan to use tablets the team owns; the main reason for avoiding scouts' phones is to make people less likely to get distracted during the match and to keep the data as accurate as possible. Our "scouting card," as we call it, included a simple diagram of the field that scouts could mark up, plus a side area to record the actual numbers. We had six people each scout one robot per match, and someone else input the data into the computer.

  2. For qualification matches, the main data we looked at was which scoring elements our partners were adept with and whether they could climb; we formulated a strategy match by match based on our partners' strengths. For elims, we would look for an ideal alliance with us as captain, or as whatever pick we perceived as possible; typically that involved looking at each of the top 12's strengths. For example, at Northern Lights we believed we would be a second pick and thus had little input in who we could pick, but we formed strategies based on possible first bots. On the other hand, at Wisconsin we believed we would be first or second and had already thought out what we wanted: a high-tempo first pick/captain and a second pick with decent defensive capabilities and decent hatch scoring, though in the end we made the decision without re-analyzing the data.

  4. All of our data was input directly into Google Sheets, and we had the structure for Tableau but did not use it. Our Google Sheet had a section for input, the raw database, a team lookup, a match lookup and comparison, and a manual pick list.

4a. It generally worked very well, though for our pick-list meeting the night before elims day, I wish we had had the visualization from Tableau.

  5. Our pick-list meetings were heavily based on our Super Scouting program (subjective data), but the individuals involved backed up their stances with the collected data.

In the app I'm developing, I do the same thing: total hatches/cargo. I do have checkboxes for Rocket Levels 2 & 3 so I can tell whether they can do those levels.

Also, to keep the UI simple for scouts, I have three screens: Auton, Teleop, and End of Match. So we have "Attempted Level X" and "Scored Level X" on the end-game screen. The "End of Match" screen is for recording actions that aren't time-critical. I'm tempted to move the Rocket Level 2 & 3 checkboxes from Teleop to end game, but I'd have to trust scouts to remember to record them after the match. Since I've already pared down the UI, I'm leaving them in Teleop for the time being.
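As a sketch, the three-screen split maps naturally onto a nested record, one sub-record per screen. These names are hypothetical, not from the actual app:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class Auton:
    """Sandstorm-period tallies."""
    crossed_line: bool = False
    hatches: int = 0
    cargo: int = 0

@dataclass
class Teleop:
    """Live tallies taken during the match."""
    hatches: int = 0
    cargo: int = 0
    rocket_l2: bool = False  # candidates to move to EndOfMatch,
    rocket_l3: bool = False  # since they aren't time-critical

@dataclass
class EndOfMatch:
    """Non-time-critical entries recorded after the buzzer."""
    attempted_level: int = 0
    scored_level: int = 0

@dataclass
class MatchRecord:
    """One scout's record for one robot in one match."""
    team: str
    auton: Auton = field(default_factory=Auton)
    teleop: Teleop = field(default_factory=Teleop)
    endgame: EndOfMatch = field(default_factory=EndOfMatch)
```

Keeping the record shape aligned with the screens means moving a field (like the rocket checkboxes) is just relocating it between sub-records.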



If you didn't track where a team placed game pieces, how did you know what a team could do? Just knowing HOW MANY they placed wouldn't help you work with a partner to complete a rocket in quals, or pick the right team to help you win during elims.

A team scoring 9 game pieces but not being able to score above level 1 is not equal to a team scoring 9 game pieces but being able to score on all levels.

Points-wise they might be equal (given that all game pieces scored the same this year), but for match strategy and pick-list creation they are not equal.

The way our robot plays the game, it never mattered for quals whether our teammates or opponents could score high or low. For elims it was the same: we didn't care where they put the game pieces as long as they got points. So we decided not to make scouting any harder than it had to be for our students.

Obviously, you would find a way to track it elsewhere.

Personally, we found the data to be of little use, and after our district events we cut per-level score tracking. It was taking scouts more time to find each specific spot than to just tally "rocket score," and we used pit data and video validation to see how high teams could reach.


So, just spitballing here.

Say you're 2910 and you can only score low. You look at your scouting data and see another team that can score 13 game pieces. Sweet, let's pick them! They accept, and they also didn't track who scored where, but uh oh: they can only score low too. Where does having two teams who can score 13 game pieces but can't go higher than Level 1 get you? It gets you a first-round exit.


It got them two wins, actually.

So your argument is that making bad decisions about how to track your data is OK as long as the outcomes were successful?

Obviously my scenario with the two low robots is easy to avoid by taking pictures and knowing not to pick a robot with the exact same limitations yours has. But that's not the point I was trying to make. Limiting the data you collect "because it worked in the past" and because "it's just easier" is not a good reason.


No. I'm not speaking to 2910's scouting, just noting that they won two events with this exact strategy.

I'm (maybe wrongly?) assuming they're still doing something to make sure their picks aren't falling into the scenario you describe, looking at their results. Understanding the limits of your scouting and working around them is important, and sometimes that calls for sacrificing some "nice to haves" to get the "must haves." Obviously, teams have different priorities, but if the scouting system proves itself, it usually doesn't merit fixing.

While we don't track exactly where each game piece was placed, we do find it important to know generally where pieces were placed. We have this as part of our form:

With this, we're able to see where a team focused, generally, as well as everywhere they placed a game piece. I agree it can be difficult to track where each piece was placed, but I wouldn't discount at least accounting for where any piece was placed.

For many teams, the effort isn’t worth the payback. Recording cycle counts on four different scoring locations with two different game pieces gives you a ton of data, but it’s a lot more work to go through and make sense of it. Understanding the field “at-a-glance” is hard to do when you’re tracking a dozen metrics or more.

It's a significant increase in resources to track and analyze that data, and the diminishing marginal returns aren't necessarily worth it. "Total cycles" is likely enough for most teams to make intelligent picks, and the people making the picks aren't blind to the competition. I'm sure 2910's scouts know which teams are scoring high vs. low. Even if it's not "ideologically pure" to carry that data in their heads, it likely works well enough to build their pick list intelligently.
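The tradeoff in miniature, with made-up numbers: the detailed record carries eight buckets (four scoring locations times two game pieces), while the at-a-glance metric collapses them to one number, plus the single nuance worth keeping in scouts' heads (can they score above Level 1 at all?).

```python
# Hypothetical detailed record for one team in one match:
# (location, piece) -> count. All names are illustrative.
detailed = {
    ("rocket_l1", "hatch"): 2, ("rocket_l1", "cargo"): 1,
    ("rocket_l2", "hatch"): 1, ("rocket_l2", "cargo"): 0,
    ("rocket_l3", "hatch"): 0, ("rocket_l3", "cargo"): 0,
    ("cargo_ship", "hatch"): 3, ("cargo_ship", "cargo"): 4,
}

def total_cycles(record):
    """The one-number 'at-a-glance' metric: total game pieces placed."""
    return sum(record.values())

def can_score_high(record):
    """The nuance worth preserving: any scoring at Rocket Level 2 or 3?"""
    return any(
        count > 0
        for (location, _piece), count in record.items()
        if location in ("rocket_l2", "rocket_l3")
    )
```

Ranking by `total_cycles` and flagging `can_score_high` keeps the pick list simple while still catching the "two low-only robots" trap described earlier.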