What data will your scouting collect?

Our scouts will be looking for the last four numbers of your SS#, mother’s maiden name, favorite or first pet, the city of your birth…

This information will be passed on to the programming team, then sold on the dark web to supplement our fundraising. :wink:


One thing I haven’t seen mentioned in the scouting threads is that this year appears to be better for OPR than the previous few. Unless the game plays out really inconsistently, OPR should have an error bar under 10 points this year.

My guess is the big gotchas for OPR will be:

  1. Penalties - a few tech fouls can carry through (those robots get a boost and their partners in other matches get a minus).
  2. Defense - if a couple of robots are playing shutdown defense in a tournament, that can carry through (the robots seeing defense get a minus and robots not seeing defense get a bump).
  3. Disconnects - a robot disconnecting in a feeder station will kill an alliance’s offense (partners get a minus and subsequent partners of those partners get a boost).
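For readers who haven’t computed OPR themselves: it’s commonly calculated as a least-squares fit attributing each alliance’s score to its member teams. Here’s a minimal sketch with made-up team indices and scores (not real event data):

```python
import numpy as np

def compute_opr(alliances, scores, num_teams):
    """Least-squares solve A x = s, where A[m][t] = 1 if team index t
    was on alliance m, and s[m] is that alliance's score.
    The solution x is the OPR estimate per team."""
    A = np.zeros((len(alliances), num_teams))
    for m, teams in enumerate(alliances):
        for t in teams:
            A[m, t] = 1.0
    opr, *_ = np.linalg.lstsq(A, np.array(scores, dtype=float), rcond=None)
    return opr

# Synthetic example: 4 teams whose "true" contributions are 10/20/30/40.
alliances = [(0, 1), (2, 3), (0, 2), (1, 3), (0, 3), (1, 2)]
scores = [30, 70, 40, 60, 50, 50]
opr = compute_opr(alliances, scores, num_teams=4)
```

With noise-free scores the fit recovers the contributions exactly; the error bars the post mentions come from exactly the penalty, defense, and disconnect effects listed above, which violate the "score is a sum of independent team contributions" assumption.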

A lot of this is subjective, and I think this is up to the Strategy/Drive team to make mental notes on. A rookie driver at their first event will have a different opinion than a veteran. Scouting should be quantitative, to weed out lucky match schedules, and focused entirely on ability, whereas I have a strategy group dedicated to taking pen-and-pad notes about team mechanics.
I measure efficiency by OPR, and have organized all FIRST New England Teams by OPR, just to track progress in the past couple of years.
I find it useful to also track drops (which help determine cycle time, crucial in this year’s game), penalties (crossing midfield, more-than-5-second pin), and when the robot breaks. A drop would be, for example, cargo falling out because no panel was placed, or panels falling off due to poor velcro, etc.


Teams should be able to select great alliance partners with just the quantitative data on each robot in every match. I believe that context and metadata can be helpful as well, such as knowing what that robot’s strategy in each match was, so that you can determine what their “true” proficiency in any given objective is. This can sometimes be done without any quantitative data, just with smart analysis. I saw this go wrong quite a few times last year when teams were picking their 3rd robot. Most teams simply picked the robot with the next-highest average of exchange cubes scored. A more important metric was how many exchange cubes a team scored in matches where they scored more than one. Too many times last year I saw great robots get passed up for a robot that had focused on the exchange every match and did it suboptimally, even though its average was higher than the rest of the field’s.
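The distinction above (plain average vs. average conditioned on matches where the team actually pursued the objective) is easy to compute. A sketch with hypothetical cube counts, illustrating how the two metrics can disagree:

```python
def plain_average(cube_counts):
    """Per-match average over all matches."""
    return sum(cube_counts) / len(cube_counts) if cube_counts else 0.0

def conditional_average(cube_counts, threshold=1):
    """Average over only the matches where the team scored more than
    `threshold` cubes, i.e. matches where they actually ran that strategy."""
    strong = [c for c in cube_counts if c > threshold]
    return sum(strong) / len(strong) if strong else 0.0

# A robot that grinds the exchange every match, suboptimally:
specialist = [3, 3, 3, 3]
# A robot that only sometimes played the exchange, but did it well:
generalist = [0, 0, 6, 6]
```

Both robots have the same plain average (3.0), but the conditional average (3.0 vs. 6.0) reveals which one is actually better at the objective when it pursues it, which is the question an alliance captain is really asking.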


Note that to be legal, you cannot be connected via WiFi at the event. There are various options for connecting otherwise.

Here are our scouting whitepapers for systems from 2013 to 2018. We’ve taken what we’ve learned from 1983, 971, 254, 1114, 118, and 148, as well as other teams. The associated threads also have useful information.

Our approach to qualitative data gathering is fairly unique, but we believe it is highly effective (although we’ve had a few “how did we miss that?” moments). We are changing our approach to measuring defensive effectiveness this year, but in general we are using the same method. We are also making a significant technical change to the visual setup. Come by our scouting crew at our events to see what we’re doing this year (at Central Valley, Sacramento, Boise, and hopefully Houston Champs).

2018 paper

2017 paper

2016 paper

2015 paper

2014 paper

2013 paper


We do not require a WiFi connection for this scouting method. The app stores all data locally and synchronizes with the server when a connection is available. It is also cell-network friendly, as the amount of data exchanged for a full event per device is usually less than a megabyte.
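The store-locally-and-sync-later behavior described above is the classic offline-first queue pattern. This is not the app’s actual code, just a minimal sketch of the idea; `send` stands in for whatever upload call the real system uses:

```python
import json

class ScoutQueue:
    """Buffer scouting records locally; flush to the server only when a
    connection is available. Records that fail to send stay queued."""

    def __init__(self):
        self.pending = []

    def record(self, entry):
        self.pending.append(entry)

    def flush(self, send, connected):
        """Attempt to upload all pending records; return how many were sent."""
        if not connected:
            return 0
        sent = 0
        for entry in list(self.pending):
            if send(json.dumps(entry)):   # send returns True on success
                self.pending.remove(entry)
                sent += 1
        return sent

q = ScoutQueue()
q.record({"team": 254, "match": 1, "cycles": 7})
q.record({"team": 971, "match": 1, "cycles": 6})
```

Since each JSON record is a few hundred bytes, a full event’s worth of matches per device easily fits in the sub-megabyte figure quoted above.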


We always do pit scouting before Quals and usually do a second round of pit scouting halfway through Quals. When it comes to match scouting, we have about 5-6 students (one per bot) scouting each match. Ever since Ontario became a district, we also do some pre-comp scouting before our second event.

How bout this much? https://gist.github.com/Alexbay218/b79fddd18d465224284f4b063c2bb596

The tbaData part was me testing the code that matches the scouting data to the TBA match data, so the data is a weird mix of 2018 and 2019 data. This is just for preview/testing purposes.

Development happening here:

Your assumption that you cannot know what specific robots do in an individual match simply points to an error in your scouting process. We have a one-student-to-one-robot ratio scouting our quantitative data points, and a few others who do qualitative scouting for the entire field. This way, as long as the scouts accurately record the data for the robot they are watching, it gives us pretty solid info on individual teams’ overall capability.

This has allowed us in the past to find gems of robots for our second pick that do one task really well but haven’t been chosen yet.

And if you can get accurate data on an individual robot’s capabilities, then 8-11 matches provide plenty of data to make accurate assessments.


So in other words… one scout does qualitative scouting per bot in each match. I agree. The stats are always per alliance and are somewhat skewed (by matchup and partner makeup).

@New_lightnining To scout every bot with two scouts per bot would take ~110 (10x6x2-10) scouts, and it is inefficient because at least half the field does not do what you need well, either by repeating what you do, or by not doing what you don’t do and need done well. I will always rate qualitative notes on cycles higher than stats.

We instead focus on the teams we play, and on gaining better insights into the bots we are likely to share a field with.

Why not just 12 scouts? I’m lost on your logic.


“We have one student to one robot ratio scouting our quantitative data points and a few other that do qualitative” basically says 1 quantitative plus at least 2 qualitative per robot, times 5 robots each match = 15 per match, which is excessive imo.

Perhaps I misread what he meant.

1678 uses something similar to that with great success, and if we had the numbers it’s something I’d push to do. Multiple scouts taking quantitative data makes sure the data is redundant (and correct), and the qualitative notes give insight into things quantitative data can’t always show (like how teams react to issues/defense, or how the DT acts).


We have 6 students each match doing quantitative scouting. Each of the six students watches one robot on the field. Then there are two or three others, either students or mentors, doing all the qualitative notes. At any given time, then, we have 8-9 people scouting each match.

All this conversation over how larger teams handle scouting is nice, but what do you all think would be most efficient for a team of around three people at any given time?

Scouting alliances.


Scout abilities, not matches.

Select three or four things you are looking for, assign one or two of these abilities per scout.

Give them a clipboard, paper, pen, a dozen clothespins and a sharpie.

Each ability has that scout as its expert. They generate pick lists based on what they see and their notes, and use the clothespins (with team # written on) to create a pick list for that ability along the edge of the clipboard (you could also use a yardstick or a ruler for better granularity). Combine the lists Friday night (i.e., find teams that work well with your strategies and rank well on the lists).

Take pictures of the clothespins every 10 matches or so as a backup.

That’s my best solution for a bare bones setup that is still functional.


I like this idea. The OPRs from TBA are a useful screen–the best teams will not be at the bottom of the OPR list, and vice versa.
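Using OPR as a coarse first-pass screen, rather than a ranking, is simple to implement. A sketch with made-up team keys and OPR values (the 75% cutoff is an arbitrary illustration, not a recommendation):

```python
def screen_by_opr(oprs, keep_fraction=0.75):
    """Keep the top fraction of teams by OPR as a coarse filter.
    Teams below the cut get skipped in detailed analysis, on the premise
    that the best teams will not be at the bottom of the OPR list."""
    ranked = sorted(oprs.items(), key=lambda kv: kv[1], reverse=True)
    keep = max(1, round(len(ranked) * keep_fraction))
    return [team for team, _ in ranked[:keep]]

# Hypothetical OPRs, e.g. as pulled from TBA before an event:
oprs = {"frc254": 50.0, "frc971": 40.0, "frc118": 30.0, "frc9999": 10.0}
shortlist = screen_by_opr(oprs)
```

This matches the spirit of the post: the screen discards only the clear bottom of the field, and the real picking decisions still come from scouting data.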

I will be collecting:

What they can do
How fast they do it
Success rate
On average, how many points they can score
What they specialize in