Calling all scouting data!

There were only a handful of teams in the world who were anywhere close to 2910 in cycles, much less while also being a low bot. I’m sure it would not take much effort to show that we are going to consistently run out of scoring locations. Even if this were the case, up until Worlds most competitions would have been dominated by 2 low bots scoring all 24 low locations against defense.

  1. How did your team scout? (Paper, electronic, etc)
    Scouts enter data into cell phones, which feeds into an online database.
  2. What was the main data you looked for/found most important? (Cargo, climb, etc)
    Total cargo scored, total hatches scored, defensive ability, climb capability
  3. Did you change your scouting at all throughout the season?
    3a. What did you change?
    We started off tracking each scoring location individually, then switched to summing the cargo and hatch counts.
    We weren’t scouting for defense at our week 1 event, so we added a subjective 0-5 rating of defense skill.
    3b. Why?
    Tracking game pieces individually was a lot of extra work for the scouts, and hard to process at-a-glance for the drive team.
    We underestimated the impact of defense in week 1.
  4. What program did you use to take in and look at your data? (Google sheets, excel, Tableau, etc)
    Custom app with visualizations through Tableau.
    4a. Opinions on this system?
    A lot of work but pretty nice in the end.
  5. How did you use this data to make a pick list for alliance selections?
    Del Mar: Picked by 359, whom we had been planning to pick before an unexpected red card and a surprise upset in a replayed match. Picked 2984 for defensive skill based on some match replays and subjective scouting reports from Saturday morning.
    Utah: Picked 6844 because they designed a 2014-style robot and I liked it, and because of their high cargo cycle count; picked 3006 based on 6844’s scouting data.
    Champs: Picked unexpectedly in the first round and alliance captain opted to use their picklist over ours.

I would say the method you showed is more complicated than what we scouted. We didn’t track each level; we tracked each scoring location. So we had 4 things: cargo and hatch on the cargo ship, and cargo and hatch on the rocket, each with +/- buttons. Limiting what you track is good if it helps you get good data. Maybe it’s just your opening comment, but it sounded like all you did was tally how many game pieces were scored, regardless of whether each was a hatch or a cargo ball and where it was scored. That is what my argument was against: it might have been easy to scout, but IMO it is also useless data.

The reason we started using level placements was because of our robot’s capabilities and the strategy we wanted to use in game. We can do all levels of the rocket, but for us, and I would assume many teams, level 3 becomes a bit more difficult, as we are extended higher and become more unstable. Having someone who could score level 3, level 3 hatch especially, became very important to us. Gathering data this way also helped with game strategy. If we saw that someone on the other alliance could score on all levels of the rocket, then we would be aware that they could try to fill a rocket, which we would then want defense against. We kept a tally of how many of each game piece was scored, but also had what’s shown in the picture above right below that, so scouters could easily check off where they placed each game piece. It turned out not to be too difficult, and it really looks worse than it is.
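Both tracking schemes in this exchange boil down to small tally structures. A sketch of the difference (all field names are illustrative, not from either team's actual scouting app):

```python
# Minimal sketch of the two tracking schemes discussed above.
# All key names are illustrative, not from any team's real app.
from collections import Counter

# Scheme 1: four +/- counters, one per location/game-piece pair.
tally = Counter()
tally["cargo_ship_hatch"] += 1
tally["cargo_ship_cargo"] += 1
tally["rocket_hatch"] += 1
tally["rocket_cargo"] += 2

# Scheme 2: the same events, but with rocket placements broken out
# by level, so you can tell who can actually reach Level 3.
by_level = Counter()
by_level[("rocket_hatch", 3)] += 1   # the scarce skill: L3 hatch
by_level[("rocket_cargo", 1)] += 2

# Either structure still recovers the simple totals by summing:
total_pieces = sum(tally.values())
print(total_pieces)  # 5
```

The per-level version costs the scout one extra tap per placement but preserves exactly the information (who can reach L3) that the summed version throws away.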

This year we tried to do this: recording just the number of hatches and cargo. However, this came back to hurt us. Once, when we tried to complete a rocket with another team, we could not: they could not place on Rocket L3 even though they said they could. Watching videos of them showed that they were fast on the low levels but could never reach the top. In alliance selection, too, it is really hard to tell from this data alone whether a robot will be able to score well. 2 bots that can place 6 cargo each, but only in the cargo ship, will only get 18 points, whereas 2 bots that can place 4 cargo each across the rockets and cargo ship will win.
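The location cap is easy to see with some quick arithmetic. A sketch assuming 2019 values (cargo worth 3 points each, all 8 cargo-ship bays available, each rocket holding 6 cargo); the cycle counts are illustrative:

```python
CARGO_POINTS = 3  # 2019 value: each cargo ball is worth 3 points

def alliance_cargo_points(cargo_per_bot, reachable):
    """Cargo points for an alliance, capped by the capacity of the
    locations its robots can actually reach (assumed 2019 capacities)."""
    capacity = {"cargo_ship": 8, "rocket_1": 6, "rocket_2": 6}
    total_capacity = sum(capacity[loc] for loc in reachable)
    scored = min(sum(cargo_per_bot), total_capacity)
    return scored * CARGO_POINTS

# Two fast cargo-ship-only bots saturate the ship and waste cycles:
print(alliance_cargo_points([6, 6], {"cargo_ship"}))              # 24
# Two slower bots that can also reach a rocket are never capped:
print(alliance_cargo_points([5, 5], {"cargo_ship", "rocket_1"}))  # 30
```

A scouting sheet that only records "game pieces scored" can't feed the `reachable` half of this calculation, which is the whole point of tracking location.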


The way we saw the game, a combo rocket was really only a strategy if the match was going to be a blowout. If it wasn’t, then there is no point in congesting your own side and risking losing the match. If the match was going to be a blowout, then we always told our teammates that if they had a good start we would come over and help. This allowed us to ensure that we didn’t risk everything just for 1 extra ranking point. It also meant that prematch strategy didn’t depend on whether they could do a rocket or not; it was left up to the drive coach to see that they had a strong start and then make the call to go help them out. As a result, we didn’t need to scout those data points, which lightened the load on our scouters.

If our strategy had been to get the rocket RP no matter what, then I completely agree, that knowing exactly how much your alliance partners can assist you would be very important information for teams who couldn’t solo rocket.


There definitely is a world where this is still valuable data. It is certainly data with restrictions: from the totals alone you can’t delineate robots that score high from robots that score low. Still, not recording location is an effective way of simplifying data collection to increase the accuracy of your results, because simpler input is less prone to error. More teams SHOULD simplify to this level. MOST teams collect TOO MUCH DATA.

We can pretty easily dismantle your argument by applying the same principles to your suggested system, which also happens to be a large trap that many, many people fall into:

If you have a system that counts location + object scoring and you simply sum up the numbers, in some cases you will pick a team whose count of game pieces scored is artificially inflated. An easy example is defense in 2019: high-level teams may always receive defense while upper-middle teams don’t, making some teams look like they score more (ESPECIALLY at champs). You pick the team that scores more, but all of a sudden they now face defense, and you, say, lose in the quarterfinals as the number one seed.
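One simple guard against this trap is to split a team's per-match numbers by whether they were defended before comparing averages. A sketch with made-up numbers (nothing here is real scouting data):

```python
# Hypothetical per-match records: (points scored, was the team defended?)
# Every number below is invented to illustrate the confound.
matches = [
    (30, False), (32, False), (28, False),   # undefended: looks great
    (18, True), (16, True),                  # defended: a different robot
]

def avg(xs):
    """Mean of a list, 0.0 if empty."""
    return sum(xs) / len(xs) if xs else 0.0

naive = avg([pts for pts, _ in matches])
undefended = avg([pts for pts, d in matches if not d])
defended = avg([pts for pts, d in matches if d])

print(round(naive, 1))  # 24.8: what a raw sum-and-average shows
print(undefended)       # 30.0: the inflated, defense-free number
print(defended)         # 17.0: closer to what you'll see in elims
```

Whether a team "was defended" still has to be recorded by a scout or inferred from video, which is exactly the kind of context a bare piece count throws away.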

Yet you would not be so bold as to claim that all of that data is useless, or tell that team to essentially scout better or just not scout at all.

It was a failure on the team’s behalf to recognize the drawbacks and limitations of their data.

Another good example is 2017, where an undefended robot may score significantly more gears than a defended robot. Or teams’ numbers got skewed when they were paired with other good gear robots and thus had fewer scoring opportunities, since you would reach the 4th rotor with time to spare. These kinds of effects mess up teams significantly, but that isn’t an argument to just ignore the data completely.


I’m going to enter for 2 teams, because I’ve been part of both.

  1. How did your team scout? (Paper, electronic, etc)
    1296 - Paper
    253 - Google Form
  2. What was the main data you looked for/found most important? (Cargo, climb, etc)
    1296 - How many, where, how many dropped (2018, 2017)
    253 - How many and where (2019)
  3. Did you change your scouting at all throughout the season?
    Not really, but…
    3a. What did you change?
    I would like to stop tracking fouls (until we find a way to reliably track them) and find a better way to rank driver ability than “good, okay, bad”
    3b. Why?
    Never looked at fouls and most of them can be coached out
    Driver data could be useful, but without some standardization, it’s not
  4. What program did you use to take in and look at your data? (Google sheets, excel, Tableau, etc)
    1296 - excel (because of SPAM’s poor man’s scouting database)
    253 - Google Sheets (because forms, and 253 really loves google sheets)
    4a. Opinions on this system?
    1296 - It was hard to get the macros to do what I wanted them to do, and it could be really buggy/picky about input. When it worked, it was awesome. Data input was a full-time job
    253 - We didn’t do a whole lot of cute data stuff (yet) but the ease of accessibility (because online) was really nice
  5. How did you use this data to make a pick list for alliance selections?
    I’ve already talked about this at length :slight_smile:

I think our strategies were different. We could score around 6-8 game pieces a match and could reach level 3, so we wanted to go for rockets. You, on the other hand, were short and would have needed a much better partner to get a rocket. Thus, the scouting data you need depends on the style of game play you want to pursue.


This is my favorite solution, as it makes things simple and puts less stress on the scouts.

How did your team scout? (Paper, electronic, etc)

4020 uses Google Forms. Scouts fill them out on their cell phones. The team always has some extra battery packs for USB charging so scouts are sure to have enough power for their phones. We put in a significant effort to design the form to flow well on a cell phone during a match. We don’t want scouts having to scroll much, if at all.

What was the main data you looked for/found most important? (Cargo, climb, etc)

Deep Space was a great game for scouting since robot actions translated directly into points (unlike, say, Power Up, where count of cubes placed was not directly related to score).

We found value in recording everything that related to scoring - hab exit, sandstorm placement of hatch and cargo, “teleop” placement of hatch and cargo, and climb level. Rather than tally each rocket placement by level, we simply had one question asking what was the highest rocket level attained in the round for hatch and for cargo. We also recorded qualitative assessments of defense ability and time, penalties, and we have a free-form comments field which is often quite helpful.
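The "highest level attained" trick keeps each match record compact. As a sketch, one row of such a form might look like this in code (field names are my own guesses, not 4020's actual schema):

```python
from dataclasses import dataclass

@dataclass
class MatchRecord:
    """One row per robot per match. Field names are illustrative only,
    not 4020's real form; 'max level' replaces a per-level tally."""
    team: int
    hab_exit_level: int          # 0-2
    sandstorm_hatches: int
    sandstorm_cargo: int
    teleop_hatches: int
    teleop_cargo: int
    max_rocket_hatch_level: int  # 0-3, highest level attained this match
    max_rocket_cargo_level: int  # 0-3
    climb_level: int             # 0-3
    defense_rating: int          # qualitative 0-5
    comments: str = ""

rec = MatchRecord(team=4020, hab_exit_level=2, sandstorm_hatches=1,
                  sandstorm_cargo=0, teleop_hatches=3, teleop_cargo=4,
                  max_rocket_hatch_level=3, max_rocket_cargo_level=2,
                  climb_level=3, defense_rating=0)
print(rec.max_rocket_hatch_level)  # 3
```

Two integers answer the strategic question ("can they reach L3?") that a full per-level tally answers, at a fraction of the scouting effort.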

Did you change your scouting at all throughout the season?

We made some slight changes to Google Form question wording for clarity and question organization for screen flow after our first regional based on feedback from scouts. We did not make further changes between our second regional and Worlds.

What program did you use to take in and look at your data? (Google sheets, excel, Tableau, etc)

Google Forms automatically saves to a Google Sheet. We use Tableau to read from the Google Sheet and deliver interactive visualizations. This system works well for us. The biggest pain is needing to tether computers in the pit or stands to cell phones to get the data from the cloud.

How did you use this data to make a pick list for alliance selections?

We make very significant use of the scouting data for qualification and elimination match strategy in addition to elimination alliance selection. For me, the value of scouting is 80% match strategy / 20% alliance selection. As an example, here is our scouting data for our last quals match at Smoky Mountains. The first graphic is for our teammates.

This lets us understand the capability of our alliance. We can see that we’ll probably want us and 2856 to start on level 2. We will prioritize our best sandstorm placement location since our teammates haven’t demonstrated sandstorm capability. It looks like 2856 should focus on rocket hatches in teleop, for which they have demonstrated up to level 3 placement. We will focus on cargo. 7406 has played some defense and their likely most valuable role will either be defense or counter-defense depending on the opponent alliance capability/strategy. We will take the level 3 climb, since neither teammate has that capability.

Our opponent alliance looked like this.

For Deep Space, the only thing we really have to worry about regarding the offense capability of the opponent alliance is the total score each bot might be capable of. It doesn’t matter much if they score in rockets or the cargo ship. It matters a tiny bit if they are better at hatches or cargo in case we need to plan defense for choking off the cargo ship or choking off a hatch loading station.

What does matter is defense. If opponents play defense we need to have a strategy to deal with it. In this case, it looks like 4576 has a reasonable chance to play defense. Since the other two opponents are averaging a total score of 32 and we average 38 plus contributions from our teammates, the only strategy that would likely yield a win for the opponents is for 4576 to play defense on us and hope it is effective. Thus, our alliance strategy going in to the round is to expect defense and to plan for 7406 to play counter-defense and give us space to operate. If the opponents do not play defense, 7406 will play defense and prioritize 343, looking to make it difficult for them to get in and out of the hatch loading area.
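That pre-match reasoning boils down to a simple expected-margin check. A sketch of the decision rule as I read it from the paragraph above (my paraphrase, not 4020's actual tooling):

```python
def defense_plan(our_alliance_avg, opp_alliance_avg):
    """Pick a defensive posture from summed scoring averages.

    If the opponents' combined offense trails ours, their most likely
    path to a win is defense, so we plan counter-defense; otherwise we
    put our defender on their top scorer. A paraphrase of the reasoning
    in the post above, not real tooling.
    """
    if opp_alliance_avg < our_alliance_avg:
        return "expect defense; assign counter-defense"
    return "play defense on their top scorer"

# From the post: the opponents' two scorers average 32 total, while we
# average 38 before any contribution from our teammates.
print(defense_plan(our_alliance_avg=38, opp_alliance_avg=32))
```

The averages only exist if scouts recorded per-team scoring every match, which is why the post weights match strategy so heavily as a payoff of scouting.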

If you watch the match video on TBA, you will see that the round played out pretty much as the scouting data suggested it would. 7406 did a great job of counter-defense and was critical in the win. Without the scouting data, preparing an effective strategy for matches would be much harder.

For alliance selection, we just print out a table of the main scoring categories. For 2019, total points was most important for sorting. We wanted to know hatch vs. cargo capability to maximize point potential of the alliance. We wanted to know hab exit capability to make sure we had two level 2 exits. We also wanted to know climb capability to try to end up with a level 3 and two level 2 climbs (I don’t recall multiple level 3 capability at the regionals we attended).

Our captain takes that sheet into alliance selection, but is on the phone with one or two team members in the pit who are looking at the more detailed Tableau analytics and can see comments, scoring trends, and more. Those people work together to prioritize primarily the 2nd pick based on the combined capabilities of the first two picks. The first pick is often pre-arranged or down to maybe just a few options which are pre-prioritized.

For us, in 2019, the first pick was mostly about raw point scoring ability, regardless of how it was accomplished. We could flex to cargo or hatches as needed. We could flex to rocket or cargo ship as needed. We could flex to level 3 or level 2 climb as needed.

By the time we got around to the 2nd pick, the key capability of teams in that rank range was successful cargo cycles into the cargo ship or rocket level 1. Ability to climb to level 2 was also important, but only if the difference in low cargo was less than 3 points. If one of the best defensive bots was still available, we could swap to that pick strategy, but our primary goal was to be able to outscore because we had three strong scorers and defense, if played, could not stop all of them.
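For what it's worth, the sorted-table step described above is easy to reproduce in a few lines (all numbers below are invented; 4020's real table came out of Tableau):

```python
# Hypothetical team summaries; every number here is made up.
teams = [
    {"team": 1111, "avg_points": 31.5, "hab2_exit": True, "climb": 3},
    {"team": 2222, "avg_points": 35.0, "hab2_exit": False, "climb": 2},
    {"team": 3333, "avg_points": 28.0, "hab2_exit": True, "climb": 3},
]

# Sort primarily by average total points (the 2019 priority in the post),
# breaking ties toward level 3 climbs and level 2 hab exits.
picklist = sorted(
    teams,
    key=lambda t: (t["avg_points"], t["climb"], t["hab2_exit"]),
    reverse=True,
)
print([t["team"] for t in picklist])  # [2222, 1111, 3333]
```

The real work is in choosing the key: as discussed earlier in the thread, a pure points sort can reward teams whose numbers were inflated by soft schedules or a lack of defense.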


I keep meaning to put a data dump on Github, but here’s our OHCL spreadsheet. You can take a look at some of the cool stuff we did this year, and make a copy if you want to interact with it.

  1. Match data was collected through a custom digital form (technically a Google Scripts Web App built on a spreadsheet), which you can see here. Pit data was just a Google Form connected to that same spreadsheet. We did this so that we could have a custom form (see especially the second and third pages’ cargo and rocket diagrams, which work great with mobile touch) that the scouters could use on their phones. I’d use team tablets if we had them, though; not everybody had data available on their phones, which created (fairly minor, all things considered) headaches.
  2. Everything. We want to collect all the data we can, and then analyze what’s important based on the competition. Anything that happens in a match affects its outcome, so collecting all the data gives us much better insights into matches, and by extension, teams.
    (Edit: I should put a clarifier on this: some things that we didn’t find to be terribly important, like fouls, we relegated to a comment box. Additionally, I don’t think we’ve ever done driver skill, except in exceptional cases via the comment box, because we’ve never found a way to make it work for our team. I think driver skill is inferred largely through results anyway, and the extent to which it is visible on the field but isn’t reflected somehow in the data is, in my opinion, largely not worth worrying about.)
    (Also edit: We didn’t use where they put game pieces too much, but I think that was largely the interface’s fault. If I had incorporated that into the interfaces we used more often, I think it would have been more important.)
  3. Slightly. The only changes we had to make were a few lines of text updating the event name and settings. For fun, I ended up finishing a match prediction engine during our second competition; I had mostly already created it during the preseason, and at that point things were running smoothly for once, so I had some time to poke around with it. We never really used it for anything but to look interesting, though it did provide some entertainment for the scouters sitting in the stands, so that’s a benefit, I suppose.
  4. As you might have inferred from my first answer, Google Sheets!
    4a. Lots of them! (Mostly good.) It was easy to learn (for me; I realize not for everybody) and there’s practically no ceiling. Anything you want to do, there’s a function, an add-on (like my TBA Requests), or the ability to write custom functions for it. The fact that you start from scratch means that if you want to customize it, you know how, because you set it up in the first place. There’s a time investment, to be sure, but its versatility has really convinced me that it’s excellent for scouting. You can do custom UIs, data analysis, modeling, really anything, fairly easily.
  5. This is probably the biggest weakness with our scouting at the moment. We have a meeting the evening before the last day to discuss potential candidates, where qualitative observations from mentors and scouters meet custom and simple numerical ratings to generate a pick list. I think the general concept is pretty decent, but it’s unorganized and I don’t think anybody really leaves the meeting happy and confident with the list.
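A match prediction engine of the kind mentioned in answer 3 can start out as nothing more than comparing summed per-team scoring averages. A toy sketch (the averages are invented, and this is not the spreadsheet's actual engine):

```python
# Toy match predictor: sum each alliance's average scores and compare.
# The per-team averages below are invented for illustration only.
avg_score = {254: 45.0, 1678: 42.0, 971: 30.0,
             118: 40.0, 148: 38.0, 2056: 41.0}

def predict(red, blue):
    """Return the predicted winner ('red' or 'blue') given two lists
    of team numbers, by comparing summed average scores."""
    red_total = sum(avg_score[t] for t in red)
    blue_total = sum(avg_score[t] for t in blue)
    return "red" if red_total > blue_total else "blue"

print(predict([254, 1678, 971], [118, 148, 2056]))  # blue (117 vs 119)
```

Even this crude version gives scouters in the stands something to argue about, which matches the "looked interesting" role described in answer 3.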

I’m more than happy to field any questions!


We use paper and record the data into an Excel workbook.

We looked at every aspect of a robot’s performance. We couldn’t climb at the time, so we looked for a robot that was decent with the game pieces and could make a Lvl 3 climb.

We made a few editorial changes to the data sheets, such as combining the sandstorm abilities with teleop.

We used Excel.

We made side-by-side comparisons of robots that had complementary abilities. We looked at teams that could perform the most successful cycles - focusing on teams that could place at least 6 game pieces per match (not really differentiating between hatch panels & cargo).

Check out our scouting whitepaper; it talks in depth about every aspect of our scouting sub-team, from data entry to scout analysis. I posted the link on CD; it should be titled “FRC 25 - Raider Robotix Scouting Whitepaper 2019”.


I haven’t read up on this conversation, so if I’m butting in I apologize.

I’ve run through so many different ways of scouting with my team
Three different paper styles
Two different google docs
Paper and then an excel sheet
Three different styles of google sheet

I’ve found Google Sheets works the best so far. I can link a copy of our final design; it’s still set up for last year’s game, but I can share it if you’d all like.


That would be awesome! I appreciate any and all input I get

These all have some data in them, but they are two different types of spreadsheet we have used.

This is the second try; it calculates data percentages and is packed with data.

This is my first try; it doesn’t calculate anything, just nicely shows all the data.


Unless your access is directed only to @Kaitlynmm569, could you please open access for the rest of us?
Or, I could be going about it the wrong way (which is completely possible).
Thank you


It should be published to the web, so it should work for everyone

If it’s not please tell me