Calling all scouting data!

How did your team scout? (Paper, electronic, etc)

4020 uses Google Forms. Scouts fill them out on their cell phones. The team always has some extra battery packs for USB charging so scouts are sure to have enough power for their phones. We put in a significant effort to design the form to flow well on a cell phone during a match. We don’t want scouts having to scroll much, if at all.

What was the main data you looked for/found most important? (Cargo, climb, etc)

Deep Space was a great game for scouting since robot actions translated directly into points (unlike, say, Power Up, where count of cubes placed was not directly related to score).

We found value in recording everything that related to scoring - hab exit, sandstorm placement of hatch and cargo, “teleop” placement of hatch and cargo, and climb level. Rather than tally each rocket placement by level, we simply had one question asking what was the highest rocket level attained in the round for hatch and for cargo. We also recorded qualitative assessments of defense ability and time, penalties, and we have a free-form comments field which is often quite helpful.
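
To give a sense of the shape of the data, one record per robot per match ends up looking something like the sketch below (the field names are illustrative, not the exact wording of our form questions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MatchRecord:
    # Illustrative fields only -- not the exact wording of our form questions.
    team: int
    match: int
    hab_exit_level: int               # starting HAB level successfully exited (0 = none)
    sandstorm_hatches: int            # hatches placed during the sandstorm
    sandstorm_cargo: int              # cargo placed during the sandstorm
    teleop_hatches: int
    teleop_cargo: int
    highest_rocket_hatch_level: int   # 0-3, highest rocket level reached with a hatch
    highest_rocket_cargo_level: int   # 0-3, highest rocket level reached with cargo
    climb_level: int                  # 0-3
    defense_rating: Optional[int]     # qualitative rating, None if no defense played
    comments: str = ""
```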

Did you change your scouting at all throughout the season?

We made some slight changes to Google Form question wording for clarity and question organization for screen flow after our first regional based on feedback from scouts. We did not make further changes between our second regional and Worlds.

What program did you use to take in and look at your data? (Google sheets, excel, Tableau, etc)

Google Forms automatically saves to a Google Sheet. We use Tableau to read from the Google Sheet and deliver interactive visualizations. This system works well for us. The biggest pain is needing to tether computers in the pit or stands to cell phones to get the data from the cloud.
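
Tableau does the reading for us, but if you don't have Tableau, a shared or published Sheet can also be pulled straight into a script. A rough sketch (the sheet ID and column names are placeholders):

```python
import pandas as pd

# Placeholder ID -- this only works for a sheet that is shared or published to the web.
SHEET_ID = "YOUR_SHEET_ID"
CSV_URL = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv"

responses = pd.read_csv(CSV_URL)                          # one row per form submission
# "Team" is a placeholder for whatever your form's team-number question is named.
print(responses.groupby("Team").mean(numeric_only=True))  # quick per-team averages
```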

How did you use this data to make a pick list for alliance selections?

We make very significant use of the scouting data for qualification and elimination match strategy in addition to elimination alliance selection. For me, the value of scouting is 80% match strategy / 20% alliance selection. As an example, here is our scouting data for our last quals match at Smoky Mountains. The first graphic is for our teammates.

This lets us understand the capability of our alliance. We can see that we’ll probably want us and 2856 to start on level 2. We will prioritize our best sandstorm placement location since our teammates haven’t demonstrated sandstorm capability. It looks like 2856 should focus on rocket hatches in teleop, for which they have demonstrated up to level 3 placement. We will focus on cargo. 7406 has played some defense and their likely most valuable role will either be defense or counter-defense depending on the opponent alliance capability/strategy. We will take the level 3 climb, since neither teammate has that capability.

Our opponent alliance looked like this.

For Deep Space, the only thing we really have to worry about regarding the offense capability of the opponent alliance is the total score each bot might be capable of. It doesn’t matter much if they score in rockets or the cargo ship. It matters a tiny bit if they are better at hatches or cargo in case we need to plan defense for choking off the cargo ship or choking off a hatch loading station.

What does matter is defense. If opponents play defense, we need to have a strategy to deal with it. In this case, it looks like 4576 has a reasonable chance to play defense. Since the other two opponents are averaging a total score of 32 and we average 38 plus contributions from our teammates, the only strategy that would likely yield a win for the opponents is for 4576 to play defense on us and hope it is effective. Thus, our alliance strategy going into the round is to expect defense and to plan for 7406 to play counter-defense and give us space to operate. If the opponents do not play defense, 7406 will play defense and prioritize 343, looking to make it difficult for them to get in and out of the hatch loading area.
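
To make that arithmetic concrete, here's a toy version of the projection (the numbers below are made up, not the actual scouting data from that match):

```python
# Made-up per-team averages, not the real scouting numbers from that match.
our_avgs = {"4020": 38, "2856": 18, "7406": 8}
their_avgs = {"343": 20, "4576": 12, "third": 12}

ours, theirs = sum(our_avgs.values()), sum(their_avgs.values())
print(f"Projected score: us {ours}, them {theirs}")

# If we project ahead on offense alone, the opponents' best path is usually defense,
# so we assign a counter-defense role before the match starts.
if ours > theirs:
    print("Expect defense -> plan counter-defense")
```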

If you watch the match video on TBA, you will see that the round played out pretty much as the scouting data suggested it would. 7406 did a great job of counter-defense and was critical in the round win. Without the scouting data, preparing an effective match strategy would be much harder.

For alliance selection, we just print out a table of the main scoring categories. For 2019, total points was most important for sorting. We wanted to know hatch vs. cargo capability to maximize point potential of the alliance. We wanted to know hab exit capability to make sure we had two level 2 exits. We also wanted to know climb capability to try to end up with a level 3 and two level 2 climbs (I don’t recall multiple level 3 capability at the regionals we attended).
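
That printout is really just a sort over per-team averages. A minimal sketch (the column names are placeholders for whatever your sheet actually records):

```python
import pandas as pd

# Column names are placeholders for whatever your scouting sheet actually records.
teams = pd.read_csv("team_averages.csv")
picklist = teams.sort_values("avg_total_points", ascending=False)
print(picklist[["team", "avg_total_points", "avg_hatches", "avg_cargo",
                "hab2_exit_rate", "best_climb"]].to_string(index=False))
```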

Our captain takes that sheet into alliance selection, but is on the phone with one or two team members in the pit who are looking at the more detailed Tableau analytics and can see comments, scoring trends, and more. Those people work together to prioritize primarily the 2nd pick based on the combined capabilities of the first two picks. The first pick is often pre-arranged or down to maybe just a few options which are pre-prioritized.

For us, in 2019, the first pick was mostly about raw point scoring ability, regardless of how it was accomplished. We could flex to cargo or hatches as needed. We could flex to rocket or cargo ship as needed. We could flex to level 3 or level 2 climb as needed.

By the time we got around to the 2nd pick, the key capability of teams in that rank range was successful cargo cycles into the cargo ship or rocket level 1. Ability to climb to level 2 was also important, but only if the difference in low cargo was less than 3 points. If one of the best defensive bots was still available, we could swap to that pick strategy, but our primary goal was to be able to outscore because we had three strong scorers and defense, if played, could not stop all of them.
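
That cargo-vs-climb tie-break is simple enough to write down as a rule. An illustrative sketch using 2019 point values (a cargo is worth 3 points, a HAB level 2 climb 6), with placeholder fields:

```python
def prefer_second_pick(a, b):
    """Illustrative version of the tie-break described above.

    a and b are dicts with placeholder keys:
      'low_cargo_pts' -- average points from cargo into the cargo ship / rocket level 1
      'climb2'        -- True if the team reliably climbs to HAB level 2 (6 points)
    """
    gap = a["low_cargo_pts"] - b["low_cargo_pts"]
    if abs(gap) >= 3:                       # a big enough cargo gap decides it outright
        return a if gap > 0 else b
    if a["climb2"] != b["climb2"]:          # otherwise the level 2 climb breaks the tie
        return a if a["climb2"] else b
    return a if gap >= 0 else b

# Nearly equal cargo scoring, one candidate can climb to level 2:
print(prefer_second_pick({"low_cargo_pts": 9, "climb2": False},
                         {"low_cargo_pts": 8, "climb2": True}))
```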

I keep meaning to put a data dump on Github, but here’s our OHCL spreadsheet. You can take a look at some of the cool stuff we did this year, and make a copy if you want to interact with it.

  1. Match data was collected through a custom digital form (technically a Google Scripts Web App built on a spreadsheet), which you can see here. Pit data was just a Google Form connected to that same spreadsheet. We did this so that we could have a custom form (see especially the second and third pages' cargo and rocket diagrams, which work great with mobile touch) that the scouters could use on their phones. I'd use team tablets if we had them, though; not everybody had data available on their phones, which created (fairly minor, all things considered) headaches.
  2. Everything. We want to collect all the data we can, and then analyze what’s important based on the competition. Anything that happens in a match affects its outcome, so collecting all the data gives us much better insights into matches, and by extension, teams.
    (Edit: I should put a clarifier on this: some things that we didn't find to be terribly important, like fouls, we relegated to a comment box. Additionally, I don't think we've ever done driver skill, except in exceptional cases via the comment box, because we've never found a way for it to work for our team. I think skill is largely inferred through results anyway, and the extent to which it's visible on the field but isn't reflected somehow in the data is, in my opinion, largely not worth worrying about.)
    (Also edit: We didn’t use where they put game pieces too much, but I think that was largely the interface’s fault. If I had incorporated that into the interfaces we used more often, I think it would have been more important.)
  3. Slightly. The only changes we had to make were a few lines of text updating the event name and settings. Mostly for fun, I finished off a match prediction engine during our second competition; I had largely built it during the preseason, and by that point things were running smoothly for once, so I had time to poke around with it (a rough sketch of the idea is below the list). We never really used it for anything beyond looking interesting, but it did entertain the scouters sitting in the stands, so that’s a benefit, I suppose.
  4. As you might have inferred from my first answer, Google Sheets!
    4a. Lots of them! (Mostly good.) It was easy to learn (for me; I realize that’s not true for everybody) and there’s practically no ceiling. Anything you want to do, there’s a function, an add-on (like my TBA Requests), or the ability to write a custom function for it. Since you build it from scratch, you know how to customize it, because you set it up in the first place. There’s a time investment, to be sure, but its versatility has really convinced me that it’s excellent for scouting. You can do custom UIs, data analysis and modeling, really anything, fairly easily.
  5. This is probably the biggest weakness with our scouting at the moment. We have a meeting the evening before the last day to discuss potential candidates, where qualitative observations from mentors and scouters meet custom and simple numerical ratings to generate a pick list. I think the general concept is pretty decent, but it’s unorganized and I don’t think anybody really leaves the meeting happy and confident with the list.
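
For anyone curious about the prediction engine mentioned in #3, the general idea was nothing fancy. Here's a toy sketch, not the actual implementation (sigma is just a made-up tuning knob):

```python
import math

def red_win_probability(red_avgs, blue_avgs, sigma=15.0):
    """Toy predictor: sum each alliance's scouted average scores and turn the
    difference into a win probability via a normal CDF. sigma (score noise,
    in points) is a made-up tuning knob, not something from the spreadsheet."""
    diff = sum(red_avgs) - sum(blue_avgs)
    return 0.5 * (1 + math.erf(diff / (sigma * math.sqrt(2))))

# Made-up per-team average scores:
print(f"Red win probability: {red_win_probability([35, 20, 12], [28, 25, 10]):.0%}")
```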

I’m more than happy to field any questions!

We use paper and record the data into an Excel workbook.

We looked at every aspect of a robot’s performance. We couldn’t climb at the time so we looked for a robot that was decent with the game pieces, but could make a Lvl 3 climb.

We made a few editorial changes to the data sheets: combining the abilities during the sandstorm with teleop.

We used Excel.

We made side-by-side comparisons of robots that had complementary abilities. We looked at teams that could perform the most successful cycles - focusing on teams that could place at least 6 game pieces per match (not really differentiating between hatch panels & cargo).

Check out our scouting whitepaper; it talks in depth about every aspect of our scouting sub-team, from data entry to scout analysis. I posted the link on CD; it should be called “FRC 25 - Raider Robotix scouting whitepaper 2019”.

I haven’t read up on this conversation, so if I’m butting in, I apologize.

I’ve run through so many different ways of scouting with my team
Three different paper styles
Two different google docs
Paper and then an excel sheet
Three different styles of google sheet

I’ve found that Google Sheets is working the best so far. I can link a copy of our final design; it’s still set up for last year’s game, but I can share it if you’d all like.

That would be awesome! I appreciate any and all input I get

These all have some data in them; they are two different types of spreadsheet we have used.

https://docs.google.com/spreadsheets/d/e/2PACX-1vRwnjksgk_HOvbzEvD7bJxGLTKWddaCPjfzgarKfaSrFTbH1tUSnLOSMW_Sb6034kLCiVNjifzMaX5I/pubhtml

This is the second try; it calculates data percentages and is packed with data.

https://docs.google.com/spreadsheets/d/e/2PACX-1vTRQjDRjQZqMjj15JK-2FqDL9pXojwziXABdRurIQhFd5HT5ZWDn1_yFCHpZ0F4OIGqVKNWbZClTLLj/pubhtml

This is my first try; it doesn’t calculate anything, just nicely shows all the data.

Unless access is intended only for @Kaitlynmm569, could you please open it up for the rest of us?
Or, I could be going about it the wrong way (which is completely possible).
Thank you

It should be published to the web, so it should work for everyone

If it’s not please tell me

I keep getting this message: You need permission to access this published document.

Both

Hatch Panels and balls at levels 2 and 3, also hab 3 climb

Started with everyone scouting; at DCMP, I reviewed and scouted every match on my own at the end of the day, with help from a few team members, to build a list of teams to focus on.

My head

Idk, 5 or so hours of looking at the data and matches and then somewhat arbitrarily deciding what teams I wanted to pick.

To say the least I need a better system.

It’s probably the email I’m using. I’ll have to log into a different email to fix the problem; I can fix it in less than twenty minutes.

I highly recommend using the Robot Scouter app on Amazon tablets. You have to sideload it, but it’s extremely easy to use, it’s customizable, and it uploads to a Google spreadsheet with averages and individual team data. I suggest making an email account to link all the tablets. It has made everything so much easier; of course there is room for it to improve, but it has been so simple.

  1. electronic
  2. the main data points I remember that we made use of were pretty usual things such as cargo/hatch count, rocket 1/2/3 counts, cargo ship counts, climb counts, and per-match defense ratings
  3. once or twice, don’t remember details, probably little functional things in our app
  4. We wrote an Android application to use for scouting; each scouter would record data and then send a CSV file to a master device over Bluetooth. We then analyzed the data with Tableau on someone’s laptop (a rough sketch of the CSV merge step is below the list). Late in the year, I added a basic analysis screen so that we could take a quick look at scouting data on the fly without deep analysis.
    4a. It worked great. I would recommend using a scouting app, and if you don’t want to program one yourself, Robot Scouter on the Play Store looks great. That said, I’d still recommend programming your own, since it gave us complete control over the behavior and was a great learning experience for the programmers involved.
  5. we analyzed it in Tableau and looked for different criteria, which were a combination of general things we look for and specific things we wanted at the competition.
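
The merge step mentioned in #4 is tiny. A rough sketch, assuming every scout's device exports a CSV with identical columns:

```python
import glob
import pandas as pd

# Assumes every scout's device exported a CSV with identical columns.
files = sorted(glob.glob("collected_csvs/*.csv"))
combined = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)
combined.to_csv("all_matches.csv", index=False)   # one file for Tableau to point at
```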

For some general advice: try to get the input of as many people on the team as you can; that way, you can find out things they noticed that you may have missed.

Here’s a good paper: Your scouts hate scouting and your data is bad: Here’s why - Competition / Scouting - Chief Delphi

Scouting Systems Survey is a relevant thread about how teams scout.

Sorry, I’m a bit late to the party.

1.) How did your team scout? (Paper, electronic, etc)?

Our team scouts electronically with cheap Samsung and Lenovo tablets using the Robot Scouter android app.

2.) What was the main data you looked for/found most important? (Cargo, climb, etc)

We calculated the number of points robots scored during matches by tallying their in-game actions. In qualification matches, this was pretty useful for comparing the “point scoring potential” between two alliances. For example, if one team on the opposing alliance is a much better scorer than the others, we might choose to send one of our 'bots over to interfere with their scoring.
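
Since Deep Space actions map directly to points, that tally is easy to script. An illustrative sketch using the 2019 point values (the dictionary keys are placeholders, not Robot Scouter's actual schema):

```python
# 2019 Deep Space values: hatch = 2, cargo = 3,
# sandstorm HAB line crossing = 3 / 6 (from level 1 / 2),
# endgame HAB climb = 3 / 6 / 12 (levels 1 / 2 / 3).
CROSS_PTS = {0: 0, 1: 3, 2: 6}
CLIMB_PTS = {0: 0, 1: 3, 2: 6, 3: 12}

def match_points(r):
    """r is a dict of tallies with placeholder keys -- not Robot Scouter's schema."""
    return (2 * r["hatches"] + 3 * r["cargo"]
            + CROSS_PTS[r["start_level"]] + CLIMB_PTS[r["climb_level"]])

print(match_points({"hatches": 5, "cargo": 4, "start_level": 2, "climb_level": 3}))  # 40
```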

Keeping track of HAB climbs was also really helpful for our team. Our robot’s climber wasn’t the most reliable this year so we liked being able to compare climb success rates to determine which one of us would go for Level 3 and get the ranking point.

Rocket stats were another thing we looked at a lot. If our alliance partners wanted to, and the scouting data indicated it was possible, we’d try to go for a rocket RP during the match.

3.) Did you change your scouting at all throughout the season?

Not anything major that I can remember. If we were able to pick more often, we probably would’ve liked to alter our scouting sheet to get more specific data on the defensive capabilities of a robot. We made a lot of changes to our scouting data briefings that were sent to our Driveteam though.

4.) What program did you use to take in and look at your data?
This was our first year using Tableau and we really like it! We export our data directly from Robot Scouter and run it through a simple command-line tool that converts it into a format Tableau can understand. The app also exports a JSON file of raw data that you can write your own application to process as well.
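
If you do want to process the raw export yourself, the flattening step is small. This sketch assumes a purely hypothetical JSON layout (the real export will be structured differently):

```python
import csv
import json

# Hypothetical layout -- Robot Scouter's real JSON export will be structured differently.
with open("robot_scouter_export.json") as f:
    raw = json.load(f)   # assumed: {"teams": [{"number": 1234, "metrics": {...}}, ...]}

rows = [{"team": t["number"], **t["metrics"]} for t in raw["teams"]]

with open("for_tableau.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```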

Here’s a link to the sheet we made for alliance selections and an example driveteam briefing. Sorry that they’re not as high quality as what other teams have made – we still have a lot to learn.

4a. Opinions on this system?
Overall, everything worked pretty well for us this past season. We don’t really have the resources to program a separate data collection/analysis application every year, so we like the simplicity and versatility Robot Scouter gives us. We haven’t done any substantial data analysis in the past, so we have no way of comparing our current system to others, but based on this year we’ll definitely stick with Tableau in the future if it’s in the KOP again.

5.) How did you use this data to make a pick list for alliance selections?
We first create an alliance selection workbook in Tableau (see above). It’s set up so we can easily find robots that do specific things well (best scorers, hatch panel placers, cargo ship fillers, climbers, etc.). We also look at qualitative notes that scouts have made about the different robots.

Lemme know if you have any questions and I’d be happy to try and answer 'em! We’ve written a detailed introduction to our scouting process here if you want to learn a lot more about our method.

Robot Scouter was also developed by one of our former members so please feel free to ask me if you have any questions about that!

1. Our team originally used a paper scouting system at our first event but later used a tablet scouting system (electronic).

2. The data that we found most important was whether a robot of interest could effectively place hatches, especially on the rocket.

3. We changed our scouting system during the season.
3a. We went from a paper scouting system to a tablet scouting system. At our second event, we transferred the tablet data by connecting the tablets with wires to a master computer. At States, we experimented with Bluetooth transfer, but it didn’t go very smoothly and we ended up having to use wires again. By Worlds, however, we were using Bluetooth to great effect and could essentially collect data at will. We did this by transferring the files from the tablets to a master computer, with apps on both ends to help with the process: ASTRO Bluetooth Module and ASTRO File Manager on the tablets, and an app called BlueFTP on the master computer.
3b. We changed because entering the paper data into laptops was very exhausting for the people doing it. By using electronic scouting, we were able to completely bypass this step and have the data instantly.

4. We used Excel to view data.
4a. Excel was great for me since I’m the most familiar with that over something like Google Sheets or Tableau.

5. We sorted this data by descending order (average hatches placed per match) and with input from our drive coach, made a picklist for robots that we could work well with.

  1. My team used a program called Epicollect5 for data entry on mobile phones.


It’s a really simple and easy-to-use app with super easy survey customisation, and it works with iOS and Android. It also allows offline entry for later upload. I would highly recommend it.

  2. The data we focused on was mostly basic. We did total hatches and cargo separately, maximum height of placement, and habitat level at start and finish.

  4. Epicollect5 does some basic tabulation, and you can export it to Excel easily. You can do that if you want, but if you want data to be fed continuously into an Excel spreadsheet (or whatever program you are using), the system has pretty simple APIs.
    Here’s the basic table:
    https://five.epicollect.net/project/koalafied-2019-qualifiers/data

Here’s the csv format: (look at the web address and you can get your program to do it by entering the name of your survey in the right spot, assuming it’s public)
https://five.epicollect.net/api/export/entries/koalafied-2019-qualifiers?format=csv&headers=true
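
If the project is public, that endpoint can also be read directly by a script. A minimal sketch:

```python
import pandas as pd

# Works for public Epicollect5 projects; swap in your own project slug.
CSV_URL = ("https://five.epicollect.net/api/export/entries/"
           "koalafied-2019-qualifiers?format=csv&headers=true")
entries = pd.read_csv(CSV_URL)    # one row per scouting entry
print(entries.head())
```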

From the CSV format page, Excel was able to automatically access the data and do whatever we wanted with it. Here’s a screenshot of the final table:


There are a few things I’ve messed with a bit, but the bulk of it is apparent. We didn’t rely on this alone; we looked at it alongside communication and more qualitative observations as well.

I’d highly recommend using this system, and I’m happy to help if anyone has questions about it. From my experience, Epicollect5 has the same if not better functionality than custom apps, and it’s way easier to use.

We mostly used paper and found that it was very inefficient. We tracked basically everything worth points, plus things like whether a robot was being defended or not. We added the “was defended” box after provincials to help differentiate the data, and we also redesigned the sheet after provincials to make it easier to use. We interpreted the data in Excel, which worked really well for us since we could see the actual numbers. For pick lists, we mostly looked at things like high-level stuff and their overall scoring, which was in a sheet called “organized Scouting”.