What data will your scouting collect?

Besides the obvious raw data (panels, cargo placed, climbing), what match data do you think will be useful to collect for the coming season? Any useful ways of measuring a team’s efficiency? Other tips for scouting or data analysis are also welcome.

3 Likes

Durability
Driver team/driver skill
Bad wiring (dead on field)
Can they play defense?
Can they do ground hatch pickup?
Do they help win?
Team personality
Can they do what you can’t or won’t do?

5 Likes

Less is more in scouting, quality trumps quantity. Be sure your scouts are not overwhelmed collecting data points and can instead focus on watching the game. Eyes-on-field time is important and seemingly overlooked (pun intended).

9 Likes

As @Skyehawk states, eyes on field, or “eyes on bots,” is the main aspect of scouting that gives your team actionable info (game planning/alliance selection). Stats deceive because we don’t really know what a specific robot did; we only know averages of how its alliance did, and to an extent the schedule of partners can certainly sway stats.

Another issue is the lack of data points: most events have 8-11 qualification matches, which is only 8-11 potential data points on overall performance across varied partner alliances. Statistically, this is not a great way to infer how a single team did, as no two matches are the same in terms of the robots on the field.

By not relying on published alliance stats, and instead watching robots, your data quality increases. The online stats can then confirm your observations. It’s more work to watch and take notes, but it’s simply better information, IMO.

It’s rather easy to select the top 10 at an event; the rest comes from knowing the robots through unbiased observation.

1 Like

This can’t be stressed enough. Scout only data that you know exactly how to use properly, and know what it means. Stay away from assigning arbitrary values to qualitative things; like the poster above, things like:

Are very questionable metrics to scout so broadly, because my “good driving” isn’t the same as your “good driving.” Don’t assume why teams died on the field; just mark it down.

I don’t really want to pick on this too much but…

Questions like these are also not very good for generic scouting. They should be discussed during a pick-list meeting with clear objectives about what you are looking for in partners, not explicitly recorded for every single team.

These are much better questions, and I would advise you to stick to easily quantifiable metrics like:

  • How many hatches did they score, and at what level?
  • How many cargo balls did they score, and at what level?
  • What level of the hab platform did they reach at the end?

etc…
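Quantifiable per-match metrics like these aggregate cleanly. A minimal sketch in Python (the field names and team numbers here are hypothetical, not from any particular scouting system):

```python
from collections import defaultdict

# One record per robot per match; fields are illustrative only.
records = [
    {"team": 1678, "hatches": 5, "cargo": 6, "hab_level": 3},
    {"team": 1678, "hatches": 4, "cargo": 7, "hab_level": 2},
    {"team": 254,  "hatches": 7, "cargo": 8, "hab_level": 3},
]

totals = defaultdict(lambda: defaultdict(float))
counts = defaultdict(int)
for r in records:
    counts[r["team"]] += 1
    for k in ("hatches", "cargo", "hab_level"):
        totals[r["team"]][k] += r[k]

# Per-team averages, ready to sort for pick-list building.
averages = {
    team: {k: v / counts[team] for k, v in metrics.items()}
    for team, metrics in totals.items()
}
print(averages[1678])  # {'hatches': 4.5, 'cargo': 6.5, 'hab_level': 2.5}
```

Sorting `averages` by any one key then gives a quick first-pass ranking that qualitative notes can refine.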

7 Likes

We like to do a preliminary pit scout and check robots out up close and personal. Another point we like to watch is individual team performance. Say a team has a few hard matches, say against the top teams at the comp. Recognizing that some matches aren’t meant to be won is important when trying to gauge a team’s performance.

Another important element is compatibility. To go back a few years for an example or two: in 2015 (Recycle Rush), you couldn’t just have a team of the best chute bots, or three of the best landfill bots, as there wasn’t enough room to move. In 2016 (Stronghold), a team of three of the best shooters was useless if at least one couldn’t cross a plurality of defenses. (I’ll just skip 2017, as fuel was a joke that year, and having three dedicated gear bots on an alliance was a viable strategy.) This year, for example, if an alliance has two bots that do cargo really well but can’t do hatches, they may have a hard time. You need a well-rounded team, one that complements each other and plays off of each other’s strengths.

Another factor is how good the drivers are. I’d take a mediocre robot with a great driver over one of the best robots with a mediocre driver.

We focus on quantitative data that can be easily aggregated and sorted for desirable traits. Qualitative data is used to verify or down-select from the teams that rise to the top of the quantitative analysis.

We use an Android app that makes this data entry easier and more engaging to the scout, which helps with data integrity. This app also synchronizes the data to a central database, and enables scouts to correct mistakes when they recognize them. The direct synchronization also means full integrity of data from entry to analysis. Someone entering data into a database from a paper form is almost as error prone as the scout filling out the form in the first place.
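This isn’t the app’s actual code, but the store-locally-and-sync-later pattern described above can be sketched roughly like this (class name, key shape, and record fields are all my own invention):

```python
# Sketch: entries are written to a local store keyed by (match, team),
# queued for sync, and flushed to a central store when a connection is
# available. Re-submitting under the same key acts as a correction.
class ScoutingStore:
    def __init__(self):
        self.local = {}       # (match, team) -> latest record
        self.pending = set()  # keys not yet pushed to the server

    def record(self, match, team, data):
        self.local[(match, team)] = data   # overwrite = correction
        self.pending.add((match, team))

    def sync(self, server):
        for key in list(self.pending):
            server[key] = self.local[key]  # push the latest version
            self.pending.discard(key)

store = ScoutingStore()
server = {}
store.record(12, 4183, {"hatches": 3})
store.record(12, 4183, {"hatches": 4})   # scout fixes a mistake
store.sync(server)
print(server[(12, 4183)])  # {'hatches': 4}
```

Because corrections simply overwrite the same key, the central database only ever sees the scout’s latest version of a record, which is the integrity property described above.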

The app I developed for this is available on Google Play. I plan to release the 2019 version before week 1 events begin.

1 Like

That’s the scouting sheet my team will use this year.
We use the comment zone for specific characteristics, like collecting hatches from the ground.
Each sheet is dedicated to one robot, so it’s easier when you enter your data into Excel.
We also have one or two students who make a top-5 list of defensive robots.
The rest of the scouting can’t be done based on match observation.

1 Like

Our scouts will be looking for the last four numbers of your SS#, mother’s maiden name, favorite or first pet, the city of your birth…

This information will be passed on to the programming team, then sold on the dark web to supplement our fundraising. :wink:

12 Likes

One thing I haven’t seen mentioned in the scouting threads is that this year looks like it will be better for OPR than the previous few. Unless the game plays really inconsistently, OPR should have an error bar under 10 points this year.

My guess is the big gotchas for OPR will be:

  1. Penalties - the game has a few tech fouls in it that can carry through (those robots get a boost, and their partners in other matches get a minus).
  2. Defense - if a couple of robots are playing shut-down defense in a tournament, that can carry through (robots seeing defense get a minus, and robots not seeing defense get a bump).
  3. Disconnects - a robot disconnecting in a feeder station will kill an alliance’s offense (partners get a minus, and subsequent partners of those partners get a boost).
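For anyone who hasn’t seen it written out: OPR is just a least-squares fit in which each alliance’s score is modeled as the sum of its members’ individual contributions. Below is a minimal pure-Python sketch with invented team numbers and scores; a real implementation solves the same normal equations over a full event schedule, where matches far outnumber teams:

```python
def solve(M, v):
    """Gaussian elimination with partial pivoting for M x = v."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]  # augment with v
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

teams = [101, 102, 103, 104, 105, 106]  # hypothetical team numbers
alliances = [                           # (alliance members, alliance score)
    ([101, 102, 103], 24),
    ([101, 102, 104], 22),
    ([101, 103, 105], 23),
    ([101, 104, 106], 19),
    ([102, 105, 106], 20),
    ([103, 104, 105], 17),
]

idx = {t: i for i, t in enumerate(teams)}
n = len(teams)
A = [[0.0] * n for _ in alliances]      # alliance-membership matrix
s = []
for row, (members, score) in enumerate(alliances):
    for t in members:
        A[row][idx[t]] = 1.0
    s.append(float(score))

# Normal equations: (A^T A) x = A^T s
ATA = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(n)]
       for i in range(n)]
ATs = [sum(A[r][i] * s[r] for r in range(len(A))) for i in range(n)]
opr = dict(zip(teams, solve(ATA, ATs)))
```

The gotchas above all show up as violations of the model’s core assumption that contributions are additive and constant per team, which is exactly why penalties, defense, and disconnects smear across partners.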

2 Likes

A lot of this is subjective, and I think it’s up to the strategy/drive team to make mental notes on it. A rookie at their first event will have a different opinion than an experienced driver. Scouting should be quantitative, to weed out lucky match schedules, and fully about ability; I also have a strategy group dedicated to taking notes, with a pad, about team mechanics.
I measure efficiency by OPR, and have organized all FIRST New England teams by OPR, just to track progress over the past couple of years.
I also find it useful to track drops (which helps determine cycle time, crucial in this year’s game), as well as penalties (crossing midfield, a more-than-5-second pin) and when the robot breaks. One drop would be when the cargo falls out because no panel was placed, a panel falling due to poor Velcro, etc.
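As a tiny illustration of deriving cycle time from scouted events: if scouts note the time of each scored game piece, cycle times are just the gaps between consecutive scores (the timestamps here are invented):

```python
# Seconds into the match when a game piece was scored (hypothetical).
score_times = [18, 42, 65, 91, 118]

# A cycle is the gap between consecutive scoring events.
cycles = [b - a for a, b in zip(score_times, score_times[1:])]
avg_cycle = sum(cycles) / len(cycles)
print(avg_cycle)  # 25.0
```

Drops then show up as unusually long gaps, which is why tracking them alongside scores helps explain outlier cycles rather than just lowering the average.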

-AJ

Teams should be able to select great alliance partners with just the quantitative data on each robot in every match, but I believe that context and metadata can be helpful as well, such as knowing what that robot’s strategy in each match was, so that you can determine its “true” proficiency in any given objective. This can sometimes be done without any quantitative data, just with smart analysis. I saw this go wrong quite a few times last year when teams were picking their third robot. Most teams simply picked the robot with the next-highest number of exchange cubes scored. A more important metric was how many exchange cubes a team scored in matches where it scored more than one. Too many times last year, I saw great robots get passed up for a robot that had focused on the exchange every match and did it suboptimally, even though its average was higher than the rest of the field’s.
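The exchange-cube point can be made concrete: compare a raw per-match average against the average over only the matches where the robot actually played that role (all numbers below are invented for illustration):

```python
def averages(match_counts, threshold=1):
    """Return (raw average, average over matches scoring > threshold)."""
    raw = sum(match_counts) / len(match_counts)
    productive = [c for c in match_counts if c > threshold]
    conditional = sum(productive) / len(productive) if productive else 0.0
    return raw, conditional

# A robot that grinds the exchange every match, modestly:
grinder = [3, 4, 3, 4, 3, 4, 3, 4]
# A versatile robot that only played the exchange in some matches:
versatile = [0, 8, 0, 7, 0, 0, 0, 9]

print(averages(grinder))    # (3.5, 3.5)
print(averages(versatile))  # (3.0, 8.0)
```

By raw average the grinder looks better (3.5 vs 3.0), but the conditional average reveals that when the versatile robot commits to the exchange it is far more productive, which is exactly the pick-list trap described above.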

2 Likes

Note that to be legal, you cannot be connected via WiFi at the event. There are various options for connecting otherwise.

Here are our scouting whitepapers for systems from 2013 to 2018. We’ve taken what we’ve learned from 1983, 971, 254, 1114, 118, and 148, as well as other teams. The associated threads also have useful information.

Our approach to qualitative data gathering is fairly unique but we believe that it is highly effective (although we’ve had a few “how did we miss that?” moments). We are changing our approach this year to measuring defensive effectiveness, but in general we are using the same method. We are also making a significant technical change in the visual set up as well. Come by our scouting crew at our events to see what we’re doing this year (at Central Valley, Sacramento, Boise, and hopefully Houston Champs).

2018 paper

2017 paper

2016 paper

2015 paper

2014 paper

2013 paper

1 Like

We do not require a wifi connection for this scouting method. The app stores all data locally and synchronizes with the server when a connection is available. It is also cell-network friendly, as the amount of data exchanged for a full event per device is usually less than a megabyte.

1 Like

We always do pit scouting before quals, and usually do a second round of pit scouting halfway through quals. When it comes to match scouting, we have about 5-6 students (one per bot) scouting each match. Ever since Ontario became a district, we also do some pre-competition scouting before our second event.

How bout this much? FRCMscout_Preview · GitHub

The tbaData part was me testing the code that matches the scouting data to the TBA match data, so the data is a weird mix of 2018 and 2019 data. This is just for preview/testing purposes.

Development happening here:

Your assumption that you cannot know what specific robots do in an individual match is simply an error in your scouting process. We have a one-student-to-one-robot ratio scouting our quantitative data points, and a few others who do qualitative scouting for the entire field. This way, as long as the scouts accurately record what they are watching, we get pretty solid info on an individual team’s overall capability.

This has allowed us in the past to find gems of robots for our second pick that do one task really well but haven’t been chosen yet.

And if you can get accurate data on an individual robot’s capabilities, then 8-11 matches provide plenty of data to make accurate assessments.

3 Likes

So in other words… one scout does qualitative scouting per bot in a match. I agree. The stats are always per alliance and are somewhat skewed (matchup makeup and partner makeup).

@New_lightnining To scout every bot with two scouts per bot would take ~110 (10x6x2-10) scouts, and it is inefficient because at least half the field does not do what you need well, either by repeating what you do, or by not doing what you don’t do and need. I will always rate qualitative notes on cycles higher than stats.

We instead focus on the teams we play, and on those that help us gain better insights on the bots we are likely to share a field with.

Why not just 12 scouts? I’m lost on your logic.

2 Likes