I debrief with the drive team after each match about their perspective on both partner and opponent drive team abilities, to complement stand observations.
That is a really valid thing to do. We have tried to immediately review match video and discuss what they found out about working with the other teams on the alliance. I WANT scouting metrics from the stands to be what drives decisions, but the quality of that data is totally dependent on the interest level and focus of the scouts, which makes it less reliable for us most years. If we do have a “super scout” I usually pull them out and have them walk around with me talking about teams.
Our first pick on Carver this year (1746 OTTO, ranked 44th?) was basically a drive team decision: we had one match with them and they totally killed it. Our drive team was very impressed working with them.
New things i see:
TELEOPERATED - Did they go under the trench?
This can honestly be a pit question. It should be pretty easy under normal match play to see if they're truthful, and if they're not being truthful about something this simple, I'd be concerned about picking them, since there may be other basic things they aren't truthful about either.
TELEOPERATED - Did they play defense?
Since no info is given about whether the defense was good or bad (which is still fairly subjective, and I personally don't like having rotating scouts judge that), I feel this isn't a worthwhile metric to keep track of. That being said, maybe you have a way of using it (maybe watching the match video for a team with a lot of it checked?) that makes it worthwhile.
TELEOPERATED - Did they self level on the rung (were they able to level the shield generator switch without any other bots)?
I'm curious about the reasoning on this. I don't have any compelling reason not to collect it beyond it not being useful, as I don't see what would be worth tracking here.
Boltman’s earlier post about the form becoming pretty long and busy is starting to look more valid now. What info do you think you will really need (i.e., which datapoints will actually be used in a useful way; this may not be obvious until after your first event), and how can you simplify it for the scouts?
This kind of sounds like a culture issue honestly, but it seems like scouting only 3 matches/DT discussion paid off at a few events already.
If you can self level, it makes a triple-level climb much easier (at least that’s the reasoning I can think of).
Yes. Most of our “surprise” choices that turn out well are based on drive team/coach input and solid communication with other teams during comps, so we all kind of know who is struggling but then got that piece fixed in time, etc. We always share all of our info, what we are thinking, what is happening with our bot, etc., and are open about alliance picking; at least in our district, I think it helps. At Worlds it's a little harder for us when we get to pick (on the other end it's just saying “yes”, haha) because there's less time to build solid communication with 75 unfamiliar teams.
For the trench thing: it’s more to see if their driver is comfortable with that area. Just because they can doesn’t mean their driver will. Otherwise I’ll take into account what you said. Ty
The data fields that you’re collecting look decent. A word of warning - make sure that the venue you’re in has reliable wifi or that your scouts all have data; otherwise, you may suffer from missing entries.
Have you thought about how you’re going to compile and analyze this data? iirc Google Forms already has some built-in data analysis and the ability to upload results to Google Sheets, but you may want to enter some fake data and experiment with the analysis.
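To build on the "enter fake data and experiment" idea: once responses land in Google Sheets you can download them as a CSV and summarize them with a short script. This is a minimal sketch in plain Python; the column names ("Team", "High Goals", etc.) are hypothetical and should match your actual form questions.

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Fake form responses exported as CSV (column names are hypothetical).
FAKE_EXPORT = """\
Team,Match,High Goals,Low Goals,Climbed
1746,12,8,2,Yes
1746,18,11,0,Yes
254,12,15,1,No
254,18,13,3,Yes
"""

def summarize(csv_text):
    """Average high/low goals and climb rate per team."""
    by_team = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        by_team[row["Team"]].append(row)
    summary = {}
    for team, matches in by_team.items():
        summary[team] = {
            "avg_high": mean(int(m["High Goals"]) for m in matches),
            "avg_low": mean(int(m["Low Goals"]) for m in matches),
            "climb_rate": mean(m["Climbed"] == "Yes" for m in matches),
        }
    return summary

print(summarize(FAKE_EXPORT))
```

Running this against a few rows of made-up data before the event is a cheap way to confirm the analysis you want is actually possible with the fields you're collecting.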
On another note, I’d also recommend making a new copy of the form for each competition you attend, so your results don’t get mixed up with each other from comp to comp.
Self leveling also allows for 40 points vs. 25 points even if your other partners don't climb.
Hey OP, a lot of people have given some good pointers but I thought I’d add my own.
When it comes to scouting, you want to hit that balance of data gathered where it’s not too bloated and not too slim. No matter what you do, TALK TO YOUR DRIVETEAM AND SCOUTS. Communication is key not only for strategy: your drive team will let you know what they think is valuable, and your scouts will tell you what they do / don’t want to collect.
As for the Google Forms you showed, I’d caution against that format and length, as I’m biased towards things that are both easy to fill out and give you easy access to the data. Just like robots, I like elegant systems. I’ve used paper sheets and Excel all four years of mentoring and it’s never let me down. Pivot tables are beautiful and I’d advise getting comfortable with them if you want to go down that route.
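For readers who haven't used pivot tables: the core idea (group rows by a key, aggregate each column) is easy to reproduce in a few lines if you ever want to move off Excel. A rough sketch with made-up tally data; the field names are hypothetical:

```python
from collections import defaultdict

# Toy tally rows as they might come off a paper sheet (hypothetical fields).
rows = [
    {"team": "1746", "match": 12, "balls": 10, "climb": 1},
    {"team": "1746", "match": 18, "balls": 12, "climb": 1},
    {"team": "33",   "match": 12, "balls": 7,  "climb": 0},
    {"team": "33",   "match": 18, "balls": 9,  "climb": 1},
]

def pivot_mean(rows, index, values):
    """Excel-style pivot: one row per `index` key, mean of each `values` column."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[index]].append(r)
    return {
        key: {v: sum(r[v] for r in g) / len(g) for v in values}
        for key, g in groups.items()
    }

table = pivot_mean(rows, index="team", values=["balls", "climb"])
for team, stats in sorted(table.items()):
    print(team, stats)
```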
As for specifics this game, my team wants to keep an eye out for bots that shoot and climb. We think it’s valuable to get a count of how many balls are scored, where they score them, if they climb, and where. If it sounds simple, that’s because it is! You can get as advanced as you want, but I like to give my students something balanced so they are paying attention but not sweating from trying to keep track of everything.
I know I’m repeating myself, but, please talk to your team on what you should look for. At the end of the day, this data is just for your team and whoever you feel like sharing it with. If you take one thing away from this thread, it’s that after you get an idea of what methods others have done, go through different options with your team and mentors.
I’ll leave my sheet that I made for my team. It’s not perfect and it’s always up for changes as the game changes week to week. 2020 paper sheet.xlsx (11.8 KB)
Also sorry for the dissertation, scouting is my jam and I love any excuse to talk about it.
Hi @Chris_Goodis ,
I thought I’d suggest a couple of alternate start positions. “Lined up on the Left Trench” and “Lined up on the Right Trench”. I treat those as different from left and right.
I also wanted to add one more comment to your dissertation. For match scouting, don’t collect data you don’t know how to interpret. You ought to be able to weight every data point so that you could do a spreadsheet calculation and assign a single value to the robot at the end of the match. I don’t mean that you have to use a spreadsheet (I will, but you don’t), but you do need at least that level of understanding of the data you are collecting.
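The "weight every data point" test can be made concrete. A minimal sketch, with entirely hypothetical weights; the point is only that every field you collect appears somewhere in the formula:

```python
# Hypothetical weights -- pick your own. If a field you collect has no
# sensible weight, that's a sign you may not know how to interpret it.
WEIGHTS = {
    "auto_balls": 4.0,
    "teleop_inner": 3.0,
    "teleop_outer": 2.0,
    "teleop_lower": 1.0,
    "climbed": 25.0,
}

def match_value(entry):
    """Weighted sum of one match-scouting entry (missing fields count as 0)."""
    return sum(WEIGHTS[field] * entry.get(field, 0) for field in WEIGHTS)

# One robot, one match: 3*4 + 8*2 + 1*25
print(match_value({"auto_balls": 3, "teleop_outer": 8, "climbed": 1}))  # 53.0
```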
This is partly true. On the one hand, definitely don’t collect data you don’t need. It’s annoying for the scouts and extra noise to get through in the scouting meeting. On the other hand, you don’t know where the competition will take you and some data that may not seem relevant when making your form may be necessary later in the event. If you can’t edit the form on the fly, best to have a contingency plan for collecting extra data. Additionally, on the third hand, assigning a score to each team based on quantitative match data alone is a misguided strategy. I am not sure if that’s what you were suggesting, but if it is, think again. An effective scouting meeting is an effective discussion, not the most definitive spreadsheet formula to answer who the best robots are. And that’s coming from someone that uses spreadsheets for just about all of life’s hard questions.
I do recommend to everyone to keep up with Katie_UPS threads about scouting, here is the first one Your scouts hate scouting and your data is bad: Here's why
Friendly reminder that the FIRST Events API can tell you a number of things:
- Did a specific robot get off the initiation line?
- Did a specific robot hang from the generator switch?
- Is the generator switch scored as balanced?
- Did an alliance reach a given shield generator level?
- How many balls went into each goal?
- If your alliance can’t reliably see the inner goal shots getting scored, this could give you a way to apply a fudge factor based on how many high goal shots a robot made.
All of that can be scraped and put into a spreadsheet; we did the same thing last year using TBA’s API (and I expect them to port over FIRST’s into theirs soon). Whatever you scrape is one less thing for your scouts to forget or write down wrong, and (if on paper) one less thing you have to re-enter.
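As a sketch of the TBA scraping mentioned above: TBA's v3 API takes an `X-TBA-Auth-Key` header and serves match objects with a `score_breakdown` per alliance. The endgame field names below follow TBA's 2020 breakdown as I recall them; double-check them against the live API before relying on this.

```python
import json
import urllib.request

TBA = "https://www.thebluealliance.com/api/v3"

def fetch_matches(event_key, auth_key):
    """Pull all match objects for an event from The Blue Alliance."""
    req = urllib.request.Request(
        f"{TBA}/event/{event_key}/matches",
        headers={"X-TBA-Auth-Key": auth_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def endgame_for(match, color, slot):
    """Endgame action for one robot slot ('Hang', 'Park', or 'None').

    Assumes 2020-style score_breakdown keys like 'endgameRobot1'.
    """
    breakdown = match["score_breakdown"][color]
    return breakdown.get(f"endgameRobot{slot}", "None")
```

Dumping these per-match into your spreadsheet means the scouts never have to record climbs at all.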
My brief thoughts:
- The pictures are great but once the scout understands the terms, the pictures will just make it harder to fill out the data - especially if they’re using their phones or other small-screen device.
- My gut feeling is that it's going to be difficult for a scout to know if a team scored in the inner goal unless they have the perfect seat at every event for every match. The best solution I have for this is “high goals” and “estimated high goals that were inner”
- Providing a way for scouts to tally up events will help with teleop goal statistics. I don’t know if forms provide an interface that has a number with + and - buttons, but that would be ideal.
- You can probably combine the end-game questions
The cool thing is that scouting, like robots (and life), is iterative.
You can test out your system watching scrimmages and webcasts; it should only take a few matches for your team to find the scouting pain points.
As for the data, if you want - have your scout & strategy team come up with strategy for the next match on the webcast/scrimmage. How did their strategy compare to the actual? What information did they need that wasn’t there? What was unnecessary?*
If you really want to test the whole system, try to make a pick list for your test event(s) - what did you wish you knew? What is noise?**
What you might find out is that you don’t need a whole lot of data. Or that you really want more qualitative data. Or that scouting every team for every match is onerous and it might be better to Super Scout instead (three links). Or that your team likes making graphs and want more data! It’s hard to know until you try.
*This is “in an ideal world”, the odds that my own team does this is slim
** Again, “ideal world”
I didn’t intend to write an essay about this, but hopefully some teams find this useful.
To answer the original question:
- Collect data that you will use
- Collect only as much data as you can without losing accuracy
What data to collect:
Scouting data (generally) serves two purposes:
- Match strat
- Pick lists
Any data that isn’t applicable to one of those two should not be collected.
Any “fancy” data that stretches your scouts thin and potentially loses accuracy on key information should not be collected.
IMO, >90% of teams collect more data than is necessary. Follow the golden rule of scouting: scout within your means. 100% accuracy of one data field is better than 75% accuracy of five. Bad data is worse than useless, it can drive you to wrong conclusions, and to make bad decisions.
So what data is necessary and how do you decide what to cut out?
- For match strat:
- Prioritize data that will affect whether you go for the RP (climbs, total # balls, control panel)
- Then data that will affect whether you go for the win (inner/outer/lower goal)
- Then data that helps with cycle planning (shooting locations, times per shot, auto paths, time to climb, etc.)
- Then data that that takes into account the other alliance (how much are they affected by defense, how easily can they be blocked, etc.)
- Then any other subjective data (driver skill, etc.)
- For pick lists – first picks:
- Very similar to above
- If there are any particular things you cannot do (e.g. you can only climb when the bar isn’t already tilted), make sure you scout for robots that can do that
- For pick list – second picks:
- Usually there are a lot of teams with similar abilities
- Pit scouting data can be very useful here: I generally look for clean/easy to trace and debug wiring, solid bumpers, well tensioned chain/belt in the drivetrain, no mecanum, and a programming language we are familiar with in case we need to coordinate autos
- Subjective match scouting data is also useful, if accurate (driver skill/consistency, ability to avoid fouls when playing defense, etc.)
How to collect the data:
I strongly believe against collecting any data you can get from the field. Things I do like to collect:
- Photo of robot (whole robot, drivetrain/gearboxes, electrical, bumpers)
- Programming language
- Year dependent questions (e.g. for 2020: if they can climb, weight (in case you need to ballast to balance))
Practice match scouting:
- Watching practice matches can sometimes be useful in place of pit scouting (e.g. in 2018, we asked teams if they had an auto, then tracked which ones actually crossed the lines in practice matches)
- Total numbers of game pieces in auto/teleop and endgame action are almost always the most important
- If you have limited resources, auto and endgame actions can usually be pulled from the field API
- The rest of the objective data I mostly detailed out above
- Subjective data is tricky, and unless the same people watch all robots, I think it usually does more harm than good
Pit scouting round 2:
- If making pick lists, I like to talk to teams towards the later end of quals matches (usually when they have ~2 matches left)
- If they have been having issues, try to understand if they can be fixed
- If there are specific things you’d like to see (especially autos) ask at this point
- For second picks, I always try to talk to them about what we would want them to do, so we don’t end up picking a team completely unwilling to play defense (especially important if you’re asking them to add/modify/remove a mechanism)
Match scouting round 2:
- If making pick lists, last day match scouting is usually mostly subjective
- Have a list of robots based on data from the days before, and re-rank based on watching (heavy improvements, significantly better/worse under defense, etc.)
Field API data:
- I love using API data because it’s free – you don’t have to collect it
- It works very well for identifying rare actions (e.g. fuel OPR in 2017 could be used for fuel count extremely accurately)
- It works very well for auto and endgame points, but you need to note things like buddy climbs, climbs from fouls, etc.
- It can work as a sanity check for data (total numbers of balls scored must add up, etc.)
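The sanity-check point above is easy to automate: per alliance, the scouted per-robot ball counts should sum to the total the field reported. A minimal sketch (function name and tolerance parameter are my own):

```python
def sanity_check(scouted_counts, api_total, tolerance=0):
    """True if scouted per-robot counts for one alliance sum to the
    API-reported alliance total, within an optional tolerance."""
    return abs(sum(scouted_counts) - api_total) <= tolerance

# Three scouts said 5, 8, and 3 balls; the API says the alliance scored 16.
print(sanity_check([5, 8, 3], 16))  # True
print(sanity_check([5, 8, 3], 20))  # False
```

Flagging matches that fail this check tells you which scouting entries to re-watch, and over an event it tells you which scouts need help.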
Using the data:
My probably overly complicated way of looking at both match strat and picklists is to look at each alliance’s best, average, and worst case scenarios for a given strat, and pit that against the opposing alliance’s best/avg/worst case scenarios. (Worst case is relative, I usually look at the situations of missed autos/endgame points)
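That best/average/worst framing can be sketched directly: give each team a list of per-match point contributions, then sum the per-team max, mean, and min for the alliance. The numbers here are made up for illustration:

```python
from statistics import mean

def scenario_range(alliance_scores):
    """Best/average/worst alliance total, given each team's list of
    per-match point contributions."""
    return {
        "best": sum(max(s) for s in alliance_scores),
        "avg": sum(mean(s) for s in alliance_scores),
        "worst": sum(min(s) for s in alliance_scores),
    }

us = [[40, 55, 48], [20, 30, 25], [10, 15, 5]]
them = [[35, 45, 50], [25, 25, 30], [20, 10, 15]]
print(scenario_range(us))
print(scenario_range(them))
```

Comparing your worst case against their best case is a quick way to tell whether a strategy is robust or only works when everything goes right.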
For picklists, I consider each possible action of each team above us, how inner picking could affect rankings, and then work through the bracket of every team we will face (repeating for all possible brackets, and sometimes for simulated rankings). This is especially important when picking from alliances 2-3, since you might want to pick a “worse” team in order to make your bracket significantly easier. It can also affect whether you want to accept or decline. Usually, most scenarios converge into 2-3 main ones (and 2-3 picklists).
This can also be done on the last day once rankings are more finalized, but I prefer to have mostly finished lists by that point. IMO, the majority of banners are lost in the picklist stage. We’ve put in the effort to collect the data, and I want it to be used well.
Yeah, the L/C/R field is just a Frog Force thing we’ve had for a while. It just means relative to the goal, since that’s how we communicate with one another. A GOOD THING TO POINT OUT: COMMUNICATION IS KING!
And I agree with your line of thinking. However, I don’t try to sum up a robot in a single value. Take last year: I ranked based on total game pieces, and the tiebreaker was how diverse their starting positions were. But if assigning values to robots is comfortable for you, go crazy.
You can’t use WiFi at the FRC venues. (See the problems in Houston in 2019…)
Thanks for pointing that out
(Unfortunately, I’m past the edit window for the post)