Make a form that includes two slots for your two alliance partners and three slots for your opponents. Put in all the information that is important to the game, such as whether each robot is offensive or defensive, effective or ineffective, etc. Also include a section in each slot that shows the team's autonomous mode and a simple sketch of their robot. Once you're done scouting the teams in your match, give the form to the drive team and have them evaluate it. Having all the information on one sheet is much easier than shuffling five sheets.
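If you ever move the sheet into a spreadsheet or program, it boils down to one record per robot, five per match. Here's a minimal sketch of what that record might look like in Python; the field names are just my guesses at what you'd track, nothing official:

```python
from dataclasses import dataclass

@dataclass
class RobotSlot:
    """One of the five slots on the match sheet (2 partners, 3 opponents)."""
    team_number: int
    is_partner: bool        # True for your two alliance partners
    play_style: str         # e.g. "offensive" or "defensive"
    effective: bool         # rough judgment: effective or ineffective
    autonomous_notes: str   # what the robot does in autonomous
    sketch_file: str = ""   # filename of a photo/sketch of the robot

# One match sheet = 2 partner slots + 3 opponent slots
sheet = [
    RobotSlot(1511, True, "offensive", True, "hangs uber-tube on top peg"),
    # ...fill in the remaining four slots the same way
]
```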
Here’s my scouting sheet. I don’t know if we’ll use it, but I think it’s pretty good.
I also attached my “how to use the sheet” guide because I think it’s necessary in order to understand the sheet’s workings.
Sketching the robot might be a bit of a hassle, since you are in the stands and the robot might not show up until the last second, leaving you little time.
I would rather go to their pit after they finish a match, or before qualifications/eliminations start, and talk to the team.
If they’re a defensive bot, it helps to know the final score because it can be a decent way to gauge how well they did. Plus it only takes like 3 seconds to look at the screen and write down the numbers.
I just want to mention that your setup depends on your scouters as well as your strategy, drivers and robot. For instance, 1511 are scouting gods. Check out their full setup sometime, it’ll blow your mind. But they’ve got a lot of experience in it and can get valuable information out of qualitative answers.
Some teams (mine, for instance) are still building their scouting base and don't have observers anywhere near as experienced. Thus I tend to lean my scouters towards more quantitative metrics to help with comparison. That's not to say that quantitative scouting equates to inexperience, of course. (If you believe that, you need to meet a man named Karthik.)
As far as format goes, my best advice is to try it. Especially if you're not at a Week 1 competition, take your best guess and then do a dry run (with video and/or imagination). Be flexible and take feedback from both the scouters and the drive team. It's another one of those iterative processes.
A possible way to save on ink is to get rid of the filled circles and replace them with either an Excel-like grid or circle outlines.
Also, one idea is to have scouters mark which shape (triangle, circle, or square) goes on each peg, so that you can tell whether a team completed a logo just by looking at the scouting sheet.
A couple of things I would recommend (see the sketch after this list):
- Add match # for comparing notes with team members.
- Drop “Score yes or no,” since you will be recording which pegs are scored.
- Add “dead bot.” This can help determine the reliability of the robot in some cases.
- Instead of auto “yes OR no,” try uber-tube “yes OR no” for better information.
- At the top, you might want to add “Team Name.” Sometimes you won't recognize the numbers but will recognize the names. There are lots of teams at these regionals, usually at least 50, so it won't hurt.
- Replace “yes OR no” areas with checkboxes to save space.
- For “Team color,” you can probably drop the “Red or Blue” and have scouters write it in instead. Add underscores in front of it so they place it to the side, freeing up a little space.
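For what it's worth, if the sheet ever gets typed into a laptop, those suggestions map naturally onto one record per team per match. A rough sketch; the field names are mine, invented to mirror the list above:

```python
from dataclasses import dataclass, field

@dataclass
class MatchEntry:
    """One team's performance in one match, mirroring the paper sheet."""
    match_number: int       # for comparing notes with team members
    team_number: int
    team_name: str          # names can be easier to recognize than numbers
    alliance_color: str     # written in: "Red" or "Blue"
    scored_uber_tube: bool  # checkbox instead of auto "yes OR no"
    dead_bot: bool          # checkbox; flags reliability problems
    pegs: list[str] = field(default_factory=list)  # e.g. ["top circle", "mid triangle"]
```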
Keep experimenting with it to see what suits your liking. Good luck!
I like a lot of the work you are putting out. What's unfortunate is that the field management system this year is pretty sweet, but the Twitter feed they released doesn't seem to take advantage of the information, which would have helped eliminate much of the subjectivity.
For the peg model, how are you going to compensate for pegs you can't see? Many of the larger competitions will force you to sit in areas with blind spots (BAE specifically). Will you dedicate one scouter to each side of the field?
How many teams use laptops for their scouting vs stacks of paper? Both naturally have their pros and cons, and I’m curious about your thoughts for this game.
It also seems one concern is the subjectivity of teams' claims and how individuals perceive a robot. Do you think a rating system could overcome this? If many individuals contributed their scouting to an evaluation of other robots, do you think the results would average out to reasonable data, making it less subjective?
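To make the averaging idea concrete, here's a toy sketch (the numbers are made up) of pooling many scouts' 1-to-10 ratings of the same robot into a single score:

```python
# Hypothetical ratings of one robot from eight independent scouts (scale 1-10).
ratings = [7, 8, 6, 9, 7, 5, 8, 7]

average = sum(ratings) / len(ratings)
print(f"Pooled rating from {len(ratings)} scouts: {average:.1f}")  # 7.1
# With enough independent scouts, individual quirks should partly cancel out,
# though differing personal scales are still a problem (see further down-thread).
```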
I'm very interested in collaborative platforms; it seems that in a lot of cases we do 5x the work for the same payoff we could get by working together. I think scouting is a perfect example, and maybe by working together we would get better data.
I think Patrick's (computerteen) idea is a good one. Definitely a step in the right direction. We see so many teams hand out flyers claiming to have the perfect robot; it sure would be nice to have a way to validate those claims.
You could include on your match scouting sheet approximately how far a human player can throw the tube. Or, instead of distance, try dividing the field into zones: opposing alliance zone, opposing cautionary zone, opposing alliance middle, alliance middle, etc.
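If you go the zone route, the sheet (or a spreadsheet later) only needs a fixed list of zone labels to tally throws against. A tiny sketch; the zone names are lifted loosely from the post above:

```python
from collections import Counter

# Field zones for human-player throws (names approximate, per the post above).
ZONES = [
    "opposing alliance zone",
    "opposing cautionary zone",
    "opposing alliance middle",
    "alliance middle",
]

throws = Counter()                       # tally of where each observed throw landed
throws["opposing alliance middle"] += 1
throws["alliance middle"] += 1
print(throws.most_common())
```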
Regarding the “blind peg” comment: it will be the same as scouting in ’07; you will just have “see-through” netting in your way.
One person inputting info into the computer, six with the sheets. The pros of this setup are that we will have up-to-date scouting info in the pits whenever we take the computer or flash drive down, and the person inputting the info knows as much as the head scout because they are reading every sheet. The cons are that it's open to more human error, and the input person can't slack off.
OK, I see one problem with that… you have about 5 minutes (including field reset) to enter ALL the information into the computer for 6 teams while still trying to watch the game. My team will have 6 paper scouters with 1 laptop, and the data is simply logged at the end of the match from the paper sheets. The person inputting data shouldn't worry about the game; after all, depending on your system, it could take 1 or 2 minutes to log each team (ours will take about 30 seconds).
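The arithmetic is worth spelling out. A quick sketch using the numbers quoted above (all assumed):

```python
# Rough time budget for one data-entry person between matches.
turnaround_s = 5 * 60       # ~5 minutes between matches, including field reset
teams_per_match = 6

for entry_s in (30, 60, 120):   # claimed seconds to log one team's sheet
    total_s = teams_per_match * entry_s
    fits = "fits" if total_s <= turnaround_s else "does NOT fit"
    print(f"{entry_s:>3} s/team -> {total_s // 60} min total ({fits})")
# 30 s/team -> 3 min (fits); 60 s -> 6 min and 120 s -> 12 min (do not fit).
```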
Qualitative data is valuable, but translating opinions into numbers is misleading. Every scout has a vastly different ranking scale and opinion of robot effectiveness, so building statistics on top of those numbers is potentially a very bad idea.
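One standard way to soften that problem (a common statistical trick, not something anyone in this thread proposed) is to normalize each scout's ratings against their own average before pooling, so a harsh grader and a generous grader end up on the same scale. A sketch with made-up data:

```python
from statistics import mean, pstdev

# Each scout's raw 1-10 ratings across several robots (invented numbers).
scout_ratings = {
    "scout_a": {"team_x": 9, "team_y": 8, "team_z": 7},  # generous grader
    "scout_b": {"team_x": 5, "team_y": 3, "team_z": 2},  # harsh grader
}

def z_scores(ratings):
    """Rescale one scout's ratings to mean 0, standard deviation 1."""
    values = list(ratings.values())
    mu, sigma = mean(values), pstdev(values)
    return {team: round((r - mu) / sigma, 2) for team, r in ratings.items()}

# After normalizing, both scouts agree team_x is the strongest pick,
# even though their raw numbers barely overlap.
for name, ratings in scout_ratings.items():
    print(name, z_scores(ratings))
```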