Citrus Circuits 2023 Scouting System White Paper

Team 1678 is proud to present our 2023 Scouting System Whitepaper!

The Scouting System Whitepaper describes both the technical details of our scouting system and the processes we went through in the 2023 season to create it.

The whitepaper contains: an overview of our scouting system and all its components, the development procedures our subteam uses, our review of this season and how we can improve in the future, as well as additional resources for anyone looking to create a scouting system of their own.

All of our scouting whitepapers are available at citruscircuits.org/scouting
The code for our 2013-2023 scouting systems is available at github.com/frc1678

If you have any questions about the whitepaper, our scouting system, how to start your own scouting system, or anything else, please send us an email at [email protected]. Also, if you’d like to give us feedback about the whitepaper, tell us how your team was impacted by it, or share your own system with us, please reach out! We are interested in helping develop the FRC community, opening the power of an electronic scouting system up to other teams, and working with other teams to innovate in FRC scouting. We’d love to hear from you!

The rest of the scouting subteam and I will be responding to questions in this thread for the next couple weeks, but the best way to contact us is through email at [email protected]. Thanks!


Do you have any sort of “change log” from last year’s system/white paper, or can you summarize what, if any, big changes there are? Aside from game-specific data points of course, unless there’s something unique or interesting to share there as well.

Page 59


Page 58 of the PDF has our summary of big changes. But here are some of our major changes:

* Switched back to Grosbeak for our web server to transfer data to our viewer app
* Created a new (local) scouting app for our stand strategists to efficiently collect qualitative notes on every team in each match
* Added comparison bars in the viewer for each field on the team screen
* Created the ability to track every auto path a team ran, along with the scoring rate for each piece and how many matches they ran that specific path
* Created a new viewer screen under the team page that visually displays each auto the team ran, with the ability to scroll across different autos
* Tested the viability of SPR over the off-season and the competition season
* Created a new playoff scouting mode that collects alliance-level information for quick, local visualization, so we can quickly decide how to play against whatever alliance we may come across in later playoff rounds

Those are all the ones I can remember; there’s probably something I forgot. My personal favorite new feature is the auto path tracking. It became invaluable at champs for determining our picks and match strategies.
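As a rough illustration of what that tracking involves, here is a minimal sketch of grouping identical auto paths per team and computing each path’s usage count and scoring rate. The data layout and field names here are hypothetical, not 1678’s actual schema:

```python
from collections import defaultdict

def aggregate_auto_paths(scouted_matches):
    """Group identical auto paths per team and count how often each was run.

    scouted_matches: iterable of dicts with (hypothetical) keys:
      "team", "auto_actions" (ordered list of action strings),
      "auto_pieces_scored", "auto_pieces_attempted".
    Returns {team: [{"path", "matches_run", "scoring_rate"}, ...]},
    most-frequently-run path first.
    """
    paths = defaultdict(lambda: defaultdict(
        lambda: {"runs": 0, "scored": 0, "attempted": 0}))
    for m in scouted_matches:
        # Two runs count as the same path only if every action matches in order.
        entry = paths[m["team"]][tuple(m["auto_actions"])]
        entry["runs"] += 1
        entry["scored"] += m["auto_pieces_scored"]
        entry["attempted"] += m["auto_pieces_attempted"]

    result = {}
    for team, team_paths in paths.items():
        summaries = []
        for path, e in team_paths.items():
            rate = e["scored"] / e["attempted"] if e["attempted"] else 0.0
            summaries.append(
                {"path": path, "matches_run": e["runs"], "scoring_rate": rate})
        summaries.sort(key=lambda s: s["matches_run"], reverse=True)
        result[team] = summaries
    return result
```

A viewer screen can then render each path in order of frequency, which is roughly the picture described above.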


Thanks! Two questions:
In regards to the stand strategists, are there any specific things you look for to note down for match planning and making picklists? Do you do any training to teach what to look for or do you just try to generally rely on game knowledge to “know it when you see it” in regards to important factors?

I know in your whitepaper you said that you considered factoring SPR into your consolidated team stats, but didn’t due to processing time. Do you have any feeling as to how accurate this system was at actually identifying the errors in scouting data? I imagine with both TBA data and doubly redundant scouting data it’s pretty accurate. Also, how much do you think implementing this would have improved the accuracy of your statistics? Did you have any specific issues with data accuracy in the past that led you to develop this system, or is it just an effort to stay ahead of the curve and keep your data as accurate as possible?

In regard to your first question about stand strategists: since they are some of our most senior strategy members, a lot of it is “know it when you see it.” That said, going into a competition they will generally be looking for a few specific things that the rest of our data misses or doesn’t collect, for example: a robot spending a very large amount of time in the loading zone; a robot making only half cycles instead of full cycles, which could inflate its numbers; robots actively running into or blocking teammates; robots receiving large numbers of fouls; etc. While some of these may also affect subjective scouts’ rankings of the robots, it is good to know the specific reasons why and to have a catch-all for anything our system misses.

For your second question, we did not have any specific instances where we felt our data was inaccurate. While incorporating SPR into our consolidated team stats would be nice, we did not feel it was a strict priority, which is why we did not end up doing it. We are always looking for ways to improve our data accuracy, and this was just one that seemed viable. While it might have improved our data, at the time we did not think the improvement would be large enough to justify the time investment.


As for your first question:
Our strategists come from our strategy subteam, which goes through specific training on analyzing games and matches: past games over multiple off-seasons, and that year’s game during the season. There may be some specific qualitative data that we want for a given match, so we will have that on the match page in the strategist app. Normally this includes defense and anything else we think is specific to that year’s game. This year we thought scoring in the co-op zone and dropped cones were going to be important, but we later removed them as they didn’t end up mattering all that much.

As for consolidation:
Our current consolidation method worked pretty well at filtering out inconsistencies, but it would have issues if multiple scouts were off by a large margin in the same direction (e.g., both over- or under-counted by x).
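That failure mode can be shown with a toy median-style consolidation (purely an illustration; 1678’s actual consolidation logic is more involved), assuming three redundant scouts per robot:

```python
import statistics

def consolidate(values):
    # With three redundant scouts, the median discards a single outlier.
    return statistics.median(values)

# One scout misreads a count: the median recovers the true value of 5.
assert consolidate([5, 5, 9]) == 5

# But if two of three scouts are off in the same direction, the median
# follows them, and the consolidated value is wrong.
assert consolidate([5, 8, 9]) == 8
```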

The SPR consolidation method was more consistent overall, but when inconsistencies did slip through, the resulting outliers were much larger. In practice this really only happened when a scout accidentally scouted the wrong robot.

Another issue with using SPR this year specifically is that we compare totals to totals against TBA. Since we compare points, this year’s plethora of scoring options made that a lot more difficult.

I think we’ll most likely move to using SPR consolidation next year or at a minimum use it to highlight scouts to automatically filter out in a given match.
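As a sketch of the idea (not 1678’s actual SPR implementation; the field names and the even-split attribution below are simplifying assumptions): compare the alliance point total implied by the scouts’ data against the official score, then spread each match’s residual across that alliance’s scouts. Because scout pairings rotate between matches, a consistently inaccurate scout accumulates more attributed error over an event than their partners do, which is what makes the filtering suggestion above possible.

```python
from collections import defaultdict

def scout_precision(matches):
    """Toy scout-error metric. For each alliance in each match, compare
    the point total implied by summing the scouts' data against the
    official score, then attribute the residual evenly to its scouts.

    matches: iterable of dicts with (hypothetical) keys:
      "official_score": the TBA score for one alliance in one match,
      "scouts": {scout_name: points implied by that scout's robot data}
    Returns {scout: average attributed error}; lower is more precise.
    """
    err = defaultdict(float)
    n = defaultdict(int)
    for m in matches:
        residual = abs(sum(m["scouts"].values()) - m["official_score"])
        share = residual / len(m["scouts"])
        for scout in m["scouts"]:
            err[scout] += share
            n[scout] += 1
    return {s: err[s] / n[s] for s in err}
```

A threshold on these averages could then flag a scout whose data to drop (or down-weight) in a given match, along the lines suggested above.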

Hope that helps!
-Austin
