The Scouting System Whitepaper is a document that describes both the technical details of our scouting system and the processes we went through in the 2019-2020 season to create it. While our season was cut short after our first competition, we still wanted to release details about our system as they were when development paused.
The whitepaper contains an overview of our scouting system and all its components, the development procedures our subteam uses, our review of this season and how we can improve in the future, and additional resources for anyone looking to create a scouting system of their own.
If you're looking to give feedback on the whitepaper this year, please fill out the whitepaper feedback form found at https://forms.gle/MJnydy3REtAxQpTo8
If you have any questions about the whitepaper, our scouting system, how to start your own scouting system, or anything else, please send us an email at softwarescouting@citruscircuits.org. Also, if you'd like to give us deeper feedback on the whitepaper than the form above allows, tell us how your team was impacted by the whitepaper, or share your own system with us, please send us an email. We are interested in helping to develop the FRC community, opening up the power of an electronic scouting system to other teams, and working with other teams to make innovations in FRC scouting. We'd love to hear from you!
The rest of the scouting subteam and I will be trying to respond to questions in this thread for the next couple weeks, but the best way to contact us is through email at softwarescouting@citruscircuits.org. Thanks!
Inner goal regression is sick. How often were you running the inner goal regression? After every match? (I never took linear algebra so my understanding of linear regression is… poor)
While reading this, I kept thinking, "Can I shadow their scouting team for an event?"
As always, this is an incredibly impressive writeup of 1678's best-in-the-world scouting system. I was especially interested in seeing how you were scouting inner vs outer port goals, as that was a bit of a wrench in everyone's gears this year.
It's a little hard to tell, but I think we arrived at a similar (but shallower) solution to the problem (using cOPRs to create an "inner goal percentage" and then multiplying by the scouted upper goals for estimated inner goal numbers). Are you taking a second step beyond that and using that "inner port regression" to do some kind of per-match tweaking or adjustment?
The best part of this whitepaper is the data validation/consolidation + accuracy analysis. More teams need to be doing this, and verifying their data against easily accessible numbers from TBA these days. Great work Citrus!
Not sure if you still have this question, but for those interested, we ran the inner goal regression after every match. Running the regression added negligible time compared to pulling the match data from TBA, so not running it when we could would just mean working with older data. The only time we did not run the regression was early in competition, when there was little or no data for each team and the results would not have been useful until a few matches in.
That's basically what we did, but to calculate the average number of inner goals scored, we clipped the proportion to be between 0 and 1 so that teams would not have negative inner goals, which would unfairly lower their pickability. We also only used the estimated inner goals per game for pickability and showed the regression results instead of storing the estimated inner goals. This was due to concerns about the accuracy of the regression, because we didn't have much data to test it on before competition.
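If it helps anyone picture the mechanics, here is a heavily simplified sketch of that kind of alliance-level least-squares estimate. This is not our actual code; the data layout and names are just illustrative:

```python
import numpy as np

def estimate_inner_proportions(matches, team_index):
    """Least-squares estimate of each team's inner-port proportion.

    matches: list of (alliance_teams, scouted_upper_by_team, tba_inner_count),
             where scouted_upper_by_team maps team -> scouted balls into the
             upper port (inner + outer combined) and tba_inner_count is the
             alliance's inner-port ball count reported by TBA.
    team_index: dict mapping team number -> column in the design matrix.
    """
    A = np.zeros((len(matches), len(team_index)))
    b = np.zeros(len(matches))
    for row, (teams, scouted, inner) in enumerate(matches):
        for team in teams:
            A[row, team_index[team]] = scouted.get(team, 0)
        b[row] = inner
    # Solve A x ~= b; x[i] is team i's estimated share of upper-port balls
    # that went into the inner port.
    proportions, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Clip to [0, 1] so no team ends up with negative (or >100%) inner goals.
    return np.clip(proportions, 0.0, 1.0)
```

A team's estimated inner goals for a match is then just its clipped proportion times its scouted upper-goal count.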
Great write up. Our team, Ignite (6829), and Otto (1746) developed a scouting alliance in Georgia. We had 7 teams in a closed beta. We patterned the system off your 2019 whitepaper, so we had 4+ scouts per bot, but spread across 4 full scouting teams and 3 partial ones. Teams used a team laptop (Team Server) to read QR codes off of the tablets. If the team laptops were internet connected, we had a REST API to sync data to the cloud server; otherwise they would sync when they left the comp. The voting logic would run on the cloud server, and team servers would pull down "verified" data.
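To give a rough picture of the sync path, the team-server side boils down to something like this (a simplified sketch; the endpoint names and payload shape are illustrative, not our actual routes):

```python
import requests

CLOUD_API = "https://peach-alliance.example.org/api"  # illustrative URL

def push_observations(observations: list) -> None:
    """Upload locally scanned (QR) match observations to the cloud server."""
    resp = requests.post(f"{CLOUD_API}/observations", json=observations, timeout=10)
    resp.raise_for_status()

def pull_verified() -> list:
    """Download data that has already passed the cloud server's voting logic."""
    resp = requests.get(f"{CLOUD_API}/verified", timeout=10)
    resp.raise_for_status()
    return resp.json()
```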
On the tablet, we had a field view: tap a location to score an action (intake, shoot, climb). For low goal scoring, we could auto-populate the number of power cells since we counted intakes; this way scouts would only count "drops" that they could see.
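The back-fill is basically bookkeeping on the tracked intakes; roughly (with illustrative names, and this is just one way to read it):

```python
def low_goal_cells(intakes: int, high_goal_shots: int, drops: int) -> int:
    """Anything intaken that was neither shot into the high goal nor visibly
    dropped gets counted as a low-goal cell."""
    return max(intakes - high_goal_shots - drops, 0)
```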
We also tracked "High Goal" and didn't try to worry about the inner port. I think we might look at adding the least squares approach to help out with that.
We're looking at opening up the alliance, called The Peach Alliance, to more teams next year. I also added the ability for teams to use a Google Form, and the cloud server would auto-import their data. We had 3 teams on tablets and 4 teams on Google Forms. Google Form teams would scout on paper and then a single person would enter the data into the form when they had connectivity.
When you switched off of Firestore/Firebase and moved to MongoDB, were there specific technical features that you wanted, or was it just something widely used that you felt comfortable using?
My first prototype used Firestore, but I was having sync issues from time to time. I moved to MySQL, and QR codes ensure the data sync can't be impacted by connectivity. Since I knew Python and MySQL, I went with them so I could get things done fast and be ready for week 1.
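The QR step itself is just a compact JSON record rendered as an image that the team laptop scans and inserts into MySQL. A minimal sketch using the qrcode package (the record fields are illustrative):

```python
import json
import qrcode  # pip install qrcode[pil]

def scout_record_to_qr(record: dict, path: str) -> None:
    """Serialize one scout's match data and render it as a QR image,
    so the transfer to the team laptop needs zero connectivity."""
    payload = json.dumps(record, separators=(",", ":"))  # keep the payload small
    qrcode.make(payload).save(path)

scout_record_to_qr(
    {"match": 12, "team": 6829, "upper": 9, "lower": 2, "climb": True},
    "match12_6829.png",
)
```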
First of all, it's always super cool to see that other teams are inspired by our work, thanks!
During the 2018 and 2019 seasons, when we were using Firebase to store all of our data, we had concerns such as download limits on the Firebase Realtime Database and some minor data loss, which led us to reconsider in the 2019-2020 offseason. However, our main reason for switching to MongoDB was so that we could store all of our data locally on the server laptop if we were to lose internet connection. The remedy for this in the 2019 season was to store data locally in JSON files on the server computer as well as upload it to Firebase for the other apps to view, but we thought that just using a database that could exist locally would be a simpler and more reliable solution.
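Concretely, that just means the server code talks to a mongod running on the laptop itself, so reads and writes keep working with no internet at the event. A minimal sketch of the idea (the collection and field names here are illustrative, not our schema):

```python
from pymongo import MongoClient

# Local mongod on the server laptop: no internet connection required.
client = MongoClient("mongodb://localhost:27017")
db = client["scouting"]

def save_team_in_match(datapoint: dict) -> None:
    """Store one calculated team-in-match document locally."""
    db.team_in_match.insert_one(datapoint)

def team_data(team_number: int) -> list:
    """Read everything stored for a team, straight from the local database."""
    return list(db.team_in_match.find({"team_number": team_number}))
```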
One surprise benefit of the move to a local plus cloud database was some interesting flexibility. The REST API allows teams with their own scouting app to join the alliance if they share their data. I view data in two buckets: common data and custom data. Common data is any stat the FMS tracks / TBA has. Teams might also need something that's important only to them; I refer to this as custom data. The cloud server only handles common data.
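A sketch of what that two-bucket split looks like on a single observation (field names are illustrative):

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class MatchObservation:
    # Common data: stats the FMS tracks and TBA exposes, so every alliance
    # member reports the same fields and the cloud server can consolidate them.
    team: int
    match: int
    upper_goals: int
    lower_goals: int
    climbed: bool
    # Custom data: anything a team cares about beyond FMS/TBA (timestamps,
    # shot locations, ...). It stays on that team's server and never goes up.
    custom: Dict[str, Any] = field(default_factory=dict)
```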
We provide a tablet app if teams have tablets. One of the benefits of using our tablet app is that we collect timestamps and location data; this custom data is only stored on that team's server. We didn't do any data visualization this past year. I'm looking to add some basic stats on the key scoring data, but I'm not planning on tapping into all the custom data. The goal of the scouting alliance is to get everyone accurate data, and enough data for alliance selection, but there's even better data on their laptop if they put in some effort. I wanted the system to be a floor in scouting and not the ceiling. I hoped the teams could use their time to develop ways of visualizing the data that fit their needs instead of spending time trying to get a scouting system working.