App to Analyze Statistics for Scouting

Hey guys! I’m our scouting captain for team 4776 out of Howell, MI. We typically use an app that we develop each year for scouting, but I had an idea and I want to know if it’s at all possible and, if it is, how. Is there a way to create an app where each team number is like a “folder”, so that each team’s information is kept separate and contains the information for each match? When you scout, whatever device you’re using would connect to the folder and upload the information, and each “activity” (like climbing or shooting fuel) would be weighted for a certain number of points. You could then use the information provided, punch in which teams are on each alliance, and find out the statistical chance of winning based on each team’s past points. I’m not very good with computers at all, so if anyone could just point me in the right direction I would greatly appreciate it. Thank you!

What you’re describing is a very advanced scouting platform.

My team used AppSheet last year. It provides an app interface on the phone that sends data to a spreadsheet in Google Sheets. From there you can do all sorts of math and apply functions. You can even have it display graphs of the data that has been collected.

It also lets students load photos of robots and saves them in a folder.

…And AppSheet costs money to use. I think it was $10/app per month for unlimited users.

The approach with the lowest barrier to entry is probably a paper sheet to Excel/Google Sheets scouting system. We accomplish all of that functionality in Microsoft Excel. The results of each paper scouting sheet are manually entered into the spreadsheet and then we calculate various analytics on the raw match data.
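For anyone who wants a starting point beyond Excel formulas, here is a rough sketch of that raw-match-data-to-analytics step in Python/pandas. The file name and column names (team, match, auto_points, teleop_points, climbed) are just placeholders for whatever your paper sheet actually tracks:

```python
# Minimal sketch of the "raw match data -> per-team analytics" step.
# "match_data.csv" and the column names are hypothetical placeholders.
import pandas as pd

raw = pd.read_csv("match_data.csv")  # one row per team per match, entered by hand

per_team = (
    raw.groupby("team")
       .agg(matches=("match", "count"),
            avg_auto=("auto_points", "mean"),
            avg_teleop=("teleop_points", "mean"),
            climb_rate=("climbed", "mean"))
       .sort_values("avg_teleop", ascending=False)
)
print(per_team.to_string())
```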

If anyone is interested in seeing our scouting spreadsheet, feel free to say here/PM me and I’ll send it your way.

Next up from Excel is going to be Tableau (they’ve given FRC teams licenses the past few years) or (what I’m currently playing with) Power BI.

But there’s a leap from “taking data” to creating data usable for predictive models and then another leap to actually creating those models. It’s fine to create pretty charts and graphs, but if they can’t provide actionable insights into your next match it’s a waste of time.

I did the basic functionality for what you’re describing with Excel & macros back in 2009 (at least a win prediction, if not a win percentage prediction).

These days, the easiest way to pull it off is probably some sort of system using Google Sheets/Forms.
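If you do go the Google Sheets/Forms route and want to pull the responses into your own analysis code, one option is the third-party gspread library. This is only a sketch; the spreadsheet name, the “Team Number” column, and the service-account credential file are placeholders for your own setup:

```python
# Sketch of pulling Google Form responses (which land in a Google Sheet) into Python.
# Requires gspread and a service-account credential the spreadsheet is shared with.
import gspread
import pandas as pd

gc = gspread.service_account(filename="service_account.json")  # hypothetical credential file
ws = gc.open("Scouting Responses").sheet1                      # hypothetical sheet name

# get_all_records() returns one dict per response row, keyed by the form question text.
df = pd.DataFrame(ws.get_all_records())
print(df.groupby("Team Number").mean(numeric_only=True))
```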

This is definitely possible, and it’s something we on 2067 have been doing since about 2015. A quick warning: a match predictor can be far less useful in defense-intensive games. If you want to see an implementation of this tool, I would definitely recommend looking at 238/2067’s PiScout project in this thread:
https://www.chiefdelphi.com/forums/showthread.php?threadid=160087
If you look at the GitHub, gamespecific.py and server.py contain implementations of match prediction and some further uses of it that I would highly recommend (we often used match prediction to get a rough analysis of final rankings).
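For anyone who wants the flavor of it without digging through the repo, the sketch below is not PiScout’s actual code, just a generic illustration of one basic approach to a match predictor: sum each alliance’s average past contributions, pool their variances, and turn the difference into a rough win probability with a normal approximation.

```python
# Generic match-prediction sketch (not PiScout's implementation).
# Each team's history is a list of hypothetical past point contributions.
import math
import statistics

def win_probability(red_histories, blue_histories):
    """Probability that the red alliance outscores blue, via a normal approximation."""
    def alliance_stats(histories):
        mean = sum(statistics.mean(h) for h in histories)
        var = sum(statistics.variance(h) if len(h) > 1 else 0.0 for h in histories)
        return mean, var

    red_mean, red_var = alliance_stats(red_histories)
    blue_mean, blue_var = alliance_stats(blue_histories)
    spread = math.sqrt(red_var + blue_var) or 1.0   # avoid dividing by zero
    z = (red_mean - blue_mean) / spread
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF via erf

# Made-up per-match point totals for six teams
red = [[45, 50, 60], [30, 35, 40], [55, 52, 58]]
blue = [[48, 47, 49], [25, 28, 30], [60, 65, 70]]
print(f"Red win probability: {win_probability(red, blue):.0%}")
```

As noted further down the thread, summed individual contributions usually overestimate the real alliance score, so treat a number like this as a rough ordering tool rather than a forecast.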

1678 has had a match prediction function in its Scout Viewer for 2016 and 2017, but this year’s functionality was disrupted by the weight and randomness of climbing. It was quite accurate in 2016.

There is nothing particularly difficult about what you are describing here in terms of framework. It is simply an object-oriented structure. How you choose to implement a system such as this is up to you. Most databases support such a relationship, or you could take a stab at it on your own in Java, Python, C#, etc.

Let’s use 254 as an example.
In the simplest implementation I can think of, you would create the following (a rough code sketch follows the note below):

  • “MatchAppearance” objects - contain a variety of variables related to the number of times a team completed specific tasks & related info. Also includes data such as match number, alliance color & driver station, other teams on the field, event code, & year.
  • An input script to take in info during a match (or transcribed from paper sheets). It constructs a new MatchAppearance object, adds data as it is entered, checks whether there is a folder for team 254 in the parent directory (just checks if the path exists), creates the directory if not, & dumps the MatchAppearance object into the folder as a serialized object.
  • A parser to go through team folders on command and calculate descriptive statistics for desired categories. It contains an accumulator function for reconstructing the serialized MatchAppearance objects & pulling the data from them. Once that has been done, you can deconstruct all the MatchAppearance objects from 254’s folder, run your statistics functions on the data you extracted, & save the output.

*Note: you will need a way to sync multiple directory trees. WildRank V1’s framework handled things similarly to what’s described above, except it had match folders and put a team instance, in the form of a .json file, into the folder for each match that team played in.
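Here is a rough Python sketch of the folder-per-team layout described above, with hypothetical field names. It uses .json files rather than pickled objects so the data stays human-readable (similar to WildRank V1’s .json approach, just organized by team instead of by match):

```python
# Sketch of the team-folder structure: one directory per team, one JSON file per match.
import json
import statistics
from dataclasses import dataclass, asdict
from pathlib import Path

DATA_DIR = Path("scouting_data")

@dataclass
class MatchAppearance:
    team: int
    match_number: int
    alliance: str          # "red" or "blue"
    driver_station: int
    event_code: str
    year: int
    tasks_completed: dict  # hypothetical, e.g. {"climb": 1, "fuel_scored": 27}

def save_appearance(app: MatchAppearance) -> None:
    # Check for (and create if missing) the team's folder, then dump this match into it.
    team_dir = DATA_DIR / str(app.team)
    team_dir.mkdir(parents=True, exist_ok=True)
    out_file = team_dir / f"{app.event_code}_{app.match_number}.json"
    out_file.write_text(json.dumps(asdict(app)))

def team_statistics(team: int, task: str) -> dict:
    # The "parser": reload every MatchAppearance in the team's folder and
    # compute descriptive statistics for one task category.
    values = []
    for path in (DATA_DIR / str(team)).glob("*.json"):
        record = json.loads(path.read_text())
        values.append(record["tasks_completed"].get(task, 0))
    return {"matches": len(values),
            "mean": statistics.mean(values) if values else 0.0,
            "max": max(values, default=0)}

save_appearance(MatchAppearance(254, 12, "red", 1, "casj", 2017,
                                {"climb": 1, "fuel_scored": 27}))
print(team_statistics(254, "fuel_scored"))
```

Syncing is still the hard part: each scouting device writes its own copy of this tree, and you would need to merge the directories (USB transfer, shared drive, etc.) before running the parser.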

This has been touched on by others already, so I won’t go too in depth. Basically, there are so many other variables at play that, other than in a few select years (such as 2015), it is difficult to calculate individual robot scores. Even when you do implement it, you will often find that an alliance of robots that perform match actions worth 130, 70, & 100 points respectively will not see a match score of 300. It will probably be something between 200 & 250. The same thing happens when you add up all of an alliance’s OPR scores. There just isn’t a magic model to handle all the variables inherent in an FRC game. You are better off using community-standard statistics that are ordinal in nature, such as CCWM & OPR, and using them as just that: ordinal data**.

**I am sure someone will try to make an argument that CCWM & OPR are ratio & not ordinal. Show me a perfect model that predicts FRC scores with complete, 100% accuracy given a training set of <12 matches, and I will change my tune.

***Furthermore, the bracketed scoring of last year was a PITA.
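For anyone curious what computing OPR actually involves, it is conventionally a least-squares fit of per-team contributions to observed alliance scores. Here is a hedged sketch with made-up match data (a real event gives each team roughly a dozen alliance rows, not the four total shown here):

```python
# OPR sketch: solve A x ~= b in the least-squares sense, where each row of A marks
# the three teams on an alliance and b holds that alliance's score. Data is made up.
import numpy as np

# Each entry: (teams on the alliance, that alliance's score)
alliances = [
    ([254, 1114, 2056], 210),
    ([118, 148, 217],   185),
    ([254, 118, 2056],  205),
    ([1114, 148, 217],  190),
]

teams = sorted({t for alliance, _ in alliances for t in alliance})
index = {t: i for i, t in enumerate(teams)}

A = np.zeros((len(alliances), len(teams)))
b = np.zeros(len(alliances))
for row, (alliance, score) in enumerate(alliances):
    for t in alliance:
        A[row, index[t]] = 1.0
    b[row] = score

opr, *_ = np.linalg.lstsq(A, b, rcond=None)
for t in teams:
    print(f"{t}: {opr[index[t]]:.1f}")
```

CCWM is the same fit with the winning margin in place of the alliance score, which is part of why both are best read as rankings rather than point predictions.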

We only worry about our upcoming partners and opposing alliance members (foes), and we take detailed scouting notes until we play with or against them; that sometimes means we watch a team 9 times before they are on the field with us. Any other “performers” to watch later are noted each match.

This cuts the “noise” down to actionable game-by-game insights. By the end of day 1 we have a consensus list of about 20 performers we noted and agreed upon. Then we verify against the published day 1 stats that night to make sure our list correlates, and we look for any numerical outliers we missed, usually adding 0-2 more teams. Then we verify again the next morning to build a list 28 deep by selection time, which our lead scout manages.

The main goal of our scouting is to have deep insight into who we play with on day 1 and day 2, to hopefully enhance our alliance gameplay and rank high enough. Then we build a list of the performant teams we would be looking at as potential partners in eliminations, whether or not we actually ranked high enough to pick. It does not matter; it’s about knowing who you play well with at all times, in case you need that at some point on day 2. Risers are also added on day 2 after verifying why they rose.

Another day 2 task is to do the same deep, individualized scouting on all alliance captain possibilities (top 16 or so), looking for tendencies and strategies to slow down what they like to do. By watching the matches we usually know what teams can do by the end and what the score is likely to be. 40-60 (10 games) is not a good sample size… that is why we prefer notes on whatever metrics we track that year. Trying to limit what a scout needs to evaluate in-game tends to generate better data on what we want to see. Each scout builds their top 20 list on day 1… then we all hash it out to get ready for day 2. Excel, notepads, pens, and highlighters.