What would a Universal Scouting system look like?

From the perspective of somebody who’s watched our team’s scouting system go from one guy taking notes in OneNote to a full-fledged Excel spreadsheet with quantitative data for each robot for each match:

Although I see some of the benefits of a universal scouting system, I think the greatest problem that you’d run up against implementing it is this:

Many teams already have their own system that they have used for many years successfully. Software development, finding servers, etc. can conceivably be overcome if you sink enough resources into solving those problems. But convincing teams to switch over is going to prove far more difficult.

Some teams will be proud of the system that they built from scratch and won’t want to switch over to something they didn’t make. Some teams will be skeptical and won’t want to use something that they’re not familiar with. Some teams just won’t know about it.

TL;DR: the hardest part about implementing a “universal” scouting system is making it universal.

I think the biggest thing that would spur a more universal scouting system would be FIRST recording and publishing individual robot scouting data, similar to the Climb and Initiation Line data from the past several years. From there, groups like TBA could create a really slick interface to display the data. If the basic scoring data were available, the scouting floor would instantly be raised, freeing up the top teams to focus on advanced metrics (time spent at the loading station, defensive metrics) and qualitative notes.
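For anyone who wants to experiment with what FIRST already publishes per robot, here’s a minimal sketch that pulls those fields through The Blue Alliance API v3. The endpoint and auth header are TBA’s real ones, but the score_breakdown keys shown are the 2020 names and change every season, so treat those as an assumption:

```python
# Sketch: harvest the per-robot fields FIRST already publishes (init
# line, endgame) from The Blue Alliance API v3.
import requests

TBA = "https://www.thebluealliance.com/api/v3"
HEADERS = {"X-TBA-Auth-Key": "YOUR_TBA_KEY"}  # placeholder key

def robot_endgames(event_key: str) -> dict[str, list[str]]:
    """Map team_key -> published endgame results at one event."""
    matches = requests.get(f"{TBA}/event/{event_key}/matches",
                           headers=HEADERS).json()
    results: dict[str, list[str]] = {}
    for match in matches:
        for color in ("red", "blue"):
            breakdown = (match.get("score_breakdown") or {}).get(color, {})
            for slot, team in enumerate(
                    match["alliances"][color]["team_keys"], start=1):
                # 2020 field names, e.g. "endgameRobot1" == "Hang"
                results.setdefault(team, []).append(
                    breakdown.get(f"endgameRobot{slot}", "None"))
    return results
```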

But it’s almost impossible for FIRST to record where game pieces go, and RFID is unviable.


Using the “same as previous” methodology for writing specs:

  1. Make it like TheBlueAlliance.

–> Data comes only from known and trusted sources.
–> Nicely sortable by event, team, year, etc.
–> Data is purely factual.
–> Teams encouraged & expected to use APIs to download data as needed, and derive their own bespoke useful metrics.

Tutorials (especially for that last one) welcomed. But, the system should focus on raw data storage, and leave the interpretation to teams.
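To illustrate that last arrow, here’s roughly what the “download the raw data, derive your own metric” workflow could look like against the TBA API. The metric (a team’s mean alliance score in quals) is deliberately naive; it’s just the kind of bespoke stat a team might start from:

```python
# Sketch: derive a bespoke metric from raw TBA match data.
import requests
from collections import defaultdict
from statistics import mean

TBA = "https://www.thebluealliance.com/api/v3"
HEADERS = {"X-TBA-Auth-Key": "YOUR_TBA_KEY"}  # placeholder key

def mean_alliance_score(event_key: str) -> dict[str, float]:
    """Map team_key -> mean score of alliances that team played on."""
    matches = requests.get(f"{TBA}/event/{event_key}/matches",
                           headers=HEADERS).json()
    scores: dict[str, list[int]] = defaultdict(list)
    for m in matches:
        if m["comp_level"] != "qm":  # qualification matches only
            continue
        for color in ("red", "blue"):
            for team in m["alliances"][color]["team_keys"]:
                scores[team].append(m["alliances"][color]["score"])
    return {team: mean(s) for team, s in scores.items()}
```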

Also… as was mentioned in the other thread… trying to make decisions based on pure quantitative data is gonna miss critical things. There’s inherent value to watching matches, talking to teams, and getting to know the human beings behind the machines. So, even in a perfect environment… quantitative-only will never be “Universal”.

I’m making some assumptions and logical leaps… but I think, taken to an extreme, a universal system for describing the qualitative aspects would eventually amount to ranking people… which is definitely not something I’m about to get involved in.

There are obviously challenges with FIRST recording who is scoring, but those are the same challenges high school freshmen have been dealing with for well over a decade. I’m not sure if it is worth it, but having additional scouting volunteer positions could work.

Will the data be perfect? Probably not, depending on the game, but I think that may be OK. The NFL has been treating “Tackles” as official stats when they are anything but perfect. Check out this article about how flawed the NFL’s tackle data is. The challenge of two robots shooting at the same time is the same as the challenge of an RB running at the goal line and getting tackled in a pile of players.

If there were to be a Universal Scouting System I assume that it would have to be fully automatic, rock solid dependable, and would capture practically all quantitative data of each match.

It would capture, on a per-robot basis with accompanying time stamps for everything (one possible record shape is sketched after this list):

  • Game objectives completed (balls scored, wheel rotations, etc)
  • Game objectives attempted (balls missed, failed climbs, etc included)
  • Position data
  • Fouls (including what type of foul)
  • Robot status (A-OK, disconnected, e-stopped, etc)
  • Human player actions (balls delivered to the field, pool noodles thrown, etc)
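To make that concrete, here is one hypothetical record shape for those timestamped per-robot events. Every field name is made up for illustration; nothing like this is published today:

```python
# Hypothetical schema for automatic per-robot capture, one row per event.
from dataclasses import dataclass

@dataclass
class RobotEvent:
    match_key: str          # e.g. "2020wasno_qm12"
    team_key: str           # e.g. "frc1234"
    t: float                # seconds since match start
    kind: str               # "score", "attempt", "foul", "status", "human"
    detail: str             # e.g. "ball_high_goal" or "estopped"
    x: float | None = None  # field position, if position data is enabled
    y: float | None = None
```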

I’m sure I’m missing something, but I think that’s all the quantitative information I can think of wanting. Teams would need to parse and interpret the data on their own, though I would be very surprised if there weren’t some great teams that very quickly distributed some really A++ programs.
Teams would be on their own for any qualitative observations they want.

As for how to distribute the data to teams during an event, I think it makes the most sense to have a station in/near the pits where a scout could go to refresh their data, since the Event does have access to the internet. Distribution could be a phone/computer app that listens for an updated database via Bluetooth, a Faraday cage with actual WiFi that only connects to the FIRST website, or something as simple as a set of USB drives being constantly updated with the latest match data. Something wireless would probably be preferred, since assuming 5 minutes per match and 60 teams, that’s a scout arriving every 5 seconds.
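The arithmetic behind that last sentence, using the assumptions stated above (60 teams, one scout per team, one sync per 5-minute match cycle):

```python
# Back-of-envelope rate at the hypothetical sync station.
teams, cycle_s = 60, 5 * 60
print(f"one scout every {cycle_s / teams:.0f} s")  # -> one every 5 s
```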

The hardest part, I believe, would be the automatic data collection, though I do believe we are getting closer every year. We could have volunteers (instead of volunteering for the glasses booth, you volunteer for data collection), but I would question their accuracy and training. Quizzes beforehand?

The good news is that it would give some interesting shout-casting statistics for announcers between/during matches.

There are three big issues here that cause me to hesitate about adopting a potential universal scouting system for my team:

  1. The reliability of the system and the ability to troubleshoot it. As a professional software engineer, I could not be comfortable with it without thorough, thoughtful testing at the unit, integration, and system levels. If I am adopting a system that I would not have much ability to troubleshoot or modify on the fly, I would need to be fully confident that it will not fail at the competition.

  2. Data accuracy. With my own students, I know that I can generally trust the students we have collecting data, because I know they take it seriously and I know I’ve trained them to do a good job. As a general rule, I avoid relying on data collected by students from other teams because I don’t have those assurances (in other words, I don’t know if they’re forcing random disinterested freshmen with no practice to do it). I think this can be mitigated by doing 1678-style 3+ scouts per robot and taking the median data point (a minimal sketch of that merge step is at the end of this post), but this requires a lot of manpower to do right.

  3. The choice of data points to collect. If a team doesn’t feel a data point that is collected as part of the standard is useful to them, I suspect they would not collect reliable data for it. On the flip side of the coin, if I want to collect additional data on top of what the collective agrees on, it would be cumbersome for me to run another system on top of this. I’m not sure what the solution would be here.

Those are my thoughts. I think having a one-size-fits-all standard scouting system is going to be challenging, especially coming up with something that keeps picky people like me happy while staying easy to use, but I’m sure such a system could help improve the scouting experience for a lot of teams if some of these issues can be tackled.
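For what it’s worth, the merge step mentioned in point 2 is easy to sketch. This assumes each report is a flat dict of numeric fields; a real system would need more care with missing and non-numeric data:

```python
# Sketch: collapse 3+ redundant reports for one robot in one match by
# taking the per-field median, 1678-style.
from statistics import median

def merge_reports(reports: list[dict[str, float]]) -> dict[str, float]:
    fields = set().union(*reports)
    return {f: median(r[f] for r in reports if f in r) for f in fields}

merged = merge_reports([
    {"balls_high": 5, "balls_low": 1},
    {"balls_high": 6, "balls_low": 1},
    {"balls_high": 5, "balls_low": 2},
])
# -> {"balls_high": 5, "balls_low": 1} (key order may vary)
```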


This is the key in my mind.

Each team’s ideal alliance partners will be different depending on how they designed their robot and what they believe the ideal strategy is in terms of how to play the game. A one-size-fits-all approach would therefore need to collect far more data than any single team would ever care about in order to cover all the data elements that each team wanted. I think this would make a universal scouting system very unwieldy, with a lot of extra selections on the screen that you don’t care about, which would tend to discourage teams from using it. But, on the other hand, if a simpler scouting system did not collect what a given team was looking for, they would just revert to using their own system.

To add on to this, we make fairly large changes to our scouting system during the season in terms of what data we are collecting. We take our best guess at the beginning of the season about what data will be important and design our system to capture it, but then game strategies emerge that we had not foreseen and want to capture, so we add those elements to our scouting data. Defensive play is probably the biggest item that tends to emerge and change as the season goes on and causes us to refine our scouting data. Penalties are also a big area that we seem to miss at the outset. It is hard to imagine that a universal scouting system would be able to come up with the perfect set of scouting data before the first events.


Not to mention these data points change every single year. Would the people working on this have access to the game beforehand? I’m not sure that would even be good enough. The only feasible way would be to make it so generic that it is useless.

Instead of trying to make one scouting system that works for every team, make a system that crowdsources basic, agreed-upon quantitative metrics while still allowing teams to have their own system and metrics. For example, in 2020 this system could simply track the number of high goals scored, climbing, and control panel spinning. It could still allow you to add incomplete data if you decide you don’t want to track a certain metric. The key is making it simple and easy for teams to upload their data, and offering valuable incentives for doing so. If you aren’t happy with the standard, you can still track additional metrics that give you an edge, but the whole community benefits from having a baseline of data.

Honestly, what might make more sense would be an “open scout” development environment similar to open sight that would allow teams to create their own customizable scouting app using a simple building block approach where each of the modules had configurable elements.

For example, the base “high goal” module might keep a tally of each element scored in the high goal (as a button click on a tally icon on the screen). But you could configure that module to also keep track of inner goals, as a secondary button click after a high goal shot, if your team wants that. Further, if you want the app to record the timestamp of each event so that you can scout cycle time, or to record where each shot was taken from on a diagram of the field, you could turn on those features of the module and it would record that data. If you kept those features turned off, it would just give you a raw tally of total scored elements without recording when each one happened. And so on.
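A rough sketch of what one such configurable module could look like; all names here are illustrative, not an existing framework:

```python
# Hypothetical "open scout" building block: a tally module whose extras
# (inner-goal count, timestamps, shot location) are configuration flags.
import time
from dataclasses import dataclass, field

@dataclass
class TallyConfig:
    name: str = "high_goal"
    track_inner: bool = False       # secondary button for inner goals
    track_timestamps: bool = False  # enables cycle-time analysis
    track_position: bool = False    # (x, y) tap on a field diagram

@dataclass
class TallyModule:
    config: TallyConfig
    events: list[dict] = field(default_factory=list)

    def record(self, inner: bool = False,
               pos: tuple[float, float] | None = None) -> None:
        event: dict = {"module": self.config.name}
        if self.config.track_inner:
            event["inner"] = inner
        if self.config.track_timestamps:
            event["t"] = time.monotonic()
        if self.config.track_position and pos is not None:
            event["pos"] = pos
        self.events.append(event)

    def tally(self) -> int:
        """Raw count, which is all you get with every flag off."""
        return len(self.events)
```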

An open development system like that would allow teams to quickly and easily create a basic scouting app or spend the time to configure the more sophisticated aspects of the scouting system to suit their needs without needing to know a lot about app development or spending the time to code sophisticated interfaces. The data elements could be universally coded so that the data could be shared with other databases as they would recognize each type of data (but the coding would also recognize whether the data was collected with the simpler module configuration or with more complex settings so that when the data is compared between different teams, you have an idea of what can be compared and what cannot).

Alternatively, you could have community sharing of only the most basic data elements (i.e., number of high goals scored), while the more sophisticated elements (time between shots, inner goal versus outer goal, where the shot was taken from, etc.) would not be shared. That way, the community benefits from the same basic data while still allowing individual teams to take their own version of the scouting system up a notch or two to suit their needs, and to keep that data private if they want to.


I’ve been giving this problem some thought over the past few years. I don’t have all the answers, but I think a lot can be learned from Google’s early days using social media games (and Captchas) to improve their web indexes.

Aside from the technical issues, I think a big problem (that Google solved) would be the risk that someone would vandalize the database with intentional or accidental bad data. Here’s how I think you could solve that:

  • People have to register individually (and with team associations probably) to enter and retrieve data. Will need an authentication system.
  • Individual data records are associated with each user.
  • A good proportion (if not all) of match records are scouted by more than one user (randomly paired). Users gain “reliability” points by consistently matching each other’s scouting data. Averages can be weighted by reliability in the same way that fivethirtyeight.com weights different polling firms (see the sketch at the end of this post).
  • Along with that registration comes points, badges and rankings. There can be leaderboards for the most number of matches scouted, the most reliable data, the most number of scouts per team, the most favourited doodles, the most number of robot photos taken, the highest score on a pre-match warmup mini-game, etc.
  • The resulting data needs to be presented with an open-source API so that teams can build their own analysis apps.

With those kinds of incentives, I think people will work really hard to enter good scouting data. Plus it could be really fun!
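Here’s the reliability-weighting sketch promised above. The scheme (a plain weighted mean) is only an assumption for illustration; a real system would want something closer to how fivethirtyeight rates pollsters:

```python
# Sketch: weight each scout's report by their earned reliability score.
def weighted_stat(reports: list[tuple[float, float]]) -> float:
    """reports: (value, scout_reliability) pairs; reliability > 0."""
    total = sum(w for _, w in reports)
    return sum(v * w for v, w in reports) / total

# Two seasoned scouts and a new one disagree about balls scored:
print(weighted_stat([(5, 0.9), (5, 0.8), (9, 0.2)]))  # ~5.42
```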


Where is the Universal Horse Racing System or the Universal Slot Machine System? How about a Universal Win-the-NFL-Point-Spread System? …

Give it up

No one will ever agree on what is important to track. I could tell you, but you would all choose something else. That’s fine by me :slight_smile:

Does telemetry count? I’m not sure if horse racing has it, but it’s pretty common in auto racing…

Nope

Class, latest race result vs. class, time since last race, lane choice, jockey, owner, etc.

Why do I spend a lot of time on class? Expected results (no surprises). Teams that win, win; teams that lose, lose. That’s the biggest factor for an easy play.

All teams can be “stacked up” in any FIRST competition, and most have a historical record of past performance; then you look for movement within any competition or season.

But folks here don’t believe in my horse racing analogy and like endless stats (which I find mostly fluff and amusing); OK by me. I scout the way I want to scout until I find a better method. So far none have changed my mind on the scouting basics.

This is not rocket science with up to 60 teams; at Champs maybe I need to flex some to allow for differences in regional strength. But kids like apps, so be it, go for it.

This only works if teams can agree on the “basic” schema. For some stats this might be obvious, but others may be much more divisive. You would have to carefully craft a democratic process for drafting that schema, and if you have more than a few stats with 50/50 splits on whether they should be included, teams might get frustrated with having to spend time scouting data that they don’t care about. I think your idea is great (and something we’ve discussed for Peregrine), but it’s much more difficult than many people make it out to be, and requires massive buy-in from all users.

Peregrine does both of these things.

Peregrine supports this for the most part, and most of our local events have redundant reports thanks to 6343. Multiple reports can be submitted for the same team in a match and it will average their data together. The reliability thing is a great idea that Peregrine does not currently support (accepting PRs though :slight_smile:).

Peregrine has basic team leaderboards, but something like this is a great idea (someone could PR it)!

https://peregrine.ga/api/openapi.yaml


It’s hard to develop the critical mass of users required for this. If Peregrine suddenly had 500 teams using it, even in its current state, I think it would grow. As-is, nobody really cares, since there’s no advantage over a single-team system tailored to your own needs. I don’t think it’s impossible to start from just a few users and grow the app to the point where there are redundant reports for all teams in all matches, but it’s super hard.

TL;DR of this post: a universal scouting system is a very difficult problem. Is it worth solving when teams can fairly easily create a custom tailored system? Maybe.


Thank you to everyone who has responded so far. I get that this is difficult to design and implement and that many teams disagree on what data is useful. There are many good ideas in here. So far, here are the features I’m seeing that the data collection system would probably need.

Data Entry only from known sources

  • 1678 Method of having multiple data collectors and comparing datasets to help confirm accuracy

  • If multiple datasets for a team aren’t available use data from trusted and confirmed scouters

  • Data pulled from the FIRST API where available (auto line, climbs, etc.)

Only collect and display factual quantitative data

  • Have a “base” data set that is simple but everyone collects

  • Allow customized data collection for more advanced metrics, shared and compared only with the teams collecting that data point, and generally published after the conclusion of the event. This lets teams collect more data during the event if they feel it is competitively useful: they keep the competitive advantage of collecting more detailed data, but the data becomes available to everyone once the event is over. Never share any qualitative data outside the collecting team. (One possible base-plus-custom record shape is sketched below.)

Social aspects to increase buy-in (ranking, leaderboards, badges)

Provide data via an open-source API so that teams can work on their own data analysis

  • Considering maybe making simple data analysis available, but that is not part of the data collection system
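To make the base-plus-custom split concrete, here is one hypothetical record shape; the core fields are fixed and shared, while anything under custom stays private until after the event. All field names are examples:

```python
# Hypothetical report schema: a fixed shared core plus a private,
# free-form extension block for advanced metrics.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class BaseReport:
    match_key: str
    team_key: str
    scout_id: str
    auto_line: bool = False
    climb: str = "none"  # "none" | "park" | "hang"
    balls_scored: int = 0
    # Shared only after the event ends, per the rules above:
    custom: dict[str, Any] = field(default_factory=dict)
```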

I don’t know a good way to temper concerns about the reliability of the app at events. If anyone has suggestions for how to approach this, I’m open to hearing them. What would be a good way to help teams be sure it wouldn’t fail them?

If an app like this existed would your team consider using it? Would your team never use an outside application or is there something else I’m missing for your required features to use this instead of your own system?


I think no matter what you make, the issue (as others have pointed out) will be buy-in. The elite teams, and the teams who have already developed robust scouting systems, probably aren’t going to buy in. The teams who would get the most use out of a system like this are the teams who don’t regularly scout, or who have a hard time interpreting the scouting data they do gather. So having inexperienced scouters using this system may lead to bad data.

Perhaps a better approach might be to assist specifically the teams who have difficulty scouting or no scouting methods at all. The endgame is a pick list of 32 teams. How about a picklist generator that runs off of Blue Alliance data or information from Caleb Sykes’ scouting database (or similar)? You could have the user input weights for each metric, and use those weights to generate a picklist. The weights would allow them to skew the list toward teams who complement their robot well. The user could mark teams as DNPs, which auto-sort to the bottom. You could also allow custom metrics with custom values, like “How easy is the team to work with on a scale of 1-10”, which could also take a weight. Getting a somewhat valid picklist into teams’ hands seems to me like it could be a great help to teams who don’t have scouting ability or manpower.
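A minimal sketch of that generator, assuming the per-team stats have already been exported from TBA or a scouting database; every metric name and weight here is an example:

```python
# Sketch: weighted-sum picklist with DNPs forced to the bottom.
def make_picklist(stats: dict[str, dict[str, float]],
                  weights: dict[str, float],
                  dnp: frozenset = frozenset()) -> list[str]:
    def score(team: str) -> float:
        if team in dnp:
            return float("-inf")  # DNPs sort to the bottom
        return sum(stats[team].get(m, 0.0) * w for m, w in weights.items())
    return sorted(stats, key=score, reverse=True)

picklist = make_picklist(
    stats={"frc254": {"opr": 62.1, "climb_rate": 0.9, "ease": 8},
           "frc118": {"opr": 55.4, "climb_rate": 1.0, "ease": 9},
           "frc9999": {"opr": 40.0, "climb_rate": 0.2, "ease": 3}},
    weights={"opr": 1.0, "climb_rate": 20.0, "ease": 0.5},
    dnp=frozenset({"frc9999"}),
)
# -> ["frc254", "frc118", "frc9999"]
```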

Agree; a universal scouting app will likely never happen. Not all teams scout, and those that do generally have their own ideas of what works for them. If needed, there are a plethora of data sets and video to watch. I never feel like I am missing data, and I actively look for data that diverges from what our “eyes on bots” tracked. I use this to validate what we saw and to look for statistical outliers we may need to evaluate further… this means 0-2 teams per competition; the rest statistically match our assessment, and we have a 28-deep ordered pick list at the end of day 1.

I think the fallacy here is the idea that in-competition data collection, with random groups scouting a single 10-match competition (a low sample rate) all the way through to selection, and somehow handling alliance-member weighting too, is something crowdsourcing does best. I disagree. This is not as simple as finding thirteen red flags somewhere in the US as efficiently as possible… if I recall, the winner of that challenge used Facebook messaging and took the money in something like 15 minutes.

Robotics “alliance makeup and strategy” is a whole different challenge… that is why, IMO, there will never be a universal way to scout or a shared-data app that works. It is more likely to introduce noise, bias, and errors than anything else. But hey, kids like apps; they grew up with them. Apps are only as good as how they are written and what data goes into them. Pen is faster. More nuanced.

It’s sort of like asking someone what their favorite car is.

Remember, inputting into an app takes your eyes off the field and involves much more dedicated brain processing.
