Best scouting metrics for Rapid React

The more I study this game, the more layers I realize it has.

What are people's thoughts on the best metrics for scouting during events for Rapid React?

I think more than ever, a robot's value is going to be difficult to measure based on a few traditional metrics like # of cycles or climbing level and speed. Those numbers will vary greatly depending on what the overall game strategy was…did that robot just grab all of the easy cargo, or did they leave that for their partners and go after the difficult ones? Did they concentrate on preventing the other alliance from getting the cargo RP once the match was in hand? Did they slow down on scoring to focus on climb coordination to lock up that RP?

This seems like a real Moneyball / Sabermetrics challenge this year compared to other games.

I wonder if an OPR or an overall winning margin calculation would be the best metric.
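For anyone unfamiliar, OPR is conventionally computed as a least-squares fit of per-team contributions to alliance scores. A minimal sketch (team numbers and scores here are made up for illustration; a real event would have far more matches than teams):

```python
import numpy as np

# Hypothetical qualification results:
# (red alliance teams, blue alliance teams, red score, blue score)
matches = [
    ((254, 1114, 2056), (118, 148, 33), 62, 45),
    ((254, 118, 33), (1114, 2056, 148), 58, 70),
    ((2056, 148, 33), (254, 1114, 118), 51, 66),
]

teams = sorted({t for red, blue, _, _ in matches for t in red + blue})
idx = {t: i for i, t in enumerate(teams)}

# One row per alliance-match: 1 for each team on that alliance,
# target value = that alliance's score.
rows, scores = [], []
for red, blue, red_score, blue_score in matches:
    for alliance, score in ((red, red_score), (blue, blue_score)):
        row = np.zeros(len(teams))
        for t in alliance:
            row[idx[t]] = 1.0
        rows.append(row)
        scores.append(score)

# OPR = least-squares estimate of each team's contribution to alliance score.
opr, *_ = np.linalg.lstsq(np.array(rows), np.array(scores), rcond=None)
for t in teams:
    print(t, round(float(opr[idx[t]]), 1))
```

Winning margin works the same way with `red_score - blue_score` (and its negation) as the targets instead of raw alliance scores.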

On top of that, in Ontario all events are single day, so whatever methods of data collection and processing you are using need to happen fast, as we will be going from quals to alliance selection without any time in between…no late-night scouting meeting after day 1!

Depends on how many scouters you have. I have a scouting algorithm that focuses on points scored and points denied, but it requires a bunch of scouters. There is a version that requires fewer people but is not as accurate. I have found OPR to be unreliable in the past, and this adjusted RPP approach is better, but to each their own. Most properly set up scouting systems should work at one-day comps.

Can you elaborate on how you think this year is different from previous years? All the factors you describe seem like they apply just as much to other years with balls and climbs. Defense (such as keeping cargo away from opponents) is always hard to quantify, we usually don’t have any real metrics for it and scouts just add a note if a robot stood out as especially effective. In any game with balls, a few balls will be easier to get to, most will be harder, this doesn’t make cycle time a bad metric (imperfect, sure, but I think it’s still very useful). In any games with multiple objectives (such as shooting and climbing), a team will decide match by match what to prioritize; all you can do is measure what they actually achieved. Did they ever demonstrate that they can do a lot of cycles quickly? Did they ever demonstrate that they can climb? Did they ever try and fail to climb? Did they ever break down?
OPR certainly has some value. However, it's not as helpful when you're trying to build a team with specific capabilities (e.g., we want a fast cycler and don't care whether they can climb, because we don't want them to get in our way while we're climbing. Or vice versa, we really really want a reliable high/traversal climber because we can't climb ourselves, etc.)

Tbh we haven’t really discussed scouting as a team yet, but this year doesn’t seem radically different from any other.

Being mostly a paper scouter, what I will track is a system that tells the story of what happened in that match, likely with shorthand, per team, making for an easy visual comparison when it comes to a pick list.

To build on this: during that day 1 or 2 strategy meeting, we take the raw scoring data, cycle numbers, etc. into account, but we always have a section for comments, or have individuals dedicated simply to visual scouting, to make sure that we have comments that tell a more complete story.

Using 2019 as a reference example, if you were on pace to win and get 4 points in a match, you did not have the ability to send all 3 robots to try to stop the other alliance from completing a rocket to gain that RP.

Denying others from getting an RP is almost as valuable as gaining one, especially in our upcoming Ontario 18 team events this year.

If you are winning and already scored 20 cargo and they have scored 15, best play is likely to have all 3 robots go on defense and deny the opponent as opposed to running up the score.

In that respect, I think different strategies will play out…you want to win by just enough to give you your best chance at RP and deny the opponent, which might translate into funky offensive numbers.

Also the aspect of being a “team” player (hockey analogy…the guy who goes into the corners vs. the guy who stands in front of the net and waits for the pass)…if our robot is running all over the field chasing down hard-to-get cargo and someone else is picking up the easy stuff in front of our driver station, their numbers are likely better, but who is the better contributor? How can your scouters accurately measure that?

Bare minimum:

Taxi
Balls scored lower
Balls scored upper
Climb level
Observations/comments

If the FRC API is anything like previous years', you will likely be able to get per-robot data for taxi/climb.

Nice to have (in order of easiest to hardest to implement/track):

Climb time
Auto lower/upper scores
Shots missed lower/upper
Defense played/quality
Auto paths
Shooting locations
Dedicated scouts keeping track of overall driver quality
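The bare-minimum fields above fit in a tiny per-match record. A sketch of what that might look like (field names and the point math are my own, using teleop cargo values of 1 point lower / 2 points upper; auto weighting omitted for simplicity):

```python
from dataclasses import dataclass

@dataclass
class MatchRecord:
    team: int
    match_num: int
    taxi: bool        # left the tarmac in auto
    lower_cargo: int  # balls scored in the lower hub
    upper_cargo: int  # balls scored in the upper hub
    climb_level: int  # 0 = none, 1-4 = low/mid/high/traversal
    comments: str = ""

    def cargo_points(self) -> int:
        # Teleop values only: 1 point lower, 2 points upper.
        return self.lower_cargo + 2 * self.upper_cargo

rec = MatchRecord(team=1234, match_num=7, taxi=True,
                  lower_cargo=3, upper_cargo=8, climb_level=3,
                  comments="played some defense mid-match")
print(rec.cargo_points())  # 3 + 2*8 = 19
```

One record per team per match is easy to aggregate later, and the free-text comments field preserves the “tell the story” observations alongside the counts.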

you did not have the ability to send all 3 robots to try to stop the other alliance from completing a rocket to gain that RP.

You couldn't do that anyway in 2019, but that's beside the point. I agree with scichols here; this year really isn't much different from previous years. It's always a game of which RPs you're going for and what strategies teams are playing to get picked/be the one picking at that event. Good scouting should show who is the cream of the crop even with the differing strategies per match.

I don’t think this is unique to 2022. In 2020/2021, we saw teams take advantage of the very quick human-player overflow, or do the relatively quick trench cycles, or slowly sweep the entire floor for additional balls. One thing that you could do is record where the robot collects and shoots each cargo and determine a total cycle distance, to help show whether they are scoring the easy cargo. I personally think this is probably a bit overkill for the majority of teams, however.
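The cycle-distance idea above is cheap to compute once you have pickup/shot locations. A minimal sketch, assuming scouts record (x, y) field coordinates (the coordinates below are hypothetical):

```python
import math

# Hypothetical (x, y) field coordinates in meters for each cycle:
# where the robot collected the cargo, and where it shot from.
cycles = [
    ((2.0, 1.5), (4.0, 4.0)),  # (pickup, shooting spot)
    ((7.5, 6.0), (4.2, 4.1)),
    ((1.0, 7.0), (4.1, 3.9)),
]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Total distance travelled: pickup -> shot for each cycle,
# plus shot -> next pickup between cycles.
total = 0.0
for i, (pickup, shot) in enumerate(cycles):
    total += dist(pickup, shot)
    if i + 1 < len(cycles):
        total += dist(shot, cycles[i + 1][0])

avg = total / len(cycles)
print(round(total, 1), round(avg, 1))
```

A team with a high cargo count but a tiny average cycle distance was probably feeding on the easy balls near its station; a long average distance suggests it was doing the hard retrievals.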

For single day off-season events where you can’t really have a picklist meeting, I’ve had good success creating a separate picklist tab on excel/google sheets with a few key statistics (full stats with trendlines available for a deeper dive on separate tabs). After each match, you can drag a team up or down the list (using column B as a temporary storage), depending on how each team does, and the stats update. We’ll probably do something similar for Saturday morning at our regionals this year to tweak our picklist from the night before.

I like the eyeball test. Then mentally simulating a match with/against each robot. That allows us to factor in our own strategy. Sometimes a data point can be important (cargo for us in 2019) but that may not tell us how a team performs under pressure vs open field. For me gut/eyeball comes in very high in evaluating teams pre-match and alliance selection because the data doesn’t have the depth to capture all aspects.

Echoing this. Remember, you don’t need to scout anything that the API provides. I don’t think we have 2022 match breakdown spec yet, but recent history says it will likely include each robot’s auto and climb as well as total game pieces scored for the alliance. That last metric is not overly helpful for scouting, but if you’re really limited on scouting resources, an average/3 will give you a very rough sense of the team’s contribution to the alliance on scoring.
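The average/3 idea mentioned above is trivial to compute from API-style alliance totals. A rough sketch (team numbers and cargo counts are made up):

```python
from collections import defaultdict

# Hypothetical API-style results: (alliance team list, alliance cargo count).
alliance_results = [
    ([111, 222, 333], 24),
    ([111, 444, 555], 18),
    ([222, 333, 666], 30),
]

shares = defaultdict(list)
for teams, cargo in alliance_results:
    for t in teams:
        # Crude equal split: credit each robot a third of the alliance total.
        shares[t].append(cargo / 3)

# Average the per-match shares to get a rough per-team contribution.
contribution = {t: sum(v) / len(v) for t, v in shares.items()}
print(contribution[111])  # mean of 24/3 and 18/3
```

Obviously this smears a carry's output across its partners, so it's only a tiebreaker-grade signal, but it costs zero scouting labor.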

I’d much rather have one to two people taking high quality, descriptive notes on each team’s attributes (e.g. when is the team at its best? at its worst? where do they shoot from? how accurate are they? is their climb fast or slow? did they fail a climb? did they brown out? etc.) than six people frantically collecting data – much of which is already available.

I would hate to lose my 1 seed in a tie breaker because I was concerned about the 8th seed getting an RP.

  1. I have heard of the API data before, but how do you access it, and is it real-time? Can I have my kids build an app/spreadsheet around it using their own ideas for formulas?

  2. I have seen a post on here from 1-2 years ago discussing the merits of a simple scoring system where teams used a simple “eyeball” test…basically evaluating an alliance and ranking the 3 robots 1, 2, 3 on how much they contributed to that match. Keep a running total for the event, and at the end the lowest scores (lots of 1s) were your best performers compared to their peers.

Anyone used a system like that when short of scouters or in off season events?
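The 1/2/3 tally described above takes only a few lines to aggregate. A sketch (team numbers and rankings are hypothetical; I average rank rather than summing, so teams with different match counts stay comparable):

```python
from collections import defaultdict

# Hypothetical scout entries: for each match, the three alliance robots
# ranked 1 (biggest contributor) to 3 (smallest).
rankings = [
    {111: 1, 222: 2, 333: 3},
    {111: 1, 444: 2, 555: 3},
    {222: 1, 333: 2, 111: 3},
]

rank_sum = defaultdict(int)
matches_seen = defaultdict(int)
for match in rankings:
    for team, rank in match.items():
        rank_sum[team] += rank
        matches_seen[team] += 1

# Lowest average rank = strongest performer by the eyeball test.
avg_rank = {t: rank_sum[t] / matches_seen[t] for t in rank_sum}
for team in sorted(avg_rank, key=avg_rank.get):
    print(team, round(avg_rank[team], 2))
```

One scout per alliance can maintain this, which is why it suits short-handed teams and one-day events.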

TBA is at the mercy of the FRC API, which is at the mercy of the events, which can have a myriad of issues. When everything is going smoothly, though, it's fairly close to real time.

Do note, 2022’s API isn’t released yet. IIRC, they (FIRST) release it closer to Week 0.

Check it out – hot off the presses!

You can! I’m planning on my own students writing a Python script to take info from The Blue Alliance, do some OPR math on the score breakdowns when available, and post that to a Google Sheet. The plan would be to run the script after each match if possible.
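For anyone wanting to try the same, a minimal sketch of the TBA side, using the public read API v3 (you need a read key from your TBA account page; the `alliance_rows` helper and its output shape are my own, not part of TBA):

```python
import json
import urllib.request

TBA = "https://www.thebluealliance.com/api/v3"

def fetch_matches(event_key: str, auth_key: str):
    """Fetch match results for an event from The Blue Alliance."""
    req = urllib.request.Request(
        f"{TBA}/event/{event_key}/matches/simple",
        headers={"X-TBA-Auth-Key": auth_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def alliance_rows(matches):
    """Flatten TBA match dicts into (team_keys, score) rows for OPR-style math."""
    rows = []
    for m in matches:
        if m.get("comp_level") != "qm":
            continue  # qualification matches only
        for color in ("red", "blue"):
            a = m["alliances"][color]
            # TBA reports unplayed matches with null or -1 scores.
            if a["score"] is not None and a["score"] >= 0:
                rows.append((a["team_keys"], a["score"]))
    return rows
```

The `(team_keys, score)` rows can feed a least-squares OPR fit directly, and a library like gspread can push the results to a Google Sheet between matches.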
