Scouting Database/Event Simulator Metrics

Hello all,
Now that we finally have access to the detailed scoring breakdown fields, I’m going to get to work on my 2019 scouting database and event simulator. For reference, here’s my 2018 scouting database.

FIRST was kind enough to provide essentially all of the raw scoring data, so now we have to figure out the best ways to aggregate them into useful metrics. I’d like to use this thread to bounce around ideas for metrics and to make sure I’m capturing as much of the raw data as possible while not adding anything superfluous. Please provide feedback if you think I’m missing a metric or if any of my suggested metrics seem useless. Remember I can only work with the data provided to me in the linked documentation, so don’t ask me for a metric like “hatches picked up off the floor”.

With that in mind, here is my first pass at a list of metrics I’m planning to develop. “CC” means calculated contribution (or component OPR), and “rate” means I’m just counting the number of times something happens for a specific team and dividing by all of their matches:
winning Margin Elo
CC to total Points (this is equivalent to OPR)
CC to unpenalized total Points
CC to winning Margin
CC to win
CC to ranking Points
CC to auto Points (I’m assuming right now that “auto Points” represents an unofficial snapshot of the real-time score after auto, but I’m not sure; I may choose to use sandstorm bonus points for this instead, or just total points - teleop points - foul points)
CC to teleop Excluding Endgame Points
CC to endgame Points

sandstorm Hab Line Cross Rate
sandstorm level 2 Hab Line Cross Rate (?)
level 2 Start Rate
CC to sandstorm Bonus Points
CC to null Hatch Panels (this one could be either null hatch panels or pre-set cargo; does anyone care which one I use? Does anyone care which specific stations get null panels or cargo, or is this aggregate sufficient?)

Hatch Panels:
CC to hatch Panel Points
CC to level 3 Hatch Panel Points
CC to level 2 Hatch Panel Points
CC to level 1 Hatch Panel Points
CC to cargo Bay Hatch Panel Points (Merge level 1 rocket with this one?)

**Not sure about including this section. I decided that the same scoring spots on either side of the field could be merged into a single metric, which brings us from 20 metrics down to 10. Still maybe overkill though?**
CC to top Rocket Near Hatch Panel Points
CC to top Rocket Far Hatch Panel Points
CC to mid Rocket Near Hatch Panel Points
CC to mid Rocket Far Hatch Panel Points
CC to low Rocket Near Hatch Panel Points
CC to low Rocket Far Hatch Panel Points
CC to DS Facing Cargo Bay Hatch Panel Points
CC to near Cargo Bay Side Hatch Panel Points
CC to mid Cargo Bay Side Hatch Panel Points
CC to far Cargo Bay Side Hatch Panel Points

CC to cargo Points
CC to cargo Efficiency (cargo efficiency = scored cargo / total hatch panels)
CC to non-null cargo efficiency (non-null cargo efficiency = scored cargo in non-null hatch panels / non-null hatch panels)
CC to level 3 Cargo Points
CC to level 2 Cargo Points
CC to level 1 Cargo Points
CC to cargo Bay Cargo Points (Merge level 1 rocket with this one?)

**See the same note as in the Hatch Panel section above**
CC to top Rocket Near Cargo Points
CC to top Rocket Far Cargo Points
CC to mid Rocket Near Cargo Points
CC to mid Rocket Far Cargo Points
CC to low Rocket Near Cargo Points
CC to low Rocket Far Cargo Points
CC to DS Facing Cargo Bay Cargo Points
CC to near Cargo Bay Side Cargo Points
CC to mid Cargo Bay Side Cargo Points
CC to far Cargo Bay Side Cargo Points

HAB Climb Level 1+ Rate
HAB Climb Level 2+ Rate
HAB Climb Level 3 Rate
CC to HAB Climb Points

dead Or No Show Rate (preMatchLevel = none or habLineRobot = none)
CC to dead Or No Show
CC to right Side Bias (? Points scored on right side - points scored on left side)
CC to own Side Bias (? Points scored on own side minus points scored on other side, 0 if in center station)

CC to rocket RP Percentage (Take scored elements in more completed rocket and divide by 12)
CC to complete Rocket RP
CC to HAB Docking RP Percentage (If HAB Docking RP received, 1, else take HAB Climb Points and divide by 15)
CC to HAB Docking RP
I never know where to put these; they can reasonably go in almost any section. Any preferences?
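For clarity, here is how I’d compute the two percentage definitions above in code (a sketch with made-up numbers; the function names are mine, not field names from the data):

```python
def rocket_rp_percentage(rocket1_scored, rocket2_scored):
    # Take scored elements (hatches + cargo) in the more complete rocket;
    # a fully complete rocket holds 12 game pieces.
    return max(rocket1_scored, rocket2_scored) / 12

def hab_docking_rp_percentage(got_rp, hab_climb_points):
    # If the HAB Docking RP was earned, credit 100%; otherwise scale the
    # alliance's HAB climb points against the 15 needed for the RP.
    return 1.0 if got_rp else hab_climb_points / 15

print(rocket_rp_percentage(9, 4))           # 0.75
print(hab_docking_rp_percentage(False, 9))  # 0.6
```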

CC to foul Count
CC to tech Foul Count
CC to foul Points
CC to fouls Drawn
CC to tech Fouls Drawn
CC to foul Points Drawn
Chairman’s Strength (mCA)
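For anyone curious how these are computed, here is a minimal sketch of a calculated contribution (component OPR) and a rate, using made-up teams, alliances, and numbers (everything below is purely illustrative):

```python
import numpy as np

# Toy event: 3 teams, 3 matches, each alliance is a pair of teams.
teams = ["frc1", "frc2", "frc3"]
alliances = [("frc1", "frc2"), ("frc2", "frc3"), ("frc1", "frc3")]
component = [30.0, 40.0, 34.0]  # e.g. alliance hatch panel points per match

# Design matrix: A[m][t] = 1 if team t played on alliance m.
A = np.zeros((len(alliances), len(teams)))
for m, alliance in enumerate(alliances):
    for team in alliance:
        A[m][teams.index(team)] = 1.0

# Least-squares solution gives each team's calculated contribution (CC).
cc, *_ = np.linalg.lstsq(A, np.array(component), rcond=None)

# A "rate" metric is just occurrences divided by matches played,
# e.g. level 3 climbs per match:
climbs = {"frc1": 2, "frc2": 0, "frc3": 1}
matches_played = {"frc1": 2, "frc2": 2, "frc3": 2}
climb_rate = {t: climbs[t] / matches_played[t] for t in teams}
```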

Feedback appreciated.


From glancing at the API data, it seems like the most obvious would be estimated average contributions (EAC) to points from upper bays and lower bays, respectively.

Other things I’d calculate per team (other than the obvious):

  • No-show / Did Not Move rate, based on the HAB starting level / line cross field.
  • Success rates for HAB1, HAB2, and HAB3 climbs, respectively.
  • EAC to number of scoring cycles completed by alliance.

I don’t have any to suggest at the moment. I’ll post back here if I can come up with anything.

I think you can safely get rid of the near/far rocket cargo metrics in favor of just the total for each level. Which side the cargo falls to is pretty much random, so in theory there shouldn’t be any significance to that data. I do think you should keep the cargo ship and rocket level 1 metrics separate, though. Knowing where on the field your opponent will be going before they go there is critical for planning an effective defensive strategy.

Forgot about that, yeah those are pretty silly then, so I’ll take them out.

Regarding this section, I think the information regarding the near/far side of the rocket is useful, as a particular team could be better at scoring on the far-side goals than another team. I think if you are looking for areas to reduce metrics, the near/mid/far distinction for the cargo ship is much less important. I would argue scoring on the far bay is about the same degree of difficulty as scoring on the mid or near bay. Even if it is harder, I’m not sure how this would affect scouting or strategy development.

Caleb, your effort is appreciated by Team Resistance. It has helped our 3-person scouting team stay on top of things like the larger teams do. Around rounds 6-10 last year, the predictor got more and more accurate. This year, tracking teams that can score on the 2nd and 3rd rocket levels as well as make the 19" climb on a consistent basis will be important for selecting a successful alliance.

Caleb, I am curious whether the week “0” event data shows any robot trends. Observationally, there seemed to be a lack of robots that could work with both hatches and balls. Robots that could climb the 6" and 19" levels were also a small group percentage-wise.

I do not have many suggestions right now, but there are two metrics that I believe you could safely get rid of, Caleb.

I saw you were questioning having the rocket level 1 and cargo ship metrics separate, and I think they should be merged: they have about the same effective difficulty, and if a team cannot do one, they cannot do the other, as they are at the same height.

I also question how useful the null cargo and hatch panel metrics could be, as those are very dependent on the team’s alliance. However, I do not know much about scouting, so I want to see others weigh in on that.

Well, I’m going to err on the side of keeping both in since there are conflicting takes. Calculated contributions are linear, so you can still just sum them together if you think they are more helpful combined than separate.

I don’t know how useful these will be, but I am curious to see what comes out of them. Plus since it’s just one field I don’t think it’s too bad to keep it in. It might give you a little bit of a feel for what kind of strategies a team likes or let you know how cluttered with balls their matches usually are.

I think the averages by category are really interesting, TBA has a nice breakdown on their week 0 insights page. Might not be good to read too much into it as it’s all still unofficial, but these are probably closer to week 1 averages than what was in most of our heads.


I’ve just updated the “Instructions” sheet of my scouting database to have the following metric descriptions:

I’m not going to delete any of the metrics I have listed here, but I would still be willing to add in a few more if anyone has any last ideas. Also, please let me know if any of the descriptions are unclear.

Thanks for giving the TBA Insights link. I forgot to check it before making my inquiry. It is interesting data that confirms most of my observations.

Caleb, I look forward to using the scouting sheet in 7 days at Palmetto Regional.

Just subtly reminding me I’ve got a deadline. :slight_smile: I’m targeting the scouting database to be out by tomorrow, and the event simulator to be out on Monday. I’ll definitely have both out by Wednesday at the latest, even if they are missing features.

I’m relatively new, so I don’t know whether this exists or not… Is there a defensive metric (DPR) available? It would be interesting to see how a robot can slow down the opposition.

There is one, where you essentially calculate it the same way as OPR, except you use your opponents’ score instead of your own.

How can that information be downloaded from TBA? I’m looking at trying to create some advanced metrics (something like basketball’s PER) which I’ll share. My biggest problem is calculating the effect a defensive team has on an opposing team/alliance.

AFAIK TBA will not give you this kind of data; you will have to calculate it yourself.

The only defensive score I’ve seen people try to calculate is DPR

As for actual mathematical derivations, here you go:

OPR: Stands for Offensive Power Rating. It essentially tries to quantify how much a team contributed to their alliance’s final score. OPR assumes that an alliance’s score is a linear combination of the individual teams’ contributions, so you get something like A + B + C = score, where A, B, and C are the individual teams’ contributions to the score of that match. To solve for the teams’ contributions, set up a system of equations over all matches and take the least-squares solution. Generally a higher OPR indicates a stronger robot, but this may not always be the case.

DPR: Stands for Defensive Power Rating. It essentially tries to quantify a team’s impact on the opposing alliance’s score. To calculate this, you do the exact same steps as above, but use the opposing alliance’s total score instead of your own. A lower DPR tends to indicate that a team has a greater suppressive effect on the opponents’ score.

CCWM: Stands for Calculated Contribution to the Winning Margin. CCWM = OPR - DPR. It tries to show how much of a positive impact a team has on an alliance. A higher CCWM is usually indicative of a better team to have on an alliance.
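To make that concrete, here is a minimal least-squares sketch computing all three. The matches and scores are toy 2v2 examples, not real data:

```python
import numpy as np

teams = ["A", "B", "C", "D"]
# Toy 2v2 matches: (red alliance, blue alliance, red score, blue score)
matches = [
    (("A", "B"), ("C", "D"), 60.0, 40.0),
    (("A", "C"), ("B", "D"), 55.0, 45.0),
    (("A", "D"), ("B", "C"), 50.0, 50.0),
]

rows, own, opp = [], [], []
for red, blue, red_score, blue_score in matches:
    # Each alliance appearance is one equation: sum of its teams' contributions.
    for alliance, scored, allowed in ((red, red_score, blue_score),
                                      (blue, blue_score, red_score)):
        rows.append([1.0 if t in alliance else 0.0 for t in teams])
        own.append(scored)   # response for OPR: your alliance's score
        opp.append(allowed)  # response for DPR: the opposing alliance's score

A = np.array(rows)
opr, *_ = np.linalg.lstsq(A, np.array(own), rcond=None)
dpr, *_ = np.linalg.lstsq(A, np.array(opp), rcond=None)
ccwm = opr - dpr
```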

These are some statistics you can use; however, be cautious, as they are not always indicative of true robot capabilities, and there are many weaknesses with this method. For example, it assumes that your robot plays at a constant level in every match, which is usually not the case, as strategies change. This method shouldn’t replace actually watching matches and recording data.

TBA automatically calculates OPR, DPR, and CCWM for each team at each event. Even if it’s not shown on the website, you can get it from the API at “/event/{event_key}/oprs”. You’ll need to set up an account and get an API key to access this data.
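For example, here is a minimal Python sketch of hitting that endpoint. The response shape shown in `sample` is my recollection of the TBA API v3 docs, so treat it as an assumption and check the docs:

```python
import json
import urllib.request

TBA_BASE = "https://www.thebluealliance.com/api/v3"

def fetch_oprs(event_key, api_key):
    """Fetch the OPR/DPR/CCWM data for one event from the TBA v3 API."""
    req = urllib.request.Request(
        f"{TBA_BASE}/event/{event_key}/oprs",
        headers={"X-TBA-Auth-Key": api_key},  # your TBA read API key
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# The response maps each metric to a dict keyed by team ("frcXXXX"), e.g.:
sample = {"oprs": {"frc254": 55.2},
          "dprs": {"frc254": 12.1},
          "ccwms": {"frc254": 43.1}}
print(sample["oprs"]["frc254"] - sample["dprs"]["frc254"])  # ≈ the team's ccwm
```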