2019: Defensive Rating Systems

Looking at Week 1 games so far, it seems like defence will play a major part in the 2019 playoffs. This is perhaps the most defensive game FIRST has released in a while: tight corridors, small point differentials deciding games, few safe zones, and refs seeming pretty generous on the pinning counts by defence bots.

The question is, how does a scouting team effectively measure the defensive capabilities of a playing team? Dedicated defence bots do two main things: reduce the opposition's cycles through pins, and increase their alliance's cycles by reducing traffic. What systems might suit a scouting team without wifi access, while not being too computationally expensive?

Simple counts of game pieces dropped by the opponents obviously do not capture much information. DPR/OPR perhaps lacks accuracy in early matches and, as far as I know, does not capture the natural evolution of a team during an event (I haven't looked into it much, really). Some factors to consider are schedule difficulty, team progression over time, penalties, the defender's own offensive scoring potential, etc.

Here are some more ideas for quantifying defence strength (a quick sketch of the simpler ones follows the list):

  • (Sum of median scoring potential of opposition bots) - (actual pre-penalty score)
  • ^ same, but with alliance partners
  • (Average win margin) - (average loss margin)
  • Cycles conceded
  • Average fouls per defence game
  • Subjective rating
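
To make a couple of these concrete, here is a minimal Python sketch of the two simplest metrics (margin differential and fouls per defence game). The per-match record format is hypothetical, just something a paper-sheet scouting team could transcribe:

```python
# Minimal sketch of two of the simpler metrics above, assuming a
# hypothetical list of per-match records kept by the scouting team.
matches = [
    {"our_score": 68, "opp_score": 54, "fouls": 1, "played_defense": True},
    {"our_score": 45, "opp_score": 61, "fouls": 0, "played_defense": False},
    {"our_score": 59, "opp_score": 50, "fouls": 2, "played_defense": True},
]

wins = [m["our_score"] - m["opp_score"] for m in matches if m["our_score"] > m["opp_score"]]
losses = [m["opp_score"] - m["our_score"] for m in matches if m["our_score"] < m["opp_score"]]

# (Average win margin) - (average loss margin), guarding against empty lists.
margin_metric = (sum(wins) / len(wins) if wins else 0) - (sum(losses) / len(losses) if losses else 0)

# Average fouls per defence game.
defense_games = [m for m in matches if m["played_defense"]]
fouls_per_defense_game = (
    sum(m["fouls"] for m in defense_games) / len(defense_games) if defense_games else 0
)

print(f"margin metric: {margin_metric:.1f}, fouls/defence game: {fouls_per_defense_game:.2f}")
```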

Please share your thoughts! How are you scouting defence this year?

5 Likes

It can be really hard to scout for a defensive bot. Normally you don't see teams playing defense in qualification matches. Sometimes, if you are lucky, teams will realize that they are outside of the top 8 and probably aren't going to be picked during the first round of alliance selection, so they show off their defensive strength in their last few qualification matches. But that doesn't happen often during the early weeks of competition.

So what I am trying to say is: I don't think it is possible to get a quantifiable metric for defense if teams refuse to play defense. So then, how am I supposed to pick a defensive partner?

I think you can approach this as “What am I generally looking for in an alliance partner?” and then “Will their robot be able to play defense?”

The qualities I consider are:

  • Dependability (no breakdowns, no radio resets, etc.)
  • Low center of gravity (they are in a physical role and shouldn't be at risk of tipping over)
  • Appropriate drive train (sorry H-drive, mecanum, and all-omni: these drive trains are not effective in a pushing match)
  • Are they able to contribute to another aspect of the game while not playing defense? (For example, is the robot able to drive off the level 2 HAB? If so, they gain an extra 3 points that you wouldn't otherwise have had while they are not playing defense.)
  • Are they willing to work as a team to execute a strategy?
  • Most importantly, their ability to stay within their frame perimeter while playing defense (no floppy arms!)
2 Likes


To add to the qualities listed above: wheelbase dimensions. A long robot would likely be able to block more area and withstand more pushing around, but too long a robot would probably have poor handling.

DPR is basically always bad, even by the end of the event. One definition of DPR is OPR - CCWM, meaning that to have a good (low) DPR, you'd want a high CCWM and a low OPR. This is fine in theory, but in practice CCWM isn't a great measurement to use for this purpose. OPR, though, is pretty good at quantifying offensive ability, so if we had a better replacement for CCWM we could use the same idea to come up with ratings of defensive ability. Since Elo is designed around winning margin, you could potentially use some kind of normalized Elo in place of CCWM to help get a measure of defensive ability.

A team with a 250-point Elo advantage over another team would be expected to beat that team by one score stdev in a match where both have equivalent partners. Since the stdev of scores this year is roughly 15 points, and the average Elo is roughly 1500, you can convert Elo into units of 2019 winning-margin points using ((Elo - 1500)/250)*15. Taking OPR - (normalized Elo) then gives us something that potentially represents defensive ability. I've uploaded a quick workbook showing this (remember, lower = better).
defensive metric attempt.xlsx (148.6 KB)
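
For anyone who'd rather script the conversion than open the workbook, here's a minimal sketch using the rough numbers from the post (stdev ≈ 15, average Elo ≈ 1500); the example OPR/Elo values are made up:

```python
# A minimal sketch of the OPR - (normalized Elo) idea; the team values
# below are made up, and the constants follow the numbers in the post.
AVG_ELO = 1500       # rough average Elo
ELO_PER_STDEV = 250  # Elo gap corresponding to one score stdev
STDEV_2019 = 15      # rough stdev of 2019 scores

def normalized_elo(elo: float) -> float:
    """Convert Elo into units of 2019 winning-margin points."""
    return (elo - AVG_ELO) / ELO_PER_STDEV * STDEV_2019

def defensive_metric(opr: float, elo: float) -> float:
    """OPR minus normalized Elo; lower = better, by this idea."""
    return opr - normalized_elo(elo)

# A team with modest OPR but strong Elo scores low (good):
print(defensive_metric(opr=20.0, elo=1700))  # 20 - 12 = 8.0
print(defensive_metric(opr=35.0, elo=1550))  # 35 - 3 = 32.0
```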

I think the best way to interpret this would be to ignore the top 16 offensive teams at an event. Teams like 330 and 973 are high on this defensive metric list, but that’s probably because their opponents are voluntarily giving up point earning potential on their own side of the field by consistently sending someone onto the other side of the field to play defense on 330 or 973.

Unsure if this metric has value or not, but my expectation would be that it’s better than DPR at least.

2 Likes

My expectation would be that noise is better than DPR. :wink:

I like this idea. Would it be reasonable, after calculating OPRs, to use them in the equations for another linear regression?
For each match, an equation could look like this:
CCD1 + CCD2 + CCD3 = (sum of opponent OPRs) - (opponent score)
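
A hedged sketch of what that second regression could look like, assuming OPRs are already computed and matches are stored as simple tuples. All team numbers and scores here are illustrative, and in practice you'd want far more match rows than unknowns:

```python
# Second-stage least squares: each alliance's summed defensive contribution
# (CCD) should explain how far its opponents fell short of their OPR sum.
import numpy as np

teams = [111, 222, 333, 444, 555, 666]
idx = {t: i for i, t in enumerate(teams)}
opr = {111: 25.0, 222: 18.0, 333: 12.0, 444: 22.0, 555: 15.0, 666: 10.0}

matches = [
    # (red alliance, blue alliance, red score, blue score)
    ([111, 222, 333], [444, 555, 666], 48, 51),
    ([111, 444, 666], [222, 333, 555], 55, 40),
]

rows, rhs = [], []
for red, blue, red_score, blue_score in matches:
    # CCD_d1 + CCD_d2 + CCD_d3 = sum(opponent OPRs) - opponent score,
    # once per alliance per match.
    for defenders, opponents, opp_score in ((blue, red, red_score), (red, blue, blue_score)):
        row = np.zeros(len(teams))
        for t in defenders:
            row[idx[t]] = 1.0
        rows.append(row)
        rhs.append(sum(opr[t] for t in opponents) - opp_score)

ccd, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
for t in teams:
    print(t, round(ccd[idx[t]], 1))
```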

The problem I see with DPR is that it doesn't take your opponents' offensive ability into account. It assumes that your opponent's score is purely a function of your alliance's defense.

1 Like

Scouting for defense is a lot bigger this year because there's strategic defense, like zone defense, and then there's the "ram the hell out of any robot" physical defense. The latter will end up costing you more in penalties in the long run, as we saw in Week 1.

One of the more effective tools I’ve found for evaluating defense robots is to look at the average cycles / cycle times of some of the top offensive robots at the event, and compare them to their raw total(s) in the match they played against the defense robot. Obviously only choose matches where the defensive robot being evaluated actively defended the offensive robot in question. Your sample size here is going to be small, even by FRC event standards. However, it still gives you a good idea of how effective they actually were, and these numbers complement the eye test nicely.
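
As a rough illustration of that comparison (all team numbers and cycle counts below are made up), the arithmetic is just "event average minus average while actively defended":

```python
# Compare a top offensive robot's event-average cycles to its cycles in
# matches where the defender in question (hypothetically team 9999) was
# confirmed to be actively defending it.
event_avg_cycles = {1678: 9.5, 254: 10.2}  # average cycles per match

# Cycles managed while actively defended by 9999 (made-up scouting data).
defended_cycles = {1678: [6, 7], 254: [8]}

for team, avg in event_avg_cycles.items():
    observed = defended_cycles.get(team, [])
    if not observed:
        continue
    suppressed = avg - sum(observed) / len(observed)
    print(f"{team}: ~{suppressed:.1f} cycles suppressed per match "
          f"(n={len(observed)}, so treat with caution)")
```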

8 Likes

That is interesting, especially at the end of quals or Day 1. I would foresee quite a bit of cross-referencing work, especially for a smaller scouting team working on paper sheets, as well as the sample size issue you've mentioned (defence bots will not be facing top offence every match, and variance and projections over time are difficult to visualize).

P.S. it was great seeing you rage on the habs at Durham #ripjumpkicks

1 Like

Here’s another idea. To avoid doing OPR calculations in Excel or Google Sheets, how about this:

Define “individual scoring contribution” as 2*(Hatches scored) + 3*(Cargo scored) + (Climb/drop bonuses) - (Penalties)

Let “Defensive Contribution” be 3*(median of the opponents’ individual scoring contributions) - (actual pre-penalty opponent score + defender penalties)

The idea is to produce something that is computationally simple, easy to track over many games, and that accounts for the defender's effect on all opponents (not just one robot's cycles), while eliminating the effect of alliance partner penalties on the individual defence contribution. One issue our scouting team identified: if an opponent robot breaks down (disconnection, mechanical failure, stuck, tipped, etc.), it may skew the score for that game. However, having an open "Notes" section on match scouting sheets may help account for outliers.
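
A direct sketch of those two definitions in Python, with made-up inputs to show the arithmetic:

```python
# Implements the two definitions above; the example numbers are illustrative.
import statistics

def individual_scoring_contribution(hatches, cargo, bonuses, penalties):
    """2*(hatches scored) + 3*(cargo scored) + (climb/drop bonuses) - (penalties)."""
    return 2 * hatches + 3 * cargo + bonuses - penalties

def defensive_contribution(opponent_contributions, opp_prepenalty_score, defender_penalties):
    """3*(median opponent contribution) - (pre-penalty opponent score + defender penalties)."""
    return 3 * statistics.median(opponent_contributions) - (
        opp_prepenalty_score + defender_penalties
    )

# Opponents' typical per-robot contributions vs what they actually
# managed in the match while being defended:
opp_typical = [
    individual_scoring_contribution(hatches=4, cargo=3, bonuses=6, penalties=0),  # 23
    individual_scoring_contribution(hatches=3, cargo=4, bonuses=3, penalties=3),  # 18
    individual_scoring_contribution(hatches=2, cargo=2, bonuses=3, penalties=0),  # 13
]
print(defensive_contribution(opp_typical, opp_prepenalty_score=38, defender_penalties=3))
# 3*18 - (38 + 3) = 13
```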

I think it’s critical to consider that the best defensive robot may be one that only played defense for one or two matches, since they may have also had a decent scoring mechanism.

Because of this, having scouters who understand what great defense looks like is really important. A ‘comments’ section in a scouting system can allow you to catch that robot that’s great at defense but didn’t do it every match.

A hard numerical system assumes that a team has the same objectives (such as playing defense) in each match, and for teams that know what they are walking into in a given match, this is simply not true.

1 Like

In Excel or Google Sheets, formulas could be used to quickly compute numerical scores for selected games (gated by some sort of boolean, like a checkbox). Coupling automatic after-the-fact computation with subjective ratings and comments might be the way to go?
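
For teams scripting this instead of (or alongside) a spreadsheet, here is a tiny sketch of the same checkbox-gating idea; the field names are made up:

```python
# Only average the defensive scores for matches the scouts flagged as
# defense games; hypothetical record layout.
rows = [
    {"match": 12, "played_defense": True,  "defensive_score": 11.0},
    {"match": 18, "played_defense": False, "defensive_score": None},
    {"match": 25, "played_defense": True,  "defensive_score": 7.5},
]

flagged = [r["defensive_score"] for r in rows if r["played_defense"]]
print(sum(flagged) / len(flagged) if flagged else "no defense games yet")
```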
