2019 IRI Zebra's Dart Data Analysis - A new OPR

Extending my previous analysis on defense here, I have used data collected at IRI by the Zebra’s Dart system to develop a new version of OPR, renamed ZPR (Zebra Power Rating). This model aims to take into account defense to present true point contribution estimates.

The current OPR model assumes all robots participate in scoring equally, but in reality most alliances only have two primary scorers. Using data on when teams are playing offense vs. defense, I developed a new model without this assumption.

Some Technical Details

The linear algebra behind ZPR is described here.

Essentially we have an input matrix populated with 1’s whenever a team plays a specific match and 0’s everywhere else. The 1 represents 100% offensive output that match. In my new model, I replace these discrete values with a continuous variable based on how long a team plays offense each match. If a team plays defense, it is given a negative value on the opposing alliance’s row for that match; this value is also continuous, based on the time spent playing defense. This adds some additional assumptions but helps quantify defensive impact.
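
As a toy illustration of that setup (the team list, times, and scores below are made up; the real inputs come from the Zebra data), each row is one alliance’s score, each column is a team, entries are offense fractions with defense showing up as a negative fraction on the opposing alliance’s row, and the system is solved by least squares just like standard OPR:

```python
import numpy as np

# Hypothetical mini-event: 3 teams, 2 alliance-score rows.
# Entry = fraction of the match that team spent on offense for that row's alliance;
# a defender contributes a NEGATIVE fraction to the OPPOSING alliance's row.
teams = ["A", "B", "C"]

design = np.array([
    [1.0, 1.0, -0.5],  # red score row: A and B on offense, C defended them half the match
    [0.0, 0.0,  0.5],  # blue score row: C spent its other half on offense
])
scores = np.array([40.0, 15.0])  # made-up alliance scores

# Same least-squares solve as OPR, just with continuous inputs
zpr, *_ = np.linalg.lstsq(design, scores, rcond=None)
```

With a full event’s worth of rows the system is overdetermined and the fit is unique; this toy version is underdetermined, so `lstsq` returns the minimum-norm solution.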

Since two robots mainly contribute to a match’s point total, most teams saw their ZPRs rise compared to their standard OPRs. These values now hopefully represent the point output of a team under ideal circumstances (no defense played against them, a full match spent on offense).

Standard OPR
  1. 2056 - 43.9
  2. 1690 - 42.1
  3. 2481 - 40.9
  4. 2910 - 39.5
  5. 3357 - 38.2
  6. 5406 - 38.0
  7. 195 - 37.1
  8. 364 - 37.0
  9. 2168 - 36.0
  10. 5205 - 35.5
  11. 930 - 34.9
  12. 2767 - 34.7
  13. 4362 - 34.2
  14. 1114 - 33.1
  15. 319 - 33.0
  16. 3604 - 32.8
  17. 1241 - 32.8
  18. 225 - 32.5
  19. 1807 - 32.4
  20. 3538 - 32.2

ZPR
  1. 2056 - 61.2
  2. 1114 - 52.1
  3. 1690 - 47.8
  4. 225 - 47.7
  5. 5406 - 47.3
  6. 195 - 46.5
  7. 3538 - 45.9
  8. 2767 - 45.8
  9. 2168 - 45.3
  10. 6443 - 44.7
  11. 48 - 44.4
  12. 319 - 43.7
  13. 2481 - 43.6
  14. 2910 - 43.5
  15. 3847 - 42.6
  16. 4362 - 42.3
  17. 930 - 42.2
  18. 2075 - 41.8
  19. 4776 - 41.5
  20. 4028 - 41.0

Keep in mind that with only 9 matches played per team, there is significant variability in all these results.

One effect of this change is defensive teams now gain ZPR for reducing opponent scoring. As a result, two primarily defensive teams crack the top 20 under the new metric. It was also interesting to see 1114 rise from 14th to 2nd at IRI.

I was able to achieve 79% match prediction accuracy with this data, which is comparable to the 76% reached with standard OPR. These models are similar in predictive power.
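
For the curious, an accuracy number like that can be computed with a simple winner-prediction check along these lines (the data layout and names here are illustrative, not the exact code):

```python
def prediction_accuracy(matches, rating):
    """Fraction of matches where the alliance with the higher summed rating won.

    matches: list of (red_teams, blue_teams, red_score, blue_score)
    rating:  team -> OPR or ZPR value
    """
    correct = 0
    for red, blue, red_score, blue_score in matches:
        predicted_red_win = sum(rating[t] for t in red) > sum(rating[t] for t in blue)
        correct += predicted_red_win == (red_score > blue_score)
    return correct / len(matches)
```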

I’m open to questions, suggestions, and feedback!

The code to calculate these stats is available on GitHub.

*edited with ZPR


2019 especially featured a lot of defense - not letting an opposing robot onto your side of the field to get in the way of a partner with higher scoring potential. How does this OPR setup handle that case?

I love the idea, love the implementation, and would love to see more!!


Rolling defensive value in kinda defeats the point of an offensive power rating. But this is a really good idea - I’d love to see what it looks like calculated separately into something like wOPR and (dare I say it) wDPR. I’ll probably try that when I get home from work today.

^w meaning weighted, I was borrowing some abbreviations from advanced baseball metrics. Just wanted to clarify that


I agree with your point - the OPR name doesn’t really fit the metric if it includes defense as well. I made another metric where defense is excluded but offense while being defended is distinguished from undefended offense. The results were fairly similar (slightly lower predictive power). One issue with these models in general is the sparsity of data: without many simplifying assumptions, there are more variables than data points.


How did you go about using new OPR to predict matches? Were all robots assumed to be playing offense? If not, how was the defender chosen, and were those predictions evaluated at all?

This is really cool.

Love it, the theoretical foundation sounds solid. I’ll look over the code later.


One thing I made sure to avoid was directly feeding in the actual times each robot played offense and defense when predicting match outcomes - this would defeat the point of making a prediction before the match occurs.

For each alliance, I looked at previous matches to see which team played the most defense. That team would play defense in the simulation, while the other two played offense. If all three were primarily offense robots, the team with the lowest OPR was sent to defense. I assumed each offense team played offense 100% of the time, and the defense robot spent 2/3 of the match across the field.
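
That role-assignment heuristic can be sketched roughly like this (the `history`/`zpr` inputs, the 0.5 threshold, and the function name are my own illustrative choices, not the actual code):

```python
DEFENSE_FRACTION = 2.0 / 3.0  # assumed time a defender spends across the field

def assign_roles(alliance, history, zpr, defense_threshold=0.5):
    """Pick one defender per alliance; everyone else plays full-time offense.

    history: team -> average fraction of past matches spent on defense
    zpr:     team -> rating
    """
    # The team with the most past defense defends...
    defender = max(alliance, key=lambda t: history.get(t, 0.0))
    # ...unless nobody is primarily a defender, then the lowest-rated team goes.
    if history.get(defender, 0.0) < defense_threshold:
        defender = min(alliance, key=lambda t: zpr[t])
    return {t: ("defense", DEFENSE_FRACTION) if t == defender else ("offense", 1.0)
            for t in alliance}
```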

I haven’t gotten around to evaluating the accuracy of these predictions, but the few I spot-checked matched up.


Maybe I misunderstood. If two robots are playing offense and one robot is playing defense, is the point total divvied up between the two offensive robots? Is there something I’m missing?

I’m not too experienced in predictive models, but I don’t think making assumptions about team roles and alliance strategy is a good idea if the goal is a generalized model. I’d think that telling the model to assume something about predominant strategies would cause it to be wildly inaccurate if an alliance doesn’t play as expected.

This is a really cool metric, and a really interesting way to analyze the available datasets.
I have a question and some comments on it:

  • When doing the match predictions, you only used the data that would have been available up until that point, correct?
  • If it’s also a defensive metric, I think calling it OPR is a misnomer. Maybe like, Zebra Power Rating or ZPR?
  • This analysis is really cool, and it makes me wonder how one would apply a similar methodology to different games: perhaps in 2017/18 you could check how much time a robot spends in certain choke points, but for games like 2016 defense was rare enough that I don’t know if this would be valuable.

I break the point total into cargo/hatch points and everything else (sandstorm start, HAB climb, etc). All three robots are treated equally when calculating contributions to sandstorm and HAB climb (can be read from TBA, but for now I use OPR). Only robots that played offense contribute to the cargo/hatch point section, weighted for how long they played offense that match. Robots playing defense earn contribution in the cargo/hatch category by denying the other alliance points.
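
A minimal sketch of that split, with hypothetical names and made-up numbers (the real component values come from TBA / the match data, and defensive credit is handled separately as a negative entry on the opposing alliance’s side):

```python
def alliance_contributions(teams, offense_fraction, cargo_hatch_points, other_points):
    """Share 'other' points (sandstorm, HAB climb) equally across the alliance;
    weight the cargo/hatch points by each robot's time on offense that match."""
    total = sum(offense_fraction[t] for t in teams)
    contributions = {}
    for t in teams:
        share = offense_fraction[t] / total if total else 0.0
        contributions[t] = other_points / len(teams) + cargo_hatch_points * share
    return contributions
```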


Once again I agree that these assumptions really take away from this metric’s value as a predictive model. I am more interested in the actual quantities calculated and their interpretations.

For example, 1114’s new estimate of 52.1 falls closer in line to how much I think they contribute to an alliance (with no defense), compared to their current OPR of 33.1.


Cool idea.

This should be better than CCWM = OPR - DPR, because it weights those components based on what the robot was actually doing.

It would be interesting to see a scatter plot of your new OPR vs. CCWM.
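
As a quick sketch of that comparison (the OPR/ZPR values for 2056, 1114, and 1690 are from the lists above; the DPR numbers are placeholders I made up):

```python
import numpy as np

opr = np.array([43.9, 33.1, 42.1])  # 2056, 1114, 1690 standard OPR (from the post)
dpr = np.array([20.0, 12.0, 18.0])  # placeholder DPR values, NOT real IRI data
zpr = np.array([61.2, 52.1, 47.8])  # the new ratings (from the post)

ccwm = opr - dpr                 # calculated contribution to winning margin
r = np.corrcoef(zpr, ccwm)[0, 1] # correlation; a scatter plot would show the same relationship
```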


ZPR sounds really cool! I’m going to have to steal that name :slight_smile:

Currently I use all the data to predict individual matches, as there isn’t enough data available otherwise. Deep Space and Zebra go well together but I’m not so sure about previous years. It will certainly be harder to quantify defense in other games.


I wonder if individual robots’ true contributions could be calculated from Zebra data in combination with scoring data. In a lot of matches, robots score on their own side of the field; if the Zebra data shows that’s the case, you can pull the scoring data for that side’s rocket/cargo ship and find a team’s actual contribution/number of completed cycles.


You’re going to have so much more clarity and much less confusion if you name your new metric something else. Considering your metric involves calculations determined in a completely different way than OPR, it would probably be best if you had your own term for it.

That said - a 3 percentage point increase in match prediction accuracy is within the realm of statistical noise. I’m not sure this is quite as big a breakthrough as it might seem yet.


You’re right - with only one event of data (and maybe some from a few years ago?), it’s going to be hard to prove that this data can be useful for match predictions. It needs to be used at more events, but hopefully no one is deterred from using it because they don’t see the usefulness yet.

Yeah looking back I definitely should not have kept the OPR name; I really like @kevin_leonard’s ZPR instead. I will note that OPR and ZPR are calculated using the same underlying math, just with different inputs to the matrices.

I don’t think this model is particularly better at predicting matches than standard OPR, and I’m sure it’s worse than some other more advanced projects. I am more interested in the interpretation of the values, and used the 76% and 79% stats as more of a sanity check.

I agree more data is always useful. We have two in-season events’ worth of data, but it is kind of fractured at the moment (uploaded to a few different places). I can have it re-uploaded this week to the same high-level folder the IRI data is in. We are also planning to track teams at two more events for this year’s game, so it might be fun to build a model off the existing data and test it on the new stuff.


HODOR - Hybrid Offensive/Defensive Objective Rating

You’re welcome.