On the TBA API, it provides the OPRs, DPRs, and CCWMs of each team for an event. Is there a “best metric” to determine the performance of a team in an event?
In addition, which metric is usually most inaccurate if there’s one?
This is one of those questions that both does and does not have a satisfactory answer.
All of these attempts to quantify a team's performance have some utility, but the simplest answer to your question is that most teams pay attention to OPR above the other stats. After all, OPR is the measure of the team's contribution to overall alliance points and thus is a pretty good stat to look at if you're trying to understand how well a team is likely to do in future matches.

DPR is a bit more problematic, since rating defense is done by looking at the opposing alliance's score, so that can depend on factors that are not actually based on a team's ability. Since the CCWM formula is basically CCWM = OPR - DPR, that's even more problematic. My team has never used it as an analytical tool at all.

So I would say that if you're looking to pull in a stat that would be useful for analysis, that's likely going to be OPR. Not a perfect measure, to be sure, and no substitute for actual human scouting that accounts for multiple factors, but better than the other two stats.
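For what it's worth, all three of the numbers the OP is asking about come from a single TBA API v3 endpoint. Here's a minimal sketch that pulls them and checks the CCWM = OPR - DPR relationship (the event key is just a placeholder, and it assumes a read key in a `TBA_AUTH_KEY` environment variable):

```python
import os
import requests

# Placeholder event key for illustration; substitute any real event.
EVENT_KEY = "2022casj"

resp = requests.get(
    f"https://www.thebluealliance.com/api/v3/event/{EVENT_KEY}/oprs",
    headers={"X-TBA-Auth-Key": os.environ["TBA_AUTH_KEY"]},
)
resp.raise_for_status()
data = resp.json()  # {"oprs": {...}, "dprs": {...}, "ccwms": {...}} keyed by "frcXXXX"

for team, opr in sorted(data["oprs"].items(), key=lambda kv: -kv[1]):
    dpr = data["dprs"][team]
    ccwm = data["ccwms"][team]
    # CCWM should match OPR - DPR up to rounding.
    print(f"{team}: OPR={opr:6.1f}  DPR={dpr:6.1f}  CCWM={ccwm:6.1f}  OPR-DPR={opr - dpr:6.1f}")
```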
For this year, a large number of events had the highest-OPR team on the winning alliance. I'm not sure of the exact stat, but from randomly picking a couple of events on TBA, all of them were won by the highest-OPR team.
The framing of this question is poor, because none of these are particularly good metrics of team performance or quality. There’s no substitute for watching matches, really - we don’t have fine-enough data to make the numerical stats anything more than a (very) rough guideline.
Edit: I think I recall someone trained a neural net on the match data (which is going to ultimately be OPR-like, but able to handle nonlinearity a lot better) and the best they ever did was in the 60-70% accuracy range for match predictions. That should give you a decent intuition for the best-case reliability of OPR (worst-case, it’s noise).
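If you want to sanity-check numbers like that yourself, here's a minimal sketch of the crudest OPR-based predictor: pick whichever alliance's OPRs sum higher, then score it against the actual results. (The event key is a placeholder, it assumes a TBA read key in a `TBA_AUTH_KEY` environment variable, and because it applies end-of-event OPRs to that same event's matches, the accuracy it reports is an optimistic, in-sample number.)

```python
import os
import requests

EVENT_KEY = "2022casj"  # placeholder event for illustration
HEADERS = {"X-TBA-Auth-Key": os.environ["TBA_AUTH_KEY"]}
BASE = "https://www.thebluealliance.com/api/v3"

oprs = requests.get(f"{BASE}/event/{EVENT_KEY}/oprs", headers=HEADERS).json()["oprs"]
matches = requests.get(f"{BASE}/event/{EVENT_KEY}/matches", headers=HEADERS).json()

correct = total = 0
for m in matches:
    if m["winning_alliance"] not in ("red", "blue"):
        continue  # skip ties and unplayed matches
    # Predict the winner as the alliance with the larger summed OPR.
    sums = {
        color: sum(oprs.get(t, 0.0) for t in m["alliances"][color]["team_keys"])
        for color in ("red", "blue")
    }
    prediction = max(sums, key=sums.get)
    correct += prediction == m["winning_alliance"]
    total += 1

print(f"OPR-sum predictor: {correct}/{max(total, 1)} = {correct / max(total, 1):.0%}")
```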
Thanks for the answer! So at least for now, with the current metrics, there isn't really a proper way to determine how well a team did defense-wise without scouting?
So essentially, there isn’t really an accepted metric that’s accurate for determining how well a team did?
Correct.
All that is about how to predict how well a team might do in the future.
If you want to know who did best, there's a metric for that: wins.
Yep. Defense is too resistant to quantification to be judged simply by a stat like DPR. The absolute best way to judge a team’s defensive capabilities is by observing them in matches. For instance, here in NC District for this last season, I would say that the #1 defensive team was the Wired Wizards (4534). Their DPR doesn’t necessarily look all that impressive for their events (and their OPR is only middling as well) but if you watch their matches you can see them doing some incredible defensive play against some of the best teams in the district. They did much better than their stats would suggest because of this and were very much positive assets to every alliance they were on. This is why, as Oblarg says, there’s no substitute for watching matches and using human scouting.
Scouting data.
A lot of commonly used statistics only really seem to account for overall alliance score, so a team with a horrible schedule would rank low on them. I really don't like these stats for that reason. If you want a better guideline, you can look at Caleb Sykes' database.
I completely disagree with you. In 2009 or 2010? Yeah, you are probably right. But recent games have flipped the switch.
FMS now reports more team-specific data than it used to. For example, all climbing data since 2018 has been a team-specific data point in FMS and on The Blue Alliance. Additionally, auto movement data points are also team-specific (again, since 2018). So when calculating OPR, some of us are swapping out the linear algebra parts for averages (at least for the components that have team-specific data). Statistically, then, OPR is becoming more and more representative of a "team's performance or quality."
Edit: I will admit "true OPR" (the linear algebra solution of finding AX=B, where B is the summation of each team's alliance scores) hasn't changed. It is the hybrid approach of finding component OPRs and summing them together that is more accurate and attractive.
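To make the distinction concrete, here's a minimal sketch of both ideas, assuming you've already collected one record per alliance per match with team keys, the alliance score, and a per-team endgame value (the data structure and point values below are made-up placeholders, not what FMS actually reports):

```python
import numpy as np

# Placeholder match records: each alliance in each match contributes one row.
matches = [
    {"teams": ["frc1", "frc2", "frc3"], "score": 62.0, "endgame": {"frc1": 15, "frc2": 6, "frc3": 0}},
    {"teams": ["frc4", "frc5", "frc6"], "score": 45.0, "endgame": {"frc4": 4, "frc5": 4, "frc6": 15}},
    # ... one entry per alliance per match ...
]

team_keys = sorted({t for m in matches for t in m["teams"]})
index = {t: i for i, t in enumerate(team_keys)}

# "True" OPR: solve the least-squares problem A x = B, where each row of A has a 1
# in the columns of the teams on that alliance and B is that alliance's score.
A = np.zeros((len(matches), len(team_keys)))
B = np.zeros(len(matches))
for row, m in enumerate(matches):
    for t in m["teams"]:
        A[row, index[t]] = 1.0
    B[row] = m["score"]
opr, *_ = np.linalg.lstsq(A, B, rcond=None)

# Hybrid component: where the data are already team-specific (e.g., endgame points),
# skip the linear algebra and just average each team's own values.
endgame_avg = {
    t: np.mean([m["endgame"][t] for m in matches if t in m["teams"]])
    for t in team_keys
}

for t in team_keys:
    print(f"{t}: OPR={opr[index[t]]:.1f}  avg endgame={endgame_avg[t]:.1f}")
```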
This isn’t OPR, and you shouldn’t call it OPR. OPR is canonically understood as a linear-least-squares solution of the match score matrix equation.
Scouted scoring data is, of course, extremely useful. It is still not a substitute for actually watching the matches. The data are still not nearly granular enough to rely on numerical stats alone for judging robot performance.
I did address that in my post:
The part I disagree with you on is the granular stats. FIRST has made some big strides in this area, to the point where the current era of FRC games can almost be judged on pure numerical data. That is why I pointed out the auto/endgame calculations. In my opinion, you really can get a good idea of robot performance from the past few years of OPR data.
But as always, I think everyone agrees on this:
It’s not clear what is meant by “a good idea” here. Good enough to get a rough idea of which robots are effective and which are poor? Good enough to determine picklist ranking?
The data have gotten better; they’re still not great, and I still wouldn’t rely on any single calculated “power ranking” for anything more than a very rough guideline about which teams are worth looking at more closely. Even in professional sports with way richer data, the numbers don’t tell the whole story.
On a note closer to the OP: if you're looking for the cleanest practical component OPR (COPR) data to integrate into your in-event scouting, I'd look at something like Rachel's excellent Google Sheet that does that: 2022 Google Sheets OPR Calculator
For post-event and such, there are a couple of sites out there that aggregate that.
Once you get to a DCMP or CMP event and TBA has a decent amount of data to use, The Blue Alliance predictions end up being pretty good, but it depends on the game. A lot of the matches it gets wrong are the ones it was already unsure about.
78% seems pretty good to me considering all the random things that can happen in an FRC match.
I wouldn’t go this far, but I will say that component OPR was super handy this year for 6619’s picklisting at CVR. Even though we were doomed to go out in quarters, I’m pretty happy with our picks - and they were based off of component OPR and a couple of quick match scouts. I think the effort needed to make a scouting system that’s appreciably better is significant, whereas in the olden days any scouting at all was immediately useful.