Re: Overview and Analysis of FIRST Stats
Quote:
If you look at the simulated tournaments at the end of the paper with 10x the number of matches of a normal tournament, you'll see that these simple stats will in fact converge to the actual defensive contribution (but again, only with 10x+ the normal number of matches). This is also noted in the "Limiting Behavior" discussion on page 11.

For example, the Dave measure for team "n" is the average of its opponents' scores, or (starting with the equations on page 6)

Dave_n = 1/# * Sum_{i,j,k,l,m} (Oi + Oj + Ok - Dl - Dm - Dn + Noise)

where # is the number of matches played by team "n", and the summation is over team "n"'s opponents i, j, and k in each match and team "n"'s alliance partners l and m. Since the Dn term appears in every summand, you can pull it out and get

Dave_n = -Dn + 1/# * Sum_{i,j,k,l,m} (Oi + Oj + Ok - Dl - Dm + Noise)

If # is small and the O values are much bigger than the D values, then Dave_n will still be dominated by the O values. But if # is large, this becomes

Dave_n -> -Dn + 3*average(O) - 2*average(D)

In the paper, I normalize average(D) to be zero, so this becomes

Dave_n -> -Dn + 3*average(O)

(the first equation on page 11 of the paper). DPR has similar limiting behavior.

Both of them are measures of how well your opponents do in matches, which is some combination of the offensive strength of your opponents and your defensive contribution. If you average over a LOT of matches, the strength of your opponents will eventually average out, and all you'll be left with is your defensive contribution. But in typical tournaments with typical numbers of matches, this will primarily be a measure of strength of opposition, with perhaps a very small component being your team's defensive ability (except perhaps for the most severe outliers, like a team that plays strong defense every single match).
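This limiting behavior is easy to check with a quick Monte Carlo sketch. Everything here is an illustrative assumption, not the paper's setup: a made-up 30-team league with uniform offensive strengths, zero-mean defensive contributions, and Gaussian score noise.

```python
import random
import statistics

random.seed(0)

# Hypothetical league: 30 teams with offensive strengths O and
# defensive contributions D, with average(D) normalized to zero
# as in the paper.
NUM_TEAMS = 30
O = [random.uniform(20, 60) for _ in range(NUM_TEAMS)]
D = [random.uniform(-5, 5) for _ in range(NUM_TEAMS)]
mean_D = statistics.mean(D)
D = [d - mean_D for d in D]          # normalize average(D) to zero

def dave(team, num_matches, noise_sd=10.0):
    """Average of the opposing alliance's scores over a team's matches."""
    total = 0.0
    others = [t for t in range(NUM_TEAMS) if t != team]
    for _ in range(num_matches):
        # opponents i, j, k; alliance partners l, m
        i, j, k, l, m = random.sample(others, 5)
        opp_score = (O[i] + O[j] + O[k]
                     - D[l] - D[m] - D[team]
                     + random.gauss(0, noise_sd))
        total += opp_score
    return total / num_matches

n = 0
limit = -D[n] + 3 * statistics.mean(O)   # first equation on page 11
print(f"Dave after 10 matches:     {dave(n, 10):.1f}")
print(f"Dave after 10000 matches:  {dave(n, 10_000):.1f}")
print(f"limit -Dn + 3*average(O):  {limit:.1f}")
```

With a handful of matches the result is dominated by whichever opponents happened to be drawn; only with an unrealistically large match count does it settle near -Dn + 3*average(O).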
The simultaneous OPR+DPR calculations described in the paper on pages 15-16 (and on page 19 if using MMSE instead of LS) do a better job of estimating defense, as they take the opponents' offensive ability into account when computing defense, as you suggest. That's why, for example, in the simulated tournament on page 27, Da (like the average of opponents' scores) and DPRb (like DPR) do so much worse at predicting the defensive contributions compared with sDPR. But I guess my only quibble with your post would be to say that opponents' average and DPR do very slightly measure defensive ability, though I agree that in practice, for normal FRC and FTC tournaments, this is much more a measurement of the strength of your opposition's offensive contributions than of your defensive contribution.

BTW, I recently released a new version of my app for FTC which computes MMSE-based CPR and OPR live for FTC tournaments. For exactly the reasons you mention, I refer to the difference between these as "Dif" on the stats screen rather than "Defense" as I had in earlier versions of the app. And on the screen that explains the stats, I write: "Dif measures a combination of the strength of the opposing alliances a team has faced and the strength of that team's defense. A positive Dif is good for a team, and signifies that the team's opponents have scored fewer points than average by that amount." (Note that in the app, Dif is normalized to be zero-mean across all teams, with a positive number indicating that a team's opponents score fewer points than average by that number.) In practice, I use the Dif stat more to see who was lucky by playing against weaker alliances than as a measure of defense.
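A minimal sketch of the "simultaneous" idea: model each alliance score as the sum of its members' offensive contributions minus the opposing members' defensive contributions, and solve for all of them jointly with a ridge-regularized (MMSE-flavored) least squares. The team count, random schedule, noise level, and regularization weight below are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 12-team event with made-up true contributions.
T = 12
O_true = rng.uniform(20, 60, T)
D_true = rng.uniform(-5, 5, T)
D_true -= D_true.mean()              # normalize average(D) to zero

rows, scores = [], []
for _ in range(60):                  # 60 random 3-vs-3 matches
    teams = rng.choice(T, 6, replace=False)
    red, blue = teams[:3], teams[3:]
    for us, them in ((red, blue), (blue, red)):
        row = np.zeros(2 * T)
        row[us] = 1.0                # our alliance's O columns
        row[T + them] = -1.0         # opposing alliance's D columns
        rows.append(row)
        scores.append(O_true[us].sum() - D_true[them].sum()
                      + rng.normal(0, 10))

A = np.array(rows)
y = np.array(scores)

# Without the ridge term, A^T A is singular: adding the same constant
# to every O and every D leaves all predicted scores unchanged. The
# regularization (the MMSE-style prior) pins that down.
lam = 1.0
x = np.linalg.solve(A.T @ A + lam * np.eye(2 * T), A.T @ y)
sOPR, sDPR = x[:T], x[T:]
```

Because each defensive estimate is solved jointly with every opponent's offensive estimate, a team facing strong offenses is no longer penalized the way it is by the raw opponents'-average or DPR measures.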
Re: Overview and Analysis of FIRST Stats
Quote:
I also like the MMSE adjustment. However, one important point: this is a Bayesian estimation method. The error terms around the initial "guess" are normally distributed (or log-normally, since scores are constrained at zero), and presumably so are the regression errors. But adding together two such distributions does not, in general, result in a distribution of the same family (the sum of two log-normals, in particular, is not log-normal). It's been 20 years since I looked at this issue, but there is software out there that correctly computes the resulting distribution.
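The non-closure point can be checked analytically with the standard log-normal moment formulas (the mu = 0, sigma = 1 parameters below are arbitrary). If the sum S of two i.i.d. log-normals were itself log-normal, the log-normal matched to S's mean and variance would also have to reproduce S's skewness; it does not.

```python
import math

mu, sigma = 0.0, 1.0
m1 = math.exp(mu + sigma**2 / 2)                 # E[X] for LogNormal(mu, sigma)
v1 = (math.exp(sigma**2) - 1) * m1**2            # Var[X]
skew1 = (math.exp(sigma**2) + 2) * math.sqrt(math.exp(sigma**2) - 1)

# Exact moments of S = X + Y for independent X, Y: means and variances
# add, and skewness of a sum of two i.i.d. variables scales by 1/sqrt(2).
mS, vS = 2 * m1, 2 * v1
skewS = skew1 / math.sqrt(2)

# Log-normal moment-matched to (mS, vS), and the skewness it implies.
s2 = math.log(1 + vS / mS**2)
skew_fit = (math.exp(s2) + 2) * math.sqrt(math.exp(s2) - 1)

print(f"skewness of S:                  {skewS:.3f}")
print(f"skewness of matched log-normal: {skew_fit:.3f}")
```

The two skewness values disagree, so S cannot be log-normal; a posterior built by naively assuming the combined distribution stays in the family will be miscalibrated in the tails.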
Re: Overview and Analysis of FIRST Stats
On a tangentially related note: I came across "Regularized Adjusted Plus/Minus" stats for NBA basketball players today. They are essentially the same stats we're using in FIRST! See, for example, this link.
In our terminology, they view each basketball possession as an "alliance" of 5 players versus a defensive "alliance" of the 5 players on the other team. They count each possession as a "match" and then compute the stats just like we do. Every time a new player is subbed in, the alliance changes. "Raw plus/minus" is just like the averages discussed in the paper. "Adjusted plus/minus" is like sOPR and sDPR (compare the first equation in the link above with the equations on page 6 of the paper). "Regularized adjusted plus/minus" is just like the MMSE version. This link shows ORPM, DRPM, and RPM for NBA players, which are essentially just like sOPR, sDPR, and sCPR.