Thread created automatically to discuss a document in CD-Media.
How Accurate is OPR? by: brennonbrimhall
By taking Team 20’s scouting data and comparing it to OPRs, we are able to quantify how accurate OPR really was for 2013. The results are very informative, and it is concluded that OPR should not be used to predict a team’s scoring output at an event in lieu of real scouting.
How did you validate the accuracy of your scouting data? Why did you choose to use percent error rather than absolute error?
A few more things that would be interesting to look at:
Do the results change if you only look at the top 24 or 30 teams at an event (the teams you would be considering when forming a pick list)?
Is there an OPR at which it becomes more accurate? Looking at chart 13, OPRs above 15 seem much better than those below 15 (obviously game dependent).
Can you quantify the percent chance that Team A is better than Team B, given a specific OPR difference? (E.g., Team A has an OPR that is 1 higher than Team B and has a 55% chance of being better than Team B, but Team C has an OPR that is 10 higher than Team B and has a 90% chance of being better than Team B.)
Does the percent error histogram still look normal if you discard the outliers and put more bins between -100% and +200%?
These are all good things to think about. Particularly, you may want to reconsider your use of percent error as opposed to absolute error. I’m estimating here, but it looks like absolute error would’ve been pretty consistent regardless of true scoring. Take a look at the correlation of those; I’d bet very little of the variation in absolute error is explained by variation in true scoring average.
If there were such a correlation, you would have noticed it in the residual plot for your OPR-True Average linear regression. I didn’t notice a residual plot in your pdf; they are essential for determining if your model is a good fit for the data.
At our events, we scouted collaboratively with other teams. Accuracy actually became a prime concern of ours, so we had every entry into our scouting database checked over by our head scout before entry (who had been watching each match as a whole immediately before validating). Additionally, we recorded matches for further verification. There were a few instances I can recall where errors were detected in entries, and we used our footage to re-scout that robot for that match.
This was an attempt to make the results of our paper more applicable to other events in the 2013 season. Archimedes, for instance, is hardly indicative of the average regional or district event, and yet it forms more than half of our data (51%, to be precise; out of 196 event/team combinations sampled, 100 were from Archimedes).
Let me get back to you on answering your questions backed up with relevant diagrams and calculations. I’d also like to look at posting our dataset too. However, here are my suspicions:
If you look at my response to Basel A's question below, sampling only the top 30 or so teams from each of our competitions should decrease the percent error, which would change the percent/tolerance table for the better. OPR should become more accurate.
I have no idea. Really intriguing question, though.
Since we do have the averages and standard deviations for each team, you could let Team A and B be represented by a variable with mu equal to their Average Score and sigma equal to the Standard Deviation in their score. If you subtract the two means and add the two variances, you should be able to find the number you’re looking for by integrating from -infinity to 0 and 0 to infinity. We used this method to predict match outcomes, but it was not very accurate. I don’t know of a simple way to extend that to OPR, though.
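The difference-of-normals method described above can be sketched in a few lines of Python. The team means and standard deviations below are hypothetical, used only to illustrate the calculation:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def prob_a_outscores_b(mu_a, sd_a, mu_b, sd_b):
    """P(A - B > 0) for independent normals A and B.
    The difference A - B has mean mu_a - mu_b and
    variance sd_a**2 + sd_b**2 (variances add under subtraction)."""
    diff_mu = mu_a - mu_b
    diff_sd = math.sqrt(sd_a**2 + sd_b**2)
    # Integrating the difference from 0 to infinity is 1 - CDF(0).
    return 1.0 - normal_cdf(0.0, diff_mu, diff_sd)

# Hypothetical teams: A averages 40 points (sd 10), B averages 30 (sd 10).
print(prob_a_outscores_b(40, 10, 30, 10))  # roughly 0.76
```

Note that integrating the difference distribution from 0 to infinity reduces to a single CDF evaluation, so no numerical integration is required.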
I’ll get back to you.
If you look at the scatterplot on slide 9 and compare the least squares line to the scatterplot, you'll see that absolute error decreases as Average Points Scored increases (I did have a residuals plot in here previously, but it looks like I accidentally removed it before I posted). The decrease in percent error as Average Points Scored increases is not simply an artifact of the denominator in the percent error calculation growing.
I did make a residual plot and included it previously; it must have been accidentally removed during my revisions. I'll make sure to restore it to back up my previous claim (see my answer to the previous question).
This thread combined with watching Searching for Bobby Fischer earlier today got me wondering how to apply the Elo Rating to FRC, and I think I figured it out. This is the resulting data from the 7 weeks of the 2013 season (Sorted by rating)
Not sure how useful the Elo ratings are, but there is some statistical significance with having teams like 469, 67, 1114 and 2056 near the top. I’m nowhere near good enough with statistics to determine if this is enough data to work with (my instinct says no), but it’s just another interesting way to look at wins/losses
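An Elo adaptation along these lines can be sketched as follows. The poster's exact scheme isn't specified, so treating an alliance's rating as the average of its members' ratings and applying the same update to every member is an assumption:

```python
def expected_score(r_a, r_b):
    """Standard Elo expectation for side A against side B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_match(ratings, red, blue, red_won, k=32):
    """One possible FRC adaptation (assumed, not the poster's exact method):
    the alliance rating is the mean of its members' ratings, and every
    member receives the full alliance update."""
    red_avg = sum(ratings[t] for t in red) / len(red)
    blue_avg = sum(ratings[t] for t in blue) / len(blue)
    delta = k * ((1.0 if red_won else 0.0) - expected_score(red_avg, blue_avg))
    for t in red:
        ratings[t] += delta
    for t in blue:
        ratings[t] -= delta

# Hypothetical team labels, everyone starting at 1500.
ratings = {t: 1500.0 for t in ["A1", "A2", "A3", "B1", "B2", "B3"]}
update_match(ratings, ["A1", "A2", "A3"], ["B1", "B2", "B3"], red_won=True)
print(ratings["A1"])  # 1516.0 (evenly matched alliances, red wins, gain k/2)
```

Since the same delta is added to winners and subtracted from losers, total rating across all teams is conserved, which is one sanity check on the implementation.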
I’ve added the dataset used for these calculations to the paper. Feel free to use it to do any more analysis that you’d like.
Now, to answer specific questions:
Yes. By selecting the top 30 teams (in terms of Average Score), the least squares line becomes
y = 1.0486x - 0.7729
with an R^2 of 87.32%. The model actually moves further away from the line we're expecting (y = x) when compared with the overall combined model, though it has a much higher R^2 value (an increase of 7.02%).
In terms of the percent error model, the new mu is 1.11% with a sigma of 45.10%. The new table for the probability a team will fall within a given percent error is as follows:
Here’s a table of the different averages and standard deviations for OPRs greater than or equal to the OPR listed. I see large increases in standard deviation from 10 to 20 (as you observed), and from 30 to 40.
Let Team A be represented by a Normal model with mu OPR, and sigma equal to the sigma in the table above multiplied by the team’s OPR. Follow the same pattern for Team B. Subtract the two normal models (subtract the two averages; A-B, and add the variances to find the new sigma). Integrate underneath this curve from 0 to infinity to find the probability A would score more than B.
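A minimal sketch of that procedure, with the relative standard deviation as a parameter (the 0.45 used below is illustrative only; the actual value would be read from the table above for the relevant OPR band):

```python
import math

def prob_a_beats_b(opr_a, opr_b, rel_sigma):
    """P(A > B) where each team's score is modeled as
    Normal(mu=OPR, sigma=rel_sigma * OPR), as described above.
    Integrating the difference A - B from 0 to infinity reduces to a
    single normal-CDF evaluation at z = mu_diff / sigma_diff."""
    mu_diff = opr_a - opr_b
    sigma_diff = math.sqrt((rel_sigma * opr_a) ** 2 + (rel_sigma * opr_b) ** 2)
    return 0.5 * (1.0 + math.erf(mu_diff / (sigma_diff * math.sqrt(2.0))))

# Hypothetical OPRs of 40 vs. 30 with an assumed relative sigma of 0.45.
print(prob_a_beats_b(40, 30, 0.45))  # about 0.67
```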
We're not trying to model the data we have, which is why I removed it; this wasn't about fitting a regression to describe the data. Instead, it was about checking one of the properties of OPR: ideally, it should correlate perfectly with True Average Score, with an intercept of 0 and a slope of 1.
That being said, here’s links to the residual plots:
I removed team numbers and events intentionally for team anonymity/confidentiality concerns. None of the teams used in the study granted any sort of permission to have statistics published about them on Chief Delphi, and it's not like True Average Score can be derived from other public sources of information. It was kind of like publishing the FRC equivalent of how much someone weighs without prior permission. It just didn't feel right.