paper: How Accurate is OPR?
Thread created automatically to discuss a document in CD-Media.
How Accurate is OPR? by brennonbrimhall
Re: paper: How Accurate is OPR?
How did you validate the accuracy of your scouting data? Why did you choose to use percent error rather than absolute error?
A few more things that would be interesting to look at:

- Do the results change if you only look at the top 24 or 30 teams at an event (the teams you would be considering when forming a pick list)?
- Is there an OPR at which it becomes more accurate? Looking at chart 13, OPRs above 15 seem much better than those below 15 (obviously game dependent).
- Can you quantify the percent chance that Team A is better than Team B, given a specific OPR difference? (e.g., Team A has an OPR that is 1 higher than Team B and has a 55% chance of being better, while Team C has an OPR that is 10 higher than Team B and has a 90% chance of being better.) A sketch of one way to compute this is below.
- Does the percent error histogram still look normal if you discard the outliers and put more bins between -100% and +200%?
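On that head-to-head question: if each team's OPR is treated as its true strength plus independent normal noise, the chance that one team is actually better reduces to a single normal CDF. A minimal sketch, assuming a hypothetical noise spread SIGMA (a placeholder, not a value from the paper):

Code:
#!/usr/bin/env python3
# Sketch: chance that Team A is truly better than Team B, treating
# each OPR as true strength plus independent N(0, SIGMA^2) noise.
# SIGMA is a hypothetical placeholder, not a value from the paper.
from math import erf, sqrt

SIGMA = 10.0  # assumed OPR noise spread, in points

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_a_better(opr_a, opr_b, sigma=SIGMA):
    # The difference of two independent errors is N(0, 2*sigma^2),
    # so P(true_A > true_B) is one CDF evaluation.
    return normal_cdf((opr_a - opr_b) / (sigma * sqrt(2.0)))

print(p_a_better(21.0, 20.0))  # small OPR gap -> near 50%
print(p_a_better(30.0, 20.0))  # 10-point gap -> well above 50%

With the paper's actual error spread plugged in for SIGMA, this would answer the 1-point-versus-10-point example directly.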
Re: paper: How Accurate is OPR?
Quote:
If there were such a correlation, you would have noticed it in the residual plot for your OPR-True Average linear regression. I didn't notice a residual plot in your PDF; they are essential for determining whether your model is a good fit for the data.
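For anyone following along, here is a minimal sketch of the check being asked for, using synthetic stand-in data (the slope, intercept, and noise level are invented; the paper's dataset would go in their place). A good fit scatters its residuals evenly around zero with no visible pattern:

Code:
#!/usr/bin/env python3
# Residual plot sketch for an OPR -> True Average Score regression.
# Synthetic data only; swap in the real per-team values to run the
# actual diagnostic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
opr = rng.uniform(0.0, 60.0, size=200)                   # stand-in OPRs
true_avg = 1.05 * opr - 0.8 + rng.normal(0.0, 8.0, 200)  # stand-in scores

slope, intercept = np.polyfit(opr, true_avg, 1)  # least-squares line
residuals = true_avg - (slope * opr + intercept)

plt.scatter(opr, residuals, s=8)
plt.axhline(0.0, color="gray", linewidth=1)
plt.xlabel("OPR")
plt.ylabel("Residual (actual - fitted)")
plt.title("Residuals vs. OPR")
plt.show()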
Re: paper: How Accurate is OPR?
Merry Christmas!
Re: paper: How Accurate is OPR?
This thread, combined with watching Searching for Bobby Fischer earlier today, got me wondering how to apply the Elo rating system to FRC, and I think I figured it out. This is the resulting data from the 7 weeks of the 2013 season (sorted by rating; a sketch of the update rule follows the data):
Code:
4650    982.089794296798
...

Sorted by Team #:

Code:
1    1222.5778695651
...

Code:
#!/usr/bin/perl
...

(Edit: Had a mistake in my calculation.)
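The Perl script above was cut off by the archive, so here is a minimal sketch, in Python, of one way an Elo update can be applied to three-team alliances. The sum-of-ratings alliance strength, the K-factor, and the even three-way split of each update are assumptions, not necessarily what the original script did; the ~1000 starting rating is inferred from the numbers above.

Code:
#!/usr/bin/env python3
# Elo update sketch for FRC: alliance rating is the sum of its three
# members' ratings; K and the 400-point logistic scale are the usual
# chess defaults. Splitting the update evenly across an alliance is
# an assumption.
K = 32.0
ratings = {}  # team number -> rating

def rating(team):
    return ratings.setdefault(team, 1000.0)

def update(red, blue, red_result):
    """Apply one match; red_result is 1.0 win, 0.5 tie, 0.0 loss."""
    r_red = sum(rating(t) for t in red)
    r_blue = sum(rating(t) for t in blue)
    expected_red = 1.0 / (1.0 + 10.0 ** ((r_blue - r_red) / 400.0))
    delta = K * (red_result - expected_red)
    for t in red:
        ratings[t] += delta / 3.0
    for t in blue:
        ratings[t] -= delta / 3.0

update([111, 222, 333], [444, 555, 666], 1.0)  # hypothetical match
print(sorted(ratings.items(), key=lambda kv: -kv[1]))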
Re: paper: How Accurate is OPR?
I've added the dataset used for these calculations to the paper. Feel free to use it to do any more analysis that you'd like.
Now, to answer specific questions:

Quote:

Code:
y = 1.0486x - 0.7729

In terms of the percent error model, the new mu is 1.11% with a sigma of 45.10%. The new table for the probability a team will fall within a given percent error is as follows:

Code:
10%    17.543%
...
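As a sanity check on where rows like that come from: under the stated model, percent error is normal with mu = 1.11 and sigma = 45.10, so the chance of landing within ±x% is the normal mass between -x and +x. A minimal sketch that reproduces the 10% row:

Code:
#!/usr/bin/env python3
# Reproduces the percent-error table's 10% row from the quoted
# normal model (mu = 1.11%, sigma = 45.10%).
from math import erf, sqrt

MU, SIGMA = 1.11, 45.10

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_within(x):
    # P(-x <= percent error <= +x) under N(MU, SIGMA^2)
    return normal_cdf((x - MU) / SIGMA) - normal_cdf((-x - MU) / SIGMA)

print(f"{p_within(10.0):.3%}")  # ~17.54%, matching the table above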
Quote:

Code:
OPR    mu    sigma
...

Quote:

Code:
OPR    mu    sigma
...

This is a method to approximate a prediction strategy I detailed in this post: http://www.chiefdelphi.com/forums/sh...23&postcount=1

Quote:
Quote:
That being said, here are links to the residual plots:

As a function of OPR: https://drive.google.com/file/d/0B4t...it?usp=sharing
As a function of True Average Score: https://drive.google.com/file/d/0B4t...it?usp=sharing

Quote:
It was more accurate than the method we used, where each team was represented by a normal model with mu equal to its true mean and sigma equal to its true standard deviation, and we integrated underneath to find the chance for each alliance to win. See the thread here: http://www.chiefdelphi.com/forums/sh...23&postcount=1
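For reference, under an independence assumption that integration collapses to a single normal CDF: each alliance's score is the sum of three team normals, so the difference between the two alliance scores is itself normal. A minimal sketch with hypothetical mus and sigmas:

Code:
#!/usr/bin/env python3
# Normal-model match prediction sketch: each team is N(mu, sigma^2),
# alliances sum their three teams, and P(red > blue) is one CDF
# evaluation of the (normal) score difference. Example numbers are
# hypothetical.
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_red_wins(red, blue):
    """red/blue: lists of per-team (mu, sigma) pairs."""
    mu = sum(m for m, _ in red) - sum(m for m, _ in blue)
    var = sum(s * s for _, s in red) + sum(s * s for _, s in blue)
    return normal_cdf(mu / sqrt(var))

print(p_red_wins([(30, 10), (20, 8), (10, 5)],
                 [(25, 9), (18, 7), (12, 6)]))  # hypothetical alliances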
Re: paper: How Accurate is OPR?
Quote:
I removed team numbers and events intentionally out of team anonymity/confidentiality concerns. None of the teams in the study granted any sort of permission to have statistics published about them on Chief Delphi, and it's not as if True Average Score can be derived from other, public sources of information. It would have been like publishing the FRC equivalent of someone's weight without their permission. It just didn't feel right.