#28, 15-06-2015, 15:58
AGPapa
Registered User
AKA: Antonio Papa
FRC #5895
Team Role: Mentor
 
Join Date: Mar 2012
Rookie Year: 2011
Location: Robbinsville, NJ
Posts: 323
Re: Overview and Analysis of FIRST Stats

Quote:
Originally Posted by wgardner View Post
Thanks! That clears things up a bit. And these are all still with Var(N)/Var(O)=2.5 like before? Can you post a link to Ed Law's spreadsheet?
Yes, still with VarN/VarO = 2.5. This was chosen fairly arbitrarily; better numbers probably exist.

Here's Ed Law's database. He has the LS OPRs for every regional/district since 2009. He also has the "World OPRs" for each team in each year in the Worldrank tab.

Quote:

Compare your chart with the bottom left chart of this (from a few posts ago, and from a simulated 2014 casa tournament). The blue lines at the bottom of that chart are if the OPRs are known to within a standard deviation of 0.3 times the overall standard deviation of all of the OPRs, and the red/pink/yellow lines are if the OPRs are known to within 1.0 times the standard deviation of all of the OPRs. Your bottom chart looks like it could be somewhere between those two. (Also note that my chart is percent of the variance of the error in the match result prediction, whereas yours is the absolute (not percent), standard deviation (not variance) of the error in the match result prediction, so they're just a bit different that way. As the stdev is the square root of the variance, I would expect that the stdev plot to be flatter than the variance plot, as it seems to be.)
Thanks, that makes a lot of sense.

Quote:

Could you compute the standard deviation of the error in your "OPR prediction" (i.e., compute the OPR from worlds minus the OPR from the previous regional tournaments, and take the standard deviation of the result)? And then compare that to the standard deviation of all of the OPRs from all of the teams at Worlds (i.e., just compute all of the OPRs from Worlds for all of the teams and compute the standard deviation of those numbers). It would be interesting to know the ratio of those two numbers and how it compares to the plots in my simulated chart where the ratios are 1 and 0.3 respectively.

And I guess while I'm asking: what was the standard deviation of the match scores themselves?
I'm having some difficulty understanding what you're asking for here, but here's what I think you're looking for.

std dev of Previous OPR (LS) - Champs OPR (LS): 18.9530
std dev of Champs OPR (LS): 25.1509
std dev of match scores: 57.6501
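The ratio wgardner asked for (std dev of the prediction error over std dev of the Champs OPRs) can be read straight off these two numbers; a quick sketch:

```python
# Standard deviations reported in this post (LS OPRs)
std_err = 18.9530  # std dev of (Previous OPR - Champs OPR)
std_opr = 25.1509  # std dev of Champs OPRs

ratio = std_err / std_opr
print(round(ratio, 3))  # ≈ 0.754
```

So the previous-event OPRs pin down the Champs OPRs to roughly 0.75 of the OPR spread, which sits between the 0.3 and 1.0 curves in wgardner's simulated chart.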


I want to point out that the previous OPR is not an unbiased estimator of the Champs OPR: Champs OPRs are higher by 2.6 points on average. (In my MMSE calculations I added a constant to all of the priors to try to combat this.)
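A minimal sketch of that prior adjustment, using the 2.6-point offset reported above (the helper name is just for illustration):

```python
# Champs OPRs run ~2.6 points above previous-event OPRs (figure from this post).
CHAMPS_OFFSET = 2.6

def debias_prior(previous_opr):
    """Shift a previous-event OPR so it is roughly unbiased for Champs."""
    return previous_opr + CHAMPS_OFFSET

print(debias_prior(20.0))  # → 22.6
```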


EDIT: I think we can use this to find a good value for VarN/VarO.

Var(Match Score) = Var(O) + Var(O) + Var(O) + Var(N) = 3·Var(O) + Var(N)

(each of the three robots on an alliance contributes one Var(O) term). Assuming that Var(Champs OPR) = Var(O), we can solve the above equation and get Var(N) = 1425.8, so Var(N)/Var(O) is 2.25. This is only useful after the fact, but it confirms that choosing VarN/VarO to be 2.5 wasn't that far off.
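The back-of-the-envelope solve, spelled out with the standard deviations reported earlier in this post (it assumes Var(Champs OPR) = Var(O)):

```python
# Solve Var(Match Score) = 3*Var(O) + Var(N) for Var(N).
std_match = 57.6501  # std dev of match scores
std_opr = 25.1509    # std dev of Champs OPRs, taken as sqrt(Var(O))

var_match = std_match ** 2       # ≈ 3323.5
var_o = std_opr ** 2             # ≈ 632.6

var_n = var_match - 3 * var_o    # three robots per alliance
ratio = var_n / var_o

print(round(var_n, 1), round(ratio, 2))  # ≈ 1425.8, 2.25
```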
__________________
Team 2590 Student [2011-2014]
Team 5684 Mentor [2015]
Team 5895 Mentor [2016-]

Last edited by AGPapa : 15-06-2015 at 16:05.