Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Scouting (http://www.chiefdelphi.com/forums/forumdisplay.php?f=36)
-   -   An improvement to OPR (http://www.chiefdelphi.com/forums/showthread.php?t=116791)

Citrus Dad 19-05-2013 14:56

Re: An improvement to OPR
 
Note: I was reviewing the 2834 database and I believe the Championship OPRs are in error. The sums of the individual components often do not add up to the Total. (3824's in Curie is off by 32.) A quick scan of the regionals finds no deviations whatsoever in some cases, and a maximum deviation under 2 points in others. I suggest going back and recomputing the OPRs.

efoote868 19-05-2013 14:58

Re: An improvement to OPR
 
A possible explanation is that Ed took into account surrogate matches.

Ed Law 20-05-2013 01:36

Re: An improvement to OPR
 
That is exactly the problem. OPR is calculated using all matches, including surrogate matches. I would still want to calculate OPR this way: more data points are better, even if the match does not count for that team.

Unfortunately, the team standings from the FIRST website only add up the auto, teleop, and climb points of non-surrogate matches. This means that when I solve A x = b, the matrix A contains the surrogate matches while the vector b does not.

My proposal is to scale the values of b for the teams that have surrogate matches before solving A x = b. Does anybody have any other suggestion?
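For concreteness, here is a minimal Python sketch of one reading of this proposal. The function name and the interpretation of b as per-team category totals are illustrative assumptions, not Ed's actual code.

```python
# Hypothetical sketch of the proposed fix: a team that played one match
# as a surrogate has a Team Standings total that covers one fewer match
# than the alliance matrix A assumes, so scale the total up
# proportionally before solving A x = b.

def scale_standings(total, matches_in_a, matches_counted):
    """Scale a team's standings total to span all matches in A.

    total           -- category total from Team Standings (surrogate match excluded)
    matches_in_a    -- matches the team appears in within the matrix A
    matches_counted -- matches actually summed into the standings total
    """
    return total * matches_in_a / matches_counted

# e.g. a team with 9 matches in A, one of them played as a surrogate:
corrected = scale_standings(180.0, 9, 8)  # 180 * 9/8 = 202.5
```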

Ed Law 20-05-2013 01:42

Re: An improvement to OPR
 
Quote:

Originally Posted by Citrus Dad (Post 1275934)
I suggest going back and recomputing the OPRs.

Thank you for pointing out the issue with sum of individual categoty OPR do not add up to total OPR. I don't know exactly what you mean. You made it sound like I do the calculations by hand. I can ask the computer to run it 100 times and I can guarantee you that I will get the same answer every time. :)

Ether 20-05-2013 11:56

Re: An improvement to OPR
 
Quote:

Originally Posted by Ed Law (Post 1276030)
My proposal is to scale the value of b for the teams that have the surrogate matches before solving A x = b. Does anybody have any other suggestion?

As a short-term solution that sounds like a reasonable approach to try to make the best out of the data that is available.

Going forward, perhaps someone who has Frank's ear and is interested in statistics could make an appeal to him to resolve the Twitter data issues. At the very least, store the data locally (at the event) and don't delete it until it has been archived at FIRST. Then make the data available to the community.



Citrus Dad 20-05-2013 13:45

Re: An improvement to OPR
 
Quote:

Originally Posted by efoote868 (Post 1275937)
A possible explanation is that Ed took into account surrogate matches.

I believe the method relies on the official score database, not on match-by-match reported scores. The surrogates don't show up there. He would have to use two different data sets to get different answers.

Ether 20-05-2013 14:25

Re: An improvement to OPR
 
Quote:

Originally Posted by Citrus Dad (Post 1276145)
I believe the method relies on the official score database, not on match by match reported scores. The surrogates don't show up there. He would have to use 2 different data sets to get different answers.

There are two different score datasets at USFIRST: "Match Results" and "Team Standings".

"Match Results" is necessary to construct the alliances matrix and obtain the total match score. It contains the surrogate matches.

"Team Standings" is necessary to obtain the Auto, TeleOp, and Climb alliance scoring. Problem is, the totals shown there do not include the scores for surrogate teams in matches where said teams played as surrogates.

Ed's proposed work-around to scale the "Team Standings" totals for teams which played as surrogates seems like a reasonable one. Do you have a different suggestion?



MikeE 20-05-2013 22:59

Re: An improvement to OPR
 
Quote:

Originally Posted by Ether (Post 1276151)
There are two different score datasets at USFIRST: "Match Results" and "Team Standings".

"Match Results" is necessary to construct the alliances matrix and obtain the total match score. It contains the surrogate matches.

"Team Standings" is necessary to obtain the Auto, TeleOp, and Climb alliance scoring. Problem is, the totals shown there do not include the scores for surrogate teams in matches where said teams played as surrogates.

Ed's proposed work-around to scale the "Team Standings" totals for teams which played as surrogates seems like a reasonable one. Do you have a different suggestion?



My preferred solution is for FIRST to move to an all district model with 12 matches per event and therefore no more surrogates :)

Until then...

If we have complete Twitter data for an event then we get the component scores for every match so we don't have an issue.

But to solve the surrogate problem we just need the component scores from the specific surrogate matches. There are at most 3 of these in any competition, typically just 1 or 2 consecutive matches in round 3.
Since there is at most a single surrogate team per alliance, we just need to add the Twitter component scores to that team's "Team Standings" score to get the corrected total scores for the surrogate team.
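A minimal Python sketch of this correction; the dictionary layout, category names, and team number are invented for illustration, not FIRST's actual data format.

```python
# Hypothetical sketch: add the Twitter component scores from each known
# surrogate match back onto the surrogate team's Team Standings totals.

def correct_standings(standings, surrogate_matches):
    """standings: {team: {"auto": pts, "teleop": pts, "climb": pts}}
    surrogate_matches: list of (team, component-score dict) pairs."""
    for team, components in surrogate_matches:
        for category, pts in components.items():
            standings[team][category] += pts
    return standings

# Toy example: team 254's standings plus one surrogate match from Twitter.
fixed = correct_standings(
    {254: {"auto": 120, "teleop": 300, "climb": 80}},
    [(254, {"auto": 18, "teleop": 40, "climb": 10})],
)
```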

MikeE 20-05-2013 23:30

Re: An improvement to OPR
 
Quote:

Originally Posted by Citrus Dad (Post 1274148)
I haven't seen the SEs posted with the OPR parameters, but I can tell you that the SEs are likely to be VERY large for so few observations--only 8 per team at the Champs, at most 12 in any of the regionals.

Small point: The Pinetree regional in Maine had 13 matches per team in qualifications; one of the reasons it was the Best Regional* of the 2013 season.

Bigger point: I've been playing around with maximum likelihood estimate models as an alternative (really an extension) to OPR, and these do provide both a mean and variance of team contribution. It's not quite ready to write up as a white paper but it's giving some interesting early results from Monte Carlo event simulations.

One more point: I'm a fan of the binary matrix approach to solving the regression described by Ryan since it's easy to add in additional match-by-match features such as average (or per team) score gradient during an event.

* from my very small sample of 4 events

Ether 21-05-2013 00:33

Re: An improvement to OPR
 
Quote:

My preferred solution is for FIRST to move to an all district model with 12 matches per event and therefore no more surrogates
The criterion for "no surrogates" is M*6/T = N, where

M is the number of qual matches
T is the number of teams
N is a whole number (the number of matches played by each team)

At CMP, T=100 and M=134, so N was not a whole number; thus there were surrogates.

If instead T=96 and M=128, N would be a whole number (namely 8) and there would be no surrogates.
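The criterion above can be checked mechanically; a small Python sketch (the function name is mine):

```python
# The "no surrogates" criterion: each of M qual matches uses 6 team
# slots, so each team plays a whole number of matches only when M*6/T
# is a whole number.

def matches_per_team(m, t):
    """Return N = M*6/T when it is a whole number, else None (surrogates needed)."""
    slots = m * 6
    return slots // t if slots % t == 0 else None

print(matches_per_team(134, 100))  # CMP 2013: None, so surrogates were needed
print(matches_per_team(128, 96))   # 8, so no surrogates
```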


Quote:

Since there is a single surrogate team in an alliance we just need to add the Twitter component scores to their "Team Standing" score to get the corrected total scores for that surrogate team.
Here's the 2013 season Twitter data for elim and qual matches. It has Archi, Curie, Galileo, & Newton. The usual Twitter data caveats apply.


Quote:

I've been playing around with maximum likelihood estimate models as an alternative (really an extension) to OPR...
What do you mean by "maximum likelihood estimate models" in this context?


Quote:

One more point: I'm a fan of the binary matrix approach...
In this context, I'm assuming "the binary matrix" refers to the 2MxN design matrix [A] of the overdetermined system.

Do you then use QR factorization directly on the binary matrix to obtain the solution, or do you form the normal equations and use Cholesky?



MikeE 22-05-2013 15:48

Re: An improvement to OPR
 
Quote:

Originally Posted by Ether (Post 1276340)
The criterion for "no surrogates" is M*6/T = N, where

M is the number of qual matches
T is the number of teams
N is a whole number (the number of matches played by each team)

At CMP, T=100 and M=134, so N was not a whole number; thus there were surrogates.

If instead T=96 and M=128, N would be a whole number (namely 8) and there would be no surrogates.

I prefer to think of the surrogate issue in terms of (T*N) mod 6, i.e. how many team slots are left over if everyone plays a certain number of rounds.

If there are any teams left over, then we need at least one surrogate match. The scheduling software also adds the reasonable constraint that there can be only one surrogate team per alliance. Putting this all together, there are between 0 and 3 matches with surrogates in the qualification rounds.

Clearly if either T or N are multiples of 6 then the remainder is zero so no surrogates.

Choosing N=12 guarantees no surrogates however many teams are at the event, gives plenty of matches for each team and also has the nice property that M=2*T so it's easy to estimate the schedule impact. I'm sure the designers of FiM and MAR settled on 12 matches per event through similar reasoning.
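MikeE's framing, as a short sketch (function name mine; the range of T checked is an arbitrary illustration):

```python
# Leftover team slots if every one of T teams plays N rounds; any
# nonzero remainder forces surrogate appearances.

def leftover_slots(t, n):
    return (t * n) % 6

# N = 12 leaves no remainder for any T, since T*12 is always a multiple of 6:
assert all(leftover_slots(t, 12) == 0 for t in range(20, 101))

print(leftover_slots(100, 8))  # a CMP-sized field with 8 rounds: 800 % 6 = 2
```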

Quote:

Originally Posted by Ether (Post 1276340)
What do you mean by "maximum likelihood estimate models" in this context?

(I'll try to keep this accessible to a wider audience but we can go into further details later.)

OPR estimates a single parameter model for each team, i.e. what is the optimal solution if we model a team's contribution to each match as a constant. We can also use regression (or other optimization techniques) to build richer models. For example we can model each team with two parameters: a constant contribution per match similar to OPR, plus a term which models a team's improvement per round.
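As an illustration of such a two-parameter model (not MikeE's actual code; the toy matches and scores are invented), the design matrix simply gains a second column per team whose entry is the round number:

```python
import numpy as np

# Two columns per team: a constant contribution (as in OPR) plus a
# term that grows with the round, modeling per-round improvement.
# Four toy matches among four teams; the scores are made up.

teams = [1, 2, 3, 4]
matches = [  # (alliance, round, alliance score)
    ((1, 2, 3), 1, 50.0),
    ((2, 3, 4), 2, 55.0),
    ((1, 3, 4), 3, 60.0),
    ((1, 2, 4), 4, 58.0),
]

A = np.zeros((len(matches), 2 * len(teams)))
b = np.zeros(len(matches))
for row, (alliance, rnd, score) in enumerate(matches):
    for t in alliance:
        col = teams.index(t)
        A[row, col] = 1.0               # constant term
        A[row, len(teams) + col] = rnd  # improvement-per-round term
    b[row] = score

x, *_ = np.linalg.lstsq(A, b, rcond=None)
baseline, trend = x[:len(teams)], x[len(teams):]
```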

But these types of models are deterministic. In other words, if we use the model to predict the outcome of a hypothetical match, we will always get the same answer. That means we can't use a class of useful simulation methods to get deeper insight into how a collection of matches might play out.

Here's an alternative approach.
Instead of modeling a team's score as a constant (or polynomial function of known features), we treat each team's score as if it is generated from an underlying statistical distribution. Now the problem becomes one of estimating (or assuming) the type of distribution and also estimating the parameters of that distribution for each team.

With OPR we model team X as scoring say 15.3 points every match, so our prediction for a hypothetical match is always 15.3 points.
With a statistical model we would model team X as something like 15.3 +/- 6.3 points. To predict the score for a hypothetical match we choose randomly from the appropriate distribution, and this will obviously be different each time we "play" the hypothetical match.

So with OPR if we "play" a hypothetical match 100 times where OPR(red) > OPR(blue), the final score would be the same every time so red will always win. But if we use a statistical model then red should still win most matches but blue will also win some of the time. Now we have an estimate of the probability of victory for red, which is potentially more useful information than "red wins", and can be used in further reasoning.
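A minimal Monte Carlo sketch of this idea; the per-team means and standard deviations below are invented for illustration, and the Gaussian assumption matches the one discussed later in the thread.

```python
import numpy as np

# Model each team as Normal(mean, sd) and estimate red's win
# probability by repeatedly "playing" the match.

rng = np.random.default_rng(0)

def win_probability(red, blue, trials=100_000):
    """red/blue: lists of (mean, sd) per team; returns P(red outscores blue)."""
    red_scores = sum(rng.normal(m, s, trials) for m, s in red)
    blue_scores = sum(rng.normal(m, s, trials) for m, s in blue)
    return np.mean(red_scores > blue_scores)

p = win_probability([(15.3, 6.3), (22.0, 4.0), (9.5, 5.0)],
                    [(14.0, 3.0), (18.0, 7.0), (10.0, 6.0)])
# red's mean total (46.8) edges blue's (42.0), so p lands comfortably above 0.5
```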

MLE is just an approach for getting the parameters from match data. For simplicity I assume a Gaussian distribution, use linear regression as an initial estimate of each team's mean and linear regression on the squared residuals as an initial estimate of each team's variance.
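A sketch of that initialization step on synthetic data. The data generation and the use of numpy's least-squares solver are my assumptions (MikeE works in Octave), and in practice negative variance estimates from the second regression would need clamping.

```python
import numpy as np

# Solve the usual OPR-style regression for per-team means, then regress
# the squared residuals on the same design matrix for per-team variances.

rng = np.random.default_rng(1)
n_teams, n_matches = 6, 40
true_mean = rng.uniform(5, 25, n_teams)
true_var = rng.uniform(4, 36, n_teams)

# Random 3-team alliances; alliance score = sum of per-team Gaussians.
A = np.zeros((n_matches, n_teams))
for row in range(n_matches):
    A[row, rng.choice(n_teams, 3, replace=False)] = 1.0
b = A @ true_mean + rng.normal(0, np.sqrt(A @ true_var))

mean_hat, *_ = np.linalg.lstsq(A, b, rcond=None)        # per-team means
resid_sq = (b - A @ mean_hat) ** 2
var_hat, *_ = np.linalg.lstsq(A, resid_sq, rcond=None)  # per-team variances
```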

Quote:

Originally Posted by Ether (Post 1276340)
In this context, I'm assuming "the binary matrix" refers to the 2MxN design matrix [A] of the overdetermined system.

Do you then use QR factorization directly on the binary matrix to obtain the solution, or do you form the normal equations and use Cholesky?

Yes, I mean the design matrix.

I've implemented many numerical algorithms over the years, and the main lesson they taught me is not to write them yourself unless absolutely necessary!
So for linear regression I solve the normal equations using Octave (similar to MATLAB). I don't see any meaningful difference between my results and other published sources on CD.
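In Python terms (a stand-in for the Octave call; the binary design matrix and contributions below are toy data), solving the normal equations looks like:

```python
import numpy as np

# Normal-equations route: form A'A and A'b, then solve the square system.

A = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
], dtype=float)
x_true = np.array([10.0, 20.0, 15.0, 5.0])  # "true" team contributions
b = A @ x_true                              # noise-free alliance scores

x = np.linalg.solve(A.T @ A, A.T @ b)       # (A'A) x = A'b
# with consistent, noise-free data this recovers x_true exactly
```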

IKE 22-05-2013 16:57

Re: An improvement to OPR
 
Quote:

Originally Posted by Citrus Dad (Post 1274581)
I agree that the OPR is better than nothing--it still had a 0.91 correlation with our offensive scouting data. However, it tends to miss the outlier teams that may be hard to pick out otherwise. Our OPR was 27 points less than our actual offensive average--more than 50% off. But if your team has sufficient resources to calculate the OPR on the fly then you probably have enough to do full scouting. On the other hand, you might be relying on the OPR calculated with one of the apps tracking the competition, in which case then that's the best you have.

.....

Outliers happen. They tend to be worse if teams have a lot of variation, and the fewer the samples, the worse it is...

Being more than 50% off is rarer, but not unheard of. I once saw an OPR of 15 assigned to a team that had only competed in 2 of 8 matches. And they didn't score 60 points in those two matches...

Another killer of OPR is when a team that reliably does very well has a bad match with your team. For instance, in Archimedes, 469 ended up with an OPR over 80 points. They had a match where their shooter had an issue right from the start, and I believe they only scored climb points. Unfortunately for their partners, the OPR calculations will likely penalize those other teams..

For these reasons, and many others, it is very important to scout.

OPR does accurately show, though, that some teams are worth less than they score on average. Yep, you heard me right: there are many teams that are worth less than their average score. This is especially true of slow teams that used the middle shooting position to start their climb. While they would frequently get their 30 points, they would often cost the alliance many missed shots from cyclers that were 75%+ at that position but below 50% from the outside shooting positions. Yes, the climber did score 30 points, but if the other two partners, who usually put up 30 disc points each, only got 20, that results in a net -20 points from them. If this occurs on a regular basis, some of it will get attributed to the climbing team. This can also explain an imbalance if you sum the auto, disc, and climbing OPRs.

Basel A 22-05-2013 17:58

Re: An improvement to OPR
 
Quote:

Originally Posted by IKE (Post 1276698)
OPR does accurately show, though, that some teams are worth less than they score on average. Yep, you heard me right: there are many teams that are worth less than their average score. This is especially true of slow teams that used the middle shooting position to start their climb. While they would frequently get their 30 points, they would often cost the alliance many missed shots from cyclers that were 75%+ at that position but below 50% from the outside shooting positions. Yes, the climber did score 30 points, but if the other two partners, who usually put up 30 disc points each, only got 20, that results in a net -20 points from them. If this occurs on a regular basis, some of it will get attributed to the climbing team. This can also explain an imbalance if you sum the auto, disc, and climbing OPRs.

This also explains the potential for negative OPR in a category. Such a team might have a negative teleop OPR.

IKE 22-05-2013 18:37

Re: An improvement to OPR
 
Quote:

Originally Posted by Basel A (Post 1276709)
This also explains the potential for negative OPR in a category. Such a team might have a negative teleop OPR.

Correct. Negative OPRs are much rarer now that penalties get added to the other alliance's score, but they do occur (sometimes for just reasons, sometimes not).

The phenomenon I talked about above is similar to what can occur with the +/- system in basketball. Sometimes a superstar doesn't score a lot of points due to getting double-teamed but his open teammates then score a bunch of points. If you only look at stats, it doesn't tell the whole story.

FRC 33 uses OPR to figure out schedule strength and to double-check some of our stats data. Ultimately I trust the stats more than I do OPR, but especially this year, I found a handful of errors in our scouting team's data.

Like Citrus Dad, I generally found OPR to be within 15% of a team's average contribution. However, there would be several teams with large deltas. Often this was due to a team not working for a while and then hitting a whole bunch of points. FCS teams would also create havoc in OPR: they would have an 80-point match, then a 20-point match. Then an 80, then a 20... OPR math depends on a team being reasonably consistent; this behaviour will either dramatically over-predict or under-predict...

Ether 22-05-2013 19:17

Re: An improvement to OPR
 
1 Attachment(s)
Quote:

Originally Posted by MikeE (Post 1276688)
for linear regression I solve the normal equation using Octave


Octave uses a polymorphic solver, which selects an appropriate matrix factorization depending on the properties of the matrix.

If the matrix is Hermitian with a real positive diagonal, the polymorphic solver will attempt Cholesky factorization.

Since the normal matrix N = A^T A satisfies this condition, Cholesky factorization will be used.
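A small sketch of that factorization path (toy matrix and scores; numpy stands in for Octave's internals):

```python
import numpy as np

# For a full-column-rank A, N = A'A is symmetric positive definite, so
# it admits a Cholesky factorization N = L L'; the normal equations
# then solve with two triangular substitutions.

A = np.array([
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
], dtype=float)
b = np.array([30.0, 35.0, 25.0, 45.0])

N = A.T @ A
L = np.linalg.cholesky(N)        # lower-triangular factor: N = L @ L.T
y = np.linalg.solve(L, A.T @ b)  # forward substitution
x = np.linalg.solve(L.T, y)      # back substitution
```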


Quote:

MLE is just an approach for getting the parameters from match data. For simplicity I assume a Gaussian distribution, use linear regression as an initial estimate of each team's mean and linear regression on the squared residuals as an initial estimate of each team's variance.
The solution of the normal equations is a maximum likelihood estimator only if the data follows a normal distribution. I was wondering what the theoretical basis was for assuming a normal distribution.



