#61 | 18-05-2015, 11:09
Ether | systems engineer (retired) | no team
Re: "standard error" of OPR values

Quote:
Originally Posted by wgardner
What is the standard error for the OPR estimates (assuming the modeled distribution is valid) after the full tournament?

about 11.4 per team. Some teams have a bit more or a bit less, but the standard deviation of this was only 0.1 so all teams were pretty close to 11.4.
I sincerely appreciate the time and effort you spent on this.

I could be wrong, but I doubt this is what Citrus Dad had in mind.

Can we all agree that 0.1 is real-world meaningless? There is without a doubt far more variation in consistency of performance from team to team.

Manual scouting data would surely confirm this.


@ Citrus Dad: you wrote:
Quote:
I'm thinking of the parameter standard errors, i.e., the error estimate around the OPR parameter itself for each team. That can be computed from the matrix--it's a primary output of any statistical software package.
... so would you please compute the parameter standard errors for this example using your statistical software package and post results here? Thank you.


#62 | 18-05-2015, 11:35
Basel A (@BaselThe2nd) | FRC #3322 (Eagle Imperium) | College Student
Re: "standard error" of OPR values

Quote:
Originally Posted by wgardner
Scilab code is in the attachment.

Note that there is a very real chance that there's a bug in the code, so please check it over before you trust anything I say below.

/snip
Very interesting results. I wonder if you could run the same analysis on the 2015 Waterloo Regional. I'm asking for that event in particular because it had the ideal situation for OPR: a high number of matches per team (13) and a small number of teams (30).
__________________
Team 2337 | 2009-2012 | Student
Team 3322 | 2014-Present | College Student
“Be excellent in everything you do and the results will just happen.”
-Paul Copioli
#63 | 18-05-2015, 13:40
wgardner | no team | Coach
Re: "standard error" of OPR values

Quote:
Originally Posted by Ether
Can we all agree that 0.1 is real-world meaningless?

There is without a doubt far more variation in consistency of performance from team to team.

Manual scouting data would surely confirm this.
Sure. To reiterate though for others on the thread, it looks like the OPR estimates (assuming the model) for a tournament like the one in the data provided had a 1 standard deviation confidence range of around +/- 11.4 for nearly all teams (some teams might have been 11.3, some might have been 11.5, depending on their match schedules but as Ether says these very slight variations are essentially meaningless).

For example, this means that if a team had, say, an OPR of 50, and they played another identical tournament with the same matches and the same kind of randomness in the match results, the OPR computed from that tournament would probably be between 39 and 61 (if you're being picky: 68% of the time, provided the noise is sufficiently normal or Gaussian).

So picking a team for your alliance that has an OPR of 55 over a different team that has an OPR of 52 is silly. But picking a team that has an OPR of 80 over a team that has an OPR of 52 is probably a safe bet.
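One way to put numbers on the "silly" part (a back-of-the-envelope check, and it ignores the fact that OPR estimates from the same event are somewhat correlated): the uncertainty on the difference of two OPRs is roughly sqrt(11.4^2 + 11.4^2), or about 16 points, so a 3-point gap (55 vs. 52) is buried in the noise, while a 28-point gap (80 vs. 52) is not.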


In response to the latest post, this could be run on any other tournament for which the data is present. Ether made this particularly easy to do by providing the A match matrix and the vector of match results in nice csv files.

BTW, the code is attached and scilab is free, so anybody can do this for whatever data they happen to have on hand.
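For anyone who wants to poke at this without installing Scilab, here is a rough Python/NumPy equivalent (my own sketch, not the attached code; the file names are placeholders for one of Ether's [A]/[b] CSV pairs). It fits the OPRs by ordinary least squares and uses the textbook OLS formula SE_i = s*sqrt([(A'A)^-1]_ii) for the per-team standard error, which is what the equal-noise simulation converges to, give or take the degrees-of-freedom convention.

Code:
# sketch only -- assumes A.csv is the 0/1 alliance matrix (one row per alliance
# score) and b.csv is the matching score vector, as in Ether's posted files;
# the local file names here are placeholders
import numpy as np

A = np.loadtxt("A.csv", delimiter=",")
b = np.loadtxt("b.csv", delimiter=",")

n, p = A.shape                        # rows = alliance scores, columns = teams
AtA_inv = np.linalg.pinv(A.T @ A)
opr = AtA_inv @ A.T @ b               # ordinary least squares OPRs

resid = b - A @ opr
s2 = resid @ resid / (n - p)          # residual variance (dividing by n instead
                                      # changes the numbers only slightly)

se = np.sqrt(s2 * np.diag(AtA_inv))   # per-team standard error of the OPR
print(np.mean(se), np.std(se))        # should be in the ballpark of 11.4 and 0.1
                                      # for the event discussed above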
__________________
CHEER4FTC website and facebook online FTC resources.
Providing support for FTC Teams in the Charlottesville, VA area and beyond.

Last edited by wgardner : 18-05-2015 at 13:46.
#64 | 18-05-2015, 15:10
Ether | systems engineer (retired) | no team
Re: "standard error" of OPR values

Quote:
Originally Posted by wgardner
Ether made this particularly easy to do by providing the A match matrix and the vector of match results in nice csv files.
[A] and [b] CSV files for all 117 events in 2015 (9878 qual matches, 2872 teams) can be found at the link below at or near the bottom of the attachments list

http://www.chiefdelphi.com/media/papers/3132


#65 | 18-05-2015, 19:57
wgardner | no team | Coach
Re: "standard error" of OPR values

Quote:
Originally Posted by Ether
[A] and [b] CSV files for all 117 events in 2015 (9878 qual matches, 2872 teams) can be found at the link below at or near the bottom of the attachments list

http://www.chiefdelphi.com/media/papers/3132
Thanks for the awesome data, Ether!

Here are the results for the Waterloo tournament:

mpt = matches per team (so the last row is for the whole tournament and earlier rows are for the tournament through 4 matches per team, through 5, etc.)

varM = variance of the match scores

stdevM = standard deviation of the match scores

varR and stdevR are the same for the match prediction residual
so varR/varM is the fraction of the match variance that can't be predicted by the OPR linear prediction model.

/sqrt(mpt) = the standard deviation of the OPRs we would have if we were simply averaging a team's match scores to estimate their OPR, which is just stdevR/sqrt(mpt)

StdErrO = the standard error of the OPRs using my complicated model derivation.

stdevO = the standard deviation of the StdErrO values taken across all teams, which is big if some teams have more standard error on their OPR values than other teams do.

Code:
mpt	varM	stdevM	varR	stdevR	/sqrt(mpt) StdErrO stdevO
4	3912.31	 62.55	206.90	 14.38	  7.19	 12.22	  1.60	
5	4263.97	 65.30	290.28	 17.04	  7.62	 10.44	  0.71	
6	3818.40	 61.79	346.49	 18.61	  7.60	  9.44	  0.43	
7	3611.50	 60.10	379.83	 19.49	  7.37	  8.64	  0.30	
8	3617.25	 60.14	429.42	 20.72	  7.33	  8.28	  0.17	
9	3592.06	 59.93	469.44	 21.67	  7.22	  8.00	  0.11	
10	3623.44	 60.20	539.33	 23.22	  7.34	  8.01	  0.10	
11	3530.91	 59.42	548.08	 23.41	  7.06	  7.58	  0.08	
12	3440.36	 58.65	578.65	 24.06	  6.94	  7.38	  0.07	
13	3356.17	 57.93	645.25	 25.40	  7.05	  7.42	  0.06
And for comparison, here's the same data for the Archimedes division results:

Code:
mpt	varM	stdevM	varR	stdevR	/sqrt(mpt) StdErrO stdevO
4	1989.58	 44.60	389.80	 19.74	  9.87	 16.51	  1.28	
5	2000.09	 44.72	714.81	 26.74	 11.96	 16.31	  0.57	
6	2157.47	 46.45	863.88	 29.39	 12.00	 15.17	  0.37	
7	2225.99	 47.18	916.16	 30.27	 11.44	 13.64	  0.29	
8	2204.03	 46.95	985.63	 31.39	 11.10	 12.77	  0.24	
9	2235.14	 47.28	1053.26	 32.45	 10.82	 12.21	  0.10	
10	2209.46	 47.00	1056.14	 32.50	 10.28	 11.37	  0.12

The OPR seems to do a much better job of predicting the match results in the Waterloo tournament (removing 80% of the match variance vs. 50% in Archimedes), and the standard error of the OPR estimates themselves is smaller (7.42 in Waterloo vs. 11.37 in Archimedes).
__________________
CHEER4FTC website and facebook online FTC resources.
Providing support for FTC Teams in the Charlottesville, VA area and beyond.

Last edited by wgardner : 18-05-2015 at 20:01.
#66 | 19-05-2015, 15:07
Citrus Dad (Richard McCann) | FRC #1678 (Citrus Circuits) | Mentor
Re: "standard error" of OPR values

Quote:
Originally Posted by Ether
I sincerely appreciate the time and effort you spent on this.

I could be wrong, but I doubt this is what Citrus Dad had in mind.


@ Citrus Dad: you wrote:
... so would you please compute the parameter standard errors for this example using your statistical software package and post results here? Thank you.


I believe that the SEs that have been posted are what I was interested in. (I'd have to think harder about this than I can right now.) I think that using the pooled time series method, which is essentially what has been done here, results in SEs that are largely the same for each participant, because the OPRs are estimated across all participants.

To be honest, setting up a pooled time series with this data would take me more time than I have at the moment. I've thought about it, and maybe it will be a summer project (maybe my son Jake (themccannman) can do it!).

Note that the 1 SD SE of 11.5 gives the 68% confidence interval. For 10 or so observations, the 95% confidence interval is about 2 SD, or about 23.0. The t-statistic is the relevant tool for finding the confidence interval width.
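If anyone wants to see that with an actual t quantile rather than the rule-of-thumb factor of 2, here is a tiny sketch (illustrative numbers only; the right degrees of freedom for this pooled model is itself debatable, so df below is just a placeholder):

Code:
# sketch: 95% confidence interval from an OPR and its standard error, using
# the t quantile; the numbers and df are placeholders, not claims about the model
from scipy.stats import t

opr, se, df = 50.0, 11.5, 10
half_width = t.ppf(0.975, df) * se    # about 2.23 * 11.5 = 25.6 for df = 10
print(opr - half_width, opr + half_width)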

Last edited by Citrus Dad : 19-05-2015 at 15:09. Reason: Add note about confidence intervals
#67 | 19-05-2015, 17:09
Ether | systems engineer (retired) | no team
Re: "standard error" of OPR values

Quote:
Originally Posted by Citrus Dad
I believe that the SEs that have been posted are what I was interested in.
You stated earlier that "the parameter standard errors, i.e., the error estimate around the OPR parameter itself for each team [is] a primary output of any statistical software package."

From this and other prior statements, I had the very strong impression you were seeking a separate error estimate for each team's OPR.

Such estimates would certainly not be virtually identical for every team!

It would be very helpful if you would please provide more information about statistical software packages you know that provide "parameter standard errors".

I couldn't find any that could provide such estimates for the multiple-regression model we are talking about for OPR computation using FRC-provided match score data. I suspect that's because it's simply not possible to get such estimates for that model and data.



#68 | 19-05-2015, 20:16
wgardner | no team | Coach
Re: "standard error" of OPR values

Quote:
Originally Posted by Ether
From this and other prior statements, I had the very strong impression you were seeking a separate error estimate for each team's OPR.

Such estimates would certainly not be virtually identical for every team!
The approach I described does find a separate error estimate for each team and, at least in this approach, they are virtually identical. Why do you think they would "certainly not be virtually identical"?

Note that this is computing the confidence of each OPR estimate for each team. This is different from trying to compute the variance of score contribution from match to match for each team, which is a very different (and also very interesting) question. I think it would be reasonable to hypothesize that the variance of score contribution for each team might vary from team to team, possibly substantially.

For example, it might be interesting to know that team A scores 50 points +/- 10 points with 68% confidence but team B scores 50 points +/- 40 points with 68% confidence. At the very least, if you saw that one team had a particularly large score variance, it might make you investigate this robot and see what the underlying root cause was (maybe 50% of the time they have an awesome autonomous but 50% of the time it completely messes up, for example).

Hmmm....
__________________
CHEER4FTC website and facebook online FTC resources.
Providing support for FTC Teams in the Charlottesville, VA area and beyond.
#69 | 19-05-2015, 21:44
Ether | systems engineer (retired) | no team
Re: "standard error" of OPR values

Quote:
Originally Posted by wgardner
Why do you think they would "certainly not be virtually identical"?
Because there's no reason whatsoever to believe there's virtually no variation in consistency of performance from team to team.

Manual scouting data would surely confirm this.


Consider the following thought experiment.

Team A gets actual scores of 40,40,40,40,40,40,40,40,40,40 in each of its 10 qual matches.

Team B gets actual scores of 0,76,13,69,27,23,16,88,55,33

The simulation you described assigns virtually the same standard error to their OPR values.

If what is being sought is a metric which is somehow correlated to the real-world trustworthiness of the OPR for each individual team (I thought that's what Citrus Dad was seeking), then the standard error coming out of the simulation is not that metric.


My guess is that the 0.1 number is just measuring how well your random number generator is conforming to the sample distribution you requested.



Last edited by Ether : 19-05-2015 at 22:05.
#70 | 19-05-2015, 22:09
wgardner | no team | Coach
Re: "standard error" of OPR values

Quote:
Because there's no reason whatsoever to believe there's virtually no variation in consistency of performance from team to team.

Manual scouting data would surely confirm this.
[Edit: darn it! I tried to reply and mistakenly edited my previous post. I'll try to reconstruct it here.]

Your model certainly might be valid, and my derivation explicitly does not deal with this case.

The derivation is for a model where OPRs are computed, then multiple tournaments are generated using those OPRs and adding the same amount of noise to each match, and then seeing what the standard error of the resulting OPR estimates is across these multiple tournaments.

If you know that the variances of each team's score contribution are different, then the model fails. For that matter, the least-squares solution for computing the OPRs in the first place also rests on a failed assumption in that case. If you knew the variances of the teams' contributions, then you should use weighted-least-squares to get a better estimate of the OPRs.

I wonder if some iterative approach might work: First compute OPRs assuming all teams have equal variance of contribution, then estimate the actual variances of contributions for each team, then recompute the OPRs via weighted-least-squares taking this into account, then repeat the variance estimates, etc., etc., etc. Would it converge?

[Edit: 2nd part of post, added here a day later]

http://en.wikipedia.org/wiki/Generalized_least_squares

OPRs are computed with an ordinary-least-squares (OLS) analysis.

If we knew ahead of time the variances we expected for each team's scoring contribution, we could use weighted-least-squares (WLS) to get a better estimate of the OPRs.

The link also describes something like I was suggesting above, called "Feasible generalized least squares (FGLS)". In FGLS, you use OLS to get your initial OPRs, then estimate the variances, then compute WLS to improve the OPR estimate. It discusses iterating this approach also.

But, the link also includes this comment: "For finite samples, FGLS may be even less efficient than OLS in some cases. Thus, while (FGLS) can be made feasible, it is not always wise to apply this method when the sample is small."

If we have 254 match results and we're trying to estimate 76 OPRs and 76 OPR variances (152 parameters total), the sample is small relative to the number of parameters, so this approach would probably suffer from exactly that small-sample problem.
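In case anyone wants to experiment with the OLS -> estimate-variances -> WLS loop anyway, here is one rough way it could be coded (my own guess at an implementation in Python, not anything from the attached Scilab; estimating the per-team variances by regressing squared residuals on the same [A] matrix is only a heuristic, and per the FGLS caveat above there is no guarantee this behaves well at these sample sizes):

Code:
# rough sketch of the OLS -> per-team variance estimate -> WLS iteration
# (FGLS-style); heuristic only, convergence not guaranteed
import numpy as np

def fgls_opr(A, b, iters=5, var_floor=1.0):
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # step 0: ordinary OPRs
    for _ in range(iters):
        r2 = (b - A @ x) ** 2                      # squared residual per alliance score
        v = np.linalg.lstsq(A, r2, rcond=None)[0]  # crude per-team variance estimate
        v = np.maximum(v, var_floor)               # keep the variances positive
        w = 1.0 / (A @ v)                          # weight = 1 / predicted match variance
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)    # weighted least squares refit
    return x, v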
__________________
CHEER4FTC website and facebook online FTC resources.
Providing support for FTC Teams in the Charlottesville, VA area and beyond.

Last edited by wgardner : 20-05-2015 at 07:20.
#71 | 20-05-2015, 07:27
wgardner | no team | Coach
Re: "standard error" of OPR values

See also this link:
http://en.wikipedia.org/wiki/Heteroscedasticity

"In statistics, a collection of random variables is heteroscedastic if there are sub-populations that have different variabilities from others. Here "variability" could be quantified by the variance or any other measure of statistical dispersion."

And see particularly the "Consequences" section which says, "Heteroscedasticity does not cause ordinary least squares coefficient estimates to be biased, although it can cause ordinary least squares estimates of the variance (and, thus, standard errors) of the coefficients to be biased, possibly above or below the true or population variance. Thus, regression analysis using heteroscedastic data will still provide an unbiased estimate for the relationship between the predictor variable and the outcome, but standard errors and therefore inferences obtained from data analysis are suspect. Biased standard errors lead to biased inference..."
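One standard way to deal with exactly that problem, without changing the OPRs themselves, is heteroscedasticity-consistent ("sandwich", or White) standard errors: keep the OLS coefficients but build their covariance from the observed squared residuals. A minimal sketch of the simplest (HC0) flavor, using the same [A] and [b] as before; unlike the equal-noise calculation, these standard errors can come out different for different teams.

Code:
# sketch of HC0 ("White") heteroscedasticity-consistent standard errors for
# the OPR coefficients; the OLS estimates are unchanged, only the SEs differ
import numpy as np

def hc0_standard_errors(A, b):
    AtA_inv = np.linalg.pinv(A.T @ A)
    opr = AtA_inv @ A.T @ b
    r = b - A @ opr                                # residual per alliance score
    meat = A.T @ (A * (r ** 2)[:, None])           # A' diag(r^2) A
    cov = AtA_inv @ meat @ AtA_inv                 # sandwich covariance
    return opr, np.sqrt(np.diag(cov))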
__________________
CHEER4FTC website and facebook online FTC resources.
Providing support for FTC Teams in the Charlottesville, VA area and beyond.
#72 | 21-05-2015, 20:58
Citrus Dad (Richard McCann) | FRC #1678 (Citrus Circuits) | Mentor
Re: "standard error" of OPR values

Quote:
Originally Posted by Ether
You stated earlier that "the parameter standard errors, i.e., the error estimate around the OPR parameter itself for each team [is] a primary output of any statistical software package."

From this and other prior statements, I had the very strong impression you were seeking a separate error estimate for each team's OPR.

Such estimates would certainly not be virtually identical for every team!

It would be very helpful if you would please provide more information about statistical software packages you know that provide "parameter standard errors".

I couldn't find any that could provide such estimates for the multiple-regression model we are talking about for OPR computation using FRC-provided match score data. I suspect that's because it's simply not possible to get such estimates for that model and data.

I think one solution is to use a fixed-effects model that includes a separate variable for each team; the SE for each team will show up there. To be honest, issues like that for FE models are getting beyond my econometric experience. Maybe someone else could research that and check. FE models (as well as random-effects models) have become quite popular in the last decade.
#73 | 24-05-2015, 13:43
sur (Sujit Rao) | FRC #3324 (Metrobots) | Alumni
Re: "standard error" of OPR values

Quote:
Originally Posted by wgardner
To be as clear as I can about this: This says that if we compute the OPRs based on the full data set, compute the match prediction residuals based on the full data set, then run lots of different tournaments with match results generated by adding the OPRs for the teams in the match and random match noise with the same match noise variance, and then compute the OPR estimates for all of these different randomly generated tournaments, we would expect to see the OPR estimates themselves have a standard deviation around 11.4.
This sounds very similar to bootstrap resampling (http://www.stat.cmu.edu/~cshalizi/40...lecture-08.pdf), which should measure the variation in estimated OPR from the "true" OPR values rather than how consistently individual teams perform. This may be why the values are virtually identical.
#74 | 24-05-2015, 14:06
wgardner | no team | Coach
Re: "standard error" of OPR values

Quote:
Originally Posted by sur
This sounds very similar to bootstrap resampling (http://www.stat.cmu.edu/~cshalizi/40...lecture-08.pdf), which should measure the variation in estimated OPR from the "true" OPR values rather than how consistently individual teams perform. This may be why the values are virtually identical.
Yep, though my derivation is for straight bootstrapping (Figure #1 in your attachment) rather than re-sampled bootstrapping (Figure #3). And yes, given this, the standard errors I compute are the variations of the OPR estimates under the model, which assumes that there is no variation in the way individual teams perform other than their mean contribution. Obviously, that final assumption is suspect.
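For anyone who wants the "straight" version spelled out, here is a small Python sketch of that simulation (mine, not the Scilab attachment): regenerate synthetic tournaments from the fitted OPRs plus equal-variance Gaussian match noise, refit, and look at the per-team spread of the refitted OPRs.

Code:
# parametric ("straight") bootstrap sketch of the OPR standard error: simulate
# tournaments from fitted OPRs + equal-variance noise, refit, take the spread
import numpy as np

def bootstrap_opr_se(A, b, n_sims=1000, seed=0):
    rng = np.random.default_rng(seed)
    n, p = A.shape
    opr = np.linalg.lstsq(A, b, rcond=None)[0]
    noise_sd = np.std(b - A @ opr)                 # one noise level for every match
    sims = np.empty((n_sims, p))
    for k in range(n_sims):
        b_sim = A @ opr + rng.normal(0.0, noise_sd, size=n)
        sims[k] = np.linalg.lstsq(A, b_sim, rcond=None)[0]
    return sims.std(axis=0)                        # per-team standard error estimate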
__________________
CHEER4FTC website and facebook online FTC resources.
Providing support for FTC Teams in the Charlottesville, VA area and beyond.
#75 | 24-05-2015, 16:52
sur (Sujit Rao) | FRC #3324 (Metrobots) | Alumni
Re: "standard error" of OPR values

Quote:
Originally Posted by wgardner
And yes, given this, the standard errors I compute are the variations of the OPR estimates if they fit the model, all of which assumes that there is not variation in the way individual teams perform other than their mean contribution. Obviously, this final assumption is suspect.
I think this assumption can be restated another way. Suppose the contribution of each team is normally distributed with some mean and variance, i.e., there is a hidden per-team distribution of scoring contribution. If the variance is fixed and assumed to be the same for every team, then the maximum likelihood estimate of the means of those distributions is the same as the least squares estimate used for the usual OPR.
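To spell the equivalence out (the standard argument, not anything from the thread's attachments): if each alliance score is modeled as b_j = (A x)_j + e_j with e_j ~ N(0, sigma^2) and the same sigma^2 for every match, the log-likelihood is -(n/2)*ln(2*pi*sigma^2) - (1/(2*sigma^2)) * sum_j (b_j - (A x)_j)^2. For any fixed sigma^2, maximizing this over x is exactly minimizing sum_j (b_j - (A x)_j)^2, i.e., the ordinary least-squares OPR. Letting sigma^2 differ by team breaks that equivalence, which is where the weighted-least-squares discussion above comes back in.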

Last edited by sur : 24-05-2015 at 17:48.