Posted 01-04-2013, 17:52 by MikeE
Re: OPR after Week Five Events

Thanks Ed & Ether for this great resource.

However, I do need to quibble with how OPR/CCWM are being discussed, as exemplified by several earlier posts. Picking one of these:

Quote:
Originally Posted by efoote868
Summary of global OPR and CCWM match win/loss predictions: 81.72% and 82.47% respectively.
Those numbers are misleading, because OPR/CCWM are calculated from the very same data being used to test their predictive power; in other words, the training and test sets are identical.
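For background on why the ratings and the "predictions" come from the same data: OPR is conventionally estimated by least squares on alliance score totals. Here is a minimal sketch with made-up match data (team indices and scores are hypothetical, just to show the mechanics):

```python
import numpy as np

# Hypothetical toy data: 4 teams, 6 alliance-score observations.
# Row i of A has a 1 in column j if team j was on that alliance.
A = np.array([
    [1, 1, 0, 0],   # teams 0 and 1 scored 60 together
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)
scores = np.array([60, 40, 55, 45, 52, 48], dtype=float)

# OPR = least-squares solution of A @ opr ≈ scores
opr, *_ = np.linalg.lstsq(A, scores, rcond=None)
print(opr)  # per-team offensive contribution estimates
```

Note that every score used to fit `opr` is also a score the resulting ratings would be "predicting" if evaluated on the same event, which is exactly the circularity at issue.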

It's analogous to (although not as extreme as) claiming that final qualification ranking is a good predictor of performance in earlier qualification matches, when ranking is obviously a consequence of performance in those very matches.

Good practice would use disjoint training and test sets. I'm sure this analysis has been performed in previous seasons, but a brief search of CD didn't turn it up.
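The disjoint-set evaluation I have in mind looks like this sketch, which uses simulated data (team strengths, alliance draws, and noise levels are all assumptions, not real FRC data): fit OPR on the first half of the matches, then score win/loss predictions only on the held-out second half.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teams, n_matches = 30, 120
true = rng.normal(50, 15, n_teams)  # hypothetical "true" team strengths

# Build red/blue design matrices with 3 distinct teams per alliance
# (naive random draw; real schedules balance appearances, this doesn't).
A_red = np.zeros((n_matches, n_teams))
A_blue = np.zeros((n_matches, n_teams))
for m in range(n_matches):
    teams = rng.choice(n_teams, 6, replace=False)
    A_red[m, teams[:3]] = 1
    A_blue[m, teams[3:]] = 1
red_score = A_red @ true + rng.normal(0, 10, n_matches)
blue_score = A_blue @ true + rng.normal(0, 10, n_matches)

# Train on the first half of matches only...
half = n_matches // 2
A_train = np.vstack([A_red[:half], A_blue[:half]])
s_train = np.concatenate([red_score[:half], blue_score[:half]])
opr, *_ = np.linalg.lstsq(A_train, s_train, rcond=None)

# ...then predict winners in the disjoint second half.
pred_red_wins = (A_red[half:] @ opr) > (A_blue[half:] @ opr)
actual_red_wins = red_score[half:] > blue_score[half:]
accuracy = (pred_red_wins == actual_red_wins).mean()
print(f"out-of-sample accuracy: {accuracy:.1%}")
```

The out-of-sample number from a setup like this is the figure I'd want to see, rather than accuracy measured on the same matches the ratings were fit to.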

Interestingly, the simple baseline heuristic of "the alliance with lower team numbers wins" correctly predicts 59.1% of qualification matches this season. I assume that OPR and CCWM do better than that, but not as well as the ~82% claimed above.
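For concreteness, here is that baseline spelled out on a few made-up matches (team numbers and outcomes are invented; "lower team numbers" is taken to mean the smaller sum of the three numbers):

```python
def lower_number_predicts_red(red_teams, blue_teams):
    """Predict a red win when red's total team number is lower (ties -> blue)."""
    return sum(red_teams) < sum(blue_teams)

# (red alliance, blue alliance, red_won) -- entirely made-up sample
matches = [
    ((118, 254, 1114), (3467, 4001, 4500), True),
    ((2056, 610, 1241), (33, 67, 148), False),
    ((971, 1678, 604), (5012, 4967, 3990), True),
    ((4334, 4859, 4741), (16, 25, 180), False),
]
correct = sum(lower_number_predicts_red(r, b) == won for r, b, won in matches)
print(f"{correct}/{len(matches)} correct")
```

Run against a full season's qualification results, the fraction correct from a tally like this is where the 59.1% figure comes from.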