Quote:
Originally Posted by Ether
I dub this metric GPR :-)
I ran the numbers for 2015 MISJO (40 teams, 80 qual matches, no DQs, no surrogates).
EDIT2: Ignore this, it uses incorrect numbers.
I checked out how these stats relate to the match results.
Your numbers correctly predicted* the outcome of 66% of matches, while OPR and CCWM both predicted the correct winner 84% of the time.
It makes sense that this stat doesn't work well for a game where the other alliance can't affect your score. Can you run the numbers for a 2014 event so we can see whether it does better there?
*I don't like these sorts of "predictions" because they're made with numbers obtained after the fact. Could you also run the numbers for the first ~60 qual matches, and then we'll see how they do on the remaining 20?
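Here's roughly the kind of check I mean, as a minimal sketch (the `matches` list and `rating` dict layouts are placeholders, not the actual 2015 MISJO data):
Code:
# Sketch of the win-prediction check.  `matches` is a list of
# (red_teams, blue_teams, red_score, blue_score) tuples and `rating` is a
# dict mapping team number -> metric value (OPR, CCWM, GCCWM, ...).

def predicted_winner(red_teams, blue_teams, rating):
    """Predict the winner by comparing the sums of each alliance's ratings."""
    red = sum(rating[t] for t in red_teams)
    blue = sum(rating[t] for t in blue_teams)
    return "red" if red > blue else "blue" if blue > red else "tie"

def prediction_accuracy(matches, rating):
    """Fraction of matches whose actual winner matches the predicted winner."""
    correct = 0
    for red_teams, blue_teams, red_score, blue_score in matches:
        actual = ("red" if red_score > blue_score
                  else "blue" if blue_score > red_score else "tie")
        if predicted_winner(red_teams, blue_teams, rating) == actual:
            correct += 1
    return correct / len(matches)

# The footnote's train/test idea: fit the metric on the first ~60 qual matches
# only, then report prediction_accuracy over the remaining ~20.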
EDIT: Looking through the numbers a little more, I can see that this new stat gives a few teams radically different evaluations than OPR and CCWM do. Look at these select teams:
Code:
Team   GCCWM    OPR   CCWM
3688   -22.0   44.7   23.5
2474    -2.3   54.2   21.8
1940     8.4    5.4  -22.5
The first two are very undervalued by GCCWM while the last one is very overvalued. These aren't the only egregious differences.
Here are the correlation coefficients for each pair of metrics:
OPR-CCWM: 0.82
GCCWM-CCWM: 0.39
GCCWM-OPR: 0.35
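If anyone wants to reproduce this, a Pearson correlation over the per-team metric vectors is all it takes. A minimal sketch (the metric dicts are placeholders for the full 40-team values):
Code:
import numpy as np

# Each dict maps team number -> metric value, e.g. opr = {3688: 44.7, ...}.

def pearson(metric_a, metric_b):
    """Pearson correlation between two metrics over the teams they share."""
    teams = sorted(set(metric_a) & set(metric_b))
    a = np.array([metric_a[t] for t in teams])
    b = np.array([metric_b[t] for t in teams])
    return np.corrcoef(a, b)[0, 1]

# pearson(opr, ccwm)    -> ~0.82
# pearson(gccwm, ccwm)  -> ~0.39
# pearson(gccwm, opr)   -> ~0.35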
Quote:
Originally Posted by wgardner
But the easy way around this is to just find the minimum norm solution (one of the many solutions) using, say, the singular value decomposition (SVD), and then subtract off the mean from all of the values. The resulting combined-contribution-to-winning-margin values represent how much a team will contribute to its winning margin compared to the average team's contribution (which will be 0, of course).
Could you explain a bit more how the SVD helps you find the minimum norm solution? Unfortunately, I'm only familiar with the SVD in terms of geometric transformations.
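That said, here's my rough guess at what this looks like numerically (toy matrix and scores, not event data, and I may well be misreading the suggestion):
Code:
import numpy as np

# For an underdetermined system A x = b there are many exact solutions; the
# Moore-Penrose pseudoinverse, built from the SVD, picks the one with the
# smallest Euclidean norm.

A = np.array([[1.0, 1.0, 0.0],     # 2 equations, 3 unknowns -> many solutions
              [0.0, 1.0, 1.0]])
b = np.array([10.0, 12.0])

# Explicit SVD route: A = U diag(s) Vt, so A+ = V diag(1/s) U^T
# (for a rank-deficient A you would also drop near-zero singular values
# before taking reciprocals).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_min_norm = Vt.T @ np.diag(1.0 / s) @ U.T @ b

# Same answer in one call via the pseudoinverse:
assert np.allclose(x_min_norm, np.linalg.pinv(A) @ b)

# The last step in the quoted post: subtract the mean so each value is
# relative to the average team's contribution.
x_centered = x_min_norm - x_min_norm.mean()
print(x_min_norm, x_centered)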