#1
Re: Incorporating Opposing Alliance Information in CCWM Calculations
I ran the numbers on the 2014 St. Joseph event. I checked that my calculations for 2015 St. Joseph match Ether's, so I'm fairly confident that everything is correct.
Here's how each stat did at "predicting" the winner of each match:

OPR: 87.2%
CCWM: 83.3%
WMPR: 91.0%

I've attached my analysis, WMPR values, A and b matrices, along with the qual schedules for both the 2014 and 2015 St. Joe events.
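(For anyone who wants to reproduce this without the attachments, here's a minimal sketch, in Python/NumPy, of how the WMPR A and b matrices can be built and solved with the least-norm solution. The match-list layout and function name are hypothetical, not the attached spreadsheet's code.)

Code:
import numpy as np

def wmpr(matches, teams):
    """Least-norm WMPR from qualification results.

    matches: list of (red_teams, blue_teams, red_score, blue_score),
             where red_teams/blue_teams are 3-tuples of team numbers.
    teams:   list of all team numbers at the event.
    """
    idx = {t: i for i, t in enumerate(teams)}
    A = np.zeros((len(matches), len(teams)))
    b = np.zeros(len(matches))
    for m, (red, blue, red_score, blue_score) in enumerate(matches):
        for t in red:
            A[m, idx[t]] = 1.0    # red robots add to the winning margin
        for t in blue:
            A[m, idx[t]] = -1.0   # blue robots subtract from it
        b[m] = red_score - blue_score
    # A is rank-deficient (adding a constant to every team changes nothing),
    # so lstsq returns the minimum-norm solution, whose mean is zero.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(teams, x))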
#2
Re: Incorporating Opposing Alliance Information in CCWM Calculations
Quote:
On the same data, using the "remove one match from the training data, fit the model on the rest of the data, use the removed match as testing data, and repeat the process for all matches" method, I got the following results:

Stdev of winning margin prediction residual
OPR : 63.8
CCWM: 72.8
WMPR: 66.3

When I looked at scaling down each of the metrics to improve their prediction performance on testing data not in the training set, the best Stdevs I got for each were:
OPR*0.9: 63.3
CCWM*0.6: 66.2
WMPR*0.7: 60.8

Match prediction outcomes
OPR : 60 of 78 (76.9 %)
CCWM: 57 of 78 (73.1 %)
WMPR: 62 of 78 (79.5 %)

Yeah! Even with testing data not used in the training set, WMPR seems to be outperforming CCWM in predicting the winning margins and the match outcomes in this single 2014 tournament (which, again, is a game with substantial defense). I'm hoping to get the match results (b with red and blue scores separately) for other 2014 tournaments to see if this is a general result.

[Edit: found a bug in the OPR code. Fixed it. Updated comments. Also included the scaled-down OPR, CCWM, and WMPR prediction residuals to address overfitting.]

Last edited by wgardner : 27-05-2015 at 08:37.
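(For concreteness, the leave-one-match-out procedure described above can be sketched as follows, reusing the hypothetical A and b layout from the sketch in the previous post; the scaling experiment just multiplies the fitted ratings by a constant before predicting the held-out margin.)

Code:
import numpy as np

def loo_margin_residuals(A, b, scale=1.0):
    """Leave-one-out residuals for a winning-margin model such as WMPR.

    For each match: fit on every other match, predict the held-out margin
    (optionally scaled down), and record the prediction error.
    """
    residuals = []
    for m in range(len(b)):
        keep = np.arange(len(b)) != m          # drop match m from training
        x, *_ = np.linalg.lstsq(A[keep], b[keep], rcond=None)
        residuals.append(b[m] - scale * (A[m] @ x))
    return np.array(residuals)

# e.g. np.std(loo_margin_residuals(A, b, scale=0.7)) for the WMPR*0.7 row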
#3
Re: Incorporating Opposing Alliance Information in CCWM Calculations
Quote:
#4
Re: Incorporating Opposing Alliance Information in CCWM Calculations
Quote:
#5
Re: Incorporating Opposing Alliance Information in CCWM Calculations
Iterative Interpretations of OPR and WMPR
(I found this interesting; some other folks might or might not.)

Say you want to estimate a team's offensive contribution to their alliance scores. A simple approach is to just compute the team's average match score / 3. Let's call this estimate O(0), a vector of the average match score / 3 for all teams at step 0. (/3 because there are 3 teams per alliance. This would be /2 for FTC.)

But then you want to take into account the fact that a team's alliance partners may be better or worse than average. The best estimate you have of the contribution of a team's partners at this point is the average of their O(0) estimates. So let the improved estimate be

O(1) = team's average match score - 2*average( O(0) for the team's alliance partners ).

(2*average because there are 2 partners contributing per match. This would be 1*average for FTC.)

This is better, but now we have an improved estimate for all teams, so we can just iterate this:

O(2) = team's average match score - 2*average( O(1) for the team's alliance partners )
O(3) = team's average match score - 2*average( O(2) for the team's alliance partners )
etc. etc.

This sequence of O(i) converges to the OPR values, so this is just another way of explaining what OPRs are.

WMPR can be iteratively computed in a similar way:

W(0) = team's average match winning margin
W(1) = team's average match winning margin - 2*average( W(0) for the team's alliance partners ) + 3*average( W(0) for the team's opponents )
W(2) = team's average match winning margin - 2*average( W(1) for the team's alliance partners ) + 3*average( W(1) for the team's opponents )
etc. etc.

This sequence of W(i) converges to the WMPR values, so this is just another way of explaining what WMPRs are.
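(A rough sketch of that iteration, using the same hypothetical match-list layout as the earlier sketches. The fixed iteration count stands in for a real convergence check; in practice a direct least-squares solve is the safer way to get the same numbers.)

Code:
import numpy as np

def iterate_wmpr(matches, teams, n_iter=100):
    """W(i+1) = avg margin - 2*avg(partner W(i)) + 3*avg(opponent W(i))."""
    margins = {t: [] for t in teams}     # signed winning margin, per appearance
    partners = {t: [] for t in teams}    # partner teams, per appearance
    opponents = {t: [] for t in teams}   # opponent teams, per appearance
    for red, blue, red_score, blue_score in matches:
        for side, others, sign in ((red, blue, 1), (blue, red, -1)):
            for t in side:
                margins[t].append(sign * (red_score - blue_score))
                partners[t].extend(p for p in side if p != t)
                opponents[t].extend(others)
    W = {t: np.mean(margins[t]) for t in teams}               # W(0)
    for _ in range(n_iter):
        W = {t: np.mean(margins[t])
                - 2 * np.mean([W[p] for p in partners[t]])
                + 3 * np.mean([W[o] for o in opponents[t]])
             for t in teams}                                    # W(i+1) from W(i)
    return W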
#6
Re: Incorporating Opposing Alliance Information in CCWM Calculations
Currently we've mostly been seeing how WMPR does at a small district event with a lot of matches per team (a best-case scenario for these stats). I wanted to see how it would do in a worse case. Here's how each stat performed at "predicting" the winner of each match at the 2014 Archimedes Division (100 teams, 10 matches/team).
OPR: 85.6%
CCWM: 87.4%
WMPR: 92.2%
EPR: 89.2%

WMPR holds up surprisingly well in this situation and outperforms the other stats. EPR does better than OPR, but worse than WMPR. I don't really like EPR, as it seems difficult to interpret. The whole idea behind using the winning margin is that the red robots can influence the blue score. Yet EPR also models bf = b1 + b2 + b3, which is counter to this.

Quote:
On another note: I've also found that it's difficult to compare WMPRs across events (whereas OPRs are easy to compare). This is because a match that ends 210-200 looks the same as one that ends 30-20. At very competitive events this becomes a huge problem. Here's an example from team 33's 2014 season.

WMPRs at each event:
MISOU: 78.9
MIMID: 37.0
MITRY: 77.8
MICMP: 29.4
ARCHI: 40.8

Anyone who watched 33 at their second district event would tell you that they didn't do as well as at their first, and these numbers show that. But these numbers also show that 33 did better at their second event than at the State Championship. This is clearly incorrect: 33 won the State Championship but got knocked out in the semis at their second district event. You can see pretty clearly that the more competitive events (MSC, Archimedes) result in lower WMPRs, which makes it very difficult to compare this stat across events.

This occurs because the least-norm solution has an average of zero at every event. It treats all events as equal, when they're not. I propose that instead of having the average be zero, the average should be how many points the average robot scored at that event. (So we should add the average event score / 3 to every team's WMPR.) This will smooth out the differences between each event. Using this method, here are 33's new WMPRs:

MISOU: 106.3
MIMID: 71.7
MITRY: 112.7
MICMP: 86.0
ARCHI: 93.5

Now these numbers correctly reflect how 33 did at each event. MIMID has the lowest WMPR, and that's where 33 did the worst. Their stats at MICMP and ARCHI are now comparable to their district events.

OPR has proliferated because it's easy to understand (this robot scores X points per match). With this change, WMPR also becomes easier to understand (this robot scores and defends their opponents by X points per match). Since this adds the same constant to everybody's WMPR, it'll still predict the match winner and margin of victory with the same accuracy.

Thoughts?

Last edited by AGPapa : 27-05-2015 at 15:41.
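(If it helps, the proposed shift is a small adjustment on top of the least-norm WMPRs from the earlier sketch; the function names and match-list layout below are hypothetical.)

Code:
import numpy as np

def normalized_wmpr(wmpr_values, matches):
    """Shift least-norm WMPRs so they average to the points scored by an
    average robot at the event (average alliance score / 3)."""
    alliance_scores = [s for _, _, rs, bs in matches for s in (rs, bs)]
    offset = np.mean(alliance_scores) / 3.0   # average points per robot
    return {team: w + offset for team, w in wmpr_values.items()}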
#7
Re: Incorporating Opposing Alliance Information in CCWM Calculations
Quote:
I'll try to get to the verification on testing data in the next day or so.

I personally like this normalized WMPR (nWMPR?) better than EPR, as the interpretation is cleaner: we're just trying to predict the winning margin. EPR is trying to predict the individual scores and the winning margin while weighting all the residuals the same, which is a bit more ad hoc. On the other hand, one could look into which weighting gives the best overall result in terms of whatever measure folks care about.

I'm still most interested in how well a metric predicts the winning margin of a match (and in my FTC android apps I also hope to include an estimate of "probability of victory", which incorporates the expected winning margin and the standard deviation of that expectation along with the assumption of a normally distributed residual), and in using these as possible scouting/alliance selection aids (especially for lower picks). But other folks may be interested in using them for other things.
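(That "probability of victory" idea reduces to a normal CDF; here's a sketch of the stated assumption, not the app's actual code.)

Code:
from math import erf, sqrt

def win_probability(predicted_margin, residual_stdev):
    """P(positive winning margin) when the prediction error is assumed to be
    normally distributed around the predicted margin."""
    z = predicted_margin / (residual_stdev * sqrt(2.0))
    return 0.5 * (1.0 + erf(z))

# e.g. win_probability(10, 60) is about 0.57: a 10-point predicted edge with a
# 60-point residual stdev is barely better than a coin flip.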
#8
Re: Incorporating Opposing Alliance Information in CCWM Calculations
Here's a generalized perspective.
Let's say you pick r1, r2, r3, b1, b2, b3 to minimize the following error (summed over all matches, where R and B are the red and blue alliance scores):

E(w) = w*[ (R - B) - ( (r1+r2+r3) - (b1+b2+b3) ) ]^2 + (1-w)*[ (R - (r1+r2+r3))^2 + (B - (b1+b2+b3))^2 ]

If w = 1, you're computing the WMPR solution (or any of the set of WMPR solutions with unspecified mean).
If w = 0, you're computing the OPR solution.
If w = 1 - (a small epsilon), you're computing the nWMPR solution (the relative values will be the WMPR, but the mean will be selected to minimize the second part of the error, which will be the mean score in the tournament).
If w = 0.5, you're computing the EPR solution.

I wonder how the various predictions of winning margin, score, and match outcomes behave as w goes from 0 to 1?
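(A sketch of this blended objective as a stacked weighted least-squares problem, using the same hypothetical match-list layout as the earlier sketches.)

Code:
import numpy as np

def blended_ratings(matches, teams, w):
    """Minimize E(w) above: w=0 gives OPR, w=0.5 gives EPR, w near 1 gives
    (n)WMPR.  Each match contributes one margin row and two score rows."""
    idx = {t: i for i, t in enumerate(teams)}
    rows, rhs = [], []
    for red, blue, red_score, blue_score in matches:
        margin_row = np.zeros(len(teams))
        red_row = np.zeros(len(teams))
        blue_row = np.zeros(len(teams))
        for t in red:
            margin_row[idx[t]] = 1.0
            red_row[idx[t]] = 1.0
        for t in blue:
            margin_row[idx[t]] = -1.0
            blue_row[idx[t]] = 1.0
        # weighted least squares: scale each row by sqrt of its weight
        rows += [np.sqrt(w) * margin_row,
                 np.sqrt(1 - w) * red_row,
                 np.sqrt(1 - w) * blue_row]
        rhs += [np.sqrt(w) * (red_score - blue_score),
                np.sqrt(1 - w) * red_score,
                np.sqrt(1 - w) * blue_score]
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return dict(zip(teams, x))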
#9
Re: Incorporating Opposing Alliance Information in CCWM Calculations
Quote:
Again, I like it because it is one number instead of two. I like it because it has a better chance of predicting the outcome regardless of the game, rather than OPR being good for some games and WMPR being good for others.
#10
Re: Incorporating Opposing Alliance Information in CCWM Calculations
Quote:
And from my testing, the order of predictiveness goes WMPR > EPR > OPR. The only improvement EPR has over OPR is that it's half WMPR! Why not just go all the way and stick with WMPR?

Again, this is with using the training data as the testing data; if EPR is shown to be better when those are kept separate, then perhaps we should use it instead.

Last edited by AGPapa : 27-05-2015 at 15:38.
#11
Re: Incorporating Opposing Alliance Information in CCWM Calculations
Quote:
When we have more data (multiple years and multiple events) supporting WMPR as the best predictor of match outcome, then I will stop looking at OPR. But for a first-round pick in alliance selection, when you have no scouting data and want somebody for pure offense, OPR is still a good indicator.
#12
Re: Incorporating Opposing Alliance Information in CCWM Calculations
[Edit: The data has been updated to correct an error in the previous code. Previously, the data in the TESTING DATA sections was reported for the scaled-down versions of the metrics. Now, the data is reported for the unscaled metrics (though the last table for each tournament shows the benefit of scaling them, which is substantial!).]
Here's the data for the four 2014 tournaments starting with "A". My thoughts will be in a subsequent post: Code:
2014: archi
Teams = 100, Matches = 167, Matches Per Team = 1.670

TRAINING DATA
Stdev of winning margin prediction residual
OPR : 51.3.  66.9% of outcome variance predicted.
CCWM: 57.0.  59.2% of outcome variance predicted.
WMPR: 36.1.  83.6% of outcome variance predicted.

Match prediction outcomes
OPR : 142 of 166 (85.5 %)
CCWM: 146 of 166 (88.0 %)
WMPR: 154 of 166 (92.8 %)

TESTING DATA
Stdev of winning margin prediction residual
OPR : 72.1.  34.8% of outcome variance predicted.
CCWM: 85.2.   8.8% of outcome variance predicted.
WMPR: 89.3.  -0.1% of outcome variance predicted.

Match prediction outcomes
OPR : 127 of 166 (76.5 %)
CCWM: 124 of 166 (74.7 %)
WMPR: 123 of 166 (74.1 %)

Stdev of testing data winning margin prediction residual
with scaled versions of the metrics
Weight:  1.0   0.9   0.8   0.7   0.6   0.5
OPR:    72.1  70.8  70.2  70.3  71.2  72.8
CCWM:   85.2  80.3  76.3  73.5  71.9  71.7
WMPR:   89.3  84.3  80.3  77.3  75.4  74.7


2014: abca
Teams = 35, Matches = 76, Matches Per Team = 2.171

TRAINING DATA
Stdev of winning margin prediction residual
OPR : 59.8.  65.1% of outcome variance predicted.
CCWM: 62.9.  61.2% of outcome variance predicted.
WMPR: 51.5.  74.1% of outcome variance predicted.

Match prediction outcomes
OPR : 63 of 76 (82.9 %)
CCWM: 60 of 76 (78.9 %)
WMPR: 65 of 76 (85.5 %)

TESTING DATA
Stdev of winning margin prediction residual
OPR : 78.9.  39.1% of outcome variance predicted.
CCWM: 93.6.  14.4% of outcome variance predicted.
WMPR: 92.5.  16.3% of outcome variance predicted.

Match prediction outcomes
OPR : 56 of 76 (73.7 %)
CCWM: 55 of 76 (72.4 %)
WMPR: 55 of 76 (72.4 %)

Stdev of testing data winning margin prediction residual
with scaled versions of the metrics
Weight:  1.0   0.9   0.8   0.7   0.6   0.5
OPR:    78.9  77.9  77.6  78.2  79.6  81.6
CCWM:   93.6  89.5  86.4  84.3  83.4  83.7
WMPR:   92.5  88.8  86.1  84.3  83.6  84.1


2014: arfa
Teams = 39, Matches = 78, Matches Per Team = 2.000

TRAINING DATA
Stdev of winning margin prediction residual
OPR : 45.8.  61.4% of outcome variance predicted.
CCWM: 46.6.  60.1% of outcome variance predicted.
WMPR: 38.2.  73.1% of outcome variance predicted.

Match prediction outcomes
OPR : 59 of 78 (75.6 %)
CCWM: 66 of 78 (84.6 %)
WMPR: 64 of 78 (82.1 %)

TESTING DATA
Stdev of winning margin prediction residual
OPR : 61.8.  29.8% of outcome variance predicted.
CCWM: 71.7.   5.6% of outcome variance predicted.
WMPR: 75.4.  -4.5% of outcome variance predicted.

Match prediction outcomes
OPR : 55 of 78 (70.5 %)
CCWM: 53 of 78 (67.9 %)
WMPR: 49 of 78 (62.8 %)

Stdev of testing data winning margin prediction residual
with scaled versions of the metrics
Weight:  1.0   0.9   0.8   0.7   0.6   0.5
OPR:    61.8  61.0  60.6  60.8  61.4  62.5
CCWM:   71.7  68.4  65.9  64.1  63.1  62.9
WMPR:   75.4  71.9  69.1  66.9  65.5  64.9


2014: azch
Teams = 49, Matches = 82, Matches Per Team = 1.673

TRAINING DATA
Stdev of winning margin prediction residual
OPR : 36.3.  78.2% of outcome variance predicted.
CCWM: 37.8.  76.4% of outcome variance predicted.
WMPR: 25.5.  89.2% of outcome variance predicted.

Match prediction outcomes
OPR : 66 of 79 (83.5 %)
CCWM: 68 of 79 (86.1 %)
WMPR: 73 of 79 (92.4 %)

TESTING DATA
Stdev of winning margin prediction residual
OPR : 52.1.  54.9% of outcome variance predicted.
CCWM: 67.5.  24.6% of outcome variance predicted.
WMPR: 63.0.  34.3% of outcome variance predicted.

Match prediction outcomes
OPR : 59 of 79 (74.7 %)
CCWM: 56 of 79 (70.9 %)
WMPR: 66 of 79 (83.5 %)

Stdev of testing data winning margin prediction residual
with scaled versions of the metrics
Weight:  1.0   0.9   0.8   0.7   0.6   0.5
OPR:    52.1  52.1  52.8  54.2  56.2  58.7
CCWM:   67.5  65.7  64.6  64.1  64.2  65.0
WMPR:   63.0  59.6  57.3  56.1  56.1  57.3

Last edited by wgardner : 28-05-2015 at 10:05. Reason: Data was for scaled metrics not unscaled metrics! Updated!
#13
Re: Incorporating Opposing Alliance Information in CCWM Calculations
[Edit: my previously posted results had mistakenly reported the values for the scaled versions of OPR, CCWM, and WMPR as the unscaled values (!). Conclusions are somewhat changed as noted below.]
So my summary of the previous data:

WMPR always results in the smallest training data winning margin prediction residual standard deviation. (Whew, try saying that 5 times fast.) WMPR is also very good at predicting training data match outcomes. For some reason, CCWM beats it in 1 tournament but otherwise WMPR is best in the other 3.

But on the testing data, things go haywire. There are significant drops in performance in predicting winning margins for all 3 stats, showing that all 3 stats are substantially overfit. Frequently, all 3 stats give better performance at predicting winning margins by using scaled down versions of the stats. The WMPR in particular is substantially overfit (look for a later post with a discussion of this).

BTW, it seems like some folks are most interested in predicting match outcomes rather than match statistics. If that's really what folks are interested in, there are probably better ways of doing that (e.g., with linear models but where the error measure better correlates with match outcomes, or with non-linear models). I'm going to ponder that for a while...

Last edited by wgardner : 28-05-2015 at 10:03. Reason: Major updates!
#14
Re: Incorporating Opposing Alliance Information in CCWM Calculations
I've been watching this thread because I'm really interested in a more useful statistic for scouting--a true DPR. I think this path may be a fruitful way to arrive at that point.
Currently the DPR doesn't measure how a team's defensive performance causes the opposing alliance to deviate from its predicted OPR. The current DPR calculation simply assumes that the opposing alliances' OPRs are distributed such that they are most likely to converge on the tournament average. Unfortunately, that's only true if a team plays a very large number of matches that capture the potential alliance combinations; instead we're working with a small sample set that is highly influenced by the individual teams included in each alliance.

Running the DPR separately across the opposing alliances becomes a two-stage estimation problem, in which 1) the OPRs are estimated for the opposing alliance and 2) the DPR is estimated against those predicted OPRs. The statistical properties become interesting and the matrix quite large.

I'll be interested to see how this comes out. Maybe you can report the DPRs as well.
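(One possible reading of that two-stage idea, sketched under the same hypothetical match-list layout as the earlier examples; this is just a strawman to make the discussion concrete, not an established stat.)

Code:
import numpy as np

def two_stage_dpr(matches, teams, opr):
    """Stage 1: take each opposing alliance's predicted score as the sum of
    its OPRs.  Stage 2: regress how far that alliance fell short of its
    prediction on the three defending teams.  Positive values mean a team's
    opponents score fewer points than their OPRs predict."""
    idx = {t: i for i, t in enumerate(teams)}
    A = np.zeros((2 * len(matches), len(teams)))
    b = np.zeros(2 * len(matches))
    row = 0
    for red, blue, red_score, blue_score in matches:
        for defenders, scorers, score in ((red, blue, blue_score),
                                          (blue, red, red_score)):
            for t in defenders:
                A[row, idx[t]] = 1.0
            b[row] = sum(opr[t] for t in scorers) - score   # points denied
            row += 1
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(teams, x))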
#15
Re: Incorporating Opposing Alliance Information in CCWM Calculations
I tested how well EPR predicted match outcomes in the four events in 2014 beginning with "a". These tests excluded the match being tested from the training data and recomputed the EPR.
EPR:
ABCA: 59 out of 76 (78%)
ARFA: 50 out of 78 (64%)
AZCH: 63 out of 79 (78%)
ARCHI: 123 out of 166 (74%)

And as a reminder, here's how OPR did (as found by wgardner):

OPR:
ABCA: 56 out of 76 (74%)
ARFA: 55 out of 78 (71%)
AZCH: 59 out of 79 (75%)
ARCHI: 127 out of 166 (77%)

So over these four events OPR successfully predicted 297 matches and EPR successfully predicted 295.

Last edited by AGPapa : 28-05-2015 at 14:30.