27-05-2015, 13:22
AGPapa (Antonio Papa), FRC #5895, Mentor
Re: Incorporating Opposing Alliance Information in CCWM Calculations

Quote:
Originally Posted by wgardner
Yes, this makes sense if you want to compare results across events. Sounds like a good idea, though perhaps then it needs a different name as it's not a WM measure? Also, if I continue to find that scaling the WMPRs down does a better job at winning margin prediction, that needs to be done before the average event score/3 is added in.

I'll try to get to the verification on testing data in the next day or so.

I personally like this normalized WMPR (nWMPR?) better than EPR as the interpretation is cleaner: we're just trying to predict the winning margin. EPR is trying to predict the individual scores and the winning margin and weighting the residuals all the same. It's a bit more ad-hoc. On the other hand, one could look into which weightings result in the best overall result in terms of whatever measure of result folks care about.
I'd still consider it a WM measure, since it doesn't only take offense into account (like OPR does). This WMPR tells us how many points a robot will add to its own alliance's score and how many it will take away from the opponents'; that sounds like winning margin to me, no? I don't really like the nWMPR name; it's long and slightly confusing. I think this thread should work out the kinks in this new statistic and call the final product "WMPR".

In order for this to catch on, it should:
1. Be better than OPR at predicting the winner of a match
2. Be easy to understand
3. Have a catchy name
4. Apply very well to all modern FRC games
5. Be easy to compare across events

I think that by adding in the average score and calling it "WMPR" we accomplish all of those things. 2015 is probably the strangest game we've had (and I would think the worst for WMPR), and yet WMPR still works pretty well.
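
To make that concrete, the normalization is just a constant shift. A minimal sketch in Matlab (wmpr and allianceScores are illustrative names I'm making up here, not from any existing tool):

% wmpr: vector of per-team WMPRs for one event (illustrative name)
% allianceScores: that event's qualification alliance scores (illustrative name)
nWMPR = wmpr + mean(allianceScores)/3;

Each alliance has three robots, so shifting every team up by a third of the average alliance score puts the numbers on an expected-points scale that can be compared across events.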

I'm not sure why scaling down gives you better results at predicting the margin. I know you said it decreases the variance of the residuals, but does it also introduce bias? Would you propose a universal scaling factor, or one dependent on the event/game?


Quote:
Originally Posted by Ed Law
I think EPR would be more accurate in predicting match scores. Would somebody like to test it out?
Another reason I like EPR is that it is easier to compute without all that SVD stuff. I would prefer high school students to be able to understand and implement this on their own.
You actually don't need to know anything about singular value decomposition to understand WMPR. It can be explained simply like this:

Ax=b

where A encodes who played on which alliance in each match, b is the margin of victory in each match, and x is each robot's contribution to that margin. You'd expect x to be the inverse of A times b, but A isn't invertible, so we use the pseudoinverse of A instead.

In Matlab the code is

x = pinv(A)*b

And that's it, pretty simple.
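
If it helps to see where A and b come from, here's a rough sketch of building them from one event's qualification schedule (the variable names and the +1/-1 bookkeeping are just my illustration of the setup above, not code from any existing tool):

% redTeams, blueTeams: m-by-3 matrices of team indices (assumed already loaded)
% redScore, blueScore: m-by-1 vectors of alliance scores (assumed already loaded)
m = size(redTeams, 1);                    % number of matches
n = max([redTeams(:); blueTeams(:)]);     % number of teams
A = zeros(m, n);
b = zeros(m, 1);
for i = 1:m
    A(i, redTeams(i,:))  =  1;            % red alliance members get +1
    A(i, blueTeams(i,:)) = -1;            % blue alliance members get -1
    b(i) = redScore(i) - blueScore(i);    % winning margin, red minus blue
end
x = pinv(A)*b;                            % least-squares margin contribution per team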

I agree with you, though, that the ultimate test would be how it performs in predicting matches. I compared it to WMPR on the 2014 Archimedes division, although that was done with the training data used as the testing data, so it's probably not the best test.
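
If anyone wants to run that test themselves, one simple scoring rule is to check how often the sign of the predicted margin matches the actual result, ideally with A and b built from matches that weren't used to fit x. A quick sketch of one way to do it:

predMargin = A*x;                              % predicted winning margins (A, b from the testing matches)
correct = sum(sign(predMargin) == sign(b));    % matches where the predicted winner was right
fprintf('Predicted the winner in %d of %d matches\n', correct, length(b));
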
__________________
Team 2590 Student [2011-2014]
Team 5684 Mentor [2015]
Team 5895 Mentor [2016-]

Last edited by AGPapa: 27-05-2015 at 13:35.