Quote:
Originally Posted by Michael Hill
My baseline was just using OPR for predicting match outcomes, it was able to predict about 77.1% of the matches this year. This was calculated by adding up the OPRs of each alliance and comparing with the result of the match. TrueSkill was able to predict 79.0% of the matches, a pretty good improvement. I need to develop the prediction model a bit better because it currently doesn't take into account the standard deviation as a measure of certainty. The modified Elo system was able to predict 79.5% of matches, an improvement over TrueSkill. The baseline, unadulterated Elo system as used in this thread was able to predict a whopping 81.4% of matches, by far the best out of any of these models.
Try calculating "OPR" by minimizing the L1 norm of the residuals (least absolute deviations, LAD) instead of the L2 norm (least squares), and see how that compares.
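
For anyone who wants to try it, here is a minimal sketch of that comparison, assuming the usual OPR setup: a 0/1 design matrix with one row per alliance per match, one column per team, and the alliance score as the target. The function names, toy matrix, and scores are made up for illustration; the LAD fit uses the standard slack-variable linear-programming formulation via scipy.optimize.linprog (statsmodels' QuantReg at q=0.5 would work just as well).

```python
import numpy as np
from scipy.optimize import linprog

def opr_least_squares(A, scores):
    """Standard OPR: minimize the L2 norm of the residuals A @ x - scores."""
    x, *_ = np.linalg.lstsq(A, scores, rcond=None)
    return x

def opr_lad(A, scores):
    """LAD variant: minimize the L1 norm of the residuals via a linear program.

    Introduces slack variables t >= |A @ x - scores| and minimizes sum(t),
    encoded as  A @ x - t <= scores  and  -A @ x - t <= -scores.
    """
    n_rows, n_teams = A.shape
    c = np.concatenate([np.zeros(n_teams), np.ones(n_rows)])  # cost only on slacks
    I = np.eye(n_rows)
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([scores, -scores])
    bounds = [(None, None)] * n_teams + [(0, None)] * n_rows  # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n_teams]

# Toy data: 3 teams, 5 alliance-score observations; the 80 is an outlier
# for the (team 1, team 2) alliance that otherwise scores around 40.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
scores = np.array([40.0, 42.0, 80.0, 55.0, 45.0])

print("least squares:", opr_least_squares(A, scores))
print("LAD:          ", opr_lad(A, scores))
```

With the repeated-alliance outlier in the toy data, the least-squares contributions get pulled toward the 80-point fluke while the LAD contributions stay near the alliance's typical output, which is the robustness argument for trying the L1 fit on real match data.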