About four weeks overdue, I finally went back and measured how predictive the various methods were of match outcomes. Here are the results for week 4 competitions, excluding the Israel champs. Values are Brier scores; a lower Brier score means higher predictive power.
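For concreteness, the Brier score is just the mean squared error between predicted win probabilities and actual results. A minimal sketch, with hypothetical numbers (the function and values below are illustrative, not my actual tooling):

```python
def brier_score(predictions, outcomes):
    """Mean squared error between predicted probabilities and results.

    predictions: P(red alliance wins) for each match
    outcomes: 1 if red actually won, else 0
    """
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

preds = [0.9, 0.4, 0.7]  # hypothetical pre-match win probabilities
wins = [1, 0, 1]         # actual results
print(brier_score(preds, wins))  # → ~0.087; 0.0 would be perfect
```

A model that always predicted 0.5 would score 0.25 on every match, which is a handy baseline to keep in mind when comparing methods.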
As expected, my Elo model holds a slight but appreciable edge over a simple calculated contribution to total points (OPR) model. I still believe it would be possible to improve the calculated contributions model enough to beat my Elo model, but I have yet to spend sufficient time developing anything that could prove or disprove this theory.
The most powerful predictor we have remains the simple average of the calculated contributions and Elo predictions, so that is what I will be using for match predictions going forward (so far I had been using Elo alone, simply because I hadn't done a proper analysis until now).
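The blended predictor really is that simple: take the win probability from each model and average them. A sketch, assuming both component probabilities have already been computed (the function name and inputs are hypothetical):

```python
def blended_prediction(elo_win_prob, opr_win_prob):
    """Average the Elo and calculated-contribution (OPR) win probabilities."""
    return (elo_win_prob + opr_win_prob) / 2

# hypothetical example: Elo says 75%, calculated contributions say 25%
print(blended_prediction(0.75, 0.25))  # → 0.5
```

Averaging tends to help when the two models' errors are not perfectly correlated, which is consistent with the blend beating either model on its own here.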
To the surprise of no one who follows my work, CCWM predictions are, as usual, awful relative to the other methods, but I thought I'd throw them in anyway since someone was bound to ask about them.
Compared with other years, these methods give us slightly less predictive power than in 2010 and slightly more than in 2016.
I’ll make a calibration curve for Elo sometime soon, but I’m confident it’s well calibrated.
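For anyone unfamiliar, a calibration curve just bins predictions and compares each bin's average prediction to its observed win rate; a well-calibrated model sits on the diagonal. A sketch of the idea, with a hypothetical helper and made-up data:

```python
def calibration_bins(predictions, outcomes, n_bins=10):
    """Group predictions into n_bins equal-width buckets and return
    (mean prediction, observed win rate) per non-empty bucket."""
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(predictions, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the top bin
        bins[idx].append((p, o))
    curve = []
    for b in bins:
        if b:
            mean_pred = sum(p for p, _ in b) / len(b)
            win_rate = sum(o for _, o in b) / len(b)
            curve.append((mean_pred, win_rate))
    return curve  # well calibrated when mean_pred ≈ win_rate in every bin
```

Plotting `mean_pred` against `win_rate` for each bucket gives the calibration curve; systematic deviation from the diagonal (e.g. 70% predictions winning only 60% of the time) would indicate over- or under-confidence.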