Originally Posted by Nimbus
I haven't read everything entirely in-depth, but in a game like this year's, averaging scores isn't as accurate a measurement of how good a robot is. Do you account for this in any way? To me it seems that to accurately predict matches, better data would be required.
Are you referring specifically to any of my books or just speaking generally?
I'm assuming the latter here. My general match prediction algorithm is a raw average of a predicted-contribution win probability and an Elo win probability. Neither of these methods "averages scores" in the way you seem to be describing. Both of them do only incorporate raw match scores though, not any scouting data or detailed score breakdowns. I'm looking to make a second, more advanced Elo model soon which incorporates some aspects of the published score breakdowns; that will hopefully be noticeably more predictive than my current Elo model.
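For anyone curious, here's a minimal sketch of what that raw average could look like. This is my illustration, not the actual model: I'm assuming a standard logistic Elo curve and a normal distribution for the predicted score difference, and the function names and the `sd=20.0` default are made up for the example.

```python
import math


def elo_win_probability(red_elo: float, blue_elo: float) -> float:
    """Standard logistic Elo win probability on the usual 400-point scale."""
    return 1.0 / (1.0 + 10 ** ((blue_elo - red_elo) / 400.0))


def contribution_win_probability(red_pred: float, blue_pred: float,
                                 sd: float) -> float:
    """Win probability from predicted contributions, assuming the score
    difference is roughly normal with standard deviation `sd`.
    Uses the normal CDF, computed via the error function."""
    diff = red_pred - blue_pred
    return 0.5 * (1.0 + math.erf(diff / (sd * math.sqrt(2.0))))


def match_win_probability(red_elo: float, blue_elo: float,
                          red_pred: float, blue_pred: float,
                          sd: float = 20.0) -> float:
    """Raw (unweighted) average of the two win probabilities."""
    return 0.5 * (elo_win_probability(red_elo, blue_elo)
                  + contribution_win_probability(red_pred, blue_pred, sd))
```

So two evenly matched alliances come out near 50%, and both components have to agree for the prediction to be confident in either direction.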
Not sure I answered your question though, so let me know if you were asking something else.