Re: "standard error" of OPR values
Quote:
Originally Posted by wgardner
Yes. In the paper in the other thread that I just posted about, the appendices show how much percentage reduction in the mean-squared residual is achieved by all of the different metrics (OPR, CCWM, WMPR, etc). An interesting thing to note is that the metrics are often much worse at predicting match results that they haven't included in their computation, indicating overfitting in many cases.
I don't think this necessarily indicates "overfitting" in the traditional sense of the word - you will always get an artificially low estimate of your error when you test your model against the same data you used to tune it, whether or not your model is overfitting (the only way to avoid this is to partition your data into separate training and verification sets). This is known as "double dipping."
Rather, it would be overfitting if the predictive power of the model (when tested against data not used to tune it) did not increase with the amount of data available to tune the parameters. I highly doubt that is the case here.
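To make the "double dipping" point concrete, here is a minimal sketch (not from the post; all team strengths, match counts, and noise levels are made-up assumptions) that fits an OPR-style least-squares model on synthetic match data and compares the mean-squared residual on the matches used for fitting against held-out matches:

```python
import numpy as np

rng = np.random.default_rng(0)
n_teams, n_matches = 30, 120

# Hypothetical "true" team strengths.
true_opr = rng.normal(50, 15, n_teams)

# Each match: a random 3-team alliance; score = sum of member OPRs + noise.
A = np.zeros((n_matches, n_teams))
for i in range(n_matches):
    alliance = rng.choice(n_teams, 3, replace=False)
    A[i, alliance] = 1.0
scores = A @ true_opr + rng.normal(0, 20, n_matches)

# Partition into a training set and a verification set (no double dipping).
split = n_matches * 2 // 3
opr_hat, *_ = np.linalg.lstsq(A[:split], scores[:split], rcond=None)

# The residual on the fitting data is biased low regardless of whether
# the model is "overfit"; the held-out residual is the honest estimate.
mse_in = np.mean((A[:split] @ opr_hat - scores[:split]) ** 2)
mse_out = np.mean((A[split:] @ opr_hat - scores[split:]) ** 2)
print(f"in-sample MSE:  {mse_in:.1f}")
print(f"held-out MSE:   {mse_out:.1f}")  # typically larger than in-sample
```

With 30 fitted parameters and only 80 training matches, the in-sample residual understates the true error even though the model is a perfectly reasonable linear fit, which is the distinction being drawn here.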
__________________
"Mmmmm, chain grease and aluminum shavings..."
"The breakfast of champions!"
Member, FRC Team 449: 2007-2010
Drive Mechanics Lead, FRC Team 449: 2009-2010
Alumnus/Technical Mentor, FRC Team 449: 2010-Present
Lead Technical Mentor, FRC Team 4464: 2012-2015
Technical Mentor, FRC Team 5830: 2015-2016
Last edited by Oblarg : 13-07-2015 at 22:01.