Quote:
Originally Posted by Lil' Lavery
Hmm, was hoping for a more complete data set to try and grade the accuracy of OPR for 2014. Still a nice paper, and an interesting first look at accuracy in scouting.
|
I tried to do a project like this back in 2012 using data from 2010-2012 (I used 2337's datasets for this period, 2-3 events per year). While this was somewhat useful for comparing the utility of OPR across different years, I ran into difficulty comparing OPRs directly to scouting data, because extrapolating scores from teams' statistics is a nontrivial task. OPRs are, of course, computed directly from match scores, so this extrapolation is necessary for comparison. For example, in 2011, tube scoring did not scale linearly. You'd encounter the same problem in 2014, where the points scored per ball depend on the number of assists. However, it's definitely a topic worthy of further study.
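To illustrate why OPR is tied directly to match scores: the standard approach treats each alliance's score as the sum of its members' contributions and solves the resulting least-squares system. Here's a minimal sketch with made-up match data (4 teams, 4 matches; the team names and scores are hypothetical, not from any real event):

```python
import numpy as np

# Hypothetical mini-dataset: 4 teams, 4 qualification matches.
# Each row of A marks which teams were on the alliance (1 = present),
# and b holds that alliance's final match score.
teams = ["T1", "T2", "T3", "T4"]
A = np.array([
    [1, 1, 0, 0],  # T1 + T2 scored 60
    [0, 0, 1, 1],  # T3 + T4 scored 40
    [1, 0, 1, 0],  # T1 + T3 scored 55
    [0, 1, 0, 1],  # T2 + T4 scored 45
], dtype=float)
b = np.array([60.0, 40.0, 55.0, 45.0])

# OPR is the least-squares solution of A x = b: each team's estimated
# per-match contribution to its alliance's score.
opr, *_ = np.linalg.lstsq(A, b, rcond=None)
for team, val in zip(teams, opr):
    print(f"{team}: {val:.1f}")
```

Since the inputs are nothing but match scores, any nonlinearity in how scores are earned (tube multipliers in 2011, assist-scaled ball points in 2014) has to be untangled before OPR can be compared against per-task scouting data.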
Quote:
Originally Posted by XaulZan11
The sample size may be small, but did you find a correlation between amount of data taken and accuracy? (My personal opinion/assumption is that teams generally take way too much data which makes it harder to scout, hurting the data teams actually use).
|
Interesting question! This isn't something I originally considered, but I took a look. With only 5 data points, 4 of which are pretty similar in accuracy, I wasn't optimistic about finding a real answer, and it got worse: all 5 teams collected roughly the same amount of data, ranging from 16 to 21 fields. No two teams recorded exactly the same number of fields, and the most accurate team sat right in the middle at 18.
__________________
Team 2337 | 2009-2012 | Student
Team 3322 | 2014-Present | College Student
“Be excellent in everything you do and the results will just happen.”
-Paul Copioli