Quote:
Originally Posted by Ether
...But yes, arguably more so this year.
This is an interesting argument. I know we don't have the data to address it directly, but is there a way to examine it by proxy, at least ordinally? For instance, we don't have enough live scouting data, but we do have draft order. If we posit that teams draft based on the real scouting data that OPR attempts to replicate*, are these data available in a form that allows for easy comparison? As a quick check, I compared the top 15 OPRs to their draft order at 3 random 2015 events. ("Random" is used here non-technically to mean "the first three I clicked on in TBA".) I found that the average absolute differences in rank were 2.3, 1.2, and 1.3. The medians were even lower. This seems pretty good to me, but I haven't taken the time to do it more comprehensively or with other years.
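For anyone who wants to run this comparison themselves, here's a rough sketch of the calculation I did by hand. The team numbers and orderings are made up for illustration; real data would come from an event's OPR table and alliance-selection results on TBA.

```python
# Compare two orderings of the same teams (OPR rank vs. draft order)
# by the absolute difference in each team's position.
from statistics import mean, median

def rank_map(ordering):
    """Map each team to its 1-indexed position in the ordering."""
    return {team: i + 1 for i, team in enumerate(ordering)}

def rank_differences(opr_order, draft_order):
    """Absolute rank differences for teams appearing in both lists."""
    opr_rank = rank_map(opr_order)
    draft_rank = rank_map(draft_order)
    common = set(opr_rank) & set(draft_rank)
    return [abs(opr_rank[t] - draft_rank[t]) for t in common]

# Hypothetical top-5 orderings at one event:
opr_top = [254, 1678, 971, 118, 2056]
draft_top = [1678, 254, 118, 971, 2056]

diffs = rank_differences(opr_top, draft_top)
print(mean(diffs), median(diffs))
```

Intersecting the two lists first matters in practice, since a team near the top of the OPR table may never be drafted at all (e.g. an alliance captain), and vice versa.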
Of course, this approach also only covers the top 24 or so teams at an event. On the other hand, identifying those teams is the main reason most teams scout in the first place.
*This is an assumption whose accuracy varies year-over-year, and also between events and teams, but I'll assume those variations have negligible effects on the YoY rankings for now.