Quote:
Originally Posted by IKE
I would recommend using 2008 OPR distributions to see what negative penalties could do to the game. It was a game that tracked well to OPR and there were a fair amount of penalties.
Also, how well does your curve shape match the OPR curve shapes for 2013, 2012 (modified one), 2010, and 2008?
Dunno, never graphed it/compared to OPR. I'll see if I can drag out an OPR curve for 2008 and use it to generate skill levels.
I recognized that the only output that matters is the final ranking; there's a whole analysis section that deals only with it. I just felt it was interesting to watch how teams moved through the rankings as matches progressed, specifically how the rankings eventually reach a stable point for teams at the top and bottom. The middle needs more matches to settle out since those teams tend to be closer in skill. I was looking into adding an Average Error value (summing abs(actualRank - expectedRank) for each team and dividing by the number of teams), but I just didn't get around to it before this went live (something something build season).
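Something like this is what I had in mind for the Average Error value. Just a minimal sketch; the dicts of actual and expected ranks keyed by team number are placeholders for however the sim actually stores its rankings:

[code]
def average_rank_error(actual_rank, expected_rank):
    """Mean absolute difference between actual and expected rank."""
    total = sum(abs(actual_rank[t] - expected_rank[t]) for t in actual_rank)
    return total / len(actual_rank)

# Example: three teams, two of them swapped relative to expectation.
actual = {254: 1, 1114: 3, 2056: 2}
expected = {254: 1, 1114: 2, 2056: 3}
print(average_rank_error(actual, expected))  # (0 + 1 + 1) / 3 = 0.67
[/code]

Lower is better, and watching that number shrink as more matches are played would show the same "settling out" effect as the rank plots, just as a single value.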
I suppose we could take this model even further and simulate picks (assume each team picks the best available robot, plus some sort of metric for declines), then play out elims to see which teams end up "qualifying" from the event, since, really, for Regionals/CMP Divisions/CMP the only output that matters is the winning alliance. But I question the value of this, since the outcome is driven much more by teams' ability to pick an alliance than by FIRST's rules (at least this year; other years are another story).
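For what it's worth, a very rough sketch of the pick simulation could look like the following. The flat decline_prob stands in for the "metric for declines" I mentioned, and the ranking list and skill dict are hypothetical inputs, not something the model currently produces:

[code]
import random

def simulate_selection(ranking, skill, decline_prob=0.1,
                       num_alliances=8, picks_per_captain=2):
    available = set(ranking)   # teams not yet on an alliance
    declined = set()           # teams that turned down an invite (can still captain)
    alliances = []
    for _seed in range(num_alliances):
        # Highest-ranked team still available becomes the next captain.
        captain = next(t for t in ranking if t in available)
        available.discard(captain)
        alliance = [captain]
        for _ in range(picks_per_captain):
            # Invite remaining teams in descending skill order until one accepts.
            for pick in sorted(available - declined,
                               key=lambda t: skill[t], reverse=True):
                if random.random() < decline_prob:
                    declined.add(pick)   # declined teams can't be picked again
                    continue
                available.discard(pick)
                alliance.append(pick)
                break
        alliances.append(alliance)
    return alliances

# Toy field of 24 teams whose skill happens to match rank order.
teams = list(range(1, 25))
skills = {t: 25 - t for t in teams}
for a in simulate_selection(teams, skills):
    print(a)
[/code]

The elims bracket would then get played out with the same match model used for quals, but as I said, I'm not convinced the output tells us much about the ranking system itself.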