#9 | 20-01-2015, 08:47
Andrew Schreiber (Joining the 900 Meme Team)
FRC #0079
Join Date: Jan 2005 | Rookie Year: 2000 | Location: Misplaced Michigander | Posts: 4,068
Re: [beyondinspection] 2015 Ranking System

Quote:
Originally Posted by IKE
I would recommend using 2008 OPR distributions to see what negative penalties could do to the game. It was a game that tracked well to OPR and there were a fair amount of penalties.

Also, how well does your curve shape match the OPR curve shapes for 2013, 2012 (modified one), 2010, and 2008?
Dunno, I never graphed it or compared it against OPR. I'll see if I can drag out an OPR curve for 2008 and use it to generate skill levels.
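A minimal sketch of what that could look like, assuming the 2008 OPRs live in a plain one-value-per-line text file (opr_2008.txt is a made-up name) and that a normal fit is good enough to stand in for the real curve shape:

Code:
import numpy as np

# opr_2008.txt is a hypothetical file: one OPR value per line,
# exported from whatever you use to compute OPRs.
opr_values = np.loadtxt("opr_2008.txt")

# Fit a plain normal distribution to the historical OPRs. The real
# 2008 curve is probably right-skewed, so a log-normal fit or just
# resampling the empirical values might match its shape better.
mu, sigma = opr_values.mean(), opr_values.std()

# Draw a skill level for each team at a simulated 40-team event,
# clipping at zero since a negative scoring skill doesn't mean much.
rng = np.random.default_rng()
skill_levels = rng.normal(mu, sigma, size=40).clip(min=0)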


I recognize that the only output that matters is the final ranking; there's a whole analysis section that deals only with it. I just felt it was interesting to watch how teams moved through the rankings as matches progressed. Specifically, the rankings eventually reach a stable point for teams at the top and bottom, while the middle needs more matches to settle out, since those teams tend to be closer in skill. I was looking into adding an Average Error value (summing abs(actualRank - expectedRank) for each team and dividing by the number of teams), but I just didn't get around to it before this went live (something something build season).
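For what it's worth, that Average Error is just the mean absolute rank error, so the calculation itself is tiny. A sketch, with the function name and dict-of-ranks inputs being illustrative rather than anything from the actual sim:

Code:
def average_rank_error(actual_ranks, expected_ranks):
    """Mean absolute difference between actual and expected rank.

    Both arguments map team number -> rank (1 = first place) and
    must cover the same set of teams.
    """
    if actual_ranks.keys() != expected_ranks.keys():
        raise ValueError("rankings must cover the same teams")
    total = sum(abs(actual_ranks[t] - expected_ranks[t]) for t in actual_ranks)
    return total / len(actual_ranks)

# Example: three teams, two of them swapped relative to expectation.
actual = {254: 1, 900: 2, 79: 3}
expected = {254: 2, 900: 1, 79: 3}
print(average_rank_error(actual, expected))  # (1 + 1 + 0) / 3 ≈ 0.67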


I suppose we could take this model even further and simulate picks (assume each alliance picks the best available robot, with some sort of metric for declines), then play out the eliminations to see which teams end up "qualifying" from the event, since, really, for Regionals/CMP Divisions/CMP the only output that matters is the winning alliance. But I question the value of this, since it is driven much more by teams' ability to pick an alliance than by FIRST's rules (at least this year; other years are another story).
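If anyone wants to poke at the pick phase anyway, a greedy sketch is below: each captain invites the best available team by skill, and every invite has a flat chance of being declined. The decline_prob knob is invented for illustration, and the real decline rule (a team that declines can only be a captain afterward) is stricter than this models:

Code:
import random

def simulate_picks(skill_by_team, num_alliances=8, decline_prob=0.1):
    """Greedy alliance selection sketch.

    skill_by_team maps team number -> skill. The top num_alliances
    teams seed as captains; each invite is independently declined
    with probability decline_prob. A declining team stays pickable
    here, which is looser than the real FRC rule.
    """
    ranked = sorted(skill_by_team, key=skill_by_team.get, reverse=True)
    alliances = [[captain] for captain in ranked[:num_alliances]]
    pool = ranked[num_alliances:]

    # Round 1 goes in seed order, round 2 snakes back (serpentine).
    for order in (alliances, list(reversed(alliances))):
        for alliance in order:
            for candidate in list(pool):
                if random.random() < decline_prob:
                    continue  # declined; ask the next-best team
                alliance.append(candidate)
                pool.remove(candidate)
                break
    return alliances

Running that in front of a simple eliminations bracket many times would at least show how sensitive the winning-alliance output is to the decline model versus the ranking rules.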