#5, 17-01-2015, 03:08
SoftwareBug2.0 (Eric), Mentor, FRC #1425 (Error Code Xero)
Re: [beyondinspection] 2015 Ranking System

I'm glad to see that this year's ranking system appears to correlate with a traditional measure of robot performance. I've had different opinions on FIRST's tiebreakers over the years, but never thought enough about them to quantify anything. If we wanted to take this really seriously, we might be able to come up with better metrics than the correlation coefficient with OPR.

So, let's take this way too seriously. First, the final output is ordinal rather than continuous data, so we might want to correlate the rankings between systems rather than the raw numbers. Second, only a subset of the results actually matters to the tournament, so we might want to consider only that portion. In particular, the rankings determine who can be which alliance captain, so being ranked 1st vs. 8th matters, but being ranked 17th vs. 30th means nothing.
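
Since the output is ordinal, Spearman's rho (ordinary correlation applied to the ranks) is the natural replacement for a raw correlation coefficient. A minimal Python sketch, using made-up rankings and assuming no ties:
Code:
# Spearman's rho for two tie-free rankings of the same teams:
# rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
def spearman_rho(rank_a, rank_b):
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical ranks for five teams under two systems.
qs_rank = [1, 2, 3, 4, 5]
opr_rank = [2, 1, 3, 5, 4]
print(spearman_rho(qs_rank, opr_rank))  # 0.8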

We might also want to consider how a ranking error affects the tournament. For example, consider the following two cases:

1) The best robot is ranked last, but all others are in order.
2) The worst robot is ranked first, but all others are in order.

In the first case, the robot that should have been ranked first will most likely be picked early, and the alliances will look roughly like they would have if the rankings had been perfect.

In the second case, I think the results would be much more severe. First, it guarantees an eliminations slot to a team that shouldn't have one. Second, it's likely to change all of the alliance pairings because there will be a bunch of declines. This reduces the value of being the 2nd or 3rd alliance captain, and some team ranked just outside the top 8 will be hosed.
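
For what it's worth, a symmetric rank correlation can't tell these two cases apart: moving the best robot to last and moving the worst robot to first shuffle the ranks by exactly the same amounts, so Spearman's rho comes out identical. A quick sketch with a hypothetical 30-team event:
Code:
# Spearman's rho (no ties), as in the earlier sketch.
def spearman_rho(rank_a, rank_b):
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

n = 30
true_rank = list(range(1, n + 1))   # lists are ordered best robot first
best_last = [n] + true_rank[:-1]    # case 1: best robot ranked last, rest in order
worst_first = true_rank[1:] + [1]   # case 2: worst robot ranked first, rest in order

print(spearman_rho(true_rank, best_last))    # ~0.806
print(spearman_rho(true_rank, worst_first))  # ~0.806, identical
That symmetry is exactly what a measure weighted toward the top of the rankings would need to break.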

I wonder what you'd get for something like this (a quick code sketch follows the example below):
1) Determine a rank based on the OPR from that event.
2) Determine a rank by the desired metric (QS, QA, etc.).
3) For each of the top 10 (or so) positions, if the OPR rank is worse (numerically higher) than the rank by the desired metric, add the difference.
4) Compare totals; the metric with the lower total is better.

For example, if you had something like this:
Code:
QS rank|OPR rank
1      |3      
2      |5      
3      |2      
4      |1      
5      |12
6      |8
7      |4
8      |11
9      |6
10     |9
Then you'd get: 2+3+0+0+7+2+0+3+0+0 = 17 for QS.
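
In code, the whole penalty calculation is one line per metric; a minimal Python sketch using the table above (the variable names are just for illustration):
Code:
# Top-10 penalty: for each position, add the amount by which the
# OPR rank is worse (numerically higher) than the QS rank.
qs_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
opr_rank = [3, 5, 2, 1, 12, 8, 4, 11, 6, 9]

penalty = sum(max(o - q, 0) for q, o in zip(qs_rank, opr_rank))
print(penalty)  # 17, matching the hand calculation above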

Anyway, this is pretty ad hoc. I'm sure there's a nicer way to do it.