Re: [FVC]: Analysis Shows Improvement Possible in Ranking System
Let me preface this with the observation that perhaps FIRST intended the selection process to be purposefully uncertain. In that case, the process is working, and we should attempt to understand the rationale for this course of thinking rather than attempting to improve upon the system. If this is the case, it would be helpful if FIRST would acknowledge that it wants the system to have an element of randomness (and perhaps provide the reasoning).
Personally, I can come up with only three reasons for FIRST wanting a system such as this: (1) It teaches kids that the world is not necessarily fair or just. (2) It gives teams that have lesser robots an opportunity to experience the excitement of making it to the finals and thus inspires them to do better work in the future. (3) It adds greater emphasis to the scouting aspect of the game.
Both numbers 1 and 2 are probably valid, but I wonder about number 3. It would seem to me that you would want your top robots to become alliance captains. Because these are the teams that had the interest and energy to build a competitive robot, they are also the teams most likely to have invested time in devising and organizing a coherent scouting effort. They also know that they might be required to act as alliance captains. Contrast this with a team that "accidentally" becomes an alliance captain. These teams are less likely to have formed a rigorous scouting effort, and may not have done any work in this regard because they did not "expect" to be put in this position. That leaves them in the unfortunate circumstance of having to pick partners from incomplete or non-existent data, which can result in alliances that include robots from the bottom quartile of the field.
In some respects, I think the unreliability of the current system has to do with the rather large disparity in abilities between the top and bottom performing robots. The nature of these pairings, coupled with the rather large step values that a win/loss/tie system creates, makes the system prone to unreliable results. The introduction of the grouping method normalizes some of this out. But as could be seen in the simulations, grouping has much less effect if a system with greater granularity (such as total points) is used. Although I did not test this, my instinct is that grouping would also show less improvement if the robots were more evenly matched. I may attempt to simulate this.
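To make the comparison concrete, here is a minimal sketch of the kind of simulation I am describing. It is not my actual engine: the strength spread, the scoring noise, the number of rounds, and the simplified one-on-one match model are all assumptions made up for illustration. It simply seeds a field with a wide disparity in ability, runs random pairings, and measures how far each ranking metric (win/loss/tie QPs versus total points) drifts from the true ordering.

[CODE]
import random
import statistics

NUM_TEAMS = 32   # assumed field size
NUM_ROUNDS = 10  # assumed number of qualification rounds

def simulate_event(strengths, rounds=NUM_ROUNDS):
    """Return (qualification points, total points) for each team."""
    qp = [0] * len(strengths)
    total = [0.0] * len(strengths)
    teams = list(range(len(strengths)))
    for _ in range(rounds):
        random.shuffle(teams)
        # Pair teams off for head-to-head matches (simplified to 1v1).
        for a, b in zip(teams[::2], teams[1::2]):
            score_a = random.gauss(strengths[a], 10)
            score_b = random.gauss(strengths[b], 10)
            total[a] += score_a
            total[b] += score_b
            if score_a > score_b:
                qp[a] += 2        # win
            elif score_b > score_a:
                qp[b] += 2
            else:
                qp[a] += 1        # tie
                qp[b] += 1
    return qp, total

def rank_error(metric, strengths):
    """Sum of |true rank - seeded rank| over all teams (0 = perfect seeding)."""
    true_order = sorted(range(len(strengths)), key=lambda t: -strengths[t])
    seed_order = sorted(range(len(strengths)), key=lambda t: -metric[t])
    true_rank = {t: i for i, t in enumerate(true_order)}
    return sum(abs(true_rank[t] - i) for i, t in enumerate(seed_order))

# A wide spread of abilities, mirroring the disparity described above.
strengths = [random.uniform(10, 100) for _ in range(NUM_TEAMS)]

qp_errors, pts_errors = [], []
for _ in range(1000):
    qp, total = simulate_event(strengths)
    qp_errors.append(rank_error(qp, strengths))
    pts_errors.append(rank_error(total, strengths))

print("mean rank error, win/loss/tie:", statistics.mean(qp_errors))
print("mean rank error, total points:", statistics.mean(pts_errors))
[/CODE]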
As I see it right now, the biggest weakness with grouping is that it requires a precise number of teams--in increments of thirty-two. Perhaps this could be avoided by using the pairing algorithm that Blake and MM are working on. I would be happy to run it if they give me data (or I can give them the engine).
As MM pointed out, our culture expects winning to be rewarded, and using total points might not be as readily accepted or understood (although who really understands the rationale for scoring into your opponents' goals to boost QPs).
In any case, it is possible to create a system which adds "bonus" points to the winner's total points tally. I may attempt to simulate this also.
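For illustration only, the kind of hybrid metric I have in mind might look like the snippet below; the bonus values are placeholders, not proposed numbers. The idea is simply that the fine granularity of total points does most of the sorting, while the bonus preserves the incentive to win outright.

[CODE]
WIN_BONUS = 20   # assumed flat bonus for a win; the real value would need tuning
TIE_BONUS = 10   # assumed flat bonus for a tie

def hybrid_score(total_points, wins, ties):
    """Seed on total points scored, but still reward winning."""
    return total_points + WIN_BONUS * wins + TIE_BONUS * ties
[/CODE]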
I will report back with any results.