Quote:
|
Originally Posted by Alan Anderson
The Law of Large Numbers tells us that the ranking at the end of a sufficient quantity of randomly assigned qualification matches will reflect the robot goodness with high confidence. It's too bad that an actual competition doesn't have anything near the number of matches necessary to make that happen. All we get is a very rough approximation.
Besides, an effective alliance is not just the sum of its component teams. Two or three complementary robots can do better together than two or three nominally "better" robots that don't work with each other as well.
|
I took some time to look at that problem from a statistical point of view. If a team does well in one match because it is paired with complementary robots, it has a lesser chance of repeating that scenario in later matches. Therefore, "how good teams actually are" can be estimated by comparing each team's score in a match against the average event scores of the teams it is allianced with.
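The comparison described above can be sketched in a few lines of Python. This is only an illustration of the idea, not the formula from the spreadsheet; the team numbers and scores below are made up, and the adjustment (match score minus the partners' event-average score) is one reasonable way to read "compared to the teams they are allianced with's average scores."

```python
from collections import defaultdict

# Hypothetical qualification data: (alliance score, list of team numbers).
# All numbers here are invented for the example.
matches = [
    (60, [195, 111, 254]),
    (45, [195, 1114, 67]),
    (50, [111, 1114, 254]),
    (40, [67, 254, 195]),
]

# First pass: each team's naive average alliance score over the event.
totals, counts = defaultdict(float), defaultdict(int)
for score, teams in matches:
    for t in teams:
        totals[t] += score
        counts[t] += 1
avg = {t: totals[t] / counts[t] for t in totals}

# Second pass: credit each team with how far its alliances scored above
# or below its partners' event averages -- a rough luck adjustment.
diffs = defaultdict(list)
for score, teams in matches:
    for t in teams:
        partner_avg = sum(avg[p] for p in teams if p != t) / (len(teams) - 1)
        diffs[t].append(score - partner_avg)
adjusted = {t: sum(d) / len(d) for t, d in diffs.items()}
```

A team that keeps winning only when carried by strong partners ends up with a low adjusted number, while a team that lifts weak alliances above their usual scores ends up with a high one.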
I have started an Excel spreadsheet that will help factor luck out of the equation for rankings.
www.team195.com/scouting/aimhighstats.xls
Please take the time to look at this spreadsheet. If you have any questions, please post or PM me. The format is quite crude since I put it together quickly, but I plan to continue developing this tool to aid in team selections at nationals.