Quote:
Originally Posted by George Nishimura
I'm interested in this. What sort of testing/validation do you think would be possible through mathematical simulations and modelling?
Time/Difficulty vs. Point Reward is my primary focus. 2010 is the most obvious case of difficulty and point reward being crazy out of whack: suspensions were rare because it was often worth more points to keep scoring than to take the time to line up a suspension and risk damage from another robot.
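To make that tradeoff concrete, here's a minimal sketch of the break-even calculation. The point value, lineup time, and scoring rate are illustrative assumptions, not measured data:

```python
# Minimal sketch of the time-vs-reward tradeoff for a 2010-style endgame.
# All numbers below are illustrative assumptions, not measured data.

SUSPENSION_POINTS = 2      # assumed value of a Breakaway suspension
SUSPENSION_TIME_S = 30.0   # assumed time to line up and hang

def worth_suspending(scoring_rate_pts_per_s: float) -> bool:
    """A suspension pays off only if its points beat what the robot
    would have scored by just continuing to cycle for the same time."""
    forgone_points = scoring_rate_pts_per_s * SUSPENSION_TIME_S
    return SUSPENSION_POINTS > forgone_points

# A robot scoring one 1-point goal every 15 s forgoes ~2 points by
# hanging, so the suspension is at best a wash before factoring in risk.
print(worth_suspending(1 / 15))   # False
```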
2011 is another pretty obvious scoring imbalance, wherein the rack didn't matter nearly as much as the minibots.
Please note that these assertions hold for roughly 90% of FRC teams. Obviously 254's 6-second climb was crazy, but they were one of ~3,000 teams to pull that off; most other 30-point climbers took close to 2 minutes to ascend.
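The points-per-second gap there is stark. Using just the figures above (6 s for the outlier, ~120 s for a typical climber):

```python
# Points-per-second comparison for a 30-point climb, using the
# figures from the post above.

CLIMB_POINTS = 30

for name, seconds in [("254-style climb", 6), ("typical climb", 120)]:
    print(f"{name}: {CLIMB_POINTS / seconds:.2f} pts/s")
# 254-style climb: 5.00 pts/s
# typical climb: 0.25 pts/s
```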
This year the big thing would have been the penalty points as compared to the match scores. Even our no-defense analysis showed that a 50-point penalty would be massive. Factoring in that defense can usually halve optimal output, that 50-point tech foul was high on our list of "things to avoid" (I think it was shortly behind "ejecting the battery").
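A rough sanity check on that, with the no-defense score being an illustrative assumption rather than our actual analysis output:

```python
# Rough check on how a 50-point tech foul stacks up against match
# scores; the optimal-offense figure is an illustrative assumption.

TECH_FOUL_POINTS = 50
OPTIMAL_OFFENSE = 150     # assumed no-defense alliance score
DEFENSE_FACTOR = 0.5      # rule of thumb above: defense halves output

realistic_score = OPTIMAL_OFFENSE * DEFENSE_FACTOR
print(f"One tech foul = {TECH_FOUL_POINTS / realistic_score:.0%} "
      f"of a defended alliance score")   # ~67%
```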
Coupling the difficulty of scoring (as compared to years like 2008/2012, where teams could score points merely by being mobile) with high foul points should have been enough to show the GDC that a disproportionate number of matches would be decided by fouls, even at high levels of play. [insert Ether here to back up the exact number of elim matches decided by fouls]
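In the spirit of the modelling George asked about, here's a toy Monte Carlo of how often foul points flip the match outcome. The margin spread and foul rates are assumptions for illustration, not fitted to real 2014 match data:

```python
# Toy Monte Carlo: how often do foul points change the winner?
# Distribution parameters are assumptions, not fitted to match data.
import numpy as np

rng = np.random.default_rng(0)

N = 100_000
FOUL_POINTS = 50

# Assumed raw (foul-free) score margins between alliances.
raw_margin = rng.normal(loc=0, scale=40, size=N)

# Assumed tech-foul counts per alliance per match.
red_fouls = rng.poisson(lam=0.5, size=N)
blue_fouls = rng.poisson(lam=0.5, size=N)

# Fouls award points to the opposing alliance.
foul_swing = (blue_fouls - red_fouls) * FOUL_POINTS
final_margin = raw_margin + foul_swing

# A match is "decided by fouls" if the winner changes once fouls count.
flipped = np.sign(final_margin) != np.sign(raw_margin)
print(f"Matches decided by fouls: {flipped.mean():.1%}")
```

Plug in real score distributions and foul rates and you could have handed the GDC that percentage before kickoff.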