Thread: Galileo 2012!
  #15   23-04-2012, 14:58
IKE
Not so Custom User Title
AKA: Isaac Rife
no team (N/A)
Team Role: Mechanical
 
Join Date: Jan 2008
Rookie Year: 2003
Location: Michigan
Posts: 2,149
Re: Galileo 2012!

Quote:
Originally Posted by Rashek View Post
I just wanted to post what my research ....

Next I added all the numbers together to get a grand total of points. I then arranged the grand totals in descending order and came up with the following:

Team #   HP    BP    TP    Grand Total
2054     227   200   289   716
...
I am well aware that FRC Spyder doesn't count the elimination matches, and therefore this spreadsheet isn't 100% accurate. This is my first time doing anything like this, so any feedback would be greatly appreciated.
I really like that you are looking at the data, but... this is kind of a "math gone wrong" scenario. As others have stated, you have two factors contributing to a lot of possible error.

#1: Teams at FiM & MAR play 12 qualification matches versus 8-10 matches at most other regionals. If you normalize per match, you will get a much better metric (see the sketch after #2).

#2: Alliance partners. The metric above can be highly dependent on alliance partners. When a robot is scoring around 2X the average, in theory it is only responsible for 50% of its alliance's points: 2X + X + X = 4X, and 2X/4X = 50%. Since MSC and MAR are "qualified" events, the average strength of the participants is significantly different from most regionals.
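
To make both factors concrete, here is a minimal Python sketch (all team numbers and point totals below are made up for illustration, not pulled from FRC Spyder):

Code:
# Hypothetical (team, grand_total, matches_played) rows -- not real data.
raw = [
    (2054, 716, 12),   # district team: 12 qualification matches
    (9999, 540, 9),    # regional team: only 9 matches
]

# Factor #1: per-match normalization removes the 12 vs. 8-10 match bias.
per_match = sorted(
    ((team, total / matches) for team, total, matches in raw),
    key=lambda row: row[1],
    reverse=True,
)
for team, avg in per_match:
    print(f"Team {team}: {avg:.1f} pts/match")

# Factor #2: a robot scoring 2X the average is still only responsible
# for 2X / (2X + X + X) = 50% of its alliance's total.
robot, partner = 2.0, 1.0
print(robot / (robot + 2 * partner))  # 0.5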


The general trends are often reasonable (the highest scorer will usually rank highest), but sorting with any real fidelity requires additional work.

Look at some of the normalizing functions (dividing by number of matches, normalizing to event strength, etc.). Normalizing functions have their own problems as well.
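
For example, one simple event-strength normalization (a sketch with made-up numbers, not how FRC Spyder computes anything) divides a team's per-match average by its event's average match score:

Code:
def strength_ratio(team_avg: float, event_avg: float) -> float:
    """Per-match average relative to the event's average alliance score."""
    return team_avg / event_avg

# The same 40 pts/match reads very differently at a weak regional
# than at a strong qualified event like MSC (numbers are made up).
print(strength_ratio(40.0, 30.0))  # ~1.33: well above the field
print(strength_ratio(40.0, 45.0))  # ~0.89: below the field average

The catch is that the event average is itself dragged around by schedule luck and field depth, which is where "their own problems" come in.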
In 2010, from an OPR perspective, three robots each with an OPR of 2 (about 2X the national average) could actually combine for 8-10 pts. in a match by using a zone strategy, versus the predicted 6 pts.
In 2011, in really deep fields, OPR was actually driven down: 60 + 60 + 60 would often equal only 120 due to last year's degressive (diminishing-returns) scoring. Thus teams that had really good partners at a strong event might actually see their OPR go down, as they more frequently played in that degressive scoring region.
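
For reference, OPR itself is just a least-squares fit that assumes alliance scores add linearly, which is exactly the assumption the 2010 and 2011 examples above break. A minimal sketch (made-up match data, not FRC Spyder's implementation):

Code:
import numpy as np

teams = [2054, 1234, 5678, 9012]             # hypothetical team list
idx = {t: i for i, t in enumerate(teams)}

# Each entry: (alliance members, alliance score) -- made-up matches.
matches = [
    ((2054, 1234, 5678), 60.0),
    ((2054, 9012, 1234), 55.0),
    ((5678, 9012, 2054), 48.0),
    ((1234, 5678, 9012), 30.0),
]

A = np.zeros((len(matches), len(teams)))
b = np.zeros(len(matches))
for row, (alliance, score) in enumerate(matches):
    for t in alliance:
        A[row, idx[t]] = 1.0                 # robot t played in this match
    b[row] = score

opr, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares contributions
for t in teams:
    print(f"Team {t}: OPR = {opr[idx[t]]:.1f}")

When zone strategies push real alliance scores above the linear prediction (2010), or diminishing returns pull them below it (2011), these fitted contributions drift away from what any individual robot is actually worth.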