21-03-2011, 11:58
JesseK
Expert Flybot Crasher
FRC #1885 (ILITE)
Team Role: Mentor
 
Join Date: Mar 2007
Rookie Year: 2005
Location: Reston, VA
Posts: 3,733
Re: Top 25 ETCs after Week 3

I see your point about ranking the teams relative to each other; still, some sort of 'confidence' has to be factored in before we can rely on relative data like that. For example, the teams ranked 4-12 may be within Y% of each other, so we can only say with X% confidence that their true rankings are correct. If the confidence levels are low across the board, the algorithm itself isn't as useful as it may appear.
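
To make that concrete, here's a minimal sketch of one way to put a confidence number on a ranking: bootstrap the per-match scores and see how often the resampled data reproduces the original order. All the scores below are made up, and the three-team setup is just for illustration.

[code]
import random

# Hypothetical per-match contributions for three closely-ranked teams.
scores = {
    "Team A": [62, 58, 71, 65, 60],
    "Team B": [64, 55, 69, 61, 66],
    "Team C": [59, 63, 67, 60, 62],
}

def ranking(data):
    """Rank teams by mean score, best first."""
    return sorted(data, key=lambda t: -sum(data[t]) / len(data[t]))

original = ranking(scores)
trials = 10_000
matches = 0
for _ in range(trials):
    # Resample each team's matches with replacement and re-rank.
    resampled = {t: random.choices(s, k=len(s)) for t, s in scores.items()}
    if ranking(resampled) == original:
        matches += 1

# If this fraction is low, the teams are statistically
# indistinguishable and the exact rank order shouldn't be trusted.
print(f"Rank order reproduced in {100 * matches / trials:.1f}% of resamples")
[/code]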

Since it's a learning exercise, I'd encourage you to lay out your assumptions, figure out which statistical metrics fit best (mean versus median, and does std. dev. explain anything?), and justify the methods of the algorithm (why average minibot over 3 teams instead of 2 poles?). It can only help you align the algorithm's results more closely with the reality on the field.
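
A quick illustration of why the metric choice matters: one dead match drags the mean way down while the median barely moves, and a big std. dev. flags the inconsistency. The scores here are hypothetical.

[code]
from statistics import mean, median, stdev

match_scores = [68, 72, 65, 70, 8]  # one breakdown match

print(f"mean:   {mean(match_scores):.1f}")    # 56.6, skewed by the outlier
print(f"median: {median(match_scores):.1f}")  # 68.0, robust to it
print(f"stdev:  {stdev(match_scores):.1f}")   # ~27.3, large spread = inconsistent
[/code]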

Having a predicted match score based upon N assumptions is much more valuable than having something that says "This team is 10% better than your team".
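
For instance, a predicted alliance score could just sum each team's expected contribution per scoring category, with every assumption sitting right there in the data. The category breakdown and all the numbers below are hypothetical.

[code]
def predict_alliance_score(teams):
    """Sum each team's expected contribution per scoring category."""
    return sum(
        t["teleop_avg"] + t["auto_avg"] + t["minibot_avg"]
        for t in teams
    )

# Hypothetical expected contributions for a three-team alliance.
alliance = [
    {"teleop_avg": 30.0, "auto_avg": 6.0, "minibot_avg": 12.0},
    {"teleop_avg": 22.0, "auto_avg": 0.0, "minibot_avg": 18.0},
    {"teleop_avg": 15.0, "auto_avg": 6.0, "minibot_avg": 0.0},
]
print(f"Predicted score: {predict_alliance_score(alliance):.0f}")  # 109
[/code]
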
__________________

Drive Coach, 1885 (2007-present)
2017 Scoring Model
CAD Library | GitHub