#4, 06-06-2015, 13:26
AGPapa
Registered User
AKA: Antonio Papa
FRC #5895
Team Role: Mentor
 
Join Date: Mar 2012
Rookie Year: 2011
Location: Robbinsville, NJ
Posts: 322
Re: Overview and Analysis of FIRST Stats

This is a really well-written paper, thanks for putting it together!

I have some questions about how to choose VarD/VarO and VarN/VarO, since I'm unfamiliar with MMSE estimation. How would you go about choosing these values before or during an event?
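
For other readers trying to follow along, here's roughly how I understand the ratio entering the estimator. This is a numpy sketch of my own, not the paper's code: the offense-only model, the zero-mean priors, and the name mmse_opr are all simplifications and assumptions on my part.

Code:
import numpy as np

def mmse_opr(A, y, var_ratio):
    """MMSE (ridge) estimate of per-team offensive contribution.

    A         : (n_matches, n_teams) 0/1 alliance participation matrix
    y         : (n_matches,) observed alliance scores, centered
    var_ratio : VarN/VarO, the noise-to-offense prior variance ratio
    """
    n_teams = A.shape[1]
    # Posterior mean under zero-mean Gaussian priors on team offense:
    #   x_hat = (A^T A + (VarN/VarO) I)^(-1) A^T y
    # var_ratio -> 0 recovers plain least squares (OPR); larger ratios
    # shrink every team's estimate harder toward the prior mean.
    return np.linalg.solve(A.T @ A + var_ratio * np.eye(n_teams), A.T @ y)

If that's right, then VarN/VarO is effectively a knob trading off trust in the match data against trust in the prior, which is exactly why I'd like to know how to set it before the matches are played.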

Quote:
Originally Posted by Page 31
The (VarD, VarN) numbers show the values of VarD/VarO and VarN/VarO in the MMSE search that produced the best predicted outcome on the Testing data.
Doesn't this method lead to the same overfitting that using the training data as the testing data did with the LS estimators? Choosing the a priori variances after the fact to get the best results seems wrong, or is the effect actually too small in practice to matter? It seems like each set of training data should have its own search for the variances that work best, with those values then applied to the testing data, instead of "searching" for the best values and applying them after the fact.
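
What I'd expect instead is something like the following, where the variances are chosen by validation inside the training data only. This is a sketch reusing the mmse_opr function from my sketch above; the fold count, the squared-error metric, and the name pick_var_ratio are arbitrary choices of mine, not something from the paper.

Code:
import numpy as np

def pick_var_ratio(A_train, y_train, ratios, n_folds=5, seed=0):
    """Choose VarN/VarO by k-fold validation on the training matches only,
    so the testing data never influences the choice."""
    folds = np.random.default_rng(seed).permutation(len(y_train)) % n_folds
    avg_err = []
    for r in ratios:  # keep ratios > 0 so the regularized system stays invertible
        err = 0.0
        for k in range(n_folds):
            fit, held = folds != k, folds == k
            x_hat = mmse_opr(A_train[fit], y_train[fit], r)
            err += np.mean((A_train[held] @ x_hat - y_train[held]) ** 2)
        avg_err.append(err / n_folds)
    return ratios[int(np.argmin(avg_err))]

The chosen ratio would then be applied once to the testing data, rather than being searched for on the testing data after the fact.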


Quote:
Originally Posted by Page 44
// pick your value relative to sig2O, or search a range.
// 0.02 means you expect defense to be 2% of offense.
From this I'd expect the values of VarD/VarO to be largely dependent on the game, yet the data shows that the "best" values depend very little on the game. For example, in the 2014 Newton Division the best VarD/VarO value for sCPR was 0.10, but for 2014 Galileo it was 0.00, the complete opposite end of the search range! How can two divisions in the same year have such different values?
__________________
Team 2590 Student [2011-2014]
Team 5684 Mentor [2015]
Team 5895 Mentor [2016-]
 

