Quote:
Originally Posted by JesseK
I see your point about ranking the teams relative to each other; yet there will still have to be some sort of 'confidence' factored in before we can rely on any relative data like that. For example, the teams ranked 4-12 may be within Y% of each other, so we can really only say with X% confidence that their true rankings are correct. If the confidence levels are low across the board, then the algorithm itself isn't as useful as it may appear.
Since it's a learning exercise, I'd encourage you to lay out your assumptions, figure out which statistical metrics are better (mean versus median, and does std. dev. help explain anything?), and examine the methods of the algorithm (why average minibot over 3 teams instead of 2 poles?). It can only help you more closely align the algorithm's results with the reality on the field.
Having a predicted match score based on N assumptions is much more valuable than having something that says "This team is 10% better than your team".
Thanks for the feedback. When the EMC is calculated using percentages, there is usually a team that gets a very small share of the overall EMC.
But after actually running through the race options, 30 is the only score that can happen in two different ways, and it is easy to check which one occurred: if the opposing alliance's minibot score is 45, your minibots came in second and fourth; if not, you came in first alone.
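As a sanity check, here is a minimal sketch that enumerates the possible alliance minibot totals, assuming the 2011 placement values of 30/20/15/10 points for first through fourth; it confirms that only a total of 30 is ambiguous, and that the opposing alliance's total resolves it:

    from itertools import combinations

    # Assumed 2011 minibot race points: 1st = 30, 2nd = 20, 3rd = 15, 4th = 10.
    PLACE_POINTS = [30, 20, 15, 10]

    def alliance_totals():
        """Map each possible alliance minibot total to the placements producing it."""
        totals = {}
        # An alliance's two minibots can take 0, 1, or 2 of the four placements.
        for r in range(3):
            for places in combinations(PLACE_POINTS, r):
                totals.setdefault(sum(places), []).append(places)
        return totals

    def placements_for_30(opposing_total):
        """Disambiguate an alliance minibot total of 30 using the opponents' total."""
        # Opponents at 45 took 1st and 3rd, so our minibots were 2nd and 4th.
        return (20, 10) if opposing_total == 45 else (30,)

    ambiguous = {t: ways for t, ways in alliance_totals().items() if len(ways) > 1}
    print(ambiguous)              # {30: [(30,), (20, 10)]}
    print(placements_for_30(45))  # (20, 10): second and fourth
    print(placements_for_30(35))  # (30,): first alone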
Then I will attribute the minibot score to the appropriate teams based on EMC.
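I haven't settled on the exact split, but one option is a proportional split by each team's EMC share; a rough sketch (the helper name, team numbers, and EMC values here are made up for illustration):

    def attribute_minibot_points(alliance_points, emc_by_team):
        """Split an alliance's minibot points among its teams by EMC share."""
        total_emc = sum(emc_by_team.values())
        if total_emc == 0:
            # Fall back to an even split when no team has any EMC yet.
            return {team: alliance_points / len(emc_by_team) for team in emc_by_team}
        return {team: alliance_points * emc / total_emc
                for team, emc in emc_by_team.items()}

    # Example: 30 minibot points split between two (made-up) teams.
    print(attribute_minibot_points(30, {111: 12.0, 254: 18.0}))  # {111: 12.0, 254: 18.0}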
Confidence metrics are not something I have dealt with much, but I am definitely going to research them. Again, I am a long way from being happy with this. But there are times when these statistics, paired with actual scouting data, present some interesting information that OPR may leave out.
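One direction I may try for the confidence question above: a simple bootstrap over per-match scores can estimate how often a team's top ranking would survive resampling. A sketch, with invented team numbers and match scores:

    import random

    def ranking_confidence(scores_by_team, n_boot=10000, seed=0):
        """Fraction of bootstrap resamples in which each team has the top mean score."""
        rng = random.Random(seed)
        wins = {team: 0 for team in scores_by_team}
        for _ in range(n_boot):
            # Resample each team's matches with replacement and recompute the mean.
            means = {
                team: sum(rng.choices(scores, k=len(scores))) / len(scores)
                for team, scores in scores_by_team.items()
            }
            wins[max(means, key=means.get)] += 1
        return {team: count / n_boot for team, count in wins.items()}

    # Two teams whose match scores overlap heavily: neither ranking is near-certain.
    print(ranking_confidence({111: [40, 55, 48, 60], 254: [45, 50, 62, 58]}))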