Quote:
Originally Posted by Ether
This was discussed in earlier posts in this thread. Can you suggest something better, for which the data is available?
Could try this if you can point me to the raw data (I only found the Team 2834 OPR generation scouting database):
Calculate an "effective QA" for each team by:
- For each match, sum the final QA of all teams in the alliance
- Estimate each team's personal contribution as the share of the alliance's match score proportional to that team's original QA within the alliance's summed QA
- Calculate each team's effective individual QA by averaging its per-match values over all matches it played at the event (this normalizes for the different number of matches played at different events)
For example:
Team 1 QA = 95
Team 2 QA = 38
Team 3 QA = 56
Sum is 189
Match 1 Score = 87
Match 1, Team 1 "effective individual QA" = 95/189 * 87 = 43.7
Match 1, Team 2 "effective individual QA" = 38/189 * 87 = 17.5
Match 1, Team 3 "effective individual QA" = 56/189 * 87 = 25.8
In this case, teams with higher QAs get more of the credit in matches where they played alongside normally underperforming robots. Also, the sum of all teams' effective QAs equals the actual (per-regional, normalized) number of points scored at regionals, which more directly answers the OP's question.
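A minimal sketch of the split described above (function names are my own; the QA values and match score come from the worked example):

```python
def effective_qa_split(alliance_qas, match_score):
    """Split an alliance's match score among its teams in
    proportion to each team's original QA."""
    total_qa = sum(alliance_qas)
    return [qa / total_qa * match_score for qa in alliance_qas]

def effective_qa(per_match_values):
    """Average a team's per-match effective QA over all matches
    it played at the event (normalizes for match count)."""
    return sum(per_match_values) / len(per_match_values)

# Worked example from the post: QAs 95, 38, 56; match score 87.
split = effective_qa_split([95, 38, 56], 87)
print([round(x, 1) for x in split])  # [43.7, 17.5, 25.8]
```

The per-match splits for a given team would then be fed into `effective_qa` to get that team's event-level number.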