#1
Re: How many matches are really needed to determine final rankings?
I put together partial OPRs for each of the 8 events you listed above, in case someone is interested enough to plot them or otherwise analyze/summarize them. If this looks interesting/promising, I will generate partial OPRs for all 106 events thus far (Weeks 1 through 6).

Column A is the team number, Column B is the final OPR (after all Qual matches), Column C is the partial OPR after all Qual matches less one, etc.

Some of these events have surrogates and some have DQs. I ignored this information and included those scores in the OPR computations.
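In case anyone wants a quick start on that layout, below is a minimal sketch of reading it, assuming the data has been exported to a plain CSV file; the filename partial_oprs.csv and the exact file format are assumptions, not a description of Ether's actual file.

Code:
import csv

def load_partial_oprs(path):
    """Return {team: [final OPR, OPR less one match, OPR less two matches, ...]}."""
    table = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            # Skip header or blank rows; Column A is the team number.
            if not row or not row[0].strip().isdigit():
                continue
            team = int(row[0])
            table[team] = [float(x) for x in row[1:] if x.strip()]
    return table

oprs = load_partial_oprs("partial_oprs.csv")  # hypothetical filename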
#2
Re: How many matches are really needed to determine final rankings?
I put together something the other day that calculated my team's average score after every Qual match through the VA regional. The graphs above are updated after each match round. I only needed something quick, and that seemed to be the quicker approach, though it probably doesn't follow the official scoring methodology.
#3
Re: How many matches are really needed to determine final rankings?
I believe that, especially with this year's game, rankings can jump around a lot. I know that at the Tech Valley regional we "lost" a lot of matches, with our score being lower than our opponent's that match. But in the end what matters isn't just who has the best robot, but who plays their alliance the smartest and most effectively. Three lower-ranked robots can succeed if played correctly, and this year it seemed that if teams figured out what worked best for them from the get-go and stuck to their strategy, they would usually have a lot of success.
#4
Re: How many matches are really needed to determine final rankings?
Partial Final-Score OPRs for all 106 events (Weeks 1-6) can be found here: http://www.chiefdelphi.com/media/papers/3125
#5
Re: How many matches are really needed to determine final rankings?
I love this type of research; it really helps to tune those algorithmic scouting applications.

This type of work is very similar to mathematical economics and econometrics. Fundamentally, you'll see the law of averages dictate the overall direction, but the real question is the quality of the other alliance members each team was paired with, relative to the final ranking. Strength of Schedule should be a good proxy for the quality of the match scheduling, which should help you determine the minimum number of matches needed to rank reliably. But hey, we all have those days where we jump from 48th to 6th in the last 4 matches.
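For what it's worth, here is one plausible way to turn that Strength of Schedule idea into a number: the average OPR of a team's scheduled partners minus the average OPR of its scheduled opponents. The match-tuple format and the definition itself are assumptions for illustration, not an established metric.

Code:
from statistics import mean

def strength_of_schedule(matches, opr):
    """matches: (red_teams, blue_teams, red_score, blue_score); opr: {team: OPR}."""
    partners, opponents = {}, {}
    for red, blue, *_ in matches:
        for own, other in ((red, blue), (blue, red)):
            for team in own:
                partners.setdefault(team, []).extend(t for t in own if t != team)
                opponents.setdefault(team, []).extend(other)
    # Positive = easier-than-average schedule under this definition.
    return {team: mean(opr[t] for t in partners[team]) -
                  mean(opr[t] for t in opponents[team])
            for team in partners}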
#6
Re: How many matches are really needed to determine final rankings?
Fun comments. Thanks!
I decided to go ahead and look at the top-seed alliance effect. That is, how does a team's rank change depending on whether it is with the top seed team during a match, against it, or not in a match with it at all? To do the analysis, I essentially took out every match that the top seed was in; marked which teams were with, against, or not in a match with the top seed; and then looked at how the new rankings changed versus the official rankings. See below for the Silicon Valley (SV) and North Star (NS) Regionals. I took out team 254 from the SV Regional and 2826 from the NS Regional.

In the SV Regional, a team's overall rank increased by about 6.55 ± 5.81 places if it was in an alliance with 254, but a team's overall rank decreased by 5.87 ± 3.49 places if the team did not have a match with 254. In the NS Regional, a team's overall rank increased by about 8.05 ± 8.68 places if it was in an alliance with 2826, but a team's overall rank decreased by 5.79 ± 2.58 places if the team did not have a match with 2826.

Ya, I would say this confirms the hypothesis that being in an alliance with the top team is going to boost a team's overall ranking. Likewise, not being in a match with the top team does not help.

Ether and Doug, I'll look into making some plots tomorrow with the OPR, including attaching the code for getting the data (just a heads up, it's not totally automated). But, ya, it could be interesting.
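For anyone who wants to reproduce that kind of comparison on their own event data, here is a rough sketch of the approach, assuming matches come as (red_teams, blue_teams, red_score, blue_score) tuples and ranking is by average alliance score (a simplification of the official 2015 qualification ranking). The function names and data layout are placeholders, not the code used for the numbers above.

Code:
from statistics import mean, stdev

def rank_by_qual_average(matches):
    scores = {}
    for red, blue, red_score, blue_score in matches:
        for team in red:
            scores.setdefault(team, []).append(red_score)
        for team in blue:
            scores.setdefault(team, []).append(blue_score)
    ordered = sorted(scores, key=lambda t: mean(scores[t]), reverse=True)
    return {team: i + 1 for i, team in enumerate(ordered)}

def top_seed_effect(matches, top_seed):
    official = rank_by_qual_average(matches)
    reduced = rank_by_qual_average(
        [m for m in matches if top_seed not in m[0] + m[1]])
    partners, opponents = set(), set()
    for red, blue, *_ in matches:
        if top_seed in red:
            partners.update(red)
            opponents.update(blue)
        elif top_seed in blue:
            partners.update(blue)
            opponents.update(red)
    groups = {"with": [], "against": [], "neither": []}
    for team, new_rank in reduced.items():
        if team == top_seed:
            continue
        # Positive delta = team ranks better once the top seed's matches are removed.
        delta = official[team] - new_rank
        if team in partners:
            groups["with"].append(delta)
        elif team in opponents:
            groups["against"].append(delta)
        else:
            groups["neither"].append(delta)
    return {name: (mean(d), stdev(d)) for name, d in groups.items() if len(d) > 1}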
#8
Re: How many matches are really needed to determine final rankings?
The OPR over the final 20 matches seems to be fairly constant for most teams. It looks like whenever a team has a match, there's a jump, and then in between matches it drifts back towards the mean. Beyond the top 10 teams it gets very cluttered; that is, there aren't many OPR points distinguishing teams. So, for the Silicon Valley Regional, I took out all but the top 10 teams. This makes it a little easier to see how each team's OPR changes between matches. I didn't dive too much into correlating the OPR with the ranking because of the different domains, but it appears that the OPR is better able to account for the top-seed effect, as I like to call it.

Speaking of which, I wanted to revise my analysis from a couple posts ago, where I looked at how the rankings would change if the top team wasn't at the competition. The three categories were not mutually exclusive: the With status could include teams that also had matches against the top seed, and likewise the Against status could include teams that also had matches with the top seed. So, I filtered the results a little differently to look at Only With, With And Against, Only Against, and Neither. See below for the North Star and Silicon Valley Regionals.

I also computed a few statistics using Student's t-test. For both the NS and SV Regionals, if a team was either against or not at all in a match with the top seed, that team ended up with a lower rank (p < 0.001). On the flip side, if a team was with the top seed, or both with and against it, then it did have a higher rank (p = 0.007 for NS With And Against, p < 0.001 for all other cases). So the conclusion is the same - a team does better if it's with the top seed and worse if it's against or not with the top seed - but I think this method proves the point better.

Cheerio.
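For the curious, a minimal sketch of that significance check, assuming the t-test here is a one-sample test of each category's rank changes against zero (the no-effect case); the numbers in rank_changes are made-up placeholders, not the data behind the p-values quoted above.

Code:
from scipy import stats

rank_changes = {
    "Only With":        [6, 9, 4, 11, 7],   # hypothetical rank deltas
    "With And Against": [3, 5, 2, 6],
    "Only Against":     [-4, -7, -5, -6],
    "Neither":          [-5, -8, -4, -6, -7],
}

for category, deltas in rank_changes.items():
    t_stat, p_value = stats.ttest_1samp(deltas, popmean=0.0)
    print(f"{category:>16}: t = {t_stat:+.2f}, p = {p_value:.4f}")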
#9
Re: How many matches are really needed to determine final rankings?
Column C is the OPR using the first 94 matches
Column D is the OPR using the first 93 matches
. . .
Column U is the OPR using the first 76 matches

... so the progression from Column U to Column B shows how the OPR changed over the course of the last 20 matches.

Partial OPRs for all 106 events in Weeks 1 through 6 are posted here: http://www.chiefdelphi.com/media/papers/3125
#10
Re: How many matches are really needed to determine final rankings?
Okay, I see. Thank you for clarifying. Is it at all possible to get the OPR for all matches, not just the last 20? I understand that the algorithm needs every team to compete at least once before the OPR can be computed, which means I don't expect the OPR for the first 10 or so matches. But, ya, it could be interesting to see a bigger picture of how the OPR changes over the course of the competition. Cheers.
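As an aside on that point about needing every team to compete first, one way to phrase it is that the OPR least-squares system only becomes solvable once the alliance/team design matrix has full column rank. A minimal check, assuming the same placeholder match format as in the earlier sketches:

Code:
import numpy as np

def opr_solvable(matches, teams):
    """True once the design matrix built from the matches so far has full column rank."""
    index = {team: i for i, team in enumerate(sorted(teams))}
    rows = []
    for red, blue, *_ in matches:
        for alliance in (red, blue):
            row = np.zeros(len(teams))
            for team in alliance:
                row[index[team]] = 1.0
            rows.append(row)
    if not rows:
        return False
    return np.linalg.matrix_rank(np.vstack(rows)) == len(teams)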
#11
Re: How many matches are really needed to determine final rankings?
Just kidding. Let M be the number of qual matches at an event, and T be the number of teams. The analysis proceeds as follows for each event:

for (k = M; k > T/2; k--) { computeOPR(); deleteMostRecentMatch(); }

PS - forgot to mention: I can transpose the rows and columns if that would make it easier for you to do your plotting.
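Read in Python, the loop above might look something like the sketch below; computeOPR() is assumed to return a {team: OPR} dict for whatever matches remain, and the match list is assumed to be in chronological order. This is just one reading of the pseudocode, not Ether's actual AWK/Octave pipeline.

Code:
def partial_oprs(matches, num_teams, compute_opr):
    """OPR snapshots using all M matches, then M-1, ... down to T/2 + 1."""
    snapshots = []
    remaining = list(matches)                # k starts at M
    while len(remaining) > num_teams // 2:   # loop while k > T/2
        snapshots.append(compute_opr(remaining))
        remaining.pop()                      # deleteMostRecentMatch()
    return snapshots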
#12
Re: How many matches are really needed to determine final rankings?
Oooooohhhh.... preettttyyyy....
Had time for just a couple of plots, but these really show what I was expecting: lots of movement early on and then a leveling-out. However, the leveling-out really isn't that level; it seems like there are still jumps after teams have matches later in qualifications, perhaps because of good alliance partners, or perhaps because the team had a good match. The first chart has all teams plotted from the Silicon Valley Regional, and the second plot has just the top ten OPR teams from the North Star Regional.

Thank you for all the number crunching to get all the OPR scores, Ether. Perhaps later there will be time for more analysis with reference to rankings instead of just graphs. Or has this analysis gone on long enough? What else would be interesting?
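In case it's useful to anyone else making these charts, here is roughly the kind of plotting code involved, assuming oprs maps each team to its partial-OPR series ordered from fewest matches included to all matches; the data loading is omitted and matplotlib is just one choice of tool.

Code:
import matplotlib.pyplot as plt

def plot_partial_oprs(oprs, top_n=None):
    if top_n is not None:
        # Keep only the top-N teams by final (all-matches) OPR.
        keep = sorted(oprs, key=lambda t: oprs[t][-1], reverse=True)[:top_n]
        oprs = {t: oprs[t] for t in keep}
    for team, series in oprs.items():
        plt.plot(range(len(series)), series, label=str(team))
    plt.xlabel("Qualification matches included")
    plt.ylabel("Partial OPR")
    if top_n is not None and top_n <= 10:
        plt.legend(title="Team")
    plt.show()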
#13
Re: How many matches are really needed to determine final rankings?
I'd like to see an analysis of how different things would be if the old W-L-T ranking structure were still in place, because I know my team would not have been in the top 10 at our regionals under the old structure.
Some say this structure is harder to win under because you have to outscore every team, not just your opponents, but I think it really does ensure that the best teams come out on top, even if at the expense of less exciting rank-watching during the events.
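A rough sketch of how that comparison could be set up, assuming the old structure is approximated as 2 qualification points per win, 1 per tie, 0 per loss (ignoring the old tiebreakers), with matches as (red_teams, blue_teams, red_score, blue_score) tuples; this illustrates the idea, not the historical ranking rules.

Code:
def rank_by_wlt(matches):
    """Rank teams by a simplified 2-1-0 win/tie/loss point system."""
    points = {}
    for red, blue, red_score, blue_score in matches:
        red_pts = 2 if red_score > blue_score else 1 if red_score == blue_score else 0
        blue_pts = 2 - red_pts
        for team in red:
            points[team] = points.get(team, 0) + red_pts
        for team in blue:
            points[team] = points.get(team, 0) + blue_pts
    ordered = sorted(points, key=points.get, reverse=True)
    return {team: i + 1 for i, team in enumerate(ordered)}

Comparing this against a ranking by qualification average over the same matches would show how many teams move in or out of the top 10.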
#15
Re: How many matches are really needed to determine final rankings?
The actual elapsed time to get from raw data to those reports for all 109 events (6398 partial OPR reports) was only 17 seconds on a single core of a Pentium D in an 8-year-old machine running XP Pro SP3, using AWK to wrangle the data and Octave to crunch the linear algebra.
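For reference, the linear-algebra core Octave is crunching there amounts to an ordinary least-squares solve; the numpy version below is an equivalent formulation for illustration, not Ether's code. Each alliance in each qualification match contributes one row with 1s in the columns of its teams, and the alliance score on the right-hand side.

Code:
import numpy as np

def compute_opr(matches, teams):
    """matches: (red_teams, blue_teams, red_score, blue_score); returns {team: OPR}."""
    index = {team: i for i, team in enumerate(sorted(teams))}
    rows, scores = [], []
    for red, blue, red_score, blue_score in matches:
        for alliance, score in ((red, red_score), (blue, blue_score)):
            row = np.zeros(len(teams))
            for team in alliance:
                row[index[team]] = 1.0
            rows.append(row)
            scores.append(score)
    A = np.vstack(rows)
    b = np.array(scores, dtype=float)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares OPR solution
    return {team: x[i] for team, i in index.items()}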