Division Strengths 2010
From the weighted-average OPR numbers, I put all 4 divisions on a chart and sorted them in descending order. I also added the Michigan State Championship numbers as a comparison.
Using the weighted-average OPR data, the mean of each division is (a rough sketch of the calculation follows the list):
Archimedes 2.00
Curie 1.97
Galileo 1.69
Newton 1.75
MSC 2.47
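For anyone who wants to reproduce these means, here is a minimal Python sketch. The team numbers, OPR values, and division assignments below are just placeholders; the real inputs come from the scouting database spreadsheet linked at the end of this post.

from collections import defaultdict

# Placeholder inputs; the real weighted-average OPRs and division
# assignments come from the scouting database spreadsheet.
weighted_opr = {67: 5.2, 217: 4.8, 1114: 6.1}                  # team -> weighted-average OPR
division = {67: "Archimedes", 217: "Curie", 1114: "Galileo"}   # team -> division

by_division = defaultdict(list)
for team, opr in weighted_opr.items():
    by_division[division[team]].append(opr)

for div, oprs in sorted(by_division.items()):
    print(f"{div}: mean weighted OPR = {sum(oprs) / len(oprs):.2f}")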
Looking at the graph, I have the following observations:
1) The upper half of Archimedes is stronger than Curie, but the bottom half of Curie is stronger than Archimedes. The means are about the same, as the numbers show.
2) Galileo and Newton are weaker than Archimedes and Curie.
3) There is an obvious kink/knee at around rank 10, meaning the OPRs of the top 10 teams drop off at a much faster rate than those of the remaining teams.
Now comparing to the Michigan State Championship, MSC is clearly higher across the whole range. This was not the case last year. Please refer to my old post from last year:
http://www.chiefdelphi.com/forums/showpost.php?p=848037&postcount=12
Have fun looking at the numbers.
If you want more data, please refer to my original scouting database white paper at
http://www.chiefdelphi.com/media/papers/2174
Ed
Joe Ross
08-04-2010, 21:10
I looked at the sum of the OPRs of the top 2 teams plus the 24th, which assumes that the perfect alliances are formed. Curie is 19.3, Newton is 17.5, Archimedes is 16.5, and Galileo is 15.6. MSC is 19.4.
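A rough sketch of that calculation, assuming a sorted list of OPRs for one division (the values here are random placeholders, not the actual 2010 numbers):

import random

# Placeholder OPRs for an 87-team division; real values would come from
# the scouting database spreadsheet.
division_oprs = sorted((random.uniform(0.0, 8.0) for _ in range(87)), reverse=True)

# "Perfect alliances": the #1 alliance gets the two highest-OPR robots
# plus the 24th-ranked robot as the last pick of an eight-alliance draft.
alliance_one = division_oprs[0] + division_oprs[1] + division_oprs[23]
print(f"Theoretical #1 alliance OPR: {alliance_one:.1f}")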
Ed,
Very neat thread, but could you include the non-stretched MSC data also? I think it shows some interesting trends.
IKE
Hi Isaac,
May I ask what the non-stretched MSC data will show? I emailed the graph to you separately. It should be fairly easy to change on the original spreadsheet if somebody else wants to play around with the data.
Regards,
Ed
I think it shows a more accurate representation of MSC strength. Stretching it moves MSC's number 3 OPR team to compare against a division's number 4, which artificially inflates the MSC from a visual perspective. Another interesting point is the bump in the MSC curve from 10-30. I would suspect that these 20 teams would show an improving trend if you graphed their OPRs versus events.
I would be really interested in comparing the 2010 MSC data to a theoretical 2008 MSC. Both are robot-centric games with a higher OPR correlation to match outcomes.
************************
Back to the intent of the thread: it looks like I will need some stronger coffee for Friday night, as the pick-list debate will be very difficult for Archimedes. (Not that I am assuming we will seed in the top 8, but even if we were 80th, we would still make a comprehensive list.)
I really respect you guys stretching and weighting your averages and OPRs, etc., and I like the sound of it swishing over my head!:)
Will all that data allow you to make a prediction as accurate as this one made last fall, which is possibly the best prediction ever made?
http://www.youtube.com/watch?v=sMFxfEjx3zo
I think it shows a more accurate representation of MSC strength. Stretching it moves MSC's number 3 OPR team to compare against a division's number 4, which artificially inflates the MSC from a visual perspective. Another interesting point is the bump in the MSC curve from 10-30. I would suspect that these 20 teams would show an improving trend if you graphed their OPRs versus events.
I would be really interested in comparing the 2010 MSC data to a theoretical 2008 MSC. Both are robot-centric games with a higher OPR correlation to match outcomes.
The reason why I stretch the MSC data is to make it a fair comparison; otherwise you would be comparing the 65 teams at MSC to the top 65 of the 87 teams in each division. I am trying to compare the overall relative strength of each division and MSC, not trying to compare the top robots to the top robots.
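I don't know the exact formula the spreadsheet uses, but one way to stretch a 65-team curve onto an 87-team scale is to linearly interpolate the sorted OPR list onto 87 evenly spaced rank positions, something like this (placeholder data):

import numpy as np

# Placeholder MSC OPRs, sorted in descending order like the chart.
msc_oprs = np.sort(np.random.uniform(0.0, 8.0, size=65))[::-1]

# Map both curves onto a common 0-1 "rank fraction" axis, then resample
# the 65-point MSC curve at the 87 rank positions of a division.
old_ranks = np.linspace(0.0, 1.0, len(msc_oprs))
new_ranks = np.linspace(0.0, 1.0, 87)
stretched_msc = np.interp(new_ranks, old_ranks, msc_oprs)

The stretched curve can then be plotted on the same rank axis as the division curves.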
When you post the unscaled version, MSC appears to be a truncated version of a division. That is all I was getting at. There are a few other interesting anomalies in the MSC data. There is a fattening in the middle of the curve similar to Archimedes, or comparable to IRI data in some years. From an offense perspective, MSC is like the higher-powered 3/4 of a division. This is what led to the outrageous amounts of scoring. The stretch does do a good job of explaining the "feel" of the competition.
Wayne TenBrink
10-04-2010, 10:30
Thanks for the chart, Ed.
It would be interesting to see the results (stretched) from a few randomly selected regionals to put the CMP and MSC data into perspective.
From an offense perspective, MSC is like the higher-powered 3/4 of a division. This is what led to the outrageous amounts of scoring. The stretch does do a good job of explaining the "feel" of the competition.
That's quite possible, considering that MSC was like a division in Atlanta, but more cut off. In general, it's harder to get to the State Championship than to the World Championship (not in regard to winning regionals; that is still very difficult, and a team that wins is completely worthy. I'm talking about teams that register that may be great teams, but are not always as strong on the robot side). On the other hand, a team must do seriously well at at least one of their districts to make it to Ypsilanti. If that makes sense.
I took all the seeding points from all the regionals and the Michigan Championship, then found the mean and standard deviation for the whole set, turned them into z-scores, and plotted the distribution (a rough sketch of the calculation is below). I ignored the winning margin because in the prelims all that counts is getting points. The curve is very interesting, and where the top teams are located is also very curious.
Ed, were you able to use the data I sent?
Dave
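A minimal sketch of the z-score conversion Dave describes, assuming seeding_points holds every team's qualification (seeding) points from all the regionals plus the Michigan Championship (the values below are placeholders):

import statistics

# Placeholder seeding-point totals; the real data set covers every team
# at every regional plus the Michigan Championship.
seeding_points = [42.0, 38.5, 51.0, 29.5, 47.0, 33.0]

mu = statistics.mean(seeding_points)
sigma = statistics.stdev(seeding_points)
z_scores = [(x - mu) / sigma for x in seeding_points]
print([f"{z:+.2f}" for z in z_scores])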