View Full Version : Comparison of DIV MAX OPRs
Joe Johnson
23-04-2016, 10:30
I took every division, sorted the teams by MAX OPR, and then plotted all the divisions on one chart. The gap on the left side of the chart between Newton and the rest of the divisions is huge, crazy huge.
http://i.imgur.com/YSZnCDk.jpg
One thing to note with this game: the division medians are roughly 50% of the top teams' MAX OPR. Most years, there was a bigger disparity if I remember correctly.
Richard Wallace
23-04-2016, 13:42
Newton and Hopper teams will not have a long walk to Einstein. That is convenient. :)
Joe Johnson
23-04-2016, 14:29
The more I look at the data the crazier it seems.
If you take the MAX OPR numbers at the 50th, 75th, 90th, and 95th percentiles for each division and then subtract the corresponding percentiles of the entire 600-team CMP, you can get a sense for just how much better the teams on NEWTON are than the other divisions. Newton's numbers are all positive, and the 75th, 90th and 95th percentiles are over 5 points higher than the corresponding CMP numbers.
See the chart below.
As I said, crazy. I don't think this means that Newton is a shoo-in for winning on Einstein, because that depends on how the drafts happen to pair teams into alliances, but I will say that the qualifying rounds and playoffs on Newton are going to be very different from those played in some other divisions (I'm looking at you, Curie & Galileo).
http://i.imgur.com/9L6a1ra.jpg
What would happen if you took the top 4 out of Newton? I get the impression removing them might reduce a lot of the shift. Thoughts?
So from my interpretation of the second graph, Carson's 75th percentile is better than the average, while their 95th percentile robots tend to be worse than the average 95th percentile robot. Is that right?
Joe Johnson
24-04-2016, 00:11
So from my interpretation of the second graph, Carson's 75th percentile is better than the average, while their 95th percentile robots tend to be worse than the average 95th percentile robot. Is that right?
Close but not quite right.
If you replace "average" with "75th (or 95th) percentile of the CMP as a whole" then you'd be right.
The idea I was trying to get at was a metric that made it easy to see which divisions were more or less competitive than others. I chose MAX OPR as the surrogate for how good a particular team is. Then I chose 4 percentiles: 95th (which is roughly the 4th-best MAX OPR in a division and the 30th-best MAX OPR in St. Louis), 90th (roughly 8th in a division, 60th in the CMP), 75th (19th, 150th) and 50th (38th, 300th). And finally, to make it easy to compare these MAX OPRs, I normalized each division percentile by subtracting the associated CMP percentile.
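In spreadsheet-free terms, the metric is just a couple of percentile calls. A minimal Python sketch (the array names are hypothetical and the data are random stand-ins, not my spreadsheet):
[code]
import numpy as np

# Sketch of the normalized-percentile metric described above.
# cmp_opr / div_opr are assumed MAX OPR arrays; stand-in data here.
rng = np.random.default_rng(0)
cmp_opr = rng.normal(37.3, 12.3, 600)              # 600 CMP teams
div_opr = rng.choice(cmp_opr, 75, replace=False)   # one 75-team division

for p in (50, 75, 90, 95):
    delta = np.percentile(div_opr, p) - np.percentile(cmp_opr, p)
    print(f"{p}th percentile: {delta:+.2f} vs. CMP")
[/code]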
I think it paints a pretty clear picture. The top tier robots in Newton are about 1 high goal boulder better per match than the similarly rare top tier robots in the general CMP population. That is a big difference. Not insurmountable by any means but still pretty significant.
Dr. Joe J.
Joe Johnson
24-04-2016, 12:39
What would happen if you took the top 4 out of Newton? I get the impression removing them might reduce a lot of the shift. Thoughts?
Actually, things don't change that much. That is one of the great things about using percentiles rather than averages. With averages, a few high numbers can really skew the result, but for populations like the one we have here, it doesn't have much effect -- all that happens when you pull those top 4 monster teams out of the list is that the 95th, 90th, 75th and 50th percentile scores shift down 4 slots in the list.
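A quick numerical check of that claim (stand-in data):
[code]
import numpy as np

# Drop the 4 highest MAX OPRs and watch the percentiles barely move.
rng = np.random.default_rng(0)
div_opr = rng.normal(40, 12, 75)     # stand-in 75-team division
trimmed = np.sort(div_opr)[:-4]      # remove the top 4 teams

for p in (50, 75, 90, 95):
    print(p, np.percentile(div_opr, p).round(2),
          np.percentile(trimmed, p).round(2))
[/code]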
I have done this, and you can see that even without those top 4 teams, Newton is still in a class by itself.
http://i.imgur.com/NNgvuuL.jpg
microbuns
24-04-2016, 12:53
I'd love to see the same comparisons between these divisions, district championships and some of the top regionals. I think it'd be a great indicator of how strong these worlds groups actually are.
Joe Johnson
24-04-2016, 19:18
I'd love to see the same comparisons between these divisions, district championships and some of the top regionals. I think it'd be a great indicator of how strong these worlds groups actually are.
I love this idea. Can someone point me to an OPR/MAX OPR spreadsheet (or website, but I'd prefer not to have to scrape all that data)? I just don't have the time right now to write my own. I just need team numbers and MAX OPR in columns (other columns are fine, but at least those, so I can use MATCH & OFFSET to have Excel automatically generate the data for whatever competition we're interested in).
I scraped the CMP team data from http://frc.divisions.co/ but I need teams that are not in the CMP team list if I want to do other competitions.
Dr. Joe J.
P.S. Also, I think the Division CMP numbers are going to be higher than typical Districts or Regionals but lower than the DCMPs. JJ
Caleb Sykes
24-04-2016, 19:31
I love this idea. Can someone point me to an OPR/MAX OPR spreadsheet (or website, but I'd prefer not to have to scrape all that data)?
Here you go.
2834 scouting database (http://www.chiefdelphi.com/media/papers/3242)
4536 scouting database (http://www.chiefdelphi.com/media/papers/3248)
Collin Stiers
24-04-2016, 20:30
The way things worked out this year with divisions will make things interesting. A lot of really good teams will get locked up in Newton, because they won't win the division, allowing some teams from other divisions to come out with strong alliances; perhaps teams who normally wouldn't make it to Einstein can make it this year. I think the possibility of a dark-horse alliance is higher this year.
zinthorne
24-04-2016, 20:56
I think it is possible to see a situation happen on Newton like one that happened back in 2013 with 1678. 1678 seeded first, burned many of the top teams in their division, then selected 148 to go on to Einstein, because some very good alliances were unable to be formed. Very smart play by 1678!
Joe Johnson
24-04-2016, 21:45
Here are the DCMPs compared to the Divisions at Worlds (note: the zero line is equal to the associated percentile of all CMP teams this year). I have also added a non-normalized chart. It is harder to read, but you can see the raw numbers.
I have to say, I am surprised and I was wrong, very wrong. The DCMPs are not generally better than the Divisions at Worlds.
A wise man's mind changes, a fool's changes not...
Dr. Joe J.
P.S. Thanks for the data (several sources).
http://i.imgur.com/xFve9AB.jpg
http://i.imgur.com/bBt6duI.jpg
ATannahill
24-04-2016, 21:49
Here are the DCMPs compared to the Divisions at Worlds (note: the zero line is equal to the associated percentile of all CMP teams this year). I have also added a non-normalized chart. It is harder to read, but you can see the raw numbers.
I have to say, I am surprised and I was wrong, very wrong. The DCMPs are not generally better than the Divisions at Worlds.
A wise man's mind changes, a fool's changes not...
Dr. Joe J.
P.S. Thanks for the data (several sources).
<images snipped>
Did you intentionally leave out PNW, NC and Chesapeake? I know they were week 6.
FiMFanatic
24-04-2016, 21:57
If I read the chart correctly, the 50th-percentile scores indicate that the Michigan Championship has the highest median quality.
Of course the top 5% at Worlds will be higher than the top 5% of any district championship... so the 95% level is somewhat irrelevant.
Caleb Sykes
24-04-2016, 22:09
Of course the top 5% at Worlds will be higher than the top 5% of any district championship... so the 95% level is somewhat irrelevant.
Except for New England... so your point is somewhat irrelevant. :)
Did you intentionally leave out PNW, NC and Chesapeake? I know they were week 6.
This (http://www.chiefdelphi.com/forums/attachment.php?attachmentid=20648&d=1461119511) is not the same as Joe's chart, but it compares all 7 district CMPs.
howellroy
25-04-2016, 12:21
Ok, so I am looking things over, and it is true Newton seems to be the toughest, but there seems to be some balance. It doesn't mean things are completely unbalanced; there is a gap between Galileo and Newton, but that is going to happen when trying to set these things up. I think the greater variable is the number of matches each team played. Teams that go to a Regional play fewer matches than those that are part of a District. Regional matches also cost more to attend, and there is a bit more travel over longer distances involved. That gives district teams more practice and more development of a strategy that best fits their robot. Parity is a matter of every team having exactly the same resources and access to money. I'm just saying.
Mean MAX OPR by division:
Galileo 35.68
Tesla 35.95
Curie 36.28
Archimedes 36.93
Carver 37.85
Carson 37.96
Hopper 38.15
Newton 40.11
Joe Johnson
25-04-2016, 14:18
Did you intentionally leave out PNW, NC and Chesapeake? I know they were week 6.
No, I didn't. I was in a hurry to post this with my wife basically turning out the lights in the living room as I was crunching the data (think pits at closing time ;-) ), and I completely forgot about the week 6 districts.
Sorry. Maybe tonight if I get time...
Dr. Joe J.
Jared Russell
25-04-2016, 14:50
Another trivia question for you stat-gurus out there to chew on...
Given random assignment of teams to 8 divisions, what is the probability that any division's [mean OPR, 90th %ile OPR, 75th %ile OPR, 50th %ile OPR, etc.] is as strong as Newton's?
Joe Johnson
25-04-2016, 17:22
Another trivia question for you stat-gurus out there to chew on...
Given random assignment of teams to 8 divisions, what is the probability that any division's [mean OPR, 90th %ile OPR, 75th %ile OPR, 50th %ile OPR, etc.] is as strong as Newton's?
Zing!
I don't know but I'll try to model this tonight.
Dr. Joe J.
FWIW
Ten million 75-team samples randomly drawn from 600-team CMP population; 2.5% had average OPR greater than Newton's.
Interestingly:
mean of max OPR of 600 CMP teams = 37.331
std dev of max OPR of 600 CMP teams = 12.287
predicted std dev of the means of the 75-team samples = 12.287/sqrt(75) = 1.4187
mean OPR of Newton = 40.1149
predicted Zscore of Newton = (40.1149-37.331)/1.4187 = 1.9623
area under normal curve between mean and 1.9623 = 0.4750
predicted probability = 0.5 - 0.4750 = 0.025 = 2.5%
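For anyone who wants to check the numbers, both estimates fit in a short script (stand-in data in place of the real 600-team MAX OPR vector, and far fewer trials than ten million):
[code]
import numpy as np
from math import erf, sqrt

# Monte Carlo sample means plus the normal approximation sketched above.
rng = np.random.default_rng(1)
max_opr = rng.normal(37.331, 12.287, 600)   # stand-in CMP vector
newton_mean = 40.1149

trials = 100_000
hits = sum(rng.choice(max_opr, 75, replace=False).mean() >= newton_mean
           for _ in range(trials))
print(f"simulated: {hits / trials:.4f}")    # ~0.025 with the real data

# Tail probability 1 - Phi(z); the finite-population correction is
# omitted here, matching the back-of-envelope numbers above.
z = (newton_mean - max_opr.mean()) / (max_opr.std(ddof=1) / sqrt(75))
print(f"analytic:  {0.5 * (1 - erf(z / sqrt(2))):.4f}")
[/code]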
Another trivia question for you stat-gurus out there to chew on...
Given random assignment of teams to 8 divisions, what is the probability that any division's [mean OPR, 90th %ile OPR, 75th %ile OPR, 50th %ile OPR, etc.] is as strong as Newton's?
I ran 1000 simulations (creating 8000 divisions) of the assignment process (with rookies dealt out randomly and evenly at first, then vets), and for each real CMP division counted the percentages of the simulated divisions that had higher stats than the real division. (A rough sketch of the procedure follows the table below.)
Division, Mean, 50th, 75th, 90th
Newton 2.11% 9.93% 0.26% 1.13%
Hopper 25.31% 75.99% 63.89% 29.55%
Carver 33.69% 14.65% 49.40% 43.60%
Carson 38.50% 40.76% 7.49% 41.76%
Arch. 62.38% 60.72% 34.96% 89.81%
Curie 79.78% 53.46% 77.20% 98.06%
Tesla 86.04% 89.74% 68.84% 81.65%
Galileo 90.23% 76.91% 76.33% 82.76%
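A rough sketch of that procedure (hypothetical array names and stand-in values; the real rookie/veteran split comes from the team list):
[code]
import numpy as np

# Deal shuffled rookies out round-robin first, then veterans.
def simulate_cmp(rookie_opr, vet_opr, n_div=8, rng=None):
    rng = rng or np.random.default_rng()
    order = np.concatenate([rng.permutation(rookie_opr),
                            rng.permutation(vet_opr)])
    return [order[i::n_div] for i in range(n_div)]

rng = np.random.default_rng(2)
rookie_opr = rng.normal(30, 8, 48)    # stand-in rookie MAX OPRs
vet_opr = rng.normal(38, 12, 552)     # stand-in veteran MAX OPRs
newton_p75 = 45.0                     # placeholder for the real value

divs = [d for _ in range(1000)
        for d in simulate_cmp(rookie_opr, vet_opr, rng=rng)]
beat = np.mean([np.percentile(d, 75) >= newton_p75 for d in divs])
print(f"{beat:.2%} of 8,000 simulated divisions beat Newton's 75th %ile")
[/code]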
SpaceBiz
25-04-2016, 19:52
I ran 1000 simulations of the assignment process (with rookies dealt out randomly and evenly at first, then vets), and for each division counted the percentages of simulated divisions that had higher stats than the real division.
Division, Mean, 50th, 75th, 90th
Newton 2.11% 9.93% 0.26% 1.13%
Hopper 25.31% 75.99% 63.89% 29.55%
Carver 33.69% 14.65% 49.40% 43.60%
Carson 38.50% 40.76% 7.49% 41.76%
Arch. 62.38% 60.72% 34.96% 89.81%
Curie 79.78% 53.46% 77.20% 98.06%
Tesla 86.04% 89.74% 68.84% 81.65%
Galileo 90.23% 76.91% 76.33% 82.76%
If the 75th %ile is the most important one (I think it is, because the third pick wins events), the 0.26% figure is really scary. The odds are basically 1 in 400 that you'd be placed in a division this good.
jlmcmchl
25-04-2016, 20:07
I ran 1000 simulations of the assignment process (with rookies dealt out randomly and evenly at first, then vets), and for each division counted the percentages of simulated divisions that had higher stats than the real division.
Division, Mean, 50th, 75th, 90th
Newton 2.11% 9.93% 0.26% 1.13%
Hopper 25.31% 75.99% 63.89% 29.55%
Carver 33.69% 14.65% 49.40% 43.60%
Carson 38.50% 40.76% 7.49% 41.76%
Arch. 62.38% 60.72% 34.96% 89.81%
Curie 79.78% 53.46% 77.20% 98.06%
Tesla 86.04% 89.74% 68.84% 81.65%
Galileo 90.23% 76.91% 76.33% 82.76%
How did you get .01-precision decimals in the percentages if you ran 1000 simulations? That should be accurate to 0.1%, not 0.01%.
ATannahill
25-04-2016, 20:08
How did you get .01-precision decimals in the percentages if you ran 1000 simulations? That should be accurate to 0.1%, not 0.01%.
I imagine that 1,000 simulations created 8,000 divisions.
jlmcmchl
25-04-2016, 20:11
I imagine that 1,000 simulations created 8,000 divisions.
Then that would effectively be 8000 simulations, if you treated each group of 8 simulations as permutations of each other, not 1000.
ATannahill
25-04-2016, 20:15
Then that would effectively be 8000 simulations, if you treated each group of 8 simulations as permutations of each other, not 1000.
I think each team was dealt out once during each simulation, making 8 nameless divisions, not one Newton division, one Carver division, etc., and each real division was compared to each of the 8,000 nameless ones. I don't think it was 8,000 instances of grabbing 75 teams from the entire championship list.
I'm now going to wait for Wes to actually tell us what he did.
I think each team was dealt out once during each simulation, making 8 nameless divisions, not one Newton division, one Carver division, etc., and each real division was compared to each of the 8,000 nameless ones. I don't think it was 8,000 instances of grabbing 75 teams from the entire championship list.
I'm now going to wait for Wes to actually tell us what he did.
I assigned 8 divisions 1000 times, to create 8,000 divisions. I edited my post so it was more clear.
Ten million 75-team samples randomly drawn from 600-team CMP population...
A different simulation:
1) shuffle the 600-element vector of CMP max OPR scores
2) divide the vector into octants and compute the average OPR for each octant
3) if one or more of the octants have average OPR >= Newton's, increment a counter.
4) repeat steps 1 thru 3 ten million times.
14.4% of the time, at least one octant has average OPR >= Newton's
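In code, those four steps are roughly (stand-in data; 100,000 trials instead of ten million):
[code]
import numpy as np

# Shuffle, split into 8 octants of 75, compare octant means to Newton's.
rng = np.random.default_rng(3)
max_opr = rng.normal(37.331, 12.287, 600)   # stand-in CMP vector
newton_mean = 40.1149

trials = 100_000
hits = sum(rng.permutation(max_opr).reshape(8, 75).mean(axis=1).max()
           >= newton_mean
           for _ in range(trials))
print(f"at least one octant >= Newton: {hits / trials:.3f}")  # ~0.144
[/code]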
Jim Zondag
26-04-2016, 01:55
http://i.imgur.com/BeawgZo.jpg
Being a former signals guy, I prefer to look at the world like this graph shows.
By comparing the average OPR against the Signal to Noise Ratio of the same data, you can get a singular view of both the overall strength of an event and its competitive balance.
We try to get our District System to produce a DCMP event which is as far up and to the right as possible. As high-performing and balanced as we can make it.
This makes for an exciting event (and exciting TV :) )
The CMP divisions always under-perform the DCMP events on balance, because there are simply more weak teams at the CMP due to the FIRST regional promotion strategies as compared to the district promotion system.
This graph gets pretty cluttered in the center these days with 150 events (I have a little wizard plugin which helps with viewing this).
The CMP divisions are in RED. These points are of course predictive, actual results may vary next weekend.
From this you can see that the Newton division is a pretty big outlier, so like Waterloo, the top teams from Newton will have a real battle to win their division. It should be fun to watch.
Joe Johnson
26-04-2016, 10:50
http://i.imgur.com/BeawgZo.jpg
Being a former signals guy, I prefer to look at the world like this graph shows.
By comparing the average OPR against the Signal to Noise Ratio of the same data, you can get a singular view of both the overall strength of an event and its competitive balance.
We try to get our District System to produce a DCMP event which is as far up and to the right as possible. As high-performing and balanced as we can make it.
This makes for an exciting event (and exciting TV :) )
The CMP divisions always under-perform the DCMP events on balance, because there are simply more weak teams at the CMP due to the FIRST regional promotion strategies as compared to the district promotion system.
This graph gets pretty cluttered in the center these days with 150 events (I have a little wizard plugin which helps with viewing this).
The CMP divisions are in RED. These points are of course predictive, actual results may vary next weekend.
From this you can see that the Newton division is a pretty big outlier, so like Waterloo, the top teams from Newton will have a real battle to win their division. It should be fun to watch.
Jim,
I like the chart, but I am always suspicious of averages over medians/percentiles. Can you redo the data with each "point" becoming a bar between two points (50%tile & 90%tile, say)?
Can you explain how you generate an SNR for each group of teams? I suppose it is SNR = mean/stdev, but perhaps you have another idea. If you are using mean/stdev, I would suggest that the two points for each competition group be (50%tile/stdev, 50%tile) & (90%tile/stdev, 90%tile). I think this bar would better represent the team groupings. FWIW.
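Something like this sketch is what I have in mind (hypothetical; opr stands for one event's MAX OPR array):
[code]
import numpy as np

# Replace each event's single (mean/stdev, mean) point with a bar from
# the 50th to the 90th percentile, each normalized by the same stdev.
def event_bar(opr):
    sigma = opr.std(ddof=1)
    p50, p90 = np.percentile(opr, [50, 90])
    return (p50 / sigma, p50), (p90 / sigma, p90)

rng = np.random.default_rng(4)
print(event_bar(rng.normal(40, 12, 75)))   # stand-in event
[/code]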
Dr. Joe J.
P.S. What happened to Newton this year is really a fruit of the same seed that produces a lot of the frustration I have with FIRST. (I am not saying FIRST is nothing but frustration. I love them. They are doing a lot of great things. But when I get frustrated with them, it is often from the same root cause.) Namely, they dither between being an "everyone should do the right thing, because it's the right thing, and we all agree that right is right, and who can be against being right?" organization and being a robotic sports activity.
Because part of them doesn't want to be a robotic sport, they are completely blind to problems that participants in the robotic sports activities they organize are painfully aware of. Like having 1 division out of 8 be stacked with capable robot teams while others are sparsely populated.
14.4% of the time, at least one octant has average OPR >= Newton's
Yet a another perspective:
75 (= 1/8 or 12.5%) of the 600 CMP teams have max OPR greater than 50.98
Newton has 18 such teams (about 24%) greater than 50.98, roughly twice the CMP population percentage.
I re-ran the simulation mentioned in the quote above, but this time counted the number of times at least one division had 18 or more teams >= 50.98. The result was 1.87% of the time.
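The only change from the earlier octant script is the counting step; a sketch with stand-in data:
[code]
import numpy as np

# Count trials where at least one octant has 18+ teams above the CMP
# 87.5th-percentile cutoff (50.98 with the real data).
rng = np.random.default_rng(5)
max_opr = rng.normal(37.331, 12.287, 600)   # stand-in
cutoff = np.percentile(max_opr, 87.5)

trials = 100_000
hits = sum(((rng.permutation(max_opr).reshape(8, 75) >= cutoff)
            .sum(axis=1) >= 18).any()
           for _ in range(trials))
print(f"{hits / trials:.4f}")   # ~0.0187 reported above
[/code]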
microbuns
26-04-2016, 14:24
I was interested in comparing the divisions against all the district champs, as well as against SVR and Waterloo. Here is what I came up with. Max OPR is used as the metric:
http://www.chiefdelphi.com/media/img/ff0/ff0d42708eb41717b0ea0a06d53a1ba3_l.jpg
It is super crowded, and I tried my best to colour each one differently, but I'm not sure if I was successful.
I also did a comparison against the championship averages. My formula was (1 - cmp/x), where cmp is the championship's averaged value and x is the value for the given competition. For top 8/24, I just wanted to look at how strong the elimination alliances might be. The top 8 and top 24 for the championship (which the others are compared against to sit above or below 0) are done by averaging the top 8/24 in each division, NOT by taking the overall top 8*8 or 8*24. That was just to make the analysis a bit easier to do.
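In code form, that comparison is simply (hypothetical numbers):
[code]
# (1 - cmp/x): positive when a competition beats the CMP baseline.
def rel_to_cmp(x, cmp_avg):
    return 1 - cmp_avg / x

print(rel_to_cmp(42.0, 37.3))   # ~ +0.11, i.e. 11% above the CMP value
[/code]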
http://www.chiefdelphi.com/media/img/235/235a2929c537f9864aa4d71850639cfe_l.jpg
And yes, I know I'm not a data vis guy :rolleyes: I try
I was interested in comparing the divisions against all the district champs, as well as against SVR and Waterloo. Here is what I came up with...
You posted what appears to be the same graph earlier this morning in another thread (http://www.chiefdelphi.com/forums/showpost.php?p=1579216&postcount=1), and received several responses, some asking you questions. Have you been back there to follow up?
I was interested in comparing the divisions against all the district champs, as well as against SVR and Waterloo. Here is what I came up with. Max OPR is used as the metric:
...snip...
And yes, I know I'm not a data vis guy :rolleyes: I try
I did very similar ones in the past, but normalized the lists to "100" participants for each group. This gets rid of some of the natural drift you see due to event size. I.e., the 30th at Waterloo is not that much different from most of the "lowest" you see in the divisions.
Another interesting way to look at the data is to look at the top 24/28/32...
Typical alliances are composed mostly of the top 24-ish teams, so comparing that group can really give you an idea of what elims will look like. As for the "ish": sometimes 28 is more representative, as it is often hard to tell 20-28 apart in a lot of fields. 32 is useful for Worlds, as each alliance gets a back-up.
Big Ideas
26-04-2016, 19:54
http://i.imgur.com/BeawgZo.jpg
Being a former signals guy, I prefer to look at the world like this graph shows.
By comparing the average OPR against the Signal to Noise Ratio of the same data, you can get a singular view of both the overall strength of an event and its competitive balance.
We try to get our District System to produce a DCMP event which is as far up and to the right as possible. As high-performing and balanced as we can make it.
This makes for an exciting event (and exciting TV :) )
The CMP divisions always under-perform the DCMP events on balance, because there are simply more weak teams at the CMP due to the FIRST regional promotion strategies as compared to the district promotion system.
This graph gets pretty cluttered in the center these days with 150 events (I have a little wizard plugin which helps with viewing this).
The CMP divisions are in RED. These points are of course predictive, actual results may vary next weekend.
From this you can see that the Newton division is a pretty big outlier, so like Waterloo, the top teams from Newton will have a real battle to win their division. It should be fun to watch.
I like this view, BUT I would love to see a time axis (week 1 through CMP); colors would work. San Diego (week 1) shows as less competitive than Sacramento (week 4), but my perception was the opposite. So, how much does competitiveness slide as the season progresses?
Joe Johnson
26-04-2016, 22:22
Another trivia question for you stat-gurus out there to chew on...
Given random assignment of teams to 8 divisions, what is the probability that any division's [mean OPR, 90th %ile OPR, 75th %ile OPR, 50th %ile OPR, etc.] is as strong as Newton's?
As I suggested above, I looked into this very question.
TL;DR: I estimate that as a competitor you would expect to be in a division as stacked as Newton (or more stacked) once every 250-500 years. Further, as a spectator, you would expect to attend a Worlds with a division as stacked as Newton (or more stacked) once every 40-70 years.
A lot depends on the method you use to define what we mean when we say Newton is stacked. I decided that I would plot the MAX OPR for the Nth-percentile team in a division and compare that to the same Nth percentile for the CMP population as a whole.
This chart helps you see what I mean.
http://i.imgur.com/QGMmpz2.jpg
This chart is better because I do the subtraction (DIV OPR %tile - CMP OPR %tile).
http://i.imgur.com/TnKBUnt.jpg
So NOW I can propose my metric. Let's integrate the area under the curve of the above chart over some range, and then normalize the area by the width of the range.
But what range do we want to use?
Here are two charts for proposed ranges.
The first is my "I think this would be a fair metric" range (55%tile to 95%tile). My thinking is that it eliminates the very top of the range because, hey, whenever a handful of powerhouse teams end up in a division, the top is off the charts; that's just normal.
http://i.imgur.com/G38c0rS.jpg
My second is "Let's tailor the metric as much as we can to be favorable to Newton." That is, pick the range so that it measures where Newton opens up a big gap compared to the CMP as a whole. Coincidentally, that also covers the top 24 teams in a 75-team division, so if those teams make up the playoff teams, they'd be a pretty amazing set of 8 alliances.
http://i.imgur.com/YtXOunR.jpg
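For concreteness, the metric itself is only a few lines (hypothetical names, stand-in data; with percentile points spaced 1 apart, the normalized area reduces to the mean gap):
[code]
import numpy as np

# Average the (division - CMP) percentile gap over a range, e.g. 55-95
# for Method #1: area under the gap curve divided by the range width.
def stacked_metric(div_opr, cmp_opr, lo=55, hi=95):
    pts = np.arange(lo, hi + 1)
    gap = np.percentile(div_opr, pts) - np.percentile(cmp_opr, pts)
    return gap.mean()

rng = np.random.default_rng(6)
cmp_opr = rng.normal(37.3, 12.3, 600)              # stand-in CMP
div_opr = rng.choice(cmp_opr, 75, replace=False)   # stand-in division
print(stacked_metric(div_opr, cmp_opr))
[/code]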
Now I can do a simulated Division Assignment. I did it (almost) like FIRST is rumored to do it: I divided the rookie teams and the non-rookies randomly but separately. For rookies, the rule should be teams with numbers 5726 and higher, but that gave me a number of rookies that wasn't evenly divisible by 8. For reasons that are hard to explain, it was easier for my simulation if the number of rookies was divisible by 8, so I added 3 teams to my "rookie group" (5686, 5690, & 5712). I don't think this changed my results significantly.
After running 2,500 simulated CMP Division Assignments (that is, 20,000 Divisions), I get the following distribution of the Division Stacked Metric (only showing the chart for Method 1, but Method 2 was similar).
http://i.imgur.com/fVX1Z1v.jpg
Looking at the data, it is easy to see that 2,500 CMPs are plenty when it comes to characterizing the variation involved.
Long story short.
Based on Method #1:
I estimate that as a competitor you would expect to be in a division as stacked as Newton (or more stacked) once every 250 years. Further, as a spectator, you would expect to attend a Worlds with a division as stacked as Newton (or more stacked) once every 40 years.
Based on Method #2:
I estimate that as a competitor you would expect to be in a division as stacked as Newton (or more stacked) once every 500 years. Further, as a spectator, you would expect to attend a Worlds with a division as stacked as Newton (or more stacked) once every 67 years.
So... there you have it. Any way you cut it, Newton is one wacky division this year.
Also, FIRST should probably think a bit more about how they make the divisions, to avoid this kind of thing in the future.
Comments welcome.
Dr. Joe J.
Joe Johnson
26-04-2016, 22:36
FWIW
Ten million 75-team samples randomly drawn from 600-team CMP population; 2.5% had average OPR greater than Newton's.
Interestingly:
mean of max OPR of 600 CMP teams = 37.331
std dev of max OPR of 600 CMP teams = 12.287
predicted std dev of the means of the 75-team samples = 12.287/sqrt(75) = 1.4187
mean OPR of Newton = 40.1149
predicted Zscore of Newton = (40.1149-37.331)/1.4187 = 1.9623
area under normal curve between mean and 1.9623 = 0.4750
predicted probability = 0.5 - 0.4750 = 0.025 = 2.5%
Interesting. Three methods provide the same result (Ether's two methods and my Method #1*: 2.5%, or 1 in 40).
I swear, I didn't look at Ether's answers before I did my simulation.
Dr. Joe J.
*which, I thought, should be the better metric -- Method #2 was more tailored to be stacked in the way that Newton was stacked, so of course it is more rarely observed.