Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   General Forum (http://www.chiefdelphi.com/forums/forumdisplay.php?f=16)
-   -   Comparison of DIV MAX OPRs (http://www.chiefdelphi.com/forums/showthread.php?t=147628)

Ether 25-04-2016 23:27

Re: Comparison of DIV MAX OPRs
 
Quote:

Originally Posted by Ether (Post 1578904)
Ten million 75-team samples randomly drawn from 600-team CMP population...

A different simulation:
1) shuffle the 600-element vector of CMP max OPR scores

2) divide the vector into octants and compute the average OPR for each octant

3) if one or more of the octants have average OPR >= Newton's, increment a counter.

4) repeat steps 1 thru 3 ten million times.
14.4% of the time, at least one octant has average OPR >= Newton's
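
In code, that procedure might look like this minimal NumPy sketch (Ether didn't post his code, so the names opr — the 600 CMP max-OPR values — and newton_avg — Newton's average max OPR — are hypothetical):

Code:

import numpy as np

rng = np.random.default_rng()

def octant_sim(opr, newton_avg, trials=10_000_000):
    opr = np.array(opr, dtype=float)          # work on a copy
    hits = 0
    for _ in range(trials):                   # step 4: repeat
        rng.shuffle(opr)                      # step 1: shuffle the 600 values
        octants = opr.reshape(8, 75)          # step 2: eight divisions of 75
        if octants.mean(axis=1).max() >= newton_avg:
            hits += 1                         # step 3: some octant as strong as Newton
    return hits / trials                      # Ether reports ~0.144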



Jim Zondag 26-04-2016 01:55

Re: Comparison of DIV MAX OPRs
 


Being a former signals guy, I prefer to look at the world like this graph shows.
By comparing the average OPR against the Signal to Noise Ratio of the same data, you can get a singular view of both the overall strength of an event and its competitive balance.
We try to get our District System to produce a DCMP event which is as far up and to the right as possible, as high-performing and balanced as we can make it.
This makes for an exciting event (and exciting TV :) )
The CMP divisions always under-perform the DCMP events on balance, because there are simply more weak teams at the CMP due to the FIRST regional promotion strategies as compared to the district promotion system.
This graph gets pretty cluttered in the center these days with 150 events (I have a little wizard plugin which helps with viewing this).
The CMP divisions are in RED. These points are of course predictive, actual results may vary next weekend.
From this you can see that the Newton division is a pretty big outlier, so like Waterloo, the top teams from Newton will have a real battle to win their division. It should be fun to watch.
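
In case anyone wants to reproduce a point on a chart like this: a minimal sketch, assuming SNR here means mean divided by standard deviation (Jim doesn't give his formula; Joe asks about it below):

Code:

import numpy as np

def event_point(oprs):
    # One (x, y) point per event: x = assumed SNR (mean/stdev),
    # y = average OPR. Jim's actual SNR definition may differ.
    oprs = np.asarray(oprs, dtype=float)
    return oprs.mean() / oprs.std(), oprs.mean()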

Joe Johnson 26-04-2016 10:50

Re: Comparison of DIV MAX OPRs
 
Quote:

Originally Posted by Jim Zondag (Post 1579157)


Being a former signals guy, I prefer to look at the world like this graph shows.
By comparing the average OPR against the Signal to Noise Ratio of the same data, you can get a singular view of both the overall strength of an event and its competitive balance.
...snip...

Jim,
I like the chart, but I am always suspicious of averages versus medians/percentiles. Can you redo the data with each "point" becoming a bar between two points (50%tile & 90%tile, say)?

Can you explain how you generate an SNR for each group of teams? I suppose it is the (average/stdev), but perhaps you have another idea. If you are using average/stdev, I would suggest that the two points for each competition group be (50%tile/stdev, 50%tile) & (90%tile/stdev, 90%tile). I think this bar would better represent the team groupings. FWIW.
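
A minimal sketch of those bar endpoints, assuming the average/stdev reading of SNR (a hypothetical helper, not Jim's actual computation):

Code:

import numpy as np

def event_bar(oprs):
    # Proposed bar endpoints: (50%tile/stdev, 50%tile) and
    # (90%tile/stdev, 90%tile) for one competition group.
    oprs = np.asarray(oprs, dtype=float)
    std = oprs.std()
    p50, p90 = np.percentile(oprs, [50, 90])
    return (p50 / std, p50), (p90 / std, p90)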

Dr. Joe J.

P.S. What happened to Newton this year is really a fruit of the same seed that produces a lot of the frustration I have with FIRST (I am not saying FIRST is nothing but frustration. I love them. They are doing a lot of great things. But when I get frustrated with them, it is often from the same root cause). Namely, they dither between being an "everyone should do the right thing, because it's the right thing, and we all agree that right is right, and who can be against being right?" organization and being a robotic sports activity.

Because part of them doesn't want to be a robotic sport, they are completely blind to problems that participants in the robotic sports activities they organize are painfully aware of. Like having 1 division out of 8 be stacked with capable robot teams while others are sparsely populated.

Ether 26-04-2016 11:58

Re: Comparison of DIV MAX OPRs
 
Quote:

Originally Posted by Ether (Post 1579110)
14.4% of the time, at least one octant has average OPR >= Newton's


Yet another perspective:

75 (= 1/8 or 12.5%) of the 600 CMP teams have max OPR greater than 50.98

Newton has 18 such teams (about 24%), roughly twice the CMP population percentage.

I re-ran the simulation mentioned in the quote above, but this time counted the number of times at least one division had 18 or more teams >= 50.98. The result was 1.87% of the time.
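
A sketch of this variant of the simulation (again with a hypothetical opr array holding the 600 max-OPR values):

Code:

import numpy as np

rng = np.random.default_rng()

def stacked_count_sim(opr, cutoff=50.98, need=18, trials=10_000_000):
    opr = np.array(opr, dtype=float)
    hits = 0
    for _ in range(trials):
        rng.shuffle(opr)
        # teams at or above the cutoff, per simulated division
        per_div = (opr.reshape(8, 75) >= cutoff).sum(axis=1)
        if per_div.max() >= need:
            hits += 1
    return hits / trials          # Ether reports ~0.0187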



microbuns 26-04-2016 14:24

Re: Comparison of DIV MAX OPRs
 
I was interested in comparing the divisions against all the district champs, as well as against SVR and Waterloo. Here is what I came up with. Max OPR is used as the metric:



It is super crowded, and I tried my best to colour each one differently, but I'm not sure if I was successful.

I also did a comparison against the champs averages. My formula was (1 - cmp/x), where cmp is the championship's averaged value and x is the value for the given competition. For top 8/24, I just wanted to look at how strong the elim alliances might be. The top 8 and top 24 for the championship (the baseline the others are compared against, above or below 0) are computed by averaging the top 8/24 in each group, NOT by taking the top 8*8 or 8*24. That was just to make the analysis a bit easier to do.
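
A tiny worked example of that formula, with made-up numbers just to show the sign convention:

Code:

def relative_strength(x, cmp_value):
    # > 0 when the competition's value x beats the championship
    # aggregate cmp_value; < 0 when it trails it.
    return 1 - cmp_value / x

# e.g. a top-8 average of 45 against a CMP top-8 average of 40:
# relative_strength(45, 40) -> 0.111..., about 11% above champs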



And yes, I know I'm not a data vis guy :rolleyes: I try

Ether 26-04-2016 14:39

Re: Comparison of DIV MAX OPRs
 
Quote:

Originally Posted by microbuns (Post 1579408)
I was interested in comparing the divisions against all the district champs, as well as against SVR and Waterloo. Here is what I came up with...

You posted what appears to be the same graph earlier this morning in another thread, and received several responses, some asking you questions. Have you been back there to follow up?



IKE 26-04-2016 15:09

Re: Comparison of DIV MAX OPRs
 
Quote:

Originally Posted by microbuns (Post 1579408)
I was interested in comparing the divisions against all the district champs, as well as against SVR and Waterloo. Here is what I came up with. Max OPR is used as the metric:
...snip...
And yes, I know I'm not a data vis guy :rolleyes: I try

I did very similar ones in the past, but normalized the lists to "100" participants for each group. This gets rid of some of the natural drift you see due to event size; i.e., the 30th at Waterloo is not much different from most of the "lowest" you see in the divisions.

Another interesting way to look at the data is to look at the top 24/28/32...
Typical alliances are composed mostly of the top 24-ish teams, so comparing that group can really give you an idea of what elims will look like. As for the "ish": sometimes 28 is more representative, as it is often hard to tell 20-28 apart in a lot of fields. 32 is useful for Worlds, as each alliance gets its back-up.
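
One way to do that normalization to "100" participants, sketched with linear interpolation (IKE doesn't say how he resampled, so this is an assumption):

Code:

import numpy as np

def normalize_to_100(oprs, n=100):
    # Resample a ranked OPR list onto n evenly spaced positions so
    # events of different sizes can be overlaid directly.
    ranked = np.sort(np.asarray(oprs, dtype=float))   # ascending for np.interp
    src = np.linspace(0.0, 1.0, len(ranked))
    dst = np.linspace(0.0, 1.0, n)
    return np.interp(dst, src, ranked)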

Big Ideas 26-04-2016 19:54

Re: Comparison of DIV MAX OPRs
 
Quote:

Originally Posted by Jim Zondag (Post 1579157)


Being a former signals guy, I prefer to look at the world like this graph shows.
...snip...
This graph gets pretty cluttered in the center these days with 150 events (I have a little wizard plugin which helps with viewing this).

I like this view, BUT I would love to see a time axis (WK1-CMP); colors would work. San Diego (week 1) shows as less competitive than Sacramento (week 4), but my perception was the opposite. So, how much does competitiveness slide as the season progresses?

Joe Johnson 26-04-2016 22:22

Re: Comparison of DIV MAX OPRs
 
Quote:

Originally Posted by Jared Russell (Post 1578808)
Another trivia question for you stat-gurus out there to chew on...

Given random assignment of teams to 8 divisions, what is the probability that any division's [mean OPR, 90th %ile OPR, 75th %ile OPR, 50th %ile OPR, etc.] is as strong as Newton's?

As I suggested above, I looked into this very question.

TL;DR: I estimate that as a competitor you would expect to be in a division as stacked as Newton (or more stacked) once every 250-500 years. Further, as a spectator, you would expect to attend a Worlds with a division as stacked as Newton (or more stacked) once every 40-70 years.


A lot depends on the method you use to define what we mean when we say Newton is "stacked." I decided that I would plot the MAX OPR for the Nth percentile team in a division and compare that to the same Nth percentile for the CMP population as a whole.

This chart helps you see what I mean.




This chart is better because I do the subtraction (DIV OPR %tile - CMP OPR %tile).




So NOW I can propose my metric: integrate the area under the curve of the above chart over some range, then normalize the area by the width of the range.

But what range do we want to use?

Here are two charts for proposed ranges.

The first is my "I think this would be a fair metric" range (55%tile to 95%tile). My thinking is that it eliminates the very top of the range because, hey, whenever a handful of powerhouse teams ends up in a division, the top is off the charts; that's just normal.




My second is "Let's tailor the metric as much as we can to be favorable to Newton." That is, pick the range so that it measures where Newton opens up a big gap compared to the CMP as a whole. Coincidentally, that is also the top 24 teams in a 75-team division, so if those teams make up the playoff teams, they'd be a pretty amazing set of 8 alliances.
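
A sketch of the metric under one reading of that description, using 1-percentile steps as the integration grid; the defaults are Method #1's 55-95 range (the exact range for Method #2 isn't spelled out above):

Code:

import numpy as np

def stacked_metric(div_oprs, cmp_oprs, lo=55, hi=95):
    # Average gap (division %tile OPR - CMP %tile OPR) across the
    # chosen range: the area under the difference curve divided by
    # the width of the range.
    pts = np.arange(lo, hi + 1)
    gap = np.percentile(div_oprs, pts) - np.percentile(cmp_oprs, pts)
    return gap.mean()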



Now I can do a simulated division assignment. I did it (almost) like FIRST is rumored to do it: I divided the rookie teams and the non-rookies randomly but separately. For rookies, the rule should be teams with numbers 5726 and higher, but that gave me a number of rookies that wasn't evenly divisible by 8. For reasons that are hard to explain, it was easier for my simulation if the number of rookies was divisible by 8, so I added 3 teams to my "rookie group" (5686, 5690, & 5712). I don't think this changed my results significantly.
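
A rough sketch of one such assignment (the 5726 cutoff and the even 8-way split are as described above; the three hand-added teams below the cutoff would need special-casing, which is omitted here):

Code:

import numpy as np

rng = np.random.default_rng()

def assign_divisions(team_numbers, rookie_cutoff=5726):
    # Shuffle rookies and veterans separately, then deal each group
    # evenly into 8 divisions.
    teams = np.asarray(team_numbers)
    rookies = teams[teams >= rookie_cutoff]
    vets = teams[teams < rookie_cutoff]
    rng.shuffle(rookies)
    rng.shuffle(vets)
    return [np.concatenate(pair) for pair in
            zip(np.array_split(rookies, 8), np.array_split(vets, 8))]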

After running 2,500 simulated CMP division assignments (that is, 20,000 divisions), I get this distribution of the division stacked metric (only showing the chart for Method #1, but Method #2 was similar).




Looking at the data, it is easy to see that 2,500 CMP's are plenty when it comes to characterizing the variation involved.

Long story short.

Based on Method #1:
I estimate that as a competitor you would expect to be in a division as stacked as Newton (or more stacked) once every 250 years. Further, as a spectator, you would expect to attend a Worlds with a division as stacked as Newton (or more stacked) once every 40 years.

Based on Method #2:
I estimate that as a competitor you would expect to be in a division as stacked as Newton (or more stacked) once every 500 years. Further, as a spectator, you would expect to attend a Worlds with a division as stacked as Newton (or more stacked) once every 67 years.

So... there you have it. Any way you cut it, Newton is one wacky division this year.

Also, FIRST should probably think a bit more about how they make the divisions, to avoid this kind of thing in the future.

Comments welcome.

Dr. Joe J.

Joe Johnson 26-04-2016 22:36

Re: Comparison of DIV MAX OPRs
 
Quote:

Originally Posted by Ether (Post 1578904)
FWIW

Ten million 75-team samples randomly drawn from 600-team CMP population; 2.5% had average OPR greater than Newton's.


Interestingly:

mean of max OPR of 600 CMP teams = 37.331

std dev of max OPR of 600 CMP teams = 12.287

predicted std dev of the means of the 75-team samples = 12.287/sqrt(75) = 1.4187

mean OPR of Newton = 40.1149

predicted Zscore of Newton = (40.1149-37.331)/1.4187 = 1.9623

area under normal curve between mean and 1.9623 = .4750

predicted probability = 0.5 - 0.4750 = 0.025 = 2.5%


Interesting: three methods provide the same result (Ether's two methods and my Method #1*). 2.5% is 1 in 40.
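
For anyone who wants to re-check that arithmetic, the upper tail of the normal curve can be computed directly (a minimal Python check using Ether's numbers from the quote above):

Code:

from math import erfc, sqrt

mean, sd, n = 37.331, 12.287, 75
se = sd / sqrt(n)               # 1.4187, stdev of the 75-team sample means
z = (40.1149 - mean) / se       # 1.9623, Newton's predicted z-score
p = 0.5 * erfc(z / sqrt(2))     # ~0.0249, the 2.5% tail probability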

I swear, I didn't look at Ether's answers before I did my simulation.

Dr. Joe J.


*which, I thought, should be the better metric; Method #2 was more tailored to the way Newton was stacked, so of course it is more rarely observed.

