#31
Re: Comparison of DIV MAX OPRs
Quote:
1) shuffle the 600-element vector of CMP max OPR scores

14.4% of the time, at least one octant has average OPR >= Newton's.
#32
Re: Comparison of DIV MAX OPRs
[chart: average OPR vs. signal-to-noise ratio, one point per event]

Being a former signals guy, I prefer to look at the world the way this graph shows. By comparing the average OPR against the signal-to-noise ratio (SNR) of the same data, you get a single view of both the overall strength of an event and its competitive balance.

We try to get our district system to produce a DCMP event that is as far up and to the right as possible: as high-performing and as balanced as we can make it. That makes for an exciting event (and exciting TV).

The CMP divisions always under-perform the DCMP events on balance, because there are simply more weak teams at the CMP under FIRST's regional promotion strategy than under the district promotion system.

The graph gets pretty cluttered in the center these days with 150 events (I have a little wizard plugin that helps with viewing this). The CMP divisions are in red. These points are of course predictive; actual results may vary next weekend.

From this you can see that the Newton division is a pretty big outlier, so, like Waterloo, the top teams from Newton will have a real battle to win their division. It should be fun to watch.
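For anyone who wants to reproduce this kind of chart, here is a minimal sketch of the per-event calculation, assuming SNR is taken as mean/stdev of each event's max OPRs (the function name and the OPR lists below are made up for illustration):

```python
import statistics

def event_point(oprs):
    """Map one event's list of max OPRs to an (SNR, mean) point,
    where SNR is taken as mean / standard deviation of the scores."""
    mean = statistics.mean(oprs)
    return (mean / statistics.stdev(oprs), mean)

# Hypothetical OPR lists: a strong, balanced event vs. a top-heavy one.
balanced = [55, 50, 52, 48, 51, 49, 53, 47]
top_heavy = [90, 85, 30, 25, 28, 27, 31, 24]

print(event_point(balanced))   # far up and to the right on the chart
print(event_point(top_heavy))  # lower SNR: strength without balance
```

Plotting one such point per event (SNR on one axis, mean OPR on the other) would give the up-and-to-the-right view described above.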
#33
Re: Comparison of DIV MAX OPRs
Quote:
I like the chart, but I am always suspicious of averages versus medians/percentiles. Can you redo the data with each "point" becoming a bar between two points (say, 50%tile and 90%tile)?

Can you explain how you generate an SNR for each group of teams? I suppose it is (average/stdev), but perhaps you have another idea. If you are using average/stdev, I would suggest that the two points for each competition group be (50%tile/stdev, 50%tile) and (90%tile/stdev, 90%tile). I think this bar would better represent the team groupings. FWIW.

Dr. Joe J.

P.S. What happened to Newton this year is really a fruit of the same seed that produces a lot of the frustration I have with FIRST. (I am not saying FIRST is nothing but frustration. I love them. They are doing a lot of great things. But when I get frustrated with them, it is often from the same root cause.) Namely, they dither between being an "everyone should do the right thing, because it's the right thing, and we all agree that right is right, and who can be against being right?" organization and being a robotic sports activity. Because part of them doesn't want to be a robotic sport, they are completely blind to problems that participants in the robotic sports activities they organize are painfully aware of. Like having 1 division out of 8 be stacked with capable robot teams while others are sparsely populated.

Last edited by Joe Johnson : 26-04-2016 at 10:54.
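The bar endpoints Dr. Joe proposes can be sketched as follows, assuming SNR = average/stdev and a simple nearest-rank percentile (the helper names and toy data are hypothetical, not from the original analysis):

```python
import statistics

def nearest_rank(sorted_vals, p):
    """Simple nearest-rank percentile (p in 0..100); a crude stand-in
    for fancier interpolation such as numpy.percentile."""
    k = round(p / 100 * (len(sorted_vals) - 1))
    return sorted_vals[max(0, min(len(sorted_vals) - 1, k))]

def bar_endpoints(oprs):
    """The two points proposed for one competition group:
    (50%tile/stdev, 50%tile) and (90%tile/stdev, 90%tile)."""
    vals = sorted(oprs)
    s = statistics.stdev(vals)
    p50 = nearest_rank(vals, 50)
    p90 = nearest_rank(vals, 90)
    return (p50 / s, p50), (p90 / s, p90)

lo, hi = bar_endpoints(list(range(1, 12)))  # toy data: OPRs 1..11
```

Drawing a segment between the two returned points for each group would replace each single point on the chart with the suggested bar.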
#34
Re: Comparison of DIV MAX OPRs
Quote:
Yet another perspective: 75 (1/8, or 12.5%) of the 600 CMP teams have max OPR greater than 50.98. Newton has 18 (about 24%) teams greater than 50.98, roughly twice the CMP population percentage.

I re-ran the simulation mentioned in the quote above, but this time counted the number of times at least one division had 18 or more teams with max OPR >= 50.98. The result was 1.87% of the time.
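A sketch of that re-run, under my reading of the method: shuffle the 600 max OPRs into 8 random divisions of 75 and count how often at least one division has 18 or more teams at or above 50.98. The function name and the synthetic OPR vector are my own; only the above/below-threshold counts matter, so a stand-in with 75 of 600 teams over the cutoff behaves like the real data.

```python
import random

def stacked_fraction(oprs, trials=20000, threshold=50.98,
                     count_needed=18, divisions=8):
    """Fraction of random shuffles in which at least one division
    holds `count_needed` or more teams at or above `threshold`."""
    size = len(oprs) // divisions
    # Only above/below threshold matters, so reduce to booleans up front.
    flags = [opr >= threshold for opr in oprs]
    hits = 0
    for _ in range(trials):
        random.shuffle(flags)
        if any(sum(flags[d * size:(d + 1) * size]) >= count_needed
               for d in range(divisions)):
            hits += 1
    return hits / trials

# Synthetic stand-in for the real data: 75 of 600 teams above the cutoff.
oprs = [60.0] * 75 + [30.0] * 525
random.seed(2016)
print(stacked_fraction(oprs, trials=3000))  # should land near the ~1.87% reported
```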
#35
Re: Comparison of DIV MAX OPRs
I was interested in comparing the divisions against all the district champs, as well as against SVR and Waterloo. Here is what I came up with, using max OPR as the metric:

[chart: max OPR curves for each division, the DCMPs, SVR, and Waterloo]

It is super crowded, and I tried my best to colour each one differently, but I'm not sure if I was successful.

I also did a comparison against the champs averages. My formula was (1 - cmp/x), where cmp is the championship's averaged value and x is the value for the given competition. For top 8/24, I just wanted to look at how strong the elim alliances might be. Top 8 and top 24 for championships (which the others are compared against to be above or below 0) are computed by averaging the top 8/24 in each group, NOT by taking the top 8*8 or 8*24. That was just to make the analysis a bit easier to do.

[chart: relative strength vs. the championship average, overall and for top 8/24]

And yes, I know I'm not a data vis guy. I try.
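The (1 - cmp/x) comparison and the top-8/24 averaging described above can be sketched like this (the function names and example numbers are hypothetical):

```python
def relative_strength(cmp_avg, event_avg):
    """The post's formula (1 - cmp/x): positive means the event outpaces
    the championship baseline, negative means it falls short."""
    return 1 - cmp_avg / event_avg

def top_n_average(oprs, n):
    """Average of a group's n highest max OPRs (the top-8 / top-24 view)."""
    return sum(sorted(oprs, reverse=True)[:n]) / n

# Hypothetical numbers: an event averaging 50 vs. a championship baseline of 40.
print(relative_strength(40.0, 50.0))  # positive: event above the baseline
```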
#36
Re: Comparison of DIV MAX OPRs
Quote:
#37
Re: Comparison of DIV MAX OPRs
Quote:
Another interesting way to look at the data is to look at the top 24/28/32. Typical alliances are comprised mostly of the top 24-ish teams, so comparing that group can really give you an idea of what elims will look like. As for the "ish": sometimes 28 is more representative, as it is often hard to tell teams 20-28 apart in a lot of fields. 32 is useful for Worlds, as each alliance gets a back-up.
#38
Re: Comparison of DIV MAX OPRs
Quote:
#39
Re: Comparison of DIV MAX OPRs
Quote:
TL;DR: I estimate that as a competitor you would expect to be in a division as stacked as Newton (or more stacked) once every 250-500 years. Further, as a spectator, you would expect to attend a Worlds with a division as stacked as Newton (or more stacked) once every 40-70 years.

A lot depends on the method you use to define what we mean when we say Newton is stacked. I decided that I would plot the max OPR for the Nth-percentile team in a division and compare that to the same Nth percentile for the CMP population as a whole. This chart helps you see what I mean.

[chart: max OPR by percentile, each division vs. the CMP as a whole]

This chart is better because I do the subtraction (DIV OPR %tile - CMP OPR %tile).

[chart: gap (DIV OPR %tile - CMP OPR %tile) by percentile, for each division]

So NOW I can propose my metric. Let's integrate the area under the curve of the above chart over some range, and then normalize the area by the width of the range. But what range do we want to use? Here are two charts for proposed ranges.

The first is my "I think this would be a fair metric" range (55%tile to 95%tile). My thinking is that it eliminates the very top of the range because, hey, whenever a handful of powerhouse teams end up in a division, the top is off the charts; that's just normal.

[chart: Method 1 - gap curve integrated over the 55%tile-95%tile range]

My second is "let's tailor the metric as much as we can to be favorable to Newton." That is, pick the range so that it measures where Newton opens up a big gap compared to the CMP as a whole. Coincidentally, that also covers the top 24 teams in a 75-team division, so if those teams make up the playoff teams, they'd have a pretty amazing set of 8 alliances.

[chart: Method 2 - gap curve integrated over the range where Newton's gap is largest]

Now I can do a simulated division assignment. I did it (almost) like FIRST is rumored to do it: I divided the rookie teams and the non-rookies randomly, but separately. For rookies, the rule should be teams with numbers 5726 and higher, but that gave me a number of rookies that wasn't evenly divisible by 8. For reasons that are hard to explain, it was easier for my simulation if the number of rookies was divisible by 8, so I added 3 teams to my "rookie group" (5686, 5690, & 5712). I don't think this changed my results significantly.

After running 2,500 simulated CMP division assignments (that is, 20,000 divisions), I get this distribution of the division-stacked metric (only showing the chart for Method 1, but Method 2 was similar).

[chart: distribution of the Method 1 stacked metric across 20,000 simulated divisions]

Looking at the data, it is easy to see that 2,500 CMPs are plenty when it comes to characterizing the variation involved.

Long story short, based on Method #1: I estimate that as a competitor you would expect to be in a division as stacked as Newton (or more stacked) once every 250 years. Further, as a spectator, you would expect to attend a Worlds with a division as stacked as Newton (or more stacked) once every 40 years.

Based on Method #2: I estimate that as a competitor you would expect to be in a division as stacked as Newton (or more stacked) once every 500 years. Further, as a spectator, you would expect to attend a Worlds with a division as stacked as Newton (or more stacked) once every 67 years.

So... there you have it. Any way you cut it, Newton is one wacky division this year. Also, FIRST should probably think about how they make the divisions a bit more, to avoid this kind of thing in the future.

Comments welcome.

Dr. Joe J.

Last edited by Joe Johnson : 26-04-2016 at 23:24.
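A minimal sketch of that simulation, under my reading of the post: rookies and veterans are shuffled separately and dealt evenly into 8 divisions, and the Method 1 metric is the mean gap between the division and CMP percentile curves over the 55th-95th percentiles (a discrete stand-in for the normalized area under the gap curve). All names are hypothetical, and the nearest-rank percentile is an approximation:

```python
import random

def stacked_metric(div_oprs, cmp_oprs, lo=55, hi=95):
    """Method 1 metric: mean gap (DIV %tile - CMP %tile) across the
    lo..hi percentile range -- the area under the gap curve, normalized
    by the width of the range, in discrete form."""
    div_s, cmp_s = sorted(div_oprs), sorted(cmp_oprs)
    def pct(vals, p):  # nearest-rank percentile, an approximation
        return vals[round(p / 100 * (len(vals) - 1))]
    gaps = [pct(div_s, p) - pct(cmp_s, p) for p in range(lo, hi + 1)]
    return sum(gaps) / len(gaps)

def simulate_divisions(veteran_oprs, rookie_oprs, divisions=8):
    """One simulated assignment: shuffle veterans and rookies separately
    (as the post says FIRST is rumored to do) and deal each group evenly
    into the divisions."""
    vets, rooks = list(veteran_oprs), list(rookie_oprs)
    random.shuffle(vets)
    random.shuffle(rooks)
    vn, rn = len(vets) // divisions, len(rooks) // divisions
    return [vets[d * vn:(d + 1) * vn] + rooks[d * rn:(d + 1) * rn]
            for d in range(divisions)]
```

Running simulate_divisions 2,500 times, scoring every resulting division with stacked_metric against the full CMP vector, and comparing Newton's score to that distribution would reproduce the once-every-N-years estimates.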
#40
Re: Comparison of DIV MAX OPRs
Quote:
I swear, I didn't look at Ether's answers before I did my simulation.

Dr. Joe J.

*Which, I thought, should be the better metric. Method 2 was more tailored to be stacked in the way that Newton was stacked, so of course it is more rarely observed.