
View Full Version : New FRC Stat: aWAR


Nathan Streeter
21-02-2014, 14:01
First, I'd like to recognize Dan Niemitalo for creating most of this spreadsheet for a statistic he invented, called Performance Index. Here is where he introduces it: http://www.chiefdelphi.com/forums/showthread.php?t=121979&highlight=performance+index. Thanks also go to Ed Law and Mark McLeod, as all the raw data in this spreadsheet comes from their spreadsheets.

My goal for aWAR is to be a season-long statistic that accurately ranks all FRC teams based on all-around performance... It is primarily based on the new standardized district point model, but also implements the tweaks mentioned in the bullets below (devised and implemented by Dan Niemitalo and me). aWAR stands for aggregate Wins Above Replacement and is based on the concept of WAR from baseball... Wins Above Replacement.

What is really unique about aWAR is that it is scaled to a useful range... Similar to the concept of a "Replacement-Level" found in WAR in baseball, 0 indicates a "replacement-level team" which we defined to be one that goes something like 5-7, is on the cusp of being picked in elims (hence replacement-level), and may or may not win an award. Based on this 0-point, it is then scaled so that the top teams usually get an aWAR of around 7 (12-0 record... 7 Wins Above Replacement). A team that has an aWAR of about 4 probably went about 9-3 at the regional/district level.

The changes made from the 2014 district point model include (a rough code sketch of the full calculation follows this list):
- Omission of the 1st and 2nd year team benefit. This is in no way because I dislike this element of the point system (I actually like it a lot!), but because it exists primarily to help younger teams get to the next level of competition. Since aWAR is just trying to rank teams based on skill, this element would be counter-productive.
- Includes a weight for OPR, primarily to differentiate among top-tier teams. The formula is 16 * (team's event OPR) / (max OPR of the season), making it comparable in significance to alliance selection.
- Pro-rates for 12 qual matches so that variation due to # of qual matches is reduced (thus a team that goes 6-2 in 8 qual matches would be treated as if it went 9-3 in 12 qual matches).
- Includes essentially 3 events: a team's best two regional/district events and the average of their performance at DCMP and CMP events.
- Multiplies points earned at DCMP and CMP by 1.7x so that mid-tier teams that do well at the regional/district level but struggle at higher levels aren't punished for having competed against stronger teams.
- Uses a weighted average of the (up to) 3 events, helping teams that have proved consistency by competing at a particular level over multiple events. This weight is: 1x for one event, 1.11x for two events, and 1.18x for three events. I, personally, really like how this balance works out in the rankings.
- Includes another 10 points for Einstein Finalist and 20 points for Einstein Winner (these are then multiplied by the DCMP/CMP 1.7x multiplier)
- Based on the concept of replacement-level, subtracts out 12 points (putting the replacement-level team at 0; teams above that level would have a positive aWAR while teams below that level would have a negative aWAR).
- Then scales the raw points to fit a max of about 7 by dividing the "subtotal" by 15.
- For multi-season aWAR, the aWAR from the past 4 seasons are averaged, with higher weight being put on more recent seasons (32%, 29%, 23%, 16%).
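
To make the bullets above concrete, here is a minimal Python sketch of how I read the calculation. The function names, the split into qualification points vs. other district points, and treating the DCMP/CMP average as a single third "event" are simplifications of mine, not the spreadsheet's actual formulas:

def event_points(qual_points, other_points, opr, max_season_opr,
                 qual_matches_played, cmp_level=False, einstein_bonus=0):
    """Adjusted points for one event, following the tweaks above (sketch)."""
    # Pro-rate qualification points to a 12-match schedule
    # (a 6-2 record in 8 matches is treated like 9-3 in 12).
    points = qual_points * 12 / qual_matches_played + other_points
    # OPR weight: the season's best robot earns 16 points here,
    # roughly the value of an alliance selection.
    points += 16 * opr / max_season_opr
    # Einstein Finalist (+10) or Winner (+20) bonus, if any.
    points += einstein_bonus
    # DCMP and CMP points get the 1.7x multiplier for the stronger field.
    if cmp_level:
        points *= 1.7
    return points

def season_awar(event_pts):
    """Season aWAR from up to three event scores (best two regionals/districts
    plus the DCMP/CMP average counted as one)."""
    # Consistency bonus for competing at multiple events.
    weight = {1: 1.0, 2: 1.11, 3: 1.18}[len(event_pts)]
    subtotal = weight * sum(event_pts) / len(event_pts)
    # Subtract the 12-point replacement level, then scale so the best
    # seasons land at an aWAR of about 7.
    return (subtotal - 12) / 15

def multi_season_awar(yearly_awar, weights=(0.32, 0.29, 0.23, 0.16)):
    """Four-season weighted average of aWAR, most recent season first."""
    return sum(w * a for w, a in zip(weights, yearly_awar))

# e.g. a team with two 60-point regionals: season_awar([60, 60]) gives about 3.6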

I hope to update this spreadsheet weekly to include 2014 data... I'll need to add the infrastructure to do this, and then bring in data from Ed Law's spreadsheet every week. I also hope to include rWAR (robot Wins Above Replacement), which would include only robot-performance-related data (i.e. few or no awards)... I doubt that will happen soon, though!

The spreadsheet should be attached... it's a massive 30+ MB though, so don't expect a speedy download! :-) There are three tabs that you're intended to use (or tinker with)... The "Team Lookup" can be used just to look up one team at a time. The "aWAR" tab has the aWAR calculations from the past 6 years along with the Weighted Average of aWAR as of 2013. Feel free to sort by anything and/or filter by states/regions/countries! Lastly, you can use the "Point System" tab to tweak the system in various ways and see what happens! If you're trying to really see the dynamics of the system, I would recommend picking a region and season you're familiar with and sorting... watch what happens at each level of the rankings (but especially the top and middle levels).

Please feel free to provide feedback... While I do generally like how aWAR is calculated currently, I would definitely be interested in improving it! If you find bugs or have improvements or recommendations, please mention them!

I tried putting it up in CD-Media under white papers, but that was over an hour ago with no result yet... hopefully that'll come through eventually. :-) So for now at least I started this thread and uploaded a copy of the spreadsheet here: https://app.box.com/s/l6f9tgciw929415om2z0.

Top-50 by 4-year Weighted Average of aWAR:
1 1114 7.08
2 2056 7.03
3 469 6.78
4 987 6.67
5 254 6.50
6 67 6.28
7 33 5.95
8 148 5.90
9 118 5.83
10 1986 5.75
11 359 5.66
12 1717 5.62
13 341 5.50
14 1676 5.40
15 111 5.33
16 16 5.30
17 1983 5.18
18 233 5.07
19 2054 5.03
20 610 4.94
21 1538 4.94
22 1918 4.90
23 1477 4.85
24 217 4.81
25 2169 4.81
26 1718 4.66
27 2826 4.61
28 27 4.60
29 1519 4.53
30 2415 4.52
31 525 4.51
32 2016 4.48
33 330 4.40
34 365 4.38
35 1218 4.37
36 234 4.31
37 2590 4.23
38 245 4.21
39 2337 4.20
40 11 4.19
41 1678 4.18
42 25 4.16
43 180 4.10
44 103 4.03
45 1241 3.99
46 973 3.99
47 624 3.96
48 2471 3.92
49 2474 3.80
50 3138 3.80

Top 50 from 2013, by aWAR:
1 469 7.41
2 1986 7.11
3 1538 7.04
4 1114 6.91
5 33 6.78
6 987 6.72
7 2056 6.63
8 2169 6.50
9 118 6.41
10 1983 6.35
11 148 6.32
12 610 6.26
13 862 6.17
14 254 6.12
15 1477 6.09
16 2590 6.08
17 2054 5.90
18 359 5.83
19 1241 5.75
20 67 5.69
21 11 5.64
22 3476 5.59
23 1718 5.53
24 868 5.49
25 1717 5.47
26 1678 5.46
27 1676 5.44
28 20 5.41
29 128 5.41
30 4814 5.29
31 2415 5.29
32 341 5.12
33 2474 5.07
34 1918 5.07
35 3539 5.06
36 245 5.02
37 2052 4.98
38 2468 4.89
39 2959 4.89
40 1519 4.84
41 948 4.83
42 2729 4.83
43 126 4.81
44 234 4.79
45 1806 4.76
46 3997 4.76
47 1334 4.75
48 27 4.75
49 3824 4.72
50 3990 4.67

Lil' Lavery
21-02-2014, 14:17
What methods did you use to come up with the various scaling factors applied?

e; Got it working

JamesCH95
21-02-2014, 14:21
Download worked fine for me.

XaulZan11
21-02-2014, 14:22
What methods did you use to come up with the various scaling factors applied?

Also, your link to the spreadsheet on box seems to not work. :(

The link worked for me, but I had to download the spreadsheet.

Very cool stat and database. Just by looking at a few teams' year to year aWAR, it seems to be a very good indicator of performance.

And, I'm glad you only did the past four seasons so you didn't include our 08 and 09 robot. :)

Nathan Streeter
21-02-2014, 14:30
What methods did you use to come up with the various scaling factors applied?

I didn't have any special methods for coming up with the scaling factors... just closely examining specific samples (NH and New England in particular years, occasionally the global set) to try to see how they impact the ranking from top to bottom. Some of the scaling factors I studied more (# of events; 1.7x for CMPs) than others (/15 to scale the top teams to have aWAR of about 7; *16 for OPR).

To all, please do investigate what you think of the various factors... I'd like to get them "right!"

Also, your link to the spreadsheet on box seems to not work. :(

I (and others) have downloaded it... let me know if it continues to not work.

And, I'm glad you only did the past four seasons so you didn't include our 08 and 09 robot. :)

Hah, I am too... our team got significantly better from 2009 to 2010! :-)

Regardless of personal preference, I think 4 years makes sense in a lot of ways... it's a cycle of HS students and provides significant history without going so far back that it stops being relevant. That said, I think 3 years may make more sense as a predictor, as the 4th year (that you're predicting) would still have many of the same students from the last two or three. Doing only 2 or 3 years would also help the rookie or up-and-coming teams who are hurt by the length of 4 or more year averages...

c.shu
21-02-2014, 14:58
Our decrease from 2011 to 2012 just goes to show what happens when you graduate off 75% of your team in one year. ;)

wilsonmw04
21-02-2014, 15:09
Going to go through it this weekend and see what shakes out. I find it humorous that the file that is 32 megs is called "slimdown" :-)

rsegrest
21-02-2014, 15:10
Afternoon,

We are very interested in looking at your spreadsheet but when I follow the link to Box there is a message that says:

We're sorry, this file could not be opened. It might be password protected. :confused:

I created a Box account and still cannot download the file. Suggestions please?

Joe Ross
21-02-2014, 15:22
Afternoon,

We are very interested in looking at your spreadsheet but when I follow the link to Box there is a message that says:

We're sorry, this file could not be opened. It might be password protected. :confused:

I created a Box account and still cannot download the file. Suggestions please?

Click the button that says download. Box is trying to preview the file and it's too large to preview, which is what the message should say.

DonRotolo
21-02-2014, 16:19
Nice, I like and appreciate your efforts to come up with a 'better way'. And, I am astounded that 1676 is ranked #14 over 4 years.

Considering your explanation (thanks for that), I don't understand how you would treat a 3rd event, such as a 3rd district or regional, before DCMP.

rsegrest
21-02-2014, 16:25
Click the button that says download. Box is trying to preview the file and it's too large to preview, which is what the message should say.

I already tried...both buttons the one underneath the file and the one in the top right corner...

Joseph Smith
21-02-2014, 16:27
I noticed an interesting thing in the team lookup form: there are several teams (just from the few that I checked) whose Championship stats (OPR, rank, win/loss record, etc.) are a direct duplicate of their State/Regional Championship stats. For example, team 469's Archimedes division stats are a direct duplicate of their MSC stats. The same is true for team 33, team 217, and team 245, but not for ALL MSC teams... I'm not sure if I see a pattern, but I'm really confused. Is this some element of the system that I'm not understanding?

I-DOG
21-02-2014, 16:31
Hands down, this is the coolest thing I've seen all season.

As someone who loves ranking everything, this rankings list is like candy to me.

XaulZan11
21-02-2014, 16:40
What is really unique about aWAR is that it is scaled to a useful range... Similar to the concept of a "Replacement-Level" found in WAR in baseball, 0 indicates a "replacement-level team" which we defined to be one that goes something like 5-7, is on the cusp of being picked in elims (hence replacement-level), and may or may not win an award. Based on this 0-point, it is then scaled so that the top teams usually get an aWAR of around 7 (12-0 record... 7 Wins Above Replacement). A team that has an aWAR of about 4 probably went about 9-3 at the regional/district level.

Unless I'm mistaken, couldn't the aWAR of 4 be a 4-8 team that won the Chairman's Award, though? I'd be curious to see the rankings based only on on-field performance.

IKE
21-02-2014, 16:44
I find the yearly ranking element interesting:
(32%, 29%, 23%, 16%).

On similar efforts, using a fraction raised to the exponent of years produces a neat result:
For instance, 1/2^year would be (1/2, 1/4, 1/8, 1/16, or 0.50, 0.25, 0.13, 0.06)... This weighting tends to favor last year's performance heavily with a quick roll-off on history. The neat thing about it is you can use all of history and still not hit 1.0. The bad thing about it is that anything past 4 years has very little impact.

Using (2/3)^year gets 0.667, 0.444, 0.296, 0.198, which normalized to a sum of 1 would be 0.417, 0.275, 0.185, 0.123. This algorithm tends to favor teams with longevity and consistent high performance, but can also keep a team in the spotlight possibly a year or two after their prime if they have a very strong legacy.
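
As a quick illustration (my own sketch in Python, not anything built into the spreadsheet), generating and normalizing those decayed weights:

def decay_weights(base, n_years):
    # Raw weights base^1 .. base^n for seasons 1..n years back (most recent first),
    # rescaled so they sum to 1.
    raw = [base ** year for year in range(1, n_years + 1)]
    total = sum(raw)
    return [round(w / total, 3) for w in raw]

print(decay_weights(2 / 3, 4))  # [0.415, 0.277, 0.185, 0.123], matching the figures above up to rounding
print(decay_weights(1 / 2, 4))  # [0.533, 0.267, 0.133, 0.067]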

A correlation study would be interesting to dig into.

rsegrest
21-02-2014, 17:11
Finally got it. Turned out it was my school firewall blocking the download with no popup warning.

Changed to a different computer not behind the firewall and it worked. Although one more thing I discovered for anyone else who may be having trouble, the page did not want to open in Chrome but opened immediately in Explorer...how's that for weird? :]

Christopher149
21-02-2014, 17:50
Any chance of saving a copy as .xls (2003 era)? It's too big for Google Docs, and I won't have access to newer Excel until Monday :P

Joe Ross
21-02-2014, 18:29
A correlation study would be interesting to dig into.

I did a quick linear regression using the aWAR data from 2008-2012 to predict aWAR in 2013. The R^2 was 0.50.

             Coefficients     Standard Error   t Stat          P-value
Intercept     0.3038132        0.052869809      5.746440231     1.23044E-08
2008         -0.000281711      0.032586229     -0.008645104     0.993104123
2009          0.070683871      0.031397538      2.251255225     0.02459942
2010          0.058408153      0.034406836      1.697574055     0.089918829
2011          0.250272535      0.035138483      7.12246277      2.10545E-12
2012          0.427316251      0.033320186     12.82454585      8.15505E-35


This shows that data from 5 years ago is not statistically significant, and 3-4 years old is minimally significant.
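
For anyone who wants to reproduce this kind of check, here's a minimal sketch (my guess at the workflow, not necessarily how it was actually run) using pandas and statsmodels; the CSV name and column layout are assumptions:

import pandas as pd
import statsmodels.api as sm

# Hypothetical CSV exported from the aWAR tab: one row per team,
# one column of aWAR per season, columns named by year.
awar = pd.read_csv("awar_by_season.csv")

X = sm.add_constant(awar[["2008", "2009", "2010", "2011", "2012"]])
model = sm.OLS(awar["2013"], X).fit()
print(model.summary())  # coefficients, standard errors, t stats, p-values, R^2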

Nathan Streeter
21-02-2014, 20:10
Going to go through it this weekend and see what shakes out. I find it humorous that the file that is 32 megs is called "slimdown" :-)

Thanks! I'd appreciate having more eyes on it! Yeah, the "slimdown" doesn't drop that much file size... it does drop the extra functionality to easily rank different events that Dan Niemitalo had made... I hadn't yet adapted all the formulas to work with aWAR so I dropped it for now. I would like to add it back in though, as it is really cool!

I noticed an interesting thing in the team lookup form: there are several teams (just from the few that I checked) whose Championship stats (OPR, rank, win/loss record, etc.) are a direct duplicate of their State/Regional Championship stats. For example, team 469's Archimedes division stats are a direct duplicate of their MSC stats. The same is true for team 33, team 217, and team 245, but not for ALL MSC teams... I'm not sure if I see a pattern, but I'm really confused. Is this some element of the system that I'm not understanding?

Thanks for pointing out the bug... there's probably an issue with the formulas somewhere. Feel free to look into it yourself... I will later too, though.

Hands down, this is the coolest thing I've seen all season.

As someone who loves ranking everything, this rankings list is like candy to me.

Thanks! :-)

Unless I'm mistaken, couldn't the aWAR of 4 be a 4-8 team that won the Chairman's Award, though? I'd be curious to see the rankings based only on on-field performance.

Yes, that could be... I don't think winning Chairman's would have *that* much of an impact... but I do agree that it would be interesting to have a robot-only version. That was my original goal but it got tabled... the intention was that it would be called rWAR, though. I'll add it in before long... it's an easy-enough addition.

I find the yearly ranking element interesting:
(32%, 29%, 23%, 16%).

On similar efforts, using a fraction raised to the exponent of years produces a neat result:
For instance, 1/2^year would be (1/2, 1/4, 1/8, 1/16, or 0.50, 0.25, 0.13, 0.06)... This weighting tends to favor last year's performance heavily with a quick roll-off on history. The neat thing about it is you can use all of history and still not hit 1.0. The bad thing about it is that anything past 4 years has very little impact.

Using (2/3)^year gets 0.667, 0.444, 0.296, 0.198, which normalized to a sum of 1 would be 0.417, 0.275, 0.185, 0.123. This algorithm tends to favor teams with longevity and consistent high performance, but can also keep a team in the spotlight possibly a year or two after their prime if they have a very strong legacy.

A correlation study would be interesting to dig into.

Agreed that the correlation study is worthwhile... thanks Joe Ross!

Finally got it. Turned out it was my school firewall blocking the download with no popup warning.

Changed to a different computer not behind the firewall and it worked. Although one more thing I discovered for anyone else who may be having trouble, the page did not want to open in Chrome but opened immediately in Explorer...how's that for weird? :]

It opened for me in Chrome... maybe it was just slow for you? Glad you ended up getting it downloaded!

Any chance of saving a copy as .xls (2003 era)? It's too big for Google Docs, and I won't have access to newer Excel until Monday :P

I can do that... maybe tomorrow.

I did a quick linear regression using the aWAR data from 2008-2012 to predict aWAR in 2013. The R^2 was 0.50.

             Coefficients     Standard Error   t Stat          P-value
Intercept     0.3038132        0.052869809      5.746440231     1.23044E-08
2008         -0.000281711      0.032586229     -0.008645104     0.993104123
2009          0.070683871      0.031397538      2.251255225     0.02459942
2010          0.058408153      0.034406836      1.697574055     0.089918829
2011          0.250272535      0.035138483      7.12246277      2.10545E-12
2012          0.427316251      0.033320186     12.82454585      8.15505E-35


This shows that data from 5 years ago is not statistically significant, and 3-4 years old is minimally significant.

This would probably argue for only doing the past three years... so maybe we should drop it to 3. I'd be curious if 2009 or 2010 are outliers though... given how much of a drop there is from 2011 to 2010. I have some more thoughts related to this correlation study... I'll comment more when I have time later.

Practice 'bot to finish... :-)

DampRobot
21-02-2014, 20:29
I did a quick linear regression using the aWAR data from 2008-2012 to predict aWAR in 2013. The R^2 was 0.50.

             Coefficients     Standard Error   t Stat          P-value
Intercept     0.3038132        0.052869809      5.746440231     1.23044E-08
2008         -0.000281711      0.032586229     -0.008645104     0.993104123
2009          0.070683871      0.031397538      2.251255225     0.02459942
2010          0.058408153      0.034406836      1.697574055     0.089918829
2011          0.250272535      0.035138483      7.12246277      2.10545E-12
2012          0.427316251      0.033320186     12.82454585      8.15505E-35


This shows that data from 5 years ago is not statistically significant, and 3-4 years old is minimally significant.

If I'm interpreting your P-value results correctly, 2008 and 2010 aWARs were really bad predictors of 2013 success, while 2009 aWARs were reasonably good? From the games, that's pretty much the opposite of what I'd expect. Or am I misinterpreting your stat program's outputs?

Orion.DeYoe
21-02-2014, 20:38
Something that really disappoints me about ranking statistics (including this one) is that they do not attempt to capture a team's improvement over the season.
A team that improves over the season should rank higher than a team that starts their season off really well and then drops off. This stat really needs some sort of weighting system for doing well at harder regionals/districts (and the inverse for weaker events).
I also really want to see how removing non-competitive awards would affect the rankings.

IKE
21-02-2014, 21:58
Something that really disappoints me about ranking statistics (including this one) is that they do not attempt to capture a team's improvement over the season.
A team that improves over the season should rank higher than a team that starts their season off really well and then drops off. This stat really needs some sort of weighting system for doing well at harder regionals/districts (and the inverse for weaker events).
I also really want to see how removing non-competitive awards would affect the rankings.

If you do OPR mapping of 1st event to second event to third event, you will find some really neat trends. For open scoring games (non-2011), the event to event increase is pretty impressive for the first 3 events for most teams with some "flat-lining" after that. Some of the teams that practice more will "flat-line" earlier.
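
A rough sketch (mine; the file name and columns are assumptions) of that event-to-event OPR mapping in Python:

import pandas as pd

# Assumed long-format table: one row per (team, event), with the event's
# chronological order for that team (1, 2, 3, ...) and the team's OPR there.
opr_long = pd.read_csv("opr_by_event.csv")  # columns: team, event_num, opr

# One column per Nth event of the season, one row per team.
wide = opr_long.pivot(index="team", columns="event_num", values="opr")

# Correlations between events, and the average OPR gain from event to event.
print(wide.corr())
print((wide[2] - wide[1]).mean(), (wide[3] - wide[2]).mean())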

Basel A
22-02-2014, 04:53
Thanks for pointing out the bug... there's probably an issue with the formulas somewhere. Feel free to look into it yourself... I will later too, though.

It's a very simple problem. Most cells in the 5th row (per event) for each year have the same formulas as the 4th. You just need to shift the column references.

DonRotolo
22-02-2014, 18:15
some sort of weighting system for doing well at harder regionals/districts (and the inverse for weaker events).

Of course, then you'd need a way to identify stronger and weaker events...

Joe Ross
22-02-2014, 18:34
If I'm interpreting your P-value results correctly, 2008 and 2010 aWARs were really bad predictors of 2013 success, while 2009 aWARs were reasonably good? From the games, that's pretty much the opposite of what I'd expect. Or am I misinterpreting your stat program's outputs?

2009 was statistically significant, but not necessarily what a normal person would call a good predictor on its own. It made up 5% of the prediction, which only predicted about 50% of the 2013 results.

Nemo
23-02-2014, 23:16
I've been looking through the aWAR numbers and having a lot of fun checking it out! Thanks for putting this out there!

Unless I'm mistaken, couldn't the aWAR of 4 be a 4-8 team that won the Chairman's Award, though? I'd be curious to see the rankings based only on on-field performance.

You can actually do that with the spreadsheet. Go to the Point System tab and change all of the award values to zero (or any other value you like), then check the new totals in the aWAR tab. The Team Lookup tab isn't going to update, because it's looking at a table of static values, but the aWAR tab will have the numbers you want.

Speaking more broadly, you can also change the point system in various other ways in that tab if you're into playing with numbers.

Any chance of saving a copy as .xls (2003 era)? It's too big for Google Docs, and I won't have access to newer Excel until Monday

The spreadsheet uses functions that aren't available in Excel 2003, so here are just the aWAR numbers for each team:
16379

I did a quick linear regression using the aWAR data from 2008-2012 to predict aWAR in 2013. The R^2 was 0.50.

Thanks for running those numbers! One of these summers I want to go more in depth into this type of thing.

Racer26
24-02-2014, 15:44
One thing I don't really like about the weighted multi-year average is that teams of age < length of your stat rolloff are crippled because they have an aWAR of zero for the years before they existed.

In that case, I think their weighted average should not include the years they didn't exist. (this affects newer strong teams like 4001, 4334, 4814, 4451, 3990, etc)

XaulZan11
24-02-2014, 16:06
One thing I don't really like about the weighted multi-year average is that teams of age < length of your stat rolloff are crippled because they have an aWAR of zero for the years before they existed.

In that case, I think their weighted average should not include the years they didn't exist. (this affects newer strong teams like 4001, 4334, 4814, 4451, 3990, etc)

I tend to agree with this, but they also have the advantage of winning Rookie All Star and Rookie Inspiration, which will boost their numbers. Teams that have won the Chairman's Award at the Championship are also disadvantaged.

Nemo
24-02-2014, 19:54
One thing I don't really like about the weighted multi-year average is that teams of age < length of your stat rolloff are crippled because they have an aWAR of zero for the years before they existed.

In that case, I think their weighted average should not include the years they didn't exist. (this affects newer strong teams like 4001, 4334, 4814, 4451, 3990, etc)

I'd love to figure out the best way to predict future competitive success based on the combination of years of experience, number of events per year, success in each event, consistency of success, awards, and OPR.

One gets into some tricky gray areas. For example, which team is likely to do better next year: the one that went quarterfinalist & finalist in two events, or the one that was a finalist in a single event? I've been thinking about how to organize all of the data to make it easier to study questions like that, but that's something that won't happen until the summer.

I tend to think that playing in 2+ events correlates with better success, and that teams in their 3rd year or more will tend to perform better than rookies and second-year teams. For that reason, I think it makes sense to give extra credit for the seasons that are 3 or 4 years ago, even if it's a small amount. But as to the exact amount it should be, I don't know. Needs to be studied.

Teams that have won the Chairman's Award at the Championship are also disadvantaged.

On the contrary, I looked them all up and gave them 10 points in each year after they won the Chairman's Award. And one can adjust that figure to whatever value one wants in the Point System worksheet. I guessed that a Hall of Fame team is more likely to do well than a non HoF team in a given year, so I think it is a good adjustment. It's one more thing that would be interesting to study in a regression analysis.

Christopher149
24-02-2014, 20:04
Looking at the numbers for 857, aWAR looks pretty accurate at first blush:


2008: 0.17 - poor robot, driving was its best quality
2009: 1.2 - we were pretty good, got into elims at one event
2010: -0.1 - that was such a horrible bot *shudders*
2011: 0.7 - we may have been a #7 captain, but we were a bit mediocre
2012: 0.2 - poor robot, didn't really get anywhere until near the end of 2nd district event, didn't move several matches
2013: 1.46 - fantastic robot, semis and quarters and MSC, scored every match I think

Anthony Galea
04-03-2014, 22:32
Sorry to be reviving an old thread, but will this be updated for 2014, like Ed Law's database?

Nathan Streeter
18-04-2014, 11:55
The aWAR spreadsheet is finally updated for 2014, including all events through the District Championships! The link to download the 110MB file is here... https://app.box.com/s/cg8ha68otofx3rqo7drk.

A couple cautions:
- There is no awards data for 2014. So, the 2014 aWAR is "robot only..." not the combination of both on-field performance and awards. As such, aWARs for 2014 are lower than usual, affecting the values for everyone.

- Since no one has competed at the global CMP yet, the teams from districts who competed at their DCMPs have an edge in the 2014 and aggregate stats... Both DCMPs and CMPs get a 1.7x multiplier for their higher level of play. The teams that have done well at a DCMP and gotten that 1.7x multiplier will be ranked well above comparable robots that haven't competed at a DCMP. To improve the relative rankings of the top robots, set the DCMP multiplier down to 1.1x for now, so that it's weighted comparably... It definitely will not be a perfect fix (DCMPs are still harder, so teams that didn't do exceptionally well at their DCMP will be very under-valued), but it will enable a reasonable comparison of the top ~50 or so teams.

Happy exploring!

(as a sidenote, I also posted a division comparison (http://www.chiefdelphi.com/forums/showpost.php?p=1375864&postcount=129) over in the "When Will Divisions Be Announced?" thread...)