Quote:
Originally Posted by IKE
How so? I have heard others say this, but I don't find any actual evidence of such.
Many predicted "stand-outs" didn't make it to the finals of their divisions, and there did not seem to be a lack of scoring capability capping scores. Lots of #1 alliances didn't win their division, which means there was enough depth to form competitive alliances.
I don't think the mix would have dramatically changed 2014 either.
Cut the depth of each field in half. Division winners this year were the 1, 2, 4, 1, 2, 1, 1, 2 seeds; 2014 had 1, 1, 5, 5, and in 2013 we had 1, 2, 3, 5. Fields were significantly less deep across the board, and your shot at winning Einstein depended on which division you ended up in. No offense to any of the teams on Curie, but nobody was even close to 1114+148; they had ~20-point cushions on the rest of the division through quals, QFs, SFs, and finals. That's partially attributable to the game dynamic, but Einstein turned into a less deep version of a division final.
Elim scores weren't bad, but that's mainly because 3rd robots didn't have much of a role on strong alliances (a failure of game design), and a few were picked purely for "cheesecakeability".
3467 was our "4th robot" in 2014 (the 29th of 32 teams picked), but they were a 2x District Winner and a 1st-round selection at NE DCMP. Most "3rd robots" this year didn't have those qualifications.
In 2012 I watched 1114, 2056, 4334 topple 67, 2826, 4143.
In 2013 I watched 33, 469, 1519 beat 987, 2415, 2959 (after having to go through 254, 2468, 11).
In 2014 our alliance (2590, 1625, 1477, 3467) squeezed past the MSC champs (33 and 27, along with 175 and 334).
I didn't see anything comparable to that this year. To some extent that depth was shifted up to Einstein, but I didn't see those deep, skilled "IRI-lite" alliances in the divisions. 1671 was the notable exception; I'm not sure how they slipped that far.