[EWCP] Presents TwentyFour -- An FRC Statistics Blog

As mentioned in the State of EWCPcast, we think that there is a huge amount of data floating around that most teams aren’t using to design better robots. We’ve decided to “do the math” and walk through some of that data to hopefully make it more accessible to teams. There are currently two posts: one is a basic introduction, the other looks at just how many points alliances scored in qualifying matches in 2012. We think the data might surprise you!

In the Beginning

Rebound Fumble: Aim Low

There is also a Facebook page and Twitter feed if you want to know when a new article is posted (should be about once a week). I’m pretty sure you can follow the Tumblr blog itself as well… but I am not a Tumblr expert.

I had a chance to read this last night and passed it on to 3929. Great info in here to use as an eye-opener. Nice work Ian & all.

I really enjoyed the post. There’s a good mix of images and text describing the data, and I was somewhat surprised by the extreme nature of some of the numbers. Four points as a median in TeleOp for a given alliance? Ouch.

I think it shows how deceptively challenging the game was last year to a good number of teams and reinforces the importance of recognizing your team’s capabilities and building your robot within your team’s means.

Ian,

This is great stuff. One of the largest mistakes made by teams, year after year, veteran and rookie alike, is to overestimate scores. Even with the power of hindsight, they still fail to do it correctly. I have a lot of data points from previous games that are just as “surprising” as what you’ve presented for Rebound Rumble. I look forward to seeing your future posts!

The main thing that I take away from this is that Michigan district competitions, at least the ones that we attended, are way above average, score-wise. Especially Troy. Just looking at our own match lineup, the lowest score I see is 9. The average (just looking at matches we played in) losing alliance score is 31.4 points, and the average winning alliance score is 52 points. (I’m not even going into MSC; that’s in an entirely different league.)

Great job and valuable data! Thanks for taking the time to do the analysis and to present the conclusions in such a clear and concise format.

Is it my imagination or do the game designers normally incorporate 3 scoring methods or strategies? And designing a robot to execute all 3 very well proves difficult because you run out of time (in the build season and during matches) or mass or both. We usually try to identify the game designer’s intentions and optimize strategy and design around 2 strategies and/or scoring methods.

Thanks Again!

First of all, thanks everyone for your kind words. We really hope teams can take this to heart and use it to ground their assumptions about 2013 and ultimately participate in more exciting matches. :)

Yep, I have a rough draft of an article along these lines. Michigan events are higher scoring, and in a more enjoyable way: teams are better across the board rather than just having an exceptional top tier. I haven’t looked into it in enough depth to determine how much of that is due to experience (playing two events) and how much is a stronger team base. (And I bet Jim Zondag already knows that answer. :D)

I was talking about this with my dad over the Christmas break and I think he summed it up quite well. “We always figured everyone else (including us) had learned so much from the previous year that everyone would be way better. And we never were.”

You can look at it yourself: http://bl.ocks.org/4431911. Ian and I worked off the same data set. These charts are rough cuts of what we hope to be able to do across the whole site in the future. We felt it’d be better to get the data out than to make it perfect first. Please ignore the bridge charts; the data is correct, but the display is just wrong and I’m still fighting with it.

You may also notice that our data is being pulled from S3 at https://s3.amazonaws.com/twentyfour.ewcp.org/FRC2012-twitter.csv. Feel free to download that and use it in your own analysis. We’re still working on cleaning up the 2010 and 2011 data. If you see glaring problems with it, let us know, and if you provide data we can update the data set.
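If anyone wants a quick starting point for playing with that file, here is a rough sketch of pulling it down and summarizing alliance scores with pandas. This is not the code behind the blog posts, and the column names (week, red_score, blue_score) are placeholders on my part; open the CSV and adjust them to whatever the real header says before running it.

```python
# Quick sketch for the FRC2012-twitter.csv data set.
# NOTE: the column names below (week, red_score, blue_score) are placeholders --
# check the actual header of the file and rename accordingly.
import pandas as pd

URL = "https://s3.amazonaws.com/twentyfour.ewcp.org/FRC2012-twitter.csv"
df = pd.read_csv(URL)

# Stack the red and blue scores so each row is one alliance's qual score.
alliances = pd.concat([
    df[["week", "red_score"]].rename(columns={"red_score": "score"}),
    df[["week", "blue_score"]].rename(columns={"blue_score": "score"}),
])

print("Median alliance score, 2012 quals:", alliances["score"].median())
print(alliances.groupby("week")["score"].median())  # how scoring moves week to week
```

From there, slicing the same table by event (assuming the file carries an event column) will show the Michigan effect discussed above.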

Thanks for sharing your good work and astute observations. I agree that most teams overestimate the level of play and scoring, and that a team will do very well by setting a modest goal and actually achieving it.

If you were to break the scoring statistics down by week (or even by first half and second half of the competition season), I think you would find a significant increase in the scores. This is particularly true in Michigan and MAR where teams play in at least two events and have a real chance to improve.

I disagree slightly with one of your conclusions, however. Although many teams over-reach technically and would do better with less, it is still good to reach a bit beyond your comfort zone. A functioning “simple” machine is best at early events, where it is playing against non-functioning “complex” machines that haven’t reached their full potential. The simple machines will eventually hit a plateau, though, and struggle to remain competitive. I think a team should understand their technical limitations and design within them, but you should always strive to be competitive against the “great” teams, not the pack. Some of us are fortunate enough to have an MSC to aspire to.

As you imply, overestimating scores is a result of not predicting how the game will actually be played out. If your early brainstorming/strategy sessions don’t result in a reasonably accurate version of reality, then it is hard to decide what functions you need to design into your robot. Week 1 of build season is the most important one by far.

Hamming

Yup, it’s attributed in the paragraph above.

We would like to explain the guiding principles with a quote from Richard Hamming, who determined that the atomic bomb being developed by the Manhattan Project would not light the atmosphere on fire.

“The purpose of computing is insight, not numbers.”

I like the quote underneath it as well (in the book).

The raw 2012 Twitter data for weeks 1 through 8 in Excel is posted here if anyone wants to play with it.

Absolutely. Scores do improve over the course of an event, and definitely week to week. This is a “time history” of all matches played in 2011 that illustrates that. I think the number of teams that reach that plateau is relatively small in practice, though. As Jim Zondag pointed out a year ago, roughly 25% of robots are worth less than zero points, and the 50% point isn’t a whole lot higher. If you count it up, you see that in 2011 the median robot had an OPR of less than 5 points!
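For anyone who hasn’t computed OPR before, it’s just a least-squares fit: one row per alliance per qual match, a 1 in each column for the teams on that alliance, and you solve for the per-team contributions that best explain the alliance scores. Here’s a toy sketch of that idea (the match data is made up, not the 2011 numbers above):

```python
# Toy illustration of how OPR is calculated: a least-squares fit of per-team
# contributions to alliance scores. The match data below is made up.
import numpy as np

matches = [
    # ([teams on the alliance], alliance score)
    ([1, 2, 3], 54),
    ([4, 5, 6], 61),
    ([1, 5, 3], 48),
    ([4, 2, 6], 39),
    ([1, 4, 6], 45),
    ([2, 5, 3], 58),
    ([1, 2, 5], 60),
    ([3, 4, 6], 42),
]

teams = sorted({t for alliance, _ in matches for t in alliance})
col = {t: i for i, t in enumerate(teams)}

A = np.zeros((len(matches), len(teams)))  # 1 where a team played on that alliance
b = np.array([score for _, score in matches], dtype=float)
for row, (alliance, _) in enumerate(matches):
    for t in alliance:
        A[row, col[t]] = 1.0

opr, *_ = np.linalg.lstsq(A, b, rcond=None)
for t in teams:
    print(f"team {t}: OPR {opr[col[t]]:.1f}")
```

Run that over a whole season of qualification matches and you get exactly the kind of distribution Jim was describing, including the teams whose fitted contribution comes out negative.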

In all fairness, my team in 2006 won a regional by that formula of aiming low and actually working. Sure, we shot from the ramp, but we were one of the few teams at the event who could reliably score our starting balls.

Our target audience is really the teams that are missing out on eliminations (roughly that 50%), hence the name of the blog. Of course, we also hope it is somewhat useful to everyone. We don’t want everyone to field a Dozer, but we hope our analysis pushes some teams who haven’t had much on-field success to emulate him.

Just a heads up guys, we’ve released another post talking about the exploits of Dozer (the little plow bot) over the years.

Dang it feels good to be a Dozer

Great article. You give Dozer credit for 10 points during Rebound Rumble…I assume that’s equating half a Co-op balance to 10 points (similar to what many did with OPR scores)?

That is correct. Co-op is an annoying game mechanic when you are trying to back out robot goodness from match data because it could do obnoxiously perverse things to team incentives. But we can at least give Dozer the 10 pts.

I like bits and pieces of both the original conclusions and this statement. I agree that teams should reach a bit outside their comfort zone (only those willing to fail will ever have great success), but to the original post’s point, I think many reach WAY too far outside their capabilities. And it is also incredibly important for teams to decide what their goal is…
Are you building:

  1. A Robot within your means
  2. A Robot that can do everything
  3. A Robot designed to win Matches
  4. A Robot designed to win Regionals
  5. A Robot designed to win Championships

Arguably these five points can be very different robots, and can be very different for each team. And in reality there are probably between 3 & 10 teams out of thousands that can do all five. I think the data presented shows us that very well. Each team needs to do an analysis of the numbers, the ways to score, and what they think they are capable of, and then figure out how to match that to what they want to do.

And another point that has been discussed before but I’ve only seen hinted at here, is that a HUGE factor in any of the “winning” strategies is driver time. Even an average robot with great drivers who have lots of practice and good strategy can very easily seed high and win events.

I want to thank EWCP for putting this blog together. I love statistics. My team is considered one of those that pushes the envelope as far as design goes and tries to be competitive at a championship level. I’ll admit we have completely missed the mark sometimes and bitten off way more than we could chew. Strategy alone can win you a lot of matches in FRC, and I think this blog more than anything else shows that.

Some thoughts on the 3 Day Robot.

http://twentyfour.ewcp.org/post/40074650393/thoughts-on-the-3-day-robot

As an aside, the Robot in 3 Days guys will be on the EWCPcast this Sunday at 9pm EST.