[EWCP] Presents TwentyFour -- An FRC Statistics Blog


Ian Curtis
04-01-2013, 10:22
As mentioned in the State of EWCPcast, we think that there is a huge amount of data floating around that most teams aren’t using to design better robots. We’ve decided to “do the math” and walk through some of that data to hopefully make it more accessible to teams. There are currently two posts: one is a basic introduction, the other looks at just how many points alliances scored in qualifying matches in 2012. We think the data might surprise you!

In the Beginning (http://twentyfour.ewcp.org/post/38349991132/in-the-beginning)

Rebound Fumble: Aim Low (http://twentyfour.ewcp.org/post/39402342932/rebound-fumble-aim-low)

There is also a Facebook (http://www.facebook.com/FrcPlowie) page & Twitter (https://twitter.com/frcplowie) feed if you want to know when a new article is posted (should be about once a week). I'm pretty sure you can follow the Tumblr blog itself as well... but I am not a Tumblr expert.

Akash Rastogi
04-01-2013, 10:37
I had a chance to read this last night and passed it onto 3929. Great info in here to use as an eye opener. Nice work Ian & all.

Ryan Dognaux
04-01-2013, 10:45
I really enjoyed the post. There's a good mix of images & text describing the data and I was somewhat surprised by the extreme nature of some of the numbers. 4 points as a median in TeleOp for a given alliance? Ouch.

I think it shows how deceptively challenging the game was last year to a good number of teams and reinforces the importance of recognizing your team's capabilities and building your robot within your team's means.

Karthik
04-01-2013, 11:23
Ian,

This is great stuff. One of the largest mistakes made by teams, year after year, veteran and rookie alike, is to overestimate scores. Even with the power of hindsight, they still fail to do it correctly. I have a lot of data points from previous games that are just as "surprising" as what you've presented for Rebound Rumble. I look forward to seeing your future posts!

Joseph Smith
04-01-2013, 12:08
The main thing that I take away from this is that Michigan district competitions, at least the ones that we attended, are way above average, score-wise. Especially Troy. Just looking at our own match lineup, the lowest score I see is 9. The average (just looking at matches we played in) losing alliance score is 31.4 points, and the average winning alliance score is 52 points. (I'm not even going into MSC- that's in an entirely different league.)

wireties
04-01-2013, 12:22
Great job and valuable data! Thanks for taking the time to do the analysis and to present the conclusions in such a clear and concise format.

Is it my imagination or do the game designers normally incorporate 3 scoring methods or strategies? And designing a robot to execute all 3 very well proves difficult because you run out of time (in the build season and during matches) or mass or both. We usually try to identify the game designer's intentions and optimize strategy and design around 2 strategies and/or scoring methods.

Thanks Again!

Ian Curtis
04-01-2013, 12:27
First of all, thanks everyone for your kind words. We really hope teams can take this to heart and use it to ground their assumptions about 2013 and ultimately participate in more exciting matches. :)

The main thing that I take away from this is that Michigan district competitions, at least the ones that we attended, are way above average, score-wise. Especially Troy. Just looking at our own match lineup, the lowest score I see is 9. The average (just looking at matches we played in) losing alliance score is 31.4 points, and the average winning alliance score is 52 points. (I'm not even going into MSC- that's in an entirely different league.)

Yep, I have a rough draft of an article along these lines. Michigan events are higher scoring, and in a more enjoyable way: teams are better across the board, as opposed to just having an exceptional top tier. I haven't looked into it in more depth to determine how much of that is due to experience (the two events) and how much is a stronger team base. (And I bet Jim Zondag already knows that answer :D)

Even with the power of hindsight, they still fail to do it correctly.

I was talking about this with my dad over the Christmas break and I think he summed it up quite well. "We always figured everyone else (including us) had learned so much from the previous year that everyone would be way better. And we never were."

Andrew Schreiber
04-01-2013, 12:45
The main thing that I take away from this is that Michigan district competitions, at least the ones that we attended, are way above average, score-wise. Especially Troy. Just looking at our own match lineup, the lowest score I see is 9. The average (just looking at matches we played in) losing alliance score is 31.4 points, and the average winning alliance score is 52 points. (I'm not even going into MSC- that's in an entirely different league.)

http://bl.ocks.org/4431911 You can look at it yourself. Ian and I worked off the same data set. These charts are rough cuts at what we hope to be able to do for all events on the site in the future. We felt it'd be better to get the data out than to make it perfect first. Please ignore the bridge charts; the data is correct but the display is just wrong and I'm still fighting with it.

You may also notice that our data is being pulled from S3 at https://s3.amazonaws.com/twentyfour.ewcp.org/FRC2012-twitter.csv Feel free to download that and use it in your own analysis. We're still working on cleaning the 2010 and 2011 data. If you see glaring problems with it, let us know, and if you provide data we can update the data set.
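
If you want a quick start in R, something along these lines will get you going. This is just a hedged sketch, not our analysis code, and the score column names are guesses -- check the header of the actual file first.

# Rough starter sketch; "red_final" and "blue_final" are hypothetical column names.
frc2012 <- read.csv("FRC2012-twitter.csv")   # after downloading the S3 file above
names(frc2012)                               # see which columns actually exist
scores <- c(frc2012$red_final, frc2012$blue_final)
quantile(scores, probs = c(0.25, 0.5, 0.75, 1), na.rm = TRUE)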

Wayne TenBrink
04-01-2013, 12:52
Thanks for sharing your good work and astute observations. I agree that most teams overestimate the level of play and scoring, and that a team will do very well by setting a modest goal and actually achieving it.

If you were to break the scoring statistics down by week (or even by first half and second half of the competition season), I think you would find a significant increase in the scores. This is particularly true in Michigan and MAR where teams play in at least two events and have a real chance to improve.

I disagree slightly with one of your conclusions, however. Although many teams over-reach technically and would do better with less, it is still good to reach a bit beyond your comfort zone. A functioning, "simple" machine is best at early events where it is playing against non-functioning "complex" machines that haven't reached their full potential. It will eventually reach a plateau and struggle to remain competitive. I think a team should understand their technical limitations and design within them, but you should always strive to be competitive against the "great" teams, not the pack. Some of us are fortunate enough to have an MSC to aspire to.

As you imply, overestimating scores is a result of not predicting how the game will actually be played out. If your early brainstorming/strategy sessions don't result in a reasonably accurate version of reality, then it is hard to decide what functions you need to design into your robot. Week 1 of build season is the most important one by far.

Ether
04-01-2013, 14:03
In the Beginning (http://twentyfour.ewcp.org/post/38349991132/in-the-beginning)

Hamming

Andrew Schreiber
04-01-2013, 14:21
Hamming

Yup, it's attributed in the paragraph above.

We would like to explain the guiding principles with a quote from Richard Hamming, who determined the atomic bomb being developed by the Manhattan Project would not light the atmosphere on fire.

“The purpose of computing is insight, not numbers.”

Ether
04-01-2013, 14:40
Yup, it's attributed in the paragraph above.

I like the quote underneath it as well (in the book).

Ether
04-01-2013, 14:46
In the Beginning (http://twentyfour.ewcp.org/post/38349991132/in-the-beginning)

The first several articles we have planned use the @FRCFMS twitter feed data. We know that this is not a perfect dataset – some regionals are not recorded, some matches get replayed, etc.



The raw 2012 Twitter data for weeks 1 through 8 in Excel is posted here (http://www.chiefdelphi.com/media/papers/2682) if anyone wants to play with it.

Ian Curtis
04-01-2013, 19:30
A functioning, "simple" machine is best at early events where it is playing against non-functioning "complex" machines that haven't reached their full potential. It will eventually reach a plateau and struggle to remain competitive. I think a team should understand their technical limitations and design within them, but you should always strive to be competitive against the "great" teams, not the pack. Some of us are fortunate enough to have an MSC to aspire to.

Absolutely. Scores do improve over the course of an event, and definitely week to week. This is a "time history" of all matches played in 2011 that illustrates that. (http://www.chiefdelphi.com/media/photos/37143) I think the number of teams that reach that plateau is relatively small in practice though. As Jim Zondag (http://www.chiefdelphi.com/forums/showpost.php?p=1092833&postcount=13) pointed out a year ago, roughly 25% of robots are worth less than zero points, and the 50% point doesn't move a whole lot higher. If you count it up, you see that in 2011 the median robot had an OPR of less than 5 points!
http://i.imgur.com/hCTYh.jpg
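
(For anyone who hasn't seen how OPR falls out of match data: each alliance score is modeled as the sum of its three teams' ratings, and the resulting system is solved by least squares. A toy R sketch with made-up numbers, not the real 2011 data:)

# Each row is one alliance's score; each column is a team. Toy numbers only.
A <- rbind(c(1, 1, 1, 0),   # alliance of teams 1, 2, 3 scored 60
           c(1, 1, 0, 1),   # alliance of teams 1, 2, 4 scored 70
           c(1, 0, 1, 1),   # alliance of teams 1, 3, 4 scored 80
           c(0, 1, 1, 1))   # alliance of teams 2, 3, 4 scored 90
scores <- c(60, 70, 80, 90)
qr.solve(A, scores)         # per-team ratings: 10 20 30 40 (a least-squares fit with real data)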

In all fairness, my team in 2006 won a regional by that formula of aiming low and actually working. Sure we shot from the ramp -- but we were one of the few teams at the event who could reliably score our starting balls.

Our target audience is really the teams that are missing out on eliminations (roughly that 50%), hence the name of the blog. Of course, we also hope it is somewhat useful to everyone. We don't want everyone to field a Dozer -- but we hope our analysis pushes some teams who haven't had much on the field success to emulate him.

Andrew Schreiber
07-01-2013, 15:37
Just a heads up guys, we've released another post talking about the exploits of Dozer (the little plow bot) over the years.

Dang it feels good to be a Dozer (http://twentyfour.ewcp.org/post/39532553438/dang-it-feels-good-to-be-a-dozer)

Jared Russell
07-01-2013, 15:53
Just a heads up guys, we've released another post talking about the exploits of Dozer (the little plow bot) over the years.

Dang it feels good to be a Dozer (http://twentyfour.ewcp.org/post/39532553438/dang-it-feels-good-to-be-a-dozer)

Great article. You give Dozer credit for 10 points during Rebound Rumble...I assume that's equating half a Co-op balance to 10 points (similar to what many did with OPR scores)?

Ian Curtis
08-01-2013, 00:25
Great article. You give Dozer credit for 10 points during Rebound Rumble...I assume that's equating half a Co-op balance to 10 points (similar to what many did with OPR scores)?

That is correct. Co-op is an annoying game mechanic when you are trying to back out robot goodness from match data because it could do obnoxiously perverse things to team incentives. But we can at least give Dozer the 10 pts.

Kims Robot
08-01-2013, 08:18
Although many teams over-reach technically and would do better with less, it is still good to reach a bit beyond your comfort zone. A functioning, "simple" machine is best at early events where it is playing against non-functioning "complex" machines that haven't reached their full potential. It will eventually reach a plateau and struggle to remain competitive. I think a team should understand their technical limitations and design within them, but you should always strive to be competitive against the "great" teams, not the pack.

I like bits and pieces of both the original conclusions and this statement. I agree that teams should reach a bit outside their comfort zone (only those willing to fail will ever have great success), but to the original post's point, I think many reach WAY too far outside their capabilities. And it is also incredibly important for teams to decide what their goal is...
Are you building:
1. A Robot within your means
2. A Robot that can do everything
3. A Robot designed to win Matches
4. A Robot designed to win Regionals
5. A Robot designed to win Championships
Arguably these five points can be very different robots, and can be very different for each team. And in reality there are probably between 3 & 10 teams out of thousands that can do all five. I think the data presented shows us that very well. Each team needs to do an analysis of the numbers, the ways to score, and what they think they are capable of, and then figure out how to match that to what they want to do.

And another point that has been discussed before but I've only seen hinted at here, is that a HUGE factor in any of the "winning" strategies is driver time. Even an average robot with great drivers who have lots of practice and good strategy can very easily seed high and win events.

techtiger1
08-01-2013, 09:44
I want to thank EWCP for putting this blog together. I love statistics. My team is considered one of those that pushes the envelope as far as design goes and tries to be competitive at a championship level. I'll admit we have completely missed the mark sometimes and bit off way more than we could chew. Strategy alone can win you a lot of matches in FRC, and I think this blog more than anything else shows that.

Andrew Schreiber
09-01-2013, 11:06
Some thoughts on the 3 Day Robot.

http://twentyfour.ewcp.org/post/40074650393/thoughts-on-the-3-day-robot

As an aside, the Robot in 3 Days guys will be on the EWCPcast this Sunday at 9pm EST.

Sandvich
09-01-2013, 18:53
These are very good posts, please keep it up.

Ian Curtis
10-01-2013, 14:50
Another entry -- Do Teams Get Better at Events? (http://twentyfour.ewcp.org/post/40189513155/do-teams-get-better-at-events) The answer is unsurprising, but the magnitude of the improvement and what teams get better at may come as a surprise.

Has anyone ever done scouting with the time value of points (http://en.wikipedia.org/wiki/Time_value_of_money)? I'm not sure the result would be worth the effort, but it is an interesting thought.

Nuttyman54
10-01-2013, 15:06
Another entry -- Do Teams Get Better at Events? (http://twentyfour.ewcp.org/post/40189513155/do-teams-get-better-at-events) The answer is unsurprising, but the magnitude of the improvement and what teams get better at may come as a surprise.

Has anyone ever done scouting with the time value of points (http://en.wikipedia.org/wiki/Time_value_of_money)? I'm not sure the result would be worth the effort, but it is an interesting thought.

Very interesting, I'm looking forward to the further results on the topic.

Regarding the time-value of points scouting, you are proposing to determine the "value" of points throughout the competition, and then modify a team's worth based on the "valued" points they score? Thus, a robot which consistently scores 3 points every match and does not improve or decline is worse at the end of the competition than at the start, because 3pts is worth less at the end?
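
In code terms, one naive way to write that down would be something like the following (the discount rate is completely arbitrary, just to illustrate the idea):

# Hypothetical "time value of points" discount, by analogy with PV = FV / (1 + r)^t.
time_value <- function(points, match_index, rate = 0.02) {
  points / (1 + rate)^match_index
}
time_value(3, match_index = c(1, 40, 80))   # a flat 3-point robot slowly "depreciates"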

It's an interesting theory. If you could get your hands on some hard scouting data from someone (maybe 1114?) for one of their events, you could test it out and see how much of an effect it has on the final results.

Andrew Schreiber
10-01-2013, 15:16
Very interesting, I'm looking forward to the further results on the topic.

Regarding the time-value of points scouting, you are proposing to determine the "value" of points throughout the competition, and then modify a team's worth based on the "valued" points they score? Thus, a robot which consistently scores 3 points every match and does not improve or decline is worse at the end of the competition than at the start, because 3pts is worth less at the end?

It's an interesting theory. If you could get your hands on some hard scouting data from someone (maybe 1114?) for one of their events, you could test it out and see how much of an effect it has on the final results.

If anyone is willing to send scouting data our way let us know and we'll see what we can do.

Just a heads up guys, don't expect this pace of updates all the time. We've got a lot of data we want to get out to teams while it can still be useful to them.

Jon Stratis
10-01-2013, 15:21
I'll see what sort of data we still have sitting around on computers... we've been using an excel-based scouting system for several years aimed at quantitative analysis of points scored by each individual robot for our events. If we have those spreadsheets somewhere, they might provide what you're looking for, at least for the Minnesota competitions.

IKE
10-01-2013, 15:56
Another entry -- Do Teams Get Better at Events? (http://twentyfour.ewcp.org/post/40189513155/do-teams-get-better-at-events) The answer is unsurprising, but the magnitude of the improvement and what teams get better at may come as a surprise.

Has anyone ever done scouting with the time value of points (http://en.wikipedia.org/wiki/Time_value_of_money)? I'm not sure the result would be worth the effort, but it is an interesting thought.

Very interesting. This seems to correlate well with the multiple-event model and the "practice makes perfect" mantra. I.e., the more you play, the better you get.

An interesting experiment to observe: take someone who does not play video games, and give them a game that needs some controller-level skill. Say racing; Mario Kart is particularly good. Have them keep practicing the same level. You will find a drastically different driver at the end of 1 hour of drive time...
This would be a really interesting experiment at the next Extra Life 24-hour video game-a-thon. My guess is the trends you see in that sort of experiment would correlate pretty well with amount of play time and scoring.

Akash Rastogi
10-01-2013, 16:05
Another entry -- Do Teams Get Better at Events? (http://twentyfour.ewcp.org/post/40189513155/do-teams-get-better-at-events) The answer is unsurprising, but the magnitude of the improvement and what teams get better at may come as a surprise.

Has anyone ever done scouting with the time value of points (http://en.wikipedia.org/wiki/Time_value_of_money)? I'm not sure the result would be worth the effort, but it is an interesting thought.

Nice another very interesting blog post, Ian!

Ian Curtis
10-01-2013, 16:09
Very interesting. This seems to correlate well with the multiple-event model and the "practice makes perfect" mantra. I.e., the more you play, the better you get.

An interesting experiment to observe: take someone who does not play video games, and give them a game that needs some controller-level skill. Say racing; Mario Kart is particularly good. Have them keep practicing the same level. You will find a drastically different driver at the end of 1 hour of drive time...
This would be a really interesting experiment at the next Extra Life 24-hour video game-a-thon. My guess is the trends you see in that sort of experiment would correlate pretty well with amount of play time and scoring.

That is actually how I initially phrased the article: that practice is super important since teams get so much better and you don't want to short-change your drivers. Someone pointed out, though, that perhaps that improvement was coming from good teams getting even better rather than from unpracticed teams improving.

After some cursory glances it looks like the bottom tier does not really improve over the course of the event (low scoring events do not see the same rise the "typical" event sees). Looking at some traditionally stronger regionals you see the "typical" upward trajectory, but I haven't gone through to prove to myself who drives that. I sincerely hope it is the majority of the field, but since scoring is so concentrated in the "elite" tier it is possible for maybe the top 20% or less to drag the event average up by themselves.

Nuttyman54
10-01-2013, 17:19
After some cursory glances it looks like the bottom tier does not really improve over the course of the event (low scoring events do not see the same rise the "typical" event sees). Looking at some traditionally stronger regionals you see the "typical" upward trajectory, but I haven't gone through to prove to myself who drives that. I sincerely hope it is the majority of the field, but since scoring is so concentrated in the "elite" tier it is possible for maybe the top 20% or less to drag the event average up by themselves.

There are several factors at work here. Certainly practice and play time will make anyone better, and teams have a chance to make changes and improvements to the robot over the course of the competition. But good teams automatically give themselves room for improvement.

A team whose robot has a fundamentally flawed mechanism that never works is not going to see much improvement over the event, because no matter how good their drivers get, the mechanism is their limitation. A better team that is working on accuracy might have more success as their drivers improve their aim. A great team who has good accuracy will improve as their drivers get faster and can make more shots.

The tricky part is that you're inherently focusing on offensive capabilities that scale. A robot that is only designed to play defense and hang for 10 pts this year won't see any "improvement" in this scheme, even if its mechanism works perfectly, because it can only ever score a maximum of 10 pts. Its defense, or how quickly it hangs, may improve drastically over the course of the event, but we don't have an accurate way to statistically evaluate that.

Ian Curtis
11-01-2013, 00:34
Very interesting. This seems to correlate well with the multiple-event model and the "practice makes perfect" mantra. I.e., the more you play, the better you get.

An interesting experiment to observe: take someone who does not play video games, and give them a game that needs some controller-level skill. Say racing; Mario Kart is particularly good. Have them keep practicing the same level. You will find a drastically different driver at the end of 1 hour of drive time...
This would be a really interesting experiment at the next Extra Life 24-hour video game-a-thon. My guess is the trends you see in that sort of experiment would correlate pretty well with amount of play time and scoring.

So, I kept thinking about this & Evan's comments and realized I was probably being too pessimistic. I'm not sure if any self-respecting statistician would ever use a rolling median, but I plotted one anyway, and it says rather optimistic things about the improvement of the 50th percentile over the course of the event. Based on the linear regression line, the median match starts at 12.6 points and rises to 21.2 over the course of the event, a roughly 70% improvement.

http://i.imgur.com/JDOGwl.jpg
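
(For anyone who would rather not fight Excel: the same idea in R is roughly the sketch below. I actually did this in Excel, so this is not my workbook -- the data frame and column names are placeholders.)

# Placeholder sketch: "quals" has one row per alliance-match with a "total" score
# column and a "match_order" column giving the order the matches were played in.
ordered <- quals[order(quals$match_order), ]
roll_med <- runmed(ordered$total, k = 51)     # base-R running median; window must be odd
trend <- lm(roll_med ~ seq_along(roll_med))   # linear trend through the rolling median
coef(trend)                                   # intercept and slope of the trend line
plot(roll_med, type = "l"); abline(trend, lty = 2)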

Then I plotted the evolution of the quartiles over the event and it turns out that the 25th percentile and 75th percentile also improve over the course of the event. Seeing as penalties remained basically constant I was expecting to see the bottom quartile stagnate -- instead they got twice as good! I'm going to look at all the data before opening my big fat mouth in the future. :o

(I'll clean up the plots and post it in a few days. I'm never sure if I love or hate Excel, because it usually has a function to do what I need that is fairly idiot-proof, but the plotting tools always assume what I don't want when trying to set up the graphs.)

DampRobot
11-01-2013, 02:19
Is there any interest in starting a poll thread to see what teams think their robot will be scoring, and then comparing it with actual match data? It might provide some illuminating new data about how much teams actually overestimate game play and their abilities.

TerryS
11-01-2013, 03:47
Thanks for taking the time to crunch this data and share your insights. We're trying to do a better job with strategic planning this year, so we'll definitely take your findings to heart. One goal that we've already set is to have a functioning robot early this year so that the programmers can tune autonomous scoring (even if it's not for 18 points). Glad to see that your findings backed up the importance of doing that.

Ian Curtis
11-01-2013, 14:39
Is there any interest in starting a poll thread to see what teams think their robot will be scoring, and then comparing it with actual match data? It might provide some illuminating new data about how much teams actually overestimate game play and their abilities.

The problem with that is that on CD you would only be sampling teams that post on CD, whereas when measuring gameplay we are measuring all the teams that show up at the event. I think a great idea would be to ask people that question in pit scouting and then actually record how many points they score in match scouting. I'd love to know what you find!

Anecdotally, in 2008 at BAE we asked teams how many hurdles they thought they would average, and they told us 3. The actual regional average was a little below 1 if I remember right. And the higher caliber the team, the closer their estimate was to their actual performance. I think the RhodeWarriors said 3 and actually got close to that, while your typical team would say 3 and average 1 or less!

(If anyone doesn't know what hurdling is, go on youtube and watch the 2008 game animation)

DampRobot
11-01-2013, 16:02
The problem with that is that on CD you would only be sampling teams that post on CD, whereas when measuring gameplay we are measuring all the teams that show up at the event. I think a great idea would be to ask people that question in pit scouting and then actually record how many points they score in match scouting. I'd love to know what you find!

Anecdotally, in 2008 at BAE we asked teams how many hurdles they thought they would average, and they told us 3. The actual regional average was a little below 1 if I remember right. And the higher caliber the team, the closer their estimate was to their actual performance. I think the RhodeWarriors said 3 and actually got close to that, while your typical team would say 3 and average 1 or less!

(If anyone doesn't know what hurdling is, go on youtube and watch the 2008 game animation)

I know, there would be bias in the sample. The study could have some compensation (i.e., multiplying the average guessed score by a factor of 0.6), or you could just look at the team numbers of those who voted, and compare that to their actual OPR.

If guessed score is plotted against OPR, one data point per team, it would be interesting to see if you could observe a trend. For example, almost all data points would be below f(x)=x, and I suspect most would be below f(x)=x/2. If your hypothesis is true, there would be a tighter correlation for higher x values.
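
In R that comparison is only a few lines -- the "teams" data frame below, with one guessed score and one OPR per team, is entirely hypothetical:

# Hypothetical comparison; "guess" and "opr" are placeholder column names.
plot(teams$guess, teams$opr,
     xlab = "Self-reported average score", ylab = "Actual OPR")
abline(0, 1, lty = 1)     # f(x) = x
abline(0, 0.5, lty = 2)   # f(x) = x/2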

IKE
11-01-2013, 16:15
...snip...
Anecdotally, in 2008 at BAE we asked teams how many hurdles they thought they would average, and they told us 3. The actual regional average was a little below 1 if I remember right. And the higher caliber the team, the closer their estimate was to their actual performance. I think the RhodeWarriors said 3 and actually got close to that, while your typical team would say 3 and average 1 or less!

(If anyone doesn't know what hurdling is, go on youtube and watch the 2008 game animation)

I often tell my scouts that teams aren't lying: they remember their best match and think that is what they can do (or do) every time. Sometimes you will also be told what the team believes it is capable of once it is fixed.

This is also true when you ask people to assess the performance of some of the top performers at an event. They typically remember the best and worst matches of top performers, but then assume their average is really close to that.

Andrew Schreiber
14-01-2013, 11:31
Hey guess what? Turns out scoring is hard.

http://twentyfour.ewcp.org/post/40524033103/week-1-in-review-scoring-points-is-hard

Anupam Goli
14-01-2013, 12:15
Great post. Although it's not so much new statistics as a summary of old ones, the point is very much valid. Week 1 is fuelled by excitement and adrenaline. Then the designs start colliding geometrically, don't work out mathematically, and some prototypes start to become duds. This is where it gets hard: week 2 and week 3, when you have to crank out a design.

Ian Curtis
14-01-2013, 14:54
Then the designs start colliding geometrically, don't work out mathematically, and some prototypes start to become duds. This is where it gets hard: week 2 and week 3, when you have to crank out a design.

Quoted for truth.

If any of you enjoy reading the blog, it would be awesome if you could pass it along to any teams in your vicinity that you think might learn something from it. Andrew put in Google Analytics, so we can see where people are reading it and it is mostly from veteran areas. We get tons of traffic from places with established teams (Manchester NH is actually #1 :eek:), but not as much from places where we know there are lots of younger teams that may have not had as much competitive success in the past. For us, those younger areas are where we hope we can have the most impact.

PayneTrain
14-01-2013, 15:17
Now is the time that separates the real deals from the other guys (actually there are a lot of such times, but this is the first big separation you see in the season). Those that choose to ignore the process of iterating on primary and secondary mechanisms, those that refuse to put their drive train on competition-quality carpet, those that fail to understand the statistical realities that have revealed themselves over many FRC seasons, those that do not use the limitless power of hindsight to guide their future thinking over the next two weeks will fail. Weeks 2 and 3 are when your strategy should literally materialize in the build room. Be it late-stage prototyping, drive base construction... anything. Don't wait for the robot to be built because the game is figured out in your head. Blind optimism invites corner cutting. Cut enough corners in any real-life application and the project collapses.

The teams that can do at least most of these things have bought themselves some extra time to daydream about victory. To those that ignore these things, time to get on the horse or wait until next year.

Andrew Schreiber
17-01-2013, 12:15
Ever curious about what alliances improve over the course of an event? We were... http://twentyfour.ewcp.org/post/40769815527/which-alliances-get-better-at-events

Bonus rowboats at the end

GBilletdeaux930
17-01-2013, 14:05
Hey guess what? Turns out scoring is hard.

http://twentyfour.ewcp.org/post/40524033103/week-1-in-review-scoring-points-is-hard

Since there are no comments on your blog, I'll do it here :)

In the last hanging game (2010) where a hang was worth 2/3 of the median match score, only 30% of qualifying matches ended with one or more robots hanging!

I'm curious if you have hanging data from another year where hanging was more viable. I feel that because hanging was worth about as much as a ball, that many teams didn't go for it, instead focusing on just being able to score one more ball.

I'm not trying to discredit, I've just been struggling myself with trying to put a worth onto hanging in relation to worth in shooting.

Ian Curtis
17-01-2013, 14:56
I'm curious if you have hanging data from another year where hanging was more viable. I feel that because hanging was worth about as much as a ball, that many teams didn't go for it, instead focusing on just being able to score one more ball.

I'm not trying to discredit, I've just been struggling myself with trying to put a worth onto hanging in relation to worth in shooting.

The most recent hanging game prior to 2010 is 2004. I probably could find match scores, but there really would be no way to back out hangs from that. (Hangs were worth more, 10 regular balls or 5 with a doubler)

I don't have the 2012 numbers handy, but in 2011 a successful Minibot was launched in 67% of matches. I would expect the percentage of successful (10 pt) climbs to be bounded by these two years. I would imagine the percentage of 20 and 30 pt hangs is somewhere around that of a triple balance (in terms of robots that can pull it off; since it is worth points in quals, you will see it more).

I also apparently messed up the first link in the article and accidentally linked to a local news story. Sorry about that, it has since been fixed to correctly link to an article about "A Rising Tide Lifts All Boats"

Chris is me
17-01-2013, 19:04
I'm curious if you have hanging data from another year where hanging was more viable. I feel that because hanging was worth about as much as a ball, that many teams didn't go for it, instead focusing on just being able to score one more ball.

It was worth as much as two balls. I actually asked a lot of teams why they chose not to hang in 2010, and by far the most common answer was "we can score two balls in 30 seconds, why bother?"

While tons of teams said that, and thought that, the mean alliance score that year was just over three points, for the entire alliance, for the entire match. Hanging was extremely underrated that year precisely because teams overreached and assumed scoring basketballs was easy. If the average team could score two balls every 30 seconds, more matches would end in double digit scores than single digit scores, which was obviously not the case at all.

Ian Curtis
18-01-2013, 17:51
I wanna be the very best (http://twentyfour.ewcp.org/post/40872342968/what-separates-okay-good-and-great)

The important difference between average and exceptional.

Andrew Schreiber
21-01-2013, 13:24
I live my life 15 seconds at a time.

http://twentyfour.ewcp.org/post/41118722347/i-love-autonomous-mode-how-were-doin-it-wrong

Nuttyman54
21-01-2013, 13:40
I live my life 15 seconds at a time.

http://twentyfour.ewcp.org/post/41118722347/i-love-autonomous-mode-how-were-doin-it-wrong

This definitely highlights the value of a reliable autonomous over a high-risk, high-reward one. Especially in Rebound Rumble, where there was only a 1 pt difference between the high and middle goals, I was surprised more teams didn't go for the middle when their top-goal accuracy just wasn't there.

After Friday at Sacramento, 971 was not nearly reliable enough in the high goal, so we switched to a mid-goal shot in auto, and went from <30% going in to >75% accuracy. We kept this through Sacramento and SVR.

PayneTrain
22-01-2013, 01:00
Some of the better auto modes in 2012 were ones that shamelessly drove straight to the middle goal and vomited the balls, but the elite few could obviously hit >3 in the top goal almost every time.

However, some could easily argue that shooting at targets resting on a plane perpendicular to the ground, like in 2006 and 2013, is easier than shooting into ones parallel to the ground. It's up to teams to test and collect their own data.
But isn't it always?

Andrew Schreiber
22-01-2013, 01:02
But isn't it always?

Yup ;)

Andrew Schreiber
24-01-2013, 11:45
So, we wanted to take some time to get to know some of the awesome teams out there, thanks @Team3313 for being awesome http://twentyfour.ewcp.org/post/41365781157/a-slice-of-your-time-team-3313-mechatronics

MrBydlon
24-01-2013, 13:55
This was awesome! Thank you to TwentyFour and Andrew for inviting our team to be interviewed.

The TwentyFour blog has been a constant discussion piece for our team this year and we genuinely appreciate the advice and statistical analysis of it.

Andrew Schreiber
31-01-2013, 11:53
When you have a hammer, all problems look like a nail; when you have a plow, ramming looks good… Play smart defense! http://twentyfour.ewcp.org/post/41951236448/d-fence-an-anecdote

pfreivald
31-01-2013, 12:55
Great blog, guys -- I hadn't seen it before!

Taylor
31-01-2013, 13:21
Defense is a terribly underserved, underdiscussed, and underdeveloped strategy. I would like to see further discussion on the merits and types of defense more than "it's not just ramming."

Peter Matteson
31-01-2013, 13:45
I have to say your info on the 2006 finals is quite wrong.

Although 25 had a low shooter you have to look at where the ball actually exited their frame perimeter. The trajectory was hard to block without a full 60" tall robot.

Also, on the Newton alliance that year, 25 was the decoy. 968 was actually the high-scoring robot throughout elims. If you took them out of play, 25's cross-field loading time limited the damage they could do against two or three solid shooters. For a good example of what I'm talking about, watch the Newton vs. Galileo match 1 (Einstein QF1-1). That was unfortunately the only one of the three matches in that series where we had three functional robots for the full match.

Also when it comes to undefeated streaks I believe that the only team to win the championship while undefeated at the championship event was 111 in 2009. Prior to that no team had made it past 14-0 before drawing a loss.

pfreivald
31-01-2013, 15:16
Defense is a terribly underserved, underdiscussed, and underdeveloped strategy. I would like to see further discussion on the merits and types of defense more than "it's not just ramming."

Playing piece denial, area denial, drawing penalties... There's a lot more to defense than just pushing and ramming, though some years it's well harder (and harder to parse) than others!

Ian Curtis
31-01-2013, 15:17
I have to say your info on the 2006 finals is quite wrong.

Although 25 had a low shooter you have to look at where the ball actually exited their frame perimeter. The trajectory was hard to block without a full 60" tall robot.

Also, on the Newton alliance that year, 25 was the decoy. 968 was actually the high-scoring robot throughout elims. If you took them out of play, 25's cross-field loading time limited the damage they could do against two or three solid shooters. For a good example of what I'm talking about, watch the Newton vs. Galileo match 1 (Einstein QF1-1). That was unfortunately the only one of the three matches in that series where we had three functional robots for the full match.

Also when it comes to undefeated streaks I believe that the only team to win the championship while undefeated at the championship event was 111 in 2009. Prior to that no team had made it past 14-0 before drawing a loss.

Thanks Peter, I doubt anyone can tell it better than someone who was there! We'll append/edit it as soon as one of us gets a chance. I still think 2006 is a great lesson in bad defense: as a halfway-decent rampcamper, the only team that just sat under the goal and rendered us completely useless was 67. We had plenty of people ramming us once we were already under the goal and firing, though...

Lil' Lavery
31-01-2013, 15:30
I think 2006 can also be a lesson in great defense. It just all depends on the teams involved. I recall more than one occasion where strong defensive play determined the course of a match. In fact, it was strong defense (and a preference towards human loading) that caused 1114, 1503, and 1680 to re-evaluate their loading strategies before IRI. Many of the more adept defenders were successful at trapping teams like 25 and the triplets into the corners beside the ramp while they human loaded. 25's freakish drivetrain and aggressive driving would help them get free of the trap, but it would eat plenty of time off the clock. The Simbots began loading up from the corner of the ramp, to give them an escape route, at IRI.

Defense is a terribly underserved, underdiscussed, and underdeveloped strategy. I would like to see further discussion on the merits and types of defense more than "it's not just ramming."
With ever-changing games, defense is an ever-changing subject. Different strategies and tactics work in each game, and even change dramatically based on what robots are involved. In some games, ramming can be particularly effective (it often was in 2006, assuming you weren't playing against a triplet). In others, it's practically useless. In some games, "starvation" strategies can crush an opposing alliance's score (2009 being the best example). In 2006, on the other hand, it was essentially impossible to starve the other alliance of balls and still win the match.

As a general rule of thumb, getting to where the offensive robot needs to be before they get there is usually a pretty good defensive strategy.

coldfusion1279
31-01-2013, 15:32
I have thought a lot about defense, particularly in contrast with other sports where in order to field a full team, you must perform on the offensive and defensive side of the ball.

The problem is FIRST games aren't always designed to allow for heavy defense. Low scoring matches are just plain boring. Penalties are assessed for egregious behavior, and defense tends to aggravate students that spent 6 weeks to see their robot throw a frisbee in the goal. You can say "they should have designed for that," but really, should they have to? (PM me if you would like to speak on this in particular). I think that's why the last two years have had "safety zones" to shoot from.

The article still makes a valid point, which is that when defense is played, it is often played pretty poorly, with penalties galore. Just get between your opponent and their destination, even with mecanums. Time is the enemy, and having to push your robot out of the way is the best way to prevent points from being scored.

Andrew Schreiber
31-01-2013, 15:45
The article still makes a valid point, which is that when defense is played, it is often played pretty poorly, with penalties galore. Just get between your opponent and their destination, even with mecanums. Time is the enemy, and having to push your robot out of the way is the best way to prevent points from being scored.

Oddly enough, a well driven mecanum can play very effective defense. (Oh Chris is gonna hate me for saying that) High caliber drivers often have a flow to their driving. If you can force them to break that flow, even for a second or two, you've reduced their performance. The practical impact of this is difficult to measure due to there being more variables but the goal should be breaking flow and pattern. The best defense is disruptive and forces your opponents to change their game plan while executing it.

Nuttyman54
31-01-2013, 17:02
A defense discussion is definitely merited here. Pushing is often not super effective defense, but what is?

Let's break it down to functional goals. What is the point of defense? To prevent the other team from doing the action that it wants to do (usually scoring). Sometimes this means keeping them away from a certain part of the field; sometimes it means preventing them from lining up. So let's turn that goal around to "What do we want them to be doing?", and the answer pops out quite plainly: anything else. Any time that they spend not doing what they want to be doing is wasted time. Any way that you can get them to do this is good defense.

A great example is from Einstein Semifinals in 2007. 190 was assigned to try to score a few tubes and then play defense on RAGE 173, leaving 987 as our primary tube scorer. At one point, RAGE and 190 both have a tube in their grippers, and RAGE turns away from the rack and pushes 190 halfway across the field. 190 chose not to push back, because our goal was to prevent them from scoring. Any time they chose to spend pushing us around was just as good as getting in their way.

Andrew is spot on that a well-driven mecanum can easily be a great defense robot, simply by getting in the way or being disruptive. High speed is also very helpful when defending, often more so than pushing power. If you can get there before they do, then you have the advantage. Team 71 at the Smokey Mountain Regional finals in 2011 drove their fast swerve in one of the best defensive performances I have ever seen, almost single-handedly shutting down their opponents.

PayneTrain
31-01-2013, 23:09
A reason you never see a strong, methodical defense develop is that defense is typically a tertiary objective for powerhouse teams and an emergency, all-or-nothing fallback for weaker teams.

Those that have the ability to play a multidimensional defense now choose to make as many offensive plays as possible, and those that "choose" to play defense are usually forced out of their intended strategy based on design choices that resulted in a robot too weak to play either side of the ball.

Another reason would be that when comparing the offense and defense in conventional sports to FRC, there are key differences in how you attack strategy. In a 3-on-3 ultimate frisbee game, there is one game piece, one goal for each team, and no other objective besides "put the game piece in your goal more than the other guys do for their goal" over a long lapse of time. In Ultimate Ascent, there are over 100 game pieces, four goals for each team, and a time- and resource-consuming secondary objective irrelevant to the primary one, and the whole thing must be played in a compressed period of time.

If you're developing a strategy that needs to be relevant in late-round qualifications at CMP, you don't think, "Man, we need to play some KILLER defense." Your strategy guys figure out how to score the most points with the least interference from the opposition. An FRC game has restrictions and key differences that make defense less than a red-headed stepchild in strategy discussions.

Good teams can build a robot to execute a desirable defensive strategy, but the parameters of an FRC game would coerce those teams to build a robot more geared towards putting up a lot of points efficiently.

pandamonium
01-02-2013, 11:35
I am aware of the data that you have discussed and the typically low scores, but I have a gut feeling that this year may be different. I have been in FIRST for many years and I have seen so many more successful, accurate prototypes than ever before. The dynamics of discs are quite different than balls, and these goals are quite large. The math from past years suggests that if a robot can score 20 points reliably it will be quite good. I just don't know, though; I could see a simple Robot in 3 Days-based robot capable of 40 points consistently.

Anupam Goli
01-02-2013, 12:31
I am aware of the data that you have discussed and the typically low scores, but I have a gut feeling that this year may be different. I have been in FIRST for many years and I have seen so many more successful, accurate prototypes than ever before. The dynamics of discs are quite different than balls, and these goals are quite large. The math from past years suggests that if a robot can score 20 points reliably it will be quite good. I just don't know, though; I could see a simple Robot in 3 Days-based robot capable of 40 points consistently.

Resources like Robot in 3 Days definitely will help bring up the level of competition, but while I've seen many shooters, I've yet to see an efficient intake, storage, and feeder. Granted, many powerhouse teams and others are not showing those (since these are obviously the toughest parts of designing this year's robot). Teams will definitely score more points, but the distribution of points will be the same. Those teams that iterate and practice will still be far above the ones that don't. The "low score" threshold will be raised this year, but so will the mid-range and top-tier scores.

EricLeifermann
01-02-2013, 12:36
Resources like Robot in 3 Days definitely will help bring up the level of competition, but while I've seen many shooters, I've yet to see an efficient intake, storage, and feeder. Granted, many powerhouse teams and others are not showing those (since these are obviously the toughest parts of designing this year's robot). Teams will definitely score more points, but the distribution of points will be the same. Those teams that iterate and practice will still be far above the ones that don't. The "low score" threshold will be raised this year, but so will the mid-range and top-tier scores.


100% agree! Aside from a drive base, a shooter is the easiest thing to do. Acquiring and delivering the discs to your shooter is the difficult part.

Climbing is just insanely tough.

pandamonium
01-02-2013, 13:37
I see little value in floor intake outside of auto mode though. And storage is a relatively easy obstacle in the big picture. The Robot in 3 Days looks to be capable of 18 in auto and 5 runs in teleop, so 62 if 100 percent in the top, and add 10 for the hang; 68 points is definitely feasible without climbing above level one and without collecting off of the floor.

Peter Matteson
01-02-2013, 14:21
I see little value in floor intake outside of auto mode though. And storage is a relatively easy obstacle in the big picture. The Robot in 3 Days looks to be capable of 18 in auto and 5 runs in teleop, so 62 if 100 percent in the top, and add 10 for the hang; 68 points is definitely feasible without climbing above level one and without collecting off of the floor.

That is a completely unrealistic expectation.

Top level teams would fight to get 5 teleop runs across the field. Look at load time, drive time, and how long it takes to aim. I think you will find you can't do this on an empty field let alone one that has 5 other robots.
This is like the teams in 2011 that could hang 5 tubes every match if you talked to them but really only hung one or 2. Their belief was what they did on an empty practice field was the same as what they could do in a match.

Also, assuming 100% accuracy is ridiculous. The first shot will rarely be on target unless you're using a very good auto-targeting system. History (i.e., this blog) shows 60%-80% accuracy would be a high-end team.

I think you are drastically overestimating what teams will really be able to do in 2 minutes. What you described is a PERFECT match.

A more realistic look would be 18 in auto and 3 runs across the field at 66% accuracy resulting in 36 pts. Just remember most teams won't even be able to do that.

coldfusion1279
01-02-2013, 14:31
I see little value in floor intake outside of auto mode though. And storage is a relatively easy obstacle in the big picture. The Robot in 3 Days looks to be capable of 18 in auto and 5 runs in teleop, so 62 if 100 percent in the top, and add 10 for the hang; 68 points is definitely feasible without climbing above level one and without collecting off of the floor.

I agree with Peter. I also think that a whole class of robot has been overlooked: one which picks up from the floor efficiently and can "shovel" frisbees a few feet in the air regardless of orientation. I foresee many matches with frisbees all over the floor near the goals in the last 45 seconds. A clean-up shoveler can pick up 4 after all the robots have cleared out to go hang, raise a lift, and push them into the 2- or 3-pointer with ease. In the last 10 seconds (with a good driver), go grab the bar for 10 points.

I really wanted to build one of these this year. I hope I get to see a good one.

KrazyCarl92
01-02-2013, 15:37
Floor pick up is an interesting debate this year. We decided that a reliable floor pickup would make a potential 24-point auto extremely easy, and still leave the possibility for improvement. So that's 6 points*robot accuracy advantage for floor pick up. We also estimated that in a match where the floor was littered with discs, we could expect to score an average of only 5 points*robot accuracy more in teleop with a floor pick up than without. As you get to eliminations or championship, you may expect to see fewer discs on the floor due to teams hitting their shots at a higher accuracy.

However, you could either send your robot or an alliance partner to the feeder station to send 12-20 discs down to your end at the beginning of the match so as to litter the floor with discs on purpose and very quickly. So if you have 2 otherwise equal robots: 1 with good floor pickup, 1 without, I expect to see a maximum score difference of 11 points, barring any crazy 5-7 disc autos (okay, 5 isn't that crazy). That makes a good floor pickup approximately equal to an additional climbing level, if used strategically.
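
Spelled out as back-of-envelope arithmetic (the accuracy value is whatever you believe your robot's is; 100% just gives the ceiling):

accuracy  <- 1.0              # plug in your own estimate
auto_gain <- 6 * accuracy     # extra auto points credited to floor pickup
tele_gain <- 5 * accuracy     # extra teleop points credited to floor pickup
auto_gain + tele_gain         # ~11 points at best, roughly one extra climb level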

pandamonium
01-02-2013, 16:37
All of my FIRST experience and the data certainly agree with you. It all depends how far a trip is. If some robots are shooting full field they can do more than 4 trips.


If there are 5 teams at a regional scoring 70 or more points each match and 20 teams scoring 3 or fewer points, there will be a ton of frustrated people. If you guys are right I foresee a larger spread than we have seen in past years. There will be much more separation between the classes of teams. If a match is 100 to 6, how much fun will it be to watch?

Chris is me
01-02-2013, 18:26
I think if you aren't doing the extended autonomous, while a floor intake isn't worthless, it's not worth spending design time on when you could be working on your hanger or tuning your shooter or practicing.

The extended autonomous is REALLY worth it in terms of seeding, but some of us know our software limitations and would rather focus on what we can perfect mechanically.

Jibri Wright
02-02-2013, 17:20
Michigan regionals have a lot of good teams. We get Wildstang, Bombsquad, Winnovation, Team Hammond, Techno Kats, and a lot of other teams that score really well. Not only that, but teams around here actually help each other out a lot surprisingly. That plus since the scores get so high every year, a lot of teams fixate on getting their robot to do autonomous really well (you should see the autonomous scores, they are always high and really close) and doing really well in the end game just to get picked for elimination round. The top 8 teams at the regionals normally can do everything.

EricLeifermann
02-02-2013, 17:31
Michigan regionals have a lot of good teams. We get Wildstang, Bombsquad, Winnovation, Team Hammond, Techno Kats, and a lot of other teams that score really well. Not only that, but teams around here actually help each other out a lot surprisingly. That plus since the scores get so high every year, a lot of teams fixate on getting their robot to do autonomous really well (you should see the autonomous scores, they are always high and really close) and doing really well in the end game just to get picked for elimination round. The top 8 teams at the regionals normally can do everything.

How are non Michigan teams competing in Michigan? Michigan is a district state...

MagiChau
02-02-2013, 18:37
How are non Michigan teams competing in Michigan? Michigan is a district state...

I assume he meant to say Midwest ;)

pwnageNick
02-02-2013, 19:06
How are non Michigan teams competing in Michigan? Michigan is a district state...

He definitely meant Midwest Regional.

While some of the things he said are true, there still seems to be a large gap in points scored by robots. More than at other regionals, I think, but that's just my $0.02.

-Nick

Jibri Wright
02-02-2013, 19:34
Lol ya I meant Midwest

Jibri Wright
02-02-2013, 20:27
Michigan regionals have a lot of good teams. We get Wildstang, Bombsquad, Winnovation, Team Hammond, Techno Kats, and a lot of other teams that score really well. Not only that, but teams around here actually help each other out a lot surprisingly. That plus since the scores get so high every year, a lot of teams fixate on getting their robot to do autonomous really well (you should see the autonomous scores, they are always high and really close) and doing really well in the end game just to get picked for elimination round. The top 8 teams at the regionals normally can do everything.

The reason I say this stuff is because last year, our robot was in the 70th percentile according to EWCP, just not at our regionals. Our robot averaged 12 points in autonomous and 6 points in teleop. We also balanced on the bridge almost every game. Even so, we only won about 60 percent of our games at both the regional that was held in Cincinnati and the one in Chicago. We came in 18th place at one of them and 21st at the other. We were chosen at one of them to be on an elimination alliance. That is the thing about these regionals up here: almost all of the robots fit the criteria to be in the 70th percentile according to this. For the ones that don't, their teammates are good enough to carry them. It got to the point where we actually placed lower than a box on wheels at one of the regionals. Honest story.

Andrew Schreiber
02-02-2013, 21:03
The reason I say this stuff is because last year, our robot was in the 70th percentile according to EWCP, just not at our regionals. Our robot averaged 12 points in autonomous and 6 points in teleop. We also balanced on the bridge almost every game. Even so, we only won about 60 percent of our games at both the regional that was held in Cincinnati and the one in Chicago. We came in 18th place at one of them and 21st at the other. We were chosen at one of them to be on an elimination alliance. That is the thing about these regionals up here: almost all of the robots fit the criteria to be in the 70th percentile according to this. For the ones that don't, their teammates are good enough to carry them. It got to the point where we actually placed lower than a box on wheels at one of the regionals. Honest story.

You're pointing out one of the biggest flaws in our statistics: they are drawn from all over. What impact does a specific region have on our numbers? Let's take a look at just Midwest (IL):

I filtered the qualification data down to just IL and then looked at the quartiles of the teleop scores:

25th - .25
50th - 3
75th - 9
100th - 30

How'd it compare with the rest of the world though? Fun story, your bottom tier is a little higher but the rest are right on par. Basically, the numbers don't really back up what you're saying about the teams being better up there.

And before you guys complain that Midwest skewed the numbers and that's why they line up: Midwest had 89 matches, while our qualification data set is 4883 matches. (We remove the qualification-only events like MAR/MSC/CMP.)

This analysis was all done using the quantile command in R. I can provide code if requested.
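The gist, if anyone wants to reproduce it, is only a few lines of base R. This is a minimal sketch, not the actual script; the file name and column names below are placeholders for however you have the data stored:

# Minimal sketch of the quartile breakdown, assuming a CSV of
# per-alliance qualification scores with hypothetical columns
# event_state and teleop_score.
scores <- read.csv("quals_2012.csv")

il <- subset(scores, event_state == "IL")
quantile(il$teleop_score)       # 0th/25th/50th/75th/100th percentiles for IL

quantile(scores$teleop_score)   # same breakdown for the full data set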

Ether
02-02-2013, 21:11
This analysis was all done using the quantile command in R.

Do you use R regularly in your work? I've never used it, but it's been on my bucket list for about 5 years now.

Andrew Schreiber
02-02-2013, 21:17
Do you use R regularly in your work? I've never used it, but it's been on my bucket list for about 5 years now.

I don't know what Ian uses for his analysis, but pretty much any time I need to do any sort of analysis on a data set, I use R. I've found it's fairly easy to use if you think of everything as a set. I like using RStudio rather than just the command line, though; if you're interested in R, I'd suggest starting that way. It is a full language, but I generally prefer to work in a language I know a little better (Python/Ruby) if I'm doing any sort of logic.

For some of the experiments I'm doing with match prediction, my current workflow is to use R to filter the set down to what I want, export it as a CSV file, then process it in Python. After that I'll process the output in R to see if my model is decent.
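The R half of that loop is nothing fancy. As a rough sketch (the file and column names here are made up for illustration, not the real data set):

# Filter in R, hand the subset to Python as a CSV.
matches <- read.csv("quals_2012.csv")
qual_only <- subset(matches, comp_level == "qm")
write.csv(qual_only, "filtered_matches.csv", row.names = FALSE)

# ...Python reads filtered_matches.csv, fits the model,
# and writes predictions.csv...

# Pull the predictions back into R to sanity check the model.
pred <- read.csv("predictions.csv")
summary(pred$actual - pred$predicted)   # residuals should hover around 0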

Jibri Wright
02-02-2013, 22:05
You're pointing out one of the biggest flaws in our statistics: they are drawn from all over. What impact does a specific region have on our numbers? Let's take a look at just Midwest (IL):

I filtered the qualification data down to just IL and then looked at the quartiles of the teleop scores:

25th - .25
50th - 3
75th - 9
100th - 30

How'd it compare with the rest of the world though? Fun story, your bottom tier is a little higher but the rest are right on par. Basically, the numbers don't really back up what you're saying about the teams being better up there.

And before you guys complain that Midwest skewed the numbers and that's why they line up: Midwest had 89 matches, while our qualification data set is 4883 matches. (We remove the qualification-only events like MAR/MSC/CMP.)

This analysis was all done using the quantile command in R. I can provide code if requested.

A lot of the points were scored in autonomous. Can you tell me how those numbers break down? Not only that, a lot of alliances did score about 9 points in teleop. A lot of them even scored a little more than that, like us.

Andrew Schreiber
02-02-2013, 22:07
A lot of the points were scored in autonomous. Can you tell me how those numbers break down?
IL - 25th: 0.00, 50th: 6.00, 75th: 11.75, 100th: 24.00
All - 0th: 0, 25th: 0, 50th: 6, 75th: 12, 100th: 46

EDIT:
And to address your edit...
No, 25% of Alliances scored 9 or more points in teleop.

Lil' Lavery
02-02-2013, 23:21
The reason I say this stuff is because last year, our robot was in the 70th percentile according to EWCP, just not at our regionals. Our robot averaged 12 points in autonomous and 6 points in teleop. We also balanced on the bridge almost every game. Even so, we only won about 60 percent of our games at both the regional held in Cincinnati and the one in Chicago. We came in 18th place at one of them and 21st at the other. We were chosen at one of them to be on an elimination alliance. That is the thing about these regionals up here: almost all of the robots fit the criteria to be in the 70th percentile according to this. For the ones that don't, their teammates are good enough to carry them. It got to the point where we actually placed lower than a box on wheels at one of the regionals. Honest story.

You averaged 12 points in autonomous? What average are you using? If that's your mean autonomous score, that means you either picked up additional balls from one of the bridges in autonomous or never missed a single shot at the top basket. If it's your median or mode autonomous score, perhaps it's more plausible.

Jibri Wright
03-02-2013, 06:41
IL - 25th: 0.00, 50th: 6.00, 75th: 11.75, 100th: 24.00
All - 0th: 0, 25th: 0, 50th: 6, 75th: 12, 100th: 46

EDIT:
And to address your edit...
No, 25% of Alliances scored 9 or more points in teleop.

Oh ok, thanks. Numbers normally don't lie.

Littleboy
03-02-2013, 08:03
I am just curious, but can you get the same numbers for MI (including MSC)?
Thanks

Andrew Schreiber
03-02-2013, 10:54
Oh ok, thanks. Numbers normally don't lie.

Nope, just the people with them.

Andrew Schreiber
21-08-2013, 13:07
NecroThreading this just to keep everything contained.

After a vacation TwentyFour has a new post:

http://twentyfour.ewcp.org/post/58924335899/what-time-is-it

We discuss the impact smart defense can have on both the offensive robot's and the defensive robot's point contributions. Admittedly, it's a narrow example, but the concepts can be expanded fairly easily.

Lil' Lavery
21-08-2013, 17:13
*And, for the record, Cycles is one of those metrics like shots on goal in hockey. A cycle where you spray your shots wide the minute you cross the mid field line is akin to that lazy bounce off the boards that the goalie leisurely deflects to a player. It’s just padding numbers. *


http://www.youtube.com/watch?v=wxMEH_eggP4
:cool:

rsisk
21-08-2013, 18:52
Quote:
*And, for the record, Cycles is one of those metrics like shots on goal in hockey. A cycle where you spray your shots wide the minute you cross the mid field line is akin to that lazy bounce off the boards that the goalie leisurely deflects to a player. It’s just padding numbers. *

Plowie does have a way with words :)

DampRobot
21-08-2013, 21:32
First of all, I really enjoy reading these posts. They're informative and fun, and I hope the blog keeps going next (this?) season.

I do have to disagree with a few points in your analysis though, specifically regarding the "Team Plowie" match. I completely agree that often slowing down the top cycler on the other alliance just a little bit will cause them to drop a cycle and score significantly less. However, trying to do this yourself will often slow down your scoring much more than it slows down theirs. You touched on this, but it isn't effective defense because you're likely spending more time defending them than you're actually taking away from them. You usually have to wait around, then intercept them, and maybe even chase them for a little bit. That might make them drop a cycle, but it will certainly leave you with less time to score.

I'd argue that instead, you should task one of your alliance partners with, if nothing else, just camping out behind the pyramid. That should make the other alliance's high-cycling robot drop a cycle, and it won't impact your scoring at all (I'm assuming this team would contribute <9 points in teleop anyway). If you're your alliance's primary scorer, you should focus on scoring. Other people can play defense and improve the score differential much more.

Both to illustrate my point and as a point of general interest, let me talk to you about a match, specifically this (http://www.thebluealliance.com/match/2013casj_qm50) match. We were with 1868 and 766 against an alliance where 971 was the primary scorer. We knew that 971 was the better robot both on paper and in the real world, so our alliance would have to defend against them. With 766 (an above average cycler at SVR) playing defense against 971, we severely restricted their ability to move around the field and score, while we scored with help from 1868. After autonomous, the score was 58-20 Red. By restricting their main scorer and still focusing on scoring, the score was 69-78 Blue by the end of teleop. (We got 3 robots up and they got none, for a final score of 108-69).

Playing defense in a tough match is always a very smart move. But, you still have to focus on outscoring the other alliance, not just shutting them down.

Andrew Lawrence
21-08-2013, 21:47
Both to illustrate my point and as a point of general interest, let me talk to you about a match, specifically this (http://www.thebluealliance.com/match/2013casj_qm50) match. We were with 1868 and 766 against an alliance where 971 was the primary scorer. We knew that 971 was the better robot both on paper and in the real world, so our alliance would have to defend against them. With 766 (an above average cycler at SVR) playing defense against 971, we severely restricted their ability to move around the field and score, while we scored with help from 1868. After autonomous, the score was 58-20 Red. By restricting their main scorer and still focusing on scoring, the score was 69-78 Blue by the end of teleop. (We got 3 robots up and they got none, for a final score of 108-69).

While your point is valid, I don't think the score was so low because you had someone defending 971. The other two bots on their alliance were dead, so a victory in a 3v1 match is going to be a likely event. Was 971 capable of scoring more points? Yeah. But the defense didn't win the match.

Chris is me
21-08-2013, 21:59
The point of the post certainly wasn't to say "if you can score, playing defense is never worth it". That's the wrong lesson to take away here. At the end of the post, the limitations of the specific scenario analyzed are touched on, not the least of which is that you absolutely can't disregard alliance partners in a cost-benefit analysis.

Andrew Schreiber
21-08-2013, 22:07
I do have to disagree with a few points in your analysis though, specifically regarding the "Team Plowie" match. I completely agree that often slowing down the top cycler on the other alliance just a little bit will cause them to drop a cycle and score significantly less. However, trying to do this yourself will often slow down your scoring much more than it slows down theirs. You touched on this, but it isn't effective defense because you're likely spending more time defending them than you're actually taking away from them. You usually have to wait around, then intercept them, and maybe even chase them for a little bit. That might make them drop a cycle, but it will certainly leave you with less time to score.

You should probably go back and read the post. Specifically:

We would like to point out that smart defense is very important…a bad defensive team will spend 10 seconds of driving to perhaps slow down the team they are defending by 2 seconds. A good defensive team could, and arguably should slow down an opponent by MORE time than the time spent defending.

So I have to assume you have not read or understood the article since you are disagreeing with a point and then going on to make the same point.

I'd argue that instead, you should task one of your alliance partners with, if nothing else, just camping out behind the pyramid. That should make the other alliance's high-cycling robot drop a cycle, and it won't impact your scoring at all (I'm assuming this team would contribute <9 points in teleop anyway). If you're your alliance's primary scorer, you should focus on scoring. Other people can play defense and improve the score differential much more.

I have to strongly disagree with this. I used a dynamic strategy during eliminations at Orlando and DC. All of our robots were approximately equally capable of scoring. This allowed us to harry opponents as we went back to reload: a little bump here, getting in the way there. But since each of us was removing maybe a second per cycle, we were able to reduce their scoring far more than if we had assigned one of our robots to full-time defense. I'd say it was pretty successful.
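To put rough numbers on that tradeoff, here's a back-of-the-envelope sketch in R. Every value below is invented purely for illustration; none of it is measured from real match data:

# Toy cost-benefit of playing some defense (all numbers illustrative).
my_cycle_time   <- 20        # seconds per scoring cycle for the defender
my_points_cycle <- 9         # points the defender scores per cycle
time_defending  <- 10        # seconds spent harassing instead of scoring
time_removed    <- 15        # seconds of delay inflicted on the opponent
opp_rate        <- 9 / 18    # opponent scores 9 points every 18-second cycle

points_given_up <- (time_defending / my_cycle_time) * my_points_cycle
points_denied   <- time_removed * opp_rate
points_denied - points_given_up   # positive means the defense paid off

With those made-up numbers the defense nets out ahead; drop time_removed to 2 seconds (the "bad defense" case from the article quote above) and it clearly doesn't.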

AllenGregoryIV
21-08-2013, 23:23
I agree with Andrew. We were finalists at Razorback, and one of the keys to our strategy was that our driver knew that if he had a shot at one of their robots in the middle of the field, he took it. We weren't playing the full defense strategy we had at IRI, since we were the best offensive robot on our alliance as well, but a few well-placed hits can swing a match. Especially if those hits can affect the other team more than they affect you, like knocking frisbees out of their hoppers, moving them to less optimal scoring positions, that sort of thing.

In many cases, when you have a robot that won't give you much on offense, it's okay to have them play D the entire time. But when you're lucky enough to have 3 quality offensive machines *cough* World Champions *cough*, this strategy works well.

Oh and thanks for bringing the blog back. I love this stuff.

KrazyCarl92
22-08-2013, 14:02
You touched on this, but it isn't effective defense because you're likely spending more time defending them than you're actually taking away from them.

What about a pushing match where you push an opponent toward your loading zone? That gets you where you need to be, takes them away from where they need to be, and takes up an equal amount of time for both robots. Even though the time spent is the same, the opponent wastes far more useful time than the active defender does, because the defender was headed that way anyway.

What about spending 2 seconds of defense to force a tall robot to go the other way around the pyramid? Being in their way for an extra 1-2 seconds of a single cycle, in the right place at the right time, can buy you something like 5 seconds of delay.

What about a floor loader hassling an opposing cycler while it loads at the unprotected feeder station? That makes their loading slower and more difficult, and there's a decent chance they drop some discs on the floor for you to collect... that would be a win-win because it slows down their cycle time and speeds up your own.

The point of the post is that opportunistic defense can narrow or widen a point margin between two robots or two alliances if used properly. A point prevented = a point scored, so the fact that the point margin in the example shown goes from 19 points to 10 points is tremendous. If one of your partners has a match where they get 1 more cycle than they usually do, the act of playing defense can help turn a close loss into a close win.

Lil' Lavery
22-08-2013, 14:33
Time, like cycles/shots on goal, is a secondary metric. It's not actually points. Even if both teams spend the same amount of time on a non-scoring activity (such as a pushing match), they don't necessarily lose the same number of points.
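As a quick, purely illustrative example of that asymmetry (the cycle numbers here are invented, not from match data):

# Same 10 seconds lost in a pushing match, different point cost.
time_lost <- 10
fast_rate <- 9 / 20    # a robot scoring 9 points every 20-second cycle
slow_rate <- 9 / 40    # a robot scoring 9 points every 40-second cycle
time_lost * c(fast = fast_rate, slow = slow_rate)   # 4.5 vs 2.25 points

The faster cycler gives up roughly twice the points for the exact same 10 seconds.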

matthewdenny
23-02-2014, 22:00
Has this blog been discontinued? I really liked it last year, but it just sort of disappeared.

Ian Curtis
23-02-2014, 22:14
I've been pretty busy at the job I get paid for, so I haven't had much time to build a robot, let alone write about one. :o

Right now, it looks like my work schedule doesn't rebalance until sometime after the Championship, so I don't think I will be contributing much before then. Andrew Schreiber was the other co-conspirator; I'm not sure what he's been up to. If anyone has some cool statistical stuff along the same lines, I'm glad to put it up. :)

Andrew Schreiber
23-02-2014, 22:22
I've been pretty busy at the job I get paid for, so I haven't had much time to build a robot, let alone write about one. :o

Right now, it looks like my work schedule doesn't rebalance until sometime after the Championship, so I don't think I will be contributing much before then. Andrew Schreiber was the other co-conspirator; I'm not sure what he's been up to. If anyone has some cool statistical stuff along the same lines, I'm glad to put it up. :)

Yeah, so fun fact: Ian and I both have this full-time job thing. (I actually just moved up here to the frozen north 2 months ago.)


So, basically, TwentyFour (and, truth be told, most of EWCPcast) is on hiatus for a little while. We enjoyed writing them for you guys, and I think we'll do our best to bring it back in a little while. In the meantime, if you guys have any requests for topics for us to cover, send 'em our way (this thread works, or PM Ian and me) and we will see what we can do. OR, if you really feel ambitious, write an article. We'll give it a quick once-over and post it up (crediting you, of course).

I just put in for a slot at CMP to do a conference talk. It's a little more technical than last year's; we're aiming to cover timing in FRC: how to make mechanisms hit their target times and how to derive timing requirements from strategy.

I wish I had time to do more of this right now, but unless someone can figure out how to add about 6 more hours to each day, I can't.

Sorry :(