21-03-2005, 02:27
Ethulin
Too many hats to count
AKA: Erik Thulin
FRC #0492 (Titan Robotics)
Team Role: Leadership
 
Join Date: Mar 2005
Rookie Year: 2003
Location: Seattle
Posts: 245
Re: How 492 won the PAC NW regionals

Quote:
Originally Posted by J Flex 188
While this thread and its data are rather interesting, I don't believe that the actual contents of the message (that is to say, that strategy and scouting are fundamental to winning a regional) are any different in past years. It only seems like a focus this year because of the fact that there is only one field object to manipulate, as opposed to a variety last year. Every robot has the same basic function.
My point was that this year a certain type of scouting was important: the type that requires scouting individual robots rather than going by ranking and scoring.

Quote:
Originally Posted by Rick TYler
Impressive scouting method. Nice work, and you certainly picked good partners for the finals.

I wanted to take this opportunity to point out to the utes here that purely quantitative analyses have to be subjected to some critical thinking before they are accepted. Since we were at the PNW regional, I'm going to go ahead and use that as an example.

1. Your quant analysis draws some misleading conclusions. Using my own team as an example, your quant analysis shows that we scored between 0 and 4 times per round, and you averaged that out to 1.6 per round. This analysis using the mean (average) number neglects the fact that our scoring did not have a normal distribution. We had an arm fail in three rounds (twice due to a PWM cable being knocked loose before the match, and once when the arm operator jammed the arm by moving it into the stop when our software limit detector had been accidentally overridden by some code changes). This means that we scored 0 tetras in three early rounds, but scored between 2 and 4 times in the other rounds. A more useful measure might have been the median (center value when ranked) or even the mode (the most common result) rather than the mean for this particular measure. Using 1294 as an example, our average score was 1.6 capped tetras per round. Our median was probably 3, and our mode was probably 3 or 4. My records aren't complete, but this is pretty close. This means that when our robot hadn't been sabotaged by our own team, we reliably capped 3-4 tetras per round. Your quantitative analysis missed this. (Now you could make the argument that being unreliable should count against a team, and I would agree with you. That's not my point. My point is that, by its nature, a simple formula cannot take everything into account.)
In our newest version of the scouting app we take into account mean, median, and mode. Also, the reason we have all those columns is precisely that: no one formula can take into account all the aspects of the game.
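
Just to make the difference between the three measures concrete for anyone following along, here is a quick sketch on some made-up per-match numbers (not 1294's actual results):

[code]
# Quick sketch: mean vs. median vs. mode on made-up per-match caps.
# The three zeros stand in for matches where a robot's arm failed.
from statistics import mean, median, multimode

caps_per_match = [0, 0, 0, 2, 3, 3, 3, 3, 4, 4]    # invented numbers, oldest first

print("mean:  ", mean(caps_per_match))       # dragged down by the three failures
print("median:", median(caps_per_match))     # closer to a typical working match
print("mode:  ", multimode(caps_per_match))  # most common result(s)
[/code]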

Quote:
Originally Posted by Rick TYler
2. You have no sense of time series in your data. Bots (like 997) that started off really strong ended up faring comparatively worse as others learned to defend against them. Likewise, other teams became stronger as their driving teams got better. By treating all data the same, you probably over-emphasize early results. Try a weighting factor over time next year and see if it changes your analysis.
Weighting all data equally requires a consistent bot, I will grant you that. But do we really want a bot that only worked well for the last 4 matches?
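
That said, if anyone wants to try Rick's weighting idea without throwing early matches away entirely, a simple exponential recency weight is one option. A minimal sketch (the decay factor is an arbitrary choice, not something our app actually does):

[code]
# Minimal sketch of a recency-weighted average: later matches count more,
# but early matches still contribute. The decay factor here is arbitrary.
def recency_weighted_mean(scores, decay=0.8):
    """scores are per-match values in match order, oldest first."""
    n = len(scores)
    weights = [decay ** (n - 1 - i) for i in range(n)]  # last match gets weight 1.0
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

scores = [0, 0, 2, 3, 4, 4]              # made-up caps per match, oldest first
print(recency_weighted_mean(scores))     # leans toward the later, stronger matches
print(sum(scores) / len(scores))         # plain mean, for comparison
[/code]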

Quote:
Originally Posted by Rick TYler
4. Some of your data are wrong, probably because of under-sampling. What process do you use to collect and quality-control it? You probably want to make sure to have different scouts evaluate each robot. As an example, in most of our matches we started off with a held tetra, yet you say "no" to this in your spreadsheet.
Collection: we had 4 scouts covering every match, 1 for each row, as well as 1 for entering data.
For quality control: well, even though each person was assigned to a row, they were always keeping an eye on the others, just to make sure no one had fallen asleep at the wheel.
If we say "no" for your team, it was only because that was what we were told by your team. That n/y was recorded pre-games on Thursday, so I guess one of your teammates was confused.

Quote:
Originally Posted by Rick TYler
5. Unlike a baseball statistical analysis, your universe of measurements is too small to be statistically significant. This means that you should always apply human analysis before accepting the results. (You may, in fact, do this. I just wanted to encourage all teams not to blindly accept nice-looking quantitative results that may actually mean nothing.)
Well, we went on the data we had to go by. Of course, if we had more data it would be more accurate, but that is just wishful thinking.
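
Rick's point is easy to put numbers on, though: with only around eight matches per team, the error bar on an average is wide. A rough sketch with made-up data, just to show the size of the band:

[code]
# Rough sketch: how noisy an 8-match average is. Numbers are made up;
# the point is the width of the error bar, not the values themselves.
from statistics import mean, stdev
from math import sqrt

caps = [0, 0, 2, 3, 3, 3, 3, 4]            # one team's caps per match
m = mean(caps)
se = stdev(caps) / sqrt(len(caps))         # standard error of the mean
print(f"mean = {m:.2f} +/- {se:.2f}")      # a wide band from only 8 matches
[/code]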
Quote:
Originally Posted by Rick TYler
Your methods are proven by your results. You went through the finals like Patton through France. As I said up there, I am just encouraging all scouting teams to not just trust their numbers without fully understanding what those numbers mean and where they came from.
I think you did a nice job.
Thanks.

Last edited by Ethulin : 21-03-2005 at 02:33.