#15   12-04-2011, 08:26
ToddF
mechanical engineer
AKA: Todd Ferrante
FRC #2363 (Triple Helix)
Team Role: Mentor
 
Join Date: Apr 2011
Rookie Year: 2011
Location: Newport News, VA
Posts: 588
Re: TOWER Malfunctions since Week 3

Having been through the Palmetto and Virginia Regionals, I've now seen quite a few instances of "tower failures". These tend to fall into two categories:
A) False positives that happen when a robot hits the base hard or a deployment mechanism hits the pole hard. Robots with deployment issues often repeatedly ram the tower trying to get their minibots to deploy, sometimes practically knocking the towers over in the process.
B) False negatives that happen when a minibot climbs the tower and hits the plate, but the sensors don't register. Minibots fall into two categories: direct-drive bots, which are very fast and light; and geared bots, which use the stock Tetrix gearboxes (and often stock Tetrix wheels) and tend to be slower and heavier. After watching two regionals' worth of matches, my impression (not backed up by supportable data) is that all or nearly all false negatives happen to direct-drive minibots. There may be exceptions, but I'm also fairly sure the correlation would hold up statistically.

As an engineer, I can understand that designing a sensor assembly sensitive enough to avoid false negatives without also producing false positives may not be as easy as it first appears. But with something as fundamentally important as an automated scoring system for a national competition, the sensors as currently designed don't seem to be fulfilling their design objectives. I'm sure they could have been made more consistent, possibly by using sensor technology other than mechanically actuated limit switches. I'm also sure they aren't going to be redesigned at this point.

We, as mentors and engineers, are now provided with a "teachable moment" for our students. Nothing in the real world is exact. Material properties, such as the yield strength of aluminum, are generated statistically from experimental data. If you want to be as certain as possible of preventing a failure, you use allowable stresses which are well below the statistically generated averages.
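
To make that concrete for the students, here's a minimal Python sketch with made-up numbers (not real test data, and the k = 3 coverage factor is an arbitrary illustration, not a basis value from a handbook like MMPDS):

import statistics

# Hypothetical yield-strength measurements for an aluminum alloy, in ksi.
# These numbers are invented for illustration only.
samples = [39.8, 41.2, 40.5, 38.9, 42.0, 40.1, 39.5, 41.7]

mean = statistics.mean(samples)
std_dev = statistics.stdev(samples)

# A simple "mean minus k sigma" allowable: the design value sits well
# below the statistical average so that scatter in the material data
# is very unlikely to cause a failure.
k = 3
allowable = mean - k * std_dev

print(f"mean yield strength: {mean:.1f} ksi")
print(f"standard deviation:  {std_dev:.2f} ksi")
print(f"design allowable:    {allowable:.1f} ksi (mean - {k} sigma)")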

We have enough observational data on the tower sensors' behavior to conclude that their triggering threshold is somewhat inconsistent, but that it could be statistically characterized if someone took the time to do so. Teams must choose either a heavier, slower minibot that triggers the sensor perhaps 98% of the time (a rough estimate) or a lighter, faster minibot that triggers it perhaps 70% of the time. Engineers make these kinds of decisions every day when designing things like cars, aircraft, and spacecraft.
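
To put rough numbers on that tradeoff, here's a back-of-the-envelope Python sketch. Every value in it is my own guess (the trigger rates are my rough estimates from above, and the average-points figures assume the fast bot usually wins the race while the slow bot usually places second or third), so treat it as a way of framing the decision, not as data:

# Rough trigger-probability estimates from watching two regionals.
p_trigger_fast = 0.70   # light, direct-drive minibot
p_trigger_slow = 0.98   # heavier, geared Tetrix minibot

# Guessed average points scored when the sensor does trigger
# (the first minibot to the top scores the most, later finishers less).
points_fast = 27   # usually wins the race
points_slow = 18   # usually places second or third

# Expected points per match for each strategy.
ev_fast = p_trigger_fast * points_fast
ev_slow = p_trigger_slow * points_slow

print(f"fast/light minibot: expected {ev_fast:.1f} points per match")
print(f"slow/heavy minibot: expected {ev_slow:.1f} points per match")

With those guesses the two strategies come out nearly even (about 18.9 vs. 17.6 expected points per match), which is exactly why actually characterizing the sensors would be worth a team's time.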

We can complain all we want about the behavior of the sensors, just as we can complain all we want about how engineering materials don't break or buckle under the exact same loads every single time. Or we can teach our students how to deal with uncertainty in their design choices, and to accept the consequences of those choices. As a mentor, I see my job as showing the kids how to think about the world less like a high school student ("Those stupid sensors don't work right! We just got robbed! This isn't fair! Waaaah!") and more like an engineer ("Now that we have observed how the sensors behave, let's make an educated choice about how best to use that behavior to make our team most likely to win.").

Todd F.
mentor, Team 2363