Sensors Not Required: FRC Design "Sans Feedback"

Hey there CD Community! I have a few questions I’d like to ask.

First, to give some background. I’ve been working with my current team for the past two build seasons. I have a mechanical engineering background, and most of the students on my team lean towards the mechanical side as well. We have a small programming team (one college mentor and one student) who do their best with LabVIEW. We build simple and towards our strengths, and keep programming to a minimum.

The only functional sensor on our 2012 and 2013 robots is the required pressure switch for the pneumatics system!

And we’ve been competitive both years. Last year we had a regional win, a #4 seed at Madera, and a #8 seed in Newton; with no camera or shooter encoder, but lots of driver practice, we were shooting >50% from the key. This year we seeded #1 at Davis, ran a 5-disc auto, and have an OPR of 55.0.

That said, I have two questions:

  1. Are there any other teams that don’t use sensors? Why? With what results?

  2. Should FIRST provide bigger incentives within game design to utilize sensor feedback systems? For instance, 2005 had the random-placed vision tetras, 2007 had the rotating rack, Big Ball placement in 2008, etc. In recent years (2009-now) though, there has been little “randomness” in game design that would require advanced sensor utilization. What would a change in “emphasis” mean for FIRST teams? How much stretching in this area is appropriate?

-Mike

I think there is already enough incentive to use sensors.
This year we didn’t have time to set up our shooter speed control on the robot until halfway through our first district; the change in firing rate alone was jaw-dropping (going from a 1 sec delay between shots to allow the wheel to spin up, down to a 0.4 sec delay). Add in the additional reliability (since open loop would overshoot if left running too long, or undershoot if fired too quickly) and you can make a really strong case that sensors are an important part of the game.
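The open-loop vs. closed-loop tradeoff described above can be sketched in a few lines. This is a hypothetical Python illustration (the RPM target, tolerance, and delay numbers are made up, not this team's actual values):

```python
# Hypothetical sketch: fixed-delay (open-loop) vs. speed-based
# (closed-loop) fire gating for a shooter wheel. All constants are
# illustrative assumptions, not any team's real numbers.

TARGET_RPM = 3000.0
TOLERANCE_RPM = 50.0   # acceptable speed error before firing

def ready_to_fire_open_loop(time_since_last_shot_s):
    # Open loop: wait a fixed worst-case delay and hope the wheel
    # has recovered its speed.
    return time_since_last_shot_s >= 1.0

def ready_to_fire_closed_loop(measured_rpm):
    # Closed loop: fire as soon as the sensor says the wheel is
    # actually back up to speed -- usually much sooner than the
    # worst-case delay, hence the jump in firing rate.
    return abs(measured_rpm - TARGET_RPM) <= TOLERANCE_RPM
```

The closed-loop version also fixes both reliability problems mentioned: it never fires early (undershoot) and never waits longer than necessary.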

Before I joined our team as a programming mentor, the team was heavily mechanically oriented, and they didn’t use many sensors. Last year, I took my first swing with an encoder on our roller wheels (short-range vertical shots), but not until we reached Michigan State Champs; this year we built a speed sensor from a US1881 latching Hall effect sensor and a couple of ring magnets, along with a gyro to keep us driving straight. We’ve also toyed around with PID controls and such, but haven’t gotten enough expertise to use them effectively.
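The "gyro to keep us driving straight" idea usually amounts to a proportional correction on the gyro heading. A minimal Python sketch, assuming a positive gyro angle means the robot has yawed off course (the gain and sign convention are assumptions you would tune on a real robot):

```python
def drive_straight(throttle, gyro_angle_deg, kp=0.03):
    """Proportional heading hold: skew the left/right motor outputs
    to cancel gyro drift. Returns (left, right) motor commands
    clamped to the range [-1, 1]."""
    correction = kp * gyro_angle_deg
    left = max(-1.0, min(1.0, throttle - correction))
    right = max(-1.0, min(1.0, throttle + correction))
    return left, right
```

With a zero gyro angle both sides get the same throttle; any drift skews the outputs to steer the robot back onto its heading.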

I think it is good that the games remain accessible to teams that don’t have strong programming and controls abilities; however, building your capabilities (in-season or off-season) has some pretty huge benefits, both in competition and in learning opportunities.

We’re very lucky to have Mr. Ether nearby to bail us out when I get in over my head, and to dangle carrots out there for me to chase :slight_smile:

3623 used photosensors last year in a futile attempt at an automated pickup and shooter system (in retrospect we designed the mechanical system poorly; I joked “we forgot about inertia”). This year we used two limit switches to prevent the system from overshooting certain points.

As a programmer, I’ve always wanted to build automated systems, but the reality is I’m working on making sure the system as a WHOLE works. It’s definitely viable to use no sensors and have no automated systems. I’ve had robots reach the semifinals with fewer sensors than people on the drive team.

Automated systems are cool, and definitely can bring in kids, but don’t overreach yourself just because you think you need to automate systems and have more sensors than the average automobile. If you want to put an automated system together, the offseason is a perfect time to try it out (assuming you can get kids and resources).

3785 doesn’t use too many sensors. This year we tried to implement a gyro to sense the angle of our shooter, and the camera for distance, to run some simple calculations so we could shoot for the 3-pt goal easily and reliably, but the processing caused our robot to lag. So we went the mechanical route, and had great success.

Since it’s our 3rd year, I think we’re progressing naturally in our abilities due to experience. But at the end of the day we need a robot that works, and we’d rather have a robot that can play the game well with manual driver controls, than one that marginally works but does so semi-automatically.

I will say that this year we grouped operations together so that one button would perform two tasks. We used timing to perform the operations correctly. I think as long as you keep trying to push the envelope, you’re going in the correct direction. Whether you implement the actions or not is a team decision.

Let strategy, your goals and the mechanical design guide your use of sensors. Stay true to the KISS principle. In 2011 we used the AB photo eyes in autonomous to follow the tape, and an ultrasonic proximity sensor to give us distance to the wall so we knew when to hang the tube. Last year we used vision to set the shooter speed and to indicate the aim was centered by turning on a light ring on the back side of the robot; the driver actually positioned the shot manually. This year we don’t have much for sensors beyond some limit switches.

If sensors can give us a significant advantage, we use them. If we don’t think they will add much we leave them out as they can be just another thing to go wrong in a match.

Mike,
I mentor a couple of different teams in my area besides my “home” team, so I’ll share some mixed viewpoints with you.

One team tends to avoid using sensors, since, as in your case, they make programming more complex. They do use some, but would rather not if they don’t have to. They believe manual operation by the drivers is good enough. They do fairly well competitiveness-wise.

A 2nd team believes that sensors are important and is making strides to incorporate them into their designs. They believe that with sensors working properly they can automate many functions, having the machine do them faster and more accurately than a human operator could. This team has ups and downs competitiveness-wise, but they accept them as growing pains. They believe that as long as they improve over last season, they are progressing.

Yet a 3rd team believes sensors would be an enormous improvement for them, but alas they simply don’t have the funding to invest a few hundred dollars in sensors like encoders, pots, CAN, etc. They can barely scrape together the $5K needed to compete. So they are mostly manual-mode operations. They struggle to compete, but they too are constantly improving.

Disclaimer…
My own personal “dream” would be to design a machine with the proper sensors, the proper software and the proper control system that has only 2 Joysticks for the driver, and 2 buttons for the manipulator operation.

  1. Drive to area near game piece
  2. Press button one to acquire game piece(s) (time approx. 0.25 seconds)
  3. Drive to scoring area
  4. Press button two to score game piece(s) (time approx 0.50 seconds)

This would require lots of automation. BTW Automation doesn’t always mean complex code either.

Now to answer about pushing the needs for sensors…
I’ve always believed there should be 3 tiers of challenges for autonomous mode.

Tier 1 - Basic primitive, something a moderately skilled rookie team could succeed at.
Bonus 1X

Tier 2 - Some sensors required, fairly simple code. The start or end points or goal locations change just before autonomous starts.
Bonus 3X

Tier 3 - Some sensors, lots of code. Complex, a real reach. Something very challenging like the targets move, and/or change color during autonomous.
Bonus 5X

I’ve been on sensor-heavy teams (my current one, 2702) and sensor-deficient teams.

The sensorless teams did OK, though they could be inconsistent. It’s hard to make sure that arms get to the right place each time, it’s hard to do a proper repeatable auto mode, and it’s hard to make sure that your robot can’t damage itself in case of operator error. One time, a structure that required two motors moving at the same time had the operator move only one motor, and it bent the entire frame of the robot.

2702 goes far in the opposite direction. Assuming everything worked (and at Waterloo, things did), this year’s robot can be completely operated with 3 buttons: one to go to “load mode”, one to load a frisbee, and one to fire the frisbees. Last year’s robot needed two digital sidecars because we had about 14 sensors to manage the basketballs.

The firing mode had to spin up the motor until it was at the right speed, hold the speed, then move a set of augers exactly one rotation to release exactly one frisbee, then move a servo to push the frisbee out. All those operations were repeated 4 times in about 3 seconds. A human simply couldn’t do it.
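A firing sequence like that is naturally written as a small state machine stepped every control loop. The sketch below is a hypothetical Python reduction of the cycle described (the state names, sensor flags, and 4-shot count are stand-ins, not 2702's actual code):

```python
def next_state(state, at_speed, auger_done, servo_done, shots_fired):
    """One transition of the firing cycle: spin the wheel up, index
    the auger one turn, push with the servo, repeat for 4 shots."""
    if state == "SPIN_UP":
        return "INDEX_AUGER" if at_speed else "SPIN_UP"
    if state == "INDEX_AUGER":
        return "PUSH_SERVO" if auger_done else "INDEX_AUGER"
    if state == "PUSH_SERVO":
        if not servo_done:
            return "PUSH_SERVO"
        return "DONE" if shots_fired + 1 >= 4 else "SPIN_UP"
    return "DONE"
```

Stepped at a typical 50 Hz robot loop rate, each stage only needs a fraction of a second, which is how the whole cycle finishes in about 3 seconds with timing no human operator could match.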

The advantages of a fully-sensored robot are pretty obvious: properly programmed, they’re faster, more reliable, don’t damage themselves, and are easier to operate.

Advantages to sensorless teams:
-Far fewer points of failure
-Simpler to code: not only do they not have to program sensor code, but they also don’t have to handle cases where sensors fail and overrides are needed
-Far easier to test
-If designed properly (aka “let’s hope the driver never makes a mistake” is never said or implied in the design), then they can be extremely robust and reliable.
-May be more likely to come across simpler (but equally good) solutions to the problem, as “solving it in software” is never proposed.

Michael,
I, for one, truly appreciate the simplicity-and-reliability approach 1662 has taken over the years. Your success alone is a great testament to its effectiveness.

That said, if we look at what a “Robot” is: “A robot is a mechanical or virtual artificial agent, usually an electro-mechanical machine that is guided by a computer program or electronic circuitry. Robots can be autonomous, semi-autonomous or remotely controlled…” (Wikipedia). Or my “definition”: “An electromechanical device that operates and reacts to its environment based on programming and sensor input.” It seems apparent that sensors are a major part of a “robot”. If we neglect to take advantage of their input, are we really teaching robotics?

Please don’t get me wrong: robotics is such a huge field that a focus on the mechanics and less on the sensors is just as valid as any other approach. The last thing I would want to suggest is that 1662 should change their approach.

What I would like to suggest is that you take time in the off season and start experimenting with sensors that you feel you might benefit from. My first suggestion would be to start with quadrature encoders, limit switches, and potentiometers.

Please feel free to contact 2073, EagleForce, if you would like any help with this.

This year we were top seed and won the Phoenix regional with no sensors on the robot other than the pressure switch on the pneumatic system.

Oh…I fibbed…we had a camera on it, but it wasn’t used, it just sat there looking pretty.

We try to eliminate sensors as much as possible, but sometimes they are useful. We have basically only used limit switches and encoders, though this year we used an IR sensor.

2013: Open loop shooter speed control. Because the discs slip on the shooter wheels repeatably, and because discs fly into the goal on the “upward” part of the trajectory, this works just fine.

We used an encoder + limit switch on our arm almost exclusively for autonomous mode. Probably could have used a pot, but meh. Arm goes down, zeros on the limit switch, goes up until we hit our preset encoder count, fires. This style of arm and shooter could have been done completely without sensors using pneumatics, but we liked being able to use the camera and a drawn crosshair to shoot from anywhere we wanted, so we made an aimable arm.
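That zero-then-raise routine is a useful pattern: the limit switch gives an absolute reference and the encoder gives position relative to it. A hedged Python sketch of one control-loop step (the motor signs, speeds, and preset count are illustrative assumptions, not this team's values):

```python
def arm_auto_step(limit_pressed, encoder_count, zeroed, preset=1200):
    """One loop iteration of the autonomous arm routine: drive down
    until the limit switch closes (zeroing the encoder there), then
    raise until the preset count is reached, then fire.
    Returns (motor_command, new_zeroed_flag, fire)."""
    if not zeroed:
        if limit_pressed:
            return 0.0, True, False   # at bottom: stop and zero encoder
        return -0.4, False, False     # keep driving down toward the switch
    if encoder_count < preset:
        return 0.5, True, False       # raise toward the preset count
    return 0.0, True, True            # at preset: hold position and fire
```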

For our disc path, a limit switch and IR sensor would help us detect what state the hopper was in.

2012: Only sensor was our shooter wheel encoder. We had something resembling PID working for our offseason events, but it wasn’t stellar. Software isn’t our strong suit.

2011: Zero sensor feedback. Robot was a pretty crappy arm-bot that we have all tried to forget since. Set points would have been nice.

2010: Zero sensor feedback. Robot was basically just an 8WD base with a piston kicker and a suction cup. There was no need.

It sounds like minimizing the use of sensors and advanced automation techniques is a good fit for your team based on its resources. In general I applaud teams for being pragmatic about what is within their means.

As awesome as they are, the problem with sensors, automation, feedback control, and advanced autonomy is that there is a HUGE learning curve you must conquer before you arrive at even pretty basic functionality (e.g., how many teams in FIRST can program their robot to accurately and repeatably drive a non-straight path?). Adding a sensor or automated routine to your robot also multiplies the number of potential failure points, and in the 6-week-build world of FRC you often find a lot of heavily automated robots having teething problems as they discover these failure points.

FRC is very heavy on mechanics. While true “powerhouse” robots are a result of mastery of all aspects of engineering, many well-built robots with bare-bones code have won Regionals. Comparatively few kitbots with world-class code have done the same.

Mechanical disciplines, in general, are more tangible and are more quickly grasped by students. They have also been around for thousands of years longer than software engineering and we have become pretty good at formalizing them. Even in the microcosm of FRC, we had COTS gearboxes available to us long before COTS computer vision software (since the CMUcam was basically a closed system, I do not include it in this self-indulgent lamentation about the short changing of software in FRC :)).

FRC games usually have some sort of incentive for automation (vision targets, randomization, additional game pieces during autonomous mode, etc.), but clever teams often find low-tech solutions to these problems that work just as well during teleoperation (best example I can think of is the “photon cannon”). Balancing the “need” for sensors and automation against the fact that many teams are deficient in these areas is (I would imagine) a real challenge for the GDC.

This year we are using far fewer sensors. I always try to plan for a design not to use sensors, just in case a sensor breaks. We did have to implement a few this year.

  1. We have a light sensor to measure shooter wheel speed, with a bang-bang control loop set up to control the speed. 766 is running a similar setup and they have no control loop; we have a wobblier mounting for our shooter than they do, so I feel our design has to eliminate a bit of speed error to match their accuracy. Mostly our bang-bang controller keeps us from firing at too fast a rate, which would cause shots to go out at too slow a speed. I think a practiced operator could do this manually. We also use polycarb whips to touch the pyramid, so we avoided vision systems.

  2. The drivetrain pretty much runs with no control loop; we have been messing with our gyro, but it looks like we won’t need it. We do mess with the joystick curve to change the throttle response. We did design the drive to take encoders and have them in place; we only use them when the PTO is engaged on the left-hand drive. I do have a design philosophy this year of making sure our motor systems always have a spot for an encoder.

  3. Our climber has two limit switches to prevent crashes. We tried to design the climber to be strong enough to survive a crash, but the PTO has enough torque to skip chains, so we have to use the limit switches. We also zero the encoders using the limit switches. I think we have everything needed to automate the climb, but we bounce over corners, so I think we will leave it in manual. We also have a camera to check our alignment.
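The bang-bang speed control mentioned in item 1 is about as simple as closed-loop control gets: full power when the wheel is below the target speed, coast when it is above. A minimal sketch (the logic is generic, not this team's code):

```python
def bang_bang(measured_rpm, target_rpm):
    """Full throttle below the setpoint, coast above it. This works
    well for flywheels, which are only ever driven in one direction
    and slow down on their own from friction and shot loading."""
    return 1.0 if measured_rpm < target_rpm else 0.0
```

Because the flywheel's inertia smooths out the on/off switching, this crude controller holds speed surprisingly well, which is exactly why it is popular for shooters.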

I find that there are good benefits to sensors, of course, provided they work. I use them for autonomous, for checking if the shooter is at the correct RPM, for preventing mechanisms from slamming into the hard stops, and for providing accurate feedback on the robot’s location on the field.

On our robot we have an Axis Camera primarily used to help our drive team make sure the robot is lined up in autonomous and align to the corner for climbing. Going blind would be a nightmare for our driver.

We use limit switches on our climber to prevent our motors from stalling trying to get past the hard stops.

We have encoders on our PTOs for autonomous driving after our three shots are fired and for autonomous climbing. We have done manual climbing, but it gets tiresome for our driver to do a 12-stage climb every match. I do not trust using timers, as distance covered can vary based on the battery.

Lastly we have a gyro for turning in autonomous. Again the actual turn can vary based on battery voltage if timer based.
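A gyro-based turn replaces "run the motors for N seconds" with "turn until the measured angle error is small", which is why it is immune to battery-voltage variation. A hypothetical proportional sketch (the gain, clamp, and tolerance values are assumptions to be tuned):

```python
def turn_to_angle(gyro_deg, target_deg, kp=0.02, tol=2.0):
    """Proportional turn: command is scaled by angle error, clamped
    for safety, and zero once within tolerance.
    Returns (turn_command, done)."""
    error = target_deg - gyro_deg
    if abs(error) <= tol:
        return 0.0, True              # close enough: stop turning
    cmd = max(-0.6, min(0.6, kp * error))
    return cmd, False
```

Because the loop keys off the measured angle rather than elapsed time, a weak battery just means the turn takes slightly longer, not that it ends at the wrong heading.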

On the shooter we have a hall effect sensor to measure RPM. This allows our operator to 100% know that the shooter wheel is up to speed before firing a frisbee.

My team every season tries to find the balance between manual control and automation. We try to use sensors as simply as possible. We do not do anything beyond basic closed feedback loops to slow a mechanism as it approaches its target, and simple logic checks for whether a mechanism is at the hard stop.

Sensors are there to supplement the mechanical system of the robot. Try to find what works the best for you.

I agree with most of what has been said here. It is a balancing act. The more you know about the various sensors, and the more experience you have implementing them, the better off you will probably be; poorly implemented, they can be a detriment. Our team could use more experience with things like vision, gyros, encoders, etc., and I am sure we would find more uses for them. I think the mechanical design, which by nature has to come first, drives the automation: what can be done with sensors and programming to add effectiveness to the robot, make the driver’s job easier, and make autonomous the most effective it can be within your skill set.

Growing your knowledge is tough for many of us but always useful. My work experience has made me familiar with ultrasonic ranging, photo detection and analog principles, but not with vision, gyros, encoders and the like. For our vision system last year I would have been lost without the white papers and Greg McKaskle’s posts on CD. You have to keep it simple, but realize when you need more high-tech solutions.

I agree with this completely. Every year 1706 has been around, we have implemented a gyro; by now it is no big deal for our (LabVIEW) programmer (sadly, yes, singular). We are big fans of limit switches. This year, we put limit switches on our robot to know if our claws are touching the pyramid so we can hang. Last year we had one to know whether we were touching the bridge. Just simple tasks.

As this is my second year doing vision programming, it came a lot easier for me, and the program this year is much more advanced than last year’s. That, of course, comes with experience.

I feel as if every team should at least attempt to use different sensors, just to see if they like them. That is what we did with vision last year, and we (meaning I) loved it, and fortunately it helped, a lot.

The less you have to guess, the better off you are. Or at least that is how I see it.

Last year, 4183 used no sensors on our robot except two limit switches at the ends of the turret travel and the pressure switch. We suffered because of it. Our shooting was slow to align and inaccurate since the driver had to manually aim the turret (a potentiometer or encoder would be far better) and there was no closed-loop shooter speed control.

During the off-season, we competed in Vex Robotics Competition, using more sophisticated software techniques, such as velocity and position PID control, multi-threaded code, motor controller linearization and trapezoidal velocity profiles. Most of these features worked well, given the limitations of Vex sensors, and made the drivers’ lives much easier.
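Of the techniques listed, the trapezoidal velocity profile is the easiest to show compactly: accelerate at a constant rate, cruise at the speed limit, decelerate symmetrically. A Python sketch, assuming the move is long enough to actually reach cruise speed (a real implementation would also handle the short-move triangular case):

```python
def trapezoid_velocity(t, max_v, accel, total_dist):
    """Commanded velocity at time t for a symmetric trapezoidal
    profile. Units are arbitrary but must be consistent."""
    t_ramp = max_v / accel                 # time to reach cruise speed
    d_ramp = 0.5 * accel * t_ramp ** 2     # distance covered while ramping
    t_cruise = (total_dist - 2.0 * d_ramp) / max_v
    if t < t_ramp:                         # accelerating
        return accel * t
    if t < t_ramp + t_cruise:              # cruising
        return max_v
    t_dec = t - t_ramp - t_cruise          # decelerating
    return max(0.0, max_v - accel * t_dec)
```

Feeding these setpoints into a velocity PID loop, rather than commanding the final position directly, is what makes the resulting motion smooth and repeatable.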

This year, we applied the lessons learned from Vex to FRC. On our robot, we have encoders and a gyro for the drivetrain, a potentiometer and limit switches on the shooter mount, custom optical encoders on the shooter wheels, a limit switch for pyramid alignment, the pressure switch and a camera to help run the floor intake. These sensors helped make our hardware far more competitive. However, effectively using all these sensors requires extensive software skills (Thanks Ether, 254) that must be learned and practiced before build season. It is certainly worth pursuing these skills; they can only help you. That said, there are many highly competitive robots (1726, as mentioned above) running very simple software.

Also, more complex sensor-based software can be far more difficult to debug and fix at competition. We learned this the hard way: since we built a practice bot, our software was written and tuned for it. When we tested it on our competition robot, we had serious issues with shooter mount and drivetrain control. Thursday afternoon and most of Friday of the Arizona regional were spent debugging the code. That cost us our first five qualification matches (though we did then win the second five).

Please feel free to ignore all my ramblings. This was written late at night after finishing a rather annoying Literature essay.

255th post. Is my post counter about to roll over to 0?

When I was a student on 498 we made heavy use of sensors, but that was largely because I was head of the programming team and kept wanting to learn new things each year. Sensors can definitely help make a good robot better, and there is a nice shiny award for using them well, but I would put them as the last focus because they will usually not make up for mechanical deficiencies (the only personal exception: we used a gyro to drive straight, which prevented our 2007 drivetrain from drifting to one side). Our robots were living proof of it. In 2006 and 2007 we had multiple autonomous mode options and in-match automation of our shooter targeting (2006) and arm positions (2007). None of it mattered, because our game piece manipulators were poor in both years and we tended to lose control of game pieces long before we could score them. I guess it did matter some, as our ability to score in autonomous was probably the only reason we made eliminations in 2006, but we were more or less useless 20 seconds into most matches due to ball jams.

If you have a little extra programming resources, push for one new sensor each year. Maybe use a potentiometer on an arm next year, the following try to add a gyro, etc.

Something I think teams tend to forget about when they go ahead with more sensor control is what will happen if the sensor fails. Starting in 2006 we always built in a Manual Override switch that would allow us to disable all sensor control when flipped. It saved us once or twice when a potentiometer slipped or a limit switch got jammed and we were able to get out of the software lock on our arm by overriding sensor feedback.
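That override pattern is cheap insurance and easy to express: one switch that routes the raw operator command around all sensor-based logic. A minimal Python sketch (the names and pass-through behavior are illustrative, not 498's actual code):

```python
def arm_output(operator_cmd, sensor_cmd, sensor_limits_ok, override_switch):
    """Select the arm motor output. With the override switch flipped,
    ignore all sensor-based limits and pass the operator's command
    straight through -- the escape hatch for a slipped pot or a
    jammed limit switch."""
    if override_switch:
        return operator_cmd
    if not sensor_limits_ok:
        return 0.0          # sensor says stop (e.g. limit switch hit)
    return sensor_cmd
```

The key design point is that the override path touches no sensor code at all, so a failed sensor can never lock the mechanism out of manual control.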

Last year we tried to automate so much, and it was well beyond the abilities of our programming team. Our robot was well built, but it couldn’t shoot well at all (we ended up playing “feederbot” at UTC and CMP).
This year we simplified. We have two limit switches that turn on lights when we’re in shooting position.
That’s just about it.
Our drivers, however, are incredible, and make up for any deficiencies in software this year.

I also agree with a lot of what’s been stated here. It really does take a few years to develop that “library of experience” that can be repeatedly applied to different applications year in and year out.

I’ve been on a few teams over the years and have seen different approaches to design with and without sensors in mind. In my experience/opinion, my most successful years have been with robots designed to achieve the goals of the game through simple mechanical systems, with sensors/feedback added to enhance those systems. This doesn’t mean that control system personnel are excluded from the design process as mechanical systems are designed; you have to have control designers involved so that accommodations can be made for control enhancements.

This season alone, I’ve seen some great robots that send no feedback to the driver station during matches. You kind of have “Duh!” moments with yourself when you see the simplicity of these teams’ designs.

Nate