What's your favorite programming/control system magic this year?

Effective programming/control system techniques almost work like magic. Unlike mechanical designs, they are frequently hidden from casual observers.

Which cool techniques have you seen or adopted this year that are creative and worth highlighting for your peers?

I saw a few cool ones myself.

What’s your favorite?

I personally love feedback systems. In 2012 and 2013, vision would calculate how far away the robot was from the goal, and the shooter would continuously adjust to be ready to shoot. 341 had this in 2012 and I’m sure again in 2013; we (1706) had it both years. It is truly amazing.

The devil’s in the details. Most of my favorite cases of programming are very much behind the scenes:

  • 254’s elegant usage of VEX bump switches. You can see them in Barrage’s picture on their Aerial Assist page. I assume (based on their placement) that they were used to pulse their intakes while grabbing the balls for their famous auto.
  • 254 and 1114 both had a strip of LEDs on their robot that lit up when the flywheel was up to speed for their drivers.
  • Both 987 and 4334 have implemented their own engine to parse out an autonomous scripting language. 4334’s engine for Gordian is open source and can be found on GitHub. Though I’ve never used Gordian, I can imagine that it cuts down on compile/reboot time when writing auto routines.*
  • 33 had a unique catapult design that involved a lead screw that they used to vary their shot. I don’t know if they ever did, but this gave them the ability to vary their shot distance on the fly. The only other teams that I know of that could change their shot distance on the fly were all flywheel based (or, per billylo’s post, pneumatic).
  • While not used, a sophomore on the team developed some OpenCV for ball detection, with the intention that it would be used in an autonomous mode that would pick up our partner’s misses. The ball detection code was written, but the corresponding auto mode was never implemented due to our decision to allocate resources in a manner more consistent with our priority list. Who knows…maybe we’ll finally have some time to realize it for some of our off-seasons.

*An aside: I’m curious to see whether or not they (Gordian, and other in-house scripting languages) are displaced by other solutions: the roboRIO is supposed to have Java 8, which should allow usage of Jython and/or Rhino (Java engines for Python and JavaScript, respectively) for Java teams.

254 had code that allowed them to set waypoints on the field and have the computer make a spline in that path. The spline was then translated into different motor curves for each side of the drivetrain. They could literally draw a path and have the robot follow it.

From Kevin Sheridan on /r/frc:
“We generated a path using quintic hermite spline interpolation. This creates two individual paths for the right side of the drivetrain and the left side of the drivetrain. We can tell the path generator the goal location, goal velocity, goal heading, starting location, starting velocity and starting heading. We can also set the max velocity for a path. This way we can set a path using multiple waypoints to navigate to a certain spot on the field in a specific way such as driving in a s-curve.”
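For anyone curious what “quintic hermite spline interpolation” looks like in code, here is a minimal sketch for a single coordinate (interpolating x(t) and y(t) separately gives a 2D path). The class and method names are my own, not 254’s actual code; the six basis polynomials let you pin down position, velocity, and acceleration at both endpoints of a segment:

```java
// Sketch of quintic Hermite interpolation for one coordinate.
// Names are hypothetical, not from 254's released code.
public class QuinticHermite {
    // Position along the segment at parameter t in [0, 1], given
    // position/velocity/acceleration at the start (p0, v0, a0)
    // and at the end (p1, v1, a1).
    public static double position(double p0, double v0, double a0,
                                  double p1, double v1, double a1, double t) {
        double t2 = t * t, t3 = t2 * t, t4 = t3 * t, t5 = t4 * t;
        double h0 = 1 - 10 * t3 + 15 * t4 - 6 * t5;            // weights p0
        double h1 = t - 6 * t3 + 8 * t4 - 3 * t5;              // weights v0
        double h2 = 0.5 * t2 - 1.5 * t3 + 1.5 * t4 - 0.5 * t5; // weights a0
        double h3 = 0.5 * t3 - t4 + 0.5 * t5;                  // weights a1
        double h4 = -4 * t3 + 7 * t4 - 3 * t5;                 // weights v1
        double h5 = 10 * t3 - 15 * t4 + 6 * t5;                // weights p1
        return p0 * h0 + v0 * h1 + a0 * h2 + a1 * h3 + v1 * h4 + p1 * h5;
    }
}
```

Chaining segments like this through each waypoint, with matching velocity and heading at the joints, is what makes the resulting path smooth enough for a drivetrain to follow.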

Jaw-dropped. :eek:

:ahh:

For the record, my favorite magic so far is our team’s quadratic deadband.

What is that, and what do you use it for? My first guess would be that you’re trying to do some sort of fancy filter with some sort of funky ripple in the stop band, but my google-fu fails me, so my best guess is that “quadratic deadband” is just a made-up name.

That is absolutely incredible. Hopefully they actually release that code, because that would be so cool to read through and learn how it works.

At Championships we had a fairly interesting system controlling our pneumatic catapult pulses. We had a pressure sensor that measured the pressure feeding to our launcher pistons. We used a polynomial function based on collected data that would match pressures to pulse times so that the shot is consistent regardless of pressure. Our programmer could probably tell you more about it if you want.
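The pressure-compensation idea above boils down to evaluating a polynomial fitted to logged (pressure, pulse time) data. Here is a hedged sketch of what that lookup might look like; the coefficients, units, and names below are made up for illustration, and a real team would fit theirs to their own data:

```java
// Hypothetical sketch of pressure-compensated pulse timing.
// The coefficients here are placeholders, NOT fitted values;
// real ones come from regressing logged pressure/pulse-time data.
public class PulseCompensation {
    // Coefficients of the fitted polynomial, highest degree first:
    // pulseTime = c0*p^2 + c1*p + c2 (seconds as a function of psi).
    static final double[] COEFFS = {0.00001, -0.004, 0.5};

    // Evaluate the polynomial at the measured pressure (Horner's method).
    public static double pulseTimeFor(double pressurePsi) {
        double result = 0.0;
        for (double c : COEFFS) {
            result = result * pressurePsi + c;
        }
        return result;
    }
}
```

The nice property is that the shot strength stays constant as the stored pressure droops over a match, because lower pressure is traded for a longer valve pulse.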

1986 had a really simple yet effective targeting setup. An ultrasonic sensor mounted low to the ground told them how far away they were from the wall. When the robot was in shooting range, LEDs on the back of the robot lit up to inform the drivers.

We did the same thing! I’m glad to see that we had the same idea as a famous team.

This is quite similar to what we did this year :slight_smile: Except we used ours as an active aim-assist to put ourselves the correct distance from whatever was in front of us when a button was held. We literally got it working ten minutes after the last match :confused: Our autonomous and teleop aim-assist could be reduced down to just a few lines of code using the map() function we replicated from the Arduino language. For both the range was smoothed using a moving average to avoid jittering.

ForwardSpeed = map(rangefinder_distance - target_distance, 0, 200, 0, 1);

The map() function takes an input (in our case from a rangefinder), the minimum expected value of that input, the maximum expected value of that input, the minimum value to output, and the maximum value to output. The input/output ranges aren’t exactly set in stone… if the input is greater than expected, the output will also be outside of the expected range (meaning that the above piece of code can be used when too close or too far away from the wall, even though the output should fall between 0 and 1). It just re-maps values from one range to another. We pretty much programmed the entire robot on the fly during competition this year, so the only time we had to test changes was during matches. We ended up running into the wall on more than one occasion because of this…
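A sketch of an Arduino-style map() replica like the one described above, plus a simple moving average for smoothing the rangefinder (class and method names are mine, not this team’s actual code):

```java
// Sketch of an Arduino-style map() plus moving-average smoothing.
// Hypothetical names; not any team's released code.
public class AimAssist {
    // Linearly re-map x from [inMin, inMax] to [outMin, outMax].
    // Like Arduino's map(), it does NOT clamp: out-of-range inputs
    // produce out-of-range outputs, which is what lets the same line
    // drive forward or backward depending on the sign of the error.
    public static double map(double x, double inMin, double inMax,
                             double outMin, double outMax) {
        return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
    }

    private final double[] samples; // circular buffer of recent readings
    private int index = 0, count = 0;
    private double sum = 0.0;

    public AimAssist(int windowSize) {
        samples = new double[windowSize];
    }

    // Moving average over the last windowSize range readings,
    // to keep sensor jitter from twitching the drivetrain.
    public double smooth(double reading) {
        sum -= samples[index];   // drop the oldest sample
        samples[index] = reading;
        sum += reading;
        index = (index + 1) % samples.length;
        if (count < samples.length) count++;
        return sum / count;
    }
}
```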

I read part way through that and I was like awesome, someone has decided to use map, filter and fold. That’s still pretty cool though. We started writing something like that but didn’t finish because it didn’t fit how we played the game.

It filters out extraneous values like a normal deadband, but instead of having output values come out linearly after the deadband they scale quadratically. We use it for our drive code.

And yes, the name is made up.
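Since the name is made up, here is my guess at what a “quadratic deadband” looks like in code: zero inside the deadband, then the remaining joystick travel is rescaled to [0, 1] and squared, so small stick movements give fine control while full stick still gives full power. This is an interpretation, not the team’s actual implementation:

```java
// Sketch of one plausible "quadratic deadband" for drive code.
// Interpretation of the description above, not the original code.
public class QuadraticDeadband {
    public static double apply(double input, double deadband) {
        if (Math.abs(input) < deadband) {
            return 0.0; // filter out joystick noise near center
        }
        // Rescale the live range [deadband, 1] down to [0, 1]...
        double scaled = (Math.abs(input) - deadband) / (1.0 - deadband);
        // ...then square it, keeping the original sign, so the output
        // ramps up gently near the deadband but still reaches 1.0.
        return Math.copySign(scaled * scaled, input);
    }
}
```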

This is another way of doing proportional control (i.e. the P in PID). It’s a nice way to do a first pass at figuring out the P constant.

One of our team’s members developed an emulator as part of our software framework that let us thoroughly test our code without a real robot:

This let us have our robot code mostly ready before we even had a prototype robot, and then let us make sure our changes would work before deploying.

The robot emulator can be very useful for training new programmers too. There’s no need to share a physical robot to test their code (and it reduces safety risks, too).

I would very much look forward to seeing this running on the new roboRIO platform. Very well thought-out.

Thanks for sharing it here!

Yes, we will be releasing the code for this (both path generation and follower controller) later this year. Right now we are turning over student leadership and most mentors are busy barbecuing :slight_smile:

I would like to elaborate on this system a bit though.

From the start of the season (or rather, once we were set on a robot that carried 3 balls to a closer shooting position) we wanted the ability to drive smooth S curves. The reasoning was that we knew we were going to need to get over to the one-point goal to avoid defensive goalie robots by the time Einstein came around. Instead of this system we could have used the typical “Drive, turn X degrees, Drive” auto driving architecture, but we decided this was not going to work based on our troubles with translating and spinning in 2012 and 2011.

The implementation of this system was a tool [1] that allowed the programmer to specify a list of waypoints for the robot to drive. Each waypoint consisted of an X coordinate, a Y coordinate, and a heading (assume the robot starts at 0,0,0). From there, we found a set of splines that intersected each waypoint (at the desired heading) and fit within a maximum radius that we knew we could drive (aka no abrupt 180° turns).

Next, we figured out what it would take to get the robot to drive that path. Given a maximum velocity, acceleration, and jerk (the derivative of acceleration) for the robot to drive with, we walked the spline-y path generated above in steps of 0.01 seconds. For each step, we generated (position, velocity, acceleration, jerk, heading, X, Y, and current time step) for the geometric center of the robot. These values corresponded to how the robot /should/ be moving at that point in time. From there, given the wheelbase of the robot, we were able to build two sets of steps, one for each wheel. These lists of steps were built into arrays which we serialized to a text file.[2]
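The center-to-wheels split described above can be sketched like this for a differential drive: each wheel’s speed is the center speed plus or minus the angular rate times half the wheelbase. The Segment fields mirror the post; the helper itself is hypothetical, not the released code:

```java
// Sketch of splitting a center-of-robot step into left/right wheel
// velocities for a differential drive. Hypothetical helper, based on
// the description in the post above.
public class WheelPathBuilder {
    public static class Segment {
        public double pos, vel, acc, jerk, heading, x, y, dt;
    }

    // angularRate is d(heading)/dt at this step, in rad/s;
    // wheelbase is the left-to-right wheel spacing.
    public static double[] wheelVelocities(Segment center,
                                           double angularRate,
                                           double wheelbase) {
        double offset = angularRate * wheelbase / 2.0;
        double left = center.vel - offset;  // inside wheel slows down
        double right = center.vel + offset; // outside wheel speeds up
        return new double[] {left, right};
    }
}
```

Integrating those per-wheel velocities over the 0.01 s steps gives each wheel its own position/velocity/acceleration profile to follow.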

On the robot side we built a controller that would drive the robot on the path (after deserializing). The controller that made the robot move consisted of 3 sub controllers: 2 wheel controllers for the left and right wheel, and a heading controller that would correct the angle of the robot on the field.

The wheel controller which followed the trajectory basically looked like this (one running for each wheel, using the given wheel’s path data):


double update() {
  Segment step = follower.getNextSegment();
  // Feedforward on the planned velocity, acceleration, and jerk...
  double motor = (step.vel * K_vel) + (step.acc * K_acc) + (step.jerk * K_jerk);
  // ...plus proportional feedback on the wheel's position error.
  motor += ((step.pos - getWheelPos()) * K_p);
  return motor;
}

The heading controller looked something like this:


double updateHeadingController() {
  Segment step = leftWheelFollower.getCurrentSegment();
  return (step.heading - gyro.getAngle()) * K_heading;
}

And the overall system looked something like this:


void updateDrive() {
  double l = leftWheelController.update();
  double r = rightWheelController.update();
  double turn = headingController.update();
  leftMotor.set(l + turn);
  rightMotor.set(r - turn);
}

So basically, even without a gyro or encoders we could drive something like the path just by pressing ‘play’. Once the closed loop position and heading controllers were turned on, it was much better.[3]

We will release this code over the summer. Hopefully it will make more sense then.

[1] This tool was run on a laptop, as the math to build all the paths was pretty slow on the cRIO.
[2] We originally wanted to render java class files with static arrays defined for each path, but we ran into an issue with the tool that packages the final jar file for deploying. It could not handle more than about 1KB of static data in any Java class.
[3] We did no work to validate that our paths did not cause motor saturation. Whenever the driving looked weird we would lower the velocity or move the waypoints to make bigger curves.

Very cool! Your last second change in the Einstein finals was exciting to watch, and the controlled s-curve was very impressive.

What I can gather from this code is that the motor speed for each side is first calculated with a feedforward term from the velocity, acceleration, and jerk, and then additionally a proportional feedback term is added using feedback from the actual encoder position. Is this correct?

If I’m reading it correctly, is there a reason why the feedforward velocity (the velocity, acceleration, and jerk sum) is calculated on the robot using the curve values rather than calculated beforehand and output as motor velocity values?

Yes. Most of the real work is done by the feed forward terms of vel, acc, and jerk. Since we know what position we want to be in at that particular point in time, we can close the loop on the error in the case that we are trailing or leading our moving carrot.

Yes, you could also back out the motor values on the front end since it is all open loop. We did not because we wanted everything in the path to be in the same units, and it made it a bit easier to tune the gains on the robot without re-generating a path.

The neat part is it’s crazy easy to find K_acc and K_vel. Put your robot on the ground, give it full joystick forward, and log encoder values and time. Plot velocity and acceleration. Find max velocity and max acceleration. K_vel = 1.0/Max_vel and K_acc = 1.0/max_acc. Think about it, when you are commanding half your max velocity, your feedforward term should be about half max voltage. The acceleration term will add a bit more power in and the loop closed around position should help fix any error you get (from chain tension or battery voltage differences).
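The characterization procedure above can be sketched in a few lines: log (time, position) at full throttle, differentiate to get velocity and acceleration, and take the reciprocals of the maxima as the feedforward gains. This is my own sketch of the steps described, with hypothetical names:

```java
// Sketch of the feedforward characterization described above:
// from a full-throttle encoder log, estimate max velocity and max
// acceleration by finite differences, then K_vel = 1/maxVel and
// K_acc = 1/maxAcc so commanding max velocity maps to full output.
public class FeedforwardCalibration {
    // times in seconds, positions in encoder distance units.
    // Returns {K_vel, K_acc}.
    public static double[] gainsFromLog(double[] times, double[] positions) {
        double maxVel = 0.0, maxAcc = 0.0, prevVel = 0.0;
        for (int i = 1; i < times.length; i++) {
            double dt = times[i] - times[i - 1];
            double vel = (positions[i] - positions[i - 1]) / dt;
            maxVel = Math.max(maxVel, vel);
            if (i > 1) {
                // Need two velocity samples before differencing again.
                maxAcc = Math.max(maxAcc, (vel - prevVel) / dt);
            }
            prevVel = vel;
        }
        return new double[] {1.0 / maxVel, 1.0 / maxAcc};
    }
}
```

In practice you would smooth the log before differencing (finite differences amplify encoder noise), but the idea is exactly as described: half of max velocity commanded should mean roughly half of max voltage out.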