Team 254 Presents: FRC 2017 Code


#1

Team 254 is proud to present the code for our 2017 robot: Misfire. If you have any questions, feel free to ask!

Some highlights from this year’s code:


#2

Looking forward to going through this code. Thanks for releasing it!


#3

The code looks really neat and functions amazingly. Excited to read through it all. Congrats on yet another successful season!


#4

What led you guys to choose the Nav-X over the Spartan Board?


#5

Hold
 *  Once we collect enough kF samples, the shooter switches to the hold stage. This is the stage in which we begin
 *  firing balls. We set kP, kI, and kD all to 0 and use the kF value we calculated in the previous stage for essentially
 *  open-loop control.

Gain Scheduling. I like it!

Edit:
Thinking through this more, it seems like it should have been obvious to me: clearly, spooling the shooter up from zero to the setpoint and imparting energy into game pieces are different actions. The same control method might work for both, but that depends… I feel like, going forward, this is how I'd want to set up shooter software: tune in two phases for the two different parts of the action.
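For concreteness, here's a minimal sketch of that two-phase setup. The gains, thresholds, and names are all made up for illustration, not taken from the 254 code: spin up under PID while sampling kF near the setpoint, then switch to essentially open-loop kF control for the hold stage.

```java
// Illustrative two-phase shooter controller: PID spin-up, then open-loop hold on a measured kF.
// All names and constants here are hypothetical.
public class TwoPhaseShooter {
    enum Stage { SPIN_UP, HOLD }

    // Illustrative spin-up gains and sample count.
    private static final double kSpinUpP = 0.0005, kSpinUpI = 0.0, kSpinUpD = 0.00002;
    private static final int kMinSamplesForHold = 20;

    private Stage stage = Stage.SPIN_UP;
    private double integral = 0.0;
    private double lastError = 0.0;
    private double kFSum = 0.0;
    private int kFSamples = 0;

    /** Returns a motor output in [0, 1] given the speed setpoint and measurement (rpm). */
    public double update(double setpointRpm, double measuredRpm, double dt) {
        double error = setpointRpm - measuredRpm;
        if (stage == Stage.SPIN_UP) {
            // Classic PID while getting up to speed.
            integral += error * dt;
            double derivative = (error - lastError) / dt;
            lastError = error;
            double output = kSpinUpP * error + kSpinUpI * integral + kSpinUpD * derivative;

            // Near the setpoint, sample kF = output / speed; after enough samples, switch to hold.
            if (Math.abs(error) < 0.02 * setpointRpm && measuredRpm > 0.0) {
                kFSum += output / measuredRpm;
                kFSamples++;
                if (kFSamples >= kMinSamplesForHold) {
                    stage = Stage.HOLD;
                }
            }
            return clamp(output);
        } else {
            // Hold stage: kP, kI, kD are effectively 0; run open loop on the averaged kF.
            double kF = kFSum / kFSamples;
            return clamp(kF * setpointRpm);
        }
    }

    private static double clamp(double v) {
        return Math.max(0.0, Math.min(1.0, v));
    }
}
```

Calling update() every control loop with the current setpoint and measured speed is all the usage there is; the interesting part is just the one-way switch from the spin-up stage to the hold stage.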


#6

It does a bunch of helpful stuff in hardware like temperature compensation, auto re-calibration, and angle integration.

With the Spartan Board you can still do all of that, but it requires running extra code on the roboRIO.
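For reference, here's a minimal sketch of the kind of extra code that would otherwise have to run on the roboRIO with a raw rate gyro; the class and method names are made up for illustration.

```java
// Illustrative yaw integrator: estimate the gyro's bias while stationary, then
// integrate (rate - bias) over time to get a heading. Names are hypothetical.
public class YawIntegrator {
    private double biasDegPerSec = 0.0;
    private double angleDeg = 0.0;

    /** Call with samples collected while the robot is stationary to estimate drift. */
    public void calibrate(double[] stationaryRatesDegPerSec) {
        double sum = 0.0;
        for (double rate : stationaryRatesDegPerSec) {
            sum += rate;
        }
        biasDegPerSec = sum / stationaryRatesDegPerSec.length;
    }

    /** Accumulate one rate sample taken over dt seconds. */
    public void update(double rateDegPerSec, double dt) {
        angleDeg += (rateDegPerSec - biasDegPerSec) * dt;
    }

    public double getAngleDegrees() {
        return angleDeg;
    }
}
```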


#7

A few questions about the vision app:

  1. Why does the app serve as a device administrator?

  2. Why did you opt for Android.mk instead of cmake for building your native code?


#8

The app serves as device admin so that you don’t get a pop-up every time you try to pin it.

When we were creating the app last year, we found an OpenCV sample that worked using android.mk and just went from there.


#9

I’m interested in the Robot State Estimator described in the technical binder. How did you decide to approach the problem, what challenges did you encounter, etc?


#10

We went with a pretty similar approach to pose estimation as last year and ended up reusing a bunch of Dropshot’s position estimation code.

One new problem we ran into this year: due to the increased complexity of our autonomous paths, combined with the lack of a substantial center drop, we had problems with the accuracy of our pose estimation. We tried to come up with a better kinematics model that would account for a shifting center of rotation when we turned, but we ended up just adding a "correction factor", which was basically a fudge factor that moved our waypoints over until we drove to the correct place.


#11

How many students do you have on your programming team?


#12

What was the purpose of the Twist2D function? What made it necessary this year?


#13

It’s actually the same thing as the RigidTransform2d.Delta class from last year. We just renamed it for convenience.


#14

We have around 15 students on our programming team.


#15

Our robot moves in a 2D plane with two translational dimensions (x and y) and one rotational dimension (yaw). Sometimes we want to know where we will end up given an instantaneous parametric velocity (dx/ds, dy/ds, dtheta/ds) and a value for the parameter (e.g. a time duration if the parameter 's' represents time). A simple way to do this is to assume that in a given period 'ds' the robot moves in a straight line and then turns (or vice versa); to obtain this, just multiply each component by 'ds'. If 'ds' and the curvature of the motion are small, this is a pretty good approximation. However, if 'ds' and the curvature are large, this can introduce error that compounds over time, since in reality we never move in a straight line; we are simultaneously translating and rotating.
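As a minimal sketch of that simple update (names are illustrative): scale each robot-frame velocity component by ds, rotate the translation into the field frame, and add it to the pose.

```java
// Illustrative "move straight, then turn" pose update; fine for small ds and curvature.
public class NaivePoseUpdate {
    /** pose = {x, y, theta}; velocity = {dx/ds, dy/ds, dtheta/ds} in the robot frame. */
    public static double[] step(double[] pose, double[] velocity, double ds) {
        double cos = Math.cos(pose[2]);
        double sin = Math.sin(pose[2]);
        double dx = velocity[0] * ds;
        double dy = velocity[1] * ds;
        return new double[] {
            pose[0] + dx * cos - dy * sin, // translate along the current heading...
            pose[1] + dx * sin + dy * cos,
            pose[2] + velocity[2] * ds     // ...then turn afterwards.
        };
    }
}
```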

Luckily, there is a precise formula for obtaining a new pose assuming constant-curvature displacement, which we can borrow from the mathematical field of differential geometry (often used in PhD-level robot kinematics and computer vision). The term “twist” is borrowed from this field because constant-curvature displacement can be thought of as representing the twisting of a screw (especially if you think about the 3D case, where there is also a “pitch” velocity). The “exp” and “log” functions likewise refer to the group exponential and group logarithm functions that are well defined in this field (for obtaining a new pose from a twist, and obtaining a twist from a pose, respectively).
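Here's a minimal sketch of the constant-curvature "exp" update described above; the class and method names are illustrative rather than what's in the released code. The small-angle branch avoids dividing by dtheta when it's near zero.

```java
// Illustrative SE(2) exponential map: integrate a robot-frame twist (dx, dy, dtheta)
// assuming constant curvature over the step. Names are hypothetical.
public class Se2Exp {
    static final double kEps = 1e-9;

    /** Pose in the plane: translation (x, y) and heading theta in radians. */
    public static class Pose {
        public final double x, y, theta;
        public Pose(double x, double y, double theta) {
            this.x = x;
            this.y = y;
            this.theta = theta;
        }
    }

    /** Returns the pose reached by following the constant-curvature arc (dx, dy, dtheta). */
    public static Pose exp(Pose start, double dx, double dy, double dtheta) {
        double s, c;
        if (Math.abs(dtheta) < kEps) {
            // Small-angle approximations of sin(t)/t and (1 - cos(t))/t.
            s = 1.0 - dtheta * dtheta / 6.0;
            c = 0.5 * dtheta;
        } else {
            s = Math.sin(dtheta) / dtheta;
            c = (1.0 - Math.cos(dtheta)) / dtheta;
        }
        // Arc displacement in the starting robot frame...
        double localX = dx * s - dy * c;
        double localY = dx * c + dy * s;
        // ...rotated into the field frame and added to the starting pose.
        double cosStart = Math.cos(start.theta);
        double sinStart = Math.sin(start.theta);
        return new Pose(start.x + localX * cosStart - localY * sinStart,
                        start.y + localX * sinStart + localY * cosStart,
                        start.theta + dtheta);
    }

    public static void main(String[] args) {
        // Example: a twist of 1 unit forward while turning 90 degrees ends up at roughly
        // (0.637, 0.637) facing +y (a quarter arc of radius 2/pi), where the straight-line
        // update above would have put the robot at (1, 0) before turning.
        Pose end = exp(new Pose(0.0, 0.0, 0.0), 1.0, 0.0, Math.PI / 2.0);
        System.out.printf("x=%.3f y=%.3f theta=%.3f%n", end.x, end.y, end.theta);
    }
}
```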


#16

Jared, is there a good overview for the transform you are talking about here?


#17

Here’s a good one: http://ethaneade.com/lie.pdf


#18

this sounds like nonsense, but i trust you :yikes:


#19

Here is a paper describing the algorithm I think Jared is talking about. I find the way they describe it kind of confusing, and their final algorithm is slower than it has to be, so here’s a wiki page I made describing our version of it: