What are some common things teams miss when configuring CAN IDs, the roboRIO, writing code, etc.?
One thing we always miss is making sure that the objects we create in code correspond to hardware actually wired to the roboRIO. It turns out that if you make an object and the device is not wired, the driver station will just show “No Robot Code.”
This video just about sums it up:
tldr?
Tldr:
Motion Planning, PID Loops, Dead Reckoning, Bang-Bang Loops
Basically, you can graph a curve which represents a path on the field (a spline).
If you want to be able to get to a position on the field independent of friction, carpet consistency, and battery voltage, you need to use a spline.
This is often just ignored by programming teams and is usually what sets teams like 254, 1678, 987, 971… apart from the rest, at least in autonomous.
It is obvious why not every team is able to do this: as Jared says in the video, it is almost identical to what Google engineers are working on and involves high-level system controls and some calculus. Nonetheless, it is something many programming teams overlook.
PS: If you didn’t catch what I am talking about, watch the video; it is really informative.
Thanks
There are various levels of simplification that are possible. You don’t need a full-blown, super-efficient self-driving car; you just need some curved motion that is “good enough.”
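To make that concrete, here is a minimal sketch of “good enough” curved motion: a single cubic Hermite segment between two waypoints, sampled into (x, y) points that a path follower could track. The class name, waypoint values, and tangent scales are all illustrative, not taken from the talk or any team’s code.

```java
public class SplineSketch {
    // Evaluate a 1-D cubic Hermite segment at parameter t in [0, 1],
    // given endpoint values p0, p1 and endpoint tangents m0, m1.
    static double hermite(double p0, double p1, double m0, double m1, double t) {
        double t2 = t * t;
        double t3 = t2 * t;
        return (2 * t3 - 3 * t2 + 1) * p0
                + (t3 - 2 * t2 + t) * m0
                + (-2 * t3 + 3 * t2) * p1
                + (t3 - t2) * m1;
    }

    public static void main(String[] args) {
        // Curve from (0, 0) to (4, 2) meters, entering and leaving both
        // waypoints moving straight along x (y-tangents of zero), so the
        // path bends smoothly instead of turning in place.
        for (int i = 0; i <= 10; i++) {
            double t = i / 10.0;
            double x = hermite(0.0, 4.0, 4.0, 4.0, t);
            double y = hermite(0.0, 2.0, 0.0, 0.0, t);
            System.out.printf("t=%.1f  x=%.2f  y=%.2f%n", t, x, y);
        }
    }
}
```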
- No centralized location in code for port numbers, thereby hiding conflicts (see the constants sketch after this list)
- Forgetting to print out the port assignments, making it difficult to check wiring after maintenance
- Not labeling control and power wires at both ends, which slows down troubleshooting
- Not having code deployed on the robot when they arrive at the field
- Not documenting what non-obvious code is intended to do
- Not using git and GitHub properly, with each change committed with a description, which makes it difficult to roll back changes
- Not bringing a working programming computer to the competition, available in the pit, with current code
- Not having the current driver station update installed on their driver station machine when they arrive
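On the “centralized port numbers” point above, a minimal sketch of one way to do it: a single constants class that owns every CAN ID and channel, so conflicts are visible in one file and easy to print for wiring checks. The names and numbers below are hypothetical.

```java
// Hypothetical RobotMap-style constants class; every port lives here.
public final class RobotMap {
    // CAN IDs
    public static final int LEFT_DRIVE_CAN_ID  = 1;
    public static final int RIGHT_DRIVE_CAN_ID = 2;
    public static final int SHOOTER_CAN_ID     = 3;
    public static final int PCM_CAN_ID         = 0;

    // PCM solenoid channels
    public static final int SHIFTER_FORWARD_CHANNEL = 0;
    public static final int SHIFTER_REVERSE_CHANNEL = 1;

    // Printed at startup so the pit crew can check wiring after maintenance.
    public static void printAssignments() {
        System.out.println("Left drive CAN ID:  " + LEFT_DRIVE_CAN_ID);
        System.out.println("Right drive CAN ID: " + RIGHT_DRIVE_CAN_ID);
        System.out.println("Shooter CAN ID:     " + SHOOTER_CAN_ID);
        System.out.println("PCM CAN ID:         " + PCM_CAN_ID);
    }

    private RobotMap() {} // constants only, no instances
}
```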
One thing from me: if your pneumatics are not turning on automatically, make sure you are instantiating a solenoid with the correct PCM CAN ID. If you are not providing the ID as an argument, only the channel of the solenoid, then the code is using the default PCM CAN ID of 0. Make sure the PCM’s ID is the same as what you use in your code (either the one you provide, or 0 if using the default) by going to the roboRIO web dashboard over USB (172.22.11.2).
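A small sketch of that pitfall, assuming the pre-2022 WPILib Java Solenoid constructors (where Solenoid(channel) targets the default PCM and Solenoid(module, channel) targets a specific one). The CAN ID of 9 and the field names are just examples.

```java
import edu.wpi.first.wpilibj.Solenoid;

public class PneumaticsExample {
    // Channel-only constructor: talks to the PCM at the default CAN ID of 0.
    private final Solenoid shifterOnDefaultPcm = new Solenoid(0);

    // If the PCM was re-assigned (say, to CAN ID 9) in the roboRIO web
    // dashboard, pass the module number explicitly; otherwise the code is
    // addressing a PCM that isn't there and the pneumatics never turn on.
    private final Solenoid shifterOnPcm9 = new Solenoid(9, 0);
}
```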
I’m glad you found our presentation interesting!
That said, I want to caution everyone that (aside from some feedback control basics), almost everything in that talk is strictly “extra credit” for FRC. Advanced techniques can help you wring the last few drops of performance from your robot, but they also take a tremendous amount of time to implement and test (at least at first), and increase the complexity of your software dramatically (and therefore also the likelihood of encountering a failure that you’ve overlooked). It is hard to overstate this point (both within FRC and beyond it).
Just as it would be unwise for a team that has never built a swerve drive to suddenly decide on kickoff day that they are doing swerve, it would be unwise to put these techniques in your team’s “critical path” unless you have some experience with them from the offseason or prior years.
I will echo what Jared says: it’s not necessary to have complicated programming to be successful. Our 2012 regional winner won because of a great concept. It used very simple timing loops for auto.
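For reference, a simple timing-loop auto can be as short as the sketch below. It uses classes from a newer WPILib than that robot ran (TimedRobot, DifferentialDrive), and the motor ports and timings are made up, but the idea is the same: drive for a fixed time, then stop.

```java
import edu.wpi.first.wpilibj.Spark;
import edu.wpi.first.wpilibj.TimedRobot;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

public class Robot extends TimedRobot {
    private final DifferentialDrive drive =
            new DifferentialDrive(new Spark(0), new Spark(1));
    private final Timer autoTimer = new Timer();

    @Override
    public void autonomousInit() {
        autoTimer.reset();
        autoTimer.start();
    }

    @Override
    public void autonomousPeriodic() {
        // Drive straight at half speed for the first two seconds, then stop.
        if (autoTimer.get() < 2.0) {
            drive.arcadeDrive(0.5, 0.0);
        } else {
            drive.arcadeDrive(0.0, 0.0);
        }
    }
}
```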
Our Aerial Assist (2013) robot had much more complex controls. We never got it working well.
This year we spent a lot of time on getting motion profiling working on our test robot. Was it fun? Yes. Did we learn a lot? Yes. Can we move more precisely in auto? Yes. Will it make a difference? Unclear.
- Setting output for motors or solenoids in more than one place, causing race conditions.
We use something called ControlLeases to overcome this. In short, each part of the code has a ‘priority’ for accessing output devices. If its priority is higher than the current holder’s, it gets a ‘lease’. It can release the lease when it is done, passing control to other, lower-priority code. If the lease isn’t updated within 100 ms, it times out and is released automatically.
We put our joystick teleop controls on priority 10 and our calibration code on priority 1000. Other autonomous actions or commands might be anywhere between those values.
We have separate leases for each mechanism (drive train, shooter, feeder, intake, winch, etc). An example use is a vision or gyro alignment command that takes control over the drive train until it is on target.
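I don’t have your code in front of me, but from that description a lease for a single mechanism might look roughly like this hypothetical sketch (made-up class and method names, not your actual implementation): higher priority or an expired timeout wins, and callers only write outputs while they hold the lease.

```java
// Hypothetical per-mechanism lease: one instance for the drivetrain,
// one for the shooter, and so on.
public class ControlLease {
    private static final double TIMEOUT_SECONDS = 0.1;

    private int currentPriority = Integer.MIN_VALUE;
    private double lastUpdateSeconds = Double.NEGATIVE_INFINITY;

    /** Acquire or refresh the lease; returns true if the caller now owns it. */
    public synchronized boolean acquire(int priority, double nowSeconds) {
        boolean expired = (nowSeconds - lastUpdateSeconds) > TIMEOUT_SECONDS;
        if (priority >= currentPriority || expired) {
            currentPriority = priority;
            lastUpdateSeconds = nowSeconds;
            return true;
        }
        return false;
    }

    /** Release the lease so lower-priority code can take over. */
    public synchronized void release(int priority) {
        if (priority >= currentPriority) {
            currentPriority = Integer.MIN_VALUE;
        }
    }
}
```

In this sketch, teleop joystick code would call acquire(10, now) every loop and only write drivetrain outputs when it returns true, while a calibration routine calling acquire(1000, now) would immediately take over.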
Telling the motor to stop.
Something a lot of teams forget to do during testing is to test the transition from disabled, to auto, then to teleoperated mode.
It’s easy to go out and test your auto mode over and over and never transition to teleop with the driver station after the auto has run.
In other words, utilize the practice match setting on the driver station and run a couple of practice matches to see what happens.
This time of year, robot puns make me chuckle loudly enough to draw attention from coworkers.
As much as I wish we were using splines, we actually aren’t this year. I agree with Jared that you start to see really diminishing returns from controls in FRC after you get the basics down. PID (+ sometimes feedforward) is basically all you need. We’re running all state-space controllers this year because they’re fun and we have a programming team that can support it, but you could do everything that we’re doing with PID + FF.
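For anyone wondering what “PID + FF” means in practice, here is a minimal, generic velocity-loop sketch (not our actual code; the gains and units are placeholders): the feedforward term does most of the work and the feedback terms trim the remaining error.

```java
public class PidPlusFeedforward {
    // Placeholder gains; real values come from characterizing the mechanism.
    private final double kP = 0.002, kI = 0.0, kD = 0.0, kF = 1.0 / 5000.0;
    private double integral = 0.0;
    private double lastError = 0.0;

    /** Returns a motor output in [-1, 1] for a velocity setpoint in RPM. */
    public double calculate(double targetRpm, double measuredRpm, double dtSeconds) {
        double error = targetRpm - measuredRpm;
        integral += error * dtSeconds;
        double derivative = (error - lastError) / dtSeconds;
        lastError = error;

        double output = kF * targetRpm   // feedforward: open-loop estimate
                + kP * error             // proportional correction
                + kI * integral          // integral correction
                + kD * derivative;       // derivative damping
        return Math.max(-1.0, Math.min(1.0, output));
    }
}
```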
This is interesting to me, since we’ve been starting to actually think a lot about how we structure our code. We’ve mostly been able to avoid situations where more than one mechanism needs access to the same motors at the same time, but not completely. It seems like you are embracing this and building it into the structure of your code. How does that work out for you? What failure modes do you see with it?
I ask because we’ve talked about implementing a priority system in the future, and it definitely seems like something that could have unexpected failure modes.
I also have a couple questions about your example code:
- What types of objects do you wrap in a control lease? The actual speed controller WPILib objects, or something else?
- How do you deal with ownership of the control leases?
- Do you rely on the 100ms timeout? It seems like a safety watchdog/timeout thing, but personally I’d be worried about writing code that relies on it.
Is there somewhere I can see more of this robot code? It’s nice to see more teams that put focus into code architecture, and I’d love to see an entire robot’s code written like this.
“how to use PID? its supose to make the robot stay strait but it dosent work”
In other words, using PID without understanding how it works or how to use it properly, because they just followed a guide. Also, posting to ask for help while giving no real detail on what exactly they were trying to accomplish or what was going wrong. This makes it very hard to help them, and holy crap do I want to help them out.
This is a cool idea, and a simple version of a robotics technique called the subsumption architecture.
It is definitely important to have a robust solution to the “control contention” problem. FWIW, on 254 we typically deal with this issue by having a singleton object for each subsystem that explicitly keeps track of who is in charge and runs all the controllers, but this results in a very centralized architecture (which has good and bad aspects).
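As a rough illustration of that centralized pattern (my own hypothetical sketch, not 254’s actual code): each subsystem is a singleton with a single “wanted state,” so there is exactly one place that decides what the mechanism does each loop, and “who is in charge” reduces to who last set that state.

```java
public class Shooter {
    public enum WantedState { IDLE, SPIN_UP, SHOOT }

    private static final Shooter instance = new Shooter();
    public static Shooter getInstance() { return instance; }

    private WantedState wantedState = WantedState.IDLE;

    private Shooter() {}

    // Callers (teleop, auto modes) request a state...
    public synchronized void setWantedState(WantedState state) {
        wantedState = state;
    }

    // ...and one periodic loop runs the controller for whatever was requested.
    public synchronized void onLoop() {
        switch (wantedState) {
            case SPIN_UP: /* run flywheel velocity controller */ break;
            case SHOOT:   /* run flywheel and feeder */ break;
            case IDLE:
            default:      /* stop motors */ break;
        }
    }
}
```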
Note that even in the “real world”, people sometimes make mistakes around control contention. My favorite example is from the DARPA Robotics Challenge in 2015. The MIT team had amazing software, and were able to get their Atlas robot to drive a vehicle, stop, and get out of it. But during the finals, they had a bug/user error that resulted in two controllers trying to command the robot at the same time when getting out of the vehicle. On some control cycles, the first controller “won”, and on others the second controller “won”. This was the result.
Something that I’ve been teaching for a few years now is this idea of structuring code such that it is naturally prioritized correctly (and it seems very similar to the subsumption architecture Jared linked to). RobotPy’s Magicbot framework is structured around this concept.
The idea is you create high level and low level components. Low level components talk directly to motors, high level components only talk to other components. Each component defines two types of functions: input functions, and an ‘execute’ function.
During each teleop iteration, execution happens in two phases: input phase and execution phase.
- During the input phase, only variables internal to each component are set/unset – no motor control actually occurs. These are ‘verbs’… things like “shoot” or “rotate to angle”.
- During the execution phase, the various input variables for each component are read, and the output motors (or whatever else) are set.
The end result is that you can layer various types of inputs (automated, operator controls, whatever), and during the execution phase the “last input wins”, ensuring that higher-priority actions occur. For things that need to occur for a set period of time, the individual components can decide to lock out inputs while the action is still in progress.
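Magicbot itself is Python, but the two-phase pattern is language-agnostic; here is a hypothetical Java-flavored sketch of a low-level component, with made-up names, just to show the shape: input methods only set internal state, and execute() applies it once per loop, so the last input wins and the component defaults back to a safe state.

```java
import java.util.function.DoubleConsumer;

public class IntakeComponent {
    private final DoubleConsumer motorOutput; // e.g. motor::set on real hardware
    private double wantedSpeed = 0.0;         // internal "verb" state only

    public IntakeComponent(DoubleConsumer motorOutput) {
        this.motorOutput = motorOutput;
    }

    // Input phase: operator controls or automation only set variables here.
    public void spinIn()  { wantedSpeed =  1.0; }
    public void spinOut() { wantedSpeed = -1.0; }

    // Execution phase: called exactly once per loop, after all inputs.
    public void execute() {
        motorOutput.accept(wantedSpeed);
        wantedSpeed = 0.0; // default back to stopped unless asked again next loop
    }
}
```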
Magicbot also has really cool state machine constructs that fit into this execution model really well, making it very easy to create complex controls that are understandable.
I rewrote last year’s code for 2423 in this style: 2016/robot at master · frc2423/2016 · GitHub