After the discussion in this thread, I was wondering if it was possible to modify the Limelight firmware (by adding custom code) so that it could act as a coprocessor for something like path following in addition to vision processing (instead of using a Raspberry Pi or a Jetson board). The Limelight has a powerful ARM processor that could easily handle this, and not needing to add a custom coprocessor would greatly simplify wiring. Since the Limelight has a USB port, a USB to CAN interface like this one could be used to communicate with motor controllers and encoders.
It’s just a Raspberry Pi Compute Module 3+ (CM3+), so you can plug it into your computer and use usbboot to install Linux onto it (you’ll need to set up overlays), or edit the shadow file to reset the pi user’s password if you want to keep the existing functionality. Totally possible.
Not quite sure of the legality of using a Limelight to control motor controllers, though.
The thread that the OP linked answers this question: as long as the heartbeat and enable signals come from the roboRIO, it is legal.
As for the main question, path following isn’t that intensive of a process and can easily be done on the roboRIO.
Thanks. It looks like what you are doing is very interesting.
This past season we updated our drivetrain odometry for pure pursuit path following at 100 Hz, and it used a significant percentage of the CPU on the roboRIO. I would like to try to run this loop even faster, so I thought that a coprocessor would be a good place to start.
What will you gain by running the control loop faster, academically and practically?
Have you profiled your code to understand what parts are slowing it down?
Academically, learning how to program the interface between the coprocessor and the roboRIO will be a great exercise. Practically, since the odometry implementation we are using assumes linear movement between updates, running it faster shortens each straight-line segment, so our path following will be more accurate.
I don’t think anything is really slowing it down, but I do have some experience with Java profilers.
I’m very curious to see what is causing your odometry to use so many resources. Updating odometry is a very simple, cheap operation; I’ve done it at a similar rate on much less powerful microcontrollers with no timing issues.
Depending on how you’re connecting your encoders, you may not actually be able to update faster than 100 Hz (the default Talon status frame rate is 100 Hz, and running any faster will just re-use stale data). You also have to take latency into account: if you’re running your odometry loop on the Limelight and reading data from a Talon, you have the latency over CAN, the time it takes for your loop to come back around to sending the data to the Limelight, and then the time to actually process it.
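As a rough illustration of dealing with that latency chain, here is a small sketch of one common mitigation: timestamp each reading and extrapolate it forward using the last known velocity. All names here are my own hypothetical ones, not from any vendor library, and this only corrects for latency to first order.

```java
// Hypothetical sketch: first-order latency compensation for a sensor reading.
// Assumes you know (approximately) when the measurement was taken and the
// wheel's velocity at that time.
public class LatencyCompensation {
    /**
     * Extrapolates a distance measurement taken at measurementTime (seconds)
     * forward to currentTime, assuming the wheel kept its last velocity.
     */
    public static double extrapolate(double measuredDistance,
                                     double velocity,
                                     double measurementTime,
                                     double currentTime) {
        double latency = currentTime - measurementTime;
        return measuredDistance + velocity * latency;
    }

    public static void main(String[] args) {
        // A reading of 2.0 m taken 10 ms ago, with the wheel moving at 3 m/s,
        // is extrapolated forward by 3 * 0.010 = 0.03 m.
        double d = extrapolate(2.0, 3.0, 0.000, 0.010);
        System.out.printf("%.3f%n", d); // prints 2.030
    }
}
```

This obviously cannot recover information that was never sampled, which is part of why updating the coprocessor loop faster than the sensors publish gains little.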
Are you in need of additional accuracy?
Odometry also uses input from a gyroscope (also with a 100 Hz update rate). Since the updates from each encoder and the gyro are not synchronized, you can still benefit from updating the odometry faster.
I prefer to attempt to get as much accuracy as possible and be pleasantly surprised when it is more than enough.
Development time is a finite resource, and good enough meets requirements.
If you plan to pursue faster update loops, here are my recommendations:
1. Profile your code and find out what is taking so long (presumably you’re fairly certain it’s the odometry, but which part of it?). If it can be fixed, fix it. If more speed is still required, or the issue cannot be fixed, move to step 2.
2. Use a more accurate odometry method, such as constant-curvature arcs or pose exponentials. There is plenty of example code for this, and it’s the method that WPILib uses in its odometry implementation.
3. Maximize sensor update rates and minimize latency; these put hard limits on your precision. Increase your gyro update rate, increase CAN status frame rates, or better yet, connect your drive encoders directly to the roboRIO.
4. Move odometry to the Limelight. Minimize latency here as well, which means reading and sending data on the roboRIO just as fast as you need to update on the Limelight, which probably isn’t any faster than just calculating the pose on the roboRIO.
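To make recommendation 2 concrete, here is a minimal Java sketch of a pose-exponential update step. This is my own simplified version, not WPILib’s actual class: dLeft/dRight are wheel distance deltas since the last update, and dTheta is the heading change reported by the gyro over the same interval.

```java
// Minimal sketch of a pose-exponential (constant-curvature) odometry step.
// Field names and structure are assumptions for illustration.
public class PoseExponentialOdometry {
    public double x, y, theta; // field-relative pose (meters, meters, radians)

    /** Integrates one odometry step as a constant-curvature arc. */
    public void update(double dLeft, double dRight, double dTheta) {
        double dx = (dLeft + dRight) / 2.0; // forward distance in the robot frame
        double sinT, cosT; // coefficients of the twist exponential
        if (Math.abs(dTheta) < 1e-9) {
            // Small-angle limit: Taylor expansion, falls back to a straight step.
            sinT = 1.0 - dTheta * dTheta / 6.0;
            cosT = dTheta / 2.0;
        } else {
            sinT = Math.sin(dTheta) / dTheta;
            cosT = (1.0 - Math.cos(dTheta)) / dTheta;
        }
        // Chord of the arc, expressed in the robot frame at the start of the step.
        double localX = dx * sinT;
        double localY = dx * cosT;
        // Rotate into the field frame and accumulate.
        x += localX * Math.cos(theta) - localY * Math.sin(theta);
        y += localX * Math.sin(theta) + localY * Math.cos(theta);
        theta += dTheta;
    }

    public static void main(String[] args) {
        PoseExponentialOdometry odo = new PoseExponentialOdometry();
        // A 90-degree left arc of radius 1 m (arc length pi/2), taken in one
        // step, should end at (1, 1) facing 90 degrees.
        odo.update(Math.PI / 2, Math.PI / 2, Math.PI / 2);
        System.out.printf("x=%.3f y=%.3f theta=%.3f%n", odo.x, odo.y, odo.theta);
        // prints x=1.000 y=1.000 theta=1.571
    }
}
```

The cost per step is a handful of trig calls, which is why a slow odometry loop almost always points to something other than the math itself.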
I agree with points 1, 3, and 4, but can someone please explain exactly what the differences and advantages are between the different odometry methods (assuming linear movement, constant-curvature arcs, etc.)?
I have an explanation of this here:
Basically, you are assuming that the robot travels with a constant angular velocity between measurements, which is typically closer to the true behavior than assuming that the robot travels with no angular velocity at all.
This tends to be more accurate (Figure 10.1 of https://file.tavsys.net/control/controls-engineering-in-frc.pdf) than traditional Euler integration (straight-line approximations).
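If you want to see the difference numerically, here is a small standalone sketch (my own illustration, not code from the linked book): it integrates a 90-degree arc of radius 1 m in 10 steps, once with straight-line (Euler) steps and once with constant-curvature arc steps, then compares the final position against the exact endpoint (1, 1).

```java
// Numerical comparison of Euler (straight-line) integration versus
// constant-curvature arc integration on a known circular path.
public class ArcVsEuler {
    /** Integrates a unit-radius, 90-degree arc in the given number of steps. */
    public static double[] integrate(boolean useArc, int steps) {
        double dTheta = (Math.PI / 2) / steps; // heading change per step
        double ds = (Math.PI / 2) / steps;     // arc length per step (radius 1)
        double x = 0, y = 0, theta = 0;
        for (int i = 0; i < steps; i++) {
            if (useArc) {
                // Constant-curvature: follow the chord of the arc segment.
                double localX = ds * Math.sin(dTheta) / dTheta;
                double localY = ds * (1 - Math.cos(dTheta)) / dTheta;
                x += localX * Math.cos(theta) - localY * Math.sin(theta);
                y += localX * Math.sin(theta) + localY * Math.cos(theta);
            } else {
                // Euler: assume no rotation during the step.
                x += ds * Math.cos(theta);
                y += ds * Math.sin(theta);
            }
            theta += dTheta;
        }
        return new double[] {x, y};
    }

    /** Distance from the integrated endpoint to the exact endpoint (1, 1). */
    public static double error(boolean useArc, int steps) {
        double[] p = integrate(useArc, steps);
        return Math.hypot(p[0] - 1.0, p[1] - 1.0);
    }

    public static void main(String[] args) {
        // Euler error is on the order of 0.1 m here; the arc method is exact
        // for a true circular path (up to floating-point rounding).
        System.out.printf("Euler error: %.6f m%n", error(false, 10));
        System.out.printf("Arc error:   %.6f m%n", error(true, 10));
    }
}
```

On a real robot the path isn’t a perfect circle, so the arc method isn’t exact, but the same ordering of errors holds whenever the robot is turning while it drives.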
Thanks, I actually did not realize that 254’s code did this, so I am already using constant-curvature arcs. I had previously thought that only the WPILib implementation did this.
I made some scripts to make modifying the firmware easier.