Inertial Measurement vs. Vision Tracking

Our team has two programmers. I am one of them. I code in LabVIEW, and our other programmer codes in C++. We use a lift-out-of-box gear placement mechanism.
I am relying on vision tracking and timed outputs to correctly place a gear during autonomous, and our other programmer is relying on an IMU (I forget which one, but it’s from Analog Devices).

Which of these two will get better accuracy, assuming all control loops are tuned properly?

Vision Processing will give you more reproducible results. That said, there's nothing stopping you from using both.

IMUs are tricky. Assuming it's a good IMU, you may have some luck with the gyroscope being able to line you up properly, although distance from the tower is something your driver will have to manage. It should be noted that most IMUs suffer some sort of sensor drift (the ADIS16448 is notorious for this, with the NavX family being noticeably better), so watch out for this and be sure to calibrate your sensors at the start of each match.

Vision Processing (assuming a rigid camera mount) does not drift (unless your camera is literally garbage). Vision Processing is good at telling you where the target is, and with some math it can tell you the angle you're at and how far away you are. For this to be useful, though, you have to tune your vision tracking, which in all honesty isn't a huge hassle. Cameras, however, aren't a good source for a continuous feedback loop, since their update rate is quite slow (30-60 Hz in the best case).
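
For reference, the "some math" part is not much code. Below is a minimal sketch assuming a single retroreflective target, a known horizontal field of view, and a known physical target width; the image size, FOV, target width, and HSV bounds are all placeholders you would replace with your own measurements and tuning.

```python
import math

import cv2
import numpy as np

# Placeholder camera/target constants -- substitute your own measurements.
IMAGE_WIDTH_PX = 320
HORIZONTAL_FOV_DEG = 60.0      # lens horizontal field of view
TARGET_WIDTH_IN = 10.25        # physical width of the retroreflective tape


def find_target_offset(bgr_frame):
    """Return (angle_deg, distance_in) to the largest green blob, or None."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Threshold for a bright green LED-ring reflection; tune these bounds.
    mask = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([90, 255, 255]))
    # [-2] picks the contour list under both OpenCV 3 and OpenCV 4 signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    x, _y, w, _h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    target_center_x = x + w / 2.0

    # Angle: pixels from image center, scaled by an approximate degrees-per-pixel.
    degrees_per_pixel = HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX
    angle_deg = (target_center_x - IMAGE_WIDTH_PX / 2.0) * degrees_per_pixel

    # Distance from apparent size, using the pinhole model:
    # focal_px = W / (2 * tan(FOV / 2)), distance = real_width * focal_px / pixel_width.
    focal_px = IMAGE_WIDTH_PX / (2.0 * math.tan(math.radians(HORIZONTAL_FOV_DEG / 2.0)))
    distance_in = TARGET_WIDTH_IN * focal_px / w
    return angle_deg, distance_in
```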

Since IMU sensor drift is quite gradual, you can afford to use one as a short-term alignment method. Beyond that, IMUs are quite accurate and update quickly. I would suggest something like the following:

Grab Camera Image -> Process to find angle offset from target -> Use gyro to align to said offset -> Grab Camera Image to confirm alignment.

This (mostly) overcomes both sensor drift and the slow update rate of cameras.
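
In code, that sequence might look like the sketch below. The `gyro`, `drivetrain`, and `grab_frame` objects are hypothetical stand-ins for whatever your robot project actually exposes, `find_target_offset` is the vision sketch above, and the gain and tolerance are placeholders to tune; only the structure (slow camera measurement, fast gyro turn, camera confirmation) is the point.

```python
import time

ANGLE_TOLERANCE_DEG = 1.0


def align_to_target(gyro, drivetrain, grab_frame, max_passes=3):
    """Camera measures the offset, the gyro closes the turn, the camera confirms."""
    for _ in range(max_passes):
        result = find_target_offset(grab_frame())      # slow: one camera frame
        if result is None:
            return False                               # no target in view
        angle_offset, _distance = result
        if abs(angle_offset) < ANGLE_TOLERANCE_DEG:
            return True                                # camera confirms alignment
        # Turn toward the camera-measured setpoint using the fast gyro loop.
        setpoint = gyro.get_angle() + angle_offset
        while abs(setpoint - gyro.get_angle()) > ANGLE_TOLERANCE_DEG:
            error = setpoint - gyro.get_angle()
            drivetrain.arcade_drive(0.0, 0.02 * error)  # crude P turn; tune the gain
            time.sleep(0.005)                           # gyro updates far faster than the camera
        drivetrain.arcade_drive(0.0, 0.0)
    return False
```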

Fuse the 2 systems.

Unfortunately, that would require translating between LabVIEW and C++, which our team probably isn't capable of. :frowning:

What kind of processing times are you getting with LabVIEW? I would be surprised if you can process images fast enough to rely solely on vision.

About 6-10 Hz. I could make it faster.

6-10 Hz should work. There is also the issue of processing an image taken while the robot is moving (motion blur). With Vision Tracking alone, you really don't want to be stopping the robot just to take a picture.

I don't know; why don't you tell us? :slight_smile:

Who is the better programmer?
Are you doing the drive-straight center gear, or one of the two outside gears?

I believe the drive-straight gear can be done with a simple gyro (even better if it's an IMU) and some encoders; a rough sketch is at the end of this post.

I believe the outside ones, with the longer distance and the turn, will need something to compensate/correct for the error introduced by the turn and the extra distance.

Just my gut with no data, so I think you should test both, and determine the accuracy.
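
Here's that rough sketch of the drive-straight idea, holding heading with the gyro and stopping on encoder distance. `gyro`, `encoder`, and `drivetrain` are hypothetical placeholders for your own robot objects, the gain is a guess to tune, and the sign of the correction depends on your gyro and drive conventions.

```python
import time

KP_HEADING = 0.03   # steering correction per degree of heading error (tune this)


def drive_straight(gyro, encoder, drivetrain, distance_inches, speed=0.5):
    """Drive forward, steering to hold the starting heading, until the encoder says stop."""
    gyro.reset()      # heading 0 = "straight ahead" at the moment we start
    encoder.reset()
    while encoder.get_distance() < distance_inches:
        heading_error = -gyro.get_angle()            # we want to hold 0 degrees
        drivetrain.arcade_drive(speed, KP_HEADING * heading_error)
        time.sleep(0.01)
    drivetrain.arcade_drive(0.0, 0.0)                # stop at the target distance
```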

I use a very fast shutter speed (by setting low exposure and brightness), and it still picks up the target, so I'm not overly worried about blurring.
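
If you're grabbing frames with OpenCV, something like the sketch below is one way to force that short exposure so only the lit-up target survives. Which properties actually take effect, and what the exposure numbers mean, varies by camera and driver, so treat the values as starting points (on Linux, `v4l2-ctl` is another way to set the same controls).

```python
import cv2

cap = cv2.VideoCapture(0)

# Disable auto exposure first; the "manual mode" value differs between
# drivers and OpenCV versions (0.25 on many V4L2 setups, 1 on others).
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)
cap.set(cv2.CAP_PROP_EXPOSURE, 5)        # low exposure = fast shutter
cap.set(cv2.CAP_PROP_BRIGHTNESS, 30)     # keep ambient light from washing in

ok, frame = cap.read()
if ok:
    cv2.imwrite("dark_frame.png", frame)  # sanity-check what the camera sees
```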

It's as I feared: the other programmer is using the ADIS16448. However, I have used that sensor before and noticed no drift whatsoever. (I never finished that project; it was a Segway-style drivebase using a cRIO. I got the lateral axis tracking like it was on rails, but never got the balancing to work. I could pick up the robot, rotate it, and put it back down, and it would snap right back into position.)

A newer option available to FRC teams is to fuse Vision Processing with navX-MXP/navX-Micro inertial navigation using the open-source Kauai Labs Sensor Fusion Framework (SF2), which implements a video processing latency correction feature.
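
The general idea behind the latency correction is simple enough to sketch even without SF2: keep a short, timestamped history of gyro yaw, and when a vision result finally arrives, compare it against the yaw the robot had when the frame was captured rather than the yaw it has now. This is just the concept; the class below is illustrative and is not the SF2 API.

```python
import time
from collections import deque


class YawHistory:
    """Ring buffer of (timestamp, yaw) samples for looking up past headings."""

    def __init__(self, max_samples=200):
        self._samples = deque(maxlen=max_samples)

    def add(self, yaw_deg, timestamp=None):
        when = time.monotonic() if timestamp is None else timestamp
        self._samples.append((when, yaw_deg))

    def yaw_at(self, timestamp):
        """Return the recorded yaw closest in time to `timestamp`, or None."""
        if not self._samples:
            return None
        return min(self._samples, key=lambda sample: abs(sample[0] - timestamp))[1]


# Usage idea: if a frame captured at t_capture comes back 100 ms later saying
# "target is 4 degrees right", the turn setpoint is
#   history.yaw_at(t_capture) + 4
# not the current yaw plus 4.
```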

Several things: First, it sounds like you and the other programmer are competing with each other. That isn’t necessarily a bad thing, but if you’re working on incompatible systems in different languages, it’s going to make it hard to leverage each other’s work when you have to produce a single system. Even if you’re having a friendly competition to see which system will work better, you’ll eventually want to be able to combine insights and code from both systems into your final robot. I don’t know your backgrounds or team dynamics, but this might be a chance for you to learn C++ or for the other programmer to learn LabVIEW.

Now, as others have said, this isn’t an either/or kind of problem. The IMU provides very good continuous measurements, but it’s subject to drift and integration error. A camera provides great ground-truth measurements, but they are lower-rate and are much more prone to error. The right solution is to use both, taking the strengths of each to produce a reliable position estimate.
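
One deliberately simple way to do that blending is a complementary-style filter on heading: the gyro drives the estimate every control loop, and each camera measurement pulls the estimate partway toward what vision saw. The class and the blend factor below are purely illustrative.

```python
class HeadingEstimator:
    """Blend a fast-but-drifting gyro with a slow-but-absolute vision heading."""

    def __init__(self, vision_weight=0.1):
        self.heading_deg = 0.0
        self.vision_weight = vision_weight   # how hard each frame pulls the estimate

    def update_from_gyro(self, yaw_rate_deg_per_s, dt_s):
        # Fast path: runs every control loop iteration (hundreds of Hz).
        self.heading_deg += yaw_rate_deg_per_s * dt_s

    def update_from_vision(self, vision_heading_deg):
        # Slow path: runs only when a processed frame arrives (single-digit Hz).
        self.heading_deg += self.vision_weight * (vision_heading_deg - self.heading_deg)
```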

Gyro drift can be a problem, but it’s not going to doom you in 15 seconds. Calibrate it as well as you can, reset the angle when you enter autonomous mode, and you should be fine. We made this stuff work ten years ago, and the IMUs today are significantly better than the ones we had in FRC back then. Accelerometer drift is a bigger problem than gyro drift, but I’m not qualified to say how well it works on a robot.

My team is a Java team; we are using UDP to communicate between the roboRIO and our Jetson (which will run Python or C++, depending on how fast Python turns out to be) for vision processing. I have not used LabVIEW, but I suspect it very likely supports UDP. If that is the case, you could send vision metadata from the LabVIEW side to the C++ code and merge the two sets of data.
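
For the record, LabVIEW does ship with built-in UDP functions, so that approach should be workable. On the coprocessor side, a minimal Python sender might look like the sketch below; the address, port, and JSON packet format are made up for illustration, so use whatever your robot-side code agrees to parse.

```python
import json
import socket

# Placeholder address: 10.TE.AM.2 is the usual roboRIO IP convention,
# and the port should be one your robot code is actually listening on.
ROBORIO_ADDR = ("10.TE.AM.2", 5800)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)


def send_result(angle_deg, distance_in, capture_time):
    """Fire-and-forget one vision measurement at the robot."""
    packet = json.dumps({
        "angle": angle_deg,
        "distance": distance_in,
        "timestamp": capture_time,
    }).encode("utf-8")
    sock.sendto(packet, ROBORIO_ADDR)
```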

If you want to look at some example code, here is a link:
https://github.com/TheHighlanders/RustyCommsVision/blob/Adriana-Draw_Target/Python/identifyTargets.py

My team is currently prototyping our vision code, so it might not be the clearest or most concise, but it is working.