Full Field Odometry Concept

I had this idea for a full field odometry system and I'm kind of surprised that no one I know of has attempted it yet. A lot of the higher level teams this year had some sort of pose estimation using drivetrain encoders, then used the limelight to compensate for any encoder drift. What if, instead, you mounted 4 limelights, one on each side of your robot? That would let the hub be seen nearly 100% of the time and allow for much more accurate odometry. This doesn't seem to have any major flaws really, other than needing 4 limelights.


Correct me if I'm wrong, but wouldn't that only tell you how far away you are from the hub, not where you are on the field? It would put you on a circle whose radius you get from the limelights, and you would still need some sort of pose estimation to figure out where on that circle you are. If the target weren't a circle (like the flat targets), it could work by estimating the angle of the limelight relative to the target, but not for the hub.
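To make that ambiguity concrete, here's a minimal sketch in plain Java (the hub coordinates and distance are made-up example values, not from any real field map): a distance-only measurement constrains the robot to a circle around the hub, so several very different positions all produce the same camera reading.

```java
public class CircleAmbiguity {
    public static void main(String[] args) {
        double hubX = 8.23, hubY = 4.11; // assumed hub center in field coords (m)
        double d = 3.0;                  // distance reported by the camera (m)

        // Any angle theta around the hub is consistent with the same distance:
        for (double thetaDeg : new double[] {0, 90, 225}) {
            double theta = Math.toRadians(thetaDeg);
            double x = hubX + d * Math.cos(theta);
            double y = hubY + d * Math.sin(theta);
            // Every candidate is exactly d meters from the hub center,
            // so distance alone cannot distinguish between them.
            double check = Math.hypot(x - hubX, y - hubY);
            System.out.printf("theta=%5.1f deg -> (%.2f, %.2f), dist=%.2f%n",
                              thetaDeg, x, y, check);
        }
    }
}
```

Resolving which point on the circle you are at requires an angle measurement of some kind (gyro heading plus the camera's horizontal offset, for instance), which is exactly the extra pose-estimation step described above.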


Having talked to them in Houston and watched some of their videos, it’s my understanding that 971 is doing this, at least to some extent. But they’re not using off-the-shelf limelights. In 2019 or 2020 it was 5 Raspberry Pis. Now it might be something more custom?

I once wondered if an omnidirectional 360-degree camera would work for this.


I'm not saying not to use pose estimation. Most pose estimation code I know of uses the gyro, assumes it doesn't drift, and then uses limelight distance data to estimate your pose. This works quite well, but most teams only have the limelight on one side of the robot, and therefore can only correct their pose some of the time. Actually, on second thought, 254 and other turret teams can probably get a similar effect to 4 limelights since they have a turret.
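For reference, the "limelight distance data" step usually comes from the classic fixed-camera-angle trig described in the Limelight docs: a known target height, a known camera mounting angle, and the vertical offset `ty` the camera reports. A minimal sketch (all mounting constants here are made-up example values):

```java
public class LimelightDistance {
    /**
     * Fixed-angle distance estimate: with the target's height and the
     * camera's mounting height and pitch known, the reported vertical
     * offset 'ty' gives the range via simple trigonometry.
     */
    static double distanceMeters(double tyDegrees) {
        double cameraHeight = 0.75;       // lens height off the carpet (m), example value
        double targetHeight = 2.64;       // vision target height (m), example value
        double cameraPitchDegrees = 30.0; // upward tilt of the camera mount, example value

        double totalAngle = Math.toRadians(cameraPitchDegrees + tyDegrees);
        return (targetHeight - cameraHeight) / Math.tan(totalAngle);
    }

    public static void main(String[] args) {
        System.out.printf("ty=10 deg -> %.2f m%n", distanceMeters(10.0));
    }
}
```

Note this only returns a range, which is why the gyro (or a turret angle) is still needed to pin down where on the circle the robot sits.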


Yup! And for multiple years running, with different hardware.

If I had to guess, I’d say the cost/complexity tradeoff is still too high for the vast majority of teams. I’m happy to be swayed by data, but I struggle to think that in most FRC years, just having more cameras on a robot will drastically increase field odometry accuracy.

The bigger limit today is that the targets usually aren't unique (so you have to get creative to figure out which one you're looking at), and they aren't distributed evenly enough around the field for it to make sense to invest in an "always seeing a target" camera rig.

Given these target constraints, I think the incremental gains from going from one to four cameras just aren’t significant enough for lots of teams to try to bite it off.


I was talking more about top tier teams with turrets (like 254 and 971), but I guess I just answered my own question, since a turret basically gives them the effect of 4 cameras.


Wow. 5 Raspberry Pis.

With a turret that has a large range of rotation, you can maintain pretty much continuous vision of the target. This negates the need for multiple limelights.

You can estimate your position because you know the location of the center of the goal, and you know your robot's angular rotation (gyro), the turret rotation, and the distance and angle to the center of the goal from the limelight or other vision. That gives you enough information to compute your location and update your robot pose.
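The computation described above can be sketched in a few lines. This is a hedged illustration, not any particular team's code: the hub coordinates are assumed example values, angles are taken counter-clockwise from the field +X axis, and the field-relative bearing to the goal is assumed to be the sum of the gyro heading, the turret angle, and the camera's horizontal offset.

```java
public class TurretPoseEstimator {
    // Assumed hub center position in field coordinates (meters), example values.
    static final double HUB_X = 8.23, HUB_Y = 4.11;

    /**
     * Estimate the robot's field position from the measurements described above.
     * All angles in radians, counter-clockwise from the field +X axis.
     *
     * @param gyroHeading robot chassis heading from the gyro
     * @param turretAngle turret angle relative to the chassis
     * @param txToGoal    horizontal offset to the goal center from the camera
     * @param distance    camera-reported distance to the hub center
     * @return {x, y} robot position in field coordinates
     */
    static double[] estimatePosition(double gyroHeading, double turretAngle,
                                     double txToGoal, double distance) {
        // Field-relative direction from the robot toward the hub center.
        double bearing = gyroHeading + turretAngle + txToGoal;
        // Walk backwards from the known hub center along that bearing.
        double x = HUB_X - distance * Math.cos(bearing);
        double y = HUB_Y - distance * Math.sin(bearing);
        return new double[] {x, y};
    }

    public static void main(String[] args) {
        // Chassis facing +X, turret rotated 90 deg left, target centered, 2 m away:
        double[] pose = estimatePosition(0.0, Math.toRadians(90), 0.0, 2.0);
        System.out.printf("x=%.2f y=%.2f%n", pose[0], pose[1]);
    }
}
```

In practice this estimate would be fused with drivetrain odometry rather than trusted on its own, since a single vision frame is noisy.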


I don't think 4 limelights would help in this scenario. As noted above, the hub isn't a flat target, so you would only get a general arc around it. Using external encoders on the drivetrain (and knowing your starting position in auto) already provides very accurate odometry for autonomous or a driver station UI, even with some encoder drift.

Instead of using 4 limelights, I would probably recommend using some sort of LIDAR sensor to figure out where you are on the field.

Correct me if I'm wrong (I'm by no means an expert when it comes to sensors), but don't most LIDAR units (especially within the FRC cost limit) struggle with clear plastic? I recall hearing from many teams on here that units such as the RPLidar give very inconsistent readings around the vast amount of clear plastic found on your typical FRC field.


There are some higher end products that will work… but… thanks to the amazing rules we have around component costs, teams are not enabled or encouraged to seek them out, experiment with them, or use them.

To my knowledge, we are the only team to have ever fielded an FRC robot using a SICK system, and the results were pretty good. We haven't gone back to experimenting with it, though, since there are other approaches that might actually happen, and the promise of affordable (read: price-legal) solid state LIDAR being around the corner seems real but is taking a while to pan out.


This is true, but many teams are using the limelight for full field localization during teleop. This works by combining the gyro and limelight to reset your drive pose. The drive pose becomes a lot less accurate when you are hit or pushed by other robots.

I've seen 2 implementations that work around this: both 6328 and 254 have vision pipelines that use corner data from the vision targets to calculate their distance, which means you only need to see a few of the vision strips.

Good LIDAR = lots of $$$$


There are FRC-legal Ethernet sliprings that at least appear to be specced for your application. No need for more than one LL.

If I read correctly, 971 used features on the field (logos, etc.) for a sort of VSLAM. In that case the 4 cameras did help.


Yes, if you are looking at targets besides the hub.

What SICK sensor did you use?

We did some experiments with the Intel RealSense T265 VSLAM through ROS on a Jetson, but we were limited by the camera's frame rate. At slow speeds we were able to get a very accurate pose from that camera alone, but it would lose tracking every time the robot moved quickly.


I'll try to get the model on Monday if I remember. One of our students won it at ROSCon Madrid and donated it to the team. It wasn't the easiest thing to interface with, and I'd be seriously worried about using them on a real robot due to the cost as well as their fragile nature, but it was a cool little off season project.


We wouldn't need expensive cameras and coprocessors if they just put some AprilTags or something in known locations.


Correct me if I'm wrong, but if you have a turret that always tracks the target using a limelight, that same limelight can be used for this purpose. This is the code I currently have written, and based on the math it should work, at least in Rapid React. It won't work for all games, though.