QuestNav: The Best Robot Pose Tracking System in FRC

QuestNav is an entirely new approach to robot pose tracking that is more reliable and robust than anything currently available in FRC. This project enables streaming Oculus VR headset pose information to an FRC robot using the NetworkTables protocol. This pose information can be used by the robot control system to accurately map its surroundings and navigate around a competition field, practice space, or any other location. The headset does not require any special calibration/initialization, AprilTags, or a special zeroing/homing sequence. It just works!
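
For a sense of what the robot-side integration might look like, here is a minimal Java sketch that subscribes to a headset pose over NetworkTables. The class name and the table/topic names ("questnav", "position", "rotation") are placeholders for illustration, not the project's actual names; check the GitHub page for the real ones.

```java
import edu.wpi.first.networktables.FloatArraySubscriber;
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class QuestPoseReader {
  private final FloatArraySubscriber positionSub;
  private final FloatArraySubscriber rotationSub;

  public QuestPoseReader() {
    // Hypothetical table/topic names; see the QuestNav docs for the real ones.
    NetworkTable table = NetworkTableInstance.getDefault().getTable("questnav");
    // Default to zeros until the headset publishes its first sample.
    positionSub = table.getFloatArrayTopic("position").subscribe(new float[] {0f, 0f, 0f});
    rotationSub = table.getFloatArrayTopic("rotation").subscribe(new float[] {0f, 0f, 0f});
  }

  /** Latest headset translation, in the headset's own frame. */
  public float[] getPosition() {
    return positionSub.get();
  }

  /** Latest headset rotation as published. */
  public float[] getRotation() {
    return rotationSub.get();
  }
}
```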

Check out the demo video linked below!

[questnav-demo video]

Hardware Requirements:

  1. FRC robot and/or control system
  2. Quest 3S headset
  3. A supported USB-C to Ethernet + power pass-through adapter (there's a list on the GitHub page)
  4. A 3D printed mount that attaches the headset to a robot (TBD, I'm still working on an FDM-printable version of it)
  5. Optional: A USB backup battery

More information, including source code, a precompiled example, setup instructions, and a detailed software description are available on the QuestNav GitHub page.

Special thanks to @Thad_House for patiently answering my questions during development.

112 Likes

This is wild.

What is the output to NT: the field pose, or is it relative to the start?

1 Like

It's relative to where you reset the robot. I'm hoping someone will send a pull request my way that looks for AprilTags and attempts to initialize its position using them instead. I found a Unity-based AprilTag detection project that's already been compiled for the tags used in FRC if anyone wants to take a crack at it :slight_smile:
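
Since the pose is relative to the reset point, converting it into a field pose is one small transform. A sketch using WPILib geometry classes, assuming you capture both the headset pose and the known field pose at the moment of reset (the class and field names here are illustrative):

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Transform2d;

public class QuestFieldPose {
  private final Pose2d questPoseAtReset;  // headset-frame pose, captured at reset
  private final Pose2d fieldPoseAtReset;  // known field pose at that same moment

  public QuestFieldPose(Pose2d questPoseAtReset, Pose2d fieldPoseAtReset) {
    this.questPoseAtReset = questPoseAtReset;
    this.fieldPoseAtReset = fieldPoseAtReset;
  }

  /** Map the current headset-frame pose into the field frame. */
  public Pose2d toFieldPose(Pose2d questPoseNow) {
    // Transform2d describing the motion since reset, in the reset frame.
    Transform2d motionSinceReset = questPoseNow.minus(questPoseAtReset);
    return fieldPoseAtReset.plus(motionSinceReset);
  }
}
```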

Meta also provides an API for placing 3D anchors within a headset map that might be helpful!

1 Like

What's the level of accuracy here? Will it notice small 1 cm changes?

2 Likes

The headset is definitely sub-centimeter accurate without relying on any known features (AprilTags).

11 Likes

Pretty crazy to get this level of accuracy from something that costs about the same as a brand-new LL3G.
Are there any problems with this solution, aside from the pre-game reset (which can be solved easily)? Does the pose stay consistent throughout a 3-minute drive with a lot of rotational and translational changes?

4 Likes

I'm heavily biased here, but there don't appear to be any problems with this approach other than form factor, I guess. Definitely watch the video. I drove over several bumps and crashed into totes while spinning in circles and still arrived at the same estimated position. I also hosted a Twitch stream last week where I answered a bunch of questions and drove the robot around that same field live for a solid half hour.

10 Likes

Any idea what the latency is between an image being captured and the pose being written to NT? Based on your video, not much. Your code looks like it intended to do latency compensation, but questTimestamp isn't used, unless I am missing something.

If I were to do this, instead of fusing directly I would probably use a PoseEstimator and supply this input with a small standard deviation. Pose estimators are latency compensated, so they need the time reading.
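
For reference, a minimal sketch of that fusion idea using WPILib's SwerveDrivePoseEstimator. The `questPose` and `timestampSeconds` inputs are hypothetical (the timestamp must already be converted into the FPGA timebase the estimator uses), and the standard deviations are illustrative, not tuned values:

```java
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;

public class QuestFusion {
  /** Feed a Quest pose sample into the estimator with latency compensation. */
  public static void fuseQuestPose(SwerveDrivePoseEstimator estimator,
                                   Pose2d questPose, double timestampSeconds) {
    estimator.addVisionMeasurement(
        questPose,
        timestampSeconds,                  // capture time, FPGA timebase
        // Tight trust: ~2 cm in x/y and ~0.01 rad in heading (illustrative).
        VecBuilder.fill(0.02, 0.02, 0.01));
  }
}
```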

Not trying to nitpick here, just trying to determine whether this is worth taking on this year for us. I am guessing probably not for my team, as I am new to them, but maybe yes for me for my own fun.

3 Likes

A few questions here:

  1. How difficult would you say it would be to set this system up for the robot? From the docs it seems that you only need to set things up on the Quest side, and the robot only needs to read the data and feed it to your pose filter. Is this the case, or are there other changes that need to be made?
  2. How robust is the system to high-velocity movement (both angular and linear)?
  3. How would you go about calculating standard deviations for the system? What factors affect how much we should "trust" the measurement? (A rough heuristic is sketched after this list.)
  4. We currently only have an Oculus Quest 3. Do you think it would be worth spending time trying to get it to run QuestNav, or should we try looking for a Quest 3S?
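
On question 3, one common heuristic (an assumption, not anything QuestNav prescribes) is to trust the measurement less as the robot moves faster, since tracking error and latency both matter more at speed. The constants here are illustrative only:

```java
import edu.wpi.first.math.Matrix;
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.numbers.N1;
import edu.wpi.first.math.numbers.N3;

public class QuestStdDevs {
  /** Standard deviations for [x (m), y (m), heading (rad)], widened with speed. */
  public static Matrix<N3, N1> forSpeed(double speedMetersPerSec) {
    double base = 0.02;                            // ~2 cm trust at rest
    double scale = 1.0 + 0.5 * speedMetersPerSec;  // loosen as speed rises
    return VecBuilder.fill(base * scale, base * scale, 0.01 * scale);
  }
}
```
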
3 Likes

If I understand correctly, is this basically VIO (visual-inertial odometry), or visual SLAM (if it has loop closure)?

That is quite a big claim for a system that, to my knowledge, has never been run in an actual FRC match with actual FRC lighting (hopefully not a problem, but it's hard to say there aren't any problems without trying it). There is still a ton of unexplored area here, and you really haven't gathered nearly enough concrete data, in my opinion.

All that being said, I bought a 3S that I wanted anyway to go with my Quest 2 and have been trying this out. My conclusion is that Unity is not a fun platform to work with. Maybe I am doing things wrong, but just getting debugging working has been elusive and painful.

4 Likes

I am unfamiliar with the internals of these devices. Could you imagine destructively removing casings or housings to trim its weight and decrease its space claim while still maintaining its operation?

Also, in the demo most of the environmental background was static. Is there a scenario where the performance is degraded by robots, spectators, refs, and game/field elements moving? Or is this technique robust against a dynamic background?

4 Likes

Agreed, and I'm super pumped someone's looking into it. Being able to strap an "odometry box" to a robot and never having to think about it again is a game changer.

Looking forward to seeing some in-match, back-to-back data to show what other solutions this outperforms.

The crucial thing, though: this needs to get more accuracy for less "fiddle time" than other options on the market today.

Looking at the Q&A, my personal assessment is that more testing is needed before that statement can be made with confidence for all FRC teams.

I look forward to seeing this strapped on a few robots this season!

That being said, lacking the season... one thing that might be discussable: why should this solution be expected to outperform wheel odometry and a gyro?

5 Likes

This is really impressive @juchong. I feel like it could be a game changer. Thanks for sharing!

1 Like

If it started dropping updated field-object locations to the network table too, you could use that for AD* and do some amazing things.

I haven't run this system, but I'm going to stick my neck out here for Juan. His work while at Analog Devices was a significant part of delivering the IMUs that are still generally the answer for teams, unless you've hitched your wagon to CTRE in full. And then he went to an employer where bad pose tracking makes people puke, so it's got to be tested hard.

So Iā€™m watching this with interest.

For that many cameras and sensors in a mass-market unit, I'd be shocked if shucking it didn't wreck some factory calibration.

12 Likes

I am certainly not trying to attack Juan or discount his expertise. I am just trying to question branding this "The Best Robot Pose Tracking System in FRC". If the title had been "QuestNav: New Technology that Might Change the Game" or something like that, I would completely agree, and that is why I am also looking into and working with this heavily.

I just want teams who see this to not get overexcited and have inflated expectations about where this project is currently. Meaning: if you aren't a very early adopter and pretty technical, you shouldn't be rushing out to buy one just for this purpose. However, the Black Friday deal getting it plus the new Batman game (which is awesome) for $200 (after a $100 Amazon rebate) is really good.

13 Likes

Got me thinking as well...

My main theory is both the number of cameras and the amount of assumed R&D time for hardware, software, and calibration. Maybe the vSLAM using features that are not limited to specific markers also allows for more robust tracking.

I legitimately thought this was a joke at first. This is so freaking cool!! Can't wait to see more testing come out, and what the future holds for it!

8 Likes

This is really cool - but in FRC you'd still need AprilTags then, correct? I'd imagine over time you'd experience some drift.

Although I guess the argument is going to be that it's not coupled to the wheel encoders like traditional odometry, and 'hits' shouldn't really affect you.