QuestNav is an entirely new approach to robot pose tracking that is more reliable and robust than anything currently available in FRC. This project enables streaming Oculus VR headset pose information to an FRC robot using the NetworkTables protocol. This pose information can be used by the robot control system to accurately map its surroundings and navigate around a competition field, practice space, or any other location. The headset does not require any special calibration or initialization, AprilTags, or a zeroing/homing sequence. It just works!
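On the robot side, reading the streamed pose only takes a few NetworkTables subscriptions. Here's a rough Java sketch of what that could look like; the table name, topic names, axis ordering, and units are placeholders rather than the actual QuestNav keys, so check the setup instructions on the GitHub page before copying it:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.networktables.FloatArraySubscriber;
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class QuestPoseReader {
  private final FloatArraySubscriber position;
  private final FloatArraySubscriber eulerAngles;

  public QuestPoseReader() {
    // "questnav", "position", and "eulerAngles" are assumed names, not confirmed keys.
    NetworkTable table = NetworkTableInstance.getDefault().getTable("questnav");
    position = table.getFloatArrayTopic("position").subscribe(new float[] {0f, 0f, 0f});
    eulerAngles = table.getFloatArrayTopic("eulerAngles").subscribe(new float[] {0f, 0f, 0f});
  }

  /**
   * Converts the latest headset reading into a Pose2d. The axis mapping below assumes
   * Unity's convention (x = right, y = up, z = forward) and yaw in degrees; verify the
   * real ordering and units against the QuestNav docs before trusting it.
   */
  public Pose2d getPose() {
    float[] pos = position.get();
    float[] eul = eulerAngles.get();
    return new Pose2d(pos[2], -pos[0], Rotation2d.fromDegrees(-eul[1]));
  }
}
```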
Check out the demo video linked below!
Hardware Requirements:
FRC robot and/or control system
Quest 3S headset
A supported USB-C to Ethernet + power pass-through adapter (there's a list on the GitHub page)
A 3D printed mount that attaches the headset to a robot (TBD, I'm still working on an FDM-printable version of it)
Optional: A USB backup battery
More information, including source code, a precompiled example, setup instructions, and a detailed software description, is available on the QuestNav GitHub page.
Special thanks to @Thad_House for patiently answering my questions during development.
It's relative to where you reset the robot. I'm hoping someone will send a pull request my way that looks for AprilTags and attempts to initialize its position using them instead. I found a Unity-based AprilTag detection project that's already been compiled for the tags used in FRC, if anyone wants to take a crack at it.
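In the meantime, a simple way to deal with the relative frame is to grab the headset pose while the robot sits at a known field pose and use that pair as a reference afterwards. This is only an illustrative sketch with made-up names, not part of the QuestNav code:

```java
import edu.wpi.first.math.geometry.Pose2d;

public class QuestFieldOffset {
  private Pose2d questAtReset = new Pose2d();
  private Pose2d fieldAtReset = new Pose2d();

  /** Call once while the robot sits at a known field pose (e.g. its starting position). */
  public void reset(Pose2d knownFieldPose, Pose2d rawQuestPose) {
    fieldAtReset = knownFieldPose;
    questAtReset = rawQuestPose;
  }

  /** Maps a raw headset pose into the field frame using the stored reference pair. */
  public Pose2d toField(Pose2d rawQuestPose) {
    // Compose the known field pose with the headset's motion since the reset.
    return fieldAtReset.plus(rawQuestPose.minus(questAtReset));
  }
}
```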
Meta also provides an API for placing 3D anchors within a headset map that might be helpful!
Pretty crazy to get this level of accuracy from something that costs about as much as a brand new LL3G.
Are there any problems with this solution, except pre-game reset (which can be solved easily)? Does the pose stay consistent throughout a 3 minute drive with a lot of rotational and translational changes?
I'm heavily biased here, but there don't appear to be any problems with this approach other than form factor, I guess. Definitely watch the video. I drove over several bumps and crashed into totes while spinning in circles and still arrived at the same estimated position. I also hosted a Twitch stream last week where I answered a bunch of questions and drove the robot around that same field live for a solid half hour.
Any idea what the latency is between an image being captured and the pose being written to NT? Based on your video, not much. Your code seems like it's intended to do latency compensation, but questTimestamp isn't used, unless I'm missing something.
If I were to do this, instead of fusing the pose directly, I would probably use a PoseEstimator and supply this input with a small standard deviation. Pose estimators are latency-compensated, so they need the time reading.
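Something along these lines is what I have in mind. It's just a sketch: the latency constant and standard deviations are placeholder numbers, and the estimator comes from wherever your drivetrain code already constructs it:

```java
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.wpilibj.Timer;

public class QuestFusion {
  // Placeholder until the actual capture-to-NT latency is measured.
  private static final double ASSUMED_LATENCY_SEC = 0.05;

  private final SwerveDrivePoseEstimator estimator;

  public QuestFusion(SwerveDrivePoseEstimator estimator) {
    this.estimator = estimator;
  }

  /** Call each loop with the latest field-relative headset pose. */
  public void addQuestMeasurement(Pose2d questFieldPose) {
    // Back-date the measurement so the estimator can latency-compensate.
    double timestamp = Timer.getFPGATimestamp() - ASSUMED_LATENCY_SEC;
    // Small std devs (meters, meters, radians) express high trust in the headset pose.
    estimator.addVisionMeasurement(
        questFieldPose, timestamp, VecBuilder.fill(0.02, 0.02, 0.01));
  }
}
```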
Not trying to pick here, just trying to determine whether this is worth taking on this year. I'm guessing probably not for my team, since I'm new to them, but maybe yes for me, for my own fun.
How difficult would you say it would be to set up this system on the robot? From the docs it seems that you only need to set things up on the Quest side, and the robot only needs to read the data and feed it into your pose filter. Is this the case, or are there other changes that need to be made?
How robust is the system to high-velocity movement (both angular and linear)?
How would you go about calculating standard deviations for the system? What factors affect how much we should "trust" the measurement?
We currently only have an Oculus Quest 3. Do you think it would be worth spending time trying to get it to run QuestNav, or should we look for a Quest 3S?
That is quite a big claim for a system that, to my knowledge, has never been run in an actual FRC match with actual FRC lighting (hopefully not a problem, but it's hard to say there aren't any problems without trying it). There is still a ton of unexplored area here, and in my opinion you really haven't gathered nearly enough concrete data.
All that being said, I bought a 3S (which I wanted anyway, to go with my Quest 2) and have been trying this out. My conclusion is that Unity is not a fun platform to work with. Maybe I'm doing things wrong, but just getting debugging working has been elusive and painful.
I am unfamiliar with the internals of these devices. Could you imagine destructively removing casings or fuselages to trim its weight and decrease its space claim while still maintaining its operation?
Also, in the demo most of the environmental background was static. Is there a scenario where performance is degraded by robots, spectators, refs, and game/field elements moving? Or is this technique robust against a dynamic background?
Agreed, and I'm super pumped someone's looking into it. Being able to strap an "odometry box" to a robot and never having to think about it again is a game changer.
Looking forward to seeing some in-match, back-to-back data to show what other solutions this outperforms.
The crucial thing, though: this needs to get more accuracy for less "fiddle time" than other options on the market today.
Looking at the Q&A, my personal assessment is that more testing is needed before that statement can be made with confidence for all FRC teams.
I look forward to seeing this strapped on a few robots this season!
That being said, lacking the season, one thing that might be worth discussing: why should this solution be expected to outperform wheel odometry and a gyro?
I haven't run this system, but I'm going to stick my neck out here for Juan. His work while at Analog Devices was a significant part of delivering teams IMUs that are still generally the answer unless you've hitched your wagon to CTRE in full. And then he went to an employer where bad pose tracking makes people puke, thus it's got to be tested hard.
So Iām watching this with interest.
For that many cameras and sensors in a mass-market unit, I'd be shocked if shucking it didn't wreck some factory calibration.
I am certainly not trying to attack Juan or discount his expertise. I am just trying to question branding this "The Best Robot Pose Tracking System in FRC". If the title were something like "QuestNav: New Technology that Might Change the Game", I would completely agree, and that is why I am also looking into and working with this heavily.
I just want teams who see this not to get overexcited and have inflated expectations about where this project currently is. Meaning, if you aren't a very early adopter and pretty technical, you shouldn't be rushing out to buy one just for this purpose. However, the Black Friday deal getting it plus the new Batman game (which is awesome) for $200 (after a $100 Amazon rebate) is really good.
My main theory is both the number of cameras and the amount of R&D time presumably put into hardware, software, and calibration. Maybe the VSLAM using features that aren't limited to specific markers allows for more robust tracking as well.
I legitimately thought this was a joke at first. This is so freaking cool!! Can't wait to see more testing come out, and what the future holds for it!
This is really cool, but in FRC you'd still need AprilTags then, correct? I'd imagine over time you'd experience some drift.
Although I guess the argument is going to be that it's not coupled to the wheel encoders like traditional odometry, and "hits" shouldn't really affect you.