We are working on AprilTag localization for PathPlanner. Before we started, we had one question on our minds:
Does PathPlanner ask for the data required for AprilTag recognition in terms of distance to the AprilTags, or in terms of their location on the coordinate plane?
I believe PathPlanner does not calculate position via AprilTags. You must do that on your own and supply that position to the PathPlanner auto tracker.
How PhotonVision and Limelight do it
To incorporate PhotonVision or Limelight, you would use their 3D tracking method (i.e. MegaTag for Limelight) to calculate your position on the field. After that, you continuously feed that position into PathPlanner.
Limelight gives you your position on the field with a bit of latency. You use that to update your pose estimator via addVisionMeasurement, which takes the vision pose along with the timestamp at which it was captured (derived from the latency Limelight reports).
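To make the latency handling concrete, here is a simplified, self-contained sketch of the idea behind latency-compensated vision fusion. This is not the WPILib API: the class and method names are hypothetical, the "pose" is one-dimensional, and the blending is a single gain instead of a Kalman update. The real SwerveDrivePoseEstimator does the same thing in principle: look up where odometry said you were at the capture timestamp, compare that to the vision pose, and shift the whole estimate by a weighted correction.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical, simplified sketch of latency-compensated vision fusion.
// Real code (e.g. WPILib's pose estimators) uses full Pose2d and Kalman gains.
class PoseFusionSketch {
    // Timestamped odometry history: seconds -> 1-D pose (metres).
    private final TreeMap<Double, Double> history = new TreeMap<>();
    private double offset = 0.0; // correction applied on top of raw odometry

    void addOdometry(double timestampSec, double odomPose) {
        history.put(timestampSec, odomPose);
    }

    // Blend a delayed vision pose into the estimate AT THE TIME IT WAS CAPTURED.
    void addVisionMeasurement(double visionPose, double captureTimestampSec, double trust) {
        Map.Entry<Double, Double> entry = history.floorEntry(captureTimestampSec);
        if (entry == null) return; // no odometry sample old enough to compare against
        double poseAtCapture = entry.getValue() + offset;
        // Shift the whole trajectory toward the vision pose by `trust` (0..1).
        offset += trust * (visionPose - poseAtCapture);
    }

    double getEstimate() {
        return history.lastEntry().getValue() + offset;
    }
}
```

The key point is that the vision pose is compared against the historical odometry pose at the capture time, not the current one, so the correction is not corrupted by motion that happened while the image was being processed.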
Is there any way to do this without using Limelight or PhotonVision?
You could in theory use only your odometry (tracking your position via wheel velocities) to estimate your position. The problem is that odometry drifts over time: for short distances it works fine, but after one or two cycles the pose calculated from odometry will be far from your real position. What the cameras provide is an absolute measurement of exactly where you are on the field, which lets you "reset" your odometry and correct the drift.
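The drift mechanism is easy to demonstrate numerically. The sketch below (illustrative only; the 1% scale bias and the other numbers are made up) integrates a slightly miscalibrated wheel velocity and shows the position error growing in proportion to distance travelled, which is why an occasional absolute measurement is needed.

```java
// Sketch: why pure odometry drifts. Each wheel-velocity sample carries a small
// systematic error (here a scale bias), and integration accumulates it forever.
class OdometryDriftDemo {
    // Integrate a measured velocity that is off from truth by `scaleBias`.
    static double integrate(double trueVelocity, double scaleBias, double dt, int steps) {
        double pose = 0.0;
        for (int i = 0; i < steps; i++) {
            pose += trueVelocity * (1.0 + scaleBias) * dt; // measured velocity is slightly wrong
        }
        return pose;
    }

    public static void main(String[] args) {
        double truePose = 1.0 * 0.02 * 500; // robot really travelled 10 m
        double estimated = integrate(1.0, 0.01, 0.02, 500); // 1% scale bias, 20 ms loop
        // Drift is ~0.1 m after 10 m, and it keeps growing with distance.
        System.out.println("drift = " + (estimated - truePose));
    }
}
```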
If you have no access to cameras at all, you could technically tell your driver to stop at a certain known point every cycle and give them a button that resets the odometry position to that location. The main problem with this is that any distance the driver is off by when they press the reset button becomes an error of the same size in your odometry.
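A manual reset of this kind might look like the sketch below. The class and the surveyed coordinates are hypothetical (not a WPILib API); the point is that the reset simply overwrites the estimate, so any physical offset between the robot and the surveyed point at button-press time survives as a persistent error.

```java
// Hypothetical sketch of a driver-triggered odometry reset.
class ManualReset {
    double x, y; // current odometry estimate (metres)

    // Pre-surveyed field point the driver parks at (assumed values).
    static final double KNOWN_X = 1.50, KNOWN_Y = 5.50;

    void onResetButton() {
        // If the robot is actually at (KNOWN_X + 0.1, KNOWN_Y) when pressed,
        // the estimate still becomes (KNOWN_X, KNOWN_Y): the 0.1 m the driver
        // was off by persists in all subsequent odometry.
        x = KNOWN_X;
        y = KNOWN_Y;
    }
}
```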
I will ask, though: what are you trying to achieve with this? Is your purpose automatic placement? If so, there could be simpler solutions that don't require PathPlanner and 3D tracking.
You didn’t explain why you don’t want to use PV or LL. If you are extremely cost-sensitive and can afford a USB camera (about $35) but not a coprocessor such as a Raspberry Pi 3 or 4, then you can run the WPILib AprilTag detection code on a roboRIO v2 and might have enough CPU left over to run your commands.
I added pose estimation to the WPILib detection example (with help from fellow CD posters), and it likely will work: GitHub Pose Estimation
The example has both PV and LL routines, as I compared the poses from all three methods. LL and PV are not required to run the pose estimator on a roboRIO v2.
I recommend that you run PV, if at all possible.
We will use ROS with a Jetson Orin Nano Super. In ROS we will do the AprilTag detection. Do we need to use PhotonVision or anything else?
Sorry, I don’t know if PV runs on your processor.