The pose returned by the Limelight is very accurate on the red side of the field (while viewing the speaker tags) but is offset forward by approximately seventy inches when we’re viewing tags 7 & 8 on the blue-side speaker.
Why is this happening? We’re calling getBotPose2d_wpiBlue from the LimelightHelpers library.
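For reference, we’re reading the pose roughly like this (the camera name "limelight" and the dashboard keys are just examples from our code, nothing special):

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class VisionReadout {
    // Runs every loop; logs the blue-origin field pose reported by the Limelight.
    public void periodic() {
        Pose2d botPose = LimelightHelpers.getBotPose2d_wpiBlue("limelight");
        SmartDashboard.putNumber("LL X (m)", botPose.getX());
        SmartDashboard.putNumber("LL Y (m)", botPose.getY());
        SmartDashboard.putNumber("LL Heading (deg)", botPose.getRotation().getDegrees());
    }
}
```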
I think you are going to have to give us a lot more numbers and details here! What are the actual numbers you get? Where on the field are you when you get those numbers? Pictures, details, etc. all help!
Seventy inches is a ton. Something major is off here.
What does the Limelight UI show? Does it give good numbers and show you in the right pose? This will help us isolate whether the problem is your LL setup, the field, how you are getting that data, or the LL helper library (unlikely).
Also, sorry for not including this at first, but we’ve recently found that it’s off on both sides, not just blue. On both sides our bot shows roughly a 70-inch forward offset.
When we’re about a foot in front of the blue subwoofer it shows us as being pretty much inside of the speaker. I am very confident that the camera pose in robot space is configured correctly.
I regrettably can’t provide any pictures tonight. I can tell you that we have our tag size set to 165.1 mm in the Limelight UI and the camera pitched 20 degrees up (we measured it with a protractor). I’m starting to think it might be the field/fmap.
When you said the LL UI didn’t show the correct position did you mean using the field view or the robot view or both?
It seems like you must be on a 2024 image, but are you on 2024.1.1? While I’m skeptical that the newer image actually fixed a problem related to your issue, reflashing would recreate your fmap in case it accidentally got changed or modified somehow.
If you are already on the latest image you could try replacing your fmap here.
Do you have the other distances of your camera relative to your robot configured? Could one of those have a typo?
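If you want to sanity-check or set those values from code instead of the web UI, something like this sketch works; I believe recent LimelightHelpers versions expose setCameraPose_RobotSpace, and the numbers here are placeholders, not your robot’s:

```java
public class LimelightConfig {
    // Sketch only: pushes the camera-in-robot-space pose from code rather than the web UI.
    // Units are meters for the offsets and degrees for roll/pitch/yaw.
    public static void configureCameraPose() {
        LimelightHelpers.setCameraPose_RobotSpace(
            "limelight",
            0.30,  // forward of robot center (m)
            0.00,  // left/right offset (m)
            0.45,  // height above the floor (m)
            0.0,   // roll (deg)
            20.0,  // pitch (deg), e.g. the 20-degree upward tilt you mentioned
            0.0    // yaw (deg)
        );
    }
}
```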
Are you getting this error on a practice/team field, or at your event? We’ve noticed the speaker tags at our regional this weekend are not well aligned (the backing they’re on is visibly bowed, especially on the red side) and we’re getting wrong readings from them (though it’s closer to one foot off than your reported 70 inches) compared to our tag sets on our own practice field.
I pulled up the 3D model in the UI and the camera pose in bot space looked correct. I also measured it with a tape measure twice. However, I am one version behind, so I’ll reflash the firmware today and see if that fixes it.
Although idk if firmware would fix it. If that doesn’t work I reckon I’m gonna go ask somebody if I can go on the field with a tape measure. Other teams have also been having shooting issues, so…
We and a couple of other teams brought it up during calibration and after. The answer from the FTAs was “we know, most week 1 fields have this issue, we can’t do anything without a response from FIRST”, and once everyone had calibrated they didn’t want to change the tags anyway. Understandable to an extent, but very disappointing since our shot aiming and autos used tag odometry and we’ve had to rework or disable most of those features. Really hoping it gets better by week 3!
Okay, I kinda forgot to get the screenshot, but I can say with relative confidence that the issue was caused by the field being built wrong. I think now we’ll just estimate our position based solely on the tag immediately underneath the speaker.
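Something like this is what I have in mind; very rough sketch, keying off the primary in-view tag ID (tid). The camera name and the alliance handling are just examples:

```java
import java.util.Optional;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.wpilibj.DriverStation;

public class SpeakerTagOnlyPose {
    // Only return a pose when the primary in-view tag is the center speaker tag
    // (ID 7 for blue, ID 4 for red in 2024, if I have the numbers right).
    public Optional<Pose2d> getSpeakerTagPose() {
        boolean isBlue = DriverStation.getAlliance()
                .map(a -> a == DriverStation.Alliance.Blue)
                .orElse(true);
        double speakerCenterTag = isBlue ? 7 : 4;

        if (LimelightHelpers.getFiducialID("limelight") == speakerCenterTag) {
            return Optional.of(LimelightHelpers.getBotPose2d_wpiBlue("limelight"));
        }
        return Optional.empty();
    }
}
```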
Crazy idea, but since the field could be off, instead of trying to use AprilTags to localize, what about using a LIDAR camera to find the distance from the alliance wall for lining up to shoot?
To be honest I just don’t know what else it could be. Our camera pose in bot space was perfect. 70 inches may also be a bit high; in hindsight I’d say it may be more like 30 inches off. Also, when we try it at the lab (where the AprilTags are 100% in the right place) it works just fine…
Lots of field surfaces are very reflective or transparent, or a combination of both (e.g. Lexan)… meaning they aren’t great to detect with LIDAR. Not saying it can’t provide useful data, but the fact that the technology has been available for more than a decade and hasn’t seen much use in FRC suggests it has challenges that teams might not want to take on mid-season.
I think this is going to be a learning year for both FIRST and teams with respect to AprilTag based global pose estimation and its challenges/limitations. While AprilTags were on the field last year, this is the first year that it looks like a large number of teams will be using them. The field build tolerances, along with movement over the course of an event, means that individual AprilTags may not be perfectly square to the field dimensions (e.g. can be mounted in the correct location but with rotational deviations of potentially several degrees). I don’t think FIRST has a good tool at present to help FTAs quickly verify this and keep the tags in good alignment? (this would certainly be an area for improvement)
Given this reality, 3D solvers may not give good results, especially with single tags, and the trust level given to those solutions in overall pose estimation should be low. Multi-tag estimates should be better? Filtering outliers and building distance-based trust levels into pose estimators is likely crucial. For example, a camera might detect a tag halfway across the field, but if it’s tiny in terms of pixels, a 3D pose from it shouldn’t be trusted much, if at all. Similarly, if poses come in that are clearly incorrect (e.g. outside the field, or implying the robot is moving faster than it physically can), they should be rejected rather than fed into the pose estimator.

There are tools that can be used to help debug pose estimation (e.g. teams should be logging not only the outputs of the pose estimators, but all inputs to them, so outliers can be identified in offline analysis and filtering added), but tuning tools to interactively work with that data and figure out better filters/weights are currently nonexistent or homegrown.
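To make the gating concrete, it could look something like this sketch on the robot side, assuming a WPILib SwerveDrivePoseEstimator; the field dimensions, jump threshold, and standard-deviation scaling here are illustrative, not tuned values:

```java
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;

public class VisionFilter {
    private static final double FIELD_LENGTH_M = 16.54; // approximate 2024 field size
    private static final double FIELD_WIDTH_M = 8.21;

    public void maybeAddVisionMeasurement(
            SwerveDrivePoseEstimator estimator,
            Pose2d visionPose,
            double timestampSeconds,
            double avgTagDistanceMeters) {

        // Reject anything that lands outside the field boundary.
        boolean onField = visionPose.getX() >= 0 && visionPose.getX() <= FIELD_LENGTH_M
                && visionPose.getY() >= 0 && visionPose.getY() <= FIELD_WIDTH_M;

        // Reject implausible jumps relative to the current estimate (example threshold).
        double jump = estimator.getEstimatedPosition()
                .getTranslation().getDistance(visionPose.getTranslation());
        boolean plausible = jump < 1.0;

        if (onField && plausible) {
            // Trust distant tags less by inflating the measurement std devs with distance.
            double xyStdDev = 0.1 * Math.max(1.0, avgTagDistanceMeters);
            estimator.addVisionMeasurement(
                    visionPose,
                    timestampSeconds,
                    VecBuilder.fill(xyStdDev, xyStdDev, Math.toRadians(30)));
        }
    }
}
```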