I want to get many objects’ bounding boxes to decide which has the largest area. I don’t think I can use the ta, ty, or tx values because they assume only one object (as far as I’m aware). Does anyone know how to implement this in code (Java)?
Limelight has a helper class called LimelightHelpers that has some built-in methods for JSON parsing that should make it easier to loop through the data to compare targets!
Yeah it has getTX and getTY methods, but how does it differentiate between different objects?
The docs I linked show that you can parse the JSON output with:
LimelightResults llresults = LimelightHelpers.getLatestResults("");
The LimelightResults object has arrays of detected objects in it (you probably want LimelightTarget_Fiducial if you’re looking for AprilTags). From there you can iterate through and compare area or any other fields they expose.
I recommend looking at the JSON dump specification too to see how the data is structured so that you know what LimelightHelpers methods to look for. Hope this helps!
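For illustration, here’s a tiny self-contained loop over such an array. I’ve stubbed out the fiducial class with just the two fields this thread cares about (the real LimelightHelpers.LimelightTarget_Fiducial exposes fiducialID, ta, tx, ty, and more), so treat this as a sketch of the pattern rather than the exact API:

```java
public class FiducialLoop {
    // Minimal stand-in for LimelightHelpers.LimelightTarget_Fiducial.
    static class Fiducial {
        double fiducialID; // tag ID (LimelightHelpers stores this as a double)
        double ta;         // target area, as a percentage of the image
        Fiducial(double id, double ta) { this.fiducialID = id; this.ta = ta; }
    }

    public static void main(String[] args) {
        // In robot code this array would come from the parsed JSON results.
        Fiducial[] tags = { new Fiducial(3, 0.9), new Fiducial(7, 2.1) };
        for (Fiducial tag : tags) {
            System.out.println("Tag " + (int) tag.fiducialID + " area: " + tag.ta);
        }
    }
}
```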
I want to detect notes, so probably not fiducials.
I’ve scoured the docs and don’t see what part of llresults.results to call for object detection.
Even after I’ve called it and gotten the LimelightHelpers.LimelightTarget_Detector object, it doesn’t show any methods I can use to get the values from the bounding boxes. Even if there were a getTX or getTY method, I’m pretty sure it would return a single value, but for many detected notes I should get multiple values.
The actual variable names of the object arrays are:

I imagine targets_Classifier or targets_Detector should have the note output. It looks like they only expose tx, ty, and ta for detector results, but since ta is area, that should work for your use case, right?
LimelightHelpers.Results llResults = LimelightHelpers.getLatestResults("").targetingResults;
double area = llResults.targets_Detector[0].ta;
Just so I understand: targets_Detector returns an array, where each element represents an individual detection?
Yep! So you can iterate over that array and keep the object with the highest ta, or apply any other filter you want.
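In case it helps, a rough sketch of that loop. I’m using a stand-in Detection class with just the tx/ty/ta fields so it compiles on its own; in your robot code you’d iterate over LimelightHelpers.LimelightTarget_Detector elements from llResults.targets_Detector instead:

```java
public class LargestNote {
    // Stand-in for LimelightHelpers.LimelightTarget_Detector (the real class also has these fields).
    static class Detection {
        double tx, ty, ta;
        Detection(double tx, double ty, double ta) { this.tx = tx; this.ty = ty; this.ta = ta; }
    }

    /** Returns the detection with the largest area (ta), or null if nothing was seen. */
    static Detection largestByArea(Detection[] detections) {
        Detection best = null;
        for (Detection d : detections) {
            if (best == null || d.ta > best.ta) {
                best = d;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Detection[] notes = {
            new Detection(-5.0, 2.0, 0.8),
            new Detection(12.0, -1.0, 3.4), // biggest note in view
            new Detection(3.0, 0.5, 1.1),
        };
        Detection best = largestByArea(notes);
        System.out.println("Largest note: tx=" + best.tx + " ta=" + best.ta);
    }
}
```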
Thank you so much for your help, you’re a life saver!
(Just asking, is there a url for the vendor dependency of limelightlib or do I need to directly download from github?)
No vendordep unfortunately, you just copy-paste the LimelightHelpers class into your robot project.
Would it be possible to use multiple limelights (one facing the front and one on the back) to get an even more accurate robot pose?
You can have multiple sources (Limelights in this case) feeding into addVisionMeasurement.
This may be splitting hairs but it won’t necessarily make any current measurements you have more accurate. It will allow you to “see” an AprilTag more often and thus be updating your pose (with hopefully accurate data as long as you are filtering properly) more often.
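On the “filtering properly” point: a common pattern is to gate each camera’s estimate before handing it to addVisionMeasurement. Here’s a minimal self-contained sketch of such a gate; the VisionEstimate holder class and the threshold value are made up for illustration, and in a real robot you’d pass the accepted pose to your pose estimator’s addVisionMeasurement:

```java
public class VisionGate {
    // Hypothetical holder for one camera's pose result; field names are illustrative.
    static class VisionEstimate {
        int tagCount;      // how many AprilTags contributed to the solve
        double avgTagArea; // average tag area (% of image); larger usually means closer/more reliable
        VisionEstimate(int tagCount, double avgTagArea) {
            this.tagCount = tagCount;
            this.avgTagArea = avgTagArea;
        }
    }

    // Example threshold -- tune on your own robot.
    static final double MIN_TAG_AREA = 0.1;

    /** Accept a measurement only if at least one tag was seen and it isn't tiny/far away. */
    static boolean shouldAccept(VisionEstimate e) {
        return e.tagCount >= 1 && e.avgTagArea >= MIN_TAG_AREA;
    }

    public static void main(String[] args) {
        // With two cameras, run each camera's estimate through the same gate every frame.
        VisionEstimate front = new VisionEstimate(2, 0.6);
        VisionEstimate back = new VisionEstimate(0, 0.0); // saw nothing this frame
        System.out.println("front accepted: " + shouldAccept(front));
        System.out.println("back accepted: " + shouldAccept(back));
    }
}
```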