My team has two Limelights, and every periodic we fetch the JSON results through LimelightLib. This is causing loop overruns.
The code in my periodic:
// Each periodic loop chooses which Limelight to use based on its distance from the AprilTags
LimelightResults frontLimelightResults = LimelightHelpers.getLatestResults(LimelightConstants.FRONT_LIMELIGHT_NAME);
LimelightResults backLimelightResults = LimelightHelpers.getLatestResults(LimelightConstants.BACK_LIMELIGHT_NAME);
LimelightTarget_Fiducial[] frontLimelightAprilTags = frontLimelightResults.targetingResults.targets_Fiducials;
LimelightTarget_Fiducial[] backLimelightAprilTags = backLimelightResults.targetingResults.targets_Fiducials;
LimelightLib is very heavy on JSON parsing and NetworkTables reads. Unless you need data that only exists in the JSON dump, read just the individual values your code actually uses instead of calling getLatestResults() for the entire dump. This should probably live in a separate vision subsystem, too.
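For example, something like this only reads the individual NetworkTables entries it needs (a sketch; these accessors exist in recent LimelightHelpers versions, and the constants are the ones from your post):

// Each call reads a single NetworkTables entry -- no JSON parsing involved
boolean hasTarget = LimelightHelpers.getTV(LimelightConstants.FRONT_LIMELIGHT_NAME);
double tx = LimelightHelpers.getTX(LimelightConstants.FRONT_LIMELIGHT_NAME);
double tagId = LimelightHelpers.getFiducialID(LimelightConstants.FRONT_LIMELIGHT_NAME);

If all you need is a robot pose for your pose estimator, newer LimelightLib versions also have getBotPoseEstimate_wpiBlue(), which likewise avoids parsing the full dump.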
The Limelight getLatestResults() call is known to cause delays, I believe. We just run our vision updates in a separate thread to avoid overruns in the main one.
Our vision class is not a subsystem, but you will see we have a vision_thread method that creates the thread and its loop; it is called from the Vision constructor.
We only create one reference to this class, inside our drivetrain subsystem, so we are certain multiple vision threads will never be made.
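For anyone wondering what that pattern looks like, here is a minimal sketch (hypothetical names, not our exact code; the real loop body does the Limelight polling):

public class Vision {
    private final Thread visionThread;

    public Vision() {
        // Started once from the drivetrain's single Vision instance,
        // so only one background thread ever exists
        visionThread = new Thread(this::vision_thread, "VisionThread");
        visionThread.setDaemon(true);
        visionThread.start();
    }

    private void vision_thread() {
        while (!Thread.currentThread().isInterrupted()) {
            // Poll the Limelights and push pose updates to the drivetrain here.
            // Note: with nothing to pace it, this loop spins as fast as it can --
            // see the CPU discussion further down the thread.
        }
    }
}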
None that I’ve seen. I can check later, but it ran in the background for our entire event last weekend and we never had a problem. The Limelight results coming back slowly is, in and of itself, a delay.
private static final Vector<N3> visionStdDevs = VecBuilder.fill(0.9, 0.9, Units.degreesToRadians(0.9));
// Values tried for theta: 0.1, 500
// (I think we want to ignore the camera pose's angles since they will always be wrong compared to the gyro)
Correct me if I’m wrong, but my understanding of this code is that you are trusting the vision measurement to have an angle accuracy of 0.9 degrees. That’s very high confidence in the vision-supplied angle. A value of 500 is greater than 360 degrees and wouldn’t make much sense either. The default WPILib uses for vision measurements is 0.9 radians, which is about 51 degrees.
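If the goal really is to ignore the vision heading, the usual approach is to give theta a huge standard deviation rather than a tight one (a sketch; larger std devs mean less trust, and the theta entry is in radians):

// Trust vision x/y, but make the heading std dev so large that the
// pose estimator effectively ignores the vision-reported angle
private static final Vector<N3> visionStdDevs =
        VecBuilder.fill(0.9, 0.9, 1_000_000);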
You may want to throw a lock around the shared data (I think it’s just the poseEst object). Race conditions have a nasty habit of not showing up until a very important match.
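Something along these lines, assuming poseEst is a WPILib pose estimator (a sketch with hypothetical field names):

// Shared lock guarding the pose estimator
private final Object poseLock = new Object();

// Vision thread: feed measurements under the lock
synchronized (poseLock) {
    poseEst.addVisionMeasurement(visionPose, timestampSeconds);
}

// Main loop: read the estimate under the same lock
synchronized (poseLock) {
    Pose2d currentPose = poseEst.getEstimatedPosition();
}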
I should say we had vision-estimated alignment starting to work on our test grid in the shop, but we haven't used it at competition yet because of camera-mounting issues. We can see an AprilTag right in front of us on one camera but not the other, and we still need to fix our camera → robot measurements, since they don't report the correct position when the robot is in a known place. We were going to try fudging the camera → robot values to see if that helps.
I tried the thread approach and it stopped the loop overruns from the Vision subsystem, but it drove CPU utilization to 100%, which then caused other things to have loop overruns. How can I stop it from killing the CPU?
@Traptricker More specifically, you want to put the thread to sleep for a period of time on each iteration. This lets the thread release the CPU until that amount of time has passed. You can check out Thread.sleep for the details.
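Something like this, roughly (a sketch; the 20 ms sleep is an assumption that matches the main robot loop period, and the loop body is a placeholder):

private void vision_thread() {
    while (!Thread.currentThread().isInterrupted()) {
        // ... poll the Limelights and update the pose estimate ...
        try {
            // Sleep so the loop doesn't busy-spin and starve the rest
            // of the robot code
            Thread.sleep(20);
        } catch (InterruptedException e) {
            // Restore the interrupt flag and exit the loop cleanly
            Thread.currentThread().interrupt();
            break;
        }
    }
}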