Limelight 2019.5 - 3D, Breaking Changes, and Darker Images

Apparently this is the fix - guess I should learn to read, eh?

@Olivia @Derek @AlexSwerdlow Were you using the dual intersection filter when you encountered this?

Our bad, we figured it out. Thanks

I’m struggling to get any result from camtran over NetworkTables. The 3D model works in the Limelight dashboard and gives accurate results, but for whatever reason, when I access the “camtran” entry in the “limelight” network table it returns no value.

@Olivia how did you guys fix this problem? We are having the same problem where the experimental camera localization feature only works when we are pointed to the left of the target.

You need to set the sort order to left-to-right. If you set it to largest-to-smallest, as we had it, it doesn’t recognize targets on the right half of the field, because the larger right-side target gets returned before the left-side one, confusing the recognition.

I can take screenshots of all our settings tabs if that would help, @Adham_Elarabawy?

@Derek @Olivia that would be great! Thank you so much!

I think we got it working: when I set the sort mode to leftmost, it works perfectly. But now we don’t know how we’ll distinguish between targets on the cargo ship, since there are identical targets right next to each other, and we might want to score in the middle one sometimes, or the left one, and so on. We want to be able to select the leftmost, rightmost, or middle target, lock on to it, and have our auto code aim for that one when placing. How do you do this if you can’t sort between targets?

@Brandon_Hjelstrom no, I was not using the dual intersection feature. There appeared to be 8 points at the time.

I am trying to access the Limelight camera server feed in Java and I can’t seem to get it. I know you can get a normal stream via cam0 = CameraServer.getInstance().startAutomaticCapture(0); and then getVideo(cam0), but how do you pull the Limelight video? CameraServer.getInstance().getVideo("limelight"); unsurprisingly does not work. I just want a CvSink.

Have you tried accessing the stream as an HttpCamera?

CameraServer cs = CameraServer.getInstance();
// The Limelight serves its MJPEG stream on port 5800
HttpCamera frontIPCamera = new HttpCamera("frontIPCamera", "http://10.31.3.11:5800", HttpCameraKind.kMJPGStreamer);
// Keep the connection open even when no client is actively pulling frames
frontIPCamera.setConnectionStrategy(VideoSource.ConnectionStrategy.kKeepOpen);
cs.startAutomaticCapture(frontIPCamera);

Just make sure you call CameraServer.getInstance() before creating the camera, due to a bug in CameraServer.
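
And since you said you just want a CvSink: once the HttpCamera is registered with the CameraServer, something along these lines should hand you frames for your own OpenCV processing. This is an untested sketch; grabFrame is standard cscore behavior, but I haven’t run it against a Limelight:

// Pull a CvSink from the camera registered above
CvSink limelightSink = cs.getVideo(frontIPCamera);
Mat frame = new Mat();
// grabFrame() writes into 'frame' and returns 0 on error
if (limelightSink.grabFrame(frame) != 0) {
    // run your OpenCV processing on 'frame' here
}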

This is a wonderful feature, thank you for adding it. However, when pulling the values from the camtran array, how does one select which value to use in code? We code in C++; this is what my theoretical code looks like at the moment:

double Limelight::GetTargetAngle() {
  return limelight_table -> GetNumber("camtran[4]", 0.0);
}

@Alex5276 Are you trying to run your own vision processing on the Limelight stream? If not, the stream is automatically accessible in your dashboard if your team number is properly set.

@BeeGuyDude Answered in the other thread, but reposting here to help others:

double Limelight::GetTargetAngle()
{
    std::vector<double> cameraTransform = limelight_table->GetNumberArray("camtran", std::vector<double>());
    // Guard against an empty array when no target is visible
    if (cameraTransform.size() < 6) {
        return 0.0;
    }
    return cameraTransform[4];
}
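
For anyone doing the same from Java, the equivalent NetworkTables call would look something like the sketch below (untested). Note that camtran can’t be read as a single number; it’s a 6-element array of (x, y, z, pitch, yaw, roll):

// Read the camtran pose array from the "limelight" table
double[] cameraTransform = NetworkTableInstance.getDefault()
    .getTable("limelight")
    .getEntry("camtran")
    .getDoubleArray(new double[0]);
// Index 4 is yaw; fall back to 0.0 when no target is visible
double targetAngle = cameraTransform.length >= 6 ? cameraTransform[4] : 0.0;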

Just double-checking the listed FOV. The LL1 sensor FOV aspect ratio was 54/41 = 1.31, close to the sensor ratio of 320/240 = 1.33. Whereas 59.6/49.7 would only be 1.2. (A ratio of 59.6/45.7 would be 1.30.) Did the pixel aspect ratio change?

@Jared_Russell Do you have any tips for the clever filtering you mentioned? This is actually proving harder to reject than we initially thought, so if you have any ideas, we’d be all ears. While there are usually only a handful of flips, sometimes we get several readings in a row where more of them are flipped than not.

Really simple version: pick a dimension and a direction (e.g., when not flipped, expect the X coordinate to point to the left). Reject any image where this isn’t true.
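
A minimal sketch of that check, assuming the pose arrives as the camtran array and that un-flipped readings have a negative x translation (the right sign depends on your own coordinate convention):

// Reject any solvePnP result whose chosen axis points the wrong way.
// Assumes camtran = (x, y, z, pitch, yaw, roll) and that a valid,
// un-flipped pose has x < 0; adjust the comparison for your setup.
static boolean isPoseValid(double[] camtran) {
    return camtran.length >= 6 && camtran[0] < 0.0;
}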

More advanced: instead of rejecting the image, re-order the axes and apply a transformation to fix it. E.g. https://github.com/opencv/opencv/issues/8813#issuecomment-390462446

NOTE: for what it’s worth, we gave up on solvePnP once we realized how often the bottom corners of the vision target are occluded by the hatch panel.

We considered the first method, but we found states where the wrong direction was being sent more often than the right one, which we figured would make it hard to pick the right direction. As for the second, I didn’t realize we had access to the axes themselves; I assumed those were defined by solvePnP and we just got the point in space.

Either way, we also found another route that works for us, not using solvePnP. Thanks for your help!