Limelight 2019.5 - 3D, Breaking Changes, and Darker Images

You cannot set this via the NT API, but you can set up a second pipeline with a different resolution. Will this work for you?

The new LL update has an experimental 3D mode that gives readings of six values (x, y, z, pitch, yaw, roll). It’s really cool!

Can you comment on the latency for switching to a different pipeline with higher resolution? I’m wondering if it can be done relatively rapidly in order to determine pose. Do you also have updated camera matrix values for the higher resolution in case I choose to call solvePnP directly? Thanks!


So could I use something like the Pigeon or navX to show a real-time view of the robot on a virtual field, as opposed to relying on a camera to show our placement? Could we take a reading from a Pixy camera to show the location of cargo on the field as well? Maybe even other robots?

…do you think I could get these results with no prior (FRC) programming experience within the next 8 days? I have a mentor with vast game development skills who is semi-familiar with Arduino and Raspberry Pi. We have a Limelight and a Pixy camera, both version 1, and about 20 Microsoft webcams, like the ones in the KOP.

What frame rate do you get while in high-res mode?


Ah yes, I didn’t think of that. Thanks!

@AlexSwerdlow Are there more/fewer than eight target corners in the image when this happens? If you have eight corners when this happens, you should try increasing the “acceptable error” value. We may want to make this dynamic or a function of target distance in the future.

@AFlyingKiwi Yaw is indeed how far your robot should turn to become parallel to the line going into the target / perpendicular to the target. It will not change as you slide your camera left/right.

@rmaffeo We will measure this ASAP. Regular pipeline switching currently takes about 50 ms, and there is an added cost when the resolution changes. We will add the updated camera and distortion matrices for advanced LL1 and LL2 users to the docs soon.
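For anyone who wants to try this, here is a minimal sketch of switching pipelines over NetworkTables. The “limelight” table and the “pipeline” entry come from the Limelight docs; the specific indices (0 for a fast tracking pipeline, 1 for a high-res pose pipeline) are just an assumed setup:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class PipelineSwitcher {
    private final NetworkTable limelight =
        NetworkTableInstance.getDefault().getTable("limelight");

    /** Select one of the pipelines (0-9) configured in the web UI. */
    public void setPipeline(int index) {
        limelight.getEntry("pipeline").setNumber(index);
    }

    /** Example: flip to an assumed high-res pipeline 1 only while estimating pose. */
    public void requestPose(boolean wantPose) {
        setPipeline(wantPose ? 1 : 0);
    }
}
```

Keep the ~50 ms switch mentioned above (more when the resolution changes) in mind when deciding how often to flip back and forth.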

@Brian_Selle 22 fps processing! The live stream’s frame rate and resolution will not change.

@LilShroomy This is a cool idea, and it is well-suited to a video game programmer. Note that our visualizer is not programmable, so you would need to start nearly from scratch. This is what I would recommend if you want to pursue the idea:

  • Your robot needs to estimate its position using encoders, an IMU (Pigeon/navX), and potentially a Limelight.
  • Simple encoder + IMU odometry, from 1712’s Pure Pursuit paper (“Implementation of the Adaptive Pure Pursuit Controller”):
    distance = (change in left encoder value + change in right encoder value) / 2
    x location += distance * cosine(robot angle)
    y location += distance * sine(robot angle)
    (see the code sketch after this list)
  • Based on your IMU’s yaw and/or estimated robot pose, you will know which targets you might be facing.
  • Understanding where your camera is relative to the target, you can correct your robot pose and hopefully erase the drift that invariably accumulates with the algorithm above.
  • Post your robot pose to NetworkTables.
  • Your “visualizer” needs to run on the driver station and utilize the “NetworkTables” API to read data from your robot.
  • Draw a sprite of the field, then draw a robot sprite using the robot pose from NT. The visualizer will need to know where the robot was located when the match began.
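A minimal sketch of the odometry and NetworkTables steps above, assuming your drivetrain already reports encoder distances in meters and your IMU reports yaw in radians (the class, method, and “pose” table names are placeholders for illustration, not an existing WPILib or Limelight API):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class SimpleOdometry {
    private double x = 0.0, y = 0.0;              // field-relative pose, meters
    private double lastLeft = 0.0, lastRight = 0.0;
    private final NetworkTable poseTable =
        NetworkTableInstance.getDefault().getTable("pose"); // hypothetical table name

    /** Call once per loop, e.g. from robotPeriodic(). */
    public void update(double leftMeters, double rightMeters, double yawRadians) {
        double dLeft = leftMeters - lastLeft;
        double dRight = rightMeters - lastRight;
        lastLeft = leftMeters;
        lastRight = rightMeters;

        // Average of the two sides approximates how far the robot center moved.
        double distance = (dLeft + dRight) / 2.0;
        x += distance * Math.cos(yawRadians);
        y += distance * Math.sin(yawRadians);

        // Publish the pose so a driver-station visualizer can read it.
        poseTable.getEntry("x").setDouble(x);
        poseTable.getEntry("y").setDouble(y);
        poseTable.getEntry("yaw").setDouble(yawRadians);
    }
}
```

The visualizer on the driver station would then connect as a NetworkTables client, read the same “pose” entries every frame, and redraw the robot sprite on top of the field sprite.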

Still, consider whether you might run into another robot if you are focused on your visualizer rather than a live camera feed.


When testing the compute 3D feature, we found that the Limelight sometimes incorrectly assigned the model coordinates to the targets on the camera feed. For instance, when we place the Limelight to the right of the target, starting with a negative tx, the Limelight correctly assigns corners 0-3 to the left target and 4-7 to the right one. As we rotate the camera to make tx more positive, it reaches a certain point where the Limelight then assigns corners 0-3 to the right target and 4-7 to the left target, still in counterclockwise order within the individual targets. In the latter case, the corner coordinate data does not match the model and the Limelight cannot calculate any data or create a visualization.

We tried both CSV models on your website’s downloads page, and our tape orientation is pretty accurate. When the corners are assigned correctly, the x and z calculations are accurate as well. Has anyone else experienced anything like this? Would making our own model or solvePnP algorithm help at all? Thanks!

Edit: I realized that the coordinates were switching targets because I had sorted my targets by largest size rather than leftmost. Works great now!
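For anyone who hits the same thing, a hypothetical sketch of that fix is below: sort your own target structures left-to-right by their leftmost corner before matching them to the model, instead of by area. The Target class here is made up for illustration:

```java
import java.util.Comparator;
import java.util.List;

public class TargetSorter {
    public static class Target {
        public double[] cornerX; // pixel x-coordinates of this target's corners
        public double[] cornerY; // pixel y-coordinates of this target's corners

        double leftmostX() {
            double min = Double.POSITIVE_INFINITY;
            for (double x : cornerX) {
                min = Math.min(min, x);
            }
            return min;
        }
    }

    /** Order targets so index 0 is the leftmost target in the image. */
    public static void sortLeftToRight(List<Target> targets) {
        targets.sort(Comparator.comparingDouble(Target::leftmostX));
    }
}
```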

@Brandon_Hjelstrom
I am having problems using the 3D compute feature. When I turn it on, it instantly turns off. I am trying to do it with the dual-target CSV file. Do you know why this is happening? I am using a Limelight 1.

A notification asking you to enable the new high-res mode should appear. These notifications might not appear properly if you are using Edge rather than Chrome/Firefox.

Thanks! That was the problem I had.


Hey @Brandon_Hjelstrom, I was looking at the camtran array values (they’re pretty awesome, btw) and for some reason the “x” value seems to be pretty buggy. Sometimes it’ll read ~30 inches when it should only read ~5 inches. Do you know what the problem could be?

Thanks for adding this. It literally took us 3 minutes to get 3D reconstruction up and running from scratch.

We did run into some problems with the scaled orthographic projection ambiguity, where our angles about the x or y axes would occasionally flip for a frame or two in certain poses because small errors in corner detection (as small as one pixel, even) led to a bad initialization for solvePnP. But nothing that you can’t reject with some clever filtering :slight_smile:
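In case it helps anyone else, one very simple form of that filtering is sketched below: reject camtran samples whose yaw jumps too far from the last accepted value. The camtran ordering (x, y, z, pitch, yaw, roll) is from the Limelight docs; the class name and threshold are my own assumptions and would need tuning:

```java
import edu.wpi.first.networktables.NetworkTableInstance;

public class CamtranFilter {
    private static final double MAX_YAW_JUMP_DEG = 15.0; // assumed threshold, tune on your robot
    private double lastYaw = Double.NaN;

    /** Returns the latest yaw from camtran, or NaN if the sample looks like a flip. */
    public double filteredYaw() {
        double[] camtran = NetworkTableInstance.getDefault()
            .getTable("limelight")
            .getEntry("camtran")
            .getDoubleArray(new double[6]);
        double yaw = camtran[4]; // x, y, z, pitch, yaw, roll

        if (!Double.isNaN(lastYaw) && Math.abs(yaw - lastYaw) > MAX_YAW_JUMP_DEG) {
            return Double.NaN; // likely an orthographic-ambiguity flip; skip this frame
        }
        lastYaw = yaw;
        return yaw;
    }
}
```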


Our team is finding that we only receive data from the Limelight if the object is to the left of the centre. Is anyone else experiencing this issue?

Sweet. Yeah, we can have guides to accurately place the robot every time and just have a position switch. Using the Limelight to account for drift involves some Kalman filtering, which I’m not too familiar with yet. But that’s some good info there, thanks.

For clarity - the data stream stops for the 3D Experimental feed (camtran) if the target is left of centre from the camera. We have two cameras and replicated the issue on both.

We’ve also encountered this.

Thanks for confirming you’re also having a problem @AlexSwerdlow - we’ve been working on the problem for a few hours. @Brandon_Hjelstrom are you aware of the issue and perhaps another hot fix is in the making? Thanks.