Limelight 2019.5 - 3D, Breaking Changes, and Darker Images

With 2019.5 we are introducing the brand new compute3D camera localization feature. Only a handful of teams have even attempted to add this feature to their vision systems, and now it is available to all Limelight 1 and Limelight 2 users.

This is not a silver bullet for this year’s game. We highly recommend thinking of creative ways to use the standard high-speed 90 fps tracking unless this feature is absolutely necessary. You may notice significant noise beyond roughly 5 ft from the target.

https://giant.gfycat.com/LeftHalfBluewhale.gif

All example gifs were created with an LL2 mounted on the side of a kitbot. This is why you will see slight changes in translation during turns.

Features

  • High-Precision Mode and PnP
  • In the following gif, a Limelight 2 was placed 37 inches behind and 14.5 inches to the right of the target.

  • The Limelight was later turned by hand. Notice how the distances remain mostly unchanged:

  • With 2019.4, we introduced corner sending. This allowed advanced teams to write their own algorithms using solvePNP. With 2019.5, this is all done on-board.

  • Upload a plain-text CSV file containing a model of your target (see the illustrative example after this list). We have pre-built models of 2019 targets hosted on our website. All models must have a centered origin and use counter-clockwise point ordering.

  • Enable the new high-res 960x720 mode, and then enable “Solve 3D” to acquire the position and rotation of your Limelight relative to your target.

  • Corner numbers are now displayed on the image for easier model creation.

  • Read all 6 dimensions of your camera’s transform (x, y, z, pitch, yaw, roll) by reading the “camtran” NetworkTables number array (see the reading sketch after this list).

  • Black Level: a new black level slider lets you capture even darker images.
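
For reference, here is what a model file might look like, assuming it is simply one x,y,z corner coordinate per row (in inches). The coordinates below are illustrative placeholders for a centered, counter-clockwise-ordered rectangle, not official 2019 field values:

```
-4.0,-2.0,0.0
4.0,-2.0,0.0
4.0,2.0,0.0
-4.0,2.0,0.0
```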
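
For WPILib Java teams, reading that transform might look like the following minimal sketch. The table and key names are from the docs; the zero-filled default array for the no-target case is an assumption:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightPose {
    /**
     * Returns (x, y, z, pitch, yaw, roll) of the camera in the target's
     * coordinate system. Falls back to all zeros when no 3D solution is
     * available (assumed default; verify on your own setup).
     */
    public static double[] getCamTran() {
        NetworkTable limelight =
            NetworkTableInstance.getDefault().getTable("limelight");
        // "camtran" is published as a 6-entry number array.
        return limelight.getEntry("camtran").getDoubleArray(new double[6]);
    }
}
```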

Breaking Changes

  • The reported vertical FOV for the LL2 has been fixed to match the listed value of 49.7 degrees. This will change your “ty” values.
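
One practical consequence: teams that estimate distance from “ty” with the usual fixed-camera trig will see slightly different numbers after updating and should re-verify their constants. A minimal sketch of that pattern, with hypothetical mounting values:

```java
// Fixed-camera distance estimate from the vertical offset angle "ty":
// distance = (targetHeight - cameraHeight) / tan(mountAngle + ty)
// The height and angle constants below are hypothetical examples.
public static double distanceInches(double tyDegrees) {
    double cameraHeightIn = 10.0;  // lens height above the floor (example)
    double targetHeightIn = 28.5;  // target center height (example)
    double mountAngleDeg  = 20.0;  // upward camera pitch (example)
    return (targetHeightIn - cameraHeightIn)
            / Math.tan(Math.toRadians(mountAngleDeg + tyDegrees));
}
```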

Bug Fixes

  • Fix stream-only crash that could occur when fisheye USB cameras were attached.
  • Fix rare hang caused by networking-related driver.
  • Corner approximation is now always active.

Just tested the 3d vision approximation, and it works great! Is it possible to do SolvePNP with a USB camera and our own distortion coefficients/camera matrix?

One thing that we’re considering is mounting the limelight pitched down 30 degrees. Is it possible to add that angle offset to the limelight for 3d vision processing, or is that something that we have to do robot-side?

After installing the new update on my team’s Limelight, I can’t access the dashboard under the default address (http://limelight.local:5801/) but I can access the camera stream under the default (http://limelight.local:5800/). Any explanation for this? I tried using both Edge and Firefox and it has worked fine in the past with either browser. After reinstalling firmware 2019.04 onto the Limelight, everything works as expected again, so this seems like a firmware issue, but I could be wrong.

You can pitch your limelight up or down and the 3D reconstruction (solvePNP) still works just fine (ours is mounted at an angle too). As for doing this on the secondary camera, we don’t have any vision processing support for the secondary camera. Its purpose is to be a second driver camera or a rear-view camera.

We are trying to connect to the limelight via limelight.local:5801, and we are just continually getting “New Pipeline” in the logs and a frozen application.

So our robot would just have to transform the target location from one relative to the camera to a robot-centric one?
Also, what’s the camtran entry format?
Found it: (x, y, z, pitch, yaw, roll)

We had one other report of this problem with the new code so we are trying to reproduce it here. Send us an email at support@limelightvision.io

Actually, the coordinates we are giving you are target-centric. So the origin (0,0,0) is the target, and the location is the location of your camera in the target coordinate system. The angles are the pitch, yaw, and roll of your camera. We found this was the most useful way to present the information.
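
If you do want the target expressed relative to the camera instead, inverting the planar part of that transform is straightforward. This is a sketch under the assumptions that pitch and roll are negligible and that x, z, and yaw follow standard 2D rotation conventions; verify the signs against your own practice target:

```java
// Invert a planar pose: camtran gives the camera's (x, z, yaw) in the
// target frame; this returns the target's (x, z) in the camera frame.
// Assumes pitch/roll ~ 0; sign conventions are assumptions to verify.
public static double[] targetInCameraFrame(double[] camtran) {
    double x   = camtran[0];                  // camera x in target frame
    double z   = camtran[2];                  // camera z in target frame
    double yaw = Math.toRadians(camtran[4]);  // camera yaw in target frame
    // Rotate the negated translation back into the camera's axes.
    double tx =  Math.cos(yaw) * -x + Math.sin(yaw) * -z;
    double tz = -Math.sin(yaw) * -x + Math.cos(yaw) * -z;
    return new double[] { tx, tz };
}
```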

@pratBruns can you try using chrome? The new pnp visualization may be problematic on other browsers. Looking into this.

Edit - Found the issue with other browsers, pushing 2019.5.1 shortly

Hey, I’ve flashed the limelight and everything seems to be good, but the position of the limelight in the 3D dashboard seems to be moving all over the place even though it’s stationary. Do you know how I can fix this?

@AFlyingKiwi First, your target must be precisely constructed to match the field drawings. The 3D solving won’t work well if your tape targets are not perfect 5.5" x 2" rectangles angled towards each other properly.

Also, what is the distance measurement listed in your visual? If you are centered and parallel to the target, the pose estimation will start to get jumpy past ~ 50". If you are not centered and parallel, you can sometimes go a bit further (70 - 80") before the measurements become too noisy.

@Erik2175 @pratBruns 2019.5.1 has been posted to our downloads page. Thanks for the feedback.

Thanks so much for fixing the problem quickly!

How are the target centric values passed back?

Probably a dumb question, but what is the reasoning behind increasing resolution? Does it just make the points more accurate, or allow for more processing power for solvepnp?

You can get the 3D position and orientation values using the network table entry named ‘camtran’. We just updated the docs so you might have to refresh the webpage.
http://docs.limelightvision.io/en/latest/networktables_api.html

Regarding the increased resolution: at low resolution the coordinates of the corner points in the image are not precise enough to get a reliable, stable solution. You really need accurate points and you need your model to match the real-life target very closely.
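
As a rough back-of-the-envelope illustration (assuming the standard low-res stream is 320x240 and using the 49.7-degree vertical FOV mentioned in the breaking changes): at 240 rows, each pixel spans about 49.7 / 240 ≈ 0.21 degrees, while at 720 rows it spans about 49.7 / 720 ≈ 0.07 degrees. Each detected corner is therefore localized roughly three times more finely, which directly tightens the solvePNP solution.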

Haven’t tried the update yet. Is there a way to switch between the high res and low res through network tables?

@Brandon_Hjelstrom Thanks so much! I do have another question, though: what exactly does “yaw” measure? Is it just the amount the robot has to turn to be perpendicular to the vision target?

I should mention I’m having a weird issue where 3D tracking is lost even though the target and corners are tracking perfectly and I’m close to the target (<30 in). Moving it sideways or back seemed to work, though.

This is the first I’ve heard of a 3D dashboard. Sounds interesting; care to elaborate?