Hi Everyone,
For those who haven’t been introduced to Limelight, it is the easiest way to add high-speed vision to any FRC robot. It is the first smart camera/vision sensor that combines tracking software, networking, video streaming, an easy web interface for tuning, and 400 lumens of light output in an easy-to-mount box. Our beta testers have typically added vision tracking to their robots in under an hour.
Initial launch video:
Website:
Docs: http://docs.limelightvision.io/en/latest/
Limelight also enables tracking without gyroscopes or encoders, as seen here:
987’s software student and a 2017 mechanical grad took 987’s 2015 robot from zero to sensorless tracking in two hours with a very small amount of code. This included flashing the roboRIO, updating toolchains, etc. The documentation now has multiple case studies with code to help add tracking to any robot.
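The case studies all follow the same basic pattern: a proportional turn on the horizontal offset Limelight reports, with no gyro or drive encoders involved. Here is a minimal Java sketch of that idea (this is not 987’s actual code; the gain, seek speed, and class name are placeholders you would tune on your own robot):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class SeekAndAim {
    // Hypothetical proportional gain and seek speed; tune on your own robot.
    static final double KP_AIM = 0.03;
    static final double SEEK_SPEED = 0.3;

    final NetworkTable limelight =
        NetworkTableInstance.getDefault().getTable("limelight");

    /** Returns {left, right} motor commands that turn the robot toward the target. */
    public double[] aim() {
        double tv = limelight.getEntry("tv").getDouble(0.0); // 1.0 if a target is visible
        double tx = limelight.getEntry("tx").getDouble(0.0); // horizontal offset in degrees

        if (tv < 1.0) {
            // No target in view: rotate slowly until one appears.
            return new double[] {SEEK_SPEED, -SEEK_SPEED};
        }
        // Proportional control: drive the horizontal offset toward zero.
        double steer = KP_AIM * tx;
        return new double[] {steer, -steer};
    }
}
```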
Here’s another example. Tracking and range finding were tuned entirely with Limelight’s crosshair calibration feature. The only other sensor used in this video is an encoder on the flywheel:
The full source code for this test is available in the docs. We were actually very surprised by how little code we needed to make a capable STEAMWORKS shooter with Limelight.
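That video tuned range entirely with the crosshair calibration feature, so no range math was required. For teams that do want an explicit range estimate, the usual approach is simple camera geometry on Limelight’s vertical offset “ty”. In this sketch the mount height, target height, and mount angle are placeholders you would measure on your own robot:

```java
import edu.wpi.first.networktables.NetworkTableInstance;

public class RangeEstimator {
    // All three constants are placeholders; measure them on your robot.
    static final double MOUNT_HEIGHT_M = 0.50;  // lens height above the floor
    static final double TARGET_HEIGHT_M = 2.00; // target center height above the floor
    static final double MOUNT_ANGLE_DEG = 30.0; // upward tilt of the camera

    /** Estimates horizontal distance to the target from the vertical offset "ty". */
    public static double distanceMeters() {
        double ty = NetworkTableInstance.getDefault()
            .getTable("limelight").getEntry("ty").getDouble(0.0);
        double angleRad = Math.toRadians(MOUNT_ANGLE_DEG + ty);
        return (TARGET_HEIGHT_M - MOUNT_HEIGHT_M) / Math.tan(angleRad);
    }
}
```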
Limelight works with any programming language, and there is no complicated setup involved. No libraries, toolchains, soldering, crimping, networking code, or Linux required. The latency is very low, so you can completely forget about latency compensation as well. We want to level the playing field when it comes to vision, specifically for teams that lack software mentors or an abundance of software students.
We have a lot to announce today.
First, batch 2 will go on sale on Saturday at 4:00 PM PST and start shipping late in week 1.
Second, batch 1 will start shipping tomorrow. Many thanks to those who pre-ordered.
Also, after more testing, we now recommend adding a small network switch to your robot if you wish to use vision while tethered at an event.
Software update 2018.0 (1/3/18)
On top of a ton of new case studies, more detailed documentation, and a full example program for an autonomous STEAMWORKS shooter, the software has received a major upgrade.
FEATURES
New Vision Pipeline interface:
- https://thumbs.gfycat.com/UnfitLankyHadrosaurus-max-14mb.gif
- Add up to 10 unique vision pipelines, each with custom crosshairs, thresholding options, exposure, filtering options, etc.
- Name each vision pipeline.
- Mark any pipeline as the “default” pipeline.
- Instantly switch between pipelines during a match with the new “pipeline” NetworkTables value (see the switching sketch after this list). This is useful for games that have multiple vision targets (e.g. the gear peg and boiler from 2017), and for teams that need slightly different crosshair options per robot, field, alliance, etc.
- Download vision pipelines from Limelight to back them up or share them with other teams.
- Upload vision pipelines to any “slot” to use downloaded pipelines.
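Here is a minimal sketch of what pipeline switching looks like from robot code. Java is shown, but any language with NetworkTables access works; the class name and index are illustrative, and the table name assumes no hostname suffix:

```java
import edu.wpi.first.networktables.NetworkTableInstance;

public class PipelineSwitcher {
    /** Selects one of the ten Limelight pipeline slots (0-9). */
    public static void setPipeline(int index) {
        NetworkTableInstance.getDefault()
            .getTable("limelight")   // use "limelight-<suffix>" if you set a suffix
            .getEntry("pipeline")
            .setNumber(index);
    }
}
```

A team could, for example, bind setPipeline(0) to a gear-peg pipeline and setPipeline(1) to a boiler pipeline on separate driver buttons.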
Target “Grouping” option:
- Instantly prefer targets that consist of two shapes with the “Dual” grouping mode. “Single” and “Tri” options are also available.
- https://thumbs.gfycat.com/ScalyDeficientBrahmanbull-size_restricted.gif
New Crosshair Calibration interface:
- “Single” and “Dual” crosshair modes.
- “Single” mode is what Limelight utilized prior to this update. Teams align their robots manually, and “calibrate” to re-zero targeting values about the crosshair.
- “Dual” mode is an advanced feature for robots that need a dynamic crosshair that automatically adjusts as a target’s area (i.e. distance to target) changes. We’ve used this feature on some of our shooting robots, as some of them shot with a slight curve. This feature will also be useful for robots with uncentered and/or misaligned Limelight mounts.
- Separate X and Y calibration.
Misc:
- Add Valid Target “tv” key to NetworkTables.
- Add Targeting Latency “tl” key to NetworkTables. “tl” measures the vision pipeline execution time. Add at least 11 ms for capture time.
- Draw an additional rectangle to help explain the aspect ratio calculation.
- Remove the throttling feature and lock Limelight to 90fps.
- Disable focusing on most web interface buttons. This fixes a workflow problem reported by teams who would calibrate their crosshairs, then press “enter” to enable their robots.
- Post three “raw” contours and both crosshairs to NetworkTables.
- Access a raw contour with tx0, tx1, ta0, ta1, etc.
- Access both raw crosshairs with cx0, cy0, cx1, cy1.
- All raw x/y values are in normalized screen space (-1.0 to 1.0). See the reading sketch after this list.
- Add “suffix” option to the web interface. This allows users to add a suffix to their Limelights’ hostnames and NetworkTables names (e.g. limelight-boiler). This feature should only be used by teams that intend to run multiple Limelights on a single robot.
- Display the image version on the web interface.
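As a quick illustration of the new keys, here is a hedged Java sketch that reads them; the class name is arbitrary and the default values are placeholders:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightReader {
    public static void printTargetData() {
        NetworkTable table = NetworkTableInstance.getDefault().getTable("limelight");

        double tv = table.getEntry("tv").getDouble(0.0); // 1.0 when a valid target exists
        double tl = table.getEntry("tl").getDouble(0.0); // pipeline latency in ms; add ~11 ms for capture

        // Raw contours and crosshairs, in normalized screen space (-1.0 to 1.0):
        double tx0 = table.getEntry("tx0").getDouble(0.0);
        double ta0 = table.getEntry("ta0").getDouble(0.0);
        double cx0 = table.getEntry("cx0").getDouble(0.0);
        double cy0 = table.getEntry("cy0").getDouble(0.0);

        System.out.printf("tv=%.0f tl=%.1fms tx0=%.2f ta0=%.2f cx0=%.2f cy0=%.2f%n",
            tv, tl, tx0, ta0, cx0, cy0);
    }
}
```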
Optimizations
- Decrease networking-related latency from ~10 ms to ~0.2 ms (Thanks Thad House)
- Move stream encoding and JPEG compression to the third core, eliminating a 10 ms hitch (25-30 ms with two cameras) seen every six frames.
- Drop steady-state pipeline execution time to 5 ms with SIMD optimizations.
- http://docs.limelightvision.io/en/latest/_images/20180_latency.png
- New latency testing shows ~22 ms total latency from photons to targeting information.
- Upgrade NetworkTables to v4 (Thanks Thad House)
- Optimize the contour filtering step. Latency no longer spikes when many contours exist.
- Much improved hysteresis tuning.
- Significantly improve responsiveness of web interface <-> Limelight actions.
Bugfixes
- Fix minor area value inaccuracy which prevented the value from reaching 100% (it previously maxed out around 99%).
- Fix half-pixel offset in all targeting calculations.
- Fix camera stream info not populating for NT servers started after Limelight’s boot sequence. Camera stream info is now refreshed regularly.
- Fix bug which caused the aspect ratio to “flip” occasionally.
- Force standard stream output (rather than thresholded output) in driver mode.
- Fix bug which prevented LEDs from blinking after resetting networking information.
Finally, we want to thank our testers for helping us get to this point. We chose a great mix of experienced and inexperienced teams, and their feedback directly translated to changes throughout development:
Alpha testing + **invaluable** electronics help:
Kiet Chau and Team 1538
Alpha testing:
Norman Morgan, Greg McKaskle and Team 2468
Beta testing:
Corey Applegate and Team 3244
Marshall Massengill and Team 900
Brian Selle and Team 3310
Kevin Hjelstrom and Team 2605
Adam Heard and Team 973
Solomon Greenburg and Team 2898
Best,
The Limelight Team