Limelight Extensibility

Over the summer we had a bit of a bootcamp to knock off the rust and get the students back up to speed. One of the areas we focused on was computer vision.

Our team has a Limelight and has used it with success in the past. The goal we were shooting for was to detect shapes and have the robot navigate towards the desired shape. A lofty goal, but a good way to explore the capabilities of the system.

I’m familiar(ish) with OpenCV and could help the kids navigate GRIP to create the right filters and upload them to the Limelight via a GRIP export. But polygon detection seems to be outside the Limelight’s built-in capability, and thus required some custom OpenCV code.

Running that OpenCV code on the RIO was god-awfully slow. Like one frame per second. The code could probably be optimized, but it led me to the following question.
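For context, the custom pass was essentially color thresholding plus polygon approximation, something along these lines (a minimal sketch; the HSV bounds, vertex count, and area cutoff are placeholders for whatever your GRIP pipeline spits out):

```python
import cv2
import numpy as np

def find_polygons(frame, vertex_count=4):
    """Find contours that approximate to the desired number of vertices."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Placeholder HSV bounds -- substitute the thresholds from your GRIP pipeline.
    mask = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([90, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for c in contours:
        # Approximate the contour to a polygon; tolerance scales with perimeter.
        approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
        if len(approx) == vertex_count and cv2.contourArea(approx) > 100:
            polygons.append(approx)
    return polygons
```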

Does anyone know if the Limelight has the capability to run arbitrary OpenCV code? I assume that’s what it’s doing under the hood, and the form factor and specs make it a pretty nice co-processor. It would also be slick since we already have one. :slightly_smiling_face:

I know the “.ll” files you upload to the Limelight contain some type of pseudo-code structure, but so far I haven’t found any documentation that would lead me to believe this is a full-fledged API. I’m wondering if anyone has experience running more advanced workflows on the Limelight, or if it’s time to start playing around with more open vision systems.

2 Likes

As far as I know there’s not a direct way to do this with the integrated software, so you’re going to have to look at other software options.

The WPILibPi image is designed for fully custom OpenCV programs, but it runs on a Raspberry Pi rather than on the Limelight. It has some examples which show how to integrate GRIP pipelines.
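The basic loop in those examples looks roughly like this (a sketch based on the robotpy-cscore API the image ships with; resolution and stream name are arbitrary):

```python
# Sketch of the WPILibPi-style vision loop using robotpy-cscore.
import numpy as np
from cscore import CameraServer

cs = CameraServer.getInstance()
camera = cs.startAutomaticCapture()
camera.setResolution(320, 240)

sink = cs.getVideo()                         # frames in from the camera
output = cs.putVideo("Processed", 320, 240)  # processed frames back out

img = np.zeros((240, 320, 3), dtype=np.uint8)
while True:
    frame_time, img = sink.grabFrame(img)
    if frame_time == 0:
        output.notifyError(sink.getError())
        continue
    # ...run your GRIP-exported or hand-written OpenCV code on img here...
    output.putFrame(img)
```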

PhotonVision can be installed on the Limelight; my understanding is it’s still largely a canned vision solution, but it’s open source, so it should be possible to add your own OpenCV code to it.

3 Likes

The Limelight, hardware-wise, is just a Raspberry Pi under the hood, so writing your own custom software isn’t much more complicated than flashing normal Raspbian to the storage and writing your own code from there. You’ll need to handle the LEDs and such yourself, but luckily someone has already documented the Limelight pinout here:
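For what it’s worth, once you’re on plain Raspbian the LED side is just GPIO work. A minimal sketch (the pin number is a placeholder; use the one from the pinout doc):

```python
# Minimal sketch of driving the LED array once you're on plain Raspbian.
# The pin number is a placeholder -- check the pinout doc above for the
# GPIO the LED driver is actually wired to.
import RPi.GPIO as GPIO

LED_PIN = 18  # placeholder BCM pin number

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

def set_leds(on: bool) -> None:
    GPIO.output(LED_PIN, GPIO.HIGH if on else GPIO.LOW)
```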

7 Likes

Thanks @Peter_Johnson and @bobbysq for the info, this is exactly what I was looking for.

I’ll have to dig through the links to see how much can be repurposed versus starting from scratch on the underlying hardware.

Letting my mind wander a bit, it might be nice to build something that abstracts the underlying hardware away and exposes a clean API for the lights, camera, etc. It seems like parts of WPILib (CameraServer) do that, but I haven’t dug in enough to understand the whole thing.
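To make that concrete, something like this is what I’m picturing (purely hypothetical; none of these classes exist, the names are made up):

```python
# Entirely hypothetical -- these classes don't exist anywhere; the names
# are made up to illustrate the kind of abstraction I mean.
from abc import ABC, abstractmethod

class VisionCoprocessor(ABC):
    """A coprocessor-agnostic interface for the lights, camera, etc."""

    @abstractmethod
    def set_leds(self, on: bool) -> None:
        """Turn the illumination LEDs on or off."""

    @abstractmethod
    def grab_frame(self):
        """Return the latest camera frame as a numpy array."""

class LimelightHardware(VisionCoprocessor):
    """One possible backend: the Limelight's Pi, LEDs, and camera."""

    def set_leds(self, on: bool) -> None:
        ...  # drive the LED GPIO pin from the pinout doc

    def grab_frame(self):
        ...  # pull a frame via cscore/OpenCV
```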

If anyone’s aware of projects attempting this, let me know. I’d hate to reinvent the wheel.

I wonder if Photon does this enough for you. Also, I wonder if Photon could inherently solve the polygon issue. We have had some success with it. You could try it out on a Pi and see, without needing to reflash the Limelight yet.

1 Like

Hi! I’m part of the Photon team. A new feature is coming out soon (technically it’s already out, but we are having issues with CI and releases at the moment) that will let you create a pipeline to detect a shape (circle, triangle, quadrilateral, or polygon at the moment), which I think is what you want to do.
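On the robot side, once a shape pipeline is tracking, the results come across NetworkTables like our other pipelines. A rough sketch of the consumer end (double-check the entry names against your installed version’s NetworkTables docs; “mycamera” is a placeholder for whatever you named the camera in the UI):

```python
# Sketch of the roboRIO side using pynetworktables. Entry names are
# assumptions based on the PhotonVision NetworkTables docs -- verify
# against your version. "mycamera" is a placeholder camera name.
from networktables import NetworkTables

NetworkTables.initialize(server="10.TE.AM.2")  # replace TE.AM with your team number
table = NetworkTables.getTable("photonvision").getSubTable("mycamera")

def get_target():
    """Return the best target's yaw/pitch/area, or None if nothing is tracked."""
    if not table.getBoolean("hasTarget", False):
        return None
    return {
        "yaw": table.getNumber("targetYaw", 0.0),
        "pitch": table.getNumber("targetPitch", 0.0),
        "area": table.getNumber("targetArea", 0.0),
    }
```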

You can view the PR here: Add colored shape to the UI by mcm001 · Pull Request #258 · PhotonVision/photonvision (github.com)

More on PhotonVision: PhotonVision

If you would like, I can let you know once it is ready for usage.

6 Likes

Good to hear @mdurrani834! I’ll check out the PR to get an idea of the code. By that time I’m sure you guys will have the build out :crossed_fingers:

2 Likes

Hi! Forgot to keep you updated on this, sorry.

You are now able to use colored shape pipelines in the latest version of PhotonVision.

The PR for the docs should be merged soon as well.

Please feel free to let me know if you run into any issues / have questions.

4 Likes

These are incredible. I have been playing with them, and I am going to teach the team how to use them this week.

1 Like

Thanks for the heads up! We’ll update to the latest release and see how things go.

1 Like

@mdurrani834 - Had a chance this week to check out the feature. It looks like it will be a great tool for trying to pick up the cargo. So, a big :+1: here.

Do you happen to know how gloworm-vision/pi-gen does releases (or even where I should ask)? Our testing so far has been off a laptop, but the team’s excited to get it onto our Limelight and test it out.

1 Like

The latest Gloworm Pi image (here) ships with PhotonVision 2021.1.3, and since that PhotonVision release we have not changed anything that required a new image to be made.

As of right now, you can upgrade the image by SSHing into the device and replacing the .jar file with the latest one from our GitHub (currently v2022.1.2; CD post coming soon :slight_smile:). Specific instructions for that process are here.

Once you have a device on v2022.1.2 (or later), you can upgrade it from the web UI via the settings page by simply uploading a new .jar.

1 Like

I’m glad I came across this thread! Take a look at our latest update.

The Python scripting update enables arbitrary OpenCV + NumPy code, arbitrary input data from the robot, and arbitrary outbound data from the Limelight. We’ve wanted to enable this for a long time, but we wanted to get the experience right. No dev environment is required, and code changes are applied instantly.
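To give a feel for it, a minimal pipeline looks something like this (a sketch; the HSV thresholds are placeholders):

```python
# Minimal sketch of a python-scripting pipeline. The Limelight calls
# runPipeline() once per frame, passing the image and an array of numbers
# from the robot (llrobot), and expects back a contour to target, the
# image to stream, and up to 8 numbers to send to the robot (llpython).
import cv2
import numpy as np

def runPipeline(image, llrobot):
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    # Placeholder thresholds -- tune for your target.
    mask = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([90, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    largest_contour = np.array([])
    llpython = [0] * 8
    if contours:
        largest_contour = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(largest_contour)
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 255), 2)
        llpython = [1, x, y, w, h, 0, 0, 0]  # flag + bounding box back to the robot

    return largest_contour, image, llpython
```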

Let me know if you have any change requests.

4 Likes

Can you run multiple pipelines at the same time, one per camera? For example, a primary pipeline used for tracking the goal with the built-in camera, and a secondary pipeline for tracking game pieces using a USB camera?

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.