While researching vision systems for our robot, one of our mentors, @nxmq99, suggested the Gloworm and PhotonVision as an alternative to the ubiquitous Limelight.
We need a system that can both locate the power cells for the Galactic Search challenge and track the targets for shooting. The Gloworm has a USB-C port that can be used for a second webcam (or other peripheral device), and PhotonVision supports multiple cameras. The Limelight supports GRIP out of the box (so we could make a custom pipeline for the GSC), but it does not support multiple cameras, so we would need either two Limelights or a mechanism to adjust the angle of one. PhotonVision does not use GRIP or custom pipelines, which means we would need to do some extra work with WPILib's FRCVision to get that part working.
Limelight is hard to beat on ease of use: 30 minutes from opening the box to getting data is doable. I found the Gloworm easy to set up as well (it took about 20-30 minutes?), but I didn't try hooking it up to a roboRIO. That said, most of the work in getting good vision is interpreting the data and making accurate shots based on it.
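To give a sense of the "interpreting the data" part: both the Limelight and PhotonVision report a vertical angle to the target, and the usual way to turn that into a shooting distance is fixed-camera trigonometry. Here's a minimal self-contained Java sketch; the heights, mount angle, and the `distanceMeters` helper name are made-up example values, not from either product's API:

```java
public class TargetDistance {
    // Estimate horizontal distance to a vision target from the camera's
    // reported vertical angle, using the classic fixed-camera formula:
    //   distance = (targetHeight - cameraHeight) / tan(mountAngle + ty)
    // Heights in meters, angles in degrees.
    static double distanceMeters(double cameraHeightM, double targetHeightM,
                                 double mountAngleDeg, double tyDeg) {
        double angleRad = Math.toRadians(mountAngleDeg + tyDeg);
        return (targetHeightM - cameraHeightM) / Math.tan(angleRad);
    }

    public static void main(String[] args) {
        // Made-up example: camera mounted 0.6 m up, tilted 25 degrees back,
        // goal tape at 2.5 m, and the camera reports the target 10 degrees
        // above its crosshair.
        double d = distanceMeters(0.6, 2.5, 25.0, 10.0);
        System.out.printf("%.2f m%n", d);
    }
}
```

Once you have a distance like this, the remaining work is mapping distance to shooter speed/angle, which is where the real tuning time goes.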
Either solution can work. The Gloworm’s lower price tag makes it a good choice if you’re on a budget, but the Limelight has good support year after year.
Hardware-wise, they're very similar: both use the same computer (a Raspberry Pi Compute Module 3) under the hood. The Limelight does also support a secondary USB webcam, via its USB-A port. The Limelight accepts power via either passive PoE or a Weidmueller connector, whereas the Gloworm is PoE-only, but you want to be using PoE anyway. The Gloworm also allows dimming the LEDs, whereas the Limelight's are either on or off.
Software-wise, the fundamental difference is that the Limelight uses proprietary software, whereas the Gloworm (PhotonVision, technically) is open-source*. Currently I'd consider the software UIs and feature sets pretty equal, but given how responsive the Gloworm/PhotonVision devs are on Discord, I'd favor the Gloworm for support. Both sets of software have basically the same architecture: a web UI, "pipelines" of computer vision operations, and parameter sliders for each operation so that you can tune in real time.
Purely in terms of functionality and flexibility I’d give the Gloworm a slight advantage, but the price difference makes it a no-brainer IMO.
* Edit: To clarify, the Gloworm hardware is also open-source. If you want, you can spin your own PCBs, modify the circuitry, modify the case and print it yourself, or whatever.
It's probably worth noting that if you need something right now, Limelight is your only option. Gloworm is currently sold out and, per comments by @fharding on Discord, likely won't be back in stock before this summer. However, if you already have a regular Raspberry Pi 3 or 4 around, you can use PhotonVision on it in the meantime.
We currently plan on using a Limelight for targeting the goal in shooting challenges as well as PhotonVision on a Raspberry Pi for tracking power cells.
Disclaimer: This is based on my interpretation of the rules; if someone knows whether this is valid or not, please correct me.
I'll just mention that since there isn't a challenge that needs both intaking and shooting, you can have one LL/Gloworm/whatever, point it toward the goal for the shooting challenges, and position it so that it sees the power cells for the search challenge. Unless this is ruled illegal, this is likely what my team will be doing.
Why I think this is legal: the rules state that you can only make minor modifications to the robot between challenges, and they give removing whole mechanisms as an example of an allowed change. If removing an entire mechanism counts as minor, repositioning a camera surely does too.
A super inexpensive option is to simply load WPILibPi (formerly FRCVision) onto a Raspberry Pi and plug in a few webcams. This approach is compatible with GRIP and is similar to using either the Limelight or the Gloworm. The big thing you get with those other offerings is well-designed hardware, specifically the high-intensity LED lighting.
IMO the big thing Limelight has going for it is proven success on the field. Teams know they can buy one and be nearly guaranteed they’ll be tracking targets in an hour.
I believe gloworm/photonvision will be darn close to this really soon (maybe already there), it just doesn’t have the same level of proven track record on the field. Yet.
It's a bit interesting, actually: for the couple hundred dollars you save by going with not-Limelight, how many extra hours of work would you be willing to spend? Do the division and see what your labor rate is (kinda).
We have a Limelight and have recently switched over from the native Limelight software running on it to PhotonVision.
It sounds like many of the people here recommending the Limelight have only tried it and not PhotonVision. I can say the thresholding features are extremely similar. The huge differences come when analyzing multiple targets (as in Galactic Search): the Limelight can only track one "target," though it has allowances to represent multiple objects as one (like the slanted lines in Deep Space), whereas PhotonVision can track up to 5 targets simultaneously, each reporting independent information about where it is. The other huge improvement is that PhotonVision supports running pipelines on multiple cameras. Yes, you can have extra cameras hooked up via USB and run 3 (or more, up to the USB bandwidth limit) different pipelines simultaneously. The Limelight supports hooking up an extra camera, but its feed can only be passed through to the driver station with no analysis.
Lastly, updates and OSS. PhotonVision has been constantly adding new features (I hear good things about a colored-shapes feature coming soon?). It was only released fairly recently and has been making great progress, and its developers are extremely active on Discord. By contrast, the last update we got for the Limelight was almost a year ago, and the last feature update was over a year ago. I really hope the Limelight team has been up to amazing things in that time, but I'm not sure. I'm also a huge fan of OSS and all of the benefits it provides.
All that said, I have used the Limelight successfully in multiple competitions and think it is a great product. I have not actually used the Gloworm (our Limelight is running the Gloworm's image, however), so I can't really evaluate the hardware. I can say that when we need another camera, it will be a Gloworm.
Curious: how much work was this (in terms of time and the level of technical expertise required)? Is it something that could be put into a (fairly straightforward) document? I'd assume there aren't too many "gotchas" and that it was certainly possible; this is just the first I'd seen of someone doing it.
We have both a Limelight and a Gloworm on the robot currently. The Limelight tracks the goal and the Gloworm tracks the balls. The Limelight posts values to NetworkTables, and we pull the values from there. The Gloworm is set up as an object in Java, and we call functions on it (photonlib, which as far as I can tell also communicates over NetworkTables under the hood).
As far as wiring them up: the Limelight has a network cable plus a power cable running to it, while the Gloworm has just the network cable and uses PoE for power, which is nice since you don't have to run the additional wires to it.
All of the issues I had were minor, and most of them came from not really having a guide (I got instant help on Discord, however). I expect that once the guide is up, it will be as easy as flashing a new version of the Limelight software (it is essentially the same process).
I just want to note that if you have a Pi that can run WPILibPi, then it can also run PhotonVision out of the box. I would choose PhotonVision on your Pi if you want something just like the Gloworm on a very cheap device, and WPILibPi if you're up for more fiddling and want to write your own vision pipeline.