Limelight, Object Detection, and Trajectory Creation

After a few weeks of Limelight setup we finally got it up and running, and I began my first test yesterday, which consisted of our Limelight following a target. One issue I noted is that when we display AprilTags on a computer screen, the Limelight takes a long time to detect them. Would this be alleviated with a paper copy, since the computer screen is reflective?

Additionally, my teammates and I were talking and I'm confused about the Limelight's object detection: one member is working on training an AI to detect objects from last year's game, while another member is telling me that isn't possible. What would we do to detect objects such as the cone and the cube from last year's game under different lighting? My idea was just to use our Pixy camera for that.

Finally, as we continue our testing, I want to get into the basics of generating trajectories to the closest object/closest scoring area, so I was wondering where I would get started on this. (All the tests being done are with the Charged Up game in mind. Thank you for any feedback; this is an extremely long post.)

Last year we used a Limelight 2+ with a Google Coral connected to it, with separate pipelines for cone and cube detection for our auto-pickup command. For your Limelight processing AprilTags slowly, you may want to double-check that your camera is set to a higher FPS; ours was at 320x240, I think, running 30 FPS (I'll double-check tonight when I'm at practice). As for AprilTags, we used a black-and-white global shutter camera hooked up to a Raspberry Pi, which we ran with PhotonVision. We used the black-and-white camera because there is less information for it to process with fewer colors, and we used a global shutter camera so you do not get any tearing of the image when the camera moves fast.
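
For reference, a minimal sketch of reading AprilTag results from a PhotonVision camera with PhotonLib; the camera name "apriltag-cam" is a placeholder and must match whatever name you set in the PhotonVision UI:

import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

public class AprilTagVision {
    // "apriltag-cam" is a placeholder; it must match the camera name in the PhotonVision UI
    private final PhotonCamera camera = new PhotonCamera("apriltag-cam");

    public void periodic() {
        PhotonPipelineResult result = camera.getLatestResult();
        if (result.hasTargets()) {
            PhotonTrackedTarget target = result.getBestTarget();
            int tagId = target.getFiducialId(); // ID of the detected AprilTag
            double yaw = target.getYaw();       // horizontal offset in degrees, useful for aiming
        }
    }
}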


Hi, sorry it took a while to respond, but I had a question about the Limelight pipelines. One of the members of the team said he's training an AI, which I'm assuming we can use as a pipeline, so my question is: if we do make separate pipelines, is there any way to swap between them during a match? We do have a Pixy camera, a global shutter camera, and a Pi, so we could use one for AprilTags and one for object detection, but it would be easier if it could all be consolidated onto one device.

Yes, you can switch them during a match; check out Limelight's documentation here: Complete NetworkTables API | Limelight Documentation
If you are using the LimelightHelpers library, I'm sure you can also do it through there.
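
For example, with LimelightHelpers it should be a one-liner (assuming the LimelightHelpers.java file has been copied into your robot project and your camera uses the default name "limelight"):

// Switch to pipeline 1; "limelight" is the default camera name
LimelightHelpers.setPipelineIndex("limelight", 1);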

I think I'm being dumb, but PhotonVision is what's used for object tracking, and then from there we can train it to detect different objects and just upload that to a separate pipeline on the Limelight?

I'm assuming you just download the file from PhotonVision and upload it to the Limelight.

PhotonVision is just software you can use for vision with a coprocessor, like how Limelights have their own software where you set up and tune the pipelines and such.

How we swapped our pipelines mid-match was by using this:

limelightNetworkTable.getEntry("pipeline").setDouble(activePipelineId);

The activePipelineId is a zero-indexed number (whatever you see as the pipeline ID in the Limelight web interface at limelight.local or at your Limelight's static IP).
(Our limelight subsystem on github)
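
For context, a minimal sketch of how that limelightNetworkTable can be obtained, assuming the default "limelight" table name:

import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightPipelines {
    // "limelight" is the default table name; it changes if the camera has been renamed
    private final NetworkTable limelightNetworkTable =
        NetworkTableInstance.getDefault().getTable("limelight");

    // Pipeline IDs are zero-indexed, matching the IDs shown in the web interface
    public void setPipeline(int activePipelineId) {
        limelightNetworkTable.getEntry("pipeline").setDouble(activePipelineId);
    }
}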

We have a Google Coral for object tracking, which is just plugged into the Limelight running Limelight's software with 3 pipelines: one to pick up cones, one for cubes, and another for aiming while shooting cones into the mid row.
(Limelight’s neural network docs)
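
As a rough sketch of what an auto-pickup command can do once the detector pipeline is selected, it can read the Limelight's tv (valid target) and tx (horizontal offset) entries; the kP constant here is a made-up placeholder, not a value from our robot:

import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class AutoPickup {
    private final NetworkTable limelight =
        NetworkTableInstance.getDefault().getTable("limelight");

    /** Returns a steering correction toward the detected game piece, or 0 if none is seen. */
    public double getSteer() {
        double tv = limelight.getEntry("tv").getDouble(0.0); // 1.0 when a target is detected
        double tx = limelight.getEntry("tx").getDouble(0.0); // horizontal offset in degrees
        double kP = 0.02; // placeholder tuning constant; tune for your drivetrain
        return (tv >= 1.0) ? kP * tx : 0.0;
    }
}

You would feed that correction into your drivetrain while driving forward to close in on the game piece.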

We have our AprilTag camera running the PhotonVision software (just one pipeline, for AprilTags). One warning I have is that the Pi does occasionally lose the pipeline, so we downloaded it, and I re-upload it before every match.
