Help using ArUco markers

My team wants to start using ArUco markers for vision. This is a new concept to me and I’m not entirely sure where to start, since last year we used a Limelight with retroreflective tape. My first thought was that the Limelight might have libraries to handle ArUco, but from what I can tell only custom GRIP pipelines and retroreflective-tape tracking can use the Limelight’s onboard processing. My next thought was to use a co-processor like a Raspberry Pi to process the markers using our existing cameras (we have Limelight and Pixy cams) and send the data to the roboRIO over NetworkTables. Is this the right way to go about using ArUco markers?


First of all, to be clear, this is for a non-FRC project, right? Since there aren’t any ArUco tags on the field (yet), I’m not sure how you’d use them in FRC.

Secondly, yes, using some type of camera (Limelight, webcam, RPi camera module, etc.) and doing the processing on a Raspberry Pi would work fine. Do note that, in order to correctly compute the pose (position + orientation) of the tag, you’ll have to go through a process called “camera calibration”, which basically lets you figure out exactly how the camera image relates to the real world. OpenCV has support for ArUco, I believe, and you could just write a program in C or Python to grab frames, extract the tags, and then push the coordinates to NetworkTables. However, if you’re willing to climb a pretty steep learning curve, this might be a good chance to start playing with ROS.
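
For a rough idea of what that loop might look like in Python (an untested sketch — it assumes an OpenCV build with the contrib aruco module and pynetworktables, the aruco API changed in OpenCV 4.7, and the camera matrix / distortion numbers are placeholders you’d replace with your own calibration results):

```python
import cv2
import numpy as np
from networktables import NetworkTables

# Connect to the roboRIO as a NetworkTables client (use your own team number).
NetworkTables.startClientTeam(1234)
table = NetworkTables.getTable("aruco")

# Camera intrinsics from the calibration step; these numbers are placeholders.
camera_matrix = np.array([[700.0,   0.0, 320.0],
                          [  0.0, 700.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)

MARKER_SIZE_M = 0.15  # printed side length of the marker, in meters

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)  # USB webcam; a Pi camera module works the same via V4L2
while True:
    ok, frame = cap.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
    if ids is None:
        table.putBoolean("hasTarget", False)
        continue
    # One rvec/tvec pair per detected marker, expressed in the camera's frame.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_M, camera_matrix, dist_coeffs)
    table.putBoolean("hasTarget", True)
    table.putNumberArray("ids", [int(i) for i in ids.flatten()])
    # Publish the first marker's translation (x, y, z, meters) as an example.
    table.putNumberArray("tvec", tvecs[0].flatten().tolist())
```

Keep in mind the rvec/tvec pair is the tag’s pose relative to the camera, so you’d still combine it with where the camera is mounted on the robot to get anything robot-relative.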

If you really wanted to, since the Limelight is a Raspberry Pi under the hood, you could write custom software and do the tag extraction onboard. That would require some hacking on the Limelight’s software, though, and you definitely won’t get official support.


Thanks for the info! This is for a non-FRC project, but all of the parts my team is using are competition hardware. My team writes software in Java rather than C/Python, but I don’t think this is a problem, since WPILib has Java examples for the Raspberry Pi and I see that OpenCV has Java bindings for ArUco.

One thing we just figured out that could make this even easier: if your team is using a Pi with the WPILibPi image, you can upload any program you want as the “Vision code” in the web portal. It will then load on boot and have easy access to NetworkTables. It can be in any language you want, too (I think; I know C++, Java, and Python work, anyway). We are using this to start PhotonVision on the Romi.
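
As a rough idea, a custom vision program for that image could be structured something like this (again an untested sketch — the stock uploadable examples also read the camera settings and team number from /boot/frc.json, and the cscore/NetworkTables calls can differ a bit between WPILib versions):

```python
import cv2
import numpy as np
from cscore import CameraServer
from networktables import NetworkTables

NetworkTables.startClientTeam(1234)  # use your own team number
table = NetworkTables.getTable("aruco")

cs = CameraServer.getInstance()
camera = cs.startAutomaticCapture()  # the Pi camera or a USB webcam
camera.setResolution(320, 240)
sink = cs.getVideo()

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

frame = np.zeros((240, 320, 3), dtype=np.uint8)
while True:
    t, frame = sink.grabFrame(frame)
    if t == 0:
        continue  # frame grab timed out; try again
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary, parameters=params)
    table.putNumberArray("ids", [] if ids is None else [int(i) for i in ids.flatten()])
```

From there you’d drop in the same pose-estimation code as above once you have calibration numbers for whichever camera you run.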