XRP can see AprilTags! Is this practical for other objects?

Hi there! We wanted to make it easy for students to practice vision in WPILib without involving a big robot or expensive cameras that many students can't take home.

So far we came up with this:

The price point is “dirt cheap” and AprilTags work fine… but it doesn’t scale well beyond AprilTags: when we try to use YOLO to detect other kinds of objects, we first hit a version conflict with RobotPy, and then the practical difficulty of students needing GPUs or TPUs.

Can anyone recommend a more scalable way for students to practice robot coding with video using XRPs or something similar?

(Yes, Shawn Hymel’s approach with the Coral Dev Board Micro doesn’t have these problems: https://www.sparkfun.com/news/10339. However, it is somewhat constrained by a 320-pixel camera and by having to write not-so-student-friendly code for the Coral Dev Board Micro. Maybe that really is the path forward; I’m not 100% sure.)


Really nice! You’ve done the hard part by hooking a camera into the XRP platform.

I would suggest something very simple to prove the concept and introduce students.

  • Take an object of a distinct hue (for instance, the orange “notes” of Crescendo).
  • Use something like GRIP to create a draft of your pipeline:
    • Use OpenCV to grab frames as they come through the camera/XRP.
    • Convert the image to HSB/HSV representation (hue is much more robust to lighting changes than RGB when detecting colors).
    • Filter on hue so that you get a black-and-white mask of the object.
    • Use GRIP/OpenCV functions to find the contours and bounding box of the object from the mask data.
  • GRIP can generate a Java skeleton of the detection routine. With that in hand, it should be clear what your opencv-python structure and function calls should be (see the sketch below).
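For reference, here is a minimal opencv-python sketch of that pipeline. The stream URL and the HSV thresholds are placeholders (not from the original post) that you would tune for your own camera, object, and lighting:

```python
# Minimal HSV color-filter pipeline sketch (OpenCV 4.x).
# STREAM_URL and the threshold values below are hypothetical; adjust for your setup.
import cv2
import numpy as np

STREAM_URL = "http://xrp.local:1181/stream.mjpg"  # placeholder MJPEG stream address

# Placeholder hue/saturation/value bounds for a saturated orange object
# (note: OpenCV's hue range is 0-179, not 0-359)
LOWER_HSV = np.array([5, 120, 80])
UPPER_HSV = np.array([25, 255, 255])

cap = cv2.VideoCapture(STREAM_URL)
while True:
    ok, frame = cap.read()                            # grab a frame from the camera stream
    if not ok:
        break

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)      # convert to HSV (hue tolerates lighting changes)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)     # black-and-white mask of in-range pixels

    # Find contours in the mask and take the largest one as the object
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(biggest)        # bounding box of the detected object
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("detection", frame)
    if cv2.waitKey(1) == 27:                          # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```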

(As someone who does ML computer vision for a living, believe me: you don’t need a convolutional neural network for objects that are highly saturated orange.)

If you want to eliminate even more code than the GRIP-generated robot code, stream the camera to GRIP running on the driver station and view the results in GRIP.

If the robot needs to drive based on vision, the GRIP output can be sent to the robot program over HTTP with very little code. It’s somewhat slow and ill-suited for FRC competition, but it is quick and easy to program, especially if you copy a program such as an example my team developed some years ago (Java).
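The post above mentions HTTP; a different low-code route (not necessarily what the author means) is GRIP’s NetworkTables publishing step, where the robot program just reads the published contour arrays. A rough Python sketch, assuming GRIP’s usual default table and key names, which may differ in your setup:

```python
# Sketch of reading GRIP-published contour data over NetworkTables.
# Assumes GRIP's "NTPublish ContoursReport" step with its default table name;
# NetworkTables client/server connection setup is omitted and depends on your environment.
import ntcore

nt = ntcore.NetworkTableInstance.getDefault()
report = nt.getTable("GRIP/myContoursReport")  # assumed default table name

def get_target_center_x():
    """Return the x-coordinate of the first detected contour, or None if nothing was found."""
    centers = report.getEntry("centerX").getDoubleArray([])
    return centers[0] if len(centers) > 0 else None
```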


Sort of off-topic, but this reminds me that I need to finish my retrofit of Romi electronics onto an XRP-compatible frame. With the Romi you can have a full Raspberry Pi onboard.
