Luxonis OAK cameras

I recently came across these. They look pretty good on paper, anyhow. Has anyone here used one, for FRC or otherwise?

I haven't heard much chatter about them recently, but there was some discussion about them last summer, and again back in 2020 (when it appears they first launched).

Speaking from experience, these cameras can be quite useful depending on your application. Within FRC, being able to offload vision processing onto the device itself could be a real benefit. However, given how constrained the computer vision problems in FRC actually are, they could definitely be overkill. Outside of FRC, they provide an excellent platform for running AI models in robotics projects where computer vision isn't the primary focus but is still needed to achieve more complex behavior. Two examples I've personally seen them used for are a cup tracking and stacking system using DeepSORT interfaced with a collaborative robot, and a balloon trajectory tracker for an omnidirectional robot playing a game of "don't let the balloon touch the ground".
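For anyone curious what running a model on-device actually looks like, here's a rough sketch using the depthai Python library. The blob path is just a placeholder for whatever compiled model you'd deploy, and the details (input size, threshold) would change with your model:

```python
# Minimal sketch: run a detection model on the OAK itself with DepthAI.
# "mobilenet-ssd.blob" is a placeholder path for a compiled model.
import depthai as dai

pipeline = dai.Pipeline()

# Color camera feeding the on-device neural network
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)          # MobileNet-SSD input size
cam.setInterleaved(False)

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")  # placeholder model path
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)

# Stream detections back to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for det in q.get().detections:
            print(det.label, det.confidence, det.xmin, det.ymin, det.xmax, det.ymax)
```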

I recently talked to my department head and the robotics lab at my university about these cameras. The overall takeaway for us was that they can be helpful for students who don't have a strong machine-learning background but want to use a CV model in a project. The specific model I have experience with is the original OAK-D. If you have any other questions, feel free to reach out.

I’ve played with these cameras quite a bit in FRC (see my previous posts here and here).

In the current FRC ecosystem, most of the problems the OAK cameras could solve can already be handled through existing means (localization with AprilTags using Limelight/PhotonVision, object detection using Limelight + Google Coral). You could build a cheaper, more performant object detection system with an OAK-1 + Raspberry Pi than with a Limelight + Google Coral, but for most teams the effort probably isn't worth the cost and time investment.

The one problem the OAK cameras can uniquely solve is using an OAK-D's stereo depth + object detection to both detect and localize game pieces for on-the-fly trajectory generation into an intake (I got this working to some degree at a proof-of-concept level in 2022).
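The rough shape of that pipeline looks something like the sketch below (depthai Python API). This is not my actual code; the model blob is a placeholder, and you'd filter by your own game-piece labels:

```python
# Sketch: stereo depth + object detection on an OAK-D to get 3D positions
# of detected game pieces. "game-piece-detector.blob" is a placeholder model.
import depthai as dai

pipeline = dai.Pipeline()

# RGB camera for the detection network
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# Mono cameras + stereo node produce the depth map
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# Spatial detection network fuses detections with depth to give X/Y/Z (mm)
nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath("game-piece-detector.blob")  # placeholder model
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for det in q.get().detections:
            # Camera-relative coordinates you could feed into trajectory generation
            coords = det.spatialCoordinates
            print(det.label, coords.x, coords.y, coords.z)
```

From there it's a matter of transforming the camera-relative coordinates into field/robot coordinates and handing them to whatever trajectory generation you're already using.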

Additionally, there are some quirks with the cameras/libraries. Their AprilTag support is pretty bare-bones: it directly ports the default AprilTag implementation without updating it to take advantage of the camera's built-in AI accelerator chip, so you get ~15 FPS at most. The PoE variants also have severe performance issues compared to the USB cameras. It seems the company is currently focused on the next generation of the cameras, expected to release around the end of this year or Q1 2024, which might fix some of these issues, but for now I find it hard to recommend these cameras for FRC teams.
