Using Beelink Mini PC and Intel RealSense for vision

I am Eitan Vichik, the software lead of Team Galaxia 5987. Recently we’ve been trying to find the optimal vision solution for us. We found that Orange Pis with one camera each, or Beelinks with two cameras each, are the best options, and we want to decide between them. As such, I have a couple of questions.

I saw that the Beelink can’t use its hard disk when moving too rapidly (which is most of the game in FRC). Does the Beelink even need the hard disk? Can we take it out? And if it does need the disk, how can we deal with it?

Additionally, I saw that a couple of teams used the Intel RealSense camera for vision during the season. Did you encounter problems with oscillation in close distances? If so, at what distance?

Finally, how do you make use of the depth sensor on the RealSense camera?

Thanks in advance for any help!

We ran a Beelink-based solution and didn’t find significant issues when moving quickly after applying the changes listed in Anand’s coprocessor guide. The Beelink uses an SSD, which has no moving parts and is much more resilient to shock (furthermore, I don’t recall seeing any space dedicated to an HDD in the Beelink). We used standard HD cameras, so I can’t comment on the RealSense.

This thread is relevant to your interests:

We ran two cameras on one OPi this year and were very happy with the performance it was giving: 20-30 fps at a high resolution for our cameras (which I cannot quite remember exactly, but can check if you’d like). If you want to avoid the Beelink and only want to run one device, I would recommend exploring using just one OPi.

Were you running an AprilTag pipeline (i.e. PhotonVision) with the OPi?

Yes, we were using AprilTags on PhotonVision.

I’d love it if you could tell me more about your experience with the Orange Pis. What problems did you run into? How much were you really using your pose estimation during competitions?

Getting PhotonVision set up was a relatively painless experience on the Orange Pi with @asid61’s guide; the most difficult part was finding an Ethernet port that wasn’t being used at our school. We made a backup of the SD card so we wouldn’t have to scramble for internet access at competition if something went kaput with the SD card or the OPi. From what I remember, we had a noticeable increase in performance after adding a fan (but this could also just be some confirmation bias). The OPi was powered directly from a spliced SparkMAX USB-C cable into the VRM.

For the cameras, we ran two identical OV9281s, each facing an opposite side of the robot, and after using ArduCam’s serial number change tool, PhotonVision had no difficulty identifying and using them separately. There was a little exposure quirk where the setting wouldn’t update after adjusting it, but disabling and re-enabling auto exposure normally fixed that. The usable exposure range was also only a fraction of what the Photon slider had space for. At times, we could see all 8 tags (although that data was not used since it was from the middle of the field, see below).
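In case it helps, here is a minimal sketch of what addressing two separately named cameras from robot code looks like with PhotonLib; the camera names and class structure are placeholders, not our actual code:

```java
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;

public class VisionCameras {
    // Placeholder names; they must match the camera names configured in the PhotonVision UI.
    private final PhotonCamera leftCamera = new PhotonCamera("OV9281_left");
    private final PhotonCamera rightCamera = new PhotonCamera("OV9281_right");

    /** Returns true if either camera currently sees at least one tag. */
    public boolean anyTagVisible() {
        PhotonPipelineResult left = leftCamera.getLatestResult();
        PhotonPipelineResult right = rightCamera.getLatestResult();
        return left.hasTargets() || right.hasTargets();
    }
}
```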

I’m not sure of the exact settings we used but I’d be able to check if you’d like (although I won’t be at our build space for a few weeks).

In addition to the cutoff built into the Photon UI for filtering AprilTags, we added a distance cutoff that threw out all data from farther than 4 meters away (even if there were multiple tags in view), and we removed tags with an ambiguity above a threshold if there was only one tag in view. The 4-meter filter was a hacky fix to prevent any noise for the driver using field-centric drive, since none of our features needed to know where the robot was when it was more than 4 meters from a tag.
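Roughly, that filtering logic looks something like the sketch below; the thresholds and class names are illustrative rather than copied from our code:

```java
import java.util.List;

import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

public class TagFilter {
    private static final double MAX_TAG_DISTANCE_METERS = 4.0;   // reject data from farther than 4 m
    private static final double MAX_SINGLE_TAG_AMBIGUITY = 0.2;  // placeholder ambiguity threshold

    /** Returns true if this pipeline result should be passed on to the pose estimator. */
    public static boolean shouldUse(PhotonPipelineResult result) {
        if (!result.hasTargets()) {
            return false;
        }
        List<PhotonTrackedTarget> targets = result.getTargets();

        // Distance cutoff: throw the data out if a tag is farther than 4 m,
        // even when multiple tags are in view.
        for (PhotonTrackedTarget target : targets) {
            double distanceMeters = target.getBestCameraToTarget().getTranslation().getNorm();
            if (distanceMeters > MAX_TAG_DISTANCE_METERS) {
                return false;
            }
        }

        // With only one tag in view, also reject high-ambiguity solutions.
        if (targets.size() == 1 && targets.get(0).getPoseAmbiguity() > MAX_SINGLE_TAG_AMBIGUITY) {
            return false;
        }
        return true;
    }
}
```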

The pose estimation from the cameras was heavily relied on, as our swerve wheel odometry was pretty unreliable due to what we suspect was an issue with our gyro. We did not use the cameras in auto: although the gyro was problematic for accurate pose estimation, it was consistently inaccurate, so we tuned our autos around that because we were running low on time. What we did use the cameras for was auto-align onto the substation and grid. If you are interested, there are some details about this in the programming handout we gave the judges. Every piece scored on the grid and grabbed from the substation during teleop (except for our match where the OPi didn’t have power) was scored/grabbed using auto-align.
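For reference, fusing the camera poses into the drive odometry is roughly the sketch below. It assumes a PhotonPoseEstimator whose update() returns an Optional<EstimatedRobotPose>; the exact PhotonLib API differs a bit between versions, so treat this as an illustration rather than our actual code:

```java
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;

import org.photonvision.EstimatedRobotPose;
import org.photonvision.PhotonPoseEstimator;

public class VisionFusion {
    private final PhotonPoseEstimator photonEstimator;
    private final SwerveDrivePoseEstimator poseEstimator;

    public VisionFusion(PhotonPoseEstimator photonEstimator, SwerveDrivePoseEstimator poseEstimator) {
        this.photonEstimator = photonEstimator;
        this.poseEstimator = poseEstimator;
    }

    /** Call periodically: fuses the latest camera-based pose with the wheel odometry. */
    public void updateVision() {
        photonEstimator.update().ifPresent((EstimatedRobotPose visionPose) ->
            // WPILib weights this against the drive measurements using the
            // standard deviations the estimator was configured with.
            poseEstimator.addVisionMeasurement(
                visionPose.estimatedPose.toPose2d(), visionPose.timestampSeconds));
    }
}
```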

Without the tags, I highly doubt auto-align would have been even remotely viable, as by the end of a match our odometry would probably have been several meters off its actual position (just look at how much it corrects over one cycle from the substation to the community). It’s times like these that I wish we were using AdvantageKit so I could easily check after the fact…

Most of the AprilTag work was done at a time where, if something was good enough, we stopped working on it and moved on to the next thing, so it’s definitely nowhere near perfect; feedback from anyone is definitely appreciated.

Other stuff

  • If you’d like to see some of the vision data our robot used, it is all in our log files repo. The raw PhotonVision data is in the photonvision/ table and the processed data is in the photon/ and drive/estimatedPose tables.
  • Our full robot code can be found here.
  • This is the OPi case we used
  • This is the camera mount
    • Both were printed in PLA on a Bambu Lab X1 Carbon

If I missed something or anyone has any feedback or questions, please let me know.


Absolutely incredible, thank you.

