Are Intel RealSense cameras legal for FRC?

Thank you! Based on this thread, it might be useful to dump the Intel and “starter Linux” material I’ve come across for FRC onto frczero as well. Back when I was using the Intel cameras, I was also using some FRC hardware, like CTRE Talon SRXs and a HERO board, for a non-FRC project. I had a script that would connect to a Bluetooth controller or an Android app to drive the robot, and it had to start on boot in a similar manner. If I can find that code, I’ll post it here.

Unfortunately, building from source seems to be the only way to get pyrealsense2 working on ARM devices. We started the build yesterday, but it seems our little Pi Zero couldn’t handle it, so we went ahead and ordered a Pi 4B. I think we’ll get further along in the build with a beefier coprocessor, but idk.

If it’s not too much trouble, could you ask your mentor if they had to do anything special to build librealsense from source? Thank you.

I just asked the mentor who built pyrealsense2 from source; this is what he said:

1. He can’t build it from source unless he has a Raspberry Pi.
2. Unfortunately, setting up a cross-compiler toolchain for ARM is actually pretty tricky.
3. He doesn’t think the library will build for a 32-bit target (Pi Zero).

ngl, I don’t understand all of it lol

I feel like you can get a full mini PC that runs Linux, so you can use it year after year for vision.

I also saw some teams using a Limelight with some basic trig to find the note relative to the robot, which is also what we’re doing right now, since we borrowed the RealSense camera from a college lab last year and no longer have access to it this year.

The library not building for 32-bit devices lines up with our experience. We’ll try it on the Pi 4 when it arrives.

Idk if we’ll have to mess with cross-compiling (it seems difficult); I think if we can build librealsense on the Pi, we can also compile our code on it.

I wish you good fortune in the competition to come :grin:

Thanks, you too :slight_smile:

Also, thank you so much for your help with the camera. We got the Pi 4 and successfully installed pyrealsense2; now we just have to wait for the camera to get here!!!

What are you planning to do with the RealSense camera?

Note detection, robot detection, and maybe some vision-based SLAM later down the line.

hell yea

Vision-based SLAM with that camera will be highly dependent on the color and lighting of the environment. It’s pretty darn good, though: good enough for a 3D scanner, where you can control the environment better. On an FRC field I think it’ll be alright too; dark warehouses, mirrors, and dark glass doorways are where I had to stop.

The ZED cameras have the same issue with low light or super dark colors because… well, cameras need light.

My favorite real-world use case is a company I visited called Robotic Assistance Devices (RAD) and their ROAMEO robot.

The ROAMEO I got a look under the hood of used six of the cameras set in a ring pattern, together with some infrared lights to help it see at night. It was a security robot, so nighttime performance matters too. The thing also had a tethered drone it could deploy. Wild to behold. That was probably a 1.0, as the 2.0 seems to use bubble cameras.

Hello, FIRST participants. My name is Maksim, and I’m a tech lead at Intel Corporation. I would like to know more about Intel RealSense camera use cases. If possible, I would like to hear about your experience with Intel RealSense cameras like the D435i, or even the newer GMSL cameras like the D457. I have a small pool of D435i cameras that I can share for free with team members who might be willing to collaborate with me in the future, and I can also support you with any questions, thank you.

@Rocky_S and @kingc95, I see that you are the most experienced here in terms of RealSense camera programming. I would like to have a call with you so we can discuss some cooperation activity and increase your teams’ visibility using Intel media resources. We can exchange emails at [email protected], if you are willing to do so, of course.

We have a T265 camera (though I know it is now discontinued) for pose estimation, but we never really got it to work perfectly on an FRC playing field, and we never quite figured out the best way to combine odometry, AprilTags, and VSLAM to achieve maximum performance in robot localization.
If we could get it to work perfectly, I believe VSLAM could really up the game in pose estimation and FRC as a whole, so we’ll start working on it more again in the upcoming off-season.

If you have any suggestions or notes about these things, I’d love to hear!

I’ll send a more thorough email, but here’s an update for the group.

I have used the T265, D435, and L515. It’s been a couple of years since we switched (after the announcement that some of the cameras were being retired) and tried out the OAK cameras.

I was really excited about pairing a T265 and a D435 together for VSLAM, and through ROS I had decent success. There were issues with the IMU frames and their transforms that took a long time to solve, partially due to my lack of knowledge of ROS at the time, but also due to the Intel node having its own base_link that would interfere with a robot’s real base_link (usually your drive train).

In the end it worked, but as stated, the speed limitations of the IMU when turning seem like they would be a no-go for FRC. With an external IMU and some sensor fusion you could get past this (a Pigeon?). I also never used the encoder feedback input to the system at the time, and was relying fully on just the IMU and LiDAR to know my approximate location. For running visual odometry only at a slow speed (1-2 m/s) it was fine. As soon as it was hit hard, jostled, or accelerated too rapidly, the odometry would lose track and need to be reset.

The IMU in the cameras was definitely the weak point. Paired with external sensors, the cameras themselves were great and ran at a decent FPS, and the accuracy of the L515’s camera + laser was great. The D435 was a little rough, but still good enough.

For VSLAM, we have some proprietary 3D-vision algorithms based on CollabSLAM and ADBSCAN. At Intel we developed the so-called Robotics SDK, a Debian-packaged product to accelerate ROS 2 Humble and bring all the latest Intel technologies for navigation, mapping, and localisation to Autonomous Mobile Robots. We can set up an online knowledge-sharing session so I can share our solution with you (drop me an email to book a slot on my calendar). It is also available to download for free: Tutorials — documentation.

We also have project examples created by makers for autonomous mobile robots; you can explore some of them at LattePanda ROS2 Robot on Intel RoboticsSDK - Hackster.io. They use VSLAM for mapping based on RealSense. I’m located in Europe, but I will cooperate with Intel folks from the US soon.
