Are Intel RealSense cameras legal for FRC?

Just wanted to see before we buy one

1 Like

Check the Custom Circuit rules and the price rules. If it doesn’t violate those, it’s legal.

5 Likes

Note: its housing may be tied to ground, so I’d recommend mounting it on plastic.

2 Likes

Take a look at R203 d. Will you be able to show the inspector that the camera complies?

The camera appears to go for less than $600 FMV, and it presents as a regular USB camera.
So it seems compliant.

1 Like

The answer is basically yes. We had a T265 and a D415 on our robot last year.

1 Like

I used the Intel RealSense D435 for game piece detection in 2023 and didn’t get into any trouble.

1 Like

The accuracy of the D435 isn’t ideal: it can’t measure distance reliably when the object reflects a lot of light, which results in holes in the depth map. So if you really need one, I’d say go for a D435i.

You can also check the RealSense D435i code I wrote in Python last year here:
(2023Visions/Detector at main · CRRobotics/2023Visions · GitHub)
It uses basic color thresholding and the RealSense Python API, and runs on a separate processor.
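For anyone wondering what “basic color thresholding” boils down to, here’s a minimal per-pixel sketch in pure Python (stdlib only; the HSV window below is an invented placeholder, not the team’s actual tuned values — a real pipeline would use OpenCV’s cv2.inRange on whole frames instead):

```python
import colorsys

def in_hsv_range(rgb, h_range=(20, 50), s_min=0.5, v_min=0.5):
    """Return True if an (R, G, B) pixel falls inside an HSV window.
    The default window roughly targets an orange game piece; these
    numbers are placeholders you would tune on real camera footage."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)  # h, s, v each in [0, 1]
    return h_range[0] <= h * 360.0 <= h_range[1] and s >= s_min and v >= v_min

def mask(pixels, **kwargs):
    """Binarize an iterable of RGB pixels into a 0/1 mask list."""
    return [1 if in_hsv_range(p, **kwargs) else 0 for p in pixels]
```

For example, `mask([(255, 140, 0), (0, 255, 0)])` flags the orange pixel and rejects the green one, giving `[1, 0]`.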

3 Likes

Sounds good. Also thank you for sending code, as I was pretty unsure about how I was actually gonna use the thing if we got it.

Were you guys just running a D435 and some kind of Raspberry Pi as a coprocessor? Did wiring it to the robot go smoothly? Sorry for all the questions; I just want to be reasonably confident we can get the whole thing working quickly.

The code looks interesting. I’m not familiar with that camera. Does it have an onboard processing unit that can run a Python pipeline? In other words, where does the pipeline part of it actually run?

1 Like

It does not have a coprocessor built in; the code runs on your driver station or a robot-side CPU.

I’ve used these outside of FRC with Jetson Nanos. Some have a built-in IMU or gyro, but those may or may not work well in FRC; they are very sensitive to jerks.

ROS has packages and nodes for using them. The T-series tracking cameras could provide decent odometry when combined with a LiDAR.

The D-series are depth cameras (some color, some black-and-white only). They use stereo vision, usually assisted by an infrared projector that adds texture to help the stereo matching; it isn’t a true 3D laser scan.
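For context, the stereo part reduces to one formula: depth is focal length times baseline divided by disparity. A tiny sketch (the numbers in the usage note are illustrative, not the D435’s actual calibration):

```python
def stereo_depth_m(disparity_px, focal_px, baseline_m):
    """depth = f * B / d: focal length (pixels) times the baseline
    between the two imagers (meters), over disparity (pixels).
    Zero disparity means the stereo matcher found no correspondence,
    which is exactly the 'holes in the depth map' failure mode."""
    if disparity_px <= 0:
        return None  # hole: no left/right match at this pixel
    return focal_px * baseline_m / disparity_px
```

With an example 640 px focal length and a 50 mm baseline, a 32 px disparity works out to `stereo_depth_m(32.0, 640.0, 0.050)` = 1.0 m; small disparity errors at long range become large depth errors, which is why accuracy falls off with distance.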

The L515 was a true LiDAR camera and produced a decently accurate depth map.

Both the T-series and the L-series were discontinued, I believe, so now it’s just the D-series.

The biggest upside is that they work without needing a CUDA-capable GPU like the ZEDs did. Any PC can run one; your frame rate will just vary greatly.
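To make the “any PC can run one” point concrete, a minimal grab loop with the pyrealsense2 API looks roughly like this (a sketch, not tested on hardware here; the stream settings are example values):

```python
def grab_center_distances(num_frames=30, width=640, height=480, fps=30):
    """Open a RealSense depth stream and yield the distance (meters)
    at the center pixel for each frame. Requires `pip install
    pyrealsense2` and an attached D4xx camera."""
    import pyrealsense2 as rs  # deferred so this file imports without the library

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, width, height, rs.format.z16, fps)
    pipeline.start(config)
    try:
        for _ in range(num_frames):
            frames = pipeline.wait_for_frames()
            depth = frames.get_depth_frame()
            if depth:
                yield depth.get_distance(width // 2, height // 2)
    finally:
        pipeline.stop()  # always release the camera
```

On a weak CPU you would drop the resolution or fps arguments rather than change any code structure.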

1 Like

We are using a coprocessor to do the processing.
It’s a mini PC running Linux, so we have the pipeline run on startup.
In the Python code it’s handled with “sys.argv” or something like that.
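For what it’s worth, the sys.argv bit is just standard Python argument handling, so a startup script can pass settings without editing code. A small sketch (the key names and defaults are invented):

```python
import sys

def parse_startup_args(argv):
    """Pull simple key=value settings off the command line, e.g.
    `python vision.py camera=2 mode=gamepiece`. Words without an
    '=' are ignored. Keys and defaults here are made-up examples."""
    settings = {"camera": "0", "mode": "gamepiece"}  # fallback defaults
    for arg in argv[1:]:  # argv[0] is the script name itself
        if "=" in arg:
            key, value = arg.split("=", 1)
            settings[key] = value
    return settings

# At startup the boot script would do something like:
# config = parse_startup_args(sys.argv)
```

Keeping defaults in the dict means the same script runs with no arguments at all when launched at boot.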

The coprocessor communicates with the roboRIO over an Ethernet cable, and in the Python code we push the values using NetworkTables (you can see it in the Python code); `pip install pynetworktables` gets you the library.
The roboRIO can then receive the NetworkTables values and use them; you can see that in our Git repository (2023 robot).
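Here’s a hedged sketch of the NetworkTables push side (the table name, entry keys, and server address pattern are placeholders, not the team’s actual ones; the library is installed as pynetworktables):

```python
def pack_target(found, yaw_deg, dist_m):
    """Flatten one detection into the values pushed each frame.
    Keys here are invented; match whatever your robot code reads."""
    return {"found": bool(found), "yaw": float(yaw_deg), "dist": float(dist_m)}

def push_detections(detections, server="10.TE.AM.2"):
    """Publish detections to a 'vision' table the roboRIO can read.
    Requires `pip install pynetworktables`; the server argument is the
    roboRIO's address (10.TE.AM.2 is the FRC static-IP convention)."""
    from networktables import NetworkTables  # deferred so pack_target works without it

    NetworkTables.initialize(server=server)
    table = NetworkTables.getTable("vision")
    for det in detections:
        for key, value in pack_target(*det).items():
            table.putValue(key, value)
```

On the roboRIO side, the robot code reads the same table and keys each loop iteration.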

Glad to help! I really enjoy helping out with vision stuff, since I spent the whole last season on it. The pipeline itself was very good, but our robot programmers didn’t have enough time to integrate the vision code with the robot code, so it never got used lol.
Suggestion: start on vision + robot code integration ASAP.

We’ve started work on the coprocessor only (camera arrives February tenth lol) and it’s been a bit of a rough start. When we try to install pyrealsense2 with pip it just errors, so it seems that we have to install librealsense and then build it. Is this what you guys had to do? The only guide we’ve found is woefully outdated…

I made a virtualenv and used Python 3.8.

Try this (type in a terminal).
Prerequisites: install Python 3.8 and pip.

Type in the terminal:
pip install virtualenv
virtualenv (env-name) --python=python3.8

Then open the (env-name) folder in the terminal and run:
Scripts\activate.bat

Then you can install pyrealsense2:
pip install pyrealsense2

Edit: this is on my own Windows computer for code testing, not on the mini PC.

1 Like

I honestly don’t know how to make a virtualenv run on startup on a coprocessor.
What happened last year is that a mentor did indeed help build pyrealsense2 from source.
This year we’re running a virtualenv on startup for AprilTags.
I can ask my mentors how they make a virtualenv run on startup and post it here.
Our coprocessor is a Linux mini PC, which I think makes “running on startup” less difficult.

1 Like

My guess is a bash script with the startup commands, tied to a cron job. Cron jobs can be scheduled by time, or there’s a wildcard (@reboot) for running them at boot.
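Concretely, the cron version is one line (all paths here are invented examples; note that pointing at the venv’s own python binary sidesteps the “activate a virtualenv on startup” problem entirely):

```shell
# Edit the coprocessor's crontab with `crontab -e`, then add:
# @reboot runs once at boot; the sleep gives the network a moment to come up.
@reboot sleep 15 && /home/frc/rs-env/bin/python /home/frc/vision/main.py >> /home/frc/vision.log 2>&1
```

Redirecting stdout and stderr to a log file makes it much easier to debug a pipeline that silently fails at boot.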

This is from 2013 but more or less the same

Additionally, there’s also “rc.local” or “init.d” if the cron method is unavailable for some reason. They work in a similar way, based on which files are in those directories and a specific ordering.

Another pro of a Linux coprocessor is that you can SSH into it and use vim to change the code on the fly.
It also makes building libraries from source, and coding in general, way easier.

1 Like

I really appreciate the effort toward an open-source knowledge base for all teams to learn from :+1:

1 Like