Nvidia Jetson For Co-processing Video

Hello, we are trying to add an Nvidia Jetson to our robot, but we don’t know how to link it as a coprocessor like you would with a Coral. Does anyone have any ideas?

Are you talking about using it as a coprocessor for a limelight? That’s not supported, AFAIK.

Setting up a Jetson for FRC is going to be fairly involved and custom.

The two teams I know who are using Jetsons are Team 900 and Team 971. 900 has some setup information for older jetsons on their website and in some CD threads, not sure how applicable it is to modern jetsons. Team 971 has a completely open-source code base, though it will probably be more useful as reference than for you to base your setup on.

I would think the first steps would be to set the Jetson up following Nvidia’s instructions, then install NetworkTables on the Jetson, then set up a test program that simply sends data over NetworkTables.
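To make that concrete, here’s a minimal heartbeat sketch. This is just one way to do it, assuming the pynetworktables package (`pip install pynetworktables`); the table name, key, and team number are placeholders, not anything from a real setup:

```python
# Minimal NetworkTables heartbeat from a Jetson to the roboRIO.
# Assumes pynetworktables; "jetson"/"heartbeat" names are placeholders.

def team_server_address(team: int) -> str:
    """Static roboRIO address convention: 10.TE.AM.2 for team number TEAM."""
    return f"10.{team // 100}.{team % 100}.2"

def main(team: int = 9999) -> None:
    # Imports kept inside main() so the address helper above works
    # even without pynetworktables installed.
    import time
    from networktables import NetworkTables

    NetworkTables.initialize(server=team_server_address(team))
    table = NetworkTables.getTable("jetson")
    i = 0
    while True:
        table.putNumber("heartbeat", i)  # watch this value from the DS side
        i += 1
        time.sleep(0.02)  # ~50 Hz
```

If the heartbeat value ticks up in OutlineViewer or on your dashboard, the Jetson-to-RIO link is working and you can move on to sending real data.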


this is what we’ve been working on:
Phantom Vision Intro - YouTube
Shenzhen-Robotics-Alliance/FRC-Phantom-Vision: a rapid, powerful, easy-to-use and open-source vision framework for FRC (github.com)

We are using a Jetson Nano to do vision processing. The entire stack is customized. However, I didn’t get involved in writing the vision code (it was written completely by one of our students, @catrix), so I only know it at an abstract level.

Here’s the code

Here’s our approach.

All code is written in Python. Two cameras are connected to the Jetson, and two Python scripts we wrote run at startup and handle the camera data independently: one for shooting and one for automatic note finding.

Camera images run through a vision pipeline in Python which detects AprilTags or notes (through an AI model) and turns them into coordinates. A UDP connection is then established to the roboRIO to transmit those numbers. The entire stack is custom.
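They don’t publish their packet format in this post, so the following is only a sketch of what the UDP leg could look like: the JSON payload, field names, and port are my assumptions, not their actual protocol (port 5800 is in the FRC team-use range 5800–5810):

```python
import json
import socket

def encode_detection(tag_id: int, x: float, y: float, z: float) -> bytes:
    """Pack one detection as a small JSON datagram (illustrative format)."""
    return json.dumps({"id": tag_id, "x": x, "y": y, "z": z}).encode("utf-8")

def send_detection(sock: socket.socket, rio_addr, payload: bytes) -> None:
    """Fire-and-forget UDP send; a dropped packet just means one stale frame."""
    sock.sendto(payload, rio_addr)

# Usage on the Jetson (10.TE.AM.2 placeholder address for your team):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_detection(sock, ("10.99.99.2", 5800), encode_detection(7, 1.2, 0.4, 2.5))
```

UDP fits this use case because a vision result is only useful for the frame it came from; retransmitting late data (as TCP would) just adds latency.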

Yes that’s a lot of work.

at an abstract level, what you would like your vision code to do is

  1. open the camera and get images
  2. run those images through a processing pipeline and detect what you want to detect (AprilTags, reflective tape, game pieces) using whatever algorithm.
  3. use CameraServer or another method (WebRTC, RTSP, etc.) to send the camera image to the dashboard after compression on the Jetson (due to the bandwidth limit)
  4. use NetworkTables or another method (sockets, HTTP, etc.) to send your detection results to the roboRIO.
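Those steps can be sketched as one loop. In this sketch, camera capture and detection are stubbed out (in practice they’d come from something like OpenCV plus your AprilTag or NN detector), step 3 (streaming) is omitted, and the UDP/JSON transport is just one of the options listed above:

```python
import json
import socket
import time

def run_pipeline(grab_frame, detect, rio_addr, hz: float = 50.0) -> None:
    """Skeleton of the loop: grab an image, detect, send results to the RIO.

    grab_frame() -> frame, or None to stop the loop;
    detect(frame) -> list of detection dicts.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    period = 1.0 / hz
    while True:
        frame = grab_frame()                      # step 1: get an image
        if frame is None:
            break
        detections = detect(frame)                # step 2: run the pipeline
        payload = json.dumps(detections).encode("utf-8")
        sock.sendto(payload, rio_addr)            # step 4: results to the RIO
        time.sleep(period)                        # crude rate limit
```

Keeping capture and detection as injected functions makes the glue testable off-robot: you can feed in recorded frames and a fake detector without touching a camera.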

there aren’t a lot of existing frameworks for using a Jetson in FRC. You have to do a lot of work to get it running.