Help with coding C++ and OpenCV

Hello,

I am working with the Jetson TK1 and the NVIDIA toolkit for the FRC Stronghold competition. I have spent eight hours on this today and have basically given up. Are any teams willing to share their C++ vision files for a coprocessor, or willing to help? At this stage anything would be great :).

You should use the forum's search function; there's a lot of existing material on this.

You could try using my blob detection code here: http://einsteiniumstudios.com/make-your-beaglebone-track-a-ball.html

Despite being named "ball tracking for BeagleBone," it's really just blob detection on a Debian/Ubuntu system.

Hey there!
If you are looking for some useful OpenCV C++ code, I would recommend taking a look at Team 2053's TowerTracker, a C++ port of Team 3019's TowerTracker, which can be found here: https://github.com/team2053tigertron...016/src/vision
Good luck!

Could you please include the complete link? I'd like to look at this.

TIA

Here you go

here

We also have code that works on the Raspberry Pi. I can upload that tonight if you want.

Hey there!
I would be interested in that if you don’t mind. Our team is also using a Raspberry Pi and would like to see how you pushed that through.
Thanks!

I can grab the file tomorrow. Basically, we SSH into the Raspberry Pi through the robot radio and edit the files from the command line. We use systemctl to run a bash script that sets up the library path and then launches the program when the Pi boots.
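For anyone wanting to set up the same thing, a systemd unit along these lines will run a startup script at boot. The unit name, paths, and script contents here are placeholders for illustration, not the actual files from our robot:

```
# /etc/systemd/system/vision.service  (hypothetical name and paths)
[Unit]
Description=Run vision tracking at boot
After=network.target

[Service]
Type=simple
ExecStart=/home/pi/start_vision.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The script it points at just exports the library path and launches the binary (e.g. `export LD_LIBRARY_PATH=/usr/local/lib` followed by running the vision program), and you enable the service once with `sudo systemctl enable vision.service`.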

Making your co-processor run its tracking software at startup seems to be the way to go.

We used a BeagleBone Black for vision on our robot. Apparently the FMS won't let you SSH in while you are connected to it.

I didn't have time to try it, but can you SSH in if you change the SSH port to one of the ports the FMS leaves open for the roboRIO?

I can try, but I don't think it matters much, because you wouldn't need to SSH in while connected to the FMS. All we need to do is connect to the robot over Ethernet during the field calibration time and change the RGB threshold values it uses. This is really fast with GRIP: the Pi saves an image at boot, you download it onto your PC, run GRIP on it, and then change the values accordingly.

Thanks!