Working With GRIP and LiDAR on a Raspberry PI

We are a little confused about getting vision data from webcams with the GRIP pipeline, and about combining that with additional data from, for example, a LiDAR system (we can successfully read data off the LiDAR; it’s just a matter of using it). We are running all of this on a Raspberry Pi using Python rather than the NI roboRIO, and there is very little I can find on that particular workflow. I know that we do have a web portal (the wpilibpi.local site) to load code onto the Pi, but I’m not quite sure how we can get multiple “classes” in (GRIP, LiDAR, and one main function). A related question is how to use the code generated by GRIP in the main Python application running on the Pi. Again, there is very little documentation (that I can find, at least) about that particular use case.

Thank you!

For the multiple-classes question: through the web portal you can use the file upload tool to upload all of your supporting files. Then upload your main as the user Python application, which will run on startup and use your supporting files.
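
As a rough sketch of what that looks like in practice (the file and class names here are just placeholders, assuming you uploaded the GRIP export as grip_pipeline.py and your LiDAR code as lidar_reader.py), your main application simply imports the supporting files like any other modules:

```python
# Hypothetical file names: grip_pipeline.py and lidar_reader.py are the
# supporting files uploaded through the file upload tool; this is the top
# of the main application uploaded as the user Python application.
from grip_pipeline import GripPipeline  # class exported from GRIP
from lidar_reader import LidarReader    # your own LiDAR class
```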

Make sure you set the file system to writable before you upload anything. SFTP with a client such as FileZilla is also an option; once again, make sure the file system is set to writable through the web interface first.

For the GRIP pipeline: from GRIP you can export a Python code file, which can be imported into your main application. In the main you’ll want to grab a frame from the webcam using the CameraServer, feed it through your GRIP pipeline class, and then write the output to NetworkTables.
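
Here is a minimal sketch of that loop, assuming the robotpy-cscore and pynetworktables packages that ship with the WPILibPi image, a GRIP export whose class is named GripPipeline with a process() method, and a final filter-contours step (so the output attribute is filter_contours_output). The output attribute name, resolution, NetworkTables entries, the roboRIO address 10.TE.AM.2, and the LiDAR call at the end are all placeholders to adjust for your setup:

```python
import numpy as np
from cscore import CameraServer
from networktables import NetworkTables

from grip_pipeline import GripPipeline  # the file exported from GRIP


def main():
    # Placeholder roboRIO address; replace TE.AM with your team number.
    NetworkTables.initialize(server="10.TE.AM.2")
    table = NetworkTables.getTable("vision")

    # Newer robotpy-cscore versions expose these as static methods on
    # CameraServer instead of getInstance().
    cs = CameraServer.getInstance()
    camera = cs.startAutomaticCapture()
    camera.setResolution(320, 240)

    # A CvSink lets us pull OpenCV frames out of the camera server.
    sink = cs.getVideo()
    frame = np.zeros(shape=(240, 320, 3), dtype=np.uint8)

    pipeline = GripPipeline()

    while True:
        # grabFrame returns (timestamp, image); a timestamp of 0 means error.
        t, frame = sink.grabFrame(frame)
        if t == 0:
            continue

        # Run the GRIP-generated pipeline on the frame.
        pipeline.process(frame)

        # The output attribute depends on the last step of your pipeline;
        # filter_contours_output is an assumption here.
        contours = pipeline.filter_contours_output
        table.putNumber("target_count", len(contours))

        # Hypothetical spot to publish your LiDAR reading alongside the
        # vision output, e.g.:
        # table.putNumber("lidar_distance_m", lidar.read_distance())

        NetworkTables.flush()


if __name__ == "__main__":
    main()
```

On the robot side you can then read the same entries back from the "vision" table to combine the camera and LiDAR data however you like.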
