We are a little confused about getting vision data from webcams through the GRIP pipeline, and about combining that with additional sensor data, for example from a LiDAR (we can successfully read data off the LiDAR; it's just a matter of using it). Everything runs on a Raspberry Pi in Python rather than on the NI roboRIO, and I can find very little documentation on that particular workflow.

I know we have the web portal (the wpilibpi.local site) to load code onto the Pi, but I'm not sure how to get multiple "classes" onto it (the GRIP pipeline, the LiDAR code, and one main function). A related question: how do we actually use the code generated by GRIP from the main Python script running on the Pi? Again, there is very little documentation (that I can find, at least) about that particular use case.
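For reference, this is roughly the structure we're imagining for main.py. Everything here is a guess, not working code: `GripPipeline` is the class we believe GRIP's Python code generator emits (with a `process()` method that stores outputs on attributes), `read_lidar_cm()` stands in for our existing LiDAR reader, and `fuse()` is a made-up helper just to show how we'd like to combine the two sensors:

```python
# Hypothetical glue for main.py on the Pi -- grip_pipeline.py would be
# the file GRIP generated, lidar.py is our working LiDAR reader.

def fuse(contour_center_x, frame_width, lidar_distance_cm, fov_deg=60.0):
    """Turn a target's horizontal pixel position plus a LiDAR range
    into a rough (bearing_deg, distance_cm) pair. fov_deg is the
    camera's horizontal field of view; 60 is just a placeholder."""
    offset = (contour_center_x / frame_width) - 0.5  # -0.5 .. +0.5 across the frame
    return offset * fov_deg, lidar_distance_cm


def main_loop():
    # Sketch only -- we have not run this on the Pi yet.
    from grip_pipeline import GripPipeline   # GRIP's generated class (our guess at the name)
    from lidar import read_lidar_cm          # our existing LiDAR code
    from cscore import CameraServer          # camera library shipped on wpilibpi
    import numpy as np

    pipeline = GripPipeline()
    sink = CameraServer.getInstance().getVideo()
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    while True:
        _, frame = sink.grabFrame(frame)
        pipeline.process(frame)              # outputs land on pipeline attributes
        # ...pick a contour, compute its center x, then something like:
        # bearing, dist = fuse(center_x, frame.shape[1], read_lidar_cm())
```

Is that roughly the right shape, i.e. upload the GRIP-generated file and the LiDAR module alongside main.py and just import them?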
Thank you!