I have started experimenting with vision processing in Java on
a Raspberry Pi 3 running Raspbian.
The camera is a Microsoft USB LifeCam.
At this time I do not have access to a RoboRio.
I started with the 2017 FRC Control System vision processing
document on the web site.
I got the code compiled on an Ubuntu system, transferred it to
the Pi, and it ran! At least I think so, since the blue light on the
camera came on.
The doc states:
“This sample program only serves video as an http mjpg stream to a web browser or your SmartDashboard and serves as a template.”
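I'm not posting the whole sample here, but as far as I can tell its core boils down to something like the sketch below. The package names are from the 2017 cscore library and the port number is just one I picked, so treat both as my assumptions, not the exact sample code:

```java
import edu.wpi.cscore.MjpegServer;
import edu.wpi.cscore.UsbCamera;

public class StreamOnly {
    public static void main(String[] args) throws InterruptedException {
        // Open the LifeCam as /dev/video0 and set a modest resolution
        UsbCamera camera = new UsbCamera("LifeCam", 0);
        camera.setResolution(320, 240);

        // Serve the camera as an http mjpg stream (here on port 1185)
        MjpegServer server = new MjpegServer("serve_LifeCam", 1185);
        server.setSource(camera);

        // Keep the process alive so the stream stays up
        while (true) {
            Thread.sleep(1000);
        }
    }
}
```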
Question:
- How can I access the stream from a web browser? Does the browser
have to be running on the Pi, or can I use one on my Ubuntu system?
The next step is to take the code generated by GRIP and run it on the Pi.
This would include creating the NetworkTables entries.
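Roughly what I have in mind for that step is the sketch below. The GripPipeline class and its filterContoursOutput() accessor are what I believe GRIP generates for a pipeline ending in a Filter Contours step, and the "vision" table with centerX/centerY keys is just a name I made up; also, since there is no RoboRio here, I'm assuming the Pi itself can act as the NetworkTables server:

```java
import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.networktables.NetworkTable;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

public class GripVision {
    public static void main(String[] args) {
        // Load OpenCV's native library (how it is found depends on the build setup)
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // No RoboRio on the network, so I assume the Pi hosts the tables itself
        NetworkTable.setServerMode();
        NetworkTable.initialize();
        NetworkTable table = NetworkTable.getTable("vision");

        // Grab frames from the LifeCam instead of just streaming them
        UsbCamera camera = new UsbCamera("LifeCam", 0);
        camera.setResolution(320, 240);
        CvSink sink = new CvSink("frames");
        sink.setSource(camera);

        GripPipeline pipeline = new GripPipeline(); // the class GRIP generates
        Mat frame = new Mat();
        while (true) {
            if (sink.grabFrame(frame) == 0) {
                continue; // frame grab failed or timed out, try again
            }
            pipeline.process(frame);
            // Publish the center of the first contour, if the pipeline found any
            if (!pipeline.filterContoursOutput().isEmpty()) {
                Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
                table.putNumber("centerX", r.x + r.width / 2.0);
                table.putNumber("centerY", r.y + r.height / 2.0);
            }
        }
    }
}
```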
Question:
- How can I access the NetworkTables from the Ubuntu system to
see the values and whether they change when I move the target?
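In case it helps, this is roughly what I was picturing on the Ubuntu side (assuming the Pi is reachable as raspberrypi.local, is acting as the NetworkTables server, and publishes the "vision" table from the sketch above):

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class TableWatcher {
    public static void main(String[] args) throws InterruptedException {
        // Connect to the NetworkTables server running on the Pi
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("raspberrypi.local"); // hostname or IP of the Pi
        NetworkTable.initialize();
        NetworkTable table = NetworkTable.getTable("vision");

        // Print the published values once a second to see if they follow the target
        while (true) {
            System.out.printf("centerX=%.1f centerY=%.1f%n",
                    table.getNumber("centerX", -1.0),
                    table.getNumber("centerY", -1.0));
            Thread.sleep(1000);
        }
    }
}
```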
Question:
- Finally, is there a RoboRio simulator?
Thanks in advance to those who answer.