Our team has previously built dashboards in SmartDashboard and LabVIEW. SmartDashboard leaves a lot to be desired, and LabVIEW is another monster entirely. Last year we wrote a Java Swing GUI that communicated over NetworkTables, sending and receiving integers. I'm guessing that puts us close to handling doubles and booleans as well.
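For what it's worth, the jump from integers to doubles and booleans really is just the typed put/get calls. A minimal sketch, assuming the WPILib NetworkTables Java API of that era — the table name, keys, and mDNS hostname below are made-up examples, not anything standard:

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

// Sketch only: assumes the WPILib NetworkTables Java client API.
// Table name, keys, and hostname are illustrative placeholders.
public class DashboardLink {
    public static void main(String[] args) {
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("roborio-XXXX-frc.local"); // substitute your team number
        NetworkTable table = NetworkTable.getTable("dashboard");

        // Same pattern as the integers you were already sending:
        table.putNumber("shooterSpeed", 0.75);   // doubles via putNumber/getNumber
        table.putBoolean("targetLocked", true);  // booleans via putBoolean/getBoolean

        double speed = table.getNumber("shooterSpeed", 0.0);   // second arg is a default
        boolean locked = table.getBoolean("targetLocked", false);
        System.out.println(speed + " " + locked);
    }
}
```

This won't run off the robot network, but the calls are the whole story: everything numeric goes over the wire as a double, and booleans get their own put/get pair.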
We would like to do vision on board the robot this year. The goal is to have one or two cameras sending video to an onboard PC AND back to the dashboard GUI. I'm the mentor and haven't done this type of thing before, so I'm looking for some direction if anyone has experience with this.
We're looking at using JavaFX this year to develop the GUI. Beyond that, I'm wondering about two cameras that could be connected to either the roboRIO or the onboard PC. GRIP will be doing the processing on the onboard PC. From this I'm assuming we could stream one of the two camera feeds back to the dashboard. How do we stream video? How do we receive the stream in JavaFX? Should the cameras be attached to the roboRIO or the onboard PC?
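On the "how do we receive the stream in JavaFX" question, one common route is an MJPEG stream: a multipart HTTP response where each part is a complete JPEG. A crude but workable way to split it into frames is to scan for the JPEG start-of-image (FF D8) and end-of-image (FF D9) markers and hand each frame to an ImageView. A sketch under those assumptions — the class, method names, and camera URL below are my own illustration, not any library's API:

```java
import java.util.Arrays;

// Sketch: split an MJPEG byte stream into individual JPEG frames by
// scanning for the SOI (FF D8) and EOI (FF D9) markers.
// Names here are made up for illustration.
public class MjpegFrameParser {

    // Returns the first complete JPEG frame found in buf, or null if none.
    public static byte[] extractJpegFrame(byte[] buf) {
        int soi = indexOfMarker(buf, (byte) 0xD8, 0);       // FF D8: start of image
        if (soi < 0) return null;
        int eoi = indexOfMarker(buf, (byte) 0xD9, soi + 2); // FF D9: end of image
        if (eoi < 0) return null;
        return Arrays.copyOfRange(buf, soi, eoi + 2);       // include the EOI marker
    }

    private static int indexOfMarker(byte[] buf, byte second, int from) {
        for (int i = from; i + 1 < buf.length; i++) {
            if (buf[i] == (byte) 0xFF && buf[i + 1] == second) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        // In a real dashboard you would read bytes from the camera's HTTP stream,
        // buffer them, pull frames out with extractJpegFrame, and hand each one
        // to JavaFX on the FX thread, e.g.:
        //   imageView.setImage(new Image(new ByteArrayInputStream(frame)));
        byte[] fake = {0x00, (byte) 0xFF, (byte) 0xD8, 0x01, 0x02, (byte) 0xFF, (byte) 0xD9, 0x55};
        byte[] frame = extractJpegFrame(fake);
        System.out.println("frame length = " + frame.length); // prints: frame length = 6
    }
}
```

The nice part of this approach is that it doesn't care whether the stream originates from the roboRIO or the onboard PC; the dashboard just needs an HTTP URL to read from.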
[strike]The whole purpose of the onboard PC is to do your vision processing (GRIP, OpenCV, take your pick) without the latency and issues of the FMS. With an onboard PC but GRIP running on your driver station, you get the disadvantages of both without any advantage beyond offloading the roboRIO. Either run GRIP on the onboard PC, or skip the onboard PC and stream directly to the driver station. If you're going to the effort of adding the PC, the former is the much preferable setup. I'm going to assume you're doing that for the rest of this explanation.[/strike]
Again, edited once I realized I was wrong and had misunderstood you. The two tutorials below have instructions on how to set it up, and you can use the same IP listed in the tutorials on your dashboard. There are lots of examples of how to display an MJPEG stream, the simplest just being a browser window.
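Since the OP is building in JavaFX, the "browser window" option maps directly onto WebView: point it at the streamer's URL and let WebKit do the decoding. A sketch — the URL below is a placeholder in mjpg-streamer's usual `?action=stream` form, not your actual address, and wrapping the stream in an `<img>` tag tends to be more reliable than loading the raw stream URL directly:

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.web.WebView;
import javafx.stage.Stage;

// Sketch: the "browser window" approach inside a JavaFX dashboard.
// The stream URL is a placeholder; use whatever your streamer reports.
public class CameraPane extends Application {
    @Override
    public void start(Stage stage) {
        WebView view = new WebView();
        // An <img> pointed at the MJPEG URL is how browsers normally render it.
        view.getEngine().loadContent(
            "<img src='http://10.0.0.2:1180/?action=stream' width='640' height='480'>");
        stage.setScene(new Scene(view, 640, 480));
        stage.setTitle("Camera");
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```

That gets video on screen with almost no code; the frame-parsing route only becomes necessary if you want to draw overlays or process frames in the GUI itself.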