We have made our own dashboard in Python and it is working well. However, we can't figure out how to run a command from a NetworkTables entry the way you can from the sim GUI or Shuffleboard. Specifically, if we have a command named "intake piston" with the following NetworkTables entries:
I have also been interested in whether there is a clean way to do this. My best solution is to set a boolean flag (or some entry unrelated to the command itself) that, when activated, is picked up by something inside your robot code, which then runs that command.
Thanks! That works perfectly.
Along the same lines, is there an underlying JSON of the global NT that we could grab occasionally so we can make use of that pre-made dictionary of all the NT values? When we do getGlobalTable().getSubTables() we get a list that we then convert back into a complete dictionary. I suspect that work is already done somewhere. It makes tracking and populating tree views a lot easier.
So far it is coming along very quickly. (I have no idea what prompted the students’ love of Comic Sans.)
No. However, NT is fundamentally a flat namespace. In the standard C++/Java NT API you can just grab all keys via NetworkTableInstance.getDefault().getEntries() and build the tree yourself by splitting the key names on "/".
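The flat-to-tree conversion is only a few lines in Python; here is a sketch (the key paths used in the example are made up, and the leaves store `None` where you would put the actual entry value):

```python
def nt_keys_to_tree(keys):
    """Build a nested dict from flat NetworkTables key paths."""
    tree = {}
    for key in keys:
        node = tree
        parts = key.strip("/").split("/")
        for part in parts[:-1]:
            # Descend, creating intermediate "subtables" as needed.
            node = node.setdefault(part, {})
        node[parts[-1]] = None  # leaf; store the entry's value here instead
    return tree

keys = [
    "/SmartDashboard/intake piston/running",
    "/SmartDashboard/auto mode",
    "/FMSInfo/IsRedAlliance",
]
tree = nt_keys_to_tree(keys)
# tree["SmartDashboard"]["intake piston"]["running"] is a leaf
```

The resulting nested dict maps directly onto a tree-view widget: each dict is a branch, each non-dict value a leaf.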
It is custom, and it may not be that straightforward unless people have already invested in learning PyQt or an equivalent UI-building application. It's pretty quick to learn, however, and Qt works in Java and C++ as well.
I have the team programming in Python, so we use robotpy for the robot, and we're expanding our FRC toolkit. PyQt is a nice framework for a graphical user interface, and pyqt5-tools comes with a UI designer so you can drag and drop widgets to build your UI, which it saves as an XML .ui file. You can then load that .ui file from the Python script that does the actual NetworkTables interfacing. We'll release once we have it documented, but for now it's in our training repo here.
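The .ui import step looks roughly like this, assuming PyQt5 (the filename `dashboard.ui` and the window class name are made up for illustration):

```python
import sys
from PyQt5 import QtWidgets, uic

class Dashboard(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        # Load the XML .ui file saved by Qt Designer; the widgets it
        # defines become attributes on this window, named as in the .ui.
        uic.loadUi("dashboard.ui", self)
        # NetworkTables listeners would be hooked up to those widgets here.

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    win = Dashboard()
    win.show()
    sys.exit(app.exec_())
```

Loading the .ui at runtime (rather than compiling it with pyuic5) keeps the designer file as the single source of truth for the layout.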
Part of what prompted this was feedback from competition last week. Our camera sends back distance and rotation to the hub, but that data is hard to see, so we wanted a clean interface that pops up the ball where it is predicted to go. That gives better human-readable info to the drive team coach and lets us fine-tune the system before the next competition. This is one way to get that flexibility.