I’m a first-year mentor and would like to gain some insight into the options the team has in configuring their Operator Interface. From what I understand, the OI’s LabVIEW Driver Station is locked down for safety reasons. That makes sense from the perspective of how and when the Driver Station communicates out to the robot control system. But what are the options for the Driver Station’s inputs? From what I can see, LabVIEW has the ability to recognize USB joystick inputs as well as inputs from a Cypress board.
Is it possible to configure the LabVIEW Driver Station to receive other inputs - i.e. from a USB connection to an Arduino board, or a LabVIEW VI that receives inputs from a socket, or possibly via a Java API?
I know there are options for configuring the OI Dashboard, but I have heard it can only be used for outputs from the Driver Station.
Unfortunately, I can’t launch my expired eval of LabVIEW until the team receives its license, and thus am unable to poke around.
-In the past, the Driver Station has allowed 4 USB joysticks and the Cypress board. It is compiled, and no user code can run from it directly.
-With the new Kinect beta, there is a separate KinectServer (written in C#) which talks over UDP to the Driver Station (which bundles it with the driver data packet). While I can’t say a ton, you can modify this as much as you want, and could use some of the extra data fields for your own purposes (it has a few extra fields which are fairly easy to use). Assuming you still want to use the Kinect, unfortunately you are stuck with C#.
-There are a few TCP/UDP ports which are open to the robot, and the Dashboard can use those to communicate with the robot, but you’re on your own with that.
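Purely as an illustration of what a custom UDP channel like that could look like, here is a minimal Python sketch (Python chosen only for brevity; the port number and packet layout are my own assumptions, not anything defined by FIRST - check the rules for which ports are actually open). It packs a couple of values into a fixed binary layout and sends them over UDP; the demo runs over loopback so it works without a robot present:

```python
import socket
import struct

# Illustrative only: on a real robot you would send to the cRIO's address
# (10.TE.AM.2 by convention, e.g. 10.10.73.2 for team 1073) on a port the
# rules allow. Both sides must agree on the binary layout used here.

def pack_values(x, y, button_mask):
    """Pack two floats and a button byte into a fixed big-endian layout."""
    return struct.pack(">ffB", x, y, button_mask)

def send_packet(sock, addr, x, y, button_mask):
    sock.sendto(pack_values(x, y, button_mask), addr)

if __name__ == "__main__":
    # Demo over loopback so the sketch runs without a robot present.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))          # OS picks a free port
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_packet(tx, rx.getsockname(), 0.5, -0.25, 0b1)
    data, _ = rx.recvfrom(64)
    x, y, buttons = struct.unpack(">ffB", data)
    print(x, y, buttons)               # 0.5 -0.25 1
    tx.close()
    rx.close()
```

The same send could just as easily come from a custom Dashboard, a plain script reading an Arduino over serial, or anything else running on the laptop.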
What? I don’t have as much info on this as you do (not a Beta team, and I don’t have access to a cRIO at the moment), but it seems to me that if you are communicating via UDP packets you could write this in any language you could run on the laptop.
It’s not true that to use the Kinect you need to use C#. You can use the built-in gestures without anything special (it looks like a normal joystick). You can also process the Kinect skeleton on the robot. I suspect more teams will do this than modify the Kinect server, since they can do the processing in a language they’re more familiar with (LabVIEW, C++, Java, etc.).
Adding extra data to the Kinect packet that goes to the DS (in Kinect Server) is a possible method of getting more data to the robot, which would require you to edit the existing C# code or re-implement it.
I don’t see any safety reason to disallow more data over TCP/UDP sockets, as the actual disabling of the robot is done in the FPGA by FRC Network Communications (based on the Driver Station packet).
Wonderful! I guess I’ve been making the wrong assumption in that the solution would only be FRC legal if the driver station is used as the sole conduit for sending instructions to the robot.
My impression now is that as long as the driver station is running, the team should be good to go.
I think your impression would not pass muster on a 2011 field. There is a collection of rules that would prohibit that. For 2012, this may change… but for 2011, the applicable rules were:
<R52> (All signals to the router on the robot had to originate with the field or the Operator Console, and no other router was legal.)
<R79> (No communication to, from, or within the Operator Console except for the field comm system.)
<R75> (The only thing allowed to communicate operating mode and state was a particular version of the Driver Station software; the device running it was up to the team.)
<R76> (Any device hosting the Driver Station software had to only connect via the provided Ethernet cable to the field communication system.)
So, if these rules don’t change from last year, IMO, your impression is incorrect. If they do change, it may be correct or incorrect.
Thanks EricH. Are you saying that some of the suggestions made on this thread are not applicable to years past?
“There are a few TCP/UDP ports which are open to the robot, and the Dashboard can use those to communicate with the robot”
“Your program(s) to do with as you will. It (or they) doesn’t have to even be a dashboard, but there are various default apps available to choose from… Inputs from any accessible source can be sent to the robot”
I know this is the first year for the Kinect and I’ll certainly wait to hear more on that…
control devices–>device running Driver’s Station (hardwire)–>Field communication (via Ethernet)–>Robot Router (wireless)–>cRIO (hardwire)
Not knowing exactly which ports are being referred to: if they don’t bypass the above framework, they should be OK. However, if they do bypass the framework, then there just might be a problem.
Considering that some of the responders are on Beta Test teams, they may know a little bit more about what’s being considered than I do. However, I think we’ll all find out 1/7/12 what is and isn’t legal.
Edit for clarity: To be clear, I was responding to the assumption that “as long as the driver station is running, the team should be good to go”. The previous assumption (driver’s station as sole conduit) is perfectly valid; the assumption in question is questionable and depends on implementation.
Team 1073 wrote our own Dashboard that used a USB-connected touchscreen display to set the initial position of the robot. It used a UDP port to send the data. It was perfectly legal. Last month, we did a quick demo of controlling our elevator and claw using a Kinect, and by-passed the Kinect server. That code also sent commands directly to the robot. I discussed our architecture with Kevin, so I’m assuming it was all legal.
The whole safety issue is managed by code in the Driver Station, the Field Management Software (FMS), and FIRST code on the robot. Together, they kill motors if you hit e-stop, and officials can disable you via the FMS.
The dashboard is just a regular Windows app and gets data to our code in the robot. No matter what our code on the robot tries to do, it can’t override a stopped state. For example, telling the Jaguar to move the motor doesn’t have any effect if the robot has been stopped, or, for that matter, prior to being enabled.
Software Mentor for The Force Team 1073, Hollis-Brookline HS