Driver Station options?

I’m a first-year mentor and would like to gain some insight into the options the team has in configuring their Operator Interface. From what I understand, the OI’s LabVIEW Driver Station is locked down for safety reasons. That makes sense from the perspective of how and when the Driver Station communicates out to the robot control system. But what are the options for the Driver Station’s inputs? From what I can see, LabVIEW has the ability to recognize USB joystick inputs as well as inputs from a Cypress board.

Is it possible to configure the LabVIEW Driver Station to receive other inputs — e.g. from a USB connection to an Arduino board, or a LabVIEW VI that receives inputs from a socket, or possibly via a Java API?

I know there are options for configuring the OI Dashboard, but I have heard it can only be used for outputs from the Driver Station.

Unfortunately, I can’t launch my expired eval of LabVIEW until the team receives its license, and thus am unable to poke around.


-In the past, the Driver Station has allowed 4 USB joysticks and the Cypress board. It is compiled, and no user code can run from it directly.

-With the new Kinect beta, there is a separate KinectServer (written in C#) which talks over UDP to the Driver Station (which bundles it into the driver data packet). While I can’t say a ton, you can modify this as much as you want, and could use some of the extra data fields for your own purposes (it has a few extra fields which are fairly easy to use). Assuming you still want to use the Kinect, unfortunately you are stuck with C#.

-There are a few TCP/UDP ports which are open to the robot, and the Dashboard can use those to communicate with the robot, but you’re on your own with that.

I may be forgetting something…
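To make the open-ports idea above concrete, here is a minimal sketch of what a custom app on the driver laptop could do: push an application-defined payload to robot code listening on an agreed UDP port. The host address, port number, and "arm:up" payload are all hypothetical — the source doesn’t specify which ports are open, so you would need a matching listener on the robot side.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class CustomDataSender {
    // Send one UDP datagram carrying an application-defined payload.
    // Host and port are placeholders; robot-side code would need a
    // listener bound to the same port to actually receive this.
    public static void send(String host, int port, String payload) throws Exception {
        byte[] data = payload.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                data, data.length, InetAddress.getByName(host), port);
            socket.send(packet);
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical robot address and port -- adjust to your setup.
        send("10.0.0.2", 1150, "arm:up");
    }
}
```

Because UDP is connectionless and unacknowledged, the sender neither knows nor cares whether anything is listening — which is also why, as noted above, "you’re on your own" for reliability.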

Driver Station application:

  • 4 USB standard game controllers/joysticks (can also be custom controls wired in as a standard game controller)

  • Cypress FirstTouch inputs/outputs

  • I/O tab controls, as an alternative to the Cypress

  • Text message output display

  • For 2012: F1 (enable), Enter (disable), space bar (emergency stop)

No additional inputs can be incorporated.

Dashboard or other custom apps:

  • Your program(s) to do with as you will. They don’t even have to be dashboards, and there are various default apps available to choose from.

  • Kinect server is one instance of a custom user default program. Check the Beta teams forum for advance word.

  • Outputs whatever you want

  • Inputs from any accessible source can be sent to the robot

What? I don’t have as much info on this as you do (not a Beta team and don’t have access to a CRIO at the moment) but it seems to me that if you are communicating via UDP packets you could write this in any language you could run on the laptop.

Anyone using OpenKinect?

Another process sending instructions to the robot? Would that be FRC legal, since the untouched Driver Station would still be able to shut down the robot?

It’s not true that you need C# to use the Kinect. You can use the built-in gestures without anything special (it looks like a normal joystick). You can also process the Kinect skeleton on the robot. I suspect more teams will do this than modify the Kinect server, since they can do the processing in a language they’re more familiar with (LabVIEW, C++, Java, etc.).

Adding extra data to the Kinect packet that goes to the DS (in Kinect Server) is a possible method of getting more data to the robot, which would require you to edit the existing C# code or re-implement it.

I don’t see any safety reason to disallow more data over TCP/UDP sockets, as the actual disabling of the robot is done in the FPGA by FRC Network Communications (based on the Driver Station packet).

Wonderful! I guess I’ve been making the wrong assumption that the solution would only be FRC legal if the Driver Station is used as the sole conduit for sending instructions to the robot.
My impression now is that as long as the driver station is running, the team should be good to go.

Thanks for the quick responses!


I think your impression would not pass muster on a 2011 field. There is a collection of rules that would prohibit that. For 2012, this may change… but for 2011, the applicable rules were:
<R52> (All signals to the router on the robot had to originate with the field or the Operator Console, and no other router was legal.)
<R79> (No communication to, from, or within the Operator Console except for the field comm system.)
<R75> (The only thing allowed to communicate operating mode and state was a particular version of Driver Station software–device running it was up to the team.)
<R76> (Any device hosting the Driver Station software had to only connect via the provided Ethernet cable to the field communication system.)

So, if these rules don’t change from last year, IMO, your impression is incorrect. If they do change, it may be correct or incorrect.

Thanks, EricH. Are you saying that some of the suggestions made in this thread would not have been legal in years past?

  • “There are a few TCP/UDP ports which are open to the robot, and the Dashboard can use those to communicate with the robot”
  • “Your program(s) to do with as you will. It (or they) doesn’t have to even be a dashboard, but there are various default apps available to choose from… Inputs from any accessible source can be sent to the robot”

I know this is the first year for the Kinect and I’ll certainly wait to hear more on that…


The basic framework is:

control devices–>device running Driver’s Station (hardwire)–>Field communication (via Ethernet)–>Robot Router (wireless)–>cRIO (hardwire)

Not knowing exactly which ports are being referred to: if they don’t bypass the above framework, they should be OK. However, if they do bypass the framework, then there just might be a problem.

Considering that some of the responders are on Beta Test teams, they may know a little bit more about what’s being considered than I do. However, I think we’ll all find out 1/7/12 what is and isn’t legal.

Edit for clarity: To be clear, I was responding to the assumption that “as long as the driver station is running, the team should be good to go”. The previous assumption (driver’s station as sole conduit) is perfectly valid; the assumption in question is questionable and depends on implementation.

All of the methods posted in this thread were legal in 2011, except for the Kinect. Other methods that you think of would need to be evaluated against the rules that Eric posted.

Do you have access to the LabVIEW 8.6 DVD from last year? That LabVIEW is good until late January 2012 (past the date you will have the new 2012 kit of parts).

You can install that and “poke around” with the robot stuff from last year.
You can install it on more than one computer (up to about 20 PCs used for FIRST)
For FRC 2012 we will be using LabVIEW 2011.

Yes, I installed from that DVD. Perhaps I was given the incorrect serial #; from what I remember it started with L3.

Good point. You know what they say about assumptions:)

Let’s say the team has the appropriate version of the Driver Station installed on the OI PC and is using it to pass joystick inputs to the robot. All standard stuff and legal so far…

In addition, the OI PC has a custom app running that is receiving USB input from a button sensor on an Arduino board.
Is that alone legal for 2011?

Furthermore: the custom app, upon receiving input over USB, sends a TCP packet to the robot to perform an action.
Still legal for 2011?

I don’t see anything that would make it illegal.

Furthermore: the custom app, upon receiving input over USB, sends a TCP packet to the robot to perform an action.
Still legal for 2011?

As long as it sends it over the competition network, I see no reason it wouldn’t be.
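As a sketch of what that custom app’s network side might look like (the port number and the "button:pressed" command vocabulary are invented for this example, and the Arduino/USB event is simply assumed to have already been detected), it could open a TCP connection to robot code listening on an agreed port, send one newline-terminated command, and read back a reply:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class RobotCommandClient {
    // Connect to robot code listening on host:port, send one
    // newline-terminated command, and return the robot's one-line reply.
    // The command format is hypothetical; the robot-side listener must
    // agree on the same protocol.
    public static String sendCommand(String host, int port, String command)
            throws Exception {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                 new InputStreamReader(socket.getInputStream()))) {
            out.println(command);
            return in.readLine();
        }
    }
}
```

On the competition field this traffic would still have to ride the standard Ethernet/wireless path described earlier in the thread — this is just a plain TCP client, nothing FRC-specific.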

Page 4 of Getting Started with the 2011 FRC Control System

You should be able to just enter the LabVIEW serial number at the popup telling you LabVIEW has expired.

Or find and launch the NI License Manager.

Team 1073 wrote our own Dashboard that used a USB-connected touchscreen display to set the initial position of the robot. It used a UDP port to send the data. It was perfectly legal. Last month, we did a quick demo of controlling our elevator and claw using a Kinect, and bypassed the Kinect server. That code also sent commands directly to the robot. I discussed our architecture with Kevin, so I’m assuming it was all legal.

The whole safety issue is managed by code in the Driver Station, Field Management Software (FMS), and FIRST code on the robot. Together, they kill motors if you hit e-stop, and officials can disable you via FMS.

The dashboard is just a regular Windows app and gets data to our code in the robot. No matter what our code on the robot tries to do, it can’t override a stopped state. For example, telling the Jaguar to move the motor doesn’t have any effect if the robot has been stopped, or, for that matter, prior to being enabled.

Software Mentor for The Force Team 1073, Hollis-Brookline HS

BTW, I believe the rules quoted mean something like an iPhone talking directly to the robot over WiFi is not permitted. Nor could the iPhone communicate wirelessly to the laptop.


Thanks. That did the trick.