Programming Questions????

Is there a simple way to put a value on the FRC Driver Station to show that a limit switch is being pressed? Also, what is the easiest way to program and run vision in Java? We were hoping to process vision on the roboRIO, unless there is another way to run it without an external cell phone or coprocessor. Currently we have a Microsoft LifeCam that we were planning to use, and a Pixy camera on our robot that is just acting as a camera. Does anyone have experience with a simple vision program, or know of one that might help us?

Is the limit switch attached to a Talon or the RIO? You’ll want to use SmartDashboard to display the values, but where the switch is attached determines how you get the value. For vision, you should check out GRIP.

It is just connected to the roboRIO.

In that case, take a look at this: https://wpilib.screenstepslive.com/s/4485/m/13809/l/599709-switches-using-limit-switches-to-control-behavior
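The pattern on that page boils down to gating a motor command on the switch state. Here is a rough sketch of that idea; it assumes WPILib, so a minimal stand-in for `DigitalInput` is included to make the snippet compile outside a robot project (in real robot code you would import `edu.wpi.first.wpilibj.DigitalInput` instead), and the class and channel names are just illustrative:

```java
// Stand-in for WPILib's DigitalInput so this compiles outside a robot project.
class DigitalInput {
    DigitalInput(int channel) {}
    boolean get() { return false; }  // the real version reads the DIO pin
}

public class LimitedLift {
    // Switch wired to DIO channel 0 on the roboRIO (channel is illustrative)
    private final DigitalInput topLimit = new DigitalInput(0);

    // Clamp the requested speed to zero once the switch is pressed.
    // Depending on your wiring, get() may read true when the switch is
    // open rather than closed, so invert the condition to match hardware.
    double safeSpeed(double requested) {
        return topLimit.get() ? 0.0 : requested;
    }
}
```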

Thank you. Is there a way to program the SmartDashboard so we can see whether the limit switch is pressed or not?


public void teleopPeriodic() {
    SmartDashboard.putBoolean("Limit Switch", limitSwitch.get());
    // Rest of your teleopPeriodic goes here
}

This assumes you instantiate a DigitalInput called limitSwitch, as shown in the ScreenSteps.
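Putting the two pieces together, the whole pattern looks roughly like this. Stand-ins for WPILib's `DigitalInput` and `SmartDashboard` are included so the sketch compiles on its own; in a real robot project you would import `edu.wpi.first.wpilibj.DigitalInput` and the SmartDashboard class instead, and the DIO channel is just an example:

```java
// Stand-in for WPILib's DigitalInput (real one reads a roboRIO DIO pin).
class DigitalInput {
    DigitalInput(int channel) {}
    boolean get() { return true; }
}

// Stand-in for WPILib's SmartDashboard (real one publishes to the dashboard).
class SmartDashboard {
    static void putBoolean(String key, boolean value) {
        System.out.println(key + ": " + value);
    }
}

public class Robot {
    // Instantiated once, wired to DIO channel 0 (channel is illustrative)
    private final DigitalInput limitSwitch = new DigitalInput(0);

    public void teleopPeriodic() {
        // Publish the switch state every loop iteration (~20 ms)
        SmartDashboard.putBoolean("Limit Switch", limitSwitch.get());
    }
}
```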

Ok, that makes sense. Thanks

Vision processing on the Rio without an external processor is really slow. I would love to hear from teams who have successfully run vision on the Rio and their opinion on the matter, though. I have heard of teams who have made "copy and paste" vision programs, but personally I have never played around with them.

You can’t run extremely high resolution/FPS on the Rio, but you can run high enough to be very competitive. Our team did it with GRIP in 2016, and if I recall correctly, 2056 did onboard processing with a LifeCam in 2017, and they were extremely good.

Ok, do you know of a simple way to implement vision into code?

I would consider using GRIP. Once you have defined a pipeline to process the camera images and identify the target(s), there is an option to output the code in Java or C++. A good starting place is here.
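For a sense of what using the exported code looks like: GRIP's generated class (named GripPipeline by default) exposes a process() method that runs the filter steps on a frame, plus getters for the results. Below is a rough sketch of that flow only; the OpenCV `Mat` and the pipeline itself are stand-ins so the snippet compiles outside a robot project, and the getter name mirrors GRIP's typical "filter contours" step but will depend on your actual pipeline:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for OpenCV's Mat (a camera frame).
class Mat {}

// Stand-in for a GRIP-generated pipeline: process() would run the
// thresholding/contour-finding steps, and a getter exposes the contours
// that survived filtering.
class GripPipeline {
    private final List<Mat> contours = new ArrayList<>();
    void process(Mat frame) { /* generated filter steps go here */ }
    List<Mat> filterContoursOutput() { return contours; }
}

public class VisionExample {
    public static void main(String[] args) {
        GripPipeline pipeline = new GripPipeline();
        Mat frame = new Mat();       // in real code, grabbed from the camera
        pipeline.process(frame);     // run the exported pipeline on the frame
        int targets = pipeline.filterContoursOutput().size();
        System.out.println("Targets found: " + targets);
    }
}
```

WPILib also provides a VisionThread helper that runs a pipeline like this on CameraServer frames in a background thread, so the processing doesn't block your robot loop.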

Our last season’s code can be found here. It has vision code using a Pixy camera. It also has GRIP vision code, although we decided to use the Pixy, so that code is there but not in use.

If for some reason SmartDashboard fails you, an alternate suggestion (in Java) is System.out.println() after every switch read.

It usually requires changing the print level of the Driver Station to print everything, instead of just warnings and errors (the little gear above the console).

I would not recommend this for an actual competition, as it easily hides errors and warnings and takes up bandwidth. But it’s a quick-n-dirty solution.

SmartDashboard is definitely the better option.