
A few questions about the control system...


pipsqueaker
04-06-2016, 18:28
I'm considering starting a software project which would offload most of the robot code from the roboRIO to a USB-connected Android device (mostly for speed; I'd like to be able to run robot code much faster than the default 50 Hz / 20 ms loop). I have a few questions that need to be answered beforehand, however.

1. I know that the driver station only sends packets every 20 ms, but are there any control system components (Talons, solenoids, roboRIO, etc.) which can only accept/execute inputs at 50 Hz? I haven't managed to find any that do, but I figured I'd ask to be safe.

2. How fast could the Android and the roboRIO communicate over USB? Assuming I used something like RIODroid to, for example, send the phone a table of variables describing the state of the robot and joysticks (or vice versa), would there be significant latency between the two?

3. Does WPILib limit code running on the RoboRIO to 50Hz? If so, is there a way around that?

thatprogrammer
04-06-2016, 18:34
I can't answer all of your questions, but I can tell you we use 254's looper code to run our code every 5 ms rather than 20 ms.
Here is the code you want to look at: https://github.com/Team254/FRC-2015/blob/master/src/com/team254/lib/util/Looper.java *
Also, I recall 254 saying the Nexus that communicated with the roboRIO for vision had about 40 ms of lag.

*Check out their Multilooper and Loopable to better understand how to use the code.
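For context, the basic idea behind a fixed-rate looper can be sketched without any of 254's code, using a plain `ScheduledExecutorService` (a generic illustration, not their implementation — their Looper adds more, such as loop timing diagnostics):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal sketch of a fixed-rate "looper" (not 254's actual code):
// schedules a control task at its own period, entirely independent
// of the 20 ms driver station packets.
public class MiniLooper {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Run the given loop body every periodMs milliseconds.
    public void start(Runnable loop, long periodMs) {
        scheduler.scheduleAtFixedRate(loop, 0, periodMs, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdown();
    }
}
```

Calling `start(controlLoop, 5)` would run the body at roughly 200 Hz, subject to OS scheduling jitter.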

GeeTwo
04-06-2016, 21:01
Presuming that the rules for co-processing remain consistent with recent years:

The control signals to every motion device, whether a motor, pneumatic cylinder, solenoid, servo, or otherwise, are required by the rules to come from the 'RIO (for 2016, most particularly R68 through R71). This is done quite deliberately, in the name of safety (in particular, FIRST wants all motion to shut down when the field crew disables a robot).

In order to legally make the change you're describing, you would have to have a model in which your external (android) device would send high level/abstract commands to the roboRIO so that the roboRIO could do the low level control.

Greg McKaskle
04-06-2016, 21:25
The DS sends joystick info at 50 Hz, or every 20 ms, because that is about as fast as needed for a human input device. But this doesn't put any limits on how fast the roboRIO executes code or how fast it commands outputs. The speed controllers used in FRC are capable of a new update about every 5 ms, though it depends on the model of motor controller.

You don't mention what language you use, but the biggest consideration is to look at what controls the scheduling. The LV template code has periodic task loops that can run at any rate you like and are totally independent of the DS messages. In C++, you'd spawn a thread or task and do anything you like. The Java code can do this as well, but don't ask me for details.
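The spawn-a-thread approach can be sketched in plain Java, with no WPILib types (a generic illustration; the absolute-deadline arithmetic keeps timing error from accumulating across iterations, unlike a naive fixed sleep):

```java
// Sketch of a periodic task thread (generic Java, no WPILib dependencies):
// sleeps until the next absolute deadline so that a late wakeup on one
// iteration does not push every later iteration back.
public class PeriodicTask implements Runnable {
    private final Runnable body;
    private final long periodNanos;
    private volatile boolean running = true;

    public PeriodicTask(Runnable body, long periodMillis) {
        this.body = body;
        this.periodNanos = periodMillis * 1_000_000L;
    }

    public void stop() { running = false; }

    @Override
    public void run() {
        long next = System.nanoTime() + periodNanos;
        while (running) {
            body.run();
            long sleepNanos = next - System.nanoTime();
            if (sleepNanos > 0) {
                try {
                    Thread.sleep(sleepNanos / 1_000_000L,
                                 (int) (sleepNanos % 1_000_000L));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            next += periodNanos;
        }
    }
}
```

You would hand an instance to `new Thread(...)` and start it; the body then runs at its own rate regardless of when DS packets arrive.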

If you see a limit as to how things are being scheduled, it is likely being caused by the framework you are using. I believe that the command-based framework defaults to a 20ms schedule rate.

Also, if you look at the chart tab on the DS, it shows the roboRIO CPU. Unless your CPU is pegged, you don't necessarily need a faster processor, but different code running on it.

Greg McKaskle

rich2202
04-06-2016, 23:23
3. Does WPILib limit code running on the RoboRIO to 50Hz? If so, is there a way around that?

I believe you are thinking about Teleop_Periodic(), which is executed at about that frequency. The frequency can be changed.

There is also Teleop_Continuous(), which is called continuously (I'm guessing as soon as it ends, it is called again).

Note: The exact names may be a little different, but the concept is the same. There are Autonomous versions of the same.

pipsqueaker
04-06-2016, 23:42
In order to legally make the change you're describing, you would have to have a model in which your external (android) device would send high level/abstract commands to the roboRIO so that the roboRIO could do the low level control.

Yup, the flow was gonna be something like

1. Roborio sends data to Android
2. Android does calculations
3. Android sends a table of values for all motor outputs, solenoid states, etc. back to roborio
4. Roborio sets outputs of controllers

Which is why I was concerned about latency
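Step 3's "table of values" could be sketched as a fixed-layout byte buffer (a hypothetical wire format — `OutputPacket`, the field order, and the counts are all made up for illustration, not anything RIODroid defines):

```java
import java.nio.ByteBuffer;

// Hypothetical fixed-layout packet for step 3: N motor outputs (doubles)
// followed by M solenoid states (one byte each). Both ends must agree on
// N, M, and the ordering ahead of time.
public class OutputPacket {
    public static byte[] encode(double[] motors, boolean[] solenoids) {
        ByteBuffer buf = ByteBuffer.allocate(8 * motors.length + solenoids.length);
        for (double m : motors) buf.putDouble(m);
        for (boolean s : solenoids) buf.put((byte) (s ? 1 : 0));
        return buf.array();
    }

    public static double[] decodeMotors(byte[] packet, int motorCount) {
        ByteBuffer buf = ByteBuffer.wrap(packet);
        double[] motors = new double[motorCount];
        for (int i = 0; i < motorCount; i++) motors[i] = buf.getDouble();
        return motors;
    }
}
```

A flat binary layout like this keeps the per-cycle payload tiny (tens of bytes), so serialization itself should not be the latency bottleneck; the transport would be.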


If you see a limit as to how things are being scheduled, it is likely being caused by the framework you are using. I believe that the command-based framework defaults to a 20ms schedule rate.

Also, if you look at the chart tab on the DS, it shows the roboRIO CPU. Unless your CPU is pegged, you don't necessarily need a faster processor, but different code running on it.

Greg McKaskle

The reason I was interested in a faster processor was that I wanted to do detailed logging on every iteration of the code, as well as vision processing onboard, which I was concerned would be too taxing on the roboRIO (though I recognize I could use a setup similar to the Poofs).

saikiranra
05-06-2016, 00:11
In order to legally make the change you're describing, you would have to have a model in which your external (android) device would send high level/abstract commands to the roboRIO so that the roboRIO could do the low level control.

IIRC, 971 did that in the C-Rio days.

GeeTwo
05-06-2016, 23:58
IIRC, 971 did that in the C-Rio days.

I did not mean to come across as saying this is an impossible/unreasonable task, just that this would be the only way (or at least the only obvious way) to do it legally, presuming no change in what appears to be the intent of the rules. I am actually using a similar model for a summer ball launcher: a Raspberry Pi will do vision processing and control high-level logic, and an Arduino will do all the device control and sensor inputs other than the camera.

Tom Bottiglieri
07-06-2016, 17:10
Also, I recall 254 saying the Nexus that communicated with the roboRIO for vision had about 40 ms of lag.
The 40 ms of lag is the time between the camera sensor capturing a frame and our vision pipeline finishing processing it. I haven't measured the latency of sending a packet through adb's port forwarding tools, but it seems to be in line with sending a packet to another machine directly connected over Ethernet.

apalrd
13-06-2016, 20:21
You should have no problem getting a 100 Hz+ execution rate on a roboRIO.

In the cRIO2 days, in LV, I was able to achieve a 10 ms loop time with probes open. Without the overhead of probes, I believe I could have gotten 7 ms (143 Hz), maybe even 5 ms (200 Hz), but that would need more code optimization and would leave very little spare CPU bandwidth. This was all on the cRIO2, with a ~400 MHz PowerPC.

On the roboRIO we continue to use the 10 ms loop time, because we don't need to run any code faster. We use CAN now, and at a 1 Mbps bus rate there is a limit to how fast you can transmit messages before saturating the bus, but 100 Hz is plenty for anything I have encountered in FRC.
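As a back-of-the-envelope check on that bus limit (assuming roughly 130 bits on the wire per full 8-byte CAN frame, stuffing and overhead included — an estimate for illustration, not a spec figure):

```java
// Rough CAN bandwidth budget (back-of-the-envelope, not a spec table):
// how many frames per second the bus can carry, and what fraction of
// that a given device count and loop rate would consume.
public class CanBudget {
    public static double framesPerSecond(long bitRate, int bitsPerFrame) {
        return (double) bitRate / bitsPerFrame;
    }

    // Fraction of bus capacity used: devices * frames-per-device-per-cycle
    // * loop rate, divided by the bus's frame-per-second ceiling.
    public static double busLoad(int devices, int framesPerDevicePerCycle,
                                 double loopHz, long bitRate, int bitsPerFrame) {
        double framesNeeded = devices * framesPerDevicePerCycle * loopHz;
        return framesNeeded / framesPerSecond(bitRate, bitsPerFrame);
    }
}
```

With, say, 10 devices sending 2 frames each at 100 Hz, that's 2,000 frames/s against a ceiling of roughly 7,700, or about 26% bus load — consistent with 100 Hz being comfortable and much higher rates starting to saturate the bus.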

In general, the fewer devices that need to interact with each other, the lower the latency you will get. I think you can keep all of your code on the roboRIO and achieve low latency. I would also guess that you can't get low-jitter execution above 100 Hz on an Android device, because it's not running a real-time operating system. The timing jitter of the cRIO2 (running the VxWorks OS) was very good; I haven't measured it on the roboRIO.

magnets
13-06-2016, 21:33
You should have no problem getting a 100 Hz+ execution rate on a roboRIO.

In the cRIO2 days, in LV, I was able to achieve a 10 ms loop time with probes open. Without the overhead of probes, I believe I could have gotten 7 ms (143 Hz), maybe even 5 ms (200 Hz), but that would need more code optimization and would leave very little spare CPU bandwidth. This was all on the cRIO2, with a ~400 MHz PowerPC.

On the roboRIO we continue to use the 10 ms loop time, because we don't need to run any code faster. We use CAN now, and at a 1 Mbps bus rate there is a limit to how fast you can transmit messages before saturating the bus, but 100 Hz is plenty for anything I have encountered in FRC.

In general, the fewer devices that need to interact with each other, the lower the latency you will get. I think you can keep all of your code on the roboRIO and achieve low latency. I would also guess that you can't get low-jitter execution above 100 Hz on an Android device, because it's not running a real-time operating system. The timing jitter of the cRIO2 (running the VxWorks OS) was very good; I haven't measured it on the roboRIO.

Why are the cRIO and roboRIO so bad at fast loops? Last year, I measured loop times on a 20 ms loop of up to 25 ms, average 21 ms. With a $10.99 microcontroller eval board I have sitting in front of me, I can run a loop that does a decent amount of floating-point math at 15 kHz with timing jitter so small I can't measure it with my oscilloscope. It's also running a real-time operating system that supports threading.

But honestly, for FRC, you probably don't need much more than 200 Hz or so.

RufflesRidge
13-06-2016, 22:08
Why are the cRIO and roboRIO so bad at fast loops? Last year, I measured loop times on a 20 ms loop up to 25 ms, average 21 ms.

What language and timing mechanism? LV can definitely do better than this depending on the mechanism, and I'm pretty sure that C++ and Java will do better as well with Notifiers, unless you are actually doing something that is causing overruns.

apalrd
14-06-2016, 02:24
In LabVIEW on a cRIO (running VxWorks), you can use timed loops to get excellent timing accuracy. Self-measured jitter (reported by the OS) is usually tens of microseconds. I can't speak for other languages, but real-time constructs definitely exist in VxWorks and they seem to work very well. You can't use an oscilloscope to test this because I/O goes through the FPGA, and is inherently quite slow.

The new roboRIO runs Linux with PREEMPT-RT. I can't speak for its timing jitter, but I would expect it to be worse than VxWorks and better than normal Linux. It's good enough if you use the timed loop constructs in LabVIEW (I'm not sure of the equivalent in C++, or whether Java has any real-time constructs at all).
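One way to get a feel for that jitter yourself is a sleep-based probe (a generic sketch in Java, not LabVIEW's timed-loop diagnostics): record how far each wakeup lands from its intended deadline.

```java
// Sketch: measure wakeup jitter of a sleep-based periodic loop.
// On a non-real-time OS, expect the worst-case error to sit well
// above the average.
public class JitterProbe {
    public static long[] measure(int iterations, long periodMillis)
            throws InterruptedException {
        long periodNanos = periodMillis * 1_000_000L;
        long[] errorsNanos = new long[iterations];
        long next = System.nanoTime() + periodNanos;
        for (int i = 0; i < iterations; i++) {
            long sleepNanos = next - System.nanoTime();
            if (sleepNanos > 0) {
                Thread.sleep(sleepNanos / 1_000_000L,
                             (int) (sleepNanos % 1_000_000L));
            }
            errorsNanos[i] = System.nanoTime() - next; // positive = woke up late
            next += periodNanos;
        }
        return errorsNanos;
    }
}
```

Printing the max and average of the returned errors gives a quick comparison across platforms, similar in spirit to the self-reported jitter numbers above.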

The challenge with a large OS like Linux is the addition of a ton of overhead in context switching, memory management, and normal computer interrupts and functions. Microcontrollers have none of this, and can dedicate their entire existence to handling low-level I/O and running algorithm code without being bothered with handling context switches, paging memory, hot-plugging USB devices, etc.

The fastest I have ever attempted to run algorithm code was for a powertrain control module (responsible for an engine and transmission in a car), on a 180 MHz PowerPC with single-precision floating point. It was able to run algorithm code as fast as 1 kHz (and I/O events and kernel internals much faster), with the bulk of the core engine code running at 100 Hz or more. The CPU utilization was usually under 50%. The big enablers for this were the complete lack of heap memory and filesystem, compile-time definition of all tasks and task tables, and co-operative scheduling of most tasks to avoid memory locking and context switching. 'Nice' features like dynamic allocation of memory (aside from locals on the stack), per-thread or per-task stacks and contexts, process isolation, file I/O, and separate driver modules (all with their own separate tasks) all take up a lot of CPU resources on a small processor. For the same CPU, a fully-featured OS will have far worse timing characteristics than a basic real-time OS or microcontroller.

Overall, 100 Hz timing should be enough for FRC. 50 Hz is probably too slow for control loops, but good enough for driver/HMI interactions.