RTP or kernel module?

I am starting to read the “VxWorks Application Programmer’s Guide, 6.6” and came across the following two statements.

VxWorks 6.x systems can be created with kernel-based applications and without any process-based applications, or with a combination of the two.

For more information about kernel-based applications, see the VxWorks Kernel Programmer’s Guide: Kernel.

I know that when we compile our robot code we generate a “downloadable kernel module”. This leads to three questions for me:

  1. Does this mean we are running as a “kernel-based” application and not as a “process-based” application?

  2. If so, given that the “process-based” model has some stability benefits (application memory is kept isolated from kernel memory), will FIRST eventually move to the process-based model?

  3. Will I get more value from reading the Kernel Programmer’s Guide or the Application Programmer’s Guide?

Thanks
Joe

For now, running in the kernel is the only option. The VxWorks kernel that is on the cRIO was not compiled with user processes enabled. That’s just something we have to live with. I’m guessing the kernel guide will be of more use, though I’d be surprised if you need anything in there.

Can you remake the kernel? Is the image standardized across the entire (non-FRC) product line or something?

Nope. We’ve tested this in the past but it has negative interactions with LabVIEW real-time. And yes, the kernel is standard across the cRIO product line (for those that share the same SoC).

It’s true that you guys have done a great job of abstracting away all the nitty-gritty details…and thanks for that! I am just trying to get a better understanding for when we run into issues along the way. It’s hard to troubleshoot issues if you don’t at least have a foggy idea of what’s going on in the levels below your application. I have some experience as an application developer in the Linux and QNX environments. Things were pretty much one-for-one portable across those two platforms, but I am finding things are a little outside my comfort zone when developing for the VxWorks/cRIO environment. It does seem that the process-based support VxWorks has added in newer versions of the OS is closer to what I am used to with QNX and Linux.

VxWorks is a flat memory model for FIRST purposes. All code can see all memory and all applications (kernel modules) run in supervisor state. The RTP interface is a way of running code in user state instead of the normal supervisor state. But, we don’t have access to that in FIRST. For FIRST applications, think of it as running everything in the kernel of Linux or QNX and you’ve got the right idea.

VxWorks is a thread-based O/S. The scheduler only schedules threads and not processes. Also, each thread is independently schedulable (a 1-to-1 scheduling model). The default scheduler is a preemptive, priority-based model. Essentially, “run till you block or get preempted by a higher priority thread”. There is no attempt at fairness. Real-time operating systems are notoriously unfair by design. Highest priority always wins.
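To make that concrete, here is a minimal sketch using the native taskSpawn() call; the task names, entry functions, priorities, and stack sizes are made up for illustration:

    // Two tasks at different priorities: the higher-priority one preempts
    // the lower-priority one whenever it becomes ready to run.
    #include <vxWorks.h>
    #include <taskLib.h>

    void controlLoop() { /* time-critical work (hypothetical) */ }
    void loggingLoop() { /* background work; only runs when controlLoop is blocked */ }

    void startTasks()
    {
        // Lower number = higher priority (0 is highest, 255 is lowest).
        taskSpawn("tControl", 90,  0, 20000, (FUNCPTR)controlLoop, 0,0,0,0,0,0,0,0,0,0);
        taskSpawn("tLogging", 110, 0, 20000, (FUNCPTR)loggingLoop, 0,0,0,0,0,0,0,0,0,0);
    }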

The VxWorks RTP looks remarkably like a Linux user-space process right down to the stack guard pages. But, because the RTP runs in user state, it can’t easily get to services provided by the kernel (like WPILib). So, I’d avoid RTPs even if you had access to them.

Now, if you want a challenge that can improve the performance of your 'bot, look at using separate threads. WPILib supports the concept and the 2011 code is much easier to follow than the 2010 code. This allows you to avoid all of the polling business in the continuous teleop loop and go event driven which reduces the CPU load on the cRIO. In addition, consider offloading the sensor inputs to a processor like the Arduino and communicate with the cRIO using sockets.

The Arduino can be used as a sensor platform and further offload the cRIO. Just don’t control anything via the Arduino. That violates the rules, I think. All of the control is supposed to come via the cRIO.
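If you go the socket route, a rough sketch of the cRIO side might look like the following, using the ordinary BSD socket calls that VxWorks provides; the packet layout and port number below are just placeholders:

    // Receive sensor packets from the Arduino over UDP.
    #include <vxWorks.h>
    #include <sockLib.h>
    #include <netinet/in.h>
    #include <string.h>

    struct SensorPacket
    {
        unsigned long sequence;   // incremented by the Arduino for each packet
        short analog[4];          // raw sensor readings (layout is hypothetical)
    };

    void sensorReceiveTask()
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(1130);              // hypothetical port
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(sock, (struct sockaddr *)&addr, sizeof(addr));

        SensorPacket packet;
        for (;;)
        {
            // Blocks until the next packet arrives -- event driven, no polling.
            if (recv(sock, (char *)&packet, sizeof(packet), 0) > 0)
            {
                // hand the readings off to whichever task needs them
            }
        }
    }

The Arduino side would just packetize its readings and send them to that port at whatever rate makes sense.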

HTH,

Mike

What about threads at the same priority level? No time slicing?

consider offloading the sensor inputs to a processor like the Arduino and communicate with the cRIO using sockets.

Aren’t the sensor inputs in the cRIO handled by the FPGA? Why would you need to offload them?


I’m eager to hear the answer to your questions too, Ether. I don’t know what the time quantum is for thread scheduling, but I’m pretty sure that equal-priority threads are time-sliced. I certainly think I’ve seen it using the NI RT Trace tool. Additionally, in the default code, all of the VIs are at the common or standard priority, and LV has a thread pool of about four threads to execute the parallel tasks. It sure seems like they slice.

As for the sensors – the FPGA is involved in all of them except for CAN, serial, and Ethernet. Its 40 MHz clock drives latching the values in, performs averaging or accumulation, and those values can then be read from FPGA registers using the NI-RIO driver’s peek function. Scaling with calibration constants and to engineering units is performed on the RT side, as is the validation that the module and channel inputs were correct.

In reality, the RIO platform allows you to move functionality from RT to FPGA or back, as you choose to use the resources. What I’m describing is how it was compiled for FRC. I’m sure there are processing-intensive tasks that make sense to move off, and it is always an interesting research project, but I’m not sure that it will improve the timing, measurement quality, or the throughput of the system in most cases. Because the digital module is the low-cost, high-channel-count version, the encoder sensing is somewhat limited – about 150k pulses per second. If you have other examples to the contrary, please post details.

Greg McKaskle

Excerpts from the VxWorks Application Programmer’s Guide, 6.6:

The VxWorks traditional scheduler provides priority-based preemptive scheduling as well as the option of programmatically initiating round-robin scheduling. The traditional scheduler may also be referred to as the original or native scheduler.

A priority-based preemptive scheduler preempts the CPU when a task has a higher priority than the current task running. Thus, the kernel ensures that the CPU is always allocated to the highest priority task that is ready to run.

The disadvantage of this scheduling policy is that, when multiple tasks of equal priority must share the processor, if a single task is never blocked, it can usurp the processor. Thus, other equal-priority tasks are never given a chance to run. Round-robin scheduling solves this problem.

Round-robin scheduling is enabled by calling kernelTimeSlice( ), which takes a parameter for a time slice, or interval.

Time-slicing in the same priority level is supported but disabled by default. To turn it on, use the kernelTimeSlice function. It takes one parameter, an integer number of ticks, which is the quantum. Passing a zero turns time slicing off.
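In code, enabling and disabling it looks something like this (the two-tick quantum is just an example value, not a recommendation):

    // Turn round-robin scheduling on for tasks at the same priority.
    #include <vxWorks.h>
    #include <kernelLib.h>

    void enableRoundRobin()
    {
        // The quantum is in system clock ticks (sysClkRateGet() ticks per second);
        // two ticks here is an arbitrary example value.
        kernelTimeSlice(2);
    }

    void disableRoundRobin()
    {
        kernelTimeSlice(0);   // a quantum of zero turns time slicing back off
    }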

HTH

Does the FRC C++ framework leave it in the default (disabled) mode?

If so, how do most C++ teams deal with this? Turn the time-slicing on, or assign different priorities to their threads, or some combination, or not use threads?

Good question - I’ll take a look. We turn round-robin on by default but, on the other hand, do not run many tasks at the same priority. We separate things out into multiple tasks and use the native messaging libraries (msgQLib.h) to get things done. For example, we form a message (and send it from our ‘ds’ task) that tells the ‘wheels’ task the desired velocity.
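A rough sketch of that pattern (with a made-up DriveCommand layout, queue depth, and function names, just for illustration) might look like this:

    // The 'ds' task forms a command and sends it; the 'wheels' task blocks
    // on the queue and acts when a command arrives.
    #include <vxWorks.h>
    #include <msgQLib.h>

    struct DriveCommand
    {
        float leftVelocity;    // desired left-side velocity
        float rightVelocity;   // desired right-side velocity
    };

    static MSG_Q_ID wheelsQueue;

    void createWheelsQueue()
    {
        // Holds up to 16 pending commands, delivered in FIFO order.
        wheelsQueue = msgQCreate(16, sizeof(DriveCommand), MSG_Q_FIFO);
    }

    // Called from the 'ds' task.
    void sendDriveCommand(float left, float right)
    {
        DriveCommand cmd = { left, right };
        msgQSend(wheelsQueue, (char *)&cmd, sizeof(cmd), NO_WAIT, MSG_PRI_NORMAL);
    }

    // Body of the 'wheels' task.
    void wheelsTask()
    {
        DriveCommand cmd;
        while (msgQReceive(wheelsQueue, (char *)&cmd, sizeof(cmd), WAIT_FOREVER) != ERROR)
        {
            // drive the motors at cmd.leftVelocity / cmd.rightVelocity (hypothetical)
        }
    }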

Note that even if we did not create a multi-tasking solution, there are many tasks already running on the robot, some created by the OS and many more created by the NI infrastructure. Type ‘i’ at a shell to see them.

HTH

How do you get a shell on the cRIO? (ssh, telnet, serial, some hackery involving the ftp to enable one of the former?)

We use the console provided by the debugger. There is also a netconsole option (enabled when you re-image the cRIO) and a serial console (enabled by a DIP switch on the cRIO). VxWorks supports telnet but I do not know if it is in the kernel built/provided by WPI.

HTH