View Full Version : Compact rio not being able to use C


kenethare
12-08-2008, 18:34
I may sound like an idiot here, but a while back a person from NI came to my team and told us that the CompactRIO can't use C programming. I was not at that meeting and want to know for sure if this is true. Can you guys give me any sources other than FIRST and NI? They have so far been inconclusive. It would be great if someone had a link to a place where this was announced, so I can show my team.
Thank You.

Tom Line
12-08-2008, 18:38
I suggest you look around. It has been clearly stated a number of times that the CompactRIO WILL be supporting C, and that WPI will be handling the C library coding to keep it on equal footing with what NI releases for LabVIEW.

kenethare
12-08-2008, 18:45
Thank you. I thought so; I just need a way to prove it to my team, because this guy was a pretty smooth talker and has them all convinced.
This is good news for me. Thanks for the fast reply.

jtdowney
12-08-2008, 18:48
Thank you. I thought so; I just need a way to prove it to my team, because this guy was a pretty smooth talker and has them all convinced.

On the WPI FIRST page regarding the new control system (http://first.wpi.edu/FRC/csoverview.html) which is linked from the FIRST website (http://www.usfirst.org/community/frc/content.aspx?id=8976) it says:

The cRIO from National Instruments can be programmed in LabVIEW or C.

Joe Ross
12-08-2008, 19:02
National Instruments has not yet released anything for C with the Compact Rio to the general public. We are one of the guinea pigs.

It's very likely the person you spoke to is not aware of the development of the FIRST controller and the C interface.

Here is the NI press release, which mentions C programming: http://digital.ni.com/worldwide/bwcontent.nsf/web/all/F70C10117567BBF18625742B00737DF5

BJT
12-08-2008, 22:01
Does anyone know if we can use easyC with the new controller? Forgive me if that's a stupid question; I'm no programmer. The last thing I programmed was an Apple II using BASIC.

Richard McClellan
12-08-2008, 22:10
Does anyone know if we can use easyC with the new controller? Forgive me if that's a stupid question; I'm no programmer. The last thing I programmed was an Apple II using BASIC.

No, currently there is no plan for easyC or any program like it to be available for the new controller. However, if you're used to easyC, LabVIEW might not be too difficult to pick up, since LabVIEW is also a graphical programming environment where you place blocks on a block diagram and wire them together.

EHaskins
12-08-2008, 22:10
Does anyone know if we can use easyC with the new controller? Forgive me if that's a stupid question; I'm no programmer. The last thing I programmed was an Apple II using BASIC.

I haven't heard anything about EasyC being adapted to be used with the new controller, but I expect someone to release a comparable IDE for the new system.

BJT
12-08-2008, 22:18
Thanks.

whytheheckme
13-08-2008, 14:17
Intelitek hasn't given an official statement regarding easyC; (imho) it sounds like they may be working on it, or they may not. So I wouldn't rule it out just yet.

My $.02
Jacob

slavik262
15-08-2008, 00:32
From my understanding, the cRIO would be programmable in C++. Is this still the case? I remember quite a buzz about finally being able to use C++ as opposed to C.

Lowfategg
15-08-2008, 01:11
From my understanding, the cRIO would be programmable in C++. Is this still the case? I remember quite a buzz about finally being able to use C++ as opposed to C.

I heard that too. Or was that about using C++ inside of LabVIEW?

Pat Fairbank
15-08-2008, 04:09
From my understanding, the cRIO would be programmable in C++. Is this still the case? I remember quite a buzz about finally being able to use C++ as opposed to C.

From what I remember from the mentor presentation in Atlanta, the compiler that will be used is GCC (with an IDE provided by Wind River). So since C++ is mostly a superset of C, if the base code is provided in C it will compile in C++ as well. (I suppose this implies that you can also write a module or two in Fortran, Objective-C, Ada, etc. if you really want.)

slavik262
15-08-2008, 14:21
I realize the base code will compile in C++ if you can compile in C. However, I'm looking forward to using the features presented in C++ that aren't in C such as being able to use Object-Oriented Programming techniques and templates. My question is: will we be able to write/compile code in C++? I thought that this was the case.

Michael Hill
15-08-2008, 21:35
I haven't done much work with Compact RIO, but can you not use Code Interface Nodes with them?

Pat Fairbank
16-08-2008, 01:27
I realize the base code will compile in C++ if you can compile in C. However, I'm looking forward to using the features presented in C++ that aren't in C such as being able to use Object-Oriented Programming techniques and templates. My question is: will we be able to write/compile code in C++? I thought that this was the case.
If the base code will compile and run in C++, I don't see why you wouldn't be able to use OOP and templates; it all just crunches down to machine code in the end.

If I were to speculate, I would guess that the base functionality provided by FIRST will all be in C, but teams will be free to write their extraneous code in C++ and just interface it with the C base code.

Greg McKaskle
16-08-2008, 21:06
I haven't done much work with Compact RIO, but can you not use Code Interface Nodes with them?

CINs are ancient, and new platforms like the cRIO do support the Call Library Function node, which will call VxWorks .out files.

Greg McKaskle

Greg McKaskle
16-08-2008, 21:15
I realize the base code will compile in C++ if you can compile in C. However, I'm looking forward to using the features presented in C++ that aren't in C such as being able to use Object-Oriented Programming techniques and templates. My question is: will we be able to write/compile code in C++? I thought that this was the case.

Some of the cutting-edge C++ stuff may be a bit rough, but the LabVIEW code base contains plenty of C++, and it runs there. As usual, bringing in arbitrary C++ code can run into rough spots when deciding whose STL implementation to use, doing lots of static initializer stuff, etc. But the short answer is that it is a C and C++ platform via Wind River tools, and a LabVIEW platform via NI tools.

Greg McKaskle

Michael Hill
16-08-2008, 23:21
There's nothing wrong with the CINs being ancient. They still work...and allow you to access compiled C and C++ code. I haven't used a Call Library Node. Most of the code I use at work was already written for use with CINs.

Greg McKaskle
17-08-2008, 11:21
There's nothing wrong with the CINs being ancient. They still work...and allow you to access compiled C and C++ code. I haven't used a Call Library Node. Most of the code I use at work was already written for use with CINs.

Correct. There isn't anything wrong with being ancient, but being ancient is one of the reasons they aren't supported on new platforms. To give a bit of history, CINs were invented when LV was still on the Mac, and there wasn't a good choice for dynamically called binary code -- no DLL-like entity. Since they were being constructed, a number of other bells and whistles were added to them.

Around six years later, when CodeFragments on the Mac and DLLs on Windows existed, the Call Library node was added. In some ways it is superior to CINs; in other ways it was a bit lacking, since it was defined industry-wide and not inside the LV team. In the fifteen years or so that have passed since then, both have been maintained and reworked. In the next release, NI will no longer ship any VIs containing CINs. Over the years, as code is touched for some other reason, it has been moved to Call Library nodes, but we still support CINs because customers have written them and don't necessarily want to spend the time taking their C code to a new library format. If it works, why change it?

On the other hand, new platforms like the cRIO can't use existing CINs. They require the C code to be recompiled for a new OS anyway. This will mean reworking the makefile and the header includes, and dealing with the little nitpicky C stuff that was marginally compatible. So while doing that, we think it is better to move from CIN libs to .outs, .DLLs, .frameworks, .libs, or whatever the platform preference is for shared binary code.

And that is the state the cRIO is in. It supports external binary code written by you, by the OS vendor, by third parties, etc. It does so in a standard way rather than the twenty-some-year-old proprietary way that LV invented.

That was why I called them ancient, and that is why the cRIO doesn't support CINs. Ancient doesn't necessarily mean bad, but ancient things do fall out of use; otherwise I'd be tapping this out in Morse code or writing it in Latin.

Greg McKaskle

ttldomination
17-08-2008, 12:18
I attended the seminar that they had at the World Championship, and the people in the seminar said that we would have a chance to program in either C, C++, or LabVIEW. So there are 3 programming options.

slavik262
20-08-2008, 23:14
Some of the cutting-edge C++ stuff may be a bit rough, but the LabVIEW code base contains plenty of C++, and it runs there. As usual, bringing in arbitrary C++ code can run into rough spots when deciding whose STL implementation to use, doing lots of static initializer stuff, etc. But the short answer is that it is a C and C++ platform via Wind River tools, and a LabVIEW platform via NI tools.

Greg McKaskle

I don't use the STL all that much anyway. I prefer to write my own data structures and such; that way I can fix my own compatibility problems. I'm just excited to be able to play with dynamic memory allocation and OOP. :D

BradAMiller
28-08-2008, 22:54
If the base code will compile and run in C++, I don't see why you wouldn't be able to use OOP and templates; it all just crunches down to machine code in the end.

If I were to speculate, I would guess that the base functionality provided by FIRST will all be in C, but teams will be free to write their extraneous code in C++ and just interface it with the C base code.

Actually it's the opposite. To make C++ work really well, the libraries have been developed as a series of classes that implement all the sensors, motors, etc. Then the idea is to add C wrappers around the class methods (functions). We're currently trying to decide if the wrappers are really necessary, since one could write the entire robot program using only C code except when calling the methods for those sensors and motors. And even that looks a lot like C. The upside of not doing the wrappers is that it allows the C++ code to be much more flexible and easy to use.

Any thoughts?

Brad Miller
WPI Robotics Resource Center

AustinSchuh
29-08-2008, 00:21
We're currently trying to decide if the wrappers are really necessary since one could write the entire robot program using only C code except when calling the methods for those sensors and motors. And even that looks a lot like C.

My reaction is that if you would be wrapping C around C++, why even bother? I am sure teams can adapt, or write their own wrappers around what they use if they feel so inclined. In short, I think wrappers aren't necessary and the time spent writing them could be better used in other places, like documentation.

Just wondering, but if you did write wrappers, would you then still use the C++ compiler, or would you use a C compiler instead? How would wrappers work in this case? I typically see wrappers used the other way, wrapping C code up to use in C++, which is why I am curious.

Pat Fairbank
29-08-2008, 10:44
Actually it's the opposite. To make C++ work really well, the libraries have been developed as a series of classes that implement all the sensors, motors, etc.
Even better, then. Will these classes expose virtual functions so that teams can subclass them to extend their functionality?
We're currently trying to decide if the wrappers are really necessary since one could write the entire robot program using only C code except when calling the methods for those sensors and motors. And even that looks a lot like C. The upside for not doing the wrappers is that it allows the C++ code to be much more flexible and easy to use.
I don't really see the point in wrapping the method calls in C. Since writing code for the new controller is going to involve learning a new interface for I/O in any case, there's no argument to be made for preserving familiarity. Since C and C++ code will be compiled automatically by the same compiler, there won't be any build issues for those who choose to program in C.

I guess what it boils down to is whether there are teams who will choose to program exclusively in C and who would be confused by the C++ syntax. I can't answer for others, but my team may or may not decide to use C++, depending on how comfortable our students are with C. In either case, we'll already be taking advantage of having a C++ compiler to eliminate some of the shortcomings of C, so calling methods on a class for motors and sensors won't bother us.

BradAMiller
01-09-2008, 21:39
The question was, as you noticed, one of familiarity with C++. One could write the entire program using C syntax except for creating and invoking methods on objects. The syntax isn't a huge stretch from what a C programmer would know, but we're trying to be very sensitive about forcing people to learn a whole bunch of new stuff in this transition year. I was trying to get a feel if the community would be up in arms over being forced to use those few pieces of C++ syntax.

Brad Miller

slavik262
02-09-2008, 18:23
The question was, as you noticed, one of familiarity with C++. One could write the entire program using C syntax except for creating and invoking methods on objects. The syntax isn't a huge stretch from what a C programmer would know, but we're trying to be very sensitive about forcing people to learn a whole bunch of new stuff in this transition year. I was trying to get a feel if the community would be up in arms over being forced to use those few pieces of C++ syntax.

Brad Miller

Not at all. I personally think that anybody fairly well-versed in C could pick up the C++ syntax quickly.

Nibbles
03-09-2008, 21:25
NI has never supported anything besides LabVIEW on its hardware, until now.

Hm, I can't find my source for where I saw this; I think it is more accurate -- better to be conservative -- to say that NI has not extensively supported this type of programming on this type of hardware before FIRST, at the very least. I was aware there were articles published on calling shared/dynamic object code from LabVIEW (http://decibel.ni.com/content/docs/DOC-1690), though I am not sure to what extent it is used in reality. In addition, NI is merely supporting the C/C++ effort; it is WPI (http://first.wpi.edu/) doing the development of the library.

In addition, I completely forgot about interrupts when writing this post, and how they will be handled. (It shouldn't matter as far as C vs. C++ is concerned; however, LabVIEW changes things because it supports interrupts seamlessly and natively, so LabVIEW could be appealing for teams who want to take advantage of those features and not spend hours in front of a C or C++ debugger, which probably won't even work well in a real-time environment.)

On the topic of C versus C++, I think this post by Linus Torvalds to the linux-kernel mailing list is entirely relevant: http://kerneltrap.org/node/2067 His full post is about halfway down the page.

He talks about using C++ to write operating system kernels (FRC programming is for embedded systems, roughly equivalent), and how it is a "BLOODY STUPID IDEA". That isn't the point I'm making, though, and I am not supporting one over the other (frankly I don't like the NI-RIO at all, even if it does improve on some things, like floating-point math); rather, I point out the fact that they are more or less the same, the difference being that C++ has made things much easier (or, as Linus points out, worse for some areas of programming where even moderately high-level programming is bad, e.g. operating system kernels).

C has most of the stuff C++ does; most coders just don't realize it. Structs are essentially classes, and can (with work) be inherited, and polymorphism is (roughly) supported with function pointers. C has functions that operate on structures (function(object, arg)); C++ has methods that operate on structures (object.function(arg)).

C++ mainly adds member visibility, operator overloading, and function overloading (so you can define multiple functions with the same name but different arguments).

It is hard but entirely possible to create object code that has both a struct-and-function interface for C and a (formal) OO interface for C++.

There is no reason not to use the OO interface from C++: it is simpler, memory management becomes easier, and error handling with exceptions improves code cleanliness. Unfortunately, if we go all out on C++ (as opposed to using OO and exceptions sparingly), it makes it difficult to write plain old C if you are intent on doing your own memory management and error handling.

We aren't working with kernels or doing data mining or multi-threading, and we are working on a UNIX-like OS (thankfully), so C++ imo is perfectly adequate. I am more worried about people not being able to deal with malloc and free, say, in C than I am about the OO features of C++. OO is pretty simple to grasp if you are only using it.

Greg McKaskle
03-09-2008, 22:22
I really don't want to get sucked into a language discussion, but let me throw in a clarification.

NI has never supported anything besides LabVIEW on its hardware, until now.

The cRIO is a relatively new HW platform for NI, and to this point, it has indeed been exclusively a LV platform. But, NI sells a number of other HW platforms such as PXI, as well as supporting third party HW from PC vendors, embedded platforms, even handheld devices.

Greg McKaskle

Mike Mahar
04-09-2008, 10:42
Actually it's the opposite. To make C++ work really well, the libraries have been developed as a series of classes that implement all the sensors, motors, etc. Then the idea is to add C wrappers around the class methods (functions). We're currently trying to decide if the wrappers are really necessary, since one could write the entire robot program using only C code except when calling the methods for those sensors and motors. And even that looks a lot like C. The upside of not doing the wrappers is that it allows the C++ code to be much more flexible and easy to use.

Any thoughts?

Brad Miller
WPI Robotics Resource Center

I program in C using a C++ compiler every day. Sometimes we use a C++ feature, and we're doing so more every day, but often our code looks just like a standard C program. I wouldn't bother putting C wrappers around your classes. If the only issue is calling the classes' member functions, almost anyone can learn how to do that, provided it is described in the WPILib documentation.

How close is the new library to existing WPILib? I want to get my team up and running as soon as possible and I was wondering if it is worth it to learn the old WPILib on the old controller?

BradAMiller
04-09-2008, 13:06
How close is the new library to existing WPILib? I want to get my team up and running as soon as possible and I was wondering if it is worth it to learn the old WPILib on the old controller?

The new WPILib represents the same level of abstraction as the old library. By that I mean that there are methods to run motors, get sensor values, and interact with the driver station. The philosophy has been to stop short of providing code to do the "trick" that is required in any competition, but to make sure the pieces are there so that it's fairly straightforward for a team to do it. For example, there was never any code to drive a robot to a green target, but you could always get the position of the target, and you could always drive the robot. The complete program to follow someone walking around with a green light was less than 20 lines of code.

In other words, using the old WPILib will give you and your team a feel for how to write robot programs with the new library. But the new library will be much more capable and the interfaces much richer than ever before.

And besides all that, the source code will be available along with extensive documentation to make the transition as smooth as possible. We are very sensitive to the fact that a lot of new stuff is getting rolled out at the same time and FIRST is trying to make this as easy as possible.

Brad

slavik262
07-09-2008, 04:04
There is no reason not to use the OO interface from C++: it is simpler, memory management becomes easier, and error handling with exceptions improves code cleanliness. Unfortunately, if we go all out on C++ (as opposed to using OO and exceptions sparingly), it makes it difficult to write plain old C if you are intent on doing your own memory management and error handling.

We aren't working with kernels or doing data mining or multi-threading, and we are working on a UNIX-like OS (thankfully), so C++ imo is perfectly adequate. I am more worried about people not being able to deal with malloc and free, say, in C than I am about the OO features of C++. OO is pretty simple to grasp if you are only using it.

While I am slightly afraid of what some people will do with malloc and free, I'm looking forward to them. I've already combined OO with some memory management and have some data structure classes ready to implement the second I get my hands on this. And I'm not quite sure what you're saying in regards to OO and exception handling. Why use them sparingly? Once you become versed in C++ they are both extremely useful.

Alan Anderson
07-09-2008, 13:21
While I am slightly afraid of what some people will do with malloc and free, I'm looking forward to them. I've already combined OO with some memory management and have some data structure classes ready to implement the second I get my hands on this.

I'm curious to know how you intend to use dynamic allocation in the context of an FRC robot. Perhaps I'm too ingrained in the "embedded system" paradigm to see a good reason not to account for all the data in advance.

And I'm not quite sure what you're saying in regards to OO and exception handling. Why use them sparingly? Once you become versed in C++ they are both extremely useful.

It seems to me that overuse of a language's exception-handling features is a lazy habit. Again, when working with embedded systems, one tends to learn to keep very tight control over unusual situations. My experience is that compiler-level exceptions are not appropriate if you take care to ensure the proper input conditions and to handle out-of-range values explicitly.

Pat Fairbank
07-09-2008, 14:12
I'm not a big fan of C++ exceptions - they're hard to use properly, and not very safe. In fact, their use is banned by the two companies that I've worked for that use C++ (Sony, Google (for these reasons (http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Exceptions))).

On an embedded system, what's going to happen if a thrown exception is not handled properly? The program is going to terminate, which leaves you with nothing running on your robot controller (and this would be very hard to debug).

Nibbles
07-09-2008, 20:09
Honestly I can't think of why you would want to use exceptions (or any error handling at all, for that matter) on the robot, because it doesn't deal with literal user input. Errors (division by zero) mean something is wrong in your logic (a case for using assertions), and bad/conflicting input values (both upper and lower limit switches triggered, say) should be handled when parsed and dealt with then, not inside a function that detects them.

In C++ programs, I only use them for passing error messages up through a large stack of functions, where I know they will be caught. (As for the overhead, it can't be any worse than using a scripting language, or a custom-built method of passing down structs of data, but maybe I am just too ignorant.) I go by the rule of thumb: only add exceptions that you will be catching yourself (don't put them in libraries for other people to use).

slavik262
07-09-2008, 20:39
The only reason I would use exception handling is for debugging purposes. I really have no plans on using it in the competition builds. I agree with all people above that it would become a bit of a nightmare to properly handle all exceptions. I am a strong believer in just writing the code well so that exceptions aren't thrown in the first place.

BradAMiller
08-09-2008, 10:40
OK, there's a bunch of stuff going on here, let me see if I can give the rationale for some of the design decisions with a disclaimer at the end.

First, something about our new environment. We have about 500x more memory and probably 100x more processor speed than the PIC we're used to using. The past years' high-speed sensor-interrupt logic that required precise coding, hand optimization and lots of bugs has been replaced with dedicated hardware (FPGA). When the library wants the number of ticks on a 1000 pulse/revolution optical encoder, it just asks the FPGA for the value. Another example is A/D sampling, which used to be done with tight loops waiting for the conversions to finish. Now sampling across 16 channels is done in hardware.

We chose C++ because we felt it represents a better level of abstraction for robot programs. C++ (when used properly) also encourages a level of software reuse that is not as easy or obvious in C. At all levels in the library, we have attempted to design it for maximum extensibility.

There are classes that support all the sensors, speed controllers, driver station, etc. that will be in the kit of parts. In addition, most of the commonly used sensors we could find that are not traditionally in the kit are also supported, like ultrasonic rangefinders. Other examples are the several robot classes that provide starting points for teams to implement their own robot code. These classes have methods that are called as the program transitions through the various phases of the match. One class looks like the old easyC/WPILib model, with Autonomous and OperatorControl functions that get filled in and called at the right time. Another is closer to the old IFI default, where user-supplied methods are called continuously, but with much finer control. And the base class for all of these is available for teams wanting to implement their own versions.

Even with the class library, we anticipate that teams will have custom hardware or other devices that we haven't considered. For them we have implemented a generalized set of hardware and software to make this easy. For example, there are general-purpose counters that count any input, either in the up direction, down direction, or both (with two inputs). They can measure the number of pulses, the width of the pulses, and a number of other parameters. The counters can also count the number of times an analog signal reaches inside or goes outside of a set of voltage limits. And all of this without requiring any of that high-speed interrupt processing that's been so troublesome in the past. And this is just the counters. There are many more generalized features implemented in the hardware and software.

We also have interrupt processing available where interrupts are routed to functions in your code. They are dispatched at task level and not as kernel interrupt handlers. This is to help reduce many of the real-time bugs that have been at the root of so many issues in our programs in the past. We believe this works because of the extensive FPGA hardware support.

We have chosen not to use the C++ exception handling mechanism, although it is available to teams for their own programs. Our reasoning has been that uncaught exceptions will unwind the entire call stack and cause the whole robot program to quit. That didn't seem like a good idea in a finals match at the Championship, when some bad value could cause the entire robot to stop.

The objects that represent each of the sensors are dynamically allocated; we have no way of knowing how many encoders, motors, or other things a team will put on a robot. For the hardware, an internal reservation system is used so that people don't accidentally reuse the same ports for different purposes (although there is a way around it if that is what you meant to do).

I can't say that our library represents the only "right" way to implement FRC robot programs. There are a lot of smart people on teams with lots of experience doing robot programming. We welcome their input, in fact we expect their input to help make this better as a community effort. To this end all of the source code for the library will be published on a server. We are in the process of setting up a mechanism where teams can contribute back to the library. And we are hoping to set up a repository for teams to share their own work. This is too big for a few people to have exclusive control, we want this software to be developed as a true open source project like Linux or Apache.

Alan Anderson
08-09-2008, 13:12
Apologies for the length of this post. I have trimmed it significantly from its original form, and it is still much larger than I would like.

...The past years' high-speed sensor-interrupt logic that required precise coding, hand optimization and lots of bugs has been replaced with dedicated hardware (FPGA).

I'm not certain what you intended to say here. Doesn't any hardware control task require "precise coding"? There wasn't any "hand optimization" in the interrupt code I wrote, nor was such optimization even an option given the C18 compiler most teams were using. I'm pretty sure you didn't mean that "lots of bugs" are a necessary part of interrupt logic. I'm also wondering how the PIC interrupt logic doesn't count as "dedicated hardware" just because it's on-chip.

In short...huh?

When the library wants the number of ticks on a 1000 pulse/revolution optical encoder it just asks the FPGA for the value. Another example is A/D sampling that used to be done with tight loops waiting for the conversions to finish. Now sampling across 16 channels is done in hardware.

You're apparently mixing a couple of different things here, and neither of them seems relevant. Using Kevin Watson's encoder support library for the IFI control system, when the program wants the number of ticks it just reads the value. The default code indeed does busy-wait polling for A/D conversions, but lots of teams replaced that library with one that uses interrupts -- in short, sampling across as many channels as desired "in hardware".

Is your point that the FPGA makes it possible to support much higher rates of encoder ticks or A/D samples? A faster CPU would be able to do that itself, without FPGA assistance, using interrupts. So speed alone doesn't seem like a compelling argument for eschewing interrupts.

...And all of this without requiring any of that high speed interrupt processing that's been so troublesome in the past. And this is just the counters...This is to help reduce many of the real-time bugs that have been at the root of so many issues in our programs in the past.

It sounds like you have had a very bad experience with interrupts. That's probably not just because they were interrupts, and more likely the result of badly designed or poorly implemented service routines.

I can honestly say that I have never seen any problems in FRC robot code that I have traced to a "real-time bug". The one I thought I saw (in someone else's code) was eventually determined to be an EEPROM access contention issue, having nothing to do with interrupts, and was experimentally solved by adding a "real-time" feature (i.e. a semaphore) to the program. On the contrary, the big interrupt-related issue I have seen many teams run across is due to the IFI default library's not using "real-time" code for its PWM generator.

Just how are we going to be able to implement PID control -- or even simple speedometers -- without using "real-time" features anyway? That's a question I asked some time ago when it was made clear that the cRIO doesn't have interrupts, and it was never satisfactorily answered.

The objects that represent each of the sensors are dynamically allocated. We have no way of knowing how many encoders, motors, or other things a team will put on a robot.

The reasoning behind this statement eludes me. What am I missing? How is the amount of hardware relevant? Does dynamic allocation (i.e. at run time) address it any better than static allocation (i.e. at compile time)?

BradAMiller
09-09-2008, 00:04
Wow, tough crowd!

Alan - if I suggested that interrupts were evil, that wasn't my point and I'm sorry for not making that clear. And I certainly didn't mean to suggest that lots of bugs are a necessary part of interrupt logic. My point was that having dedicated hardware is a good thing: it makes programming easier and offloads work from the CPU running the robot program. But let me try to answer some of your questions directly.

You mentioned that you never received a satisfactory answer on the questions of "speedometers" and PID time-based code.

By speedometers, I'm guessing you were asking about measuring the period of signals from encoders on rotating shafts for measuring motor speed directly, please correct me if that's not what you meant. There will be 8 hardware quadrature encoder inputs and 8 hardware counter inputs. Each of these can track the period as well as the count. The period gives an instantaneous representation of speed. Any digital I/O port can be routed to any of these encoders/counters.
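The period-to-speed conversion Brad describes can be sketched in a few lines. This is an illustrative helper, not the actual WPILib API; the function name and parameters are assumptions:

```cpp
// Convert a measured time between encoder pulses into shaft speed.
// period_s:    seconds between successive pulses (from the counter hardware)
// ticksPerRev: pulses per shaft revolution (e.g. 1000 for an optical encoder)
// Returns revolutions per minute; 0 if the inputs are invalid.
double PeriodToRpm(double period_s, int ticksPerRev)
{
    if (period_s <= 0.0 || ticksPerRev <= 0) return 0.0;
    double revsPerSecond = 1.0 / (period_s * ticksPerRev);
    return revsPerSecond * 60.0;
}
```

Because the hardware tracks the period of the most recent pulse, this gives an instantaneous speed reading rather than one averaged over a counting window.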

There are many ways of solving your issue with time-based algorithms like PID integration. One is to use the Notification class that will periodically call some function in your program. You can, for example, request notification every 20ms and do the PID calculations there. Even though it is not a true hardware level interrupt routine, the function can get the precise time and easily apply the appropriate weight as it's integrating the error value. There are many additional mechanisms that could be used for solving the same problem.
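The time-weighted integration Brad describes can be sketched as follows. The struct and method names are illustrative, not the actual WPILib API; the point is that each error sample is weighted by the measured elapsed time rather than an assumed fixed 20ms:

```cpp
// Accumulate the integral term of a PID loop using the actual elapsed
// time between notifications, so jitter in the callback period does not
// skew the result.
struct PidIntegrator {
    double integral = 0.0;
    // error: the current error sample; dt_s: measured seconds since the
    // previous call. Returns the updated integral.
    double Update(double error, double dt_s) {
        integral += error * dt_s;  // weight by the real elapsed time
        return integral;
    }
};
```

A periodic notification callback would read the current time, compute dt_s from the previous timestamp, and call Update with it.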

You asked what I meant by "dedicated hardware" and asserted that it wasn't really necessary and ultimately could just be replaced by faster CPUs.

I believe that having hardware to supplement the CPU is a good thing. Like video cards in PCs: it doesn't matter how fast CPUs get, they'll never be as fast as the dedicated, pipelined graphics engines we see on modern video cards.

The same is true on the robot. A few years back there was a gear tooth sensor that generated a roughly 40 microsecond pulse for one direction and a roughly 70 microsecond pulse for the other. If the interrupt for the gear tooth sensor occurred while a higher priority master processor interrupt was being handled (delaying the start of the gear tooth sensor interrupt handler), there would be no way of measuring the pulse width, and we couldn't determine the direction. That's an example of where using the CPU just doesn't work. With dedicated hardware we can easily tell, and the program doesn't have to deal with those complexities.
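The direction discrimination Brad describes amounts to classifying a pulse by its measured width. The thresholds, names, and enum below are illustrative; the real point is that the measurement must be reliable enough to tell the two widths apart:

```cpp
enum class Direction { Forward, Reverse, Unknown };

// Classify a gear tooth sensor pulse by its measured width: roughly 40 us
// indicates one direction and roughly 70 us the other. Thresholds here
// are hypothetical midpoints chosen for illustration.
Direction ClassifyPulse(double width_us)
{
    if (width_us > 30.0 && width_us < 55.0)  return Direction::Forward;
    if (width_us >= 55.0 && width_us < 85.0) return Direction::Reverse;
    return Direction::Unknown;  // noise, or a pulse we failed to time
}
```

If interrupt latency delays the start of timing, the measured width is wrong and the pulse lands in the wrong bucket, which is exactly why hardware timing helps here.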

You mentioned Kevin Watson's code with respect to encoders and A/D converters. Kevin was able to accomplish amazing things with PIC chips. He really knew that hardware better than most people I've met and all FIRST teams recognized that.

In his encoder FAQ he talks about phasing errors, caused by limited software sampling and interrupt rates, that give false interpretations of encoder direction. This is not a problem when there is dedicated hardware looking at each encoder channel; it only becomes an issue when the CPU is overtaxed by many devices competing for its resources, or when encoder rates are too high.
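The phasing error can be made concrete with a quadrature decoding sketch. This is an illustrative decoder, not WPILib code: the two channels form a Gray-code sequence, and if sampling is too slow to catch every transition, both channels appear to change at once and the step cannot be attributed to either direction.

```cpp
// Decode one quadrature step from the previous and current (A,B) channel
// states. Returns +1 or -1 for a valid single-step transition, 0 for no
// change or an invalid jump (the "phasing error" case when a sample is
// missed and both channels appear to change simultaneously).
int QuadratureStep(int prevA, int prevB, int currA, int currB)
{
    int prev = (prevA << 1) | prevB;
    int curr = (currA << 1) | currB;
    if (prev == curr) return 0;
    // Gray-code sequence 00 -> 01 -> 11 -> 10 -> 00 is one direction.
    static const int next[4]   = {1, 3, 0, 2};  // forward successor of each state
    static const int prevOf[4] = {2, 0, 3, 1};  // forward predecessor of each state
    if (curr == next[prev])   return +1;
    if (curr == prevOf[prev]) return -1;
    return 0;  // two-bit jump: a missed sample, direction is ambiguous
}
```

Hardware watching each channel continuously never misses a transition, so the ambiguous case simply doesn't arise.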

A/D converters can be sampled with software in interrupt routines, and as you pointed out, Kevin's interrupt-based example proved that. But we now can achieve about 500k samples/second aggregate rate on each of two modules without the CPU getting involved at all. And that's with hardware oversampling, averaging, and integration for gyros. I'm not saying this can't be done without dedicated hardware, but having it sure seems better to me than not having it.

We hope to continue to get input from the FIRST community, make improvements and address concerns as they come up. Please keep asking questions, but please don't shoot the messenger. I for one would prefer to see constructive questions targeted at the hardware/software and not the developers.

rwood359
09-09-2008, 01:58
We hope to continue to get input from the FIRST community, make improvements and address concerns as they come up. Please keep asking questions, but please don't shoot the messenger. I for one would prefer to see constructive questions targeted at the hardware/software and not the developers.

It's difficult to form constructive questions when there are only epic poems and myths about how the new hardware and software are going to work. There is more detail in your last post than in all preceding posts and announcements that I have seen.

Can you give us an ETA for the WPILib manual?

Will it be released to all teams at once or only to the beta teams and later to the rest of us?

edit - added questions:
Can you tell us which Wind River Tools will be available?
Will the source code be released with the manual?
Will there be details such as

But we now can achieve about 500k samples/second aggregate rate on each of two modules without the CPU getting involved at all. And that's with hardware oversampling and averaging and integration for gyros.
or will there be calling sequences to initialize and read the gyros?

Thank you,
Randy Wood

BradAMiller
09-09-2008, 07:57
It's difficult to form constructive questions when there are only epic poems and myths about how the new hardware and software are going to work. There is more detail in your last post than in all preceding posts and announcements that I have seen.

My high school English teachers would never believe someone would accuse me of writing epic poetry.


Can you give us an ETA for the WPILib manual?

Will it be released to all teams at once or only to the beta teams and later to the rest of us?
The documentation and source code will be released to the beta teams next week along with the beta development tools. The decision on the rest of the public release is up to FIRST.


edit - added questions:
Can you tell us which Wind River Tools will be available?


The software will include Workbench 3.0, based on Eclipse 3.3. The FIRST program builds use the GNU C/C++ compiler. Full remote debugging (breakpoints, watchpoints, single step, etc.) is included; some of the other profiling tools from the full version of Workbench are not there. You can debug the program remotely over the Wi-Fi or Ethernet connection.


Will there be details such as

or will there be calling sequences to initialize and read the gyros?


I'm not sure exactly what you're asking, but there is a class for the gyro. Inside the class the A/D channel hardware is set up to do averaging and oversampling and the output from the A/D conversions is routed to a hardware accumulator that does the integration (summing) to compute heading from the gyro rate output. The software just reads/scales the current value whenever the program asks for it. The hardware has two of those accumulators so you can have two gyros or other devices requiring hardware summed A/D values.
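The accumulator behavior Brad describes, summing rate samples so heading can be read on demand, can be sketched in software. This mimics what the hardware does; the struct, names, and sample timing are illustrative assumptions, not the actual Gyro class:

```cpp
// Integrate (sum) gyro rate samples into a heading, mimicking the
// hardware accumulator described above. A real gyro class would also
// subtract a measured zero-rate offset and apply a sensitivity scale.
struct GyroAccumulator {
    double heading_deg = 0.0;
    // rate_dps: angular rate in degrees/second; dt_s: sample interval
    void AddSample(double rate_dps, double dt_s) {
        heading_deg += rate_dps * dt_s;
    }
    double HeadingDegrees() const { return heading_deg; }
};
```

The advantage of doing this in the FPGA is that no rate sample is ever missed, so the integration stays accurate no matter what the CPU is doing.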

Are those the details you were looking for?


Thank you,
Randy Wood
You're welcome - good questions!

Alan Anderson
09-09-2008, 08:37
I have many more comments, questions, and quizzical requests for clarification. I will defer them all until after I've received the opportunity to work with the platform I'm asking about.

I for one would prefer to see constructive questions targeted at the hardware/software and not the developers.

As would I. Lacking the hardware/software, however, I have little choice but to aim my questions at the people whom I assume ought to be able to answer.

The Lucas
09-09-2008, 09:00
There are many ways of solving your issue with time-based algorithms like PID integration. One is to use the Notification class that will periodically call some function in your program. You can, for example, request notification every 20ms and do the PID calculations there. Even though it is not a true hardware level interrupt routine, the function can get the precise time and easily apply the appropriate weight as it's integrating the error value. There are many additional mechanisms that could be used for solving the same problem.

This touches on my primary curiosity:
How will the library implement multitasking and scheduling?
I think this is the most significant change in the new system and the real challenge in an RTOS. There is a conflict between ease of programming (learning curve) and advanced functionality. I can see a need to abstract this issue away for the basic users who do not utilize the CPU enough for it to be a difficult issue. However, the user with heavy vision processing (multi-target tracking) and multiple PID loops will need low-level access to the scheduling features to ensure deterministic performance.

So in the library, how can the programmer declare an object/method/function as a separate task (process)?

How do you set priority among these tasks? Is it an object attribute?

What about inter process communication? A message class?

Also, since Day 1 I have been curious about how the disable/autonomous states will work across the control system as a whole. Specifically, can the driver station communicate with the cRIO in disable mode (as so many teams use to select auto modes)?

Thanks for the info
Brian

rwood359
09-09-2008, 13:27
The documentation and source code will be released to the beta teams next week along with the beta development tools. The decision on the rest of the public release is up to FIRST.
Please lobby for the have-nots. Some dialog can start as soon as we have the documentation and tools. Getting some things cleared up now may reduce your work load when everyone gets the full system.

The software will include Workbench 3.0 based on Eclipse 3.3. The FIRST program builds use the gnu C/C++ compiler. Full remote debugging (breakpoints, watchpoints, single step, etc.) is included, some of the other profiling tools from the full version of WorkBench are not there. You can debug the program remotely over the Wifi or ethernet connection.
This sounds like fun! I have some concerns about breakpoints and motors. Are breakpoints tied into the FPGA to pause PWMs?


I'm not sure exactly what you're asking, but there is a class for the gyro. Inside the class the A/D channel hardware is set up to do averaging and oversampling and the output from the A/D conversions is routed to a hardware accumulator that does the integration (summing) to compute heading from the gyro rate output. The software just reads/scales the current value whenever the program asks for it. The hardware has two of those accumulators so you can have two gyros or other devices requiring hardware summed A/D values.

Are those the details you were looking for?
Thanks for the information on the gyros. My real question is about the level of documentation of the FPGA that sits between the code and the devices. One of your earlier posts in this thread described both specific device interfaces and generalized tools to build an interface. How detailed will the documentation be as to timing and timing constraints?
For example, in your response above, you described the functionality of the gyro driver without giving any data rates. Will the functional document have the data rates for the sampling and such? You don't need to give details here.

Thanks again,
Randy Wood

BradAMiller
10-09-2008, 20:34
This touches on my primary curiosity:
How will the library implement multitasking and scheduling?
I think this is the most significant change in the new system and the real challenge in an RTOS. There is a conflict between ease of programming (learning curve) and advanced functionality. I can see a need to abstract this issue away for the basic users who do not utilize the CPU enough for it to be a difficult issue. However, the user with heavy vision processing (multi-target tracking) and multiple PID loops will need low-level access to the scheduling features to ensure deterministic performance.

We are hoping to provide some classes to help start and manage tasks. They will make it easier to cleanly shut down at the end of the program. Teams may also call the VxWorks functions directly if they desire. One of the challenges for teams choosing to use the multitasking features will be managing priorities and doing synchronization. It can get thorny.

So in the library, how can the programmer declare a object/method/function as a separate task (process)?

In VxWorks you just supply a function that runs as the new task and a bunch of parameters that are passed to it. Our class will do the same thing.
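The "function plus parameters" model can be sketched with std::thread as a portable stand-in for VxWorks taskSpawn (which additionally takes a task name, priority, options, and stack size). The function and variable names here are illustrative, not the actual library classes:

```cpp
#include <atomic>
#include <thread>

// The task body: an ordinary free function plus its parameters.
std::atomic<int> g_lastSetpoint{0};
void ControlTask(int setpoint) { g_lastSetpoint = setpoint; }

// Spawning it, loosely analogous to VxWorks
// taskSpawn("ctrl", priority, options, stackSize, ControlTask, 42):
void SpawnControlTask(int setpoint)
{
    std::thread t(ControlTask, setpoint);
    t.join();  // a real robot task would keep running until shutdown
}
```

A wrapper class around this pattern is essentially what Brad describes: hold the function and arguments, spawn on construction, and clean up on program exit.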

How do you set priority among these tasks? Is it an object attribute?
What about inter process communication? A message class?

VxWorks lets you set the priority on task creation and you can modify it with a function call.

No plans right now for messaging or interprocess communication. Good opportunity for a team contribution.
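A minimal thread-safe message queue, the sort of team contribution suggested above, might look like this. The class and method names are hypothetical; VxWorks itself also offers native message queues (msgQLib) that teams could call directly:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// A minimal blocking message queue for passing data between tasks.
// Send never blocks; Receive blocks until a message is available.
template <typename T>
class MessageQueue {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void Send(const T& msg) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(msg); }
        cv_.notify_one();
    }
    T Receive() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T msg = q_.front();
        q_.pop();
        return msg;
    }
};
```

Routing all shared data through a queue like this avoids the priority and synchronization pitfalls Brad mentions, since only the queue itself needs locking.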


Also since Day 1 I have been curious about how the disable/autonomous states will work across the control system as a whole. Specifically, can the driver station communicate with the cRIO in disable mode(as so many teams use to select auto modes)?

This is pretty much the same as before - you can read the driver station while disabled. There is a lot of support in the library to help robot programs detect the field state and deal with it.

BradAMiller
10-09-2008, 21:12
Please lobby for the have-nots. Some dialog can start as soon as we have the documentation and tools. Getting some things cleared up now may reduce your work load when everyone gets the full system.


Everyone on the project is 100% behind that goal. We hope to make it available as soon as possible.


This sounds like fun! I have some concerns about breakpoints and motors. Are breakpoints tied into the FPGA to pause PWMs?


This particular issue caused lots of discussion. Sometimes it's desirable to have the motors running while debugging with the robot up on blocks. Other times it isn't desirable, and could even be dangerous.

We implemented a user watchdog timer that is enabled by default (but can be disabled) and will automatically stop the motors if your program doesn't periodically call a method. If used, it would shut down the motors on a breakpoint. But like seat belts, if you don't use them you can get in trouble. This gets even more complicated by the fact that multiple tasks can be running, but as an option you can set the debugger to suspend all running tasks on a breakpoint.
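The watchdog contract Brad describes can be sketched as a small state machine: the program must "feed" the timer within the expiration period or outputs are considered dead. The class, method names, and 500ms default below are illustrative, not the actual WPILib API:

```cpp
// Sketch of a user watchdog: outputs stay alive only if Feed() has been
// called within the last expiration_ms. Times are in milliseconds.
struct UserWatchdog {
    double expiration_ms = 500.0;  // hypothetical default timeout
    double lastFeed_ms   = 0.0;
    bool   enabled       = true;
    void Feed(double now_ms) { lastFeed_ms = now_ms; }
    // true when motor outputs should keep running
    bool IsAlive(double now_ms) const {
        return !enabled || (now_ms - lastFeed_ms) <= expiration_ms;
    }
};
```

Hitting a breakpoint stops the loop that calls Feed, so IsAlive goes false once the timeout elapses, which is how a breakpoint ends up stopping the motors.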

There is also system-level code that will shut down the motors in the event of a communications failure, disable, or e-stop.


Thanks for the information on the gyros. My real question is about the level of documentation of the FPGA that sits between the code and the devices. One of your earlier posts in this thread described both specific device interfaces and generalized tools to build an interface. How detailed will the documentation be as to timing and timing constraints?
For example, in your response above, you described the functionality of the gyro driver without giving any data rates. Will the functional document have the data rates for the sampling and such? You don't need to give details here.


I wouldn't expect to see documentation on the FPGA interfaces; not because we're keeping secrets from people, but because we're simply out of time. You can look at the code to understand what's going on.

We are trying to document as much of the specs as possible, like data acquisition rates and subsystem block diagrams. And you can always ask questions.

Dave Flowerday
10-09-2008, 22:18
We implemented a user watchdog timer that is enabled by default (but can be disabled) and will automatically stop the motors if your program doesn't periodically call a method.
Brad,
Isn't a hardware watchdog being used? If the processor completely locks up, does PWM generation stop, or does the PWM hardware continue to run with the last value given?

How can the system be considered absolutely safe if it is still possible for the motors to run while user processes are stopped at a breakpoint (I assume the processor is still running since I presume that a run-mode debugger is being used)? With the IFI system, if the user processor stopped updating the master for any reason the motors were shut down. Personally I think this is the only safe way to implement a "robot disable" function.

I guess I'm trying to find out how the "robot disable" function is actually implemented in the new system. I.e. what hardware is generating the PWM signal, and what conditions cause that hardware to stop generating and therefore shut down the motors? Is that hardware being updated by some sort of system (non-user modifiable) task?

Roboj
10-09-2008, 22:33
Brad,
I guess I'm trying to find out how the "robot disable" function is actually implemented in the new system. I.e. what hardware is generating the PWM signal, and what conditions cause that hardware to stop generating and therefore shut down the motors? Is that hardware being updated by some sort of system (non-user modifiable) task?

Hi Dave,
The watchdog is implemented in the FPGA. Therefore, it is non-modifiable and cannot crash. If the watchdog isn't strobed at a minimum rate, the I/O (which is also controlled by the FPGA) goes to safe mode, and the PWMs are set to the value that halts the motors. As mentioned in Brad's message, there are ways that the strobing of the watchdog can continue while debugging if that is desired. As Brad also mentioned, there are some disadvantages/hazards to allowing this.

Besides not wanting to throw too much technology at the participants at once, this is another reason we aren't allowing you to re-program the FPGA this year. We can't risk teams accidentally disabling the safety features until there are some features added to LabVIEW FPGA to let you do this safely.