I was explicitly told Windows and Linux, and that the compiler is based on the GCC toolkit, by more than one representative (one from NI and the other from WPI).
If you are right you will have made my day… (I'm a Mac user.) Does anyone have an official statement from the companies answering this?
Sorry about that. That’s what I meant. Thanks for correcting me.
- C++ will finally allow us to use object-oriented programming, making code cleaner and more efficient to write and debug.
You don’t need an OO language to program in OO. We did OO in C this year (well, mostly). It’s just way easier with an OO language :P.
Well I guess you can’t exactly practice polymorphism in C. But the whole encapsulation thing, easy.
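For anyone curious what "doing OO in C" looks like in practice, here is a minimal sketch of the encapsulation pattern being described. The `Counter` type and function names are invented for illustration, not from any team's actual code:

```c
#include <stdlib.h>

/* "Encapsulation in C": in a real project the struct body lives in one
 * .c file and the header exposes only `typedef struct Counter Counter;`
 * plus the functions, so callers can't touch the fields directly. */
typedef struct Counter {
    int value;   /* private by convention: only touched through the API */
} Counter;

Counter *counter_create(void) {
    Counter *c = malloc(sizeof *c);
    if (c) c->value = 0;
    return c;
}

void counter_increment(Counter *c) { c->value++; }

int counter_value(const Counter *c) { return c->value; }

void counter_destroy(Counter *c) { free(c); }
```

That covers encapsulation; polymorphism in C means hand-rolling tables of function pointers, which is exactly the part that gets much easier once you have real classes.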
The evaluation download page for the WindRiver compiler (http://www.windriver.com/portal/server.pt?space=Opener&control=OpenObject&cached=true&parentname=CommunityPage&parentid=0&in_hi_ClassID=512&in_hi_userid=27106&in_hi_ObjectID=848&in_hi_OpenerMode=2&) lists the following system requirements:
- Windows NT (Service Pack 5 or later), Windows 2000 Professional, Windows XP Professional, or Windows Vista (Business and Enterprise)
- Red Hat Linux 7.2 or later
- Solaris 2.6 or later
Now that means you’ll be downloading an RPM but it will be trivial for a semi-experienced Linux user to install it on any number of distros.
um… isn’t C an object-oriented language already?
No, C is a procedural language.
You’re right, I was getting them mixed up, but if this isn’t getting too off topic, what kinds of languages are there and what is the difference? Any time I run into a different style I just adjust and then go with it.
LabVIEW, while I have no experience with it, is a graphical environment you use to code.
C++ is (mostly) a superset of C, meaning that it keeps the same basic syntax. C++ is basically C with added features such as classes (allowing for object-oriented programming) and templates. If you’re already comfortable with C, the jump to C++ should be fairly easy. Online tutorials such as learncpp.com have helped me a good deal.
While you can “fake” OOP with C, actually having classes to work with brings a whole new world o’ fun into programming.
If it will run on Linux, it shouldn’t be that hard to get it to run on a Mac, given the Unix core. In addition, OS X already ships with much of the usual Unix software, such as Apache and GCC. I am going to try to get the compiler to run ASAP.
I can officially answer that we do not support the Mac for downloading Real-Time code to the cRIO. You can develop code on a Mac, but you must use Windows or Linux to upload your code to your controller.
However, virtual machines are a wonderful thing, especially if you’ve got an Intel Mac that can run Windows and Mac OS X natively. In that scenario your problem is solved.
Is (or will) the protocol used to upload programs to the cRIO be documented? It seems to be just a simple IP transmission over Ethernet. I’d be more than happy to use a protocol specification to write a loader for OS X / Solaris / any other unsupported UNIX system.
*shudders* If you’re willingly going to walk into the fray of dynamic memory allocation on a real-time, (relatively) high-speed system, you have my sympathy. I know I wouldn’t look forward to tracking down memory leaks, or figuring out how to fail gracefully when my autonomous routine couldn’t grab enough memory. Mostly it’s something I just don’t want to have to worry about.
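For what "failing gracefully" might look like, here is one hedged sketch: check every allocation and degrade to a smaller, statically allocated buffer instead of crashing mid-autonomous. The function names and buffer sizes are made up for illustration:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative only: fall back to a static buffer when malloc fails,
 * so a real-time routine degrades instead of dying. */
#define FALLBACK_SAMPLES 32
static double fallback_buf[FALLBACK_SAMPLES];  /* always available */

double *acquire_sample_buffer(size_t requested, size_t *granted) {
    double *buf = malloc(requested * sizeof *buf);
    if (buf) {
        *granted = requested;
        return buf;
    }
    /* Allocation failed: hand back the smaller static buffer. */
    *granted = FALLBACK_SAMPLES;
    memset(fallback_buf, 0, sizeof fallback_buf);
    return fallback_buf;
}

void release_sample_buffer(double *buf) {
    if (buf != fallback_buf)   /* never free the static fallback */
        free(buf);
}
```

The caller checks `granted` and works with whatever it got; the point is that a failed `malloc` returns NULL rather than throwing, so the check has to be explicit on every call site.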
Also, in reply to Chris’s rather long post:
I just started working at Arc Specialties, where we make welding robots of various sorts. And yes, 75% of our robots run off a single Omron PLC. I had the privilege of being tossed into ladder logic cold, but as you’ve noted, having a programmer’s mindset tends to make moving from one environment to another largely a matter of syntax. I was also interested to discover that thinking in terms of small, high-speed loops for programming the IFI RC developed a good frame of mind and some useful habits for programming in ladder logic. I’m sad to see the IFI controller go for this reason in particular. It’s been preparing a large number of programming students for a smooth-ish transition into the ladder logic that’s in charge of the vast majority of robots out there.
I’ll also agree that I really like Labview for data acquisition, signal processing, and other high speed applications like motion control. It’s difficult to get around the fact that implementing a state machine or anything like it in Labview can get pretty annoying. I had to implement one for some automated testing of a device in college and keeping track of every input and output from the case structure, as well as how to manage the transitions, quickly got on my nerves. And that was a fairly simple and straightforward sequential state machine. Unless NI will be providing us with the State Chart module, which seems unlikely, I’m not at all going to relish our team developing an autonomous mode of any real complexity. Nor implementing any other automated functions during teleop mode that can’t be broken down into simple signal processing with an on-off switch.
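For contrast with the case-structure approach described above, the same kind of sequential state machine is a few lines of `switch` in a text language. The states and transition flags below are invented purely for illustration:

```c
/* A sketch of a sequential autonomous state machine as a C switch.
 * States and conditions are illustrative, not from any real routine. */
typedef enum { DRIVE_FORWARD, TURN, SCORE, DONE } AutoState;

/* Called once per control loop; returns the next state. */
AutoState auto_step(AutoState state, int at_target, int turned, int scored) {
    switch (state) {
    case DRIVE_FORWARD:
        /* drive motors here */
        return at_target ? TURN : DRIVE_FORWARD;
    case TURN:
        /* turn in place here */
        return turned ? SCORE : TURN;
    case SCORE:
        /* run the scoring mechanism here */
        return scored ? DONE : SCORE;
    case DONE:
    default:
        return DONE;
    }
}
```

Tracking which inputs and outputs each state touches is just a matter of reading each `case`, which is the part that gets unwieldy in a graphical case structure.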
Also, thanks TONS for MrPLC. I’m uncertain I would have survived my first project here without that forum and the resources and documentation provided there.
As far as I know, the answer to that is no. We have never and possibly will never release protocol information such as that, especially since it changes between some LabVIEW versions.
Obviously I won’t be pulling any special tricks. Dynamic memory will be the thing addressed last, AFTER everything else is working. I won’t be using it extensively; I’m just saying it will open up some interesting avenues.
Come to think of it, I’m not even sure I’ll use it. Just the prospect of finally being able to use it is rather interesting.
I find the prospect of being able to do OCR on our robot similarly interesting. This doesn’t mean I have any desire whatsoever to use this feature in an FRC competition. Looks meaningfully in the direction of the GDC
How would you combine LabVIEW and C? What functions, for example, would you use C rather than LabVIEW for? Could you give a few examples or links to see how this all comes together?
Through the Code Interface Nodes and the Call Library Nodes. After thinking about it a little more, though, I would skip the Code Interface Nodes, since the Call Library Nodes are the easiest to work with.
You can build .out files for VxWorks (the equivalent of .dll files for Windows/ETS) and call into them using the Call Library Node. This allows you to have a C-compiled library that you can use. It’s incredibly easy to build libraries with Workbench, the VxWorks IDE you should be getting; we even have a step-by-step instruction set to help you build your own libraries under VxWorks.
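To make the .out idea concrete, here is a hedged sketch of the sort of plain C function you might compile into a VxWorks module and call from a Call Library Node. The name, signature, and scaling are all invented for illustration; the node just needs an ordinary C calling convention:

```c
/* Illustrative function for a VxWorks .out library, callable from
 * LabVIEW's Call Library Node. Scales a joystick axis in [-1, 1]
 * into a motor command clamped to [-127, 127]. */
int scale_axis(double axis, double gain) {
    double out = axis * gain * 127.0;
    if (out > 127.0)  out = 127.0;
    if (out < -127.0) out = -127.0;
    return (int)out;
}
```

On the LabVIEW side you would point the Call Library Node at the loaded module, declare the two `double` inputs and the `int` return, and wire it like any other VI.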
There are multiple reasons why it would be beneficial to have a hybrid system.
First, I’m what I call a “realist” - I don’t think you should be forced to take algorithms you’ve already written in C and convert them to LabVIEW just for the sake of having them in LabVIEW. I know there are a lot of great algorithms for performing lots of different kinds of advanced calculations floating around, and a good number of them are in C. Sometimes you want to break them apart and study them in-depth in order to become a master of the algorithm, and sometimes you realize you have 2 weeks before ship and you just want to plug-and-chug. Being able to build and access C-based libraries is beneficial in this case.
Second, having a C-based library gives you the power to change out modules on a filesystem level, or access modules on a given filesystem. This makes a lot more sense on NI controllers with USB Mass Storage support: you could have different program code on different USB flash drives and change them out at will. You can do the same thing with the FTP server on the controller; it lets you update specific portions of your code without recompiling everything (and it can also be done through an automated script).
Third, code sharing. If you have the .out files, you can share code with teams using the LabVIEW interface as well as the C/C++ interface. This would be especially useful for sharing autonomous algorithms with rookie teams that don’t have an autonomous mode.
Lastly, using C instead of LabVIEW can be a valuable crutch. Depending on when the controller gets into your hands and how much time you have to learn how to use it, having an out can make you more comfortable with the whole 6-week deadline thing. I would hope that teams would pick up the LabVIEW software we provided in the KOP, maybe even pick up an NXT robot and the LabVIEW NXT Toolkit, and learn how to use LabVIEW in the off-season. However, that’s not going to be possible for all teams. Giving teams a way to experiment with LabVIEW while still being able to fall back on some C code where necessary can help them move to LabVIEW without being cornered into an all-LabVIEW or all-C/C++ situation.
LabVIEW itself is chock full of examples. Just use the example finder found in the lower right-hand corner of the LabVIEW 8.5 opening splash screen. Look up “Call Library Node” and you’ll find tons of examples of how to use this feature.
I hope I answered your question adequately.
Can someone give a list of the programs that teams will be able to program with in 2009 and following years?
Danny, I am doing just that, and the examples work great with the NXT, but I don’t know why or how. For instance, in the getting started guide there is the NXT microphone example. I am trying to figure out where the sample rate is set. There is a lot more detail shown in the VI hierarchy display, but I haven’t been able to drill down to the basic configuration info. Is it worth trying to figure out how the VIs work at a lower layer (at least for the NXT), or are the configurations kind of hard-coded and not accessible? Now that I think of it, I don’t even know how the system figured out what sensor input I plugged the microphone into. I need to spend more time reading the generic help info, but searches on “sample rate” did not turn up anything that looked fruitful.