  #13   24-04-2008, 10:12
Kevin Sevcik
(Insert witty comment here)
FRC #0057 (The Leopards)
Team Role: Mentor
 
Join Date: Jun 2001
Rookie Year: 1998
Location: Houston, Texas
Posts: 3,769
Re: NEW 2009 Control System Released

Quote:
Originally Posted by Jay Lundy
These are all related to the fear that NI's ultimate goal is that someday we will all be programming our robots in LabView because the C/C++ interface is crippled. The NI reps have stated many times that is not the goal and the C/C++ and LabView libraries will be of similar quality. If you want to program in C, nothing is stopping you.
I think our primary concern is that LabVIEW often makes things seem perfectly transparent, easy, and efficient when they're anything but. It's not terribly straightforward or apparent what it's actually doing with the code you've given it, and unless you're paying very close attention to what you're doing, it can easily come back and bite you.

I was working with a Real-Time VI developed by a fellow MechE student that was being used for data logging in an experiment. I was having difficulty getting the loop rate as fast as I thought I should be able to, until I finally noticed the problem: each data point was simply added to an array using the Insert Into Array function block. The VI was blithely allocating a new chunk of memory, copying over the entire old array, and then appending a single new data point. Had I not picked up a minor in CompSci, and with it some knowledge of how arrays actually work, I would have just assumed that was how fast things were supposed to run. More to the point, after discovering this, I attempted to ascertain just how intelligent LabVIEW was about resizing the array it was inserting this element into. Did it increase the allocated memory by 1 element? By 10? By some constant defined somewhere? Did it double the size? I can get at this information for quite a lot of other programming languages, but I haven't a clue what LabVIEW is doing for me behind the scenes.
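
For anyone who hasn't run into this before, here's a rough C++ sketch of why the growth strategy matters so much. This is not LabVIEW, and it's not a claim about what Insert Into Array actually does internally; the point count is arbitrary. Reallocating and copying the whole array on every append costs O(n^2) total copying, while geometric growth like std::vector's amortizes to O(n):

[code]
#include <chrono>
#include <cstddef>
#include <cstring>
#include <iostream>
#include <vector>

// Naive growth: every append allocates a buffer one element larger and
// copies the entire old array over -- the behavior described above.
static void append_naive(double*& buf, std::size_t& n, double value) {
    double* bigger = new double[n + 1];
    if (n > 0) std::memcpy(bigger, buf, n * sizeof(double));
    bigger[n] = value;
    delete[] buf;
    buf = bigger;
    ++n;
}

int main() {
    const std::size_t kPoints = 200000;  // hypothetical number of logged data points

    // Per-element reallocation: O(n^2) bytes copied in total.
    double* buf = nullptr;
    std::size_t n = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < kPoints; ++i) append_naive(buf, n, double(i));
    auto t1 = std::chrono::steady_clock::now();
    delete[] buf;

    // Geometric growth (std::vector roughly doubles): amortized O(1) per append.
    std::vector<double> v;
    auto t2 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < kPoints; ++i) v.push_back(double(i));
    auto t3 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::cout << "one-at-a-time realloc: " << ms(t1 - t0).count() << " ms\n"
              << "std::vector:           " << ms(t3 - t2).count() << " ms\n";
    return 0;
}
[/code]

The difference shows up as soon as the point count gets into the tens of thousands, which is exactly the regime a data-logging loop lives in.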

Also, to those hoping to use the FPGA like myself: you won't be subjected to VHDL unless you really want to be, or are boycotting LabVIEW. The LabVIEW FPGA module lets you do everything in function blocks just like everything else, and it then works out the VHDL for you. Of course, this is, again, just so much voodoo. I have also coded up a rather gate-hungry FPGA program to do some forward kinematics at ludicrous speed. I knew I was cutting things close on the size of my FPGA, but I didn't know how close until I changed the values of some constants I had rounded slightly out of laziness. Imagine my surprise when my design suddenly no longer fit on the FPGA. Of course, changing back to my old, slightly rounded values made everything fit just fine again.
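
For what it's worth, my best guess at what happened, and it is only a guess rather than a claim about what the LabVIEW FPGA compiler actually does: in fixed-point hardware, the number of bits a constant needs sets how wide the multipliers and adders built around it end up being, so a lazily rounded 0.5 can be dramatically cheaper in gates than the "exact" measured value. Here's a little C++ sketch (the 0.4971 link length is invented for illustration) that just counts how many fractional bits each constant would need:

[code]
#include <cmath>
#include <cstdio>

// How many fractional bits does a fixed-point representation need before the
// quantization error of `value` drops below `tol`?  On an FPGA, wider
// constants generally mean wider multipliers and adders downstream, so this
// is a very rough proxy for how much logic a single constant can drag in.
static int fractional_bits_needed(double value, double tol = 1e-9) {
    for (int bits = 0; bits <= 52; ++bits) {
        double scale = std::ldexp(1.0, bits);              // 2^bits
        double quantized = std::round(value * scale) / scale;
        if (std::fabs(quantized - value) < tol) return bits;
    }
    return 53;  // effectively needs full double precision
}

int main() {
    // Hypothetical link lengths for a forward-kinematics chain: the measured
    // value versus the lazily rounded one (both numbers invented here).
    const double measured = 0.4971;
    const double rounded  = 0.5;

    std::printf("measured %.4f needs %d fractional bits\n",
                measured, fractional_bits_needed(measured));
    std::printf("rounded  %.4f needs %d fractional bits\n",
                rounded, fractional_bits_needed(rounded));
    return 0;
}
[/code]

The rounded value needs exactly one fractional bit; the measured one needs far more at this tolerance, and every multiplier fed by it grows accordingly.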
__________________
The difficult we do today; the impossible we do tomorrow. Miracles by appointment only.

Lone Star Regional Troubleshooter