View Full Version : C++ vs. Java (speed considerations only)
daniel_dsouza
02-08-2012, 14:02
Hello CD,
I'm told that code written in C++ will run faster than Java code. My question is, how much faster, and under what conditions will it be noticeable? I'm especially interested in situations where teams have used both C++ and Java environments (at different times of course).
A large speed advantage from C++ would be the only reason we would consider switching (we use Java now), as there really aren't many other benefits to C++ for us.
Thanks,
Daniel
My professional answer is: it depends. For generic software that doesn't use specific hardware implementations, the results aren't as intuitive as you might expect, since post-JIT code will run at least as fast as C++ (i.e. after the same code path has executed a few times and been compiled).
For hardware-based operations (OpenGL, OpenCV, custom hardware interfaces like radios), C++ is typically faster latency-wise, yet processing-wise it's the same as Java. The difference is due only to the JNI code layer, and it's implementation-dependent (i.e. most of the time it isn't a noticeable difference unless you're shoving megabytes of data through the code per second).
Bad programming practices will cause more performance issues on an FRC bot than the language choice.
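The post-JIT point above can be demonstrated with a crude warm-up measurement. This is only a sketch (the class and method names are mine); for trustworthy numbers you'd want a real harness such as JMH. The first pass runs interpreted while later passes run JIT-compiled code:

```java
public class JitWarmup {
    // A deliberately non-trivial function so the JIT has something to optimize.
    static double work(double x) {
        double acc = 0;
        for (int i = 1; i <= 1000; i++) {
            acc += Math.sin(x / i);
        }
        return acc;
    }

    // Times one full pass over the workload, in nanoseconds.
    static long timeOnePass() {
        long start = System.nanoTime();
        double sink = 0;
        for (int i = 0; i < 1000; i++) {
            sink += work(i);
        }
        long elapsed = System.nanoTime() - start;
        if (sink == Double.MAX_VALUE) System.out.println(sink); // keep the loop from being elided
        return elapsed;
    }

    public static void main(String[] args) {
        long cold = timeOnePass();      // first pass: mostly interpreted / compiling
        long warm = 0;
        for (int i = 0; i < 10; i++) {
            warm = timeOnePass();       // later passes: JIT-compiled code
        }
        System.out.printf("cold: %d us, warm: %d us%n", cold / 1000, warm / 1000);
    }
}
```

On most JVMs the warm pass comes out noticeably faster than the cold one, which is exactly the "after the same cycle has run a few times" effect.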
Some articles:
http://vanillajava.blogspot.com/2011/07/c-or-java-which-is-faster-for-high.html
http://vanillajava.blogspot.com/2012/01/java-sucks-revisited.html
http://vanillajava.blogspot.com/2012/02/high-performance-libraries-in-java.html
http://vanillajava.blogspot.com/2011/08/collections-library-for-millions-of.html
And an article to stir up controversy:
http://vanillajava.blogspot.com/2011/08/java-can-be-significantly-faster-than-c.html
Tom Bottiglieri
02-08-2012, 16:43
Have you run into a CPU ceiling? If not, what's the point of switching?
I've used both and I must say that I've found Java to be a more pleasant experience for FRC. It was easier to get students involved and learning, I could run it on my mac without a problem, and the speed of development was just WAY faster (less boilerplate, Eclipse/NetBeans are fairly good at autocomplete, etc).
Jon Stratis
02-08-2012, 16:53
Have you run into a CPU ceiling? If not, what's the point of switching?
+1
There are certainly reasons for a team to consider switching programming languages, and it really is great that we have the option to use Java, C++, or LabView. However, there is also the old adage, "If it ain't broke, don't fix it!"
Your best bet to examine the differences is to re-write last year's code in C++, then do some testing yourselves. Does the robot work better?
In my opinion, you'll see more benefits from knowledge and experience (development time, debugging, etc) in Java than you would ever see in performance from switching languages.
daniel_dsouza
02-08-2012, 21:36
Thanks guys,
There were a few members who wanted to use C++ for our language next year, but I think we will stick with Java. But there is still the off-season to experiment.
Hjelstrom
02-08-2012, 22:35
If you were going to try to achieve high-framerate vision processing on the cRIO, then C++ might be a good choice. Other than that I would recommend sticking with Java. Nothing else you would do on a robot should come close to taxing the CPU in Java.
Now that you can offload the vision processing to your driver station (e.g. see the paper released by 341 and the cookbook from WPI), you really don't need to do it on the cRIO anyway. There are some other advantages to doing the vision processing on your driver station too, such as being able to see what it's doing :-)
If you didn't know already, LabVIEW has MAJOR lag problems, especially with relays. This year, the code that came with LabVIEW for the relays was unusable and we were forced to write our own. This prevented us from competing in most of our matches at the Sacramento regional. So if you are looking for processing speed, don't use LabVIEW.
Greg McKaskle
03-08-2012, 07:37
If you believe you have found a bug with relays in LabVIEW, can you please be more specific? What workaround "fixed" the problem?
Each of the languages uses the same integrated tests to verify correct operation, and we have performance tests for each as well. LabVIEW is in fact a compiled language, like C++, but very different in that it is dataflow. Many teams used LabVIEW this year, and we don't have widespread reports of lag, or of lag in relays. Details please. Since this isn't on topic, feel free to PM me.
Greg McKaskle
ttldomination
03-08-2012, 08:11
Thanks guys,
There were a few members who wanted to use C++ for our language next year, but I think we will stick with Java. But there is still the off-season to experiment.
This is interesting, but it's important to take note of why they're suggesting it. If the members want to move over because they feel like they would be more comfortable in that environment/they have more experience in that environment, then this merits further thought.
And definitely look into off-season testing. We're going through similar changes, so we're looking forward to taking full advantage of this fall to make a smart, informed decision.
- Sunny G.
If you didn't know already, LabVIEW has MAJOR lag problems, especially with relays. This year, the code that came with LabVIEW for the relays was unusable and we were forced to write our own. This prevented us from competing in most of our matches at the Sacramento regional. So if you are looking for processing speed, don't use LabVIEW.
There are two things that are REALLY slow in LabVIEW:
-VI calls through the normal "execution system" when they should be Subroutines or inlined. The majority of WPILib suffers from this problem, unfortunately. The normal execution system is designed for heavily multitasking systems with many concurrent processes: it treats each VI as an independent node in the execution system unless it is flagged as a Subroutine (or inlined at compile time), and that per-VI scheduling isn't very efficient. When properly written, LabVIEW code is far easier to multitask without much overhead.
-Programmers writing code while thinking procedurally. Thinking functionally is much better in LabVIEW (and even better in Simulink and more hardcore dataflow and modeling languages). That said, I still firmly believe it's better for programmers to know multiple languages and paradigms, as it will help you better choose the correct language and environment for a challenge.
As for C++ vs Java, non-JIT Java code will be slower. For completely personal reasons, I am a fan of purely procedural C programming for embedded systems.
I am also quite unimpressed by the timing jitter I got in LabVIEW. The RT threads are able to hold very tight timing, but at fairly significant CPU expense.
Greg McKaskle
03-08-2012, 23:43
This (http://izismile.com/2010/10/27/fighter_jet_cockpits_16_pics.html) is what the cockpit of fighter planes look like.
This (http://www.aviafilms.com/photos/cessna-150-cockpit.JPG) is what the cockpit of a Cessna looks like.
Is WPILib meant to be a fighter or a Cessna or somewhere in between? In all three languages, it steers away from advanced features, and the APIs do lots of parameter checking and error reporting. WPILib is not the ultimate example of performance techniques for any of the languages.
... The majority of the WPIlib suffers from this problem, unfortunately. The normal execution system is designed for heavily multitasking systems and systems with many concurrent processes. It treats each VI as an independent node in the execution system, unless ...
I'll argue that WPILib doesn't suffer from this. The goal was to make teams successful writing code to safely control an FRC robot. Much like the HW kit of parts, it isn't a turnkey robot SW solution, but instead offers relatively safe and straightforward components that can be combined in many ways. It is shipped as source where possible and often with debugging in place in order to allow safe exploration even to the point of making optimizations and other modifications. Just because the Cessna doesn't have the same cockpit as the fighter doesn't mean it failed at its intended purpose.
As for the LabVIEW execution system, I'll be happy to answer questions on how it operates. VI calls and Subroutines are closely related and both introduce overhead. In return, you gain code reuse and abstraction. I do not agree they are REALLY slow.
Just as a multitasking OS makes sense on a single core computer, so does a multitasking language. LV doesn't just treat VIs as independent nodes, it treats loops and other code clumps containing asynchronous nodes as independent nodes -- because they are. They are asynchronous. Lumping them together amortizes scheduling overhead but risks unnecessary blocking. This is the juggling act that the OS and the LV scheduler and clumping algorithm perform. All the languages support multitasking, but do so differently.
Summary?
All three languages are fast enough if used properly, and not nearly fast enough if used improperly. All are used in industry in performance demanding and time critical applications and are used by large numbers of FRC teams. There are many ways to choose the language(s) you want to learn. Performance shouldn't be the sole reason.
Greg McKaskle
JamesTerm
04-08-2012, 03:15
Thanks guys,
There were a few members who wanted to use C++ for our language next year, but I think we will stick with Java. But there is still the off-season to experiment.
I'd like to share with you an email my boss sent to me some time ago... this goes beyond robotics in our professional world of application development, but I think it would be good to know why some people use c++. I started with BASIC and then learned 6502 assembly... then finally came around to c++ here goes:
Programming language wars are about the same as religious wars... everyone believes that their chosen favorite(s) are the only ones that make sense, and I am certainly not going to wade into this argument, because at the end of the day any language that provably obeys the basic axioms of computation can technically perform the exact same operations.
The above said, a couple of observations.
1. The world is built on top of C(++), and so while other languages might be good at expressing some things "better", at the end of the day C and C++ seem to fill some magic middle ground between assembly and higher-level languages. Every major app you use, every OS, even the C# runtime and compilers are all written in C(++). The same is true of every video codec, every piece of firmware (your fridge, phone, car, planes, etc...), your web browser and whatever UI control you are reading this in, etc...
2. As an illustration of the importance of C(++): every major change in hardware architecture first adopts and uses C-like languages and not others. Every GPU language (that has been adopted) is based on C, as is OpenCL, most HW description languages, etc...
3. In the real world, if you are going to implement an API that anyone else can use, it needs to be C. This is still the uniformly accepted standard for libraries, and for good reason. Try using a C# library from C or Fortran. Now try a C library.
4. A few years ago I decided that we needed to move our UI development to C#, in the belief that some of the higher-level programming concepts would make development more robust by avoiding the issues that C(++) has with manual memory management, etc... In practice (and some of this might be down to the fact that C(++) programmers and C# programmers tend to be at different skill levels on average), we have found that our UI code has far more bugs, is less stable, and is harder to debug than our C(++) code.
5. I am not an expert on the tools, but while there are good C# tools, C# suffers the disadvantage of having been around for only a fraction of the time of C(++). Whenever I am working on C# code I simply cannot believe that there is no edit-and-continue support, etc... Likewise, you are not going to find the long history of proven tools for any language other than C in terms of profiling (yes, some C# ones exist, but they are quite frankly not at the same level), code verification, code documentation, distributed compilation, etc... etc...
*Ahem*
Beating a dead horse, that was dead a long time before FIRST robotics came around, but throwing in my two cents anyways.
In FIRST robotics, for the applications we develop, if you're concerned with efficiency, then it really doesn't matter at all whether you use LabVIEW, C++, or Java.
All three are more than sufficiently tight to handle any application a FIRST robot utilizes, so long as it's written correctly.
So long as it's written correctly being the key statement.
I recommend using whatever language your team has the most experience with, or your mentors are most familiar with, or in the worst case, just the one you feel like you want to learn. Hell, take a year, write your robot code from that year in all three, and pick one based on just your personal impressions if you want. But keep in mind: if it's slow, if your CPU is capped and your logic isn't getting executed on time, it's very, very, very likely that you're doing it wrong rather than the language being to blame.
And if someone comes along and mentions something akin to 'all the best teams on Einstein seem to be using C++', don't forget that a lot of the teams on Einstein are either senior teams, who used to HAVE to develop in C and so already had that experience when it came time to pick between C++ and LabVIEW, or teams with great mentors. Mentors born in the 50's-80's are much more likely to have C++ experience than Java or LabVIEW experience, and mentors born in the 70's-80's are much more likely to have Java experience on top of C++ experience, but still very little LabVIEW, just because it's relatively young and has historically had very specific market exposure.
In my workplace we use C, C++, Ada, Java, LabVIEW, and many others, and each is slightly better or more convenient for certain things. But what's most important in every application is that you use something your team is comfortable with, because a great deal of real-world efficiency problems happen because the developer who wrote the functionality didn't realize they could have made it faster with a slightly different implementation; they are entirely the developer's 'fault'.
JamesTerm
04-08-2012, 10:48
A large speed advantage from C++ would be the only reason we would consider switching (we use Java now), as there really aren't many other benefits to C++ for us.
Thanks,
Daniel
This is a little measuring stick I've used in the past... when I mentor other people in programming, they either get pointers or they don't. This may be a quick way to know: if you feel you cannot live without pointers, then you probably want to use C++. (I do not feel that I can live without them.) ;)
flameout
04-08-2012, 12:26
Until this past year (I just graduated this year), I would have agreed with everyone who stated that any of the three should be fast enough.
This year, however, we ran into issues with our code's performance. We were using Java. These issues primarily came from low performance in our IO calls.
Benchmarking some of the sensor-reading code, we found that it normally took about a millisecond to read a gyro. I'm not sure what it is for reading encoders, or for updating a PWM output to a motor controller, but I found this to be quite long. Unfortunately, due to this slow read time, we were unable to do integration on all three gyros, which adversely affected our balancing code.
Although I haven't measured for other sensors, I will have to comment that we had to drop our drivetrain control loop from our desired 200 Hz to 100 Hz (this affected our tuning noticeably). We had to do this for the arm as well. Our code was not written in an inefficient manner -- if you think that I may be incorrect, then please check it out; it's at http://code.google.com/p/frc-team-957-2012/.
I'll put in another note that when I benchmarked CAN performance for Logomotion, I found that I could transmit about 180-200 operations per second (depending on the message), which is far below the theoretical 900+. Although I didn't try sending multiple at once, which I now wish I would've tested, I believe that most of the time spent in a CANJaguar setX() call is actually spent in WPILibJ and/or OS code -- not in waiting for transmission and reception of the CAN message.
Note: We cannot say anything about the efficiency of C++ or LabVIEW, and therefore cannot make the statement that either of the two are any more or less efficient than Java. I am purely trying to point out that the statement "all three should be more than fast enough for FRC use" is false.
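For anyone who wants to reproduce this kind of measurement, the timing pattern itself is simple to sketch. The `fakeGyro` stand-in below is mine, because WPILib isn't available off the robot; on the cRIO you would pass the real read call (e.g. the gyro's angle getter) instead:

```java
import java.util.function.DoubleSupplier;

public class ReadBenchmark {
    /** Times n calls to a read function and returns the average cost in microseconds. */
    static double averageMicros(DoubleSupplier read, int n) {
        double sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sink += read.getAsDouble();
        }
        long elapsed = System.nanoTime() - start;
        if (Double.isNaN(sink)) System.out.println(sink); // defeat dead-code elimination
        return elapsed / (n * 1000.0);
    }

    public static void main(String[] args) {
        // Stand-in for a real sensor read; on a robot you would pass the actual
        // sensor call here, which is what the ~1 ms figure above measured.
        DoubleSupplier fakeGyro = () -> Math.random();
        System.out.printf("avg read: %.3f us%n", averageMicros(fakeGyro, 10_000));
    }
}
```

Averaging over many calls matters here: a single call's timing on the cRIO is easily distorted by scheduling, so per-call figures like "about a millisecond" should come from a loop like this one.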
drakesword
07-08-2012, 22:39
My professional answer is: it depends. For generic software that doesn't use specific hardware implementations, the results aren't as intuitive as you might expect, since post-JIT code will run at least as fast as C++ (i.e. after the same code path has executed a few times and been compiled).
For hardware-based operations (OpenGL, OpenCV, custom hardware interfaces like radios), C++ is typically faster latency-wise, yet processing-wise it's the same as Java. The difference is due only to the JNI code layer, and it's implementation-dependent (i.e. most of the time it isn't a noticeable difference unless you're shoving megabytes of data through the code per second).
I agree with you on the first part. The second part I very much disagree with regarding the MB's of data. I work for a company that does e-discovery. Our main processing system is entirely Java. It is able to process almost 100 GB per hour depending on the media. Just today we processed 1.2 GB of mail storage in 45 seconds.
JamesTerm
07-08-2012, 23:49
I agree with you on the first part. The second part I very much disagree with regarding the MB's of data. I work for a company that does e-discovery. Our main processing system is entirely Java. It is able to process almost 100 GB per hour depending on the media. Just today we processed 1.2 GB of mail storage in 45 seconds.
How do you define processing? Any language can do simple tasks like data transfer with simple logic instructions... but it's another thing when you really process the data in tight inner loops (e.g. 3D rendering, audio/video effects processing). With C++ you can use inline assembly and intrinsics that explicitly choose which assembly instructions to use. You can, for example, do prefetching to ensure the P and V pipes are always full in an inner loop. Back in the early 2000-2004 era we also had to schedule instructions ourselves to avoid penalties from failed branch predictions, keep two divides from landing too close together by squeezing in lightweight instructions like 'and', 'or', or even adds, and cycle through each MMX/SSE register (it was crazy). Around 2005-2008 Visual Studio mastered the intrinsics well enough to produce competitive instruction scheduling. Furthermore you can do things like this: http://www.jaist.ac.jp/iscenter-new/mpc/altix/altixdata/opt/intel/vtune/doc/users_guide/mergedProjects/analyzer_ec/mergedProjects/reference_olh/mergedProjects/instructions/instruct32_hh/vc235.htm. Really now... when you speak of real processing power, consider what language they used to land the Mars Curiosity rover. ;)
drakesword
08-08-2012, 09:56
How do you define processing? Any language can do simple tasks like data transfer with simple logic instructions... but it's another thing when you really process the data in tight inner loops (e.g. 3D rendering, audio/video effects processing). With C++ you can use inline assembly and intrinsics that explicitly choose which assembly instructions to use. You can, for example, do prefetching to ensure the P and V pipes are always full in an inner loop. Back in the early 2000-2004 era we also had to schedule instructions ourselves to avoid penalties from failed branch predictions, keep two divides from landing too close together by squeezing in lightweight instructions like 'and', 'or', or even adds, and cycle through each MMX/SSE register (it was crazy). Around 2005-2008 Visual Studio mastered the intrinsics well enough to produce competitive instruction scheduling. Furthermore you can do things like this: http://www.jaist.ac.jp/iscenter-new/mpc/altix/altixdata/opt/intel/vtune/doc/users_guide/mergedProjects/analyzer_ec/mergedProjects/reference_olh/mergedProjects/instructions/instruct32_hh/vc235.htm. Really now... when you speak of real processing power, consider what language they used to land the Mars Curiosity rover. ;)
Think of it this way. We have a solid block of data, anywhere from 20 MB up to however large your storage is. You do not know what is in this block of data. You need to determine the data type, extract any metadata (date created, who created it, IP addresses, email, etc.), create an image of it if you can, and index all of the searchable text.
Now with Java you can get that crazy as well: disassemble the bytecode, examine your jumps, make your loops tighter, etc. I personally have not gone that far, but I assure you it is done all the time.
For vision processing, if you have the money, I would use an HDL frontend for speed, then maybe C++ for lower-level operations or, once again if you have the money, GLSL (the OpenGL shading language), Java for higher-level processing, and finally for the GUI I would use PHP, Perl, or bash. As you can tell, this setup would require a whole computer, not suitable for FRC, but it would work!
As far as the mars rover goes, I have no idea what they used. Do you have a reference? I would like to read it!
Every language has its advantages and disadvantages; there is no need to discount one over another in a generalized environment. What it really comes down to is the developer's experience with the language as well as the system(s) it is being developed for.
Personally, my opinion is that the language doesn't matter; what does matter is the communication between the parties involved: that you have something that works well, something YOU can understand, something your COWORKERS can understand, and something that is optimized for your needs.
I will not get into personal gripes about each language. If you want to hear from my personal experiences with languages feel free to message me.
As far as the mars rover goes, I have no idea what they used. Do you have a reference? I would like to read it!
There's good sources referenced from this offsite discussion here (http://programmers.stackexchange.com/questions/159637/what-is-the-mars-curiosity-rovers-software-built-in)
Moral of the story is mostly C code running on top of VxWorks. Not that that means C is better, still, but the source code was built on top of preexisting C firmware from earlier rovers.
I agree with you on the first part. The second part I very much disagree with regarding the MB's of data. I work for a company that does e-discovery. Our main processing system is entirely Java. It is able to process almost 100 GB per hour depending on the media. Just today we processed 1.2 GB of mail storage in 45 seconds.
This is a far different scenario from the one I had in mind (and quite honestly I'd conjecture that your mail processing system actually processes kB's of metadata rather than GB's of raw data in that amount of time unless it's a JBOD attached to a very expensive Xeon processor...):
1.) Start a live stream of 1080p @ 30Hz
2.) Render said stream via OpenGL on a Java display. Why OpenGL? The rendering of geometric overlays directly onto the video stream is much faster than Java's layered SWT, AWT, or Swing implementations.
3.) Decide that you want overlays AND image processing (line detection, dithering, target color enhancement, etc) from the same screen on the same viewport (matches a very common requirement from a demanding customer... heh...)
4.) Decide that all of this processing and rendering only has 1 server assigned to it (matches a common requirement in my world)
Depending on the underlying libraries, this may all be single-threaded -- i.e. that 16-core enterprise grade processor still might not be able to keep up since it's all asynchronous. If that's the case (as it often is with FOSS libraries for decoupling purposes) then multiple passthroughs of JNI alone in this scenario could cause more than 200ms of latency between the time a video frame hits the socket and the time the render hits the glass. One might say "oh, just re-write the libraries and JNI extensions yourself!", yet the actual amount of time (aka $) spent verifying the optimisation works perfectly far surpasses the minimal gain.
The latency is added to the image processing time; poor implementations can see 0.5 seconds of delay or more. The fastest I've ever seen (on optimised applications at work) is roughly 150 ms from video frame generation to rendering on the glass. 500 ms doesn't seem like a big deal unless it's what stands between a driver/pilot and a machine that costs 7 or 8 figures.
Again, the point here isn't that Java is slow or fast -- it's that a poorly constructed application is slow and Java's wrapping and hiding of the library implementations can compound that issue very easily.
JamesTerm
08-08-2012, 11:39
This is a far different scenario from the one I had in mind (and quite honestly I'd conjecture that your mail processing system actually processes kB's of metadata rather than GB's of raw data in that amount of time unless it's a JBOD attached to a very expensive Xeon processor...):
Thanks, JesseK, you saved me from needing to respond. Nice to meet another programmer who works with HD video. (Looking forward to supporting 1080p60 someday.) :)
For now... I just wanted to make sure Drakesword's benchmark fits in the "any language can do simple tasks like data transfer with simple logic instructions" category. I want to keep a close eye on C# and Java trends, as I don't want to fall victim to being a programmer who is behind the times. I tend to think of Java and C# as 4th-generation languages, somewhere in between C and scripting (5th?), where the further out you are, the less direct control you have over the CPU. Visual Studio is really at about 2.5 with the intrinsics and inline assembly.
http://en.wikipedia.org/wiki/Programming_language_generations
EricVanWyk
08-08-2012, 12:19
This thread has gone pretty far from the original question. It is easy to construct situations where one tool is a better choice than another - that is why there are multiple tools available.
Does anyone have benchmark numbers for FRC?
JamesTerm
08-08-2012, 12:26
This thread has gone pretty far from the original question. It is easy to construct situations where one tool is a better choice than another - that is why there are multiple tools available.
Hello CD,
I'm told that code written in C++ will run faster than Java code. My question is, how much faster, and under what conditions will it be noticeable?
Has it really? JesseK identified vision processing as one of those conditions, and vision can now be processed on the driver station. I think this is on par with the original question.
FRC programming is no longer limited to the mpc5200 processor. ;)
EricVanWyk
08-08-2012, 12:40
Do we have numbers for that?
JamesTerm
08-08-2012, 13:24
Do we have numbers for that?
I can't share numbers that I have access to, but I can share this link:
http://www.youtube.com/watch?v=BfVLAe4_HPg
That is processing vision... not done in Java. ;)
Tom Bottiglieri
08-08-2012, 13:43
I can't share numbers that I have access to, but I can share this link:
http://www.youtube.com/watch?v=BfVLAe4_HPg
That is processing vision... not done in Java. ;)
Yeah, and a Corvette goes faster than a go-kart. Doesn't help much for a go-karting competition though.
Personally, there's not really a huge difference in execution speed between these two languages that can't be made up for with architecture. Process your vision on the laptop, run your control loops in separate threads, and handle user interactions in an event oriented fashion. This will result in code that is easier to build, understand, maintain, read, re-use, etc.
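The architecture described above (control loops on their own threads) can be sketched with plain java.util.concurrent. The class and method names here are mine, and on a real robot the loop body would be the sensor-read/compute/actuate step:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ControlLoop {
    /** Runs a fixed-rate "control update" for a while and reports how many ticks ran. */
    static int runForMillis(long runMillis, long periodMillis) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger ticks = new AtomicInteger();
        // The control update runs on its own thread, unaffected by UI or vision work.
        scheduler.scheduleAtFixedRate(() -> {
            // read sensors, compute output, write actuators (omitted)
            ticks.incrementAndGet();
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
        Thread.sleep(runMillis);
        scheduler.shutdownNow();
        return ticks.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // A 10 ms period is 100 Hz; over half a second we expect roughly 50 ticks.
        System.out.println("ticks: " + runForMillis(500, 10));
    }
}
```

The point of the design is isolation: a slow vision frame or a blocked UI event handler cannot stall the loop, because the scheduler thread keeps firing on its own period.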
JamesTerm
08-08-2012, 14:13
Yeah, and a Corvette goes faster than a go-kart. Doesn't help much for a go-karting competition though.
Personally, there's not really a huge difference in execution speed between these two languages that can't be made up for with architecture. Process your vision on the laptop, run your control loops in separate threads, and handle user interactions in an event oriented fashion. This will result in code that is easier to build, understand, maintain, read, re-use, etc.
It should be noted that OpenCV was not the solution we used to achieve the motion tracking shown in that video, as it falls short of our performance needs and requirements. While many aspects of FRC code are in a "go-karting" competition (i.e. there is no noticeable need for inner loops), I believe vision processing fits into the Corvette category. As Tim (NewTek's owner) once said to me when I was attempting to code the vision processing for Rebound Rumble, "Real video is a 'dirty' world"... there is so MUCH room for innovation in processing video to interpret what you see. There is no limit on how advanced this code can be. Imagine having code robust enough to identify targets (AI technology) as well as a human driver can, but with lightning-fast speed. OpenCV lets people work with existing solutions, but if you want to make new, innovative solutions... you really need the right tool for the right job.
Thanks for the Corvette to go-kart comparison... I hope others can appreciate the hard work that we do and see the real power and innovation behind c++ with intrinsics.
davidthefat
11-08-2012, 22:53
Java has a lot more overhead than C++. For example, the JVM itself needs to use the same registers, caches, and RAM that the Java program would use. Also, a Java object carries at least 8 bytes of overhead (the object header). If you cast a type, you have to copy it, since Java does not simply reinterpret memory as the type you are converting to. Take for example a 32-bit integer in memory: in C++, you can cast it to read it in as a short (16 bits), and it will only read the first 16 bits. But since everything on the heap is an object in Java, the memory simply cannot be re-read as 16 bits; it needs to be copied.
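One nuance worth checking here: Java's primitive narrowing casts do truncate to the low bits, much like C++'s; it is the boxed-object case where a new value has to be created. A small sketch (the class name is mine):

```java
public class CastDemo {
    public static void main(String[] args) {
        int x = 0x12345678;
        short s = (short) x;                 // primitive narrowing keeps the low 16 bits
        System.out.printf("0x%04X%n", s & 0xFFFF); // prints 0x5678

        Integer boxed = x;
        // A boxed Integer cannot be reinterpreted in place as a Short; a new
        // value (and a new object, if it stays boxed) must be created instead:
        Short boxedShort = boxed.shortValue();
        System.out.println(boxedShort.equals((short) 0x5678)); // true
    }
}
```

So the copy cost applies to boxed values and object references, not to arithmetic on primitives, which the JIT compiles to the same truncating instructions C++ would use.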
Java has a lot more overhead than C++.
I don't think the argument is that Java or Labview has less overhead than C++, rather that in (nearly all) applications that are relevant at all to FIRST, the overhead is simply irrelevant.
With the singular possible exception of vision processing (and even there, the argument could be made that the realtime performance of a vision-based control system is not drastically impacted by the language's raw speed), it just doesn't matter.
I'd add PID loops to the list, but in most FIRST applications the effectiveness of your PID loop is probably bottlenecked before you encounter processing speed issues, either by I/O or low feedback resolution or frequency.
The likelihood that performance issues will be introduced by a team simply because they aren't as familiar with a different language drastically outweighs the actual performance differences between the three languages. That brings me back to: if performance is what you care about, stick with the language your team feels most comfortable with. And don't forget that you can realistically (and simply) export almost any processing that can tolerate Ethernet latency and bandwidth restrictions to the dashboard, and from there process it in any language, with any framework, on almost any hardware you want on your driver station machine.
That said, there's a reason why all three are used widely in different industries, and I strongly encourage anyone who's interested to learn all three and make their own decisions.
davidthefat
12-08-2012, 07:51
I don't think the argument is that Java or LabVIEW has less overhead than C++; rather, in (nearly all) applications that are relevant at all to FIRST, the overhead is simply irrelevant.
I agree. The overhead creates milliseconds of delay; it's insignificant compared to the sensor delay. But I was answering in terms of pure Java vs C++. I don't want people to think what's relevant to robotics is always relevant to general programming.
The overhead creates milliseconds of delay
In "normal" embedded control system programming, a whole millisecond is a LOT of delay.
What's relevant or important in general programming isn't always the same in robotics/control systems, but a few things that aren't that important in general programming are REALLY important in embedded control system programming. Milliseconds is one of them.
(I seriously don't think Java overhead creates milliseconds of delay for code the size of FRC code. LabVIEW...)
In "normal" embedded control system programming, a whole millisecond is a LOT of delay.
What's relevant or important in general programming isn't always the same in robotics/control systems, but a few things that aren't that important in general programming are REALLY important in embedded control system programming. Milliseconds is one of them.
(I seriously don't think Java overhead creates milliseconds of delay for code the size of FRC code. LabVIEW...)
Properly written Java and LabVIEW code do not create milliseconds of delay in our use case. Period. The only thing with the potential to generate that kind of delay is differences in the WPILib implementations for each language, which are both editable, and completely unrelated to the way the language actually works.
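If you want to verify this on your own setup, one rough approach is to time a large number of iterations of representative loop math. A sketch (the workload here is a stand-in, not real robot code):

```java
public class LoopTimer {
    // Time many iterations of a trivial "control" computation to see how
    // far below a millisecond the pure-Java per-iteration cost actually is.
    public static double avgMicrosPerIteration(int iterations) {
        double acc = 0.0;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            acc += Math.sin(i) * 0.5;   // stand-in for loop math
        }
        long elapsed = System.nanoTime() - start;
        // Keep acc observable so the JIT can't discard the loop entirely.
        if (acc == Double.MAX_VALUE) System.out.println(acc);
        return (elapsed / 1000.0) / iterations;
    }

    public static void main(String[] args) {
        System.out.printf("avg %.3f us/iteration%n",
                avgMicrosPerIteration(1_000_000));
    }
}
```

On any modern JVM this lands in the small fractions of a microsecond per iteration, orders of magnitude below the 20 ms FRC loop period.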
Greg McKaskle
12-08-2012, 11:12
NIWeek just concluded, and if you haven't heard of it, it is a large developer conference focusing on NI products, including LabVIEW. It is similar in size to the Apple, MS, Java, and Google conferences, but it embraces more of engineering than just SW. Attendees are a combination of industry developers and integrators, and all of them presumably care about efficiency.
So why don't all of them use C++? Why not assembly? Or heck, why not just build a circuit and get rid of that dinky computer altogether?
Glad you asked. At the end of the day, people are rewarded for solving problems. Sometimes the solution to the problem is well understood and the real problem is reducing the cost or size of the package. Other times, the problem is not that well understood, or the size and cost of the package matter less because you first need to show that people want this problem solved and are willing to pay for it. Optimizing a product people don't want is a waste of money and time. The overall cost factors of time, money, and expertise typically determine which tools make sense to use.
More SW specific:
If you have to write a given amount of code and you will not run out of CPU, you will tend to choose tools and languages that enable you to write that code more quickly or more bug-free. But if you have a limited CPU and your goal is to cram mucho-code into it, you will go for tools that produce smaller or faster execution even if it takes longer and costs more to write the code. Since no single language is optimal in all these areas, professional programming teams tend to know more than one and not be quite so dogmatic about it.
I'm tempted to highlight some of the NIWeek apps that demonstrate how the developers chose different tools for different tasks and integrated them to solve the real-world problem, but since FRC really is a microcosm of industry and academia, I'll use FRC instead.
The cRIO controller has an FPGA (a reconfigurable circuit), a realtime CPU/OS, and a non-realtime but more powerful OS/CPU in the DS. There are many ways to use this equipment, but note that tasks such as PWM generation and pulse-train decoding are done in the FPGA. You could do it other ways, but can you decode 40,000 encoder pulses on multiple channels as well using the CPU and any of the languages? No. Tasks like these benefit from dedicated HW. If the digital module being used were swapped out, it could easily go a hundred times faster.
Vision processing, whether done in OpenCV or NI Vision, is written in rather sophisticated C++. If running on a multicore computer, it will take advantage of the extra cores, and if the cores support SSE, MMX, or other vector instructions, the code will take advantage of those too. True, it could be done in the FPGA, but you would chew up the FPGA very quickly, and it is better utilized for other tasks. The vision algorithms can be written in all three languages, and all three will be slower than the professional libraries. Just using C doesn't necessarily mean it is fast. It is true that C/C++ is a good tool for writing highly efficient code, but simply using C/C++ doesn't make code highly efficient. Algorithm selection and attention to data access are far more important. The C/C++ compiler, and others as well, only have so much optimization magic to apply.
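As a concrete instance of "attention to data access": the traversal order of a 2D array changes nothing about the result, but it changes the cache behavior a lot, in Java just as in C++. A sketch (sizes and names are arbitrary):

```java
public class AccessOrder {
    static final int N = 1024;

    // Same arithmetic, different memory-access order. Row-major order
    // matches how Java lays out each int[] row contiguously, so it is
    // typically far friendlier to the cache than column-major order.
    public static long sumRowMajor(int[][] m) {
        long s = 0;
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[i].length; j++)
                s += m[i][j];
        return s;
    }

    public static long sumColMajor(int[][] m) {
        long s = 0;
        for (int j = 0; j < m[0].length; j++)
            for (int i = 0; i < m.length; i++)
                s += m[i][j];
        return s;
    }

    public static void main(String[] args) {
        int[][] m = new int[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                m[i][j] = i + j;

        long t0 = System.nanoTime();
        long a = sumRowMajor(m);
        long t1 = System.nanoTime();
        long b = sumColMajor(m);
        long t2 = System.nanoTime();
        System.out.printf("row-major %d ns, col-major %d ns, sums equal: %b%n",
                t1 - t0, t2 - t1, a == b);
    }
}
```

Both methods do identical work; only the access pattern differs, which is exactly the kind of thing no compiler's "optimization magic" will fix for you.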
So the teams already have some efficient resources written in the appropriate language at their disposal and they have bridges to those tools in all three languages. You have a spectrum of resources to choose from in SW, just as in motors and structural materials. Oh, and you have deadlines too. I still recommend you look at all these factors when you pick your programming language.
OK, I can't resist a few NIWeek highlights:
Entertaining robots -- Intel at NIWeek (http://www.youtube.com/watch?v=OlqLjr_CG4w&feature=related), alternate Intel ProVideo (http://www.youtube.com/watch?v=JLdB0WEixjM), and KUKA (http://www.youtube.com/watch?v=eEbdbt-CqiE)
Keynote Highlights (http://www.ni.com/niweek/keynote_videos.htm), in particular the Olin Robotic Sailboat, the Japanese KURAMA II (which uses the same controller as FRC), and Optimedica, which returned with another inspiring device and presentation. If you want to know more about why LV is what it is, Jeff Kodosky's Core Principles of LabVIEW will give insight. Cold fusion, now known as the Anomalous Heat Effect, was also a part of both the expo and the keynote.
TLDR (my first ever):
Theoretical efficiency of the code after it is written and debugged is not going to help you much if you can't get it finished. Pick the tools that make your team successful.
Greg McKaskle
daniel_dsouza
12-08-2012, 11:31
Wow, I must commend all of you. You have more than answered my question.
I probably should have stated in the beginning that I was primarily talking about code on the cRIO that reads inputs and sets outputs (not primarily vision).
The overall consensus seems to be that a team gains more by sticking with the language it knows best than by switching to a supposedly "better" one. Why? Because in FRC applications the overhead lies mainly in the stock software/hardware, not in language differences.
For non-FRC applications, the language used depends on what works best for that platform/task.
Thanks everyone!