C++ vs. Java (speed considerations only)

Hello CD,

I’m told that code written in C++ will run faster than Java code. My question is, how much faster, and under what conditions will it be noticeable? I’m especially interested in situations where teams have used both C++ and Java environments (at different times of course).

A large speed advantage from C++ would be the only reason we would switch (we use Java now), as there really aren’t too many other benefits to C++ for us.

Thanks,
Daniel

My professional answer is: it depends. For generic software that doesn’t touch specific hardware, the results aren’t as intuitive as you might expect: once the JIT compiler has processed a code path (i.e. after the same code has run a few times), it will be at least as fast as the equivalent C++.
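As a rough illustration of the warm-up effect described above, here is a minimal sketch (the class name, loop sizes, and iteration counts are my own assumptions, nothing from WPILib): it times the same method before and after HotSpot has had a chance to JIT-compile it.

```java
public class JitWarmup {
    // A small hot method HotSpot can JIT-compile after enough invocations.
    static long sumOfSquares(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += (long) i * i;
        return s;
    }

    public static void main(String[] args) {
        // Cold timing: the method is likely still running as interpreted bytecode.
        long t0 = System.nanoTime();
        long checksum = sumOfSquares(1_000_000);
        long coldNs = System.nanoTime() - t0;

        // Warm-up: many calls push the method past HotSpot's
        // compilation threshold so it gets compiled to native code.
        for (int i = 0; i < 20_000; i++) checksum += sumOfSquares(1_000);

        // Warm timing: typically several times faster than the cold run.
        long t1 = System.nanoTime();
        checksum += sumOfSquares(1_000_000);
        long warmNs = System.nanoTime() - t1;

        // Checksum is printed so the JIT can't eliminate the loops entirely.
        System.out.println("cold=" + coldNs + "ns warm=" + warmNs
                + "ns checksum=" + checksum);
    }
}
```

Exact speedups vary by JVM and flags; the point is only that steady-state (post-JIT) timings are what matter when comparing against C++.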

For hardware-based operations (OpenGL, OpenCV, custom hardware interfaces like radios), C++ typically has lower latency, but raw processing speed is the same as Java’s. The difference comes entirely from the JNI layer and is implementation-dependent (i.e. most of the time it isn’t noticeable unless you’re shoving megabytes of data through the code every second).

Bad programming practices will cause more performance issues on an FRC bot than the language choice.

Some articles:




And an article to stir up some controversy:

Have you run into a CPU ceiling? If not, what’s the point of switching?

I’ve used both, and I must say that I’ve found Java to be a more pleasant experience for FRC. It was easier to get students involved and learning, I could run it on my Mac without a problem, and the speed of development was just WAY faster (less boilerplate, Eclipse/NetBeans are fairly good at autocompletion, etc.).

+1

There are certainly reasons for a team to consider switching programming languages, and it really is great that we have the option to use Java, C++, or LabVIEW. However, there is also the old adage, “If it ain’t broke, don’t fix it!”

Your best bet to examine the differences is to re-write last year’s code in C++, then do some testing yourselves. Does the robot work better?

In my opinion, you’ll see more benefits from knowledge and experience (development time, debugging, etc) in Java than you would ever see in performance from switching languages.

Thanks guys,

There were a few members who wanted to use C++ for our language next year, but I think we will stick with Java. But there is still the off-season to experiment.

If you were going to try to achieve high-framerate vision processing on the cRIO, then C++ might be a good choice. Other than that, I would recommend sticking with Java. Nothing else you would do on a robot should come close to taxing the CPU in Java.

Now that you can offload the vision processing to your driver station (e.g. see the paper released by 341 and the cookbook from WPI), you really don’t need to do it on the cRIO anyway. There are some other advantages to doing the vision processing on your driver station too, such as being able to see what it’s doing :slight_smile:

If you don’t know already, LabVIEW has MAJOR lag problems, especially with relays. This year, the code that came with LabVIEW for the relays was unusable and we were forced to write our own. This prevented us from competing in most of our matches at the Sacramento regional. So if you are looking for processing speed, don’t use LabVIEW.

If you believe you have found a bug with relays in LabVIEW, can you please be more specific? What workaround “fixed” the problem?

Each of the languages uses the same integrated tests to verify correct operation, and we have performance tests for each as well. LabVIEW is in fact a compiled language, like C++, but very different in that it is dataflow-based. Many teams used LabVIEW this year, and we don’t have widespread reports of lag, or of lag in relays. Details, please. Since this isn’t on topic, feel free to PM me.

Greg McKaskle

This is interesting, but it’s important to take note of why they’re suggesting it. If the members want to move over because they feel like they would be more comfortable in that environment/they have more experience in that environment, then this merits further thought.

And definitely look into off-season testing. We’re going through similar changes, so we’re looking forward to taking full advantage of this fall to make a smart, informed decision.

  • Sunny G.

There are two things that are REALLY slow in LabVIEW:

-VI calls through the normal “execution system” when they should be Subroutines or Inlined. The majority of the WPIlib suffers from this problem, unfortunately. The normal execution system is designed for heavily multitasking systems and systems with many concurrent processes. It treats each VI as an independent node in the execution system, unless it is flagged as Subroutine (or Inlined at compile time), which isn’t very efficient. When properly written, LabVIEW code is far easier to multitask without much overhead.

-Programmers writing code while thinking procedurally. Thinking functionally is much better in LabVIEW (and even better in Simulink and more hardcore dataflow and modeling languages). That said, I still firmly believe it’s better for programmers to know multiple languages and paradigms, as it will help you better choose the correct language and environment for a challenge.

As for C++ vs Java, non-JIT Java code will be slower. For completely personal reasons, I am a fan of purely procedural C programming for embedded systems.

I am also quite unimpressed by the timing jitter I got in LabVIEW. The RT threads are able to hold very tight timing, but at fairly significant CPU expense.

This is what the cockpits of fighter planes look like.

This is what the cockpit of a Cessna looks like.

Is WPILib meant to be a fighter or a Cessna or somewhere in between? In all three languages, it steers away from advanced features, and the APIs do lots of parameter checking and error reporting. WPILib is not the ultimate example of performance techniques for any of the languages.

… The majority of the WPIlib suffers from this problem, unfortunately. The normal execution system is designed for heavily multitasking systems and systems with many concurrent processes. It treats each VI as an independent node in the execution system, unless …

I’ll argue that WPILib doesn’t suffer from this. The goal was to make teams successful writing code to safely control an FRC robot. Much like the HW kit of parts, it isn’t a turnkey robot SW solution, but instead offers relatively safe and straightforward components that can be combined in many ways. It is shipped as source where possible and often with debugging in place in order to allow safe exploration even to the point of making optimizations and other modifications. Just because the Cessna doesn’t have the same cockpit as the fighter doesn’t mean it failed at its intended purpose.

As for the LabVIEW execution system, I’ll be happy to answer questions on how it operates. VI calls and Subroutines are closely related and both introduce overhead. In return, you gain code reuse and abstraction. I do not agree they are REALLY slow.

Just as a multitasking OS makes sense on a single core computer, so does a multitasking language. LV doesn’t just treat VIs as independent nodes, it treats loops and other code clumps containing asynchronous nodes as independent nodes – because they are. They are asynchronous. Lumping them together amortizes scheduling overhead but risks unnecessary blocking. This is the juggling act that the OS and the LV scheduler and clumping algorithm perform. All the languages support multitasking, but do so differently.

Summary?
All three languages are fast enough if used properly, and not nearly fast enough if used improperly. All are used in industry in performance demanding and time critical applications and are used by large numbers of FRC teams. There are many ways to choose the language(s) you want to learn. Performance shouldn’t be the sole reason.

Greg McKaskle

I’d like to share with you an email my boss sent me some time ago… This goes beyond robotics, into our professional world of application development, but I think it’s good to know why some people use C++. I started with BASIC and then learned 6502 assembly… then finally came around to C++. Here goes:

Programming language wars are about the same as religious wars… everyone believes that their chosen favorite(s) are the only ones that make sense, and I am certainly not going to wade into this argument, because at the end of the day any language that provably obeys the basic axioms of computation can technically perform the exact same operations.

The above said, a couple of observations.

  1. The world is built on top of C(++), and so while other languages might be good at expressing some things “better”, at the end of the day C and C++ seem to fill some magic middle ground between assembly and higher-level languages. Every major app you use, every OS, even the C# runtime and compilers are all written in C(++). The same is true of every video codec, every piece of firmware (your fridge, phone, car, planes, etc.), your web browser, and whatever UI control you are reading this in.

  2. As an illustration of the importance of C(++), every major change in hardware architecture first adopts and uses C-like languages and not others. Every GPU language (that has been adopted) is based on C, as is OpenCL, as are most HW description languages, etc…

  3. In the real world, if you are going to implement an API that anyone else can use, it needs to be C. This is still the uniformly accepted standard for libraries, and for good reason. Try using a C# library in C or Fortran. Now try a C library.

  4. A few years ago I decided that we needed to move our UI development to C# in the belief that some of the higher-level programming concepts would make development more robust by avoiding the issues that C(++) has with manual memory management, etc. In practice (and some of this might be down to the fact that C(++) programmers and C# programmers tend to be at different skill levels on average), we have found that our UI code has far more bugs, is less stable, and is harder to debug than our C(++) code.

  5. I am not an expert on the tools, but while there are good C# tools, C# suffers the disadvantage of having been around for only a fraction of the time C(++) has. Whenever I am working on C# code I simply cannot believe that there is no edit-and-continue support, etc. Likewise, you are not going to find the same long history of proven tools for any language other than C in terms of profiling (yes, some C# profilers exist, but they are quite frankly not at the same level), code verification, code documentation, distributed compilation, etc.

Ahem

Beating a dead horse that was dead long before FIRST Robotics came around, but throwing in my two cents anyway.

In FIRST Robotics, for the applications we develop, if you’re concerned with efficiency, then it really doesn’t matter at all whether you use LabVIEW, C++, or Java.

All three are more than sufficiently fast to handle any application a FIRST robot requires, so long as it’s written correctly.

So long as it’s written correctly being the key statement.

I recommend using whatever language your team has the most experience with, or your mentors are most familiar with, or in the worst case, just the one you feel like you want to learn. Hell, take a year, write that year’s robot code in all three, and pick one based on just your personal impressions if you want. But keep in mind: if it’s slow, if your CPU is capped and your logic isn’t getting executed on time, it’s very, very, very likely that you’re doing it wrong, rather than the language being to blame.

And if someone comes along and mentions something akin to ‘all the best teams on Einstein seem to be using C++’, don’t forget that many of the teams on Einstein are senior teams who used to HAVE to develop in C, and so already had that experience when it came time to pick between C++ and LabVIEW, or they have great mentors. Mentors born from the ’50s through the ’80s are much more likely to have C++ experience than Java or LabVIEW experience, and mentors born in the ’70s and ’80s are much more likely to have Java experience on top of C++, but still very little LabVIEW, just because it’s relatively young and has historically had very specific market exposure.

In my workplace we use C, C++, Ada, Java, LabVIEW, and many others, and each is slightly better or more convenient for certain things. But what’s most important in every application is that you use something your team is comfortable with, because a great many real-world efficiency problems happen when the developer who wrote the functionality didn’t realize it could have been made faster with a slightly different implementation; they are entirely the developer’s ‘fault’.

Here’s a little measuring stick I’ve used in the past… when I mentor other people in programming, they either get pointers or they don’t. This may be a quick way to know: if you feel you cannot live without pointers, then you probably want to use C++. (I do not feel that I can.) :wink:

Until this past year (I just graduated this year), I would have agreed with everyone who stated that any of the three should be fast enough.

This year, however, we ran into issues with our code’s performance. We were using Java. These issues primarily came from low performance in our IO calls.

Benchmarking some of the sensor-reading code, we found that it normally took about a millisecond to read a gyro. I’m not sure what it is for encoders, or to update a PWM output for a motor controller, but I found this to be quite long. Unfortunately, due to this slow read time, we were unable to do integration on all three gyros, which adversely affected our balancing code.
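For anyone who wants to reproduce this kind of measurement, here is a hedged sketch of the benchmarking approach; `readGyroAngle()` is a hypothetical stand-in for the real WPILibJ call (e.g. `Gyro.getAngle()`), so the numbers it produces on a desktop JVM say nothing about actual cRIO IO latency.

```java
public class IoBench {
    static double fakeAngle = 0.0;

    // Hypothetical stand-in for a sensor read such as Gyro.getAngle();
    // swap in the real WPILibJ call when profiling on the robot.
    static double readGyroAngle() {
        fakeAngle += 0.01;
        return fakeAngle;
    }

    // Times `iterations` calls and returns the average microseconds per call.
    static double averageMicrosPerCall(int iterations) {
        double sink = 0; // accumulate results so the JIT can't drop the loop
        long t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) sink += readGyroAngle();
        long elapsedNs = System.nanoTime() - t0;
        System.out.println("checksum=" + sink);
        return (elapsedNs / 1000.0) / iterations;
    }

    public static void main(String[] args) {
        System.out.printf("avg %.3f us/call%n", averageMicrosPerCall(100_000));
    }
}
```

Averaging over many calls smooths out scheduler jitter; a single timed call on a loaded system can easily be off by an order of magnitude.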

Although I haven’t measured for other sensors, I will have to comment that we had to drop our drivetrain control loop from our desired 200 Hz to 100 Hz (this affected our tuning noticeably). We had to do this for the arm as well. Our code was not written in an inefficient manner – if you think that I may be incorrect, then please check it out; it’s at http://code.google.com/p/frc-team-957-2012/.

I’ll put in another note that when I benchmarked CAN performance for Logomotion, I found that I could transmit about 180-200 operations per second (depending on the message), which is far below the theoretical 900+. Although I didn’t try sending multiple at once, which I now wish I would’ve tested, I believe that most of the time spent in a CANJaguar setX() call is actually spent in WPILibJ and/or OS code – not in waiting for transmission and reception of the CAN message.
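A sketch of how one might count operations per second as described above; `sendCanMessage()` is a hypothetical stand-in for a call like `CANJaguar.setX()`, so this only demonstrates the measurement technique, not real CAN bus timing.

```java
public class ThroughputBench {
    static volatile double lastValue;

    // Stand-in for the real call; on a robot this would block on the CAN bus.
    static void sendCanMessage(double v) {
        lastValue = v;
    }

    // Counts how many calls complete within `windowMillis` and
    // converts the count to operations per second.
    static double opsPerSecond(long windowMillis) {
        long deadline = System.nanoTime() + windowMillis * 1_000_000L;
        long count = 0;
        while (System.nanoTime() < deadline) {
            sendCanMessage(count * 0.001);
            count++;
        }
        return count * 1000.0 / windowMillis;
    }

    public static void main(String[] args) {
        System.out.printf("%.0f ops/sec%n", opsPerSecond(200));
    }
}
```

Measuring in a fixed window (rather than timing individual calls) captures whatever fixed per-call overhead the library and OS add, which is exactly the cost being discussed here.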

Note: we cannot say anything about the efficiency of C++ or LabVIEW, and therefore cannot claim that either of the two is any more or less efficient than Java. I am purely trying to point out that the statement “all three should be more than fast enough for FRC use” is false.

How do you define processing? Any language can do simple tasks like data transfer with simple logic instructions… but it’s another thing entirely when you really process the data in tight inner loops (e.g. 3D rendering, audio/video effects processing). With C++ you can use inline assembly and intrinsics to explicitly choose which assembly instructions to use. You can, for example, do prefetching to ensure the U and V pipes are always full in an inner loop. Back in the early 2000–2004 era, we also had to be responsible for scheduling instructions ourselves: avoiding penalties from failed branch predictions, keeping two divides from landing too close together by squeezing in lightweight instructions like ‘and’, ‘or’, or even adds, and making sure we cycled through each MMX/SSE register (it was crazy). Around 2005–2008, Visual Studio’s handling of intrinsics matured to the point of competitive instruction scheduling. Furthermore, you can do things like this: http://www.jaist.ac.jp/iscenter-new/mpc/altix/altixdata/opt/intel/vtune/doc/users_guide/mergedProjects/analyzer_ec/mergedProjects/reference_olh/mergedProjects/instructions/instruct32_hh/vc235.htm. Really now… when you speak of real processing power, consider what language they used to land the Mars Curiosity rover. :wink:

Think of it this way. We have a solid block of data, anywhere from 20 megs up to however large your storage is. You do not know what is in this block of data. You need to determine the data type, extract any metadata (date created, who created it, IP addresses, email, etc.), create an image of it if you can, and index all of the searchable text.

Now with Java you can get that crazy as well: disassemble the bytecode, examine your jumps, make your loops tighter, etc. I personally have not gone that far, but I assure you it is done all the time.

For vision processing, if you have the money, I would use an HDL front end for speed, then maybe C++ for the lower-level operations or, once again if you have the money, GLSL (the OpenGL shading language), Java for the higher-level processing, and finally for the GUI I would use PHP, Perl, or Bash. As you can tell, this setup would require a whole computer, so it’s not suitable for FRC, but it will work!

As far as the Mars rover goes, I have no idea what they used. Do you have a reference? I would like to read it!

Every language has its advantages and disadvantages; there is no need to discount one over another in a generalized discussion. What it really comes down to is the developer’s experience with the language, as well as the system(s) it is being developed for.

Personally, my opinion is that the language doesn’t matter. What does matter is the communication between the parties involved: having something that works well, something YOU can understand, something your COWORKERS can understand, and something that is optimized for your needs.

I will not get into personal gripes about each language. If you want to hear from my personal experiences with languages feel free to message me.

There are good sources referenced in this offsite discussion here

Moral of the story: mostly C code running on top of VxWorks. Not that that means C is better, but the source code was built on top of preexisting C firmware from earlier rovers.

This is a far different scenario from the one I had in mind (and quite honestly I’d conjecture that your mail processing system actually processes kB’s of metadata rather than GB’s of raw data in that amount of time unless it’s a JBOD attached to a very expensive Xeon processor…):

1.) Start a live stream of 1080p @ 30Hz
2.) Render said stream via OpenGL on a Java display. Why OpenGL? Rendering geometric overlays directly onto the video stream is much faster than Java’s layered SWT, AWT, or Swing implementations.
3.) Decide that you want overlays AND image processing (line detection, dithering, target color enhancement, etc) from the same screen on the same viewport (matches a very common requirement from a demanding customer… heh…)
4.) Decide that all of this processing and rendering only has 1 server assigned to it (matches a common requirement in my world)

Depending on the underlying libraries, this may all be single-threaded – i.e. that 16-core enterprise-grade processor still might not be able to keep up, since the work is serialized onto one core. If that’s the case (as it often is with FOSS libraries, for decoupling purposes), then the multiple passes through JNI alone in this scenario could add more than 200 ms of latency between the time a video frame hits the socket and the time the render hits the glass. One might say “oh, just re-write the libraries and JNI extensions yourself!”, yet the actual amount of time (aka $) spent verifying the optimisation works perfectly far surpasses the minimal gain.

That latency is added on top of the image-processing time – poor implementations can see 0.5 seconds of delay or more. The fastest I’ve ever seen (in optimised applications at work) is roughly 150 ms from video frame generation to rendering on the glass. 500 ms doesn’t seem like a big deal unless it’s what stands between a driver/pilot and a machine that costs 7 or 8 figures.

Again, the point here isn’t that Java is slow or fast – it’s that a poorly constructed application is slow and Java’s wrapping and hiding of the library implementations can compound that issue very easily.