Quote:
|
An instruction cycle here or there makes zero difference in the face of the larger picture. If you want to have a truly useless argument, just remember that our new processor is pipelined AND has a double dispatch with multiple ALU elements.
|
Right. We now have CPU cycles and memory space to burn, so a cycle here and a cycle there does not make that much difference.
Quote:
|
So while you are all optimizing the instructions, I'll be working on what I want the code to do.
|
I get calls on a regular basis to "make it run faster." I always ask "Does it run correctly now?" since there is no reason to optimize a program that does not work.
Quote:
|
As a point of interest, I know someone who wrote his own graphical programming language that compiles to something that is on average twice as fast as C for embedded power and motion control applications.
|
There are two optimizations at work here. The first is that the custom language is focused on solving motion control problems, while C is a general-purpose language, an electronic "hammer" so to speak.
The second is that he most likely has a set of libraries for the down-and-dirty interfaces, and he optimized those to work with his target platform.
Speed in Java over the last few years has come from:
- Improved VM processing - the actual runtime environment of the system
- Improved "code generation" by the compiler
- Improved library functions. By far the biggest speed pickups have been in the optimization of the data structures in the system. Sorted lists are about 150% faster due to algorithm improvements (no longer the bubble sort), which then reflects back into the application; see the sketch after this list.
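To put a rough number on that last point, here is a toy sketch of my own (not the actual JDK change history; the class name and the 20,000-element test size are arbitrary) comparing a hand-rolled bubble sort against the library sort. The only difference that matters is the algorithm swap from O(n^2) to O(n log n), not any instruction-level tuning of either loop.
Code:
|
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Toy illustration: the win comes from the O(n^2) -> O(n log n)
// algorithm swap, not from micro-tuning either loop.
public class SortComparison {

    // Hand-rolled bubble sort: roughly n^2/2 comparisons and swaps.
    static void bubbleSort(List<Integer> list) {
        for (int i = list.size() - 1; i > 0; i--) {
            for (int j = 0; j < i; j++) {
                if (list.get(j) > list.get(j + 1)) {
                    Collections.swap(list, j, j + 1);
                }
            }
        }
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        List<Integer> a = new ArrayList<>();
        for (int i = 0; i < 20_000; i++) {
            a.add(rng.nextInt());
        }
        List<Integer> b = new ArrayList<>(a);   // same data for both sorts

        long t0 = System.nanoTime();
        bubbleSort(a);                          // O(n^2)
        long t1 = System.nanoTime();
        Collections.sort(b);                    // library sort, O(n log n)
        long t2 = System.nanoTime();

        System.out.printf("bubble sort:  %d ms%n", (t1 - t0) / 1_000_000);
        System.out.printf("library sort: %d ms%n", (t2 - t1) / 1_000_000);
    }
}
|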
You saw the same thing with C and the IFI controller. You could roll your own code for all of those functions, but most of us took Kevin Watson's or the WPI library. Those key functions had been optimized first for "they work," then for "make them fast."
If you optimize the algorithm, you seldom have to bit-fiddle to make it faster.
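Here is a minimal sketch of the same idea at the lookup level (the class name and array size are mine, purely for illustration): no amount of bit-fiddling on a linear scan catches up with simply switching to binary search once the data is already sorted.
Code:
|
import java.util.Arrays;

// One question, two algorithms: the better algorithm does in ~20
// comparisons what the tightest possible linear scan does in ~1,000,000.
public class LookupDemo {

    // Linear scan: O(n) comparisons per lookup, however tightly you code it.
    static boolean linearContains(int[] sorted, int key) {
        for (int value : sorted) {
            if (value == key) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] sorted = new int[1_000_000];
        for (int i = 0; i < sorted.length; i++) {
            sorted[i] = i * 2;              // even numbers, already sorted
        }

        System.out.println(linearContains(sorted, 999_998));           // O(n)
        System.out.println(Arrays.binarySearch(sorted, 999_998) >= 0); // O(log n)
    }
}
|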