We switched from LabVIEW to Java (a bad idea, in my opinion) since that is what is taught in school. In making the transition I wanted to be certain we were not exceeding our capacity, so I looked at the Driver Station Log Viewer and the logs from various runs, both in competition and out.
One glaring issue I have uncovered is that the cRIO CPU is pegged at 100% utilization regardless of how much code we have. Even a simple tank-drive-only program, with no other code, pegs the CPU at 100%.
Has anyone else noticed this? Is this normal for this JVM?
If this is true, then you have no useful way to actually measure your code's load, or to monitor when you are reaching the cRIO's processing limits from a Java environment perspective.
If this is true, then it needs to be addressed; would opening a tracker issue be the next logical step?
Why, in your opinion, was the switch a bad idea? Our team moved from LabVIEW to Java last season and has been nothing but happy with the switch. What problems have you run into other than the CPU running at 100%? Our CPU isn't pegged, or even close to pegged, with our competition code. Are you using the command-based, simple, or iterative robot template?
Put us down as a third team that does not peg the CPU with Java. I definitely agree that a SimpleRobot template with no sleeps seems like the most likely cause.
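That theory is easy to demonstrate off-robot. Below is a minimal plain-Java sketch (no WPILib; the class name and timings are mine, purely for illustration): the unthrottled loop spins as fast as the CPU allows and consumes a full core doing nothing useful, while a 20 ms sleep per pass (roughly the driver-station packet rate) drops it to about 50 iterations per second and lets the CPU idle in between.

```java
public class LoopRateDemo {
    // Runs a control-style loop for 'durationMs' and returns how many
    // iterations completed. The loop body stands in for reading joysticks
    // and setting motor outputs.
    static long runLoop(boolean throttled, long durationMs) throws InterruptedException {
        long iterations = 0;
        long end = System.nanoTime() + durationMs * 1_000_000L;
        while (System.nanoTime() < end) {
            iterations++;                      // stand-in for real work
            if (throttled) Thread.sleep(20);   // ~50 Hz, yields the CPU
        }
        return iterations;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("unthrottled iterations: " + runLoop(false, 200));
        System.out.println("throttled iterations:   " + runLoop(true, 200));
    }
}
```

The unthrottled count is typically several orders of magnitude larger, which is exactly the "pegged at 100%" symptom: the scheduler reports the thread as runnable essentially all the time.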
I won't get into the reasons why I think it's a bad idea here; that's another long, drawn-out philosophical discussion. Just look at it from my perspective: I work with code at a level where even a few nanoseconds of execution time matter.
Thanks for all the feedback. I'll have our teacher and Java educator, along with our lead programmer, take a deeper look at our code.
For comparison's sake, I think I'll build a new SimpleRobot project from scratch myself.
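For reference, a bare-bones SimpleRobot tank drive along those lines might look like the sketch below (the port numbers are placeholders, and this is a sketch against the cRIO-era WPILibJ API, not a drop-in file). The key detail is the `Timer.delay` at the bottom of the teleop loop; omitting it is what makes a "tank drive only" program spin flat-out.

```java
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.SimpleRobot;
import edu.wpi.first.wpilibj.Timer;

public class TankDriveRobot extends SimpleRobot {
    // PWM channels and joystick ports below are placeholders.
    private final RobotDrive drive = new RobotDrive(1, 2);
    private final Joystick leftStick = new Joystick(1);
    private final Joystick rightStick = new Joystick(2);

    public void operatorControl() {
        while (isOperatorControl() && isEnabled()) {
            drive.tankDrive(leftStick, rightStick);
            Timer.delay(0.02); // ~50 Hz; without this the loop pegs the cRIO CPU
        }
    }
}
```

New driver-station packets arrive at roughly 50 Hz, so looping faster than that just re-reads the same joystick values while burning CPU.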
I think you can code successfully in any of the three predominant languages used in FIRST; if not, all the powerhouse teams would be using the same language. It's a bad idea to switch in the middle (or at the beginning) of build season, though. Better to rewrite the code on last season's robot to find the pitfalls.
Another thing to watch out for is SmartDashboard traffic. We had a SmartDashboard.putData call in one of our controller classes (which runs iteratively in a separate thread), and it pegged us near 100%. Dropping that call from 50 times per second to zero brought us from near 100% CPU usage down to ~50%.
(Without looking at the code, I'm guessing that without a receiving dashboard app we were spamming handshakes?)
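A defensive pattern for cases like this is to gate the expensive dashboard call by wall-clock time, so the control loop can keep running at 50 Hz while the publish happens far less often. A sketch in plain Java (`PublishThrottle` is a hypothetical helper of mine, not a WPILib class):

```java
/** Hypothetical time-based gate: lets an expensive call through at most once per period. */
public class PublishThrottle {
    private final long periodNanos;
    private long lastPublish;

    public PublishThrottle(long periodMillis) {
        this.periodNanos = periodMillis * 1_000_000L;
        // Back-date the last publish so the very first check passes immediately.
        this.lastPublish = System.nanoTime() - periodNanos;
    }

    /** Returns true when at least one period has elapsed since the last allowed publish. */
    public boolean shouldPublish() {
        long now = System.nanoTime();
        if (now - lastPublish >= periodNanos) {
            lastPublish = now;
            return true;
        }
        return false;
    }
}
```

In the controller loop this would wrap the dashboard call, e.g. `if (throttle.shouldPublish()) SmartDashboard.putData(...)`, cutting the publish rate from once per iteration to, say, once per second without touching the loop rate itself.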
Why couldn't you instrument each thread in your code to record how long it takes to execute and whether or not it's being scheduled at the expected rate?
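That kind of instrumentation is straightforward to sketch in plain Java (the class and method names below are made up for illustration, not a WPILib API): each iteration records its own execution time and the period since the previous iteration, so you can compare the observed rate against the expected one.

```java
/** Illustrative per-thread loop instrumentation; names are hypothetical. */
public class LoopMonitor {
    private final long expectedPeriodNanos;
    private long lastStart = -1;
    private long worstExecNanos = 0;
    private long worstPeriodNanos = 0;
    private long iterations = 0;

    public LoopMonitor(long expectedPeriodMillis) {
        this.expectedPeriodNanos = expectedPeriodMillis * 1_000_000L;
    }

    /** Call at the top of the loop body; returns a timestamp to pass to endIteration. */
    public long beginIteration() {
        long now = System.nanoTime();
        if (lastStart >= 0) {
            // Period = time between consecutive iteration starts.
            worstPeriodNanos = Math.max(worstPeriodNanos, now - lastStart);
        }
        lastStart = now;
        iterations++;
        return now;
    }

    /** Call at the bottom of the loop body with the value from beginIteration. */
    public void endIteration(long startNanos) {
        worstExecNanos = Math.max(worstExecNanos, System.nanoTime() - startNanos);
    }

    /** True if the slowest observed period stayed within 'slack' times the expected period. */
    public boolean onSchedule(double slack) {
        return iterations < 2 || worstPeriodNanos <= (long) (expectedPeriodNanos * slack);
    }

    public long worstExecMillis()   { return worstExecNanos / 1_000_000L; }
    public long worstPeriodMillis() { return worstPeriodNanos / 1_000_000L; }
}
```

In a 20 ms loop you would bracket the work with `beginIteration()`/`endIteration()` and periodically log the worst-case numbers; a worst period far above 20 ms means the thread is being starved, while a worst execution time near the period means the loop body itself is the problem.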
When using threads/parallel processing, it is easier to goof up in Java than in LabVIEW. Our robot has ~20 threads running, 5 PID loops, and vision, and it never goes over 85%. If you post your code, we can help you with it. One of the biggest problems is that sending errors to the DS can waste huge amounts of CPU, so you should check the diagnostics tab of the DS. I've actually found that Java is really efficient, and the performance/speed difference between C++, LabVIEW, and Java isn't noticeable at the FRC level.