As Team 341 continues our beta test of the 2012 FRC Control System (both the Java programming environment and the new hardware), I thought it would be a good time to explain some of the new features we have been testing. This post will focus exclusively on the software side - and most of the Java changes apply to C++ as well.
Aside from support for the new hardware (4-slot cRIO II, Microsoft Kinect, and maybe a couple other things), the 2012 Java environment offers some new helpful utilities, a new SmartDashboard application, and an entirely new style of programming (CommandBasedRobot) that supplements the existing SimpleRobot and IterativeRobot styles.
In many cases, the extensions to the Java/C++ programming options are things that some enterprising teams have been doing for a few years now. For example, if you look at the code posted on ChiefDelphi by teams 125 and 254 in the past couple seasons, you will find a lot of similarities.
Here are some of the new features that we are currently testing:
A built-in “Preferences” class to load and save important values from/to the cRIO’s non-volatile memory. Imagine things like PID constants, autonomous mode values, etc. Even cooler, the Preferences class has a network interface to the new SmartDashboard - you can tweak your values on your Classmate, send them over, and have them stored onboard!
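For instance, loading and persisting a PID constant might look something like the sketch below (the key name, default value, and class are placeholders for illustration, not our actual code):

import edu.wpi.first.wpilibj.Preferences;

public class ArmConstants {
    public static double loadP() {
        // Read "armP" from the cRIO's stored preferences, falling back
        // to a default if the key has never been saved.
        return Preferences.getInstance().getDouble("armP", 2.0);
    }

    public static void storeP(double kP) {
        Preferences prefs = Preferences.getInstance();
        prefs.putDouble("armP", kP);
        prefs.save(); // persist to the cRIO's non-volatile memory
    }
}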
As part of the new CommandBasedRobot template, the idea of “Subsystems” has been created. A subsystem class is a single functional part of your robot - such as a driveline, an arm, a gripper, etc. For example, a gripper subsystem might have methods to Grip and Release game pieces. Subsystems help you logically group together the sensors and actuators that work together, and help you to define your robot code while the mechanical team is still putting it together.
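As a rough sketch (the class name and PWM channel below are made up for illustration), a gripper subsystem might look like this:

import edu.wpi.first.wpilibj.Victor;
import edu.wpi.first.wpilibj.command.Subsystem;

public class Gripper extends Subsystem {
    // PWM channel 3 is a placeholder - use whatever your wiring dictates.
    private final Victor roller = new Victor(3);

    public void grip()    { roller.set(1.0); }  // pull a game piece in
    public void release() { roller.set(-1.0); } // push it back out
    public void stop()    { roller.set(0.0); }

    protected void initDefaultCommand() {
        // This subsystem does nothing unless a Command tells it to.
    }
}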
Along with Subsystems, the 2012 version introduces Commands. Commands are discrete pieces of code that manipulate the Subsystems of your robot. For example, a “GrabATube” Command might tell a roller claw to run until a tube is detected by some sort of sensor on the claw. Commands have specific methods that get called when they start, as they execute, and when they finish (allowing you a lot of flexibility in what can be put into a Command). Commands can be scheduled in sequence or in parallel - you can imagine that button presses may create new Commands, just as your autonomous mode would be a sequence of Commands. Commands have a method of “requiring” specific Subsystems - this makes sure that you never have multiple commands telling your drive to do different things, for example (the second command which requires the drive would cancel the first).
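To make that concrete, here is roughly what a GrabATube Command might look like, assuming a Gripper subsystem like the sketch above, a hypothetical hasTube() sensor method, and the template's convention of holding static subsystem instances in a CommandBase class:

import edu.wpi.first.wpilibj.command.Command;

public class GrabATube extends Command {
    public GrabATube() {
        // Reserve the gripper: any other Command requiring it gets cancelled.
        requires(CommandBase.gripper);
    }

    protected void initialize() { }

    protected void execute() {
        CommandBase.gripper.grip(); // run the rollers inward
    }

    protected boolean isFinished() {
        return CommandBase.gripper.hasTube(); // hypothetical sensor check
    }

    protected void end() {
        CommandBase.gripper.stop();
    }

    protected void interrupted() {
        end(); // another Command took the gripper; stop the rollers
    }
}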
There are new ways to interface with your Classmate and OI. A new OI class captures all of your Joystick, Kinect, and Cypress board interfacing in one place. New Button classes are used to asynchronously map button presses to Commands (when you create the Button, you associate it with a Command - no need to do anything else!)
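A sketch of what that mapping might look like (the joystick port, button number, and the GrabATube Command are placeholders):

import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.buttons.JoystickButton;

public class OI {
    private final Joystick stick = new Joystick(1);

    public OI() {
        // Pressing button 1 schedules a GrabATube Command. The scheduler
        // takes it from there - no polling loop needed in your code.
        new JoystickButton(stick, 1).whenPressed(new GrabATube());
    }
}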
The new SmartDashboard lets you tweak Preferences on the fly, as I previously mentioned. It also has a couple of other nifty features. Buttons that you define have a virtual interface in the SmartDashboard - simply point and click, and you can test that a piece of code is working. It also displays the currently active Commands for your Subsystems, helping you debug (did one of your Commands forget to finish?). Lastly, there is a new function to help you choose your autonomous mode directly from the dashboard - something that many teams have had to figure out how to do on their own for years.
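One way WPILib exposes the autonomous mode selection is the SendableChooser class; a minimal sketch (the two Command names are hypothetical routines) might look like:

import edu.wpi.first.wpilibj.command.Command;
import edu.wpi.first.wpilibj.smartdashboard.SendableChooser;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class AutoSelector {
    private final SendableChooser chooser = new SendableChooser();

    public AutoSelector() {
        // These Commands are placeholder autonomous routines.
        chooser.addDefault("Drive Forward", new DriveForward());
        chooser.addObject("Shoot Then Drive", new ShootThenDrive());
        SmartDashboard.putData("Autonomous Mode", chooser);
    }

    // Call this at the start of autonomous and start() the result.
    public Command getSelected() {
        return (Command) chooser.getSelected();
    }
}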
You now have the option to do vision processing on your driver station laptop in addition to, or instead of, doing it on the cRIO! Your laptop (whether a Classmate or something else) definitely has more power available to it than the cRIO, so this gives you an option to move all of that processor-heavy image processing code offboard.
It is important to note that although there is a ton of new stuff in 2012, all of the 2011 templates and classes work in exactly the same way (for example, our 2011 robot code worked without modification using the 2012 libraries).
Per the beta test agreement, I am happy to answer any questions on these new features, but cannot provide code or documentation until after the beta period.
Wow, those are a lot of really nice improvements. It should elevate a lot of teams.
What is used to do image processing on the driver station? How is that programmed? Do you have to use LabVIEW, or are there sample Java (or C++) projects for it?
Can you post a copy of your test code that uses these new features?
Just to clarify “on the fly”: how easily are the preferences changed? Truly on the fly, or is it something that needs to be changed and then the robot reset? Do you have to refresh the class to use Preferences?
I cannot (yet), due to the beta test non-disclosure agreement. This is primarily to make sure that things don’t get confusing as multiple versions of the 2012 code are published (remember, things are still a work in progress). I will share my code as soon as I am allowed.
By default, the Preferences are loaded from a file when the cRIO boots. However, you can set up your own code to read values from the SmartDashboard at any time. You would then tell Preferences to “save” the values if you are happy with them.
Essentially it just makes things much easier for rookies, but I am going to recommend staying away from it (along with the drive code that is provided). It gets even more high-level than the library already is. I honestly think there are great benefits to writing the functions and classes yourself.
Also, stay away from image processing on the laptop… there will be significant delays. Assume each RGB pixel takes 24 bits and the image is 640x480. That works out to:
640 × 480 = 307,200 pixels
307,200 pixels × 24 bits = 7,372,800 bits = 921,600 bytes per frame
That data has to travel from the camera to the cRIO and then to the laptop. The big-endian data also needs to be converted to little-endian on the laptop end. Assume the CPU cache is 512 KB; the CPU is already preoccupied by system processes and other work, so assume only 24 KB is available for image processing. The image has to be retrieved from RAM and stuffed into the cache, but the whole image cannot fit, so the CPU must go back to RAM multiple times to retrieve data, send the results to the cRIO, and have the cRIO use them. Keep in mind that x86 CPUs have 8 general-purpose registers, and it takes about 1-3 cycles to retrieve data from the cache, so only about 1 pixel can be handled at a time by the CPU. The PowerPC architecture, however, has 32 general-purpose registers, which can effectively quadruple the number of pixels the CPU can handle at any given time. Now, I am aware my argument is flawed because I do not take the clock speed of the cRIO into account, but I would bet on the cRIO any day over sending data out over WiFi and back.
The webcam is hooked up to a switch on the robot, so the driver station can do an HTTP GET of an image from the webcam without the cRIO in the middle. A good driver station laptop can grab an image, process it, and send the results back to the cRIO before the cRIO could have processed the image itself.
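As an illustration, grabbing a single frame on the laptop might look like the sketch below (the IP address and snapshot URL are assumptions based on the Axis camera's usual HTTP interface, not something specific to the new control system):

import java.awt.image.BufferedImage;
import java.io.InputStream;
import java.net.URL;
import javax.imageio.ImageIO;

public class CameraGrabber {
    public static BufferedImage grabFrame() throws Exception {
        // Axis cameras serve single JPEG snapshots over plain HTTP;
        // the address below is a placeholder for your camera's IP.
        URL snapshot = new URL("http://10.3.41.11/jpg/image.jpg");
        InputStream in = snapshot.openStream();
        try {
            return ImageIO.read(in); // decode the JPEG into a BufferedImage
        } finally {
            in.close();
        }
    }
}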
I assume based on your statements that this means that you are now allowed to use an application like RoboRealm to process the images on the PC and send back any results? If so, we will check with FIRST about making this available to team members.
Does Java now support the top part of the dashboard? Last year, we needed to manually create a dashboard class so that values on the dashboard would be updated, as happens natively in LabVIEW.
I am not familiar with “RoboRealm”. However, in concept, I believe that you would be able to pull this off. Even in past seasons you were able to transmit raw images back to the Driver Station for viewing by the drivers; wrapping your image processing code in a custom Dashboard is totally doable. This year, an implementation of OpenCV for the Driver Station side will be supplied to accomplish exactly this.
No, the “LabVIEW” Dashboard would still require some manual effort to get working with Java. However, the C++ and Java versions of WPILib are moving more and more toward a new version of the SmartDashboard that is natively supported and highly capable (multiple visualization “widgets”, as well as “choosers” to select autonomous modes and set parameters used by your code).
Just to follow up on a point regarding transmitting video back from the bot to the driver station:
Watch out for the real-world delay caused by transmitting video across a wireless network. You can certainly get 30 fps over a wireless link, but the issue is that any image you receive and process can be delayed by a second or more from what is actually happening on the bot. While this does not sound like much of a delay, if you are targeting something visually (like the hoops), the robot will tend to oscillate quite a bit when this delay is significant. The oscillation is caused by the closed-loop delay of the video: by the time you process the delayed image, the bot will have moved beyond what you are processing, so the corrective commands you send back will cause the bot to overshoot and then overcompensate.
The best way to remove this is to reduce the frame size so that the IP camera does not take as long to compress the image and the wireless transmission is faster and closer to real time… or slow the adjustments of the bot way down (i.e. make slower movements), or put a laptop/netbook on the bot itself, which removes the transmission delay entirely.
This is a common problem with wireless transmissions, and it is only made worse by more people/bots using the network, so look out for it at competitions when more bots are streaming video back to their control stations.
Lastly (hopefully this is not viewed as spam), RoboRealm is now donating copies to teams that want to use it.