Wow, thanks for this! I haven’t seen anything like this before, good stuff. One question: you had mentioned in the intro that you would discuss version control, but I can’t find anything in the document discussing this. This is something that would be especially useful for me, think you could add some tips for it?
Threading. The ability to run each sub-system in its own thread gives you two things. First, independent loop cycle timing: higher-priority things can update at the same rate as the Victors/Jaguars, while lesser things (flashing lights or Dashboard comm) can run slower. Second, when running multiple threads, you can pick up and scale user input data in Teleop (e.g. take the distance slider and scale it to feet) and pass it to the parallel thread using either an RT FIFO or global variables.
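LabVIEW is graphical, so there's no direct text equivalent, but the same hand-off pattern can be sketched in Python: a bounded queue stands in for a single-element RT FIFO, and the names and scaling factor here are made up for illustration, not WPI library calls.

```python
import queue
import threading

# Queue standing in for a single-element RT FIFO: Teleop produces
# scaled values, the parallel subsystem thread consumes them.
distance_fifo = queue.Queue(maxsize=1)

def scale_slider_to_feet(raw):
    # Hypothetical scaling: slider 0.0-1.0 mapped onto 0-15 feet.
    return raw * 15.0

def teleop_send(raw):
    # Teleop picks up the slider value, scales it, and passes it along.
    feet = scale_slider_to_feet(raw)
    if distance_fifo.full():          # keep only the freshest value,
        distance_fifo.get_nowait()    # like a lossy single-element FIFO
    distance_fifo.put(feet)

received = []

def subsystem_loop(n):
    # Parallel thread: blocks until Teleop hands it a value.
    for _ in range(n):
        received.append(distance_fifo.get())

t = threading.Thread(target=subsystem_loop, args=(1,))
t.start()
teleop_send(0.5)
t.join()
print(received)   # [7.5]
```

The lossy single-element queue mimics the usual robot-code choice: the consumer always acts on the newest driver input rather than a backlog of stale ones.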
OpenG has a rising/falling edge trigger, among a GIGANTIC list of helpful VIs.
State-machines. These are HUGE. You can describe the process of creating an enum control and using it to select states in a case structure. Very useful for things like kicker sequencing or multi-position arms.
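For reference, the enum-plus-case-structure pattern translates naturally to text code. Here's a Python sketch of a kicker sequence (LabVIEW's enum control maps to an Enum, the case structure to the if/elif chain; the state names and transitions are illustrative, not from any team's actual code):

```python
from enum import Enum, auto

class KickerState(Enum):
    IDLE = auto()
    FIRING = auto()
    RETRACTING = auto()

def kicker_step(state, kick_requested):
    # One pass through the "case structure": each call is one loop
    # iteration, and the enum value selects which case runs.
    if state is KickerState.IDLE:
        return KickerState.FIRING if kick_requested else KickerState.IDLE
    elif state is KickerState.FIRING:
        return KickerState.RETRACTING   # fire the solenoid, then retract
    else:  # RETRACTING
        return KickerState.IDLE         # re-armed, ready for the next kick

# Walk the kicker through one full sequence, one "loop iteration" per call.
s = KickerState.IDLE
s = kicker_step(s, kick_requested=True)    # -> FIRING
s = kicker_step(s, kick_requested=False)   # -> RETRACTING
s = kicker_step(s, kick_requested=False)   # -> IDLE
print(s)   # KickerState.IDLE
```

The key idea carries over directly: the loop runs every cycle, but the machine only advances when its current case's condition is met.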
Anyway, version control in LabVIEW is hard to do. What I do, at every competition or before it, is make a copy of the code and save it under a different name. I also build a new executable and label it for that competition. For example, I have "build season code.exe" as the downloadable code for the robot. Then I make a new exe called "competition one.exe". Whenever I make changes to the code, I build into the new .exe file. That way, if the code doesn't work, I can switch back to "build season code.exe". I also made a text display on the DS that shows which code is running. Use the other papers attached to modify the DS screen and other things. This is more of a way to handle version control manually; I don't know of a way to have LabVIEW do this automatically other than using multiple executables.
Hope that clears things up.
Threading is interesting; I've never used that before. I might try it later. I will add state machines in another revision when I have more time and some code ready.
You would create a VI for each subsystem (not necessarily each thread)
Then you would call the VI from under the main loop (where Vision and Periodic Tasks are)
Then, in each VI, you would add one or more While loops (for each thread)
Each While loop will run in parallel.
You can then use Wait Ms to time the loop; somewhere between 20 ms and 50 ms is typical (remember: the main loop runs at around 50 ms-ish and has no exact means of timing).
If you do not include Wait Ms, the new thread will consume all of the remaining CPU power, possibly causing network lag, and make it impossible to determine actual CPU load. Include Wait Ms or another source of delay (e.g. a wait-for-occurrence from the comm packet, or a camera wait-for-image) in every loop.
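The steps above can be sketched in text form. This Python analogue stands in for one While loop inside a subsystem VI, with `time.sleep` playing the role of Wait Ms (the 20 ms period is taken from the discussion above; the variable names are mine):

```python
import threading
import time

count = {"iterations": 0}
stop = threading.Event()

def subsystem_loop(period_ms=20):
    # One While loop inside a subsystem VI: do the work, then hand the
    # CPU back with a wait -- the text equivalent of dropping Wait Ms
    # into the loop. Without the sleep, this would spin at 100% CPU.
    while not stop.is_set():
        count["iterations"] += 1          # subsystem work goes here
        time.sleep(period_ms / 1000.0)    # the Wait Ms equivalent

t = threading.Thread(target=subsystem_loop)
t.start()
time.sleep(0.1)   # let it run for roughly five 20 ms cycles
stop.set()
t.join()
print(count["iterations"] >= 1)   # True
```

Several of these loops started side by side run in parallel, each with its own period, which is exactly the independent-timing benefit described earlier.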
If you want to reach a larger audience, get rid of the DOCX files. Replace them with DOC, or better yet, PDF.
Threading and state machines (mentioned in an earlier post by apalrd) are two areas which seem to cause a lot of trouble for new FRC LabVIEW programmers. Adding a concise explanation of these would add much value to the paper.
I’ve never understood the difficulty understanding state machines. Perhaps it is conceptualizing the fact that the loop runs constantly but only increments your state machine when a condition is met?
I’ve always understood that you COULD run parallel threads, but I have a question regarding that.
We put a timer in front of our teleop code that passes its time value into each loop. We use that to run any time-sensitive loops. I have yet to find anything in FRC that requires anything faster than the (approximately) 50 ms loop. Can you tell me what types of things you might want to run faster than that?
I can tell you something that wants to run slower than that. A kicker for example. Say the kicker needs to wait 1000ms to re-arm before it will accept another kick command. You could code the kicker as a state machine and run it in teleop, or you could run the kicker in a separate thread and just block waiting for 1000ms. But what you can’t do is block waiting for 1000ms in teleop. It will screw up the rest of the teleop code.
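To make the contrast concrete, here's the separate-thread version of that kicker sketched in Python (the 1000 ms re-arm time is from the example above, shortened here so the demo runs quickly; the semaphore standing in for the kick command channel is my own choice):

```python
import threading
import time

kick_requests = threading.Semaphore(0)
kicks_fired = []

def kicker_loop(n_kicks, rearm_s=0.05):   # would be 1.0 s on a real robot
    # Dedicated kicker thread: blocking waits are fine here, because
    # only the kicker stalls while it re-arms -- teleop keeps cycling.
    for _ in range(n_kicks):
        kick_requests.acquire()              # block until a kick is commanded
        kicks_fired.append(time.monotonic()) # fire the kicker here
        time.sleep(rearm_s)                  # blocking re-arm delay

t = threading.Thread(target=kicker_loop, args=(2,))
t.start()
kick_requests.release()   # teleop: "kick now" -- this call returns instantly
kick_requests.release()   # second request just queues while the kicker re-arms
t.join()
print(len(kicks_fired))   # 2
```

Note that teleop's side of this (the `release` calls) never blocks, which is the whole point: the 1000 ms wait lives entirely in the kicker's thread.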
Or you could use threads to partition your code. Each thread operates a sub-system, allowing a single VI to be written that handles the acquisition of refs from Begin, the code loop, and cleanup at the end (although it actually never gets there) for one subsystem. More complex subsystems contain clearly labeled SubVIs called from this VI, and all WPI library calls are visible from the main VI.
If you wanted to process images asynchronously from the code that uses the target data, you could put the call to process the image and a loop that uses the image data in the same VI, running as two parallel threads under one sub-system VI. If you needed to pass references to both threads at the beginning, they would both be there.
Plus, isolating the robot functions from Teleop allows you to use them much more easily in Autonomous.
I definitely understand the image processing requirement of being outside the teleop and running as fast as possible.
From our standpoint though, nothing mechanically really needs to run that fast. I suppose encoder sampling, if you're attempting traction control, would be another good reason (what's the maximum sample rate of the digital I/O on the cRIO?).
As for teleop and auton, we attack that differently. We wrote a SubVI to handle our kicker and put a copy in autonomous and a copy in teleop. Since they never execute simultaneously, you don't have to worry about re-entrancy.
I can see the reasons for doing it for a cleaner code look; however, from the readability side, teaching my newer students is much easier if I can point at the teleop loop and say "this is where everything happens during teleop".
As long as you approach the code and create a SubVI for each individual task, I suppose it's pretty easy to code either way.
What I'd really like to get the kids to do this year is build a decent fault/warning reporting system that tells you the state of each variable and what it's waiting on to progress to the next state, much in the way the automation in our plant is programmed.
It’s not that state machines are difficult to understand, it’s just that many new programmers (new to robotics) don’t realize they need to use them (or concurrent processing). They program the robot like they would program a non-realtime app on a PC. This is understandable if they’ve never been taught the basic concepts of realtime programming.
Teleop runs once every time a DS packet is received, and a DS packet is sent every 20ms (not 50ms). Any functionality (like a kicker with delays) which can't complete well under 20ms must be written as a state machine if it's going to run in the same thread as teleop. Or it can run in a separate thread, using blocking waits for delays. Many hours of confusion and frustration could be avoided if a way could be found to ensure that new programmers are taught these simple concepts.
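The teleop-safe version of that kicker delay checks elapsed time instead of blocking. A Python sketch under the same assumptions (1000 ms re-arm, one call per DS packet; state names and timestamps are illustrative):

```python
import time

state = "IDLE"
rearm_started = 0.0
REARM_S = 1.0   # the 1000 ms re-arm time from the kicker example

def kicker_teleop_step(kick_requested, now=None):
    # Called once per DS packet (every 20 ms). It never blocks: it just
    # checks the clock and returns, so the rest of teleop still runs.
    global state, rearm_started
    now = time.monotonic() if now is None else now
    if state == "REARMING" and now - rearm_started >= REARM_S:
        state = "IDLE"                # re-arm delay has elapsed
    if state == "IDLE" and kick_requested:
        state = "REARMING"            # fire the kicker here
        rearm_started = now
        return True                   # a kick happened this cycle
    return False                      # still re-arming, or no request

# Simulated packets: kick at t=0 fires, t=0.5 is ignored (re-arming),
# t=1.2 fires again because more than 1.0 s has elapsed.
results = [kicker_teleop_step(True, now=0.0),
           kicker_teleop_step(True, now=0.5),
           kicker_teleop_step(True, now=1.2)]
print(results)   # [True, False, True]
```

This is the same state-machine idea from earlier in the thread, with the delay expressed as a timestamp comparison so each call returns in microseconds, far under the 20 ms packet budget.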