I was just wondering if it is possible to create and deploy user created code to be linked dynamically at runtime? In other words, is it possible to deploy the RoboRIO equivalent of a win32 .dll file to the RoboRIO?
Certainly - in Linux they have a different name structure and are stored in different default locations but the intent is much the same.
Let’s say I have a library called “keith”. The actual file name (or link) would be libkeith.so for a shared library or libkeith.a for a static library. The objects are compiled with extra flags and the linking step looks different. Just google “Linux shared libraries howto” and there are hundreds of good examples on the web.
If you have trouble, post again or PM me.
HTH
Thank you for the help! So do I just scp the library over to the RIO?
Sure, remember to put it where it can be found. Normally that is in /lib or /usr/lib or /usr/local/lib (for anyone to use) - it is configurable. During development you can use LD_LIBRARY_PATH and put the library anywhere (in the CWD for example).
HTH
Although it may be possible (and it really isn’t that hard), I’d highly recommend just statically linking everything. If you dynamically link, you run the risk of running with inconsistent versions of libraries, or having to maintain binary compatibility between libraries. It ends up being easier to just deploy statically linked binaries if RAM and HD aren’t an issue, which they aren’t. We deploy static binaries both at work and for robotics, and it has saved us many times.
Out of curiosity, why would you want it to be a dynamic link library as opposed to say a static library?
The big difference there is when you have global/static variables that need to be shared across multiple child DLLs that depend on the same DLL. Other than that, static seemed easy to use to keep coding groups encapsulated, just as in a win32 environment.
There is one other technique that seems to work just as well: keep everything in one project, but give each group of files its own folder within the project. This one seems to be the easiest to build, as I don’t have to worry about certain projects being forced to rebuild the way I do with the other approach. This method also avoids having multiple instances of singletons (which can occur with static libraries).
This is very good advice. I got the feeling the OP was just in learning mode.
The great advantage of shared libraries is that there is only one copy in physical memory, hard to see that as an issue in FRC since there is only one user application running in user space (besides the system and NI daemons).
An example of that (i.e., an issue in FRC where multiple instances occur due to a static-library setup) would be if someone wishes to use SmartDashboard. I had SmartDashboard in its own static library in my win32 environment, with multiple DLLs in use. NetworkTables must have only one instance to work properly, so I had to make a DLL version of it to work correctly… luckily this was all in the win32 environment, so it was easy to do.
For the Eclipse environment, if something similar needed to be done (which I couldn’t see happening, but it could if all vision processing were done on the roboRIO), the static-library solution may not work… in which case the one-project, multi-folder technique may give the best results. So far I’ve been leaning toward that as a workflow. I’m going to thank Jeff Downs for that suggestion, from his workflow tips for the old Eclipse cRIO solution. It gives the benefits of shared libraries while allowing code to stay separate… build time is still very fast too.
Yep, I am just experimenting with stuff in my free time, seeing what I can do. The purpose of this was to see if I could dynamically load compiled C++ code as a DLL so that I can tweak values without having to kill the running program. I have the main FRCUserProgram running the code that handles all the hardware-related aspects. That code owns all the necessary memory, which is then passed into the DLL every iteration. If the DLL changes, the same memory is pushed into the new DLL, which operates on it with the change I intended. I think this is called live loop editing.
We actually achieve the same thing through a different mechanism.
Our robot code is split up into a bunch of processes (each is a separate binary). There is one process which has WPILib built into it and interfaces to the hardware. The other processes have our control loops, joystick code, autonomous mode, etc. They all communicate via a shared memory mechanism we designed. The really cool part is that this lets us restart the various modules and replace them without taking the rest of the code down or restarting it. We’ve been able to do things like deploy new joystick code while the driver was driving.
Our code is structured in much the same way but is multi-threaded and uses pipes for communications (mostly because we ported from VxWorks message queues).
How did you implement the shared memory? We noticed that POSIX message queues are not in the kernel. Is the POSIX shared memory API supported? Or maybe SystemV style shared memory? Or did you do something custom?
TIA
Essentially SystemV shared memory, though we just use the mmap call directly to map a file from /dev/shm/ in. We then built up message queues with mutexes, condition variables, etc in that memory and use those. That lets you use priority inversion safe mutexes, which is good.
Just putting this out there: thank you all for your help. I now have dynamic loading of code working successfully and am able to make changes to the program while the robot is active.