LabVIEW / Simulation

Does anyone have experience using LabVIEW to create a robot project but running it as a simulation? In other words, NO robot. The code runs in LabVIEW and you can ‘connect’ it to the DS.

What level of simulation are you talking about? I’ve used a bit of test code to emulate a robot from the network communication standpoint to test the driver station.

Or are you talking about a cRIO emulator that will run the same code compiled for the cRIO?

Or are you talking about a simulator that you recompile your code for that runs a physics simulation of a model of your robot?

What are you trying to accomplish?

-Joe

I’ve made a simple simulator for the vision system, so that when I run “vision processing” on my computer, it uses my webcam.
Would this help?
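
If it helps, the gist in text form looks something like this (Python/OpenCV here rather than LabVIEW, and process_frame is just a stand-in for real vision code):

    # Sketch: swap the robot camera for a local webcam when simulating.
    # Assumes OpenCV 4.x; process_frame() is a placeholder for the real
    # vision processing.
    import cv2

    USE_WEBCAM = True  # False on the real robot

    def process_frame(frame):
        # placeholder processing: count bright blobs
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return len(contours)

    # index 0 is the laptop webcam; the robot-camera URL would go in the
    # else branch
    source = cv2.VideoCapture(0) if USE_WEBCAM else cv2.VideoCapture("<robot camera URL>")
    for _ in range(100):            # grab a few frames and report
        ok, frame = source.read()
        if not ok:
            break
        print("targets seen:", process_frame(frame))
    source.release()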

If you’re talking about a physics simulator, I can tell you why I haven’t done it:

  1. It would take more work to make an accurate one than actually testing it on the robot.
  2. You’ll only get the results you expect. If you program a simulator, you’re going to program it how you think it will act, leaving out all those quirks that you don’t yet know about. This is, in effect, trying to get good functional code without ever testing.

If you just want this to make sure your code works how you expect, then that’s pretty easy. You can accomplish that with some conditional disable structures, disabling the code that interacts with the FPGA.
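
In text form, that trick looks something like this (Python sketch; in LabVIEW the two cases would live in a conditional disable structure, and the class names here are made up):

    # Sketch of disabling hardware-facing code for desktop testing.
    # RealDrive would wrap the FPGA/motor-controller calls; FakeDrive
    # just records what was commanded.
    ON_ROBOT = False  # in LabVIEW this would be a conditional disable symbol

    class RealDrive:
        def set_motors(self, left, right):
            raise RuntimeError("only callable on the robot")  # would talk to the FPGA

    class FakeDrive:
        def __init__(self):
            self.last = (0.0, 0.0)
        def set_motors(self, left, right):
            self.last = (left, right)  # just remember the command

    drive = RealDrive() if ON_ROBOT else FakeDrive()
    drive.set_motors(0.5, -0.5)
    print("commanded:", drive.last)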

Good question: I think we have two different answers for two different goals.

a) a cRIO emulator that will run a program we compile for the robot: something a person could write code for, then execute, and have ‘virtually instrumented’ so you can watch gauges for the various things that are happening.

b) a LabVIEW program that is a physics-based simulator, feeding information back to the FRC DS and custom dashboard.

A little background…

Our team, FRC 1311, is a year-round affair. Currently we are one of 14 Lemelson-MIT InvenTeams nationally. We are building a robot that is an oil skimmer, and its control system shares a lot of components with an FRC robot. However, it is not currently using a cRIO.

What we did was use Simulink from MathWorks. We created two things, both in Simulink:

a) a physics based simulator
b) a driver station

The DS handles the user controls and dashboard and implements a telemetry stream to the robot via UDP/IP. Onboard, there is an ARM-based Linux computer that acts as a communications processor and muxes/demuxes data to a PWM controller, GPS, and other things we can think of, like an “Adventure Game”, haha…
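
For the curious, the telemetry stream boils down to something like this (Python sketch; the port number and two-float packet layout are placeholders, not our actual format, and the two ends normally run on different machines):

    # Sketch of the UDP telemetry link between DS and robot.
    # Port 5800 and the packet layout are made up for illustration.
    import socket, struct

    PORT = 5800
    PACKET = struct.Struct("<ff")   # e.g. throttle, steering as two floats

    # robot side: bind and wait for commands
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("", PORT))

    # driver station side: send stick values (localhost here so the
    # sketch runs standalone)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(PACKET.pack(0.8, -0.2), ("127.0.0.1", PORT))

    throttle, steering = PACKET.unpack(rx.recvfrom(PACKET.size)[0])
    print("throttle:", throttle, "steering:", steering)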

The goal is to have a DS and a simulator with identical user interfaces, except the simulator is physics based. This December we will go back to Orlando and exhibit the project at I/ITSEC as part of the student exhibitions.

In time, we would like to replicate this effort, substituting LabVIEW for Simulink. For those LabVIEW gurus who know how, that is obviously easy; but if you don’t know how, like us, it isn’t.

We also have a relationship with the engineering/simulation/education people at Wright-Patterson AFB. WPAFB is basically the engineering/program management HQ for the U.S. Air Force. One of the “big picture” goals here is to introduce simulation to high schools on a national basis. Just as it took FIRST years to get to where it is today, it will take a while to get HS simulation going.

I completely disagree with this.

We test autonomous via non-physics-based simulation every year. We put in countless hours of coding and debugging via simulation long before the robot ever gets built. Once we finally get to the robot, it usually takes about 15 minutes to get the sensor/motor polarities all correct, about 30 minutes to tune PID gains, and about 0-15 minutes for bugs not found in simulation. Then the robot accomplishes the autonomous task to about 98% of the desired accuracy on the first run. We usually seem to get about 4 hours to test autonomous before the robot ships, so without the simulation we would be dead.

Another good example is our kicker from last year. Due to the mechanical design, a lot was required of the software. We wound up having to implement the attached state machine. It took about 15 minutes to throw together a quick simulator that got 95% of the bugs out of the software. Why not just test it on the robot? 1) It wasn’t built yet, 2) even if it had been, I didn’t want to break anything.
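
The attachment has the real state machine, but for flavor, here is what a “quick simulator” for one can look like (toy Python; the states and timings are invented, not our kicker’s):

    # Toy example of simulating a state machine off-robot.
    def step(state, t_in_state, kick_requested):
        # returns the next state given time-in-state and the kick button
        if state == "IDLE" and kick_requested:
            return "WINDING"
        if state == "WINDING" and t_in_state > 0.5:   # pretend winding takes 0.5 s
            return "LATCHED"
        if state == "LATCHED" and kick_requested:
            return "FIRING"
        if state == "FIRING" and t_in_state > 0.1:    # pretend release takes 0.1 s
            return "IDLE"
        return state

    state, t = "IDLE", 0.0
    for tick in range(200):                           # 2 s of 10 ms ticks
        nxt = step(state, t, kick_requested=tick in (5, 120))
        t = 0.0 if nxt != state else t + 0.01         # time-in-state
        state = nxt
        if tick % 25 == 0:
            print(f"t={tick * 0.01:.2f}s  state={state}")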

Unfortunately, it will be June 20th before I can seriously spend time on this problem, but when that day comes, it would be nice to have collected some material to help us get started.

Well, I certainly don’t mind being proven wrong, but I don’t yet understand how this works. If you’re not simulating the motion of the robot, what are you simulating?

Perhaps I was over-generalizing, but I’ll explain why I said what I did. The primary issues that stop me when programming aren’t actually programming issues. What stops me are things like mechanical binding or improper motor/gearbox selection. These aren’t things that are going to be caught in a simulation for testing programming (nor should they be).

If what you mean by simulating is simply running your subVIs without the robot, then my view is completely different. Yes this will catch programming errors. Unit testing is one of the steps anyone should take when developing software.

I am simulating the motion of the robot - it’s just that the robot model isn’t exactly physics-based. It matches the robot motion to about 85% accuracy (or thereabouts), which is plenty close enough to get the bugs out of the autonomous code (especially when you’re trying to implement scripted autonomous routines, or something else that has a lot of potential for going wrong, like Go To XY Coordinate functions).

I did a presentation at the FIRST Conference in St. Louis on scripted autonomous control. I’m still working on making the documentation and examples good enough to post online. The examples will include the autonomous simulator running the autonomous code that was created on the big screen at the presentation. I hope to have it posted by tomorrow evening.

If what you mean by simulating is simply running your subVIs without the robot, then my view is completely different. Yes this will catch programming errors. Unit testing is one of the steps anyone should take when developing software.

I also did a presentation at the conference on unit testing with simple models to represent the motion of various parts of the robot. I didn’t call it unit testing since I didn’t want to throw any more jargon into the presentation than there already was.

The material and examples from that presentation are already posted in the chiefdelphi papers area. The presentation and examples stop at fairly simple examples, but the autonomous simulator is just an extension of what was presented at the simulation presentation.

Interesting.
What sort of bugs does this catch for you? Does it simply tell you you need to turn further here, or stop sooner here?
Or is it meant to catch bugs in your navigation subVIs?
Or is it used for timing purposes? (you must fit all your autonomous into 15 seconds, unless you continue during Teleop)

Is this only used for testing autonomous, and not Teleop?

You ARE using encoders and a gyro on your robot for feedback, aren’t you? Or is it entirely time-based?

What would be interesting is to take the bundle of signals that go out to the world (PWM, relays, etc.) and the bundle of signals that come in from the world (DIO, AIO, etc.) and put them into another VI that roughly models the robot.

Last year, I played with LabVIEW framework code to see what would be involved to run without a cRIO.

If you move or copy the Team code from the cRIO realtime target to the PC host, and open the host Robot Main.vi, it will actually run. There is no FPGA or FRCCommunications DLL, so there will be lots of runtime errors. Even at this level, it will allow you to do some unit testing. You can type input values into the panel controls, run the VI, and probe or observe the outputs. You can also write test harnesses, something like Robot Tests.vi, to call the VIs with different test vectors, record the results, and present a summary of the tests.
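
In text terms, a Robot Tests.vi boils down to a table of test vectors pushed through the code under test. A sketch in Python (deadband() is a hypothetical stand-in for whatever subVI is being exercised):

    # Sketch of a test harness: feed a list of input vectors through the
    # code under test and summarize pass/fail.
    def deadband(x, width=0.05):
        return 0.0 if abs(x) < width else x

    vectors = [   # (input, expected)
        (0.0, 0.0),
        (0.04, 0.0),
        (0.5, 0.5),
        (-0.04, 0.0),
        (-0.5, -0.5),
    ]

    failures = [(x, want, deadband(x)) for x, want in vectors if deadband(x) != want]
    print(f"{len(vectors) - len(failures)}/{len(vectors)} passed")
    for x, want, got in failures:
        print(f"  input {x}: expected {want}, got {got}")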

In order to get more code running, I made dirty changes to Start Communications.vi so that it would process the UDP info directly rather than sharing the FRCCommunications.dll code. This allowed for the DS to work with the hostPC code. Finally, I monkeyed with where the WPILib was stored on disk. I made it so that the hostPC implementation and a cRIO implementation were independent, and the correct one was used depending on which RobotMain was opened.

Next, I started changing the hostPC WPILib to chop out the FPGA and replace it with a set of virtual registers/properties that the other layers will access. You can also open the subVI and watch the values as the code runs on the PC.

What I didn’t write was code that also runs on the PC to close the loop and implement the physics model. It would read from the output registers, calculate sensor values, and write to the sensor registers.
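
For reference, that closing piece would be roughly this loop (Python sketch; the registers dict and the constants stand in for the virtual registers and a real robot model):

    # Sketch of the loop that closes the simulation: read the output
    # registers, run a crude model, write sensor registers back.
    registers = {"pwm_left": 0.6, "pwm_right": 0.4,
                 "enc_left": 0.0, "enc_right": 0.0, "gyro_deg": 0.0}

    COUNTS_PER_SEC = 120.0   # encoder counts at full throttle (made up)
    DEG_PER_SEC = 90.0       # turn rate at full differential (made up)
    DT = 0.02                # 50 Hz update

    for _ in range(50):      # one simulated second
        left, right = registers["pwm_left"], registers["pwm_right"]
        registers["enc_left"]  += left * COUNTS_PER_SEC * DT
        registers["enc_right"] += right * COUNTS_PER_SEC * DT
        registers["gyro_deg"]  += (left - right) * DEG_PER_SEC * DT

    print(registers)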

So, to summarize, there are a couple of steps involved in smoothing this out so that teams could use it. It is certainly possible to do, but not documented or easy to accomplish without some in-depth knowledge of the protocols and LV target mechanisms.

Perhaps it makes sense to hear some input on what would be most useful for simulation?

Greg McKaskle

Almost all driving is controlled using encoder/gyro feedback. The simple robot model takes the motor commands from the autonomous software, calculates the robot speed, heading, distance travelled, and X,Y position. The heading, distance travelled (and speed if wanted) are fed back to the autonomous software as sensor readings (gyro/encoder readings). Thus, a complete closed loop simulation can be done.
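
The whole model is only a few lines. A sketch of the idea in Python (the constants are rough guesses on purpose; ballpark accuracy is all the autonomous debugging needs):

    # Minimal closed-loop robot model: motor commands in, simulated
    # gyro/encoder readings out. ~85% fidelity is plenty for debugging
    # autonomous logic.
    import math

    class SimRobot:
        MAX_SPEED = 10.0   # ft/sec at full throttle (made up)
        TRACK = 2.0        # ft between wheel centers (made up)

        def __init__(self):
            self.x = self.y = self.heading = self.distance = 0.0

        def update(self, left_cmd, right_cmd, dt):
            v_left = left_cmd * self.MAX_SPEED
            v_right = right_cmd * self.MAX_SPEED
            v = (v_left + v_right) / 2.0              # forward speed
            omega = (v_right - v_left) / self.TRACK   # turn rate, rad/sec
            self.heading += omega * dt
            self.distance += v * dt
            self.x += v * math.cos(self.heading) * dt
            self.y += v * math.sin(self.heading) * dt

        # what the autonomous code reads as its "sensors"
        def gyro(self):
            return math.degrees(self.heading)

        def encoder(self):
            return self.distance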

It helps to debug all of the navigation software (position calculation) as well as the control software. For example:

  • does it drive the correct heading?
  • are there any sign errors / accidental positive feedback loops?
  • does it go forward when it’s supposed to go forward, and backward when it’s supposed to go backward?
  • does it properly exit the maneuver and sequence to the next maneuver?
  • does it drive an arc properly and exit properly from all starting headings and ending headings? (if you have an arc maneuver as one of the driving maneuvers)
  • does it properly go to an XY coordinate? Does it do so without going too far out of the way? Does it “track” instead of “home”? (if you use Go To XY)
  • do the timed maneuvers (such as delay, hold position, or timed appendage motor commands like roller claws) exit at the proper time?
  • do any interacting exit conditions (such as OR logic between time and arm/elevator getting to the desired position) work properly?
  • etc.

It’s pretty cool: you can code up some cool autonomous control, run the simulation, and watch the robot go running off and crash into a wall (a virtual wall, of course). Check the code and go “duh - here’s a sign error”. Correct the error, re-run the simulation, and it does what you expect. You then realize how nice it is not to have real robots crashing into real walls while you debug.

The biggest thing is that if you were afraid to do something complicated (like mapping the floor into X,Y coordinates), you can do all of your software testing in simulation before the robot is even built. Get as fancy and complicated as you want - you don’t need time with the real robot. It translates to the real robot quite well as long as everything is feedback controlled.

Never done feedback control? Here’s your chance - get it all working in simulation where you don’t have to worry about breaking the robot with your trials and errors.
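
As a concrete starter, here is a toy heading controller run against the SimRobot sketch above (the gain is a guess, which is exactly the point: tune it in simulation, where nothing breaks):

    # Toy feedback exercise: proportional steering to a 90 degree heading
    # while driving forward, using the SimRobot model sketched earlier.
    robot = SimRobot()
    TARGET_DEG, KP, DT = 90.0, 0.02, 0.02

    for _ in range(500):                          # 10 simulated seconds
        error = TARGET_DEG - robot.gyro()
        turn = max(-0.5, min(0.5, KP * error))    # clamp the correction
        # right wheel faster => heading increases in this model
        robot.update(0.4 - turn, 0.4 + turn, DT)

    print(f"final heading: {robot.gyro():.1f} deg")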

Simulation can be a great tool. It takes a little up front investment, but the payoff is > 10x what you put into it.

I understand now.
You’re using this not for prototyping, but for checking your programming logic. Nice!

This means you have a way of using the same simulation code from year to year without tedious changes. How do you do this? I’m assuming you aren’t metaprogramming.