Collaborate on LabView virtual Kalman robot?

Hello everyone,

First-year mentor with Team 2171 – the Crown Point RoboDogs! Which means I’m still coming up to speed on FRC… :confused:

I would guess that since the announcement of the cRIO controller and LabView combination, everyone has been busy trying to learn all they can about them (like me).

While learning LabView, it occurred to me that somewhere I had this old PDF circa 1995. It’s called “Mathematical Foundations of Navigation and Perception For an Autonomous Mobile Robot” by James L. Crowley.

Note: After finding it on a dusty old backup CD, I went out to Google and discovered it’s widely available, so I believe this can be considered public domain information and is not subject to copyrights. Still, to respect copyrights, I’ve decided to include a link (below) to it rather than uploading the document itself.

It is a very readable text that describes a software/hardware architecture that uses Kalman filtering to guide the motions of an autonomous robot.

Keep in mind that when James L. Crowley built his robot and wrote this paper, he did not have a lot of the benefits that we at FIRST now have. For example, we have a well-defined game area, etc. Plus we have some really great sensors and LabView’s mathematics libraries.

So, I was wondering if anyone would like to collaborate on a summer project? Specifically, I was thinking about creating a virtual robot with LabView that can ‘play’ as much of the Overdrive game as possible using James L. Crowley’s autonomous architecture. Although the real purpose would be to learn more about LabView and methods of creating autobots, so the subject is open for discussion.

All the foundation work is done and provided in LabView’s Robot Modeling and Simulation Toolkit. However, we would still need to build a Kalman filter, a mathematical model of the game area, etc. In short, we would be building the control logic for a robot that would be limited to the parameters we know from the Overdrive game. I’m not 100% sure, but I think LabView has all the tools we need to do this; we just have to figure out how to put them together so they work!

If you’re interested, read the PDF (below). Then let’s kick the ideas around and start building something with LabView. If we’re not careful, we may learn something that will help our teams next year!! :ahh:

Respectfully submitted,
KHall

PS – Please do not assume that because I know of something, I know how to do it. They are very different things! :slight_smile:
http://www-prima.inrialpes.fr/Prima/Homepages/jlc/papers/NavFoundations.pdf

Hello all,

I see there have been some tire-kickers. That’s good. I had a couple of hours to work on some of the basic outlines last night. I’ve zipped up a PowerPoint file and uploaded it for review and discussion. You’ll find it at the bottom of this post.

To get started, I’m thinking we just build a simulated bot that can run the course. We can build on that and see where it leads us.

Slide 1 converts the game area into a coordinate grid. To make a runner, you give it a set of goals that send it to specific locations on the grid, then repeat. Round and round it goes.
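To make the “runner” idea concrete, here’s a rough sketch of the goal-cycling loop in ordinary code (Python, just to show the logic; the waypoints and tolerance are placeholders, and in LabView this would simply be a loop around the chassis VIs):

```python
import math

# Hypothetical waypoints on the game-area grid, in meters (placeholders only).
GOALS = [(2.0, 1.0), (12.0, 1.0), (12.0, 6.0), (2.0, 6.0)]
TOLERANCE_M = 0.25  # how close counts as "reached"

def next_goal(position, goal_index):
    """Advance to the next goal once the current one is reached; wrap around forever."""
    gx, gy = GOALS[goal_index]
    px, py = position
    if math.hypot(gx - px, gy - py) < TOLERANCE_M:
        goal_index = (goal_index + 1) % len(GOALS)  # round and round it goes
    return goal_index
```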

Slide 2 outlines the bot and its sensors. Very basic. Encoders and pingers for now. In LabView we can use the “2 Wheel Simple Chassis” code from the FRC library. And to make it even easier, we can use all the settings that were used in the second LabView tutorial.

Slide 3 is a proposed high-level program flow chart.

Slide 4 addresses what is called “convert to common vocabulary” in the PDF. It seemed natural to me to convert everything to SI, so the common vocabulary of the sensors is ‘meters’.
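As a sketch of that conversion (again in text code rather than LabView; the scale factors below are made up and would come from the real wheel and encoder specs):

```python
import math

# Assumed constants -- the real values come from the wheel size, encoder spec, etc.
WHEEL_DIAMETER_M = 0.152            # e.g. a 6 in wheel
ENCODER_TICKS_PER_REV = 250
SPEED_OF_SOUND_M_PER_S = 343.0

def encoder_ticks_to_meters(ticks):
    """Wheel travel in meters, from raw encoder ticks."""
    return ticks / ENCODER_TICKS_PER_REV * math.pi * WHEEL_DIAMETER_M

def ping_to_meters(round_trip_seconds):
    """Range in meters, from an ultrasonic pinger's round-trip echo time."""
    return round_trip_seconds * SPEED_OF_SOUND_M_PER_S / 2.0
```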

That’s probably about as far as I’m going to get today.

I would like to ask a general question to the CPU experts:

If we have a 400 MHz CPU, does that mean 1 machine instruction every 0.0025 µs? Let’s say for discussion that we can make use of the techniques in the PDF and compensate for the sensor/motor latency. Then, if an autonomous robot takes say 1,000,000 machine instructions between ‘reactions’, does that mean it can implement changes to its behavior every 2.5 ms?

Because if it can, and somebody can build a robot that makes use of that, then in terms of reaction time the human operators wouldn’t stand a chance against it. By the time they even realized something had happened, the autonomous robot would have already reacted!
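For reference, here’s the back-of-the-envelope arithmetic I’m using, assuming one instruction per clock cycle (which is probably optimistic):

```python
clock_hz = 400e6                      # 400 MHz CPU
instruction_s = 1.0 / clock_hz        # 2.5e-9 s = 0.0025 us per instruction (if 1 per cycle)
instructions_per_reaction = 1_000_000
reaction_s = instructions_per_reaction * instruction_s
print(reaction_s)                     # 0.0025 s = 2.5 ms between 'reactions'
```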

Regards,
KHall
Mentor Team 2171

layout.zip (6.56 KB)

The IFI speed controllers have a maximum update rate of around 120 Hz. No one knows what we will be using in the future, but that has been a limitation in the past. See this page for more details: http://www.ifirobotics.com/forum/viewtopic.php?t=303

Most CPUs do not execute 1 instruction per cycle; instead, each instruction can take a different number of cycles to execute. And things that might seem like 1 instruction (e.g., one operation in C) can be many machine instructions. PICs are designed for one cycle per instruction, so you can add two 8-bit values in one operation because there is an instruction for that. But you can’t add two 32-bit numbers in one operation, because there isn’t an instruction for that. Instead, when you write something like a long + a long, the C compiler generates many instructions to complete that one operation.

It’s even harder to figure out when you have a highly abstracted language (Labview) and an operating system (VxWorks). On the other hand, with a real time OS, you should be able to schedule tasks and make sure your calculations run at a set rate (at the expense of lower priority tasks).

Overall, your project is very ambitious, but it is a great learning experience in real robotics. I work with GPS/navigation, and a lot of time goes into tuning a Kalman filter (although I’ve never dealt with them directly).

Thank you, Joe Ross,

I didn’t know you had to tune Kalman filters. Probably should have known that though, since you have to fiddle with other kinds of filters.

I don’t think this is all that ambitious, considering we have a pretty well written guide. Besides, if it proves to be too difficult…well that is something worth knowing BEFORE the next build cycle.

Has anyone read the paper yet? I had sort of thought about just following the paper section by section and seeing what happens.

Sections 1 and 2 of the paper are descriptions that seem perfectly reasonable and close enough to FRC robots to seem applicable.

Section 3 starts with:
“We define perception as: The process of maintaining an internal description of the external environment.” Um, well, it is a research paper.

Then it kind of goes on a bit as research papers do, but finally settles down and starts making some really good points:
"
Principle 1) Primitives in the world model should be expressed as a set of properties.
Principle 2) Observation and Model should be expressed in a common coordinate system.
Principle 3) Observation and model should be expressed in a common vocabulary.
Principle 4) Properties should include an explicit representation of uncertainty.
Principle 5) Primitives should be accompanied by a confidence factor."

If you look at the PowerPoint above, I’ve proposed the layout and defined the primitives to follow this model. The ‘common vocabulary’ is meters. So the next thing that needs to happen is to create some arrays to hold the data collected by the sensors.

Specifically, the paper suggests:
Model: M(t) = { P1(t), P2(t), …, Pm(t) } – That means the model is a collection of primitives, each with a timestamp, and:
Primitive: P(t) = { ID, X^(t), CF(t) } – A primitive is a single data reading from a sensor. Along with the sensor’s estimate X^(t), it carries an ID (so you can tell which sensor it came from), the timestamp of when the measurement was taken, and a confidence factor.
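To pin the fields down before we lay out the LabView arrays/clusters, here’s a rough sketch in ordinary code (the field names are my own invention):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Primitive:
    """One sensor observation: P(t) = {ID, X^(t), CF(t)}."""
    sensor_id: int      # ID: which sensor the reading came from
    timestamp: float    # t: when the measurement was taken (seconds)
    estimate: float     # X^(t): the measurement, converted to meters
    variance: float     # explicit uncertainty in the estimate (Principle 4)
    confidence: float   # CF(t): confidence factor (Principle 5)

# The model M(t) is just the current collection of primitives.
Model = List[Primitive]
```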

Somewhere later it says we’re going to use the Kalman filter to adjust the confidence factor. You always start with a low confidence number for any new reading. As you take more measurements, the filter ‘decides’ whether the data is good, in which case the confidence increases, or whether the data is bad, in which case the reading gets thrown out.
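To sketch what that update might look like for a single scalar reading – this is just the standard one-dimensional Kalman update plus a crude confidence rule of my own, not necessarily the paper’s exact formulation:

```python
def kalman_update(estimate, variance, measurement, meas_variance):
    """Standard one-dimensional Kalman measurement update."""
    gain = variance / (variance + meas_variance)
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1.0 - gain) * variance
    return new_estimate, new_variance

def update_confidence(confidence, innovation, meas_std, step=0.1):
    """Crude rule of thumb: raise confidence when a reading agrees with the prediction."""
    if abs(innovation) < 3.0 * meas_std:      # plausible reading
        return min(1.0, confidence + step)
    return max(0.0, confidence - step)        # implausible: lower it (and drop at 0)
```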

So far this all seems simple enough and really quite workable. What we need next are some containers to hold the sensor data in a way that will be easy to work with. LabView should be able to get the sensor data, timestamps, etc., and write them into arrays. And the LabView math routines should make processing the arrays straightforward.

Is anyone out there a wizard of array processing? It would be nice at this point to have a good, logical data structure for the primitives. Considering that so far we’ve only defined 6 sensors, I’m thinking it would be possible to keep a dozen or so primitives in the robot’s world model before letting the oldest primitive drop off into the bit bucket.

Does anyone know of a good way to store the primitives and then cycle them so that we are not constantly shifting a bunch of data around or running weird routines that end up tangling us in loops? It would be nice to have an easy-to-understand way of storing the primitives in chronological order so we don’t have to worry about which data is which.
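To make the question concrete, here’s roughly what I’m imagining, sketched in Python rather than LabView (the 12-primitive limit is just a placeholder; in LabView the equivalent would be a fixed-size array used as a circular buffer):

```python
from collections import deque

MAX_PRIMITIVES = 12                    # "a dozen or so" readings in the world model
world_model = deque(maxlen=MAX_PRIMITIVES)

def add_primitive(primitive):
    """Newest on the end; when full, the oldest falls off into the bit bucket."""
    world_model.append(primitive)

# Chronological order is free: world_model[0] is always the oldest primitive.
```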

Regards,
KHall