Re: Collaborate on LabView virtual Kalman robot?
Hello all,
I see there have been some tire-kickers. That's good. I had a couple of hours to work on some of the basic outlines last night. I've zipped up a PowerPoint file and uploaded it for review and discussion. You'll find it at the bottom of this post.
To get started, I'm thinking we just build a simulated bot that can run the course. We can build on that and see where it leads us.
Slide 1 converts the game area into a coordinate grid. To make a runner, you give the bot a sequence of goals, specific locations on the grid to reach, then repeat the list. Round and round it goes.
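The waypoint-cycling idea could be sketched like this (a minimal Python sketch, since LabVIEW is graphical; the waypoint list, tolerance, and function names are all my own placeholders, not anything from the slides):

```python
import math

# Hypothetical grid waypoints in meters, and a "close enough" radius.
WAYPOINTS = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]
GOAL_TOLERANCE = 0.05  # meters

def next_goal(position, goal_index):
    """Advance to the next waypoint once the current one is reached."""
    gx, gy = WAYPOINTS[goal_index]
    px, py = position
    if math.hypot(gx - px, gy - py) < GOAL_TOLERANCE:
        # Wrap around at the end of the list: round and round it goes.
        goal_index = (goal_index + 1) % len(WAYPOINTS)
    return goal_index
```

Each pass of the main loop would call `next_goal` with the current estimated position and drive toward `WAYPOINTS[goal_index]`.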
Slide 2 outlines the bot and its sensors. Very basic: encoders and pingers for now. In LabVIEW we can use the "2 Wheel Simple Chassis" code from the FRC library, and to make it even easier, we can reuse all the settings from the second LabVIEW tutorial.
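For discussion's sake, here is roughly what the encoder side of a 2-wheel chassis gives you: a dead-reckoned pose update per sample. This is a generic differential-drive sketch in Python, not the FRC library code, and the wheel radius, track width, and ticks/rev below are made-up placeholders, not the tutorial settings:

```python
import math

# Placeholder chassis constants (assumptions, not the tutorial values).
WHEEL_RADIUS = 0.076   # meters, roughly a 3-inch wheel
TRACK_WIDTH = 0.40     # meters between the two wheels
TICKS_PER_REV = 360

def odometry_step(x, y, theta, left_ticks, right_ticks):
    """Update pose (x, y, theta) from one sample of encoder tick deltas."""
    left = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    right = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d = (left + right) / 2.0               # distance moved by chassis center
    dtheta = (right - left) / TRACK_WIDTH  # heading change, radians
    # Integrate along the average heading over the step.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

The pingers would then correct the drift this accumulates, which is exactly where the Kalman filter comes in.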
Slide 3 is a proposed high-level program flow chart.
Slide 4 addresses what the PDF calls "convert to common vocabulary." It seemed natural to me to convert everything to SI units, so the sensors' common vocabulary is meters.
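In practice the conversion layer is just a pair of scale factors, one per sensor type. A quick Python sketch; both constants here are illustrative assumptions, not real sensor specs (the ultrasonic figure comes from a ~343 m/s speed of sound, i.e. about 5,800 µs of round-trip echo time per meter of range):

```python
# Assumed scale factors for converting raw readings into meters.
TICKS_PER_METER = 4650    # encoder ticks per meter of travel (made up)
US_PER_METER = 5800.0     # echo microseconds per meter of pinger range
                          # (round trip at ~343 m/s speed of sound)

def encoder_to_meters(ticks):
    """Encoder ticks -> distance traveled, in meters."""
    return ticks / TICKS_PER_METER

def ping_to_meters(echo_us):
    """Ultrasonic echo time in microseconds -> range, in meters."""
    return echo_us / US_PER_METER
```

Everything downstream of this layer, including the filter itself, only ever sees meters.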
That's probably about as far as I'm going to get today.
I would like to ask a general question to the CPU experts:
If we have a 400 MHz CPU, does that mean one machine instruction every 0.0025 µs? Let's say, for discussion's sake, that we can use the techniques in the PDF and compensate for the sensor/motor latency. Then if an autonomous robot takes, say, 1,000,000 machine instructions between 'reactions', does that mean it can implement changes to its behavior every 2.5 ms?
Because if it can, and somebody can build a robot that makes use of that reaction time, the human operators wouldn't stand a chance against it. By the time they even realized something had happened, the autonomous robot would already have reacted!
Regards,
KHall
Mentor Team 2171