For those who don’t know, WildStang (111) used a navigation system dubbed WildStang Positioning System (StangPS) during our autonomous mode.
There have been several requests for the presentation that we had in our pit area at both the Midwest regional and the Championship. I am pleased to announce that we have uploaded it to our website. There are 3 resolutions of Flash and 2 of AVI. The AVIs aren’t as nice as I was hoping they’d be, so I suggest using the Flash versions.
You guys wouldn’t want to post a picture of your drive train, would you (or explain what you used)? I would like to possibly come up with a design for a 4 wheel steering system like your robot featured for future years.
I must say I am very impressed. Our team has tried doing the crab/4 wheel steering in the past and we know how hard it can be to get that…you guys are by far the masters of that style of steering. And now to add this positioning system is amazing. Very very nice!
Will we be seeing any documentation related more to the actual specifics of the custom circuit and the programming and that kind of stuff?
*Originally posted by oneangrydwarf *
**Will we be seeing any documentation related more to the actual specifics of the custom circuit and the programming and that kind of stuff? **
Patience…
We’re all trying to catch up on real work after neglecting it since January. I think each of us plans to write up a whitepaper on the design of the part of the system that we worked on. We’re also talking about holding some kind of presentation/class on software (including crab), autonomous mode, and custom circuits. If there’s enough interest we’ll try to make it a reality, so let us know if it’s something you’d like to see.
Telly: We’re in the process of putting all our pictures on the web; hopefully there will be some good ones of the drive system. I’m also working on a writeup for the crab software control.
*Originally posted by Mike Soukup *
**Patience…
We’re all trying to catch up on real work after neglecting it since January. I think each of us plans to write up a whitepaper on the design of the part of the system that we worked on. We’re also talking about holding some kind of presentation/class on software (including crab), autonomous mode, and custom circuits. If there’s enough interest we’ll try to make it a reality, so let us know if it’s something you’d like to see.
Telly: We’re in the process of putting all our pictures on the web; hopefully there will be some good ones of the drive system. I’m also working on a writeup for the crab software control.
Mike **
Thanks Mike. You know I’d be interested in anything you guys would be willing to share about your team’s robot and its systems from this year ;). I was sad that I had to wait until the finals to see the 'bot in action, since I was on Curie the rest of the time. Just like last year and the year before, I’m in awe of your team’s robot. Keep up the great work Wildstang!
Very impressive indeed…I especially liked the software for recording autonomous modes. Earlier in the season I made a similar program that captured joystick input so autonomous mode would duplicate pre-recorded driver movements, but ultimately we stuck with a simple (and reliable) gyro-based program.
BUT:
I have to ask this…did the team’s students ever actually have a part in any of this? No offense is meant, I’m just curious…this is the sort of system that NASA would be proud of.
*Originally posted by Abwehr *
**I especially liked the software for recording autonomous modes. **
Hmmm, I think you’re misinterpreting something. We don’t record driver’s movements. We use a software program that shows a map of the field and just select points on the map. Those points are downloaded into the robot controller and our autonomous software drives from one point to the next. This allows us to create new autonomous programs on the fly in minutes, without any practice time with the robot or a field.
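This isn’t WildStang’s actual RC code (their controller ran on the 2003 IFI hardware), but the point-to-point idea can be sketched roughly like this. Everything here — the function name, the 2 inch tolerance, and steering a crab drive by commanding a module angle toward the next waypoint — is my own illustration of the concept, not their implementation:

```python
import math

def drive_step(x, y, theta, waypoints, tolerance=2.0):
    """One control-loop iteration of hypothetical waypoint following.

    (x, y): current field position in inches; theta: robot heading in degrees.
    waypoints: list of (x, y) targets, consumed front to back.
    Returns (crab_angle, done): the steering angle to command, and
    whether every waypoint has been reached.
    """
    if not waypoints:
        return 0.0, True
    tx, ty = waypoints[0]
    dx, dy = tx - x, ty - y
    if math.hypot(dx, dy) < tolerance:      # close enough: advance to next point
        waypoints.pop(0)
        return drive_step(x, y, theta, waypoints, tolerance)
    # A crab drive translates without rotating the chassis, so point the
    # modules at the target, relative to the robot's current heading.
    bearing = math.degrees(math.atan2(dy, dx))
    crab_angle = (bearing - theta + 180) % 360 - 180   # wrap to [-180, 180)
    return crab_angle, False
```

Because new waypoints are just coordinate pairs, a tool like WildDraw only has to emit a list of (x, y) points for the robot to consume — which is what makes “new autonomous programs in minutes” possible.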
**I have to ask this…did the team’s students ever actually have a part in any of this? No offense is meant, I’m just curious…this is the sort of system that NASA would be proud of. **
Yes. Our goal is always to have the students do as much as possible. This year that meant the students worked on the wiring of the custom circuit and writing the robot controller software (the RC software is by far the most complicated part of the system, BTW). A lot of the things that we’ve done over the last 2 seasons with the custom circuit required that the engineers learn a lot as well. As the engineers get more comfortable we will be able to better teach the students and hopefully work toward a goal of having the hardware, custom circuit software, and robot controller software completely done by the students.
WOW
that is a very impressive system. Once again I am amazed at the kind of innovations that come about in FIRST.
A white paper on the wiring / how WildDraw works would be cool.
Once again, wow, and congrats on being national champions!
By the way, what do you use to measure the distance traveled, and what happens if the robot isn’t moving while the wheels are spinning? Wouldn’t that throw off the system?
I think this has been asked before but I really don’t remember: did WildStang have any problem with slipping between the carpet and the tread on the wheels? I would guess that you guys didn’t, but if you did, how did you account for this in your distance tracking?
*Originally posted by Foley350 *
**by the way, what do you use to measure the distance traveled, and what happens if the robot isn’t moving while the wheels are spinning? Wouldn’t that throw off the system? **
Each “tick” of a rotation sensor would correspond with a distance travelled equal to the circumference of the wheel, as I understand it.
As for wheels slipping, you may be right but a measly plastic bin won’t skid them any measurable amount.
Their system is the best one for this year’s game.
*Originally posted by Foley350 *
**by the way, what do you use to measure the distance traveled, and what happens if the robot isn’t moving while the wheels are spinning? Wouldn’t that throw off the system? **
As was mentioned, we use a wheel with 2 light sensors mounted on it (2 sensors allow you to determine both distance and direction of rotation). As for the wheels spinning without the robot moving, this is a problem to consider however in practice it doesn’t happen unless we change direction on the ramp grid (causing a momentary wheel slip) or if we run into something completely immovable (like the field border).
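For anyone curious how two sensors give you direction as well as distance: mounting them 90 degrees out of phase produces a quadrature signal, and the order in which the two channels change tells you which way the wheel turned. This is a generic quadrature-decoding sketch, not WildStang’s code — the table layout and `decode` helper are my own illustration:

```python
# Transition table for a two-channel quadrature encoder:
# (previous AB state, new AB state) -> signed tick delta.
_QUAD = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """Accumulate signed ticks from a sequence of (A, B) sensor samples."""
    ticks = 0
    prev = samples[0][0] << 1 | samples[0][1]
    for a, b in samples[1:]:
        state = a << 1 | b
        if state != prev:
            # Unknown transitions (both channels flipping at once) are
            # dropped as noise; a real decoder might count them as errors.
            ticks += _QUAD.get((prev, state), 0)
            prev = state
    return ticks
```

A full forward cycle through the four states yields +4 ticks; the same cycle in reverse yields -4, which is how the sign of the distance falls out for free.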
*Originally posted by Abwehr *
**Each “tick” of a rotation sensor would correspond with a distance travelled equal to the circumference of the wheel, as I understand it. **
The rotation sensor actually is measuring a distance much less than the circumference of the wheel. Our resolution is under two inches per tick and with the tread material we use slippage is kept to a minimum. All in all the accumulated error due to slipping in a 15 second period doesn’t amount to very much. If the auto mode was longer than 15 seconds we would need to use something else, perhaps a fifth wheel. A bigger problem is that with a rigid frame and no suspension, all the wheels do not stay on the ground when hitting the ramp at an angle. Our sensors are only on one wheel at the moment and perhaps sensors at every wheel averaged in software would make things more accurate. Of course that would add weight and we are already at 130.
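With a per-tick resolution under two inches, the position estimate is straightforward dead reckoning: each batch of ticks advances the (x, y) estimate along the direction the wheels are actually pointing. The 1.75 inch tick distance below is an illustrative value (the post only says “under two inches”), and combining gyro heading with crab angle this way is my assumption about how such a system would be wired up:

```python
import math

TICK_DISTANCE = 1.75  # inches per encoder tick -- illustrative, not WildStang's number

def update_position(x, y, ticks, crab_angle, heading):
    """Dead-reckoning update: advance (x, y) by the distance the encoder
    wheel rolled, along the direction of travel (gyro heading plus the
    crab steering angle, both in degrees)."""
    d = ticks * TICK_DISTANCE
    direction = math.radians(heading + crab_angle)
    return x + d * math.cos(direction), y + d * math.sin(direction)
```

Since errors from wheel slip accumulate additively in `x` and `y`, you can see why a short 15 second autonomous period keeps the drift tolerable while a longer one would call for an unpowered fifth wheel or extra sensors.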
I have only one question. When you get bumped and the controller compensates, what if the wheels slide? Or if you rotate at greater than the max rotational velocity of the gyro? I’m assuming that your program cannot compensate for that, but if it can I would love to know how. It seems a magnetic sensor would be much more failsafe than the gyro.
*Originally posted by Frank(Aflak) *
**I have only one question. When you get bumped and the controller compensates, what if the wheels slide? Or if you rotate at greater than the max rotational velocity of the gyro? I’m assuming that your program cannot compensate for that, but if it can I would love to know how. It seems a magnetic sensor would be much more failsafe than the gyro. **
If the wheels slide or we turn faster than 75 degrees/sec we cannot compensate. However, as I mentioned this hasn’t been a problem at all. As for the magnetometer, we spent a ton of time looking into using one and determined that it was not workable. There was just too much magnetic interference from all the metal and the motors in the robot.
We did some practice runs at home where we put as many bins in front of it as possible which caused the robot to get turned about 90 degrees. It didn’t matter, though - it just drove up the ramp sideways! I should see if there’s some video of that somewhere. It won’t behave like that anymore because since then we’ve implemented something we call “theta correction” which keeps us oriented correctly on the field at all times. Some of you may have noticed that when we ran our “sweeper” program - the robot would appear to be twisting as it diagonally crossed the scoring area. That was caused by our drive train twisting the robot and then our software detecting that and correcting for it.
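A simple way to picture theta correction is as a proportional controller on heading error: compare the gyro-integrated orientation to the programmed one, and feed a small, capped rotation command back into the drive while the robot keeps translating. The gain, cap, and function below are entirely my invention as a sketch of the idea — the posts don’t describe the actual control law:

```python
def theta_correction(measured_theta, target_theta, k_p=0.05, max_rate=0.3):
    """Hypothetical proportional 'theta correction': returns a rotation
    command (on a -1..1 scale) that twists the robot back toward its
    target orientation while it continues along its path.

    measured_theta comes from integrating the gyro; angles in degrees.
    """
    # Take the shortest way around the circle, wrapped to [-180, 180).
    error = (target_theta - measured_theta + 180) % 360 - 180
    command = k_p * error
    # Cap the correction so it stays gentle and doesn't fight the drive.
    return max(-max_rate, min(max_rate, command))
```

The cap is what would produce the gradual “twisting” look described for the sweeper run: the robot trims its orientation continuously rather than snapping back in one motion.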
*Originally posted by Foley350 *
**A white paper on the wiring/ how wilddraw works would be cool
**
As Mike said earlier, we will put out as much information as we can once we all get caught up with real work.
Before WildDraw was complete, we had to create waypoints manually. Since we have a 2 inch resolution, we made a grid of tape on the carpet of our field. We would then locate the tape that we wanted to drive to and figure out its coordinates.
With WildDraw in place, we were given the freedom to create waypoints without being at the field.
It also gave us the ability to easily and accurately modify existing programs.
[Edited to clarify sweeper program]
As soon as we found out that the Thunder Chickens had picked us at the Midwest regional, we sat down and drew out a program that would drive to common placement points of human player stacks with the intent of knocking them down. We call this the sweeper program.
After we ran it the first time, we saw that we had missed a critical placement area, so before the match was even complete, we adjusted some points and had a new program to download as soon as the robot came off the field. The next time we ran it (which happened to be a practice match at Archimedes), all human player stacks were eliminated.
On a side note, those of you that were impressed by the sweeper (including me) will be happy to know that there are some more innovative programs in the works for offseason competition. We’re not going to give out any details, so you’ll have to see it to believe it.
*Originally posted by Abwehr *
**I have to ask this…did the team’s students ever actually have a part in any of this? No offense is meant, I’m just curious…this is the sort of system that NASA would be proud of. **
I can expand on the student involvement in the RC because that’s what I worked on. After the team decided to create StangPS I drew a diagram for the overall architecture of the RC code - inputs & outputs of each subroutine, calling order, etc. I then started with a blank whiteboard and stepped through the design process with the students, asking them for input where they could help and telling them what I was doing when they didn’t understand. Next I divided up the subroutines among the students and over the next couple of evenings they each worked on their part while the engineers floated around, helping when anyone got stuck. Then we all sat down and copied the subroutines into the code and held an informal code review to make sure it all looked good. Finally we tested the code using RoboEmu (thanks Rob, the tool allowed us to run some important unit testing and helped us be confident that our code worked) and a unit test driver that a student wrote. Then it was on to testing and debugging with the real robot for the next couple of months. So I guess we took the students from the initial design phase through coding & unit testing, all the way to system integration and deployment. An entire software cycle in 3 months.
All this time another group of engineers & students was debugging the communication between the RC & CC while another group was playing around with the gyro & wheel encoder.
To expand on what Dave said about theta correction… Because of the differences in turning rate of the crab modules on our prototype, during our initial testing up the ramp the bot always rotated when it made the first turn. By the time it reached the bins it was usually 45-90 degrees off. Steve (engineer) and Matt (student) played with the code for a few days and came up with a routine that rotates the robot and seeks a specified orientation. We played around with setting waypoints at 45 and 90 degrees off our starting position just to see it rotate as it travelled across the field. We went so far as to shove the robot while it was running in order to purposefully knock it off orientation & course, and it recovered almost perfectly.
Here are some videos of our theta correction progress:
For the record, that was me trying to skew the robot to some angle other than its starting angle. Basically, we wanted to verify that if the robot were skewed, either by bins placed in front of it or by colliding with another robot, it would:
A) Compensate for the change in angle, and ‘Theta Correct’ back to its pre-programmed angle
B) Not lose its position on the field during a collision and still find its way to its next waypoint.
I think the video shows pretty well that it worked as designed… although I think I should have used more than just a bin to try to stop that thing. It hurts when it runs into you!! I think this was the one test trial that didn’t involve parts of the robot running over my ankle.