Complete Autonomous Robot

Two years ago, two mentors and I tackled computer vision for the first time. One is an electrical engineer and the other is a software engineer, and I had zero programming knowledge beforehand. We finished the program a week before our first regional, but our robot had some connection or electrical issues; we’ll never really know. Still, we learned a lot as a team that year, and that is all that really matters.

Last year I recruited another student to tackle computer vision, and he ate up all the information. He and I wrote the vision program, and it got some attention. The EE mentor just nudged us in the right direction when we got stuck and took us to the whiteboard for logic flow if we were still stuck. The software engineer attempted, and succeeded in, creating a scouting application with another student that involves Wii remotes for each robot, tablets for each alliance, and a master computer.

Over this off-season (pre-season?), a student on our team got his dad involved with the programming side of things. He is the head of the computer science department at the local state university, and he brought along some other professors to help us. Over the summer we dug into programming in depth. Then we got this idea: can we make a completely autonomous robot for next year’s competition?

We are attempting to do away with the Kinect and are trying the ASUS Xtion, a much smaller camera with all the same features (minus the servo motor and accelerometer) that was specifically designed for development. We are also switching from the O-Droid X2 to the XU; not a big change, but still a change.

Here is a spiel I gave this morning to a student from a local team whom the other student and I have been helping this fall and winter:

OpenCV and OpenNI are pretty solid libraries, but OpenNI does a lot more with 3D point clouds (look up Kinect SLAM). We are trying to do collision detection and avoidance, plus A* path planning, in an attempt to be the first team with a completely autonomous robot during teleop. If you just want to do depth work, such as checking whether an object is there, stay with OpenCV. [Another student] and I wrote a program that takes the depth map and makes an XZ image (an example is here: http://www.chiefdelphi.com/media/photos/39138) with the objects in view circled (they are soccer balls). That is really the extent of OpenCV and freenect. With that you can apply A* path planning, and even apply a homography to learn the motion of the objects (the math is easier when you assume you aren’t moving, but everything else is); you can use that to calculate the speed of a robot that is in your way. Then, since you know what velocity(ies) you’ll be moving at, you can calculate where that robot will be at any given time and adjust your A* path accordingly in advance.
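For anyone curious what the depth-map-to-XZ step looks like, here is a rough sketch in Python with NumPy. The focal length, optical center, and cell size below are made-up placeholders, not our actual values, and the sketch ignores height filtering:

```python
import numpy as np

# Hypothetical camera constants -- substitute your sensor's real intrinsics.
FX = 575.8        # horizontal focal length in pixels (assumed)
CX = 319.5        # optical center x (assumed, for a 640x480 depth image)
MM_PER_CELL = 50  # each occupancy-grid cell covers 5 cm

def depth_to_xz(depth_mm, grid_w=160, grid_d=120):
    """Project a depth image (millimeters, HxW) into a top-down XZ occupancy image.

    Rows of the output are distance away from the camera (Z), columns are
    left/right offset (X). Cells seen to contain something are set to 255.
    """
    h, w = depth_mm.shape
    grid = np.zeros((grid_d, grid_w), dtype=np.uint8)

    us = np.arange(w)
    for v in range(h):
        z = depth_mm[v].astype(np.float32)          # depth along the optical axis
        valid = z > 0                                # 0 means "no reading" on these sensors
        x = (us[valid] - CX) * z[valid] / FX         # pinhole model: X = (u - cx) * Z / fx
        gz = (z[valid] / MM_PER_CELL).astype(int)
        gx = (x / MM_PER_CELL).astype(int) + grid_w // 2
        keep = (gz >= 0) & (gz < grid_d) & (gx >= 0) & (gx < grid_w)
        grid[gz[keep], gx[keep]] = 255
    return grid
```

The circled-objects step is then just blob detection on that top-down image.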

This is our general approach, but with OpenNI. We are looking for all the help we can get, even with our extensive programming team. (Well, extensive with respect to our past, when there were fewer than 3 programmers and 1 mentor; we now have 5 mentors and 9 students dedicated to it.)

I think your team is working on an exciting problem in trying to make an all-autonomous robot. College competitions routinely use autonomous robots, so getting an early start on that can help you later in school.

I would really like to read more about what you’re doing. Please follow up in this thread during the season. Can you write a paper after the season and let us know more specifics?

This has crossed my mind a time or two before

I think it would be somewhat straightforward for the robot to maintain location awareness, as there are probably enough stationary objects on the field to use as reference/triangulation points (think of using a map and compass to find yourself [if you ever have before]).
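Something like a two-landmark resection is what I have in mind. This is just a sketch, assuming you know the field coordinates of two fixed landmarks and can measure an absolute bearing to each (the example numbers are made up):

```python
import numpy as np

def triangulate(landmark_a, bearing_a, landmark_b, bearing_b):
    """Estimate 2D field position from bearings to two known landmarks.

    landmark_a/b: (x, y) field coordinates of two fixed field features.
    bearing_a/b:  absolute bearings (radians) from the robot to each landmark.
    Returns the (x, y) where the two bearing lines intersect.
    """
    la = np.asarray(landmark_a, dtype=float)
    lb = np.asarray(landmark_b, dtype=float)
    da = np.array([np.cos(bearing_a), np.sin(bearing_a)])   # unit ray toward landmark A
    db = np.array([np.cos(bearing_b), np.sin(bearing_b)])   # unit ray toward landmark B

    # Robot position P satisfies  P + ta*da = la  and  P + tb*db = lb.
    # Subtracting the two gives a 2x2 linear system in (ta, tb).
    A = np.column_stack((da, -db))
    t = np.linalg.solve(A, la - lb)                          # fails if the bearings are parallel
    return la - t[0] * da

# Hypothetical example: two field features at (0, 0) and (4, 0) meters;
# a robot at roughly (2, -2) would measure bearings of 135 and 45 degrees.
print(triangulate((0.0, 0.0), np.deg2rad(135), (4.0, 0.0), np.deg2rad(45)))
```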

There are several tricky parts and stumbles to be had, though:
-maintaining tracking and location awareness of other robots on the field
-identifying friend or foe to coordinate a movement path
-detecting, tracking, and interacting with game pieces and scoring zones (think of Logomotion, and already filled pegs)

What happens if:
-A robot falls into pieces (We hit a robot so hard last year all 4 of their bumpers fell off and were then strewn about the field)
what will your robot do when it suddenly sees extraneous parts/unidentifiable robots?
-The robot gets into a position it cannot figure its way out of (I would assume you would have a manual override)

I can foresee a fully autonomous robot getting a lot of fouls, especially one operating in some sort of defensive mode.

That all being said, it would be REALLY cool to see a robot that plays by itself. Maybe in the future there could be a fully autonomous regional (including pit crew :P)

This has popped up before, and while I’m a huge fan of trying to automate different things the driver must do (climb, shoot/reload, balance, hang…), I don’t see a fully autonomous robot being possible with any of the previous games (but we have yet to see the 2014 game).

Let’s take an average fully autonomous robot. It would have to know where it is on the field to get from the pyramid to the feeder station, and there isn’t an easy way to do this. Teams have used follower wheels, but they slip and aren’t accurate over an entire match. From experience, you’re not going to get rotational accuracy over an entire match with just a gyro. If you go for the accelerometer/gyro/magnetometer combo, you’ll be +/- 15 degrees off by the end of the match. That being said, I’m sure there are creative solutions to these problems that an FRC team can find; it’s just a matter of finding them.
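For heading specifically, one common way to squeeze a bit more out of the gyro/magnetometer combo is a complementary filter. Here is a rough sketch; the blend constant is just a guess you would have to tune, and it won’t fix magnetometer interference from motors:

```python
import math

ALPHA = 0.98  # trust the gyro for short-term changes, the magnetometer long-term (tunable)

def wrap(angle):
    """Wrap an angle in degrees to (-180, 180]."""
    return (angle + 180.0) % 360.0 - 180.0

def update_heading(heading, gyro_rate_dps, mag_heading, dt):
    """One complementary-filter step.

    heading:       current heading estimate in degrees
    gyro_rate_dps: gyro yaw rate in degrees/second
    mag_heading:   absolute heading from the magnetometer (noisy, but doesn't drift)
    dt:            loop period in seconds
    """
    predicted = heading + gyro_rate_dps * dt        # short term: integrate the gyro
    error = wrap(mag_heading - predicted)           # long term: pull toward the magnetometer
    return wrap(predicted + (1.0 - ALPHA) * error)
```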

Creating a fully autonomous robot is a really, really ambitious project, and you should make sure your team is on board with the plan. On the team I’ve been with, the controls team has two rules: keep it as simple as possible, and if the driver can be trained to do something faster than we can program it, then the driver does it.

I don’t think location awareness would be impossibly difficult; it’s certainly one of the “easiest” challenges of a fully autonomous robot (remember, easiness is relative).

Hardware: you could use a revolving camera or other sensor such as a complex laser range finder (like you use on boats). On your processing computer (cRIO, some open source board, or a laptop) you process the data like this:
-You have some sort of computer model of the game field (like CAD).
-You have a 360 degree data field of distances of the closest object to you.
-You know how high your sensor is off of the ground
-You would have to throw out some data such as nearby robots as interference, but that might not be as hard as it sounds.
The computer would basically have to compare the range data it receives to the field structure file to find its exact location, like fitting a piece into a puzzle, roughly as in the sketch below. (Using 2013 as an example: if you know there is a wall to your left and a vertical post to your right, you know you are next to the pyramid.)
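A brute-force sketch of that comparison, assuming you have rasterized the field drawing into an occupancy grid and get the scan as (bearing, range) pairs; the resolution and search-window numbers are placeholders:

```python
import math
import numpy as np

CELL = 0.05  # occupancy-grid resolution in meters (assumed)

def score_pose(field_grid, scan, x, y, theta):
    """Count how many scan endpoints land on occupied cells of the field map.

    field_grid: 2D uint8 array, nonzero where the field drawing has a wall/structure
    scan:       list of (bearing_rad, range_m) pairs from the revolving sensor
    x, y, theta: candidate robot pose in field coordinates
    """
    hits = 0
    rows, cols = field_grid.shape
    for bearing, rng in scan:
        ex = x + rng * math.cos(theta + bearing)
        ey = y + rng * math.sin(theta + bearing)
        c, r = int(ex / CELL), int(ey / CELL)
        if 0 <= r < rows and 0 <= c < cols and field_grid[r, c]:
            hits += 1
    return hits

def match_scan(field_grid, scan, guess, search=0.5, step=0.05, dtheta=math.radians(5)):
    """Search a small window of poses around a dead-reckoning guess, keep the best."""
    gx, gy, gt = guess
    best, best_pose = -1, guess
    for x in np.arange(gx - search, gx + search, step):
        for y in np.arange(gy - search, gy + search, step):
            for t in np.arange(gt - math.radians(20), gt + math.radians(20), dtheta):
                s = score_pose(field_grid, scan, x, y, t)
                if s > best:
                    best, best_pose = s, (x, y, t)
    return best_pose
```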

This system would not be the only source of location awareness; there is some degree of dead reckoning to be used as well. Your robot starts out knowing it is roughly in your autonomous position and fine-tunes that based on what it actually sees around it. You don’t have to rely on a gyro for rotation, because with your range sensor (and an encoder on the revolving mount) you know the post is directly in front of the robot, so you are facing the pyramid. Knowing your robot’s heading and speed, you can guess your next location, then double-check and fine-tune it against your range data.
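A minimal sketch of that predict-then-correct loop; the correction gain is a made-up tuning constant, and angle wrapping is glossed over:

```python
import math

def predict(pose, speed, yaw_rate, dt):
    """Dead-reckoning guess: advance the last (x, y, theta) pose by heading and speed."""
    x, y, theta = pose
    theta += yaw_rate * dt
    return (x + speed * dt * math.cos(theta),
            y + speed * dt * math.sin(theta),
            theta)

def correct(predicted, measured, gain=0.3):
    """Nudge the prediction toward the range-sensor fix (gain is a tuning guess;
    a real implementation would also wrap the angle difference)."""
    return tuple(p + gain * (m - p) for p, m in zip(predicted, measured))
```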

Factoring piece and robot interference out of the system can be done, since at another stage of the operation you have to know where the pieces and robots are anyway, so you can ignore data from those directions.

Think of how experimental robotic vehicles are working. They are tackling these same challenges and succeeding this very second. While Google’s self driving cars don’t have to track Frisbees, they do have to identify and track other cars with us dinguses behind the wheel.

Would it be hard? Yes.
Would it be impractical? Probably.
Could it be done in 6 weeks? Probably not.
Is it innovative? Absolutely.

I’ve wanted to see this done forever! I spent all summer trying to come up with a list of characteristics and sensors that would aid in doing this. The best thing I’ve thought of is using a NIR time-of-flight (TOF) camera with a 360-degree panoramic lens to obtain a point cloud that surrounds the entire robot. (A 360-degree panorama looks like this, which can be unfolded to look like this. Vertical lines are preserved without distortion, making it possible to infer the locations of rectangular targets.) Apparently, it should be relatively easy to calibrate the image-processing algorithm to account for the additional distance added by the lens. Of course, almost all TOF cameras are way outside a FIRST budget (I’m currently searching for an affordable monocular TOF camera to build a lens for, and the resources to do so). In addition to that, astral navigation is starting to pique my interest as a sensing option.

So Hunter,
I noticed that you are a guru at vision programming, and I am wondering where you learned it. I also have a project to finish before I graduate: making our robot fully autonomous, even able to defend against other robots. Before that, I want to make a fully autonomous Nerf gun :D.

I’d like to see how this goes! It seems to be quite a challenge! Also, how do you install OpenNI? I cannot find GraphViz in the apt repositories :frowning:

Why not follow the bumpers based on their alliance colors and numbers?
Then again, as you mentioned, there is a slight chance of bumpers falling off…

We have a subgroup working on implementing the idea of tracking robots on the field based on their bumpers, and from that gathering data (how fast they move, etc.) and putting it into an onboard database. With the collected data, we’d be able either to send it to our scouting database after the match, or somehow have the robot use it to decide whether to avoid or confront opposing robots in future matches. I know I didn’t elaborate much on the implementation, mostly because we’ve just started gathering ideas. We’ll definitely be using a second onboard processor, like a Raspberry Pi running OpenCV, for probably one camera (which camera is still up in the air). So yeah, we probably won’t have it fully operational for this season, but probably next season, unless anyone would like to collaborate :smiley:
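As a starting point, something like the OpenCV sketch below is the kind of thing we have in mind for picking out bumper-colored blobs. The HSV thresholds and minimum area are guesses that would need tuning for real field lighting (this uses the OpenCV 4 findContours signature):

```python
import cv2
import numpy as np

# Rough HSV ranges for blue and red bumpers -- guesses that need tuning for your
# camera and arena lighting. Red wraps around hue 0, so it needs two ranges.
BLUE_LO, BLUE_HI = np.array([100, 120, 60]), np.array([130, 255, 255])
RED1_LO, RED1_HI = np.array([0, 120, 60]),   np.array([10, 255, 255])
RED2_LO, RED2_HI = np.array([170, 120, 60]), np.array([180, 255, 255])

def find_bumpers(bgr_frame, min_area=400):
    """Return bounding boxes of candidate red and blue bumpers in a BGR frame."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    masks = {
        "blue": cv2.inRange(hsv, BLUE_LO, BLUE_HI),
        "red": cv2.inRange(hsv, RED1_LO, RED1_HI) | cv2.inRange(hsv, RED2_LO, RED2_HI),
    }
    boxes = {}
    for color, mask in masks.items():
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes[color] = [cv2.boundingRect(c) for c in contours
                        if cv2.contourArea(c) > min_area]
    return boxes
```

Reading the team number off the bumper is a much harder (OCR) problem; color plus position over time is probably the realistic first step.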

This idea has come up several times.

I think it’s an interesting off-season project if it is focused on how many points your robot can score during a match without any driver interaction, kind of like VEX’s autonomous skills challenge.

However, I don’t think that it will ever be a good idea to go into a build season with this as your goal. If you’re excited about autonomous operation, use that to make your human controlled robot better and have your robot’s autonomous mode be the best in FRC.

Udacity has a class taught by Sebastian Thrun on how to program a self-driving car. This class would be a good start if you really want to learn how to make an autonomous robot. Here is the link to the class.

-Hugh

My first reaction to this thread was that it sounds like PhD-thesis-level difficulty. However, I think that with the correct robot design this is very doable. For example, if you had built a pure 30-point hanger last year, it would have been totally feasible to make it fully autonomous, and it would have required much less programming expertise than his team appears to have.

Is GPS legal in FRC? I don’t know how accurate it is, but if it were accurate to a few feet, maybe it could be used for positioning purposes, though it’d need to be calibrated for the playing field. Another thought is overcoming defense. I’ve seen some pretty cool autonomous routines that hold their ground even when they get hit, like 987 last year, but wouldn’t getting it back on track to the accuracy needed be very challenging?

I experimented with a Kinect and libpcl (Point Cloud Library) over the summer. Code is here:
https://github.com/FRC-Team-4143/4143pclpyramid

It has only been tested on Linux.

It is just a proof-of-concept. The code can tell the orientation and distance to the pyramid, but would usually fail if anything else was in the field of view.

I don’t really know which comment to start replying to, so I am choosing this one. I have written two variations of a vision program that gives the camera pose, that is, the x, y, z displacement with respect to an object as well as pitch, roll, and yaw (so it can either replace a gyro or check the gyro’s solution to see how accurate it is). In Rebound Rumble, we could always see the target, no matter what. If my program only saw one hoop, it would calculate where the other three were. We can use the vision solution as an index of where we are on the field; we did this in Rebound Rumble too. I did not use full pose there, but rather reduced the equations (by keeping roll and yaw constant, with pitch calculable by other means) to a basic trig problem, which gave the same result. We were an FCS (full-court shooter). When a robot came to block us, our driver would push a button, the robot would turn until the vision solution read that we were x degrees to the right of the three-point goal, and the turret would lower, so we were aiming at the left two-point goal relative to the three-pointer. We also had a button for the right two-pointer, so we could choose which one to shoot at.
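For anyone who wants to try the pose part themselves, here is a bare-bones sketch using OpenCV’s solvePnP. The target dimensions and camera intrinsics below are placeholders, not our real numbers, and the Euler-angle extraction shown is only one of several conventions:

```python
import math
import cv2
import numpy as np

# Placeholder target size in inches -- substitute the real vision-target dimensions
# from the game manual.
TARGET_W, TARGET_H = 54.0, 12.0

# 3D model of the target corners in its own frame (Z = 0 plane), ordered to match
# the detected image corners: top-left, top-right, bottom-right, bottom-left.
OBJECT_POINTS = np.array([
    [0.0,      0.0,      0.0],
    [TARGET_W, 0.0,      0.0],
    [TARGET_W, TARGET_H, 0.0],
    [0.0,      TARGET_H, 0.0],
], dtype=np.float32)

# Assumed pinhole intrinsics from a one-time camera calibration (values made up here).
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]], dtype=np.float32)
DIST = np.zeros(5, dtype=np.float32)  # pretend lens distortion is negligible

def camera_pose(image_corners):
    """image_corners: 4x2 detected target corners, same order as OBJECT_POINTS.

    Returns (tvec, (yaw, pitch, roll)): target translation in camera coordinates
    and Euler angles (degrees) recovered from the rotation matrix.
    """
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS,
                                  np.asarray(image_corners, np.float32), K, DIST)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    # ZYX Euler-angle extraction; the convention you want may differ.
    yaw = math.degrees(math.atan2(R[1, 0], R[0, 0]))
    pitch = math.degrees(math.atan2(-R[2, 0], math.hypot(R[2, 1], R[2, 2])))
    roll = math.degrees(math.atan2(R[2, 1], R[2, 2]))
    return tvec.ravel(), (yaw, pitch, roll)
```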

Moving on. The new vision programmers, headed by a new mentor, just finished a cascade-training project. It uses stereo imaging to calculate distance instead of a depth camera, but it still gets the point across. Our LabVIEW programmer (yes, singular, sadly) has been working on preprogrammed maneuvers, such as a figure eight (that one was just for fun). Others include a simple turn slightly to the right, going around an object, then getting back onto the original path.
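A minimal sketch of the stereo-distance idea using OpenCV’s plain block matcher, assuming the two images are already rectified; the focal length and baseline are placeholder values:

```python
import cv2
import numpy as np

# Assumed rig geometry -- replace with values from your stereo calibration.
FOCAL_PX = 700.0    # focal length in pixels
BASELINE_M = 0.10   # distance between the two cameras in meters

# Plain block matcher; numDisparities must be a multiple of 16, blockSize odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def depth_from_pair(left_bgr, right_bgr):
    """Return a depth map in meters from a rectified left/right image pair."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]            # Z = f * B / d
    return depth
```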

Our thought process for moving consistently is an interpolation table: if A* outputs a vector of a given length, the robot will drive a corresponding speed, and our vision will tell us when we have gone that far. (We are going to multithread the XU to get as many solutions as possible and ensure minimal lag.)
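The table itself is trivial; something like this, with made-up numbers:

```python
import numpy as np

# Hypothetical lookup table: A* segment length (meters) -> commanded drive speed (ft/s).
SEGMENT_LEN = [0.5, 1.0, 2.0, 4.0, 8.0]
DRIVE_SPEED = [3.0, 5.0, 8.0, 12.0, 15.0]

def speed_for_segment(length_m):
    """Linearly interpolate a drive speed for a path segment of the given length."""
    return float(np.interp(length_m, SEGMENT_LEN, DRIVE_SPEED))
```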

It all depends on the challenge, however. I feel a great year to have done this would have been Logomotion.

I’ll have a paper posted about the 2013 vision program. I have to write it for an independent research class, get it published, and submit it to the science fair. It will include the math behind the program and a detailed explanation of what it can be used for. I’m already 9 pages in and nowhere near done. I posted the 2012 paper on here too (from the same class):
http://www.chiefdelphi.com/media/papers/2797

I assume the UMSTL professors are going to want some form of documentation on it as well, even if we don’t succeed, so another student or two and I will write that too. I do not know the time frame on that, however.

Some backstory to this: we built a demo West Coast drivetrain for practice and to get familiar with it, because we have always built tall and heavy robots (nothing wrong with that; we have done great with them in the past), but we had zero experience with a short, ~90 lb, 15 ft/s robot, so we wanted to try it out. We got it done in August, so we had nothing to do. In November a mentor suggested a mini challenge.

Here it is: two sides, each with a 5-foot semicircle on the short side against the wall. In the center of each semicircle, raised on the wall, is either the 2012 target or the 2013 target. In the middle of the field are traffic cones and a defending robot. To score points the robot has to go from semicircle to semicircle (there and back), and points are deducted if you push over a traffic cone. We had everything programmed on the vision side and almost had the path planning worked out in that short time, but sadly we never got to implement it. Our simulations worked, though: we could feed in our XZ image of the depth map, A* would find a path, and we’d send it to the cRIO as a series of vectors.
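For reference, a compact version of the A*-over-a-grid step looks roughly like this (4-connected grid, Manhattan heuristic; the vector packaging at the end is just one way you might format the path for the cRIO, not necessarily what we sent):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D occupancy grid (0 = free, nonzero = blocked), 4-connected.

    grid is indexed grid[row][col]; start and goal are (row, col) tuples.
    Returns the path as a list of cells, or None if the goal is unreachable.
    """
    def h(a, b):                       # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start, goal), 0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue                   # stale queue entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_set, (ng + h((nr, nc), goal), ng, (nr, nc)))
    return None

def path_to_vectors(path):
    """Collapse a cell path into [direction, cell_count] run-length vectors."""
    vectors = []
    for (r0, c0), (r1, c1) in zip(path, path[1:]):
        step = (r1 - r0, c1 - c0)
        if vectors and vectors[-1][0] == step:
            vectors[-1][1] += 1
        else:
            vectors.append([step, 1])
    return vectors
```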

I’ve addressed the errors with gyros. We have had some trouble with gyros in the past with the reading climbing (drifting) over time. We had a quick fix of doing a gyro reset, but that was only temporary. So instead, we can use the camera pose to tell us our angle of rotation about all three axes, as well as our position.

Addressing your “detecting, tracking, and interacting with game pieces and scoring zones” first: Simbotics did this in Logomotion (on Einstein, mind you). They had a three-ubertube autonomous, and the robot had to be aware of where it had already hung the previous ones. That blew my mind when I saw it as a freshman; it was probably the most influential and eye-opening moment of my FIRST robotics career as a student. (I might not be much of a mentor while in college, but I will be back for some team once I get my life figured out and underway.) So it has been done. Also, I’ve written code that tracked the frisbees before, and our cRIO programmer set it up so that when he pushed a button, the robot would go to the nearest [insert colour here] frisbee autonomously, but… we didn’t have a means of picking it up, so we had to forget about it. (http://www.chiefdelphi.com/media/photos/39015)

Moving on
“maintaining tracking and location awareness of other robots on the field”
That is where camera pose estimation comes in handy. I can know exactly where the camera (and therefore the robot) is on the field with respect to an object (the target). My mentor created a simulation to test whether our camera could see the three-point goal from the feeder station. He used the PnP method of camera pose estimation to recreate the field by inputting displacement vectors and the 3D world coordinates of features of the field (the two-point goals, the three-point goal, and the pyramid by the target). It proved that yes, we can see the three-pointer from the feeder station, barely. The same thing can be applied in real time: you can hard-code the coordinates of the pyramid into A*, so the program will automatically avoid it. As for awareness of other robots on the field, it will be very difficult to tell friend from foe, but if teams are generous enough to let us “learn” their robots via cascade training, then we can do it. I don’t really see the point, however; a completely autonomous robot will be very independent, so it is better to assume the other robot will not get out of your way.
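Hard-coding the pyramid into the planner can be as simple as stamping its footprint, inflated by the robot’s radius, into the occupancy grid before A* runs. A tiny sketch; the resolution and radius are placeholders:

```python
CELL_M = 0.05          # grid resolution in meters (assumed)
ROBOT_RADIUS_M = 0.45  # half the robot footprint plus a safety margin (assumed)

def block_rectangle(grid, x_min, y_min, x_max, y_max):
    """Mark a hard-coded field obstacle (e.g., the pyramid footprint, in meters)
    as blocked in a NumPy occupancy grid, inflated by the robot radius."""
    r0 = max(0, int((y_min - ROBOT_RADIUS_M) / CELL_M))
    r1 = min(grid.shape[0], int((y_max + ROBOT_RADIUS_M) / CELL_M) + 1)
    c0 = max(0, int((x_min - ROBOT_RADIUS_M) / CELL_M))
    c1 = min(grid.shape[1], int((x_max + ROBOT_RADIUS_M) / CELL_M) + 1)
    grid[r0:r1, c0:c1] = 1
    return grid
```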

“A robot falls into pieces” — this is actually an interesting question. The other student first started on depth while I was making the algorithm for finding the corner coordinates of a square more accurate over the summer. What he does is take a depth image with nothing in front of the camera except the floor, with the camera held fixed: it cannot be moved up or down, or tilted. That image becomes the calibration image. Then we can put anything in front of the camera and it will see it. I just posted a picture on here, but it hasn’t loaded yet. As you can see in it, there is also a bookshelf and a couch in the image, but the program only sees two objects: the ones that weren’t there in the calibration image. Edit: http://www.chiefdelphi.com/media/photos/39264?
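The calibration-image trick boils down to a depth-based background subtraction, roughly like this (the thresholds are guesses, and this uses the OpenCV 4 findContours signature):

```python
import cv2
import numpy as np

DIFF_MM = 100   # a pixel counts as "new" if it is 10 cm closer than in the calibration frame
MIN_AREA = 500  # ignore tiny blobs (sensor noise)

def find_new_objects(calibration_depth, current_depth):
    """Compare the live depth frame to the empty-floor calibration frame and
    return bounding boxes of anything that wasn't there during calibration."""
    cal = calibration_depth.astype(np.int32)
    cur = current_depth.astype(np.int32)
    valid = (cal > 0) & (cur > 0)                        # depth of 0 = no reading
    closer = np.zeros(cal.shape, dtype=np.uint8)
    closer[valid & (cal - cur > DIFF_MM)] = 255          # something now sits in front of the floor

    closer = cv2.medianBlur(closer, 5)                   # knock out speckle noise
    contours, _ = cv2.findContours(closer, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > MIN_AREA]
```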

"The robot gets into a position it cannot figure its way out " Yes, we will have a manual override. If it even faulters, the driver could take over. I think it’d be really cool to do a cycle completely autonomously once a game or so. We are worried about the speed of a star and our collision detection algorithm. If they are slower than the human reaction, then this would not be justified in doing (but that is not going to prevent us from trying!).

“I can foresee a fully autonomous robot getting a lot of fouls, especially one operating in some sort of defensive mode.” Yes. We are not planning on doing defense as of now. There is nothing wrong with playing defense; it wins a lot of games. Our team has an unspoken motto: “better to have tried, failed, and learned.” I love seeing rookie teams (and non-rookie teams, for that matter) build defensive bots. Who cares if they aren’t seeded first? Only one team will be, and we have never been in that position yet have done very well in the past. The students and mentors learned things and had fun; they became inspired. We’ve been playing around with a lot of different things for the past several years, and this year we’re going to attempt to put them all together.

The other student and I briefly talked about using a spinning camera or a 360-degree one, but decided against it. We are probably going to use two depth cameras, one on each side the robot can move (if we do use West Coast drive; it all depends on the challenge), and an RGB camera to track a target if applicable. (Sad day in my mind: we are no longer going to be using the Kinect. We are switching to the ASUS Xtion for depth and a webcam for target tracking. It has been a good two years of developing with you, Kinect. You might finally be plugged into my Xbox for the first time ever now.)

A lot of people (OK, like three) on here are talking about robotic cars. I competed at ISWEEEP and was next to the Romanian kid who (got first and) made a completely self-driving car with the OpenCV libraries. We’ve been talking since then and have been working on some projects together.

The ideal method would be doing a SLAM of the environment 30 times a second, but that is impossible for us, so we have to keep our data flow limited but useful. The two depth cameras should be plenty. The A* planner will have to decide which depth feed to use, but that won’t be too hard.

I’ve already discussed knowing position and rotation as well. One issue that could occur is not getting a solution from the RGB camera; then we would have to rely on our gyro. A team at Terre Haute last year spent 30 minutes with us going over how our vision worked just so they could block it and keep us from shooting at all three targets. They just put up a pole to keep the target from being a closed contour. It was really clever; I wasn’t even mad. I was lazy and didn’t fall back to a bounding rectangle around the contours when there wasn’t a solution (only do it when there isn’t a solution; it will shave a few microseconds off the program). That was my fault. Oh well. I learned my lesson.

“Would it be hard? Yes.
Would it be impractical? Probably.
Could it be done in 6 weeks? Probably not.
Is it innovative? Absolutely.”

Hard? Our team is up for the challenge. We aren’t the most famous team in Missouri (cough cough 1986), and have only actually won one regional in our existence, but we are gaining attention through our software.

Impractical? To the extreme. I just want to do one operation completely autonomously during a game: to see our drivers let go of the controls while we are still scoring points.

6 weeks? No. That’s why we build two robots XD. And I may be going into the hospital soon for IVs for two weeks, so that’s what I’ll be doing during that time. (No worries about my health; I just have a bug I can’t shake.)

Yes, we are still going for it. We are going to do cascade training on our six-wheel robot so it can act as a teammate during the build season. We will track the wall (obviously) and use it to know where we are on the field. Simple math lets you calculate the speed of a ball on the ground, and then the robot can autonomously go to where the ball will be in order to pick it up.
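The “simple math” is really just two timestamped detections and a constant-velocity extrapolation, something like this sketch (field coordinates in meters; the straight-line, constant-speed assumption is mine):

```python
def ball_velocity(p0, p1, dt):
    """Ground-plane velocity from two timestamped (x, y) detections, dt seconds apart."""
    return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)

def intercept_point(ball_pos, ball_vel, travel_time):
    """Where the ball will be after the robot's estimated drive time (straight-line model)."""
    return (ball_pos[0] + ball_vel[0] * travel_time,
            ball_pos[1] + ball_vel[1] * travel_time)
```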

Cascade-train on a friendly robot, calculate their speed, and pass them the ball where they will be. Find the ball in mid-air and automatically go to where it will land to catch it.

Lastly, in a mode where our other two alliance robots can’t do anything (broken or otherwise): find the ball, shoot it over the truss, get it back, and shoot it into the target.

I’m really tempted to write all the code real quick, but… I’m a senior, and it would really not be good if no student knew how to do this next year.