View Full Version : Programmers: I Have A Challenge For You
davidthefat
29-03-2010, 19:09
Next year, no matter the game, I challenge you to make your robot fully autonomous. That means autonomous during the tele-operated period too. Anyone up for that challenge? It would test your skills and dedication to the robot. No more "drive up 3 feet, kick, repeat" type of coding; it would have to be a lot more thought out and would have to use real-life robot coding. It's not really a robot if it's not autonomous; it's just an over-glorified RC car if it's human-controlled. And if you are still skeptical: FIRST pretty much writes the libraries so that even a guy who picks up a programming book can code the robot in a week or even less... Well, IMHO you can't learn programming from a book. Sure, you may learn the language and syntax, but you need experience to actually program. Programming comes with experience, and the way FIRST sets things up, you get minimal experience as a programmer programming these robots. I will be announcing to my club next year that we want to try this. Just post your opinions, and I will add you to the list if you want to take the challenge.
Teams That Are Willing To Take The Challenge:
*Team 589 (Just Me As Of Now)
*Team 33
*Team 2503
*Team 1086
I will give you an internet high-five even if you just attempt this.
Rion Atkinson
29-03-2010, 19:14
I'm not a programmer, but I do know that because of the way the field system works, you will have to have the program detect that it is now in teleoperated mode, and then begin a program.
But why not, instead of fully autonomous, make it mostly autonomous? Meaning the only thing that isn't autonomous is driving across the field. So you drive a long distance, flip a switch, and then it does the rest on its own. I'm not a programmer, but I have a feeling FULLY autonomous may be a little too hard. (My programmer has less hair than he had at the beginning of the season...)
Just my $0.02
Good luck!
-Rion
davidthefat
29-03-2010, 19:21
I'm not a programmer, but I do know that because of the way the field system works, you will have to have the program detect that it is now in teleoperated mode, and then begin a program.
But why not, instead of fully autonomous, make it mostly autonomous? Meaning the only thing that isn't autonomous is driving across the field. So you drive a long distance, flip a switch, and then it does the rest on its own. I'm not a programmer, but I have a feeling FULLY autonomous may be a little too hard. (My programmer has less hair than he had at the beginning of the season...)
Just my $0.02
Good luck!
-Rion
I was thinking of the switch too: if the robot is acting really dumb and going away from the action, you can flip a switch to get into the real teleop mode... Teleop mode is way easier to code than auto, so it will be a breeze.
Wow..
That would be pretty challenging. I think it would have to involve AI... One would have to code in game strategy, get the robot to line the ball up to the goal, avoid penalties, use the camera to identify balls and enemy robots...
davidthefat
29-03-2010, 19:28
Wow..
That would be pretty challenging. I think it would have to involve AI... One would have to code in game strategy, get the robot to line the ball up to the goal, avoid penalties....
:rolleyes: Don't use the word "ball"; FIRST comes up with some wacky games...
During this off-season I am going to be working on finishing up my ADK (autonomous development kit).
The plan is to try to have a fully autonomous robot as a proof of concept.
My personal goal is to make a programming platform that even rookie teams could use to have a decent autonomous mode.
The more teams we get to have autonomous, the more FIRST will work to help us improve it (personally, I would love for them to add ZigBee to the KOP so robots could communicate).
cheers
davidthefat
29-03-2010, 19:55
During this off-season I am going to be working on finishing up my ADK (autonomous development kit).
The plan is to try to have a fully autonomous robot as a proof of concept.
My personal goal is to make a programming platform that even rookie teams could use to have a decent autonomous mode.
The more teams we get to have autonomous, the more FIRST will work to help us improve it (personally, I would love for them to add ZigBee to the KOP so robots could communicate).
cheers
That's great, but what language(s) will it be in? Because I think if FIRST is going to distribute it, you will probably have to port it to the other languages.
ProgramLuke
29-03-2010, 19:58
I don't think the drivers would go for that otherwise I would SO do that! :rolleyes:
I'm nearly certain that 1756 won't be doing full autonomous for the whole match. That is, unless that's the challenge for next year. If it is the challenge next year, we'll just see how that goes. As always, we will probably try to incorporate autonomous actions into teleop.
Best of luck to you if you can run autonomous the whole time. Your drivers will look pretty useless, though.
davidthefat
29-03-2010, 20:18
I have some pretty radical ideas. I wanted to do full autonomous this year but never got to, since I was new this year and no one really listens to you... I think I want to build a four-legged robot that can legally go to the competition by my senior year.
EricLeifermann
29-03-2010, 20:23
A programmer mentor and I talked about a fully autonomous robot for 2008 Overdrive, but we couldn't convince the other mentors or the students to go for it. He had most of the code written by the end of the first week, while we were still deciding what we wanted to do.
Heh. That's a big challenge. Not impossible, but a big challenge indeed.
If I had a chance to do this for total fun (i.e. not in a competitive sense where points matter, ergo not at a regional), I think it would be fun, though I think it would be really hard to sense the environment around the robot. There's only so much that ultrasonic sensors, touch sensors, and a camera can pick up. I'm sure there are more advanced sensors, but there's a point where the robots look less and less like FIRST robots and more like DARPA cars with LIDAR and 5 different cameras plus infrared imaging.
Do I think it'd be neat? Yeah. Easy? Nope.
About the most automated thing I've done for the teleoperated mode is having the robot put the kicker in the correct position and backwind the rope we used to bring the kicker back. Probably not the autonomous we're thinking of, but I thought I'd add it in.
I still think it'd be fun. I want to play with some ultrasonic sensors and the camera during the off-season just to have fun and learn something new about autonomous.
-Tanner
davidthefat
29-03-2010, 20:34
Heh. That's a big challenge. Not impossible, but a big challenge indeed.
If I had a chance to do this for total fun (i.e. not in a competitive sense where points matter, ergo not at a regional), I think it would be fun, though I think it would be really hard to sense the environment around the robot. There's only so much that ultrasonic sensors, touch sensors, and a camera can pick up. I'm sure there are more advanced sensors, but there's a point where the robots look less and less like FIRST robots and more like DARPA cars with LIDAR and 5 different cameras plus infrared imaging.
Do I think it'd be neat? Yeah. Easy? Nope.
About the most automated thing I've done for the teleoperated mode is having the robot put the kicker in the correct position and backwind the rope we used to bring the kicker back. Probably not the autonomous we're thinking of, but I thought I'd add it in.
I still think it'd be fun. I want to play with some ultrasonic sensors and the camera during the off-season just to have fun and learn something new about autonomous.
-Tanner
I'm thinking of using a couple of IR sensors (one for each side) and a couple of ultrasonic ones too, to track the closer stuff. And a couple of gyros (yes, a couple) to keep the robot from going all crazy and looking like a drunk driver or something; it has to go straight at least.
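For what it's worth, the gyro part of this is the easiest bit to sketch. Below is a minimal proportional heading-hold, assuming a gyro that reports heading in degrees; the kP value, names, and sign conventions are all made up for illustration, not taken from any team's real code:

```cpp
#include <algorithm>

// Left/right motor commands, each in [-1.0, 1.0].
struct DriveOutput { double left; double right; };

// Proportional heading correction: bias the two sides of the drive
// so the robot steers back toward the target heading. kP is an
// assumed tuning constant; a real robot would tune it on carpet.
DriveOutput driveStraight(double throttle, double targetHeadingDeg,
                          double gyroHeadingDeg, double kP = 0.03) {
    double error = targetHeadingDeg - gyroHeadingDeg;
    double correction = kP * error;
    // Add the correction to one side and subtract it from the
    // other, clamping to the legal motor range.
    double left  = std::clamp(throttle + correction, -1.0, 1.0);
    double right = std::clamp(throttle - correction, -1.0, 1.0);
    return DriveOutput{left, right};
}
```

With two gyros you could average (or cross-check) the readings before feeding them in; the control math stays the same.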
I'm thinking of using a couple of IR sensors (one for each side) and a couple of ultrasonic ones too, to track the closer stuff. And a couple of gyros (yes, a couple) to keep the robot from going all crazy and looking like a drunk driver or something; it has to go straight at least.
I've done one thing with one gyro and it's worked quite well. I didn't actually use it for anything, but what it showed was quite neat.
http://www.youtube.com/watch?v=fdKXQo65T9E
From what I know, using IR/ultrasonic sensors will just give you a relatively blurry image of the outside world. What you do with that data is the hard part. How do I differentiate between a robot, a wall, and a goal? Friend/Foe? Acquisition of game element?
I know of ways to do it, but it'd be complicated. As I said earlier, it'd be challenging. Would I have fun doing it? Oh yeah...
-Tanner
A wise mentor once told me:
The smart have a great understanding of the things they know.
The wise have a great understanding of the things they do not know.
David, as a teacher I would NEVER stop a student from pursuing a project as ambitious as this. Based on your other threads, you definitely strike me as a VERY smart young individual. My only word of advice is that I hope there can be a very wise (and hopefully smart!) mentor to provide some wisdom along the way. I guarantee it will be the difference between having the most hard-working, fulfilling, and successful 6 weeks of your life, versus a few moments of undirected enthusiasm followed by sustained frustration, eventually giving up, and resorting to playing Madden on your handheld.
If you can find that mentor and successfully work with them, I'll certainly give you an internet high-five regardless of whether you actually produce a functioning fully-autonomous robot...
Deal?
Chris is me
29-03-2010, 20:41
1024 could have done that in 2008.
I don't see why I should do this when I could spend my time winning instead, especially because game pieces are basically in random locations and 5 other robots are on the field too. Implementing fully autonomous control, with any semblance of strategy, isn't going to happen.
davidthefat
29-03-2010, 20:47
A wise mentor once told me:
The smart have a great understanding of the things they know.
The wise have a great understanding of the things they do not know.
David, as a teacher I would NEVER stop a student from pursuing a project as ambitious as this. Based on your other threads, you definitely strike me as a VERY smart young individual. My only word of advice is that I hope there can be a very wise (and hopefully smart!) mentor to provide some wisdom along the way. I guarantee it will be the difference between having the most hard-working, fulfilling, and successful 6 weeks of your life, versus a few moments of undirected enthusiasm followed by sustained frustration, eventually giving up, and resorting to playing Madden on your handheld.
If you can find that mentor and successfully work with them, I'll certainly give you an internet high-five regardless of whether you actually produce a functioning fully-autonomous robot...
Deal?
:)
Well, one of my programming mentors is very supportive of my ideas. I wanted to use C++ this year, but he didn't trust my skills to pull it off since I was brand new and just a sophomore. By the end of the six weeks, though, I had managed to get his attention, and he's letting me use C/C++ next year. The other one thinks I will get extremely frustrated trying to use C++ and move back to Java. I learned C++ as my first language and have been using it for four years; I don't think I will give up on it. Yeah, both are great mentors, but I have yet to deliver this message to them. I told them to let all the other programmers code in Java and I will just outperform them with my C++ code.
slavik262
29-03-2010, 20:56
I don't see why I should do this when I could spend my time winning instead, especially because game pieces are basically in random locations and 5 other robots are on the field too. Implementing fully autonomous control, with any semblance of strategy, isn't going to happen.
Essentially what I was going to say. Ambition is never a bad thing - it should always be a reward to go above and beyond. However, I think you underestimate how much work fully autonomous code is. Granted, I've never done it myself, but imagine how much you have to do. I'm not doubting anybody's programming ability, but you have to face the facts: even if you do extensive work during the off-season, you only have six weeks to tailor all of your code to a game. Good AI that adapts to the current situation is difficult enough to write by itself. Making a robot respond to it adds another very large layer.
A much more practical goal would be semi-autonomous systems that let the robot "help" the driver accomplish certain tasks, e.g. lining up for a shot.
It's amazing how much automation creeps up as the years go by...
In 2001 (back in the day of PBASIC) we had a robot that could balance itself on the ramp at the push of a button. A weight on a pot was all it needed. That's autonomy, however basic it is.
Then, in 2003, FIRST added Autonomous mode. I've heard stories from former members and mentors of back then ("Back in 2003, the robot entered Autonomous and hit a wall at full speed") and how bad it was programming it ("We had 63 bytes of RAM and spent more code caching things into EEPROM to save RAM than actually doing stuff").
In 2004 we had a decent processor and real autonomous programs began. That year we also had an automatic transmission (autonomy in teleop): it automatically shifted based on the RPM of the wheels vs. the commanded output speed.
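That sort of auto-shift logic is small enough to sketch. Here is a hedged C++ version with hysteresis so the gearbox doesn't hunt around a single cutoff; the RPM thresholds are invented for illustration, not the 2004 robot's real numbers:

```cpp
// Two-speed automatic shifting keyed off wheel RPM, in the spirit of
// the 2004 transmission described above. update() returns true when
// the robot should be in high gear.
class AutoShifter {
public:
    bool update(double wheelRpm) {
        // Separate up/down thresholds (hysteresis) prevent rapid
        // shift chatter when the speed hovers near one cutoff.
        if (!highGear_ && wheelRpm > kUpshiftRpm) {
            highGear_ = true;
        } else if (highGear_ && wheelRpm < kDownshiftRpm) {
            highGear_ = false;
        }
        return highGear_;
    }
private:
    static constexpr double kUpshiftRpm = 400.0;   // assumed value
    static constexpr double kDownshiftRpm = 250.0; // assumed value
    bool highGear_ = false;
};
```

The gap between the two thresholds is the design choice that matters: it trades a little shifting responsiveness for not slamming the dogs back and forth every loop iteration.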
Several of our other robots have had semi-autonomy in Teleop. 2007 had some pretty sweet arm software that I would say goes beyond the "rc car" definition (you could slide the end point in and out and it would calculate joint angles on the fly, as well as make sure it didn't hit itself on the way). This is my personal favorite robot of ours, so I went and wrote a LabVIEW demo to illustrate automation of the arm (combining some elements from the 2005 game, notably the ability to store game pieces).
This LabVIEW demo was known as "the belly manager" and basically allowed the operator to perform many tasks with only 4 buttons. The total actions: Get Floor, Get Human, Score Hi, Score Lo, Hold Hi, Hold, Belly1, Belly2, Belly3 (get and put belly), as well as several intermediate states. The operator requested a Score Hi, Score Lo, Get Human, or Get Floor, and the robot automatically stored game pieces in the belly when slots were available, and always tried to either empty or fill the claw. Based on the states of the claw or belly slots, it would either hold the arm in a high position that makes manipulation easy or fold down to a lower position to be more stable. With a single press of Score Hi or Score Lo, it would execute a sequence to move to the setup position (possibly many steps depending on starting position), "stab" the goal, drop the game piece, and return on a path that was as stable and efficient as possible. The "robot" would try to determine the operator's desired action (score, retrieve, move, etc.) and act accordingly. While this never exited the simulator, this kind of automation could easily be incorporated into FIRST robots.
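The flavor of that design can be sketched as a small state machine. The states and transitions below are a heavily stripped-down invention for illustration, not the actual belly manager:

```cpp
// One-button scoring as a state machine: each call to nextState()
// represents one completed motion, the way the real system chained
// many intermediate steps from a single operator request.
enum class ArmState { HoldLow, HoldHigh, SetupScore, Stab, Release };

ArmState nextState(ArmState s, bool scoreRequested) {
    switch (s) {
        case ArmState::HoldLow:    return scoreRequested ? ArmState::HoldHigh : s;
        case ArmState::HoldHigh:   return scoreRequested ? ArmState::SetupScore : s;
        case ArmState::SetupScore: return ArmState::Stab;     // lined up; stab the goal
        case ArmState::Stab:       return ArmState::Release;  // drop the game piece
        case ArmState::Release:    return ArmState::HoldLow;  // fold back down, stable
    }
    return s;
}
```

Once the sequencing lives in a table like this, adding intermediate states (or the belly-slot bookkeeping) is just more enum values and transitions, not more spaghetti.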
Radical Pi
29-03-2010, 20:59
This would be a fun project to tackle...except for the problem that in the past years our programming team has barely even had a day to work our magic with the robot, much less calibrate it for autonomous operation.
I know of a few teams that have implemented an "autonomous kicker" which, after pushing a button, will automatically align the robot with the goal, calculate the kicker force required to hit the goal, and then kick. It's a great mechanism (one that has caused me to receive complaints from our drivers...), but nowhere near the level you are talking about.
davidthefat
29-03-2010, 21:06
So no one here is actually going to take the challenge? Oh well, I think I will have a great time trying to figure this out...
edit: if you think this will not go over well with your team, just put your autonomous mode in one function that you call in both the real autonomous mode and teleop. If your driver does not want that, he can flip a switch so it's back to full teleop.
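Roughly, that "one function plus one if statement" structure could look like this. Every name here is hypothetical (this is not WPILib code), and a real program would call this from its periodic loop:

```cpp
// Driver inputs sampled each loop iteration (hypothetical names).
struct Inputs {
    bool overrideSwitch;   // driver flips this to reclaim control
    double joystickY;      // manual drive command
};

// Stand-in for the shared autonomous routine; the real one would do
// sensor-driven decision making. Returns a motor command.
double autonomousRoutine() {
    return 0.5;  // placeholder: cruise forward at half speed
}

// The "one if statement": manual control only when the driver asks
// for it during the teleop period; otherwise the same autonomous
// code path runs in both periods.
double motorCommand(const Inputs& in, bool inTeleopPeriod) {
    if (inTeleopPeriod && in.overrideSwitch) {
        return in.joystickY;       // plain teleop driving
    }
    return autonomousRoutine();    // shared with the auto period
}
```

The appeal of this layout is that the fallback is genuinely one branch: if the robot starts wandering away from the action, flipping the switch reverts to ordinary driving without any mode-juggling in the rest of the code.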
I would accept your challenge, but I am quite sure the mentors wouldn't like the idea of full autonomy when a human driver can do much better. It all depends on the next game, which apparently the GDC is already working on... So we just need Dave to add some extra punctuation to his sentences and we can get a nice head start on the automation.
As for the discussion of kicker automation, has anyone seen the Killer Bees robot? We have some automation on the kicker/ball-o-fier (suck button, kick button, distance knob) that results in the kicker recoiling to approximately the dialed-in number of feet (nonlinear scaling) and a kick sequence (reverse ball motor, kill kicker motor, pop clutch, wait, re-engage clutch, restore ball motor, restore kicker motor, wait out the rest of the 2-second delay). This is all very basic to us. We also have a P control loop on our arm, with a total of 8 different gains to use depending on the loading, direction, and other factors. This is a simple gain scheduler, another thing that is very basic to us. Yet many teams still struggle with software. I have reviewed the code of a couple of local teams and seen that both of them limited themselves to Teleop, Disabled, and Autonomous Independent and/or Autonomous Iterative. They didn't know how to use the most basic of functions, like feedback nodes (one had "Jim the Cluster" that they passed into the main loop so they could use the shift registers) or enums (a different team had a bool for each option). Many teams still need to overcome many of the basic programming challenges, so FIRST should not require more automation than these teams can handle. That isn't to say it's impossible, just something that the game won't require for a long time.
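A gain scheduler at that level really is simple. The sketch below picks one of four proportional gains based on load and direction; the gain values are invented placeholders, and the real arm loop described above used eight:

```cpp
// Gain-scheduled P control for an arm joint: the error math never
// changes, only the gain chosen for the current operating condition.
double armOutput(double setpointDeg, double positionDeg, bool loaded) {
    double error = setpointDeg - positionDeg;
    bool movingUp = error > 0.0;
    // Gravity fights the arm on the way up and helps on the way
    // down, and a game piece adds load, so each case gets its own
    // gain (all four values here are assumptions).
    double kP;
    if (loaded) kP = movingUp ? 0.8 : 0.3;
    else        kP = movingUp ? 0.5 : 0.2;
    return kP * error;
}
```

Scheduling on more factors (speed range, battery sag, arm angle) is just more branches or a lookup table; the point is that one P loop with condition-dependent gains often behaves far better than one compromise gain.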
davidthefat
29-03-2010, 21:31
I just emailed my main mentor/teacher (he's my AP Comp Sci teacher); hopefully he will allow this and not think it's too much of a mouthful for me.
theprgramerdude
29-03-2010, 21:35
This sounds like fun. Team 2503 is willing to do it (no one else can actually object, since I am the only programmer.)
The only problem would be convincing the rest of the team; they're kind of addicted to driving it.
EthanMiller
29-03-2010, 21:38
Very new to programming here, but I'm working on that for our Breakaway bot, hopefully portable to next year. Our drive team is retiring (seniors), so we can just tell the new guys that it's always been done that way.
We'll probably end up with a half-broken autonomous period and then teleop code, though. This year, a mistake in auto made the robot spin around for the entire auto period. It was at FLR... who remembers?
davidthefat
29-03-2010, 21:42
Very new to programming here, but I'm working on that for our Breakaway bot, hopefully portable to next year. Our drive team is retiring (seniors), so we can just tell the new guys that it's always been done that way.
We'll probably end up with a half-broken autonomous period and then teleop code, though. This year, a mistake in auto made the robot spin around for the entire auto period. It was at FLR... who remembers?
At first, I did not take the autonomous mode seriously; I was seriously considering spinning around for the whole period, until our team leader got the IR sensors and made me do the autonomous period.
theprgramerdude
29-03-2010, 21:47
On another note, does anyone know if it would be legal/possible to mount two cameras and route them back to the cRIO? They wouldn't have to be the Axis 206.
davidthefat
29-03-2010, 21:50
On another note, does anyone know if it would be legal/possible to mount two cameras and route them back to the cRIO? They wouldn't have to be the Axis 206.
Yes, I believe that you would have to get another camera and treat it as a digital input or something.
We'll probably end up with a half-broken autonomous period and then teleop code, though. This year, a mistake in auto made the robot spin around for the entire auto period.
I've made it go backwards. Autonomous errors tend to be followed by more "What is this Palardymalarchy?" than match losses, however. Apparently joysticks are messed up so that -1 = forward, and WPILib doesn't fix this, so -1 = forward. That's not what I would expect, so I made +1 = forward and -1 = reverse. Oops. I'm the only programmer, but I never make any major changes without the approval of the lead mentors (mostly Jim) or change any of the drive team's controls without asking them. Most of your team would probably be happier with human drivers that can react to issues, mentor suggestions, strategy, etc., and with regional wins, than with an innovation in control award. I'm not saying I wouldn't do full automation, but that FRC isn't ready for it yet.
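One way to keep that convention from biting anyone again is to normalize the axis in exactly one place. A tiny sketch with hypothetical names (not actual WPILib code):

```cpp
// On these joysticks, pushing the stick away from you reads -1.0 on
// the Y axis. Flip it once, at a single well-named boundary, so the
// rest of the program can think in "+1.0 = full forward" terms.
double forwardCommand(double rawJoystickY) {
    return -rawJoystickY;
}
```

Every other piece of drive code then uses forwardCommand() and never touches the raw axis, so a sign mistake can only live in one line.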
JamesBrown
29-03-2010, 21:58
Not to burst anyone's bubble, but there appears to be a severe lack of understanding of how large an undertaking autonomous programming really is. There are many programmers in this thread, some of whom have already said they are new to programming, who think this is feasible. I urge you all to rethink this and to take it in small steps: automate a task in teleop, e.g. automatic aiming.
I think it is important to improve the level of software development that we see in FIRST. However, the posts in this thread show a severe lack of appreciation for the difficulty of developing truly autonomous robots. I urge anyone considering this to talk to some of the more experienced software and controls mentors on your teams and on Chief Delphi about the feasibility of this. If you still think it is practical or possible, then rather than wasting a build season trying this, why not run exhibition matches at one of the fall off-season competitions? Anyone who is interested could write autonomous code and test the feasibility of this type of competition in that environment.
-James
davidthefat
29-03-2010, 22:01
Not to burst anyone's bubble, but there appears to be a severe lack of understanding of how large an undertaking autonomous programming really is. There are many programmers in this thread, some of whom have already said they are new to programming, who think this is feasible. I urge you all to rethink this and to take it in small steps: automate a task in teleop, e.g. automatic aiming.
I think it is important to improve the level of software development that we see in FIRST. However, the posts in this thread show a severe lack of appreciation for the difficulty of developing truly autonomous robots. I urge anyone considering this to talk to some of the more experienced software and controls mentors on your teams and on Chief Delphi about the feasibility of this. If you still think it is practical or possible, then rather than wasting a build season trying this, why not run exhibition matches at one of the fall off-season competitions? Anyone who is interested could write autonomous code and test the feasibility of this type of competition in that environment.
-James
Yes, I am aware of the difficulty of the problem at hand, but I have always learned that with good preparation, anything is possible. Before I even write a single line of code, I would have to do a lot of studying and pseudocoding. Like I said in previous posts, if all goes wrong, the driver can just press a switch to go into real teleop; that's just one if statement. Yeah, I always take programming in small steps: get the smaller things working, then work up.
efoote868
29-03-2010, 22:05
You know that DARPA ran its challenge twice before a team even completed it?
The challenge was (arguably) more straightforward: get your unmanned vehicle around a 150-mile track.
Teams also sank large sums of money into the project, and they had a year to prepare beforehand.
I say this because having a fully autonomous robot may be *impossible* in 6 weeks, unless of course your team can come up with a 469-style strategy.
In 2007, another programmer and I were able to make the robot *nearly* autonomous - the drive team only had to drive the robot to tubes, and drive the robot to the rack. The robot took care of controlling when the gripper mechanism opened (to grasp a tube), the robot took care of raising the tube to the dedicated height (level 1, 2, or 3 for the rack), and the robot sensed when the rack was within scoring distance (and shut, then released accordingly). After the tube was scored, the driver would back away, and the robot's mechanism would go down to the bottom level, and the wheels to grab the tubes would start spinning again.
This took two veteran programmers two weeks to accomplish, and the robot still wasn't fully autonomous.
My advice:
Do as much as you can in your robot, mask as much of the work as you can from the drivers, and you will have a successful time on the field. The less the drivers have to think about, the faster and better they can think about it.
To all of you looking to complete this challenge, good luck!
theprgramerdude
29-03-2010, 22:30
The main thing I want to try, however, is attaching some PC components as add-ons for image processing.
From a cognitive standpoint, a thinking, autonomous robot must have some of the same capabilities a human does. Currently, robots are dumb because they lack these capabilities for thinking, as well as the ability to sense accurately. DARPA challengers failed because they lacked the processing power to see and analyze in real time, something a human has the power to do. If we could harness an actual CPU (and maybe score a CUDA GPU, who knows?), an autonomous program is definitely feasible as long as the problem is approached from the correct angle, by trying to emulate a human's problem-solving skills. Otherwise, it'll eventually just encounter an anomaly and be unable to correct itself.
/rant, it's late at night :eek:
ideasrule
29-03-2010, 22:32
Oh my god. I'll be graduating next year and I'm fairly experienced in programming, but I've no idea how to do this. I think I'd have to spend a year reading about artificial intelligence, genetic algorithms, machine learning, and the like before I can even attempt this.
efoote868
29-03-2010, 22:53
DARPA challengers failed because they lacked the processing power to see and analyze in real time, something a human has the power to do.
I believe you'll find that the DARPA contestants had large, Jeep-styled vehicles filled with computers and sensors - more computing power and more sensors than your team will be able to afford.
They also had tens to hundreds of programmers and engineers trying to solve a very specific problem: avoid obstacles on a road.
More time, more manpower, and an *easier* challenge.
Take a portion of this year's game - defense. Defense is as simple as harassing a robot. A challenge that will keep you up for days is determining what is a robot and what is a field element. Another problem that will keep you up is determining which robot is the one you want to defend. The last problem is *how* you defend it.
All of these are decisions that a human can make very, very quickly: harass this robot, stay between it and the goal, and make sure it doesn't take a shot.
Vikesrock
29-03-2010, 23:03
I believe you'll find that the DARPA contestants had large, jeep styled vehicles filled with computers and sensors - more computing power and more sensors than your team will be able to afford.
This is correct. Stanley, the winning entry in the 2005 DARPA Grand Challenge, ran off of six 1.6 GHz Pentium M laptops stored in the trunk.
ideasrule
29-03-2010, 23:41
AI is still a long way off from being able to emulate the human brain, so I doubt that any autonomous program, even if run on the world's fastest supercomputer, could approach the skill of the worst driver.
<off-topic speculation>Soon after getting into AI, you'd have to get into how the human brain works, and what makes it sentient. It's possible that humans have only a very dim consciousness at birth that grows stronger as years' worth of experiences are built into the hardware of the brain. In that case, how would you give a robotic program those many years of training necessary for common-sense tasks that humans can easily do?</off-topic speculation>
With a few photoeyes, I'd definitely be up for the challenge! :)
A challenge that will keep you up for days is determining what is a robot, and what is a field element. Another problem that will keep you up is determining which robot is the one you want to defend.
Both can be accomplished by looking for the bumpers.
http://letsmakerobots.com/node/3843
(obviously would have to be ported to Java ME)
davidthefat
30-03-2010, 00:04
Both can be accomplished by looking for the bumpers.
http://letsmakerobots.com/node/3843
(obviously would have to be ported to Java ME)
Do you realize that the code was run on a 2.6 GHz processor? The cRIO is not that fast.
Do you realize that the code was run on a 2.6 GHz processor? The cRIO is not that fast.
Yeah, my thinking was to try removing the ranging and center-of-mass calculations.
Also, you can shrink the viewing area, since you know you are looking between 10 and 16 inches off the floor.
I think something able to detect a rectangle of a given color at a given height would definitely be feasible.
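A toy version of that band-restricted search, assuming the image has already been color-thresholded into a binary grid (1 = bumper-colored); the row bounds and minimum run length are made-up values:

```cpp
#include <vector>

// Restrict the search to the image rows that correspond to bumper
// height, and look for a long enough horizontal run of
// bumper-colored pixels in any of those rows. This is far cheaper
// than full-frame blob analysis, which is the whole point of
// shrinking the viewing area.
bool bumperInBand(const std::vector<std::vector<int>>& image,
                  int bandTop, int bandBottom, int minRunLength) {
    for (int row = bandTop;
         row <= bandBottom && row < static_cast<int>(image.size()); ++row) {
        int run = 0;  // length of the current colored run
        for (int px : image[row]) {
            run = px ? run + 1 : 0;
            if (run >= minRunLength) return true;
        }
    }
    return false;
}
```

Alliance color falls out almost for free: threshold for red and blue separately and call this once per mask.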
Rion Atkinson
30-03-2010, 00:11
Okay. So I keep seeing a lot of people saying "This is going to be too hard. You are wasting your time." Come on, guys! FIRST is about inspiring! I was taught the basics of CAD in my Pre-Engineering Academy. The very basics. And over the summer, I did this (http://picasaweb.google.com/RionAtkinson/CAD#5414573323411798338) with Inventor. Why? Because I was inspired to do so through FIRST. I wanted to learn! By the end of this year's season, I had taught myself SolidWorks and am now making this (http://picasaweb.google.com/RionAtkinson/FRC2010Breakaway#5450989686890987154).
Had I told you guys that I was going to go off and teach myself CAD, would you have said "It's too big of a challenge. I wouldn't even consider it," or would you have encouraged me? Because it seems that you are all saying you would just discourage me.
That being said. I wish I was a programmer, I would love to join you guys in taking on the daunting task. I wish you the best of luck! Keep on inspiring yourselves! :D
-Rion
(P.S. Nothing is impossible (http://www.chiefdelphi.com/forums/showthread.php?threadid=84805). )
davidthefat
30-03-2010, 00:39
Okay. So I keep seeing a lot of people saying "This is going to be too hard. You are wasting your time." Come on, guys! FIRST is about inspiring! I was taught the basics of CAD in my Pre-Engineering Academy. The very basics. And over the summer, I did this (http://picasaweb.google.com/RionAtkinson/CAD#5414573323411798338) with Inventor. Why? Because I was inspired to do so through FIRST. I wanted to learn! By the end of this year's season, I had taught myself SolidWorks and am now making this (http://picasaweb.google.com/RionAtkinson/FRC2010Breakaway#5450989686890987154).
Had I told you guys that I was going to go off and teach myself CAD, would you have said "It's too big of a challenge. I wouldn't even consider it," or would you have encouraged me? Because it seems that you are all saying you would just discourage me.
That being said. I wish I was a programmer, I would love to join you guys in taking on the daunting task. I wish you the best of luck! Keep on inspiring yourselves! :D
-Rion
(P.S. Nothing is impossible (http://www.chiefdelphi.com/forums/showthread.php?threadid=84805). )
Never too late to start programming. Honestly, with FIRST oversimplifying the libraries, you can pick up programming the cRIO in less than a week. But if you want to get really deep into programming, it takes lots of practice.
Rion Atkinson
30-03-2010, 00:47
Never too late to start programming. Honestly, with FIRST oversimplifying the libraries, you can pick up programming the cRIO in less than a week. But if you want to get really deep into programming, it takes lots of practice.
See, I went to a LabVIEW thing before the season started. I had to leave early, but I was able to pick up enough to read the language. And I am currently programming in my aerospace class, but even that is following a book. I have learned that it is easier to read than write. (I have been able to read Java for some time now. Writing is a whole 'nother story though...)
I tried doing a little bit of programming... Didn't work out too well... It would take all summer for me to program a tele-op mode... That's with a mentor.
davidthefat
30-03-2010, 00:50
See, I went to a LabVIEW thing before the season started. I had to leave early, but I was able to pick up enough to read the language. And I am currently programming in my aerospace class, but even that is following a book. I have learned that it is easier to read than write. (I have been able to read Java for some time now. Writing is a whole 'nother story though...)
I tried doing a little bit of programming... Didn't work out too well... It would take all summer for me to program a tele-op mode... That's with a mentor.
LabVIEW is a JOKE IMHO; it's like RPG Maker or Game Maker for robots... It's not programming. But IDK, maybe my mind just works that way and I caught onto C++ when I was 12... LOL
This is something I have been toying with for the last few years, and in certain games it would definitely be possible to do in 6 weeks. Last year we were able to track a trailer in full motion, track Orbit Balls, and, using a current sensor on our intake roller, count how many balls were in our basket. And if I had been allowed to put 4 sonar sensors on, we would have attempted a full auto match at an off-season event (the drive team would never go for this in a real match).
Things to note:
- We were only able to get code to this level because we had a complete practice bot by week 3 and worked on it right up to our regional (and through our 2 competitions), which I think would be critical in trying to program something as complicated as a fully autonomous robot.
- 79 made an awesome obstacle avoidance program using (I believe) 3 sonars at bumper level, which shows that real-time obstacle avoidance is very possible without complicated LIDAR or anything like that.
- It is also very helpful to have a good mentor, as previously noted.
Even if it doesn't work, I have fun trying to figure out how to do it every year, and if other people were trying as well I would be even more motivated to try to pull it all together.
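A bumper-level sonar setup like the one described above can get surprisingly far with very simple logic. Here is a minimal C++ sketch; the sensor struct, units, and threshold are made-up illustrations, not any team's actual code:

```cpp
// Hypothetical readings (inches) from three sonars mounted at bumper
// level: left, center, right. Real values would come from whatever
// rangefinder API your controller provides.
struct SonarReadings {
    double left;
    double center;
    double right;
};

// Steering suggestion: -1.0 = full left, +1.0 = full right, 0 = clear.
// Turns away from an obstacle toward whichever side reports more room.
double avoidanceTurn(const SonarReadings& s, double thresholdInches) {
    if (s.left >= thresholdInches && s.center >= thresholdInches &&
        s.right >= thresholdInches) {
        return 0.0;  // nothing within the threshold; keep driving
    }
    return (s.left > s.right) ? -1.0 : 1.0;  // steer toward the open side
}
```

In practice you would call something like avoidanceTurn() every control loop and blend its output into the drive command, rather than letting it take over outright.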
davidthefat
30-03-2010, 00:52
This is something I have been toying with for the last few years, and in certain games it would definitely be possible to do in 6 weeks. Last year we were able to track a trailer in full motion, track Orbit Balls, and, using a current sensor on our intake roller, count how many balls were in our basket. And if I had been allowed to put 4 sonar sensors on, we would have attempted a full auto match at an off-season event (the drive team would never go for this in a real match).
Things to note:
- We were only able to get code to this level because we had a complete practice bot by week 3 and worked on it right up to our regional, which I think would be critical in trying to program something as complicated as a fully autonomous robot.
- 79 made an awesome obstacle avoidance program using (I believe) 3 sonars at bumper level, which shows that real-time obstacle avoidance is very possible without complicated LIDAR or anything like that.
- It is also very helpful to have a good mentor, as previously noted.
Even if it doesn't work, I have fun trying to figure out how to do it every year, and if other people were trying as well I would be even more motivated to try to pull it all together.
Well, we had practice bots from day one... but they were old robots, so they don't really count, do they? :(
Chris is me
30-03-2010, 00:56
Independent of everything else, how do you plan strategy and cooperation with alliance partners? How do you track the positions of the other 5 robots and make decisions accordingly and on the fly? How do you adjust and optimize this strategy?
I'm sure if you had 6 weeks you could code a robot to autonomously run laps in Overdrive or something, but games really require more than that.
Well, we had practice bots from day one... but they were old robots, so they don't really count, do they? :(
Those are helpful too, but we were really lucky last year to have the real robot done really early, and I believe that allowed us to really polish our code.
davidthefat
30-03-2010, 01:01
Independent of everything else, how do you plan strategy and cooperation with alliance partners? How do you track the positions of the other 5 robots and make decisions accordingly and on the fly? How do you adjust and optimize this strategy?
I'm sure if you had 6 weeks you could code a robot to autonomously run laps in Overdrive or something, but games really require more than that.
I like reusability of code very much; it's just the Object Oriented Programming way. If you have to repeat yourself 3 or more times, it's better off in a separate function. I'll probably load a file that has the strategy, and just modify it pre-match without re-uploading code.
Independent of everything else, how do you plan strategy and cooperation with alliance partners? How do you track the positions of the other 5 robots and make decisions accordingly and on the fly? How do you adjust and optimize this strategy?
Accomplishing this by yourself is nearly impossible; however, if FIRST added a ZigBee module to the KOP (Kit of Parts), alliance partners could communicate with each other.
timothyb89
30-03-2010, 04:25
This is something a few friends and I were (somewhat seriously) joking about last year and a bit this year. In the end we settled on using an ANN (artificial neural network), and it got to the point where one was started, but that was about it. It's still something we're interested in, but I still have a feeling the best option would be to avoid AI entirely.
From my experience, it just requires too much processing power and storage space. For example, one of the bots that hangs out in our chatroom was recently outfitted with an AI, and its database grew to about half a gig in around two weeks from just working with text. Not only that, but searching through all that data to find the relevant bits would take up an entire core on a reasonably beefy processor. I'd hate to see how the cRIO would handle something like that.
Anyway, for a relatively low-powered device like the cRIO I'd have to recommend just hardcoding everything. That, and creative use of a bunch of sensors all around the robot. Without enough sensors I don't think a robot could reasonably compete with the (human) driver, but if used correctly, they could give the robot a good advantage. It'd be hard, but pretty awesome!
Greg McKaskle
30-03-2010, 08:36
I think a programming intensive off season project sounds amazing. The Guitar Hero (http://www.chiefdelphi.com/forums/showthread.php?t=78180&highlight=guitar+hero) project is an example of something that isn't a full robot, is a good stretch, and will definitely impress crowds.
My point is that the project you describe is comparable to RoboCup, which is graduate level research. Attempting to do the whole thing in an offseason is a bit naive, but if you break it down and focus on one aspect that you are most interested in, learn a lot about one type of sensor, or one type of control, or learn about Kalman filters or path planning, this becomes something that will have impact on your team and on your abilities.
If you don't think something like that is ambitious enough, coordinate it with a few others, but again, plan it out, layer it so that you can make progress one step at a time.
Greg McKaskle
JamesBrown
30-03-2010, 08:38
LABView is a JOKE IMHO, its like RPG Maker or Game Maker for robots... Its not programming. but IDK may be my mind just works that way and I caught onto C++ when I was 12... LOL
Learn LabVIEW; it is far from being a joke and is extremely popular in the field of automation. LabVIEW is absolutely programming; it is easily one of the most powerful programming tools available. I have been programming robots for quite a while, in everything from MIPS to LISP. LabVIEW is as much of a programming language as any of those; every design pattern you can implement in C/C++ (or your choice of other OO languages) can be implemented in LabVIEW. The students out there who bash LabVIEW most likely have not taken the time to learn and master it. Sure, it is cool to hand-code everything; I remember bashing EasyC when I was in high school because it wasn't "real programming." However, I guarantee that when you get a real job writing software, your boss isn't going to care that you can write code in a more noble language than the one I chose. If my code is as functional as yours, and I can write it faster, I will get the raise, the contract, or the job 100% of the time.
Here are some better ideas:
Option # 1:
Given a camera image, label all robots. Since in an FRC competition you encounter many different robots, you should be able to recognize a robot you have never seen before.
No fair covering it with colored tape or lights. I don't think many teams will allow you to do that with their robots.
Option # 2:
Don't give the robot an AI; just have it construct a world map and correctly position itself (localization) w/o having to put fluorescent colored tape/lights on every square inch of the game field. If a robot can't do something as "simple" as tell you where it is, you won't be getting intelligent actions from any AI. Do this with any semblance of reliability/robustness and I'd be impressed...
efoote868
30-03-2010, 08:57
Both can be accomplished by looking for the bumpers.
http://letsmakerobots.com/node/3843
(obviously would have to be ported to Java ME)
Except you're not playing defense against an alliance, you're playing defense against a robot on an alliance.
Also, I'm guessing that code won't work in every environment you put it in. Differences in field setups like lighting and backgrounds could kill it.
There are no rules saying a robot can't be blue and red, in a bumper fashion.
There may be field elements that are those colors too, like the bumps this year. They're long, they're rectangular, and they're the color of the opposing robot's bumpers.
Code that will take forever to write and forever to execute won't be as beneficial as say, making your robot super easy to drive and control.
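To make that color-ambiguity point concrete, here is roughly the kind of naive per-pixel threshold test being warned about. The thresholds are made-up numbers for illustration:

```cpp
// Naive single-pixel "bumper" test: call a pixel a bumper when one
// color channel strongly dominates. The catch, as argued above: any
// field element in bumper colors passes this test exactly as easily
// as a real bumper does, so color alone cannot identify a robot.
struct Rgb { int r, g, b; };

enum class PixelClass { RedBumper, BlueBumper, Neither };

PixelClass classifyPixel(const Rgb& p) {
    if (p.r > 150 && p.g < 80 && p.b < 80) return PixelClass::RedBumper;
    if (p.b > 150 && p.r < 80 && p.g < 80) return PixelClass::BlueBumper;
    return PixelClass::Neither;  // everything else: not obviously a bumper
}
```

A real detector would have to add shape, size, and position checks on top of the color test, which is where the "forever to write" cost comes in.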
I'm encouraged that there are folks considering an AI approach to FRC programming. I once thought that the 15-second autonomous period was a "waste". But that was based on it being largely ignored by the teams. Now that it has inspired more "serious" thoughts about programming among the teams, I applaud its introduction.
As to the supposed superiority of one language or system over another - just look at natural languages. French has been given credit as the language of love, but based on results, Chinese should be considered for that title, n'est-ce pas? The more familiarity you have with differing programming environments, the better your resume will read.
gvarndell
30-03-2010, 09:19
Not in response to any particular post, I love that this is being discussed.
I really hope FIRST will find a way to design next year's game such that clever software will stand a chance to share the limelight with clever mechanical systems.
I know some teams (1629 included) used software to _assist_ with goal scoring.
That is, the robot (under operator control) never decided to go for a goal.
Instead, the robot took a command to kick the ball and attempted to do a little fine-tuning on the aim -- using the camera; but this is a baby step.
Think about implementing "situational awareness" in software:
What does my universe (the field) look like?
How big is it and where are the fixed objects?
Where am I within my universe and which way am I facing?
Where are my friends and where are my foes?
These are robot smarts that should prove valuable to you next year, no matter what the game is.
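The "where am I within my universe and which way am I facing?" question is the classic dead-reckoning problem. Here is a bare-bones C++ sketch; the units, and the assumption that distance comes from drive encoders and heading change from a gyro, are illustrations rather than any team's real code:

```cpp
#include <cmath>

// Minimal dead-reckoning pose. Real code would also have to fight
// drift from wheel slip, which is why teams fuse in a gyro or
// landmarks rather than trusting encoders alone.
struct Pose {
    double x;        // field X position, inches
    double y;        // field Y position, inches
    double heading;  // radians, 0 = facing +X
};

// Advance the pose by the distance traveled and heading change
// measured since the last control loop iteration.
Pose integrate(Pose p, double distance, double dHeading) {
    p.heading += dHeading;
    p.x += distance * std::cos(p.heading);
    p.y += distance * std::sin(p.heading);
    return p;
}
```

Answering "where are my friends and foes?" is much harder, but it builds on this: without a trustworthy own-pose estimate, every other position estimate is meaningless.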
I don't know... most of the fun for me, besides building the robot, is driving it.
Not every robot is autonomous in the real world, and our programmers had a hard time this year, given they only had a few days to program the robot. I think that would take away some of what FIRST is.
gvarndell
30-03-2010, 09:27
I think that would take away some of what FIRST is.
Sorry. You think what "would take away some of what FIRST is"?
What is FIRST to you?
TJ Cawley
30-03-2010, 10:20
I wish we could attempt this next year. To the teams that have the technology and money to try such a project: my cheers will be right behind you, no matter the outcome. But a safety switch would be good in case the programming went down and you had to maneuver the robot yourself. Here are my $0.02, and to the teams that will try, I wish you the best of luck.
davidthefat
30-03-2010, 10:25
Not in response to any particular post, I love that this is being discussed.
I really hope FIRST will find a way to design next year's game such that clever software will stand a chance to share the limelight with clever mechanical systems.
I know some teams (1629 included) used software to _assist_ with goal scoring.
That is, the robot (under operator control) never decided to go for a goal.
Instead, the robot took a command to kick the ball and attempted to do a little fine-tuning on the aim -- using the camera; but this is a baby step.
Think about implementing "situational awareness" in software:
What does my universe (the field) look like?
How big is it and where are the fixed objects?
Where am I within my universe and which way am I facing?
Where are my friends and where are my foes?
These are robot smarts that should prove valuable to you next year, no matter what the game is.
I personally want to stay away from the camera. If you haven't noticed, it had a big lag; IDK if that was just the data getting sent over, but I want to use IR sensors and sonars instead.
gvarndell
30-03-2010, 10:29
I want to use IR sensors and sonars instead
Good thinking -- I wasn't implying that camera vision was the best or only way to support situational awareness in a robot.
randalcr
30-03-2010, 10:37
That is a big challenge, and it's one that I'd be willing to accept! Lol, this will be fun.
Chris Hibner
30-03-2010, 10:59
Learn LabVIEW; it is far from being a joke and is extremely popular in the field of automation. LabVIEW is absolutely programming; it is easily one of the most powerful programming tools available. I have been programming robots for quite a while, in everything from MIPS to LISP. LabVIEW is as much of a programming language as any of those; every design pattern you can implement in C/C++ (or your choice of other OO languages) can be implemented in LabVIEW. The students out there who bash LabVIEW most likely have not taken the time to learn and master it. Sure, it is cool to hand-code everything; I remember bashing EasyC when I was in high school because it wasn't "real programming." However, I guarantee that when you get a real job writing software, your boss isn't going to care that you can write code in a more noble language than the one I chose. If my code is as functional as yours, and I can write it faster, I will get the raise, the contract, or the job 100% of the time.
I'm going to second the godfather of soul.
Text-based programming languages are starting to get to the point where assembly was 15-20 years ago (and punch cards before that). Everything is moving more in the direction of even higher-level languages (i.e., graphics-based).
In the field of embedded controls, everything is being shifted from hand-coded C to auto-coded model-based programming. In my current job and my last job we use Simulink (http://en.wikipedia.org/wiki/Simulink) and Stateflow (http://www.mathworks.com/products/stateflow/) to do all of our control design. When I left my previous job 4 years ago, we were already auto-coding the entire sensing and control algorithms from the Simulink/Stateflow models. At my current job, a good number of the control algorithms are already auto-coded from the Simulink/Stateflow models and the goal is for all of them to be auto-coded in the future.
Even for PC programming we're starting to see more graphical programming tools where you draw your windows and drag and drop menus and interface controls, then simply define the behavior of the interfaces and menus.
I don't know if we'll ever completely move away from text-based code (and in some cases, I think text-based code is the most efficient method), but it would surprise me if software development were primarily text-based 10 years from now.
For the on-topic part: there have been robots in the past that have played significant portions of the match autonomously. I once proposed a system to reward the teams with bonus points based upon how much of the match they played autonomously. I'll have to dig up that old thread and post it.
EricVanWyk
30-03-2010, 11:18
For the on-topic part: there have been robots in the past that have played significant portions of the match autonomously. I once proposed a system to reward the teams with bonus points based upon how much of the match they played autonomously. I'll have to dig up that old thread and post it.
EXACTLY.
Autonomous has been pretty boring for two years because there is no point value to it. Give it value, and it will become interesting.
Egg 3141592654
30-03-2010, 11:29
I would like this to happen, but here are my 2 issues with this idea. Firstly, how do you track game objects and the areas to score? The program would need to have a serious if statement, or a periodic task to override the game-piece-searching code from the scoring-area-tracking code. We tried something like this recently with epic failure; aka we made a $5,000 doughnut-performing machine. Secondly, the robot would have to be as cunning as a human player to avoid silly penalties, like getting hit while kicking a ball, making it go out of bounds.
Those are my little considerations in making our robot completely autonomous next year. I guess it also depends on what the game is next year, because something like this puts drivers like me out of work. Good luck to those teams attempting this; I'm starting now so this way I'll only be 4 weeks late instead of the usual 5.
gvarndell
30-03-2010, 11:50
something like this puts drivers like me out of work
Relax, you'll still have a job.
I know the topic is fully autonomous robots but no FRC team is going to accomplish that in the near future -- unless the games become mind-numbingly simple.
Does cruise control make a driver unnecessary?
slavik262
30-03-2010, 11:56
To echo and summarize a lot of really smart people, there are a lot of problems with a fully autonomous robot. I'm listing them in order of importance (in my eyes) from greatest to least.
Time/People - As people have said, it took teams of grad students working around the clock to accomplish a challenge (the DARPA Grand Challenge) simpler than what you're aiming to achieve. Even if you start working now, you'll still need to tailor the AI to the game next year. You can't do that much in six weeks without a time machine. You're highly underestimating the complexity of everything the robot has to do.
Hardware - Autonomous robots have extremely beefy processors, usually custom-designed for the task. You have a 400 MHz PowerPC processor, which is not just controlling your robot's movement, but is also busy doing other things like communicating with the field. Consider how much power you would need to run autonomously. Think of what you would have to do:
Gather inputs from a large amount of sensors.
Formulate a "world view" of what is going on. Where is your robot? Where are the other robots? Where are the game pieces? What is your current strategy? What step in the plan for that current strategy are you in? How much time is left in the match? There are a lot of questions. You'll need to use complex algorithms to analyze your data (including slow image analysis if you're using a camera) and even more complex algorithms to take that analysis and turn it into some useful strategy.
Act on that strategy. Use even more complex algorithms to determine if your strategy is working. Decide when to switch strategies or dynamically adapt to the strategies being employed against you by the other alliance.
There's just not enough processor to do it all in real-time.
Your drivers will want to drive the robot. Arguments about "what FIRST is" aside, telling your drive team that your code can perform as well as they can is an insult to any human being. It may be a great off-season project, but don't you want to achieve your best during competitions? I'm not claiming that it's "all about winning" - but the competition makes it fun for a lot of people, and I can promise you that because of all the above reasons, an autonomous program developed by a few high school students with a year (maximum) of development time, running on FRC hardware, and within the context of an FRC game will not perform nearly as well as even the worst driver. You're removing almost any chance your team will do well if you run fully autonomous, provided the other alliance's robots so much as move. Is the rest of your team willing to accept this just so they can say that their robot is autonomous?
I'm sorry for raining on your parade, but it can't be done - at least not well. Ambition is a wonderful thing - never give up your dreams. But technological marvels aren't created with just a can-do attitude. It takes years of research, hard work, development, and custom hardware to finish the job. The people who think this is possible need to stop and be a bit more realistic.
Try something on a much smaller scale. An automated scoring algorithm would be great, and is a totally reachable goal. Work your way up, and see what you can do. There's a huge difference between playing a match "mostly autonomously" and fully autonomously in that the "mostly autonomous" option allows human drivers to position the robot, aware of the field and match conditions, before letting it go to work.
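One thin slice of the "decide when to switch strategies" step above can at least be sketched as an expected-value comparison. The payoff and probability numbers are placeholders; producing them reliably is the much harder world-view problem the post describes:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical strategy scorer: each candidate has an estimated
// point payoff and an estimated chance of success. The chooser
// simply takes the option with the best expected value.
struct Strategy {
    double points;       // payoff if the strategy works
    double successProb;  // estimated chance it works, 0.0 to 1.0
};

// Index of the option with the highest expected value (points * prob).
std::size_t bestStrategy(const std::vector<Strategy>& options) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < options.size(); ++i) {
        double ev     = options[i].points * options[i].successProb;
        double bestEv = options[best].points * options[best].successProb;
        if (ev > bestEv) best = i;
    }
    return best;
}
```

The chooser itself is trivial; the point is that every input to it has to come from sensing and estimation layers that are anything but.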
Kims Robot
30-03-2010, 12:20
I'll start this by saying it's been a while since I have done any "real" programming. However, I did my master's degree in Robotic Intelligence and have loads of background knowledge here, in biorobotics, communications, etc.
I am excited that teams are considering this. FIRST should be about innovation.
I say this because having a fully autonomous robot may be *impossible* in 6 weeks, unless of course your team can come up with a 469 strategy
For the on-topic part: there have been robots in the past that have played significant portions of the match autonomously. I once proposed a system to reward the teams with bonus points based upon how much of the match they played autonomously. I'll have to dig up that old thread and post it.
It's definitely NOT impossible. I'm 99% positive that team 176 did it one of the years (somewhere around 2001??). They didn't use it in every match, and as far as I know they only ran it in practice matches, but boy was it fun to watch their drivers press a button and walk away and see that robot do "its thing". Could it account for every variable? Heck no. Was it impressive for the amount it did? YES!!
I've played around with everything from controlling an FRC robot with my bicep muscle to writing up simulation code for a modular robot. It's all about how complex you make the problem. If you start from the highest level and have a robot that can incorporate a lot of sensors, not only for knowing about its mechanisms, but for "seeing" the world around it, you open up entirely new doors. We have seen plenty of teams be able to do this in tiny pieces - balancing the ramp, scoring tubes, keeping the arm inside the box, running cool auto modes with avoidance detection. A fully automated match is really just a step beyond all of those. Heck, there are some FLL teams that can do this; why can't the FRC teams?!?!
It does require big-picture and strategic thinking to consider what you might "run into". Do you need to avoid other robots? What do you need to interact with? What are the possible decisions to be made? But teams already do a lot of this type of thing for the 15-second auto modes. No, not every team does, but some of the teams do!
Would I suggest using it in Finals? Heck no. As already stated, in this competition we aren't going to be able to process like the human brain. But could it be a fun and awesome challenge for a programmer to try for an offseason or even to "show off" in a practice match?? Of course!! And hey, if it's "really good" you might even get your drivers to run it in one of those "easy" matches.
Stop being afraid of trying things you might "fail" at. Just because something seems impossible, doesn't mean you shouldn't try it. As long as your entire strategy isn't based around it, I would say go for it. Have one set of programmers implement the "normal functions", and have the other go for the full automation.
What I would suggest, as you go for it:
Start by writing automation for certain functions. For example, when you have a ball, target and shoot. When you don't have a ball, look for one.
It'll be a lot easier to get blocks of automation that can be used by the drivers and then combine them into one completely automated program than it will be to write the whole program from scratch. (It'll also allow the drivers to take control if they need to.)
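The "when you have a ball, target and shoot; when you don't, look for one" idea is essentially a tiny chooser sitting on top of those behavior blocks. A hypothetical C++ sketch; both inputs are stand-ins (ball possession might come from a roller current sensor, target visibility from a camera or IR rig):

```cpp
// Each Action names a block of automation that drivers could also
// trigger by hand in teleop, which is what makes the combine-later
// approach (and the driver-takeover escape hatch) work.
enum class Action { SearchForBall, TurnToTarget, Shoot };

// Top-level chooser: pick the next behavior block from what the
// sensors say right now.
Action chooseAction(bool haveBall, bool targetVisible) {
    if (!haveBall) return Action::SearchForBall;      // no ball: go find one
    if (!targetVisible) return Action::TurnToTarget;  // have ball, can't see goal
    return Action::Shoot;                             // ball + target: take the shot
}
```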
slavik262, it's almost a simpler challenge. DARPA had to travel a certain distance, with a full-size vehicle, via previously unknown waypoints. On an FRC field, we know EXACTLY where the boundaries and other things are, other than game pieces and other robots. It can't be done, you say. People said the same thing about: manned flight, steamboats, space flight, connecting computers... If it couldn't be done simply because some people thought it couldn't be done, we'd still be in the Middle Ages or earlier. Also note that some folks do stuff where the designers would say, "That plane can't do that!"--but they're still doing it. The plane wears out faster, but it can be done.
Also note that nothing in the rules prevents you from building a custom circuit to assist the cRIO with its processing. If you wanted to do that, you could get more done...
gvarndell
30-03-2010, 13:08
Even for PC programming we're starting to see more graphical programming tools where you draw your windows and drag and drop menus and interface controls, then simply define the behavior of the interfaces and menus.
I don't know if we'll ever completely move away from text-based code (and in some cases, I think text based code is the most efficient method), but it would surprise me if software development was primarily text based 10 years from now.
Underlying those little widget-y, icon-y things with various kinds of lines crisscrossing between them is text-based code (or perhaps more accurately, the compiled binary representation of text-based code).
Yes, the ubiquity of text-based programming is losing ground to 'software through icon connection', but we should not fool ourselves into thinking it's going away.
Its replacement hasn't arrived yet.
Alan Anderson
30-03-2010, 13:42
Underlying those little widget-y, icon-y things with various kinds of lines crisscrossing between them is text-based code (or perhaps more accurately, the compiled binary representation of text-based code).
This turns out not always to be the case. A typical LabVIEW vi doesn't have a text equivalent.
gvarndell
30-03-2010, 14:20
This turns out not always to be the case. A typical LabVIEW vi doesn't have a text equivalent.
What would be an example of a typical VI?
And how does the executable object code for it come to exist?
Alan Anderson
30-03-2010, 14:38
What would be an example of a typical VI?
Any one you see in FRC.
And how does the executable object code for it come to exist?
It gets compiled, just like turning source code into object code for any compiled language.
I implied the existence of a "nontypical" vi which does have a text representation. That would be a reference to the LabVIEW code which gets compiled into VHDL for further compilation into an image for the FPGA. The VHDL is text.
TJ Cawley
30-03-2010, 14:49
I would like this to happen, but here are my 2 issues with this idea. Firstly, how do you track game objects and the areas to score? The program would need to have a serious if statement, or a periodic task to override the game-piece-searching code from the scoring-area-tracking code. We tried something like this recently with epic failure; aka we made a $5,000 doughnut-performing machine. Secondly, the robot would have to be as cunning as a human player to avoid silly penalties, like getting hit while kicking a ball, making it go out of bounds.
Those are my little considerations in making our robot completely autonomous next year. I guess it also depends on what the game is next year, because something like this puts drivers like me out of work. Good luck to those teams attempting this; I'm starting now so this way I'll only be 4 weeks late instead of the usual 5.
It isn't easy, but teams that managed to get their cameras or other motion/vision devices working (we had encoders; the camera was for our drivers) could possibly do it. Yes, it is a LOT of coding and strategizing early, but why not, when it might bring success? There's always a safe mode people can create so if/when the robot gets out of hand you can take over and correct it, then let the controls go again.
gvarndell
30-03-2010, 16:45
It gets compiled, just like turning source code into object code for any compiled language.
I hope this isn't headed to a chicken-and-egg thing, but compiled from what? :confused:
I implied the existence of a "nontypical" vi which does have a text representation.
So a graphical element that represents the addition of 2 input integers and produces an integer sum at the output, is that a typical VI?
Or is that not a VI at all?
there's always a safe mode people can create so if/when the robot gets out of hand you can take over and correct it, then let the controls go again.
Similar to why you don't want an empty cockpit when a plane is in autopilot. :)
Putting a banner up in your driver station would be an interesting PR technique during practice rounds...
Chris is me
30-03-2010, 19:07
EXACTLY.
Autonomous has been pretty boring for two years because there is no point value to it. Give it value, and it will become interesting.
Autonomy's value in this game rivals 2008, in my opinion. I like the subtle emphasis on it this year. I think it's not exciting simply because automatic robots are inherently less exciting.
There is a value. Look at L.A. Final Match 1.
Total points (final): 24.
Hanging points: 6.
24-6=18 points on the floor.
Of those 18 points, 9 were scored autonomously.
15 seconds saw the same number of points scored in the goals that the following 2 minutes saw. The difference? No defense.
Remember, defending a robot during automode carries massive penalties this year. So massive, you REALLY don't want to do it.
But being able to score autonomously can give plenty of points as a cushion, which you want going into teleoperated mode.
Autonomous has been pretty boring for two years because there is no point value to it. Give it value, and it will become interesting.
The FTC matches have double scoring for their autonomous mode. They take a long pause between auto and teleop modes to record the scores. The same scored objects are thereby scored twice. This can be quite a payoff for an effective auto program.
davidthefat
30-03-2010, 19:20
I delivered the message to my teacher today; he thinks it's a fantastic idea. He's happy I was even thinking about next year's robot, because he told me that in previous years, people just forgot about robotics after the regionals. He said I can use the tech shop to make my test robots.
ideasrule
30-03-2010, 19:22
Just to give an idea of how hard a good autonomous robot is, there's no computer in the world that comes close to matching the flexible and creative thinking of the human brain, nor is there a computer with the lifetime of experience that human robot drivers have. Making a robot smarter than a human isn't like CADing a robot framework; it's like designing a fusion reactor in 6 weeks, especially since decades of research by professional scientists haven't been able to do it.
If anybody plays real-time strategy games, the AI in them took a whole team of well-funded programmers years to write and debug. Even so, it can't compete with even moderately experienced gamers on an equal footing, and is very easy to exploit.
Autonomy's value in this game rivals 2008, in my opinion. I like the subtle emphasis on it this year. I think it's not exciting simply because automatic robots are inherently less exciting.
I noticed your rookie year was 06, and I'm surprised you would say autonomous is boring, I thought the autonomous shooters were really exciting. Also look at 04, autonomous was really exciting then too.
davidthefat
30-03-2010, 19:27
Just to give an idea of how hard a good autonomous robot is, there's no computer in the world that comes close to matching the flexible and creative thinking of the human brain, nor is there a computer with the lifetime of experience that human robot drivers have. Making a robot smarter than a human isn't like CADing a robot framework; it's like designing a fusion reactor in 6 weeks, especially since decades of research by professional scientists haven't been able to do it.
If anybody plays real-time strategy games, the AI in them took a whole team of well-funded programmers years to write and debug. Even so, it can't compete with even moderately experienced gamers on an equal footing, and is very easy to exploit.
You do make a good point, but where in the world does it ever say we are having a similar game next year? For all I know, it could be football, which is all pre-laid-out plays and stuff...
Rion Atkinson
30-03-2010, 19:33
You do make a good point, but where in the world does it ever say we are having a similar game next year? For all I know, it could be football, which is all pre-laid-out plays and stuff...
Oh... I can promise you... It will not be similar in any way to this year... Darn the GDC....
theprgramerdude
30-03-2010, 19:59
To echo and summarize a lot of really smart people: there are a lot of problems with a fully autonomous robot. I'm listing them in order of importance (in my eyes) from greatest to least.
Time/People - As people have said, it took teams of grad students working around the clock to accomplish a simpler challenge (DARPA) than what you're aiming to achieve. Even if you start working now, you'll still need to tailor the AI to the game next year. You can't do that much in six weeks without a time machine. You're highly underestimating the complexity of everything the robot has to do.
Hardware - Autonomous robots have extremely beefy processors, usually custom-designed for the task. You have a 400 MHz PowerPC processor, which is not just controlling your robot's movement, but is also busy doing other things like communicating with the field. Consider how much power you would need to run autonomously. Think of what you would have to do:
Gather inputs from a large amount of sensors.
Formulate a "world view" of what is going on. Where is your robot? Where are the other robots? Where are the game pieces? What is your current strategy? What step in the plan for that current strategy are you in? How much time is left in the match? There's a lot of questions. You'll need to use complex algorithms to analyze your data (including slow image analysis if you're using the data) and even more complex algorithms to take that analysis and turn it into some useful strategy.
Act on that strategy. Use even more complex algorithms to determine if your strategy is working. Decide when to switch strategies or dynamically adapt to the strategies being employed against you by the other alliance.
There's just not enough processor to do it all in real-time.
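To make that gather/model/act loop concrete, here's a toy sense-plan-act sketch in Python. Every name here is made up for illustration (real FRC code in 2010 would be C++/Java/LabVIEW running in the cRIO's periodic callback, not a script), and the "world model" is just a dict standing in for the pose/game-piece/strategy tracking described above:

```python
# Toy sense-plan-act loop. All names are hypothetical; a real robot would
# run this logic once per control period inside its periodic callback.

def sense(sensors):
    """Poll every sensor; `sensors` maps names to zero-arg read functions."""
    return {name: read() for name, read in sensors.items()}

def plan(world, readings):
    """Fold new readings into the world model, then pick an action.
    The 'world view' is just a dict here; a real one would track robot
    pose, game pieces, opponents, strategy state, and time remaining."""
    world.update(readings)  # naive model: latest reading wins
    if world.get("have_ball"):
        return "aim_and_kick"
    if world.get("ball_visible"):
        return "drive_to_ball"
    return "search"

def act(action, actuators):
    """Dispatch the chosen action to a motor/actuator routine."""
    actuators[action]()

def control_cycle(world, sensors, actuators):
    """One iteration of the loop: sense, plan, act."""
    readings = sense(sensors)
    action = plan(world, readings)
    act(action, actuators)
    return action
```

Even this stub shows where the cycles go: the `plan` step is where all the expensive image analysis and strategy evaluation would have to fit, every single control period.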
Your drivers will want to drive the robot. Arguments about "what FIRST is" aside, telling your drive team that your code can perform as well as they can is an insult to any human being. It may be a great off-season project, but don't you want to achieve your best during competitions? I'm not claiming that it's "all about winning" - but the competition makes it fun for a lot of people, and I can promise you that because of all the above reasons, an autonomous program developed by a few high school students with a year (maximum) of development time, running on FRC hardware, and within the context of an FRC game will not perform nearly as well as even the worst driver. You're removing almost any chance your team will do well if you run fully autonomous, provided the other alliance's robots so much as move. Is the rest of your team willing to accept this just so they can say that their robot is autonomous?
I'm sorry for raining on your parade, but it can't be done - at least not well. Ambition is a wonderful thing - never give up your dreams. But technological marvels aren't created with just a can-do attitude. It takes years of research, hard work, development, and custom hardware to finish the job. The people who think this is possible need to stop and be a bit more realistic.
Try something on a much smaller scale. An automated scoring algorithm would be great, and is a totally reachable goal. Work your way up, and see what you can do. There's a huge difference between playing a match "mostly autonomously" and fully autonomously in that the "mostly autonomous" option allows human drivers to position the robot, aware of the field and match conditions, before letting it go to work.
Seeing as your #2 covered my reasons for more power thoroughly, I thought we might as well discuss ways to get more power now, with another 8 months ahead to plan.
So, does anyone believe there is a way to feasibly add an inverter/converter widget from the PD board to a PC PSU, which then runs a small PC/GPGPU with Linux, which can interface with the cRIO for image analysis and strategy planning? (I'm not an electrical engineer.) The main problem I'd see is that the voltage drop with the battery under load might have more severe effects on a PC than on the cRIO.
Chris is me
30-03-2010, 20:01
I noticed your rookie year was 06, and I'm surprised you would say autonomous is boring, I thought the autonomous shooters were really exciting. Also look at 04, autonomous was really exciting then too.
I was in FTC until 2008, so the rookie year part of my postbit isn't entirely accurate.
Autonomy's value is really a lot like 2008, strategically. The points are free, undefended points. On top of that, though, "clearing" a zone allows a team to maximize its resources. A starved offensive zone is a death sentence in Breakaway. If both teams clear a zone, then the advantage is negated and the battle is fought at midfield. This is a lot like 2008's hybrid, in that a large hybrid advantage is insurmountable, so having a comprehensive hybrid is essential if only to negate the other alliance's hybrid.
My #1 priority for alliance selection at CT is strength of autonomous mode. It's that important to winning, in my opinion.
Rick Wagner
30-03-2010, 20:03
Notwithstanding the difficulty of totally autonomous play, well documented here by many, I think that this is the direction FIRST competition is headed in, eventually. Remember that before '03 there was no autonomy at all. FLL is fully autonomous. These kids are now graduating to FRC teams.
Greg McKaskle
30-03-2010, 20:36
On the LV compilation topic, the LV source code is a dataflow graph of objects -- diagrams contain nodes connected by wires, with the occasional node containing other diagrams.
The objects are visited over several passes in order to perform compilation tasks.
1. Data types are propagated after each edit.
2. Nodes are validated and syntax errors identified after each edit.
3. An algorithm performs what we call clumping -- coloring the graph based upon asynchronous operation.
4. Another algorithm improves inplaceness, reordering nodes to execute in an order which minimizes data copies.
5. Nodes allocate data storage.
6. Nodes emit code into clumps.
Clumps are blocks of memory that contain machine instructions in binary form. You can disassemble the instructions if you like and display them in text.
Of course the LV graph could be stored into a textual graph form, and internally, we experiment with such things as a save format. The compiler, however, does not operate on a sequential "tape" of characters.
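As a toy illustration of what "dataflow order" means for a graph of nodes and wires, here is a plain topological sort (Kahn's algorithm) in Python. To be clear, this is not LV's actual clumping or inplaceness algorithm, just the basic idea that a node may execute once all of its inputs have produced data:

```python
from collections import deque

def schedule(nodes, wires):
    """Topologically order a dataflow graph using Kahn's algorithm.
    `wires` is a list of (producer, consumer) pairs; a node becomes
    runnable once every one of its inputs has already run."""
    indeg = {n: 0 for n in nodes}   # number of unsatisfied inputs
    outs = {n: [] for n in nodes}   # downstream consumers
    for a, b in wires:
        outs[a].append(b)
        indeg[b] += 1
    ready = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in outs[n]:           # this node's outputs are now available
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    return order
```

For a tiny diagram like two numeric inputs wired into an add node wired into a display, `schedule` guarantees the add runs after both inputs and before the display, which is the essence of executing a wired graph.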
On the robotic topic: instead of waiting for next year's game and potentially interfering with the progress of the team, why not pretend that you just learned of this year's game? Start to automate the tasks given the current field and robot.
You state that the camera has lots of lag, but perhaps you should concoct a test to measure the lag. I did it last year, and it isn't that bad to the cRIO. If you test thoroughly, it isn't bad to the PC either. Anyway, try using different sensors and the camera to find the goal, find balls, find lines on the floor, find the edge of the field.
I'd consider it a huge step forward if robots in autonomous would detect the walls of the field and would do something other than barrel into them at high rates of speed.
If you have those elements, start trying to identify and track robots. Try to plan a path to the ball avoiding the robots...
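A first cut at "plan a path to the ball avoiding the robots" could be a plain breadth-first search over an occupancy grid. This is only a sketch under big assumptions (you already have a grid of free vs. occupied cells from your sensors, and 4-way moves are good enough); a real planner would need to handle robot size and moving obstacles:

```python
from collections import deque

def plan_path(grid, start, goal):
    """BFS over an occupancy grid: 0 = free cell, 1 = robot/obstacle.
    Returns the shortest list of (row, col) cells from start to goal,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}            # also serves as the visited set
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []               # walk the parent links back to start
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cell
                q.append(nxt)
    return None
```

The payoff of BFS is that the first time the goal is dequeued, the path is already the shortest one, so there is no separate optimization pass to write.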
This and all FRC games are built to challenge human drivers. There are more than enough challenges to keep SW people busy trying to navigate.
Greg McKaskle
Tom Bottiglieri
30-03-2010, 21:14
My #1 priority for alliance selection at CT is strength of autonomous mode. It's that important to winning, in my opinion.
Don't box yourself into a corner. I believe you are exercising a bit of confirmation bias by believing autonomous is the end-all, be-all that makes a team successful. The teams that win are the teams who have the experience and expertise to build high quality machines and play well thought out strategies. If they have gotten to that point, they probably put the time in to make their autonomous mode work well.
For example, I could say that the only thing I will be looking for in a team is the ability to hang in under 3 seconds. Does that make a robot good? Not necessarily, but the teams who planned ahead enough and implemented this strategy are probably strong to begin with. The hanging is just an indicator of this.
The problem with this is that there are always corner cases. There is always going to be a team that has a strong auto, but has a bone-head driver who just racks up penalties. The same way there will probably be a team who hangs fast, but can't do squat for the rest of the match.
All in all, looking at a team's auto mode is probably a smart thing to do. But, there are a bunch of other aspects to what makes a team successful, and many of them are intangibles. If you want to pick like the pros, don't put all of your eggs in one basket.
Chris is me
30-03-2010, 21:18
The problem with this is that there are always corner cases. There is always going to be a team that has a strong auto, but has a bone-head driver who just racks up penalties. The same way there will probably be a team who hangs fast, but can't do squat for the rest of the match.
I don't mean to say that robots with strong autos are invariably higher ranked than robots without them (or at least I don't now, after reading your post), but that of a list of qualities I want in an alliance partner, strong autonomous is probably the most important. I guess the best way to phrase how I intend to pick is like evaluating robots with a WOT; autonomous right now would have the highest weight, but not enough to offset other categories.
dtengineering
30-03-2010, 21:19
Perhaps an interesting way to implement the challenge would be to build it in as part of the Breakaway game simulation.
There are several "competitive coding" games of this nature... a simple one that I use to teach PIC assembly language is called PicBots, but there are many more complex ones. A list of some is at http://www.google.com/Top/Games/Video_Games/Simulation/Programming_Games/Robotics/ I've used CRobots3D and AI Wars with junior students as a programming introduction.
The advantage, of course, is that the robots can compete in an ideal environment and can access all kinds of expensive "sensors", and the programmer can have access to the "robot" 24 hours/day at basically no cost.
Once the algorithms are worked out in the simulated environment, then they could be ported to a robot for real. Kind of like building a robot in CAD before cutting out the parts.
The added advantage, of course, is that by developing an AI for the Breakaway game simulation, it would be possible to play the game with fewer than 6 people.
Jason
davidthefat
30-03-2010, 21:28
All of you who are attempting this, how do you suppose you will get started? I am thinking of making an omni-drive robot with 2 gyros: one in the main body and one on a little platform that a servo will rotate up and down. 2 IR sensors mounted on a servo that rotates side to side will scan in front of the robot, and the other 2, mounted on the platform, will also scan side to side. They will take in an array of data regarding the objects in the way.
Tom Line
30-03-2010, 21:59
All of you who are attempting this, how do you suppose you will get started? I am thinking of making an omni-drive robot with 2 gyros: one in the main body and one on a little platform that a servo will rotate up and down. 2 IR sensors mounted on a servo that rotates side to side will scan in front of the robot, and the other 2, mounted on the platform, will also scan side to side. They will take in an array of data regarding the objects in the way.
I'm curious. Are you aware that a single hard collision can easily throw a gyro off by hundreds of degrees?
We removed our gyro from this year's competition bot - we were using it to measure inclination in order to automatically stop our winch - because each time we fired our kicker (it was a 300+ degree per second gyro) it would lose 30-40 degrees in a random direction. In fact, we tried 6 different gyros, all the ones we had in the build room, and they all showed the same results.
We used one in 2009 to make our turret field-oriented and had the same issue when we had a hard collision.
I'm also curious why you want a gyro on a platform when servos can be commanded accurately to any angle you want?
theprgramerdude
30-03-2010, 22:06
All of you who are attempting this, how do you suppose you will get started? I am thinking of making an omni-drive robot with 2 gyros: one in the main body and one on a little platform that a servo will rotate up and down. 2 IR sensors mounted on a servo that rotates side to side will scan in front of the robot, and the other 2, mounted on the platform, will also scan side to side. They will take in an array of data regarding the objects in the way.
Personally, I was thinking more along the lines of using two accelerometers mounted on opposing sides of the robot to measure motion and turning, as well as two front-facing cameras for depth perception and (maybe) object recognition.
IMHO, we would need more processing power to develop the kind of complex strategies that a human could think up, which is why I'm trying to figure out if <R50> means we can't use a DC-DC converter (I thought there was one powering the cRIO) to power a mini-PC attached to the robot for image analysis.
Edit: something along the lines of this: http://www.mini-box.com/M4-ATX?sc=8&category=981
<R50> Custom circuits shall NOT directly alter the power pathways between the battery, Power Distribution Board, speed controllers, relays, motors, or other elements of the robot control system (including the power pathways to other sensors or circuits). Custom high impedance voltage monitoring or low impedance current monitoring circuitry connected to the ROBOT’S electrical system is acceptable, because the effect on the ROBOT outputs should be inconsequential.
davidthefat
30-03-2010, 22:07
I'm curious. Are you aware that a single hard collision can easily throw a gyro off by hundreds of degrees?
We removed our gyro from this year's competition bot - we were using it to measure inclination in order to automatically stop our winch - because each time we fired our kicker (it was a 300+ degree per second gyro) it would lose 30-40 degrees in a random direction. In fact, we tried 6 different gyros, all the ones we had in the build room, and they all showed the same results.
We used one in 2009 to make our turret field-oriented and had the same issue when we had a hard collision.
I'm also curious why you want a gyro on a platform when servos can be commanded accurately to any angle you want?
Gyros are cool... but seriously, I don't want to rely on just one sensor; that's why multiple IR sensors are used, and a sonar probably will be too. It's just being on the safe side, I guess.
davidthefat
30-03-2010, 22:15
Personally, I was thinking more along the lines of using two accelerometers mounted on opposing sides of the robot to measure motion and turning, as well as two front-facing cameras for depth perception and (maybe) object recognition.
IMHO, we would need more processing power to develop the kind of complex strategies that a human could think up, which is why I'm trying to figure out if <R50> means we can't use a DC-DC converter (I thought there was one powering the cRIO) to power a mini-PC attached to the robot for image analysis.
Edit: something along the lines of this: http://www.mini-box.com/M4-ATX?sc=8&category=981
<R50> Custom circuits shall NOT directly alter the power pathways between the battery, Power Distribution Board, speed controllers, relays, motors, or other elements of the robot control system (including the power pathways to other sensors or circuits). Custom high impedance voltage monitoring or low impedance current monitoring circuitry connected to the ROBOT’S electrical system is acceptable, because the effect on the ROBOT outputs should be inconsequential.
I was thinking of forgetting the cRIO for now and using a custom-made microprocessor board, since these are just small experimental prototypes.
theprgramerdude
30-03-2010, 22:22
By "ditch the cRIO," do you mean just build an autonomous robot, independent of any competition, with the spare parts from FRC?
davidthefat
30-03-2010, 22:29
By "ditch the cRIO," do you mean just build an autonomous robot, independent of any competition, with the spare parts from FRC?
I guess. I just want to build a prototype robot that doesn't use the cRIO, since the cRIO and the power board themselves would take up most of the space of the robot (2 x 2 feet), so why even bother? Just get the essential ideas working, and then we can transfer them to the actual robot during the 6-week build period. I'd just have to port the code and configure the sensors to work for the robot's bigger size. I try to code so that it's reusable and easy to understand. So I have to get in the habit of commenting a lot, because I sometimes forget what my code does; it looks so complex at times that IDK how I even coded some stuff :ahh:
gvarndell
30-03-2010, 22:34
On the LV compilation topic, the LV source code is a dataflow graph of objects -- diagrams contain nodes connected by wires, with the occasional node containing other diagrams
The objects are visited over several passes in order to perform compilation tasks.
1. Data types are propagated after each edit.
2. Nodes are validated and syntax errors identified after each edit.
3. An algorithm performs what we call clumping -- coloring the graph based upon asynchronous operation.
4. Another algorithm improves inplaceness, reordering nodes to execute in an order which minimizes data copies.
5. Nodes allocate data storage.
6. Nodes emit code into clumps.
Clumps are blocks of memory that contain machine instructions in binary form. You can disassemble the instructions if you like and display them in text.
Thanks for this. It is good information to know.
That said, it's not the compilation I was asking about.
Assuming that a node is what I was calling an icon and graphically represents some logical or computational operation, there must exist some sequence of machine instructions to implement that operation.
You referred to these as clumps.
An assertion was made that there is no traditional textual source code associated with clumps -- notwithstanding that that terminology was not part of the discussion.
My claim is simply that the machine instructions contained in those clumps almost certainly were produced by a traditional compiler using a traditional text-based programming language -- quite likely C.
For the record, this didn't start out as a Labview discussion and I didn't lead it here -- nor did I want to.
There was first a claim that iconic programming was replacing text based programming.
I claimed that, on the contrary, text based programming is the foundation upon which iconic programming is built.
Icons (nodes if you prefer) graphically represent machine code.
AFAIK, other than compiling and/or assembling text files, we have no spiffier way of producing the machine code.
davidthefat
30-03-2010, 22:38
Thanks for this. It is good information to know.
That said, it's not the compilation I was asking about.
Assuming that a node is what I was calling an icon and graphically represents some logical or computational operation, there must exist some sequence of machine instructions to implement that operation.
You referred to these as clumps.
An assertion was made that there is no traditional textual source code associated with clumps -- notwithstanding that that terminology was not part of the discussion.
My claim is simply that the machine instructions contained in those clumps almost certainly were produced by a traditional compiler using a traditional text-based programming language -- quite likely C.
For the record, this didn't start out as a Labview discussion and I didn't lead it here -- nor did I want to.
There was first a claim that iconic programming was replacing text based programming.
I claimed that, on the contrary, text based programming is the foundation upon which iconic programming is built.
Icons (nodes if you prefer) graphically represent machine code.
AFAIK, other than compiling and/or assembling text files, we have no spiffier way of producing the machine code.
Text-based programming will NEVER go away; it will be used to code those iconic, click-and-drag environments in the first place, and text-based programming will also code the operating systems. And I doubt there will be enough freedom in the drag-and-drop ones. They are all high level, which means you can't have pointers or directly mess with the memory. Click-and-drag will never replace text-based programming.
I'm curious. Are you aware that a single hard collision can easily throw a gyro off by hundreds of degrees?
I'm just curious: have you tried using a compass to re-zero the gyro? I haven't tried it, but from what I've read it seems to create an almost driftless gyro, since the compass doesn't have drift issues.
Note: reading a compass is an expensive operation, so it should only be run occasionally to reset the gyro, which is cheaper to read (or so I've read...)
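The gyro-plus-compass idea is basically a complementary filter: trust the integrated gyro heading short-term, and pull it gently toward the (noisy but drift-free) compass long-term. A sketch, where the `alpha` value is a made-up starting point you'd have to tune, not a known-good number:

```python
def fuse_heading(gyro_heading, compass_heading, alpha=0.98):
    """One complementary-filter step. Headings are in degrees.
    alpha near 1.0 means the compass correction is gentle, so a single
    noisy compass reading can't yank the heading around."""
    # shortest angular difference, wrapped into [-180, 180)
    err = (compass_heading - gyro_heading + 180.0) % 360.0 - 180.0
    return (gyro_heading + (1.0 - alpha) * err) % 360.0
```

Run this at whatever rate the compass can manage, while the gyro integration runs every control cycle; over many steps the accumulated gyro drift gets bled off toward the compass heading.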
davidthefat
30-03-2010, 23:53
I'm just curious have you tried using a compass to re-zero the gyro? I haven't tried it but from what I've read it seems to create an almost drift less gyro as the compass doesn't have the issues of drift.
note: running a compass is an expensive operation so it should only be run to reset the gyro which is less expensive (or so I've read...)
It's only about $30 for a compass; I will think about it. Gyros are about $30-$40 too. Maybe I will just try 1 gyro, 2 IR sensors, and 1 sonar before adding more sensors.
Radical Pi
31-03-2010, 00:57
If I were doing this, my first project would be a defensive robot. It is actually the simplest to code IMO. Just take the camera image, do a color analysis to find a rectangle of the opposing team's color, and drive towards that. Once you factor in a pinning timer through an accelerometer to determine if you are still moving, you could probably have a decent autonomous defensive robot.
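That drive-at-the-color idea is essentially a proportional controller on the blob's horizontal position in the camera frame. A sketch, where the blob x-coordinate, gains, and tank-drive outputs are all hypothetical placeholders rather than a real vision/motor API:

```python
def chase_target(blob_x, image_width, base_speed=0.5, kp=1.5):
    """Proportional 'drive at the detected color blob' sketch.
    blob_x is the blob's pixel x-position; returns (left, right)
    motor outputs clipped to [-1, 1] for a tank drive."""
    # error in [-1, 1]: zero when the blob is centered in the frame
    error = (blob_x - image_width / 2) / (image_width / 2)
    turn = kp * error
    left = max(-1.0, min(1.0, base_speed + turn))
    right = max(-1.0, min(1.0, base_speed - turn))
    return left, right
```

With the blob centered you drive straight at base speed; as it slides toward a frame edge the robot turns harder toward it, up to a spin in place.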
Next would be the offensive ball collector. I personally would think of using an array of IR sensors that can give you a rough map of what is in front of you. Finding roundish objects would be a bit of a challenge on the cRIO, but a fun one at that (would it be possible to generate a rough IMAQ image to use the existing libraries?). Alternatively, if a team could use a second camera, they could mount one high for targeting and one low for ellipse detection below the bumper for balls.
Once you have possession of the ball, I think the rest would be fairly easy. Do a spin until you detect a target, do a color check below the target to make sure you are aiming at the right team (180° spin if wrong), and then the rest has been done by other teams this year.
Sensor-wise, I think a gyro is a given. If you zero it after every targeting calculation, you could easily use it for relative motion in targeting through a state machine. I think the flow would be camera detection->calculate turn amount->zero gyro->turn->when finished, re-check targeting and repeat if necessary. After a gyro, I think 2 cameras would be much more valuable than most other sensors. As I said before, a high camera has been proven to be accurate for goal targeting. A low camera (kicker level?) could work using another ellipse detection algorithm. Against the field, the balls would definitely have enough contrast to detect, but the field walls I'm not sure about.
On top of the cameras, some form of possession detection would be good. For our system, we mounted 3 IR sensors around the ball entry area. 1 sensor above the ball indicates when a ball is in our kicker (would have used this for autonomous if we had time most likely), and then 2 sensors, one on the left of the ball and one on the right, tell us the internal position of the ball in the kicker. A similar system might work in the autonomous bot idea.
AI flow as of now in my idea: Spin randomly->check for balls, spin again until found->aim for nearest ball->drive until in possession->spin until goal found->check if it is the correct goal, spin around if not->aim for goal and kick->repeat
Input?
EDIT: one more thing: a compass wouldn't work too well in this situation. With electromagnetic motors nearby, I would expect a fair amount of interference in any compass reading, causing the accuracy of the compass to change depending on the speed and direction of driving.
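The spin/collect/score flow above maps naturally onto a table-driven state machine. A sketch, where the state and event names are invented to match that flow (events would come from your sensor checks, not from any real API):

```python
# Transition table for the spin/collect/score loop. Each state maps an
# observed event to the next state; anything unlisted keeps the robot in
# its current state, so noisy or irrelevant events can't derail the plan.
TRANSITIONS = {
    "search_ball":   {"ball_seen": "approach_ball"},
    "approach_ball": {"ball_possessed": "search_goal",
                      "ball_lost": "search_ball"},
    "search_goal":   {"goal_seen": "check_goal"},
    "check_goal":    {"goal_ours": "aim_and_kick",
                      "goal_theirs": "search_goal"},
    "aim_and_kick":  {"kicked": "search_ball"},
}

def step(state, event):
    """Advance the state machine one event; unknown events are ignored."""
    return TRANSITIONS.get(state, {}).get(event, state)
```

Each state would have its own motor behavior (spin, drive, aim); the table only decides when to switch, which keeps the decision logic small enough to reason about.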
a team could use a second camera, they could mount one high for targeting and one low for an ellipse detection below the bumper for balls.
I was wondering about this: how do you hook up two cameras? Can you do this with a network switch, or is there another way?
Alex.Norton
31-03-2010, 02:43
To paraphrase the words of a preeminent voice in the field of computer vision, when DARPA asked somebody to make a computer that could see as well as a 5-year-old child: "Could we try for a 2-year-old autistic monkey instead?"
I realize that this view isn't very adventurous, but I'm a firm believer that taking small steps to improve one's programming is important. That, and I wouldn't want my programmers to spend so much time on it that they fail their classes.
I personally would rather see my team automate everything except the basic movement of the machine. Make the machine collect the game piece, turn and score. I've programmed fully autonomous tank games and it can be very hard to design a system that can predict the actions of an opponent. I like having a human mind in the loop acting as a garbage collector and main decision maker.
-Alex
Vikesrock
31-03-2010, 02:52
Alternatively, if a team could use a second camera, they could mount one high for targeting and one low for an ellipse detection below the bumper for balls.
Why not use a single camera with a tilt mount? You could either try to find a mount height that allows for a single angle that will detect balls and another angle to target or you could determine a range of tilt values to scan over in each mode.
Why not use a single camera with a tilt mount? You could either try to find a mount height that allows for a single angle that will detect balls and another angle to target or you could determine a range of tilt values to scan over in each mode.
Ball detection was talked about in this thread:
http://www.chiefdelphi.com/forums/showthread.php?t=82537&highlight=ball
My thinking is that having a camera at ball level would greatly reduce, if not eliminate, the shadow issue, allowing you to use the findEllipse method.
One design I did see, which is a happy medium between the two, is putting the camera on a pneumatic: lower position for being at ball level, top position for being high enough to see the goal.
Greg McKaskle
31-03-2010, 07:48
I'll try to sum up the icon and compile thing quickly so as not to interrupt the design of SkyNet.
My claim is simply that the machine instructions contained in those clumps almost certainly were produced by a traditional compiler using a traditional text-based programming language -- quite likely C.
Textual representation of machine code is usually called assembly code. The actual machine code is in fact a sequence of bytes, but not human readable in the slightest. The LV product contains a compiler which writes machine instructions into memory based on the passes over the graph -- no calls to gcc or asm, nothing up the sleeve.
There are actually numerous ways for LV to target a platform: the compiler is the most common; another is a bytecode/VM method used for small targets such as the NXT; and the third is a translator that does produce C or VHDL to more quickly be able to use vendor-specific tools. So while that is a way for LV to target VxWorks, it is not the method being used.
As for icons replacing text, I don't think that is the right comparison to make. In fact text is just a sequence of graphics. Textual and mathematical language is just as abstract, if not more so, than a drawing.
The issue is the expressiveness of the programming language. Would you rather write down a paragraph about the sunset or snap a photo? Is a map better than directions? Is a picture of food better than a recipe? It all depends on what you are trying to do. IMO, the best environment would let you pick based on the task. Did I mention that LV has had a formula box since 1.0?
And now ... back to SkyNet.
Greg McKaskle
Chris Hibner
31-03-2010, 07:51
For the record, this didn't start out as a Labview discussion and I didn't lead it here -- nor did I want to.
There was first a claim that iconic programming was replacing text based programming.
I claimed that, on the contrary, text based programming is the foundation upon which iconic programming is built.
Icons (nodes if you prefer) graphically represent machine code.
AFAIK, other than compiling and/or assembling text files, we have no spiffier way of producing the machine code.
I never said text based languages would disappear. I said that they would fall away from being the norm, just like assembly isn't the norm today. When I started working in the embedded controls area, everything was done in assembly. Right after I started the company decided to do the first product done in C. You wouldn't believe how many, "oh, that'll never work" statements were going around the office.
Text to graphical coding is no different than the last evolution from assembly to C. Just like many graphical "languages" start out by auto-coding C, most C compilers start out by "auto-coding" the C to assembly. That doesn't mean that C isn't a higher level language than assembly, nor does it mean everyone has to know assembly to use C. Assembly is just the relic and the automation handles the details.
There also isn't necessarily a chicken-and-egg scenario. The original C compiler was most likely written in assembly or another high level language (like, say, FORTRAN). Current C compilers are written using the previous C compiler. In other words, you may need a different language to write the initial compiler, but as soon as you write that first compiler, you may be able to do away with it from that point on.
With all of that being said, text based programming is currently more efficient for coding than graphical languages in certain areas (as I said in the original post). For that reason, I don't see text based programming dying any time soon. What I do see is that it will become less and less prevalent, and it will only be used when there isn't a much better choice (much like when people use assembly today).
gvarndell
31-03-2010, 08:18
so as not to interrupt the design of SkyNet.
OK, I think we've beaten the text vs. icons issue far enough into the ground -- at least for this go-round :o
It is a distracting tangent to the thread.
I think all us software types are pleased that the kids (at least I hope it's the kids) are exploring the possibilities of software, so let's let them get back to it.
Alan Anderson
31-03-2010, 08:36
The original C compiler was most likely written in assembly or another high level language (like, say, FORTRAN).
The first C compiler was written in B.
Next Year, no matter the game, I challenge you to make your robot fully autonomous. ... Just post your opinions and I will add to the list if you want to take the challenge.
Teams That Are Willing To Take The Challenge:
*Team 589 (Just Me As Of Now)
*Team 33
*Team 2503
*Team 1086
I haven't read this entire thread, but if the folks who have signed up for the original poster's challenge (and any who haven't yet) would like to get a little practice in before attempting it with a real robot, I can make it possible for you to control one or all of the 5th Gear simulated robots purely through software. I did this myself for our Overdrive simulation, and it is definitely a learning experience that would be good prep for attempting the same with a real bot. Do I have any takers?
Blake
Main 5th Gear Thread: Link to 5G Thread (http://www.chiefdelphi.com/forums/showthread.php?t=79796&highlight=5th+gear)
This appears to be a very challenging yet intriguing problem. From personal coding experience on my team, it seems that creating a 100% autonomous robot would be very plausible. In past years, I've been able to implement code to completely automate driver two's job. One difficulty, though, is that if a sensor breaks or something fails, driver two will never trust the code again.
As for an entirely autonomous robot, I agree it's possible, but the logic behind it would be massive. I have no doubt it can be done; as the head programmer on my team, I find myself automating everything I can with sensors. The two main limits keeping me from pursuing a robot with more autonomy, or a 100% autonomous robot, are time and resources.
I have no doubt other people have hit similar bottlenecks. As for me, I'm planning on structuring a very modular library of LabVIEW code to speed up this process and allow for easy implementation regardless of next year's game.
efoote868
31-03-2010, 12:35
I have no doubt it can be done; As the head programmer on my team I find myself automating everything I can with sensors. The two main limits keeping me from pursuing a robot with more autonomy or 100% autonomous robot is time, and resources.
In six weeks, with a budget of $5,000 of things that can go into the robot, using a normal skill set (high school programmers), it won't be done.
I used DARPA as an example early on in this thread - teams of professionals and graduate level students with near unlimited bankroll behind them, completing a task that is arguably easier/more straight forward.
It took them two years to complete the challenge.
Take this year's game as an example, broken into its most simplistic macro steps.
(offense)
1. Find a ball
2. Drive to the ball
3. Kick or push the ball into the goal
That's very straightforward, until you toss in the fact that there are 5 other robots on the field. If you spent time completing all three steps in code and then testing, I'd say that's time wasted that could have been spent perfecting your autonomous mode, or spent making your robot the easiest machine to control on the field.
I don't want to sound like a nag or a nay-sayer, and I don't want to keep you from learning, even by failing. I'm just trying to offer my words of wisdom, having spent 4 years as a student in FIRST and a year as a mentor in FIRST.
There's a reason all cars don't drive themselves: this AI stuff isn't as easy as you may think.
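For teams that do take the plunge, the three macro steps above translate naturally into a small state machine. Below is a minimal sketch in Java; the state names and sensor predicates (ballSeen, ballInPossession, aimedAtGoal) are illustrative placeholders, not part of any FRC library, and a real robot would also command motors inside each state.

```java
// The three offensive macro steps, organized as a bare-bones state machine.
// All names here are made up for illustration.
public class KickStateMachine {
    enum State { FIND_BALL, DRIVE_TO_BALL, KICK }

    State state = State.FIND_BALL;

    // Advance one step given what the sensors currently report.
    public State update(boolean ballSeen, boolean ballInPossession, boolean aimedAtGoal) {
        switch (state) {
            case FIND_BALL:
                if (ballSeen) state = State.DRIVE_TO_BALL;   // spin/scan until seen
                break;
            case DRIVE_TO_BALL:
                if (!ballSeen) state = State.FIND_BALL;      // lost it: search again
                else if (ballInPossession) state = State.KICK;
                break;
            case KICK:
                if (aimedAtGoal) state = State.FIND_BALL;    // fire, then restart
                break;
        }
        return state;
    }

    public static void main(String[] args) {
        KickStateMachine sm = new KickStateMachine();
        System.out.println(sm.update(true, false, false));  // DRIVE_TO_BALL
        System.out.println(sm.update(true, true, false));   // KICK
        System.out.println(sm.update(true, true, true));    // FIND_BALL
    }
}
```

The point is less the code than the habit: each transition is driven by a sensor condition, which is exactly where the "what if a sensor breaks" problem mentioned earlier bites.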
In six weeks, with a budget of $5,000 of things that can go into the robot, using a normal skill set (high school programmers), it won't be done. ... There's a reason all cars don't drive themselves, this AI stuff isn't as easy as you think it may be.
You tempt me to prove you wrong -- but I'll leave that to some group of enterprising students.
If you remember that we are building role models, leaders and careers; and that we are using robots and tournaments to do that; then I think teams and students can be outstanding successes in the time and money available to them.
Blake
virtuald
31-03-2010, 13:01
Textual representation of machine code is usually called assembly code. The actual machine code is in fact a sequence of bytes, but not human readable in the slightest.
Actually, there are some people who can read it. Very, very few. But I know at least one. It's pretty ridiculous.
I would be down with this. I know if I brought it up during a meeting it would be cut down pretty fast, but that doesn't stop me from working on it outside the competition to learn.
I see a lot of time being spent in the parking lot with the bot over the next couple of months.
Tom Bottiglieri
31-03-2010, 13:45
I used DARPA as an example early on in this thread - teams of professionals and graduate level students with near unlimited bankroll behind them, completing a task that is arguably easier/more straight forward.
It took them two years to complete the challenge.
If you really think the DGC/DUC task is easier than FRC, I think you may be mistaken. Before I moved to California, I worked at MIT with the DGC team on their continuing autonomous land vehicle research. Unknown terrain, traffic laws, REAL safety, and IC engines are all a bit more complicated than our dinky electric drive bases and arms.
But I think anyone trying this can take some experiences away from the DGC. First of all, the teams who had the best software also had the best hardware. If your machine is not mechanically reliable or controllable, you aren't going anywhere fast. One of the biggest lessons I've learned in my years as a software guy in FIRST is that the best software fix is usually a mechanical fix.
As far as the software goes, you really need to start thinking about how to set up your machine as a series of interconnected systems. There are basically three components to an autonomous robot control system: Perception, Planning, and Control. Perception is the data you take in from the world around you (vision, distance, GPS, and their associated post-processing). Planning is the part that understands how to interpret the world around it and make educated decisions on what to do. Control is the part that actually makes the robot do what it wants to do.
If you are familiar with the Model/View/Controller design pattern, you can loosely parallel Perception to the Model, Planning to the Controller, and Control to the View. (Where the model is what you have, the controller is what you want, and the view is what you get.)
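To make that three-way split concrete, here is a minimal Java sketch of the sense-plan-act loop. All of the interface and class names are invented for illustration; real perception would read actual sensors and real control would drive actual motors.

```java
// A minimal sketch of the Perception -> Planning -> Control split.
// Names are illustrative, not from any FRC library.

interface Perception { double[] sense(); }          // what you have (Model)
interface Planner    { double[] plan(double[] s); } // what you want (Controller)
interface Control    { void act(double[] cmd); }    // what you get (View)

public class AutonomyLoop {
    private final Perception perception;
    private final Planner planner;
    private final Control control;

    public AutonomyLoop(Perception p, Planner pl, Control c) {
        perception = p; planner = pl; control = c;
    }

    // One iteration of the sense-plan-act cycle; call this every loop tick.
    public double[] step() {
        double[] state = perception.sense(); // e.g. gyro heading, ball bearing
        double[] cmd = planner.plan(state);  // decide a drive command
        control.act(cmd);                    // send it to the motors
        return cmd;
    }

    public static void main(String[] args) {
        // Toy wiring: "see" a ball 0.5 rad to the left, turn toward it.
        AutonomyLoop loop = new AutonomyLoop(
            () -> new double[]{0.5},                       // ball bearing (rad)
            s -> new double[]{0.3, 0.5 * s[0]},            // forward, turn = k * bearing
            cmd -> System.out.println("drive " + cmd[0] + " turn " + cmd[1]));
        loop.step();
    }
}
```

Keeping the three stages behind interfaces like this is also what makes a simulator swap possible: replace the Perception and Control implementations and the Planner never knows the difference.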
gvarndell
31-03-2010, 13:48
machine code is in fact a sequence of bytes, but not human readable in the slightest.
Actually, there are some people who can read it. Very, very few. But I know at least one. It's pretty ridiculous.
It depends on the machine (as well as the people).
Nearly 30 years ago, I didn't need a disassembler to read 6502 opcodes -- 8-bit machine with small instruction set.
Today, 20 years after last touching a 68K machine, I still remember the opcode for the NOP instruction.
Nowadays, even RISC machines have sufficiently complex opcode and operand encoding that I don't even try -- don't need to and there's no glory or money in it if I could.
Collaborative Development...?
Seeing as many of the tasks are the same from robot to robot (i.e., drive to a target), I am wondering what people would think about working on the problem in a collaborative manner. Also, if you were to work on a collaborative autonomous engine, what language would you want to use?
(Any language is the right language; I'm not trying to open another debate about languages.)
Doug Leppard
31-03-2010, 15:06
The latest Wired magazine (April 2010, page 42) addressed this issue in an article called "Advantage: Cyborgs". It asked the question: which is smarter, humans or machines? As an example, machines have been beating humans at chess for years. But if you combine humans with machines as a team, they beat both the human alone and the machine alone, because the team uses the strengths of each.
Really, if an FRC machine is done well, it is a cyborg of the drivers and the machine, each playing the part they do best. I have always pushed for putting as much intelligence in the machine as possible to help the drivers.
So you drivers and teams out there on the fields are really cyborgs killing off your opponents.
davidthefat
31-03-2010, 16:11
You know what, people that bring up DARPA and such, you've got to think: their goal is totally different from ours. They want to build a machine that is pretty fail-proof in the unexpected; we are trying to do it in a known environment with known factors. Two different ball parks.
edit: I am very offended by some people's "high school programmers" comments. Honestly, that may be true to a certain extent, but think of the potential of someone that got this far without any help... I learned from books and trying things out myself. I only went to a programming class this year to get my career prep credit, but then I learned of robotics through that class... Honestly, how many 12 year olds do you see that know C++?
I haven't read this entire thread; but if the folks who have signed up for the original poster's challenge (and any who haven't yet) would like to get a little practice in before attempting this challenge with a real robot - I can make it possible for you to control one or all of the 5th Gear simulated robots purely through software - I did this myself for our Overdrive simulation, and it is definitely a learning experience that would be good prep for attempting to the same with a real bot - Do I have any takers?
I think that'd be pretty neat. Plus, then you could experiment with a few different designs. Sure, it's a bit of an idealized environment, but you get to play with a robot any time.
-Tanner
I would be down with this. I know if I brought it up during a meeting it would be cut down pretty fast, but that doesn't stop me from working on it outside the competition to learn.
I see a lot of time being spent in the parking lot with the bot over the next couple of months.
I like how you think; I'm in the exact same situation. If I brought it up, I'd be looked at like I was insane, and I agree it's a challenge, but I'm willing to go for it. Unfortunately, I'm the only one who thinks it's worth a shot. I mean, even if it's not perfect, the experience would be full of learning opportunities.
One programmer from 1124 is taking on the challenge.
davidthefat
31-03-2010, 18:31
I like how you think; I'm in the exact same situation. If I brought it up, I'd be looked at like I was insane, and I agree it's a challenge, but I'm willing to go for it. Unfortunately, I'm the only one who thinks it's worth a shot. I mean, even if it's not perfect, the experience would be full of learning opportunities.
One programmer from 1124 is taking on the challenge.
Yeah, just work on it, and if you want to test it, just say it's experimental code. No one will care unless it goes crazy and runs everyone over.
Rion Atkinson
31-03-2010, 19:17
Okay, so here goes another post for this thread.
I keep seeing people say "I've been in FIRST four years and am now a mentor. I think you are over-ambitious high-schoolers. This can't be done. MIT students can't do it. Not in six weeks anyway." Where are your heads? I mean seriously. Think! :mad: You call yourselves mentors... Aren't you supposed to inspire? Would those people be at MIT if they weren't ambitious? Would we be here in FIRST if we didn't think that high-schoolers could build robots? I can't say this is true, but I have a feeling that when FIRST was started, people like yourselves said "That's stupid, high-schoolers can't build robots." And now look! We have teams like 148, 114, 254, 33, and 1114 (the list goes on) who are amazing! I sit back in awe, practically drooling, when I see their robots! If I knew that they also had a fully autonomous robot, I would be floored! So please, use your brains and think, people. Do we encourage these ambitious students, or do we tell them it can't be done? (Which we all know will just make them want to do it all the more. ;) )
Plus, it's the off-season! I'm going to be starting up my own team, keeping up with school, mastering designing with sheet metal, along with fine-tuning my CAD skills. I see no reason why we shouldn't encourage these students to try their best to do what MIT students took two years to do. I hope they succeed! :D Do I expect them to have almost perfect code by the beginning of next season? YES! Do I expect them to have a fully autonomous robot next season? No, but I would love to see it! :D
I would personally offer any help that is needed, but I am not a programmer. I'll be over here cheering you on though.
Go for it, guys: have fun, learn, fail, stand up, do it all over again, and then, when there seems to be no hope, you will succeed.
-Rion
P.S. - FIRST isn't DARPA.
Feel free to give me some bad rep for this post, I don't care.
davidthefat
31-03-2010, 19:33
So I just got this book: http://www.amazon.com/Introduction-Autonomous-Mobile-Intelligent-Robotics/dp/026219502X
Having read some of it now: there are so many wheel configurations you can do, and I think I want to attempt 2- and 4-legged bots later...
Will post a chart later.
davidthefat
31-03-2010, 19:42
http://i41.tinypic.com/t8ufbp.jpg
theprgramerdude
31-03-2010, 20:28
Collaborative Development...?
Seeing as many of the tasks are the same from robot to robot (i.e., drive to a target), I am wondering what people would think about working on the problem in a collaborative manner. Also, if you were to work on a collaborative autonomous engine, what language would you want to use?
(Any language is the right language; I'm not trying to open another debate about languages.)
This is what I was thinking. We could try to get a giant, inter-team project going, with each team responsible for certain sections of code, which gets tossed back and forth via e-mail, etc.
Should we start a sign-up list for those that want to do this over our 8-month break? I'm definitely psyched for this.
Edit: Plus, the rules would allow us to use this code next year (with modifications for that game) as long as we keep releasing it to the public.
Radical Pi
31-03-2010, 20:51
This is what I was thinking. We could try to get a giant, inter-team project going, with each team responsible for certain sections of code, which gets tossed back and forth via e-mail, etc.
This sounds like a project for... Google Wave (finally, a real use for that site). I have 25 invites lying around if anyone wants to try setting it up there.
davidthefat
31-03-2010, 21:04
This is what I was thinking. We could try to get a giant, inter-team project going ... as long as we keep releasing it to the public.
If that were to work, people would have to pick one common language, and everyone would have to code very well and comment very well too. I think it's way too hectic to do online, but if it's in person, I'm down.
ideasrule
31-03-2010, 21:08
This is what I was thinking. We could try to get a giant, inter-team project going ... as long as we keep releasing it to the public.
I'm definitely interested in this collaborative project. A completely autonomous robot certainly can't perform better than a human driver, but the code that'll be developed will almost certainly be useful for the autonomous period.
davidthefat
31-03-2010, 21:11
Another question for y'all: which way do you make the omni bot's "front" -- a diagonal wheel, or a flat face?
This way?
http://www.societyofrobots.com/images/robot_omni_4wheels.gif
or tilted so a flat side is the front?
Chris is me
31-03-2010, 21:12
Okay, so here goes another post for this thread.
I keep seeing people say "I've been in FIRST four years and am now a mentor. I think you are over-ambitious high-schoolers. This can't be done. MIT students can't do it. Not in six weeks anyway." Where are your heads? I mean seriously. Think! :mad: You call yourselves mentors...
There's a difference between ambition and foolhardily charging at a ridiculous idea because you're overconfident in yourself and think anything is possible. I don't think it would be really inspiring to tell my team "Yeah, we can build Super Complex Bot XYZ in a week! Faster than 1114 or 148 ever could! All we gotta do is try and believe!". I guess I'm supposed to inspire students by giving them ridiculous goals and padding their egos.
davidthefat
31-03-2010, 21:15
There's a difference between ambition and foolhardily charging at a ridiculous idea because you're overconfident in yourself and think anything is possible. I don't think it would be really inspiring to tell my team "Yeah, we can build Super Complex Bot XYZ in a week! Faster than 1114 or 148 ever could! All we gotta do is try and believe!". I guess I'm supposed to inspire students by giving them ridiculous goals and padding their egos.
Read my sigs... Did Thomas Edison give up after not getting the light bulb after a couple tries?
edit: I'm not quitting until I get it... That means if even after college I don't have an autonomous robot, I will just join a company that has the same goal as me.
ideasrule
31-03-2010, 21:16
There's a difference between ambition and foolhardily charging at a ridiculous idea because you're overconfident in yourself and think anything is possible. I don't think it would be really inspiring to tell my team "Yeah, we can build Super Complex Bot XYZ in a week! Faster than 1114 or 148 ever could! All we gotta do is try and believe!". I guess I'm supposed to inspire students by giving them ridiculous goals and padding their egos.
"Super Complex Bot XYZ" is the understatement of the year. Designing an intelligent autonomous is as hard as, if not harder than, building a nuclear fusion reactor with less than $100 000.
EDIT: I mean one that actually generates electricity instead of consuming it.
davidthefat
31-03-2010, 21:18
"Super Complex Bot XYZ" is the understatement of the year. Designing an intelligent autonomous is as hard as, if not harder than, building a nuclear fusion reactor with less than $100 000.
EDIT: I mean one that actually generates electricity instead of consuming it.
In before FIRST competition gets blown up by a faulty nuclear reactor some kid made...::ouch::
EthanMiller
31-03-2010, 21:19
Why not Subversion, CVS, or the like? Google Code, SourceForge, and similar hosts can be used for free. WindRiver and NetBeans, I know, can do SVN and CVS; I don't know about LabVIEW.
Also: whose bot would it run on? It would be a little hard to develop for a robot that you've never seen. Just my opinion, of course, having never developed for an application like that. It would also make testing a bit harder.
davidthefat
31-03-2010, 21:22
Why not Subversion, CVS, or the like? Google Code, SourceForge, and similar hosts can be used for free. WindRiver and NetBeans, I know, can do SVN and CVS; I don't know about LabVIEW.
:rolleyes: Now, I wouldn't want every robot in the next competition to be fully autonomous too... What's the point of being fully autonomous then?
efoote868
31-03-2010, 21:35
If you really think the DGC/DUC task is easier than FRC, I think you may be mistaken. Before I moved to California, I worked at MIT with the DGC team on their continuing autonomous land vehicle research. Unknown terrain, traffic laws, REAL safety, and IC engines are all a bit more complicated than our dinky electric drive bases and arms.
I was referring to the 2004-2005 Grand Challenge. I believe that getting a vehicle autonomously from point A to point B, with no limits on money spent or people involved and a year to prepare, is an easier and more straightforward challenge than programming a robot to compete autonomously in an FRC game, with up to $5,000 worth of materials and sensors, in six weeks.
But that's my opinion.
I keep seeing people say "I've been in FIRST four years and am now a mentor. I think you are over-ambitious high-schoolers. This can't be done. MIT students can't do it. Not in six weeks anyway." Where are your heads? I mean seriously. Think! :mad: You call yourselves mentors... Aren't you supposed to inspire?
Re-read that post, and you'll see that I said "It won't be done".
I would absolutely love it if all of you proved me wrong.
I'm also suggesting that you guys should set more realistic goals given the time, money, manpower, not to mention computing capabilities of the cRio.
A more worthy goal than a fully automated match would be:
1. A fully functioning and planned autonomous
2. A very easy to control robot
3. A mesh of automated functions and a simple user interface.
4. Get more sponsorship for your team, so that it may survive longer.
5. Start another team.
... Also: whose bot would it run on? It would be a little hard to develop for a robot that you've never seen. Just my opinion, of course, having never developed for an application like that. It would also make testing a bit harder.
Once again: if you pick the 5th Gear simulator, or a non-FIRST game that offers a similar opportunity to write code to control a simulated machine (Alex mentions a tank game (http://www.chiefdelphi.com/forums/showpost.php?p=946144&postcount=108)), then everyone is using the same "machine", your testing is quick and easy anywhere on the planet, and you get multi-machine interactions.
Using some sort of simulator/game would be a wise first step to take on this ambitious undertaking.
How do you eat an elephant? One bite at a time.
Blake
Chris Hibner
31-03-2010, 21:41
Here's the thing:
Is anyone really going to be able to accomplish this for next year? Probably not (but it depends on the game). However, a lot of people might get very close, and they're going to learn an insane amount during the process. If you want to try it, give it a go. The best thing is that you'll learn the shortcomings of a lot of sensing technologies, you'll learn how to translate a human thought process into logical steps, and you'll come away with a boat-load of knowledge about real-world control systems and the typical issues you have to deal with. Try it and have fun!
Rion Atkinson
31-03-2010, 21:46
Re-read that post, and you'll see that I said "It won't be done".
I would absolutely love it if all of you proved me wrong.
I'm also suggesting that you guys should set more realistic goals given the time, money, manpower, not to mention computing capabilities of the cRio.
A more worthy goal than a fully automated match would be:
1. A fully functioning and planned autonomous
2. A very easy to control robot
3. A mesh of automated functions and a simple user interface.
4. Get more sponsorship for your team, so that it may survive longer.
5. Start another team.
Think of it like FLL. Those matches are completely autonomous, are they not? They use nothing but sensors. No, I'm not saying that they should tackle this all at once. I'm just saying that by learning this one thing at a time, it could very well greatly benefit them in the future. :D If they learn about sensors, they could apply that to the robot next year, allowing the robot to be that much easier to control. Come on, chassis people build chassis in the off-season. What do programming guys do? They program.
Now, yes, I would suggest that you first program a two-minute autonomous that just involves an empty field with the game pieces. Once you have that down, I would then add in the chance of other robots. Just a thought.
efoote868
31-03-2010, 22:10
Another question for y'all: which way do you make the omni bot's "front" -- a diagonal wheel, or a flat face?
This way?
http://www.societyofrobots.com/images/robot_omni_4wheels.gif
or tilted so a flat side is the front?
I'd tell you that robot has no front, and you should program it as such.
theprgramerdude
31-03-2010, 22:26
Why not Subversion, CVS, or the like? Google Code, SourceForge, and similar hosts can be used for free. WindRiver and NetBeans, I know, can do SVN and CVS; I don't know about LabVIEW.
Also: whose bot would it run on? It would be a little hard to develop for a robot that you've never seen. Just my opinion, of course, having never developed for an application like that. It would also make testing a bit harder.
IMO, if this were a group project, we'd just agree on a set design before starting. Everyone already has a fully (I hope) functioning robot; the only differences would be the exact specs (wheels, drivetrain, minor issues that modular code would easily adapt to).
davidthefat
31-03-2010, 23:02
IMO, if this were a group project, we'd just agree on a set design before starting. Everyone already has a fully (I hope) functioning robot; the only differences would be the exact specs (wheels, drivetrain, minor issues that modular code would easily adapt to).
That's why you have multiple programmers on your team... :rolleyes: I can't say I can trust them with any code I write...
Radical Pi
31-03-2010, 23:03
I'd tell you that robot has no front, and you should program it as such.
I'd tell you that front is whichever way the camera is pointing.
davidthefat
31-03-2010, 23:04
I'd tell you that front is whichever way the camera is pointing.
I was referring to which side goes forward when you push forward on the joystick.
efoote868
31-03-2010, 23:15
I was referring to which side goes forward when you push forward on the joystick.
Me too.
davidthefat
31-03-2010, 23:17
Me too.
So you are saying it goes forward relative to the driver's POV? Oh I see...
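Since the "which side is forward" question comes down to math rather than hardware, here is a hedged sketch of the standard field-oriented transform: rotate the driver's field-relative stick vector by the negative of the robot's gyro heading to get a robot-relative command. The axis conventions here are assumptions (x toward the far wall, y to the driver's left, CCW-positive heading in radians); adjust them to your own frame.

```java
public class FieldOriented {
    // Convert a field-relative stick command (fx toward the far wall, fy to
    // the driver's left) into a robot-relative command, given the robot's
    // heading in radians (0 = facing the far wall, CCW positive).
    public static double[] toRobotRelative(double fx, double fy, double heading) {
        double cos = Math.cos(heading), sin = Math.sin(heading);
        return new double[]{
            fx * cos + fy * sin,   // robot-forward component
            -fx * sin + fy * cos   // robot-left component
        };
    }

    public static void main(String[] args) {
        // Robot has turned 90 degrees CCW; driver still pushes "away".
        // The robot must strafe to its own right to keep moving downfield.
        double[] cmd = toRobotRelative(1.0, 0.0, Math.PI / 2);
        System.out.printf("forward=%.2f left=%.2f%n", cmd[0], cmd[1]);
    }
}
```

With this transform in the loop, an omni bot genuinely has no front as far as the driver is concerned: the stick always means "that way on the field".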
davidthefat
01-04-2010, 00:25
Ah $@#$@#$@#$@#... Robotics is too addictive. I have halted work on the game that I have to turn in at the end of the year for my AP Comp Sci class... Better not slack off on that.
Alex.Norton
01-04-2010, 00:55
The biggest problem that I can see here (assuming all of the sensors, code, etc. work perfectly) is the development of an artificial intelligence. How does the machine decide when it is prudent to go for a game piece? Heck, how do you choose to go left or right at any given moment?
The AIs that I have worked with in the past used hundreds of runs to get good predictive and training data. How do you get your machine this type of data against a good opponent? Getting a machine that can run around an empty field is one problem; getting a machine that can play against even another AI is extremely difficult. As an example, I'm currently working on a biomimetic vision system that runs on data sets well in excess of 10,000 images, and despite this it can still only tell the difference between a zebra and a car 80% of the time.
The next major hurdle that I can see on the horizon is processing power. From this discussion it would seem that most people want to bet on the camera for most of their more complex sensing needs. Doing this will involve object recognition (ball, other bots) integrated with position control to accurately place the robot on the field. Doing real-time image processing that must recognize objects (I haven't seen the NI libraries for this, so I don't know what functions they have) takes an amazing amount of processing power -- power that I'm sure the cRio doesn't have, especially considering how much I've heard about the camera slowing the robot down.
If somebody has a proposal for how to solve the processing problem I would be happy to work on a project like this. Sadly I don't have access to a robot so I would need to work more from the strategy end. If somebody feels like organizing this PM me.
Collaborative Development.
To everyone looking to work with me on the ADK: I have created a FirstForge project (Bobotics ADK).
The framework is very basic, but very extendable.
In there is the ADK, and then a sample robot (Bob) built using the ADK library.
My hope is that it is simple enough to pick up quickly. The goals of the project are twofold: bring autonomous to every team, and successfully implement a basic, fully autonomous proof of concept.
http://firstforge.wpi.edu/sf/projects/bobotics
*Oh, and it's written in Java, as I feel it is the easiest to learn.
gvarndell
01-04-2010, 07:42
If somebody has a proposal for how to solve the processing problem ...
Stack the deck and reduce the horsepower required.
For example, for tracking other robots, require that every participating robot computes, and broadcasts, 5 or 10 times per second, its location on the field and its acceleration vector.
This could be a tiny little network packet tagged with robot ID.
Is this less sexy to contemplate than full realtime vision on each robot?
Sure it is.
Is it almost infinitely more doable than full realtime vision on each robot?
Duh.
Constraining the problem domain is the only way this is going to fly.
Even so, the enormity of this undertaking only seems apparent to some.
Perhaps that's a good thing...
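The broadcast idea above is easy to prototype. Here is a minimal sketch of such a status packet; the class name, 20-byte layout, and units are all invented for illustration, and in practice the bytes would ride in a small UDP datagram tagged with robot ID and sent 5-10 times per second:

```java
import java.nio.ByteBuffer;

// Hypothetical 20-byte robot status packet: ID, field position (meters),
// and acceleration vector (m/s^2). Intended to be broadcast over UDP
// several times per second by every participating robot.
public class RobotStatusPacket {
    public final int robotId;
    public final float x, y;    // position on the field, meters
    public final float ax, ay;  // acceleration vector, m/s^2

    public RobotStatusPacket(int robotId, float x, float y, float ax, float ay) {
        this.robotId = robotId;
        this.x = x; this.y = y;
        this.ax = ax; this.ay = ay;
    }

    // Pack the fields into the datagram payload.
    public byte[] encode() {
        ByteBuffer buf = ByteBuffer.allocate(20);
        buf.putInt(robotId).putFloat(x).putFloat(y).putFloat(ax).putFloat(ay);
        return buf.array();
    }

    // Rebuild a packet from a received payload.
    public static RobotStatusPacket decode(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        return new RobotStatusPacket(buf.getInt(), buf.getFloat(),
                buf.getFloat(), buf.getFloat(), buf.getFloat());
    }
}
```

At 20 bytes per packet and 10 Hz, six robots together generate about 1.2 KB/s of traffic, which is negligible next to real-time vision.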
alexhenning
01-04-2010, 12:09
I tried one in '09, but unfortunately the robot wasn't ready until ship day and it had too many quirks (mostly relating to the camera). We ended up never using it. I'll try again next year if I have enough time with the robot. To help with this, it would be nice if FIRST revealed the scores on the board plus time left, so that your strategy could change when you were winning and losing.
theprgramerdude
01-04-2010, 12:14
The biggest problem that I can see here (assuming all of the sensors, code, etc. work perfectly) is the development of an artificial intelligence. How does the machine decide when it is prudent to go for a game piece? Heck, how do you choose to go left or right at any given moment?
The AIs that I have worked with in the past used hundreds of runs to get good predictive and training data. How do you get your machine this type of data against a good opponent? Getting a machine that can run around an empty field is one problem. Getting a machine that can play against even another AI is extremely difficult. As an example, I'm currently working on a biomimetic vision system that runs off data sets well in excess of 10,000 images, and despite this it can still only tell the difference between a zebra and a car 80% of the time.
The next major hurdle that I can see on the horizon is processing power. From this discussion it seems that most people want to bet on the camera for most of their more complex sensing needs. Doing this will involve object recognition (ball, other bots) integrated with position control to accurately place the robot on the field. Doing real-time image processing (I haven't seen the NI libraries for this so I don't know what functions they do have) that must process and recognize objects takes an amazing amount of processing power. Power that I'm sure the cRio doesn't have, especially considering how much I've heard about the camera slowing the robot down too much.
If somebody has a proposal for how to solve the processing problem I would be happy to work on a project like this. Sadly I don't have access to a robot so I would need to work more from the strategy end. If somebody feels like organizing this PM me.
In terms of processing power, I was already thinking about how to overcome the (ironic) limits of the cRIO. Back in '07, a team was planning on attaching a full computer to their robot to do calculations. I was thinking likewise for this project.
My current idea is as follows. It would involve a DC-DC converter connected to the PD board, which could then plug straight into the ATX connector on a motherboard. Example: http://www.mini-box.com/M4-ATX?sc=8&category=981
A processor would then handle several inputs from various sensors that the cRIO can't handle. A CUDA-based GPGPU would be able to handle most calculations from the cameras, such as pattern recognition and positioning.
What do you think?
Back to the 10,000 Lakes Regional as my team needs the current Auton code done.
Edit: and IMHO, I believe Java would be a horrible language to create a fully autonomous robot, as its slowness and inefficiencies would handicap even the best system. It's why I chose to use C++ this year again.
As long as your definition of a "fully autonomous" robot consists of something on the order of simply finding a game piece and kicking it in the direction of a goal, you at least have a chance to be successful. Many of this year's and previous years' robots already do that in autonomous; no big deal to run it for an additional 2 minutes.
However, don't expect your robot to do very well in competition. A game designed for completely autonomous robots like FIRST TECH CHALLENGE or FIRST LEGO LEAGUE is much simpler than the games FIRST designs for their top level of competition.
Remember that you are part of an alliance. Your inability to co-operate with your teammates to maximize each robot's potential would be the equivalent of a football player running whatever play he feels like without co-operating with the rest of the team. Don't expect it to work very well.
Using this year's game for an example, how do you intend to determine whether your robot should be going after a ball or blocking an opposing robot? How do you decide when to come over the bump into the front zone and assist in scoring? How do you decide when to come over the bump into the front zone and block the opposing alliance's blocker bot so your striker can score? How do you know if your alliance partner has moved to the middle and you need to move to the far zone to clear balls from it? If you do decide to block an opposing robot, how do you know it's not dead and you're wasting your time on it? How do you know that your alliance's striker bot is broken down and you need to move forward and score? How do you know both your alliance mates are on their back and you need to flip one back upright to have a chance to get past their blocker bot and score (this happened to us in the semi-finals, we won the game 2-1: http://www.thebluealliance.net/tbatv/match/2010la_sf2m2)?
These are just some of the decisions I've made as field coach on our team this year. They're all pretty straightforward and simple, but impossible to make without an overview of the game situation. I'd be interested to hear how you intend to implement any kind of awareness of the tactical situation into your robot. You can pre-program a game plan into it, but without an overall awareness of the game, you'll discover the truth of an old saying that's proven itself through the years: "No plan survives contact with the enemy".
I think it's great to inspire people to work towards fully autonomous robots, just don't lose sight of reality. Many of your teammates will probably want to do at least reasonably well at their competitions. In the real world, your robot will be a liability to its alliance.
ideasrule
01-04-2010, 16:28
I vote for C++ too. Not only is Java slow, the WPILibJ has no libraries for file access or for more than basic vision processing. Java also doesn't allow low-level access to memory.
efoote868
01-04-2010, 17:05
I vote for C++ too. Not only is Java slow, the WPILibJ has no libraries for file access or for more than basic vision processing. Java also doesn't allow low-level access to memory.
You know that embedded Java programming is little more than accessing low level, native functions through a JNI?
Also, the fact that it doesn't allow low-level access to memory is a feature in most cases, making it harder to screw up.
More important than picking a language is figuring out the algorithms themselves, which may be as simple as "drive here, do action x, turn"... etc., with the smaller pieces left to teams to implement.
But then again, you don't know what the game will be, so it's incredibly difficult to make meaningful decisions on strategies.
Best of luck anyhow. I feel like I've been a negative nancy for too long with this, I'll only post in this thread from now on if I can offer help :]
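The "drive here, do action x, turn" framing above can be sketched as a tiny step sequencer. The Step interface and the convention of returning true when a step finishes are assumptions for illustration, not part of WPILib or any existing ADK:

```java
import java.util.Arrays;
import java.util.List;

// An autonomous routine as an ordered list of high-level steps.
// How each step is implemented (drive, kick, turn) is left to the team.
public class StepRunner {
    // A step does a little work each loop and reports true when finished.
    public interface Step { boolean run(); }

    private final List<Step> steps;
    private int current = 0;

    public StepRunner(Step... steps) { this.steps = Arrays.asList(steps); }

    // Call once per control loop; returns true when the whole routine is done.
    public boolean periodic() {
        if (current >= steps.size()) return true;
        if (steps.get(current).run()) {
            current++;  // advance to the next step
        }
        return current >= steps.size();
    }
}
```

A team would then assemble a routine like `new StepRunner(driveTo(x, y), kickBall(), turnTo(heading))`, where each factory method (names hypothetical) returns a Step wrapping that team's own sensor and motor code.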
Alan Anderson
01-04-2010, 17:21
Should we start a sign-up list for those that want to do this over our 8-month break?
What break are you referring to?
What break are you referring to?
Earlier in the thread there was discussion about working on the project in collaboration rather than individually.
Right now I am starting up a Java team to build an ADK (autonomous development kit).
It would be nice if the C++ team could get a lead too; this way we could combine our efforts to provide an ADK in both C++ and Java.
The Java project is on FirstForge: http://firstforge.wpi.edu/sf/projects/bobotics
ideasrule
01-04-2010, 18:03
You know that embedded Java programming is little more than accessing low level, native functions through a JNI?
Yes, I found that out after spending enormous amounts of time trying to use the native vision libraries with Java. The result was definitely not pretty (the vision code was many times longer than it would have been with C++), and got even uglier when I had to create my own C file to read from a cRIO file.
A program written in C can easily be run on a cRIO imaged for Java, but as far as I know a Java program cannot be run on a cRIO imaged for C.
davidthefat
01-04-2010, 19:37
Well, I delivered this message in the team meeting today. It went pretty well, but I think I'm still the only one that wants full autonomy.
davidthefat
01-04-2010, 19:58
In terms of processing power, I was already thinking about how to overcome the (ironic) limits of the cRIO. Back in '07, a team was planning on attaching a full computer to their robot to do calculations. I was thinking likewise for this project.
My current idea is as follows. It would involve a DC-DC converter connected to the PD board, which could then plug straight into the ATX connector on a motherboard. Example: http://www.mini-box.com/M4-ATX?sc=8&category=981
A processor would then handle several inputs from various sensors that the cRIO can't handle. A CUDA-based GPGPU would be able to handle most calculations from the cameras, such as pattern recognition and positioning.
What do you think?
Back to the 10,000 Lakes Regional as my team needs the current Auton code done.
Edit: and IMHO, I believe Java would be a horrible language to create a fully autonomous robot, as its slowness and inefficiencies would handicap even the best system. It's why I chose to use C++ this year again.
What I was thinking: a microprocessor takes care of the individual stuff, like multithreading, but with the robot's hardware.
Greg McKaskle
01-04-2010, 22:12
The underlying vision for Java, C++, and LV are all thin wrappers calling into the same binary. While the languages have different capabilities and different runtime features, as long as you are using NI-IMAQ via the WPI libraries, the vision performance isn't due to a language choice.
Greg McKaskle
Dancin103
01-04-2010, 22:13
I must say, if I ever saw this I would be amazed. It would be awesome. BTW this is my 340th post. Go GRR. :)
theprgramerdude
01-04-2010, 22:49
What break are you referring to?
An FRC member's schedule:
January-February: Nonstop work on robot
March-April: Competitions and Nationals
May-December: Intermittent workload until next season. <-8 month break
davidthefat
02-04-2010, 00:07
Are there any rules limiting the number of extra microprocessors the robot may have? I was thinking of using 2 or 3 extra microcontrollers to help the cRio.
The Lucas
02-04-2010, 00:29
Are there any rules limiting the number of extra microprocessors the robot may have? I was thinking of using 2 or 3 extra microcontrollers to help the cRio.
Many rules limit how they are used, particularly <R03> regarding custom circuits and all the cost accounting rules. There are also the additional non-KOP motor rules and power source rules if you are thinking of using a laptop. Of course, if you want to see how deep the rabbit hole goes (http://www.adambots.com/wiki/Co-Processor)
gvarndell
02-04-2010, 07:22
My current idea is as follows. It would involve a DC-DC converter connected to the PD board, which could then plug straight into the ATX connector on a motherboard. Example: http://www.mini-box.com/M4-ATX?sc=8&category=981
A processor would then handle several inputs from various sensors that the cRIO can't handle. A CUDA-based GPGPU would be able to handle most calculations from the cameras, such as pattern recognition and positioning.
What do you think?
As my 5 year old son often says -- and then what?
Figuring out how to power a PC motherboard on a robot gets you nothing but added weight and battery drain.
You're gonna have to take this idea to a higher plane if you want anyone to get excited about it.
My point is not to squelch your obvious enthusiasm, but engineers generally don't design solutions and then look for a problem that it solves.
Pretend that you've just read a product announcement:
Late summer 2010, BeastlyRoboVision Inc. will be offering a self-contained robot vision module. The module will cost around $900.00. It will be capable of tracking up to 16 (relatively large) objects out to a distance of 100 feet, with a 360 degree field of view. The module interfaces to any robot control system via 10 Gbit wired network interface, providing a low latency, high bandwidth, TCP/IP connection. FIRST has already approved this module for use in the 2011 FRC robots and NI has promised a 10 Gbit ethernet module for the cRio to be available by late summer.
Now, some of you set out to design and build that vision module, and some of you set out to design a robot that can make good use of it.
As long as your definition of a "fully autonomous" robot consists of something on the order of simply finding a game piece and kicking it in the direction of a goal you at least have a chance to be successful. Many of this year's and previous year's robots already do that in autonomous, no big deal to run it for an additional 2 minutes. How about if they define success as Inspiring and Recognizing Science and Technology excellence? I'm going to bet many folks will want to line up behind that goal!
However, don't expect your robot to do very well in competition. A game designed for completely autonomous robots like FIRST TECH CHALLENGE or FIRST LEGO LEAGUE is much simpler than the games FIRST designs for their top level of competition. How about if they use this as a tool in competing to earn FIRST's top level of competition's top award? - The Chairman's award.
Remember that you are part of an alliance. Your inability to co-operate with your teammates to maximize each robot's potential would be the equivalent of a football player running whatever play he feels like without co-operating with the rest of the team. Don't expect it to work very well. I think many FIRST FRC teams invest their time and resources across many fronts, and that building a "best" robot is only one of them. Almost certainly this reduces their robots' abilities to play the game well (but generally makes them better FIRST teams). Are you saying that investing in any goal that distracts significantly from building and improving the robot should be discouraged?
Using this year's game for an example, how do you intend to determine whether your robot should be going after a ball or blocking an opposing robot? How do you decide when to come over the bump into the front zone and assist in scoring? How do you decide when to come over the bump into the front zone and block the opposing alliance's blocker bot so your striker can score? How do you know if your alliance partner has moved to the middle and you need to move to the far zone to clear balls from it? If you do decide to block an opposing robot, how do you know it's not dead and you're wasting your time on it? How do you know that your alliance's striker bot is broken down and you need to move forward and score? How do you know both your alliance mates are on their back and you need to flip one back upright to have a chance to get past their blocker bot and score (this happened to us in the semi-finals, we won the game 2-1: http://www.thebluealliance.net/tbatv/match/2010la_sf2m2)?
A fully autonomous robot-system can have an interface between itself and the rest of the game participants. For example a human sensor in the driver's station can listen to instructions given by allies and can transfer them to the cRIO part of the system. To keep the project's intent intact these instructions should probably be distilled into a relatively small set of pre-defined messages.
These are just some of the decisions I've made as field coach on our team this year. They're all pretty straight forward and simple, but impossible to make without an overview of the game situation. I'd be interested to hear how you intend to implement any kind of awareness of the tactical situation into your robot. You can pre-program a game plan into it, but without an overall awareness of the game, you'll discover the truth of an old saying that's proved itself through the years: "No plan survives contact with the enemy". See previous paragraph, and notice that the folks haven't decided to try to create a fully autonomous alliance (yet). They are discussing creating a fully autonomous robot.
I think it's great to inspire people to work towards fully autonomous robots, just don't lose sight of reality. Many of your teammates will probably want to do at least reasonably well at their competitions. In the real world, your robot will be a liability to its alliance. Perhaps. Or perhaps it will be a huge asset to the program, and all teams in it, because it becomes tangible evidence that FIRST is highly successful; and the resulting robot(s) consequently become an excellent inspirational tool.
Blake
PS: I haven't forgotten that this would/will be a very hard job. Nor have I changed my mind about spending more than one season helping a dedicated group of students eat this elephant one bite a time.
I agree, the processing is a very debilitating bottleneck. We've debated some solutions, such as other processors, and ran into the same problems discussed above. I think the way to combat this would be to find the best combination of processing power and plausibility of it actually working. As stated above, you could strap a computer on the robot, but not only does a CUDA processor pull massive power, upwards of 400 watts if I remember correctly, getting useful information out of the motherboard would also be a challenge. And the probability of a motherboard surviving without cracking is slim to none. Although I'm unfamiliar with this new camera module, I think that may be the way to do it. I'm planning on researching it in depth as soon as I can.
And as discussed above, the number of field combinations is practically infinite. Maybe the way to go at this problem would be to use the AI to execute a small task within the game, just like the ability to activate camera tracking in teleop this year. I agree we should aim for the best possible AI, but as stated, that's a massive task. As a software developer, and with experience from working with my drive team, I believe this would be a valuable asset. It would take a lot of processing off the cRIO, and allow the coach to process the entire field, then relay to the drivers to press a button to execute the proper function.
Alan Anderson
02-04-2010, 10:05
An FRC member's schedule:
January-February: Nonstop work on robot
March-April: Competitions and Nationals
May-December: Intermittent workload until next season. <-8 month break
You don't go to offseason competitions or hold training sessions?
How about if they define success as Inspiring and Recognizing Science and Technology excellence? I'm going to bet many folks will want to line up behind that goal!
Sounds good to me! I thought that was the definition of FIRST anyway, no matter what degree of technical challenge a team attempted.
How about if they use this as a tool in competing to earn FIRST's top level of competition's top award? - The Chairman's award.
Nothing wrong with that. It'll definitely teach them science, engineering, and technology skills!
I think many FIRST FRC teams invest their time and resources across many fronts, and thast building a "best" robot is only one of them. Almost certainly this reduces their robots' abilities to play the game well (but generally makes them better FIRST teams). Are you saying that investing in any goal that distracts significantly from building and improving the robot should be discouraged?
Don't think I said that. You wouldn't consider full-autonomous as part of building and improving the robot? I would.
A fully autonomous robot-system can have an interface between itself and the rest of the game participants. For example a human sensor in the driver's station can listen to instructions given by allies and can transfer them to the cRIO part of the system. To keep the project's intent intact these instructions should probably be distilled into a relatively small set of pre-defined messages.
It appears that we have a big disconnect on what "fully-autonomous" means. My definition is more in line with the online dictionaries and Wikipedia:
A fully autonomous robot has the ability to
- Gain information about the environment.
- Work for an extended period without human intervention.
- Move either all or part of itself throughout its operating environment without human assistance.
- Avoid situations that are harmful to people, property, or itself unless those are part of its design specifications.
Not sure what your "human sensors" are (voice recognition or real people pressing buttons in the loop?) but in either case, it's still human intervention.
See previous paragraph, and notice that the folks haven't decided to try to create a fully autonomous alliance (yet). They are discussing creating a fully autonomous robot.
Which is exactly what I was pointing out. Their fully autonomous robot won't have any interaction with their team mates. Maybe that's the new definition of "co-opertition"! No problem if that's what they want, just think about all the ramifications.
Or perhaps it will be a huge asset to the program, and all teams in it, because it becomes tangible evidence that FIRST is highly successful; and the resulting robot(s) consequently become an excellent inspirational tool.
Does "highly successful" mean "uses the highest technology"? I always considered "highly successful" to mean inspiring young people to be science and technology leaders, by engaging them in exciting mentor-based programs that build science, engineering and technology skills, that inspire innovation, and that foster well-rounded life capabilities including self-confidence, communication, and leadership.
It is a fact of life that not all kids are interested in the programming side of FIRST. Nothing wrong with that, and nothing wrong with the programmers doing everything they can. Some kids are more interested in the mechanical aspects, some are more interested in the robotic competition aspects. Doesn't FRC still stand for FIRST ROBOTICS COMPETITION? Nothing wrong with a team sacrificing their competitiveness to demonstrate their programming skills, just make sure that's what the team wants. My personal viewpoint is that sacrificing everything else about FIRST to emphasize a team's programming skills may not be in the best interests of FIRST or the rest of the team.
gvarndell
02-04-2010, 13:16
A fully autonomous robot has the ability to
- Gain information about the environment.
- Work for an extended period without human intervention.
- Move either all or part of itself throughout its operating environment without human assistance.
- Avoid situations that are harmful to people, property, or itself unless those are part of its design specifications.
When the operating environment is sufficiently constrained, and the time period is limited to 2 minutes, and the actions a robot can initiate are adequately limited, this is a much less daunting problem.
It's still big, no doubt.
But please consider that, for some of these kids, the typical team's 'make the robot so I can drive it' programming is about as challenging as putting Legos together or 'paint by numbers'.
Personally, I only hope this initiative isn't a flash in the pan.
ideasrule
02-04-2010, 14:42
For people who are interested in this, take a look at the autonomous programs on Spirit, Opportunity, and MSL. They have 20, 20, and 200 MHz processors respectively, running vxWorks, the same operating system that's on the cRIO. They also operate in a much simpler environment than FRC. These robots give a good idea of what vision processing is capable of.
davidthefat
02-04-2010, 16:28
Wish me luck, I'm gonna order the ATmega640, get all the parts from Radio Shack, and put it all together during spring break... Hope I don't fry anything.
Rion Atkinson
02-04-2010, 17:47
For people who are interested in this, take a look at the autonomous programs on Spirit, Opportunity, and MSL. They have 20, 20, and 200 MHz processors respectively, running vxWorks, the same operating system that's on the cRIO. They also operate in a much simpler environment than FRC. These robots give a good idea of what vision processing is capable of.
Those would be the Mars rovers, no? I think Dave could shed some light on that subject. ;)
Radical Pi
02-04-2010, 19:46
It appears that we have a big disconnect on what "fully-autonomous" means. My definition is more in line with the online dictionaries and Wikipedia:
A fully autonomous robot has the ability to
- Gain information about the environment.
- Work for an extended period without human intervention.
- Move either all or part of itself throughout its operating environment without human assistance.
- Avoid situations that are harmful to people, property, or itself unless those are part of its design specifications.
Not sure what your "human sensors" are (voice recognition or real people pressing buttons in the loop?) but in either case, it's still human intervention
Probably not the idea of the other people wanting to participate in this, but I don't think a "fully autonomous" robot is a good target. A "human-assisted autonomous" robot (like hybrid mode?) is a more reasonable target for competition use, IMO. In my limited memory of the Mars rovers, I remember that even they weren't "fully autonomous". To compensate for the immense lag in transmissions between Earth and Mars, the human operators on Earth would send out high-level commands, and then the rover implements the commands autonomously, doing things such as avoiding rocks in its path while moving. That's exactly what would be useful in one of these competitions.
Imagine walking to the field with a control board that has one switch: score or block. These tasks are all autonomous; it's just that the human operators tell the bot whether to kick that ball or to pin that bot. I'd still call this an excellent project if that was where it went.
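That one-switch control board could be sketched roughly as below. The goal names, routine labels, and sensor flags are all hypothetical, not WPILib API; the point is that the human supplies only the high-level goal and everything under it is autonomous:

```java
// Human-assisted autonomy: the operator's single switch picks a goal;
// the robot chooses and runs the matching autonomous routine itself.
public class AssistedAuto {
    public enum Goal { SCORE, BLOCK }

    private Goal goal = Goal.SCORE;

    // Wired to the single switch on the control board.
    public void setGoal(Goal g) { goal = g; }

    // Called each control loop with the robot's own perception results;
    // returns the name of the autonomous routine to execute.
    public String step(boolean ballSeen, boolean opponentNear) {
        switch (goal) {
            case SCORE:
                return ballSeen ? "align-and-kick" : "search-for-ball";
            case BLOCK:
                return opponentNear ? "pin-opponent" : "hold-zone";
            default:
                return "idle";
        }
    }
}
```

The returned routine names would map onto whatever autonomous behaviors a team already has; the human never touches a joystick.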
theprgramerdude
02-04-2010, 20:43
I agree, the processing is a very debilitating bottleneck. We've debated some solutions such as other processors and ran into the same problems discussed above. I think the way to combat would be to find the best combination of processing and plausibility of it actually working. As stated above you could strap a computer on the robot but not only does a CUDA processor pull massive power, upwards of 400 watts if I remember correctly; and as for getting useful information out of the mother bored would be a challenge. Also probability of a mother bored not cracking is slim to none. Although I'm unfamiliar with this new camera module I think that may be the way to do it. I'm planning on researching it in depth as soon as I can.
I was aware of the power draw. I'm not saying we should strap on a few 295GTX's in SLI and go from there, but use a smaller processor which draws 100-150 watts max. A low-power dual-core CPU would simply handle off-loading tasks to the GPU and handling I/O to the memory banks and Crio. My old-model 8800 GTS would easily handle the job, and it tops out at ~120 watts.
I guess we'd also need a power supply that can survive even in extreme low-voltage situations and keep the current going to the system. As you could witness by watching the DS during a match this year, simply taxing 2 CIMs to stall drops the voltage to 8-ish volts from a full battery, more if you have other systems or a damaged battery (as I witnessed when we got down to 5, then noticed some fluid around it after the match. It was retired).
Folks - About adding more computing power to an FRC Robot - The last time I looked into them, I thought the Gumstix line offered some attractive options - Blake
http://www.gumstix.com/
http://en.wikipedia.org/wiki/Gumstix
davidthefat
02-04-2010, 21:41
Folks - About adding more computing power to an FRC Robot - The last time I looked into them, I thought the Gumstix line offered some attractive options - Blake
http://www.gumstix.com/
http://en.wikipedia.org/wiki/Gumstix
Currently attempting to make a board with a PIC controller; my mentor's gonna help me decide on which one, though.
davidthefat
03-04-2010, 02:02
Oh $@#$@#$@#$@#, costing me about $100 out of my mom's pocket for all this stuff... $@#$@#$@#$@# it can get expensive real fast
Greg McKaskle
03-04-2010, 09:16
Cost is one of the reasons why mentors have been encouraging you to consider simulation or emulation. SW development doesn't have to wait for HW completion, and in the real world it typically doesn't.
Greg McKaskle
demosthenes2k8
03-04-2010, 12:16
Although I haven't read the full thread, I would like to say that I think it's feasible, and would love to attempt this over the summer.
We had a concept like this early on, but it was scrapped due to hardware problems: half the sensors it needed didn't get mounted. It definitely wasn't as complete a concept as this, but it should be possible with Chopshop's current system. We still have the code saved in svn and hg, so we can get it back. The hardest part would probably be using the camera to see what's going on...
AustinSchuh
03-04-2010, 17:36
Folks - About adding more computing power to an FRC Robot - The last time I looked into them, I thought the Gumstix line offered some attractive options - Blake
The BeagleBoard is another comparable board that costs about $149. It's a lot cheaper than a Gumstix, though it isn't as small. And it uses the same CPU.
BeagleBoard (http://www.beagleboard.org/)
The BeagleBoard and the Gumstix are both running an OMAP3 chip, which has an ARM Cortex-A8 core in it, clocked between 500 MHz and 600 MHz. CPUs with Cortex-A9 cores are coming out shortly and look to be very high performers; they are clocked at about 1 GHz and have two cores. I'm personally waiting until I can get my hands on a Cortex-A9 before I invest in any hardware for a similar project.
For those of you interested in learning how to do some AI, I recommend looking over the CS188 lecture slides from Berkeley. http://inst.eecs.berkeley.edu/~cs188/fa09/announcements.html I took the class a semester ago, and learned a lot.
Radical Pi
03-04-2010, 18:01
The BeagleBoard could be a good option. Load a small distro of Linux on it, and then communicate over RS-232. Personally I couldn't do it since my mentor would kill me for breaking the CAN bus :P (do RS-232 multiplexers exist, and more importantly, would they be FRC-legal?)
EDIT: Actually, a board with an ethernet port would probably be easier to use. If we put a switch on eth2, we could plug the camera and extra board in and have the alternate board read directly from the camera and do the processing there. It would save a lot of overhead for many teams and bypass any rules issues, since ethernet switches are perfectly legal (2CAN)
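One way to sketch the cRIO side of such an Ethernet link: the coprocessor does the heavy vision work and streams back one small plain-text line per processed frame, which the cRIO only has to parse. The wire format here ("TARGET x y") and the class name are invented for illustration:

```java
// cRIO-side parser for a hypothetical Ethernet vision coprocessor that
// sends one line per frame, e.g. "TARGET 1.25 -0.40" (target offset in
// some agreed-upon units). In a real setup the lines would arrive over
// a TCP socket via a BufferedReader; only the parsing is shown here.
public class VisionLink {
    // Returns {x, y} for a well-formed frame, or null for a malformed one
    // (the caller would just keep its last good value).
    public static double[] parseTarget(String line) {
        String[] parts = line.trim().split("\\s+");
        if (parts.length != 3 || !parts[0].equals("TARGET")) {
            return null;
        }
        try {
            return new double[] { Double.parseDouble(parts[1]),
                                  Double.parseDouble(parts[2]) };
        } catch (NumberFormatException e) {
            return null;  // garbage numbers: treat as a dropped frame
        }
    }
}
```

Keeping the protocol this dumb means the cRIO spends microseconds per frame, and all the expensive pixel work stays on the offboard board.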
The Lucas
03-04-2010, 18:37
You can also use SPI, I2C, and CAN for communication, provided you follow the rules.
Kevin Watson
04-04-2010, 00:43
For people who are interested in this, take a look at the autonomous programs on Spirit, Opportunity, and MSL. They have 20, 20, and 200 MHz processors respectively, running vxWorks, the same operating system that's on the cRIO. They also operate in a much simpler environment than FRC. These robots give a good idea of what vision processing is capable of.
Anyone care to guess how far Spirit or Opportunity can drive fully autonomously during the few minutes a match lasts?
-Kevin
Anyone care to guess how far Spirit or Opportunity can drive fully autonomously during the few minutes a match lasts?
-Kevin
I would say about 20-ish feet, given the overall mechanical speed and the environment it's in. Plus it needs to be REALLY sure it's not going to get hurt wherever it's heading. Then again, I would imagine they don't want it going too far without asking for another command because of safety issues.
Chris is me
04-04-2010, 01:11
I would say about 20-ish feet, given the overall mechanical speed and the environment it's in. Plus it needs to be REALLY sure it's not going to get hurt wherever it's heading. Then again, I would imagine they don't want it going too far without asking for another command because of safety issues.
The limiting factor probably isn't mechanical speed...
Burmeister #279
04-04-2010, 01:17
I actually laughed out loud at the first post :D I would love to do it, but our head mentor has zero faith in the programming for who knows what reason. I was told that once she decided the robot was performing fine, I wasn't allowed to touch it again until after competition. God himself couldn't convince her to allow the programmers to fully automate our robot :mad:
Kevin Watson
04-04-2010, 01:33
I would say about 20ish feet.
Without any vision-based autonomy running, the rovers might be able to move about nine feet. With vision-based hazard avoidance and path planning turned on, each rover would be hard pressed to move 20 cm, or about 8 inches, in that same amount of time. This is a very, very tough problem to solve without some very serious hardware and software. It might be more rewarding to solve smaller problems before tackling something like this. One example might be a vision-based ball detector that could properly align the 'bot before kicking.
-Kevin
...With vision-based hazard avoidance and path planning turned-on, each rover would be hard pressed to move 20 cm, or about 8 inches, in that same amount of time...
-Kevin
You just blew my mind. :eek:
ideasrule
04-04-2010, 03:10
Without any vision-based autonomy running, the rovers might be able to move about nine feet. With vision-based hazard avoidance and path planning turned-on, each rover would be hard pressed to move 20 cm, or about 8 inches, in that same amount of time.
The MERs have a top speed (mechanically limited) of 5 cm/s, but they stop for 10 seconds every 20 seconds for the vision processing to complete. Average speed is 1 cm/s. That's 120 cm in 2 minutes, which is really bad, but more than 20 cm. This is one of the strongest reasons for human space exploration: a four-year-old child on Mars' surface could accomplish far more science and cover far more ground in ten minutes than the rovers could in a day.
This is a very, very tough problem to solve without some very serious hardware and software. It might be more rewarding to solve smaller problems before tackling something like this. One example might be a vision-based ball detector that could properly align the 'bot before kicking.
-Kevin
Note that the hardware on board the MERs is extremely limited compared to computers on Earth, even by 2004 standards, because MER hardware must be radiation-hardened. Processor speed is 20 MHz, which is 20 times slower than the cRIO's.
gvarndell
04-04-2010, 11:27
Without any vision-based autonomy running, the rovers might be able to move about nine feet. With vision-based hazard avoidance and path planning turned-on, each rover would be hard pressed to move 20 cm, or about 8 inches, in that same amount of time. This is a very, very tough problem to solve without some very serious hardware and software. It might be more rewarding to solve smaller problems before tackling something like this. One example might be a vision-based ball detector that could properly align the 'bot before kicking.
-Kevin
This is the essence of what I've been trying to get across in several previous posts to this thread -- the point just seems to land with a thud.
If you had realtime vision that could identify and track numerous objects out to 100 feet and down to roughly 1 cubic foot in size, and the information about those objects was streaming into your robot control system via a simple socket connection, *** what would you do with the data??? ***.
You can fairly easily synthesize a virtual playing field, populate it with virtual allies, opponents, fixed obstacles, and game pieces.
You can animate this virtual world and provide well defined information about what's going on in it to your autonomy functions.
You can then start *really* exploring the difficulties of implementing autonomous behavior in software.
Or, you can pipe-dream about strapping more computers onto your robot in hopes of solving.... what?
davidthefat
04-04-2010, 11:47
Without any vision-based autonomy running, the rovers might be able to move about nine feet. With vision-based hazard avoidance and path planning turned-on, each rover would be hard pressed to move 20 cm, or about 8 inches, in that same amount of time. This is a very, very tough problem to solve without some very serious hardware and software. It might be more rewarding to solve smaller problems before tackling something like this. One example might be a vision-based ball detector that could properly align the 'bot before kicking.
-Kevin
:ahh: It's La Canada... Well, I think CV will be able to pull it off before you guys (no hate). BTW, good job on this year's football team; you guys actually rocked, it was a close game.
BTW, the way I am thinking of doing it is obviously smaller functions all run in succession,
like:
/*
 * This is rough code, just made up off the top of my head.
 * I am aiming to keep the main .cpp file down to minimal code.
 * All the logic would live in the individual functions to allow reuse.
 * That means you can use them in teleop mode too, like what you said.
 */
while (isAuto)
{
    CheckEnviron();        // refresh sensor readings
    if (CheckObject())     // spotted a game piece?
    {
        GetObject();       // drive to and acquire it
        DoSomething();     // e.g. line up and kick
    }
    else
    {
        Move();            // keep searching the field
    }
}
I would not issue motion commands and do vision processing on the same thread of execution. Vision processing takes a long time and usually you would like to issue/modify motion commands on very regular small intervals.
If you want a model to base code off of, check out Tekkotsu
http://www.tekkotsu.org/
http://www.tekkotsu.org/dox/
davidthefat
04-04-2010, 12:03
I would not issue motion commands and do vision processing on the same thread of execution. Vision processing takes a long time and usually you would like to issue/modify motion commands on very regular small intervals.
If you want a model to base code off of, check out Tekkotsu
http://www.tekkotsu.org/
http://www.tekkotsu.org/dox/
I was planning on doing the vision stuff on a board that I will make (as soon as I order the chip...) with an ATmega1284P chip (PDIP package), so it takes the load off the cRIO.
Apparently the ATmega1284P chip has industrial-use status... It must be hard core.
The Lucas
04-04-2010, 12:49
If any of the control system higher ups are still monitoring this thread: Are we going to be allowed to send packets to alliance robots next year?
When the control system was introduced 2 years ago there was talk about communication between alliance robots, and new features of the system have been rolled out every year (CAN and the vision feed are new this year). Is alliance communication still in the plans?
If any of the control system higher ups are still monitoring this thread: Are we going to be allowed to send packets to alliance robots next year?
Is it possible? Yes. Is it allowed? Not yet.
I spoke with Brad Miller (WPILib) about this a couple weeks ago regarding robot-to-robot communication. From his response it didn't seem like something FIRST was considering right now. Seeing as the ZigBee module is a legal device except for the price tag (the cheapest one is $700), I think the best way to get something like this is for FIRST to exempt the ZigBee module from the price restriction, or even better, register it as a KOP item on the bill of materials.
http://shop.sea-gmbh.com/crio-produkte/module/funktechnologie/crio-zigbee-modul-10.html
It would also be nice if they used a localization system like the StarGazer for target recognition. I think it would be much easier on the cRIO to use this sensor rather than the camera. It could also possibly be used for robot identification.
http://www.robotshop.com/hagisonic-stargazer-rs-localization-system-1.html
Kevin Watson
04-04-2010, 14:02
...but they stop for 10 seconds every 20 seconds for the vision processing to complete.
I'm not sure where your numbers are from, but each rover drives 10 cm and then stops while calculating the next arc to drive. The stereo image to range map calculations alone take about 50 seconds.
-Kevin
AustinSchuh
04-04-2010, 15:49
I was planning on doing the vision stuff on a board that I will make (as soon as I order the chip...)with ATmega1284p chip (pdip package) so it takes the load off the crio
apparently ATmega1284p chip is industrial use status... It must be hard core
I would be very impressed if you were able to do much vision with the ATmega. It's a 20 MIPS CPU[1]. The CPU in the cRIO is a Freescale MPC5200 that runs at 750 MIPS at 400 MHz, and includes a floating point unit. So the ATmega adds only about 3% of the CPU power of the cRIO's CPU. And if you need to do any floating point on the ATmega, the cRIO would outperform it by an even larger margin doing the same job. And none of that even takes into account the extra hardware that some CPUs need just to get the picture into memory to operate on.
The ATmega is typically used in industry when someone needs a cheap CPU that has a fair amount of performance and uses very little power.
I'm not saying that the ATmega is a bad CPU. I'm just saying that it isn't a very good fit for the job you are trying to offload to it. If you do get an ATmega and start to program it, you will learn a lot about how embedded systems are put together. And while that may not be the lesson you are looking for, it's definitely pretty cool to learn how that kind of stuff works.
Something like the Beagleboard or Gumstix that were posted earlier are quite a bit faster. They both use OMAP3 series CPUs. The Beagleboard clocks in at 600 MHz and 1200 MIPS. The Gumstix uses the same CPU. Both of those also have DSPs, which let you do even more computations. The DSP itself clocks in at 500 MHz and 4000 MIPS, and that's on top of the 1200 MIPS from the CPU.
[1] MIPS stands for million instructions per second. Since each instruction on different architectures will do different amounts of work, it's still comparing apples to oranges, but it's a lot better than comparing MHz due to all the fun stuff you can do with superscalar CPUs.
Kevin Watson
04-04-2010, 17:15
Something like the Beagleboard or Gumstix that were posted earlier are quite a bit faster. They both use OMAP3 series CPUs. The Beagleboard clocks in at 600 MHz and 1200 MIPS. The Gumstix uses the same CPU. Both of those also have DSPs, which let you do even more computations. The DSP itself clocks in at 500 MHz and 4000 MIPS, and that's on top of the 1200 MIPS from the CPU.
If I were to attack this problem, I'd start with an Intel Atom 330 with an NVIDIA ION GPU and then start reading up on SIMD, CUDA, and OpenCL. Assuming stereo vision is used, anyone who wants to attempt this would need to read up on and fully comprehend algorithms like SLAM (Simultaneous Localization and Mapping) for localization, scale- and rotation-invariant model matching for identification and tracking of moving objects, and D* (pronounced "dee star") for path planning. I'd be happy to provide some guidance to teams seriously interested in implementing any kind of autonomy.
-Kevin
ICRA Robotic Planetary Contingency Challenge...
The goal is to program and build a robot for an unknown task that you receive at the event.
May be another goal for teams considering a fully autonomous robot.
http://modlabupenn.org/icra/icra-2008/
Robototes2412
04-04-2010, 22:14
Actually, I wrote a method that would manually (no gyro) align a robot to a target using a mecanum drive. Sadly, I lost it. :(
EDIT:
What it did was get the target radius (size), target x-pos, and target y-pos. It then turned the robot to match up and strafed accordingly. It was slow, it was ugly, but it worked. Until I lost it, that is.
I've been thinking about full-autonomous since 2008.
My approach is through high-level functions, field awareness, and inter-robot communication.
Our team has done some work on all three of those, however, it all tends to get bogged down with lack of testing.
For high-level functions, we have a forward(ft), turn(deg), strafe(ft), and kick(ms).
For field-awareness, I have an algorithm for detecting the soccer balls on the green carpet when in view of the camera.
For inter-robot communication, I was planning on using an ultrasonic signal generated by the cRIO from the digital sidecar, but I found I could only generate a 3 kHz signal. I may have to resort to using more hardware (making it more expensive for a sizable number of teams to implement). Modulated IR is still an option.
I'll stand up on my soapbox for a moment to mention a couple of ways that FIRST could further encourage autonomous:
make autonomous 30s, and put it at the END of the match
OR
make autonomous a necessary part of the match (e.g., make autonomous/teleop determined by where the robot is on the field, so that in certain essential parts of the field, robots must be in autonomous)
OR
encourage a method of communication BETWEEN robots, so that they can be more field-aware
OR
use RFID so the robots can tell when they are in a certain region
OR
broadcast beacons (modulated IR?) that the robots can triangulate off of
OR
make the game piece stand out and be easily acquired by a camera or other common sensor
OR
provide an objective in autonomous that can ONLY be completed in autonomous. (For example, something that allows the robots to complete a finale objective)
sircedric4
05-04-2010, 14:52
So maybe I missed it, but it seems that a VAST majority of the teams have a hard enough time getting their robot to just move forward and do one small thing during autonomous. How exactly would you do this challenge and still keep it rookie friendly?
Are we talking about seriously canned code here with simple GUI interfaces? For example, I had a toy when I was a kid called a Big Trak where you programmed what you wanted it to do, then activated it and watched it. It only had certain moves it could do and it was programmed from a small keypad. The point is that it had canned algorithms that required nothing but parameters. Unless someone comes up with code this simple I just can't see FIRST going full autonomous.
Now if the powerhouse teams are bored with their standard coding, maybe there can be an advanced FIRST-like event at the college level that could do this kind of stuff. I have a hard enough time getting any of my students to even program teleop, and that's easy compared to the 15-second autonomous mode. Now tell them the robot can't even compete unless they put a Mars-rover amount of code into it, and I won't have any students left.
davidthefat
05-04-2010, 15:18
So maybe I missed it, but it seems that a VAST majority of the teams have a hard enough time getting their robot to just move forward and do one small thing during autonomous. How exactly would you do this challenge and still keep it rookie friendly?
Are we talking about seriously canned code here with simple GUI interfaces. For example I had a toy when I was a kid called a Big Track where you programmed what you wanted it to do, then activated it and watched it. It only had certain moves it could do and it was programmed from a small keypad. The point is that it had canned algorithms that required nothing but parameters. Unless someone comes up with code this simple I just can't see FIRST going full autonomous.
Now if the powerhouse teams are bored with their standard coding, maybe there can be an advanced FIRST like event at the college level that could do this kind of stuff. I have a hard enough time getting any of my students to even program teleop and that's easy compared to the 15 second autonomous mode. Now tell them the robot can't even compete unless they put a mars rover amount of code into it and I won't have any students left.
:ahh: Now really, is that true? I mean if a programmer has any idea what he is doing, he can get the drive working just by looking at the API... I think new programmers don't look at APIs but more toward step-by-step applications of it, like a tutorial...
Actually, I believe you; there is this one "programmer" who has no idea what he is doing, and all he knows is if statements... Doesn't even know what a variable really is...
So maybe I missed it, but it seems that a VAST majority of the teams have a hard enough time getting their robot to just move forward and do one small thing during autonomous. How exactly would you do this challenge and still keep it rookie friendly?
This is actually the goal of my ADK, a collaborative initiative to build an easy-to-use autonomous framework. What I realized this year is that dozens of teams programmed a drive-forward-and-kick. Why not just have one team do it and share it? The goal of the ADK is to give you cookie-cutter templates to create code that can run the same autonomous as another team.
we are looking for more teams to sign on (so far we have 4)
there are repositories for both java and c++, but I need a lead developer for the c++
here is a link to the project:
http://firstforge.wpi.edu/sf/projects/bobotics
sircedric4
05-04-2010, 15:46
:ahh: Now really is that true? I mean if a programmer has any idea what he is doing, he can get the drive working just by looking at the API... I think new programmers don't look at APIs but more towards step by step applications of it like a tutorial...
Actually I believe you, there is this one "programmer" that has no idea what he is doing, and all he knows is if statements... Doesnt even know what a variable is really...
Well, there is the simple matter that with most rookie teams I have seen, and it is still true for our second-year team, the teams are still fairly small. It doesn't surprise me that there is little interest in programming when you only have 15 people on a team. From our area, they are mostly interested in the actual mechanical stuff, which is perfectly fine.
We do have a couple this year that were interested in programming, but only when it was at the school meetings and only while you were standing over their shoulder. :-) They had no motivation to look at it on their own time even with the links and training presentations provided to them. The problem is that it is very difficult to give the necessary training in the programming when you only have a 6-week build season. There's just not enough time to be training and doing at the same time with small teams, unless the students are self-motivating and interested. (I do look forward to one of these mythical students that the people recommending this challenge have; it'll make my workload easier, that's for sure.)
I think FIRST recognizes this, which is why they went with the WPI libraries. The WPI libraries have been life savers in that they have let us attempt some cool things we wouldn't have the ability to do otherwise. (Now if they could just get some good documentation and a decent "canned" example for every library function, it would be perfect.) And even with all these programming helpers, there are still half the teams at the regionals I have attended that are lucky to have a working robot. Remember that not all mentors are engineers or programming professionals; they are teachers and parents that might not have experienced this stuff before.
So unless you are talking about having some heavily canned autonomous modes that can be mixed and matched, I just don't see how FIRST could go with this as an entire game's goals. That is of course if they want to continue to let new teams play that don't have vast resources. Now if the teams that signed up are just setting challenges for themselves because they already have got the basics zipped up, well then that's a good thing and I look forward to what they come up with.
I am one that is in favor of raising the less capable up instead of bringing the high flyers down, but it has to be done in fairly small steps. Like some Star Trek Prime Directive, we can't give pre-industrial age men nuclear propulsion and expect them to know how to take advantage of it. Not without the education that goes with it. I really do think FIRST could do with a "varsity" and "junior varsity" level. And I feel that the level should be based on the resources a team has, the size of the team, and the mentor skills available to that team, but I don't know how you would split the teams up correctly.
I really do think FIRST could do with a "varsity" and "junior varsity" level. And I feel that the level should be based on the resources a team has, the size of the team, and the mentor skills available to that team, but I don't know how you would split the teams up correctly.
FIRST has a history of teams' wins, losses, and ties. I think it would be interesting, rather than having a JV/Varsity division, to have teams of robots that you compete with throughout the comp. You could easily arrange these teams based on records. It would also bring an aspect of long-distance collaboration, which is critical in business and even certain college projects.
Don't assume that many older teams have more programming expertise than the rookies. This will be a struggle for the whole FIRST community. However, we have some things to help us here.
1. A common platform spec, to make generic code independent of the robot.
http://spreadsheets.google.com/ccc?key=0AgYDudKXpgOzcFI5aW5EUVhnVVUxNUVuQTdZLWNIWXc&hl=en
2. Canned moves, as you were saying. They can be strung together in a sequence with the error terminals. An alternative is wiring an enum array into a for loop, where a case structure executes the correct move (determined by the value of the enum).
3. Simple GUI interfaces: LabVIEW. Next year we will have LabVIEW 2009, with the addition of some nice features like snippets (http://zone.ni.com/devzone/cda/tut/p/id/9330).
4. Collaboration, communication, sharing of resources. I do LabVIEW and control system workshops in my area. I try to collaborate with people to make sure the community progresses as a whole.
The point is to start programmers out from a very high-level view in autonomous: I want the robot to do this, then this, then this, then this. If the sonar value is less than 20 in, then turn right. Otherwise, keep following the line. And so on.
Of everything on the robot, programming is the hardest to explain, and so it needs the biggest push. We're not coding machine language anymore. A well-documented VI can be understood by normal people, so long as they know that data flows along the wires, and a subVI won't execute until it gets data from all its wired inputs. Block diagrams are a pretty clear way of thinking.
I understand your concern. It's a little scary to think of 6 150lb robots moving around a field without direct human control, especially with little testing.
FIRST sometimes challenges us in ways that we think are unnecessary, and there are some challenges we'd like in FIRST that aren't yet present. One of the major selling points of this control system is that it makes it more feasible to do a comprehensive autonomous. Autonomous is underrepresented in FRC; in FLL, it's virtually all autonomous, and FTC gets 30s of autonomous per match. I believe the reason FIRST hasn't made autonomous a larger portion of the match is not that they think we're not capable, but that they want to draw in spectators, and make the game interesting. They're struggling to balance between popularity and technical prowess.
I certainly won't stop them from encouraging new teams, but I'm going to push for technical proficiency in all FIRST teams.
The biggest one is how to solve a problem. It's a process. You must single out the problem, clearly define a solution, and test that solution.
Similarly, with code, you must define what you want to accomplish and how you are going to accomplish it before you try to code the whole thing. Take a look at my Software Development Process.
sircedric4, I do think one of the problems you're running into is people not understanding the fun, importance, or awesomeness of programming. Often it can be hard to convey, because people assume it's simply geeky. People have streams of thought, and so do robots. If they can describe each decision the robot will make, then that's half the programming. It's like giving someone a set of instructions, but that person only knows what they've been told, and what they are told by the sensors you put on the robot.
sircedric4
05-04-2010, 16:46
3. Simple GUI interfaces: LabVIEW. Next year we will have LabVIEW 2009, with the addition of some nice features like snippets (http://zone.ni.com/devzone/cda/tut/p/id/9330).
I will check this link out, but as for simple GUI interfaces, I have never found LabVIEW code easy to read. Maybe it's because I am used to text programming and think in pseudocode to begin with, but I have always found it easier to follow steps right down a page. I look at LabVIEW flowcharts and my brain just freezes. There are just too many icons that don't mean anything to me, whereas with text code I can at least recognize a word.
This is a problem I recognize, especially with people moving toward GUI stuff more and more every day, that I am trying to remedy, but it is hard to "grok" once you've worn other coding tracks into your brain. It's why I also have a hard time getting my head around object-oriented programming: when I learned, everything was straight down the page and easy to follow. I mean, I learned on FORTRAN and still use it and Visual Basic day to day. Most of the new stuff I do is written for VBA because everyone has Excel.
I'm just saying that just because some people can follow flowcharts easier doesn't mean everyone can. People all have different thought methods, and I like that FIRST maintains all 3 code bases to support whatever your thought model is. And there is something that just gripes my open-source heart about LabVIEW being a third-party proprietary language when C++ and Java have free development environments. I think that whatever coding would come out of an all-autonomous task would need to be usable by both types of coders, graphical and text based.
sircedric4, I do think one of the problems you're running into is people not understanding the fun, importantness, or awesomeness of programming. Often it can be hard to convey, because people assume it's simply geeky. People have streams of thought, and so do robots. If they can describe each decision the robot will make, then that's half the programming. It's like giving someone a set of instructions, but that person only knows what it's been told, and what it is told by the sensors you put on the robot.
You are right here, and it's something I try to convey to my students, but getting the students to make a decision isn't always easy. I don't know if it's reverse peer pressure, where those that are really into it are afraid to shine and speak up, or what. High school is a pressure cooker, as you can see from the difference in maturity level between a freshman and a senior, so it can be hard to get new information into their already overworked brains. :-) It's a lot of fun seeing the students change, though, as they become more aware of what they are capable of. I am hoping, since I finally have some younger students and not just all seniors on their way out, that some will grow into liking programming once they've been exposed to it and how important it is.
I imagine each team has to fight with attrition and changing high school environments and students, so once again I can't see a way to do full autonomous without some locked up, easy to understand, and pre-canned repository. We all live in a real world environment, with different resources, and I think from looking at the Regionals, this is a huge undertaking. Worthy for the teams who can do it, but as a game design goal I hesitate to try it.
synth3tk
05-04-2010, 17:51
Actually, I believe you; there is this one "programmer" who has no idea what he is doing, and all he knows is if statements... Doesn't even know what a variable really is...
Did you help him learn?
ideasrule
05-04-2010, 19:30
we are looking for more teams to sign on (so far we have 4)
there are repositories for both java and c++, but I need a lead developer for the c++
here is a link to the project:
http://firstforge.wpi.edu/sf/projects/bobotics
Given the complexity of the undertaking, I think it's better to focus on one project than to have two projects going in different directions. If some people here only know one language, I can help convert C++ code into Java code or vice versa, so that's not a problem.
ideasrule
05-04-2010, 19:42
So maybe I missed it, but it seems that a VAST majority of the teams have a hard enough time getting their robot to just move forward and do one small thing during autonomous. How exactly would you do this challenge and still keep it rookie friendly?
I know that rookie teams are fairly small, but it only takes one competent programmer to write all the code in teleop and (at least this year) write a successful autonomous. It takes much more than one person to machine the parts and assemble the robot. If a team can't get one competent programmer, there's something wrong with the recruitment efforts.
Currently, teams with great mechanical skills or great strategies dominate the competition. Why shouldn't teams with great programmers be rewarded as well?
davidthefat
05-04-2010, 19:42
Did you help him learn?
Actually he thinks he knows everything and he doesn't listen; he just says "I know," and I just don't like him personally. I code something and he's right next to me watching, and when I get the code to work right and the team leader congratulates me, he says "WE did it, it was teamwork." Obviously not... But he is older than me, and it's alright if he thinks he can do it...
sircedric4
05-04-2010, 19:58
I know that rookie teams are fairly small, but it only takes one competent programmer to write all the code in teleop and (at least this year) write a successful autonomous. It takes much more than one person to machine the parts and assemble the robot. If a team can't get one competent programmer, there's something wrong with the recruitment efforts.
Currently, teams with great mechanical skills or great strategies dominate the competition. Why shouldn't teams with great programmers be rewarded as well?
Uh, I agree with you, great programming teams should be and are quite obviously rewarded just like good mechanical bots. But to continue your analogy, teams can still build the basic kitbot and compete, all I am saying is that you need that same level of head start if you are going to tackle this full auto as a game challenge. This goes back to the needing canned algorithms thing.
And as for recruitment efforts and finding competent programmers, let's remember that not everyone has the resources, contacts, or interest, and recognize that not every team can get a competent programmer all the time. I live in the real world and there isn't enough time in the day sometimes. As it is, our team does have one competent programmer, and it's the mentor you're chatting with right here. :-) I can name 6 teams in my immediate area that don't have the luxury of a dedicated programmer, and they get help where they can. I just think when setting up game designs, the GDC does remember to give a little consideration to smaller teams (which I would be willing to bet is the vast majority of teams, just not the powerhouse known teams), and as such I don't expect to see fully auto as a requirement anytime soon.
AustinSchuh
05-04-2010, 20:10
I do look forward to one of these mythical students that the people recommending this challenge have, it'll make my workload easier that's for sure.
You and every other team around... 971 has been blessed to get one about every 4 years. Now that I look back and think about it, our current programmer at any point in time seems to be the one that finds the next programmer and trains them. If anyone has any tips on how to find self-motivated and talented programmers, I'm willing to bet that there are a lot of teams who would be interested in listening. Come to think of it, 111 might have some good ideas.
ideasrule
05-04-2010, 20:31
Uh, I agree with you: great programming teams should be, and quite obviously are, rewarded just like good mechanical bots. But to continue your analogy, teams can still build the basic kitbot and compete; all I am saying is that you need that same level of head start if you are going to tackle full auto as a game challenge. This goes back to the need for canned algorithms.
I don't think anybody here is saying that FIRST should require all matches to be fully autonomous. Some people, like me, think that autonomous mode should be made at least as important to the match as teleop. Right now, you hardly need any autonomous at all; if your robot is mechanically well-built and well-designed, you're going to win the match.
We are looking for more teams to sign on (so far we have 4).
There are repositories for both Java and C++, but I need a lead developer for the C++ version.
Here is a link to the project:
http://firstforge.wpi.edu/sf/projects/bobotics
I'm curious who the four folks/teams are, and how you are organizing your assault on this mountain. Can you post a few of the project schedule milestones and high-level software architecture outlines? Those would be good fodder for this thread.
Blake
sircedric4
05-04-2010, 22:00
I don't think anybody here is saying that FIRST should require all matches to be fully autonomous. Some people, like me, think that autonomous mode should be made at least as important to the match as teleop. Right now, you hardly need any autonomous at all; if your robot is mechanically well-built and well-designed, you're going to win the match.
Well, Aim High was basically the game you are looking for. I didn't get to Atlanta that year, since it was my rookie year as a mentor and I was in way over my head, but a decent autonomous pretty much won the game for you, at least at the regional level.
Now maybe at the Nationals the auto mode wasn't such a game winner, but I know that the bonus for scoring the most in autonomous was enough to equal the points scored during the match. I actually liked that game because the autonomous mode didn't have to be difficult to be worth a good chunk of points. I mean, we competed well at our regional and all we did was turn on our shooter. It was aimed by hand and good for 3-4 balls, which usually won us our matches.
But the thing is, even when autonomous was worth so many points that year, there were still only 2-3 robots that did it at the Houston Regional. The balance that FIRST has been trying to figure out since then is how to make auto mode worth enough to pursue, but not enough to leave those without programmers hopelessly behind. They went from auto being useless in Rack 'n' Roll, to the hybrid mode, to last year's rule where you had to at least move or have your team sunk by the human players. I think that this year is a good balance.
You have to do some auto unless you want to contest those balls with your opponent in their own home zone, but your robot can score as well. Now, if you want to use that same model and extend the time, or put it towards a bonus round, then I am all for that. I liked the balance of auto consequences this year, and that it doesn't leave those that can't score too far behind to catch up.
MJ Miller
05-04-2010, 23:04
I'm a mentor for Team 1421 and Computational Scientist who would be happy to help with this project. I look forward to hearing more about it.
MJ
Given the complexity of the undertaking, I think it's better to focus on one project than to have two projects going in different directions. If some people here only know one language, I can help convert C++ code into Java code or vice versa, so that's not a problem.
The initial goals of the project require both a Java version and a C++ version. We are trying to develop a code base for easy programming of Mechanisms (Drives, Arms, Shooters...), Maneuvers, and an Event System. The architecture should be usable by teams regardless of what language they are using. So don't look at it as two projects; look at it as one architecture in two languages. (i.e. "Hello" and "Bonjour" accomplish the same goal, but by being able to say both you increase the number of people you can say hi to.)
I'm curious who the four folks/teams are, and how you are organizing your assault on this mountain - Can you post a few of the project schedule milestones and high-level software architecture outlines? Those would be good fodder for this thread.
Blake
I am a mentor for team 319, and a senior of CS @ WPI.
History of 319 autonomous successes: pulled the 10 pt ball onto the field in '05, shot 9/10 in '06, knocked a ball in '08, shot 2/3 in '10. The teams have only recently signed on to the FIRSTForge project, so not much has been organized yet, but it will be soon (I have a competition in 2 weeks for $1200 which I am trying to win to start the first FRC team in Haiti).
The goals of the project are to first get the basic architecture down, so you can implement a cookie-cutter program in one day.
The second phase would be to implement a more intelligent system (possibly working off some form of decision tree) which could play a match of Breakaway offensively.
The third phase would be to implement some form of localization and communication (Zigbee module) so robots could communicate with one another.
See above for the basic architecture idea; a more complete description is on the FIRSTForge page. The idea is that maneuvers are the same for every robot, and so are basic mechanisms (a drive drives, an arm raises and lowers, a shooter cocks and shoots...). By using maneuvers that can pass, fail, and time out (as decided by the mechanisms), you can create a state machine of maneuvers which should work on any mechanism-based system. All the programmer has to define is how the mechanism actually executes a given task and what it considers a pass or fail to be. Alternatively, you can do it all on timers through the timeouts, and then the mechanism just runs an operation until it receives another.
The importance is in a simple but extendable architecture.
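To make the idea concrete, here is a minimal Java sketch of that maneuver/mechanism split. All of the names (Mechanism, Maneuver, Outcome) are illustrative only, not the actual ADK interfaces:

```java
// Illustrative sketch of the maneuver/mechanism architecture described
// above; these names are hypothetical, not the real ADK API.
enum Outcome { RUNNING, PASS, FAIL, TIMEOUT }

// Team-specific: how a mechanism executes its task and what pass/fail mean.
interface Mechanism {
    void execute();
    Outcome status(); // PASS, FAIL, or RUNNING, as decided by the mechanism
}

// Generic: wraps any Mechanism with a timeout, so a maneuver always
// terminates even if the mechanism never reports PASS or FAIL.
class Maneuver {
    private final Mechanism mechanism;
    private final long timeoutMs;
    private long elapsedMs = 0;

    Maneuver(Mechanism mechanism, long timeoutMs) {
        this.mechanism = mechanism;
        this.timeoutMs = timeoutMs;
    }

    // Called once per control loop; returns RUNNING until done.
    Outcome step(long dtMs) {
        elapsedMs += dtMs;
        if (elapsedMs > timeoutMs) return Outcome.TIMEOUT;
        mechanism.execute();
        return mechanism.status();
    }
}
```

Chaining maneuvers into a state machine is then just a matter of mapping each maneuver's PASS/FAIL/TIMEOUT outcome to the next maneuver to run.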
I'm a mentor for Team 1421 and Computational Scientist who would be happy to help with this project. I look forward to hearing more about it.
MJ
If you are looking to help, that would be awesome! I am so excited by the response from teams looking to work collaboratively; I feel this is one area where FIRST has been held back... (why all the secrecy among teams??). As the awards that are given out are not actually monetary, I would like to see teams work together much more. But that's beside the point...
For anyone even considering working on this project, please sign up for a FIRSTForge account and register under the ADK project; this is where all the tasks will be organized and delegated.
http://firstforge.wpi.edu/sf/projects/bobotics
cheers to all!
I can't say if my team is up for making a fully autonomous 'bot, but I'm more than happy to help in the planning and coding (LabVIEW) for a system to make an autonomous implementation easy for all teams.
You can reach me at kamocat@gmail.com.
Brandon_L
06-04-2010, 01:26
It would completely depend on the game. That, and my team might not be that happy about it. Our driver would step forward to control it and he would have no control. I can see his face now.
If I do it, it wouldn't be used at competitions, just for lolz.
I'm in.
To all of the posters who have said that ... their team won't like ..., I have a suggestion. Get permission to use one of your team's current robots to do some software work, and start learning.
You aren't asking the team to stake their next several seasons on the success or failure of creating a fully autonomous robot. You are instead scheduling some time with the robot to get some useful work done. Work that will be educational, and will contribute to both the team's and the community's body of knowledge. My guess is that the robot can be up on blocks for most of the time.
Once you are initially successful developing a few small and useful improvements (small steps first, big steps once enough small steps can be combined), put a smile on your face and let your team know that whether or not you (and teammates) can make the next-season robot work better during the autonomous periods, or assist your drivers during tele-op, is no longer an open question. The code for doing it is already done and is on-the-shelf for next season. That should be a good day.
My point is not that you need to "go rogue"; it's that you simply need to confer with the team mentors/leaders and let them know you will want to schedule some off-season time for using the robot (safely) to do some ordinary, simple code integration and testing. If it is possible for a team to say that is a bad idea, then I'll be dumbfounded.
Sound good?
Blake
The code for doing it is already done and is on-the-shelf for next season. That should be a good day.
Make sure you post any code made in past seasons or the off-season in a public repository linked from CD or FIRSTForge; otherwise it doesn't qualify as COTS and is illegal to use according to the FIRST rules.
ChaosX73
06-04-2010, 20:50
Next year, no matter the game, I challenge you to make your robot fully autonomous. That means autonomous during the tele-operated period too. Anyone up for that challenge? It would challenge your skills and dedication to the robot. That means no more "just drive up 3 feet, kick, repeat" type of coding. It would have to be a lot more thought out and would have to use real-life robot coding. It's not really a robot if it's not autonomous; it's just an over-glorified RC car if it's human-controlled. And if you are still skeptical: FIRST pretty much writes the libraries so that even a guy who picks up a programming book can code the robot in a week or less... Well, IMHO you can't learn programming from a book. Sure, you may learn the language and syntax, but you have to have experience to actually program. Programming comes with experience, and the way FIRST makes it, you get minimal experience as a programmer programming these robots. I will be announcing to my club next year that we want to try this. Just post your opinions and I will add to the list if you want to take the challenge.
Teams That Are Willing To Take The Challenge:
*Team 589 (Just Me As Of Now)
*Team 33
*Team 2503
*Team 1086
You would be a programming god if you could do this. I was the programmer for my team's autonomous, and it took me longer than the allotted six weeks, although that may be because this was my first year of official programming. Although I have thought about this concept before, it almost seems like we'd be making our own AI system, and we all know what happens then...:ahh:
davidthefat
06-04-2010, 21:23
You would be a programming god if you could do this. I was the programmer for my team's autonomous, and it took me longer than the allotted six weeks, although that may be because this was my first year of official programming. Although I have thought about this concept before, it almost seems like we'd be making our own AI system, and we all know what happens then...:ahh:
Honestly, it only took us 2 days to get the autonomous right: the day before the competition, when you have to get inspection and stuff done, and the first day of the competition... I think you overcomplicated it.
ideasrule
06-04-2010, 22:13
You would be a programming god if you could do this. I was the programmer for my team's autonomous, and it took me longer than the allotted six weeks, although that may be because this was my first year of official programming. Although I have thought about this concept before, it almost seems like we'd be making our own AI system, and we all know what happens then...:ahh:
It took us longer than 6 weeks to get it right too. The reason was that we misread the rules and thought the balls were placed randomly on the field. Once we found out that wasn't true--which was after ship date--it took only an hour of coding + testing time during our first regional to get autonomous right.
That said, it was interesting to see the robot drive to the ball based on its own vision, turn, and take a shot. I really wish we could have used it during the competition.
During the off-season this year or next year, someone should hold an all-autonomous event.
<rambling>
I wish FIRST would make a game with way more autonomous, but it would sort of wreck their "no robot left behind" approach, since for some reason there isn't enough support for beginner programmers.
</rambling>
But I digress, anyone up to host an all-autonomous off-season?
I like the "Perceive, Plan, Control" paradigm that was mentioned earlier, and I think it's time that we break the conversation up into those three topics.
Should I take the liberty of starting a thread for each of those?
The "control" discussion is almost solely about the Autonomous Development Kit.
In essence, what is the code structure for controlling mechanisms?
It needs to be able to handle both sequential and simultaneous tasks, with a variety of control parameters for each task. (For example, a ball kick must be able to be triggered by the completion of another action, by an operator input, after a time delay, or at a certain time in the match. Similarly, it must be able to be stopped by any of those.)
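One way to sketch those control parameters in code (Java here; every name below is made up for illustration) is to reduce "completion of another action, an input, a time delay, a certain time in the match" to a single boolean-valued trigger interface:

```java
// Hypothetical sketch: any start or stop condition becomes a Trigger.
interface Trigger {
    boolean active(double matchTimeSec);
}

// A task that starts when its start trigger fires and finishes when its
// stop trigger fires. Triggers can encode completion of another task,
// operator inputs, delays, or match time, so sequential and simultaneous
// tasks compose the same way.
class RobotTask {
    private final Trigger start, stop;
    private boolean running = false, done = false;

    RobotTask(Trigger start, Trigger stop) {
        this.start = start;
        this.stop = stop;
    }

    // Called once per control loop.
    void update(double matchTimeSec) {
        if (!running && !done && start.active(matchTimeSec)) running = true;
        if (running && stop.active(matchTimeSec)) { running = false; done = true; }
    }

    boolean isRunning() { return running; }
    boolean isDone() { return done; }
}
```

A kick triggered by the completion of an aiming task would then be `new RobotTask(t -> aim.isDone(), ...)`, while a time-delayed kick is `new RobotTask(t -> t >= 3.0, ...)`; the scheduler never needs to know which kind of condition it is watching.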
The "plan" discussion is the most complex of the three, as it deals with analyzing the situation, and there's a range of levels that this can be done on.
Here's a couple of examples (in first-person robot):
Where should I move next?
Is now a good time to kick, or will that robot get in the way?
Should I block or score?
Is it worth it to go over the bump?
Will I get penalized if I go into that section of the field?
The "perceive" discussion entails what sensors should be used for what purposes.
I'll list some things an autonomous 'bot might want to know:
Where am I on the field?
Where are the robots around me? (what alliance are they?)
Where are the balls around me? (on the floor, presumably)
Where are the goals? The bumps? The towers? The walls?
Have I flipped over?
What are the other robots doing? Do they need help? (Inter-robot communication)
There are also simpler things, handled in the "control" section, like "have I completed my kick" or "is my arm fully extended": usually potentiometers or limit switches used as feedback to make sure an action is completed. Unless someone's doing something exotic like using a non-contact thermometer to tell when a motor is stalled, I don't think these need to be discussed with the rest of the sensors.
As many have said, the scope is huge. I don't plan to do all of those things in "plan" and "perceive", but the first step is to consider the "how" so we can determine what is and is not feasible.
The "perceive" discussion entails what sensors should be used for what purposes.
I'll list some things an autonomous 'bot might want to know:
Where am I on the field?
Where are the robots around me? (what alliance are they?)
Where are the balls around me? (on the floor, presumably)
Where are the goals? The bumps? The towers? The walls?
Have I flipped over?
What are the other robots doing? Do they need help? (Inter-robot communication)
There are also simpler things, handled in the "control" section, like "have I completed my kick" or "is my arm fully extended": usually potentiometers or limit switches used as feedback to make sure an action is completed. Unless someone's doing something exotic like using a non-contact thermometer to tell when a motor is stalled, I don't think these need to be discussed with the rest of the sensors.
This is where the GDC may play nice again, and bring back something like the two-frequency IR beacons.
But even if they don't, there are simple ways of determining where you are on the field using encoders (assuming the wheels don't slip; that was hard last year).
Using kinematic formulas:
S = (Delta Left + Delta Right) / 2
Delta Theta = (Delta left - Delta right) / wheelbase
Theta = Theta + Delta Theta
X = X + (S * cosine ( Theta ) )
Y = Y + (S * sine ( Theta ) )
This satisfies one item on the list, but only slightly.
As far as knowing where other robots are, that would take a lot of DSP, or an external observer telling the bots where they are, or all the bots communicating with each other.
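Those kinematic formulas translate almost line-for-line into code. A minimal Java sketch (class and method names are mine; distances are in whatever unit the encoders report, angles in radians):

```java
// Dead-reckoning odometry for a differential drive, taken directly from
// the kinematic formulas above. Error accumulates with encoder noise and
// wheel slip, so the estimate drifts over a match.
class Odometry {
    private final double wheelbase; // distance between left/right wheel centers
    private double x = 0.0, y = 0.0, theta = 0.0;

    Odometry(double wheelbase) { this.wheelbase = wheelbase; }

    // deltaLeft/deltaRight: distance each side traveled since the last call.
    void update(double deltaLeft, double deltaRight) {
        double s = (deltaLeft + deltaRight) / 2.0;            // forward travel
        double deltaTheta = (deltaLeft - deltaRight) / wheelbase;
        theta += deltaTheta;
        x += s * Math.cos(theta);
        y += s * Math.sin(theta);
    }

    double getX() { return x; }
    double getY() { return y; }
    double getHeading() { return theta; }
}
```

Calling update() once per control loop keeps a running (x, y, theta) estimate; something external, like the old IR beacons or a camera fix, would still be needed to correct the accumulated drift mid-match.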
During the off-season this year or next year, someone should hold an all-autonomous event.
<rambling>
I wish FIRST would make a game with way more autonomous, but it would sort of wreck their "no robot left behind" approach, since for some reason there isn't enough support for beginner programmers.
</rambling>
But I digress, anyone up to host an all-autonomous off-season?
Great idea!
Let's carry it out in a way that will let us try walking before we try to run.
I'll assert that it would be easy to produce a fully autonomous competition using Vex or Tetrix equipment and fields; and that it would be reasonably easy to replicate that competition set-up in many geographically diverse locations around North America and the rest of the planet.
Before any instinctive reaction takes hold and you reject this suggestion as being too unlike FRC, think for a minute.
The reduced diversity found in the Vex/Tetrix equipment suites, the greater simplicity of the Vex/Tetrix computers, and all of the other factors that make a fully autonomous Vex or Tetrix match unlike an autonomous FRC match are all good things for people who want to take on David's challenge.
Working with, and succeeding with, the simpler Vex or Tetrix equipment will lay a solid foundation on which to base the FRC attempts. There will still be plenty of work to do when the project members graduate into FRC attempts; but many of the "Doh!" realizations and many of the collaboration-process SNAFUs will have been shaken out of the software and out of the project teams' processes.
Think it over. Walk before running.
Blake
PS: This would not replace my earlier suggestion to use a simulator or appropriate video game as a learning tool. A Vex/Tetrix competition would complement also using a simulator.
the programmer
07-04-2010, 22:22
Through my 3 years in FLL, I realized that performing routines like getting a ball, aiming, and shooting is very realistic. So my suggestion: treat teleop like '08's hybrid mode, giving the drivers different routines they can run in addition to normal teleop control.
Frenchie
08-04-2010, 03:41
I didn't read the entire thread, so my apologies if I repeat what has been said already.
Imho, most FRC games of years past did not lend themselves well to full autonomous play. Just look at how much trouble teams had to go through to get the smallest aspects of autonomy down (autonomous mode, camera use, automatic transmission, ...).
Instead, maybe an offshoot of FRC should be created with full autonomy in mind. The scale of the robots would probably have to be smaller. Hell, a standard platform could even be issued (I'm thinking RoboCup and Aldebaran Nao robots...).
After all, we already have FTC and FLL.
This would allow for games that are better suited to autonomy. The lighting of the field could be standardized, AR tags could be integrated into field components, robot-to-robot communication could even be enabled...
The game could have a 10-second "teleoperated" mode at the beginning of each match as a nod to FRC.
Just a wild idea ;).
Honestly, it only took us 2 days to get the autonomous right: the day before the competition, when you have to get inspection and stuff done, and the first day of the competition... I think you overcomplicated it.
It could be that there are different definitions of "getting autonomous right". Looking at davidthefat's last 3 games on the Blue Alliance, I personally wouldn't call their autonomous "right".
It appears that in Q77 they started in the close zone and knocked the ball toward the goal without scoring.
Q84 they start in the far zone and kick one ball into the middle.
Q89 they start in the close zone and don't move at all.
I think you may be over-simplifying it .....
davidthefat
08-04-2010, 09:45
It could be that there are different definitions of "getting autonomous right". Looking at davidthefat's last 3 games on the Blue Alliance, I personally wouldn't call their autonomous "right".
It appears that in Q77 they started in the close zone and knocked the ball toward the goal without scoring.
Q84 they start in the far zone and kick one ball into the middle.
Q89 they start in the close zone and don't move at all.
I think you may be over-simplifying it .....
In Q89 we either chose NOT to go, or that was the match with the leak in the pneumatic system (the robot does not do anything if the kicker is not retracted, since the IR sensor is triggered by default). Don't blame me; that's the best our autonomous can get without an adjustable kicker. It's "right" since our goal was just to kick the ball. The camera was out for the last half of the competition, so we didn't even track.
Don't blame me; that's the best our autonomous can get without an adjustable kicker.
I'm not "blaming" anyone for anything; I'm just trying to point out that a decent full autonomous may not be quite as simple as you are assuming, given the level of performance you have demonstrated. I would disagree that what you have is the best you could do; an adjustable kicker would have no effect on how many balls you could kick out of the far zone. It's just a matter of driving to each ball and kicking (sounds simple, doesn't it?). It doesn't get much easier than knowing EXACTLY where each ball is located when you start and not having to worry about any defending robots.
We could only kick one ball in autonomous because one of our encoders died and there is no way to change it without dismantling half the robot. The robot has a small drift to the left and there wasn't time to get the time-based autonomous to compensate for it. Hopefully having all encoders working at Atlanta will allow us to clear whichever zone we are in. I would consider clearing our zone of balls (or scoring from the front zone) a minimum level of autonomous competence to shoot for. Many of the top teams already do this. Our programmers have been working on it all season and haven't got there yet.
Rather than trying to develop a full-autonomous game, it may be to your advantage to try smaller steps. Demonstrating a working 15 second autonomous that would at least clear the zone you are in would be a more attainable goal, and could possibly help in persuading your team to attempt more complicated building (you need to integrate the sensors into your robot) and programming projects.
davidthefat
08-04-2010, 22:58
I'm not "blaming" anyone for anything; I'm just trying to point out that a decent full autonomous may not be quite as simple as you are assuming, given the level of performance you have demonstrated. I would disagree that what you have is the best you could do; an adjustable kicker would have no effect on how many balls you could kick out of the far zone. It's just a matter of driving to each ball and kicking (sounds simple, doesn't it?). It doesn't get much easier than knowing EXACTLY where each ball is located when you start and not having to worry about any defending robots.
We could only kick one ball in autonomous because one of our encoders died and there is no way to change it without dismantling half the robot. The robot has a small drift to the left and there wasn't time to get the time-based autonomous to compensate for it. Hopefully having all encoders working at Atlanta will allow us to clear whichever zone we are in. I would consider clearing our zone of balls (or scoring from the front zone) a minimum level of autonomous competence to shoot for. Many of the top teams already do this. Our programmers have been working on it all season and haven't got there yet.
Rather than trying to develop a full-autonomous game, it may be to your advantage to try smaller steps. Demonstrating a working 15 second autonomous that would at least clear the zone you are in would be a more attainable goal, and could possibly help in persuading your team to attempt more complicated building (you need to integrate the sensors into your robot) and programming projects.
The last paragraph: I have been trying to do that since day one. I wanted a kicker that shoots ACCURATELY from the 3 zone... But my ideas just got shot down because I was new to the club and the team did not have faith in itself, since our history of robots is not the best. In fact, I heard ours this year was the best of all our robots... but I say it needs TONS of improvement...
I wanted a kicker that shoots ACCURATELY from the 3 zone
Honestly, it only took us 2 days to get the autonomous right: the day before the competition, when you have to get inspection and stuff done, and the first day of the competition... I think you overcomplicated it.
I think you'll find it much easier to guide your team towards your goals if you attempt smaller steps. I'm having a hard time understanding why you couldn't at least KICK 3 balls from the 3 zone (not worrying about accuracy) if it's as easy as you seem to think.
Right now, you're like the Wright brothers trying to invent the airplane. They didn't start with a 747. You'll probably be much more successful if you work towards your goals in smaller, more realistic steps. Claiming you can invent warp drive by next weekend and fly to Mars in 5 minutes isn't going to get you many followers. Talk is cheap.
FIRST should make a game where you're only able to send commands to your robot every 5 seconds, or there could be an area which was completely blacked out, so you pretty much had to use auto as you couldn't see.
The "no sending a command for 5 seconds" rule could pose a safety problem. However, the "blacked out" portion could be simulated by putting a wall up the middle of the field (with a small doorway in it for robots to go through).
Robototes2412
09-04-2010, 12:42
Actually, this kind of thing would be a good experiment for the off-season
ideasrule
09-04-2010, 13:56
And as for recruitment efforts and finding competent programmers: let's remember that not everyone has the resources, contacts, or interest, and recognize that not every team can get a competent programmer all the time. I live in the real world, and there isn't enough time in the day sometimes. As it is, our team does have one competent programmer, and it's the mentor you're chatting with right here. :-) I can name 6 teams in my immediate area that don't have the luxury of a dedicated programmer; they get help where they can. I just think that when setting up game designs, the GDC does remember to give a little consideration to smaller teams (which I would be willing to bet are the vast majority of teams, just not the well-known powerhouses), and as such I don't expect to see fully autonomous play as a requirement anytime soon.
I still find it surprising that you're the only competent programmer, but I concede that may be because I don't have much experience with teams outside my immediate area. Our school is medium/smallish, with 90 people in each grade, but we've managed to find 3 competent programmers this year, 2 of them dedicated. Our team isn't particularly good, certainly not a powerhouse, but any of the 3 programmers can write the teleop code within an hour (provided the drivers know what they want, the electrical stuff is connected correctly, the mechanics work, etc).
As for recruiting programmers, I think the best way to get them is to inspire them. Don't say they get to write the driving code and winning the game is entirely the responsibility of the drivers. Tell them they get to work on the camera, let the robot make intelligent decisions, or score autonomously. That's what got me enticed; I certainly wouldn't have joined the team just to make the robot drive.
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.