#1
Re: Did Anyone Accept the Challenge?
But in seriousness, I think it's an almost impossible task to accomplish in six weeks. If you had a fully built robot it might be possible, but you would have to have exclusive rights to the robot (i.e., no drive team practicing with it). I wish FIRST would allow the ZigBee module so we could begin programming coordinated autonomous routines. Three autonomous robots working together would be pretty awesome.

Last edited by mwtidd : 14-03-2011 at 18:53.
#2
Re: Did Anyone Accept the Challenge?
I personally think it is more than doable; the teams just need a very expandable and scalable engine. That will take a long time, but once you have the engine, I believe it is doable in six weeks. My worry is spending too much money on such goals; I don't want to put a cap on how much the other departments can spend on the robot, and the sensors do not pay for themselves.

My other worry is: "Whoop-de-doo, you have a fully autonomous robot; what's the big deal?" Programmers would be impressed; no one else would be. It is mostly a public statement to even have autonomy. The drive team was mostly seniors, and a sophomore and I will be the only ones left next year, so I don't think the drive team would mind. Third, it seems like a very selfish goal. I am purposely choosing NOT to be team captain next year for fear that I would turn out to be a dictator and drag the team down with me; I am nominating someone else on purpose. Oh, and BTW, I love your signature, Mr. Mike.
#3
Re: Did Anyone Accept the Challenge?
I definitely think it is possible in six weeks, but there would be some requirements. First, using an almost identical chassis. Second, a swerve or mecanum drive would make the task a lot easier. Third, working libraries for each sensor (many could be a layer on top of WPILib) and a working camera library on the driver station (I just got OpenCV working in NetBeans on my Mac).

My list of absolute requirements for autonomous Logomotion:
2 cameras
2 rangefinders
strafe (mecanum, swerve, or drop strafe)
encoder on arm/lift
5 ft/s @ 90% on at least one speed (hopefully not the only speed)

Ideal additions:
a third camera on the robot (on-board processing)
1 compass
1 gyro
encoders on everything
multi-speed transmission

Last edited by mwtidd : 14-03-2011 at 19:23.
#4
Re: Did Anyone Accept the Challenge?
Aren't the minibots fully autonomous? Then any team with a minibot has a fully autonomous robot this year.
#5
Re: Did Anyone Accept the Challenge?
On the topic of the thread, I think this year has too many things involved to really pull this off: making logos, finding the right pieces, navigating the field; the list goes on and on. For this to happen we need another game like Lunacy, where the field is open, there are very few goals, very simple strategy, and very few (preferably one) types of game piece. I for one hope to see this happen one day (or have the chance to try it myself), but I don't think it's going to happen successfully this year. Even if it were pulled off, I think I would leave it in the shop and leave the real work to our drivers; autonomous is enough of a burden.
#6
Re: Did Anyone Accept the Challenge?
The concept was: give me a robot close to what this year's will be, and I could play with it all season.

I agree this year is a bit ambitious. I think the most difficult part would be avoiding penalties. Finding tubes, picking them up, and knowing where to cap them can be done reasonably with off-robot processing using something like OpenCV.
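The off-robot processing idea above can be sketched minimally. This is a toy example, not any team's actual pipeline: it thresholds an RGB frame for a tube-colored blob using plain NumPy (a real system would likely use OpenCV, as the posters mention) and returns the blob's centroid. The thresholds, frame source, and minimum blob size are all assumptions for illustration.

```python
import numpy as np

def find_tube_centroid(frame, lo, hi, min_pixels=50):
    """Return (row, col) centroid of pixels inside the RGB range, or None.

    frame: HxWx3 uint8 array; lo/hi: per-channel inclusive bounds.
    A real off-robot pipeline would run this per camera frame and send
    the result back to the robot over the driver-station link.
    """
    mask = np.all((frame >= lo) & (frame <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if len(ys) < min_pixels:      # too few matching pixels: no tube in view
        return None
    return (float(ys.mean()), float(xs.mean()))

# Synthetic 100x100 frame with a "red tube" blob centered at (40, 60)
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[30:51, 50:71] = (200, 30, 30)
print(find_tube_centroid(frame, lo=(150, 0, 0), hi=(255, 80, 80)))  # → (40.0, 60.0)
```

A driver-station laptop has far more CPU than the 2011 cRIO, which is what makes this split attractive: the robot only needs to consume a small "target at (row, col)" message.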
#7
Re: Did Anyone Accept the Challenge?
David and lineskier, out of curiosity, how much experience do you guys have with programming autonomous robots? I think you are drastically underestimating the difficulty of this task.
#8
Re: Did Anyone Accept the Challenge?
I really have no experience, but I believe it can't be that hard in such a confined environment as a FIRST game, compared to the real world. I was only planning on being defensive anyway. A fully autonomous robot could have been just a wall, like Pong: just strafing to stay in front of the opposing bot.
#9
Re: Did Anyone Accept the Challenge?
It may be even harder. In "the real world" it is often not a requirement to navigate around objects moving at 10-15 ft/s.
#10
Re: Did Anyone Accept the Challenge?
Eh, true, but we won't need to compute in real time to compensate for uneven terrain or anything like that. I see no reason why this won't be possible within a year.
#11
Re: Did Anyone Accept the Challenge?
I believe the challenge was a worthy goal; it surely can be done given a small team of experienced programmers with the drive to make something this awesome. However, this game doesn't really suit full autonomy. Can someone here tell me exactly how complex the code would be to sense all the tubes flying around the field and differentiate them from the field, the arena, and other robots? Last year's would have been a great game to try it with (not as many objects to deal with, and a more limited space at any given time). This year would just be hell with anything less than 10 or 15 sensors, plus all the code required to operate them effectively.
#12
Re: Did Anyone Accept the Challenge?
Perhaps we should try a new goal, maybe for off-season events. I doubt we will be able to do this, but I would love to see someone give it a shot.
What if the challenge were not a fully autonomous bot, but rather a MOSTLY autonomous bot? Telling the robot to "FIND TUBE", "CAPTURE TUBE", etc. is a good idea. Obviously, if it misinterprets, the errors will just rack up and there will be problems on top of problems. To avoid this, give the driver two buttons: "Success" and "Failure/Retry". If the robot succeeds in the command it's given, you tell it "Success"; if it fails, you tell it to retry. To be really impressive, have it learn from failure. Yeah, that sounds difficult, but it's possible, and not as complex as you might think. If it grabs the wrong color, have it tweak its color-detection settings. If it went the wrong way, tweak the heading. So, what do you think? Anyone willing to take up the challenge? (Like I said earlier, unfortunately our team won't be able to do this yet, but maybe in the future.)
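The "learn from failure" idea above amounts to a tiny feedback loop. Here is one possible sketch; everything in it is hypothetical (the class, parameter names, and step sizes are illustrative, not from any real FRC codebase):

```python
# Hypothetical sketch of the driver "Success" / "Failure/Retry" loop
# described above: each reported failure nudges the tunable parameter a
# maneuver depends on; success leaves the current value locked in.

class AdaptiveManeuver:
    def __init__(self, name, param, step):
        self.name = name
        self.param = param    # e.g. a color threshold or heading offset
        self.step = step      # how far to nudge per reported failure

    def report(self, success, error_sign=+1):
        """Driver feedback after a maneuver attempt.

        error_sign says which way the failure suggests adjusting (e.g.
        the robot drifted left, so push the heading offset right).
        """
        if not success:
            self.param += error_sign * self.step
        return self.param

grab = AdaptiveManeuver("CAPTURE TUBE", param=128, step=8)  # color threshold
grab.report(success=False, error_sign=-1)  # grabbed wrong color: lower it
grab.report(success=False, error_sign=-1)
print(grab.param)  # → 112
```

Real learning would want bounds on the parameter and a shrinking step size so a couple of driver mistakes can't walk the threshold off a cliff, but the basic loop really is this simple.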
#13
Re: Did Anyone Accept the Challenge?
Once I raise the money, I'll be putting together a 4-speed swerve drive. It wouldn't need an arm at first; it would simply drive to a tube, and then drive to the peg column where that tube would be worth the most. Ideally such a task would be accomplished by two separate teams, one for the drive and one for the arm, so work could be done in parallel rather than in series. Or even better, three, adding a team for the vision system, since it would dictate to both the drive and the arm.

Last edited by mwtidd : 15-03-2011 at 10:44.
#14
Re: Did Anyone Accept the Challenge?
The pegs are easier to see than the targets, and the pegs convey more information than the goals. The tubes are easier to see than balls. Fully autonomous would help if you had a full alliance involved in the cause. I think an autonomous capper could be faster than a human capper, so if you had a team dedicated to delivering tubes, and your only job was to cap them, I think you could get the average down to about 10 seconds per find-and-cap by staying in your end zone. One thing I've noticed a lot of is alliance partners bumping into one another. By putting an autonomous robot on one rack and having one runner, I think it would free up quite a bit of the field. Again, you wouldn't want to try to track the tubes flying around, but tubes on the ground are certainly detectable, as they are large and have specific shapes. OpenCV contains shape-matching functions that could help there. You would drive around scanning for a tube on the field.
#15
Re: Did Anyone Accept the Challenge?
However, coding changed completely with the introduction of the cRIO. That being said, I got the camera working for this game in about one night of work. My biggest issue was our drive (2WD direct with only one CIM per gearbox). The arm didn't help either. If I were to try to accomplish this, I would have a long list of requirements for the robot, something like 40 (swerve with a reliable arm). Using real vision libraries, you could find the tubes, analyze the rack, and cap. The tubes make huge targets, and the pegs show up really well on the camera. Using something like what 33 developed, where the arm goes to a given height based on the tube, would help. I already have the structure to let autonomous run as a graph state machine, allowing me to create loops. So to do a fully autonomous robot, you would create maneuvers such as the following, and it would progress through the graph:

FIND TUBE
CAPTURE TUBE
FIND TARGET
CAP TUBE
RESET...

I'm not saying it would be easy, but I think it would be possible if it were a goal from kickoff to IRI. If I could score two ubertubes and one logo fully autonomously, that would be my metric for success. Probably the way I would start is by making a deal with the opposing alliance: "You throw three tubes in our scoring zone, we'll throw three in yours." That way I would never have to leave the area by the goal or deal with a defending robot. More realistically, I could set up the maneuvers for driver use: the driver gets the tube into the camera's image, and the tube capture is handled automatically; then the driver gets to the pegs and runs the auto cap. Drivers are better than autonomous code at adjusting to their environment, but tube captures and caps are fairly static, so they may be better suited to auto maneuvers.

Last edited by mwtidd : 14-03-2011 at 23:07.
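The graph state machine described above might look something like this minimal sketch. The maneuver names come from the post itself; the dict-based graph and the stubbed maneuver actions are my own simplification of whatever the real structure does (the real one would also need failure edges for retries):

```python
# Minimal sketch of the maneuver graph: each node names its successor,
# and RESET loops back to FIND TUBE so the cycle repeats all match long.
MANEUVER_GRAPH = {
    "FIND TUBE":    "CAPTURE TUBE",
    "CAPTURE TUBE": "FIND TARGET",
    "FIND TARGET":  "CAP TUBE",
    "CAP TUBE":     "RESET",
    "RESET":        "FIND TUBE",   # loop: go find the next tube
}

def run_autonomous(start, steps, do_maneuver):
    """Walk the graph for `steps` transitions, invoking each maneuver.

    do_maneuver(name) stands in for the real robot action; a real
    implementation would block until sensors confirm completion before
    following the edge to the next maneuver.
    """
    state = start
    trace = []
    for _ in range(steps):
        do_maneuver(state)
        trace.append(state)
        state = MANEUVER_GRAPH[state]
    return trace

trace = run_autonomous("FIND TUBE", 6, do_maneuver=lambda name: None)
print(trace)
# → ['FIND TUBE', 'CAPTURE TUBE', 'FIND TARGET', 'CAP TUBE', 'RESET', 'FIND TUBE']
```

Because the graph is data rather than code, adding a failure edge (say, CAPTURE TUBE back to FIND TUBE when the grip sensor reads empty) is a one-line change, which is presumably the appeal of the graph structure over a hard-coded sequence.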