Well, initially I planned to try to cap in autonomous, but I have realized from threads on here, as well as my own experimenting with the camera, that I would be lucky to cap even with perfect code. In addition, I have been doing quite a bit of work on other aspects of the bot and have not yet done extensive amounts of programming. I am looking for opinions as to whether it would be more beneficial for me to work on defensive modes via dead reckoning, which I have extensively worked out, or whether it is far more beneficial to use the camera. I am looking strictly for opinions.
Hmm. We won't be getting extensive use out of the camera, although it might be useful for alignment, in addition to dead reckoning, for, say, getting to a loading station in autonomous mode.
I do not think that using the vision system is going to be the make-or-break this year with auto. I think the teams that can use dead reckoning or other modes will be able to complete their tasks much faster and more accurately.
As much as I'd like to believe differently, the very few teams (<5%, I'd say) that can successfully cap the vision tetra on the center goal will have an enormous advantage. Of course, you can argue that there is only use for one such team per alliance, but with so few capable teams, they will still be in extremely high demand.
How, pray tell, would a team get to a vision tetra in auto mode with a dead-reckoning-only routine? Oh, that's right: they wouldn't. So let's say you go for position at a loading station first thing. What happens if a tetra is in your way and you get tangled? "Oh, well"?
With two vision tetras placed among eight possible positions, a bot that shoots straight out for the center goal has at best a 2-in-8 chance of running across a tetra in auto. I just think the vision system is going to be slow and might take up too much time.
Perhaps if you are using the default vision code straight up, but I don't think speed will be the main concern. I think bots won't be able to do it, period.
We aren't that worried about capping the vision tetra this year. I'm writing a copycat program that will let us have 16 different routines we can run, so we should still be capable of dropping tetras in the side goals, playing defense, or going to the loading station. Is autonomous important this year? Yes. Is the camera critical? No.
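For what it's worth, here's a minimal sketch of how a 16-routine selector like that can work: pack four digital inputs (DIP switches or jumpers) into a 0-15 index and branch on it. The input names follow the IFI default code aliases (rc_dig_in01 and friends); the routines themselves are placeholders for whatever you actually write.

    /* Hedged sketch: pick one of 16 autonomous routines from four digital
       inputs. rc_dig_in01..rc_dig_in04 are the IFI default code aliases;
       swap in whatever inputs you actually wired. */
    unsigned char Select_Auto_Routine(void)
    {
        /* Pack the four switch bits into a 0-15 index. */
        return (unsigned char)( rc_dig_in01        |
                               (rc_dig_in02 << 1) |
                               (rc_dig_in03 << 2) |
                               (rc_dig_in04 << 3));
    }

    void Run_Auto_Routine(unsigned char routine)
    {
        switch (routine)
        {
            case 0:  /* e.g. drive straight to the loading station */ break;
            case 1:  /* e.g. cap a side goal via dead reckoning    */ break;
            /* ... cases 2 through 14 ... */
            case 15: /* e.g. play defense at midfield              */ break;
        }
    }

Read the switches once at the start of autonomous and you can change strategy between matches without reloading code.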
Well, that’s a bold statement!
Just because you can’t imagine how to dead reckon a tetra onto a goal doesn’t mean that you won’t be witnessing a robot doing just that in a month or so.
I can’t tell you how many times some team’s been able to do something that I would have thought impossible.
Bold as it may have been, it's based on a lot of thought.
Sean and I have worked on that problem for quite a while, and we can't find any reliable method (a 2-in-8 chance is not reliable) of finding a vision tetra in autonomous without the camera. The odds say it more than likely cannot be done well. So if you're going to dispute it, don't you think you should have something to back it up? Saying "somebody will come up with it" doesn't count. If you have a real idea on how to do it, though, I'd be very interested in hearing it.
Your statement was equally bold, but you have nothing to really support it. If you do, I'd definitely like to see it; any tricks I can add to my bag would be great.
The real problem is that it's impossible to know where the tetras are going to be before the match begins (the tetras are placed on the field after the robots). Now, I can imagine a few ways it could be done, but I don't think any would be particularly effective or use time wisely. That's not to say it can't happen, but I consider it unlikely.
It's actually simple to use dead reckoning to find the vision tetra (assuming you have dead-reckoning equipment on board and your programmers know how to use it). This method would not be foolproof, and there is a certain percentage of the time it will not work at all (I'm done with math for the day, so someone else can figure out exactly how often), but it should work most of the time.
Basically, you have to insist on being allowed to position your bot in the left or right slot (not the middle, though you might be able to make it work there too). Then you use your camera to scan for a vision tetra in front of you and determine the heading on which the tetra lies. This is where the problem comes in: it is unlikely that you could get an accurate enough angle to determine where the tetra is if it is not in one of the five tetra positions in front of you. So if both tetras are in the three positions not right in front of you, you just have to take a random guess at which of those three positions they might be in and go for one. Then, based on that angle, you determine which position the tetra is in and use dead reckoning to nail it.
Sorry, that explanation was kind of incoherent, but it's the best I can do at this time of night.
Edit: Actually, I just studied the field diagram again, and I realized that starting in the center might actually be best, because you can get reasonably accurate angles on all of the tetras. The only two that are on approximately the same heading are the two in front of you, and if there are no tetras anywhere else, then they are both in front of you, so you could theoretically be 100% effective; assuming the pan servo can pan far enough to see the whole field.
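A rough sketch of that scan-and-classify step, assuming the center start: once the camera locks onto a tetra, convert the pan reading into a bearing and pick the closest entry in a table of known tetra bearings. Every number below is a placeholder; you'd measure the real bearings from your starting slot.

    #include <math.h>

    #define NUM_SPOTS 8

    /* Hypothetical bearings (degrees) of the eight tetra positions as seen
       from the center starting slot -- measure these on the real field. */
    static const float spot_bearing[NUM_SPOTS] =
        { -75.0f, -50.0f, -25.0f, -8.0f, 8.0f, 25.0f, 50.0f, 75.0f };

    /* Given the pan angle at which the camera locked onto a tetra, return
       the index of the closest known position. That index becomes the
       lookup into your dead-reckoning waypoint table. */
    int classify_tetra(float pan_deg)
    {
        int i, best = 0;
        float err, best_err = 1.0e9f;

        for (i = 0; i < NUM_SPOTS; i++) {
            err = (float)fabs(pan_deg - spot_bearing[i]);
            if (err < best_err) { best_err = err; best = i; }
        }
        return best;
    }

The camera only has to be good enough to disambiguate between neighboring bearings; the dead reckoning does the rest.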
To those that say it's impossible to find a vision tetra reliably without the camera: there are certainly other sensors you can use to find something that is in one of a finite number of known positions. Encoders, limit switches, and distance sensors are just a few of the possibilities. I am fully confident that there will be a team that can consistently cap with a vision tetra in autonomous without using the camera.
Have you ever been to a competition event? The impossible happens there all the time.
Drop two arms, one on each side (you could also drop a hanging tetra off with these), move the robot to the side wall and turn it so it is facing the opposite side wall, then move across the field so that the arms will bump a tetra wherever it sits. The bump starts a routine that picks up the tetra, calculates the distance and heading to the goal you want to cap, then drives there and scores. Since the tetras can only be in two of eight locations, your bot can count its way to the bump and know which position it's in and where the goal is from there. It would help to have the competition carpet to perfect it, and/or to be very ready to calibrate to the field carpet on practice day.
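The counting step in that idea is tiny; a hedged sketch, assuming a drive encoder and a limit switch on each drop-down arm (the ticks-per-position constant is a placeholder you'd calibrate on the carpet):

    /* Assumed spacing between adjacent tetra positions, in encoder ticks --
       calibrate this on the actual carpet. */
    #define TICKS_PER_POSITION 600

    /* Drive across the field until an arm's limit switch closes, then use
       the encoder count to decide which of the eight positions was hit.
       Returns 0-7, or -1 if nothing has been bumped yet. */
    int position_from_bump(unsigned int encoder_ticks,
                           unsigned char arm_switch_closed)
    {
        if (!arm_switch_closed)
            return -1;
        return (int)(encoder_ticks / TICKS_PER_POSITION);
    }

From that index you look up the stored heading and distance to whichever goal you want to cap.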
Echoing Russel (two posts above): it's not going to work every time, but you can definitely get it consistent enough to be competitive.
Do I need to come up with any more ideas for you?
Will vision be key this year? I think so. A team who can cap reliably would get an edge if the opposing alliance cannot. There would be no point in wasting the time to try and not getting it, which would result in no points in auto. Whereas if you cannot cap reliably and you do other things to score reliably instead, you can put up decent points in auto and not have to worry about vision. However, an alliance who can work together to cap, and also score reliably in other ways, should have an edge. As far as capping in auto with dead reckoning, I myself don't know how to do it, but I assure you it COULD be done. Probably not very reliably, but it is possible (not the wisest choice, in my opinion).
Well, the default camera code seems to be working pretty well for us (for following stuff around, anyway). I really couldn't imagine us not using it. Granted, we may not use just the camera (we may use touch sensors when we get close to the tetra or goal, for example), but the camera itself seems like way too useful a tool to just not use. =P
Not to boast here, but here’s my standing.
I believe that our team will have a very good chance of being able to grab a vision tetra and cap it on a goal in auto mode.
Yes, the camera is the centerpiece of our auto mode, but we are also grabbing input from possibly ~12 (or more) other sensors on our bot to aid us... (they are ALL digital, BTW...)
Our bot will also have some workarounds built in. If, for example, it misses the tetra in an attempt to grab it, it will know, and it will try to get it again.
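That kind of miss-and-retry workaround is basically a small state machine. A hedged sketch, stepped once per control loop; the state names, the three-attempt limit, and the gripper limit switch are all placeholder assumptions:

    /* Tiny grab-retry state machine. gripper_switch reads 1 when a tetra
       is actually seated in the gripper. */
    enum grab_state { APPROACH, GRAB, CHECK, GIVE_UP, DONE };

    static enum grab_state state = APPROACH;
    static unsigned char attempts = 0;

    void grab_step(unsigned char gripper_switch)
    {
        switch (state) {
        case APPROACH: /* drive toward the tetra */  state = GRAB;  break;
        case GRAB:     /* close the gripper */       state = CHECK; break;
        case CHECK:
            if (gripper_switch)      state = DONE;      /* got it */
            else if (++attempts < 3) state = APPROACH;  /* missed -- retry */
            else                     state = GIVE_UP;   /* move on */
            break;
        case GIVE_UP:  /* fall back to another routine */ state = DONE; break;
        case DONE:     break;
        }
    }

The key is that the robot checks whether the grab actually worked instead of assuming it did.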
Notice, however, that I said “A” vision tetra on “A” goal. We aren’t trying to discern 100% between which goal/tetra we’re going after, most of that is going to be left up to the robot. This will also hopefully assist it in terms of speed, as we will probably pursue the closest item on the field to the bot.
I think the trick is going to be adding as much sensory input as possible at the points where the robot needs information. Let's just say our 16 digital inputs will probably be full.
I also want to say that it is not impossible. There is plenty of opportunity for sensors to be added to complete the task. I also think what some teams identify as "the task" may be a bit much. I can understand how it is difficult to discern between goals and so forth, so don't define your task so narrowly.
I also like the idea of teams going for the loading zones. It is likely easier and quicker for two alliance bots to grab a tetra and cap the end goals (worth the same bonus as center vision capping) than to do vision tracking.
Our team has weighed the benefits, and Vision Capping is our prime autonomous strategy (wish us luck!)
So far, BTW, I haven't seen anything on the hardware side that might give our autonomous plans any trouble.
I hope to have some video of the bot in action once everything's up and running. We've got to actually get our arm on the bot first! :yikes:
Did Wildstang know where all the stacks were in 2003? No, but they knocked down a lot of randomly placed stacks. This year, we know where they can and can’t be.
In addition to Mark's idea, I could envision a swerve robot going to each of the locations and lunging at where a tetra is supposed to be. Using other sensors, you can tell whether you have the tetra and be on your way to the goal.
How fast are you going to use vision, grab a tetra, and move to a goal? I bet I can move directly to the goal and block before you can place the tetra, but that is the subject of another long-winded thread.
-Paul
P.S. - Yes, we are trying to use the camera, but we are also investigating other methods.
P.P.S. - Each year has a red herring element. There are some of us T-Chickens who think the camera is one of them.