#1
Is using vision Key this year?
Well, initially I planned to try to cap in autonomous, but I have realized from threads on here, as well as my own experimenting with the camera, that I would be lucky to cap even with perfect code. In addition, I have been doing quite a bit in other aspects of the bot and have not yet begun extensive amounts of programming. I am looking for opinions as to whether it would be more beneficial for me to work on defensive modes via dead reckoning, which I have extensively worked out, or if it is far more beneficial to use the camera. I am looking strictly for opinions.
Thank you. James
#2
Re: Is using vision Key this year?
Hmm. We won't be getting extensive use out of the camera, although it might be useful for alignment, on top of dead reckoning, for, say, getting to a loading station in autonomous mode.
#3
Re: Is using vision Key this year?
I do not think that using the vision system is going to be the make-or-break this year with auto. I think the teams that can use dead reckoning or other modes will be able to complete their tasks much faster and more accurately.
#4
Re: Is using vision Key this year?
As much as I'd like to believe differently, the very few teams (<5%, I'd say) that can successfully cap the vision tetra on the center goal will have an enormous advantage. Of course, you can argue that there is only use for one such team per alliance, but with so few capable teams, they will still be in extremely high demand.
#5
Re: Is using vision Key this year?
Quote:
#6
Re: Is using vision Key this year?
Quote:
#7
Re: Is using vision Key this year?
Perhaps if you are using the default vision code straight up, but I don't think speed will be the main concern. I think bots won't be able to do it, period.
#8
Re: Is using vision Key this year?
Quote:
Just because you can't imagine how to dead reckon a tetra onto a goal doesn't mean that you won't be witnessing a robot doing just that in a month or so. I can't tell you how many times some team's been able to do something that I would have thought impossible.
#9
Re: Is using vision Key this year?
Quote:
Sean and I have worked on that problem for quite a while, and we can't find any reliable method (2 out of 8 chances is not reliable) to find a vision tetra in autonomous without the camera. All odds say it most likely cannot be done well. So if you're going to dispute it, don't you think you should have something to back it up? Saying "somebody will come up with it" doesn't count. If you have a real idea on how to do it, I'd be very interested in hearing it, though. Your statement was equally bold, but you have nothing to really support it; if you do, I'd definitely like to see it. Any tricks I can add to my bag would be great.

Now back to breaking software,
Matt
#10
Re: Is using vision Key this year?
Quote:
Drop two arms, one on each side (you could drop a hanging tetra off with these as well), move the robot to the side wall and turn it so it is facing the opposite side wall, then move across the field so that the arms will bump a tetra wherever it is. The bump starts a routine that picks up the tetra, calculates the distance, etc., to the goal you want to cap, then drives there and scores. Since the tetras can only be in two of eight locations, your bot can count its way to the bump, know which position it is in, and know where the goal is from there.

It would help to have the competition carpet to perfect it, and/or be very ready to calibrate to the field carpet on practice day. Echoing Russell (two posts above), it's not going to work every time, but you can definitely get it consistent enough to be competitive. Do I need to come up with any more ideas for you?

Last edited by Mark Pettit : 03-02-2005 at 07:34.
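To make the counting idea concrete, here is a minimal sketch in plain C of the lookup-and-drive logic. Every number in it (encoder ticks per tetra spot, the spot and goal coordinates) is a made-up placeholder you would have to calibrate on the real carpet, and `position_from_ticks` / `route_to_goal` are illustrative names rather than anything from the default code.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_POSITIONS 8   /* the eight spots a vision tetra can occupy */

/* Placeholder calibration tables -- all values are invented and would have
 * to be measured on the practice carpet.  tick_at_position[i] is the
 * cumulative drive encoder count at which the drop-down arm should bump a
 * tetra sitting in spot i; pos_x/pos_y are that spot's coordinates (inches)
 * in a frame fixed to the robot's start, +y pointing downfield.            */
static const long   tick_at_position[NUM_POSITIONS] =
    { 400, 800, 1200, 1600, 2000, 2400, 2800, 3200 };
static const double pos_x[NUM_POSITIONS] =
    { 24.0, 48.0, 72.0, 96.0, 120.0, 144.0, 168.0, 192.0 };
static const double pos_y[NUM_POSITIONS] =
    { 60.0, 60.0, 60.0, 60.0, 60.0, 60.0, 60.0, 60.0 };

/* Center-goal location in the same invented frame. */
static const double goal_x = 108.0, goal_y = 160.0;

/* When the arm's limit switch closes, decide which spot was hit by finding
 * the calibrated tick count closest to the current encoder reading.        */
int position_from_ticks(long ticks)
{
    int  best     = 0;
    long best_err = labs(ticks - tick_at_position[0]);
    for (int i = 1; i < NUM_POSITIONS; i++) {
        long err = labs(ticks - tick_at_position[i]);
        if (err < best_err) { best_err = err; best = i; }
    }
    return best;
}

/* From that spot, how far and on what heading (degrees off straight
 * downfield, positive to the right) to drive to reach the goal.            */
void route_to_goal(int pos, double *distance, double *heading_deg)
{
    double dx = goal_x - pos_x[pos];
    double dy = goal_y - pos_y[pos];
    *distance    = sqrt(dx * dx + dy * dy);
    *heading_deg = atan2(dx, dy) * 180.0 / 3.14159265358979;
}

int main(void)
{
    /* Pretend the arm bumped a tetra at 1630 encoder ticks. */
    int    pos = position_from_ticks(1630);
    double dist, heading;
    route_to_goal(pos, &dist, &heading);
    printf("bumped spot %d: drive %.1f in at %.1f deg\n", pos, dist, heading);
    return 0;
}
```

The calibration tables are exactly where the "be ready to calibrate on practice day" advice bites: every value in them shifts with the carpet and the drivetrain.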
#11
Re: Is using vision Key this year?
Quote:
Matt
#12
Re: Is using vision Key this year?
It's really quite simple to use dead reckoning to find the vision tetra (assuming you have dead-reckoning equipment on board and your programmers know how to use it). This method would not be foolproof, and there is a certain percentage of the time that it should not work at all (I'm done with math for the day, so someone else can figure it out), but it should work most of the time.

Basically, you have to insist that you be allowed to position your bot in the left or right slot (not the middle, though you might be able to make it work for that too). Then you use your camera to scan for a vision tetra in front of you and determine the heading on which the tetra lies. This is where the problem comes in: it is unlikely that you could get an accurate enough angle to determine where the tetra is if it is not in one of the five tetra positions in front of you. So if both tetras are in the three positions not right in front of you, then you just have to take a random guess at which of those other three positions they might be in and go for one. Then, based on that angle, you determine which position the tetra is in and use dead reckoning to nail it. Sorry that explanation was kind of incoherent, but it's the best I can do at this time of night.

Edit: Actually, I just studied the field diagram again, and I realised that starting in the center might actually be best, because you can get reasonably accurate angles on all of the tetras, and the only two that are on approximately the same heading are the two in front of you; if there are no tetras anywhere else, then they are both in front of you, so you could theoretically be 100% effective. Assuming the pan servo can pan far enough to see the whole field.

Last edited by russell : 03-02-2005 at 00:49.
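A rough sketch of the angle-matching step described above, with made-up geometry: the pan angle at which the camera locks onto a tetra is compared against the precomputed bearing to each candidate spot, and the closest match within a tolerance wins; anything outside the tolerance is the "take a random guess" case. The coordinates, the tolerance, and the function names here are all assumptions for illustration.

```c
#include <math.h>
#include <stdio.h>

#define NUM_SPOTS 8
#define RAD_TO_DEG (180.0 / 3.14159265358979)

/* Invented (x, y) of the eight candidate tetra spots, in inches, in a frame
 * centered on the robot's starting slot with +y pointing downfield.        */
static const double spot_x[NUM_SPOTS] = { -90, -60, -30, -10,  10,  30,  60,  90 };
static const double spot_y[NUM_SPOTS] = { 100, 130, 150, 160, 160, 150, 130, 100 };

/* Bearing from the start slot to spot i, expressed like a pan angle:
 * 0 degrees straight ahead, positive to the right.                         */
static double bearing_to(int i)
{
    return atan2(spot_x[i], spot_y[i]) * RAD_TO_DEG;
}

/* Given the pan angle at which the camera locked on, return the index of
 * the best-matching spot, or -1 if nothing is within the tolerance (the
 * "take a random guess" case).                                             */
int match_spot(double pan_deg, double tolerance_deg)
{
    int    best     = -1;
    double best_err = tolerance_deg;
    for (int i = 0; i < NUM_SPOTS; i++) {
        double err = fabs(pan_deg - bearing_to(i));
        if (err < best_err) { best_err = err; best = i; }
    }
    return best;
}

int main(void)
{
    double pan  = 24.0;                 /* pretend lock-on angle, degrees   */
    int    spot = match_spot(pan, 6.0); /* 6-degree window, also invented   */
    if (spot >= 0)
        printf("looks like spot %d -> dead reckon to (%.0f, %.0f)\n",
               spot, spot_x[spot], spot_y[spot]);
    else
        printf("angle is ambiguous -- guess a spot and go\n");
    return 0;
}
```

Starting in the center, as the edit suggests, mostly spreads the candidate bearings farther apart, so the tolerance check has an easier job.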
#13
Re: Is using vision Key this year?
To those that say it's impossible to find a vision tetra reliably without the camera... there are certainly other sensors you can use to find something that is in one of a finite number of known positions. Encoders, limit switches, and distance sensors are just a few of the possibilities. I am fully confident that there will be a team that can consistently cap with a vision tetra in autonomous without using the camera.
#14
Re: Is using vision Key this year?
We aren't that worried about capping the vision tetra this year. I'm writing a copycat program that will let us have 16 different routines we can run, so we should still be capable of dropping tetras in the side goals, playing defense, or going to the loading station. Is autonomous important this year? Yes. Is the camera critical? No.
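For what a "copycat" selector might look like, here is a bare-bones sketch (not the actual program): four binary switches pick one of 16 pre-recorded routines, and each pass through the autonomous loop plays back the next stored pair of drive outputs. The switch handling, the 0-255 PWM convention with 127 as neutral, and the table sizes are assumptions standing in for whatever the real controller I/O looks like.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_ROUTINES 16
#define MAX_STEPS    512   /* one entry per control loop; trim to fit memory */

/* One recorded step; 127 is neutral on an assumed 0-255 PWM scale. */
typedef struct { uint8_t left_pwm; uint8_t right_pwm; } step_t;

/* Recorded routines, normally filled in ahead of time by driving the robot
 * in a "record" mode and logging the joystick values every loop.           */
static step_t routines[NUM_ROUTINES][MAX_STEPS];

/* Build a routine number 0-15 from four binary selector switches. */
static int select_routine(int sw0, int sw1, int sw2, int sw3)
{
    return (sw3 << 3) | (sw2 << 2) | (sw1 << 1) | sw0;
}

/* Called once per autonomous loop: play back the next recorded step,
 * holding neutral once the routine runs out.                               */
static void playback_step(int routine, int step, uint8_t *left, uint8_t *right)
{
    if (step >= MAX_STEPS) {
        *left = *right = 127;
        return;
    }
    *left  = routines[routine][step].left_pwm;
    *right = routines[routine][step].right_pwm;
}

int main(void)
{
    /* Switches 0 and 2 on -> routine 5; stub in a few recorded steps. */
    int routine = select_routine(1, 0, 1, 0);
    routines[routine][0] = (step_t){ 200, 200 };   /* drive forward */
    routines[routine][1] = (step_t){ 200,  60 };   /* arc right     */
    routines[routine][2] = (step_t){ 127, 127 };   /* stop          */

    uint8_t left, right;
    for (int step = 0; step < 3; step++) {
        playback_step(routine, step, &left, &right);
        printf("step %d -> left %u right %u\n", step,
               (unsigned)left, (unsigned)right);
    }
    return 0;
}
```

Recording is just the mirror image: in a practice mode, write the driver's joystick values into the same table each loop instead of reading from it.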
#15
Re: Is using vision Key this year?
Quote: