Quote:
|
Originally Posted by Dave Scheck
Just for the sake of argument, why bother if you can do it without the camera? This type of use can be compared to the IR beacons of 2004...it becomes an extra complexity for something that can be done using a sensor or time based system. The only way that I see to incorporate the camera, in this example, is to use it to double check where you are.
On a similar note, one problem that I'm sure people found is that with the current color models, the camera sees blue the same as green. Granted, this can be compensated for (e.g., by determining the size of the object), but once again it adds additional complexity.
I'm not saying that using the CMUcam is necessarily a good/bad thing, I just think that it, like many things, has its application. In most cases you wouldn't use a limit switch to measure angular rotation, you'd most likely use an encoder or pot. It's all about choosing the right and/or most efficient tool for the job.
|
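For what it's worth, the size-based compensation Dave mentions can be sketched roughly like this. Everything here is illustrative (the pixel counts, function name, and blob format are made up, not from any real CMUcam API): the idea is just that if your color windows can't separate blue from green, you can break the tie by which expected blob size the measurement is closer to.

```python
# Hypothetical blob data: a pixel count from a CMUcam-style color-tracking
# packet. Thresholds are illustrative only and would depend on your
# targets and working distance.

def classify_blob(pixel_count, expected_green_px=400, expected_blue_px=900):
    """Guess blue vs. green when the color windows overlap, using blob size.

    Assumes the blue object is known to appear larger in the frame than
    the green one at the distance you're operating at.
    """
    # Pick whichever expected size the measured blob is closer to.
    if abs(pixel_count - expected_blue_px) < abs(pixel_count - expected_green_px):
        return "blue"
    return "green"
```

Obviously this falls apart if the distances vary a lot, which is exactly the "additional complexity" being warned about.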
One team at Annapolis used both mag induction sensors and the camera to tell when they were lined up with the goals. They had two of the mag sensors on the low front of the robot to "see" the metal of the goal base. They used the camera to see the yellow triangle in the middle of the goal.
I really wish I could remember the team number; their system was so nifty, and their tetra loader was so simple. They were at the DC scrimmage too. (Maybe Dave can help me out)
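Their approach boils down to a simple AND of the two sensing methods. Here's a rough sketch of that kind of check; the function name, frame width, and tolerance are all hypothetical, not anything from that team's actual code:

```python
# Illustrative fusion check in the spirit of that team's setup: two
# magnetic-induction sensors over the goal base, plus a camera sighting
# the yellow triangle. All names and numbers here are made up.

def lined_up(left_mag, right_mag, blob_x, frame_center=88, tolerance=10):
    """True when both mag sensors see the goal base AND the yellow
    triangle's centroid is roughly centered in the camera frame."""
    camera_centered = abs(blob_x - frame_center) <= tolerance
    return left_mag and right_mag and camera_centered
```

The nice part is the redundancy: the mag sensors confirm you're physically over the base, and the camera confirms you're pointed at the right goal, so neither sensor alone can give a false "go".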
Wetzel