CMUCam Next Year?

I was talking to some members of our team, and we are discussing programming an autonomous mode using the CMUCam. The only thing that’s come up is whether it’s going to be used next year or not. So who thinks that it will be used next year?

Thanks

nuke

Unfortunately I think your answer is yes. I made a poll a few weeks ago and Dave [voted “only”](http://www.chiefdelphi.com/forums/poll.php?do=showresults&pollid=1261) for the CMU cam. Now he is a slippery one and could have seen that his selection would be scrutinized - hence did it to throw us off. But I don’t think so. Also, this product had a lot of thought/investment put into it from FIRST for 2005, so I doubt they’d abandon it already.
I don’t think it’s a good tool out in the open on randomly lit fields. But it might work if the new game controlled the background you searched for items on and maybe on a smaller scale.
Example: if we end up searching for colored PVC pipe lying on a black cloth, your mechanism could elevate the camera above the items, looking down on the black background and eliminating the environmental noise that screwed those cams up.

I hope to never see that awful thing again. In the event that we do see it again, things had better be a lot more organized.

Let’s look at the history of FIRST’s “forced technology” additions to the kit.
2002 - Infrared detection of goals - Failure; no one really did it.
2003 - Infrared detection of boxes - Failure; no one really did it.
2004 - Infrared beacon at ball tees - Failure; backscatter and other issues made this totally unreliable.
2005 - Camera to detect vision tetras - Failure; only a few teams ever managed to make it work.

I hope the powers at FIRST will learn from these lessons and NOT repeat past mistakes. If they do want us to develop new technology, why keep it a secret until January? If we are to use the camera again in 2006, why not tell us now? If FIRST did this, the likelihood of our success would be much higher.

Probably will. It’s the most advanced sensor, offering a lot of possibilities. I don’t like it, though.

I do believe that the 2002 and 2003 games used light sensors and reflective tape, not IR. IR was introduced as new for the 2004 game. The boxes, and I think the goals for 2002, had highly reflective tape that allowed the robots to potentially see them. However, only the human-placed boxes had tape, I think. IR was a flop and lasted one year. Not to say that it won’t come back.

I’m not familiar with any sort of autonomous-type stuff for 2002. I was of the understanding that autonomous mode didn’t start until 2003. (It was before my time.)

If I remember my reading correctly, there was retro-reflective tape on the 2003 HP bins, which the robots could spot and then act accordingly. The only problem was that autonomous wound up being let’s-see-who-can-get-to-the-ramp-first. Not so much a sensor issue as a game design issue.

Infrared had one bright side nobody realized until afterwards (I recall it being said by cbolin of 342): if you got close, it worked much better. All you had to do was get in the neighborhood (memory says five feet or so) before actually caring about the sensors.

Now, my one real gripe with the CMUcam (aside from the technical difficulties that everyone seemed to get) is that it seemed to kill the visual liveliness of the field. The 2003 and 2004 fields had stuff on them, big things that made the fields look distinctive…and I just didn’t feel that from the Triple Play field.

I remember when I first saw the field at the kickoff, I felt very empty. It seemed to me like something was wrong. I was asking myself, “When are they going to bring out the ramps, balls, and goals? Where are all the solid objects?”

I wouldn’t mind it too much… as long as the not-so-intelligent members don’t touch it this time and burn it.

An interesting application for the CMU cam would be object identification, more than actual navigation. For example, if all the game objects were placed over the operator stations (I know, dangerous), in two colors, it would make sense to use the camera to distinguish them - maybe help you pick them up, but not run the bot entirely. As a subsystem. Maybe half the objects are worth 5 points apiece, and the other half are worth like 1 point. Anyone can fumble around and get a piece, but only the teams with the CMUcam could reliably pick up the valuable pieces. (However, I must say I also hate the CMUcam. Six weeks is not long enough to build a robot, then code a complex camera with hardware limits in mind, all with students who have probably never written 500 lines of valid C code in total. Just me, but it seems a little rough.)

Coding the CMU cam isn’t actually all that tough. It’s coding the CMU cam on a FIRST Robot Controller, and having it deal with all the chaos (and lighting conditions) on a FIRST field, that’s tough. If you have a PC running Linux, on the other hand, it’s simple to program. I think it would be cool to have a competition that in some way promotes automated systems under human control. For example, the human points a boom on the robot at an object, and the robot automatically retrieves it.
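
For anyone curious, here’s roughly what the PC side looks like, to show what I mean about it being simple. This is only a sketch, assuming a CMUcam1 on /dev/ttyS0 at its default 115200 baud; the RGB bounds in the TC command are made-up numbers you’d tune for your actual target.

```c
/* Minimal sketch: driving a CMUcam1 from a Linux PC over a serial
 * port.  Assumes the camera is on /dev/ttyS0 at 115200 baud; the
 * TC color bounds below are illustrative, not real target values. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                /* 8N1, no echo, no line editing */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    /* Track a color: TC Rmin Rmax Gmin Gmax Bmin Bmax */
    const char *cmd = "TC 200 255 0 80 0 80\r";   /* roughly "red" */
    write(fd, cmd, strlen(cmd));

    /* The camera then streams "M mx my x1 y1 x2 y2 pixels confidence"
     * packets; read a line at a time and pull out the centroid. */
    char line[128];
    size_t n = 0;
    char c;
    while (read(fd, &c, 1) == 1) {
        if (c == '\r') {
            line[n] = '\0';
            int mx, my, conf;
            if (sscanf(line, "M %d %d %*d %*d %*d %*d %*d %d",
                       &mx, &my, &conf) == 3)
                printf("centroid (%d,%d) confidence %d\n", mx, my, conf);
            n = 0;
        } else if (n < sizeof line - 1) {
            line[n++] = c;
        }
    }
    close(fd);
    return 0;
}
```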

This is actually something that would be very, very interesting for FIRST, and could easily be implemented in a game. For example, in a thread that’s popular right now, there’s discussion about how to pick up a baton. For a team that wanted to invest the time and effort, a multi-axis grabber with the CMUcam to align it would be the best option. Simply get the grabber (linked in position to the CMUcam) close to the baton, and the CMUcam could align so as to allow baton capture in any orientation. A real-world problem/challenge that could be, or maybe already is, integrated into FIRST.
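
To make the alignment idea concrete, here’s the sort of loop I have in mind: nudge the pan servo in proportion to how far the tracked blob’s centroid sits from the image center. It’s purely a sketch - set_pan_servo() is a hypothetical stand-in for whatever your servo code actually provides, and the gain and deadband are guesses you’d tune on the robot.

```c
/* Hypothetical alignment loop: center a tracked blob by driving the
 * pan servo proportionally to the centroid error.  set_pan_servo()
 * stands in for the real servo-output code; KP and DEADBAND are
 * placeholder tuning values. */
#define IMG_CENTER_X 40       /* the CMUcam1 image is ~80 pixels wide */
#define KP           0.5      /* proportional gain; flip its sign if
                                 the servo moves the wrong way */
#define DEADBAND     2        /* ignore errors this small, in pixels */

void set_pan_servo(int value);    /* assumed helper, takes 0..255 */

static int pan = 128;             /* start at the servo midpoint */

void align_to_blob(int mx)        /* mx = centroid x from an M packet */
{
    int error = mx - IMG_CENTER_X;
    if (error > DEADBAND || error < -DEADBAND) {
        pan += (int)(KP * error);
        if (pan < 0)   pan = 0;   /* clamp to the servo's range */
        if (pan > 255) pan = 255;
        set_pan_servo(pan);
    }
}
```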

I hope we get another one, because our team is planning to give it depth perception with two cameras.

Not to say that this is impossible, but how will you deal with the different focal lengths of the two cameras? (If the target is off-centre, one camera’s image will look substantially different than the other’s.) Also, how will you distinguish two similarly-coloured objects side by side? Which camera will point to which object?

Out of curiosity, why would you need that advanced depth perception?

Personally, I think reading the tilt servo readout is sufficient. If you’d like to get more advanced you could just convert the readout to feet/meters/etc.
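
Here’s a rough sketch of what converting the readout could look like, assuming the camera sits at a known height and the target lies on the floor. The calibration constants are invented; a real robot would measure its own counts-per-degree and level position.

```c
/* Sketch of the tilt-to-distance idea: with the camera at a known
 * height and the target on the floor, the tilt angle alone gives
 * range.  The servo calibration constants here are invented. */
#include <math.h>
#include <stdio.h>

#define CAMERA_HEIGHT_FT   4.0    /* lens height above the floor */
#define TICKS_PER_DEGREE   1.5    /* assumed servo calibration */
#define LEVEL_TICKS        128    /* servo reading when horizontal */

double distance_from_tilt(int tilt_ticks)
{
    /* angle below horizontal, converted to radians */
    double angle = (LEVEL_TICKS - tilt_ticks) / TICKS_PER_DEGREE
                   * M_PI / 180.0;
    if (angle <= 0.0)
        return -1.0;              /* camera level or aimed up: no fix */
    return CAMERA_HEIGHT_FT / tan(angle);
}

int main(void)
{
    printf("tilt 100 -> %.1f ft\n", distance_from_tilt(100));
    return 0;
}
```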

I would not want the CMUcam next year. It is, logistically, a failure. The theoretical power of that kind of sensor is great, but it just doesn’t work in real life… under real lighting conditions.

[quote=russell]Coding the CMU cam isn’t actually all that tough. It’s coding the CMU cam on a FIRST Robot Controller, and having it deal with all the chaos (and lighting conditions) on a FIRST field, that’s tough.[/quote]

I for one am hoping to see a repeat of the CMU Cam in 2006. It is a shame that it was so wildly underutilized last year, since it does so much - much more than a camera, at least. This is an opportunity for some really powerful programming, and not only for autonomous. And, lighting conditions are not as critical as you might believe.

There’s a great series of five articles on how to do robot camera vision in the July through November issues of SERVO magazine, explaining most of the significant details. Due to copyright issues, I can’t provide copies of the articles, sorry.

Don

Mike, that’s a very good analysis. I also don’t want it, for that reason. However, I would like to have it under different game conditions - say, searching for one unmistakable color, like a big white baton. Simply put, the camera would be used only for lining something up, not for controlling an entire robot across rather large distances - something the variations in lighting would not affect as much. Whatever the most robust color is, FIRST needs us to look for that, and that only.

Since I’ve been designing my robot for the Trinity Fire Fighting Robot Competition, I’ve been addicted to infrared stuff. I really think that a thermopile array such as the TPA81 may provide the power of the CMUCam with the flexibility of working in the real world.
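
For anyone who hasn’t played with the TPA81, here’s a minimal sketch of reading it from a Linux box over I2C. The 7-bit address 0x68 and the register map (1 = ambient, 2 through 9 = the eight pixels, all in degrees C) are from my reading of the Devantech datasheet, so verify them against your own unit.

```c
/* Minimal sketch: reading the TPA81 thermopile array over I2C on a
 * Linux host.  Assumptions: the sensor is on bus /dev/i2c-0 at the
 * part's default 7-bit address 0x68, and registers 1..9 hold the
 * ambient temperature plus the eight pixel temperatures in degrees C
 * (per my reading of the Devantech datasheet - check yours). */
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/i2c-0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, I2C_SLAVE, 0x68) < 0) { perror("ioctl"); return 1; }

    /* Read registers one at a time: write the register number,
     * then read back its one-byte value. */
    for (unsigned char reg = 1; reg <= 9; reg++) {
        unsigned char val;
        if (write(fd, &reg, 1) != 1 || read(fd, &val, 1) != 1) {
            perror("i2c transfer");
            return 1;
        }
        if (reg == 1)
            printf("ambient: %d C\npixels: ", val);
        else
            printf("%d ", val);
    }
    printf("C\n");

    close(fd);
    return 0;
}
```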

That would create a very dynamic game, much like the real world. Heating the game objects (pretty simple, actually: a big hair dryer blowing through pipes the batons are set into) would let you easily lock onto them and pick them up. Then, if they were dropped, they would cool down and become indistinguishable from the others - another challenge. And robots could warm them to a specific temperature to mark them, too, though that wouldn’t be too useful.

Shh! Don’t go around telling people that we’re using batons next year!

:wink: