CMUCam Next Year?
I was talking to some members of our team, and we are discussing programming an autonomous mode using the CMUCam. The only thing that's come up is whether it's going to be used next year or not. So who thinks that it will be used next year?
Thanks nuke |
Re: CMUCam Next Year?
Unfortunately, I think your answer is yes. I made a poll a few weeks ago and Dave voted "only" for the CMU cam. Now, he is a slippery one and could have foreseen that his selection would be scrutinized, hence did it to throw us off. But I don't think so. Also, this product had a lot of thought and investment put into it from FIRST for 2005, so I doubt they'd abandon it already.
I don't think it's a good tool out in the open on randomly lit fields. But it might work if the new game controlled the background you searched for items on, and maybe on a smaller scale. Example: if we end up searching for colored PVC pipe lying on a black cloth, your mechanism could elevate the camera above the items, looking down on the black background, and eliminate the environmental noise that screwed those cams up. |
Re: CMUCam Next Year?
I hope to never see that awful thing again. In the event that we do see it again, things had better be a lot more organized.
|
Re: CMUCam Next Year?
Let's look at the history of FIRST's "forced technology" additions to the kit.

2002 - Infrared detection of goals: failure, no one really did it.
2003 - Infrared detection of boxes: failure, no one really did it.
2004 - Infrared beacons at ball tees: failure; backscatter and other issues made this totally unreliable.
2005 - Camera to detect vision tetras: failure; only a few teams ever managed to make this work.

I hope the powers at FIRST will learn from these lessons and NOT repeat past mistakes. If they do want us to develop new technology, why keep it a secret until January? If we are to use the camera again in 2006, why not tell us now? If FIRST did this, the likelihood of our success would be much higher. |
Re: CMUCam Next Year?
I'm not familiar with any sort of autonomous-type stuff for 2002. I was under the impression that autonomous mode didn't start until 2003. (It was before my time.)
If I remember my reading correctly, there was retro-reflective tape on the 2003 HP bins, which the robots could spot and then act on accordingly. The only problem was that autonomous wound up being let's-see-who-can-get-to-the-ramp-first. Not so much a sensor issue as a game design issue. Infrared had one upside nobody realized until afterwards (I recall it being said by cbolin of 342): if you got close, it worked much better. All you had to do was get in the neighborhood (memory says five feet or so) before actually caring about the sensors. Now, my one real gripe with the CMUcam (aside from the technical difficulties that everyone seemed to have) is that it seemed to kill the visual liveliness of the field. The 2003 and 2004 fields had stuff on them, big things that made the fields look distinctive... and I just didn't feel that from the Triple Play field. |
Re: CMUCam Next Year?
I wouldn't mind it too much... as long as the not-so-intelligent members don't touch it this time and burn it out.
|
Re: CMUCam Next Year?
An interesting application for the CMU cam would be object identification, more than actual navigation. For example, if all the game objects were placed over the operator stations (I know, dangerous), in two colors, it would make sense to use the camera to distinguish them, and maybe help you pick them up, but not to run the bot entirely. As a subsystem. Maybe half the objects are worth 5 points apiece, and the other half are worth 1 point. Anyone can fumble around and get a piece, but only the teams with the CMUcam could reliably pick up the valuable pieces. (However, I must say I also hate the CMUcam. Six weeks is not long enough to build a robot, then code a complex camera with hardware limits in mind, all with students who have probably never written a total of 500 lines of valid C code. Just my opinion, but it seems a little rough.)
|
Re: CMUCam Next Year?
Coding the CMU cam isn't actually all that tough. It's coding the CMU cam on a FIRST Robot Controller, and having it deal with all the chaos (and lighting conditions) on a FIRST field, that's tough. If you have a PC running Linux, on the other hand, it's simple to program. I think it would be cool to have a competition that in some way promotes automated systems under human control. For example, the human points a boom on the robot at an object, and the robot automatically retrieves it.
|
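Whether on a PC or a robot controller, talking to the camera usually starts with parsing its serial tracking packets. A minimal sketch in C, assuming the CMUcam1-style "M" packet layout (centroid, bounding box, pixel count, confidence) as an ASCII line; verify the field order against your firmware's manual before relying on it:

```c
#include <stdio.h>

/* Assumed CMUcam1 "track color" packet, e.g. "M 45 60 30 50 60 70 35 180".
   Struct and function names here are illustrative, not any kit API. */
typedef struct {
    int mx, my;          /* centroid of the tracked color blob */
    int x1, y1, x2, y2;  /* bounding box corners */
    int pixels;          /* number of tracked pixels */
    int confidence;      /* 0-255 tracking confidence */
} TrackPacket;

/* Returns 1 on success, 0 if the line is not a well-formed M packet. */
int parse_track_packet(const char *line, TrackPacket *tp)
{
    return sscanf(line, "M %d %d %d %d %d %d %d %d",
                  &tp->mx, &tp->my, &tp->x1, &tp->y1,
                  &tp->x2, &tp->y2, &tp->pixels, &tp->confidence) == 8;
}
```

Once the packet is in a struct, the centroid and confidence fields are all a simple follow-the-blob loop needs.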
Re: CMUCam Next Year?
I hope we get another one, because our team is planning to give it depth perception with two cameras.
|
Re: CMUCam Next Year?
Out of curiosity, why would you need that advanced depth perception?
Personally, I think reading the tilt servo readout is sufficient. If you'd like to get more advanced, you could just convert the readout to feet/meters/etc. I would not want the CMUcam next year. It is, logistically, a failure. The theoretical power of that kind of sensor is great, but it just doesn't work in real life... under real lighting conditions. |
Re: CMUCam Next Year?
Quote:
Coding the CMU cam isn't actually all that tough. It's coding the CMU cam on a FIRST Robot Controller, and having it deal with all the chaos (and lighting conditions) on a FIRST field, that's tough.
I for one am hoping to see a repeat of the CMU Cam in 2006. It is a shame that it was so wildly underutilized last year, since it does so much - much more than a camera, at least. This is an opportunity for some really powerful programming, and not only for autonomous. And, lighting conditions are not as critical as you might believe. There's a great series of five articles on how to do robot camera vision in the July through November issues of SERVO magazine, explaining most of the significant details. Due to copyright issues, I can't provide copies of the articles, sorry. Don |
Re: CMUCam Next Year?
Since I've been designing my robot for the Trinity Fire Fighting Robot Competition, I've been addicted to infrared stuff. I really think that a thermopile array such as the TPA81 may provide the power of the CMUCam with the flexibility of working in the real world.
|
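The TPA81 mentioned above reports a single row of eight temperature pixels. One hedged sketch of reducing that to a bearing: pick the hottest pixel, and only trust it if it stands out from ambient. The thresholds and the bus details are assumptions; the device's datasheet has the real register map.

```c
/* Illustrative sketch: given 8 thermopile pixel temperatures and the
   ambient reading, return the index (0..7, left to right) of a heat
   source, or -1 if nothing convincingly exceeds ambient. */
int hottest_pixel(const unsigned char temps[8], unsigned char ambient,
                  unsigned char min_delta)
{
    int best = -1;
    unsigned char best_temp = 0;

    for (int i = 0; i < 8; i++) {
        if (temps[i] > best_temp) {
            best_temp = temps[i];
            best = i;
        }
    }
    /* Reject targets that barely exceed ambient: probably noise,
       or a referee rather than a deliberately heated game object. */
    if (best >= 0 && best_temp >= ambient + min_delta)
        return best;
    return -1;
}
```

The `min_delta` guard is exactly the "heat or cool the game objects well away from body temperature" idea discussed later in the thread.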
Re: CMUCam Next Year?
From what I have gathered (our team gave up on the CMUcam at kickoff), the CMUcam was very easy to damage, so if it does come back, FIRST had better make it more robust. I personally like the IR sensor idea, but it might be a bit unfair, since the sensors work better where there's a bigger temperature difference, so they'd work better at Canadian regionals than, say, Texas. It would be funny, albeit dangerous, if the IR sensors were too sensitive and picked up an FP's or judge's computer/projector as a scoring object and proceeded to attempt to "score" with it.
|
Re: CMUCam Next Year?
That is a valid concern, as referees are heat-emitting objects, so the game object would have to be heated to a high temperature, say 110 degrees, or cooled, like 40 degrees. Probably cooling would be better. In this way you would start eliminating the margin for error. But teams still wore green with the CMUcam this year. MOE still has all its members, from what I know... :D
|
Re: CMUCam Next Year?
Lol, that's because no one actually used the camera. I mean, not literally no one, but few enough that it wasn't an issue.
|
Re: CMUCam Next Year?
I haven't entirely worked out the kinks involved with depth perception, but I would get the cameras to focus on the same object (dunno how yet) and then use the directions they are pointing to determine how far away the baton, I mean object, is.
|
Re: CMUCam Next Year?
Quote:
For example: mount the camera on a horizontal arm a foot long, then quickly move the camera from one end to the other - you keep the same object in view, but it 'moves' in the frame a bit. Measure that and you can calculate distance. OK, more brain food: does it have to move horizontally? How about vertically? Or, what if the camera is fixed and the robot moves? Finally, you can do it with one camera and without moving. If you know the size of the object, it's just a matter of 'measuring' it. If you don't know the size, put the camera up high and measure the down angle needed to center the object in the frame. Using simple trigonometry, you can calculate the distance to the object. There, that should get you started. Please let us know how it turns out. Don |
Re: CMUCam Next Year?
On using trig to find where an object is: I wrote an autonomous mode that does this, but it only works correctly when the object is on the ground. The point of two cameras is so that I can tell how far away it is in any direction, including up and down.
|
Re: CMUCam Next Year?
So basically, if we had higher-powered cameras this year that actually focused on what they were supposed to, then it would be possible to do the trig thing (of course, a higher-powered processor would be needed for a higher-powered cam).
I doubt it would be very accurate with the two cams. Sometimes even the human brain fails at this. Better would be to know the size of the object and find out how much it changes... somehow. There's gotta be another way to be more accurate. |
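The known-object-size alternative falls out of the pinhole camera model: apparent width shrinks inversely with distance, so range = focal length (in pixels) × real width ÷ apparent width (in pixels). A hedged sketch; the focal length value would have to be calibrated for the actual camera:

```c
/* Pinhole-model range estimate from the apparent size of an object of
   known real-world width. focal_px is the camera's focal length
   expressed in pixels - a calibration constant, assumed known here. */
double distance_from_size(double focal_px, double real_width,
                          double apparent_width_px)
{
    if (apparent_width_px <= 0.0)
        return -1.0;  /* object not visible / degenerate measurement */
    return focal_px * real_width / apparent_width_px;
}
```

For instance, with an assumed 100-pixel focal length, a half-meter object that spans 25 pixels would be estimated at 2 meters; the CMUcam's bounding-box width is the natural input for `apparent_width_px`.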
Re: CMUCam Next Year?
The problem with the camera was that the calibration software was junk. I think better calibration software would make all the difference. We also ran out of memory in the controller.
|
Re: CMUCam Next Year?
My limited experience as the team's lead teacher/sponsor was that we could get the camera to recognize color blobs and track them, but we lacked sufficient time with a working robot to program navigation. IMHO, it was the dual challenge of autonomy coupled with looking for a variety of colors on different planes as a closed-loop system for navigation that was the challenge, and not the camera.
Six weeks is all that Botball teams are going to have with their vision camera setup once again this season. Those robots are completely autonomous, so once again I think the specific application of the CMU system was the challenge. I echo what others have suggested: include the camera with applications that would give closed-loop feedback to robot systems less challenging than autonomous navigation. Get the robot or manipulator close via a driver, and then engage the software lock. Those types of grabbing, dropping, and color-sorting functions would increase robot performance reliability dramatically. APS |
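The "driver gets close, then engage the software lock" idea could be as simple as a proportional controller on the tracked blob's centroid. A sketch with an assumed image width, gain, and confidence threshold, all of which would need tuning on a real robot:

```c
/* Illustrative constants - not from any kit documentation. */
#define IMG_CENTER_X   80   /* assumed 160-pixel-wide image */
#define MIN_CONFIDENCE 50   /* below this, ignore the camera */
#define KP             2    /* proportional gain, tuned on the robot */

/* Returns a steering correction in arbitrary PWM-offset units:
   negative = turn left, positive = turn right, 0 = hold course.
   blob_x is the tracked blob's centroid column from the camera. */
int steer_toward_blob(int blob_x, int confidence)
{
    if (confidence < MIN_CONFIDENCE)
        return 0;   /* lost the target: don't chase noise */
    return KP * (blob_x - IMG_CENTER_X);
}
```

Because the driver has already done the hard navigation, this loop only has to null out a small centering error, which is exactly the regime where a bare proportional term tends to be enough.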
Re: CMUCam Next Year?
FIRST needs to put more emphasis on autonomous, and less emphasis on arbitrary sensors. Teams don't need "cool" additions to the kit to motivate them; they need a game which makes autonomous a worthwhile pursuit. If you don't do autonomous well, you should be in a whole lower class of competition. The games should be designed such that the top-ranking teams are teams with effective autonomous operation.
Oh, and I was very disappointed last year to find my CMU cam was D.O.A., as were seemingly a large percentage of them. Though looking back, I'm glad I didn't order a replacement, because it seems as though the varying lighting conditions made the whole thing fairly unworkable... |
Re: CMUCam Next Year?
I would be very, very surprised if the CMUCam was not in the kit. It was the most advanced sensory piece of equipment, and yet it wasn't used. Why? Because it wasn't worth enough strategically. Why spend all that effort for a bonus that will barely last 15 seconds past autonomous in the finals? If FIRST had said, "Oh, and by the way, if you cap the center goal in autonomous, you unconditionally own that goal for the rest of the match," we would have seen people cap it autonomously. Also, it doesn't make sense to say, "very few people could accomplish a difficult task, so let's give them an easier one."
You don't have to do "the task" for autonomous, so why make "the task" easier? For example, in 2004 our robot went forward, grabbed a goal, and brought it back for the ball dump. Does it accomplish "the task"? No. Is it awe-inspiring? No. Did the programmers-in-training learn something while accomplishing it? Yes. |
Re: CMUCam Next Year?
Quote:
:D This is, unequivocally, the BEST POST I've EVER SEEN on ChiefDelphi. EVER! |
Re: CMUCam Next Year?
Well, I have heard that autonomous mode will play a much more vital role in this year's game, which my team coach never fails to remind me. So, consequently, it would be advantageous to show off any robot's autonomous mode by using the CMUCam, and possibly have an upper hand in the score early on in the game to a larger extent than in previous years.
If it isn't going to be used, then I'll just be flabbergasted. Simple as that. |
Re: CMUCam Next Year?
Quote:
My only concern with the CMUCam is that it is color-based. Last year, even the practice field at the St. Louis regional had different lighting, preventing any testing or refinement of the autonomous mode. Also, FIRST only distributed the exposure calibration values to teams. That is NOT all the information needed to get that system working reliably! The teams (mine included) that used RGB windowing (much more stable) were out of luck. All that being said, I definitely want to see it come back this year. :D |
Re: CMUCam Next Year?
I haven't used it, but I've been reading the help and reading the code, and it all looks so confusing to try and program a robot to follow something. Yes, it's probably easy for you vet teams, but for a team only in it for a couple of years, and given that I'm the programmer and I just barely understand how our past code works, I don't want to use it. Our team (1075) used pre-programmed code in '03 and '04. Both years we had no problems. In '04 we were one of the few teams that could manage to knock the ball at the side off during autonomous.

With pre-programmed code, only the battery power comes into play, whereas with the cams and infrared sensors and gyros and compasses, you have lighting conditions, battery power, parts that can fail, and other sources of interference. We experimented with a compass, but simple metal objects would completely throw it off.

The only way they could MAKE us use the cam would be to actually check our code every time we go to the field; otherwise anyone could program the cam to do stuff and make the bot look like it's following the cam's instructions. So there's something, so you don't have to worry so much about it. All they're trying to do is give us a different way of doing autonomous. Honestly, I think it's probably a lot more reliable than the infrared sensors and line tracking, and maybe the gyros (I was watching the '05 video, and the gyro bot ended up about 45 degrees turned from where it was supposed to be).
|