For more information on how it works and what it does, visit http://cr4.globalspec.com/blog/29/Robotics-Team-250-The-Dynamos-Blog. The write-up is posted there now. From there you can learn more about it and get in touch with us about the possibility of beta code. Now that it is up, feel free to read the other blogs our team has posted, or comment here and we can discuss it further.
Also, I figured I would take what my teammate jt250 said below and put it in the first post so it is easier to find when visiting the thread.
I will try to keep this post up to date with pertinent information.
That's awesome… can you give a brief explanation of how you did it? Did you just have to keep switching the VW back and forth to get the location of each target?
A more detailed explanation will be on the blog once one of our mentors has read it over, but yes, basically it works in three steps using the VW command and switching back and forth. The current version will not work properly with more than two lights. There was a more robust version, but it had major trade-offs in time and post-processing, and since the current version can handle more than what would crop up in competition, we chose to stick with it.
It discusses a little bit more than the camera, just to let you know. Anyways, for now here are the basic steps for how it works:
First, put the camera in Polled Mode (the command is “PM 1”). This means that for every Track Color command sent, only one T packet will come back. This lets you write back to the camera without it flooding you out with data. (This may not actually be necessary, but it seemed to work more reliably for us when we did this.)
Now cycle through this loop:
1. Using the regular camera window, track onto a target. A T packet will be sent.
2. Based on the centroid coordinates in the T packet (labeled .mx and .my in Mr. Watson’s code), draw a virtual window with the following parameters: 1, 1, centroid_x, 239. Then resend the Track Color command.
3. Now draw a virtual window from: centroid_x, 1, 159, 239. Resend the Track Color command.
4. Set the virtual window back to the full view: 1, 1, 159, 239 and go back to step 1.
Effectively, what this does is split the window in half based on the center of the initial blob received. To make it more robust, you might want to have it only do the splitting if the confidence is below a certain value. When we’ve cleaned it up and made it work with the dashboard, we will post some more.
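To make the sequence concrete, here is a rough sketch of that loop in plain C. The send_camera_cmd() / read_t_packet() helpers and the t_packet struct are hypothetical stand-ins for whatever serial routines your code already has; in Kevin's 2.1 code, Virtual_Window() and the T packet's .mx/.my fields play those roles, so check camera.c/.h for the real calls.

#include <stdio.h>

typedef struct {
    unsigned char mx;          /* blob centroid x (1-159) */
    unsigned char my;          /* blob centroid y (1-239) */
    unsigned char confidence;
} t_packet;

extern void send_camera_cmd(const char *cmd);   /* hypothetical serial write  */
extern t_packet read_t_packet(void);            /* hypothetical T packet read */

void track_two_lights(t_packet *left, t_packet *right)
{
    t_packet whole;
    char cmd[32];

    send_camera_cmd("PM 1\r");           /* Polled Mode: one T packet per Track Color
                                            (normally sent once at startup) */

    /* Step 1: track over the full window to get the combined centroid. */
    send_camera_cmd("VW 1 1 159 239\r");
    send_camera_cmd("TC\r");
    whole = read_t_packet();

    /* Step 2: virtual window covering everything left of that centroid. */
    sprintf(cmd, "VW 1 1 %d 239\r", whole.mx);
    send_camera_cmd(cmd);
    send_camera_cmd("TC\r");
    *left = read_t_packet();

    /* Step 3: virtual window covering everything right of the centroid. */
    sprintf(cmd, "VW %d 1 159 239\r", whole.mx);
    send_camera_cmd(cmd);
    send_camera_cmd("TC\r");
    *right = read_t_packet();

    /* Step 4: restore the full window and go back to step 1. */
    send_camera_cmd("VW 1 1 159 239\r");
}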
Excellent! I was hoping someone would do this. Bonus points for sharing too.
Just FYI, the VW (virtual window) function is already included in the 2.1 code. See Virtual_Window( ) in camera.c/.h. More information about this command can be found in the CMUcam2 command dictionary in the 2.1 zip file, or at http://kevin.org/frc.
Edit: If you mounted the camera as I suggested, the image (and coordinates) will be upside down. See Q26 of the camera FAQ for details.
Seems like a pretty good idea. But there are a few problems that I can see so far:
In order to get angles to each target, you would need to rotate the servos around to one target, then to the other. How fast would this be? (This is my first year in FIRST, and I have no idea how anything really works.) Maybe there’s a way to get the angles out of the pixel count and current position? If that were possible, we could use trig to get a position fix relative to the Rack, and even be able to go for the columns without the targets above them, which will be harder to locate.
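Something like the sketch below is what I had in mind for getting an angle out of the pixel count plus the current servo position. The image width, field-of-view, and servo-gain numbers are just my guesses, not measured values:

#define IMAGE_WIDTH_PIXELS   159.0   /* per the VW coordinates used above */
#define CAMERA_H_FOV_DEG      48.0   /* assumed horizontal field of view  */
#define PAN_DEG_PER_PWM        0.35  /* assumed servo gain: measure this  */
#define PAN_CENTER_PWM       124     /* assumed "straight ahead" PWM      */

/* Approximate bearing to the target relative to the robot, in degrees.
   centroid_x is .mx from the T packet, pan_pwm is the current pan value. */
double target_bearing_deg(unsigned char centroid_x, unsigned char pan_pwm)
{
    double pixel_offset = (double)centroid_x - IMAGE_WIDTH_PIXELS / 2.0;
    double offset_deg   = pixel_offset * (CAMERA_H_FOV_DEG / IMAGE_WIDTH_PIXELS);
    double pan_deg      = ((double)pan_pwm - PAN_CENTER_PWM) * PAN_DEG_PER_PWM;

    return pan_deg + offset_deg;     /* servo angle plus in-image offset */
}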
How would this be affected by the movement of the bot and the servos? It seems like putting the camera in polled mode would interfere with that.
I have sent an update email to everyone who posted. However, there was an issue with one of the email addresses, so if you did not get the message please post another comment. Sorry for the delay.
In response to jpaupore:
I am a little confused about your first question. I believe you are asking whether the relative tilt angle to each of the two lights can be directly inferred from the distance between them in pixels and how far off center they are. That’s a good question; as the code stands now, it does not actually move or orient a robot toward the lights yet, so I have not even tried to figure out whether this is possible using just the pixel data.
At least for us last year, it was impossible to use the camera data as a form of feedback as we tried to turn to face the light. Instead we stored how far we needed to turn and then used a feedback loop involving the gyro to turn to that angle. After that the camera could be re-checked to reaffirm that we were in fact aligned, but during the actual movement it was too hard to synchronize the movement of the tilt servo and the robot.
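For what it's worth, the gyro turn itself was conceptually just a loop like the sketch below. The helper names, gain, and deadband here are made-up placeholders, not our actual code:

extern double get_gyro_heading_deg(void);       /* hypothetical gyro read  */
extern void   set_drive(int left, int right);   /* hypothetical tank drive */

/* Call every loop; returns 1 once the robot faces the stored camera angle. */
int turn_to_stored_angle(double target_deg)
{
    const double kP       = 2.0;   /* assumed proportional gain       */
    const double deadband = 2.0;   /* degrees of error we will accept */
    double error;
    int turn;

    error = target_deg - get_gyro_heading_deg();

    if (error > -deadband && error < deadband) {
        set_drive(0, 0);           /* close enough: stop and recheck the camera */
        return 1;
    }

    turn = (int)(kP * error);
    if (turn >  60) turn =  60;    /* clamp so the robot doesn't whip around */
    if (turn < -60) turn = -60;

    set_drive(turn, -turn);        /* rotate in place toward the stored angle */
    return 0;
}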
Thanks to Mr. Watson and everyone else who has expressed interest in this work.
Wow… Congrats! I was thinking that it would be possible to do something like this when I saw Kevin’s new function.
Actually, I remember having a problem two years ago where it wasn’t possible to talk to the camera while it was sending data in tracking mode (I was trying to command the servos attached to the camera, but it wouldn’t listen to me)… how did you get around this limitation? Or was it not a problem?
EDIT: Ah, I saw your answer to this at http://chiefdelphi.com/forums/showthread.php?p=566552 … I guess it was the Poll Mode that was causing this behavior.
As jpaupore said in his comment, it might be hard to get angles to each target; however, there should still be a way of getting an angle from a pixel position.
Anyway, I’m thinking that the only reason to track two targets at once is to go for the center position, so it shouldn’t matter as long as you are aiming in between the two. Otherwise, you would just choose one to begin with and point toward it.
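In code, that "aim in between the two" would boil down to steering on the midpoint of the two centroids, something like this rough sketch (the .mx values are the two virtual-window centroids; the helper itself is hypothetical):

/* Signed steering error in pixels: negative means the midpoint of the two
   lights is left of the image center, positive means right. */
int midpoint_error_pixels(unsigned char left_mx, unsigned char right_mx)
{
    int midpoint = ((int)left_mx + (int)right_mx) / 2;
    return midpoint - 80;   /* 80 is roughly the center of a 1-159 wide image */
}

You could feed that error into whatever steering loop you already have.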
Thanks! We spent a while working on it, especially before we sent it to a lot of teams. We started from the update prior to the one where Mr. Watson included the Virtual_Window function, but later changed ours to use his latest before we sent it out.
This was my logic as well. I think that it would be best to use the two lights to find the closest representation of the center. This was much more difficult before we could see two distinct lights. However, to go to a side, you can use the virtual window to decide which side you want to track first, until you get close enough that the other light goes out of view.
We recently got tracking to the center working, and soon we will be working on tracking to either the left or the right side.
Congrats on conquering this code! You are displaying Gracious Professionalism with your attitude of sharing. I too would be interested in this code.
Here is my email: [email protected]
I do have a question, though. I am concerned that the field of view on the camera lens is pretty narrow. I calculate that with a fixed camera you will lose sight of the 2 lights about 10 ft from the goal. Any thoughts on how to expand the view of the camera? At this point we would need to track using other devices over a 10 ft range, which doesn’t leave much room for error.
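For reference, here is the back-of-the-envelope geometry behind that estimate; the field of view and light spacing are my own assumptions, so plug in your measured numbers:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double fov_deg       = 48.0;   /* assumed horizontal field of view     */
    const double light_span_ft = 10.0;   /* assumed spacing between the 2 lights */

    /* Visible width at distance d is 2 * d * tan(FOV/2); both lights fit
       in frame only when that width is at least the light spacing. */
    double fov_rad      = fov_deg * 3.14159265358979 / 180.0;
    double min_distance = light_span_ft / (2.0 * tan(fov_rad / 2.0));

    printf("Both lights stay in view only beyond %.1f ft\n", min_distance);
    return 0;
}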
Thanks for all the info so far on tracking multiple targets. We’re just getting our camera online using Mr. Watson’s Bells and Whistles code (thanks, Kevin). If I could, I’d like to have a peek at the virtual window code that you sent to other teams and decide whether we have the time and ability to pursue such an intelligent solution. Can you drop the code to me via email ([email protected])? Also, I have noticed that in single-target mode the camera seeks and locks onto the target nicely. But as I look down the direction the camera is pointing, it seems to be aimed slightly to the left of the target. Any ideas? Do I need to perform some calibration of the camera to get it to aim spot on?
Thanks again for your help
My idea with trying to get the angles to the two lights would be that then, you could use trigonometry to get the exact angle to the one in the middle. But I guess your solution is better - the angle to the lights is far from an exact measurement, so just an approximation should get the robot to the general area. My question is, with the rack swaying and only an imprecise measurement, can we determine where the ends of the spider legs are and how to put a tube over one? Can the camera “see” the plates on the ends of the legs if we find the right parameters?
I would imagine that it would be rather difficult to locate and track one of the spider legs, due to its reflective nature. Nearly every time you looked at it, the coloring/shading would be different. However, it may be more plausible than I think. I’m not too well versed on camera stuff. XD
You have no idea how long our team has been straining to try and write something similar to what you have already coded. Please, if it is possible send over the code, it would be appreciated greatly. Thanks again from Team 369!
You can either PM me on the boards or here is my email: [email protected]
Our team has been trying to do something similar, but we have been wrangling with sending camera commands and such. Could we have a copy too? Send it to [email protected], please.