Multiple Light Tracking with Distinct Light Boxes


MaHaGoN
21-01-2007, 19:24
Team 250 has been hard at work with the camera, and we have finished our multiple light tracking.

Here is a video of it in action.
http://video.google.com/videoplay?docid=4711623040823747005

For more information on how it works and what it does, visit http://cr4.globalspec.com/blog/29/Robotics-Team-250-The-Dynamos-Blog - the information is up there now. From there you can learn more about it and get in touch with us about the possibility of beta code. Now that it is up, feel free to read the other blogs our team has posted, or comment here and we can discuss it further.

Also, I figured I would take what my teammate jt250 said below and put it in the first post so it is easier to find when visiting the thread.

First, put the camera in Polled Mode (the command is "PM 1"). This means that for every Track Color command sent, only one T packet will come back, which allows you to write back to the camera without it flooding you out with data. (This may not actually be necessary, but it seemed to work more reliably for us when we did this.)

Now cycle through this loop:

1. Using the regular camera window, track onto a target. A T packet will be sent.
2. Based on the centroid coordinates in the T packet (labeled .mx and .my in Mr. Watson's code), draw a virtual window with the parameters 1, 1, centroid_x, 239, then resend the Track Color command.
3. Now draw a virtual window from centroid_x, 1, 159, 239 and resend the Track Color command.
4. Set the virtual window back to the full view (1, 1, 159, 239) and go back to step one.
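
To make the flow concrete, here is a minimal C sketch of that cycle. Virtual_Window() is the routine from Kevin's 2.1 camera code (the argument order here is assumed to mirror the VW serial command), while send_track_color() and wait_for_t_packet() are placeholder names standing in for however your own code sends the TC command and reads back the reply in polled mode:

/* Sketch of the three-window cycle above.  Assumes Kevin's camera.h
   for T_Packet_Data and Virtual_Window(); the two helpers below are
   hypothetical stand-ins for your own serial routines. */
void send_track_color(void);    /* sends "TC" in polled mode (hypothetical) */
void wait_for_t_packet(void);   /* blocks until a T packet arrives (hypothetical) */

void track_two_lights(void)
{
    unsigned char cx;

    /* Step 1: full window - find the combined blob */
    Virtual_Window(1, 1, 159, 239);
    send_track_color();
    wait_for_t_packet();
    cx = T_Packet_Data.mx;          /* centroid x of the combined blob */

    /* Step 2: left half only - isolates the left light */
    Virtual_Window(1, 1, cx, 239);
    send_track_color();
    wait_for_t_packet();            /* T_Packet_Data now describes the left light */

    /* Step 3: right half only - isolates the right light */
    Virtual_Window(cx, 1, 159, 239);
    send_track_color();
    wait_for_t_packet();            /* ...and now the right light */

    /* Step 4: restore the full view for the next pass */
    Virtual_Window(1, 1, 159, 239);
}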

I will try to keep this post up to date with pertinent information.

MaHaGoN

bear24rw
21-01-2007, 20:33
That's awesome... can you give a brief explanation of how you did it? Did you just have to keep switching the VW back and forth to get the location of each target?

jt250
21-01-2007, 20:48
A more detailed explanation will be on the blog once the post gets read over and checked by one of our mentors, but yes, basically it works in three steps using the VW command and switching back and forth. The current version will not work properly with more than two lights. There was a more robust version, but it had major trade-offs in terms of time and post-processing, and since the current version handles more than what would crop up in competition, we chose to stick with it.

jt250
22-01-2007, 14:48
The full blog entry is up and can be read here: http://cr4.globalspec.com/blogentry/1000/Team-250-Programming-CMUCam2

Just to let you know, it discusses a bit more than the camera. Anyway, for now here are the basic steps for how it works:

First, put the camera in Polled Mode (the command is "PM 1"). This means that for every Track Color command sent, only one T packet will come back, which allows you to write back to the camera without it flooding you out with data. (This may not actually be necessary, but it seemed to work more reliably for us when we did this.)

Now cycle through this loop:

1. Using the regular camera window, track onto a target. A T packet will be sent.
2. Based on the centroid coordinates in the T packet (labeled .mx and .my in Mr. Watson's code), draw a virtual window with the parameters 1, 1, centroid_x, 239, then resend the Track Color command.
3. Now draw a virtual window from centroid_x, 1, 159, 239 and resend the Track Color command.
4. Set the virtual window back to the full view (1, 1, 159, 239) and go back to step one.

Effectively, what this does is split the window in half based on the center of the initial blob received. To make it more robust, you might want to do the splitting only if the confidence is below a certain value. When we've cleaned it up and made it work with the dashboard, we will post some more.
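
That confidence gate is only a few lines. A sketch, assuming the T packet struct exposes the confidence byte as a .confidence field (CONFIDENCE_THRESHOLD is a made-up tuning constant):

/* Only split the window when the full-view blob looks like two
   merged lights (low confidence).  CONFIDENCE_THRESHOLD is a
   hypothetical tuning constant, and .confidence is assumed to
   hold the T packet confidence byte. */
#define CONFIDENCE_THRESHOLD 40

if (T_Packet_Data.confidence < CONFIDENCE_THRESHOLD)
{
    /* probably two lights merged into one blob:
       run the left-half / right-half passes (steps 2-3 above) */
}
else
{
    /* one clean light: use T_Packet_Data.mx / .my directly */
}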

Kevin Watson
22-01-2007, 17:00
Excellent! I was hoping someone would do this. Bonus points for sharing too.

Just FYI, the VW or virtual window function is already included in the 2.1 code; see Virtual_Window( ) in camera.c/.h. More information about this command can be found in the CMUcam2 command dictionary in the 2.1 zip file, or at http://kevin.org/frc.
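
Assuming the arguments follow the same x1, y1, x2, y2 order as the VW serial command (worth double-checking against the command dictionary), restricting tracking to the left half of the frame would look something like this:

/* Track only the left half of the frame; the argument order is
   assumed to mirror the VW serial command (x1, y1, x2, y2). */
Virtual_Window(1, 1, 80, 239);

/* ...and restore the full frame afterwards. */
Virtual_Window(1, 1, 159, 239);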

Edit: If you mounted the camera as I suggested, the image (and coordinates) will be upside down. See Q26 of the camera FAQ (http://kevin.org/frc/camera/) for details.
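
If you would rather flip the coordinates in software than remount the camera, a minimal sketch (assuming the 1-based ranges used in this thread, x: 1-159 and y: 1-239):

/* Flip T packet coordinates for an upside-down camera.  Assumes the
   1-based coordinate ranges used in this thread. */
unsigned char mx_flipped = 160 - T_Packet_Data.mx;
unsigned char my_flipped = 240 - T_Packet_Data.my;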

-Kevin

jpaupore
22-01-2007, 17:24
Seems like a pretty good idea. But there are a few problems that I can see so far:

In order to get angles to each target, you would need to rotate the servos around to one target, then to the other. How fast would this be? (This is my first year in FIRST, and I have no idea how anything really works.) Maybe there's a way to get the angles out of the pixel count and current position? If that were possible, we could use trig to get a position fix relative to the Rack, and even go for the columns without the targets above them, which will be harder to locate.
How would this be affected by the movement of the bot and the servos? It seems like putting the camera in polled mode would interfere with that.
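
For the angle idea, I'm picturing something like the sketch below: convert the pixel offset from image center into a bearing and add the current pan-servo angle. (The 49-degree horizontal field of view is a number I'm assuming, not something I've measured, so it would need calibrating.)

/* Rough bearing to a target from its pixel column plus the pan
   servo's current angle.  H_FOV_DEG is an ASSUMED lens figure;
   calibrate against the real camera. */
#define H_FOV_DEG  49.0
#define IMG_WIDTH  159.0
#define IMG_CENTER 80.0

double bearing_deg(unsigned char mx, double pan_servo_deg)
{
    double deg_per_pixel = H_FOV_DEG / IMG_WIDTH;  /* assumes a linear lens */
    return ((double)mx - IMG_CENTER) * deg_per_pixel + pan_servo_deg;
}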

jt250
25-01-2007, 22:34
I have sent an update email to everyone who posted. However, there was an issue with one of the email addresses, so if you did not get the message please post another comment. Sorry for the delay.

In response to jpaupore:


I am a little confused by your first question. I believe you are asking whether the relative tilt angle to each of the two lights can be directly inferred from the distance between them in pixels and how far off center they are. That's a good question; as the code stands now it does not actually move or orient a robot towards the lights, so I have not yet tried to figure out whether this is possible using just the pixel data.
At least for us last year, it was impossible to use the camera data as a form of feedback while we tried to turn to face the light. Instead, we stored how far we needed to turn and then used a feedback loop involving the gyro to turn to that angle. After that, the camera could be re-checked to confirm that we were in fact aligned, but during the actual movement it was too hard to synchronize the movement of the tilt servo and the robot.
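
For anyone curious, the store-then-turn idea is conceptually just the loop below (a rough sketch only, with proportional control; get_gyro_angle(), set_drive_turn(), and TURN_KP are placeholder names, not our actual code):

/* Turn-to-heading loop: the camera supplies target_deg once, then
   the gyro closes the loop.  The names below are hypothetical
   stand-ins for your own gyro and drivetrain code. */
#define TURN_KP 0.02

double get_gyro_angle(void);        /* heading in degrees (hypothetical) */
void   set_drive_turn(double pwr);  /* turn power, -1..1 (hypothetical) */

void turn_to(double target_deg)
{
    double error = target_deg - get_gyro_angle();

    while (error > 2.0 || error < -2.0)   /* ~2 degree deadband */
    {
        set_drive_turn(TURN_KP * error);  /* simple proportional turn */
        error = target_deg - get_gyro_angle();
    }
    set_drive_turn(0.0);                  /* stop, then re-check the camera */
}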


Thanks to Mr. Watson and everyone else who has expressed interest in this work.

ace123
29-01-2007, 01:52
Wow... Congrats! :D I was thinking it would be possible to do something like this when I saw Kevin's new function.

Actually, I remember having a problem two years ago where it wasn't possible to talk to the camera while it was sending data in tracking mode (I was trying to command the servos attached to the camera, but it wouldn't listen to me). How did you get around this limitation? Or was it not a problem?
EDIT: Ah, I saw your answer to this at http://chiefdelphi.com/forums/showthread.php?p=566552 ... I guess Polled Mode is what gets around this behavior.

As jpaupore said in his comment, it might be hard to get angles to each target; however, there should still be a way of getting an angle from a pixel position.

Anyway, I'm thinking that the only reason to track two targets at once is to go for the center position, so it shouldn't matter as long as you are aiming between the two. Otherwise, you would just choose one to begin with and point toward it.
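
If you do go for the middle, the aim point falls out of the two half-window passes for free; something like this (left_mx and right_mx being the .mx values saved from those two passes):

/* Aim between the two lights by averaging the saved centroids
   (left_mx / right_mx are hypothetical copies of .mx from the
   left-half and right-half passes). */
unsigned char aim_x = (unsigned char)(((int)left_mx + (int)right_mx) / 2);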

MaHaGoN
30-01-2007, 20:03
Wow... Congrats! :D I was thinking it would be possible to do something like this when I saw Kevin's new function.

Thanks! We spent a while working on it, especially before we sent it to a lot of teams. We used the update from before Watson included the Virtual_Window function, but later changed ours to use his latest version before we sent it out.

I'm thinking that the only reason to track two targets at once is to go for the center position, so it shouldn't matter as long as you are aiming between the two. Otherwise, you would just choose one to begin with and point toward it.
This was my logic as well. I think it would be best to use the two lights to find the closest representation of the center; this was much more difficult before we could see two distinct lights. To go to a side, however, you can use the virtual window to decide which side you want to track first, until you get close enough that the other light goes out of view.
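
Concretely, picking a side just means starting the cycle with a half-frame window instead of the full one. A sketch (WANT_LEFT is a made-up flag, and send_track_color() is a placeholder for however you issue the TC command):

/* Pick a side up front by limiting the very first tracking pass to
   one half of the frame.  WANT_LEFT is a hypothetical flag. */
if (WANT_LEFT)
    Virtual_Window(1, 1, 80, 239);    /* only the left light can match */
else
    Virtual_Window(80, 1, 159, 239);  /* only the right light can match */
send_track_color();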

We recently got tracking to the center working, and soon we will be working on tracking to either the left or the right side.

MaHaGoN

Ted155
31-01-2007, 22:20
Congrats on conquering this code. You are displaying Gracious Professionalism with your attitude of sharing. I too would be interested in this code.
Here is my email: theodorehall@sbcglobal.net

I do have a question, though: I'm concerned that the field of view of the camera lens is pretty narrow. I calculate that with a fixed camera you will lose sight of the two lights about 10 ft from the goal (the arithmetic I'm working from is sketched below). Any thoughts on how to expand the camera's view? At that point we would need to track using other devices over a 10 ft range, which doesn't leave much room for error.
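
For reference, the geometry is just: visible span at distance d is 2 * d * tan(FOV/2). The quick desktop check below assumes a 49-degree horizontal field of view, which is a guess on my part rather than a measured figure:

/* Quick desktop check of the field-of-view geometry.  The 49-degree
   horizontal FOV is an ASSUMED figure; substitute your own lens. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double pi      = 3.14159265358979;
    double fov_deg = 49.0;   /* assumed horizontal field of view */
    double d_ft    = 10.0;   /* distance from the goal */
    double span_ft = 2.0 * d_ft * tan((fov_deg / 2.0) * pi / 180.0);

    /* roughly 9.1 ft of field visible at 10 ft out; lights spread
       wider than that fall outside the frame */
    printf("visible span at %.0f ft: %.1f ft\n", d_ft, span_ft);
    return 0;
}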

Ted155

baclaskya
02-02-2007, 00:56
Thanks for all the info so far on tracking multiple targets. We're just getting our camera online using Mr. Watson's Bells and Whistles code (thanks, Kevin). If I could, I'd like to have a peek at the Virtual Window code that you sent to other teams and decide whether we have the time and ability to pursue such an intelligent solution. Can you drop the code to me via email (baclaskya@thompson.k12.co.us)?

Also, I have noticed that in single-target mode the camera seeks and locks onto the target nicely. But as I look down the direction the camera is pointing, it seems to be aimed slightly to the left of the target. Any ideas? Do I need to perform some calibration to get the camera to aim spot on?
Thanks again for your help.

jpaupore
05-02-2007, 17:23
My idea in trying to get the angles to the two lights was that you could then use trigonometry to get the exact angle to the one in the middle. But I guess your solution is better - the angle to the lights is far from an exact measurement, so an approximation should get the robot to the general area. My question is: with the rack swaying and only an imprecise measurement, can we determine where the ends of the spider legs are and how to put a tube over one? Can the camera "see" the plates on the ends of the legs if we find the right parameters?

Eclipse
09-02-2007, 17:23
I would imagine it would be rather difficult to locate and track one of the spider legs due to its reflective nature; nearly every time you looked at it, the coloring/shading would be different. However, it may be more plausible than I think. I'm not too well versed on camera stuff. XD

Ultima
10-02-2007, 14:58
You have no idea how long our team has been straining to write something similar to what you have already coded. Please, if it is possible, send over the code; it would be greatly appreciated. Thanks again from Team 369!
You can either PM me on the boards or here is my email: Durytskyy@yahoo.com

tmbg37
10-02-2007, 18:23
Our team has been trying to do something similar, but we have been wrangling with sending camera commands and such. Could we have a copy too? Send it to first@shlashdot.org, please.

DanDon
10-02-2007, 18:24
Is that first@slashdot? Or is it correct as is?

tmbg37
10-02-2007, 20:01
It is correct as it is; I'm not cool enough to have an email address at Slashdot. :P

gnirts
10-02-2007, 23:29
In case you missed my post on your team's blog, Team 1648 would love to be able to have a copy of this code!

Thanks in advance,
Robinson Levin
me@robinsonlevin.com

Gaj
13-02-2007, 14:08
I would very much like to see this code -- the demo looked amazing!

(Email: hydrargyri@gmail.com)

Ankush Dhar
15-02-2007, 11:53
It seems you had talked about setting up a CVS repository somewhere, but I cannot find it and wasn't able to get a copy of the code.

If you can, could you please provide me with a copy of that code? Thanks.

killzone721@gmail.com

Regards

Ankush Dhar

lkdjm
15-02-2007, 14:15
I already have this code and I have played with it a lot. Now I am interested in using it practically. How do you actually "choose a light" to go to and ignore the other? Also, when the camera sees two lights, it twitches back and forth between them. Is there a fix for that? And if I want to go "in between" the two lights, how would I do that? With the twitching, it doesn't seem possible. Thank you in advance for your help.
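
For the twitching, would something like a low-pass filter on the pan command work, so the alternating left/right lock-ons average out instead of jerking the servo? A sketch (ALPHA is a made-up tuning constant):

/* One-pole low-pass filter on the pan command.  ALPHA (0..1) is a
   hypothetical tuning constant; smaller is smoother but slower. */
#define ALPHA 0.2

static double pan_filtered = 127.0;   /* start at servo neutral */

void update_pan(double pan_raw)
{
    pan_filtered += ALPHA * (pan_raw - pan_filtered);
    /* drive the pan servo from pan_filtered, not pan_raw */
}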

Ankush Dhar
15-02-2007, 15:36
lkdjm, can you please send that copy to my email? It would be awesome if you did!

killzone721@gmail.com


Regards

Ankush Dhar

Chuck Merja
17-02-2007, 08:17
Hey, cool - and cool to you for sharing. We lost our programming mentor to a heavy workload, so we are trying to do autonomous in EasyC, but we're having trouble with the camera. Might you help us a bit? Thanks from Team 1696 out in the sticks (Montana - not exactly the hotbed of C). chuckm@3rivers.net and rcurtis3.3rivers.net

Ankush Dhar
17-02-2007, 13:00
Has anyone actually figured out how to pick the blob, either left or right?

Other than that, I wanted to ask one more question: does anyone know how to calculate the width of each blob?

Please help if anyone knows.

Regards

Ankush Dhar

gnirts
17-02-2007, 21:05
The blob with the lower y coordinate will be closer (or the other way around if your camera is upside down). But you might consider going to the one that is more in front of your robot.

width = T_Packet_Data.x2 - T_Packet_Data.x1;
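
And picking left vs. right is just a comparison of the two saved centroids (a_mx and b_mx being hypothetical copies of .mx from the two half-window passes):

/* Sort the two saved centroids to label the left and right lights. */
unsigned char left_mx  = (a_mx < b_mx) ? a_mx : b_mx;
unsigned char right_mx = (a_mx < b_mx) ? b_mx : a_mx;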

Good Luck,
Robinson Levin

MaHaGoN
17-02-2007, 21:47
I sent this e-mail out tonight to everyone that was on my mailing list.

Dear Everyone,
Thank you for expressing your interest in this camera code. We have been happy to help those who have requested it. As a result of snow delays and other complications, our team has changed direction and will no longer be pursuing the camera, so we can no longer continue developing the dual light code. If any of you are still interested in obtaining the beta code, I will be placing a link on the original Chief Delphi post shortly; it is stickied in the CMUcam sub-section of the Programming forum. Best of luck in your upcoming competitions, and we hope to see you at Nationals.

Thanks again, and good luck!
The Programmers of Team 250



1. Please, please, please read the readme and check over the code before you run it on your robot. There's nothing out of the ordinary going on here, but we cannot have foreseen your particular configuration, so please check over everything and make sure it's OK. (For example, PWM outputs are configured as in Mr. Watson's streamlined code, so there definitely is some potential non-neutral activity on every PWM. Please see user_routines.c if you need to comment this out; see the sketch after this list.)

2. I should clarify that this code does not yet move the pan/tilt servos. We have started working on that, but it is not yet completely debugged. Rather than waiting, I wanted to at least get this base code out.

3. If your camera is mounted upside down (see Kevin's post in the sticky on Chief Delphi), the GUI representations will be upside down. I think everything else should still work (I'm not totally sure of this).

I have attached a zip file with the uncompiled Python scripts and the camera code for the robot. The executable Python scripts were too large to attach. If you have any further questions, I will try to get back to you.
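
Regarding point 1 above, forcing outputs back to neutral is a one-line change in user_routines.c, something like the sketch below (127 is neutral in the IFI default code; which pwmXX variables to include depends entirely on your own robot's wiring):

/* Force unused PWM outputs back to neutral (127 in the IFI default
   code).  Adjust the list to match your own robot. */
pwm03 = pwm04 = pwm05 = pwm06 = 127;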

gnirts
17-02-2007, 23:23
Let us know if we can help with the situation, and thanks for posting it for all to enjoy.

Good luck,
Robinson Levin

neutrino15
18-02-2007, 11:12
Hello!
I am new to Chief Delphi and, well, new to FRC. Our team has been having trouble with the Virtual_Window command: whenever we send it, our camera seems to reinitialize, which makes it physically move back to its starting position... We must be doing something wrong; we just don't know what. We came up with an alternate method of tracking the rack without using the virtual window (using only a corner of the "blob" we see). This solution is messy at best, so if anybody could help, please either reply or email me at neutrino15[at]gmail[dot]c0m (the 0 is an o in com).

Thank You!
-Jordan

gnirts
18-02-2007, 13:43
We came up with an alternate method of tracking the rack without using the virtual window (using only a corner of the "blob" we see).

We use this; it works quite well. Just aim at either corner (x1 or x2) instead of the center (mx).
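
In code that is a one-identifier change from the usual centering logic; a sketch (TARGET_COLUMN is a made-up constant for the image column you want the edge held at):

/* Track the blob's edge instead of its centroid: derive the pan
   error from x1 (or x2) rather than mx.  TARGET_COLUMN is a
   hypothetical constant. */
int pan_error = (int)T_Packet_Data.x1 - TARGET_COLUMN;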

Robinson Levin

Draggy
18-02-2007, 22:45
Send it to:

anriel.tuiqest@gmail.com

Please =)

gnirts
19-02-2007, 10:46
It is attached to post #26 (http://www.chiefdelphi.com/forums/showpost.php?p=580474&postcount=26)
- Robinson Levin