Have you gotten your robot to go towards the vision tetra?

So, how many of you have gotten your robot to go towards the vision tetra? It would be nice if you were willing to share a few tweaks/techniques/anything that would help us all do the same. I bet half of us are still clueless about how to do this.
-Bharat

I got our kitbot camera stuff working great. The only problem is that I am still unsure how to go about putting it in the autonomous section. I will figure that out once I have our final robot to play with.

Here is what I did:

Once I got the camera to track the targets, I mounted it on the robot and connected the left motors to pwm12 and the right motors to pwm11. Then, with the robot on blocks, I tested it just to make sure it sort of worked. It did. Then I set it up in a carpeted room that had marginal lighting, lots of yellow walls, blue tables, and lights that apparently look green. So I set up a worklight to illuminate the red target and played around some more.

The code I was using is at school, so all I can tell you will have to be from memory. First I modified the code by creating an if statement directly below the one that does the camera driving in user_routines.c. It had all the same conditions as the one above it, plus a check that the camera was pointing down by a certain amount, and it set both pwms to 127. Then I modified the conditions of the statement that actually does the driving so it only drives if the camera angle is higher than that same angle. This would have worked OK if we had been using braking, but we were not, so I also put in another similar statement that made the robot slow down for the last few feet.

One thing I found is that you want to find a good drive speed (about 155 if you are using the kitbot) and then find a steering compensation value that works with it. Once you have that, if you decide to change the drive speed, try to keep the steering compensation proportional to the speed (if you double the speed, double the steering compensation, etc.). You may also want to play with a way to change what value the servos use as center. I had some simple way to do that, but I forget exactly what it was. I did all of this last week, and we have since disassembled the kitbot.
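Roughly, that logic could look something like the sketch below (the speeds, tilt thresholds, and which way the tilt pwm runs when the camera points down are all guesses to tune on your own robot; cam comes from the camera driver, and pwm12/pwm11 are the left/right drive outputs mentioned above, assuming equal values drive both sides forward):

extern cam_struct cam;        /* camera data from the camera driver */

#define DRIVE_SPEED 155       /* forward speed that worked on the kitbot */
#define SLOW_SPEED  140       /* reduced speed for the last few feet (a guess) */
#define STEER_COMP   30       /* steering bias -- keep this proportional to the speed */
#define TILT_STOP   100       /* tilt pwm at which we are basically on top of the target (a guess) */
#define TILT_SLOW   110       /* tilt pwm at which to start slowing down (a guess) */

void camera_drive(void)
{
     unsigned char speed;

     if(cam.tilt_servo <= TILT_STOP)     /* camera pointing down far enough: stop */
     {
          pwm11 = pwm12 = 127;
          return;
     }

     speed = (cam.tilt_servo <= TILT_SLOW) ? SLOW_SPEED : DRIVE_SPEED;

     if(cam.pan_servo < 122)             /* target off to one side (small deadband around 127) */
     {
          pwm12 = speed - STEER_COMP;    /* ease off the left side to turn toward it */
          pwm11 = speed;
     }
     else if(cam.pan_servo > 132)
     {
          pwm12 = speed;
          pwm11 = speed - STEER_COMP;    /* ease off the right side */
     }
     else
     {
          pwm12 = pwm11 = speed;         /* centered: drive straight */
     }
}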

Anyway, kind of long and nothing really brilliant, but I hope it helps.

There are a few things that can be helpful while tracking a color using camera_find_color().

I’ll give a quick rundown of how this works. The camera processor can send and receive messages through the serial port. If you look in the camera manual PDF, you will see a complete list of commands and their parameters. The one we are interested in is “TC” (Track Color). You send the camera this command followed by the min and max RGB values.

This is the format of the Track Color command:
TC [Rmin Rmax Gmin Gmax Bmin Bmax]\r
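For example, to track a strongly red blob you might send something like the following (these numbers are only an illustration; use the values from your own calibration):

TC 200 255 0 40 0 40\r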

Now, the camera processor takes over. Using the min and max RGB values (which you found in calibration), it looks for a color blob whose pixels fit into the RGB window you have specified. If it does not find the color, it will return a 0. If the camera does find your color, it will return what is called a ‘T packet’. This serial packet includes information about the color blob: its centroid coordinates, its bounding box (how big an area it covers), the number of pixels tracked, and a certainty rating (0-100%, how sure the camera is that this is YOUR blob).
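For reference, a raw T packet looks roughly like this (from memory, so double-check the camera manual):

T mx my x1 y1 x2 y2 pixels confidence

where (mx, my) is the centroid, (x1, y1) and (x2, y2) are the bounding box corners, pixels is the number of tracked pixels, and confidence is the certainty rating.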

Once the camera finds the blob, it goes into frame differencing mode. In this mode, the camera records many frames per second, in a very small resolution. It then can determine whether the color blob has moved over time. It will compute how far the blob has moved to either the left or right, and since the camera board has control of a pan servo, it will try to correct itself, so that the blob’s centroid is in the exact center of the frame.

Now, you may be asking what you can do with this information…

The camera driver sets up a structure from which you can grab different values. When camera_track_update( ) is called, it will take the last good T packet and throw the data into local variables. It also grabs the current servo angles (pwm values) and throws those into RC local variables. Now you have exactly what the camera is seeing in variables on the RC, which can be accessed by referencing the “cam” object. So to get the pan servo’s pwm value, all you need to do in your code is include the cam structure and then use that structure’s member ‘pan_servo’.

This is how you can do that:

extern cam_struct cam;    //include the cam structure in your own code (it is defined in the camera driver)
unsigned char TheServo;   //somewhere to hold the pan servo's pwm value

//set local reference to latest cam data
camera_track_update( );
TheServo = cam.pan_servo;

Now you know what angle the blob is at in relation to the front of the bot (from the servo angle). You can adjust the bot’s heading accordingly: if the blob is to the left of the bot, rotate left; if it is to the right, rotate right. When your bot is at 0° to the blob, you are safe to drive forward.

So, in very basic form, this is how to track a color in code.

int SPEED_SETTING = 155; //Set how fast you want the bot to drive/turn.
int centered = 0;        //use this to check if the bot is aligned with the blob.
int tracking = 0;        //use this to see if we got a good frame.

//set the camera to track your color, let's say green.
tracking = camera_find_color( GREEN );

//update camera info
camera_track_update( );

//See if bot is facing blob. If not, correct.

if(cam.pan_servo < 120) //I'm using a deadband of 7 either side of center (127).
{
     wheel_l = 255 - SPEED_SETTING;
     wheel_r = SPEED_SETTING;      // This turns bot to the left.
     centered = 0;
}
else if(cam.pan_servo > 134)
{
     wheel_l = SPEED_SETTING;
     wheel_r = 255 - SPEED_SETTING;     //This turns bot to right.
     centered = 0;
}
else centered = 1;

//Now check whether there's a good frame and the blob is centered in front of the bot. If so, drive to it.

if(tracking == 1 && centered==1)
{
     wheel_l = SPEED_SETTING;
     wheel_r = SPEED_SETTING; //Drive straight!
}

Now that will bring you into basic color tracking. The next logical thing to do would be to stop when you are close to a color. To do this, you may use a tilt servo, which does the same thing as the pan servo (it tries to center the color blob, meaning the tilt servo points directly at your color’s centroid). If you know the tilt servo’s angle, you can do some basic trig to find out how far away the color is.

http://www.team195.com/images/trig.gif

Another way you could find how far away the blob is, is to look at the bounding box and compare its values.
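As a very rough sketch of that idea (the bounding-box field names and the threshold here are guesses; check the cam structure in your camera driver and calibrate the width yourself):

int blob_width = cam.x2 - cam.x1;        //bounding box width in pixels (field names assumed)

if(tracking == 1 && blob_width > 40)     //40 is a made-up "close enough" value
{
     wheel_l = 127;
     wheel_r = 127;                      //blob fills enough of the frame -- stop
}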

If I get some good feedback, I will write a white paper explaining the camera more in depth and giving more example code.

Hope this helped,
Tom.

Wow! Consider this to be good feedback!
I have a couple of questions though. First of all, I am a newbie programmer, and there are a few things I don’t understand. As described above, I had our kitbot set up so it could drive to the target and stop. I think I like that program somewhat more than yours because the robot can make steering corrections on the move, whereas with yours it would stop to make steering corrections.

Basically, my question for you is: what do I have to do to get this code to execute in autonomous mode? Do I simply go to the autonomous section of user_routines_fast.c and put my code there (obviously I would have to modify it so it doesn’t use the joysticks)? Or do I have to put “camera_track_update( );” in that section followed by my code? Or do I have to take all of the camera stuff out of user_routines.c and transfer it to the autonomous section?

I know that is a lot of questions, but basically it all comes down to how do I use the camera in autonomous mode?

Anyway thanks in advance,

Russell

Edit: OK, I have looked at your code above some more and I think I am beginning to get it, but if I put that in the autonomous section, how will the robot get the color values, since they are set in user_routines.c? Does that section also execute in autonomous mode, or do I have to copy that part of the code over?

Great job Tom, I’m truly in awe. Sadly, my trig knowledge ends at sin = opp / hyp… which I learned from googling “beginner trigonometry” about 5 minutes after I read this thread. I’m gonna keep on re-reading this and my Google results; hopefully I’ll get it soon.

The trig for this is all right-triangle stuff. That is reasonably easy (though I don’t know how to do it in C, but it couldn’t be all that hard, could it?). The rule is SOH-CAH-TOA (pronounced “so cuh toe-uh”).
sine=opp/hyp
cosine=adj/hyp
tan=opp/adj

So basically you have a right triangle where one side (let’s say side a) is a line straight down from your camera to the floor (or to the level of the panel on a vision tetra), another side (side b) is the line along the floor from the bottom of that first line to the vision target, and the hypotenuse is the line from your camera to the vision target.

So you know that the tangent of the tilt angle of the camera (measured from straight down) is equal to the distance to the target divided by the height of the camera. Assuming that the tilt servo angle is linear with respect to the pwm value it returns, you should be able to compute that angle, and then the height of your camera times the tangent of that angle should equal the distance to the target. Of course the servo resolution is not great, but it should be OK for close range.
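In C that might look something like this (the camera height, the servo travel, and the assumption that pwm 127 means “looking level” are all made-up numbers; measure your own robot):

#include <math.h>

#define CAM_HEIGHT_IN   30.0              /* camera lens height above the target panel, in inches (a guess) */
#define DEG_PER_PWM     (180.0 / 255.0)   /* assumed servo travel of 180 degrees over the 0-255 pwm range   */
#define PI              3.14159265

/* distance along the floor to the target, given the tilt servo pwm value */
double distance_to_target(unsigned char tilt_pwm)
{
     double down_deg = (127 - tilt_pwm) * DEG_PER_PWM;   /* degrees below horizontal (assumes 127 = level, lower pwm = tilted down) */
     double down_rad = down_deg * PI / 180.0;

     if(down_deg <= 0.0)
          return -1.0;       /* looking level or up -- the triangle doesn't close */

     /* for the downward angle below horizontal, tan(angle) = height / distance, so:
        distance = height / tan(angle below horizontal)
                 = height * tan(angle measured from straight down) */
     return CAM_HEIGHT_IN / tan(down_rad);
}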

Sounds like we’re all taking VERY similar approaches here.

We must be doing something right =).

  1. Proportional correction is the way to go. This was mentioned above, and basically means: read the cam.pan_servo value and bias your left and right motors proportionally to the angle at which your camera is pointing (see the sketch right after this list). Simple enough, right?

  2. Adjust your theta (fixed tilt-axis angle) so that you don’t see the green tetras on the opponent’s side of the field when the match starts. I don’t know yet whether a tilt-axis servo is necessary, since once you start driving towards a tetra on your side of the field, you MAY pick up an opponent’s tetra in your field of view with a fixed tilt angle.

  3. Depending on where the vision tetras are placed and where your robot is placed at the beginning of the match, you may not be able to see all the possible tetra positions. Make a panning routine that makes the camera look left and right a bit until it finds a tetra to track (a rough sketch is at the end of this post). Due to some very minor loopholes, you can do this while the robot is disabled before the beginning of the match.

  4. We’re still confused about what to do when there are two tetras in our field of view… drive until one of them drops out =).
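Here is a minimal sketch of the proportional correction idea from item 1 (the gain and base speed are made-up numbers, and it assumes equal pwm values drive both sides forward; flip one side if your wiring is reversed):

extern cam_struct cam;          /* camera data from the camera driver */

#define BASE_SPEED 155          /* forward speed (a guess -- tune it) */
#define PAN_CENTER 127          /* pan servo value when the target is dead ahead */
#define GAIN         2          /* pwm counts of correction per count of pan error (a guess) */

void proportional_drive(void)
{
     int error = (int)cam.pan_servo - PAN_CENTER;   /* positive when the target is off to one side */
     int left  = BASE_SPEED + GAIN * error;
     int right = BASE_SPEED - GAIN * error;

     /* keep both sides inside the legal pwm range */
     if(left < 0)    left = 0;
     if(left > 254)  left = 254;
     if(right < 0)   right = 0;
     if(right > 254) right = 254;

     pwm12 = (unsigned char) left;    /* left drive  */
     pwm11 = (unsigned char) right;   /* right drive */
}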

At this point, we’ve gotten our test platform to find a tetra by panning while disabled before the match, then, when the match starts, spin on axis to point the robot straight at the tetra, approach the tetra guided by the camera, and stop when the tetra is about a foot from the front of the robot.

Our platform uses omni-wheels, and we are trying to minimize wheel slip in our approach to the tetra, because the HARD part is actually finding the center goal AFTER you’ve picked up the tetra. Whether to use a gyro is still up in the air, depending on how close we can get the robot pointed towards the center goal with just our encoders. If we can get it close and use the camera to get it the rest of the way, we’re set. But we don’t know whether this is the case just yet.
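And a rough sketch of the panning routine from item 3 (the confidence field name, the threshold, the sweep limits, and the step size are all assumptions; how the new pan position actually gets to the servo depends on whether the camera board or the RC is driving it):

extern cam_struct cam;           /* camera data from the camera driver */

#define PAN_MIN   60             /* sweep limits (guesses) */
#define PAN_MAX  194
#define PAN_STEP   4             /* how far to move per loop (a guess) */

/* Returns the next pan position to command.  Call every loop while searching;
   once the camera reports a confident lock, it stops sweeping. */
unsigned char pan_for_target(void)
{
     static unsigned char pan = 127;
     static signed char dir = 1;

     camera_track_update( );                    /* refresh the cam data */

     if(cam.confidence > 50)                    /* good lock (field name and threshold assumed) */
          return pan;                           /* hold position; let tracking take over */

     pan += dir * PAN_STEP;                     /* keep sweeping back and forth */
     if(pan >= PAN_MAX) { pan = PAN_MAX; dir = -1; }
     if(pan <= PAN_MIN) { pan = PAN_MIN; dir =  1; }

     return pan;
}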

-SlimBoJones…

Can’t it just track one at a time?

Something I noticed with Tom’s setup: the way you have it, you can’t use the one camera both to track AND to find the point where you should stop to pick up the tetra. You’ve got it set up so the camera looks straight forward and the robot turns until it finds the tetra, then drives to it. Then you tilt the camera down until the robot lines up with the angle and you know the distance from it. But if you can see the tetra at the beginning by having the camera horizontal, then you can’t also use it to angle down and find the distance (because horizontal already allows you to see the vision tetra). Or am I mistaken?

I think that if the camera sees two at once, it will draw the bounding box around both of them and will probably drive to a point between them, but it will tend toward the one that is bigger (probably the closer one in this case).

Great piece of info.
I’ve got a few questions, though:
1) What is the camera’s field of view?
2) How are the servo’s values related to angles?
It would really help my poor programming team (around yesterday our number shrank to 3 people, one of them with no programming experience and no chance, and no will, to get any; in the first few days of the project we numbered more than 10!) if somebody told us these (and maybe some other) basic things.

Yeah, we got not our real “bot” but our practice kitbot to find the tetra and drive straight into it. Still have to do some work to make it do something useful, though.

  1. It depends on how you mount it… measure it manually.
  2. You take the angle measure of the servo’s field of view (its total sweep) and divide it by 255 to get degrees per pwm count. A rough sketch is below.
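For item 2, a rough conversion might look like this (the total sweep here is a made-up number; measure your own servo’s throw and check whether 127 really is centered):

#define SERVO_SWEEP_DEG 180.0    /* total angle the servo moves over pwm 0-255 (a guess) */

/* degrees off center for a given servo pwm value (assumes 127 = centered) */
double servo_pwm_to_degrees(unsigned char pwm)
{
     return ((double) pwm - 127.0) * (SERVO_SWEEP_DEG / 255.0);
}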

We’ve also been able to make our practice bot go towards the vision tetra and stop.

I just realised today what a waste of time it was playing with the camera on our practice bot, because I really didn’t change anything except a few numbers for calibrating different things, and now with our actual bot I have to start all over again.

Oh well, at least I learned something. I hope.

By the way, if I am writing autonomous code, do I have to put a call to camera_init() in there somewhere?

AAah, stopping… we’re still trying to get that part to work reliably. :slight_smile:

Update: the 195 Test-a-bot now successfully finds a vision tetra within a 180-degree field of view, drives to it (quite fast, too), and stops 6 inches away, every time.

What’s next?

I’m kinda new to programming in C, and I have a question: what does it mean for cam.pan_servo to be between 120 and 134, and what is cam.pan_servo?

Additionally, what would you recommend I read up on to become proficient at programming in C?

Thanks,
Mike

We can consistently get a vision tetra and raise our arm. About 80% of the time we can drive to the goal, and we stack about 30% of the time. Some minor tweaking should bring these numbers up a lot :yikes:

For ours, at this point it tracks and picks up a vision tetra consistently, then looks for the goal… Unfortunately, at crunch time the mechanical team had lots of work to do on the robot and I lost the time I needed to fine-tune :/. Hopefully it’ll be able to cap by the second competition O_O.