Not that it’s written badly. But we all know that not everyone is good at programming, which is especially true for the newer teams, and it is a wealth of code that can seem daunting to any new programmer. I have tried my best to explain this code in simple terms, originally for the benefit of my own team. However, once I had completed it, I thought, why not make this available to others? It might prove beneficial for those who do not understand the code. So, I spiced it up a bit and uploaded it to our site. Feel free to use it however you like, and if you do, please let me know what you think.
I think your link is broken.
Great job! Welcome to ChiefDelphi, that’s a great first post, hopefully many more to come.
Are you on a FRC team as well as FVC? Or do you just like to mess around with the CMUcam?
Works fine for me. If anyone else can’t get it, I’ll copy it onto here.
Firstly, note that this documentation is based upon the version of the camera-integrated code developed by Kevin Watson for the 2006 FIRST Robotics
Competition, downloadable at http://www.kevin.org/frc/frc_camera.zip . This
version of the default robot code ignores all default routines and performs only
camera functions in normal mode, and nothing in autonomous. The default routines
are all commented out, so they can easily be un-commented and are still perfectly
usable, and the camera code can simply be copied and pasted into the
User_Autonomous() routine. But I’ll leave this for you to do.
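If you do decide to move it, the result would look roughly like the sketch below. This is only an outline, assuming the usual IFI default-code loop structure (autonomous_mode, statusflag, Getdata()/Putdata()); check every name against your own copy of the default code rather than taking it from here.

    /* Sketch only: assumes the standard IFI default-code globals and the
       camera project's Camera_Handler() and Servo_Track(). Verify every
       name against your own user_routines_fast.c before using it. */
    void User_Autonomous_Code(void)
    {
      while (autonomous_mode)          /* run until the autonomous period ends      */
      {
        if (statusflag.NEW_SPI_DATA)   /* fresh data has arrived from the master uP */
        {
          Getdata(&rxdata);            /* pull in the new input data                */

          Camera_Handler();            /* parse any packets from the camera         */
          Servo_Track();               /* aim the pan/tilt servos                   */

          Putdata(&txdata);            /* push the outputs back out                 */
        }
      }
    }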
Now, for some of the more experienced programmers, and of course the mentors,
this code may be easy enough to understand, provided you have several hours to
read through it all. However, judging by the increasing amount of pre-written
code provided over the past few years, I’d say that a lot of programmers and
software guys, especially on new teams, do not know enough about programming,
and are not experienced enough with code and algorithms, to work out what this
code does on their own. Or perhaps you simply don’t have the time to figure it
out yourself. Hopefully, this guide can prove useful to you.
I’m going to begin in user_routines.c, where Kevin Watson placed the camera’s
operation code for demonstration, and walk through the routines and operations
performed, step by step. I’ll try to be as brief as possible, without over-
complicating it. The more complex routines provide a fair amount of commenting,
and that, combined with this guide, should be more than enough to give you an
understanding of what each part of this code does.
Also, Kevin Watson has provided useful menu features for configuring the camera
during run-time, through the IFI Loader’s output terminal. Bear in mind that, as
of right now, this documentation will not go into these features in depth, as I
haven’t had a chance to actually use any of them. I hope to add that info in
eventually.
So, here goes…
user_routines.c
Process_Data_From_Master_uP() - Receives input data from the master microprocessor,
                                executes all of the user’s routines, and sends
                                output data back to the microprocessor.
– Some initial variables declared, for use with the menu features which are not
covered here.
– If neither of the menu features is active, display some variables and info
on the tracking process.
– Camera_Handler() called. Let’s take a look at it, shall we?
camera.c
– Camera_Handler() - Handles initialization of the camera, receives any
packet data sent to the camera through the TTL Serial
Port, and inserts that data into the data structure.
– -- If the camera has not been initialized, call Initialize_Camera();
– -- – Open up the camera’s Serial port and send some configuration commands
to it. This is equivalent to the TTL port when the camera is connected
to the RC.
– -- – Reset the initialization state machine.
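In outline, that initialization step amounts to something like the sketch below. Every name here except Initialize_Camera() is a placeholder, and the command strings are only examples of the kind of raw text the CMUcam2 accepts; the real commands are in camera.c.

    /* Conceptual sketch of Initialize_Camera(), not the literal code. */
    void open_camera_serial_port(void);       /* placeholder prototypes */
    void send_to_camera(const char *command);

    static unsigned char camera_init_state;   /* which init step we're on */

    void Initialize_Camera_Sketch(void)
    {
      open_camera_serial_port();              /* the TTL port, on the RC side      */
      send_to_camera("RS\r");                 /* example command: reset the camera */
      send_to_camera("PM 1\r");               /* example command: poll mode on     */
      camera_init_state = 0;                  /* start the state machine over      */
    }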
Now, we are ready to parse the serial port. The port works like a command
prompt, handling everything sent to it as a string of text and adding new
messages to the end of a long buffer. Each string of data that is added to the
buffer is termed a packet. ACKs and NCKs (acknowledge and not-acknowledge
responses) are two kinds of packets that we may need to retrieve.
Keep in mind that after we get to the end of the following string of if
statements and the state machine, the robot has to run through the entire rest
of the program before it gets back to this routine.
– -- – If an ACK or NCK was added to the buffer during the last run through
of this routine, we want to reset the count of how many times this
routine has been run since the last packet was received. If we wait too
long, an error has occurred.
– -- – Now, the state machine begins. The machine will run through one state
at a time, all in order, taking a new specific step in the
initialization in each state.
– -- – 1) Get_Camera_Configuration() - This will retrieve configuration
variables used by the camera, either through the terminal menu feature,
or by using the values stored in camera.h.
– -- – 2) Nada.
– -- – 3 - 16) Add some packets to the serial port, initializing controls to
be used in the terminal menu features.
– -- – 17) Send the color tracking configuration variables that we retrieved
earlier to the serial port.
– -- – 18) Initialization complete. Return a 1, rather than a 0 to signal it.
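Put together, the state machine is essentially a switch on the current state that advances one step per pass through the loop. The sketch below is simplified (the real code in camera.c has all 18 states and waits on the camera’s responses), and Get_Camera_Configuration()’s prototype is assumed here.

    /* Simplified sketch of the initialization state machine. */
    extern void Get_Camera_Configuration(void);  /* assumed prototype; see camera.c */

    unsigned char Camera_Init_State_Machine_Sketch(void)
    {
      static unsigned char state = 1;

      switch (state)
      {
        case 1:
          Get_Camera_Configuration();  /* settings from the menu or camera.h */
          state++;
          break;

        case 2:                        /* nothing happens in this state */
          state++;
          break;

        /* states 3-16: queue packets that set up the terminal-menu controls */
        /* state 17:    send the color tracking configuration to the camera  */

        case 18:
          return 1;                    /* initialization complete */

        default:
          state++;
          break;
      }

      return 0;                        /* not finished yet */
    }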
– -- Now, we’re back in Camera_Handler(). Don’t take this literally, though.
Camera_Handler() is called each time through the loop, before each state
of the state machine. But, at this point, it won’t be calling
Initialize_Camera() each time.
This part of the function parses and clears the serial port’s buffer each time
through the program loop. This is where we process the ACKs, NCKs, and other
packets that the initialization state machine sent to the serial port.
– -- First, we count out how many bytes of data are present in the serial
port’s queue. We then enter a loop to process each of these bytes, one by
one.
– -- Once a byte is sent in, we call Camera_State_Machine() to process it.
Personally, I think Camera_State_Machine is a poor name for this routine.
Process_Packet_Data() or Process_Packet_Byte() would be more accurate.
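Whatever you call it, the receive side boils down to a loop like the one below. All of the names in this sketch are placeholders; in camera.c the per-byte work is done by Camera_State_Machine(), whose exact argument list may differ from what is shown here.

    /* Sketch of the per-loop receive handling. */
    unsigned char bytes_in_camera_queue(void);   /* placeholder prototypes */
    unsigned char read_camera_byte(void);
    void process_packet_byte(unsigned char data);

    void process_camera_bytes_sketch(void)
    {
      unsigned char count = bytes_in_camera_queue();  /* how much data is waiting */

      while (count > 0)
      {
        unsigned char data = read_camera_byte();      /* pull one byte off the queue  */
        process_packet_byte(data);                    /* hand it to the packet parser */
        count--;
      }
    }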
Now, we get T packets, which contain the specific variables retrieved by the
camera: the position of the target, its offset from the center of the camera’s
view, the confidence the camera has in this information, etc.
– -- – We use the bytes of the packet, as read in, to determine what kind of
packet we have. If it is an ACK or NCK, we ignore it. ACKs and NCKs are
processed automatically and we don’t need to perform any actions.
– -- – T packets are sent from the camera to us. If it’s a T packet we’re
reading, we read in the rest of it and insert the data into our
T_Packet_Data struct.
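For reference, a CMUcam2 T packet is a single line of text along the lines of "T mx my x1 y1 x2 y2 pixels confidence", and the parsed values land in fields roughly like the ones below. This is only an illustration of the layout; the real T_Packet_Data definition lives in camera.h and its field names may differ.

    /* Illustrative layout only -- see camera.h for the real T_Packet_Data. */
    typedef struct
    {
      unsigned char mx, my;        /* centroid of the tracked color blob     */
      unsigned char x1, y1;        /* upper-left corner of the bounding box  */
      unsigned char x2, y2;        /* lower-right corner of the bounding box */
      unsigned char pixels;        /* how many pixels matched the color      */
      unsigned char confidence;    /* how sure the camera is about the match */
    } T_Packet_Data_Sketch;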
user_routines.c
– So, finally, Camera_Handler() is finished. Again, I do not mean this
literally. Camera_Handler() is still called every loop.
– Now, as long as the tracking menu is not in use (that is, as long as we
aren’t busy configuring the camera), we track our target with Servo_Track().
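In other words, the call site amounts to a simple gate, something like this sketch (tracking_menu_active is a placeholder for however the menu state is really tracked, and the prototypes are assumed):

    /* Sketch of the gating described above. */
    void Camera_Handler(void);                  /* assumed prototypes   */
    void Servo_Track(void);
    extern unsigned char tracking_menu_active;  /* placeholder variable */

    void camera_step_sketch(void)
    {
      Camera_Handler();                  /* always parse incoming camera data */

      if (!tracking_menu_active)         /* menus idle, so it's safe to track */
      {
        Servo_Track();                   /* steer the pan/tilt servos         */
      }
    }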
tracking.c
– -- The basic idea behind running the servos is that we read the current
position into a variable; if we want to change the position of either
servo, we do our work on those variables, make sure that the values do
not go outside the PWM limits, and then write them back to the servo
PWMs.
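The pattern is the familiar read-adjust-clamp-write. A clamp helper might look like the sketch below; the limit names and values are placeholders for whatever tracking.h really defines.

    /* Sketch of the clamping step. The limits are placeholder values. */
    #define PAN_MIN_PWM   0
    #define PAN_MAX_PWM 254

    unsigned char clamp_pan_pwm_sketch(int value)
    {
      if (value < PAN_MIN_PWM) return PAN_MIN_PWM;   /* never below the lower limit */
      if (value > PAN_MAX_PWM) return PAN_MAX_PWM;   /* never above the upper limit */
      return (unsigned char)value;
    }

The working copy gets adjusted, run through a clamp like this, and only then written back to the servo PWM.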
– -- So, first, we need to initialize our camera for tracking, using
Initialize_Tracking().
– -- – This routine, aside from debug output, only calls
Get_Tracking_Configuration(), which runs under the same principles as
Get_Camera_Configuration(). If the terminal menus are in use, get the
configuration stuff we need from there, otherwise, we use what’s in
tracking.h
– -- Now, if a new T packet has arrived, and thus been processed by
Camera_Handler(), we want to check whether the target is in our view.
If it is, we may need to adjust the servos to keep the camera facing it,
and if not, we want to search for it.
– -- If we have the target in sight, let’s recalculate the position we need for
our pan servo, in case it’s off. If the amount we’re off from the
center is greater than the allowable amount, calculate the value that
needs to be added to the servo value. We make sure that the result does not
fall outside the PWM limits, and set our result to the middle-man variable.
– -- Same thing for the tilt servo.
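That adjustment is essentially a small proportional correction with a deadband, along the lines of the sketch below. The center, deadband, and gain values here are placeholders, not the numbers tracking.h actually uses, and the sign of the correction depends on which way your servo is mounted.

    /* Sketch of the off-center correction described above. */
    #define IMAGE_CENTER     80    /* placeholder: horizontal center of the image */
    #define ALLOWABLE_ERROR   3    /* placeholder: deadband, in pixels            */
    #define PAN_GAIN          2    /* placeholder: pixels-to-PWM scaling          */

    int pan_correction_sketch(unsigned char target_mx, int current_pan)
    {
      int error = (int)target_mx - IMAGE_CENTER;   /* how far off-center the target is */

      if (error > ALLOWABLE_ERROR || error < -ALLOWABLE_ERROR)
      {
        current_pan += error / PAN_GAIN;           /* nudge the servo toward the target */
      }

      return current_pan;   /* the caller still clamps this to the PWM limits */
    }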
– -- If we don’t have the target in sight, we need to search for it. We start
the pan all the way to the left and the tilt in the center. After each
loop we add a certain amount to the pan servo. If the result is greater
than the servo’s right limit, we put it all the way back to the left and
add a certain amount to the tilt servo. Once we reach the max for that one
we put it down to the minimum and keep searching.
– -- Finally, set the servo PWMs to the values we calculated.
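The search sweep from a couple of steps back boils down to the sketch below. Limits, step sizes, and names are placeholders for the values configured in tracking.h, and the sweep starts with pan at PAN_LEFT_LIMIT and tilt at TILT_CENTER; the resulting pan and tilt values still get clamped and written to the PWMs as described above.

    /* Sketch of the search sweep. All constants are placeholder values. */
    #define PAN_LEFT_LIMIT     0
    #define PAN_RIGHT_LIMIT  254
    #define PAN_SEARCH_STEP    4
    #define TILT_MIN          64
    #define TILT_CENTER      127
    #define TILT_MAX         194
    #define TILT_SEARCH_STEP  16

    void search_step_sketch(int *pan, int *tilt)
    {
      *pan += PAN_SEARCH_STEP;            /* sweep a little further to the right */

      if (*pan > PAN_RIGHT_LIMIT)         /* ran off the right edge?             */
      {
        *pan = PAN_LEFT_LIMIT;            /* snap back to the far left           */
        *tilt += TILT_SEARCH_STEP;        /* and move the tilt up a notch        */

        if (*tilt > TILT_MAX)             /* tilted as far as it will go?        */
        {
          *tilt = TILT_MIN;               /* drop to the minimum and keep going  */
        }
      }
    }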
user_routines.c
– And, last, we handle the bulk of the terminal menus and the processing of
data from the terminal.
And, that is the entire camera code. Not that much, once you break it down.
Now, I’m only one guy, and I know I make mistakes, just like anyone else.
If anyone happens to notice something I’ve left out, or gotten wrong because I
don’t know enough about the system, please tell me. And anyone who does utilize
this information should check on this, periodically, in case I update it, which
is likely. Anyone wishing to contact me can do so at [email protected].
Enjoy.
Bah. The domain name’s been faulty ever since the site went up. It’s some temporary thing our host’s got set up for us, until we figure out if we can use our old one. Anyway, it doesn’t work for me either. Try this.
And actually, I meant to put FRC. So, I have a very good reason to mess around with the camera. I do still have a lot of work to do on our website, and with the robot itself, but I’ll try and improve on this as much as I can. Perhaps I can put this all in the code itself, through comments. Or, at least put the code into the summary.
Why not post it here as a whitepaper?
If I knew what that means, I might be able to answer you.
Well, in that case…
because I plan to add to/improve upon it and I don’t see an option to update it later on.