View Full Version : Where is the multi object tracking code for the RC?
EHaskins
06-01-2007, 14:24
Dave said the multi object code was available online, but I can't find it. Does anyone know where it is?
Thanks,
Eric Haskins
Ctrl Alt Delete
06-01-2007, 17:29
Does Kevin's code work for this?
http://kevin.org/frc/frc_camera_2.zip
Jeff Rodriguez
06-01-2007, 17:36
http://www.intelitekdownloads.com/easyCPRO/
Do we use last year's unlock code for this?
Alexa Stott
06-01-2007, 17:39
http://www.intelitekdownloads.com/easyCPRO/
So wait, are you only able to use that code with EasyC?
Kingofl337
06-01-2007, 17:51
Currently only easyC and WPILib have code written for Multi-Target.
Brad Miller will be making a project on Monday for WPILIB.
The reason being that the GDC used easyC for FRC to program the demo and most of the test robots.
easyC PRO now has a full IDE built in, so you can write as much C code as your heart desires. Also, easyC PRO has a graphics display window that works similarly to the way the demo did.
Team contacts will receive a CD-KEY Monday.
You can run in evaluation mode till then.
Kevin Watson
06-01-2007, 18:08
...Only easyC and WPILib have code written for Multi-Target....
Really?
-Kevin
JBotAlan
06-01-2007, 18:09
I am confused; I didn't even know that the *hardware* supported multiple objects. I'll have to have a look-see at that code to figure out how multi-tracking works.
JBot
Really?
-Kevin
:) I can't wait.
I am confused; I didn't even know that the *hardware* supported multiple objects. I'll have to have a look-see at that code to figure out how multi-tracking works.
JBot
Of course it could. It's all about the software implementation.
Ctrl Alt Delete
06-01-2007, 18:14
OK, so we have to use WPILib to get the tracking code? We were going to use FusionEdit and not easyC.
JBotAlan
06-01-2007, 18:15
Of course it could. It's all about the software implementation.
Yes, but if you have 2 targets in one field-of-view, you will get a wide box and a low confidence value. That's at the camera's firmware level. So just by looking at the tracking packets, you wouldn't be able to see 2 targets. And I know we couldn't dump the contents of the frame into the RC; we don't have that kind of bandwidth.
However, if you only had one target in the field-of-view, and panned until you saw another, this would work fine. I'm guessing that's how the software works.
JBot
Kevin Watson
06-01-2007, 18:27
I am confused; I didn't even know that the *hardware* supported multiple objects. I'll have to have a look-see at that code to figure out how multi-tracking works.
JBot
The hardware doesn't support tracking of multiple objects. The code that Dave mentioned, written by FIRST's Neil Rosenberg, infers the location of a "spider leg" by looking at the size of the image. If there are two lights in the scene -- which is not always the case -- the camera returns a blob size that is far too big to be from one light. If the blob size is small, you know that there is only one light in the scene and there is a spider leg directly underneath.
-Kevin
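A minimal sketch of that size-based inference, assuming you already have the bounding-box corners from the camera's tracking packet; the width threshold is a made-up number you would have to tune on the real camera:

/* Rough sketch of the size-based inference described above: a blob that is
 * "too wide" to be a single light implies two lights are in the scene, while
 * a small blob implies one light with a spider leg directly under it.
 * ONE_LIGHT_MAX_WIDTH is an illustrative guess -- tune it yourself. */
#define ONE_LIGHT_MAX_WIDTH 30

int Count_Lights_In_View(unsigned char x1, unsigned char x2)
{
    unsigned char width = x2 - x1;   /* bounding-box width from the tracking packet */

    return (width > ONE_LIGHT_MAX_WIDTH) ? 2 : 1;
}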
Kevin,
If you did write code (I hope I'm not misinterpreting your post), do you have an approximate release date?
Bump? It sounds like Kevin's leading us on a little, but I would love love love to have that code. :D
Kingofl337
06-01-2007, 22:07
Kevin, if your 2007 code has tracking similar to what Neil wrote, then I apologize; I was misinformed.
Kevin is 100% correct: the CMUcam can't track multiple objects. The code just looks at the size of the target region and, if it's over a certain size, prints that it's looking at 2 targets. In the demo they used a terminal emulation program with VT100 calls. We made our own custom terminal window and function to do the same thing in easyC PRO.
WPILib is 100% compatible with MPLAB and Eclipse. In fact, it's currently being written and maintained in Eclipse.
WPI has been using it on their FRC robots since 2005.
arshacker
07-01-2007, 00:48
Any reason why easyC can't open the bds file for the multi-object tracking code?
Any reason why easyC can't open the bds file for the multi-object tracking code?
Are you sure that you are using easyC PRO (http://www.intelitekdownloads.com/easyCPRO/), and not easyC for Vex?
Astronouth7303
07-01-2007, 12:36
The way the camera works is that it takes a picture, finds all the pixels that fall into a color range, and finds information about them. The information I've used in the past is:
bounding box (x1, y1, x2, y2)
The median point (mx, my)
The pan/tilt (see note)
a "confidence" (basically a ratio of the color blob area to the area of the image)
IIRC, some statistical information about the color is also provided.
Note: In 2005, the camera would drive its own servos and report their values. In 2006, the code would not configure this way, instead relying on the RC to do it. (I changed this to the 2005 behavior in my own code.) I do not know yet what the behavior is this year.
I do not remember if the camera provides the number of separate blobs in the default information. If it does not, the communication overhead and computation requirements would likely be prohibitive. If it does, there is a good chance that you can not get the separate bounding box/median information for each blob.
Of course, I'm likely to eat my words when the actual code gets around.
EDIT: Wow. How much discussion occurs while I post!
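For reference, here is a rough sketch of a structure holding the per-frame data listed above. The field names are illustrative guesses and may not match the actual t_packet_data structure in the FRC camera code:

/* Illustrative only: one frame of CMUcam2 tracking data. Field names are
 * assumptions and may not match the real packet structure exactly. */
typedef struct
{
    unsigned char mx, my;       /* centroid (median point) of the tracked blob */
    unsigned char x1, y1;       /* upper-left corner of the bounding box */
    unsigned char x2, y2;       /* lower-right corner of the bounding box */
    unsigned char pixels;       /* number of tracked pixels (scaled) */
    unsigned char confidence;   /* how densely tracked pixels fill the box */
    unsigned char pan_servo;    /* pan servo position, if the camera drives it */
    unsigned char tilt_servo;   /* tilt servo position, if the camera drives it */
} Tracking_Data;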
Kevin Watson
07-01-2007, 12:48
I do not remember if the camera provides the number of separate blobs in the default information. If it does not, the communication overhead and computation requirements would likely be prohibitive. If it does, there is a good chance that you can not get the separate bounding box/median information for each blob.
Yes you can, but the RC needs to control the pan/tilt servos...
-Kevin
How exactly does one access the data pertaining to the individual blobs? I didn't remember ever seeing anything like that in the code...
Kingofl337
07-01-2007, 14:51
They are not individual blobs; they are a single large blob. The camera just draws a box around the target pixels.
If you load up easyC PRO, we have a program in the sample code that shows what the camera is seeing. It draws a box around the blob, marks the centroid (center of the blob) with an "X", and shows the data the camera is reporting.
If you're not using easyC, the CMUcam Java app can also show you the region. I don't know if LabVIEW can show this data.
Yes you can, but the RC needs to control the pan/tilt servos...
-Kevin
If one were to use the VW command (virtual window, see page 55 of the CMUCam2 manual), processing could be done on a particular chunk of the camera's view. By examining a 50px wide window and then repeatedly sliding that window over by a given number of pixels and re-processing, one could reconstruct the two distinct blobs and make an estimate of the number of pixels between them.
Of course if you used this method, the camera's servos would have to be driven by the RC because otherwise resetting the virtual window would cause the camera to track/center on that particular portion of the window. (I'm pretty sure anyways, haven't ever actually tested out the command).
Can anyone verify that using the VW window causes the camera to re-process only that chunk of the view? Also I'm not sure if a sliding window would be too slow. Eagerly anticipating any more hints from Kevin! :)
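Here is a rough sketch of that sliding-window idea. The VW command format follows the CMUCam2 manual, but the serial helper, the packet-reading helper, the frame dimensions, and the thresholds are all assumptions you would replace with your own:

#include <stdio.h>

/* Hypothetical stand-ins for whatever serial/tracking routines your camera
 * code provides -- these are not real library calls. */
extern void Camera_Send_Command(const char *cmd);  /* send a raw command string */
extern int  Blob_Pixel_Count(void);                /* tracked pixels in the newest packet */

#define IMAGE_WIDTH     159   /* assumed frame width; check your resolution mode */
#define IMAGE_HEIGHT    239   /* assumed frame height */
#define WINDOW_WIDTH     50   /* width of each virtual-window slice */
#define WINDOW_STEP      25   /* how far to slide between samples */
#define PIXEL_THRESHOLD   4   /* minimum pixels before a slice counts as "lit" */

/* Scan the frame in vertical slices and note which slices contain tracked
 * pixels. Two separate runs of lit slices with an unlit gap between them
 * suggest two distinct lights. */
void Scan_For_Lights(void)
{
    char cmd[32];
    int x;

    for (x = 0; x + WINDOW_WIDTH <= IMAGE_WIDTH; x += WINDOW_STEP)
    {
        /* Virtual window: "VW x1 y1 x2 y2" per the CMUcam2 manual. */
        sprintf(cmd, "VW %d 0 %d %d\r", x, x + WINDOW_WIDTH, IMAGE_HEIGHT);
        Camera_Send_Command(cmd);

        /* ...restart tracking and wait for a fresh packet here... */

        if (Blob_Pixel_Count() >= PIXEL_THRESHOLD)
        {
            /* the slice starting at column x contains part of a light */
        }
    }

    /* Restore the full window when finished. */
    sprintf(cmd, "VW 0 0 %d %d\r", IMAGE_WIDTH, IMAGE_HEIGHT);
    Camera_Send_Command(cmd);
}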
Kevin Watson
07-01-2007, 18:41
If one were to use the VW command (virtual window, see page 55 of the CMUCam2 manual), processing could be done on a particular chunk of the camera's view. By examining a 50px wide window and then repeatedly sliding that window over by a given number of pixels and re-processing, one could reconstruct the two distinct blobs and make an estimate of the number of pixels between them.
Of course if you used this method, the camera's servos would have to be driven by the RC because otherwise resetting the virtual window would cause the camera to track/center on that particular portion of the window. (I'm pretty sure anyways, haven't ever actually tested out the command).
Can anyone verify that using the VW window causes the camera to re-process only that chunk of the view? Also I'm not sure if a sliding window would be too slow. Eagerly anticipating any more hints from Kevin! :)
Yes, this is one of the cooler approaches that you could try. A simpler way might be to rotate the camera fully clockwise, call Track_Color(), and then rotate the camera counter-clockwise until the camera detects the light. Then continue to rotate counter-clockwise until the entire blob is in frame (i.e., the blob isn't touching the edge of the image). Now you know where the rightmost blob is and its size. Do this again to find the leftmost blob. A little math, and you should know where the closest scoring location is.
It's a fun problem <evil grin>.
-Kevin
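A sketch of how that scan might look in code. Track_Color() is the one name taken from Kevin's post above; the servo and packet helpers, PWM limits, and step size are placeholders for your own routines:

/* Hypothetical helpers standing in for your own camera/servo plumbing. */
extern void Track_Color(void);               /* start color tracking (see Kevin's post) */
extern void Set_Pan_PWM(unsigned char pwm);  /* RC drives the pan servo directly */
extern int  Blob_Detected(void);             /* nonzero if the newest packet saw the color */
extern int  Blob_Touches_Edge(void);         /* bounding box still touching the image edge? */

#define PAN_FULL_CW     0     /* illustrative PWM limits -- measure your servo */
#define PAN_FULL_CCW  254
#define PAN_STEP        2

/* Start fully clockwise, sweep counter-clockwise until a blob appears, then
 * keep sweeping until the whole blob fits in the frame. The returned pan
 * position marks the right-most light; mirror the loop to find the left-most. */
unsigned char Find_Rightmost_Light(void)
{
    unsigned char pan = PAN_FULL_CW;

    Set_Pan_PWM(pan);
    Track_Color();

    while (pan < PAN_FULL_CCW && !Blob_Detected())
    {
        pan += PAN_STEP;
        Set_Pan_PWM(pan);
        /* ...wait for a fresh tracking packet between steps... */
    }

    while (pan < PAN_FULL_CCW && Blob_Touches_Edge())
    {
        pan += PAN_STEP;
        Set_Pan_PWM(pan);
    }

    return pan;
}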
drakesword
08-01-2007, 23:07
I had a solution which I do not think is legal, but it would be cool.
Parts
CMU Cam 2 x2
Basic Stamp or Javelin
BOE Programming Board
Feed the serial data into the free pins. Program the Stamp to tack an L or R onto the packet before sending it to the robot. That would allow for multiple cameras, but I don't think you can use a Stamp. Maybe a PIC, though.
A little math, and you should know where the closest scoring location is.
Any chance you could elaborate on this math? Even if you know the relative headings of two lights, I am still having a hard time figuring out how you can approach the spider leg head on (not at an angle).
Thanks in advance,
Robinson
Kevin Watson
15-01-2007, 00:58
Any chance you could elaborate on this math? Even if you know the relative headings of two lights, I am still having a hard time figuring out how you can approach the spider leg head on (not at an angle).
Thanks in advance,
Robinson
You don't need to approach the spider leg head on to score (I'm told that it was designed to be fairly forgiving). One bit of math that will come in handy is the equation that allows you to calculate range to the light from the camera tilt angle when it is pointed directly at the light. This method was described in a post (http://www.chiefdelphi.com/forums/showpost.php?p=427955&postcount=8) last year. This year the centroid of the light will be about 116 inches above the floor.
-Kevin
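In code, the range calculation might look something like this. The 116-inch light height comes from the post above; the camera height, the level-tilt PWM value, and the radians-per-PWM-count factor are placeholders you would measure and calibrate on your own robot:

#include <math.h>

#define LIGHT_HEIGHT_IN   116.0  /* centroid of the green light above the floor (from the post above) */
#define CAMERA_HEIGHT_IN   12.0  /* camera lens height on YOUR robot -- measure it */
#define TILT_CENTER_PWM   127    /* assumed PWM value at which the camera looks level */
#define RAD_PER_PWM_STEP  0.006  /* assumed radians per PWM count -- calibrate this */

/* Estimate horizontal range (inches) to the light when the camera is centered
 * on it: range = (light height - camera height) / tan(tilt angle). */
double Range_To_Light(unsigned char tilt_pwm)
{
    double tilt_rad = (tilt_pwm - TILT_CENTER_PWM) * RAD_PER_PWM_STEP;

    if (tilt_rad <= 0.0)
        return -1.0;  /* camera level or pointed down: no valid solution */

    return (LIGHT_HEIGHT_IN - CAMERA_HEIGHT_IN) / tan(tilt_rad);
}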
You don't need to approach the spider leg head on to score (I'm told that it was designed to be fairly forgiving). One bit of math that will come in handy is the equation that allows you to calculate range to the light from the camera tilt angle when it is pointed directly at the light. This method was described in a post (http://www.chiefdelphi.com/forums/showpost.php?p=427955&postcount=8) last year. This year the centroid of the light will be about 116 inches above the floor.
I'm familiar with the method for calculating range, but I'm afraid the mechanical subteam may win the battle and give us a manipulator that is only robust enough to accommodate a head-on approach to the target.
While I would like it if we weren't angle-sensitive, if we are, what math is necessary to figure out your approach angle to the target (e.g., head on, coming in at 20°, etc.)?
Thanks in advance,
Robinson
Kevin Watson
15-01-2007, 02:57
I'm familiar with the method for calculating range, but I'm afraid the mechanical subteam may win the battle and give us a manipulator that is only robust enough to accommodate a head-on approach to the target.
While I would like it if we weren't angle-sensitive, if we are, what math is necessary to figure out your approach angle to the target (e.g., head on, coming in at 20°, etc.)?
Thanks in advance,
Robinson
It's not an easy problem because you'll need a very agile robot that, at the very least, will need to be four wheel drive so that you can do a turn-in-place (of course, several high school students will read this and grin because they've thought of a cool way to solve the problem that the NASA guy didn't think of -- it happens every year <grin>). If I were you, I'd push back and let the mechanism folks know that their solution for delivering a scoring piece needs to be more robust so the 'bot can approach at more oblique angles. To help visualize how you might accomplish the task, here's a link to a PDF containing a scale Visio drawing of the field: http://kevin.org/frc/2007_frc_field.pdf.
-Kevin
maniac_2040
15-01-2007, 15:50
It's not an easy problem because you'll need a very agile robot that, at the very least, will need to be four wheel drive so that you can do a turn-in-place (of course, several high school students will read this and grin because they've thought of a cool way to solve the problem that the NASA guy didn't think of -- it happens every year <grin>). If I were you, I'd push back and let the mechanism folks know that their solution for delivering a scoring piece needs to be more robust so the 'bot can approach at more oblique angles. To help visualize how you might accomplish the task, here's a link to a PDF containing a scale Visio drawing of the field: http://kevin.org/frc/frc_2007_field.pdf.
-Kevin
Our team is coming across the same dilemma. However, do you really need four wheel drive to do a turn in place? My team is using a forklift-style drive this year (two drive wheels in the front, steering wheels in the back). The engineers on my team told me that we can turn in place by just turning the steering wheels almost perpendicular to the front and spinning the front wheels in opposite directions (i.e., to turn left in place, spin the left wheel backwards and the right wheel forward). I am sceptical of this method. Will it really work?
But also, I have an idea of how to determine the orientation of the rack/vision target from information from the camera and would like to know the feasibility of it. It draws on the fact that the blob size depends on the angle you're approaching the target from: the blob will be "thinner" if you're approaching from an angle, and wider if you're approaching head on. Do you think it would be possible to determine the angle of the rack based on this information, and the distance?
It seems that our robot will only (feasibly) be able to score head on.
Dominicano0519
15-01-2007, 16:44
One question: is it possible to use feedback from the camera to control the robot, almost like a mini-autonomous mode built into the code?
I.e., have the size of the rectangle tell your robot if it is head on, if the light is to the right, or if it is between two lights. Then use that info to triangulate its position (like Kevin said: it would rotate clockwise until it only has one light in sight, then rotate counterclockwise until it has only one light in sight, then use that to find the distance between the two using some sort of custom equation). Once that distance becomes, let's say, the equivalent of 45°, it would go straight forward and hang the ringer.
I know that it is complicated, but is it possible with the hardware and software?
Kevin Watson
15-01-2007, 17:12
Our team is coming across the same dilemma. However, do you really need four wheel drive to do a turn in place? My team is using a forklift-style drive this year (two drive wheels in the front, steering wheels in the back). The engineers on my team told me that we can turn in place by just turning the steering wheels almost perpendicular to the front and spinning the front wheels in opposite directions (i.e., to turn left in place, spin the left wheel backwards and the right wheel forward). I am sceptical of this method. Will it really work?
It's more complex mechanically, and the software will be a pain, but once dialed in, it should work fairly well.
But also, I have an idea of how to determine the orientation of the rack/vision target from information from the camera and would like to know the feasibility of it. It draws on the fact that the blob size depends on the angle you're approaching the target from: the blob will be "thinner" if you're approaching from an angle, and wider if you're approaching head on. Do you think it would be possible to determine the angle of the rack based on this information, and the distance?
It will be hard to do this because you have too few pixels to work with. As an example, from the closest starting distance possible (~180 inches), the light at 0 degrees only lights up 12 pixels; at 25 degrees, 8 pixels are illuminated. I think you're better off with a scoring mechanism that will work over a greater range of angles.
-Kevin
Uberbots
15-01-2007, 18:31
I've looked through this a bit and basically conceptualized a couple of ways to go about it. First off, if you look at Kevin's old camera code, you will see that all of the data you could ever need is within the t_packet_data structure. The bounding box corners are there, along with the centroid location.
The way I am conceptualizing going about this is pretty much to use size and confidence in order to determine the number of targets. The way I see it, the confidence will decrease when you have a low number of tracked pixels within a large bounding box. When the confidence goes below a certain threshold, the code will know that more than one target is in sight. The next challenge is finding the centroid of each target.
OK, you know that the camera's X boundaries (but not necessarily the Y boundaries) will mark the left edge of the left target and the right edge of the right target. You will not know the height from this data, but you will not need to. The targets are in a fixed aspect ratio (like 2:1 w:h, I believe), and you know how many tracked pixels you have. With this data, by dividing the tracked pixels by two and conforming each half to the aspect ratio, you can get an approximate X location of each target.
If you combine this method with the frame-differencing data, you could probably get the approximate Y value of each target as well. I will have to play around with this method a bit more in LabVIEW before I am able to come up with a conclusive algorithm... but that is what I have for now.
Kevin Watson
15-01-2007, 18:44
I've looked through this a bit and basically conceptualized a couple of ways to go about it. First off, if you look at Kevin's old camera code, you will see that all of the data you could ever need is within the t_packet_data structure. The bounding box corners are there, along with the centroid location.
Yes, it's true! Thanks for noticing.
The way I am conceptualizing going about this is pretty much to use size and confidence in order to determine the number of targets. The way I see it, the confidence will decrease when you have a low number of tracked pixels within a large bounding box. When the confidence goes below a certain threshold, the code will know that more than one target is in sight. The next challenge is finding the centroid of each target.
OK, you know that the camera's X boundaries (but not necessarily the Y boundaries) will mark the left edge of the left target and the right edge of the right target. You will not know the height from this data, but you will not need to. The targets are in a fixed aspect ratio (like 2:1 w:h, I believe), and you know how many tracked pixels you have. With this data, by dividing the tracked pixels by two and conforming each half to the aspect ratio, you can get an approximate X location of each target.
If you combine this method with the frame-differencing data, you could probably get the approximate Y value of each target as well. I will have to play around with this method a bit more in LabVIEW before I am able to come up with a conclusive algorithm... but that is what I have for now.
Excellent analysis. Another bit of data you can use is the location of the centroid within the bounding rectangle. The centroid will be closer to the side of the rectangle with the nearest green light.
-Kevin
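A sketch of that combined idea, with the bounding-box edges giving the outer extents and Kevin's centroid hint telling you which light is nearer. The field names mirror the packet sketch earlier in the thread, and the confidence threshold and width estimate are guesses to tune on the real camera:

/* Same illustrative packet fields used earlier in the thread. */
typedef struct
{
    unsigned char mx, my, x1, y1, x2, y2, pixels, confidence;
} Tracking_Data;

#define TWO_TARGET_CONFIDENCE_MAX 40  /* guess: below this, the box likely spans two lights */

/* If the bounding box seems to span two lights, estimate an X position for
 * each one and report which side is nearer. Returns 1 if two lights were
 * inferred, 0 if the box is dense enough to be a single light. */
int Estimate_Two_Targets(const Tracking_Data *t,
                         unsigned char *left_x,
                         unsigned char *right_x,
                         int *closer_is_left)
{
    unsigned char light_width;

    if (t->confidence >= TWO_TARGET_CONFIDENCE_MAX)
        return 0;

    /* Crude placeholder for one light's apparent width; the aspect-ratio math
     * described above (pixels split in two, conformed to ~2:1) would go here. */
    light_width = (unsigned char)((t->x2 - t->x1) / 6);

    *left_x  = (unsigned char)(t->x1 + light_width / 2);  /* left light hugs x1 */
    *right_x = (unsigned char)(t->x2 - light_width / 2);  /* right light hugs x2 */

    /* Per Kevin's hint, the centroid leans toward the nearer (larger) light. */
    *closer_is_left = ((t->mx - t->x1) < (t->x2 - t->mx));

    return 1;
}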
Kevin Watson
15-01-2007, 18:47
...I am still having a hard time figuring out how you can approach the spider leg head on (not at an angle).
One way to do it is illustrated in the attached illustration.
-Kevin
One way to do it is illustrated in the attached illustration.
-Kevin
Yes, but you can't line up your robot exactly facing forward or the judges can move it. And the rack can be translated and rotated, too. So nothing is a given. I'll know by tomorrow whether our manipulator can handle oblique angles or not. [crosses fingers]
Thanks,
Robinson
Uberbots
15-01-2007, 19:37
Yes, but you can't line up your robot exactly facing forward or the judges can move it. And the rack can be translated and rotated, too. So nothing is a given. I'll know by tomorrow whether our manipulator can handle oblique angles or not. [crosses fingers]
If you load your tube so that it is parallel to the floor, you can essentially load it at any angle. I'm not sure if this should be posted here, but you asked the question. It really is a bad idea to mount it dead on... there are too many accuracy woes to worry about.
maniac_2040
15-01-2007, 20:18
Yes, but you can't line up your robot exactly facing forward or the judges can move it. And the rack can be translated and rotated, too. So nothing is a given. I'll know by tomorrow whether our manipulator can handle oblique angles or not. [crosses fingers]
Thanks,
Robinson
Who said that you can't line up your robot exactly facing forward? :confused:
I thought the general rule was that you couldn't come onto the field with a tape measure and other measuring tools to precisely position your robot. Other than that, you can place it however you want. That is the whole point. Last year, you could aim your robot "exactly" facing toward the corner or center goal so you could score.
If you load your tube so that it is parallel to the floor, you can essentially load it at any angle. I'm not sure if this should be posted here, but you asked the question. It really is a bad idea to mount it dead on... there are too many accuracy woes to worry about.
This is exactly what I am pushing for. However, I am on a robotics team, and if the rest of the team decides that the other alternative on the drawing board (a gripper claw that grips from the inside and loads perpendicular to the floor) is better/easier to make, then so be it. It becomes a software problem. I should know my fate by tomorrow afternoon.
michniewski
15-01-2007, 22:39
The multi-object tracking cmucamera 2 code shown in the 2007 kickoff can be downloaded here:
http://first.wpi.edu/FRC/25814.htm
This link was also accessible from the programming resource library on usfirst.org. Anyway, I'm posting this because the Intelitek website link (http://www.intelitekdownloads.com/easyCPRO/) that was posted earlier, while containing the same code I believe, is currently down due to exceeding its bandwidth.
Michael
1353, Spartans
maniac_2040
16-01-2007, 11:41
Really?
-Kevin
So... IS there going to be multi-object tracking code available (not the easyC version)? Or do we have to modify the existing camera code ourselves?
EHaskins
16-01-2007, 11:50
So... IS there going to be multi-object tracking code available (not the easyC version)? Or do we have to modify the existing camera code ourselves?
If you look at the EasyC code it is very simple. All the information from the camera is available through Kevin's code. I think it took me about 2-3 to get it working.
Kevin Watson
16-01-2007, 12:20
...Or do we have to modify the existing camera code ourselves?
I've been making suggestions on how this could be done in the hope that someone would modify my code (tracking.c) and discuss their modifications here. Given the lack of actual discussion taking place, I'm not sure if teams just went into their "Skunk Works" mode or if teams are just throwing up their collective hands and not working on it.
Anyway, I have an idea for an algorithm that I'll have time to test out over the next few evenings. If it works well, I'll post it on my website. As all good engineers keep a backup plan in their pocket, you might consider implementing one of the other algorithms discussed in the forums, or invent your own. Either way, your time won't be wasted thinking about the problem.
-Kevin
I've been making suggestions on how this could be done in the hope that someone would modify my code (tracking.c) and discuss their modifications here. Given the lack of actual discussion taking place, I'm not sure if teams just went into their "Skunk Works" mode or if teams are just throwing-up their collective hands and not working on it.
Anyway, I have an idea for an algorithm that I'll have time to test out over the next few evenings. If it works well, I'll post it on my website. As all good engineers keep a backup plan in their pocket, you might consider implementing one of the other algorithms discussed in the forums, or invent your own. Either way, your time won't be wasted thinking about the problem.
-Kevin
I'm not sure how many of the teams have gotten to the camera yet. Our team does mainly design and field elements the first week, so we haven't touched the camera yet.
bcieslak
16-01-2007, 13:10
Sounds like we're all thinking along the same track.
Let's say the rack is turned so an unlighted side is directly in front of the robot. The two lights will be on opposite sides of the tracking box and the centroid will be about in the middle. So keep the centroid in the middle and march to the unlighted goal. Too simple??? Perhaps.
BC
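A bare-bones sketch of "keep the centroid in the middle and march." The image center, gain, speed, and drive helper are all assumptions to replace with your own:

#define IMAGE_CENTER_X  79   /* assumed horizontal center of the image, in pixels */
#define STEER_GAIN       2   /* illustrative proportional gain -- tune on the robot */
#define DRIVE_SPEED     60   /* illustrative forward command */

extern void Arcade_Drive(int forward, int turn);  /* hypothetical drive helper */

/* Steer so the blob centroid stays centered while driving forward. With the
 * rack's unlighted side facing the robot, the centroid sits roughly between
 * the two lights, so this marches at the gap. */
void Drive_At_Centroid(unsigned char centroid_x)
{
    int error = (int)centroid_x - IMAGE_CENTER_X;  /* positive: target is to the right */

    Arcade_Drive(DRIVE_SPEED, STEER_GAIN * error);
}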
Another method for approaching the spider leg head on is to know your position on the field (meaning you have to get a gyro and accelerometer/encoder combo working in addition to the camera). If you know your position and the angle of the light you're tracking relative to you, you can determine what angle the light is facing and thus what angle you need to go in at.
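In geometric terms, that might look like the sketch below. Everything here assumes you already have a working field-position estimate; the robot pose, light position, and leg facing are hypothetical inputs, not values the camera gives you:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* How oblique is our current approach? Inputs are field coordinates in inches
 * and angles in radians: the robot position from your gyro/encoder code, the
 * light position worked out from the field drawing, and the direction the
 * spider leg faces (its outward normal). Returns 0 for a dead-on approach. */
double Approach_Error(double robot_x, double robot_y,
                      double light_x, double light_y,
                      double leg_facing)
{
    /* Bearing from the robot to the light. */
    double bearing = atan2(light_y - robot_y, light_x - robot_x);

    /* Driving head-on means our bearing to the light is opposite the leg's
     * outward facing; the difference is how far off-axis we are. */
    double error = bearing - (leg_facing + M_PI);

    /* Wrap into [-pi, pi]. */
    while (error >  M_PI) error -= 2.0 * M_PI;
    while (error < -M_PI) error += 2.0 * M_PI;

    return error;
}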
Guy Davidson
16-01-2007, 14:10
We're currently looking at a number of things. Of particular interest to us is the relationship between the location of the centroid and the locations of the top-left and lower-right corners. We're hoping to use the fact that the centroid is closer to the light with a smaller angle to the robot in order to write a function that takes those three locations and tells us enough about where the lights are to either lock-in on one of them or on the middle.
-Guy
I'm not sure how many of the teams have gotten to the camera yet. Our team does mainly design and field elements the first week, so we haven't touched the camera yet.
We are in a similar situation, and I was hoping to begin camera work this week. Kevin, I'm actually glad that you've decided not to release your camera code right away and have instead just gently prodded (:D) us in the right direction. Though your code is phenomenal and a big help, it has been a lot of fun this year to try to figure out some possible alternative solutions (especially for such an interesting problem).
To all the people who have proposed methods other than the one used by the EasyC code: would you care to share any test results or code for those methods? It would be very time consuming for everyone to test every method for its viability, so perhaps some collaboration would be useful.
I'll try to post some LabVIEW code and C source if I can test the VW thin-slicing method soon. Thanks again to Kevin and everyone else for their input on this topic.
maniac_2040
16-01-2007, 19:03
I've been making suggestions on how this could be done in the hope that someone would modify my code (tracking.c) and discuss their modifications here. Given the lack of actual discussion taking place, I'm not sure if teams just went into their "Skunk Works" mode or if teams are just throwing-up their collective hands and not working on it.
Anyway, I have an idea for an algorithm that I'll have time to test out over the next few evenings. If it works well, I'll post it on my website. As all good engineers keep a backup plan in their pocket, you might consider implementing one of the other algorithms discussed in the forums, or invent your own. Either way, your time won't be wasted thinking about the problem.
-Kevin
Well, it's not like I wasn't/haven't been thinking about this. I only asked because I was about to start hardcoding and wanted to know if I should wait. It would be stupid to code up a whole solution if I knew that one would already be available in a few days (no need to reinvent the wheel). I have been thinking about this quite a bit.
Brad Voracek
16-01-2007, 22:52
.... Basically, range = (green light height - camera height)/tan(tilt angle), ... tilt angle is the calculated tilt angle derived from the tilt PWM value when the tracking software has a good solution. ...
-Kevin
My question: what is the best way to change those PWM outputs into a usable angle for the tangent function? Our team didn't get to test the camera much last year... so it's all sort of new to me. Obviously it has to be in radians because, correct me if I'm wrong, the tan() function in C works in radians.
I've thought about ways to do this, and it isn't all too complicated... I just need to test it out, and make sure 127 would be the camera pointing parallel with the ground. I'll get to testing it, but I'm wondering what you guys have come up with for this?
Bharat Nain
16-01-2007, 23:46
Well, it's not like I wasn't/haven't been thinking about this. I only asked because I was about to start hardcoding and wanted to know if I should wait. It would be stupid to code up a whole solution if I knew that one would already be available in a few days(no need to reinvent the wheel). I have been thinking about this quite a bit.
You might as well start experimenting. Neither Kevin nor anyone else (apart from maybe the GDC) knows how autonomous will turn out. If Kevin comes up with a better solution for your purpose, use it.
Alan Anderson
17-01-2007, 06:58
My question, what is the best way to change those PWM outputs to a tangible angle for the tangent function to use?
I believe the best way is not to use the tangent function on an angle in the code. It's both easier and quicker to use a small lookup table that turns the camera tilt PWM value into a distance.
You can use a tangent function when you create the lookup table, of course. Or you can fill in the table empirically by repeatedly placing the robot a known distance from the target and observing the tilt angle.
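A sketch of that lookup-table idea. The table values below are placeholders; you would fill them in empirically (or pre-compute them with the tangent formula) and pick a PWM range that matches your own camera mount:

/* Tilt PWM -> distance (inches), sampled every TABLE_STEP PWM counts starting
 * at TABLE_START_PWM. All of the numbers below are illustrative placeholders. */
#define TABLE_START_PWM 144
#define TABLE_STEP        8
#define TABLE_SIZE        8

static const unsigned int range_table[TABLE_SIZE] =
    { 300, 240, 195, 160, 135, 115, 100, 90 };

/* Look up the distance for a tilt PWM value, interpolating between entries. */
unsigned int Range_From_Tilt_PWM(unsigned char tilt_pwm)
{
    unsigned char index, offset;

    if (tilt_pwm <= TABLE_START_PWM)
        return range_table[0];

    index  = (unsigned char)((tilt_pwm - TABLE_START_PWM) / TABLE_STEP);
    offset = (unsigned char)((tilt_pwm - TABLE_START_PWM) % TABLE_STEP);

    if (index >= TABLE_SIZE - 1)
        return range_table[TABLE_SIZE - 1];

    return range_table[index]
         - ((range_table[index] - range_table[index + 1]) * offset) / TABLE_STEP;
}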
Brad Voracek
17-01-2007, 09:40
I believe the best way is not to use the tangent function on an angle in the code. It's both easier and quicker to use a small lookup table that turns the camera tilt PWM value into a distance.
You can use a tangent function when you create the lookup table, of course. Or you can fill in the table empirically by repeatedly placing the robot a known distance from the target and observing the tilt angle.
Ahh, that makes a lot of sense. Thanks.
Kevin Watson
17-01-2007, 11:55
Ahh, that makes a lot of sense. Thanks.
Just FYI, you can create that table, and read from it, using the EEPROM code at http://kevin.org/frc.
Edit: I also wrote some example code that creates an 8- and/or 16-bit sine table in EEPROM with one piece of code and then allows you to quickly do a sin() or cos() lookup using another piece of code (stuff written to EEPROM is permanent until you erase it). I haven't written any formal documentation yet, but I think the code is fairly readable. The code can be found here: http://kevin.org/frc/frc_trig.zip.
-Kevin