2013 Lessons Learned for a Vision Co-Processor

#1 | 03-01-2014, 09:38
Jerry Ballard (FRC #0456, Siege Robotics, Mentor)

On the eve of Kick-Off, we thought it would be a good time to post our lessons learned from last year's competition.

Team 456 (Siege Robotics)

Hardware: PandaBoard running Ubuntu Linux, Logitech C110 webcam
Software: OpenCV, yavta, iniparser, mongoose
Code: C

1) Control your exposure: For consistent results and to avoid oversaturating bright targets, set and control the exposure (integration time) setting on the webcam. We found that an 8ms integration time worked best in most settings. Camera exposure settings were set using yavta (https://github.com/fastr/yavta). Remember, many webcams have auto-exposure enabled by default, and auto-exposure won't let you consistently identify targets in changing lighting conditions (see #2). (A sketch of doing this from C through V4L2 is posted after this list.)

2) Do calibrate at competition: Each competition arena will have different lighting. Take advantage of the time provided before competition begins and calibrate the camera to those conditions. We set up a simple executable script that recorded movies at different camera exposures, which we then ran against our target tracking system off the field. I've attached frame shots of the same field at different camera exposures. Corresponding movies are available; let me know if you would like a copy.

3) Use a configuration file:
We found that a configuration file allowed us to modify target tracking settings during competition without having to recompile code. Recompiling code during competition is rarely a good thing. (An iniparser sketch is posted after this list.)

4) OpenCV isn't always optimized for FRC competition: In the pursuit of faster frame processing, we found that the traditional OpenCV RGB->HSV->Threshold(Value)&Threshold(Hue) pipeline was a major time-eater. We saw a significant performance gain, from 15 fps to 22 fps, by doing the thresholding on Value and Hue while converting from RGB. This reduced the computational load by eliminating the Saturation calculation and skipping obvious non-target pixels. If requested, we can post the algorithm here; it is also available on our GitHub site. (GitHub is awesome!)

5) Avoid double precision:
With a 640x480 input image, most if not all calculations can be done with integer and single-precision arithmetic. Using doubles adds processing time, which means fewer frames per second. (A float-vs-double example is posted after this list.)

6) Data transfer via HTTP was questionable: We used the Mongoose webserver code to transfer target angular coordinates to the control system (https://github.com/cesanta/mongoose). Coding-wise, Mongoose was easy to compile, simple to integrate, and multi-threaded (that is another discussion). During competition, it sometimes appeared that we would not get target lock as quickly as in testing, and we suspect this was related to the HTTP communication between the PandaBoard and the cRIO. We are still trying to decide the best way to communicate between the vision co-processor and the cRIO (looking at UDP next; a minimal UDP sketch follows this list).
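
On tip #1: if you'd rather set exposure from your own C code than shell out to yavta, here's a rough sketch (not our competition code) using the V4L2 control API, which is what yavta drives underneath. The device path and exposure value are assumptions; many UVC webcams take V4L2_CID_EXPOSURE_ABSOLUTE in 100-microsecond units, so 80 would be about 8ms, but list your camera's controls first to be sure.

Code:
/*
**  Sketch: manual exposure via the V4L2 control API.
**  ASSUMPTIONS: /dev/video0 is the webcam, and exposure is in
**  100us units (common for UVC cameras), so 80 ~= 8ms.
*/
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int set_ctrl( int fd, unsigned int id, int value )
{
   struct v4l2_control ctrl;

   ctrl.id    = id;
   ctrl.value = value;
   return ioctl( fd, VIDIOC_S_CTRL, &ctrl );
}

int main( void )
{
   int fd = open( "/dev/video0", O_RDWR );

   if ( fd < 0 ) { perror( "open" ); return 1; }

   /* turn auto-exposure off */
   if ( set_ctrl( fd, V4L2_CID_EXPOSURE_AUTO, V4L2_EXPOSURE_MANUAL ) < 0 )
      perror( "set V4L2_CID_EXPOSURE_AUTO" );

   /* ~8ms integration time, assuming 100us units */
   if ( set_ctrl( fd, V4L2_CID_EXPOSURE_ABSOLUTE, 80 ) < 0 )
      perror( "set V4L2_CID_EXPOSURE_ABSOLUTE" );

   close( fd );
   return 0;
}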
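
On tip #3: since iniparser is in our software list above, here's a minimal sketch of the idea. The file name, section, and key names are made up for illustration, not our actual config.

Code:
/*
**  Sketch: reading tracking settings with iniparser.
**  HYPOTHETICAL vision.ini:
**
**    [tracking]
**    val_thresh   = 100
**    hue_mid      = 85
**    hue_mid_span = 15
*/
#include <stdio.h>
#include <iniparser.h>

int main( void )
{
   dictionary *ini = iniparser_load( "vision.ini" );
   int val_thresh, hue_mid, hue_mid_span;

   if ( ini == NULL ) {
      fprintf( stderr, "cannot load vision.ini\n" );
      return 1;
   }

   /* keys are "section:name"; the last argument is the default */
   val_thresh   = iniparser_getint( ini, "tracking:val_thresh", 100 );
   hue_mid      = iniparser_getint( ini, "tracking:hue_mid", 85 );
   hue_mid_span = iniparser_getint( ini, "tracking:hue_mid_span", 15 );

   printf( "val=%d hue=%d span=%d\n", val_thresh, hue_mid, hue_mid_span );

   iniparser_freedict( ini );
   return 0;
}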
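
On tip #5: one C gotcha worth spelling out is that unsuffixed floating-point constants are doubles, so a stray 0.5 silently promotes the whole expression to double math. A tiny made-up illustration:

Code:
#include <stdio.h>

int main( void )
{
   float px = 123.0f;   /* pixel offset from image center */
   float slow, fast;

   /* 0.5 and 320.0 are double constants: px gets promoted and the
      multiply, divide, and conversion back all run in double */
   slow = px * 0.5 / 320.0;

   /* f-suffixed constants keep everything single precision */
   fast = px * 0.5f / 320.0f;

   printf( "%f %f\n", slow, fast );
   return 0;
}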
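
On tip #6: here's a bare-bones sketch of the UDP direction we're considering, from the co-processor side. The port and message format are placeholders, not a protocol we've fielded (10.4.56.2 is just the standard 10.TE.AM.2 cRIO address for team 456).

Code:
/*
**  Sketch: send one ASCII target report per processed frame
**  over UDP. PLACEHOLDER address, port, and message format.
*/
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main( void )
{
   int sock = socket( AF_INET, SOCK_DGRAM, 0 );
   struct sockaddr_in crio;
   char msg[64];

   if ( sock < 0 ) { perror( "socket" ); return 1; }

   memset( &crio, 0, sizeof(crio) );
   crio.sin_family = AF_INET;
   crio.sin_port   = htons( 1130 );                   /* placeholder port */
   inet_pton( AF_INET, "10.4.56.2", &crio.sin_addr ); /* cRIO address */

   /* e.g. "target_id,azimuth_deg,elevation_deg" */
   snprintf( msg, sizeof(msg), "%d,%.2f,%.2f", 1, -3.25, 12.50 );

   if ( sendto( sock, msg, strlen(msg), 0,
                (struct sockaddr *)&crio, sizeof(crio) ) < 0 )
      perror( "sendto" );

   close( sock );
   return 0;
}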
Attached Thumbnails: framegrab_2ms.jpg, framegrab_4ms.jpg, framegrab_6ms.jpg, framegrab_8ms.jpg, framegrab_16ms.jpg


#2 | 03-01-2014, 10:17
billylo (FRC #0610, Coyotes, Mentor)

thanks for sharing... #priceless
#3 | 03-01-2014, 12:09
mechanical_robot (no team, Driver)

Thank you for sharing this. Very interesting, as I'm considering doing an OpenCV project with my new Raspberry Pi.
#4 | 03-01-2014, 15:33
yash101 (no team)

Thanks. Can you post a link to the algorithm you used? Did you use the web server to communicate with the cRIO or just the DS?
#5 | 03-01-2014, 18:29
JesseK (FRC #1885, ILITE, Mentor)

UDP would work fine. We use raw network programming like that to move a complex data structure between the robot and the driver's station. We could use something fancy like Protobufs, CORBA, or other WSDL-type middleware, but hardcoding the encode/decode order of 4-byte values also works, and that's what we do (a sketch of the idea is below).

As for vision itself, I have no comments.
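
A quick sketch of the idea -- the field names and fixed-point scaling are made up for illustration; the only contract is that both ends agree on the field order and byte order:

Code:
/*
**  Sketch: hardcoded encode/decode of 4-byte values in a fixed
**  order, network byte order. Field meanings are HYPOTHETICAL.
*/
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* htonl / ntohl */

static void encode_u32( uint8_t *buf, uint32_t v )
{
   uint32_t be = htonl( v );
   memcpy( buf, &be, 4 );
}

static uint32_t decode_u32( const uint8_t *buf )
{
   uint32_t be;
   memcpy( &be, buf, 4 );
   return ntohl( be );
}

int main( void )
{
   uint8_t packet[12];

   /* encode: id, azimuth in 1/100 deg (fixed point), distance in mm */
   encode_u32( packet + 0, 1 );
   encode_u32( packet + 4, (uint32_t)(int32_t)-325 );   /* -3.25 deg */
   encode_u32( packet + 8, 4270 );

   /* decode in the same hardcoded order */
   printf( "id=%u az=%d dist=%u\n",
           decode_u32( packet + 0 ),
           (int32_t)decode_u32( packet + 4 ),
           decode_u32( packet + 8 ) );
   return 0;
}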
#6 | 03-01-2014, 19:24
Jerry Ballard (FRC #0456, Siege Robotics, Mentor)

Quote:
Originally Posted by yash101
Thanks. Can you post a link to the algorithm you used? Did you use the web server to communicate with the cRIO or just the DS?
Communication was just with the cRIO LabVIEW controller code.

Here's the algorithm we used for the color conversion:

Code:
#include <opencv/cv.h>   /* OpenCV C API: IplImage, CvMat */

/*
**  Local MACRO defines
*/
#define MIN3(x,y,z)  ((y) <= (z) ? \
                         ((x) <= (y) ? (x) : (y)) \
                     : \
                         ((x) <= (z) ? (x) : (z)))

#define MAX3(x,y,z)  ((y) >= (z) ? \
                         ((x) >= (y) ? (x) : (y)) \
                     : \
                         ((x) >= (z) ? (x) : (z)))

/* 
** prototypes of functions
*/

void T456_change_RGB_to_binary( IplImage *, CvMat *, int, int, int);
void T456_filter_image( unsigned char , unsigned char , unsigned char , 
                 unsigned char *, int, int, int);

/*
**  Filter RGB image, HSV convert, threshold, etc...
*/

void T456_change_RGB_to_binary( IplImage *rgb, CvMat *binary,
                                int val_thresh, int hue_mid_thresh, 
                                int hue_mid_span )
{
  register int y;
  register unsigned char r,g,b;
  register char *data;
  register uchar *bin_data;

  register int total_vals;

  /* 
  **  Point the data pointer into the beginning of the image data
  */
  data = (char*)rgb->imageData;

  /*
  **  Point the output binary image pointer to the beginning of the image
  */
  bin_data = (uchar*)binary->data.ptr;
  
  total_vals = rgb->height * rgb->width;  /* assumes no row padding */

  for ( y = 0; y < total_vals; y++ )  /* every pixel */
  {
     /* grab the bgr values */
     b = data[0];
     g = data[1];
     r = data[2];
     data += 3;

     T456_filter_image( r,g,b, bin_data, val_thresh, 
                        hue_mid_thresh, hue_mid_span );

     /* increment output pointer */
     bin_data++;
  }

}


void T456_filter_image( unsigned char r, unsigned char g, unsigned char b, 
                 unsigned char *binary ,
                 int val_thresh, int hue_mid_thresh, int hue_mid_span)
{
   unsigned char rgb_min, rgb_max, rgb_diff;
   unsigned char hue = 0; 
   unsigned char val = 0; 

   /*
   **  set the default return value to zero 
   */
   *binary = 0;

   /*
   **  get the min and max values of the RGB
   **   pixel
   */
   rgb_min = MIN3( r, g, b );
   rgb_max = MAX3( r, g, b );

   rgb_diff = rgb_max - rgb_min;

   val = rgb_max;

   /* 
   **  This is the trivial case:
   **    zero pixels or value is less than VAL_THRESH
   */
   if ( (val == 0) || (val < val_thresh) ) {
      return;   /* binary = 0 */
   }

   /*
   **  Zero out white pixels 
   **   WARNING (use only if camera is not oversaturated)
   */
   if ( (val >= val_thresh) && (rgb_diff == 0 ) ) 
   {
      *binary = 0;
      return;
   }

   /* 
   ** Compute hue on a 0-255 scale (43 ~ 256/6 per sextant).
   ** Negative intermediates wrap modulo 256 in the unsigned
   ** char, which matches the circular hue wheel.
   */
   if (rgb_max == r) {
       hue = 0 + 43 * (g - b)/(rgb_diff);
   } else if (rgb_max == g) {
       hue = 85 + 43*(b - r)/(rgb_diff);
   } else /* rgb_max == b */ {
       hue = 171 + 43*(r - g)/(rgb_diff);
   }

   /* 
   **  to get to this point, val > val_thresh
   */
   if (    (hue >= ( hue_mid_thresh - hue_mid_span)) 
       && ( hue <= ( hue_mid_thresh + hue_mid_span) ) )
   {
       *binary = 255;
   }
 
   return;
}

#7 | 03-01-2014, 20:41
Joe Ross (FRC #0330, Beachbots, Engineer)

Your tips are useful for vision processing in general. Do you have any tips specific to using a co-processor, i.e. how to power it, how to shut it down cleanly, how to get a development environment up and running quickly, etc.?

You can enclose your code in [ code] tags to keep it formatted.
#8 | 03-01-2014, 23:41
Jerry Ballard (FRC #0456, Siege Robotics, Mentor)

Thanks, Joe, for the formatting tip.

Here are a few more lessons learned related to your questions:

7) Stable and reliable power is required: During competition, the system voltage can vary significantly (dropping below 10V) due to the varying demands of the other powered components. When the input voltage dropped that far, the PandaBoard would lose power and reboot (not good). We used the Mini-Box DC-DC converter (http://www.mini-box.com/DCDC-USB?sc=8&category=981) to solve this problem. The only power issue we had in competition was (we believe) due to a loose power connection into the PandaBoard.

8) Hard stop and rebuild during competition: During the build season, we decided not to find a graceful method to shut down the system during competition. Instead, we let the system hard-stop when the power was cut, then rebooted and fsck'ed (file-system checked) between matches. We also maintained multiple copies of the complete OS on memory cards and often swapped cards between matches. Swapping OS memory cards allowed for simple diagnostics of target tracking and provided redundancy in case of a severe system crash.

9) Diagnostic photos are good: During matches, we captured and saved frames from the camera about once a second (sometimes less often) to help us determine how the targeting system was doing. It turned out that these images were very useful to the pilot/copilot of the robot for quickly replaying the previous match. Visual cueing from these images helped the students better recall what happened during the match (from the robot's point of view). I've attached an example image below. The blue circle is the aim point of the shooter, cross-hairs identify targets in range, the red dot is the predicted frisbee hit, and the green circle is the selected target. (A sketch of the frame-dump routine is posted after this list.)

10) Raspberry Pi didn't work for us:
We spent a lot of time trying to get the RPi to run the targeting code, but as written, our code couldn't achieve the required frame rate. We had set a minimum goal of 15 fps; the most we could squeeze out of the RPi was 10 fps. We all love the RPi, but in this case we couldn't get the required speed.
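
For tip 9, here's a stripped-down sketch of the once-a-second frame dump, using the same OpenCV 2.x C API as the code earlier in the thread. The directory and naming scheme are made up, not our actual ones. Call it once per processed frame from the main capture loop; it returns immediately except when a save is due.

Code:
/*
**  Sketch: save at most one diagnostic frame per second.
**  HYPOTHETICAL output directory and file naming.
*/
#include <stdio.h>
#include <time.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>

void T456_maybe_save_frame( IplImage *frame )
{
   static time_t last_save = 0;
   time_t now = time( NULL );
   char name[64];

   if ( now - last_save < 1 )   /* skip until a second has passed */
      return;
   last_save = now;

   snprintf( name, sizeof(name), "/home/frc/frames/frame_%ld.jpg",
             (long)now );
   cvSaveImage( name, frame, 0 );   /* 0 = default JPEG params */
}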
Attached Thumbnails: frame_00913_003825.jpg
#9 | 04-01-2014, 03:38
SoftwareBug2.0 (FRC #1425, Error Code Xero, Mentor)

Quote:
Originally Posted by Jerry Ballard

10) Raspberry Pi didn't work for us:
We spent a lot of time trying to get the RPi to run the targeting code, but as written, our code couldn't achieve the required frame rate. We had set a minimum goal of 15 fps; the most we could squeeze out of the RPi was 10 fps. We all love the RPi, but in this case we couldn't get the required speed.
How did you determine what speed you needed? Was 10 Hz too slow just because it wasn't meeting your goal of 15, or did you try it and not like the results?
#10 | 04-01-2014, 07:58
MikeE (no team, Volunteer, Engineer)

Quote:
Originally Posted by Jerry Ballard
8) Hard stop and rebuild during competition: During the build season, we decided not to find a graceful method to shut down the system during competition. Instead, we let the system hard-stop when the power was cut, then rebooted and fsck'ed (file-system checked) between matches. We also maintained multiple copies of the complete OS on memory cards and often swapped cards between matches. Swapping OS memory cards allowed for simple diagnostics of target tracking and provided redundancy in case of a severe system crash.

9) Diagnostic photos are good: During matches, we captured and saved frames from the camera about once a second (sometimes less often) to help us determine how the targeting system was doing. It turned out that these images were very useful to the pilot/copilot of the robot for quickly replaying the previous match. Visual cueing from these images helped the students better recall what happened during the match (from the robot's point of view). I've attached an example image below. The blue circle is the aim point of the shooter, cross-hairs identify targets in range, the red dot is the predicted frisbee hit, and the green circle is the selected target.
Thanks for sharing your experiences.
Your success with the hard stop is particularly helpful, as clean shutdown was a significant concern. I like the pragmatic approach of having spare memory cards to swap between matches.

Great tip about the diagnostic images.
#11 | 04-01-2014, 08:55
billbo911 (FRC #2073, EagleForce, Mentor)

Quote:
Originally Posted by Joe Ross
Your tips are useful for vision processing in general. Do you have any tips specific to using a co-processor, i.e. how to power it, how to shut it down cleanly....
Our approach to powering the co-processor was to use the same +5 VDC source on the PDB that powers the Axis camera. Since we used a USB webcam, that output was free.
To power down gracefully, we created a VI that ran only during Disabled mode and was activated by pressing a designated button on one of the joysticks. The VI initiated a socket connection to our co-processor, where a routine constantly listened for a socket request. When it received a request, it ran a "shutdown" command. This was done at the end of every match. (A sketch of the listener side is below.)
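
The co-processor side of that can be as small as this sketch (the port number is arbitrary, and the process needs rights to run the shutdown command):

Code:
/*
**  Sketch: block until anything connects on a designated port,
**  then shut the board down cleanly. ARBITRARY port; must run
**  with privileges sufficient for "shutdown".
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main( void )
{
   int srv = socket( AF_INET, SOCK_STREAM, 0 );
   int one = 1;
   int conn;
   struct sockaddr_in addr;

   setsockopt( srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one) );

   memset( &addr, 0, sizeof(addr) );
   addr.sin_family      = AF_INET;
   addr.sin_addr.s_addr = htonl( INADDR_ANY );
   addr.sin_port        = htons( 5800 );   /* arbitrary port */

   if ( bind( srv, (struct sockaddr *)&addr, sizeof(addr) ) < 0 ) {
      perror( "bind" );
      return 1;
   }
   listen( srv, 1 );

   /* block until the robot-side VI connects */
   conn = accept( srv, NULL, NULL );
   if ( conn >= 0 ) {
      close( conn );
      system( "shutdown -h now" );
   }
   return 0;
}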
#12 | 04-01-2014, 12:49
jesusrambo (FRC #2035, Robo Rockin' Bots, Programmer)

Thanks for posting this!

Considering different co-processors: last year we just did all our processing on the driver station. The Axis camera feed is available directly to it, you can run the processing by just adding it into the dashboard VI, and values can be sent back using NetworkTables. This also has the benefit of letting you use NI Vision and the Vision Assistant to develop your processing algorithms.

Many of your points ring very true, particularly calibration, consistent exposure, and the config file. We had the new dashboard VI store a config file with the HSL thresholding values we used. A widget on the dashboard with sliders and text fields for the upper/lower HSL thresholds would load from the config file when the dashboard was opened and write to it whenever the values were updated.
#13 | 04-01-2014, 14:50
Jerry Ballard (FRC #0456, Siege Robotics, Mentor)

Quote:
Originally Posted by jesusrambo
Considering different co-processors: last year we just did all our processing on the driver station. The Axis camera feed is available directly to it, you can run the processing by just adding it into the dashboard VI, and values can be sent back using NetworkTables. This also has the benefit of letting you use NI Vision and the Vision Assistant to develop your processing algorithms.
I agree, that is another good approach. One of the things I love about FRC is seeing the many different solutions to the same problem.

We did use the Axis camera, but strictly for driver control and contingency manual aiming. Our main goal in 2013 was to add an additional camera dedicated to targeting without increasing bandwidth usage. The only data the vision co-processor put on the network was an ASCII string of target IDs and target position coordinates. For 2013, all the driver had to do was point the robot toward the goals; the automatic targeting/shooting would then take over and shoot autonomously.

Quote:
Originally Posted by SoftwareBug2.0
How did you determine what speed you needed? Was 10 Hz too slow just because it wasn't meeting your goal of 15, or did you try it and not like the results?
From our experience with high-speed movement (and defensive collisions) in competition, 10 Hz isn't enough to keep a persistent target lock. It could be done, but for the 2013 game, speed and accuracy in scoring were everything.
#14 | 14-01-2014, 16:47
sparkytwd (FRC #3574, Mentor)

Awesome work. I love seeing how much is being done with onboard processing on ARM computers.

One of the approaches 3574 took for 2013, after getting blinded at one of the events in 2012, was switching to IR for the retro-illumination. Our camera of choice is the PS3 Eye, and there are many tutorials online for performing this conversion. The other benefit is that the value channel can be used directly: simply taking the red channel is a close enough approximation and very fast (see the sketch below).
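
Here's a rough sketch of that red-channel shortcut, in the same C style as the code earlier in the thread (the threshold value is a guess to tune per venue):

Code:
/*
**  Sketch: binarize straight off the red channel of a BGR frame.
**  With IR retro-illumination the red channel tracks the target
**  brightness closely. Threshold is a GUESS; tune per venue.
*/
#include <opencv/cv.h>

void red_channel_to_binary( IplImage *bgr, CvMat *binary, int thresh )
{
   unsigned char *data     = (unsigned char *)bgr->imageData;
   unsigned char *bin_data = (unsigned char *)binary->data.ptr;
   int n = bgr->height * bgr->width;   /* assumes no row padding */
   int i;

   for ( i = 0; i < n; i++ )
   {
      /* pixel order is B,G,R: red is data[2] */
      bin_data[i] = ( data[2] >= thresh ) ? 255 : 0;
      data += 3;
   }
}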

For powering our ODROID-U2 last year, we used this from Pololu: http://www.pololu.com/product/2110

Even as low as 8 volts from the battery is enough to work with a buck power supply, and they're considerably cheaper. 3.5A was also enough to drive a separately powered USB hub, which we needed after running into stability issues with two cameras.

As far as hard poweroff, that's the way we went as well; however, this article from Bunnie discusses potential issues with removing power abruptly: http://www.bunniestudios.com/blog/?p=2297
#15 | 16-01-2014, 08:46
Dr.Bot (unregistered)

Is it legal to use a BeagleBone as a co-processor? I have had success running ROS and a Kinect off both a BBB and a Pi, but the Pi is underpowered for onboard vision processing. I don't have any PandaBoard experience.

The benefit of ROS is that the code is all open source, though integration into an FRC robot would take some work. The Pi also has a special camera module that connects directly to the board via its CSI port, doesn't it? My thought is that the RPi would be challenged if it were getting its video over USB.

BTW, ROS is the Willow Garage Robot Operating System. It has a fully integrated OpenCV stack plus lots of methods to integrate navigation sensors/encoders. It would be real KA if it could do autonomous mode -- but again, I am uncertain whether that would be legal.