Chief Delphi > Technical > Programming
#1
22-06-2014, 01:30
yash101
Curiosity | I have too much of it!
AKA: null
no team
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
A Vision Program that teaches itself the game

I am taking an AI/ML course online, and I am wondering: would it be FEASIBLE (I know it is possible) to create a vision system that learns the game by itself?

While it seems quite hard, instead of writing a different program every year, it might be possible to write one program that can learn to play any game.

Such a program would need to be taught the game, i.e. what it needs to do. However, it would also need to learn how to use itself.

The last question I have is: has this ever been done? It seems extremely hard, so it would be pointless to reinvent the wheel.

What would be the best environment to build something like this in? Currently I am learning with Octave; however, OpenCV seems to have a lot of useful components, including its ML module.
#2
22-06-2014, 01:57
EricH
New year, new team
FRC #1197 (Torbots)
Team Role: Engineer
Join Date: Jan 2005
Rookie Year: 2003
Location: SoCal
Posts: 19,693
Re: A Vision Program that teaches itself the game

I'm not a programmer, but no.

The reason is that while a single program may learn the game every year, it has to adapt to the different robots that are built. Some things stay the same and other things change; sometimes pneumatics are an advantage and sometimes not, for example. So the program will need to be changed to fit the robot every year, regardless of whether the game changes are minor or major.

Now add the fact that no robot in FRC history has ever been fully autonomous beyond autonomous mode or a "drive straight and don't stop" routine, and the odds are VERY against you actually pulling it off this side of grad school.
__________________
Past teams:
2003-2007: FRC0330 BeachBots
2008: FRC1135 Shmoebotics
2012: FRC4046 Schroedinger's Dragons

"Rockets are tricky..."--Elon Musk

#3
22-06-2014, 02:20
SoftwareBug2.0
Registered User
AKA: Eric
FRC #1425 (Error Code Xero)
Team Role: Mentor
Join Date: Aug 2004
Rookie Year: 2004
Location: Tigard, Oregon
Posts: 485
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by yash101 View Post
I am taking an AI/ML course online, and I am wondering: would it be FEASIBLE (I know it is possible) to create a vision system that learns the game by itself?

While it seems quite hard, instead of writing a different program every year, it might be possible to write one program that can learn to play any game.

Such a program would need to be taught the game, i.e. what it needs to do. However, it would also need to learn how to use itself.

The last question I have is: has this ever been done? It seems extremely hard, so it would be pointless to reinvent the wheel.

What would be the best environment to build something like this in? Currently I am learning with Octave; however, OpenCV seems to have a lot of useful components, including its ML module.
I would not go into this sort of project with any expectation of success. I've fiddled a bit with general game-playing programs; in fact, I wrote one for a science fair when I was in high school. It was successful, but the success criterion was that the results were statistically better than random moves. That's a much lower bar than I would feel safe with for something controlling a 120 lb. mobile robot.

I know that a couple of years ago, state-of-the-art game-playing systems could choke unexpectedly on even relatively simple board games. Unless things have improved by leaps and bounds in the last couple of years, I wouldn't even want to be in the same room as a robot controlled by one of these things.

Possibly interesting:
http://en.wikipedia.org/wiki/General_game_playing
http://games.stanford.edu/index.php/...tition-aaai-14
#4
22-06-2014, 04:31
yash101
Curiosity | I have too much of it!
AKA: null
no team
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: A Vision Program that teaches itself the game

I know that it would be necessary to at least program in all the I/O, etc. However, I believe the best robot would be one that gets better at the game with experience. First match: roaming in circles, not knowing what to do. Last match: game pro, beating any robot that tries to win!

I want to get my vision program for next year rolled out with a bit of ML, so that it can learn how to do better the next time. That is why the computer that plays checkers was so good at playing checkers.
#5
22-06-2014, 15:06
Steven Smith
Registered User
FRC #3005 (RoboChargers)
Team Role: Mentor
Join Date: Apr 2013
Rookie Year: 2013
Location: Dallas, TX
Posts: 208
Re: A Vision Program that teaches itself the game

Simply put, human brains are still much better at some things than computers... so within the context of FRC, your answer is no. For a game as complex as an FRC game, do not expect a fully autonomous robot control system to outperform the combination of a human brain (or brains) plus control assist.

Computers tend to excel in games that are extremely well defined with few variables. For chess/checkers/backgammon, you may only have a handful of possible moves to a few handfuls of spaces. A basic player is capable of looking at those moves and determining which is "best" right now. An expert player or computer iterates that forward, analyzing several layers deep. If I do this, my opponent's options change from set X to set Y, which gives me another set of options, etc. You can essentially play the game out for each of the possible moves, and look at which of your current moves has the best outcome.
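The "play the game out for each of the possible moves" idea above can be sketched in a few lines. This is a toy minimax-style search on a Nim-like game (a pile of stones, each player removes 1-3, whoever takes the last stone wins), chosen purely as an illustration; it is not an FRC game or anything from this thread:

```cpp
// Toy look-ahead search: returns true if the player to move can force a win
// in "take 1-3 stones, taking the last stone wins". Each possible move is
// played out recursively; a move that leaves the opponent in a losing
// position is a winning move for us.
bool canWin(int n) {
    if (n <= 0) return false;            // the previous player took the last stone
    for (int take = 1; take <= 3 && take <= n; ++take)
        if (!canWin(n - take))           // this move leaves the opponent losing
            return true;
    return false;
}
```

Chess engines do the same thing with a depth cutoff and a board-evaluation function, since the full game tree is far too large to play out.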

If you are interested in this topic, which has intrinsic value (even if I wouldn't recommend applying it at the level you propose), I'd recommend writing a few game solver applications first. Start with a puzzle solver (like Sudoku) where you are essentially writing an algorithm to find the single "right" answer.

Approaching a new game with a computer-like mindset could also be fun. Just start describing your action table as you play out the game. If I'm located at mid-field and my opponent is between me and the goal, what are my options? What are his options? Is he faster than me? Does he have more traction or weight than me? Is he taller or shorter? Generally, if you are not capable of explaining all these things in words, adding the complexity of a computer will not help you. However, the process of describing them might lead to good strategies, whether implemented by a computer or a human driver.

-Steven
__________________
2013 - 2016 - Mentor - Robochargers 3005
2014 - 2016 - Mentor - FLL 5817 / 7913
2013 - Day I Die - Robot Fanatic
#6
22-06-2014, 15:09
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: A Vision Program that teaches itself the game

Ok, I'm on my phone; let's see how this goes. It's storming at work, so I have time.

Machine learning for vision is rather common, but the approach you want to take isn't feasible due to your lack of training examples. The rule of thumb is that you need at least 50 training examples to start learning from, and you simply won't have enough to get a result worth the effort, or any noticeable result at all.

Moving on. You can use machine learning to calculate distance from characteristics in the image, but you have to have training examples. So you'd go out and record contour characteristics such as height, width, area, and center x and y, then manually input the distance. You do this from as many points as you can possibly bear to. Then you run a gradient descent algorithm (regression) or apply the normal equation. You can scale your data if you don't think the relationship is linear, such as taking the natural log of contour height; for this example you are dealing with 6 dimensions, so it is impossible to visualise, and you just have to guess what scaling is needed. Then you look at the squared error (predicted - actual)^2; the difference (predicted - actual) is your residual, and you want it as close to zero as possible. This can also be applied to game pieces.

Another application is shooting game pieces. You have a chart of inputs such as motor speed, angle, and distance, and the output is a 1 or 0: making the basket or missing. You now have a 3D plot, and there exists a line (or multiple lines, virtually the same) in 3D space that guarantees making all your shots (given your robot is 100% consistent).

Another type of AI is path planning. If you have a depth map of all the objects in front of you, then you can apply A* path planning to get to a certain location on the field, given you have a means of knowing where you are on the field (cough cough, encoders on undriven wheels, or a vision pose calculation).
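For the curious, A* on a grid is compact enough to show in full. Everything here is illustrative: the grid, walls, and coordinates in any example you run are made up, and a real field map would come from the depth data discussed above:

```cpp
#include <array>
#include <climits>
#include <cstdlib>
#include <functional>
#include <queue>
#include <vector>

// A* over a small occupancy grid (the field-map idea from the post).
// 4-connected moves with unit cost and a Manhattan-distance heuristic,
// which is admissible here, so the first time the goal is popped the
// cost is optimal. Cells: 0 = free, 1 = obstacle.
// Returns the path cost, or -1 if the goal is unreachable.
int aStar(const std::vector<std::vector<int>>& grid,
          int sr, int sc, int gr, int gc) {
    const int R = grid.size(), C = grid[0].size();
    auto h = [&](int r, int c) { return std::abs(r - gr) + std::abs(c - gc); };
    std::vector<std::vector<int>> best(R, std::vector<int>(C, INT_MAX));
    using Node = std::array<int, 4>;              // {f = g + h, g, row, col}
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
    open.push({h(sr, sc), 0, sr, sc});
    best[sr][sc] = 0;
    const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        Node n = open.top(); open.pop();
        int g = n[1], r = n[2], c = n[3];
        if (r == gr && c == gc) return g;         // goal reached
        if (g > best[r][c]) continue;             // stale queue entry
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= R || nc < 0 || nc >= C || grid[nr][nc]) continue;
            if (g + 1 < best[nr][nc]) {           // found a cheaper way in
                best[nr][nc] = g + 1;
                open.push({g + 1 + h(nr, nc), g + 1, nr, nc});
            }
        }
    }
    return -1;
}
```

The fringe ("open" set) is the priority queue; its size is what the memory discussion later in this thread is about.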

I might have forgotten some things. Feel free to ask questions.

Disclaimer: all these calculations can be done virtually instantly using Octave or MATLAB. The A* is a bit more intensive; it is an iterative algorithm, to my understanding.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."

Last edited by faust1706 : 22-06-2014 at 15:16.
#7
22-06-2014, 15:56
Bpk9p4
Registered User
FRC #1756
Team Role: Mentor
Join Date: Jan 2013
Rookie Year: 2010
Location: Illinois
Posts: 270
Re: A Vision Program that teaches itself the game

This is possible. A couple of years ago I made a pong game that taught itself how to move the paddle to block the ball. It taught itself with a neural network; the fitness was based on how long it could play without losing.
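For anyone curious what the learning machinery inside such a network looks like, here is a minimal sketch. Note the difference in approach: Bpk9p4's paddle scored whole networks with a fitness function (a neuroevolution setup), whereas this sketch trains one tiny 2-2-1 network by backpropagation on XOR; the network structure is the same either way. All the starting weights are arbitrary hand-picked values:

```cpp
#include <cmath>
#include <utility>

double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Trains a 2-2-1 feedforward network on XOR with per-sample backprop.
// Returns (squared error after the first epoch, error after the last),
// so a caller can check that training actually reduced the error.
std::pair<double, double> trainXor(int epochs = 20000, double lr = 0.5) {
    const double X[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    const double Y[4]    = {0, 1, 1, 0};
    // asymmetric initial weights (symmetric ones would never break the tie)
    double w1[2][2] = {{0.5, 0.9}, {-0.4, 0.8}}, b1[2] = {0.1, -0.1};
    double w2[2] = {0.6, -0.7}, b2 = 0.05;
    double firstErr = 0, err = 0;
    for (int epoch = 0; epoch < epochs; ++epoch) {
        err = 0;
        for (int s = 0; s < 4; ++s) {
            // forward pass: hidden layer, then output
            double h[2];
            for (int j = 0; j < 2; ++j)
                h[j] = sigmoid(w1[j][0]*X[s][0] + w1[j][1]*X[s][1] + b1[j]);
            double out = sigmoid(w2[0]*h[0] + w2[1]*h[1] + b2);
            err += (out - Y[s]) * (out - Y[s]);
            // backward pass: propagate the squared-error gradient
            double d2 = (out - Y[s]) * out * (1 - out);
            for (int j = 0; j < 2; ++j) {
                double d1 = d2 * w2[j] * h[j] * (1 - h[j]);
                w2[j]    -= lr * d2 * h[j];
                w1[j][0] -= lr * d1 * X[s][0];
                w1[j][1] -= lr * d1 * X[s][1];
                b1[j]    -= lr * d1;
            }
            b2 -= lr * d2;
        }
        if (epoch == 0) firstErr = err;
    }
    return {firstErr, err};
}
```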
#8
23-06-2014, 01:32
yash101
Curiosity | I have too much of it!
AKA: null
no team
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: A Vision Program that teaches itself the game

http://en.wikipedia.org/wiki/A*_search_algorithm

^^ That seems like something I want in next year's program. I would like to have a tablet PC for the driver station, with the robot constantly generating a map of the field. If you click a location on the field on the tablet, the robot could automatically navigate there with high accuracy.

However, for that to be possible, the program would need to know where all the obstacles are. How do you suggest getting the exact position of other robots and field elements? Should I have a Kinect (or a couple) outputting the distance to all the field elements?

This gives me another question. What does the Kinect distance map look like? How do you get the distance measurement from a single pixel?
#9
23-06-2014, 01:52
NWChen
Alum
no team
Join Date: Oct 2012
Rookie Year: 2012
Location: New York City
Posts: 205
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by yash101 View Post
the program would need to know where all the obstacles are. How do you suggest getting the exact position of other robots and field elements?
In addition to locating other robots and field elements, you also need to know the position of your own robot, e.g. with simultaneous localization and mapping.
__________________
2012 - 2015 • Team 2601

#10
23-06-2014, 12:31
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by NWChen View Post
In addition to locating other robots and field elements, you also need to know the position of your own robot, e.g. with simultaneous localization and mapping.
I don't know if you picked this up from my long post, but the method I proposed was undriven wheels with encoders, or doing a pose calculation on a vision target. If you're really wondering what camera pose is....


My team last summer got a bird's-eye view of the objects in front of a Kinect working: http://www.chiefdelphi.com/media/photos/39138 The next step was to implement A* path planning, but we never got it to work (it is still on our to-do list). (The objects in view are soccer balls; that is why they are all the same size in the top view.)

On a side note: SLAM is so cool. For anyone interested:

Quote:
Originally Posted by yash101 View Post

This gives me another question. What does the Kinect distance map look like? How do you get the distance measurement from a single pixel?
Yash, check the Dropbox for TopDepthTest; it is the program that the image I linked to is from. (PM me your email if you want to be included in the Dropbox. It has 23 sample vision programs, ranging from our 2012-2014 code, to game piece detection for 2013 and 2014, to depth programming. I passed the torch of computer vision to a student who uses GitHub, so don't be surprised if it gets switched over.) The Kinect depth map encodes distance as a pixel value (colour), for those of you who aren't aware.

Here is the code to calculate distance from the intensity of a pixel:

// read the raw depth pixel at the contour centre...
Scalar intensity = depth_mat2.at<uchar>(center[i]);
// ...then convert the 8-bit Kinect depth value to a distance in cm
double distance = 0.1236 * tan(intensity[0] * 4 / 2842.5 + 1.1863) * 100;

center[i] is the center of a contour (an object of interest that passed all of our previous tests); it has x and y components.

The Kinect is rather intensive. We ran 3 cameras this year and analysed every aspect of the game we possibly could with vision, and we got 8 fps on an ODROID. You'd most certainly need multiple on-board computers to handle multiple Kinects, but that may not be necessary if you only plan to move forward and you don't have omnidirectional drive capabilities.

I'm waiting for the Cheesy Poofs to release their amazing autonomous code so I can apply it to autonomous path planning (instead of their predrawn paths).

Quote:
Originally Posted by JohnFogarty View Post
I have to say a Kinect for the first bit would not be my first choice. I got the chance to use a lidar this season and boy was it nice.
There are other alternatives to the Kinect; I personally prefer the Asus Xtion. It is smaller, faster, and lighter.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."

Last edited by faust1706 : 23-06-2014 at 19:31.
#11
26-06-2014, 20:01
MatthewC529
Lcom/mattc/halp;
AKA: Matthew
FRC #1554 (Oceanside Sailors)
Team Role: Mentor
Join Date: Feb 2014
Rookie Year: 2013
Location: New York
Posts: 39
Re: A Vision Program that teaches itself the game

I am not going to add to the reasons why you shouldn't do it at the scale you are seeking (because I think the idea and concept are awesome), but I will say one thing. I have worked with AI pathfinding algorithms to a decent extent as a game programmer; I was freelancing and did work implementing specific AI algorithms and various game mechanics.

You have limited memory on an embedded system like the roboRIO. Of course the roboRIO is a massive step up, but I am talking about 2 GB RAM vs. 256 MB RAM. A* is, in its most basic form, an informed Dijkstra pathfinding algorithm: where plain Dijkstra expands outward by movement cost alone, A* adds a heuristic estimate of the remaining cost to guide the search. Depending on your method you will usually get an O((V+E)log V) or even O(V^2) algorithm. Pathfinding is an expensive task, and even if the field were a perfect size where a 64 px by 32 px grid worked, you could end up with an extremely large fringe if enough obstacles exist. In certain scenarios this could take a large chunk of an autonomous period, and if proper threading isn't implemented it could cripple your teleoperated period if you have to wait too long for the calculations to finish on a dynamically changing field of non-standard robots.

Also, this could work for shooting, but if the game calls for a much different scoring system then your AI and learning may be crippled even further by complexity... And you don't want a friendly memory error popping up and killing your robot for that round.

It's an awesome idea and you should definitely follow through, but probably not immediately on a 120 lb. robot. Experiment first with game algorithms and get used to implementing them in an efficient and workable way; then move to the robot, where efficiency will really matter. I can't speak for how efficient you will need to be... again, game developer... but I really like your grid-of-pixels concept. Just be wary of how much time it will take and of the maintainability of your code.
#12
26-06-2014, 22:33
Ginto8
Programming Lead
AKA: Joe Doyle
FRC #2729 (Storm)
Team Role: Programmer
Join Date: Oct 2010
Rookie Year: 2010
Location: Marlton, NJ
Posts: 174
Re: A Vision Program that teaches itself the game

Aside from the many technical limitations, there is one glaring barrier to such a learning system. Vision systems play very specific roles in each game, and in each robot. They typically track geometric, retroreflective targets, but the vision systems my team has created have had no say in the robot's logic -- they effectively turn the camera from an image sensor into a target sensor, streaming data about where the targets are back to the robot. For a vision system to learn the game, it must learn not only what the targets look like, but also what data the robot's central control needs -- whether it wants "is the target there?" data like this year's hot goals, or "at what angle is the target?" as in Rebound Rumble and Ultimate Ascent. Any learning system requires feedback to adapt, and when it has to learn so many different things, designing that feedback system would be at least as complex as making a new vision system, and certainly more error-prone.
__________________
I code stuff.
#13
27-06-2014, 01:10
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by Ginto8 View Post
For a vision system to learn the game, it must learn not only what the targets look like, but also what data the robot's central control needs -- whether it wants "is the target there?" data like this year's hot goals, or "At what angle is the target?" as in Rebound Rumble and Ultimate Ascent. Any learning system requires feedback to adapt, and when it has to learn so many different things, designing that feedback system would be at least as complex as making a new vision system, and certainly more error-prone.
The two tasks you just described are not in themselves difficult to achieve through a vision program (one example method is called cascade training); the real problem is how the robot would act on the result. That part would be a no-brainer for Yash; in fact, he has already done it for the 2014 game, if I remember correctly. But that only looks at one aspect of the game. The robot also has to know what is in front of it, find game pieces, know whether it has game pieces, and go where it needs to go to score or pass. We did most of this in our code this year with 3 cameras, and we were lucky to get 10 fps. It would take months at least for there to be enough generations of the learning algorithm to show any noticeable result.

Quote:
Originally Posted by MatthewC529 View Post
Its an awesome idea and you should definitely follow through but probably not immediately on a 120 lbs. robot. Experiment first with Game Algorithms and get used to implementing it in an efficient and workable way, then move to the robot where efficiency will really matter. I cant speak for how efficient you will need to be... again... Game Developer but again I really like your concept of pixels but I think you should be wary of how much time and the maintainability of your code.
Isn't there a simulation of each year's game? In my mind, that would be the perfect place to start.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
#14
27-06-2014, 00:37
SoftwareBug2.0
Registered User
AKA: Eric
FRC #1425 (Error Code Xero)
Team Role: Mentor
Join Date: Aug 2004
Rookie Year: 2004
Location: Tigard, Oregon
Posts: 485
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by MatthewC529 View Post
You have limited memory on an embedded system like the roboRIO. Of course the roboRIO is a massive step up, but I am talking about 2 GB RAM vs. 256 MB RAM. A* is, in its most basic form, an informed Dijkstra pathfinding algorithm: where plain Dijkstra expands outward by movement cost alone, A* adds a heuristic estimate of the remaining cost to guide the search. Depending on your method you will usually get an O((V+E)log V) or even O(V^2) algorithm. Pathfinding is an expensive task, and even if the field were a perfect size where a 64 px by 32 px grid worked, you could end up with an extremely large fringe if enough obstacles exist.
I don't quite understand what the big deal is. A 64x32 grid is only 2048 nodes; I'd expect that you could have an order of magnitude more before you ran into speed problems. I also don't think you'd have memory issues. If you assume you have 256 MB of memory, half of which is already used, and 2048 nodes, that works out to 64 KB per node. That seems like plenty.
#15
27-06-2014, 05:10
yash101
Curiosity | I have too much of it!
AKA: null
no team
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by MatthewC529 View Post
I am not going to add to the reasons why you shouldn't do it at the scale you are seeking (because I think the idea and concept are awesome), but I will say one thing. I have worked with AI pathfinding algorithms to a decent extent as a game programmer; I was freelancing and did work implementing specific AI algorithms and various game mechanics.

You have limited memory on an embedded system like the roboRIO. Of course the roboRIO is a massive step up, but I am talking about 2 GB RAM vs. 256 MB RAM. A* is, in its most basic form, an informed Dijkstra pathfinding algorithm: where plain Dijkstra expands outward by movement cost alone, A* adds a heuristic estimate of the remaining cost to guide the search. Depending on your method you will usually get an O((V+E)log V) or even O(V^2) algorithm. Pathfinding is an expensive task, and even if the field were a perfect size where a 64 px by 32 px grid worked, you could end up with an extremely large fringe if enough obstacles exist. In certain scenarios this could take a large chunk of an autonomous period, and if proper threading isn't implemented it could cripple your teleoperated period if you have to wait too long for the calculations to finish on a dynamically changing field of non-standard robots.

Also, this could work for shooting, but if the game calls for a much different scoring system then your AI and learning may be crippled even further by complexity... And you don't want a friendly memory error popping up and killing your robot for that round.

It's an awesome idea and you should definitely follow through, but probably not immediately on a 120 lb. robot. Experiment first with game algorithms and get used to implementing them in an efficient and workable way; then move to the robot, where efficiency will really matter. I can't speak for how efficient you will need to be... again, game developer... but I really like your grid-of-pixels concept. Just be wary of how much time it will take and of the maintainability of your code.
I actually wanted to treat this like a game; that is why I thought of creating a field grid. Are you saying that 2 GB of RAM won't be enough? The program will have access to 1 GB in the worst-case scenario, and the data collection using OpenCV will use well under 16 MB of RAM.

If you are saying that A* is too inefficient, what do you suggest I try instead? If anything, I could have 3 computers: vision processor, AI, and cRIO.

Also, 64 by 32 px was just a crude example. By testing the performance of the system, I could tell whether I need to reduce the resolution. Otherwise, I could treat each cell as simply passable or not.

My programmer buddy and I would like to use an NVIDIA Jetson dev board. Should we use that for AI, or for vision processing? We can use an ODROID for the other task!

I have already figured out how to use OpenCV effectively and optimize it for very high performance. I can use a configuration file to make the same setup track multiple target types, and I understand how to use OpenCV to get accurate target data even if the target is tilted!
Closed Thread