  #20
27-06-2014, 16:46
MatthewC529
Lcom/mattc/halp;
AKA: Matthew
FRC #1554 (Oceanside Sailors)
Team Role: Mentor
 
Join Date: Feb 2014
Rookie Year: 2013
Location: New York
Posts: 39
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by yash101 View Post
I actually wanted to treat this like a game. That is the reason why I thought of creating a field grid. Are you saying that 2GB of RAM won't be enough? The program will have access to 1GB in the worst-case scenario. The data collection using OpenCV will use well under 16 MB of RAM.

If you are saying that A* is too inefficient, what do you suggest I should try instead? If anything, I could have three computers -- vision processor, AI, and cRIO.

Also, 64 by 32 px was just a crude example. By testing the performance of the system, I could tell whether I need to reduce the resolution or what. Otherwise, I could treat everything as either go there or not.

My buddy programmer and I would like to use an nVidia Jetson Dev board. Should we use that for AI, or vision processing? We can use an ODROID for the other task!

I have already figured out how to use OpenCV effectively and optimize it for very high performance. I can use a configuration file to make the same setup track multiple target types, and I understand how to use OpenCV to get accurate target data even if the target is tilted!
It depends. Personally, from a quick skim of their specifications, I would use the Jetson dev board for vision and the ODROID for AI if you are going to separate it that way, but I would need to look at them more closely.

A* is likely your best bet. Pathfinding algorithms are known for being either time-consuming (if memory-restricted) or memory-consuming (if you want speed), and you are right to look at this as if it were video game AI. A* is commonly used because it is fast while being reasonably smart. I would recommend brushing up on the three basics and choosing from them: Dijkstra, A*, and Best-First Search. Each has trade-offs; most simply, you either get slow with the best path or fast with a good-enough path. If you have the ability to multi-thread across several CPUs, you could possibly get away with a multi-threaded Dijkstra approach that quickly searches through the fringe and determines the true shortest path, but sticking to A* might be your best bet. A bare-bones grid A* sketch is below.
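Just to ground the comparison, here is a minimal grid A* sketch in Python with a Manhattan-distance heuristic. The grid, start/goal coordinates, and 4-connected movement are placeholders I made up for illustration, not anything from your actual setup:

Code:
# Minimal A* over a 2D occupancy grid (0 = free cell, 1 = blocked cell).
# Grid contents, start/goal, and the heuristic are placeholder assumptions.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible for 4-connected, unit-cost grids
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            # Walk the parent links back to the start to recover the path
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nbr, float('inf')):
                    best_g[nbr] = ng
                    came_from[nbr] = cell
                    heapq.heappush(open_heap, (ng + h(nbr), ng, nbr))
    return None  # goal unreachable

# Example: 4x6 field grid with a wall; drive from top-left to bottom-right
field = [[0, 0, 0, 0, 0, 0],
         [0, 1, 1, 1, 1, 0],
         [0, 0, 0, 0, 1, 0],
         [1, 1, 1, 0, 0, 0]]
print(astar(field, (0, 0), (3, 5)))

Dijkstra is the same loop with h() returning 0, and Best-First is the same loop ordered by h() alone, so it is easy to benchmark all three on your real grid sizes.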

If you separate it onto three computers and each process has access to its own dedicated memory, then you could pull it off in terms of processing power; 1 GB should be plenty, I would think. I am still concerned, though, with how you plan on it being useful outside of an awesome project. On the field I still think it will be hard to make it sufficiently adaptive to a dynamically changing field (though not impossible), and too slow to calculate the best path in a short time frame, though I suppose it also depends on what you consider the best path. I think it's awesome and I do honestly support the idea (because I don't have access to the same materials on my team), just trying to gauge where your head is at.

Also, I agree that if you follow through you will definitely need to constantly tweak (or dynamically update) the resolution of the graph you are analyzing; one way to make that cheap is sketched below.
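For example, one hypothetical way to handle it is to rasterize the obstacle detections into an occupancy grid whose cell count is a runtime parameter, so you can rebuild the grid coarser or finer without touching the vision side. The field dimensions and obstacle format here are assumptions I made up for the sketch:

Code:
# Hypothetical helper: rasterize obstacle rectangles (in field metres)
# into an occupancy grid whose resolution can be changed at runtime.
FIELD_LENGTH_M = 16.46   # assumed field length
FIELD_WIDTH_M  = 8.23    # assumed field width

def build_grid(obstacles, cols, rows):
    """obstacles: list of (x_min, y_min, x_max, y_max) in metres."""
    cell_w = FIELD_LENGTH_M / cols
    cell_h = FIELD_WIDTH_M / rows
    grid = [[0] * cols for _ in range(rows)]
    for x0, y0, x1, y1 in obstacles:
        # Mark every cell the obstacle rectangle overlaps as blocked
        for r in range(int(y0 / cell_h), min(rows, int(y1 / cell_h) + 1)):
            for c in range(int(x0 / cell_w), min(cols, int(x1 / cell_w) + 1)):
                grid[r][c] = 1
    return grid

# Same detections rasterized at a coarse and a finer resolution
coarse = build_grid([(5.0, 2.0, 6.0, 3.0)], 64, 32)
fine   = build_grid([(5.0, 2.0, 6.0, 3.0)], 128, 64)

Even at 128x64 that is only a few thousand cells, so the grid itself is not the memory problem; it is the search time that grows with resolution.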

I do have questions, though, such as how you tested your optimizations and how the data is being collected. For the optimizations, the simplest check is to time the pipeline over a pile of frames before and after each change; a rough harness is below.
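Something along these lines is all I mean; the camera index and HSV thresholds are made-up placeholders, not anything from your configuration:

Code:
# Rough timing harness: average per-frame processing time for one step
# of an OpenCV pipeline. Camera index and HSV bounds are placeholders.
import time
import cv2
import numpy as np

cap = cv2.VideoCapture(0)           # assumed camera index
lower = np.array([40, 80, 80])      # placeholder HSV lower bound
upper = np.array([80, 255, 255])    # placeholder HSV upper bound

frames, total = 0, 0.0
while frames < 300:                 # sample a few hundred frames
    ok, frame = cap.read()
    if not ok:
        break
    t0 = time.time()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)   # the step being measured
    total += time.time() - t0
    frames += 1

cap.release()
if frames:
    print("avg processing time: %.2f ms/frame" % (1000.0 * total / frames))

Run it before and after an optimization on the same footage and you have a number you can actually compare, rather than "it feels faster."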

AI is so hard to discuss since it all depends on your goals and how it needs to adapt to its current scenario.

Last edited by MatthewC529 : 27-06-2014 at 16:47. Reason: That awkward moment where you write an essay instead of a quick response...