Quote:
Originally Posted by PAR_WIG1350
Stick with what you know, especially that which you know but don't know that you know. Think about how you think. How do you tell what an object is? How do you know if it needs to be avoided? In humans, image processing occurs in the occipital lobes of the brain which communicate with the frontal lobes to determine what everything is based on stored information. In your system, the GadgetPC seems to be equivalent to the occipital lobes of the brain. The frontal lobes would be the CRIO, possibly the FPGA. This involves a lot of boolean logic.
Code:
# The nested yes/no tests from the post, flattened into early returns.
def react_to(obj, holding_scoring_object):
    if obj.is_wall():
        return "avoid"
    if obj.is_robot():
        # do something; maybe add an extra "is it an opposing robot?" test here
        return "handle robot"
    if obj.is_scoring_object():
        return "pick it up"
    if obj.is_goal():
        # score only if scoring objects are in possession
        return "score" if holding_scoring_object else "ignore"
    if obj.is_drivable_terrain():
        # part of the field that can be driven over
        return "ignore"
    # anything unrecognized gets avoided
    return "avoid"
Another important part of the brain is the parietal lobe, which processes other sensory information and builds maps of the environment.
Motion would be controlled by another part of the system that uses all of this information gathered by the three "lobes" and maps out the best route to take.
-----------------------------------------------------------------------------------
**note** this isn't exactly how the human brain works; I'm just simplifying it to fit the application better and to avoid confusing some people, myself included.
___________________________________________
I'm sorry if this is hard to follow or makes no sense. I had a hard time wording it, or even figuring out what I was trying to say; maybe I should take a break and come back to it later.
I am assuming the bumper system is going to be mandatory again, so I thought I could just do color detection on the blob and use that to decide whether a robot is an opponent. For detecting the other objects, I have to think a bit more; simply comparing colors is easy enough, but it can be very unreliable.
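As a sketch of what I mean by the bumper color check: compare the mean red and blue content of the blob's pixels and only commit when one channel clearly dominates. All the names here are hypothetical, and pixels are assumed to be (r, g, b) tuples already extracted from the blob:

```python
def classify_bumper(pixels):
    """Guess alliance color from a list of (r, g, b) pixels in the bumper blob.

    Returns 'red', 'blue', or 'unknown' when neither channel clearly dominates.
    """
    if not pixels:
        return "unknown"
    n = len(pixels)
    mean_r = sum(p[0] for p in pixels) / n
    mean_b = sum(p[2] for p in pixels) / n
    # Require a clear margin so glare or gray field walls don't get classified.
    if mean_r > mean_b * 1.5:
        return "red"
    if mean_b > mean_r * 1.5:
        return "blue"
    return "unknown"

# Example: a mostly-red blob
print(classify_bumper([(200, 30, 40), (180, 50, 60), (210, 20, 30)]))  # red
```

The margin factor is exactly the kind of threshold that makes plain color comparison shoddy; it would need tuning under real field lighting.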
I've been looking at the Machine Learning lectures by Andrew Ng at Stanford; I think those will give me better insight into this.
I totally understand what you are saying; my mind works the same way. What I was hoping to do was use the PS3, but that does not seem likely.
There are 8 SPEs, but only 7 are available, which is fine. I was thinking each SPE would be responsible for one part. All the SPEs can access the images without writing to them, so there is no contention since every SPE would only be reading. One SPE could do color detection, another the distances of the objects, another object recognition, etc. They all relay that info to the PPE, which compiles it and does the logic. The PS3 then sends the instructions to the cRIO over Ethernet.
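This isn't Cell code, but the division of labor could be sketched like this (a Python stand-in; on the actual PS3 these would be SPE programs, and the worker functions and the list-of-numbers "frame" are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Each "SPE" task reads the frame but never writes it, so no locking is needed.
def detect_colors(frame):
    return {"colors": sorted(set(frame))}

def estimate_distances(frame):
    return {"nearest": min(frame)}

def recognize_objects(frame):
    return {"objects": len(frame)}

def ppe_compile(frame):
    """The 'PPE': fan the frame out to the workers and merge their reports."""
    tasks = (detect_colors, estimate_distances, recognize_objects)
    report = {}
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        for result in pool.map(lambda f: f(frame), tasks):
            report.update(result)
    return report  # this combined report is what would go to the cRIO

frame = [5, 3, 9, 3]  # stand-in for image data
print(ppe_compile(frame))
```

The key property the sketch shows is the one from the paragraph above: all workers share the frame read-only and only the compiling step merges results, so no synchronization on the image itself is needed.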
Now that seems like a stretch, but honestly I like to aim high. I am a quarter of the way through the MIT PS3 lectures, and I have learned so much just from that LOL.
If the PS3 is legal and DC-to-AC inverters are legal, I can go ahead with this. The only problem: it's my only PS3, so do I want to risk it getting crushed or something? I also have two options: run Linux or go the homebrew route. I have a 60 GB Japanese launch PS3; I still run Linux on it and have not updated it. The Linux libraries for the PS3 are well documented and very thorough (IBM wrote them), whereas I would assume the homebrew route has incomplete and shady libraries since it's in its infancy. The downside of Linux on the PS3 is the boot time: it takes at least 45 seconds, while the game OS takes only 2 seconds.