Yeah, that’d do it - I’m guessing you mean that it takes the coordinates of the green it’s found, then looks for red/blue at nearby coordinates to verify whether or not it’s your tetra.
But there are other tetras about (on the loading stations) and probably other red(/blue) objects, e.g. the flashing lights on each robot in your alliance. There’s a good chance the module will pick up those reds(/blues) unless they’re different enough from the red(/blue) of the tetra. As far as I know, the camera module can only output one set of coordinates when you tell it to track a colour.
That sounds a bit confusing. What I mean is:
- You tell the camera module to track green, which it duly does. Other robots aside, there isn’t any other green in the arena. It returns a coordinate of the centre value of a tetra.
- Now you want it to look for red or blue, to see if that coordinate is near the previous one. But there are plenty of reds and blues about, in the form of the tetras on the loading stations and the flashing lights of other robots. You can’t tell the module to restrict its search to a specific area of the image.
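To make the weakness concrete, here’s a minimal sketch of the two-step check as I understand it. The `track` function is a stand-in I’ve made up for the camera: something that takes a colour name and returns the single (x, y) centroid the module reports for that colour. The distance threshold is arbitrary.

```python
import math

def looks_like_our_tetra(track, alliance_colour, max_dist=20):
    """Verify a green hit by checking that the alliance colour's
    centroid sits near the green centroid.

    track: stand-in for the camera - takes a colour name, returns
    the one (x, y) centroid the module reports for it.
    """
    gx, gy = track("green")
    ax, ay = track(alliance_colour)  # "red" or "blue"
    # The weakness: this red/blue centroid could just as easily come
    # from a loading-station tetra or a robot's flashing light, since
    # we can't restrict where in the image the module looks.
    return math.hypot(ax - gx, ay - gy) <= max_dist
```

If a loading-station tetra happens to dominate the red reading, the check passes or fails based on where *that* object is, not where your vision tetra is.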
The alternative is to drive right up to the tetra and check its colour directly, but that wastes critical time in your 15 seconds.
Actually, this makes me think of another possible problem with how the module locates colour. It returns the centre of a box that bounds all the pixels it’s found of your colour. So if you’re looking for green, won’t it pick up both of your alliance’s vision tetras and give a value somewhere in between the two (and that’s not even considering the other alliance’s vision tetras)?
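Here’s a quick sketch of that bounding-box behaviour - my own illustration, not the module’s actual code - showing that the centre of a box around *all* matching pixels lands in the empty space between two separate green objects:

```python
def bounding_box_centre(pixels):
    """Centre of the box bounding all matching pixels, which is
    (as far as I understand) roughly what the module reports."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)

# Two green tetras on opposite sides of the image:
left_tetra = [(10, 50), (12, 52), (11, 51)]
right_tetra = [(90, 48), (92, 50)]

centre = bounding_box_centre(left_tetra + right_tetra)
# centre = (51.0, 50.0): between the two tetras, on neither of them
```

So a single reported centroid isn’t even guaranteed to be on a tetra at all.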
Justyn.
p.s. sorry to hear that even the you-ess of aye appends useless letters to otherwise perfectly serviceable words.
p.p.s. didn’t want to say “football” lest it be mistaken for an American pastime of the same name, which involves lots of men who want to play rugby but are too scared, and pad themselves up until they look really silly.
p.p.p.s. You been to Britain then, nartak?