Re: Ball Color Problem
Show of hands: who's keeping track of whether your can of Red Bull is falling onto your car floor when you hit a patch of black ice at rush hour? It'll be difficult for the drivers (who happen to be in control and not paying attention to where the coach is pointing) to know where the balls are coming from, so the color confusion remains. I'm not colorblind; in fact, I'm kind of the opposite, with a photographic memory and synesthesia. But looking at the picture of the balls, my brain kind of deletes the blue and purple unless I really focus, BECAUSE of those conditions. People's minds will inherently go straight to the orange because it's brighter and stands out more, and that will make distinguishing blue from purple nearly impossible.
I like the idea of just swapping the super cells and empty cells. THAT would add to the challenge, because by the last 20 seconds you're TRYING to follow the super cells, and it will be hard for everyone. Tagging the balls would also be good, but I'm a little worried about modifying the weight of the balls, since that would make them less consistent and harder to model accurately across multiple cases in a simulated environment (object iterations in the robot's program).
I think relying on the camera to do color differentiation would be a very fun challenge, but it's not fair to rookie teams who have no CS-major mentors and can barely tell the robot to go forward, much less calibrate vision-algorithm thresholds on the fly. Not to mention it's unfair to teams with five people on them, since this limits the time you can spend on code. It would not be like the GDC to assume teams can use the camera to pinpoint the correct balls. An obvious differentiation or rearrangement of ball types is a very reasonable request.
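To give a sense of why on-the-fly threshold calibration is harder than it sounds, here's a minimal sketch of hue-band pixel classification using only Python's standard library. The hue cutoffs and the sample pixel values are my own illustrative assumptions, not calibrated numbers: the point is that blue and purple sit in adjacent hue bands, so a purple ball that washes out under arena lighting can slide right into the blue band.

```python
import colorsys

def classify(r, g, b):
    """Classify an RGB pixel (0-255 channels) into a coarse color band by hue.

    The band boundaries below are illustrative guesses, not tuned values.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue = h * 360  # hue in degrees, 0-360
    if 15 <= hue < 45:
        return "orange"
    if 200 <= hue < 250:
        return "blue"
    if 250 <= hue < 290:
        return "purple"
    return "unknown"

# Clean, saturated samples classify fine...
print(classify(255, 140, 0))    # orange
print(classify(0, 0, 255))      # blue
print(classify(128, 0, 255))    # purple

# ...but a washed-out "purple" pixel drifts into the blue band.
print(classify(140, 150, 255))  # blue
```

Orange sits far away on the hue wheel, which is exactly why it pops visually; blue and purple are neighbors, which is why both human eyes and naive thresholds confuse them once lighting shifts.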