One of 1706’s summer projects is path planning. This is the top view (x is left/right, y is distance) of 3 objects in front of the Kinect. The blue circles are scaled to the size of each object, and the white circles mark their centers. The next step is to calculate how big each gap is and, if it’s big enough, send the coordinates to the robot so it can drive through it. Almost done, just in time for school.
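The gap check described above boils down to a bit of geometry: the edge-to-edge distance between two circles is the center-to-center distance minus both radii. Here is a minimal sketch in Python; the `(x, y, r)` object representation, function names, and the safety margin are illustrative assumptions, not the team’s actual code.

```python
import math

def gap_between(obj1, obj2):
    """Edge-to-edge distance between two detected objects.

    Each object is (x, y, r): the circle center in the Kinect's
    top-down view plus the fitted circle's radius. (Illustrative
    representation, not the team's actual data structure.)
    """
    (x1, y1, r1), (x2, y2, r2) = obj1, obj2
    center_dist = math.hypot(x2 - x1, y2 - y1)
    return center_dist - r1 - r2

def robot_fits(obj1, obj2, robot_width, margin=0.1):
    """True if the robot (plus a safety margin) fits through the gap."""
    return gap_between(obj1, obj2) >= robot_width + margin

# Two objects 2.0 m apart center-to-center, each with a 0.3 m radius,
# leave a 1.4 m gap -- plenty for a 0.7 m-wide robot.
```

If the gap passes the check, the midpoint between the two circle edges is a natural waypoint to send to the robot.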
That’s really neat. What libraries are you using to process the image, and what are you doing the processing on?
This could be really useful for an autonomous mode where you need to make decisions on where to go as you drive, like in ’08, but without somebody signaling the robot.
Very cool. I look forward to seeing the final results!
What kinds of algorithms are you using to detect the objects? And likewise, what kinds of algorithms are you thinking about for planning a path through the obstacle field?
We’re using the OpenCV libraries.
We’re trying to implement A* path planning (what many games such as League of Legends use) to autonomously traverse between objects.
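For anyone curious, here’s a minimal sketch of A* on a 4-connected occupancy grid, using a Manhattan-distance heuristic. This is the textbook form of the algorithm, not the team’s implementation; a real robot planner would work in continuous field coordinates with each obstacle inflated by the robot’s footprint.

```python
import heapq

def a_star(grid, start, goal):
    """A* over a grid of 0 (free) / 1 (obstacle) cells.

    Returns the list of (row, col) cells from start to goal,
    or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible for 4-connected motion.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), start)]     # priority queue of (f, cell)
    came_from = {start: None}          # parent pointers for rebuilding
    cost = {start: 0}                  # best known g-cost per cell

    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            # Walk the parent pointers back to the start.
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[cur] + 1
                if new_cost < cost.get(nxt, float("inf")):
                    cost[nxt] = new_cost
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (new_cost + h(nxt), nxt))
    return None  # goal unreachable
```

The heuristic is what separates A* from plain Dijkstra: it biases the search toward the goal, which is why games with large maps lean on it.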
This project came mostly out of interest rather than for a specific aspect of the competition. In the early stages we realized it would be neat to have a high-speed drivetrain that could autonomously move from, say, the feeder station in 2013 to a designated point on the field, avoiding the pyramids and moving robots. The issue with that is: would the path planning program react faster than a human driver can process the environment? If so, great; but if not, why incorporate it?