Re: Team 254 Presents: CheesyVision
Quote:
Originally Posted by JamesTerm
Did it work something like this?
I would prefer not to hijack this thread, but here is a short description of what we did. If you would like to discuss this further, please PM me, or I can start a new thread.
Yes and no. We never fed video back to the driver. We just used the x-center of the ball to steer the robot whenever the driver needed assistance. One button on the steering wheel overrode the wheel position and replaced it with (image x center - ball x center) * k, where k was a gain used to scale the error into a useful steering value.
All image acquisition and processing were done on a PCDuino on board the robot. None of the network traffic for this crossed the WiFi network; it all stayed local to the robot.
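Roughly, the tracking loop looked like the sketch below. This is an illustrative reconstruction, not our actual code: the HSV threshold values, the gain, and the send_steering() stub are placeholders. The only parts taken from the description above are the (image x center - ball x center) * k error term and the fact that everything ran on the robot.

Code:

import cv2
import numpy as np

K = 0.005                          # gain "k"; tuned until the error steered usefully
LOWER = np.array([5, 120, 120])    # assumed HSV range for the ball color
UPPER = np.array([25, 255, 255])

def send_steering(value):
    # Placeholder: on the robot, this value replaced the wheel position
    # while the driver held the assist button.
    print("steering command: %+.3f" % value)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask)
    if m["m00"] > 0:                       # ball found in the frame
        ball_x = m["m10"] / m["m00"]       # x centroid of the ball blob
        image_x_center = frame.shape[1] / 2
        # The steering law: (image x center - ball x center) * k
        send_steering((image_x_center - ball_x) * K)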
__________________
CalGames 2009 Autonomous Champion Award winner
Sacramento 2010 Creativity in Design winner, Sacramento 2010 Quarterfinalist
2011 Sacramento Finalist, 2011 Madtown Engineering Inspiration Award.
2012 Sacramento Semi-Finals, 2012 Sacramento Innovation in Control Award, 2012 SVR Judges Award.
2012 CalGames Autonomous Challenge Award winner ($$$).
2014 2X Rockwell Automation: Innovation in Control Award (CVR and SAC). Curie Division Gracious Professionalism Award.
2014 Capital City Classic Winner AND Runner Up. Madtown Throwdown: Runner Up.
2015 Innovation in Control Award, Sacramento.
2016 Chezy Champs Finalist, 2016 MTTD Finalist