Quote:
Originally Posted by Greg McKaskle
118 used C++ and a Beagle Bone. I don't really see an issue with doing it on the dashboard. What are your concerns?
When I was in beta last year... I was really excited about the idea of doing video processing through the driver station, and I really wanted to do that. Since this was a newly introduced feature, I couldn't find much step-by-step information on what to do. Brad had some instructions, but only for the Java platform. Jarod (341) has some good info and source code... but same scenario. I could not find the info I needed and eventually gave up. I still want to figure this out and hopefully help others at the task 5 exhibition who were stuck like I am now. So at this point... I just want to gather links, docs, info, etc. on how to do this using the C++ WindRiver environment. Eventually I'd like to have a step-by-step document on what to do... as well as example code on how to send commands from the driver station back to the robot... for things like target coordinates, etc.
I've never heard of a BeagleBone... thanks for the info on Team 118. I should hook up with them and see if they can help.
Hopefully enough teams have tested this new path... and from what I've heard the results have been positive, with minimal latency.
P.S. If I can figure all of this out... it would be cool to overlay crosshairs on the video image that comes to the driver station... I'll save that research for the next iteration.