Re: 2012 Beta Testers
You may be able to find step-by-step instructions, but perhaps you don't need them.
To get an image from the camera to the dashboard, you simply do an HTTP GET to start the MJPG stream. You can alternatively request individual JPEGs. You can do this in LabVIEW with the default dashboard, or in Java with the SmartDashboard. You can also write a C/C++ dashboard, but there are no tools in the kit for doing this -- you can use Microsoft's tools or gcc for PC development.
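If you want a feel for the individual-JPEG approach, here is a minimal Java sketch. The camera address and still-image path are placeholders -- substitute whatever your camera actually serves:

```java
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.net.URL;
import javax.imageio.ImageIO;

public class CameraGrab {
    public static void main(String[] args) throws IOException {
        // Placeholder address -- replace with your camera's IP and still-image path.
        URL stillUrl = new URL("http://10.0.0.11/jpg/image.jpg");
        // ImageIO performs the HTTP GET and decodes the JPEG in one call.
        BufferedImage frame = ImageIO.read(stillUrl);
        System.out.println("Got frame: " + frame.getWidth() + "x" + frame.getHeight());
    }
}
```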
Once you have the image, you can use any laptop image processing library you like -- some use OpenCV, some use the NI IMAQ libraries.
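If you go the OpenCV route, the processing itself is only a few calls once the frame is in an OpenCV Mat. This is just a sketch assuming the OpenCV Java bindings are installed and the grabbed frame has been written to disk; the HSV threshold values are made-up placeholders, not real target numbers:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class TargetFilter {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);  // load the native OpenCV library
        Mat bgr = Imgcodecs.imread("frame.jpg");        // frame grabbed from the camera
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV);
        // Keep only pixels in a (placeholder) hue/sat/value range for the target.
        Mat mask = new Mat();
        Core.inRange(hsv, new Scalar(40, 100, 100), new Scalar(80, 255, 255), mask);
        System.out.println("Target pixels: " + Core.countNonZero(mask));
    }
}
```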
To send info back to the robot, you can use UDP, TCP, or the SmartDashboard. The SmartDashboard is the simplest approach, and with some sample code, UDP and TCP aren't too bad either.
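For the raw-socket option, the UDP send from the dashboard side really is just a few lines. A sketch, assuming something on the robot is listening; the address, port, and message format are placeholders you'd match to your robot code:

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class TargetSender {
    public static void main(String[] args) throws IOException {
        // Placeholder robot address and port -- match whatever the robot code listens on.
        InetAddress robot = InetAddress.getByName("10.0.0.2");
        int port = 1130;
        String message = "targetX=123,targetY=45";  // whatever the vision code computed
        byte[] data = message.getBytes();
        DatagramSocket socket = new DatagramSocket();
        socket.send(new DatagramPacket(data, data.length, robot, port));
        socket.close();
    }
}
```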
As for placing marks on top of the image, the NI IMAQ vision control does this pretty easily, and I'm assuming you can modify the image or overlay it similarly in the SmartDashboard.
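On the Java side, one straightforward way is to draw on the grabbed BufferedImage with Java 2D before displaying it. A sketch -- the rectangle coordinates are placeholders for whatever your detection produced:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class Overlay {
    // Draws a box and a center line around a detected target on the grabbed frame.
    public static void markTarget(BufferedImage frame, int x, int y, int w, int h) {
        Graphics2D g = frame.createGraphics();
        g.setColor(Color.GREEN);
        g.drawRect(x, y, w, h);                        // placeholder target bounds
        g.drawLine(x + w / 2, y, x + w / 2, y + h);    // vertical line through the center
        g.dispose();
    }
}
```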
Once you start the project, you can ask additional questions.
Greg McKaskle