Does anyone know of a way to write all the code for image processing in Vision Assistant?
And how could that then be implemented in the LabVIEW dashboard?
What do you mean by vision assistance? There are a multitude of ways a camera can help a team in this game. One would be calculating the center of the target, then drawing a line on the screen down the middle from top to bottom; when the center is on that line, the robot is aimed perfectly.
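If it helps, here is a rough Java sketch of that aiming check. The resolution and tolerance below are placeholders, not tied to any particular camera or library; the center x would come from whatever your vision code reports.

// Rough illustration only -- the constants are assumptions, not from any library.
// Idea: compare the target's center x to the image center and call it "aimed"
// when the normalized offset is within a small tolerance.
public class AimCheck {
    static final int IMAGE_WIDTH = 320;       // assumed camera resolution
    static final double TOLERANCE = 0.05;     // fraction of half-width counted as centered

    /** targetCenterX: the detected target's center in pixels, e.g. from a particle report. */
    public static boolean isAimed(double targetCenterX) {
        double halfWidth = IMAGE_WIDTH / 2.0;
        // Normalized offset: -1.0 at the left edge, 0.0 at center, +1.0 at the right edge.
        double offset = (targetCenterX - halfWidth) / halfWidth;
        return Math.abs(offset) < TOLERANCE;
    }

    public static void main(String[] args) {
        System.out.println(isAimed(158));  // near center -> true
        System.out.println(isAimed(40));   // far left    -> false
    }
}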
Look in the menus; the Vision Assistant will export its script directly to a LabVIEW VI or to C++.
Is there a way to do everything we need, finding the center of the target and the distance to it, entirely within Vision Assistant? And does anyone have a sample program they could post?
You should look at the Rectangular Target - 2013.lvproj example project. See Tutorial 8—Integrating Vision into Robot Code for detailed instructions. It’s not the Vision Assistant, and it’s not going to work with precaptured images unless you do some extra steps, but I expect it’s what a lot of teams will be using.
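For a sense of the math behind the distance part, here is a rough Java sketch of the usual field-of-view estimate. The target width, image width, and FOV constants below are placeholders, so check the white paper and the example project for the real numbers and the exact method they use.

// Sketch of an FOV-based distance estimate -- constants are placeholders,
// not the official target or camera numbers.
public class DistanceEstimate {
    static final double TARGET_WIDTH_FT = 2.0;    // assumed real-world target width
    static final double IMAGE_WIDTH_PX  = 320.0;  // assumed camera resolution
    static final double HORIZ_FOV_DEG   = 47.0;   // assumed horizontal field of view

    /** targetWidthPx: width of the detected rectangle in the image, in pixels. */
    public static double distanceFt(double targetWidthPx) {
        double halfFovRad = Math.toRadians(HORIZ_FOV_DEG / 2.0);
        // The full view width at the target's distance is
        //   viewWidthFt = TARGET_WIDTH_FT * IMAGE_WIDTH_PX / targetWidthPx
        // and also viewWidthFt = 2 * distance * tan(FOV/2); solve for distance.
        return (TARGET_WIDTH_FT * IMAGE_WIDTH_PX) / (2.0 * targetWidthPx * Math.tan(halfFovRad));
    }

    public static void main(String[] args) {
        System.out.printf("%.1f ft%n", distanceFt(80));  // about 9.2 ft with these placeholder numbers
    }
}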
Our team is actually programming in Java, but we're looking to do the vision processing with Vision Assistant on the dashboard. Is there no way to do this?
is there no way to do this?
Well, there usually is, and in this case, …
If you want to develop your own vision algorithm, you can use NI Vision Assistant and generate either LabVIEW code or C code to merge into your dashboard.
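Since the robot side is Java, one common way to wire this up is to have the dashboard do the processing and publish the results over NetworkTables, then read them in the robot code. This is a sketch only; the table name, keys, and import path below are assumptions about your setup, not anything official.

// Sketch only: assumes the dashboard publishes "targetCenterX" and
// "targetDistance" to the "SmartDashboard" NetworkTables table.
// The import path may differ by WPILib version -- adjust to your install.
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class VisionReader {
    private final NetworkTable table = NetworkTable.getTable("SmartDashboard");

    /** Target center x in pixels, or -1 if the dashboard hasn't published anything yet. */
    public double getTargetCenterX() {
        return table.getNumber("targetCenterX", -1.0);
    }

    /** Estimated distance to the target, or -1 if not yet published. */
    public double getTargetDistanceFt() {
        return table.getNumber("targetDistance", -1.0);
    }
}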
As Alan pointed out, much of the work has been done for you and you may want to at least look at the white paper and example to see how they compare to your ideas.
Greg McKaskle
Where can I find the “whitepaper”?
When you say whitepaper, are you referring to ScreenSteps Live?