Good news everyone! Team 1939 has vision tracking up and running in robot code. As one of the lead programmers on the team, I've learned a lot through the process, and I'm offering help to any team that needs assistance with tracking. If you would like some help, PM me on the forums and I can point you in the right direction.
To teams that have programming experience:
To get the tracking code working, find the example "rectangle tracking example.vi" (or something close to that name) under the Vision section of the support category in LabVIEW. Go to the block diagram (Ctrl-E), then copy all of the code inside the while loop and save it to another VI.
Next, open a robot project from the LabVIEW splash screen and open the VI you just created. Under Team Code, go to the vision tracking VI and paste the code from your saved VI into the innermost while loop. Then wire the purple image stream from inside the loop to the top-left corner of your imported code. Run Robot Main.vi (with your bridge connecting your laptop, Axis camera, and robot), then open the front panel of your vision tracking VI and look at the image viewers. If your camera is set up correctly, you should see what the camera is seeing.
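For teams more comfortable reading text code, here is a rough Python/OpenCV sketch of what that image stream amounts to: pulling frames from the Axis camera over the network. This is only an illustration, not the LabVIEW code itself; the IP address and MJPEG path are assumptions, so check your own camera's configuration.

```python
# Illustrative sketch only: grab a single frame from an Axis camera's
# HTTP MJPEG stream, roughly what the LabVIEW image stream provides.
import cv2

# Assumed team-number-based IP and stream path -- adjust for your setup.
CAMERA_URL = "http://10.19.39.11/mjpg/video.mjpg"

def grab_frame(url=CAMERA_URL):
    """Open the camera stream and return one BGR frame, or None on failure."""
    capture = cv2.VideoCapture(url)
    ok, frame = capture.read()
    capture.release()
    return frame if ok else None

if __name__ == "__main__":
    frame = grab_frame()
    if frame is None:
        print("Could not read from the camera -- check the bridge and IP address.")
    else:
        print("Got a frame of size", frame.shape)
```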
Then click the "Luminance" tab above the original image and adjust the sliders until the target is being tracked (with retroreflective tape on the target and a flashlight next to the camera). You then get a Target Array out with the X, Y, and distance.
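Again just as an illustration of the idea behind those sliders (not the LabVIEW code), the sketch below thresholds the luminance channel, keeps the largest bright blob, and estimates its center and distance. The threshold value, target width, and focal length are assumptions you would calibrate for your own camera and target.

```python
# Rough Python/OpenCV equivalent of the Luminance threshold step.
# Requires OpenCV 4.x; constants below are assumed, not measured.
import cv2
import numpy as np

LUMINANCE_MIN = 200        # assumed slider value; tune until only the tape shows
TARGET_WIDTH_IN = 24.0     # assumed physical target width in inches
FOCAL_LENGTH_PX = 700.0    # assumed, from calibrating your camera

def find_target(frame):
    """Return (x_center, y_center, distance_in) for the brightest rectangle, or None."""
    # Convert to HLS and threshold the lightness channel, like the Luminance sliders.
    hls = cv2.cvtColor(frame, cv2.COLOR_BGR2HLS)
    luminance = hls[:, :, 1]
    _, mask = cv2.threshold(luminance, LUMINANCE_MIN, 255, cv2.THRESH_BINARY)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    # Take the biggest bright blob and fit a bounding rectangle to it.
    biggest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(biggest)

    # Pinhole-camera estimate: distance = real_width * focal_length / pixel_width.
    distance = TARGET_WIDTH_IN * FOCAL_LENGTH_PX / max(w, 1)
    return (x + w / 2.0, y + h / 2.0, distance)
```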
Thanks for listening!
PS: Later today I can get some of the code into SubVI form so that teams can drag and drop it. But please actually look into the code to learn how it functions!
I'm interested to see what you have come up with, and how accurately and from how far away you are able to track the square. In the tests we have come up with, it loses the target very easily in poor conditions.
Congratulations on getting working vision code. But may I point out that there is a somewhat easier way to incorporate the vision code into the framework? As detailed in the tutorial, I'd encourage you to open the Vision VI and use Save As to save it into the framework project.
The primary reason to do this is to avoid cross-linking between the code in the example and the code in the project. To see where the code resides, go to the project window and click on the Files tab to check whether any files from the example are being used. It isn't the end of the world if they are, but it means those files are now used by your robot project and, if changed or deleted, will impact your robot code. In general, I wouldn't advise adding example source to your project without making a copy.
When we run the example code (using the Save As method mentioned in the code comments), execution hangs at the getimage VI. The default code's camera loop runs without hanging, the Driver Station is retrieving the camera images, and Begin.vi was checked for naming consistency with the example vision VI. Is there anything we are forgetting that would prevent the code from retrieving an image from the camera?
This is a thread from back in 2012. I’m guessing you found it through the forum search. The OP hasn’t been active since 2013 so it isn’t very likely that you will get a response from them. You might try looking for a different thread or creating your own.