What our team wants to do is have the camera track the target so that our driver station can show the driver cues about which direction to go.
Now, I'm a rookie programmer, and our coach is dead set on using the camera, so I need help with vision tracking and with using it to send cues to the driver station. How would I go about doing something like this?
Ah, well, working with nivision.h for image processing is difficult enough on its own. Basically, your program flow will look like this:
Take in an image from the camera -> apply a threshold to the image -> convex hull operation -> edge detection -> get the corners so you know how far away the target is in pixels. (All of these functions can be found in nivision.h or in the Vision folder in WPILib.)
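To make the flow concrete, here is a minimal stand-in sketch of the threshold and corner-finding steps on a plain grayscale array. The real robot code would call the nivision.h/WPILib functions instead; the cutoff value and the reduction of "corner edges" to a simple bounding box are my own simplifications for illustration.

```java
// Hypothetical stand-in for part of the vision pipeline: threshold the image,
// then find the bounding-box corners of whatever pixels survived.
public class TargetPipeline {
    // Step: threshold -- mark every pixel brighter than `cutoff` as target.
    static boolean[][] threshold(int[][] gray, int cutoff) {
        boolean[][] mask = new boolean[gray.length][gray[0].length];
        for (int y = 0; y < gray.length; y++)
            for (int x = 0; x < gray[0].length; x++)
                mask[y][x] = gray[y][x] > cutoff;
        return mask;
    }

    // Step: get the corners -- here just the bounding box of the blob,
    // returned as {top, left, bottom, right} in pixels, or null if empty.
    static int[] boundingBox(boolean[][] mask) {
        int top = Integer.MAX_VALUE, left = Integer.MAX_VALUE;
        int bottom = -1, right = -1;
        for (int y = 0; y < mask.length; y++)
            for (int x = 0; x < mask[0].length; x++)
                if (mask[y][x]) {
                    top = Math.min(top, y);
                    left = Math.min(left, x);
                    bottom = Math.max(bottom, y);
                    right = Math.max(right, x);
                }
        return bottom < 0 ? null : new int[] { top, left, bottom, right };
    }
}
```

Once you have the box in pixels, its width and position are what you compare against known target geometry to estimate distance and offset.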
Having it send the cues to the Driver Station will be an issue. If you haven't already, look into SmartDashboard or NetworkTables in WPILib. With either you should be able to freely send data back and forth between the cRIO and the Driver Station.
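As a sketch of what the cue itself might look like: once you know the target's center x-coordinate in pixels, you can compare it to the image center and pick a direction. On the robot you would push the resulting string to the dashboard (e.g. with WPILib's SmartDashboard); the image width, dead band, and cue strings below are made-up values for illustration.

```java
// Turn a target's pixel position into a left/right cue for the driver.
public class DriverCue {
    static final int IMAGE_WIDTH = 320; // assumed camera resolution
    static final int DEAD_BAND = 20;    // pixels of "close enough" slack

    // targetCenterX: the target's center x-coordinate in pixels.
    static String cue(int targetCenterX) {
        int offset = targetCenterX - IMAGE_WIDTH / 2;
        if (offset < -DEAD_BAND) return "TURN LEFT";
        if (offset > DEAD_BAND) return "TURN RIGHT";
        return "ON TARGET";
        // On the cRIO you would then send it, e.g.:
        // SmartDashboard.putString("cue", result);
    }
}
```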
Thanks for the replies. While the information given was helpful, I'm afraid I didn't specify well enough what I meant. I have almost no programming experience whatsoever. I don't even know where to start on tasks like getting the image, applying specific filters to it, or putting that into code. Any examples or papers that walk me through it step by step would be great, thanks!
If you are programming in Java, we just noticed there was an update to the WPI libraries that includes additional functions for image processing, as well as a new sample program specifically for tracking this year's targets. We were able to use the sample program and, within an hour, have it identify targets and print information about them using the camera input. (The sample is coded to use a stored image, but you just need to uncomment the camera code and comment out the stored-image code.)
You DEFINITELY want to check out that sample program for a starting point.
My team is also just starting out and looking into programming the camera. We have a basic idea of the process, and we feel that we can do it. However, how would the sample NetBeans project be converted to the CommandBase robot style?