I am trying to get some sort of vision on our bot this year. All I want it to do is be able to center itself on the peg. I have the vision sample open, but I have no idea how to integrate it into our robot code. I have looked at a lot of examples, but I just keep getting lost because I don't know what most of the code is doing.
If someone could help me implement this, that would be great.
The 2017 vision example has several VIs in it. One runs on the laptop, uses images from a connected USB camera, and lets you tune and experiment with vision code. It also links to many provided image files.
Another one runs on the laptop and gives feedback on setting camera exposure, brightness, and other settings so that LED ring lights work consistently.
A third VI is underneath the roboRIO target and has a simplified Robot Main that calls an RT-modified version of the desktop example above.
None of these will control your robot without code from you. But that code isn’t too hard.
Controlling your robot like this boils down to making the outputs look a bit like a joystick. You give the arcade drive VI some forward value, and the left/right value comes from whether the vision target is left, right, or center. This generally just means measuring the distance from center, then multiplying by a proportional gain to determine how much to steer.
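LabVIEW wiring is hard to show in a forum post, but here is the same math as a short Python sketch. Every name and number in it (IMAGE_WIDTH, kP, the 0.4 forward value) is a placeholder for illustration, not something from the example:

# Proportional steering: turn the target's horizontal offset into a
# joystick-like left/right value. All names and gains are placeholders.
IMAGE_WIDTH = 320      # pixels; depends on your camera resolution
kP = 0.005             # proportional gain -- tune this on your robot

def steer_from_vision(target_x):
    error = target_x - IMAGE_WIDTH / 2   # pixels off-center; negative = target is left
    turn = kP * error                    # small error -> small correction
    return max(-1.0, min(1.0, turn))     # clamp to the joystick's -1..1 range

forward = 0.4                            # some fixed forward value
turn = steer_from_vision(target_x=190)   # e.g. target is 30 px right of center
# feed (forward, turn) into arcade drive exactly as if they came from a joystick
print(forward, turn)                     # 0.4, 0.15

Tune kP so the robot turns briskly when far off-center without oscillating when it gets close.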
By the way, the default example has code for the horizontal strips on the boiler, but there isn’t code for the vertical gear stripes. I believe some have been posted, and I think it is a good exercise to make your own Rectangle Comparison routine to score the stripes based on how well they match gear stripe expectations. If this gets hard, feel free to ask questions.
I don't even know where to start with my own rectangle comparison routine. I can follow the example fine, I just don't know how to manipulate it for my use. I also don't know where I would put the code for moving the bot based on how far the camera is from centered, or what it would look like.
The example puts out values to a cluster that is a global variable. You can then drag Robot Global Data.vi (from the project window) into your Teleop program to access all the vision result data there. https://www.chiefdelphi.com/forums/showthread.php?t=154930
Here is my peg vision code, but before implementing this, you will need to import the vision example (which targets the boiler). I don’t clearly remember how that all went, so you’ll have to get help from someone else (like Greg) for that.
As mentioned, I wouldn’t combine the vision and robot steering code together. Leave them separate and just use the output of the vision to build steering values for an arcade steering robot drive.
As for making your own rectangle comparison code: the code takes in two rectangles and calculates the height, width, and distances of some edges. It then compares those values to the "good" values that you'd expect if this really is a target. For example, if the stripe is supposed to be twice as tall as wide, you would take the height and width of a particle rectangle, divide, and score it based on how close it is to 2.0. If exactly 2.0, it gets 100%. If it is 1.9, a bit less. If 2.1, a bit less, etc. That is done with the teepee code on the right-hand side.
Anyway, do this for some of the things you think will distinguish a good image from a bad one, then teepee, then average or weighted-average your scores, and the calling code will decide which pair of rectangles makes the best target according to your scoring code.
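If it helps to see the shape of that outside LabVIEW, here is a rough Python sketch of teepee scoring. The ideal ratio (2.0), the tolerances, and the same-height check are placeholder numbers for illustration, not the real gear-stripe dimensions:

# "Teepee" scoring: 100% at the ideal value, falling off linearly on both sides.
def teepee(measured, ideal, tolerance):
    return max(0.0, 100.0 * (1.0 - abs(measured - ideal) / tolerance))

def score_pair(rect_a, rect_b):
    """rect_a/rect_b are (x, y, width, height) tuples for two particle rectangles."""
    scores = []
    for (x, y, w, h) in (rect_a, rect_b):
        aspect = h / w                           # stripes are taller than wide
        scores.append(teepee(aspect, 2.0, 1.0))  # 100% at exactly 2:1
    dy = abs(rect_a[1] - rect_b[1])              # tops should sit at about the same height
    scores.append(teepee(dy, 0.0, 20.0))
    return sum(scores) / len(scores)             # plain average; weight as you see fit

print(score_pair((100, 50, 20, 42), (160, 52, 21, 40)))  # ~90: a plausible pair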
The example has a tab for doing things from file, and that is how I’d recommend starting. Take your own photos, put them in a folder, and debug the code using your files. Then flip to the other tab and use live images from a USB camera connected to your laptop.
I do not know how to move the example code into my code and get it to run. I do not know how to make it recognize the tape for the peg and try to center the robot on it.
How do I tell it to use vision? Are you supposed to have it mapped to a button to tell it to start processing vision, and then it will start to center on the peg? Or is it always running, and when it sees the tape it will automatically try to center on it?
Depends on how you program it. Typically what I’ve seen is that the vision code runs all the time, but the PID loop that makes the robot center on the peg is switched on or off explicitly so that it doesn’t move when you don’t expect it to move.
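Something like this, sketched in Python with placeholder names (align_button_pressed, vision_turn, etc.):

# Vision runs continuously; the steering correction is only applied while
# the driver holds a button, so the robot never auto-steers unexpectedly.
def teleop_turn(align_button_pressed, vision_turn, driver_turn):
    if align_button_pressed:
        return vision_turn    # correction computed from the camera offset
    return driver_turn        # otherwise the driver steers normally

# each loop: forward always comes from the driver; turn switches on the button
# arcade_drive(joystick_y, teleop_turn(button_1, vision_turn, joystick_x))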
To enable vision, go to the front panel of Robot Main.vi and click Enable Vision so it's on, then right-click on it -> Data Operations -> Make Current Value Default.
Note: Upon importing the example vision code, you will encounter errors. You will need to fix these errors. I can’t give you a full walkthrough on that right now, but it isn’t too hard to do if you click on the broken arrow for more info.
You feed it through a PID loop and tune it. Tuning takes time, and is hard. I don’t have any links handy, but Wikipedia may be helpful to you here. Just… first understand how PID works, okay?
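For the flavor of it, here is the textbook PID update as a bare-bones Python sketch; the gains are placeholders, and tuning them is the hard part:

# A minimal PID controller. error = pixels off-center, dt = loop period (s).
class PID:
    def __init__(self, kP, kI, kD):
        self.kP, self.kI, self.kD = kP, kI, kD
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kP * error + self.kI * self.integral + self.kD * derivative
        return max(-1.0, min(1.0, out))   # clamp before sending to the drive

pid = PID(kP=0.005, kI=0.0, kD=0.0005)    # start P-only; add D if it oscillates
turn = pid.update(error=30.0, dt=0.02)    # call once per teleop iteration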
I have seen a little bit about how to tune it, but I haven't ever tried to. But I am having trouble actually tracking the tape; it won't even recognize it at all. Are there dimensions that the example vision is looking for that I need to change? I can get some screenshots of the images I am using later today.
Adjust your exposure so that everything is relatively dark except the tape. The tape should not be super bright either. Run your code using the Run arrow. Go to the Front Panel of Vision Processing.vi and enable "Display Original" and "Display Mask". Then adjust "LED Color" as follows (there is a sketch of the same thresholding after this list):
Adjust Saturation range to 100-255
Adjust Value range to 50-255
Adjust Hue range to match LED color (you can use the original and mask displays to help you)
Raise the bottom limit of the Saturation range just until the tape disappears from the mask display, then lower it by ~30
Adjust Value range only if there is still noise in the mask display
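If it helps to see those thresholds as text code, here is roughly the same thing in Python/OpenCV. The hue numbers assume a green LED ring and the file name is just a placeholder; adjust both for your setup:

import cv2

img = cv2.imread("peg_photo.jpg")          # one of your own photos
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# OpenCV hue runs 0-179; saturation and value run 0-255
lower = (50, 100, 50)    # hue ~green, Saturation >= 100, Value >= 50
upper = (90, 255, 255)
mask = cv2.inRange(hsv, lower, upper)      # this is the "mask display"

cv2.imshow("original", img)                # equivalent of "Display Original"
cv2.imshow("mask", mask)                   # equivalent of "Display Mask"
cv2.waitKey(0)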
That's how you calibrate it to recognize the tape. If you have done that and it still doesn't recognize the tape, you have a bigger problem, and I need more info before I can help you with that.