Finally releasing my peg vision code

It’s not perfect: it needs tuning per robot and ring light, and the scoring method needs a lot of improvement. But it works, and if that’s what you’re looking for, this is for you.

This covers all the edits you need to make to Vision Processing.vi from the example boiler-targeting code to match mine. These are prints, not code snippets, so you cannot simply drag and drop them into your program.

If you want the challenge, make the scoring program even better.

Vision processing main print.pdf (353 KB)
Vision processing distance print.pdf (314 KB)
Vision processing cullgroup print.pdf (181 KB)
Vision processing scoring print.pdf (166 KB)

Now that it can target the peg, how do I get it to follow the peg during autonomous?

I’m not taking away the entire challenge: you need to turn the robot based on how far off you are. All the vision results are contained in Robot Global Data.vi, which is a collection of variables. Normalized->Center->X tells you how far off-center the target is, ranging from -1 to 1.
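The turning logic itself is done in LabVIEW, but in a text-based language the idea might look like this minimal proportional-steering sketch. The function and variable names here are hypothetical placeholders, not the actual VIs; the only thing taken from the code above is that the offset comes in as a value from -1 to 1, with 0 meaning centered.

```python
def compute_turn(offset_x: float, kp: float = 0.5) -> float:
    """Proportional steering sketch.

    offset_x: normalized horizontal offset of the target, in [-1, 1]
              (analogous to Normalized->Center->X; 0 means centered).
    kp:       proportional gain, tuned per robot (0.5 is an assumption).
    Returns a turn command clamped to the motor output range [-1, 1].
    """
    turn = kp * offset_x
    # Clamp so a large offset never exceeds the valid motor range.
    return max(-1.0, min(1.0, turn))
```

With a gain of 0.5, a target halfway to the right edge (offset 0.5) produces a turn command of 0.25; in practice the gain would need tuning, and a real implementation would likely add a deadband so the robot stops twitching when nearly centered.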

I saw you disconnected all the wires for the line border in Comparison.vi, so how are you supposed to change the value for the line border?

There is a constant in the upper-right of the Scoring VI’s block diagram, currently set to 10. You can split it into two constants if you wish, or wire it back to an unbundled line border cluster, but that single constant currently controls the line border on both sides.

Thank you so much for helping!