pic: Vision Development

[attached image: debug JPEG from a vision run]

I was looking through my archives and found a bunch of these vision runs from my development of the vision system. This particular one is from the development of the diamond target analysis code, run on the second set of test images. I wrote a LabVIEW program to run sets of JPEGs through the vision code, and it spat out debug JPEGs like this.

A little bit of timeline to go along with this picture:

-We had the baskets built in a few days. The night they were finished with retro-reflective tape, I took these pictures with my phone. I turned on the flash to get good bright rectangles.

-I ended up with 15 or so pictures from various places. I ran them through a bulk resizer to produce 640x480 and 320x240 versions in 8-bit greyscale, since I was planning on processing in greyscale for speed (at that point, I was still planning on running the vision code on the cRIO itself).
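
The bulk resizer was just an off-the-shelf tool, but the step amounts to something like this Python/Pillow sketch (the folder names are made up):

```python
# Batch-convert a folder of phone JPEGs into 640x480 and 320x240
# 8-bit greyscale copies, one output folder per size.
import os
from PIL import Image

SRC = "phone_pics"                 # hypothetical input folder
SIZES = [(640, 480), (320, 240)]

for name in os.listdir(SRC):
    if not name.lower().endswith(".jpg"):
        continue
    img = Image.open(os.path.join(SRC, name)).convert("L")  # "L" = 8-bit grey
    for w, h in SIZES:
        out_dir = "grey_{}x{}".format(w, h)
        os.makedirs(out_dir, exist_ok=True)
        img.resize((w, h)).save(os.path.join(out_dir, name))
```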

-I took the sample LabVIEW code and heavily modified it into what I have now, then tested the algorithm on each folder of images, printing the returned data onto the images as baked-in overlays and writing them out to new JPEG files (such as this one). The resulting images were all 320x240 or 640x480 24-bit color, although ChiefDelphi has resized this one.
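
The real harness is LabVIEW (test_vision.vi, mentioned further down), but the loop it performs looks roughly like this in Python; analyze_diamond here is just a placeholder for the actual vision code:

```python
import os
from PIL import Image, ImageDraw

def analyze_diamond(img):
    """Placeholder for the real vision code: would return the guessed
    diamond center plus whatever debug data that revision printed."""
    return (img.width // 2, img.height // 2), "debug data"

IN_DIR, OUT_DIR = "diamond_day2", "diamond_day2_round5"   # invented names
os.makedirs(OUT_DIR, exist_ok=True)

for name in sorted(os.listdir(IN_DIR)):
    img = Image.open(os.path.join(IN_DIR, name)).convert("RGB")  # 24-bit color out
    (cx, cy), label = analyze_diamond(img)
    # Bake the overlay into the pixels: a crosshair at the guessed center
    # and the debug text in the corner, then write a new JPEG.
    draw = ImageDraw.Draw(img)
    draw.line([(cx - 5, cy), (cx + 5, cy)], fill="red")
    draw.line([(cx, cy - 5), (cx, cy + 5)], fill="red")
    draw.text((5, 5), label, fill="yellow")
    img.save(os.path.join(OUT_DIR, name))
```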

-I ran about 40 different primary revisions of the code through the vision test setup, incrementing the output directory each time (this particular image is from diamond day2 round5 testing, so it’s the 5th major revision of the 2nd day I worked on the diamonds).

-The diamond algorithm finds all of the rectangles using a modified example program, then finds all possible links between them (16 links for 4 targets), categorizes the links, categorizes the targets by their guessed position in the diamond, and then weights each target to find the proper center of the diamond. I began writing code for “side angle calculation” (calculating the angular offset to the basket, to allow compensation for side banking shots or side swish shots), but it never worked before I gave up.
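
The real thing is a pile of LabVIEW, but the shape of that logic is roughly this (a much-simplified Python sketch; the direction-based voting and uniform weights are invented for illustration, and it skips the link-categorization subtleties):

```python
from itertools import permutations

def guess_positions(rects):
    """rects: list of (cx, cy) rectangle centers; image y grows downward.
    Each ordered link votes on where its source sits in the diamond."""
    votes = {i: {"Top": 0, "Bottom": 0, "Left": 0, "Right": 0}
             for i in range(len(rects))}
    for i, j in permutations(range(len(rects)), 2):
        dx = rects[j][0] - rects[i][0]
        dy = rects[j][1] - rects[i][1]
        if abs(dy) > abs(dx):                     # mostly vertical link
            votes[i]["Top" if dy > 0 else "Bottom"] += 1
        else:                                     # mostly horizontal link
            votes[i]["Left" if dx > 0 else "Right"] += 1
    return {i: max(v, key=v.get) for i, v in votes.items()}

def diamond_center(rects, weights=None):
    """Weighted mean of the rectangle centers (uniform by default)."""
    weights = weights or [1.0] * len(rects)
    total = sum(weights)
    cx = sum(w * x for w, (x, _) in zip(weights, rects)) / total
    cy = sum(w * y for w, (_, y) in zip(weights, rects)) / total
    return cx, cy
```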

-I ended up with the code I have today. You can find it in the Buzz17 Code I posted on ChiefDelphi, specifically the 2012vision zip (“test_vision.vi” is the actual test harness I used to generate this image).

-It took me about a week and a half to finish this. After finishing the JPEG runs, I set up a cRIO/Axis camera rig, was thoroughly unimpressed with the performance, and designed the UDP dashboard system we use now.
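
The dashboard itself is LabVIEW too, but the core idea is just shipping a small result packet over UDP so the heavy lifting happens on the laptop instead of the cRIO. A minimal Python sketch, with the port and packet layout invented:

```python
import socket
import struct

PORT = 1130  # hypothetical port, not the real one

# Laptop side: after processing a frame, fire the result at the robot.
def send_result(sock, robot_ip, cx, cy, distance):
    sock.sendto(struct.pack("!ddd", cx, cy, distance), (robot_ip, PORT))

# Robot side: pull the latest result off the wire.
def recv_result(sock):
    data, _addr = sock.recvfrom(1024)
    return struct.unpack("!ddd", data)  # (cx, cy, distance)

# Both sides use a plain datagram socket; the receiver binds the port:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("", PORT))
```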

We were having dismal vision tracking at Troy (using only the high target). While I was standing there talking to Jim, I saw this screen flickering as you were tuning your shooting.

At that point, we had considered doing multiple-target tracking, but simply had not had time to implement it. Our good showing at West Mich. made us think it wasn't necessary.

After seeing that screen and having a nightmare at Troy, we spent a week developing a system that would lock on to various combinations of targets, in this order of preference (sketched in code after the list):

  1. All 4
  2. Top, Left, and Right
  3. Bottom, Left, and Right
  4. Top and Bottom
  5. Left and Right
  6. Top and Left
  7. Top and Right
  8. Bottom and Left
  9. Bottom and Right
  10. Top
  11. Bottom
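
In code, the fallback is simple; here's a hedged Python sketch (the position names and set representation are mine, not the actual implementation):

```python
# Combinations in descending order of preference; lock on to the first
# one whose members were all detected this frame.
PREFERENCE = [
    {"Top", "Bottom", "Left", "Right"},
    {"Top", "Left", "Right"},
    {"Bottom", "Left", "Right"},
    {"Top", "Bottom"},
    {"Left", "Right"},
    {"Top", "Left"},
    {"Top", "Right"},
    {"Bottom", "Left"},
    {"Bottom", "Right"},
    {"Top"},
    {"Bottom"},
]

def choose_combination(detected):
    """detected: set of positions seen this frame, e.g. {"Top", "Left"}."""
    for combo in PREFERENCE:
        if combo <= detected:       # every member of this combo was found
            return combo
    return None                     # no usable lock this frame
```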

We verified the targets by first classifying them by size (throwing out any that were too large or too small), then by height from the floor, and finally by angular relationships to the other targets.
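
Roughly, as a sketch (every threshold here is invented, and check_angles stands in for the real geometry test):

```python
MIN_AREA, MAX_AREA = 200, 20000      # px^2 bounds, invented
MIN_HEIGHT, MAX_HEIGHT = 0.5, 3.5    # meters off the floor, invented

def check_angles(a, b):
    """Placeholder: the real test compares the angle between two target
    centers against what the known field geometry allows."""
    return True

def verify(candidates):
    # 1) throw out anything too large or too small
    sized = [c for c in candidates if MIN_AREA <= c["area"] <= MAX_AREA]
    # 2) throw out anything at an impossible height off the floor
    placed = [c for c in sized if MIN_HEIGHT <= c["height_m"] <= MAX_HEIGHT]
    # 3) keep only targets whose angles to the other survivors make sense
    return [c for c in placed
            if all(check_angles(c, o) for o in placed if o is not c)]
```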

It worked beautifully, and it got finished primarily because we knew you had already done it.

Figured you deserved the credit for pushing us to work just a little harder :slight_smile: .

It sounds so easy!!!
:confused:

Noteworthy: First emoticon in 151 posts by Ekcrbe… He’s given in.

I don’t actually use the height from the floor, just a few basic parameters (min size, rectangularity, edge strength) and the angular relationships.
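
For reference, one common definition of rectangularity (and I'm assuming something close to it is meant here) is filled particle area over bounding-box area:

```python
def rectangularity(particle_area, bbox_w, bbox_h):
    """Filled area over bounding-box area: 1.0 for a perfect axis-aligned
    rectangle, lower for blobs and partially occluded targets."""
    return particle_area / float(bbox_w * bbox_h)

# e.g. keep a candidate if rectangularity(...) >= 0.8 (threshold invented)
```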

If I were to improve it, I would probably attempt to guess the positions in the evil two-target scenario based on size versus height. I couldn't categorize by height or size alone, since we do both fender and key shooting (the fender shot prefers the lower targets, while the key prefers the upper ones).
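
Never implemented, but that guess might look something like this (the cutoff is invented):

```python
def two_target_bias(avg_area_px, fender_cutoff=5000.0):
    """Big targets in the frame usually mean a close (fender) shot, which
    favors the lower targets being in view; small targets suggest the key,
    which favors the upper ones."""
    return "prefer_lower" if avg_area_px > fender_cutoff else "prefer_upper"
```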

Currently, we help the evil two-target guess by reading the gun state, but this is fragile for code reasons (the comparison is done on the laptop side, so modifying anything in the state machine means I have to recompile the laptop vision helper).