I was looking through my archives and found a bunch of these vision runs, part of my development of the vision system. This particular one was during the development of the diamond target analysis code from the second set of test images. I wrote a LabVIEW program to run sets of JPEGs through the vision code, and it spat out debug JPEGs like this.
24-07-2012 22:21
apalrd
A little bit of timeline to go along with this picture:
-We had the baskets built in a few days. The night they were finished with retro-reflective tape, I took these pictures with my phone. I turned on the flash to get good bright rectangles.
-I ended up with 15 or so pictures from various places. I ran them through a bulk resizer to end up with 640x480 and 320x240 versions in 8-bit greyscale, since I was planning on processing them in greyscale for speed (at this point in time, I was still planning on running the vision code on the cRio itself).
-I took the sample LabVIEW code and heavily modified it into what I have now, and tested the algorithm on each folder of images, printing the data returned onto the images as baked overlays, and writing them to new JPEG files (such as this one). The resulting images were all 320x240 or 640x480 24-bit color, although ChiefDelphi has resized this one. (A rough Python sketch of this kind of batch harness appears after this list.)
-I ran about 40 different primary revisions of the code through the vision test setup, incrementing the output directory each time (this particular image is from diamond day2 round5 testing, so it's the 5th major revision of the 2nd day I worked on the diamonds).
-The diamond algorithm finds all of the rectangles using a modified example program, then finds all possible links between them (16 links for 4 targets), categorizes the links, categorizes the targets by guessed position in the diamond, and then weights each of the targets to find the proper center of the diamond (a sketch of that weighting step also appears after this list). I began writing code to do "side angle calculation" (calculating the angle offset to the basket to allow compensation for side banking shots or side swish shots), but it never worked before I gave up.
-I ended up with the code I have today. You can find it in the Buzz17 Code I posted on ChiefDelphi, specifically the 2012vision zip ("test_vision.vi" is the actual test harness I used to generate this image).
-It took me about a week and a half to finish this. After finishing the JPEG runs, I set up a cRio/Axis cam setup and was thoroughly unimpressed with the performance, and designed the UDP dashboard system we use now.
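The batch harness itself was LabVIEW, so the following is only a minimal sketch of the same idea in Python/OpenCV: read every JPEG in a folder, convert to 8-bit greyscale at a fixed size, run a placeholder detection step, bake a debug overlay, and write the result to an output directory. The folder names and find_targets() are hypothetical stand-ins, not the original code.

```python
import os
import cv2

def find_targets(gray):
    # Placeholder detector: threshold the bright retro-reflective regions and
    # return bounding boxes (the real algorithm is far more involved).
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    return [cv2.boundingRect(c) for c in contours]

def run_folder(src_dir, dst_dir, size=(320, 240)):
    os.makedirs(dst_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        if not name.lower().endswith(".jpg"):
            continue
        gray = cv2.imread(os.path.join(src_dir, name), cv2.IMREAD_GRAYSCALE)
        gray = cv2.resize(gray, size)
        rects = find_targets(gray)
        debug = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)  # 24-bit color output image
        for (x, y, w, h) in rects:
            cv2.rectangle(debug, (x, y), (x + w, y + h), (0, 255, 0), 1)
        cv2.imwrite(os.path.join(dst_dir, name), debug)  # baked overlay, new JPEG

run_folder("diamond_day2/input", "diamond_day2/round5")
```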
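Similarly, here is a rough Python sketch of the position-guessing and weighting step; the link categorization is omitted, and the weights are illustrative assumptions, not the values actually used.

```python
def guess_positions(centers):
    """Label four rectangle centers as top/bottom/left/right of the diamond.
    Assumes all four targets were found; image y grows downward."""
    by_y = sorted(centers, key=lambda c: c[1])
    labels = {by_y[0]: "top", by_y[-1]: "bottom"}
    left, right = sorted(by_y[1:-1], key=lambda c: c[0])
    labels[left], labels[right] = "left", "right"
    return labels

def diamond_center(centers, weights=None):
    """Weighted average of the classified target centers (equal weights by default)."""
    weights = weights or {"top": 1.0, "bottom": 1.0, "left": 1.0, "right": 1.0}
    labels = guess_positions(centers)
    total = sum(weights[labels[c]] for c in centers)
    cx = sum(c[0] * weights[labels[c]] for c in centers) / total
    cy = sum(c[1] * weights[labels[c]] for c in centers) / total
    return cx, cy

# Four made-up rectangle centers in pixel coordinates:
print(diamond_center([(160, 40), (100, 120), (220, 120), (160, 200)]))  # -> (160.0, 120.0)
```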
26-07-2012 22:59
Tom Line
We were having dismal vision tracking at Troy (using only the high target). While I was standing talking to Jim, I saw this screen flickering while you were tuning your shooting.
At that point, we had considered doing multiple target tracking, but simply had not had time to implement it. Our good showing at West Mich. made us think it wasn't necessary.
After seeing that screen and having a nightmare at Troy, we spent a week developing a system that would lock on to various combinations of targets in this order of preference:
.
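The preference list itself isn't reproduced here, so this is only a hedged sketch of the general idea in Python: walk a configurable list of target combinations and lock onto the first one whose members are all visible this frame. The ordering and names are placeholders, not the team's actual values.

```python
# Hypothetical preference order; the real one is not given in this thread.
PREFERENCE = [
    ("top", "left", "right", "bottom"),
    ("top", "left", "right"),
    ("top",),
]

def pick_lock(visible):
    """visible: dict of target name -> (x, y) center for targets seen this frame."""
    for combo in PREFERENCE:
        if all(name in visible for name in combo):
            return combo
    return None  # nothing usable this frame

print(pick_lock({"top": (160, 40), "left": (100, 120)}))  # falls back to ("top",)
```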
26-07-2012 23:20
Ekcrbe
It sounds so easy!!!

Noteworthy: First emoticon in 151 posts by Ekcrbe... He's given in.
27-07-2012 22:00
apalrd
We verified the targets by first classifying them based on size (throwing out those too large or too small), then by their height from the floor, and then by their angular relationships to the other targets.
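A minimal sketch of chaining those checks in Python (every threshold here is a made-up placeholder, and the image row is used as a crude stand-in for height from the floor):

```python
import math

def plausible(rect, others, min_area=100, max_area=5000,
              min_row=60, max_row=200, max_skew_deg=20):
    """rect and others are (x, y, w, h) bounding boxes in pixels."""
    x, y, w, h = rect
    if not (min_area <= w * h <= max_area):    # throw out too large / too small
        return False
    if not (min_row <= y <= max_row):          # rough height band (image row as proxy)
        return False
    for (ox, oy, ow, oh) in others:            # angular relationship to the other targets:
        angle = math.degrees(math.atan2(oy - y, ox - x))
        nearest = round(angle / 45.0) * 45.0   # assume neighbors sit near multiples of 45 deg
        if abs(angle - nearest) > max_skew_deg:
            return False
    return True

print(plausible((150, 100, 40, 20), [(150, 160, 40, 20)]))  # True: vertical neighbor
```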