#16
Re: paper: Team 341 Vision System Code
Quote:
It's awesome that you guys are analyzing the code and you have already taught me something new (that Java's "%" operator works on floating point values... as a primarily C++ guy, I have it burned into my brain that thou shalt use "fmod" for floating point modulus). But if this is the part of the code that engenders the most discussion, then I'm a bit disappointed.
Last edited by Jared Russell : 07-05-2012 at 08:13.
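For anyone who wants to see that behavior in isolation, here's a tiny standalone check (my own demo, not code from the paper):

```java
public class FloatModDemo {
    public static void main(String[] args) {
        // Java's % is defined for floating-point operands; the result
        // keeps the sign of the dividend, just like C's fmod().
        System.out.println(370.5 % 360.0);   // 10.5
        System.out.println(-10.0 % 360.0);   // -10.0, not 350.0
    }
}
```

That second line is why the double-mod trick discussed below is needed: a single % leaves negative angles negative.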
#17
Re: paper: Team 341 Vision System Code
Quote:
Code:
return ((angle % 360) + 360) % 360;
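For readers outside the code, here is that idiom in a self-contained sketch; the method name is my own, not necessarily what the paper uses:

```java
public class AngleUtil {
    // Normalize any angle, positive or negative, into [0, 360).
    // The first % yields a value in (-360, 360); adding 360 and
    // taking % again folds the negative cases into the positive range.
    static double boundTo0To360(double angle) {
        return ((angle % 360.0) + 360.0) % 360.0;
    }

    public static void main(String[] args) {
        System.out.println(boundTo0To360(-45.0));  // 315.0
        System.out.println(boundTo0To360(725.0));  // 5.0
    }
}
```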
#18
Re: paper: Team 341 Vision System Code
I feel like at this point the original code was easier to understand from an outside perspective.
On another note, this is AWESOME! It's nice to see another team using Java to program, especially when it's done so well. Our camera tracking system is currently only used in auton to track after grabbing the third ball off the co-op bridge. It's very rudimentary code so that it can run on the cRIO (we had issues with a bad camera that made it impossible to test any sort of laptop-based vision code). This is very interesting, and I'm excited to go over it in more detail later.
#19
Re: paper: Team 341 Vision System Code
Thanks for posting this, Jared. I took a look at your dashboard on the field at Champs and was very impressed. It seems incredibly useful to have the vision system draw its idea of target state on top of the real image.
I suspect that if vision is a part of upcoming games, we will use a solution very similar to this. This year we wrote all of our vision code on top of WPILib/NIVision to run on the cRIO. In the end we got this to work pretty well, but development and debugging were a bit of a pain compared to your system.
#20
Re: paper: Team 341 Vision System Code
Very cool! I just finished taking an online data structures course from IIT, so it was really awesome to see the TreeMap put to use! Just so I'm sure I understand: it is used to quickly find an RPM based on your distance, right? What are the advantages (if any) of using this instead of plugging your data points into an Excel graph and getting a polynomial best-fit equation to transform your distance into an RPM?
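For the curious, a minimal sketch of the kind of lookup a TreeMap makes easy; the distances and RPM values below are made up for illustration, not Daisy's actual calibration data:

```java
import java.util.Map;
import java.util.TreeMap;

public class RpmLookup {
    // Linearly interpolate an RPM from calibration points keyed by distance.
    // floorEntry/ceilingEntry find the two measured points bracketing d
    // in O(log n), which is exactly what a sorted TreeMap buys you.
    static double lookup(TreeMap<Double, Double> table, double d) {
        Map.Entry<Double, Double> lo = table.floorEntry(d);
        Map.Entry<Double, Double> hi = table.ceilingEntry(d);
        if (lo == null) return hi.getValue();  // below the table: clamp
        if (hi == null || lo.getKey().equals(hi.getKey())) return lo.getValue();
        double t = (d - lo.getKey()) / (hi.getKey() - lo.getKey());
        return lo.getValue() + t * (hi.getValue() - lo.getValue());
    }

    public static void main(String[] args) {
        // distance (inches) -> measured shooter RPM; values are invented
        TreeMap<Double, Double> table = new TreeMap<>();
        table.put(100.0, 3000.0);
        table.put(150.0, 3400.0);
        table.put(200.0, 3900.0);
        System.out.println(lookup(table, 175.0));  // halfway: 3650.0
        System.out.println(lookup(table, 150.0));  // exact point: 3400.0
    }
}
```

One practical advantage over a polynomial fit is that a piecewise-linear table passes exactly through every measured point, and tuning at an event is just another put(); a high-order polynomial can oscillate between samples and must be re-fit whenever a point changes.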
#21
Re: paper: Team 341 Vision System Code
Quote:
Last edited by Jared Russell : 09-05-2012 at 10:17.
#22
Re: paper: Team 341 Vision System Code
Quote:
#23
Re: paper: Team 341 Vision System Code
Quote:
Yeah! I like that one a lot... For the C++ Wind River people out there looking at this: you cannot use the modulo operator on doubles, but you can use the fmod() offered in math.h.
Code:
return (fmod((angle + 360.0), 360.0)); // C++ (note: assumes angle > -360)

// That function reminded me of this one:
// Here is another cool function I pulled from our NewTek code that is
// slightly similar and cute...
int Rotation_ = (((Info->Orientation_Rotation + 45) / 90) % 4) * 90;
// Can you see what this does?
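For anyone who wants to try that last teaser, here it is as a standalone sketch (translated to plain Java with a made-up method name; I'm assuming nonnegative rotation values, since integer division and % behave differently for negatives):

```java
public class SnapDemo {
    // Snap a rotation to the nearest multiple of 90 degrees, wrapping
    // 315..359 (and anything past 360) back around to 0.
    // Adding 45 before the integer division rounds to the nearest
    // quadrant instead of truncating; % 4 handles the wraparound.
    static int snapTo90(int rotation) {
        return (((rotation + 45) / 90) % 4) * 90;
    }

    public static void main(String[] args) {
        System.out.println(snapTo90(50));   // 90
        System.out.println(snapTo90(44));   // 0
        System.out.println(snapTo90(320));  // 0 (wraps past 315)
    }
}
```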
#24
Re: paper: Team 341 Vision System Code
Quote:
While I'm here though... if you are a C++ guy, why use Java? Is it because it was the only way to interface with the dashboard? I gave up when trying to figure out how to do that in Wind River.
#25
Re: paper: Team 341 Vision System Code
Easy, it's because I am not the only person who writes software for our team! Java is what is taught to our AP CS students, and is a lot friendlier to our students (in that it is a lot harder to accidentally shoot yourself in the foot). I also have a lot of training in Java (and still use it on a nearly daily basis), even if C++ is my bread and butter.
#26
Re: paper: Team 341 Vision System Code
Quote:
Quote:
I will reveal one piece now with this video: http://www.termstech.com/files/RR_LockingDemo2.mp4

When I first saw the original video, it screamed high saturation levels of red and blue on the alliance colors, and this turns out to be true. The advantage is that there is a larger line to track at a higher point, so I could use particle detection alone. The goal then was to interpret the line in perspective and use that to determine my location on the field. From the location I had everything I needed, as I then go to an array-table error-correction grid with linear interpolation from one point to the next. (The grid, among other tweaks, is written in Lua; more on that later too.)

More to come... There is one question that I would like to throw out there now, though: does anyone at all work with the UYVY color space (a.k.a. YPbPr)? We work with this natively at NewTek, and it would be nice to see who else does.
#27
Re: paper: Team 341 Vision System Code
So after attending the Einstein Weekend debugging session this past weekend and chatting with some of the teams about their various OpenCV-based vision systems, I just HAD to check out Daisy's code (another suggestion from Brad Miller).
Honestly, I have little experience with Java, but I figured what the heck since it's so close to C++. After following some of the Getting Started guides and playing with a couple of projects, I downloaded Daisy's code and set to work running main() and passing the example image paths to the code as arguments. This seems to work well, and two windows pop up showing the "Raw" and "Result" images. What baffles me is that I get this as output as well:

"Target not found
Processing took 24.22 milliseconds (41.29 frames per second)
Waiting for ENTER to continue to next image or exit..."

and for the "Result" image I get a vertical green line more or less in the middle of the picture. I ran the program a couple of times with different images and got similar results. Can someone tell this C guy what the heck he's doing wrong? Is there something I'm missing? If you guys (Daisy) were using a different color ring light or something, could you provide some sample images that work?

Thanks in advance!
- Bryce

P.S. I'm running this on an OLD POS computer running XP and a Pentium D processor. I'll have to run it at home on something with a little bit of muscle and check performance.

Last edited by Bryscus : 16-06-2012 at 16:32.
#28
Re: paper: Team 341 Vision System Code
The vertical green line is simply an alignment aid that is burned into each image - it is not an indication that you have successfully detected the target. If the vision system is picking up the vision targets, you should see blue rectangles and dots indicating the outline and center of the targets, respectively.
Which images are you testing with? The supplied images should work with the code "as is". If you are using your own images, are you using a green LED ring? If you are using a different color LED ring, you will need to alter the color threshold values in the code.

Note that regardless of LED ring color, adjusting the camera to have a very short exposure time (so that the images are quite dark) increases the SNR of the retroreflections, and makes tracking both more robust and much quicker.
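To make the "alter the color threshold values" step concrete, here's a rough per-pixel sketch using the JDK's RGB-to-HSB conversion. The numeric bounds are placeholders I picked for a green ring, not the values from Daisy's code; for another ring color you would shift the hue window (toward 0.0 for red, roughly 0.66 for blue):

```java
import java.awt.Color;

public class ThresholdSketch {
    // Rough sketch of the per-pixel test a color threshold performs.
    // Color.RGBtoHSB returns {hue, saturation, brightness}, each in [0, 1].
    // Bounds below are illustrative placeholders for a green LED ring.
    static boolean isTargetPixel(int r, int g, int b) {
        float[] hsb = Color.RGBtoHSB(r, g, b, null);
        return hsb[0] > 0.25f && hsb[0] < 0.42f  // hue window around green
            && hsb[1] > 0.5f                     // saturated enough
            && hsb[2] > 0.4f;                    // bright enough
    }

    public static void main(String[] args) {
        System.out.println(isTargetPixel(30, 220, 40));  // bright green
        System.out.println(isTargetPixel(40, 40, 40));   // dark gray
    }
}
```

A short exposure helps here because the retroreflected ring light stays bright while everything else falls below the brightness bound, so fewer stray pixels pass the test.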
#29
Re: paper: Team 341 Vision System Code
Thanks for your reply, Jared! I'm using the sample images supplied with the code, located in DaisyCV/SampleImages. They have names like 10Feet.jpg and 10ft2.jpg. I tried about three different images. These look like the same images supplied with the Java vision sample program that I pulled off FirstForge. Does this seem correct? Thanks.
- Bryce
#30
Re: paper: Team 341 Vision System Code
Thanks for this great code, but I would really appreciate it if someone could explain to me how this works. I opened the file you uploaded in NetBeans and loaded the libraries, but I didn't know what the classpath was, and there were a bunch of errors everywhere. Also, if someone could explain to me how the whole network thing works, I would greatly appreciate it. I know this is a lot to ask, so thanks in advance.