
View Full Version : 2009 base C/C++ code?


Maxpower57
28-09-2008, 14:44
Sorry if this has already been asked; I'd be very surprised if it hasn't (but I can't find it).
Is there a base C/C++ code framework for the NI CompactRIO system that we will be using, much like there was last year? There must be, if some teams already have beta versions of it.
Does anyone have a link where this can be downloaded?


Thanks,
Max.

pogenwurst
28-09-2008, 15:30
There is, but at present only the beta teams have access to it (a mistake, IMHO).

See: Should FIRST Release... (http://www.chiefdelphi.com/forums/showthread.php?t=69371)

Abrakadabra
28-09-2008, 16:52
I believe that the release of more information about the C/C++ libraries for 2009 is imminent. Brad Miller and the team at WPI are doing the development, and Brad himself will start giving public talks and demos of the approach next weekend: see this (http://www.chiefdelphi.com/forums/showthread.php?p=766413&highlight=WPI#post766413) post. I'm sure it won't be long before the information from these talks makes it onto CD.

If you are curious about the approach being taken by the WPI team, I would think their previous work on WPILib might be indicative of the approach we may see for this year's libraries. Check out the original WPILib here (http://users.wpi.edu/~bamiller/WPILib/).

Please note that I have no insight into the actual work at WPI, and that this is all speculation on my part. But the WPILib approach has been very successful for many teams in the past (including, whether they knew it or not, everyone who used EasyC), so I see no reason why Brad and the team would make a radical departure now. And now with open source and the ability of the FIRST community to fully participate in the development and maintenance of the libraries, I personally think it will be even better!

BradAMiller
07-10-2008, 18:05
We posted the documentation for the library here:
http://forums.usfirst.org/showthread.php?t=10064

Understand that it's preliminary documentation, but everything in there is currently running, with a few exceptions.

Tom Line
08-10-2008, 10:18
Perhaps I missed it, but I see no reference to the camera in any of the documentation. Is that a yet-to-come-this-year option, or is it not going to be part of the C implementation?

BradAMiller
08-10-2008, 12:29
Perhaps I missed it, but I see no reference to the camera in any of the documentation. Is that a yet-to-come-this-year option, or is it not going to be part of the C implementation?

Thanks, I should have mentioned that... it's not yet integrated into the Doxygen-generated documentation. The complete vision system should be there in a week or two.

Bomberofdoom
09-10-2008, 13:21
Hi Brad,

Can you tell us whether the Vision/Camera class/library will work more or less like Kevin Watson's camera code (2007)?

I know there will be some (major) differences between the 2007 CMUcam2 and the new Axis camera, so could you elaborate on those?

Thanks,

Nir.


P.S,
What about a dashboard for laptops connected to the Driver's Station? How will it work (compared to creating one in LabVIEW)?

Greg McKaskle
09-10-2008, 21:18
I can't do much of a comparison -- I'm not familiar with the previous vision code.

The new library will be broken into a camera library with functions for connecting, getting an image, setting parameters of the camera, and an image processing library with functions for analyzing what is in an image. This includes shape based analysis, color based analysis, pattern based analysis, etc. The analysis section is based upon the NI vision library, so more details can be found online.

Greg McKaskle

rogerlsmith
09-10-2008, 23:09
since forums.usfirst.org is down,

try the direct link:
http://users.wpi.edu/~bamiller/WPIRoboticsLibrary/index.html

ShotgunNinja
13-10-2008, 21:47
I can't do much of a comparison -- I'm not familiar with the previous vision code.

The new library will be broken into a camera library with functions for connecting, getting an image, setting parameters of the camera, and an image processing library with functions for analyzing what is in an image. This includes shape based analysis, color based analysis, pattern based analysis, etc. The analysis section is based upon the NI vision library, so more details can be found online.

Greg McKaskle

:ahh:

I'm sorry, did you just say shape-based analysis? I wonder if that might have anything to do with this year's challenge... (Hint Hint FIRST!)

Daniel_LaFleur
14-10-2008, 12:34
I can't do much of a comparison -- I'm not familiar with the previous vision code.

The new library will be broken into a camera library with functions for connecting, getting an image, setting parameters of the camera, and an image processing library with functions for analyzing what is in an image. This includes shape based analysis, color based analysis, pattern based analysis, etc. The analysis section is based upon the NI vision library, so more details can be found online.

Greg McKaskle

Greg,

In the 'shape based' analysis, can the vision system interpolate the orientation of a known 3-D object within the camera's field of vision? Can it determine trajectory (in 3-D) based on the movement and size change of an object of known shape/size within the camera's field of vision?

Or will we have to do the calculations outside of LV vision?

Roger
14-10-2008, 13:08
From the demo video I saw (not a live demo), a table full of typical tools was flashed on the screen (overhead view), and right afterward a count of the different tools was displayed. From what I understand (and reading Greg's description above), the "shapes" (and colors and patterns) are programmed into the code somehow; then the code and/or hardware (I don't know which) does the dirty work of grinding out which is what. Fairly quickly, I might add. Thus "shapes" could be tool profiles or letter outlines or game pieces. Same with colors. Size and rotation didn't matter, but the profile had to be similar to the pre-programmed profile. In a word---- :cool:

I guess for 2009 the signs "Turn Left!" "Go Fast!" could really be read by the robot.

I'm thinking there must be a catch, or a trade-off, or something. The problem with the CMUcam was not that it couldn't read green vs. red vs. blue -- it could -- but that it would pick up arena lights from across the field. (I'm looking at you, Boston's BU arena!) The shapes, I'm thinking, have to be directly in front of the camera and on a plain contrasting background to be picked up clearly. I can't wait to see how well it really works.

Daniel, my guess is that the software/hardware can determine shape (profile), but your software would have to save the size/rotation of the shape each "frame" and track the differences between frames to determine trajectory. Or, wait and see.

Greg McKaskle
14-10-2008, 21:52
In the 'shape based' analysis, can the vision system interpolate the orientation of a known 3-D object within the camera's field of vision? Can it determine trajectory (in 3-D) based on the movement and size change of an object of known shape/size within the camera's field of vision?

Or will we have to do the calculations outside of LV vision?

NI vision library is primarily for planar shapes. Typically chips on a board, or other bits of an object being manufactured. Additionally, the camera image is simply a 2D matrix of light intensities. Determining the difference between a scene of 3D objects, and a printed poster of the same scene is difficult even for humans, much more difficult for computers.

So if you are clever about how you use it, yes, you can use vision to measure things about the image and to interpret it in 3D. But no, it doesn't automatically infer 3D information from a camera image.

Greg McKaskle

ShotgunNinja
20-10-2008, 20:52
So, in summary, it's very similar to the tracking and profiling systems found in newer security cameras? I read in a few-years-old PopSci that some newer camera systems were incorporating a collection of two-color bitmaps to profile the shape and direction of people walking by. Is it something similar?

EDIT: Also, any new hustle and bustle from the FIRST officials about my hint (see above) ? Again, I'd LOOOVE to see the OOP approach to something like this. It just makes me feel even smarter :D

Greg McKaskle
20-10-2008, 21:06
If the templates are bitmap based, then this is what IMAQ would refer to as pattern based shape recognition. The patterns are trained in advance, and a scene is examined for instances of the pattern.

Geometric recognition, by contrast, finds the high-contrast lines in an image and compares their shape against the template shapes.

Greg McKaskle

kamocat
24-10-2008, 21:02
And will we be getting all this (or will it become available online) when we get the cRIO in November?
(In this case, I'm just referring to the LabVIEW default code, because NI has no control whatsoever over the WPI C++ code.)