It’s definitely possible, but is it plausible? I’ve heard it takes up a lot of memory and slows down the code. From my understanding, the options from best to worst are: (onboard computer / computer running the dashboard) > (Classmate w/ dashboard) > (cRIO).
What if you could turn vision tracking off and on as needed? Imagine this: a robot is lining up to shoot by the pyramid, sitting still and running no motors except, let’s say, the shooter and a lifter/hopper. I push a button, tracking starts on the cRIO, finds the target, and shoots all the discs, and then tracking stops right after. In that case, would the tracking slow anything else down? It isn’t running the whole match.
And if it does have a big impact on processing power/speed, how “lean” does the code on the cRIO need to be to be successful?... how lean?... how lean do you like your meat? Just kidding.
Last year the vision processing slowed down our robot by an intolerable amount when we ran it on the cRIO. Every aspect of our code was lagging when we had vision running, which caused us to get all kinds of watchdog errors and other problems.
That was using an 8-slot cRIO, and I can’t speak for the new 4-slot ones, but I think that running vision on the cRIO is not really worth your time. It’s probably more worth your while to set it up on the dashboard and send rectangle coordinates over to the robot.
Even on an on-board PC, image processing is still a pretty slow form of sensor feedback once the latency of the camera, networking, and processing is accounted for. This means it is generally advisable to close the loop with a faster, lower-latency sensor such as a gyro or encoders, with the image processing providing the setpoint.
One consequence of this method is that you really only need to process a small number of images to align the robot: process one image on demand -> wait until the setpoint is reached -> process another image to see if you’re close enough -> repeat as necessary.
If you use this kind of on-demand alignment sequence, it is unlikely you will notice the latency of processing an image relative to the time required to actually align the robot (provided your image processing is fairly efficient).
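For what it’s worth, here is a minimal Java sketch of that sequence. Everything in it (the class, VisionSource, TurnController, the tolerance, the attempt limit) is a hypothetical illustration of the flow described above, not any team’s actual code:

    // Hypothetical on-demand alignment sketch: names and thresholds are
    // illustrative assumptions, not a drop-in implementation.
    public class OnDemandAligner {
        private static final double ANGLE_TOLERANCE_DEG = 1.0; // "close enough"
        private static final int MAX_ATTEMPTS = 3;             // re-check a few times at most

        private final VisionSource vision;   // wraps camera + image processing
        private final TurnController turner; // closes the loop on a gyro

        public OnDemandAligner(VisionSource vision, TurnController turner) {
            this.vision = vision;
            this.turner = turner;
        }

        /** Called once when the driver presses the aim button. */
        public void alignToTarget() {
            for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
                // 1. Process a single image on demand.
                double offsetDegrees = vision.computeTargetOffsetDegrees();

                // 2. Close enough? Stop re-processing.
                if (Math.abs(offsetDegrees) < ANGLE_TOLERANCE_DEG) {
                    return;
                }

                // 3. Turn by that amount using the fast, low-latency gyro,
                //    blocking until the setpoint is reached.
                turner.turnRelativeDegrees(offsetDegrees);

                // 4. Loop: grab one more image to verify, repeat if needed.
            }
        }

        /** Minimal interfaces so the sketch is self-contained. */
        public interface VisionSource {
            double computeTargetOffsetDegrees(); // positive means target is to the right
        }

        public interface TurnController {
            void turnRelativeDegrees(double degrees); // e.g. gyro + PID underneath
        }
    }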
I suspect if you’d performed a search you would have found numerous discussions about this.
First, don’t make the mistake of believing that ‘vision processing’ is real-time. Even most of the fastest industrial robots take a single picture and work from that picture.
From that standpoint, the cRIO is more than capable of processing a single frame in a fraction of a second. Or even more than a couple of frames, if needed.
So, let’s follow what you suggested: only run the vision processing when necessary. You certainly don’t need to use it when you aren’t trying to aim.
What most teams did last year was to take pictures only when the shoot button is pressed. Once the vision system finds a target that matches your parameters, it then calculates the position and distance. Using that single number, it turns the robot or turret (or whatever) to the needed position.
Very few teams used off-board processing last year, and only a handful had the knowledge to run it at real-time speeds (341 comes to mind).
However, only analyzing pictures when needed and only getting one set of ‘good’ values to turn toward the target will not overstress your cRIO at all. If you’re only running your CPU at 100% to process a couple of pictures in a fraction of a second, it’s really not an issue. If you want to get really fancy, you could even turn all your other code ‘off’ (with a case structure) when you’re running your vision routine while holding the shoot button. Be creative.
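As a rough Java illustration of that “only while the shoot button is held” idea (all the method names below are placeholders, not real WPILib calls):

    // Hypothetical sketch: only run vision while the shoot button is held.
    public class GatedVisionLoop {
        // Called every control-loop iteration (e.g. from teleopPeriodic()).
        public void loop() {
            if (shootButtonHeld()) {
                // Vision branch: process a frame and aim from it. The extra CPU
                // load only lasts the fraction of a second the button is held.
                aimFromOneImage();
                runShooter();
            } else {
                // Normal branch: no image processing at all.
                driveFromJoysticks();
            }
        }

        private boolean shootButtonHeld() { return false; } // stub: read joystick button
        private void aimFromOneImage()    { }               // stub: grab + process one frame
        private void runShooter()         { }               // stub: spin the shooter wheel
        private void driveFromJoysticks() { }               // stub: normal drive code
    }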
To add to that, the targets this year are huge. It should be possible to process them from 160x120 images, and definitely at the medium resolution. It should be sufficient to process just a few frames a second, or only on demand, and it should be fine to process them at a lowered priority.
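If you do want a continuous but low-impact approach, one option is a background thread with lowered priority that only processes a few frames per second. A hypothetical sketch (all names and the sleep interval are assumptions):

    // Hypothetical sketch of running vision in a low-priority background thread,
    // processing only a few frames per second. Names are placeholders.
    public class LowPriorityVisionThread {
        private volatile double latestXOffset = 0.0; // most recent result for the main loop

        public void start() {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        latestXOffset = processOneFrame(); // placeholder for the real work
                        try {
                            Thread.sleep(250); // ~4 frames per second is plenty
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                }
            });
            t.setPriority(Thread.MIN_PRIORITY); // let drive/control code win the CPU
            t.setDaemon(true);
            t.start();
        }

        public double getLatestXOffset() {
            return latestXOffset;
        }

        private double processOneFrame() {
            return 0.0; // stub: threshold, find particles, compute offset
        }
    }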
Tried it today, and it’s terrible even with the camera resolution set low and everything. Besides that, how would one go about taking a single picture from the Axis camera and moving the motors to line up to the target, based on the x, y, and distance values from that one frame? PID?
Use the information computed from a frame to determine how much to turn. Use a gyro to turn that much. Optionally, when you’ve finished the turn, repeat the process. Once you verify that you’re facing the right direction, use encoders to move to the proper distance from the goal.
PID is often a good tool for controlling robot direction based on a gyro, and for controlling robot drive based on encoders.
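As a sketch of what that could look like, here is a simple hand-rolled P+D controller for turning to a vision-derived heading with a gyro. The gains, names, and tolerances are purely illustrative assumptions and would need tuning on a real robot:

    // Hand-rolled P+D sketch for turning to a vision-derived heading with a gyro.
    // All names and gains are illustrative assumptions.
    public class GyroTurnController {
        private final double kP = 0.03;   // proportional gain (tune on the robot)
        private final double kD = 0.004;  // derivative gain
        private double lastError = 0.0;

        /**
         * @param setpointDegrees heading computed from one processed image
         * @param gyroDegrees     current gyro reading
         * @return motor output in [-1, 1] to feed the drivetrain as a turn command
         */
        public double calculate(double setpointDegrees, double gyroDegrees) {
            double error = setpointDegrees - gyroDegrees;
            double derivative = error - lastError; // change per control-loop iteration
            lastError = error;

            double output = kP * error + kD * derivative;
            return Math.max(-1.0, Math.min(1.0, output)); // clamp to motor range
        }

        public boolean onTarget(double setpointDegrees, double gyroDegrees, double toleranceDegrees) {
            return Math.abs(setpointDegrees - gyroDegrees) < toleranceDegrees;
        }
    }

The same structure works for the encoder-based distance move: swap the gyro reading for an encoder distance and the setpoint for the desired range to the goal.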
You can probably use two cRIOs (one as a co-processor). We asked the GDC last year if this was allowed, and they said yes. The PCB also did not need to remain in the metal chassis (a huge weight savings). We ended up using the method mentioned earlier with a driver station laptop doing the processing.
To fit the image within the bandwidth constraints, the refresh rate and image quality had to be set low. It was good enough for image recognition, but it won’t win any image quality awards.
Yes, it is possible and plausible, but there are a few special considerations. We did it last year and got between 5 and 10 fps of processing, depending on how clean our images were.
You need excellent illumination to create high-contrast images. I recommend two bright green LED ring lights, or even more. You will also need to tune your camera’s exposure settings to give you as much contrast as possible.
Here is an oldie-but-goodie whitepaper from Team 67 on how to do this:
You need to keep the object recognition routines as simple as possible, so you need to start with very simple images. Black background with your object in bright green is your goal.
Here is Java code from our robot last year showing how we did this:
public void processCamera() {
    if (camera.freshImage()) {
        try {
            // Take a snapshot of the current turret pot position
            curTurretPot = turret.getPos();
            colorImage = camera.getImage(); // get the image from the camera
            freshImage = true;
            // TODO: Tune these HSL values at the venue!
            binImage = colorImage.thresholdHSV(ImagingConstants.kHThresholdMin, ImagingConstants.kHThresholdMax,
                    ImagingConstants.kSThresholdMin, ImagingConstants.kSThresholdMax,
                    ImagingConstants.kLThresholdMin, ImagingConstants.kLThresholdMax);
            s_particles = binImage.getOrderedParticleAnalysisReports(4);
            colorImage.free();
            binImage.free();
            if (s_particles.length > 0) {
                int lowestY = 0;
                for (int i = 1; i < s_particles.length; i++) {
                    circ = s_particles[i];
                    // Find the highest rectangle (it will have the lowest Y coordinate)
                    if (s_particles[lowestY].boundingRectTop > circ.boundingRectTop) {
                        if ((circ.boundingRectWidth > 20) && (circ.boundingRectHeight > 20)) {
                            lowestY = i;
                        }
                    }
                }
                topTarget = s_particles[lowestY];
                // Send bounding rectangle info to SmartDashboard
                // Check if the best top blob is bigger than 20
                if (topTarget.particleArea > 20) {
                    xOffset = ((topTarget.boundingRectLeft + topTarget.boundingRectWidth / 2) - 160.0) / 160.0;
                    size = topTarget.particleArea;
                } else {
                    xOffset = 0;
                    topTarget = null;
                }
            } else {
                xOffset = 0;
                topTarget = null;
            }
        } catch (AxisCameraException ex) {
            ex.printStackTrace();
        } catch (NIVisionException ex) {
            ex.printStackTrace();
        }
    }
}
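One hypothetical way to consume the xOffset computed above (it is normalized to roughly -1..1) is a simple proportional nudge of the turret until the target is centered. The gain, deadband, and speed cap below are made-up values you would tune yourself:

    // Hypothetical use of xOffset: proportional turret nudge toward the target.
    public double turretCommandFromOffset(double xOffset) {
        final double kP = 0.5;         // tune on the real turret
        final double deadband = 0.03;  // "centered enough", in normalized image units

        if (Math.abs(xOffset) < deadband) {
            return 0.0;                // on target, stop turning
        }
        double command = kP * xOffset; // bigger offset -> faster turn
        return Math.max(-0.4, Math.min(0.4, command)); // cap speed for safety
    }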
Justin - this is the same approach we are taking to limit the “interference” of the image processing with the rest of the robot systems. We run a very fast processing loop for all of our code to enhance our control system capabilities, so we broke the image processing into two steps to keep the per-iteration processing time within our allotted loop time. It seems to be working for us so far, but we have not done extensive testing yet. Here’s hoping, for both of us, that this is a viable implementation strategy; based on the comments here it seems very plausible.