Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Walkthrough on how to do offboard image processing! (http://www.chiefdelphi.com/forums/showthread.php?t=111100)

jesusrambo 12-01-2013 20:28

Walkthrough on how to do offboard image processing!
 
Just typed this out in a PM for someone and thought it might be useful for everyone here.

You'll have the easiest time doing your vision processing in either C or Labview. Our team is a Java team, so neither of those would have been our first choice, but the fact is that while it's possible to do it in Java, C++ and Labview let you use the NI Vision Assistant, which is a VERY powerful tool. I'll get back to that later. Make sure you have an LED ring light for your webcam, as mentioned in the second link below.

I modified the dashboard to do all the vision processing on that, so that the robot didn't have to waste cycles on it. The driver station is going to be a lot more powerful than the robot anyway, unless you're using the Classmate, in which case I'd recommend you just find a laptop to run it on, since the Classmate can barely even stream the video output without doing any processing on it. You can open up Labview 2012 and start a New Dashboard Project, which will basically be an editable version of the default dashboard. If you don't know Labview, it's OK; neither did I when I started. It's a major pain to work with, but if you keep at it you'll get it, and there's plenty of support online for it.

Now in your modified dashboard, you're going to have a few discrete parts.

1) The vision processing itself. Open up the NI Vision Assistant, in All Programs > National Instruments, and play with that a bit. This link will help you with that. It'll serve as a good guide on how to use NI Vision Assistant, and on how to compile your generated scripts into VIs that you'll use in your actual modified dashboard. Now, as for a guide on what algorithm you'll want to put together in Vision Assistant, this whitepaper is absolutely amazing and will take you through it all in good detail.

2) Some sort of socket communications, to relay data back to the cRIO. You can do this with NetworkTables if you're a special blend of brave and masochistic, but I never ventured down that particular route. In my opinion, even the reworked NetworkTables implementation is just too obfuscated, confusing, and poorly documented to be worth using. I wrote in TCP communication, to relay back to the robot what I get from...

3) A way to format the results from the vision assistant. I took the x,y coords of the centers of all the detected rectangles and put those in a string formatted like {Distance, x1, y1, x2, y2, x3, y3, x4, y4}. Since this was from last year's game, I was seeing the 4 vision targets. My distance value was generated from a rangefinding script I wrote that roughly estimated distances based on the size of the rectangles it sees. You can make a proportion of the apparent size to the actual size, do some trig, and come up with your distance to the target. You'll want a good way of formatting it so it's easy to pick apart once it's on the cRIO. You can just make a subVI for this and put it within the dashboard to make it a little cleaner.
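
To make the format and the rangefinding concrete, here's a rough Java sketch of the idea. This is NOT our actual code; the target width, focal length, and all the names are made-up illustrations, and you'd calibrate the constants for your own camera. (Also note the parsing shown is desktop-Java flavored; the cRIO's Squawk VM is closer to J2ME, so conveniences like String.format and split aren't available there.)

Code:

import java.util.Locale;

public class TargetPacket {

    // Hypothetical calibration constants: real target width (ft) and the
    // camera's focal length in pixels. Measure both for your own setup.
    static final double TARGET_WIDTH_FT = 2.0;
    static final double FOCAL_LENGTH_PX = 510.0;

    // Pinhole-style range estimate: distance = realWidth * f / pixelWidth.
    static double estimateDistance(double pixelWidthOfTarget) {
        return TARGET_WIDTH_FT * FOCAL_LENGTH_PX / pixelWidthOfTarget;
    }

    // Build "{Distance, x1, y1, x2, y2, ...}" from detected rectangle centers.
    static String format(double distance, int[][] centers) {
        StringBuilder sb = new StringBuilder("{");
        sb.append(String.format(Locale.US, "%.2f", distance));
        for (int[] c : centers) {
            sb.append(", ").append(c[0]).append(", ").append(c[1]);
        }
        return sb.append("}").toString();
    }

    // Robot-side parse back into numbers.
    static double[] parse(String packet) {
        String[] parts = packet.replace("{", "").replace("}", "").split(",");
        double[] values = new double[parts.length];
        for (int i = 0; i < parts.length; i++) {
            values[i] = Double.parseDouble(parts[i].trim());
        }
        return values;
    }
}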

4) This is optional but I'd highly recommend it. I added 6 boxes to the dashboard, for min/max values of Hue, Saturation, and Luminosity for the Vision Assistant script. This lets you tweak the threshold values on the fly for what it's detecting, so when you're on the field you don't have to be recompiling stuff all the time. I had it store those values in a .cfg file in the Program Files/FRC Dashboard directory.
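
The dashboard side of this is Labview, but the idea is just six persisted key/value pairs. For anyone who thinks better in code, here's the same idea sketched in Java with java.util.Properties; the file name and key names are made up.

Code:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

public class ThresholdConfig {
    private final File file = new File("thresholds.cfg"); // hypothetical location
    private final Properties props = new Properties();

    // Load previously saved HSL bounds, if any.
    public void load() throws IOException {
        if (file.exists()) {
            try (FileInputStream in = new FileInputStream(file)) {
                props.load(in);
            }
        }
    }

    public int get(String key, int fallback) {
        return Integer.parseInt(props.getProperty(key, Integer.toString(fallback)));
    }

    // Persist the six values the operator tuned on the fly.
    public void save(int hMin, int hMax, int sMin, int sMax, int lMin, int lMax)
            throws IOException {
        props.setProperty("hue.min", Integer.toString(hMin));
        props.setProperty("hue.max", Integer.toString(hMax));
        props.setProperty("sat.min", Integer.toString(sMin));
        props.setProperty("sat.max", Integer.toString(sMax));
        props.setProperty("lum.min", Integer.toString(lMin));
        props.setProperty("lum.max", Integer.toString(lMax));
        try (FileOutputStream out = new FileOutputStream(file)) {
            props.store(out, "FRC dashboard vision thresholds");
        }
    }
}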

So, let's recap. Your new dashboard, since it's just a modified version of the old one, will leave in the part of the VI that puts the image on the dashboard for you to watch. It'll also establish a TCP connection to the cRIO (more on how to do this in your code on the cRIO later). However, it's also going to pass this image to the VI you generated with the Vision Assistant program. That'll spit out the X,Y coordinates of the centers of each detected target, which will be formatted into a clean string within the dashboard. Then, your dashboard will send that over the TCP link to the robot.

Now, the robot side! Isn't this fun? The hard part is over, don't worry. This is MetaTCPVariables, a Java library file we wrote and added to our copy of WPILIBJ. MetaTCPVariables adds the ability for the cRIO to listen for an incoming TCP connection and establish it on one of the designated ports you're allowed to use. Check page 6; it's an out-of-date whitepaper, but that part has stayed the same.
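
To give you the shape of what that listener does, here's a rough sketch in plain java.net terms. This is NOT the actual library, and the cRIO's Squawk VM uses the J2ME connection framework rather than java.net, so treat it as pseudocode for the structure:

Code:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the listener structure: accept a dashboard connection on a
// field-legal port and keep only the newest line for the robot code.
public class VisionListener extends Thread {
    private volatile String latest = "";

    public void run() {
        try (ServerSocket server = new ServerSocket(1180)) { // e.g. TCP 1180 is allowed on the field
            while (true) {
                try (Socket dash = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(dash.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        latest = line; // overwrite; the robot only wants the newest data
                    }
                } catch (Exception e) {
                    // Connection dropped; loop back around and accept() again.
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public String getLatest() {
        return latest;
    }
}

The important parts are that it runs in its own thread and only ever keeps the newest line, so your control loop never blocks waiting on the network.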

Now that you've got all your x,y coordinates, just come up with a way to sort them! I wrote this, which let us sort the targets based on relative position to each other. It used a sort of rolling assignment that always classified the highest-up target as top: if there was 1 target it would always be called top; if there were 2, the higher one would be called top; if there were 4, the highest one would be top.
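
Here's a hedged sketch of that rolling assignment (again, not the code linked above, just the idea; remember that image y grows downward, so "highest up" means smallest y):

Code:

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class TargetSorter {

    public static class Target {
        public final double x, y;
        public Target(double x, double y) { this.x = x; this.y = y; }
    }

    // Rolling assignment: whatever is visible, the highest detection is
    // always "top"; with two or more, the lowest becomes "bottom", and the
    // leftmost/rightmost of the rest become "left"/"right".
    public static Target[] sort(List<Target> seen) {
        Target top = null, bottom = null, left = null, right = null;
        List<Target> byHeight = new ArrayList<Target>(seen);
        Collections.sort(byHeight, new Comparator<Target>() {
            public int compare(Target a, Target b) {
                return Double.compare(a.y, b.y); // image y grows downward
            }
        });
        if (!byHeight.isEmpty()) top = byHeight.get(0);
        if (byHeight.size() >= 2) bottom = byHeight.get(byHeight.size() - 1);
        for (Target t : byHeight) {
            if (t == top || t == bottom) continue;
            if (left == null || t.x < left.x) left = t;
            if (right == null || t.x > right.x) right = t;
        }
        return new Target[] { top, bottom, left, right };
    }
}

Feed it whatever the dashboard sent over, in any order, and read top/bottom/left/right out of the returned array.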

If you've got any more questions, please ask me. This ended up being a lot longer than I initially anticipated, but it's really not nearly as bad as the wall of text makes it seem. My code is in Java, but you should be able to pick up the basic ideas even if you're not using it, and translate them to C++ or whatever you're using. If you want more information on how I actually did the Labview side of things, please ask.

I'd never done anything with Labview or TCP socket communications before this, and by the time we were at competition I had an autotargeting robot that scored more than a couple perfect autonomous rounds. If you set your mind to it you'll get it; don't be intimidated by having to learn a bunch of new stuff. It's fun once you get into it, and hey, if we didn't want to sink ridiculous amounts of time and effort and frustration into software, we wouldn't be writing code for a FIRST competition!

I hope this helps! Again, if anything's unclear or if you want more help or specific code examples please just ask.

Alan Anderson 13-01-2013 11:18

Re: Walkthrough on how to do offboard image processing!
 
Good information. I'll only make two comments:

UDP is usually preferable to TCP for the kind of data you're sending from the offboard computer to the robot. You're really only interested in the latest information, not in guaranteeing delivery of every single packet in order. TCP's retransmission of unacknowledged packets can introduce significant delays if there is network congestion (e.g. from video streams).
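
To illustrate the point, a UDP receiver can simply overwrite its last value every time a datagram arrives; a dropped packet costs nothing, because the next frame replaces it. A minimal Java sketch (the port and buffer size are placeholders):

Code:

import java.net.DatagramPacket;
import java.net.DatagramSocket;

// Illustration of why UDP fits: the receiver just keeps whatever arrived
// last, with no retransmission stalls.
public class LatestValueReceiver {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(1130); // an allowed Dashboard-to-Robot port
        byte[] buf = new byte[512];
        String latest = "";
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // blocks until the next datagram
            latest = new String(packet.getData(), 0, packet.getLength());
            System.out.println("latest vision packet: " + latest);
        }
    }
}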

Your custom MetaTCPVariables package is no longer necessary. NetworkTables provides exactly the same thing and is included with robot software development environments this year.

WarehouseCrew 13-01-2013 11:30

Re: Walkthrough on how to do offboard image processing!
 
Is offloading vision processing to the driver station still a viable option now that FIRST has changed the priorities of the packets (e.g. video has the lowest priority)?

Does using a Raspberry Pi (or similar device) make a better alternative for offloading video processing?

Greg McKaskle 13-01-2013 12:15

Re: Walkthrough on how to do offboard image processing!
 
That is a valid question, but unless the available bandwidth is in short supply, the prioritization should have little effect.

Greg McKaskle

jesusrambo 13-01-2013 18:10

Re: Walkthrough on how to do offboard image processing!
 
Quote:

Originally Posted by Alan Anderson (Post 1214286)
Good information. I'll only make two comments:

UDP is usually preferable to TCP for the kind of data you're sending from the offboard computer to the robot. You're really only interested in the latest information, not in guaranteeing delivery of every single packet in order. TCP's retransmission of unacknowledged packets can introduce significant delays if there is network congestion (e.g. video streams).

Your custom MetaTCPVariables package is no longer necessary. NetworkTables provides exactly the same thing and is included with robot software development environments this year.

UDP is absolutely preferable; sadly, the Squawk VM doesn't support it. We tried that at first, and gave it a shot again this year in hopes that the Squawk VM would be updated in the newer cRIO firmware, but it hasn't been.

As far as NetworkTables goes, I'm sure it works, but I'm personally not a big fan of it. It's big, confusing, poorly documented, and there are few if any good examples of how to use it. That said, I have been considering trying it this year just to give it a shot; I just need to block out some time to sit down with it and figure it out. It'd definitely be nice to have something everyone can use more easily without having to write their own TCP code; I'll keep this updated as we deal with that.

Warehouse: Funny you'd mention that; we're actually thinking of using a Raspberry Pi on the robot this year. The packet priority shouldn't matter, though, as the bandwidth isn't nearly saturated enough for that to be an issue. Our plan with the Raspberry Pi is to run pretty much all the same Labview VIs, but to separate them out from the Dashboard (which would actually be even easier).

JM033 25-01-2013 16:23

Re: Walkthrough on how to do offboard image processing!
 
Can anyone help?
http://www.chiefdelphi.com/forums/sh...26#post1221426

JM033 25-01-2013 22:13

Re: Walkthrough on how to do offboard image processing!
 
"Invalid template range" is an error I'm receiving on the driver station, I believe I did the steps correctly. The only thing I didn't do is setup the tcp/udp yet. Is that required to track the image?
EDIT: error is IMAQ:invalid color template image

jesusrambo 25-01-2013 22:16

Re: Walkthrough on how to do offboard image processing!
 
Quote:

Originally Posted by JM033 (Post 1221971)
"Invalid template range" is an error I'm receiving on the driver station, I believe I did the steps correctly. The only thing I didn't do is setup the tcp/udp yet. Is that required to track the image?

Hey,

That's not required for the tracking, but you won't get any data on the cRIO without it. The Dashboard should still be able to show the detection, though. What's giving you that error? Where's it showing up?

Edit: I know what it is. The image processing script uses a template image to match detected blobs against. I forgot to include that, but I'll get it up first thing when I get in the lab tomorrow. Sorry!

JM033 25-01-2013 22:40

Re: Walkthrough on how to do offboard image processing!
 
Thanks, much appreciated!

jesusrambo 26-01-2013 13:28

Re: Walkthrough on how to do offboard image processing!
 
Hey,

Attached is an image of my vision assistant script. I made this a while ago and had forgotten that I decided to forgo using a template image at all. Instead, I just use thresholding and shape recognition. If I remember right, I did it that way because it makes things a little more flexible.

JM033 26-01-2013 13:30

Re: Walkthrough on how to do offboard image processing!
 
Where did you attach it?

jesusrambo 26-01-2013 13:34

Re: Walkthrough on how to do offboard image processing!
 
Um.

Here

JM033 26-01-2013 14:13

Re: Walkthrough on how to do offboard image processing!
 
What do you recommend in trying to find the disk? Circular edge tool?

jesusrambo 26-01-2013 14:28

Re: Walkthrough on how to do offboard image processing!
 
Replace the shape detection block of the script with whatever shape you're trying to find, I'd think.

muaddib42 28-01-2013 11:12

Re: Walkthrough on how to do offboard image processing!
 
This is probably such a simple answer, but how do I place the VI script for image processing on the dashboard? I created the script using NI Vision Assistant, but I don't know how to place it in the modified Dashboard project.

Thank you

JM033 28-01-2013 13:14

Re: Walkthrough on how to do offboard image processing!
 
So I do the image processing on the dashboard and do a UDP send of my target info results from the dashboard to the robot (cRIO). From there, I do a UDP receive in my robot code. Correct?

fovea1959 28-01-2013 14:14

Re: Walkthrough on how to do offboard image processing!
 
What ports are people using for sending the data back from the DS to the robot? Doesn't the FMS only pass data destined for a limited selection of ports?

According to the FMS White Paper:
Quote:

The ports that the teams are able to access on the playing field are as follows:
  • TCP 1180: This port is typically used for camera data from the robot to the DS when the camera is connected to port 2 on the 8-slot cRIO. This port is bidirectional on the field.
  • TCP 1735: SmartDashboard, bidirectional
  • UDP 1130: Dashboard-to-Robot control data, directional
  • UDP 1140: Robot-to-Dashboard status data, directional
  • HTTP 80: Camera connected via switch on the robot, bidirectional
  • HTTP 443: Camera connected via switch on the robot, bidirectional


jesusrambo 28-01-2013 16:43

Re: Walkthrough on how to do offboard image processing!
 
Quote:

Originally Posted by muaddib42 (Post 1223258)
This is probably such a simple answer, but how do I place the VI script for image processing on the dashboard? I created the script using NI Vision Assistant, but I don't know how to place it in the modified Dashboard project.

Thank you

NI explains it better than I could - "You also can add a VI to the project by selecting the VI icon in the upper right corner of a front panel or block diagram window and dragging the icon to the target. When you add a VI to the project, LabVIEW automatically adds its entire hierarchy to the Project Explorer window under Dependencies."

Basically click and drag the icon in the upper right hand corner of the block diagram of the VI you're trying to import, into the dashboard project.


Quote:

Originally Posted by JM033 (Post 1223343)
So I do the image processing on the dashboard and do a UDP send of my target info results from the dashboard to the robot (cRIO). From there, I do a UDP receive in my robot code. Correct?

That's exactly right. I'll be updating this for SmartDashboard today.


As far as concerns about port numbers go, fovea, you're right that only a few ports are allowed during competition. We used 1130, since it's open for sending data from Dashboard to Robot, which is exactly what we're doing.

ohrly? 28-01-2013 19:50

Re: Walkthrough on how to do offboard image processing!
 
We're using a Raspberry Pi as well (Python+OpenCV, after many years of disappointment with NI Vision), and we don't know whether we should use NetworkTables or TCP. Recommendations?

Also, I don't know either protocol, so is it worth it to learn the protocols or just use someone else's code?

violinuxer 28-01-2013 20:35

Re: Walkthrough on how to do offboard image processing!
 
Quote:

You'll have the easiest time doing your vision processing in either C or Labview.
Have you guys tried using WPIJavaCV/SmartDashboard? We use it to do image processing and then use NetworkTables to send values back to the robot. I think you may be able to use pure OpenCV functions if you want (via JavaCV) for plenty of power.

Sorry if I'm being redundant...

violinuxer

jesusrambo 28-01-2013 20:59

Re: Walkthrough on how to do offboard image processing!
 
Quote:

Originally Posted by ohrly? (Post 1223545)
We're using a Raspberry Pi as well (Python+OpenCV, after many years of disappointment with NI Vision), and we don't know whether we should use NetworkTables or TCP. Recommendations?

Also, I don't know either protocol, so is it worth it to learn the protocols or just use someone else's code?

Now that the documentation for NetworkTables has been posted, I'd use that, but you'd have to find a way to use it with Python. Maybe a C middleman would be the easiest way, so you don't have to reimplement NetworkTables in Python. If you don't feel like figuring that out, then I'd stick with either TCP or UDP.

I'll be updating the guide soon as I have the time with more information on using NetworkTables.

Quote:

Originally Posted by violinuxer (Post 1223566)
Have you guys tried using WPIJavaCV/SmartDashboard? We use it to do image processing and then use NetworkTables to send values back to the robot. I think you may be able to use pure OpenCV functions if you want (via JavaCV) for plenty of power.

Sorry if I'm being redundant...

violinuxer

It's certainly possible; I haven't looked into it, but I'd love it if you guys would share your work. I chose the C/Labview route because it let me use the vision assistant.

violinuxer 29-01-2013 09:30

Re: Walkthrough on how to do offboard image processing!
 
Here is how we did it last year:

SmartDashboard comes with the wrapper for OpenCV. You install SmartDashboard via the installer (available at firstforge.wpi.edu) and add the included library jar files (there are tutorial PDFs available on the web) to a pure Java NetBeans project. You then extend WPICameraExtension, in which you can process an image, do thresholding, polygon detection, etc. Any calculated values are then sent back to the robot via NetworkTables.

Below is our image processing code from last year. Note the synchronized block; that's how we sent the values back to the robot.

Code:

import edu.wpi.first.smartdashboard.camera.WPICameraExtension;
import edu.wpi.first.smartdashboard.properties.DoubleProperty;
import edu.wpi.first.smartdashboard.properties.IntegerProperty;
import edu.wpi.first.wpijavacv.*;
import edu.wpi.first.wpilibj.networking.NetworkTable;
import java.util.ArrayList;


public class VisionProcessing extends WPICameraExtension {


    public static final String NAME = "Camera Target Tracker";
    public final IntegerProperty threshold = new IntegerProperty(this, "Threshold", 180);
    public final DoubleProperty contourPolygonApproximationPct = new DoubleProperty(this, "Polygon Approximation %", 45);
    NetworkTable table = NetworkTable.getTable("camera");
        WPIColor targetColor = new WPIColor (0,255,0);
        WPIColor contourColor = new WPIColor (17,133,133);

        @Override
        public WPIImage processImage(WPIColorImage rawImage) {
            WPIBinaryImage blue = rawImage.getBlueChannel().getThreshold(threshold.getValue());
            WPIBinaryImage green = rawImage.getGreenChannel().getThreshold(threshold.getValue());
            WPIBinaryImage red = rawImage.getRedChannel().getThreshold(threshold.getValue());
            // Does the thresholding;

            WPIBinaryImage colorsCombined = blue.getAnd(red).getAnd(green);
            //Mixes the paint;

            colorsCombined.erode(2);
            colorsCombined.dilate(6);
            //Gets rid of small stuff and fills holes;

            WPIContour[] contours = colorsCombined.findContours();
            rawImage.drawContours(contours, contourColor, 3);
            ArrayList<WPIPolygon> polygons = new ArrayList<WPIPolygon>();

            for (WPIContour c : contours) {
                double ratio = ((double) c.getHeight()) / ((double) c.getWidth());
                // Aspect-ratio filter is currently disabled; re-enable the check
                // below to reject blobs that are clearly not target-shaped:
                // if (ratio < 1.5 && ratio > .75)
                polygons.add(c.approxPolygon(contourPolygonApproximationPct.getValue()));
                //Makes polygons around edges;
            }
            ArrayList<WPIPolygon> possiblePolygons = new ArrayList<WPIPolygon>();
            for (WPIPolygon p : polygons) {
                if (p.isConvex() && p.getNumVertices() == 4) {
                    possiblePolygons.add(p);
                } else {
                    rawImage.drawPolygon(p, WPIColor.MAGENTA, 1);
                }
            }
            WPIPolygon square = null;

            int squareArea = 0;
            double centerX = 0;
            double heightRatio = 0;

            for (WPIPolygon p : possiblePolygons) {
                rawImage.drawPolygon(p, WPIColor.BLUE, 5);
                for (WPIPolygon q : possiblePolygons) {
                    if (p == q)
                        continue;
                    int pCenterX = (p.getX() + (p.getWidth()/2));
                    int qCenterX = (q.getX() + (q.getWidth()/2));
                    int pCenterY = (p.getY() + (p.getHeight()/2));
                    int qCenterY = (q.getY() + (q.getHeight()/2));
//                    rawImage.drawPoint(new WPIPoint(pCenterX, pCenterY), targetColor, 5);
//                    rawImage.drawPoint(new WPIPoint(qCenterX, qCenterY), targetColor, 5);
                    if (Math.abs(pCenterX - qCenterX) < 20 && Math.abs(pCenterY - qCenterY) < 20) {
                        int pArea = Math.abs(p.getArea());
                        int qArea = Math.abs(q.getArea());
                        if (square != null) {
                            // If we already have a square and it is lower in the
                            // image, this pair doesn't add anything.
                            if (square.getY() < p.getY())
                                continue;
                        }
                        if (pArea>qArea) {
                            square = p;
                            squareArea = pArea;
                            centerX = (double)pCenterX/rawImage.getWidth()-.5;
                        } else {
                            square = q;
                            squareArea = qArea;
                            centerX = (double)qCenterX/rawImage.getWidth()-.5;
                        }
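                        // Compare the lengths of opposite edges of the chosen
                        // quadrilateral below; the ratio presumably indicates how
                        // far off-axis the camera is viewing the target.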
                        WPIPoint [] v = square.getPoints();
                        int x1 = Math.abs(v[1].getX() - v[0].getX());
                        int y1 = Math.abs(v[1].getY() - v[0].getY());
                        int y2 = Math.abs(v[2].getY() - v[1].getY());
                        int y3 = Math.abs(v[3].getY() - v[2].getY());
                        int y4 = Math.abs(v[0].getY() - v[3].getY());
                        if (y1 > x1) { // first segment is vertical
                            heightRatio = (double)y1 / y3;
                        } else {
                            heightRatio = (double)y2 / y4;
                        }
                        break;
                    }
                }
            }

            if (square != null) {
                int sCenterX = (square.getX() + (square.getWidth()/2));
                int sCenterY = (square.getY() + (square.getHeight()/2));
                rawImage.drawPoint(new WPIPoint(sCenterX, sCenterY), targetColor, 5);
            }

            synchronized (table) {
                table.beginTransaction();
                if (square != null)
                {
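                    // Empirical range fit: converts apparent target area to a
                    // distance in inches (constants presumably calibrated against
                    // measured distances; the *12 converts feet to inches).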
                    double distance = (1564.4/Math.sqrt(squareArea) - 0.5719) * 12;
                    table.putBoolean("found", true);
                    table.putDouble("area", squareArea);
                    table.putDouble("distance", distance);
                    table.putDouble("xoffset", -centerX);
                    table.putDouble("heightratio", heightRatio);
                }
                else
                {
                    table.putBoolean("found", false);
                }
                table.endTransaction();
            }
            return rawImage;
    }
}

Anyhow, I'll see if I can find the docs on getting WPIJavaCV set up and will post later.

violinuxer

JM033 29-01-2013 15:31

Re: Walkthrough on how to do offboard image processing!
 
Kinda like this rambo?
https://www.dropbox.com/s/it4kfwq2ny6mew5/Vision.png

jesusrambo 29-01-2013 16:42

Re: Walkthrough on how to do offboard image processing!
 
Looks good to me. Let me know if something breaks horribly. As an aside, for any teams considering trying it: UDP doesn't work in Java on the cRIO, since the Squawk VM doesn't support it.

I did it slightly differently, by putting the UDP send inside the Dashboard loop and the socket open and close outside of the normal running loop. Opening and closing the port rapidly, like yours does, might cause problems; if you get errors that it can't bind because the address is already in use, that's likely it.
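
In Java terms (the dashboard here is Labview, but the pattern is the same), what I mean is open once, send many, close once. A sketch, with a placeholder address:

Code:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class DashboardSendLoop {
    public static void main(String[] args) throws Exception {
        // Open the socket ONCE, outside the processing loop, so you never hit
        // "address already in use" errors from rapid re-binding.
        DatagramSocket socket = new DatagramSocket();
        InetAddress robot = InetAddress.getByName("10.0.0.2"); // your cRIO's address
        for (int frame = 0; frame < 100; frame++) { // stands in for the dashboard loop
            byte[] data = ("{12.3, 160, 120} frame " + frame).getBytes();
            socket.send(new DatagramPacket(data, data.length, robot, 1130)); // UDP 1130: Dashboard-to-Robot
        }
        socket.close(); // close once, on shutdown
    }
}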

This is my new implementation of the viToCRIO VI, redone to use NetworkTables. It posts all the x,y coordinates of each target, and the range, to fields in the NetworkTable.

In the robot code, I use this command to handle sorting the targets and storing the top, bottom, left, and right targets. Now it'll also pull the values from the NetworkTable instead of using the TCP connection (see lines 113-139).

Line 103 in this, our main robot class, sets up the NetworkTable in case you need a Java example.
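
In case the links go stale, here's roughly what the robot-side reads look like. This is a sketch against the 2013-era NetworkTables Java API, and the table and key names are examples, not the exact ones from the linked code:

Code:

import edu.wpi.first.wpilibj.networktables.NetworkTable;

// Sketch of pulling vision results on the robot; "vision", "range",
// "x0"/"y0" etc. are made-up keys for illustration.
public class VisionReader {
    private final NetworkTable table = NetworkTable.getTable("vision");

    public double getRange() {
        return table.getNumber("range", -1.0); // default when nothing posted yet
    }

    public double getTargetX(int i) {
        return table.getNumber("x" + i, 0.0);
    }

    public double getTargetY(int i) {
        return table.getNumber("y" + i, 0.0);
    }
}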

JM033 29-01-2013 19:11

Re: Walkthrough on how to do offboard image processing!
 
Can someone upload their dashboard code with the vision tracking and UDP stuff? I'm having a lot of trouble trying to run it :*(

jesusrambo 29-01-2013 19:36

Re: Walkthrough on how to do offboard image processing!
 
Not at the lab right now so I can't, sorry. What trouble are you having?

JM033 29-01-2013 19:43

Re: Walkthrough on how to do offboard image processing!
 
So I deleted Dashboard Main from a new dashboard project and replaced it with the vision processing VI from the 'Rectangular Target - 2013' sample code, then put the UDP stuff in it. It opens a bunch of random windows of my code when I load the driver station, and asks me to find "LVODE.dll", which I did. Then I don't have any camera image. Too much.
I don't think I'm building it right.

jesusrambo 29-01-2013 21:49

Re: Walkthrough on how to do offboard image processing!
 
You don't want to delete the entire Dashboard; you want to drop the vision processing code inside of the current one. Check out the screenshots of my VI in the OP.

JM033 29-01-2013 22:01

Re: Walkthrough on how to do offboard image processing!
 
As in drop the vision processing subVI inside the dashboard main? Or just add it to the project? Could you just spare me my stupidity and upload your dashboard so I have a better understanding? I only got your screenshot of the NetworkTable version, which seems pretty complicated btw, but I want to use UDP... the other screenies are in Java, so I have no clue.
Sorry, total noob here -__-

jesusrambo 29-01-2013 22:07

Re: Walkthrough on how to do offboard image processing!
 
No problem. It should be inside the dashboard main.

Think of it this way -- the dashboard is still running normally, but within it, it's also handling the image processing.

EDIT: Here are screenshots of all of our VIs.

JM033 29-01-2013 22:09

Re: Walkthrough on how to do offboard image processing!
 
Thanks I'll try :)

JM033 29-01-2013 22:25

Re: Walkthrough on how to do offboard image processing!
 
Do you have the script somewhere that I can download it?

jesusrambo 29-01-2013 22:49

Re: Walkthrough on how to do offboard image processing!
 
Sorry, I'll give you all the steps to do it but I'm not going to just do it for you. Learning to do it is part of the competition =)

If you need specific help though, I'll be happy to do that. Look through that folder of screenshots.

JM033 29-01-2013 22:55

Re: Walkthrough on how to do offboard image processing!
 
OK, thanks. Is it OK if I don't do the script and just integrate the default rectangular target example instead, like how you did it?

jesusrambo 29-01-2013 22:58

Re: Walkthrough on how to do offboard image processing!
 
What do you mean? Which default rectangular target example?

JM033 30-01-2013 06:59

Re: Walkthrough on how to do offboard image processing!
 
The one in the FRC examples when you open up Labview; it's called 'Rectangular Target - 2013'. It tracks the target for you and seems to work just fine; it gave all the x/y ft. values when I tested it. I'm fairly sure it should work, though.

jesusrambo 30-01-2013 17:00

Re: Walkthrough on how to do offboard image processing!
 
Then it sounds like it'd work fine to me; give it a shot and let me know how it works.

Joe Ross 30-01-2013 18:57

Re: Walkthrough on how to do offboard image processing!
 
Quote:

Originally Posted by JM033 (Post 1224153)
So I deleted Dashboard Main from a new dashboard project and replaced it with the vision processing VI from the 'Rectangular Target - 2013' sample code, then put the UDP stuff in it. It opens a bunch of random windows of my code when I load the driver station, and asks me to find "LVODE.dll", which I did. Then I don't have any camera image. Too much.
I don't think I'm building it right.

Like JesusRambo said, you deleted too much. Here's another example of how to do it, based on last year's dashboard and vision example: http://forums.usfirst.org/showthread.php?t=19449
Since the dashboard was rewritten this year, it won't look exactly the same, but it should be close enough for someone who's willing to work at it a little to figure out.

JM033 30-01-2013 23:44

Re: Walkthrough on how to do offboard image processing!
 
Quote:

Originally Posted by jesusrambo (Post 1224726)
Then it sounds like it'd work fine to me; give it a shot and let me know how it works.

Do you know what settings you have for "Filters 1" in the vision script?

Quote:

Originally Posted by Joe Ross (Post 1224808)
Like JesusRambo said, you deleted too much. Here's another example of how to do it, based on last year's dashboard and vision example: http://forums.usfirst.org/showthread.php?t=19449
Since the dashboard was rewritten this year, it won't look exactly the same, but it should be close enough for someone who's willing to work at it a little to figure out.

Unfortunately it does not let me look at the .zip, because I need to make an account to download it, and when I click register it says registration is disabled.

JM033 01-02-2013 16:01

Re: Walkthrough on how to do offboard image processing!
 
Never mind, I got the dashboard code working. Thanks!

muaddib42 11-02-2013 01:35

Re: Walkthrough on how to do offboard image processing!
 
I am still having trouble changing the source of the image to the Axis cam. If anyone could help direct me, it would be greatly appreciated. Thank you

jesusrambo 11-02-2013 01:54

Re: Walkthrough on how to do offboard image processing!
 
In the Dashboard, there's a purple "image out" wire. Right-click the image display on the dashboard and click "Select Terminal" and you'll see the wire going into it as an input. Route that into the image processing subVI.

muaddib42 12-02-2013 02:13

Re: Walkthrough on how to do offboard image processing!
 
Thank you! So when I am exporting the script from the NI Vision Assistant I should select the image source as Image File and route this input into that?

jesusrambo 12-02-2013 19:17

Re: Walkthrough on how to do offboard image processing!
 
I think Image Control is the one you want.

jesusrambo 30-07-2013 00:42

Re: Walkthrough on how to do offboard image processing!
 
I worked with one of the mentors on my team to write up a whitepaper on what we did to get this working, along with a little bit of further research into using vision processing as a component of an autonomous frisbee shooter controller.

Hosted here, on ChiefDelphi

