#16
Re: Walkthrough on how to do offboard image processing!
So I do the image processing on the dashboard and do a UDP send of my target info results from the dashboard to the robot (cRIO). From there, I set up a UDP receive in my robot code to get the data from the dashboard. Correct?
#17
Re: Walkthrough on how to do offboard image processing!
What ports are people using for sending the data back from the DS to the robot? Doesn't the FMS only pass data destined for a limited selection of ports?
According to the FMS White Paper:
Quote:
#18
Re: Walkthrough on how to do offboard image processing!
Quote:
Basically, click and drag the icon in the upper right-hand corner of the block diagram of the VI you're trying to import into the dashboard project.
Quote:
As far as concerns about port numbers go, fovea, you're right that only a few ports are allowed during competition. We used 1130, since it's open for sending data from the Dashboard to the Robot, which is exactly what we're doing.
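For anyone doing the dashboard side in Java rather than LabVIEW, a minimal sketch of the equivalent send with java.net.DatagramSocket might look like the following. The robot address and the message format here are placeholders for illustration, not anything from this thread. Code:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class TargetSender {
    public static void main(String[] args) throws Exception {
        // 10.TE.AM.2 is the cRIO under the standard FRC addressing scheme;
        // substitute your own team number here.
        InetAddress robot = InetAddress.getByName("10.0.0.2");
        DatagramSocket socket = new DatagramSocket();

        // Pack the target info however you like; a comma-separated string
        // is used here purely for illustration.
        byte[] payload = "found=1,distance=120.5,xoffset=-0.12".getBytes();
        socket.send(new DatagramPacket(payload, payload.length, robot, 1130));
        socket.close();
    }
}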
#19
Re: Walkthrough on how to do offboard image processing!
We're using a Raspberry Pi as well (Python + OpenCV, after many years of disappointment with NIVision), and I don't know whether we should use NetworkTables or TCP. Recommendations?
Also, I don't know either protocol, so is it worth it to learn the protocols, or should we just use someone else's code?
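If you do end up rolling your own TCP link, the main thing to get right is message framing, since TCP is a byte stream with no message boundaries. Below is a minimal line-delimited listener, sketched in Java to match the rest of the code in this thread; the port and field layout are made up, and on the Pi side you'd write the matching Python sender. Code:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class VisionListener {
    public static void main(String[] args) throws Exception {
        // The port here is arbitrary; check which ports the FMS actually
        // passes before relying on this at a competition.
        ServerSocket server = new ServerSocket(1180);
        Socket client = server.accept();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream()));

        // One newline-terminated message per frame keeps framing trivial.
        String line;
        while ((line = in.readLine()) != null) {
            String[] fields = line.split(",");
            if (fields.length >= 2) {
                System.out.println("found=" + fields[0] + " x=" + fields[1]);
            }
        }
        server.close();
    }
}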
#20
Re: Walkthrough on how to do offboard image processing!
Quote:
Sorry if I'm being redundant...
violinuxer
#21
Re: Walkthrough on how to do offboard image processing!
Quote:
I'll be updating the guide with more information on using NetworkTables as soon as I have the time.
Quote:
#22
Re: Walkthrough on how to do offboard image processing!
Here is how we did it last year:
SmartDashboard comes with a wrapper for OpenCV. You install SmartDashboard via the installer (available at firstforge.wpi.edu) and add the included library jar files into a pure Java NetBeans project (there are tutorial PDFs available on the web). You then extend WPICameraExtension, in which you can process an image: thresholding, polygon detection, etc. Any calculated values are then sent back to the robot via NetworkTables.
Below is our image processing code from last year. Note the synchronized block; that's how we sent the values back to the robot. Code:
import edu.wpi.first.smartdashboard.camera.WPICameraExtension;
import edu.wpi.first.smartdashboard.properties.DoubleProperty;
import edu.wpi.first.smartdashboard.properties.IntegerProperty;
import edu.wpi.first.wpijavacv.*;
import edu.wpi.first.wpilibj.networking.NetworkTable;
import java.util.ArrayList;

public class VisionProcessing extends WPICameraExtension {
    public static final String NAME = "Camera Target Tracker";
    public final IntegerProperty threshold = new IntegerProperty(this, "Threshold", 180);
    public final DoubleProperty contourPolygonApproximationPct =
            new DoubleProperty(this, "Polygon Approximation %", 45);

    NetworkTable table = NetworkTable.getTable("camera");
    WPIColor targetColor = new WPIColor(0, 255, 0);
    WPIColor contourColor = new WPIColor(17, 133, 133);

    @Override
    public WPIImage processImage(WPIColorImage rawImage) {
        // Threshold each color channel separately, then AND them together
        // so only bright, near-white pixels survive.
        WPIBinaryImage blue = rawImage.getBlueChannel().getThreshold(threshold.getValue());
        WPIBinaryImage green = rawImage.getGreenChannel().getThreshold(threshold.getValue());
        WPIBinaryImage red = rawImage.getRedChannel().getThreshold(threshold.getValue());
        WPIBinaryImage colorsCombined = blue.getAnd(red).getAnd(green);

        // Erode to get rid of small noise, then dilate to fill holes.
        colorsCombined.erode(2);
        colorsCombined.dilate(6);

        // Trace the outlines of the remaining blobs and approximate each
        // contour with a polygon.
        WPIContour[] contours = colorsCombined.findContours();
        rawImage.drawContours(contours, contourColor, 3);
        ArrayList<WPIPolygon> polygons = new ArrayList<WPIPolygon>();
        for (WPIContour c : contours) {
            // Aspect-ratio filter, currently disabled:
            // double ratio = ((double) c.getHeight()) / ((double) c.getWidth());
            // if (ratio < 1.5 && ratio > .75)
            polygons.add(c.approxPolygon(contourPolygonApproximationPct.getValue()));
        }

        // Keep only convex quadrilaterals as target candidates.
        ArrayList<WPIPolygon> possiblePolygons = new ArrayList<WPIPolygon>();
        for (WPIPolygon p : polygons) {
            if (p.isConvex() && p.getNumVertices() == 4) {
                possiblePolygons.add(p);
            } else {
                rawImage.drawPolygon(p, WPIColor.MAGENTA, 1);
            }
        }

        // Look for pairs of roughly concentric quadrilaterals (the inner
        // and outer edge of the same target) and keep the highest one.
        WPIPolygon square = null;
        int squareArea = 0;
        double centerX = 0;
        double heightRatio = 0;
        for (WPIPolygon p : possiblePolygons) {
            rawImage.drawPolygon(p, WPIColor.BLUE, 5);
            for (WPIPolygon q : possiblePolygons) {
                if (p == q)
                    continue;
                int pCenterX = p.getX() + p.getWidth() / 2;
                int qCenterX = q.getX() + q.getWidth() / 2;
                int pCenterY = p.getY() + p.getHeight() / 2;
                int qCenterY = q.getY() + q.getHeight() / 2;
                if (Math.abs(pCenterX - qCenterX) < 20 && Math.abs(pCenterY - qCenterY) < 20) {
                    int pArea = Math.abs(p.getArea());
                    int qArea = Math.abs(q.getArea());
                    if (square != null && square.getY() < p.getY()) {
                        // Keep the higher (smaller y) candidate we already found.
                        continue;
                    }
                    // Use the larger (outer) polygon of the pair.
                    if (pArea > qArea) {
                        square = p;
                        squareArea = pArea;
                        centerX = (double) pCenterX / rawImage.getWidth() - .5;
                    } else {
                        square = q;
                        squareArea = qArea;
                        centerX = (double) qCenterX / rawImage.getWidth() - .5;
                    }
                    // Compare the heights of opposite sides to estimate skew.
                    WPIPoint[] v = square.getPoints();
                    int x1 = Math.abs(v[1].getX() - v[0].getX());
                    int y1 = Math.abs(v[1].getY() - v[0].getY());
                    int y2 = Math.abs(v[2].getY() - v[1].getY());
                    int y3 = Math.abs(v[3].getY() - v[2].getY());
                    int y4 = Math.abs(v[0].getY() - v[3].getY());
                    if (y1 > x1) { // first segment is vertical
                        heightRatio = (double) y1 / y3;
                    } else {
                        heightRatio = (double) y2 / y4;
                    }
                    break;
                }
            }
        }

        if (square != null) {
            int sCenterX = square.getX() + square.getWidth() / 2;
            int sCenterY = square.getY() + square.getHeight() / 2;
            rawImage.drawPoint(new WPIPoint(sCenterX, sCenterY), targetColor, 5);
        }

        // Publish the results to the robot over NetworkTables.
        synchronized (table) {
            table.beginTransaction();
            if (square != null) {
                double distance = (1564.4 / Math.sqrt(squareArea) - 0.5719) * 12;
                table.putBoolean("found", true);
                table.putDouble("area", squareArea);
                table.putDouble("distance", distance);
                table.putDouble("xoffset", -centerX);
                table.putDouble("heightratio", heightRatio);
            } else {
                table.putBoolean("found", false);
            }
            table.endTransaction();
        }
        return rawImage;
    }
}
violinuxer
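On the robot side, reading those values back might look something like the sketch below. This assumes the same 2012-era edu.wpi.first.wpilibj.networking.NetworkTable API used in the dashboard code above, with getBoolean/getDouble getters mirroring the putters; check the WPILib javadocs for your season, since this API changed between years. Code:
import edu.wpi.first.wpilibj.networking.NetworkTable;

public class CameraReader {
    private final NetworkTable table = NetworkTable.getTable("camera");

    // Call from the main robot loop. Returns the latest reported distance,
    // or -1 if the dashboard hasn't found (or published) a target yet.
    public double getTargetDistance() {
        try {
            if (table.getBoolean("found")) {
                return table.getDouble("distance");
            }
        } catch (Exception e) {
            // Keys not published yet, e.g. the dashboard isn't running.
        }
        return -1;
    }
}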
#23
Re: Walkthrough on how to do offboard image processing!
Kinda like this, rambo?
https://www.dropbox.com/s/it4kfwq2ny6mew5/Vision.png
#24
Re: Walkthrough on how to do offboard image processing!
https://www.dropbox.com/s/it4kfwq2ny6mew5/Vision.png
Is that correct?
#25
Re: Walkthrough on how to do offboard image processing!
Looks good to me. Let me know if something breaks horribly. As an aside, for any teams considering trying it: UDP doesn't work in Java on the cRIO.
I did it slightly differently, by putting the UDP send inside the Dashboard loop and the socket open and close outside of the normal running loop. I don't know if it'd have problems with opening and closing the port rapidly like yours might; if you get errors that it can't bind because the address is already in use, that's likely why.
This is my new implementation of the viToCRIO VI, redone to use NetworkTables. It posts all the x,y coordinates of each target, and the range, to fields in the NetworkTable. In the robot code, I use this command to handle sorting the targets and storing the top, bottom, left, and right targets. Now it'll also pull the values from the NetworkTable instead of using the TCP connection (see lines 113-139). Line 103 in this, our main robot class, sets up the NetworkTable in case you need a Java example.
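To make that socket-lifetime point concrete, here's the shape of the pattern in plain Java (the port and message format are just illustrative): bind the receiving socket once, outside the loop, and reuse it every iteration. Re-binding on every pass is what tends to produce the "address already in use" errors. Code:
import java.net.BindException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class ReceiveLoop {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket;
        try {
            // Bind ONCE, outside the loop. Re-binding the same port every
            // iteration races the OS releasing it and throws BindException.
            socket = new DatagramSocket(1130);
        } catch (BindException e) {
            System.err.println("Port already in use - is another copy running?");
            return;
        }
        byte[] buf = new byte[512];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // blocks until a datagram arrives
            String msg = new String(packet.getData(), 0, packet.getLength());
            System.out.println("got: " + msg);
        }
    }
}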
#26
Re: Walkthrough on how to do offboard image processing!
Can someone upload their dashboard code with the vision tracking and UDP stuff? I'm having a lot of trouble trying to run it. :*(
#27
Re: Walkthrough on how to do offboard image processing!
Not at the lab right now so I can't, sorry. What trouble are you having?
#28
Re: Walkthrough on how to do offboard image processing!
So I deleted Dashboard Main from a new dashboard project and replaced it with the vision processing VI from the Rectangular Target - 2013 sample code, then put the UDP stuff in it. It opens a bunch of random windows of my code when I load the Driver Station, and asks me to find "LVODE.dll", which I did. Then I don't have any camera image.
I don't think I'm building it right.
Last edited by JM033 : 29-01-2013 at 21:29.
#29
Re: Walkthrough on how to do offboard image processing!
You don't want to delete the entire Dashboard; you want to drop the vision processing code inside the current one. Check out the screenshots of my VI in the OP.
#30
Re: Walkthrough on how to do offboard image processing!
As in, drop the vision processing subVI inside Dashboard Main? Or just add it to the project? Could you just save me from my stupidity and upload your dashboard so I have a better understanding? I only got your screenshot of the NetworkTables setup, which seems pretty complicated btw, but I want to use UDP. The other screenies are in Java, so I have no clue.
Sorry, total noob here -__-
Last edited by JM033 : 29-01-2013 at 22:05.