#1
Vision Processing
This year, my team wants to use vision processing, specifically using the retroreflective material near the goals to help us aim.
Two ways I've seen this done are with OpenCV and with RoboRealm. Which of these, or any other vision processing utility, would you recommend, and why? We program in Java, if that helps. Also: onboard, offboard, or coprocessor? Thanks.
#2
|
|||||
|
|||||
|
Re: Vision Processing
Quote:
#3
Re: Vision Processing
Thanks!
#4
Re: Vision Processing
If you have the weight for a coprocessor, it may be advantageous to go that route.

I've still got to wire my code into NetworkTables, but that should be easy. We spent days trying to get the onboard processing working, but as many have warned, it's really tough to do it well on the cRIO. We went to week zero with onboard code and had no problems seeing the hot goal; anything more than that I would be concerned about.

If you're using RobotBuilder / subsystems / commands, I can help you get up and running with some hot-camera code. Shoot me an email and I'll invite you to my Dropbox (mwtidd@gmail.com). Here is the current version of my OpenCV code for Java:

Code:
import java.util.ArrayList;
import java.util.List;

import javax.swing.JFrame;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.highgui.Highgui;
import org.opencv.highgui.VideoCapture;
import org.opencv.imgproc.Imgproc;

public class MatchingDemo {
    public static void main(String[] args) throws InterruptedException {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        JFrame frame1 = new JFrame("Camera");
        frame1.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame1.setBounds(0, 0, 1260, 720);
        Panel panel1 = new Panel(); // Panel is our custom JPanel that can draw a Mat
        frame1.setContentPane(panel1);
        frame1.setVisible(true);

        VideoCapture capture = new VideoCapture(0);
        capture.set(Highgui.CV_CAP_PROP_FRAME_HEIGHT, 720);
        capture.set(Highgui.CV_CAP_PROP_FRAME_WIDTH, 1260);

        Mat src = new Mat();
        // Note: OpenCV channel order is BGR, not RGB
        Scalar bgr_min = new Scalar(0, 0, 50, 0);
        Scalar bgr_max = new Scalar(250, 250, 250, 0);

        Mat erode = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
        Mat dilate = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(5, 5));

        while (true) {
            Thread.sleep(10);
            capture.read(src);

            // Threshold to a binary image, then erode/dilate to clean up noise
            Mat dest = new Mat();
            Core.inRange(src, bgr_min, bgr_max, dest);
            Imgproc.erode(dest, dest, erode);
            Imgproc.erode(dest, dest, erode);
            Imgproc.dilate(dest, dest, dilate);
            Imgproc.dilate(dest, dest, dilate);

            List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
            Imgproc.findContours(dest, contours, new Mat(),
                    Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

            List<Rect> filteredTargets = new ArrayList<Rect>();
            for (MatOfPoint contour : contours) {
                Rect rectangle = Imgproc.boundingRect(contour);
                int area = rectangle.width * rectangle.height;
                // Cast to double: integer division would truncate the ratios
                double verticalRatio = (double) rectangle.height / rectangle.width;
                double horizontalRatio = (double) rectangle.width / rectangle.height;
                if (area > 2500 && (verticalRatio > 2 || horizontalRatio > 2)) {
                    filteredTargets.add(rectangle);
                    System.out.println("The target is: " + rectangle.width + " x " + rectangle.height);
                    Core.rectangle(src, rectangle.tl(), rectangle.br(), new Scalar(0, 111, 255, 0));
                }
            }
            System.out.println("Targets found: " + filteredTargets.size());

            panel1.setimagewithMat(src);
            frame1.repaint();
        }
    }
}
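The contour filter in that code boils down to a simple predicate: keep blobs bigger than 2500 px with an aspect ratio over 2:1 either way. Here is a standalone sketch of just that predicate (no OpenCV required), so it can be unit-tested; the thresholds match the code above, and the cast to double matters because integer division would silently truncate ratios like 3/2 down to 1.

```java
public class TargetFilter {
    // Same thresholds as the OpenCV loop above: minimum 2500 px
    // bounding-box area, and at least a 2:1 aspect ratio.
    static boolean isTarget(int width, int height) {
        int area = width * height;
        // Floating-point division; (int)3 / (int)2 would yield 1
        double verticalRatio = (double) height / width;
        double horizontalRatio = (double) width / height;
        return area > 2500 && (verticalRatio > 2 || horizontalRatio > 2);
    }

    public static void main(String[] args) {
        // A wide retroreflective strip passes; a square blob does not
        System.out.println(isTarget(100, 30));
        System.out.println(isTarget(60, 50));
    }
}
```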
#5
|
|||
|
|||
|
Re: Vision Processing
Our team uses OpenCV, with all the detection and analysis done off the robot on the driver station, and the results sent back to the robot via NetworkTables. OpenCV works nicely, but it can be a bit tricky to get the hang of, especially on a tight time budget. Doing it on the driver laptop also requires a somewhat beefier machine than some alternatives, but it guarantees there's no processing lag on the robot.
#6
|
||||
|
||||
|
Re: Vision Processing
Quote:
We were looking at OpenCV as well, but when I downloaded it, my computer refused to open it, so I wasn't sure what to do.
#7
|
|||
|
|||
|
Re: Vision Processing
Quote:
However, a valid reason to go with a co-processor onboard the robot is that you may end up at an event where FTAs turn off your dashboard/camera stream to help reduce network issues. Many teams had this happen to them last year, and teams relying on vision processing on their dashboard were forced to play handicapped. This is avoided if you process vision on board and reduce or remove any streams back to the dashboard. The trade-off is that you need to provide space, weight, and power for the onboard processor.

If disabled dashboards at your events may be a concern for you, then I recommend a co-processor. This is the method my team employs. We use a BeagleBone White on board and open TCP sockets between the Bone and the cRIO; no data is sent to the driver station, so we don't have to worry about bandwidth limitations or FTAs shutting down cameras. We can achieve 10 fps in real time with FFmpeg, OpenCV, and a 320x240 image from the Axis M1011 camera.

A third option floating around on ChiefDelphi that some teams are using is a Class 1 laser detector aimed at the hot target: they get a true/false reading for the presence of a hot target after the match starts. No vision camera required. If it works, it would be a very simple, elegant solution.

Hope this helps,
Kevin

Last edited by NotInControl : 25-02-2014 at 22:56.
#8
|
||||
|
||||
|
Re: Vision Processing
Kevin, what language is your coprocessed OpenCV solution written in?
#9
|
||||
|
||||
|
Re: Vision Processing
Am I the only one who doesn't like how the solution is already given to teams through these demos? I feel like teams that do vision this way don't actually understand what is happening.
#10
|
|||
|
|||
|
Re: Vision Processing
C++ on the BeagleBone, Java on the cRIO, with a simple TCP stream between the two.
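The cRIO side of a link like this can be plain java.net sockets. Below is a minimal self-contained sketch of the idea, with both ends in one process for illustration: the "coprocessor" thread writes one comma-separated line per frame, and the "robot" side reads and parses it. The `angle,distance` message format and one-line-per-frame framing are my assumptions, not the poster's actual protocol.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpLinkDemo {
    // Parse one "angle,distance" line from the coprocessor.
    // (Hypothetical message format, for illustration only.)
    static double[] parseMessage(String line) {
        String[] parts = line.split(",");
        return new double[] { Double.parseDouble(parts[0]),
                              Double.parseDouble(parts[1]) };
    }

    public static void main(String[] args) throws Exception {
        // "Robot" side: listen on an ephemeral port
        ServerSocket server = new ServerSocket(0);
        int port = server.getLocalPort();

        // "Coprocessor" side: connect and send one vision result
        Thread sender = new Thread(() -> {
            try (Socket s = new Socket("localhost", port);
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println("12.5,180.0"); // heading offset, distance
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        sender.start();

        // Robot side: accept the connection and parse the message
        try (Socket client = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
            double[] msg = parseMessage(in.readLine());
            System.out.println("angle=" + msg[0] + " distance=" + msg[1]);
        }
        sender.join();
        server.close();
    }
}
```

On a real robot the reader would loop in its own thread and stash the latest values for the drive code, rather than reading a single message and exiting.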
#11
|
||||
|
||||
|
Re: Vision Processing
Quote:
Remember that many teams may not have access to the same resources (education/mentors) that you have. I for one find the WPI samples a nice starting point, and certainly not a "solution". The code pretty much works out of the box, but getting it working well with your actual code base is not a trivial task (especially if you're using RobotBuilder).
#12
|
|||||
|
|||||
|
Re: Vision Processing
We're for the most part using the out-of-the-box vision code, but we went through all of it and discussed the process it uses. We did have to make some changes to get it to work the way we want in our system, and to do that we needed at least a basic understanding of how it works. So I wouldn't say we don't understand what is happening.
#13
|
||||
|
||||
|
Re: Vision Processing
Quote: