I've made a few threads here on ChiefDelphi about errors I have faced with my vision code and have received no replies. I've used the knowledge I have to fix them as best as I can, but I feel like my code is sloppy and unoptimized. I fear that we will run into bandwidth issues or the vision processing will be slow with my current methods of vision tracking. I am the only programmer on my team, and we have no programming mentors, so I am in dire need of help. I understand that a reply to this might take a while, so kudos to whoever does!
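For context on the bandwidth worry, here is a quick back-of-the-envelope estimate. This is only a sketch: the 320x240 resolution is an assumption (the actual camWidth/camHeight values aren't shown in the code below), and it computes the uncompressed data rate; MJPEG compression typically reduces this by an order of magnitude or more, which is why modest-resolution streams usually fit under the FMS bandwidth cap.

```java
import java.util.Locale;

public class BandwidthEstimate {
    public static void main(String[] args) {
        // Assumed stream settings (camWidth/camHeight are variables in the
        // robot code whose values aren't shown; 320x240 is a guess).
        int width = 320, height = 240, fps = 15;

        // Uncompressed 24-bit RGB: 3 bytes per pixel, per frame, per second.
        long rawBytesPerSec = (long) width * height * 3 * fps;

        // Convert to megabits per second (1 byte = 8 bits).
        double rawMbps = rawBytesPerSec * 8 / 1_000_000.0;

        System.out.printf(Locale.US, "%.2f%n", rawMbps);
    }
}
```

So a single uncompressed 320x240 stream at 15 fps would be roughly 27.65 Mbps raw, which is why the compressed MJPEG stream the CameraServer actually sends matters so much for staying under the limit.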
I will run through my code first, then state some questions at the end. We are going to be using vision processing for gear placement and for the high goal, so I thought it would be smart to use two separate threads for the vision processing (we have a camera for each target).

Code:
@Override
public void robotInit() {
    /*
     * Lock objects used to share vision results between threads.
     */
    imgLockGoal = new Object();
    imgLockGear = new Object();
    /*
     * Image grabbed by the CvSink, annotated, and output through the
     * CvSource.
     */
    image = new Mat();
    /*
     * Camera used to track vision targets on the boiler.
     */
    cam0 = new UsbCamera("cam0", 0);
    cam0.setResolution(camWidth, camHeight);
    cam0.setFPS(15);
    /*
     * Camera used to track vision targets on the airship.
     */
    cam1 = new UsbCamera("cam1", 1);
    cam1.setResolution(camWidth, camHeight);
    cam1.setFPS(15);
    /*
     * CvSink used to grab frames from the selected camera.
     */
    selectedVid = CameraServer.getInstance().getVideo(cam0);
    /*
     * CvSource used to output the processed image to the SmartDashboard
     * (CameraServer Stream Viewer).
     */
    outputStream = CameraServer.getInstance().putVideo("Tracking", camWidth, camHeight);
    /*
     * Vision thread that uses the high-goal contour filtering to find the
     * best targets and help lead the robot to the target destination.
     */
    visionThreadHighGoal = new VisionThread(cam0, pipeline, pipeline -> {
        while (!visionThreadHighGoal.isInterrupted()) {
            if (whichCam) {
                selectedVid.grabFrame(image);
                if (pipeline.filterContoursOutput().size() >= 2) {
                    Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
                    Rect r1 = Imgproc.boundingRect(pipeline.filterContoursOutput().get(1));
                    Imgproc.rectangle(image, new Point(r.x, r.y),
                            new Point(r.x + r.width, r.y + r.height), new Scalar(0, 0, 255), 2);
                    Imgproc.rectangle(image, new Point(r1.x, r1.y),
                            new Point(r1.x + r1.width, r1.y + r1.height), new Scalar(0, 0, 255), 2);
                    outputStream.putFrame(image);
                    synchronized (imgLockGoal) {
                        // Midpoint between the left edge of the first target
                        // and the right edge of the second.
                        centerX = (r.x + (r1.x + r1.width)) / 2;
                        width = (r.x + r1.x) / 2;
                    }
                } else {
                    outputStream.putFrame(image);
                }
            }
            // Sleep inside the loop so the thread yields between frames.
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    visionThreadHighGoal.start();
    // TODO: change filters to be specific to the gear targets.
    /*
     * Vision thread that uses the gear contour filtering to find the best
     * targets and help lead the robot to the target destination.
     */
    visionThreadGear = new VisionThread(cam1, pipeline, pipeline -> {
        while (!visionThreadGear.isInterrupted()) {
            if (!whichCam) {
                selectedVid.grabFrame(image);
                if (pipeline.filterContoursOutput().size() >= 2) {
                    Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
                    Rect r1 = Imgproc.boundingRect(pipeline.filterContoursOutput().get(1));
                    Imgproc.rectangle(image, new Point(r.x, r.y),
                            new Point(r.x + r.width, r.y + r.height), new Scalar(0, 0, 255), 2);
                    Imgproc.rectangle(image, new Point(r1.x, r1.y),
                            new Point(r1.x + r1.width, r1.y + r1.height), new Scalar(0, 0, 255), 2);
                    synchronized (imgLockGear) {
                        // TODO: publish gear-target measurements here.
                    }
                    outputStream.putFrame(image);
                } else {
                    outputStream.putFrame(image);
                }
            }
            // Sleep inside the loop so the thread yields between frames.
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    visionThreadGear.start();
}
The idea behind this is that I can set a boolean (whichCam) true or false. Both threads are running at the same time, but the vision code only runs on one at a time. I see that most people are using Raspberry Pis to process vision (I would not know where to start). Here are my questions:

1) Will I run into performance issues using only the roboRIO?
2) Should I use a coprocessor? (We own a Jetson TK1, but it seems like too much.)
3) Where would I start with this? Can I use Java?
4) If it's okay for me to stick with vision processing on only the roboRIO, is there a better method for me to do this?

Thank you all so much for reading. I'd love to read all the replies! The complete code can be viewed here: https://github.com/Lesafian/Nick-s-Truck-SkrtSkrt

Last edited by Lesafian : 05-02-2017 at 21:46.
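To make the center math in the goal thread concrete, here is a standalone sketch of the same arithmetic with made-up rectangle values (plain ints standing in for OpenCV's Rect fields; the 320-pixel frame width and the target positions are hypothetical):

```java
public class CenterMath {
    public static void main(String[] args) {
        // Hypothetical bounding boxes for the two reflective-tape strips:
        // x coordinate and width in pixels. Values are made up.
        int rX = 140, rWidth = 20;   // left target
        int r1X = 180, r1Width = 20; // right target

        // Midpoint between the left edge of the left target and the
        // right edge of the right target -- the center of the goal.
        int centerX = (rX + (r1X + r1Width)) / 2;

        // Offset from the center of an assumed 320-pixel-wide frame;
        // steering could be driven from the sign and magnitude of this.
        int camWidth = 320;
        int offset = centerX - camWidth / 2;

        System.out.println(centerX + " " + offset);
    }
}
```

With these numbers, centerX comes out to 170 and the robot would need to turn slightly toward positive x (offset 10) to center the goal.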