#1
I've made a few threads here on ChiefDelphi about errors I've faced with my vision code and have received no replies. I've used the knowledge I have to fix them as best I can, but I feel like my code is sloppy and unoptimized. I fear that we will run into bandwidth issues, or that the vision processing will be slow with my current methods of vision tracking. I am the only programmer on my team, and we have no programming mentors, so I am in dire need of help. I understand that a reply to this might take a while, so kudos to whoever does!

I will run through my code first, then state some questions at the end. We are going to be using vision processing for gear placement and for the high goals. That being said, I thought it would be smart to use two separate threads for the vision processing (we have a camera for each target). Code:
@Override
public void robotInit() {
    /*
     * Lock objects used to share vision results between threads.
     */
    imgLockGoal = new Object();
    imgLockGear = new Object();
    /*
     * Image grabbed by a CvSink, annotated, and output through a CvSource.
     */
    image = new Mat();
    /*
     * Camera used to track vision targets on the boiler.
     */
    cam0 = new UsbCamera("cam0", 0);
    cam0.setResolution(camWidth, camHeight);
    cam0.setFPS(15);
    /*
     * Camera used to track vision targets on the airship.
     */
    cam1 = new UsbCamera("cam1", 1);
    cam1.setResolution(camWidth, camHeight);
    cam1.setFPS(15);
    /*
     * CvSink used to grab the image that is annotated and sent to the CvSource.
     */
    selectedVid = CameraServer.getInstance().getVideo(cam0);
    /*
     * CvSource used to output the processed image to the SmartDashboard
     * (CameraServer Stream Viewer).
     */
    outputStream = CameraServer.getInstance().putVideo("Tracking", camWidth, camHeight);
    /*
     * Vision thread that uses the high-goal contour filtering to find the
     * best targets and help lead the robot to the target destination.
     */
    visionThreadHighGoal = new VisionThread(cam0, pipeline, pipeline -> {
        while (!visionThreadHighGoal.isInterrupted()) {
            if (whichCam) {
                selectedVid.grabFrame(image);
                if (pipeline.filterContoursOutput().size() >= 2) {
                    // isTargetFound = true;
                    Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
                    Rect r1 = Imgproc.boundingRect(pipeline.filterContoursOutput().get(1));
                    Imgproc.rectangle(image, new Point(r.x, r.y),
                            new Point(r.x + r.width, r.y + r.height), new Scalar(0, 0, 255), 2);
                    Imgproc.rectangle(image, new Point(r1.x, r1.y),
                            new Point(r1.x + r1.width, r1.y + r1.height), new Scalar(0, 0, 255), 2);
                    outputStream.putFrame(image);
                    synchronized (imgLockGoal) {
                        centerX = (r.x + (r1.x + r1.width)) / 2;
                        width = (r.x + r1.x) / 2;
                    }
                } else {
                    synchronized (imgLockGoal) {
                        // isTargetFound = false;
                    }
                    outputStream.putFrame(image);
                }
            }
            // Sleep inside the loop so an idle thread does not spin at full CPU.
            // (In the original, a misplaced brace put this sleep after the loop.)
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    visionThreadHighGoal.start();
    // TODO: change filters to be specific to the gear targets.
    /*
     * Vision thread that uses the gear contour filtering to find the best
     * targets and help lead the robot to the target destination.
     */
    visionThreadGear = new VisionThread(cam1, pipeline, pipeline -> {
        while (!visionThreadGear.isInterrupted()) {
            if (!whichCam) {
                // NOTE: selectedVid is attached to cam0; to overlay on the gear
                // camera, this should grab from a sink attached to cam1.
                selectedVid.grabFrame(image);
                if (pipeline.filterContoursOutput().size() >= 2) {
                    // isTargetFound = true;
                    Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
                    Rect r1 = Imgproc.boundingRect(pipeline.filterContoursOutput().get(1));
                    Imgproc.rectangle(image, new Point(r.x, r.y),
                            new Point(r.x + r.width, r.y + r.height), new Scalar(0, 0, 255), 2);
                    Imgproc.rectangle(image, new Point(r1.x, r1.y),
                            new Point(r1.x + r1.width, r1.y + r1.height), new Scalar(0, 0, 255), 2);
                    synchronized (imgLockGear) {
                        // TODO: publish the gear target measurements here.
                    }
                    outputStream.putFrame(image);
                } else {
                    synchronized (imgLockGear) {
                        // isTargetFound = false;
                    }
                    outputStream.putFrame(image);
                }
            }
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    visionThreadGear.start();
}
The idea behind this is that I can set a boolean (whichCam) true or false. Both of the threads are running at the same time, but the vision code only runs on one at a time. I see that most people are using Raspberry Pis to process vision (I would not know where to start). Here are my questions:

1) Will I run into performance issues only using the roboRIO?
2) Should I use a coprocessor? (We own a Jetson TK1, but it seems like too much.)
3) Where would I start with this? Can I use Java?
4) If it's okay for me to stick with vision processing on only the roboRIO, is there a better method for me to do this?

Thank you all so much for reading. I'd love to read all the replies! The complete code can be viewed here: https://github.com/Lesafian/Nick-s-Truck-SkrtSkrt

Last edited by Lesafian : 05-02-2017 at 21:46.
#2
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
1. Depends on the resolution you're processing at, the implementation of your pipeline, and your definition of "issues". Yes, it will be slow.
2. Do you care about getting high processing fps? If yes, use a coprocessor.
3. On the roboRIO, you can use Java for image processing (with OpenCV).
4. It really depends on what you need. Last year we got sub-5 fps image processing onboard with NI Vision, but that was fine for working with single frames (i.e., calculating an angle and then using a gyro to turn, as opposed to a full vision-based closed loop). So, are you OK with working in single frames, or do you need a vision closed loop?
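The single-frame approach above can be sketched in a few lines: take one angle measurement from the camera, convert it into an absolute gyro heading, and then let a gyro-based turn close the loop. This is a minimal sketch; all class and method names here are hypothetical, not from the poster's code.

```java
// Single-frame aiming sketch: one vision measurement becomes one gyro target.
public class SingleFrameAim {

    /** Wrap any angle into the range [-180, 180) degrees. */
    public static double wrapDegrees(double angle) {
        double a = angle % 360.0;
        if (a >= 180.0) a -= 360.0;
        if (a < -180.0) a += 360.0;
        return a;
    }

    /**
     * Absolute heading to turn to, given the current gyro reading and the
     * target offset measured from a single camera frame.
     */
    public static double targetHeading(double gyroDegrees, double visionOffsetDegrees) {
        return wrapDegrees(gyroDegrees + visionOffsetDegrees);
    }

    public static void main(String[] args) {
        // Robot facing 170 deg, target 25 deg to the right: wraps to -165 deg.
        System.out.println(targetHeading(170.0, 25.0));
    }
}
```

Once the target heading is latched, the turn itself only needs the gyro, so camera lag and blur no longer matter during the motion.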
#3
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
#4
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
My bad. Last edited by Lesafian : 06-02-2017 at 09:04. |
#5
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
2) I'm not exactly sure what you mean by the implementation of my pipeline. I generated the code via GRIP and used the VisionThread class to implement it in Robot.java (as seen in the pasted code). When I say issues, I mean I have run into "too many simultaneous streams"; also, if I try to interrupt and start a thread in teleop I get an error, so I need to run both threads at the same time, which in my case leads to fps drops.

3) I currently have my vision code working (well, I can draw rectangles on the contours and get variables; I have yet to do anything with them).

4) I suppose it's not really about what I need; anything is fine with me, I just need it to work efficiently, haha. To be clear, I would like it to run without the roboRIO running out of resources or hitting the bandwidth cap, while still getting to destinations quickly. I have it working to the point where the feed to the SmartDashboard runs at 15 fps without issues (although the second camera seems to feed at 8 fps, but I think that's because I had the delay in the while loop instead of the else).

I've been looking around and have seen that turning to the correct angle with a gyro based on one frame is the way to go. I'd love to try that; do you have any example code I could look at? Thank you so much for the reply!

Last edited by Lesafian : 06-02-2017 at 09:09.
#6
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
> 1) Will I run into performance issues only using the roboRio?

Yes. Especially with 2 cameras.

> 2) Should I use a coprocessor? (We own a Jetson TK1, but it seems like too much).

IMHO: Get the code you have now working. When you have time, try to figure out the coprocessor.

> 3) Where would I start with this? Can I use java?

Whatever language you know best. We are running into array-math performance problems with Java (much slower than with C++), but it's too late for us to switch now.

> 4) If it's okay for me to stick with vision processing on only the roboRio, is there a better method for me to do this?

"Method"? In order to do vision, you need to spawn a parallel process. You will not be able to do it within "teleopPeriodic"; it takes too long. Options 1 and 2 do not use up the 7 Mbps Wi-Fi bandwidth. Option 1 may not be fast enough. Option 2 you have no experience with. Option 3 is not as hard as Option 2, but it does have some of the same problems (how do you get info to/from the roboRIO?).

Question: What are you trying to accomplish? You talk about "being able to get to destinations quickly", and then about a "feed to the smartdashboard". Are you concerned about "bandwidth" (the 7 Mbps Wi-Fi limitation) or "CPU utilization"? A camera processed on the roboRIO uses CPU and no bandwidth. Displaying an image on the SmartDashboard uses bandwidth and minimal CPU.

If all you want to do is display the camera feed with a target drawn on it, then I suggest just displaying the camera feed, putting a piece of plastic on your driver station screen, and drawing on the plastic where you want the driver to place the target. All that takes is Wi-Fi bandwidth and minimal CPU.

If you want to use vision to drive the robot to the desired location, that is much more difficult. Not only do you have to figure out where you want to go, but you have to figure out how to drive the robot there.

FYI: When you run both threads simultaneously, have a flag that each process checks to see which one is "active". If it is not the active one, it returns (ends) without doing anything.

Last edited by rich2202 : 06-02-2017 at 10:21.
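The "active flag" idea in the FYI above can be sketched without any WPILib dependencies. Both loops keep running, but the body only does work when its flag says it is the active pipeline; the frame processing itself is stubbed out with a counter, and all names are hypothetical.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a vision loop guarded by an "active" flag.
public class ActiveFlagLoop {
    private final AtomicBoolean active = new AtomicBoolean(false);
    public final AtomicInteger framesProcessed = new AtomicInteger(0);

    public void setActive(boolean isActive) {
        active.set(isActive);
    }

    /** One iteration of the vision loop body. Returns true if work was done. */
    public boolean tick() {
        if (!active.get()) {
            return false; // not the active pipeline: skip the expensive work
        }
        framesProcessed.incrementAndGet(); // stand-in for grab/process/publish
        return true;
    }
}
```

Using AtomicBoolean (or a volatile field) matters here because the flag is written from the teleop thread and read from the vision thread.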
#7
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
You should then drive to the peg with pre-determined motor commands. Encoders really help this year (last year, not so much, because of slip). Once you are supposed to be on a straight path to the peg, you can then use vision to fine-tune. You can get fancy and use the gyro (along with PID); however, a few pre-determined motor commands may be faster. Let's say you find yourself 3 degrees to the left at a distance of 10 feet; then giving the right motor an extra 10% power for 1000 encoder clicks may put you back on path. Then: either ram into the peg/wall (the encoders stop counting), or use an ultrasonic sensor to determine when you are close.

Last edited by rich2202 : 06-02-2017 at 10:50.
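Rather than guessing at "1000 encoder clicks", the correction can be estimated from geometry: if only one side is driven faster, the extra distance that side travels to rotate the chassis by the error angle (pivoting about the slower side, slip and mecanum effects ignored) is roughly trackWidth times the error in radians. This is a hedged sketch; every number and name below is an illustrative assumption, not a measured robot constant.

```java
// Heading error -> extra encoder ticks for the boosted side (rough estimate).
public class HeadingCorrection {

    /**
     * @param errorDegrees heading error to remove
     * @param trackWidthIn distance between left and right wheels, inches
     * @param wheelDiamIn  wheel diameter, inches
     * @param ticksPerRev  encoder ticks per wheel revolution
     */
    public static long extraTicks(double errorDegrees, double trackWidthIn,
                                  double wheelDiamIn, int ticksPerRev) {
        // Arc the boosted wheel must travel to rotate the chassis.
        double arcIn = trackWidthIn * Math.toRadians(errorDegrees);
        double wheelCircumIn = Math.PI * wheelDiamIn;
        return Math.round(arcIn / wheelCircumIn * ticksPerRev);
    }

    public static void main(String[] args) {
        // 3 degrees off, 24" track width, 6" wheels, 360 ticks/rev -> 24 ticks.
        System.out.println(extraTicks(3.0, 24.0, 6.0, 360));
    }
}
```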
#8
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
What I meant by using the gyro to turn the robot is: let's say I want the centerX of one of the contours to be at the middle pixel of the camera feed, and the centerX value is actually 100 pixels to the left of the center pixel. I would then take the current heading of the gyro and use math to find the angle that I need to turn to. Would this work? My only question about this method is how to convert the distance between centerX and imageCenter into a degree of rotation; is there an equation for this that I can look at?
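There is a standard equation for this, assuming a pinhole camera model and a known horizontal field of view: compute the focal length in pixels from the FOV, then the offset angle is atan(pixelOffset / focalLengthPx). This is a sketch; the 60-degree FOV used in the comments is an assumption, so check your camera's datasheet for the real value.

```java
// Pixel offset -> turn angle, via the pinhole camera model.
public class PixelToAngle {

    /** Focal length in pixels for a given image width and horizontal FOV. */
    public static double focalLengthPx(int imageWidthPx, double hfovDegrees) {
        return (imageWidthPx / 2.0) / Math.tan(Math.toRadians(hfovDegrees / 2.0));
    }

    /**
     * Degrees to turn so that centerX lands on the image center.
     * A positive result means the target is to the right of center.
     */
    public static double offsetDegrees(double centerX, int imageWidthPx, double hfovDegrees) {
        double offsetPx = centerX - imageWidthPx / 2.0;
        return Math.toDegrees(Math.atan(offsetPx / focalLengthPx(imageWidthPx, hfovDegrees)));
    }
}
```

A useful sanity check: a target at the very edge of the image should come out at exactly half the FOV.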
#9
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
I just saw that you are using mecanum wheels. Those wheels slip, and encoders only give you an approximation, so you will have to use gyro/accelerometer control. The gyro will tell you which direction you are facing. The accelerometer will tell you which direction you are accelerating. You then accumulate the acceleration to determine velocity, and accumulate velocity to determine distance. Based upon the direction you are facing, and the direction you want to go, you then send the appropriate commands to the drive motors.

Note: You will want to "overshoot" so you end up normal to the wall when approaching the peg. Coming at the peg at an angle (which mecanum will allow you to do) is not optimal. So, if you are 3 degrees off, drive as if you are 6 degrees off until you are 0 degrees off. Then drive straight.
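The accumulate-acceleration idea above is just Euler integration at a fixed timestep. A minimal sketch follows; fair warning, double-integrating a real accelerometer drifts quickly, so treat it as a short-distance estimate only. Names are hypothetical.

```java
// Dead reckoning sketch: integrate acceleration -> velocity -> distance.
public class DeadReckoning {
    private double velocity = 0.0; // m/s
    private double distance = 0.0; // m

    /** Feed one acceleration sample (m/s^2) taken dt seconds after the last. */
    public void update(double accel, double dt) {
        velocity += accel * dt;    // accumulate acceleration into velocity
        distance += velocity * dt; // accumulate velocity into distance
    }

    public double getVelocity() {
        return velocity;
    }

    public double getDistance() {
        return distance;
    }
}
```

In practice, update() would be called from a fixed-rate loop with the robot-frame acceleration, reset to zero at the start of each move.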
#10
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
What I'm trying to accomplish is to be able to switch between 2 vision algorithms, and to send a 320x240 @ 8 fps stream from the roboRIO to the SmartDashboard (one at a time) on top of the rest of my robot program, without running out of resources or crashing the roboRIO. If none of those problems occur, I should be fine. I really only need to see where the tape is, turn to the center point of the tape, and get to the correct distance from the tape. You state that it would be very difficult to drive the robot based on vision; why is that? I'm pretty sure I have that all figured out, and it should work alright. I just want to make sure that everything runs smoothly and we don't get resource errors, or other errors for that matter.

That is what I'm doing. When I hit a button on the joystick, it changes the state of a predefined boolean, and the vision algorithms run based on whether the boolean is true or false. To be specific, the high goal algorithm runs if "whichCam", and the gear algorithm runs if "!whichCam". Yes, both threads are running at the same time, but will a thread doing nothing but checking whether a statement is true, with a delay of 10 milliseconds, be resource-heavy on the roboRIO? I have most of everything figured out; I just want to make sure that our robot won't die during competition. The source code link can be viewed in the initial question.

Thank you so much for your help, by the way. I really appreciate it!

Last edited by Lesafian : 06-02-2017 at 10:56.
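One detail worth getting right in the button-flips-a-boolean scheme: flip only on the rising edge of the button, so holding it down for several loop iterations does not toggle repeatedly. A minimal sketch, with hypothetical names (in the real robot code the input would come from a Joystick):

```java
// Toggle a camera-selection boolean on the rising edge of a button.
public class CameraToggle {
    private boolean whichCam = true;  // true = high goal, false = gear
    private boolean lastButton = false;

    /** Call once per teleop loop with the current button state. */
    public void update(boolean buttonPressed) {
        if (buttonPressed && !lastButton) {
            whichCam = !whichCam; // rising edge: flip the selection
        }
        lastButton = buttonPressed;
    }

    public boolean isHighGoalSelected() {
        return whichCam;
    }
}
```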
#11
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
You also know the distance (in pixels) between the center of the vision targets and the center of your image. That tells you another side of the triangle. Using geometry, you can determine the angle you are off. You can either assume: 1) a right triangle (the distance to the center of your vision is the hypotenuse), or 2) an isosceles triangle (the distance to the peg and the distance to the center of your vision are the same). Maybe calculate them both and average.
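One reading of the two triangle assumptions above, sketched in code. This assumes the pixel offset has already been converted to a lateral offset in real units (for example, via the known width of the vision tape); which triangle best matches your camera geometry is a judgment call, and the averaging is exactly the heuristic suggested above, not a derived result.

```java
// Two triangle-based estimates of the off-axis angle, plus their average.
public class OffsetAngle {

    /** Right triangle: distance to the target is the hypotenuse. */
    public static double rightTriangleDeg(double lateralOffset, double distance) {
        return Math.toDegrees(Math.asin(lateralOffset / distance));
    }

    /** Isosceles triangle: two equal sides of length distance, base = offset. */
    public static double isoscelesDeg(double lateralOffset, double distance) {
        return Math.toDegrees(2.0 * Math.asin(lateralOffset / (2.0 * distance)));
    }

    public static double averagedDeg(double lateralOffset, double distance) {
        return (rightTriangleDeg(lateralOffset, distance)
                + isoscelesDeg(lateralOffset, distance)) / 2.0;
    }
}
```

For small angles the two estimates are nearly identical, so the averaging mostly matters up close.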
#12
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
Also, when I said "better method" for using only the roboRIO, I meant: are there ways I can optimize my code, such as not using the VisionThread class and instead making a single thread that can switch between algorithms, etc.?
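The single-thread alternative mentioned above can be sketched as one loop that picks which pipeline to run each iteration, instead of two parked VisionThreads. The Pipeline interface and names below are hypothetical stand-ins for the two GRIP-generated pipeline classes; a real version would call the selected pipeline's process method on the grabbed Mat.

```java
// One thread, two pipelines: select which one runs each iteration.
public class PipelineSwitcher {

    /** Stand-in for a GRIP-generated pipeline. */
    public interface Pipeline {
        String name();
    }

    private final Pipeline highGoal;
    private final Pipeline gear;
    private volatile boolean useHighGoal = true; // written from teleop, read here

    public PipelineSwitcher(Pipeline highGoal, Pipeline gear) {
        this.highGoal = highGoal;
        this.gear = gear;
    }

    public void selectHighGoal(boolean b) {
        useHighGoal = b;
    }

    /** One loop iteration: choose and "run" exactly one pipeline. */
    public String runOnce() {
        Pipeline selected = useHighGoal ? highGoal : gear;
        return selected.name(); // stand-in for selected.process(frame)
    }
}
```

This avoids the idle-thread bookkeeping entirely: there is never a second loop to keep asleep, and only one pipeline's work is ever scheduled.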
#13
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
Driving the robot based on vision is difficult because:

1) Blur is a problem if you take a picture while the robot is moving.
2) It takes a long time (in robot time) to process a picture.
3) By the time you process the picture, the robot has moved, so when you do your math you have to take that into account. If you stop, take a picture, move, rinse and repeat, all that stopping/picture-taking/driving/stopping takes a lot of time.

Regarding 2 camera feeds to the DS: do a search on switching camera feeds.
#14
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
Unfortunately our code last year was in NI Vision, as I said before, but you're free to take a look anyway:
https://github.com/ligerbots/Strongh...ter/src/Vision
https://github.com/ligerbots/Strongh...nSubsystem.cpp

For GRIP/OpenCV, the WPILib ScreenSteps site is a great resource: http://wpilib.screenstepslive.com/s/4485/m/24194
#15
Re: Will the RoboRio handle this? (Upgrade to coprocessor?)
Follow this thread on switching between cameras:
https://www.chiefdelphi.com/forums/s...d.php?t=154806