Go Back   Chief Delphi > Technical > Programming > Java
#1 (05-02-2017, 21:39)
Lesafian
Registered User
AKA: Jeremy Styma
FRC #6077 (Wiking Kujon)
Team Role: Programmer
Join Date: Feb 2016
Rookie Year: 2016
Location: Posen, Michigan
Posts: 38
Will the RoboRio handle this? (Upgrade to coprocessor?)

I've made a few threads here on ChiefDelphi about errors I have faced with my vision code and have received no replies. I've used the knowledge I have to fix them as best I can, but I feel like my code is sloppy and unoptimized. I fear that we will run into bandwidth issues, or that the vision processing will be slow, with my current methods of vision tracking. I am the only programmer on my team, and we have no programming mentors, so I am in dire need of help. I understand that a reply to this might take a while, so kudos to whoever does!

I will run through my code first, then state some questions at the end.

We are going to be using vision processing for gear placement and for the high goal. With that in mind, I thought it would be smart to use two separate threads for vision processing (we have a camera for each target).

Code:
	@Override
	public void robotInit() {
		/*
		 * Sync vision variables from thread to thread.
		 */
		imgLockGoal = new Object();
		imgLockGear = new Object();

		/*
		 * Image to be processed by a CvSink and output through a
		 * CvSource.
		 */
		image = new Mat();

		/*
		 * Camera used to track vision targets on the boiler.
		 */
		cam0 = new UsbCamera("cam0", 0);
		cam0.setResolution(camWidth, camHeight);
		cam0.setFPS(15);

		/*
		 * Camera used to track vision targets on the airship.
		 */
		cam1 = new UsbCamera("cam1", 1);
		cam1.setResolution(camWidth, camHeight);
		cam1.setFPS(15);

		/*
		 * CvSink used to grab and process the image used to output to the
		 * CvSource
		 */
		selectedVid = CameraServer.getInstance().getVideo(cam0);

		/*
		 * CvSource used to output the processed image onto the SmartDashboard
		 * (CameraServer Stream Viewer).
		 */
		outputStream = CameraServer.getInstance().putVideo("Tracking", camWidth, camHeight);

		/*
		 * Vision Thread uses the high goal contour filtering to find the best
		 * targets and help lead the robot to the target destination.
		 */
		visionThreadHighGoal = new VisionThread(cam0, pipeline, pipeline -> {
			while (!visionThreadHighGoal.isInterrupted()) {
				if (whichCam) {
					selectedVid.grabFrame(image);
					if (pipeline.filterContoursOutput().size() >= 2) {
						// isTargetFound = true;
						Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
						Rect r1 = Imgproc.boundingRect(pipeline.filterContoursOutput().get(1));
						Imgproc.rectangle(image, new Point(r.x, r.y), new Point(r.x + r.width, r.y + r.height),
								new Scalar(0, 0, 255), 2);
						Imgproc.rectangle(image, new Point(r1.x, r1.y), new Point(r1.x + r1.width, r1.y + r1.height),
								new Scalar(0, 0, 255), 2);
						outputStream.putFrame(image);
						synchronized (imgLockGoal) {
							centerX = (r.x + (r1.x + r1.width)) / 2;
							width = (r.x + r1.x) / 2;
						}
					} else {
						synchronized (imgLockGoal) {
							// isTargetFound = false;
						}
						outputStream.putFrame(image);
					}
				}
			}
			try {
				Thread.sleep(10);
			} catch (InterruptedException e) {
				e.printStackTrace();
			}
			
		});
		visionThreadHighGoal.start();

		// TODO: change filters to specify for gears
		/*
		 * Vision Thread uses the gear contour filtering to find the best
		 * targets and help lead the robot to the target destination.
		 */

		visionThreadGear = new VisionThread(cam1, pipeline, pipeline -> {
			while (!visionThreadGear.isInterrupted()) {
				if (!whichCam) {
					selectedVid.grabFrame(image);
					if (pipeline.filterContoursOutput().size() >= 2) {
						// isTargetFound = true;
						Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
						Rect r1 = Imgproc.boundingRect(pipeline.filterContoursOutput().get(1));
						Imgproc.rectangle(image, new Point(r.x, r.y), new Point(r.x + r.width, r.y + r.height),
								new Scalar(0, 0, 255), 2);
						Imgproc.rectangle(image, new Point(r1.x, r1.y), new Point(r1.x + r1.width, r1.y + r1.height),
								new Scalar(0, 0, 255), 2);
						synchronized (imgLockGear) {
							// TODO:
						}
						outputStream.putFrame(image);
					} else {
						synchronized (imgLockGear) {
							// isTargetFound = false;
						}
						outputStream.putFrame(image);
					}
				}
			}
			try {
				Thread.sleep(10);
			} catch (InterruptedException e) {
				e.printStackTrace();
			}
		});
		visionThreadGear.start();
I tried using one thread and switching between the two cameras, but it seems impossible, since the thread is constructed with a specific camera.


The idea behind this is that I can set a boolean true or false (whichCam). Both of the threads are running at the same time, but the vision code only runs on one at a time.

I see that most people are using Raspberry Pis to process vision (I would not know where to start).

Here are my questions.

1) Will I run into performance issues only using the roboRio?

2) Should I use a coprocessor? (We own a Jetson TK1, but it seems like too much).

3) Where would I start with this? Can I use java?

4) If it's okay for me to stick with vision processing on only the roboRio, is there a better method for me to do this?

Thank you all so much for reading. I'd love to read all the replies!

The complete code can be viewed here!
https://github.com/Lesafian/Nick-s-Truck-SkrtSkrt

Last edited by Lesafian : 05-02-2017 at 21:46.
#2 (06-02-2017, 03:09)
euhlmann
CTO, Programmer
AKA: Erik Uhlmann
FRC #2877 (LigerBots)
Team Role: Leadership
Join Date: Dec 2015
Rookie Year: 2015
Location: United States
Posts: 408

1. It depends on the resolution you're processing at, the implementation of your pipeline, and your definition of "issues". Yes, it will be slow.
2. Do you care about getting high processing FPS? If yes, use a coprocessor.
3. On the roboRIO, you can use Java for image processing (with OpenCV).
4. It really depends on what you need. Last year we got sub-5 FPS image processing onboard with NIVision, but it was fine for working in single frames (i.e., calculating an angle and then using a gyro to turn, as opposed to a full vision-based closed loop). So are you OK with working in single frames, or do you need a vision closed loop?
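The single-frame idea in point 4 can be sketched as plain math (a minimal sketch; the 60-degree field of view and every name here are assumptions for illustration, not anything from the poster's code or from WPILib):

```java
// Sketch of "calculate angle from one frame, then turn to it with the gyro".
public class SingleFrameAim {
    // Convert a horizontal pixel offset from image center into a camera-relative
    // angle, using the camera's horizontal field of view (hypothetical value).
    static double pixelOffsetToDegrees(double pixelOffset, int imageWidth, double hfovDegrees) {
        // Focal length in pixels, derived from the FOV: f = (w/2) / tan(hfov/2)
        double focalPx = (imageWidth / 2.0) / Math.tan(Math.toRadians(hfovDegrees / 2.0));
        return Math.toDegrees(Math.atan2(pixelOffset, focalPx));
    }

    // Absolute heading to turn to: current gyro heading plus the camera offset,
    // normalized to [-180, 180).
    static double targetHeading(double gyroHeading, double offsetDegrees) {
        double h = gyroHeading + offsetDegrees;
        return ((h + 180.0) % 360.0 + 360.0) % 360.0 - 180.0;
    }

    public static void main(String[] args) {
        // Target center is 80 px right of image center on a 320-px-wide frame,
        // assuming (hypothetically) a 60-degree horizontal FOV.
        double offset = pixelOffsetToDegrees(80, 320, 60.0);
        System.out.printf("offset = %.2f deg, turn to %.2f deg%n",
                offset, targetHeading(175.0, offset));
    }
}
```

The turn itself would then be a gyro-based turn to `targetHeading`, which needs no further camera frames.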
__________________
Creator of SmartDashboard.js, an extensible nodejs/webkit replacement for SmartDashboard


https://ligerbots.org
#3 (06-02-2017, 08:30)
dvanvoorst
Registered User
FRC #2771 (Code Red)
Team Role: Mentor
Join Date: Jan 2012
Rookie Year: 2012
Location: Grand Rapids, MI
Posts: 73

Quote:
I've made a few threads here on ChiefDelphi about errors I have faced with my vision code and have received no replies.
This isn't a very good way to encourage help, since it's completely untrue. It seemed highly unlikely that you had received no responses, so I checked your posts from the last few weeks. Every one except one was replied to, and you didn't respond back to them with any followup questions. The only post that didn't have a reply was a very wide-open question, which is unlikely to get a response during a busy build season.

The people on this forum are INCREDIBLY responsive to questions and requests for help, as long as you are reasonably specific about what's going on and what help is needed. A better way to start your post might have been: "I've made a few threads here on ChiefDelphi about errors I've faced with my vision code and have received some good feedback and pointers in the right direction, which was much appreciated. Now I have another issue..."
#4 (06-02-2017, 08:54)
Lesafian

Quote:
Originally Posted by dvanvoorst
This isn't a very good way to encourage help since it's completely untrue. It seemed highly unlikely that you had received no responses, so I checked your posts from the last few weeks. Every one except one was replied to - and you didn't respond back to them with any followup questions. The only post that didn't have a reply was a very wide open question - which is unlikely to get a response during a busy build season. The people on this forum are INCREDIBLY responsive to questions and requests for help - as long as you are reasonably specific about what's going on and what help is needed. A better way to start your post might have been: I've made a few threads here on ChiefDelphi about errors I've faced with my vision code and have received some good feedback and pointers in the right direction which was much appreciated. Now I have another issue.....
You, in a way, took that out of context. I was using it to state that I'm unsure about the quality of my code, since I don't really know what I'm doing and fixed my issues to the best of my ability. Two of the three posts I've made about vision tracking have not been answered, and the one that was answered got an "idk, I think" response, which I'm all for, but it did not really answer my question.

My bad.

Last edited by Lesafian : 06-02-2017 at 09:04.
#5 (06-02-2017, 08:59)
Lesafian

Quote:
Originally Posted by euhlmann
1. It depends on the resolution you're processing at, the implementation of your pipeline, and your definition of "issues". Yes, it will be slow.
2. Do you care about getting high processing FPS? If yes, use a coprocessor.
3. On the roboRIO, you can use Java for image processing (with OpenCV).
4. It really depends on what you need. Last year we got sub-5 FPS image processing onboard with NIVision, but it was fine for working in single frames (i.e., calculating an angle and then using a gyro to turn, as opposed to a full vision-based closed loop). So are you OK with working in single frames, or do you need a vision closed loop?
1) I'm processing the images at 320x240.

2) I'm not exactly sure what you mean by the implementation of my pipeline. I generated the code via GRIP and used the VisionThread class to implement it in Robot.java (as seen in the pasted code). When I say issues, I mean I have run into "too many simultaneous streams", or, if I try to interrupt and start a thread in teleop, I get an error, so I instead run both threads at the same time, which in my case leads to FPS drops.

3) I currently have my vision code working (well, I can draw rectangles around the contours and get variables; I have yet to do anything with them).

4) I suppose it's not really about what I need. Anything is fine with me, I just need it to work efficiently, haha. To be clear, I would like it to run without the roboRIO running out of resources or capping the allowed bandwidth, while still getting to destinations quickly. I have it working to the point where the feed to the SmartDashboard is running at 15 FPS without issues (although the second camera seems to feed at 8 FPS, but I think that's because I had the delay in the while loop instead of the else). I've been looking around and have seen that turning to the correct angle with a gyro based on one frame is the way to go. I'd love to try that; do you have any example code I could look at?

Thank you so much for the reply!

Last edited by Lesafian : 06-02-2017 at 09:09.
#6 (06-02-2017, 10:19)
rich2202
Registered User
FRC #2202 (BEAST Robotics)
Team Role: Mentor
Join Date: Jan 2012
Rookie Year: 2012
Location: Wisconsin
Posts: 1,279

> 1) Will I run into performance issues only using the roboRio?

Yes. Especially with 2 cameras.

> 2) Should I use a coprocessor? (We own a Jetson TK1, but it seems
> like too much).

IMHO: Get the code you have now working. When you have time, try to figure out the co-processor.

> 3) Where would I start with this? Can I use java?

Whatever language you know best. We are running into array-math performance problems with Java (much slower than C++), but it's too late to switch now.

> 4) If it's okay for me to stick with vision processing on only the roboRio,
> is there a better method for me to do this?

"method"? In order to do vision, you need to spawn a parallel process. You will not be able to do it within "teleop periodc". It takes too long.

Options 1 and 2 do not use up the 7mbps wifi band with. Option 1 may not be fast enough. Option 2 you have no experience with. Option 3 is not as hard as Option 2, but does have some of the same problems (how do you get info to/from the RoboRio).

Question: What are you trying to accomplish? you talk about "being able to get to destinations quickly", and then "feed to the smartdashboard "

Are you concerned about "bandwidth" (which is the 7mbps wifi limitation), or "cpu utilization"? Camera on the Roborio uses CPU, and no bandwidth. Displaying image on the Smartdashboard uses Bandwidth, and minimal CPU.

If all you want to do is display the camera feed with a target drawn on it, then I suggest just displaying the camera feed, put a piece of plastic on your driver station screen, and draw on the plastic where you want the driver to place the target. All that takes is wifi bandwidth, and minimal CPU.

If you want to use vision to drive the robot to the desired location. That is much more difficult. Not only do you have to figure out where you want to go, but you have to figure out how to drive the robot there.

FYI: When you run both threads simultaneously, have a flag that the process checks to see which one is "active". If it not the active one, it returns (ends) without doing anything.
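The "active flag" idea can be sketched without any WPILib classes (the names and counters here are made up for illustration; in the real code each step would live inside a VisionThread loop with a short sleep):

```java
// Both loops keep running, but each one returns immediately unless it is the
// active one. This is an illustrative stand-in, not the poster's actual code.
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class ActiveFlagDemo {
    // true = high-goal pipeline is active, false = gear pipeline is active
    static final AtomicBoolean highGoalActive = new AtomicBoolean(true);
    static final AtomicInteger highGoalFrames = new AtomicInteger();
    static final AtomicInteger gearFrames = new AtomicInteger();

    // One iteration of the high-goal loop body.
    static void highGoalStep() {
        if (!highGoalActive.get()) {
            return; // not active: do nothing this iteration
        }
        highGoalFrames.incrementAndGet(); // stand-in for grab/process/publish
    }

    // One iteration of the gear loop body.
    static void gearStep() {
        if (highGoalActive.get()) {
            return;
        }
        gearFrames.incrementAndGet();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) { highGoalStep(); gearStep(); }
        highGoalActive.set(false); // driver presses the camera-switch button
        for (int i = 0; i < 3; i++) { highGoalStep(); gearStep(); }
        System.out.println(highGoalFrames.get() + " " + gearFrames.get()); // prints "5 3"
    }
}
```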
Last edited by rich2202 : 06-02-2017 at 10:21.
#7 (06-02-2017, 10:34)
rich2202

Quote:
Originally Posted by Lesafian
I've been looking around and have seen that turning to the correct angle with a gyro based on 1 frame is the way to go.
For autonomous, you should have a selector that tells the robot which position it is starting in (left, middle, right). It will then drive to the corresponding peg. Placement of the robot on the field by the drive team is critical. Know your marks.

You should then drive to the peg using pre-determined motor commands. Encoders really help this year (last year, not so much, because of slip).

Once you are supposed to be on a straight path to the peg, you can use vision to fine-tune.

You can get fancy and use the gyro (along with PID); however, a few pre-determined motor commands may be faster.

Let's say you find yourself 3 degrees to the left at a distance of 10 feet; then giving the right motor an extra 10% power for 1000 encoder clicks may put you back on path.

Then either ram into the peg/wall (the encoders stop counting), or use an ultrasonic sensor to determine when you are close.
Last edited by rich2202 : 06-02-2017 at 10:50.
#8 (06-02-2017, 10:44)
Lesafian

Quote:
Originally Posted by rich2202
For autonomous, you should have a selector that tells the robot which position it is starting in (left, middle, right). It will then drive to the corresponding peg. Placement of the robot on the field by the drive team is critical. Know your marks.

You should then drive to the peg using pre-determined motor commands. Encoders really help this year (last year, not so much, because of slip).

Once you are supposed to be on a straight path to the peg, you can use vision to fine-tune.

You can get fancy and use the gyro (along with PID); however, a few pre-determined motor commands may be faster.

Let's say you find yourself 3 degrees to the left at a distance of 10 feet; then giving the right motor an extra 10% power for 1000 encoder clicks may put you back on path.

Then either ram into the peg/wall (the encoders stop counting), or use an ultrasonic sensor to determine when you are close.
We are not using encoders this year, but we are using a gyro to drive straight, so getting to where we want will not be an issue. What I had in mind was to drive somewhere near the peg, then use the camera feed to line up the middle of the gear grabber in between the 2 pieces of retroreflective tape, then just drive straight forward into the peg. Wouldn't that work just fine?

What I meant by using the gyro to turn the robot is: let's say I want the centerX of one of the contours to be at the middle pixel of the camera feed, and the centerX value is actually 100 pixels to the left of the center pixel. I would then take the current heading from the gyro and use math to find the angle that I need to turn to. Would this work?

My only question about this method is how I would convert the distance between centerX and imageCenter into degrees of rotation; is there an equation for this that I can look at?
#9 (06-02-2017, 10:49)
rich2202

I just saw that you are using Mecanum Wheels. Those wheels slip, and encoders only give you an approximation. So, you will have to use Gyro/accelerometer control. The Gyro will tell you which direction you are facing. The accelerometer will tell you which direction you are accelerating. You then accumulate the acceleration to determine velocity, and accumulate velocity to determine distance. Based upon the direction you are facing, and the direction you want to go, you then send the appropriate commands to the drive motors.

Note: you will want to "overshoot" so you end up normal (square) to the peg when approaching it. Coming at the peg at an angle (which mecanum will allow you to do) is not optimal. So, if you are 3 degrees off, drive as if you are 6 degrees off until you are 0 degrees off, then drive straight.
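The "accumulate acceleration to velocity, and velocity to distance" step can be sketched numerically (a minimal sketch; in practice accelerometer noise and drift make this approach rough over more than a second or two):

```java
// Integrate accelerometer samples into velocity, and velocity into distance,
// with simple Euler accumulation. Illustrative only: real accelerometer data
// is noisy, and the double integration drifts quickly.
public class AccelIntegrator {
    double velocity; // m/s
    double distance; // m

    // Feed one accelerometer sample (m/s^2) taken dt seconds after the last.
    void update(double accel, double dt) {
        velocity += accel * dt;    // accumulate acceleration -> velocity
        distance += velocity * dt; // accumulate velocity -> distance
    }

    public static void main(String[] args) {
        AccelIntegrator integ = new AccelIntegrator();
        // Constant 2 m/s^2 for 1 s in 1000 steps: v ~= 2.000 m/s, d ~= 1.001 m.
        for (int i = 0; i < 1000; i++) {
            integ.update(2.0, 0.001);
        }
        System.out.printf("v=%.3f m/s, d=%.3f m%n", integ.velocity, integ.distance);
    }
}
```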
#10 (06-02-2017, 10:52)
Lesafian

Quote:
Originally Posted by rich2202
> 1) Will I run into performance issues only using the roboRio?

Yes. Especially with 2 cameras.

> 2) Should I use a coprocessor? (We own a Jetson TK1, but it seems
> like too much).

IMHO: Get the code you have now working. When you have time, try to figure out the co-processor.

> 3) Where would I start with this? Can I use java?

Whatever language you know best. We are running into array-math performance problems with Java (much slower than C++), but it's too late to switch now.

> 4) If it's okay for me to stick with vision processing on only the roboRio,
> is there a better method for me to do this?

"method"? In order to do vision, you need to spawn a parallel process. You will not be able to do it within "teleop periodc". It takes too long.

Options 1 and 2 do not use up the 7mbps wifi band with. Option 1 may not be fast enough. Option 2 you have no experience with. Option 3 is not as hard as Option 2, but does have some of the same problems (how do you get info to/from the RoboRio).

Question: What are you trying to accomplish? you talk about "being able to get to destinations quickly", and then "feed to the smartdashboard "

Are you concerned about "bandwidth" (which is the 7mbps wifi limitation), or "cpu utilization"? Camera on the Roborio uses CPU, and no bandwidth. Displaying image on the Smartdashboard uses Bandwidth, and minimal CPU.

If all you want to do is display the camera feed with a target drawn on it, then I suggest just displaying the camera feed, put a piece of plastic on your driver station screen, and draw on the plastic where you want the driver to place the target. All that takes is wifi bandwidth, and minimal CPU.

If you want to use vision to drive the robot to the desired location. That is much more difficult. Not only do you have to figure out where you want to go, but you have to figure out how to drive the robot there.

FYI: When you run both threads simultaneously, have a flag that the process checks to see which one is "active". If it not the active one, it returns (ends) without doing anything.
Thank you for the insight on the performance issues.

What I'm trying to accomplish is to be able to switch between two vision algorithms and send a 320x240 @ 8 FPS stream from the roboRIO to the SmartDashboard (one at a time), on top of the rest of my robot program, without running out of resources or crashing the roboRIO. If none of those problems occur, I should be fine. I really only need to see where the tape is, turn to the center point of the tape, and get to the correct distance from the tape.

You state that it would be very difficult to drive the robot based on vision; why is that? I'm pretty sure I have that all figured out, and it should work all right. I just want to make sure that everything runs smoothly and we don't get resource errors, or other errors for that matter.

That is what I'm doing. When I hit a button on the joystick, it changes the state of a predefined boolean, and the vision algorithms run based on whether the boolean is true or false. To be specific, the high goal algorithm runs if "whichCam", and the gear algorithm runs if "!whichCam". Yes, both threads are running at the same time, but will a thread doing nothing but checking whether a statement is true, with a delay of 10 seconds, be resource heavy on the roboRIO?

I have most of everything figured out; I just want to make sure that our robot won't die during competition.

The source code link can be viewed in the initial question.

Thank you so much for your help by the way, I really appreciate it!

Last edited by Lesafian : 06-02-2017 at 10:56.
#11 (06-02-2017, 10:57)
rich2202

Quote:
Originally Posted by Lesafian
My only question about this method is how would I convert the distance between centerX and imageCenter into a degree of rotation, is there an equation for this that I can look at?
You know how "tall" the vision targets are (in pixels). You also know the distance between the two vision targets. Based upon that, you should be able to tell how far you are from the target. Pixels to inches is based upon your camera, so take some measurements.

You also know the distance (in pixels) between the center of the vision targets, and the center of your image. That tells you another side of the triangle.

Using geometry, you can determine the angle you are off. You can assume either: 1) a right triangle (the distance to the center of your vision is the hypotenuse), or 2) an isosceles triangle (the distance to the peg and the distance to the center of your vision are the same). Maybe calculate them both and average.
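The "pixels to inches" part can be sketched with the pinhole camera model (the field of view and target height here are placeholder values, to be replaced by your own camera measurements):

```java
// Distance estimate from the apparent pixel height of a target of known real
// height. All constants are hypothetical examples, not measured values.
public class TargetDistance {
    // Focal length in pixels for a 320-px-wide image with an assumed 60-degree
    // horizontal field of view: f = (w/2) / tan(hfov/2).
    static final double FOCAL_PX = 160.0 / Math.tan(Math.toRadians(30.0));

    // Pinhole model: realHeight / distance = pixelHeight / focal,
    // so distance = realHeight * focal / pixelHeight.
    static double distanceInches(double targetHeightInches, double targetHeightPx) {
        return targetHeightInches * FOCAL_PX / targetHeightPx;
    }

    public static void main(String[] args) {
        // A (hypothetical) 5-inch-tall strip of tape that shows up 28 px tall:
        System.out.printf("distance ~= %.1f in%n", distanceInches(5.0, 28.0));
    }
}
```

The same focal-length constant also converts the pixel gap between target center and image center into the off-angle, giving the second side of the triangle described above.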
#12 (06-02-2017, 11:06)
Lesafian

Quote:
Originally Posted by rich2202
You know how "tall" the vision targets are (in pixels). You also know the distance between the two vision targets. Based upon that, you should be able to tell how far you are from the target. Pixels to inches is based upon your camera, so take some measurements.

You also know the distance (in pixels) between the center of the vision targets, and the center of your image. That tells you another side of the triangle.

Using geometry, you can determine the angle you are off. You can assume either: 1) a right triangle (the distance to the center of your vision is the hypotenuse), or 2) an isosceles triangle (the distance to the peg and the distance to the center of your vision are the same). Maybe calculate them both and average.
I think that makes sense. I'm only a sophomore in high school, so my math skills are maybe not up to par for this job, but I will try my best.

Also, when I said "better method" when using only the roboRIO, I meant whether there are ways I can optimize my code, such as not using the VisionThread class and making a single thread that can switch between algorithms, etc.
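One possible shape for such a single switching thread, sketched without WPILib (the Pipeline interface and all names here are illustrative, not GRIP's generated API): keep one loop and swap which pipeline object it runs.

```java
// One loop, two swappable pipelines. A String stands in for an OpenCV Mat so
// the sketch stays self-contained.
import java.util.concurrent.atomic.AtomicReference;

public class SwitchablePipeline {
    interface Pipeline {
        String process(String frame); // stand-in for Mat-based processing
    }

    static final Pipeline HIGH_GOAL = frame -> "highGoal(" + frame + ")";
    static final Pipeline GEAR = frame -> "gear(" + frame + ")";

    // The loop reads this each iteration; a joystick button would swap it.
    static final AtomicReference<Pipeline> active = new AtomicReference<>(HIGH_GOAL);

    static String step(String frame) {
        return active.get().process(frame); // whichever pipeline is currently active
    }

    public static void main(String[] args) {
        System.out.println(step("f1"));   // prints "highGoal(f1)"
        active.set(GEAR);                 // driver toggles camera/algorithm
        System.out.println(step("f2"));   // prints "gear(f2)"
    }
}
```

In the real robot code, the single loop would also need to point its frame source at the matching camera when the pipeline is swapped, which is the part the VisionThread constructor does not allow.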
#13 (06-02-2017, 11:17)
rich2202

Quote:
Originally Posted by Lesafian
What I'm trying to accomplish is to be able to switch between 2 vision algorithms
2 "algorithms" or 2 "cameras"? It is my understanding that the Camera Class is only designed for 1 camera. You have to modify it to support 2 cameras, or support the 2nd camera another way.


Quote:
You state that it would be very difficult to drive the robot based on vision, why is that?
You can easily drive based upon one picture. If you want to continuously modify your driving based upon pictures, the problems are:

1) Blur is a problem if you take a picture while the robot is moving.
2) It takes a long time (in robot time) to process a picture.
3) By the time you process the picture, the robot has moved, so when you do your math, you have to take that into account.

If you stop, take a picture, move, rinse and repeat, all that stopping, picturing, and driving takes a lot of time.

Quote:
To be specific, the high goal algorithm runs if "whichCam", and the gear algorithm runs if "!whichCam"
IMHO, if this is your first time doing vision, worry about the gear for autonomous, and let the driver worry about the high goal. Draw a crosshair on your screen (use a plastic transparency over your screen), and let the driver figure out the rest.

Regarding 2 camera feeds to the DS: Do a search on switching camera feeds.

Quote:
Yes, both threads are running at the same time, but will a thread doing nothing but checking if a statement is true with a delay of 10 seconds be resource heavy on the roboRio?
You want a delay of 100 ms, not 10 seconds. A process that only checks 10 times a second does not use up a material amount of CPU time.

Quote:
I have most of everything figured out, I just want to make sure that our robot wont die during competition.
Have a switch to kill the vision task (set both to null processing). If you do run into problems, you can disable it.

Quote:
The source code link can be viewed in the initial question
I know enough to be dangerous, but not enough to be helpful. I can tell the students what to do at a high level. When there is a specific problem, I can walk through the code with them and help them find the logic problem. Unfortunately, I don't know the syntax and function calls well enough to do it without the student.
#14 (06-02-2017, 11:21)
euhlmann

Quote:
Originally Posted by Lesafian
do you have any example code I could look at?
Unfortunately, our code last year used NIVision, as I said before, but you're free to take a look anyway:
https://github.com/ligerbots/Strongh...ter/src/Vision
https://github.com/ligerbots/Strongh...nSubsystem.cpp

For GRIP/OpenCV, the WPILib ScreenSteps site is a great resource:
http://wpilib.screenstepslive.com/s/4485/m/24194
#15 (06-02-2017, 11:42)
rich2202

Follow this thread on switching between cameras:

https://www.chiefdelphi.com/forums/s...d.php?t=154806