Vision tracking Java questions

My team attempted to use vision tracking this last year for FIRST Steamworks. We had some basic success with a Raspberry Pi, but it wasn't working to the quality of the other teams this last year. We worked on it for most of the season but always ran into issues with it not working.
We are in the process of exploring the use of an Android phone, but we could always use the tips and tricks/other approaches you guys have used to become successful at vision tracking.
Does anyone have any tips to help us get our vision tracking working?
-Zach

Mind elaborating what you mean by “wasn't working to the quality of the other teams this last year?” Do you mean you were having trouble detecting the goal? Filtering out other light? Aligning to it?

Android phones are nice because of the packaging, but if you can’t get vision tracking working reliably on a platform like a Pi, it might not be wise to immediately move to a more complex platform (in terms of what you have to do to get it programmed and communicating with the Rio).

I’d guess you’re using Java OpenCV and NetworkTables?
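For reference, here is a minimal sketch of what the coprocessor side of that usually looks like, assuming the ntcore Java client (the names follow the 2018+ WPILib API; the 2017-era API was slightly different) and a hypothetical “vision” table with made-up entry names:

	import edu.wpi.first.networktables.NetworkTable;
	import edu.wpi.first.networktables.NetworkTableInstance;

	public class VisionPublisher {
		public static void main(String[] args) {
			// Connect to the robot's NetworkTables server as a client.
			NetworkTableInstance inst = NetworkTableInstance.getDefault();
			inst.startClientTeam(1234); // hypothetical team number

			NetworkTable table = inst.getTable("vision"); // hypothetical table name

			// In a real pipeline these values come from your contour detection.
			double centerX = 160.0; // placeholder
			table.getEntry("centerX").setDouble(centerX);
			table.getEntry("targetFound").setBoolean(true);
			inst.flush(); // push the update now instead of waiting for the periodic flush
		}
	}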

We had some success with the Pi, it was more just unreliable. It worked (as far as filtering out light, for the most part, and alignment), but it seemed like it just wasn't picking the target up as fast, or it would see the goal, attempt to move, and then all of a sudden see something else and start following it. We just want to see if there is an easier way, or a good video/document to help us see where we went wrong.
-Zach

…then all of a sudden see something else…

Android, Pi, Kangaroo, etc. can all work great because they offer faster processing rates than a roboRIO.

GRIP’s generated-code approach can work too, because it avoids the additional networking and the error handling that NetworkTables or similar approaches require.

But with any approach you need to be able to verify that your contour detection is accurate. The following snippet, as one example, lets your contours ‘pop’ on an otherwise ordinary video stream.


		// Assumed OpenCV Java imports: org.opencv.core.MatOfPoint, org.opencv.core.Point,
		// org.opencv.core.Rect, org.opencv.imgproc.Imgproc, and java.util.ArrayList.
		// WHITE is a Scalar constant, e.g. new Scalar(255, 255, 255).

		pipeline.process(image);
		ArrayList<MatOfPoint> contours = pipeline.filterContoursOutput();

		// autonomousPeriodic or a vision Command's execute() can synchronize on
		// this same 'rectangles' list, then calculate distances and set speeds.
		synchronized (rectangles) {
			rectangles.clear(); // keep the same instance (it is the lock), just reset it

			for (int i = 0; i < contours.size(); i++) {
				Rect r = Imgproc.boundingRect(contours.get(i));
				rectangles.add(r);

				// Draw the bounding box onto the frame so the contour 'pops'.
				Imgproc.rectangle(image,
						new Point(r.x, r.y),
						new Point(r.x + r.width, r.y + r.height),
						WHITE, 5);
			} // end for loop
		} // end synchronized block
		// then 'image' needs to be published to a stream...
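To finish that last comment off: one way to publish the annotated frame is WPILib’s CameraServer, sketched below (class and package names vary by WPILib year; this follows the 2017-era API, and a coprocessor would instead use cscore’s CvSource/MjpegServer directly):

		import edu.wpi.cscore.CvSource;
		import edu.wpi.first.wpilibj.CameraServer;

		// Set up once, e.g. in robotInit(): a 320x240 stream viewable in the dashboard.
		CvSource outputStream = CameraServer.getInstance().putVideo("Annotated", 320, 240);

		// Then, each time you finish drawing rectangles on a frame:
		outputStream.putFrame(image); // 'image' is the Mat with the boxes drawn on it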

I made a document describing how our vision processing works. We are now in the “tuning” stage.

https://docs.google.com/document/d/e/2PACX-1vRm4V3sHp87a48S9p90ZPSq-fbY-Hvgp3VBSck1lfaN0cq-RmPDgCw9-r_A-B7o-HFgFes7VY9JPSzq/pub

Sounds like your first problem is a filtering problem. I recommend getting the brightest LEDs you can, and then, if you continue to have problems, retuning your HSV values at each individual field, as the lighting is often different in different buildings. What is your process for determining your color range?
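If it helps as a concrete starting point, the filtering step in Java OpenCV is just a convert-and-threshold; a minimal sketch (the HSV bounds below are placeholders for a green LED ring, to be retuned per venue):

	import org.opencv.core.Core;
	import org.opencv.core.Mat;
	import org.opencv.core.Scalar;
	import org.opencv.imgproc.Imgproc;

	// Convert the camera frame from BGR to HSV, then keep only pixels in range.
	Mat hsv = new Mat();
	Imgproc.cvtColor(image, hsv, Imgproc.COLOR_BGR2HSV);

	// Placeholder bounds for a green LED ring; retune hue/sat/value at each field.
	Scalar lower = new Scalar(50, 100, 100);
	Scalar upper = new Scalar(90, 255, 255);

	Mat mask = new Mat();
	Core.inRange(hsv, lower, upper, mask); // white wherever a pixel passes the filter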
