View Poll Results: What did you use for vision tracking?
GRIP on roboRIO - IP camera: 3 (2.07%)
GRIP on roboRIO - USB camera: 9 (6.21%)
GRIP on laptop - IP camera: 19 (13.10%)
GRIP on laptop - USB camera: 6 (4.14%)
GRIP on Raspberry Pi - IP camera: 5 (3.45%)
GRIP on Raspberry Pi - USB camera: 13 (8.97%)
RoboRealm - IP camera: 6 (4.14%)
RoboRealm - USB camera: 7 (4.83%)
Other - please elaborate with a response: 77 (53.10%)
Voters: 145

#31
02-05-2016, 15:18
JamesBrown
Back after 4 years off
FRC #5279
Team Role: Engineer
 
Join Date: Nov 2004
Rookie Year: 2005
Location: Lynchburg VA
Posts: 1,260
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by MamaSpoldi
The PixyCam is an excellent way to provide auto-targeting without significant impact to the code on the roboRIO or requiring sophisticated integration of additional software.
This is great; our students have vision on their list of skills to add this offseason. Our programming students have fairly limited experience, so I was looking for a way to incorporate vision that would be easy enough for them to grasp quickly, as we have a lot to work on.

PixyCam looks like a great option.
__________________
I'm Back


5279 (2015-Present)
3594 (2011)
3280 (2010)
1665 (2009)
1350 (2008-2009)
1493 (2007-2008)
1568 (2005-2007)
#32
02-05-2016, 15:31
DinerKid
Registered User
AKA: Zac
FRC #1768 (Nashoba Robotics)
Team Role: Mentor
 
Join Date: Nov 2009
Rookie Year: 2009
Location: MA
Posts: 73
Re: What Did you use for Vision Tracking?

1768 began the season using OpenCV on a Jetson TK1; we later switched to a Nexus 5X, which was desirable for its all-in-one packaging (camera and processor together, which made taking it off the robot for testing between events easy) and because our programmers felt it would be simpler to communicate between the roboRIO and the Nexus.

The Nexus was used to measure distance and angle to the target; this information was then sent to the roboRIO. Nested PID loops then used the navX MXP gyro data to align the robot to the target. Images taken during the auto-aligning process were used to adjust the turn set point. After two consecutive images returned an angle to the target of less than 0.5 degrees, new images were no longer used to adjust the set point, allowing the PID to hold a position rather than bounce between slightly varying set points.
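For anyone who wants to try the same trick, here is a minimal sketch (in Python, not 1768's actual code) of that set-point latching logic. The function name and the absolute-heading set point are assumptions for illustration; gyro_yaw_deg and vision_angle_deg stand in for the navX MXP reading and the Nexus vision result.

Code:
LOCK_THRESHOLD_DEG = 0.5   # stop updating once two frames agree this closely

consecutive_small = 0
locked = False
setpoint = 0.0             # absolute heading the turn PID drives toward

def update_turn_setpoint(gyro_yaw_deg, vision_angle_deg):
    """Fold a new vision measurement into the PID set point unless latched."""
    global consecutive_small, locked, setpoint
    if locked:
        return setpoint                  # hold position; ignore new images
    if abs(vision_angle_deg) < LOCK_THRESHOLD_DEG:
        consecutive_small += 1
        if consecutive_small >= 2:
            locked = True                # two good frames in a row: latch
    else:
        consecutive_small = 0
    # the camera reports an angle relative to the current heading, so the
    # absolute set point is the current yaw plus that error
    setpoint = gyro_yaw_deg + vision_angle_deg
    return setpoint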

~DK
#33
02-05-2016, 16:05
tomy
Registered User
FRC #3038 (I.C.E. Robotics)
Team Role: Mentor
 
Join Date: Jan 2009
Rookie Year: 2009
Location: Stacy, Minnesota
Posts: 490
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by snekiam
The installation takes a long time because you need to build it on the pi, which does take several hours. Which language are you looking to get started on?

It seems like OpenCV is the way to go. Does anyone have a good tutorial resource for OpenCV and vision tracking? I am hoping to put it on a Raspberry Pi.
#34
02-05-2016, 20:00
nighterfighter
1771 Alum, 1771 Mentor
AKA: Matt B
FRC #1771 (1771)
Team Role: Mentor
 
Join Date: Sep 2009
Rookie Year: 2007
Location: Suwanee/Kennesaw, GA
Posts: 835
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by MamaSpoldi
Team 230 also used a PixyCam... with awesome results, and we integrated it in one day. The PixyCam does the image processing on-board, so there is no need to transfer images. You can quickly train it to search for a specific color and report when it sees it. We selected the simplest interface option provided by the Pixy, which involves a single digital output (indicating "I see a target") and a single analog output (which provides feedback for where within the frame the target is located). This allowed us to provide a driver interface (and also autonomous code) that uses the digital output to tell us when the target is in view and then lets the analog value drive the robot rotation to center the goal.

We already had our tracking code written for the Axis camera, so our P loop only had to have a tiny adjustment (instead of a range of 320 pixels, it was 5 volts), so the PixyCam swap was almost zero code change.

We got our PixyCam hooked up and running in a few hours. We only used the analog output; we didn't have time to get the digital output working. So if it never saw a target (output value of around 0.43 volts, I believe), the robot would "track" to the right constantly. But that is easy enough to fix in code... (if the "center" position doesn't update, you aren't actually tracking).

If we had more time we probably would have used I2C or SPI to interface with the camera, in order to get more data.
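To make the "almost zero code change" concrete, here is a rough sketch (assumptions, not 1771's actual code) of a P loop on the Pixy's analog output, including a guard for the no-target voltage mentioned above. read_pixy_voltage() is a hypothetical stand-in for however you sample the analog pin, and the gain and center voltage are made up.

Code:
PIXY_CENTER_V = 2.5    # assumed midpoint voltage: target centered in frame
NO_TARGET_V = 0.43     # value reported with no target, per the post above
KP = 0.3               # proportional gain per volt of error (needs tuning)

def turn_command(read_pixy_voltage):
    v = read_pixy_voltage()
    if abs(v - NO_TARGET_V) < 0.05:
        return 0.0                  # no target: don't "track" off to the right
    error = v - PIXY_CENTER_V       # volts instead of the old 320-pixel range
    return KP * error               # rotation command for the drivetrain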



I know of at least 2 other teams from Georgia who used the PixyCam as well, adding it during or after the DCMP.
__________________
1771- Programmer, Captain, Drive Team (2009-2012)
4509- Mentor (2013-2015)
1771- Mentor (2015)
#35
02-05-2016, 20:28
BrianAtlanta
Registered User
FRC #1261
Team Role: Mentor
 
Join Date: Apr 2014
Rookie Year: 2012
Location: United States
Posts: 67
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by tomy
It seems like OpenCV is the way to go. Does anyone have a good tutorial resource for OpenCV and vision tracking? I am hoping to put it on a Raspberry Pi.
Check out my post on the first page for details, but TL;DR: pyImageSearch is a great place if you want to use Python. As per the 254 vision processing session at worlds, language doesn't really change the performance of OpenCV, since it's C++ under the covers. So pick a language you're comfortable with; pyImageSearch has a good number of tutorials, so we went with Python.

Brian
#36
02-05-2016, 20:39
tomy
Registered User
FRC #3038 (I.C.E. Robotics)
Team Role: Mentor
 
Join Date: Jan 2009
Rookie Year: 2009
Location: Stacy, Minnesota
Posts: 490
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by BrianAtlanta
Check out my post on the first page for details, but TL;DR: pyImageSearch is a great place if you want to use Python. As per the 254 vision processing session at worlds, language doesn't really change the performance of OpenCV, since it's C++ under the covers. So pick a language you're comfortable with; pyImageSearch has a good number of tutorials, so we went with Python.

Brian
What OS do you use? I am struggling on Windows to find a pain-free way of installing OpenCV, and I am wondering if I should just run a Linux-based OS in VirtualBox.
#37
02-05-2016, 20:56
cad321
Jack of all trades, Master of none
AKA: Brian Wagg
FRC #2386 (Trojans)
Team Role: Alumni
 
Join Date: Jan 2013
Rookie Year: 2012
Location: Burlington, Ontario
Posts: 318
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by nighterfighter
We already had our tracking code written for the Axis camera, so our P loop only had to have a tiny adjustment (instead of a range of 320 pixels, it was 5 volts), so the PixyCam swap was almost zero code change.

We got our PixyCam hooked up and running in a few hours. We only used the analog output; we didn't have time to get the digital output working. So if it never saw a target (output value of around 0.43 volts, I believe), the robot would "track" to the right constantly. But that is easy enough to fix in code... (if the "center" position doesn't update, you aren't actually tracking).

If we had more time we probably would have used I2C or SPI to interface with the camera, in order to get more data.



I know of at least 2 other teams from Georgia who used the PixyCam as well, adding it during or after the DCMP.
Do you know if it is possible to take the image that the PixyCam sees and stream it back to the driver station? Perhaps using MJPG Streamer or another method.
#38
02-05-2016, 21:03
nighterfighter
1771 Alum, 1771 Mentor
AKA: Matt B
FRC #1771 (1771)
Team Role: Mentor
 
Join Date: Sep 2009
Rookie Year: 2007
Location: Suwanee/Kennesaw, GA
Posts: 835
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by cad321
Do you know if it is possible to take the image that the PixyCam sees and stream it back to the driver station? Perhaps using MJPG Streamer or another method.
Maybe. It streams the image over USB.

So if you could figure out a way to get the roboRIO to recognize it, you might be able to stream it back.

You might be better off letting the PixyCam do processing, and using the axis camera/USB webcam for driver vision.

Edit: You could probably send back the output of the PixyCam, though, or reconstruct it. You can get the size and position of each object it senses. Send those back to the driver station and have a program draw them on screen for you. Anything it doesn't see is just black. So you would have a 320x240 (or whatever resolution) black box, with green/red/etc. boxes based on what the Pixy is processing. However, that would be a few frames behind what it is currently detecting.
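As a sketch of that reconstruction idea (an illustration, not code any team ran), drawing the Pixy's reported blocks onto a black canvas at the driver station could look like this; the (x, y, w, h) block format is an assumption about what you would forward over the network.

Code:
import numpy as np
import cv2

def render_pixy_blocks(blocks, width=320, height=240):
    """blocks: list of (x, y, w, h) rectangles reported by the Pixy."""
    canvas = np.zeros((height, width, 3), dtype=np.uint8)  # all black
    for (x, y, w, h) in blocks:
        cv2.rectangle(canvas, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return canvas  # display or stream this; it lags the live detections

# example: one detected object near the center of the frame
frame = render_pixy_blocks([(140, 100, 40, 30)])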
__________________
1771- Programmer, Captain, Drive Team (2009-2012)
4509- Mentor (2013-2015)
1771- Mentor (2015)

Last edited by nighterfighter : 02-05-2016 at 21:05.
#39
02-05-2016, 23:09
BrianAtlanta
Registered User
FRC #1261
Team Role: Mentor
 
Join Date: Apr 2014
Rookie Year: 2012
Location: United States
Posts: 67
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by tomy
What OS do you use? I am struggling on Windows to find a pain-free way of installing OpenCV, and I am wondering if I should just run a Linux-based OS in VirtualBox.
We're running Raspbian, Wheezy I think. I've attached the link to the install instructions we used; on that page is a link to the Wheezy-variant instructions. Be aware the steps are a 4-hour process, with the OpenCV compile taking 2 of those 4 hours.

OpenCV Pi Installation Instructions
#40
02-05-2016, 23:17
BrianAtlanta
Registered User
FRC #1261
Team Role: Mentor
 
Join Date: Apr 2014
Rookie Year: 2012
Location: United States
Posts: 67
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by cad321
Do you know if it is possible to take the image that the PixyCam sees and stream it back to the driver station? Perhaps using MJPG Streamer or another method.
We used mjpg-streamer to stream the processed image, with targeting overlaid, back to the driver station. We did run into a race condition with our streamer. When the streamer was set up, the -r option was used, which deletes the image after it's streamed. The problem came when the streamer tried to pick up the next image before OpenCV had written it; the streamer would then crash, usually within the 2-5 minute range. After we removed the -r option, it didn't crash even after an hour of running.

Another note on the streamer: consider not thrashing the SD card when using the Pi. Constantly writing to the SD card shortens the time before corruption. We switched to writing the image to a RAM disk, so nothing hits the SD card, only memory.
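One common way to close that race entirely (a sketch under assumptions, not necessarily what 1261 did) is to write each frame to a temporary file on the RAM disk and then atomically rename it into the path the streamer watches; /mnt/ramdisk is an assumed tmpfs mount point.

Code:
import os
import cv2

RAMDISK = "/mnt/ramdisk"                        # assumed tmpfs mount
FINAL = os.path.join(RAMDISK, "frame.jpg")      # path mjpg-streamer reads
TEMP = os.path.join(RAMDISK, "frame.tmp.jpg")

def publish_frame(image):
    cv2.imwrite(TEMP, image)    # the streamer never sees a half-written file
    os.replace(TEMP, FINAL)     # rename is atomic on the same filesystem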

Brian
#41
03-05-2016, 00:51
Cabey4
Vision/Scouting/Strategy/Too much
AKA: Tom Schwarz
no team
Team Role: Programmer
 
Join Date: May 2015
Rookie Year: 2015
Location: Sydney
Posts: 26
Re: What Did you use for Vision Tracking?

We use a Raspberry Pi and an RPi camera, with the exposure turned way, way down and a truly ridiculous number of green LEDs. We do some image processing with OpenCV (blurring, HSV filtering, etc.), then find contours and filter them against our criteria. Lastly, it communicates the result to the roboRIO through NetworkTables. It's all written in Python (the bestest language).
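A minimal sketch of that pipeline (assumptions throughout, not the team's actual code): the HSV bounds, area cutoff, and server/table names below are made up and would need tuning, and NetworkTables is reached via the pynetworktables package.

Code:
import cv2
import numpy as np
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-9999-frc.local")  # your roboRIO here
table = NetworkTables.getTable("vision")

LOWER = np.array([55, 100, 60])    # green-ish HSV lower bound (tune me)
UPPER = np.array([85, 255, 255])   # green-ish HSV upper bound (tune me)

def process(frame):
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = found[-2]   # index works on both OpenCV 3 and 4 return styles
    # keep only contours big enough to plausibly be the target
    candidates = [c for c in contours if cv2.contourArea(c) > 200]
    if candidates:
        x, y, w, h = cv2.boundingRect(max(candidates, key=cv2.contourArea))
        table.putBoolean("targetFound", True)
        table.putNumber("centerX", x + w / 2.0)
    else:
        table.putBoolean("targetFound", False)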

We spent a lot of time trying to get OpenCV in Java to work and put it on the roboRIO. In the end we went with the Raspberry Pi; we also didn't feel GRIP was reliable enough that we would want to use it on our robot during a competition.
#42
04-05-2016, 22:20
slibert
Software Mentor
AKA: Scott Libert
FRC #2465 (Kauaibots)
Team Role: Mentor
 
Join Date: Oct 2011
Rookie Year: 2005
Location: Kauai, Hawaii
Posts: 334
Re: What Did you use for Vision Tracking?

Kauaibots (team 2465) used the Jetson TK1 with a Logitech C930 webcam (90-degree FOV). The software (C++) used OpenCV; it detected the angle/distance to the tower light stack, the angle to the lights on the edges of the defenses, and the distance/angle to the retro-reflective targets in the high goal.

The video processing algorithm ran at 30 fps on 640x480 images, wrote a compressed copy (.MJPG file) to the SD card for later review, and also wrote a JPEG image to a directory monitored by MJPG-Streamer. The algorithm was designed to switch between two cameras, though we ended up using only one. The operator could optionally overlay the detected-object information on top of the raw video, so the drivers could see what the algorithm was doing.

Communication with the roboRIO was via NetworkTables, including a "ping" process to ensure the video processor was running, commands to the video processor to select the current algorithm and camera source, and detection events communicated back to the roboRIO.
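The "ping" idea is simple to sketch (an assumption about how it might look, not Kauaibots' code): the coprocessor bumps a heartbeat key over NetworkTables, and the roboRIO checks that the value keeps changing.

Code:
import time
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-9999-frc.local")  # placeholder
table = NetworkTables.getTable("vision")

# coprocessor side: bump a counter a few times per second
def heartbeat_loop():
    count = 0
    while True:
        table.putNumber("heartbeat", count)
        count += 1
        time.sleep(0.2)

# roboRIO side: the processor is alive if the counter changed recently
last_value, last_change = None, time.monotonic()

def processor_alive(timeout=1.0):
    global last_value, last_change
    value = table.getNumber("heartbeat", -1)
    if value != last_value:
        last_value, last_change = value, time.monotonic()
    return (time.monotonic() - last_change) < timeout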

***

The latency correction discussed in the presentation at worlds is a great idea. We have a plan for that....

Moving ahead, the plan is to use the navX-MXP's 100 Hz update rate, its dual simultaneous outputs (SPI to the roboRIO, USB to the Jetson), and its high-accuracy timestamp: timestamp the video in the video processor, send that to the roboRIO, and on the roboRIO use the timestamp to locate the matching entry in a time-history buffer of unit quaternions (the quantity from which yaw, pitch, and roll are derived). This approach, very similar to what was described in the presentation at worlds, corrects for latency by accounting for any change in orientation (pitch, roll, and yaw) after the video was acquired but before the roboRIO receives the result from the video processor.
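In sketch form (an illustration of the bookkeeping, not Kauaibots' code), the roboRIO side might keep a rolling history of timestamped orientation samples and look up the one closest to the frame's timestamp; plain yaw stands in here for the full quaternion the post describes.

Code:
from collections import deque

history = deque(maxlen=200)    # ~2 s of samples at the 100 Hz update rate

def record_sample(timestamp, yaw_deg):
    history.append((timestamp, yaw_deg))

def yaw_at(frame_timestamp):
    """Return the recorded yaw closest in time to the camera frame."""
    if not history:
        return 0.0             # no samples yet
    return min(history, key=lambda s: abs(s[0] - frame_timestamp))[1]

def corrected_target_heading(frame_timestamp, vision_angle_deg):
    # the angle was measured relative to the robot's heading at frame time,
    # so the absolute target heading is that historical yaw plus the angle
    return yaw_at(frame_timestamp) + vision_angle_deg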

We're collaborating with another team that has been working on neural-network detection algorithms, and the plan is to post a whitepaper on the results of this promising concept. If you have any questions, please feel free to private message me for details on this effort.
#43
05-05-2016, 10:53
marshall
My pants are louder than yours.
FRC #0900 (The Zebracorns)
Team Role: Mentor
 
Join Date: Jan 2012
Rookie Year: 2003
Location: North Carolina
Posts: 1,193
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by slibert
We're collaborating with another team that has been working on neural-network detection algorithms, and the plan is to post a whitepaper on the results of this promising concept. If you have any questions, please feel free to private message me for details on this effort.
I wonder who that could be?
__________________
"La mejor salsa del mundo es la hambre" - Miguel de Cervantes
"The future is unwritten" - Joe Strummer
"Simplify, then add lightness" - Colin Chapman
#44
05-05-2016, 12:42
rod@3711
Registered User
AKA: rod nelson
FRC #3711 (Iron Mustangs)
Team Role: Mentor
 
Join Date: May 2014
Rookie Year: 2014
Location: Trout Lake, WA
Posts: 64
Re: What Did you use for Vision Tracking?

We used a Logitech Pro 9000-type USB camera connected to the roboRIO and wrote custom C++ code to track the tower.

A short video of our driver station in autonomous is on youtube. https://youtu.be/PRhgljJ9zus

The yellow box is our region of interest. The light blue highlights show detection of bright vertical lines and yellow highlights show detection of bright horizontal lines. The black circle is our guess at the center-bottom of the tower window.

But alas, we only got it working in the last couple of matches. A lot of fun, but it did not help us get to St. Louis.

Our tracking code follows:

Code:
void Robot::trackTower(){

	// Tower Tracking.
	// copy the camera image (frame) into a 2D XY array.  The array is
	// 3D, with the 3rd dimension holding the 8-bit color bytes.
	// Restrict the search to the upper center of the image.
	// In both the horizontal X and vertical Y directions, find occurrences
	// of bright pixels with dark pixels on both sides.  Tally every
	// occurrence in an X and Y histogram.
	// This all assumes a 320x240 image and 4 bytes per pixel (red/green/blue/extra).

	char *arrayP;  // point to 2D array
	arrayP = (char*)imaqImageToArray(frame,IMAQ_NO_RECT,NULL,NULL);
	// not certain how to access the array, so copy it into a local array.
	memcpy (array,arrayP, sizeof(array));

	memset (histoX,0,sizeof(histoX));  // histogram for dark-bright-dark occurrences in X
	memset (histoY,0,sizeof(histoY));  // histogram for dark-bright-dark occurrences in Y
	const int left = 50;  // upper center search window
	const int right = 210;
	const int top = 0;
	const int bottom = 60;
	const int spread=8; // dark-bright-dark must occur within 'spread' (8) pixels
	const int threshold = 25; // bright must be more than 'threshold' (25) above dark

	// look for the bottom horizontal gaffer tape.
	// only look at the green color channel [1].
	// mark each pixel meeting the dark-bright-dark criteria (red on, blue off,
	// green already bright, so it shows yellow) and tally it in the Y histogram.
	for (short col = left; col <= right; col++) {
		for (short row = top+spread; row < bottom; row++) {
			int center = array[row - spread / 2][col][1];
			if (((center - array[row - spread][col][1]) > threshold) &&
				((center - array[row][col][1]) > threshold)) {
				array[row - spread / 2][col][0] = 0;  // blue
		//		array[row - spread / 2][col][1] = 0;
				array[row - spread / 2][col][2] = 255; // red
				array[row - spread / 2][col][3] = 0; // flag

				histoY[row - spread / 2]++;
			}

		}
	}

	// now find the horizontal line by finding the most occurrences.
	int max = 0;
	int maxY =0;  // row number of bottom tape
	for (short row = top+1; row < bottom-1; row++) {
		// use 3 histogram slots
		int sumH = histoY[row-1] + histoY[row] + histoY[row+1];
		if (sumH > max){
			max = sumH;  // found new peak
			maxY = row;
		}
	}

	// now look for vertical tapes.  Only search down to bottom tape maxY
	for (short row = top; row <= maxY; row++) {
		for (short col = left+spread; col < right; col++) {
			int center = array[row][col - spread / 2][1];
			if (((center - array[row][col - spread][1]) > threshold) &&
				((center - array[row][col][1]) > threshold)){
				array[row][col - spread / 2][0] = 255;  // blue
//					array[row][col - spread / 2][1] = 255; // green
				array[row][col - spread / 2][2] = 0;
				array[row][col - spread / 2][3] = 0; // flag
				histoX[col - spread / 2]++;
			}
		}
	}

	// look for the left and right vertical tapes
	int max1 = 0;  // first peak
	int max2 = 0; // second peak
	int maxX1 = 0;
	int maxX2 = 0;
	for (int col=left+1; col<=right-1; col++) {
		// find the biggest peak,  use 3 slots
		int sumH = histoX[col-1] + histoX[col] + histoX[col+1];
		if (sumH > max1){
			max1 = sumH;
			maxX1 = col;
		}
	}

	for (int col=left+1; col<=right-1; col++) {
		// find the 2nd peak
		if (abs(maxX1 - col) < spread)
			continue; // do not look if close to other peak
		int sumH = histoX[col-1] + histoX[col] + histoX[col+1];
		if (sumH > max2){
			max2 = sumH;
			maxX2 = col;
		}
	}

	int maxX = (maxX1 + maxX2) / 2;  // center of the 2 peaks
	if (max2 < 5)    // did not find a good second peak
		maxX = 0;  // no reliable pair; maxX is recomputed from the tape run below



	// scan along the bottom-tape row for the longest run of flagged pixels;
	// the midpoint of that run becomes the horizontal tower center maxX.
	int startIndex = 0;
	int maxLength = 0;
	int maxStart = 0;
	int endIndex = 0;

	for (int col=left; col<=right; col++) {
		int count = 0;
		if (array[maxY][col][3] == 0){
			count++;
		}
		if (array[maxY-1][col][3] == 0){
			count++;
		}
		if (array[maxY+1][col][3] == 0){
			count++;
		}
		if (startIndex > 0){
			if (count < 1) {
				endIndex = col;
				if (maxLength < (endIndex - startIndex)){
					maxLength = (endIndex - startIndex);
					maxStart = startIndex;
				}
				startIndex = 0;
			}

		}else{
			if(count > 1) {
				startIndex = col;
			}
		}


	}


	//SmartDashboard::PutNumber("maxLength", maxLength);

	maxX = maxStart + (maxLength / 2);
	// mark region of interest in yellow
	for (short row = top; row <= bottom; row++) {
		array[row][left][0] = 0;    // blue
		array[row][left][1] = 255;  // green  R+G = yellow
		array[row][left][2] = 255;  // red   R+G = yellow
		array[row][right][0] = 0;   // blue
		array[row][right][1] = 255; // green
		array[row][right][2] = 255; // red
	}


	for (short col = left; col < right; col++) {
		array[top][col][0] = 0;  // blue
		array[top][col][1] = 255; // green
		array[top][col][2] = 255; // red   R+G = yellow
		array[bottom][col][0] = 0;   // blue
		array[bottom][col][1] = 255; // green
		array[bottom][col][2] = 255; // red
	}


/* look at one color
		for (short col = left; col <= right; col++) {
		for (short row = top; row < bottom; row++) {
				array[row][col][0] = 0;  // blue
				array[row][col][1] = 0; // green
		//		array[row][col][2] = 0; // red
		}
	}
*/
	// copy 2D array back into image
	memcpy(arrayP, array, sizeof(array));
	imaqArrayToImage(frame, array, 320, 240);

	//SmartDashboard::PutNumber("a0",array[20][20][0]);  // blue
	//SmartDashboard::PutNumber("a1",array[20][20][1]);  // green
	//SmartDashboard::PutNumber("a2",array[20][20][2]);  // red
	//SmartDashboard::PutNumber("a3",array[20][20][3]);  // not used
	imaqDispose(arrayP);

//       imaqDrawTextOnImage(frame,frame, {10,10},"hi there",NULL,NULL);

    imaqDrawShapeOnImage(frame, frame, { maxY-5, maxX-5, 10, 10 }, DrawMode::IMAQ_DRAW_VALUE, ShapeMode::IMAQ_SHAPE_OVAL, 0);
    Robot::chassis->trackingX = maxX;  // let the world know
    Robot::chassis->trackingY = maxY;  // let the world know
}
#45
05-05-2016, 16:06
slibert
Software Mentor
AKA: Scott Libert
FRC #2465 (Kauaibots)
Team Role: Mentor
 
Join Date: Oct 2011
Rookie Year: 2005
Location: Kauai, Hawaii
Posts: 334
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by marshall
I wonder who that could be?
I won't name any names, but our team's purple Aloha shirts are nowhere near as loud as the clothing this team likes to wear....