Chief Delphi > Technical > Programming > C/C++
#1 - 13-01-2016, 16:27
FRC2501
Registered User
FRC #2501 (Bionic Polars)
Join Date: Jan 2015 | Rookie Year: 2008 | Location: Minnesota | Posts: 52
Vision tracking and related questions

I am wondering what vision tracking options there are and the relative difficulties of each.

Our team programs in C++, and we've never done any vision tracking before.

Then onto our other questions:
1. Can you use multiple cameras on a single robot?

Last edited by FRC2501 : 13-01-2016 at 17:14.
#2 - 13-01-2016, 18:05
Fauge7
Head programmer
FRC #3019 (Firebird Robotics)
Team Role: Programmer
Join Date: Jan 2013 | Rookie Year: 2012 | Location: Scottsdale | Posts: 195
Re: Vision tracking and related questions

Vision tracking options: GRIP (good GUI, easy to use, but still in alpha) and OpenCV (well documented, but more complicated; you have to develop your own algorithm).

You can have more than one camera on the robot; the problem you will run into is the allowed bandwidth limit of 7 Mbps.
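To put that 7 Mbps figure in perspective, here is a back-of-envelope helper for estimating how much of the allowance a camera stream consumes. The frame sizes used in the note below are assumptions, not measured values; MJPEG compression varies a lot with scene content.

```cpp
// Rough MJPEG stream bandwidth in megabits per second.
// kBPerFrame is an assumed average compressed frame size, not a measured one.
double streamMbps(double kBPerFrame, double fps) {
    return kBPerFrame * 1024.0 * 8.0 * fps / 1e6; // kB -> bits -> Mbit/s
}
```

With an assumed ~15 kB per 320x240 MJPEG frame at 15 fps, `streamMbps(15.0, 15.0)` comes out around 1.8 Mbps, so two such streams fit under the 7 Mbps cap; higher resolutions or frame rates eat the budget quickly.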
#3 - 15-01-2016, 06:05
viggy96
Registered User
FRC #3331
Team Role: College Student
Join Date: Jan 2015 | Rookie Year: 2010 | Location: Charlotte | Posts: 52
Quote:
Originally Posted by FRC2501
I am wondering what vision tracking options there are and the relative difficulties of each.

Our team programs in C++, and we've never done any vision tracking before.

Then onto our other questions:
1. Can you use multiple cameras on a single robot?
If you want to process multiple images, it would be best to use an onboard co-processor, like a Raspberry Pi or NVIDIA Jetson TK1. That way you can guarantee resources and not have to worry about the FMS bandwidth limit.

Also, I like OpenCV. GRIP is good too, and generates OpenCV code when used, but I don't know if it can do multiple cameras.

I was thinking of doing stereo vision myself...
#4 - 15-01-2016, 07:24
dubiousSwain (AKA Christian Steward)
The ride never ends
FRC #5420 (Velocity)
Team Role: Mentor
Join Date: Oct 2011 | Rookie Year: 2011 | Location: USA | Posts: 304
Re: Vision tracking and related questions

Quote:
Originally Posted by Fauge7
Vision tracking options: GRIP (good GUI, easy to use, but still in alpha) and OpenCV (well documented, but more complicated; you have to develop your own algorithm).

You can have more than one camera on the robot; the problem you will run into is the allowed bandwidth limit of 7 Mbps.
Our team is using RoboRealm and LabVIEW, both of which are good options. We are removing RoboRealm for the sake of simplifying the stack, but it's a good program for learning the concepts.
__________________
2015 MAR District Champions




#5 - 15-01-2016, 23:27
jfitz0807
Registered User
FRC #2877 (Ligerbots)
Team Role: Parent
Join Date: Jan 2009 | Rookie Year: 2009 | Location: Newton, MA | Posts: 67
Re: Vision tracking and related questions

We too program in C++. We have some new programming students this year and are very interested in vision tracking.

Since we're just starting out, would you recommend RoboRealm or GRIP to play with the images?

We think that "all we need to do" is find a target image and tell us how far left or right we are. Ideally, we could tell how far away from the goal we are too.

Do you think it's feasible to run the vision processing on the roboRIO, or do you think we need a separate platform?

Our first option then would be to run it on the driver station as opposed to a co-processor for simplicity. Unfortunately, we think we'll need a second camera. Our ball pick-up is opposite from our shooter.

I've been thinking about vision processing for years now, but this is the first time we have enough programming students to make it feasible. It seems that with the cRIO, off-board vision processing was a must. Has anyone done any throughput studies to see just how much computing resource a typical vision processing algorithm takes on the roboRIO?

Thanks.
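On the "how far away" question: a common back-of-envelope approach is the pinhole camera model, where distance is real target width times focal length (in pixels) divided by apparent width (in pixels). A minimal sketch; the 60 degree field of view and 20 inch target width used in the usage note are assumptions to check against your own camera and the game manual:

```cpp
#include <cmath>

// Focal length in pixels, derived from image width and horizontal field of view.
double focalLengthPx(double imageWidthPx, double hfovDeg) {
    const double pi = std::acos(-1.0);
    return (imageWidthPx / 2.0) / std::tan((hfovDeg * pi / 180.0) / 2.0);
}

// Pinhole model: returns distance in the same units as realWidth.
double estimateDistance(double realWidth, double apparentWidthPx, double focalPx) {
    return realWidth * focalPx / apparentWidthPx;
}
```

For an assumed 60 degree HFOV at 320 px image width, `focalLengthPx` gives about 277 px; a 20 in target appearing 100 px wide then works out to roughly 55 in away. In practice you should calibrate against a tape measure rather than trusting the datasheet FOV.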
#6 - 21-01-2016, 22:40
kmckay
Registered User
FRC #5401 (Fightin' Robotic Owls)
Team Role: Mentor
Join Date: Jan 2016 | Rookie Year: 2015 | Location: Bensalem, PA | Posts: 46
Re: Vision tracking and related questions

Do you have to use GRIP or OpenCV? The ScreenSteps documentation seems to suggest you can just code it with WPILib.
I know if I can get the coordinate data, I can work out the algorithm.
Also, the sample with the 2016 libraries uses SimpleRobot; does anyone have a command-based example?
#7 - 24-01-2016, 22:57
pnitin
Registered User
No team
Join Date: Oct 2015 | Location: USA | Posts: 14
Re: Vision tracking and related questions

Look here for tracking the Stronghold goal:
http://www.mindsensors.com/blog/how-...our-frc-robot-
#8 - 08-02-2016, 17:33
FRC2501
Registered User
FRC #2501 (Bionic Polars)
Join Date: Jan 2015 | Rookie Year: 2008 | Location: Minnesota | Posts: 52
Re: Vision tracking and related questions

Thanks for all the suggestions! Our team has started to play around with GRIP, but we are wondering whether it's best to run it from the driver station laptop (i5-4210U, integrated graphics, 6 GB RAM), from a Raspberry Pi 2, from the roboRIO, or to buy some sort of co-processor like the Kangaroo.

We plan to use a USB camera (Logitech), plug it into whatever we use for the vision processing, and then publish the results to the roboRIO.

Anyone have suggestions?

I also would like some examples/tutorials on how to read the contours report from C++ robot code.
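For reading GRIP's contours report from C++: GRIP publishes parallel number arrays (centerX, width, area, and so on) to NetworkTables under the report's name. The exact table path and keys below are assumptions that depend on how you name the Publish ContoursReport step, so verify them in the OutlineViewer. The selection logic itself is plain C++; a sketch:

```cpp
#include <vector>
#include <cstddef>

// Pick the largest contour from GRIP's parallel arrays and return its centerX,
// or -1.0 if the report is empty. In 2016 WPILib C++ the arrays would come
// from NetworkTables, roughly (table path and keys are assumptions):
//   auto grip = NetworkTable::GetTable("GRIP/myContoursReport");
//   std::vector<double> centerX = grip->GetNumberArray("centerX", std::vector<double>());
double largestTargetCenterX(const std::vector<double>& centerX,
                            const std::vector<double>& area) {
    double bestArea = -1.0, bestX = -1.0;
    for (std::size_t i = 0; i < centerX.size() && i < area.size(); ++i) {
        if (area[i] > bestArea) { bestArea = area[i]; bestX = centerX[i]; }
    }
    return bestX;
}

// Signed pixel offset from image center; positive = target right of center.
double offsetFromCenter(double targetX, double imageWidthPx) {
    return targetX - imageWidthPx / 2.0;
}
```

Feeding `offsetFromCenter` into a turn command (as MaikeruKonare's code above does with RoboRealm's COG_X) gives you a basic aiming loop.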
#9 - 08-02-2016, 19:47
MaikeruKonare (AKA Michael Conard)
Programming Division Captain
FRC #4237 (Team Lance-a-Bot)
Team Role: Programmer
Join Date: Feb 2016 | Rookie Year: 2012 | Location: Michigan | Posts: 15
Re: Vision tracking and related questions

Your main software options are GRIP and RoboRealm. GRIP feels unfinished and doesn't yet function well, as it is a work in progress. I greatly prefer RoboRealm.

I succeeded in getting RoboRealm vision processing working this week, and I can even adjust the robot's position to within 15 pixels of the center of the vision target.

This is my RoboRealm pipeline:

The Axis Camera module connects to an IP camera. RoboRealm can only use a USB camera if you're running RoboRealm on that device (i.e., to use a USB camera on the robot you would need a Windows machine on the robot, such as a Kangaroo).

The Adaptive Threshold module grayscales the image and filters it so that only intensities of roughly 190-210 show, which is about the intensity of the reflective tape when an LED light is shone on it.

Convex Hull fills in the target's U shape and makes it a rectangle.

My Blob Filter removes all blobs that have made it this far except the largest blob. (If you want multiple targets to come through, filter blobs by area instead of keeping only the largest.)

Center of Gravity gives the X coordinate of the center of the target in pixels.

Network Tables publishes the center-of-gravity information and the image dimensions to NetworkTables so they can be read by your program.

The following is my C++ code, compatible with the above RoboRealm pipeline; look mainly at the Test() function:
Code:
/*
 * Robot.cpp
 *
 *  Created on: Feb 7, 2016
 *      Author: Michael Conard
 */

#include "WPILib.h"
#include "Shooter.h"
#include "Constants.h"

class Robot: public SampleRobot
{
private:
	RobotDrive* mRobot;
	BuiltInAccelerometer* mAcc; //RoboRio accelerometer

	Joystick* mStick;		//Xbox controller

	CANTalon* mFrontLeft;
	CANTalon* mFrontRight;
	CANTalon* mRearLeft;
	CANTalon* mRearRight;

	Shooter* mShooter; //Shooter class object

	std::shared_ptr<NetworkTable> roboRealm; //Network table object, communicate with RoboRealm
public:
	Robot();				//Constructor
	~Robot();				//Destructor
	void OperatorControl();
	void Autonomous();
	void Test();
	void Disabled();
};

Robot::Robot():roboRealm(NetworkTable::GetTable("RoboRealm")) //Construct network object within the scope of Robot
{
	printf("File %18s Date %s Time %s Object %p\n",__FILE__,__DATE__, __TIME__, this);

	mStick = new Joystick(XBOX::DRIVER_PORT);
	mAcc = new BuiltInAccelerometer();
	mShooter = new Shooter(VIRTUAL_PORT::SHOOTER_LEFT_WHEEL, VIRTUAL_PORT::SHOOTER_RIGHT_WHEEL,
			VIRTUAL_PORT::SHOOTER_ANGLE, PWM_PORT::PUSH_SERVO_PORT, mStick);

	mFrontLeft = new CANTalon(VIRTUAL_PORT::LEFT_FRONT_DRIVE);
	mFrontRight = new CANTalon(VIRTUAL_PORT::RIGHT_FRONT_DRIVE);
	mRearLeft = new CANTalon(VIRTUAL_PORT::LEFT_REAR_DRIVE);
	mRearRight = new CANTalon(VIRTUAL_PORT::RIGHT_REAR_DRIVE);

	mRobot = new RobotDrive(mFrontLeft, mRearLeft, mFrontRight, mRearRight);
	mRobot->SetSafetyEnabled(false);
}

Robot::~Robot()
{
	delete mRobot;
	delete mAcc;
	delete mStick;
	delete mFrontRight;
	delete mFrontLeft;
	delete mRearRight;
	delete mRearLeft;
	delete mShooter;
}

void Robot::Disabled()
{
	printf("\nDisabled\n");
}

void Robot::OperatorControl() //standard driving and shooter control
{
	while(IsOperatorControl() && IsEnabled())
	{
		//printf("X Acc: %f, Y Acc: %f. Z Acc. %f\n", mAcc->GetX(), mAcc->GetY(), mAcc->GetZ());

		mRobot->ArcadeDrive(-mStick->GetRawAxis(XBOX::LEFT_Y_AXIS), -mStick->GetRawAxis(XBOX::LEFT_X_AXIS)); //standard drive

		if(mStick->GetRawAxis(XBOX::LEFT_TRIGGER_AXIS) >= 0.5) //spin wheels to suck ball in
		{
			mShooter->ActivateMotors(-.8);
		}
		else if(mStick->GetRawAxis(XBOX::RIGHT_TRIGGER_AXIS) >= 0.5) //activate wheels, wait for full speed, shoot ball
		{
			mShooter->ShootBall();
		}
		else
		{
			mShooter->DisableMotors();
		}

		Wait(0.02);
	}
}

void Robot::Test() //tests aligning with vision target
{
	double imageWidth = roboRealm->GetNumber("IMAGE_WIDTH", 320); //get image width
	double xPosition;
	double distFromCenter;
	while(IsTest() && IsEnabled())
	{
		xPosition = roboRealm->GetNumber("COG_X", -1.0); //get center of gravity of vision target
		distFromCenter = imageWidth/2.0 - xPosition; //positive means object too far right, negative means too far left
		printf("xPosition: %f, distFromCenter: %f\n", xPosition, distFromCenter);

		if(xPosition != -1) //if set to default value, network table not found
		{
			if(distFromCenter > 15) //vision target too far right, spin right
				mRobot->ArcadeDrive(0.0,0.40);
			else if(distFromCenter < -15) //vision target too far left, spin left
				mRobot->ArcadeDrive(0.0,-0.40);
			else
			{
				mRobot->ArcadeDrive(0.0,0.0); //stop, centered within 15 pixels
				printf("Centered!  "); //lines up with xPosition print above
			}
		}
		else
		{
			printf("Network table error!!!\n");
		}

		Wait(0.02); //brief wait so the vision test loop doesn't peg the roboRIO CPU
	}
}

void Robot::Autonomous() //aligns with vision target then shoots
{
	double imageWidth = roboRealm->GetNumber("IMAGE_WIDTH", 320); //get image width
	double xPosition;
	double distFromCenter;
	while(IsAutonomous() && IsEnabled())
	{
		xPosition = roboRealm->GetNumber("COG_X", -1.0); //get center of gravity of vision target
		distFromCenter = imageWidth/2.0 - xPosition; //positive means object too far right, negative means too far left

		if(xPosition != -1) //if set to default value, network table not found
		{
			if(distFromCenter > 15) //vision target too far right, spin right
				mRobot->ArcadeDrive(0.0,0.2);
			else if(distFromCenter < -15) //vision target too far left, spin left
				mRobot->ArcadeDrive(0.0,-0.2);
			else
			{
				Wait(0.5);
				mShooter->ShootBall(); //centered within 15 pixels, shoot ball
			}
		}
		else
		{
			printf("Network table error!!!\n");
		}

		Wait(0.02);
	}
}

START_ROBOT_CLASS(Robot)
Let me know if you need any clarifications or additional information.
#10 - 09-02-2016, 17:13
FRC2501
Registered User
FRC #2501 (Bionic Polars)
Join Date: Jan 2015 | Rookie Year: 2008 | Location: Minnesota | Posts: 52
Re: Vision tracking and related questions

Quote:
Originally Posted by MaikeruKonare
Your main software options are GRIP and RoboRealm. GRIP feels unfinished and doesn't yet function well, as it is a work in progress. I greatly prefer RoboRealm.
Sadly, our school blocks access to RoboRealm on the basis of "Website contains prohibited Forums content," so I have been "forced" into using GRIP, which, while it seems unfinished, is good enough for our usage at the moment.
#11 - 09-02-2016, 18:15
MaikeruKonare (AKA Michael Conard)
Programming Division Captain
FRC #4237 (Team Lance-a-Bot)
Team Role: Programmer
Join Date: Feb 2016 | Rookie Year: 2012 | Location: Michigan | Posts: 15
Re: Vision tracking and related questions

Quote:
Originally Posted by FRC2501
Sadly, our school blocks access to RoboRealm on the basis of "Website contains prohibited Forums content," so I have been "forced" into using GRIP, which, while it seems unfinished, is good enough for our usage at the moment.
You could try downloading it at home and bringing it in on a zip drive. You could also try translating the RoboRealm website in Google Translate from French to English; using the translated link brings up the site in a way that avoids the block the school places.
The Chief Delphi Forums are sponsored by Innovation First International, Inc.

