Okay, so I’ve been working on vision processing in the off-season, and I’ve built my LED light ring and all that. Now all I have to do is get the code to work. I’m using the sample vision code for Java, the one that filters out particles that are not red or are smaller than a certain size. I’ve got the code working with the images that have been loaded onto the cRIO, but I can’t get an image capture. Now, I’ve gotten the camera to stream to the SmartDashboard, so the camera is working. The error I get is “no camera image available”. If I could get some assistance, that would be great. Thank you in advance.
My guess would be that the axis camera isn’t at the IP that the Java program is looking at.
So the example creates the camera as follows:
AxisCamera camera; <-- declares the field in the class body
camera = AxisCamera.getInstance(); <-- initializes it in the constructor
The getInstance() method creates a new camera instance if one doesn’t already exist. The code in the library is as follows.
/**
* Get a reference to the AxisCamera, or initialize the AxisCamera if it
* has not yet been initialized. By default this will connect to a camera
* with an IP address of 10.x.y.11 with the preference that the camera be
* connected to the Ethernet switch on the robot rather than port 2 of the
* 8-slot cRIO.
* @return A reference to the AxisCamera.
*/
public static synchronized AxisCamera getInstance() {
    if (m_instance == null) {
        DriverStation.getInstance().waitForData();
        int teamNumber = DriverStation.getInstance().getTeamNumber();
        String address = "10." + (teamNumber / 100) + "." + (teamNumber % 100) + ".11";
        m_instance = new AxisCamera(address);
    }
    return m_instance;
}
So by default it’s going to use the IP 10.XX.YY.11, where XXYY is your team number (as set in the Java project). For example, team 2168 would yield the IP 10.21.68.11.
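Just to make that arithmetic concrete, here’s a standalone sketch of the same default-address math (the class and method names here are mine for illustration, not WPILib’s):

```java
public class CameraAddress {
    /** Mirrors the default-IP math in AxisCamera.getInstance(). */
    static String defaultCameraAddress(int teamNumber) {
        // e.g. 2168 -> "10." + 21 + "." + 68 + ".11"
        return "10." + (teamNumber / 100) + "." + (teamNumber % 100) + ".11";
    }

    public static void main(String[] args) {
        System.out.println(defaultCameraAddress(2168)); // 10.21.68.11
        System.out.println(defaultCameraAddress(3881)); // 10.38.81.11
    }
}
```

So for team 3881 the default already matches the 10.38.81.11 you’d expect.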
The getInstance method is overloaded. So if you don’t want to change the IP of the camera, you should be able to define it by modifying the getInstance call in your example code as shown below
camera = AxisCamera.getInstance("192.168.0.90");
Hope that helps.
I can stream the camera live feed to the SmartDashboard by using the same IP that I use in the code: 10.38.81.11. I can post the code if that would be helpful.
Yes, please do.
I’m just using the example vision code, with a few minor alterations to get it to work.
package edu.wpi.first.wpilibj.templates;

import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.SimpleRobot;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.camera.AxisCamera;
import edu.wpi.first.wpilibj.camera.AxisCameraException;
import edu.wpi.first.wpilibj.image.BinaryImage;
import edu.wpi.first.wpilibj.image.ColorImage;
import edu.wpi.first.wpilibj.image.CriteriaCollection;
import edu.wpi.first.wpilibj.image.NIVision.MeasurementType;
import edu.wpi.first.wpilibj.image.NIVisionException;
import edu.wpi.first.wpilibj.image.ParticleAnalysisReport;
import edu.wpi.first.wpilibj.image.RGBImage;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

/**
 * Sample program to use NIVision to find rectangles in the scene that are illuminated
 * by a red ring light (similar to the model from FIRSTChoice). The camera sensitivity
 * is set very low so as to only show light sources and remove any distracting parts
 * of the image.
 *
 * The CriteriaCollection is the set of criteria that is used to filter the set of
 * rectangles that are detected. In this example we're looking for rectangles with
 * a minimum width of 30 pixels and maximum of 400 pixels. Similar for height (see
 * the addCriteria() calls below).
 *
 * The algorithm first does a color threshold operation that only takes objects in the
 * scene that have a significant red color component. Then it removes small objects that
 * might be caused by red reflection scattered from other parts of the scene. Then
 * a convex hull operation fills all the rectangle outlines (even the partially occluded
 * ones). Finally a particle filter looks for all the shapes that meet the requirements
 * specified in the criteria collection.
 *
 * Look in the VisionImages directory inside the project that is created for the sample
 * images as well as the NI Vision Assistant file that contains the vision command
 * chain (open it with the Vision Assistant).
 */
public class CameraTest extends SimpleRobot {

    AxisCamera camera;     // the Axis camera object (connected to the switch)
    CriteriaCollection cc; // the criteria for doing the particle filter operation
    // Joystick joystick = new Joystick(1);

    public void robotInit() {
        camera = AxisCamera.getInstance("10.38.81.11"); // get an instance of the camera
        cc = new CriteriaCollection();                  // create the criteria for the particle filter
        cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_WIDTH, 30, 400, false);
        cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_HEIGHT, 40, 400, false);
    }

    public void autonomous() {
        while (isAutonomous() && isEnabled()) {
            System.out.println("Custom Autonomous() method running, consider providing your own.");
            try {
                /**
                 * Do the image capture with the camera and apply the algorithm described above. This
                 * sample will either get images from the camera or from an image file stored in the top
                 * level directory in the flash memory on the cRIO. The file name in this case is "10ft2.jpg"
                 */
                ColorImage image = camera.getImage(); // comment if using stored images
                // ColorImage image;                  // next 2 lines read image from flash on cRIO
                // image = new RGBImage("/10ft2.jpg");
                BinaryImage thresholdImage = image.thresholdRGB(25, 255, 0, 45, 0, 47);    // keep only red objects
                BinaryImage bigObjectsImage = thresholdImage.removeSmallObjects(false, 2); // remove small artifacts
                BinaryImage convexHullImage = bigObjectsImage.convexHull(false);           // fill in occluded rectangles
                BinaryImage filteredImage = convexHullImage.particleFilter(cc);            // find filled-in rectangles
                ParticleAnalysisReport[] reports = filteredImage.getOrderedParticleAnalysisReports(); // get list of results
                for (int i = 0; i < reports.length; i++) { // print results
                    ParticleAnalysisReport r = reports[i];
                    System.out.println("Particle: " + i + ": Center of mass x: " + r.center_mass_x);
                }
                System.out.println(filteredImage.getNumberParticles() + " " + Timer.getFPGATimestamp());
                SmartDashboard.putInt("Number of Particles: ", filteredImage.getNumberParticles());

                /**
                 * All images in Java must be freed after they are used since they are allocated out
                 * of C data structures. Not calling free() will cause the memory to accumulate over
                 * each pass of this loop.
                 */
                filteredImage.free();
                convexHullImage.free();
                bigObjectsImage.free();
                thresholdImage.free();
                image.free();
            } catch (AxisCameraException ex) { // this is needed if camera.getImage() is called
                ex.printStackTrace();
            } catch (NIVisionException ex) {
                ex.printStackTrace();
            }
        }
    }

    /**
     * This function is called once each time the robot enters operator control.
     */
    public void operatorControl() {
        // while (isOperatorControl() == true) {
        //     System.out.println(camera);
        // }
    }
}
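One side note on the sample: if any pipeline step throws, the free() calls after it are skipped and the image memory leaks. A try/finally that frees images in reverse creation order avoids that. This is only a sketch of the pattern — the Freeable interface and FakeImage class below are stand-ins for illustration, not WPILib types:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class FreeOnExit {
    // Stand-in for the WPILib image types, which all expose free().
    interface Freeable {
        void free();
    }

    // Fake image used here so the pattern can be demonstrated without WPILib.
    static class FakeImage implements Freeable {
        boolean freed = false;
        public void free() { freed = true; }
    }

    /** Free every tracked image in reverse creation order; safe to call from finally. */
    static void freeAll(Deque<Freeable> images) {
        while (!images.isEmpty()) {
            images.pop().free();
        }
    }

    public static void main(String[] args) {
        Deque<Freeable> images = new ArrayDeque<>();
        FakeImage a = new FakeImage();
        FakeImage b = new FakeImage();
        try {
            images.push(a); // e.g. camera.getImage()
            images.push(b); // e.g. image.thresholdRGB(...)
            throw new RuntimeException("a step failed mid-pipeline");
        } catch (RuntimeException ex) {
            // handle / log, as in the sample's catch blocks
        } finally {
            freeAll(images); // runs whether or not a step threw
        }
        System.out.println(a.freed && b.freed); // true
    }
}
```

In the real robot code you would push each ColorImage/BinaryImage as it is created and call freeAll() in the finally block.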
Let me add a few more details, as I have been attempting to help Ben on this one. getInstance() returns without error, and there appears to be some AxisCamera object there. However, getImage() throws an AxisCameraException.
The AxisCameraException thrown on getImage() is indicative of there being a connection problem.
Obviously your camera is on the network since you can connect to it, but maybe the username and password aren’t right? For some reason the cRIO isn’t able to connect to the device.
I would suggest going through the camera setup instructions here:
http://www.usfirst.org/sites/default/files/GettingStartedwiththe2012FRCControlSystem_RevA.pdf
If you’re still having problems, can you describe how your devices are networked together?
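One other quick sanity check you can run, independent of WPILib: try a raw TCP connect to the camera’s HTTP port from the same network the cRIO is on. If this fails, it’s a networking problem rather than a camera-settings problem. A minimal sketch, assuming the camera is at the address from this thread (adjust as needed):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class CameraPing {
    /** Return true if a TCP connection to host:port succeeds within timeoutMs. */
    static boolean canConnect(String host, int port, int timeoutMs) {
        Socket s = new Socket();
        try {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        } finally {
            try {
                s.close();
            } catch (IOException e) {
                // ignore close failures
            }
        }
    }

    public static void main(String[] args) {
        // Port 80 is the Axis camera's HTTP interface (the same one the browser login uses).
        System.out.println(canConnect("10.38.81.11", 80, 2000));
    }
}
```

If canConnect() reports true but getImage() still throws, the problem is more likely the camera credentials/anonymous-viewer settings than the wiring.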
SmartDashboard doesn’t use the same settings as the Robot Code. Make sure you set up the camera with the camera setup tool.
I’ve done all of the setup for the Camera through the Camera set up tool. I have also enabled the anonymous viewing for the camera, so we can actually connect. I have also been able to “login” to the camera using a web browser, so the passwords I’ve used are correct, and I’ve not been asked to input passwords anywhere else, aside from the set up of the camera. I will see if I can go through the camera set up again tomorrow.
Sorry, I’m not familiar with all of the terminology, what exactly do you mean by how the devices are networked?
I mean, how are they plugged into one another.
My assumption is that the cRIO and camera are plugged into your wireless bridge (in access point mode), and your laptop with the driverstation & dashboard on it is connected over wifi.
The IPs are hopefully as follows:
cRIO - 10.xx.xx.2
laptop - 10.xx.xx.5
camera - 10.xx.xx.11
I just want to confirm my assumptions so that we don’t overlook any obvious problems. Since you have communications between the laptop and the cRIO, and between the laptop and the camera, everything is probably physically wired together correctly. But it can’t hurt to ask.
Those assumptions are correct, although I am not currently sure what the Classmate’s IP is.
EDIT
The Classmate’s IP is 10.38.81.5 if it is wired up, and 10.38.81.6 if it is tethered.