First year for Image Processing. Need Help!

Trying to plan things out right now and have a couple questions:

What’s the easiest/best camera to use? We have the Kinect from last year, but I’ve heard a lot about “Axis” cameras, and I don’t know if there’s another good option besides those two. Are there certain pros/cons for each camera or any deciding factors?

Once I’ve decided which camera to use:
What is the easiest/best PROGRAM to use? I’m coding in LabVIEW this year but also know basic C++ and Java, I have RoboRealm (program provided in the KoP) installed and ready on my laptop, and I’ve also seen stuff about something called SmartDashboard. Again, pros, cons, deciding factors, anything helpful?

Thanks!

I would like to say that I just posted a basic OpenCV tutorial on Chief Delphi. We use a PS3 Eye camera because of its high frame rate.
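For a taste of what the OpenCV route looks like, a minimal capture loop might look like this (a sketch assuming the OpenCV 2.4-era Java bindings; camera index 0 is a placeholder for whatever device your camera shows up as):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.highgui.VideoCapture;

public class CaptureLoop {
    public static void main(String[] args) {
        // Load the native OpenCV library (must be on java.library.path).
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Index 0 is usually the first USB camera (placeholder value).
        VideoCapture camera = new VideoCapture(0);
        if (!camera.isOpened()) {
            System.err.println("Could not open camera");
            return;
        }

        Mat frame = new Mat();
        while (camera.read(frame)) {
            // Run your vision processing on 'frame' here.
        }
        camera.release();
    }
}
```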

You have several well supported options in the KOP and WPILib.
The Axis cameras were selected to go in the KOP because they are a pretty good choice to use with the control system and WPILib. Does your team still have one?

The examples that come with LabVIEW, Java, and C++ demonstrate how to process the image on either the cRIO or the DS laptop.

I’d advise that you get the basics working before worrying about high framerate or coprocessors or …

Consider the camera as a sensor. What does it need to tell the rest of the robot, and how often? Will the robot be shooting while flying across the field, or will it be stationary while the saucers fly? Don't make it any more complicated than you need to.

Greg McKaskle

This page might be a good place to start. As Greg McKaskle said, there are plenty of example code projects designed with the Axis camera in mind, and it's become so user-friendly that you can process an image and send a target's relative X and Y coordinates, as well as the diagonal range from the target to your robot, while barely modifying the original code of the Dashboard, the default cRIO project, or the vision processing example.
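To give a rough idea of what those relative coordinates look like, here's a hedged Java sketch of the pixel-to-coordinate math; the image size and field-of-view numbers are assumptions you'd replace with your own camera's values:

```java
/**
 * Convert a target's pixel position into the normalized (-1..1)
 * coordinates the vision examples report, plus a rough bearing.
 * All constants are assumed values; measure your own camera's.
 */
public class TargetMath {
    static final double IMAGE_WIDTH = 320;    // pixels (assumed)
    static final double IMAGE_HEIGHT = 240;   // pixels (assumed)
    static final double HORIZ_FOV_DEG = 47.0; // assumed horizontal field of view

    static double normalizedX(double pixelX) {
        return (pixelX - IMAGE_WIDTH / 2.0) / (IMAGE_WIDTH / 2.0);
    }

    static double normalizedY(double pixelY) {
        // Negated so +1 is the top of the image.
        return -(pixelY - IMAGE_HEIGHT / 2.0) / (IMAGE_HEIGHT / 2.0);
    }

    static double bearingDegrees(double pixelX) {
        // Small-angle approximation: scale the offset by half the FOV.
        return normalizedX(pixelX) * (HORIZ_FOV_DEG / 2.0);
    }
}
```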

You can use other cameras, of course, and that might be useful for some purposes, but you'll have to make sure the camera is legal. There are rules about bandwidth and the ports you're allowed to use, and also rules about the physical and electrical aspects of the camera, so make sure you read those.

Out of the two types of Axis cameras, I'd go with the M1011. Note that if you're going to use the 206, there's a certain control in the vision code where you need to specify that (I think it affects something about the lens's resolution? I kind of forgot...).

Okay, so I just checked, and we DO indeed have an Axis camera and the brackets. However, we do NOT have the ring of light that was mentioned in the whitepaper on retro-reflectivity. How essential is that, if at all?

From what y'all are saying and what I've found so far in my googling efforts, it sounds like our Axis camera is the way to go.

Now I just need to decide HOW I want to use it. It sounds like y'all are saying to 1) add to the LabVIEW code and 2) use an image processing program, similar to how you need a driver for a different controller.
So with that in mind, to get started, do I just need to read the whitepaper on LabVIEW image processing and decide on an image processing program (such as the KoP-provided RoboRealm)?

My teacher and I were discussing it all today and realized we didn't actually know whether people let the code adjust shots automatically, or whether they watch the camera feed on the driving computer during the match and use it to manually adjust the shooter based on what they can see.

The ring light isn't necessary, but it lets you concentrate light around the camera lens, so the light is pointed wherever the camera is pointed, i.e., at the target.

2473 used a pair of high-density LED lamps, which blinded any inspector looking at the bot, but they worked beautifully on the field (targets were completely saturated, so the lines stood out really well).
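That saturation is exactly what makes the thresholding step easy: once the target glows in one known color, isolating it is a couple of OpenCV calls. A minimal sketch (the HSV bounds are placeholder values for a green light; tune them against images from your own robot):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class TargetThreshold {
    /**
     * Keep only the pixels that match the ring light's color.
     * The H, S, V bounds below are placeholders for a green light.
     */
    static Mat threshold(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        Mat mask = new Mat();
        Core.inRange(hsv,
                new Scalar(40, 100, 100),  // lower H, S, V bound (placeholder)
                new Scalar(80, 255, 255),  // upper H, S, V bound (placeholder)
                mask);
        return mask; // white where the target is, black elsewhere
    }
}
```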

You can use the camera to automatically calculate a firing solution, if you can write the appropriate algorithms. Or you could simply draw crosshairs on the DS and aim that way. The more advanced solution would be automatic target acquisition and firing.
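As an illustration, a first cut at the automatic option is usually just a proportional turn toward the target. A sketch, where setTurnRate() and the constants are hypothetical stand-ins for your own drive code and tuning:

```java
/**
 * Bare-bones proportional aiming: turn until the target is centered.
 * setTurnRate() is a hypothetical stand-in for your own drive code,
 * and both constants are made-up values you'd tune on the robot.
 */
public class AutoAim {
    static final double KP = 0.5;         // proportional gain (assumed)
    static final double TOLERANCE = 0.05; // "close enough", normalized units

    // targetX is the target's horizontal offset, -1..1, 0 = centered.
    void aimStep(double targetX) {
        if (Math.abs(targetX) < TOLERANCE) {
            setTurnRate(0.0);  // on target: stop turning, ready to fire
        } else {
            setTurnRate(KP * targetX); // turn toward the target
        }
    }

    void setTurnRate(double rate) { /* drive code goes here */ }
}
```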

Would that be a safety issue?

OP, have you looked at RoboRealm? There was a voucher for a copy included in the KoP.

Inspectors loved it. So…

I installed it on my laptop but have yet to look at it at all (I still don't have a good enough idea of the whole subject to get into more specific things like individual programs).

What exactly does RoboRealm do, as opposed to the extra coding in LabVIEW, if I want to use automatic tracking with an algorithm? What does it do, as opposed to LabVIEW, if I just want a crosshair and the camera feed?

Sorry if I'm slow on the uptake here. Last year I just gave some ideas and helped build things, so I'm overwhelmed with all the technical stuff now. I can't thank y'all enough for all the help.

EDIT: When I mention extra coding in LabVIEW, by the way, I'm referring to the stuff mentioned in the LabVIEW whitepaper. …Which brings up another question: what's the difference between the PC and cRIO parts of this?
As far as I understand, you code the robot's stuff in LabVIEW on the computer, then deploy/image it over to the cRIO, where it actually runs on the robot. But that means they'd be the same, so surely there's something I'm missing.

Vision processing may be done on the cRIO, on the DS, or on a coprocessor.

What's the difference between doing it on one as opposed to the other, though? If it's automatic tracking, do you just do it on the cRIO, and if it's a crosshair and the camera output, do you just do it on the PC? Or is it more complicated than that?

You may write code for automatic tracking using any combination of cRIO, DS computer, or coprocessor. There are tradeoffs between different approaches.

I suspect you have a cRIO and a DS computer already. I suspect you do not yet have a coprocessor. If you get a coprocessor, you have to power it, mount it, boot it, and program it.

There are a number of ways to use the DS laptop, but you must also make it send data back to the robot.

The cRIO is perhaps the easiest to program, but it does not have a very fast CPU.

And there are many other tradeoffs you can weigh to evaluate which solution is right for you.

Greg McKaskle

The driver station has more CPU power than the cRIO, but there will be more latency because the picture has to be transmitted over Wi-Fi. Alternatively, you could keep a dedicated image-processing computer on the robot and get the best of both worlds.

By default, you are probably already streaming the camera image to the driver station PC, so it should be fairly easy to adapt the dashboard code to run image recognition. Last year the only things we had to send back to the robot were the coordinates of the rectangles we were tracking, so we didn't have any latency issues.
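For reference, the sending side of that can be a few lines over NetworkTables. A rough sketch, assuming the 2013-era Java NetworkTables client API; the table name, keys, and values here are made up, so use whatever your robot code actually reads:

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class VisionSender {
    public static void main(String[] args) {
        // Run as a client that connects to the robot (the cRIO is the server).
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("10.TE.AM.2"); // your team's robot address
        NetworkTable table = NetworkTable.getTable("vision"); // made-up name

        // After processing each frame, publish the results:
        double targetX = 0.12, targetY = -0.03; // example values
        table.putNumber("targetX", targetX);
        table.putNumber("targetY", targetY);
    }
}
```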

I would like to point out that they are prioritizing packets this year, so anything not directly related to the robot or field info will be put in a queue… We are continuing to use our separate PC because we saw a team lose two regionals because of this issue.

The robot control and status packets are being boosted to the "voice" priority level. Other data is at the same priority as last year. The field is also limiting the data used by each robot. It should be less likely that any robot hits the data limits unless it is due to its own camera or communications traffic.

Greg McKaskle

So here's my plan. I want to put the camera feed with a crosshair on the drive station (though it's not hugely important if it causes latency issues), so that whether we're looking at the robot during the game or looking through the camera feed, we can aim ROUGHLY at the right area.

Then I want to use the camera to evaluate the (x, y, z) coordinates of the target goal, plug those into an algorithm to slightly adjust and fine-tune the angle the robot is facing, set the shooter's speed/power, and shoot accurately every time.

So should I code the camera feed on my laptop and the coordinate evaluation/adjustments on the cRIO?

What you are looking for is an overlay. What you would need to do for that is get the camera feed from the robot via software (i.e., SmartDashboard). Then download some sort of crosshair image in the same resolution as your camera's output. SmartDashboard can display images and the camera stream, so you could simply pull up the crosshair image on the SmartDashboard and put it over your camera feed. Also, you already know the height of the targets and the height of your camera, so that's how you get the z coordinate. To do x and y, I recommend you read the vision processing guide:
http://wpilib.screenstepslive.com/s/3120/m/8731/l/90361?data-resolve=true&data-manual-id=8731
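To make the z part concrete: since the target height and camera height are fixed, the range falls out of one tangent. A sketch of that trigonometry, where every constant is an assumption you'd measure on your own robot:

```java
/**
 * Range to the target from known heights plus the camera's angle to it:
 *   tan(cameraPitch + targetPitch) = (targetHeight - cameraHeight) / range
 * All constants below are assumed; measure them on your robot.
 */
public class RangeFromHeight {
    static final double CAMERA_HEIGHT_M = 0.60;  // camera lens above the floor
    static final double TARGET_HEIGHT_M = 2.60;  // center of the goal
    static final double CAMERA_PITCH_DEG = 20.0; // upward tilt of the camera

    /** targetPitchDeg: vertical angle from the image center to the target. */
    static double rangeMeters(double targetPitchDeg) {
        double totalAngle = Math.toRadians(CAMERA_PITCH_DEG + targetPitchDeg);
        return (TARGET_HEIGHT_M - CAMERA_HEIGHT_M) / Math.tan(totalAngle);
    }
}
```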