Help with Axis Cameras

Hello everyone! This is my first year on my school's robotics team, and I've been tasked with basically everything computer-based: writing the drive code, etc. The team wants to put on an Axis camera, and although they say they used one just last year, it isn't working and I am very confused. I've tried changing the IP address on the computer while it's connected over Ethernet, and I got through the camera's setup, but that's it. Any help or a step-by-step walkthrough would be greatly appreciated.

Honestly, I would highly suggest using a USB webcam.

Here are a few steps to get you in the right direction:
http://wpilib.screenstepslive.com/s/4485/m/24194/l/288981-using-the-microsoft-lifecam-hd-3000

The problem with USB webcams is that they use a lot of processing time on the RoboRIO. This can be fine if you are streaming low-quality video, but the higher the quality you want, the more processor time you take away from other tasks. Granted, the RoboRIO is a beast and can probably handle the load, but it's nice to keep your main processor doing as little excess work as possible.

Axis cameras, though they are expensive, are typically the better choice in my opinion. FRC provides a tool to set them up that should be relatively self-explanatory. I believe it is in either C:/Program Files or C:/Program Files/National Instruments/LabVIEW 2014/projects, if my memory serves me correctly.

The USB cameras are a bit finicky. If your team uses the SmartDashboard (Java/C++) to view robot data, you will need to add a patched camera widget to the dashboard in order to view the camera feed when using the Simple Vision example code. The problem with the stock camera viewer and the Simple Vision code is that the CameraServer (used in Simple Vision) does not send the mJPEG data with Huffman tables, which causes an error on the Driver Station and prevents you from seeing your video feed. One way around this is the Intermediate Vision example class, which decodes and then re-encodes the video for viewing on the dashboard and adds the Huffman tables in the process, but that costs a lot of RoboRIO CPU. If you would rather save the RoboRIO some CPU load and insert the Huffman tables on the DS computer, use the patched camera widget instead.
So you can use the USB camera with the Simple Vision example class (which uses the WPILib CameraServer) and view the image on the DS with the patched widget.
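If you're curious what "missing Huffman tables" actually means at the byte level, here's a small standalone sketch (not WPILib code; the class and method names are mine) that walks a JPEG frame's segment markers and reports whether a DHT (Define Huffman Table) segment is present before the image data starts. Frames without it rely on the viewer supplying default tables, which the stock dashboard widget does not do.

```java
// Sketch: check whether a JPEG/mJPEG frame carries its own Huffman tables.
// A baseline JPEG is a sequence of 0xFF-prefixed segments; the DHT segment
// (marker 0xFF 0xC4) holds the Huffman tables. This is illustrative only.
public class JpegHuffmanCheck {
    public static boolean hasHuffmanTables(byte[] jpeg) {
        int i = 2; // skip the 0xFF 0xD8 start-of-image marker
        while (i + 3 < jpeg.length && (jpeg[i] & 0xFF) == 0xFF) {
            int marker = jpeg[i + 1] & 0xFF;
            if (marker == 0xC4) return true;  // DHT segment found
            if (marker == 0xDA) return false; // start-of-scan: no DHT before data
            // segment length is big-endian and includes its own two length bytes
            int len = ((jpeg[i + 2] & 0xFF) << 8) | (jpeg[i + 3] & 0xFF);
            i += 2 + len;
        }
        return false;
    }

    public static void main(String[] args) {
        // Tiny hand-built headers: SOI, then either a DHT or an SOS segment.
        byte[] withDht = {(byte) 0xFF, (byte) 0xD8,
                          (byte) 0xFF, (byte) 0xC4, 0x00, 0x02,
                          (byte) 0xFF, (byte) 0xDA, 0x00, 0x02};
        byte[] withoutDht = {(byte) 0xFF, (byte) 0xD8,
                             (byte) 0xFF, (byte) 0xDA, 0x00, 0x02};
        System.out.println(hasHuffmanTables(withDht));     // true
        System.out.println(hasHuffmanTables(withoutDht));  // false
    }
}
```

The patched widget effectively does the equivalent of detecting the `false` case and splicing in the standard tables before handing the frame to the decoder.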

There’s a really well-written guide on setting up an Axis Camera here: https://wpilib.screenstepslive.com/s/4485/m/24194/l/144985-configuring-an-axis-camera

I would argue that if this is your first year on an FRC team and you are the only programmer, without a mentor (guessing, because you asked CD rather than them), you should probably stick with LabVIEW for at least your first year. I was in practically the same boat as you, with a little more experience, and I can tell you nothing goes smoothly the first year. Even simple tasks like driving become challenging when you have no experience. Your goal for your first year should be to get the robot working, not to do anything fancy with it. For beginner robot programmers, I think LabVIEW is much more forgiving of syntax mistakes and makes basic things easier than C++ or Java (even if you have non-robot programming experience in those languages).

With that said, if you do choose LabVIEW, USB cameras are easier to use and should be fine for your purposes. The camera code is already written into the sample robot project, so all you have to do is plug in the camera and click USB camera (either HW or SW) on the sample dashboard. If you choose LabVIEW and need any help, PM me and I will be glad to help.

See http://www.chiefdelphi.com/forums/showthread.php?t=135835 for some helpful suggestions. Basically, use dynamic addressing everywhere, and set the camera’s host name to “axis-camera” in the advanced TCP configuration.

A big thanks from a newbie: it works on my side!

On topic: those resources should help. Otherwise, make everything static and use IP addresses of the form 10.TE.AM.y, where TEAM is your team number (e.g. team 1234 uses 10.12.34.y). That made everything easier for us, along with advanced programs.
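As a quick illustration of that addressing convention, here's a small sketch that derives a team's subnet from its number. The class name and the particular last-octet choices in the comment are just examples, not an official assignment:

```java
// Sketch: derive the FRC static-IP subnet from a team number.
// Convention: 10.TE.AM.y, where TE = team / 100 and AM = team % 100;
// y then picks the device (e.g. .2 for the RoboRIO and .11 for an Axis
// camera are common choices, used here only as examples).
public class TeamIp {
    public static String subnetFor(int team) {
        return "10." + (team / 100) + "." + (team % 100);
    }

    public static void main(String[] args) {
        System.out.println(subnetFor(254) + ".11");   // 10.2.54.11
        System.out.println(subnetFor(1234) + ".11");  // 10.12.34.11
    }
}
```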

Little bit off topic: I think FIRST is in a transition phase with the Axis/USB cameras. They didn't update any of the Axis camera API, but they built the whole CameraServer thing. It really complicated vision tracking this year, and it seemed to get thrown into the dust: no team that ran the three-tote auto used vision, and hardly any team did, because it took too much time and you couldn't use the tape for anything. The workarounds for the tape were easy: for position and orientation, a simple encoder/gyro sensor fusion easily gave enough accuracy within the first 15 seconds that there was no need for any other sensors in auto.
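The encoder/gyro fusion mentioned above boils down to simple dead reckoning: the gyro supplies heading, the encoders supply distance traveled, and you integrate the two into a field position. Here's a minimal sketch of that idea; the names and the update scheme are illustrative, not any team's actual code:

```java
// Sketch: encoder + gyro dead reckoning for autonomous positioning.
// Each update, the change in encoder distance is projected along the
// current gyro heading and accumulated into an (x, y) field position.
public class DeadReckoning {
    private double x, y;          // field position, same units as distance
    private double lastDistance;  // cumulative encoder reading at last update

    // distance: cumulative encoder distance; headingDeg: gyro heading
    public void update(double distance, double headingDeg) {
        double delta = distance - lastDistance;
        double rad = Math.toRadians(headingDeg);
        x += delta * Math.cos(rad);
        y += delta * Math.sin(rad);
        lastDistance = distance;
    }

    public double getX() { return x; }
    public double getY() { return y; }

    public static void main(String[] args) {
        DeadReckoning odo = new DeadReckoning();
        odo.update(1.0, 0.0);   // drive 1 unit straight ahead
        odo.update(2.0, 90.0);  // 1 more unit after turning 90 degrees
        System.out.println(odo.getX() + ", " + odo.getY());  // approximately (1, 1)
    }
}
```

Called from the periodic loop with fresh sensor readings, this is typically accurate enough over a 15-second autonomous period, which is why vision wasn't needed for positioning this year.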