Last year, our team decided to send the images from the Axis camera through the D-Link router, in order to save the cRIO some effort and improve the video quality. To make this change, we had to modify the dashboard to get the images from the network rather than from the cRIO (I am not sure about this part). However, we lost the ability to process images. This gave me an idea, but I have no idea how to implement it.
Would it be possible to send the images from the camera >> to the router >> to the dashboard where the images are viewed >> to some code built into the dashboard which receives images and returns x,y coordinates >> back to the cRIO where the coordinates are processed and acted upon? If so, how could it be done?
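To make the data flow concrete, here is a rough sketch of what I picture running on the dashboard side. Everything in it is a placeholder: the camera URL, the robot address, the port, and especially findTarget(), which just stands in for whatever vision code would actually locate the target.

```java
import java.awt.image.BufferedImage;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.URL;
import javax.imageio.ImageIO;

public class DashboardVision {
    public static void main(String[] args) throws Exception {
        // Placeholder addresses -- substitute your own team's numbers,
        // and check the rules for which ports are legal on the field
        URL camera = new URL("http://10.1.90.11/jpg/image.jpg");
        InetAddress robot = InetAddress.getByName("10.1.90.2");
        DatagramSocket socket = new DatagramSocket();

        while (true) {
            BufferedImage frame = ImageIO.read(camera); // grab the latest frame
            int[] xy = findTarget(frame);               // placeholder vision code
            byte[] msg = (xy[0] + "," + xy[1]).getBytes();
            socket.send(new DatagramPacket(msg, msg.length, robot, 1130));
        }
    }

    // Stand-in for real image processing; just returns the frame center
    private static int[] findTarget(BufferedImage img) {
        return new int[] { img.getWidth() / 2, img.getHeight() / 2 };
    }
}
```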
The advantages would be that it puts the image processing on the driver station, where it can run faster and can also make use of NI’s machine vision library.
Our team uses Java for the robot environment, and has little to no experience with LabVIEW.
In 2011 this was both legal and possible - I saw a few teams that did it, though mine didn’t.
Here are some things to keep in mind:
–The cRIO can only do limited image processing without sacrificing other functionality
–The cRIO can download images from the camera over a wired network (fast)
–The driver station can do extensive image processing
–The driver station can download images from the camera over wireless (slower)
If you want very high-fidelity information (such as when a robot needs to distinguish between things that look similar), then sending images via wireless to a laptop makes sense.
If you want very responsive information (such as tracking a brightly colored moving ball in order to interact with it), you probably want to do the processing on the cRIO.
The camera can handle multiple simultaneous streams. We connected the camera to the D-Link and had the dashboard connect to it. We also changed the camera’s IP address in the robot code so that it connected to the new address. We did this in LabVIEW, but I assume it should be similar for the other languages.
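I haven’t checked the Java camera API, but if it parallels the LabVIEW one, pointing the robot code at the camera should just be a matter of passing the new address in. This is purely a guess at what the WPILibJ call looks like, so verify it against the actual class:

```java
import edu.wpi.first.wpilibj.camera.AxisCamera;

public class CameraSetup {
    public static AxisCamera connect() {
        // Guessing that getInstance() takes an address; the IP itself
        // is a placeholder for wherever your camera ends up
        return AxisCamera.getInstance("10.1.90.11");
    }
}
```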
This is exactly what I had in mind (specifically the target in Lunacy, the target in Breakaway, or the scoring rack in Logomotion). However, our team has always had problems with image processing in Java (see my statement above). In fact, our programming leader is against even using the camera with the cRIO, because the robot ran over a computer last year as a result of an error in the code! I suppose I could always convince her otherwise, but I don’t know how I would get the robot to process images efficiently.
This seems like the best option. But is there really a method to set the camera IP in Java? I don’t see any option besides the normal init statement. I have heard that the image-processing classes in Java aren’t as complete as those for C or LabVIEW. Could it be because of this? (I would love to be proven wrong here…)
We used a similar method. We set the camera’s IP to a different address (.19, because it was one past the DHCP range). We then fetched the image file from the camera using Java file I/O and URL. This worked pretty well because the Axis camera constantly stores and overwrites a JPEG at /jpg/image.jpg. We then do the image processing and display the results with processing.org’s library (it’s really simplified Java; you can extract the library and use it in a full-blown Java applet, but it does need to be an applet). We are still working on sending the processed info back to the cRIO.
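The fetch itself boils down to reading that URL over and over. A little test like this (our .19 address is just an example) also shows how long each frame takes to come across:

```java
import java.awt.image.BufferedImage;
import java.net.URL;
import javax.imageio.ImageIO;

public class SnapshotTest {
    public static void main(String[] args) throws Exception {
        // The camera keeps overwriting this JPEG, so every request
        // returns its most recent frame
        URL snapshot = new URL("http://10.1.90.19/jpg/image.jpg");
        while (true) {
            long start = System.currentTimeMillis();
            BufferedImage frame = ImageIO.read(snapshot);
            System.out.println(frame.getWidth() + "x" + frame.getHeight()
                    + " in " + (System.currentTimeMillis() - start) + " ms");
        }
    }
}
```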
Where would that /jpg/image.jpg be located? On the cRIO?
Hmm… as JohnGilb pointed out, this wouldn’t be as fast as cRIO processing (can anyone confirm this?). But it is a good idea…
Why use Java? (We are only limited to Java as the cRIO environment.) So if you are saying that we can write another program (not limited to LabVIEW) that would run on the driver station computer, could get information from the camera, and could somehow send information back to the cRIO, what other libraries could we use? I was thinking about C#.
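For the send-back part, any language that can open a UDP socket should work on the driver station side, C# included. On the robot end, the cRIO’s Java VM is Java ME rather than full Java, so assuming its Generic Connection Framework supports datagrams, the receiving code might look something like this (the port just has to match whatever the dashboard sends to):

```java
import javax.microedition.io.Connector;
import javax.microedition.io.Datagram;
import javax.microedition.io.DatagramConnection;

public class CoordinateReceiver {
    public static void listen() throws Exception {
        // Listen for "x,y" strings from the driver station; the port
        // number is a placeholder and must match the sender's
        DatagramConnection conn =
                (DatagramConnection) Connector.open("datagram://:1130");
        Datagram dgram = conn.newDatagram(conn.getMaximumLength());
        while (true) {
            conn.receive(dgram);
            String msg = new String(dgram.getData(), 0, dgram.getLength());
            int comma = msg.indexOf(',');
            int x = Integer.parseInt(msg.substring(0, comma).trim());
            int y = Integer.parseInt(msg.substring(comma + 1).trim());
            // ...hand x and y to whatever code aims or drives the robot
            dgram.reset(); // clear the buffer before the next packet
        }
    }
}
```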