Re: Vision... what? how?
1. There are three places this can run: on the cRIO, on the Driver Station laptop, or on an on-board processor. The cRIO isn't great for vision because it can't sustain a high frame rate once you put complex processing on it. The laptop is what we used, with a lot of success. We used a laptop other than the Classmate and it worked really well; it doesn't have to be fancy, ours was a cheap Toshiba. The on-board processor is the most complicated/difficult option, and otherwise works the same way as the laptop.
2. Some teams use laptops, some use a small board like a Raspberry Pi or BeagleBone, others use modified desktop computers, and some use a camera board with a processor built in, like the CMUcam.
3. Yes, you can use Ethernet. You can also use serial to the cRIO, or I2C.
4. If you are using the Axis camera, you can grab the image off the network. You can also use other cameras, like a USB webcam or a Kinect, and get the image with OpenCV.
5. Yes, you can use the Axis camera. If you go with an on-board processor you could also use a webcam instead, but there's nothing wrong with the Axis camera they give you.
6. You could program it with LabVIEW's vision tools, but I would recommend OpenCV, which can be used from C, Java, and Python. See 341's example of how they used OpenCV on their driver station laptop in 2012.
7. I've never actually used RoboRealm. It is one way to do vision, but you don't need it.
8. You can send the data back to the cRIO through the Ethernet connection.
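One simple way to do that from the laptop side is a UDP datagram carrying a small JSON payload. The address and port here are illustrative only — your cRIO code has to be listening on whatever IP/port you pick, and the field names are made up for this sketch.

```python
import json
import socket

def send_result(sock, address, target_x, target_y, distance):
    """Send one vision result to the robot as a small JSON UDP packet."""
    payload = json.dumps({
        "x": target_x,      # target center x, pixels
        "y": target_y,      # target center y, pixels
        "dist": distance,   # estimated distance, in whatever units you use
    }).encode("ascii")
    sock.sendto(payload, address)
```

Usage would look like `send_result(sock, ("10.te.am.2", 1180), 160, 120, 3.5)` once per processed frame; UDP is a good fit here because a dropped packet just means the robot uses the next frame's result.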
Here's my advice: if you're just trying out vision, it's going to be much easier to start by running the processing on the driver station laptop. You can do that with LabVIEW, RoboRealm, or OpenCV.