Re: Dual Cameras - Dual Purposes
The existing dashboard code reads directly from the camera IP on port 80. It requests an image stream as a CGI GET and processes the stream from the resulting session. If parameters change, it closes and reopens a new CGI session.
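To make the stream-processing step concrete, here is a hedged sketch in Python rather than LV: the camera's CGI session returns multipart JPEG frames, and the dashboard's job is to carve individual JPEGs out of the byte stream. Scanning for the JPEG start/end markers, as below, is a simplification of what the real DB code does; a robust parser would honor the multipart boundary and Content-Length headers instead.

```python
def split_mjpeg(buffer: bytes) -> list:
    """Split a multipart MJPEG byte stream into individual JPEG frames.

    Each part of the stream is a boundary line, some headers, a blank
    line, then the JPEG bytes. Here we simply locate the JPEG
    start-of-image/end-of-image markers, which works for well-formed
    streams regardless of the exact boundary string.
    """
    frames = []
    start = 0
    while True:
        soi = buffer.find(b"\xff\xd8", start)   # JPEG start-of-image
        if soi < 0:
            break
        eoi = buffer.find(b"\xff\xd9", soi)     # JPEG end-of-image
        if eoi < 0:
            break
        frames.append(buffer[soi:eoi + 2])
        start = eoi + 2
    return frames

# Demo on a synthetic two-frame stream (fake JPEG payloads):
fake = (b"--myboundary\r\nContent-Type: image/jpeg\r\n\r\n"
        b"\xff\xd8AAAA\xff\xd9\r\n"
        b"--myboundary\r\nContent-Type: image/jpeg\r\n\r\n"
        b"\xff\xd8BBBB\xff\xd9\r\n")
print(len(split_mjpeg(fake)))  # -> 2
```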
If the "camera" on the ITX could be made to look identical to the Axis, you could use the DB code as is. This would involve generating an MJPEG stream on the ITX and serving it on port 80 as if you were an Axis camera. While cool hackery, this is not really what I'd recommend, but it is a starting point for understanding one way to do this.
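If you did go the impersonation route, the wire format is just an HTTP response of type multipart/x-mixed-replace with one part per frame. A minimal Python sketch of the bytes involved (function names like mjpeg_part are mine for illustration, not from any Axis or DB code):

```python
def mjpeg_headers(boundary: str = "myboundary") -> bytes:
    # The HTTP response header an MJPEG CGI endpoint sends once,
    # before the first frame.
    return ("HTTP/1.0 200 OK\r\n"
            "Content-Type: multipart/x-mixed-replace; boundary="
            + boundary + "\r\n\r\n").encode()

def mjpeg_part(jpeg: bytes, boundary: str = "myboundary") -> bytes:
    # One frame of the stream: boundary line, part headers,
    # blank line, then the JPEG data.
    head = ("--" + boundary + "\r\n"
            "Content-Type: image/jpeg\r\n"
            "Content-Length: " + str(len(jpeg)) + "\r\n\r\n").encode()
    return head + jpeg + b"\r\n"

# A tiny fake JPEG (just the SOI/EOI markers) to show the framing:
print(mjpeg_part(b"\xff\xd8\xff\xd9"))
```

To serve this you would accept a connection on port 80, write mjpeg_headers() once, then write mjpeg_part(frame) for each new image for as long as the dashboard keeps the session open.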
What I'd recommend is making a simple ask/response protocol over TCP. When asked, take the IR or depth image on the robot and write it to the connection. When it is read on the DB, format it as needed and transfer it into an image of some sort. I have some Kinect code in LV that moves the data efficiently into different formats if you find you need it.
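A minimal Python sketch of such an ask/response exchange, using a 4-byte length prefix so the reader knows exactly how many bytes make up one image (the framing convention is my choice; any agreed-on scheme between robot and DB works):

```python
import socket
import struct

def send_frame(sock: socket.socket, payload: bytes) -> None:
    # Prefix the image bytes with a 4-byte big-endian length.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_frame(sock: socket.socket) -> bytes:
    def recv_exact(n: int) -> bytes:
        # TCP is a byte stream, so keep reading until n bytes arrive.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-frame")
            buf += chunk
        return buf
    (length,) = struct.unpack(">I", recv_exact(4))
    return recv_exact(length)

# Demo with an in-process socket pair standing in for robot <-> DB:
robot, dashboard = socket.socketpair()
send_frame(robot, b"\x00\x01\x02depth-image-bytes")
print(recv_frame(dashboard))
robot.close(); dashboard.close()
```

On the DB side, the bytes that come back would then be reshaped into whatever image type the display expects.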
For display, you can use the LV picture control or the IMAQ display. You could even use the intensity graph if you would like it to do the color mapping as part of the blit.
Using an accelerometer to determine distance is a very hard problem. To do this, you need to know orientation as well as accelerations and you need to have high quality sensors and/or knowledge of how the chassis can be affected.
If you hook an analog sensor to the cRIO and use the DMA interface, you can pull back a high speed signal. This helps demonstrate how the error from a tiny tilt of the sensor accumulates into an erroneous velocity as you integrate. You can also play with this if you have the right app on your phone.
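You can see the accumulation numerically without any hardware. A small Python sketch, assuming a half-degree mounting tilt and 100 Hz samples (both numbers are made up for illustration): gravity leaks into the "forward" axis as g*sin(theta), and double integration turns that tiny constant bias into runaway velocity and position.

```python
import math

g = 9.81                      # m/s^2
theta = math.radians(0.5)     # unnoticed half-degree mounting tilt
bias = g * math.sin(theta)    # apparent forward acceleration (~0.086 m/s^2)
dt = 0.01                     # 100 Hz sampling, like a DMA-fed analog channel

v = x = 0.0
for _ in range(1000):         # 10 seconds of the robot standing still
    v += bias * dt            # integrate acceleration -> velocity
    x += v * dt               # integrate velocity -> position

# After 10 s of sitting motionless, the math says the robot is moving
# close to 1 m/s and has traveled several meters.
print(round(v, 3), round(x, 3))
```

The point is that the velocity error grows linearly and the position error grows quadratically, so even excellent sensors need orientation knowledge (or constraints on how the chassis can move) to keep the integration honest.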
Greg McKaskle