#3 | 23-07-2013, 02:46
craigboez
Mechanical Engineer
AKA: Craig Boezwinkle
FRC #2811 (StormBots)
Team Role: Mentor
 
Join Date: Oct 2008
Rookie Year: 2009
Location: Chicago, IL
Posts: 217
Re: Getting an ODROID Up and Running

An update:

I have successfully installed Arch Linux ARM on the Odroid U2. The Arch Linux ARM project officially supports the platform and provides all the packages necessary for what we're doing here, which made things nice and simple. Arch ships without a GUI, so all interaction with the Odroid happens over Ethernet via ssh. I see this as a feature, since it means less overhead: the more CPU cycles available for vision processing, the better.

Installing OpenCV was relatively simple. Arch provides a package for this.

I purchased a Logitech C615 webcam. I chose this model for its price, availability, expected longevity (hopefully it will remain available for a while), and because the internet told me it works with V4L (Video4Linux) without any issues. So far I've found this to be accurate.

I have successfully connected to the Odroid on my home network, written a basic C++ program, and compiled and executed it.

Next steps:
  • Capture a frame from the camera
  • Add a placeholder for vision analysis code. Students can deal with this later.
  • Figure out how to send this information to the cRIO

I like the idea above of using the Qt framework to handle the low-level details of sending data. Based on some research (this thread), the position information should be packed into a datagram and sent to the cRIO as a UDP packet. I have no idea how to do this yet, but it sounds like a solid approach.

It looks like Arch has packages for Qt, so I will begin exploring and experimenting. Any tips or tricks are appreciated.