Quote:
Originally Posted by magnets
I think you should reread my last post.
If you're looking to have a separate processor with a camera capable of processing image data and sending it back to the cRIO, then look at the CMUcam5. At $69, it's way cheaper than a Pi and a webcam, and I guarantee this will be the cheaper route.
Also, it's a ton of work to make something like this work well, more than one person, or even one team, could do. Just think: is it worth spending all the time, money, and resources of you and your team developing this solution when solutions already exist? Driver Station vision processing is proven to be as effective as onboard, and it's simpler, cheaper, and more reliable. You can't argue against any of these.
When you say "it's not rocket science," it shows me that you haven't really spent much time with vision in the past. This stuff is tricky. One of the best and most technical teams in FIRST didn't move on Einstein because of a little glitch related to networking with a vision processing board, and some of the people involved in the development process were rocket scientists! Your vision code needs to work early in build season so you have time to design your robot around it and to test and tune. If you have detailed knowledge of the Linux kernel, Linux USB drivers, FFmpeg, OpenCV, networking, and a lot of programming experience, it is possible. Things that seem straightforward and simple, like installing the OpenCV libraries, can be more difficult and time-consuming than you expect.
Also, how do you plan on building the power supply? On a printed circuit board? Coming up with a good layout is tough. Trust me. If you don't know what you're doing, and you don't have a good design with an effective ground plane to isolate interference, all your electronics will be plagued by random freezes, resets, and other unexpected behavior.
If you're really set on using onboard vision, why not use a BeagleBone? Or wait until 2015 and use the dual-core ARM processor in the roboRIO.
I agree that driver-station vision processing is a great approach. However, there are still latencies. In a comparison between onboard processing on the Pi and the DS, the DS would totally dominate. But with something much more powerful, e.g. an ODROID, onboard processing seems like the better option, if the team can pull it off!
I am posting in this thread because this problem is not unique to the Pi; it affects many other SoCs that would be suitable for this.
Of course I don't have much experience. After all, I'm just a student. However, I am very curious and try to research and learn everything I come across. This is another reason why I come to CD! I'm just like a human octopus!
I agree that without a plan, things will get out of hand quickly. Before I start anything, I brainstorm and go through an engineering design process. First I draw a block diagram of what will happen, followed by a drawn diagram of the parts I will use and how they will be hooked up. From there, I move to CAD and create a schematic. I start with something like Fritzing, because it helps me breadboard things and fix the errors I find. Then I move to DipTrace, get the schematic down, and use DipTrace to generate a PCB layout. Finally, I make the PCB, populate it with components, and test it to make sure it works. After that I start marketing it: launch a Kickstarter, keep improving it, and build up its popularity. Then I work on making it much more robust by adding an aluminum case, an over-rated heatsink, and all sorts of other safety measures. Then I finally present the finished product to FRC. Note: I have been calling it a product. It would be a product, possibly sold somewhere like AM or VexPro, but at little to no profit!
I actually have looked at the CMUcam5 Kickstarter and really liked it. However, it doesn't seem powerful enough to work well!
I like to go with simplicity. I was thinking about a MOSFET-switched, inductor-based step-up/step-down converter that charges a capacitor to a steady 5 V, ±0.1 V. I think that is within the working range of the Pi, and with a microcontroller switching the MOSFET it should be no problemo. I want to use PID to do the voltage-holding!