Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Electrical (http://www.chiefdelphi.com/forums/forumdisplay.php?f=53)
-   -   On Board Computer (http://www.chiefdelphi.com/forums/showthread.php?t=106462)

dcarr 15-05-2012 17:27

Re: On Board Computer
 
We ran Ubuntu on ours. I want to say the TDP of the Atom itself is around 13 W, but I'd have to check; it's very low.

We had the server side launch on startup so it was running when the robot was powered up.

Hjelstrom 15-05-2012 19:34

Re: On Board Computer
 
We used a Pandaboard running Ubuntu. It runs on 5V, 2A so under 10W. We set up a user that auto-logs in on startup and then launched our program in that user's .profile file. We're working on a paper that describes everything in detail, hopefully it won't take too much longer.
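
In case it helps anyone setting up something similar, here's a minimal sketch of the .profile approach (the directory and program name below are just placeholders, not our actual layout):

Code:

# appended to ~/.profile for the auto-login user
# start the robot-side program in the background so the login itself isn't blocked
if [ -x "$HOME/robot/vision_server" ]; then
    "$HOME/robot/vision_server" >> "$HOME/robot/vision_server.log" 2>&1 &
fi

Auto-login itself is configured in the login manager's settings; the .profile snippet just takes over from there.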

yottabyte 15-05-2012 19:40

Re: On Board Computer
 
Quote:

Originally Posted by Hjelstrom (Post 1169614)
We used a Pandaboard running Ubuntu. It runs on 5V, 2A so under 10W. We set up a user that auto-logs in on startup and then launched our program in that user's .profile file. We're working on a paper that describes everything in detail, hopefully it won't take too much longer.

Hope to see the paper soon.

Tom Line 16-05-2012 16:00

Re: On Board Computer
 
Quote:

Originally Posted by KylerHagler (Post 1169420)
You could always just do your image processing on your cRIO? I believe we did that and didn't have a problem, but you just have to offload as much as you possibly can from the onboard CPU so that it can actually keep up. That's what led us to use the 2CAN and the integrated PID loops on the Jaguars themselves.

That's not necessarily true. Reconsider how you perform your image processing. It's perfectly reasonable to process a single frame to get all the information you need to lock on to the target and shoot accurately.

Vision processing does not necessarily mean processing every frame continuously at real-time speeds. That is a mistake many programmers make. In fact, most of the manufacturing vision systems on our production lines use the single-frame method.

Tom Bottiglieri 16-05-2012 16:26

Re: On Board Computer
 
Quote:

Originally Posted by Tom Line (Post 1169860)
Vision processing does not necessarily mean processing every frame continuously at real-time speeds. That is a mistake many programmers make. In fact, most of the manufacturing vision systems on our production lines use the single-frame method.

Yup. We grabbed one frame, processed it in about 100ms, then fed the result into a control loop using the gyro. We were even able to pull out our lateral position on the field to shoot off center if we were on the side of the key, allowing us to have an accurate alliance bridge autonomous mode. We did this spending 0 dollars and 0 hours on an external computer.
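
To make that shape concrete, here's a rough Java sketch of the one-shot flow (the interfaces and names are placeholders for illustration, not our actual code): the expensive vision work happens exactly once, the result is converted into a gyro-relative heading setpoint, and the control loop only ever looks at the gyro afterwards. The same idea works if you feed a turret setpoint instead of a drivetrain heading.

Code:

// One-shot aiming sketch: process a single frame, latch the result into gyro space,
// then run a cheap control loop against the gyro only.
interface Camera { double[] grabFrame(); }                        // raw image, however you acquire it
interface Vision { double targetAzimuthDegrees(double[] frame); } // the ~100 ms of work lives here
interface Gyro   { double getAngleDegrees(); }

class OneShotAim {
    private final Camera camera;
    private final Vision vision;
    private final Gyro gyro;
    private double headingSetpoint;

    OneShotAim(Camera camera, Vision vision, Gyro gyro) {
        this.camera = camera;
        this.vision = vision;
        this.gyro = gyro;
    }

    // Called once when the driver asks to aim: all the expensive work happens here.
    void captureTarget() {
        double[] frame = camera.grabFrame();
        double offsetDegrees = vision.targetAzimuthDegrees(frame); // + means target is to the right
        headingSetpoint = gyro.getAngleDegrees() + offsetDegrees;  // lock the goal into gyro space
    }

    // Called every control-loop iteration afterwards: cheap, no vision involved.
    double headingError() {
        return headingSetpoint - gyro.getAngleDegrees();
    }
}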

Brian Selle 16-05-2012 16:44

Re: On Board Computer
 
Quote:

Originally Posted by Tom Bottiglieri (Post 1169869)
Yup. We grabbed one frame, processed it in about 100ms, then fed the result into a control loop using the gyro. We were even able to pull out our lateral position on the field to shoot off center if we were on the side of the key, allowing us to have an accurate alliance bridge autonomous mode. We did this spending 0 dollars and 0 hours on an external computer.

That's exactly what we did except we fed turret position. The cRIO CPU would spike to around 80-90% for a moment during the image processing but since nothing else was happening during the shot sequence it was never an issue. Never saw the need for continuous image processing or offloading to an external CPU...

Tom Bottiglieri 16-05-2012 16:50

Re: On Board Computer
 
Quote:

Originally Posted by btslaser (Post 1169873)
That's exactly what we did except we fed turret position. The cRIO CPU would spike to around 80-90% for a moment during the image processing but since nothing else was happening during the shot sequence it was never an issue. Never saw the need for continuous image processing or offloading to an external CPU...

I'd like to elaborate on my last post a bit. We did run the vision code continuously, but the control loops only relied on one valid frame and the vision processing ran in a separate task with lower priority than the main robot code/communications task. The vision task slept enough to not cause particularly high CPU usage. When we were looking for a frame, we would use the last processed frame (if we got a good result within the last, say, 100ms) or wait for a fresh result to appear.
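
For anyone curious what that looks like structurally, here's a simplified Java sketch of the stale-or-wait pattern (the class and method names are made up for illustration): a low-priority background thread keeps publishing timestamped results, and the consumer either reuses a result newer than ~100 ms or waits briefly for the next one.

Code:

import java.util.concurrent.atomic.AtomicReference;

class VisionTask implements Runnable {
    static class Result {
        final double azimuthDegrees;
        final long timestampMs;
        Result(double azimuthDegrees, long timestampMs) {
            this.azimuthDegrees = azimuthDegrees;
            this.timestampMs = timestampMs;
        }
    }

    private final AtomicReference<Result> latest = new AtomicReference<Result>();

    // Background loop: process a frame, publish it, then sleep so the main task stays responsive.
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            double azimuth = processOneFrame();   // the expensive part (~100 ms)
            latest.set(new Result(azimuth, System.currentTimeMillis()));
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // Consumer side: reuse a recent result, otherwise poll briefly for a fresh one.
    Result freshResult(long maxAgeMs, long maxWaitMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxWaitMs;
        while (true) {
            Result r = latest.get();
            long now = System.currentTimeMillis();
            if (r != null && now - r.timestampMs <= maxAgeMs) {
                return r;
            }
            if (now >= deadline) {
                return null;   // caller decides what to do without a vision fix
            }
            Thread.sleep(5);
        }
    }

    private double processOneFrame() {
        return 0.0;   // placeholder for grab + threshold + filter + angle math
    }
}

Starting the thread at Thread.MIN_PRIORITY (via setPriority) is what keeps it out of the way of the main robot code/communications task.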

Brian Selle 16-05-2012 18:18

Re: On Board Computer
 
Quote:

Originally Posted by Tom Bottiglieri (Post 1169874)
I'd like to elaborate on my last post a bit. We did run the vision code continuously, but the control loops only relied on one valid frame and the vision processing ran in a separate task with lower priority than the main robot code/communications task. The vision task slept enough to not cause particularly high CPU usage. When we were looking for a frame, we would use the last processed frame (if we got a good result within the last, say, 100ms) or wait for a fresh result to appear.

Just to make sure I understand this... on a separate low priority thread you were continuously grabbing the latest image, thresholding, filtering, calculating distance/angle/position then sleeping for a bit. Then whenever your shooter pressed the shoot button you would check to make sure a valid calculation was made within the last 100ms or so and then use the last calculated distance/angle/position to align the robot and spool up the shooter wheel?

If the image processing was only taking 100ms what was the advantage of running it continuously? Was the image quality degraded because the robot may have been still moving?

Tom Line 16-05-2012 18:19

Re: On Board Computer
 
Quote:

Originally Posted by Tom Bottiglieri (Post 1169874)
I'd like to elaborate on my last post a bit. We did run the vision code continuously, but the control loops only relied on one valid frame and the vision processing ran in a separate task with lower priority than the main robot code/communications task. The vision task slept enough to not cause particularly high CPU usage. When we were looking for a frame, we would use the last processed frame (if we got a good result within the last, say, 100ms) or wait for a fresh result to appear.

We did a bit of the opposite. Our vision acquisition and processing didn't run unless the 'fire' trigger was pulled. Then, once it gathered the first valid image and had a target, it saved those values to reference against our turret pot and shut down the vision code.

We'd see a temporary spike to 90% during the one frame acquisition and processing, but usually our code hovered around 70-75% cpu.
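
Roughly, the flow looks like this (a Java-style sketch with made-up names, not our actual code): nothing vision-related runs until the fire trigger is pulled, the first valid frame latches a setpoint referenced against the turret pot, and the vision pass then shuts itself back off.

Code:

class FireAndForgetAim {
    private boolean visionActive = false;
    private Double turretSetpoint = null;   // null until a target has been latched

    // Call this from the main loop every iteration.
    void update(boolean fireTriggerPulled) {
        if (fireTriggerPulled && turretSetpoint == null) {
            visionActive = true;                             // spin up vision only on demand
        }
        if (visionActive) {
            Double offset = tryProcessOneFrame();            // null if this frame had no valid target
            if (offset != null) {
                turretSetpoint = readTurretPot() + offset;   // reference the target against the turret pot
                visionActive = false;                        // done: shut the vision code back down
            }
        }
        if (!fireTriggerPulled) {
            turretSetpoint = null;                           // re-arm for the next shot
        }
    }

    Double getTurretSetpoint() {
        return turretSetpoint;                               // null means no target latched yet
    }

    private Double tryProcessOneFrame() { return 0.0; }      // placeholder for the CPU-heavy vision step
    private double readTurretPot()      { return 0.0; }      // placeholder for the turret potentiometer
}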

Tom Bottiglieri 16-05-2012 19:24

Re: On Board Computer
 
Quote:

Originally Posted by btslaser (Post 1169892)
Just to make sure I understand this... on a separate low priority thread you were continuously grabbing the latest image, thresholding, filtering, calculating distance/angle/position then sleeping for a bit. Then whenever your shooter pressed the shoot button you would check to make sure a valid calculation was made within the last 100ms or so and then use the last calculated distance/angle/position to align the robot and spool up the shooter wheel?

If the image processing was only taking 100ms what was the advantage of running it continuously? Was the image quality degraded because the robot may have been still moving?

Yes, for a few reasons.

First off, we didn't want to block the control code. While most control loops ran in their own threads, there were still some things that were 'close' to time dependent in the main thread. Also, doing this made it easier to debug as we could stop the robot at any point and look at what it thought about the target. This allowed us to use the vision system to align the robot for autonomous mode without any additional glue code. The cost of running it all the time vs. one shot at a time is pretty minimal, and this is just the way we chose to implement it.

Brian Selle 16-05-2012 23:20

Re: On Board Computer
 
Quote:

Originally Posted by Tom Bottiglieri (Post 1169909)
Yes, for a few reasons.

First off, we didn't want to block the control code. While most control loops ran in their own threads, there were still some things that were 'close' to time dependent in the main thread. Also, doing this made it easier to debug as we could stop the robot at any point and look at what it thought about the target. This allowed us to use the vision system to align the robot for autonomous mode without any additional glue code. The cost of running it all the time vs. one shot at a time is pretty minimal, and this is just the way we chose to implement it.

Thanks for the info. We are using the Java command-based framework and it seemed more natural to do the single-frame analysis when required, but I like the idea of not blocking and keeping everything running at a steady pace. We had one connection issue in our first practice match of the season that I'm pretty sure was because the cRIO spiked at 100% CPU during an image processing step. We cleaned up the code and made it more efficient and never had that issue again, but it was always in the back of my mind.

Kyler Hagler 17-05-2012 00:20

Re: On Board Computer
 
Quote:

Originally Posted by Tom Line (Post 1169860)
That's not necessarily true. Reconsider how you perform your image processing. It's perfectly reasonable to process a single frame to get all the information you need to lock on to the target and shoot accurately.

Vision processing does not necessarily mean processing every frame continuously at real-time speeds. That is a mistake many programmers make. In fact, most of the manufacturing vision systems on our production lines use the single-frame method.

My bad; btslaser knows more about this than I do. We did use the single-frame method instead of real-time processing.

RyanN 24-05-2012 04:26

Re: On Board Computer
 
We're about to give it a try. We found the cRIO and our driver station computer to be lacking; the network camera acquisition seems terribly inefficient, while a USB webcam uses much less CPU. Anyway, here's what we're building to use for a few years:

Intel Core i7-3770S Ivy Bridge 3.1GHz (3.9GHz Turbo) LGA 1155 65W Quad Core Desktop Processor Intel HD Graphics 4000
4GB DDR3 1600MHz G.SKILL Ripjaws Series
64GB Crucial M4 SSD
COOLER MASTER Hyper 212 Plus 120mm Heatsink (for 120mm fan compatibility)
Intel BOXDH77DF LGA 1155 Intel H77 HDMI SATA 6Gb/s USB 3.0 Mini ITX Intel Motherboard
M2-ATX Power Supply from Mini-box.com

I think we're going to attempt a custom-made enclosure unless someone has a lightweight mini-ITX case they would recommend.

This will also be our primary programming computer where we'll remote in to program the cRIO.

A bit overkill for everything, but it should last for a few years.

Phyrxes 24-05-2012 08:22

Re: On Board Computer
 
Every mini-ITX case I've looked at, even the ones advertised as aluminum, is still listed by its manufacturer at 2 to 3 pounds for the chassis. What kind of weight allowance are you looking for?

Question: Would it be more effective to get a tablet to handle the processing?

RyanN 24-05-2012 09:43

Re: On Board Computer
 
Quote:

Originally Posted by Phyrxes (Post 1171311)
Every mini-ITX case I've looked at, even the ones advertised as aluminum, is still listed by its manufacturer at 2 to 3 pounds for the chassis. What kind of weight allowance are you looking for?

Question: Would it be more effective to get a tablet to handle the processing?

I would be afraid of a tablet for a couple of reasons...

Performance-wise, they usually suck. To keep costs down, manufacturers cut back on core performance to make room for the touch screen and the associated hardware and hinges.

Even with that performance hit keeping costs down, they're still too expensive and fall well above the $400 limit for a single COTS item [R41]. By building your own PC, you can get every part (and really nice parts, might I add) for less than $400 each.

Find a decent-performing, sub-$400 tablet and I might be interested. My main goal is to keep the platform on Windows 7 and use LabVIEW (FRC version) to keep the learning curve down, while still giving us much more capability.

By having the PC on board the robot, not only can we offload vision processing, but we can also move some of our closed-loop controls to the PC to improve responsiveness, as long as we keep the network traffic below 100 Mbit/s (robot LAN), which I cannot see being an issue for a LONG time.
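
For the closed-loop offload piece, here's a rough sketch of what the robot-side network hop could look like. It's written Java-style just to show the shape (our actual implementation will be LabVIEW), and the host, port, and packet layout are invented for illustration: the robot streams a handful of sensor values over UDP each loop and reads back a motor command, with a short timeout so a missed packet never stalls the robot code.

Code:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

class OffboardLoopLink {
    private final DatagramSocket socket;
    private final InetAddress pcAddress;
    private final int port;

    OffboardLoopLink(String pcHost, int port) throws Exception {
        this.socket = new DatagramSocket();
        this.socket.setSoTimeout(20);              // don't stall the robot loop if the PC misses a beat
        this.pcAddress = InetAddress.getByName(pcHost);
        this.port = port;
    }

    // Send the latest sensor readings: 3 doubles = 24 bytes per loop iteration.
    void sendSensors(double gyroAngle, double encoderRate, double turretPot) throws Exception {
        ByteBuffer buf = ByteBuffer.allocate(24);
        buf.putDouble(gyroAngle).putDouble(encoderRate).putDouble(turretPot);
        socket.send(new DatagramPacket(buf.array(), buf.capacity(), pcAddress, port));
    }

    // Read back one motor command, falling back to a safe value on timeout.
    double receiveCommand(double fallback) {
        byte[] data = new byte[8];
        DatagramPacket packet = new DatagramPacket(data, data.length);
        try {
            socket.receive(packet);
            return ByteBuffer.wrap(data).getDouble();
        } catch (Exception e) {
            return fallback;                       // the robot should never depend on the PC answering
        }
    }
}

Even at a couple hundred updates per second that's only a few kilobytes each way, so the 100 Mbit link isn't remotely a bottleneck.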

