Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Demystifying autonomous... (http://www.chiefdelphi.com/forums/showthread.php?t=85096)

Joe Ross 09-04-2010 18:30

Re: Demystifying autonomous...
 
Quote:

Originally Posted by Ether (Post 950899)
I don't know for sure, but a few weeks back when a LabVIEW programmer was showing me the cRIO code that packages the Dashboard data it looked to me like the camera image was not being packed into the 50-element array but rather was being updated separately at a higher rate. Second hand info: I asked a team member who had seen the dashboard camera video and he said it looked to be updating a lot faster than once per second. This was with unmodified (default) 2010 FRC LabVIEW framework code.


You are correct that the camera image is handled completely separately from the dashboard data.

The default LabVIEW framework code updates the high-priority dashboard data in Robot Main (50 Hz). The low-priority dashboard data is updated at 2 Hz (IIRC). For an example of the high-priority data updating fast, look at the camera tracking information in the lower right corner of the dashboard.
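
The framework itself is graphical LabVIEW, so there is no text code to quote, but the pattern is just two send loops running at different rates. A minimal Python sketch of the same idea follows; the packet contents, function names, and send target are all made up for illustration and are not the framework's actual API.

Code:

import threading
import time

def send(packet):
    # Stand-in for "pack the cluster and hand it to the comms loop".
    print(packet)

def read_camera_tracking():
    return {"target_x": 0.0}        # stand-in for the real tracking result

def read_battery_voltage():
    return 12.3                     # stand-in for a slow background read

def high_priority_loop():
    while True:
        send({"kind": "high", "tracking": read_camera_tracking()})
        time.sleep(1 / 50)          # ~50 Hz, like the fast Robot Main loop

def low_priority_loop():
    while True:
        send({"kind": "low", "battery": read_battery_voltage()})
        time.sleep(1 / 2)           # ~2 Hz background data

threading.Thread(target=high_priority_loop, daemon=True).start()
threading.Thread(target=low_priority_loop, daemon=True).start()
time.sleep(2)                       # let the demo run briefly, then exit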

ideasrule 09-04-2010 18:40

Re: Demystifying autonomous...
 
Team 610 is another team that, after a lot of effort, got a real-time (as far as our eyes could tell) feed at 320x240 resolution. The drivers depended on it heavily at the beginning, but not so much after they got more experience aligning the robot with the goal.

Radical Pi 09-04-2010 18:50

Re: Demystifying autonomous...
 
Quote:

Originally Posted by Joe Ross (Post 951080)
You are correct that the camera is handled completely separately from the dashboard data.

The default LabVIEW framework code updates the high-priority dashboard data in Robot Main (50 Hz). The low-priority dashboard data is updated at 2 Hz (IIRC). For an example of the high-priority data updating fast, look at the camera tracking information in the lower right corner of the dashboard.

Actually, that is only the tracking data for the camera. The actual images are sent independently in a separate (UDP, I believe) stream that is started when the camera is first called in code. The dashboard packers have nothing to do with this stream.
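
In other words, the first use of the camera in robot code is what kicks off the image-forwarding task; the dashboard packing code never touches it. A rough Python sketch of that start-on-first-use idea follows; the class, thread, and helper names are hypothetical and are not the actual WPILib or LabVIEW implementation.

Code:

import threading
import time

class Camera:
    _stream_started = False
    _lock = threading.Lock()

    def __init__(self):
        # Start the background image-forwarding task exactly once,
        # the first time any Camera object is created.
        with Camera._lock:
            if not Camera._stream_started:
                Camera._stream_started = True
                threading.Thread(target=self._forward_images, daemon=True).start()

    def _forward_images(self):
        # Runs independently of the dashboard data packing.
        while True:
            frame = self._grab_jpeg()
            self._send_to_dashboard(frame)
            time.sleep(0.1)   # pace the sketch; a real grab would block on the camera

    def _grab_jpeg(self):
        return b""            # stand-in for reading one JPEG from the camera

    def _send_to_dashboard(self, frame):
        pass                  # stand-in for writing to the separate image stream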

apalrd 09-04-2010 19:01

Re: Demystifying autonomous...
 
We did some camera driving on our practice bot. We found that, while the image gets a good framerate, it is not close to real time. It appeared to be around 10 Hz video, but it was about a second behind reality. This fooled our driver into thinking it was actually updating in real time, which it was not (by the way, I noticed the same thing with the data, not the camera, on the Dashboard I wrote, which has no graphs).

After the driver (and I, since I was the only other one there and was having fun) got used to the delay, we were able to drive the practice bot through a door (without bumpers), even with the camera mounted far to the side of the bot and partially obstructed by the claw. We also noticed a lot of lag in the controls when using vision processing (find ellipse), but with just the camera feeding the dashboard it was fine. We were able to keep the robot aligned with a line on the floor (the edge of the road inside CTC) at full speed in high gear (around 12 ft/s) using only the camera, then shift to low gear and navigate a narrower hallway into a room, from a different room. It works quite well.

As to the original intent of this thread, I once taught a programming class at an FLL camp, and we played a game where two students sat back-to-back with identical bags of Legos; one student built something and described verbally to the other how to build it. This taught them how important good instructions are for good execution.

Chris Fultz 09-04-2010 19:05

Re: Demystifying autonomous...
 
Getting back to the OP, that is a great idea.

We did something similar and played a human-robots game of Breakaway, where each student was a robot with certain skills. It made everyone realize the value of different trades and also how small the field became with 2 or 3 robots in one zone.

I like the idea of blindfolds and then someone giving instructions to move the student around the field.

This is actually how I learned FORTRAN in one of my first programming classes. The professor decided to make a peanut butter cracker and eat it. We had to give him verbal instructions on what to do, and he did exactly what we said. Not what we meant, but what we actually said. I still remember the class. It made a good impression!

Greg McKaskle 10-04-2010 15:05

Re: Demystifying autonomous...
 
I'm a firm believer that programmers need to learn how to anthropomorphize the computer/robot. Don't go through life that way, but think of it like a pair of glasses or a hat you can put on when you want to see and think knowing only what the function or algorithm knows, or to identify exactly what it will need in order to function.

As for the camera and dashboard discussion: the upper rate on the dashboard data is 50 Hz, at about 1 KB per packet. The default framework doesn't read sensors and transmit them at that rate because it would load up the CPU reading a bunch of unallocated channels. I suspect that the framework will start to take advantage of the Open list of I/O to do a better job of this in the future.

Video back to the PC involves lots of elements, meaning that each must perform well, or the frame rate will drop and/or lag will be introduced.

Thinking it through, the camera doesn't introduce much lag, but be sure that it is told to acquire and send at a fast rate. The images are delivered on port two over TCP, then sent out over port one over TCP with a small header added for versioning. The issue I've seen with the cRIO is with the memory manager: big image buffers can be pretty slow to allocate. Keeping the image buffer below 16 KB gets rid of this bottleneck. Next in the chain is the bridge, then the router. I haven't seen issues with these elements, as they are special purpose and that is all they do.

Next is the dashboard computer. If the CPU gets loaded, the images will sit in the IP stack and be lagged by up to five seconds. The default dashboard unfortunately had two elements which were both invalidating the screen and causing drawing cost. The easiest fix is to hide the image info. I believe I've also seen lag introduced when lots of errors are being sent to the DS. With a faster computer, this wouldn't matter as much either.
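
For the dashboard end of that chain, the receive side boils down to a blocking TCP read of a small header followed by the JPEG payload. A rough Python sketch of that pattern follows; the robot address, port number, and header layout are all assumptions for illustration, not the documented FRC format.

Code:

import socket
import struct

ROBOT_IP = "10.0.0.2"    # hypothetical robot address
IMAGE_PORT = 1180        # assumed image-relay port, not confirmed here

def read_exact(sock, n):
    """Read exactly n bytes or raise if the stream closes."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("image stream closed")
        buf += chunk
    return buf

def frames():
    with socket.create_connection((ROBOT_IP, IMAGE_PORT)) as sock:
        while True:
            # Assumed header layout: 4-byte version, 4-byte payload length.
            version, length = struct.unpack(">II", read_exact(sock, 8))
            yield version, read_exact(sock, length)    # JPEG bytes

# Usage (run this loop off the UI thread):
# for version, jpeg in frames():
#     display(jpeg)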

As I mentioned, an issue at any link in the chain can drop the fps and introduce lag. If each of these is handled well, I believe you can get lag down to about 100 ms and frame rate above 25.

Greg McKaskle

gvarndell 11-04-2010 13:20

Re: Demystifying autonomous...
 
Quote:

Originally Posted by kamocat (Post 950815)
What I did to demonstrate the (un)usefulness of the camera is to close one eye and put your hands in a tube around the other eye, to give yourself that 30-degree view angle (±15 degrees).
What I didn't have them do is chop it to 10 frames per second (blinking continuously might work).

Certainly not if autistic children are around, but how about adding a strobe light in a dark room?
Wear your grandmother's glasses, put a patch over one eye, restrict the FOV on the other eye, and have a strobe going.
I would hazard a guess that most teens could fairly consistently catch a randomly tossed (soccer) ball under such conditions -- even if the strobe was off more than on.
I guess I would even predict that 6 kids could split into 2 alliances and play some decent soccer under these conditions.
They probably ought to wear helmets though :yikes:

Pondering how humans can perform such a feat might foster some appreciation for the fact that robot autonomy cannot be dependent upon _knowing_ everything all the time.
A robot that could, even very poorly, approximate our powers of prediction and our ability to fill in the blanks wrt our sensory inputs would be truly amazing.

JBotAlan 27-04-2010 16:49

Re: Demystifying autonomous...
 
Quote:

Originally Posted by gvarndell (Post 952116)
A robot that could, even very poorly, approximate our powers of prediction and our ability to fill in the blanks wrt our sensory inputs would be truly amazing.

This is exactly the kind of thing that is so hard to explain to non-believers. What they don't realize is that the robot is more literal than their 7-year-old boy cousin, and completely unable to do anything beyond some quick math.

I'm glad to see the response to this thread. If I put anything together, I'll pass it along on CD.

I am taking at least a year off of FIRST, though, so it may not be for the next little while.

slavik262 28-04-2010 10:17

Re: Demystifying autonomous...
 
Quote:

Originally Posted by Greg McKaskle (Post 951638)
If the CPU gets loaded, the images will sit in the IP stack and be lagged by up to five seconds. The default dashboard unfortunately had two elements which are both invalidating the screen and causing drawing cost. The easiest fix is to hide the image info. I believe I've also seen lag introduced when lots of errors are being sent to the DS. With a faster computer, this wouldn't matter as much either.

It was great talking to you in Atlanta about this. Does National Instruments have any thoughts on possibly using DirectX or OpenGL to render the video? Using the video dashboard I wrote (which copies the incoming frames directly onto a DirectX texture instead of rendering with GDI, and uses a separate thread for receiving images via Winsock), we were consistently getting 25+ frames per second on the field in Atlanta. I also distributed it to a few other teams, including Team 175, who were finalists in the Curie division and used it in all of their matches. Granted, I wasn't rendering anything but video on the dashboard, but with the combination of hardware-accelerated rendering and blocking networking I/O, I got CPU usage down to about 15-20% (as opposed to the default dashboard pegging the CPU at 100%).
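
That dashboard was C# with DirectX, but the structure described above is language-agnostic: one thread blocks on the socket, and only the newest frame is handed to the renderer, so a slow paint never backs up the network. A small Python sketch of that handoff pattern follows; the names and callbacks are illustrative only.

Code:

import threading

class LatestFrame:
    """Single-slot handoff: the renderer only ever sees the newest frame."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame        # overwrite: stale frames are dropped

    def take(self):
        with self._lock:
            frame, self._frame = self._frame, None
            return frame

latest = LatestFrame()

def receive_loop(read_next_jpeg):
    # Blocking reads keep CPU usage low: no polling, no sleeping.
    while True:
        latest.put(read_next_jpeg())

def render_loop(draw_frame):
    # The render side grabs whatever is newest and skips if nothing arrived.
    while True:
        frame = latest.take()
        if frame is not None:
            draw_frame(frame)          # e.g. upload to a texture and present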

Greg McKaskle 29-04-2010 20:34

Re: Demystifying autonomous...
 
It was good seeing what you've developed as well. I can't guarantee what is happening in the IMAQ group, but NI has had a series of vision displays since 1994. Periodically, they look at the options for doing the display of the different image types. The display shows 8-bit mono with a color table, 16-bit and floating-point monochrome, and true color, perhaps others I can't remember.

With most of the time being spent on the DS, I didn't pay enough attention to the default DB, and because of two indicators that were invalidating on opposite sides of the screen, most of the screen was being redrawn for each new image and chart update. I didn't have a Classmate at the time, but either fixing the chart overlap or hiding the Image Information display definitely dropped the CPU load. I can't tell you what rates the IMAQ display is capable of on the Classmate, but my assumption is that it is similar in speed to or faster than DirectX, or that is what they would already be using. If you are able to make a direct comparison and publish the data, I'll report the results to the developer in IMAQ.
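
To see why two small indicators in opposite corners can force nearly the whole window to repaint, note that a dirty region covering both of them is roughly their bounding box, which spans the screen. A quick back-of-the-envelope check in Python; all sizes here are made up, and treating the dirty region as a single bounding box is a simplification.

Code:

screen_w, screen_h = 1024, 576       # made-up dashboard window size

video = (0, 0, 320, 240)             # x, y, w, h of the image display, top-left
chart = (824, 426, 200, 150)         # a chart in the bottom-right corner

def union_area(a, b):
    """Area of the bounding box covering both rectangles."""
    x1 = min(a[0], b[0])
    y1 = min(a[1], b[1])
    x2 = max(a[0] + a[2], b[0] + b[2])
    y2 = max(a[1] + a[3], b[1] + b[3])
    return (x2 - x1) * (y2 - y1)

separate = video[2] * video[3] + chart[2] * chart[3]
combined = union_area(video, chart)

print(f"separate repaints: {separate} px")
print(f"one dirty box over both: {combined} px "
      f"({combined / (screen_w * screen_h):.0%} of the window)")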

Meanwhile, I'm glad you were able to make so much progress on your DB. It was impressive and I hope your team can take it even farther.

Greg McKaskle

byteit101 29-04-2010 20:49

Re: Demystifying autonomous...
 
I know my dashboard (ZomB) is similar in speed to what you were getting. Although I did not have a CPU or FPS indicator, I had about 5 other controls on the dashboard, and at one point I looked down at the image, realized that our camera was pointed at us, waved, and watched my hand in real time. (We got tipped on our side, video here: http://thecatattack.org/media/view/2596 (I wave at 1:25 and at the end).)
I had actually been noticing an interesting delay that built up between reboots: after about 6 hours of restarting the DS and DB (clearing FMS Locked), the UI of the DS and DB would lag by about 3-4 seconds in responding to mouse events, yet I was surprised that the video was still not laggy.
I would think the difference between DirectX, IMAQ, GDI/GDI+, and WPF is negligible unless some other process is hogging the CPU (like many charts).

