Demystifying autonomous...


JBotAlan
08-04-2010, 23:07
I just had a bit of a brain blast. It probably won't be so great grammatically or organizationally because I'm not feeling so hot right now, but wanted to write this down.

It is so difficult to explain to those uninitiated in programming why I can't "just make it go straight for 3 feet, then turn left". What they don't seem to get is how the robot actually sees the world.

So...

How about having a session before the build season in which we explore what life is like for a robot? Take each of the kids, put a blindfold on, and have them navigate through an obstacle course. Ask them what they did to keep on course, and then draw the parallel to different sensors on the robot.

Then you could proceed to write up steps of how to get through said course, after outlining which sensors would be necessary.

Maybe even give a good heads-up display of sensor values (SANS CAMERA) from the robot sitting around the corner. Give the controls over to a student and watch them figure out the shape of the course based on a few microswitches, an ultrasonic, and a gyro.
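
For the sensor readout part, something like this little Java sketch would do the job (assuming a few standard WPILib sensor classes; the channel numbers and the console print are just placeholders for whatever display you actually rig up):

import edu.wpi.first.wpilibj.DigitalInput;
import edu.wpi.first.wpilibj.Gyro;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.Ultrasonic;

// A bare-bones "what the robot sees" readout: no camera, just the raw numbers
// a blindfolded driver would have to navigate by.
public class SensorReadout {
    public static void run() {
        DigitalInput frontSwitch = new DigitalInput(1);   // microswitch; channel numbers are placeholders
        DigitalInput rearSwitch  = new DigitalInput(2);
        Ultrasonic rangeFinder   = new Ultrasonic(3, 4);  // ping channel, echo channel
        Gyro heading             = new Gyro(1);

        rangeFinder.setAutomaticMode(true);               // keep the ultrasonic pinging on its own

        while (true) {
            System.out.println("switches F/R: " + frontSwitch.get() + "/" + rearSwitch.get()
                    + "   range: " + rangeFinder.getRangeInches() + " in"
                    + "   heading: " + heading.getAngle() + " deg");
            Timer.delay(0.1);                             // 10 Hz is plenty for a human reading numbers
        }
    }
}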

It would require a bit of elbow grease on the programmer's part, but I think the effort would pay off.

Input? Anyone done something like this before?

Andrew Schreiber
08-04-2010, 23:20
Actually, sorta. I was working at an FLL camp where we were teaching programming concepts. We blindfolded one student and had another give them instructions. It really made them realize how blind their robots are.

Mark McLeod
08-04-2010, 23:20
I use pin-the-tail-on-the-donkey as a practical example.
Blindfolded, spun around, don't touch anything - now find the target in one shot.

gvarndell
09-04-2010, 09:55
How about having a session before the build season in which we explore what life is like for a robot? Take each of the kids, put a blindfold on, and have them navigate through an obstacle course. Ask them what they did to keep on course, and then draw the parallel to different sensors on the robot.


It's analytical, I love it.

Dkt01
09-04-2010, 10:29
This would be hilarious to watch, but it would also be a pretty good way to show what autonomous really is. The closest thing we did was at the beginning of the season, when we set up a field and walked around as robots. To simulate autonomous, we were blindfolded. Needless to say, everyone struggled with the "autonomous period", but your idea sounds like one of the best ways to simulate what the robot is thinking.

kamocat
09-04-2010, 12:55
What I did to demonstrate the (un)usefulness of the camera was to have them close one eye and put their hands in a tube around the other eye, to give themselves that 30 degree view angle (+/-15 degrees).
What I didn't have them do is chop it down to 10 frames per second (blinking continuously might work).

Anyways, it was effective at eliminating the wish to get it sent back to the dashboard, where it would only be updated once a second.

Ether
09-04-2010, 13:15
Anyways, it was effective at eliminating the wish to get [the camera video] sent back to the dashboard, where it would only be updated once a second.


I know that the other dashboard data gets updated only once per second, but is this true for the camera image also? I thought the camera image was not part of that 50-element array.


~

mcb
09-04-2010, 13:21
My team did this activity in the fall with the FLL team that we mentored. Not only did it help to explain programming but it also served as a team-building exercise. One kid, the "robot," put the blindfold on and another would direct them where they needed to go, acting as the code. It worked really well!

kamocat
09-04-2010, 13:51
I know that the other dashboard data gets updated only once per second, but is this true for the camera image also? I thought the camera image was not part of that 50-element array.


~
The dashboard uses "Get camera image on PC".
I can't find anywhere in the code that it *says* it takes 1000ms to happen, but I tested it, and I think that's how long it turned out to be. This may be on purpose, seeing as they were worried last year about the bandwidth it would incur.

AdamHeard
09-04-2010, 14:03
The dashboard uses "Get camera image on PC".
I can't find anywhere in the code that it *says* it takes 1000ms to happen, but I tested it, and I think that's how long it turned out to be. This may be on purpose, seeing as they were worried last year about the bandwidth it would incur.

I know teams were able to get the camera image to be nearly real time.

biojae
09-04-2010, 14:06
Anyways, it was effective at eliminating the wish to get it sent back to the dashboard, where it would only be updated once a second.

Is that only in LabVIEW?
I have seen the dashboard updating at ~50hz using C++ and Java.

And as far as the camera goes, once the graphs were taken off the dashboard, I have gotten near realtime images at ~15 fps.

Which dashboard are you using? A custom one, or the one that was set up with the updater?

Ether
09-04-2010, 14:17
The dashboard uses "Get camera image on PC".
I can't find anywhere in the code that it *says* it takes 1000ms to happen, but I tested it, and I think that's how long it turned out to be. This may be on purpose, seeing as they were worried last year about the bandwidth it would incur.

I don't know for sure, but a few weeks back when a LabVIEW programmer was showing me the cRIO code that packages the Dashboard data it looked to me like the camera image was not being packed into the 50-element array but rather was being updated separately at a higher rate. Second hand info: I asked a team member who had seen the dashboard camera video and he said it looked to be updating a lot faster than once per second. This was with unmodified (default) 2010 FRC LabVIEW framework code.

~

Vikesrock
09-04-2010, 14:23
The dashboard uses "Get camera image on PC".
I can't find anywhere in the code that it *says* it takes 1000ms to happen, but I tested it, and I think that's how long it turned out to be. This may be on purpose, seeing as they were worried last year about the bandwidth it would incur.

We definitely had a dashboard image that updated MUCH faster than this (LabVIEW on both robot and dashboard). Unfortunately our camera was slightly skewed relative to the robot, and fixing it was not a priority compared to other issues that cropped up at competition.

When I talked with Team 16's drivers at Midwest, they said they used the camera feed pretty heavily. Based on the success of that robot, I would say it is certainly possible to use the dashboard camera to good effect.

Radical Pi
09-04-2010, 14:52
We got a near-live feed from the camera after I rebuilt the dashboard without any graphs. The harder part was convincing mechanical to let me put it on the bot :P

biojae
09-04-2010, 15:04
We got a near-live feed from the camera after I rebuilt the dashboard without any graphs. The harder part was convincing mechanical to let me put it on the bot :P

So, they wouldn't let you put the dashboard on the robot? They have that much say in the software? :P

The camera on the other hand makes much more sense.

Joe Ross
09-04-2010, 18:30
I don't know for sure, but a few weeks back when a LabVIEW programmer was showing me the cRIO code that packages the Dashboard data it looked to me like the camera image was not being packed into the 50-element array but rather was being updated separately at a higher rate. Second hand info: I asked a team member who had seen the dashboard camera video and he said it looked to be updating a lot faster than once per second. This was with unmodified (default) 2010 FRC LabVIEW framework code.

~

You are correct that the camera image is handled completely separately from the dashboard data.

The default LabVIEW framework code updates the high priority dashboard data in Robot Main (50hz). The Low Priority dashboard data is updated at 2hz (IIRC). For an example of the high priority data updating fast, look at the camera tracking information in the lower right corner of the dashboard.
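
For the text-language teams, the effect is roughly this (a standalone sketch that just prints instead of packing real dashboard packets; it is not the actual framework code):

// Two-rate update scheme: one chunk of data refreshed every 20ms loop (~50Hz),
// another chunk refreshed only every 500ms (~2Hz).
public class TwoRateLoop {
    public static void main(String[] args) throws InterruptedException {
        long lastLowPriority = 0;
        while (true) {
            System.out.println("high priority: camera tracking data, etc.");   // every loop
            long now = System.currentTimeMillis();
            if (now - lastLowPriority >= 500) {
                System.out.println("low priority: everything else");           // ~2 Hz
                lastLowPriority = now;
            }
            Thread.sleep(20);   // one 50Hz robot loop
        }
    }
}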

ideasrule
09-04-2010, 18:40
Team 610 is another team that, after a lot of effort, got a real-time (as far as our eyes could tell) feed at 320x240 resolution. The drivers depended on it heavily at the beginning, but not so much after they got more experience in aligning the robot with the goal.

Radical Pi
09-04-2010, 18:50
You are correct that the camera is handled completely separately from the dashboard data.

The default LabVIEW framework code updates the high priority dashboard data in Robot Main (50hz). The Low Priority dashboard data is updated at 2hz (IIRC). For an example of the high priority data updating fast, look at the camera tracking information in the lower right corner of the dashboard.

Actually, that is only the tracking data for the camera. The actual images are sent independently in a (UDP, I believe) stream that is started when the camera is first called in code. The dashboard packers have nothing to do with this stream.

apalrd
09-04-2010, 19:01
We did some camera-driving on our practice bot. We found that, while the image gets a good framerate, it is not close to realtime. It appeared to be around 10 Hz video, but it was around 1 s behind reality. The smooth framerate fooled our driver into thinking the image was current, which it was not (by the way, I noticed the same thing with the data, not the camera, on the dashboard I wrote, which has no graphs). After the driver (and I, since I was the only other one there and was having fun) got used to the delay, we were able to drive the practice bot through a door (without bumpers), even with the camera mounted far to the side of the bot and partially obstructed by the claw.

We also noticed a lot of lag with the controls when using vision processing (find ellipse), but with just the camera to the dashboard it was fine. We were able to keep the robot aligned with a line on the floor (the edge of the road inside CTC) at full speed in high gear (around 12 ft/s) using only the camera, then shift to low and navigate a narrower hallway into a room, starting from a different room. It works quite well.

As to the original intent of this thread, I once taught a programming class at an FLL camp, and we played this game where we had two students, sitting back-to-back with identical bags of legos, and we had one student build something and describe vocally to the other how to build it. This taught them how important good instruction is for good execution.

Chris Fultz
09-04-2010, 19:05
Getting back to the OP, that is a great idea.

We did something close and played a human game of Breakaway, with each student acting as a robot with certain skills. It made everyone realize the value of different roles, and also how small the field becomes with 2 or 3 robots in one zone.

I like the idea of blindfolds and then someone giving instructions to move the student around the field.

This is actually how I learned FORTRAN in one of my first programming classes. The professor decided to make a peanut butter cracker and eat it. We had to give him verbal instructions on what to do, and he did exactly what we said. Not what we meant, but what we actually said. I still remember the class. It made a good impression!

Greg McKaskle
10-04-2010, 15:05
I'm a firm believer that programmers need to learn how to anthropomorphize the computer/robot. Don't go through life that way, but think of it like a pair of glasses or a hat you can put on when you want to see and think knowing only what the function or algorithm knows, or to identify exactly what it will need in order to function.

As for the camera and dashboard discussion. The upper rate on the dashboard data is 50Hz and about 1KB per packet. The default framework doesn't read sensors and transmit them at that rate because it would load up the CPU reading a bunch of unallocated channels. I suspect that the framework will start to take advantage of the Open list of I/O to do a better job of this in the future.
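
(For scale: 50 packets per second at about 1KB each works out to roughly 50KB/s, or around 400 kbit/s, before any camera video is counted.)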

Video back to the PC involves lots of elements, meaning that each must perform well, or the frame rate will drop and/or lag will be introduced. Thinking it through, the camera doesn't introduce much lag, but be sure that it is told to acquire and send at a fast rate. The images are delivered on port two over TCP, then sent out over port one over TCP with a small header added for versioning. The issue I've seen with the cRIO is with the memory manager. Big image buffers can be pretty slow to allocate. Keeping the image buffer below 16KB gets rid of this bottleneck.

Next in the chain is the bridge, then the router. I haven't seen issues with these elements as they are special purpose and that is all they do. Next is the dashboard computer. If the CPU gets loaded, the images will sit in the IP stack and be lagged by up to five seconds. The default dashboard unfortunately had two elements which are both invalidating the screen and causing drawing cost. The easiest fix is to hide the image info. I believe I've also seen lag introduced when lots of errors are being sent to the DS. With a faster computer, this wouldn't matter as much either.
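
To make that middle hop concrete, here is a rough Java sketch of the relay idea (the camera address, ports, and request URL are guesses for illustration only, and the small per-image version header the real framework adds is left out):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Rough sketch of the relay step: read the camera's MJPEG stream on one TCP
// connection and forward the bytes to the dashboard on another.
public class CameraRelaySketch {
    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(1181);               // placeholder dashboard port
             Socket dashboard = listener.accept();
             Socket camera = new Socket("192.168.0.90", 80)) {             // placeholder camera address

            // Ask the camera for its MJPEG stream (typical Axis-style URL; adjust as needed).
            camera.getOutputStream().write(
                    "GET /axis-cgi/mjpg/video.cgi HTTP/1.0\r\n\r\n".getBytes("US-ASCII"));

            InputStream in = camera.getInputStream();
            OutputStream out = dashboard.getOutputStream();
            byte[] buf = new byte[16 * 1024];   // small buffers, per the memory-manager note above
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);           // any slow hop in this chain shows up as lag
            }
        }
    }
}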

As I mentioned, an issue at any link in the chain can drop the fps and introduce lag. If each of these is handled well, I believe you can get lag down to about 100ms and frame rate above 25.

Greg McKaskle

gvarndell
11-04-2010, 13:20
What I did to demonstrate the (un)usefulness of the camera was to have them close one eye and put their hands in a tube around the other eye, to give themselves that 30 degree view angle (+/-15 degrees).
What I didn't have them do is chop it down to 10 frames per second (blinking continuously might work).

Certainly not if autistic children are around, but how about adding a strobe light in a dark room?
Wear your Grandmother's glasses, patch over one eye, restrict FOV on other eye, and have a strobe going.
I would hazard a guess that most teens could fairly consistently catch a randomly tossed (soccer) ball under such conditions -- even if the strobe was off more than on.
I guess I would even predict that 6 kids could split into 2 alliances and play some decent soccer under these conditions.
They probably ought to wear helmets though :yikes:

Pondering how humans can perform such a feat might foster some appreciation for the fact that robot autonomy cannot be dependent upon _knowing_ everything all the time.
A robot that could, even very poorly, approximate our powers of prediction and our ability to fill in the blanks wrt our sensory inputs would be truly amazing.

JBotAlan
27-04-2010, 16:49
A robot that could, even very poorly, approximate our powers of prediction and our ability to fill in the blanks wrt our sensory inputs would be truly amazing.

This is exactly the kind of thing that is so hard to explain to non-believers. What they don't realize is that the robot is more literal than their 7-year-old boy cousin, and completely unable to do anything beyond some quick math.

I'm glad to see the response to this thread. If I put anything together, I'll pass it along on CD.

I am taking at least a year off of FIRST, though, so it may not be for the next little while.

slavik262
28-04-2010, 10:17
If the CPU gets loaded, the images will sit in the IP stack and be lagged by up to five seconds. The default dashboard unfortunately had two elements which are both invalidating the screen and causing drawing cost. The easiest fix is to hide the image info. I believe I've also seen lag introduced when lots of errors are being sent to the DS. With a faster computer, this wouldn't matter as much either.

It was great talking to you in Atlanta about this. Does National Instruments have any thoughts on possibly using DirectX or OpenGL to render the video? Using the video dashboard I wrote (which copies the incoming frames directly onto a DirectX texture instead of using GDI to render and uses a separate thread for receiving images via Winsock), we were consistently getting 25+ frames per second on the field in Atlanta. I also distributed it to a few other teams, including Team 175 who were finalists in the Curie division and used it in all of their matches. Granted, I wasn't rendering anything else but video on the dashboard, but with the combination of hardware accelerated rendering and blocking networking I/O, I got CPU usage down to about 15-20% (as opposed to the default dashboard pegging the CPU at 100%).
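
For anyone curious, the receive side of that pattern looks roughly like this (sketched in Java rather than the C#/DirectX/Winsock the dashboard actually uses, with frames as raw JPEG byte arrays):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// One thread blocks on the socket and hands frames off; the render thread only
// ever draws the newest frame instead of queueing up stale ones.
public class FramePump {
    private final BlockingQueue<byte[]> latestFrame = new ArrayBlockingQueue<>(1);

    // Called from the network thread after a complete JPEG has been read off the socket.
    public void offerFrame(byte[] jpegBytes) {
        latestFrame.clear();            // drop an undrawn frame rather than fall behind
        latestFrame.offer(jpegBytes);
    }

    // Called from the render loop; blocks until a frame is available.
    public byte[] takeFrame() throws InterruptedException {
        return latestFrame.take();
    }
}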

Greg McKaskle
29-04-2010, 20:34
It was good seeing what you've developed as well. I can't guarantee what is happening in the IMAQ group, but NI has had a series of vision displays since 1994. Periodically, they look at the options for displaying the different image types. The display shows 8 bit mono with color table, 16 bit and floating point monochrome, and true color, and perhaps others I can't remember.

With most of the time being spent on the DS, I didn't pay enough attention to the default DB, and because of two indicators that were invalidating on opposite sides of the screen, most of the screen was being redrawn for each new image and chart update. I didn't have a Classmate at the time, but fixing either the chart overlap or hiding the Image Information display definitely dropped the CPU load. I can't tell you what rates the IMAQ display is capable of on the Classmate, but my assumption is that it is similar in speed to or faster than DirectX, or that is what they would already be using. If you are able to make a direct comparison and publish the data, I'll report the results to the developer in IMAQ.

Meanwhile, I'm glad you were able to make so much progress on your DB. It was impressive and I hope your team can take it even farther.

Greg McKaskle

byteit101
29-04-2010, 20:49
I know my dashboard (ZomB) is similar in speed to what you were getting. Although I did not have a CPU or FPS indicator, I had about 5 other controls on the dashboard, and at one point I looked down at the image, realized that our camera was pointed at us, and waved, and watched my hand in real time. (We got tipped on our side, video here: http://thecatattack.org/media/view/2596 (I wave at 1:25 and at the end) )
I had actually noticed an interesting delay that built up between reboots: after 6 hours of restarting the DS and DB (clearing FMS Locked), their UIs would lag by about 3-4 seconds in responding to mouse events, yet the video was surprisingly still not laggy.
I would think the difference between DirectX, IMAQ, GDI/GDI+, and WPF is negligible unless some other process is hogging CPU (like many charts).