#16
Re: Demystifying autonomous...
Quote:
The default LabVIEW framework code updates the high-priority dashboard data in Robot Main (50 Hz). The low-priority dashboard data is updated at 2 Hz (IIRC). For an example of the high-priority data updating fast, look at the camera tracking information in the lower right corner of the dashboard.

Last edited by Joe Ross : 09-04-2010 at 19:05.
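To make the two update rates concrete, here is a minimal sketch of the idea in Python (the actual framework is LabVIEW; the channel names, payload fields, and `send_packet` helper below are all hypothetical): one loop pushes time-critical data at roughly 50 Hz while another pushes slow status at 2 Hz.

```python
# Hypothetical two-rate telemetry sketch, NOT the real LabVIEW framework:
# channel names, payload fields, and transport are all made up.
import threading
import time

def send_packet(channel, payload):
    # Stand-in for the real dashboard transport.
    print(channel, payload)

def high_priority_loop(state):
    # ~50 Hz: data the drivers need with minimal lag (e.g. camera tracking).
    while True:
        send_packet("high", {"target_x": state["target_x"]})
        time.sleep(1 / 50)

def low_priority_loop(state):
    # ~2 Hz: slowly changing status that can tolerate staleness.
    while True:
        send_packet("low", {"battery_v": state["battery_v"]})
        time.sleep(1 / 2)

state = {"target_x": 0.0, "battery_v": 12.4}
threading.Thread(target=high_priority_loop, args=(state,), daemon=True).start()
threading.Thread(target=low_priority_loop, args=(state,), daemon=True).start()
time.sleep(2)  # let both loops run briefly for the demo
```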
#17
Re: Demystifying autonomous...
Team 610 is another team that, after a lot of effort, got a real-time (as far as our eyes could tell) feed at 320x240 resolution. The drivers depended on it heavily at the beginning, but much less once they got more experience aligning the robot with the goal.
#18
|
|||
|
|||
|
Re: Demystifying autonomous...
Quote:
#19
Re: Demystifying autonomous...
We did some camera-driving on our practice bot. We found that, while the image gets a good frame rate, it is not close to real time. The video appeared to be around 10 Hz, but it ran about a second behind reality. This fooled our driver into thinking it was actually updating at the speed it appeared to, which it was not. (I noticed the same thing on the Dashboard I wrote, with the data rather than the camera, and with no graphs.)

After the driver (and I, since I was the only other one there and was having fun) got used to the delay, we were able to drive the practice bot through a door (without bumpers), even with the camera mounted far to the side of the bot and partially obstructed by the claw.

We also noticed a lot of lag in the controls when using vision processing (find ellipse), but with just the camera streaming to the dashboard it was fine. We were able to keep the robot aligned with a line on the floor (the edge of the road inside CTC) at full speed in high gear (around 12 ft/s) using only the camera, then shift to low gear and navigate a narrower hallway into a room, starting from a different room. It works quite well.
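Since the frame rate looks fine while the latency does not, it is worth measuring the delay rather than eyeballing it. Below is a hedged glass-to-glass sketch using Python and OpenCV (an illustration, not what we actually ran in 2010): stamp each frame with the capture time, point the camera at the window showing its own feed, and subtract the stamp visible inside the picture from the newest stamp on screen.

```python
# Glass-to-glass latency probe (illustrative sketch, not our 2010 setup).
# Point the camera at this window; the difference between the newest
# on-screen stamp and the stamp visible inside the camera image is the
# end-to-end delay of the whole capture/encode/transmit/display chain.
import time
import cv2  # pip install opencv-python

cap = cv2.VideoCapture(0)  # 0 = first attached camera (hypothetical)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    stamp_ms = int(time.monotonic() * 1000) % 100000
    cv2.putText(frame, f"{stamp_ms:05d}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("latency probe", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```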
As to the original intent of this thread: I once taught a programming class at an FLL camp, and we played a game where two students sat back-to-back with identical bags of LEGOs. One student built something and verbally described to the other how to build it. This taught them how important good instructions are for good execution.
#20
Re: Demystifying autonomous...
Getting back to the OP, that is a great idea.
We did something close and played a human-bots game of Breakaway, where each student was a robot with certain skills. It made everyone realize the value of different trades, and also how small the field becomes with two or three robots in one zone. I like the idea of blindfolds, with someone giving instructions to move the student around the field.

This is actually how I learned FORTRAN in one of my first programming classes. The professor decided to make a peanut butter cracker and eat it. We had to give him verbal instructions on what to do, and he did exactly what we said. Not what we meant, but what we actually said. I still remember the class. It made a good impression!
#21
Re: Demystifying autonomous...
I'm a firm believer that programmers need to learn how to put themselves in the place of the computer/robot. Don't go through life that way, but think of it like a pair of glasses or a hat you can put on when you want to see and think knowing only what the function or algorithm knows, or to identify exactly what it will need in order to function.
As for the camera and dashboard discussion: the upper rate on the dashboard data is 50 Hz at about 1 KB per packet. The default framework doesn't read sensors and transmit them at that rate because it would load up the CPU reading a bunch of unallocated channels. I suspect that the framework will start to take advantage of the Open list of I/O to do a better job of this in the future.

Video back to the PC involves lots of elements, and each must perform well or the frame rate will drop and/or lag will be introduced. Thinking it through: the camera doesn't introduce much lag, but be sure it is told to acquire and send at a fast rate. The images are delivered on port two over TCP, then sent out over port one over TCP with a small header added for versioning. The issue I've seen with the cRIO is with the memory manager; big image buffers can be pretty slow to allocate, and keeping the image buffer below 16 KB gets rid of this bottleneck.

Next in the chain are the bridge and the router. I haven't seen issues with these elements, as they are special purpose and that is all they do. Next is the dashboard computer. If the CPU gets loaded, the images will sit in the IP stack and be lagged by up to five seconds. The default dashboard unfortunately has two elements that both invalidate the screen and cause drawing cost; the easiest fix is to hide the image info. I believe I've also seen lag introduced when lots of errors are being sent to the DS. With a faster computer, this wouldn't matter as much either.

As I mentioned, an issue at any link in the chain can drop the fps and introduce lag. If each of these is handled well, I believe you can get lag down to about 100 ms and the frame rate above 25.

Greg McKaskle
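As a rough illustration of the relay step described above (images arriving on one TCP port and going back out on another with a small version header), here is a hedged Python sketch. The port numbers, header layout, and chunked framing are assumptions for illustration, not the actual cRIO implementation.

```python
# Hypothetical sketch of the image relay step: accept camera data on one
# TCP port, prepend a small version header, and forward it on another.
# Ports, header format, and framing are assumptions, not the real protocol.
import socket
import struct

IN_PORT, OUT_PORT = 9002, 9001   # placeholder "port two" / "port one"
HEADER = struct.Struct("<HI")    # (version, payload length) - made-up layout

def relay():
    srv_in = socket.create_server(("0.0.0.0", IN_PORT))
    srv_out = socket.create_server(("0.0.0.0", OUT_PORT))
    cam, _ = srv_in.accept()     # camera-side connection
    dash, _ = srv_out.accept()   # dashboard-side connection
    while True:
        # Keep reads small: huge image buffers were the slow-allocation
        # trap in the cRIO memory manager mentioned above.
        chunk = cam.recv(16 * 1024)
        if not chunk:
            break
        dash.sendall(HEADER.pack(1, len(chunk)) + chunk)

if __name__ == "__main__":
    relay()
```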
#22
Re: Demystifying autonomous...
Quote:
Wear your grandmother's glasses, put a patch over one eye, restrict the FOV of the other eye, and have a strobe going. I would hazard a guess that most teens could fairly consistently catch a randomly tossed (soccer) ball under such conditions, even if the strobe was off more than on. I would even predict that six kids could split into two alliances and play some decent soccer under these conditions. They probably ought to wear helmets, though.

Pondering how humans can perform such a feat might foster some appreciation for the fact that robot autonomy cannot depend on _knowing_ everything all the time. A robot that could, even very poorly, approximate our powers of prediction and our ability to fill in the blanks with respect to our sensory inputs would be truly amazing.
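To make "filling in the blanks" concrete, here is a toy Python sketch (my own illustration, with made-up numbers, not anything from a real robot): between sparse "strobe flash" measurements, a robot can dead-reckon the ball with a constant-velocity guess, much as we extrapolate motion while our eyes are effectively off.

```python
# Toy example of "filling in the blanks": dead-reckon a ball's position
# between sparse measurements (the "strobe flashes") assuming constant
# velocity. All numbers are made up for illustration.
def predict(last_pos, velocity, dt):
    # Constant-velocity extrapolation between measurements.
    return last_pos + velocity * dt

# Sparse measurements: (time_s, position_m), i.e. the strobe flashes.
flashes = [(0.0, 0.0), (0.5, 2.0), (1.0, 4.1)]

(t0, p0), (t1, p1) = flashes[-2], flashes[-1]
velocity = (p1 - p0) / (t1 - t0)  # estimated from the last two flashes

for dt in (0.1, 0.2, 0.3):  # the "dark" interval after the last flash
    print(f"t={t1 + dt:.1f}s predicted position {predict(p1, velocity, dt):.2f} m")
```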
#23
Re: Demystifying autonomous...
Quote:
I'm glad to see the response to this thread. If I put anything together, I'll pass it along on CD. I am taking at least a year off of FIRST, though, so it may not be for a while.
#24
Re: Demystifying autonomous...
Quote:
Last edited by slavik262 : 28-04-2010 at 10:21.
#25
Re: Demystifying autonomous...
It was good seeing what you've developed as well. I can't guarantee what is happening in the IMAQ group, but NI has had a series of vision displays since 1994. Periodically, they look at the options for displaying the different image types. The display supports 8-bit mono with a color table, 16-bit and floating-point monochrome, and true color, and perhaps others I can't remember.

With most of my time being spent on the DS, I didn't pay enough attention to the default DB, and because of two indicators that were invalidating on opposite sides of the screen, most of the screen was being redrawn for each new image and chart update. I didn't have a classmate at the time, but fixing the chart overlap or hiding the Image Information display definitely dropped the CPU load.

I can't tell you what rates the IMAQ display is capable of on the classmate, but my assumption is that it is similar in speed to DirectX or faster, or that is what they would already be using. If you are able to make a direct comparison and publish the data, I'll report the results to the developers in IMAQ. Meanwhile, I'm glad you were able to make so much progress on your DB. It was impressive, and I hope your team can take it even farther.

Greg McKaskle
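A toy model of why two far-apart indicators are so expensive: if a renderer repaints the bounding box of all its dirty regions, two small widgets on opposite sides of the screen force a near-full-screen redraw. The rectangle math below is my own illustration with made-up coordinates, not NI's actual invalidation logic.

```python
# Toy model of dirty-rectangle invalidation: repainting the bounding box
# of all dirty regions. Coordinates are made up; this is just the
# geometry that makes two far-apart indicators expensive.
def union(a, b):
    # Rectangles are (x0, y0, x1, y1).
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def area(r):
    return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

screen = (0, 0, 1024, 600)    # classmate-era resolution
chart  = (20, 500, 220, 580)  # small chart, bottom left (hypothetical)
info   = (820, 20, 1004, 100) # image info box, top right (hypothetical)

dirty = union(chart, info)
print(f"widgets cover {100 * (area(chart) + area(info)) / area(screen):.1f}% "
      f"of the screen, but the union repaint covers "
      f"{100 * area(dirty) / area(screen):.1f}%")
```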
#26
Re: Demystifying autonomous...
I know my dashboard (ZomB) is similar in speed to what you were getting. Although I did not have a CPU or FPS indicator, I had about five other controls on the dashboard, and at one point I looked down at the image, realized that our camera was pointed at us, waved, and watched my hand in real time. (We got tipped on our side; video here: http://thecatattack.org/media/view/2596 . I wave at 1:25 and at the end.)

I had actually noticed an interesting delay that built up between reboots: after six hours of restarting the DS and DB (clearing FMS Locked), their UIs lagged about 3-4 seconds behind mouse events, yet I was surprised that the video was still not laggy. I would think the difference between DirectX, IMAQ, GDI/GDI+, and WPF is negligible unless some other process is hogging the CPU (like many charts).