Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Programmers: I Have A Challenge For You (http://www.chiefdelphi.com/forums/showthread.php?t=84797)

Kevin Watson 04-04-2010 01:33

Re: Programmers: I Have A Challenge For You
 
Quote:

Originally Posted by Boydean (Post 947674)
I would say about 20ish feet.

Without any vision-based autonomy running, the rovers might be able to move about nine feet. With vision-based hazard avoidance and path planning turned on, each rover would be hard pressed to move 20 cm, or about 8 inches, in that same amount of time. This is a very, very tough problem to solve without some very serious hardware and software. It might be more rewarding to solve smaller problems before tackling something like this. One example might be a vision-based ball detector that could properly align the 'bot before kicking.

-Kevin

Boydean 04-04-2010 02:00

Re: Programmers: I Have A Challenge For You
 
Quote:

Originally Posted by Kevin Watson (Post 947683)
...With vision-based hazard avoidance and path planning turned on, each rover would be hard pressed to move 20 cm, or about 8 inches, in that same amount of time...

-Kevin

You just blew my mind. :eek:

ideasrule 04-04-2010 03:10

Re: Programmers: I Have A Challenge For You
 
Quote:

Originally Posted by Kevin Watson (Post 947683)
Without any vision-based autonomy running, the rovers might be able to move about nine feet. With vision-based hazard avoidance and path planning turned on, each rover would be hard pressed to move 20 cm, or about 8 inches, in that same amount of time.

The MERs have a top speed (mechanically limited) of 5 cm/s, but they stop for 10 seconds every 20 seconds for the vision processing to complete, for an average speed of about 1 cm/s. That's 120 cm in 2 minutes, which is really bad, but more than 20 cm. This is one of the strongest arguments for human space exploration: a four-year-old child on Mars' surface could accomplish more science and cover more ground in ten minutes than the rovers can in a day.

Quote:

This is a very, very tough problem to solve without some very serious hardware and software. It might be more rewarding to solve smaller problems before tackling something like this. One example might be a vision-based ball detector that could properly align the 'bot before kicking.

-Kevin
Note that the hardware on board the MERs is extremely limited compared to computers on Earth, even by 2004 standards, because MER hardware must be radiation-hardened. The processor runs at 20 MHz, one-twentieth the clock rate of the cRIO.

gvarndell 04-04-2010 11:27

Re: Programmers: I Have A Challenge For You
 
Quote:

Originally Posted by Kevin Watson (Post 947683)
Without any vision-based autonomy running, the rovers might be able to move about nine feet. With vision-based hazard avoidance and path planning turned on, each rover would be hard pressed to move 20 cm, or about 8 inches, in that same amount of time. This is a very, very tough problem to solve without some very serious hardware and software. It might be more rewarding to solve smaller problems before tackling something like this. One example might be a vision-based ball detector that could properly align the 'bot before kicking.

-Kevin

This is the essence of what I've been trying to get across in several previous posts to this thread -- the point just seems to land with a thud.

If you had realtime vision that could identify and track numerous objects out to 100 feet and down to roughly 1 cubic foot in size, and the information about those objects was streaming into your robot control system via a simple socket connection, *** what would you do with the data??? ***.

You can fairly easily synthesize a virtual playing field, populate it with virtual allies, opponents, fixed obstacles, and game pieces.
You can animate this virtual world and provide well defined information about what's going on in it to your autonomy functions.
You can then start *really* exploring the difficulties of implementing autonomous behavior in software.

Or, you can pipe-dream about strapping more computers onto your robot in hopes of solving.... what?

davidthefat 04-04-2010 11:47

Re: Programmers: I Have A Challenge For You
 
Quote:

Originally Posted by Kevin Watson (Post 947683)
Without any vision-based autonomy running, the rovers might be able to move about nine feet. With vision-based hazard avoidance and path planning turned on, each rover would be hard pressed to move 20 cm, or about 8 inches, in that same amount of time. This is a very, very tough problem to solve without some very serious hardware and software. It might be more rewarding to solve smaller problems before tackling something like this. One example might be a vision-based ball detector that could properly align the 'bot before kicking.

-Kevin

:ahh: It's La Cañada... Well, I think CV will be able to pull it off before you guys (no hate). BTW, good job on this year's football team; you guys rocked, and it was a close game.

BTW, the way I am thinking of doing it is obviously with smaller functions all run in succession, like:
Code:

/*
 * This is rough sketch code, made up on the spot.
 * I am aiming to keep the main cpp file to a minimum of code;
 * all the code would be in the individual functions to allow reuse.
 * That means you can use it in teleop mode too, like what you said.
 */
while (isAuto)
{
    CheckEnvirn();
    if (CheckObject())
    {
        GetObject();
        DoSomething();
    }
    else
    {
        Move();
    }
}


Chris27 04-04-2010 12:01

Re: Programmers: I Have A Challenge For You
 
I would not issue motion commands and do vision processing on the same thread of execution. Vision processing takes a long time, and you usually want to issue or modify motion commands at short, regular intervals.

If you want a model to base code off of, check out Tekkotsu
http://www.tekkotsu.org/
http://www.tekkotsu.org/dox/

davidthefat 04-04-2010 12:03

Re: Programmers: I Have A Challenge For You
 
Quote:

Originally Posted by Chris27 (Post 947773)
I would not issue motion commands and do vision processing on the same thread of execution. Vision processing takes a long time and usually you would like to issue/modify motion commands on very regular small intervals.

If you want a model to base code off of, check out Tekkotsu
http://www.tekkotsu.org/
http://www.tekkotsu.org/dox/

I was planning on doing the vision stuff on a board that I will make (as soon as I order the chip...) with an ATmega1284P chip (PDIP package), so it takes the load off the cRIO.


Apparently the ATmega1284P chip is rated for industrial use... It must be hardcore.

The Lucas 04-04-2010 12:49

Re: Programmers: I Have A Challenge For You
 
If any of the control system higher ups are still monitoring this thread: Are we going to be allowed to send packets to alliance robots next year?

When the control system was introduced two years ago, there was talk about communication between alliance robots, and new features of this system have been rolled out every year (CAN and the vision feed are new this year). Is alliance communication still in the plans?

mwtidd 04-04-2010 13:47

Re: Programmers: I Have A Challenge For You
 
Quote:

Originally Posted by The Lucas (Post 947790)
If any of the control system higher ups are still monitoring this thread: Are we going to be allowed to send packets to alliance robots next year?

Is it possible? Yes. Is it allowed? Not yet.

I spoke with Brad Miller (WPILib) about robot-to-robot communication a couple of weeks ago. From his response, it didn't seem like something FIRST is considering right now. Seeing as the ZigBee module is a legal device except for the price tag (the cheapest one is $700), I think the best way to get something like this is for FIRST to exempt the ZigBee module from the price restriction, or, even better, to register it as a KOP item on the bill of materials.

http://shop.sea-gmbh.com/crio-produk...-modul-10.html

It would also be nice if they used a localization system like the StarGazer for target recognition. I think it would be much easier on the cRIO to use this sensor rather than the camera. It could also possibly be used for robot identification.

http://www.robotshop.com/hagisonic-s...-system-1.html

Kevin Watson 04-04-2010 14:02

Re: Programmers: I Have A Challenge For You
 
Quote:

Originally Posted by ideasrule (Post 947699)
...but they stop for 10 seconds every 20 seconds for the vision processing to complete.

I'm not sure where your numbers are from, but each rover drives 10 cm and then stops while calculating the next arc to drive. The stereo image-to-range-map calculations alone take about 50 seconds.

-Kevin

AustinSchuh 04-04-2010 15:49

Re: Programmers: I Have A Challenge For You
 
Quote:

Originally Posted by davidthefat (Post 947774)
I was planning on doing the vision stuff on a board that I will make (as soon as I order the chip...)with ATmega1284p chip (pdip package) so it takes the load off the crio

apparently ATmega1284p chip is industrial use status... It must be hard core

I would be very impressed if you were able to do much vision with the ATmega. It's a 20 MIPS CPU[1]. The CPU in the cRIO is a Freescale MPC5200 that runs at 750 MIPS at 400 MHz and includes a floating-point unit. So the ATmega adds only about 2% of the cRIO's CPU power. And if you need to do any floating point on the ATmega, the cRIO would outperform it by an even larger margin on the same job. None of that even takes into account the extra hardware some CPUs need just to get the picture into memory to operate on.

The ATmega is typically used in industry when someone needs a cheap CPU with a fair amount of performance and very low power consumption.

I'm not saying that the ATmega is a bad CPU; I'm just saying it isn't a good fit for the job you are trying to offload to it. If you do get an ATmega and start to program it, you will learn a lot about how embedded systems are put together. And while that may not be the lesson you are looking for, it's definitely pretty cool to learn how that kind of stuff works.

Something like the BeagleBoard or Gumstix that were posted earlier is quite a bit faster. They both use OMAP3-series CPUs. The BeagleBoard clocks in at 600 MHz and 1200 MIPS, and the Gumstix uses the same CPU. Both of those also have DSPs, which will let you do even more computation: the DSP itself clocks in at 500 MHz and 4000 MIPS, and that's on top of the 1200 MIPS from the CPU.

[1] MIPS stands for millions of instructions per second. Since each instruction on different architectures does a different amount of work, it's still comparing apples to oranges, but it's a lot better than comparing MHz, given all the fun stuff you can do with superscalar CPUs.

Kevin Watson 04-04-2010 17:15

Re: Programmers: I Have A Challenge For You
 
Quote:

Originally Posted by AustinSchuh (Post 947857)
Something like the BeagleBoard or Gumstix that were posted earlier is quite a bit faster. They both use OMAP3-series CPUs. The BeagleBoard clocks in at 600 MHz and 1200 MIPS. The Gumstix uses the same CPU. Both of those also have DSPs, which will let you do even more computation. The DSP itself clocks in at 500 MHz and 4000 MIPS, and that's on top of the 1200 MIPS from the CPU.

If I were to attack this problem, I'd start with an Intel Atom 330 with NVIDIA ION GPU and then start reading up on SIMD, CUDA and OpenCL. Assuming stereo vision is used, anyone who wants to attempt this would need to read up and fully comprehend algorithms like SLAM (Simultaneous Localization and Mapping) for localization, scale and rotation invariant model matching for identification and tracking of moving objects, and D* (pronounced "dee star") for path planning. I'd be happy to provide some guidance to teams seriously interested in implementing any kind of autonomy.
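For a feel of the path-planning piece Kevin names: D* is essentially an incremental variant of A* that repairs its solution as the robot senses new obstacles, so plain A* over an occupancy grid is the natural first exercise. Here is a hedged C++ sketch; the grid, unit step costs, and the function name are all illustrative, not from any post in this thread.

```cpp
#include <climits>
#include <cstdlib>
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

// 4-connected A* over a small occupancy grid (1 = obstacle).
// Returns the number of steps from start to goal, or -1 if unreachable.
// D* solves the same shortest-path problem but repairs the answer
// incrementally as new obstacles are discovered, instead of replanning
// from scratch.
int aStarSteps(const std::vector<std::vector<int>>& grid,
               int sr, int sc, int gr, int gc) {
    const int rows = grid.size(), cols = grid[0].size();
    // Manhattan distance: admissible for unit-cost 4-connected motion.
    auto h = [&](int r, int c) { return std::abs(r - gr) + std::abs(c - gc); };
    std::vector<std::vector<int>> gCost(rows, std::vector<int>(cols, INT_MAX));
    using Node = std::tuple<int, int, int>;  // (f = g + h, row, col)
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
    gCost[sr][sc] = 0;
    open.emplace(h(sr, sc), sr, sc);
    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!open.empty()) {
        auto [f, r, c] = open.top();
        open.pop();
        if (r == gr && c == gc) return gCost[r][c];
        if (f - h(r, c) > gCost[r][c]) continue;  // stale queue entry
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
            if (grid[nr][nc] == 1) continue;      // blocked cell
            int ng = gCost[r][c] + 1;
            if (ng < gCost[nr][nc]) {
                gCost[nr][nc] = ng;
                open.emplace(ng + h(nr, nc), nr, nc);
            }
        }
    }
    return -1;  // goal unreachable
}
```

Once this is comfortable, the jump to D* is mostly about maintaining the open list across map updates rather than rebuilding it.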

-Kevin

mwtidd 04-04-2010 18:52

Re: Programmers: I Have A Challenge For You
 
ICRA Robotic Planetary Contingency Challenge...

The goal is to program and build a robot for an unknown task that you receive at the event.

It may be another good goal for teams considering a fully autonomous robot.

http://modlabupenn.org/icra/icra-2008/

Robototes2412 04-04-2010 22:14

Re: Programmers: I Have A Challenge For You
 
Actually, I wrote a method that would manually (no gyro) align a robot to a target using a mecanum drive. Sadly, I lost it. :(

EDIT:

What it did was get the target radius (size), target x position, and target y position. It then turned the robot to match up and strafed accordingly. It was slow and it was ugly, but it worked. Until I lost it, that is.

kamocat 05-04-2010 14:00

Re: Programmers: I Have A Challenge For You
 
I've been thinking about full-autonomous since 2008.
My approach is through high-level functions, field awareness, and inter-robot communication.
Our team has done some work on all three of those; however, it all tends to get bogged down by lack of testing.
For high-level functions, we have forward(ft), turn(deg), strafe(ft), and kick(ms).
For field-awareness, I have an algorithm for detecting the soccer balls on the green carpet when in view of the camera.
For inter-robot communication, I was planning on using an ultrasonic signal generated by the cRIO from the digital sidecar, but I found I could only generate a 3 kHz signal. I may have to resort to using more hardware (making it more expensive for a sizable number of teams to implement). Modulated IR is still an option.


I'll stand up on my soapbox for a moment to mention a couple of ways that FIRST could further encourage autonomous:
make autonomous 30s, and put it at the END of the match
OR
make autonomous a necessary part of the match (e.g., make autonomous/teleop determined by where the robot is on the field, so that in certain essential parts of the field, robots must be in autonomous)
OR
encourage a method of communication BETWEEN robots, so that they can be more field-aware
OR
use RFID so the robots can tell when they are in a certain region
OR
broadcast beacons (modulated IR?) that the robots can triangulate off of
OR
make the game piece stand out and be easily acquired by a camera or other common sensor
OR
provide an objective in autonomous that can ONLY be completed in autonomous. (For example, something that allows the robots to complete a finale objective)


All times are GMT -5.

Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
Copyright © Chief Delphi