pic: Behind BLUE EYES



=D

Is the Axis going to have a high enough resolution to do anything with that?

Love the idea…reminds me of a certain camera rig we are prototyping in our shop.

Interesting to see how it goes.

Matthew

Is the Axis going to have a high enough resolution to do anything with that?

In comparison with a Sony Cybershot, the answer is: No!

But if you consider the field environment this year, you'll see that the resolution doesn't need to be very high, because the idea is to locate white balls on a green floor!
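For anyone curious, here is a rough sketch of that kind of white-on-green detection. This is just an illustration of the idea in Python/OpenCV, not the team's actual Dashboard/NI Vision pipeline, and the thresholds are guesses that would need tuning on the real Axis feed:

```python
import cv2
import numpy as np

def find_balls(frame_bgr):
    """Return (x, y, radius) for each white-ish blob in a BGR frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # White: any hue, low saturation, high value. Thresholds are placeholders.
    mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # OpenCV 4 signature: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    balls = []
    for c in contours:
        if cv2.contourArea(c) < 50:           # skip specks and carpet glare
            continue
        (x, y), r = cv2.minEnclosingCircle(c)
        balls.append((int(x), int(y), int(r)))
    return balls
```

The point is that a simple color threshold is enough for white balls on green carpet, which is why the camera resolution matters less here.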

That’d be a pretty neat system to use for autonomous. It’ll be interesting to see how well it works.

What ya’ll using for the spherical mirror? I imagine a Christmas tree ball ornament might do well. :stuck_out_tongue:

-Tanner

That’d be a pretty neat system to use for autonomous. It’ll be interesting to see how well it works.

What ya’ll using for the spherical mirror? I imagine a Christmas tree ball ornament might do well.

-Tanner

Well, we tried many things to validate the concept, such as a bean ladle, a pan lid, and security mirrors, and in the end we will build our own mirror out of polished aluminum, and it will be hyperbolic!!!

At the moment we are developing the camera algorithm to detect the balls, and we won't use this system in the autonomous period. We will use it in teleoperated mode to give our driver good vision when he plays in the Far Zone!

Cool. Very neat.

That works too. I could see that being very useful as it would end up being a type of radar for them. You could also probably do some fun things with it during autonomous.

-Tanner

Do you have any plans for detecting the target? I think it would be a very interesting project to try to scale the vision processing along the curvature functions of the mirror. Reminds me a bit of the RoboCup middle-size vision systems used for exactly the same thing :smiley: Good luck, and keep us posted on how it goes!
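To illustrate what that scaling could look like (purely hypothetical, not something the team has described): assuming the far field maps toward the rim of the mirror image, distant balls project smaller there, so a blob-size threshold could shrink with radius from the image center.

```python
import math

def min_area_at(x, y, cx, cy, r_rim, area_center=400.0, area_rim=40.0):
    """Minimum blob area to accept for a detection at image position (x, y)."""
    r = min(math.hypot(x - cx, y - cy), r_rim)
    t = r / r_rim                       # 0 at the center, 1 at the rim
    # Linear falloff is a placeholder; a real profile would come from
    # calibrating the actual mirror's projection.
    return area_center + t * (area_rim - area_center)
```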

Do you have any plans for detecting the target? I think it would be a very interesting project to try to scale the vision processing along the curvature functions of the mirror.

Detecting the target is relatively easy; the main trouble is telling the cRIO how far it is from the robot.

We think that the image processing needed to transform the circular image into a flat image and get the target's location would make our live stream lose quality (you would need to reorganize the image pixels based on the quadrants of the circle).
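For reference, the "reorganize the pixels" step they describe is essentially a polar-to-rectangular remap. A minimal sketch (Python/OpenCV, with made-up calibration values; not the team's code) would be:

```python
import cv2
import numpy as np

def unwrap(frame, cx, cy, r_inner, r_outer, out_w=720, out_h=120):
    """Map the ring between r_inner and r_outer around (cx, cy) to a flat strip."""
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_inner, r_outer, out_h)
    # For every output pixel, compute which source pixel it comes from.
    map_x = (cx + np.outer(radii, np.cos(thetas))).astype(np.float32)
    map_y = (cy + np.outer(radii, np.sin(thetas))).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
```

Doing a remap like this on every frame is exactly the extra per-pixel work they are trying to avoid on the live stream.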

Today we have less than 0.5 s of lag (we improved the Dashboard processing for this), and if we had to calculate the angle between the robot and the target to give the robot its location, this lag would grow.
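If they ever did want that angle, the bearing alone is cheap to compute from the omnidirectional image, since direction around the mirror's center is preserved. A tiny sketch (again hypothetical, with placeholder center coordinates):

```python
import math

def bearing_deg(x, y, cx, cy):
    # (cx, cy) is the mirror's center in the image; the zero direction and
    # sign convention depend on how the camera and mirror are mounted.
    return math.degrees(math.atan2(y - cy, x - cx))
```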

So we don’t pretend to use the camera by the spherical mirror for any automatic control.
Our main purpose is provide a good vision to our drive and indicate completely what is an ball on image!!!

Today we are implementing the mechanism that will raise and lower the mirror so we can go through the tunnel!
I’ll post the news!

Thanks for the attention!