Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   General Forum (http://www.chiefdelphi.com/forums/forumdisplay.php?f=16)
-   -   Aerial Camera for FIRST matches (http://www.chiefdelphi.com/forums/showthread.php?t=130247)

faust1706 08-08-2014 16:28

Aerial Camera for FIRST matches
 
If FIRST would add an aerial camera and give the feed to each team playing, I believe it would greatly increase the level of competition, given that teams actually use it. My team has been working on the software for an autonomous robot for 2 years now, and this problem could be solved in A WEEK if FIRST would simply add an aerial camera. I hope one day this will be a reality.

MrTechCenter 08-08-2014 16:43

Re: Aerial Camera for FIRST matches
 
Quote:

Originally Posted by faust1706 (Post 1395838)
If FIRST would add an aerial camera and give the feed to each team playing, I believe it would greatly increase the level of competition, given that teams actually use it. My team has been working on the software for an autonomous robot for 2 years now, and this problem could be solved in A WEEK if FIRST would simply add an aerial camera. I hope one day this will be a reality.

Way too soon for this to happen. While it would certainly be helpful, it's just not going to happen anytime soon. Although, at the MadTown Throwdown last year, Team 1671 the Bird Brains developed a web app that allowed coaches to have a tablet and see the live score as well as the live stream.

pwnageNick 08-08-2014 16:47

Re: Aerial Camera for FIRST matches
 
I know for the robocup competition, teams are required to have certain labels and symbols on the top of their robot down to a spec, and they have a top view camera that teams can access during the match to track where players are and do autonomous programming off of that. This would be a very cool thing to have in FRC. While it would take some time (years) for teams to fully utilize it, it could make autonomous mode a lot more interesting and have more action between alliances.

Think about the defense we saw on Einstein and at IRI with the goalie bots such as 1114 against/with 254. Now think about the crazier chess matches that would ensue if both robots knew where every robot was on the field and tried to outmaneuver the other. And that would be without a human driving it using a kinect, which I know some people are against being legal.

I don't see this happening for this coming season, or even the one after, but it is a very intriguing thought for the future.

-Nick

Greg McKaskle 08-08-2014 16:54

Re: Aerial Camera for FIRST matches
 
The biggest complication with this is that teams would probably not have access to it until an event. It would be pretty hard to replicate in a build space, and if you did, there would probably be a number of significant differences.

We've done this a number of times with NXT robots or smaller and slower robots playing soccer or other games. It is still a pretty difficult challenge, and you really want the robots to wear consistent markers that the camera can view -- robots with hats.

If you believe that one mounted camera and one week is all you need, you may want to investigate where your team can mount cameras. What did the rules say last year?

Greg McKaskle

faust1706 08-08-2014 16:57

Re: Aerial Camera for FIRST matches
 
Quote:

Originally Posted by Greg McKaskle (Post 1395846)
If you believe that one mounted camera and one week is all you need, you may want to investigate where your team can mount cameras. What did the rules say last year?

Greg McKaskle

I was thinking of collecting data at the first regional we go to, then playing around with it. I wonder if we could get a quadcopter to give us a camera feed like this team did, just to simply videotape the match...

https://www.youtube.com/watch?v=wCDUrJ4M6pk

Andrew Schreiber 08-08-2014 17:02

Re: Aerial Camera for FIRST matches
 
Quote:

Originally Posted by faust1706 (Post 1395847)
I was thinking of collecting data at the first regional we go to, then playing around with it. I wonder if we could get a quadcopter to give us a camera feed like this team did, just to simply videotape the match...

https://www.youtube.com/watch?v=wCDUrJ4M6pk

The fewer things flying over my head at events, the better. The last three years have had me watching for things flying at my head... I don't want to have to worry about something dropping ON me.

BBray_T1296 09-08-2014 16:42

Re: Aerial Camera for FIRST matches
 
Quadcopters in crowded rooms/arena floors are really a no-go. I know the stuff is relatively reliable, but it makes a lot of people (even myself) worried when we see a chopper with no blade guards hovering over the crowded stands.

^This happened at TRR for a little bit

I am all for the versatility and capability of quadcopters/etc, but it only has to come down once to be a major problem.

Also, FTC events have major problems when other unencrypted 2.4 GHz stuff is going on under the same roof. Their system hardly handles bandwidth pollution at all before faulting.

yash101 10-08-2014 13:24

Re: Aerial Camera for FIRST matches
 
Using an aerial view, tracking all the field targets would be as simple as background subtraction. However, I have great doubts that FIRST will actually do this. There is a way you can simulate it, though: put a camera at the driver station window, as high as possible, held in place with a suction cup. If done right, the camera will be above most of the robots. You can then apply a perspective transform to make the image look like an aerial view. Of course, since you are transforming through nearly 90 degrees, there will be a heavy loss of resolution, but that shouldn't cause any problems if you use a 1080p camera. The processing is quite easy, so it wouldn't hurt to use a high-resolution camera, and after the perspective transform you can scale the image down for the actual tracking!
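Once the four field corners have been matched to four field coordinates, applying the resulting perspective transform is just a matrix multiply and a divide. A minimal sketch, assuming the 3x3 homography H has already been computed in a calibration step (e.g. with OpenCV's cv::getPerspectiveTransform); warpPoint is an illustrative helper, not any team's actual code:

```cpp
#include <array>
#include <utility>

// Map a camera pixel (u, v) into field coordinates using a 3x3 homography H.
// H would be computed once from four known reference points (e.g. the field
// corners as seen from the driver station window); here it is simply passed in.
std::pair<double, double> warpPoint(const std::array<std::array<double, 3>, 3>& H,
                                    double u, double v)
{
    double x = H[0][0] * u + H[0][1] * v + H[0][2];
    double y = H[1][0] * u + H[1][1] * v + H[1][2];
    double w = H[2][0] * u + H[2][1] * v + H[2][2];
    return { x / w, y / w };  // divide out the projective scale
}
```

The division by w is what makes this a perspective (not affine) mapping; it is also where the resolution loss near the far side of the field comes from.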

Quote:

Originally Posted by BBray_T1296 (Post 1395932)
Quadcopters in crowded rooms/arena floors are really a no-go. I know the stuff is relatively reliable, but it makes a lot of people (even myself) worried when we see a chopper with no blade guards hovering over the crowded stands.

^This happened at TRR for a little bit

I am all for the versatility and capability of quadcopters/etc, but it only has to come down once to be a major problem.

Also, FTC events have major problems when other unencrypted 2.4 GHz stuff is going on under the same roof. Their system hardly handles bandwidth pollution at all before faulting.

I thought of a really cool idea a long time ago that could make quadcopters safe in an application like this -- MAGNETS! A servo on a worm-gear drive could lower and raise a magnet in a separator tube, with the magnet strong enough to lift the entire craft (maybe 2-3 kg). During setup, the quadcopter this is mounted on would fly up to a steel truss on the ceiling, the servo would extend the magnet so the craft can hold onto the truss, and the craft could then shut down its propellers and idle. As a safety measure, an onboard accelerometer could alert the craft if it falls, so it can spin its propellers back up and land safely! I am pretty sure that would be extremely safe and a good way to get an aerial view of the field. To come down, the craft would first spin up its propellers to create enough thrust to hover, then retract the magnet to detach, and fly down to be picked up by field staff and stashed for cleanup.

BBray_T1296 10-08-2014 14:33

Re: Aerial Camera for FIRST matches
 
What if, at the top of the height limit on my robot, I place a carpet square that completely hides my robot from said aerial cam? (Unfolding to the 20" extension, to mask the bumpers too.)

Call me the invisible bot.

:D

MrRoboSteve 10-08-2014 14:38

Re: Aerial Camera for FIRST matches
 
2 Attachment(s)
In the "more practical" department, consider this:

1. Put cameras in known locations on field

Attachment 17229

2. Robots have markers attached.

Attachment 17230

You get the idea.

3. FMS calculates position estimates for each target and delivers them each cycle to the driver station.

4. In order to make it clear that the FMS position estimates are best efforts, matches are randomly selected to have the position estimates disabled. This forces teams to have a workable strategy in case there are issues with the position estimates.

Tom Line 10-08-2014 15:08

Re: Aerial Camera for FIRST matches
 
An easy and practical way of doing this at most venues would be to string a high-tension Spectra, Amsteel, or Kevlar line above the field and put a GoPro on a pulley in the middle of it, facing downward.

If FIRST would agree to put a 10-foot-tall aluminum pole on top of the driver station at each end of the field to string the line between, you could easily do this at ANY venue.

EricDrost 10-08-2014 15:09

Re: Aerial Camera for FIRST matches
 
Curie 2014 Match 135 from catwalk:
http://youtu.be/hqoQ5pmK2jI?t=60s

faust1706 10-08-2014 15:19

Re: Aerial Camera for FIRST matches
 
Quote:

Originally Posted by MrRoboSteve (Post 1395981)
In the "more practical" department, consider this:
3. FMS calculates position estimates for each target and delivers them each cycle to the driver station.

4. In order to make it clear that the FMS position estimates are best efforts, matches are randomly selected to have the position estimates disabled. This forces teams to have a workable strategy in case there are issues with the position estimates.

I don't necessarily agree with number 3. They should simply feed the image to the driver station and let teams do the heavy lifting of finding the positions of the robots. I know the arguments against this ("teams wouldn't have data to play around with before competition"), but all you'd need is an image from all 4 of those cameras with no robots or game pieces in it, then a transformation on the image like Yash mentioned to change the perspective. You could even do stereo vision (with a homography, as Yash mentioned) and use simple trig to calculate distance.

As for number 4, if the FMS did number 3 and it sometimes didn't work, it should be counted as a field fault, not an "oh well" situation.

Quote:

Originally Posted by EricDrost (Post 1395986)
Curie 2014 Match 135 from catwalk:
http://youtu.be/hqoQ5pmK2jI?t=60s

Hm....that's exactly what I'm talking about.

Greg McKaskle 11-08-2014 09:18

Re: Aerial Camera for FIRST matches
 
So. You have overhead footage of a match. Plenty of high res pixels to analyze.

The remaining steps are to isolate and track robots. Estimate their heading and velocity. Do the same for game objects. Perhaps superimpose your own graphic robot on top of the image showing how your AI would move your robot.

Sounds like a good project. And as I said in the earlier post, we've done it a few times for demos and it is still a bunch of work and doesn't necessarily work that well.

Greg McKaskle

faust1706 11-08-2014 14:25

Re: Aerial Camera for FIRST matches
 
Quote:

Originally Posted by Greg McKaskle (Post 1396017)
The remaining steps are to isolate and track robots. Estimate their heading and velocity. Do the same for game objects. Perhaps superimpose your own graphic robot on top of the image showing how your AI would move your robot.

All it'd take is a simple background subtraction. Take a calibration image with nothing on the field, then grab an image during a match and subtract the calibration image from it. As for velocity, you simply record the position of your object(s), find the distance traveled between frames, and divide by the frame time.

We do a background subtraction for depth tracking with the Kinect (and Asus Xtion) and it works perfectly.

I don't think knowing the velocity (speed and heading) of another robot would be that useful, considering most robots can turn on a dime, and some don't even have to turn to go in a different direction. For game pieces it'd be somewhat useful, but I feel that having a camera above your intake would be more beneficial. But that's just me.
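The subtract-then-measure recipe described above can be sketched in plain C++. The frames here are toy flat arrays of grayscale values (real code would run something like cv::absdiff on camera images), and both helper names are illustrative:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Toy background subtraction: a pixel is "foreground" if it differs from the
// calibration image by more than `thresh`.
std::vector<bool> subtractBackground(const std::vector<int>& calib,
                                     const std::vector<int>& frame, int thresh)
{
    std::vector<bool> fg(frame.size());
    for (size_t i = 0; i < frame.size(); ++i)
        fg[i] = std::abs(frame[i] - calib[i]) > thresh;
    return fg;
}

// Instantaneous speed estimate from two tracked positions one frame apart:
// distance traveled divided by the frame time, exactly as described above.
double speedFromFrames(double x0, double y0, double x1, double y1, double dt)
{
    return std::hypot(x1 - x0, y1 - y0) / dt;
}
```

The centroid of the foreground pixels would feed the position history that speedFromFrames consumes.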

Tom Line 11-08-2014 15:48

Re: Aerial Camera for FIRST matches
 
It would be an interesting challenge. I know we've struggled in the past with real-time tracking; we ran into it in '09 with the trailers because of computation and communication lag time.

An 18 foot/second robot moving at full speed will travel 21.6 inches in 100 ms. Inconsistent communication timing will make that vary, since you'll be waiting for a video stream from the FMS, working with it, and then sending the result to the robot.

The height, viewing angle, and lens distortion of the camera will also bring some variability into your measurements. It is something fun to kick around though.

faust1706 11-08-2014 18:34

Re: Aerial Camera for FIRST matches
 
Quote:

Originally Posted by Tom Line (Post 1396064)
An 18 foot/second robot moving at full speed will travel 21.6 inches in 100 ms. Inconsistent communication timing will make that vary, since you'll be waiting for a video stream from the FMS, working with it, and then sending the result to the robot.

The height, viewing angle, and lens distortion of the camera will also bring some variability into your measurements. It is something fun to kick around though.

The sheer speed of some robots over the years could be a problem for real-time tracking. With a camera like the PlayStation Eye, you can get very fast frame rates, upwards of 120 fps apparently, and background subtraction isn't that computationally intensive. A student on 1706 optimized our A* path planning to solve a 500x500 grid in .03 seconds. Then there is sending the data over the network and the robot beginning to act on it; that's probably 10 ms. The whole process could be done in maybe 20-30 ms if optimized. And if you feed the path planner where a robot will be in half a second if it continues at its current velocity, that could combat the time all this takes. Of course, this all goes down the drain when a robot decides to shove you up against a wall for 3 seconds.
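The half-second lookahead mentioned above is just dead reckoning on the last measured velocity. A minimal sketch (it assumes the target holds its course, which, as the post admits, a wall-pinning robot will not):

```cpp
#include <utility>

// Lead a moving obstacle by its current velocity estimate: predicted
// position after `lookahead` seconds if it keeps the same course and speed.
std::pair<double, double> predictPosition(double x, double y,
                                          double vx, double vy,
                                          double lookahead)
{
    return { x + vx * lookahead, y + vy * lookahead };
}
```

The predicted point, rather than the stale measured one, is what would be marked as an obstacle in the A* grid.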

Greg McKaskle 11-08-2014 21:27

Re: Aerial Camera for FIRST matches
 
If you choose not to pay attention to the momentum of a robot or game pieces, that is your choice, but it is a piece of info that predicts future location. As noted, your measurements will lag. You can minimize the lag, but you cannot eliminate it. Knowing the amount of lag will give you a confidence interval on object locations.

RoboCup allows teams to use an omni cam for some of its levels. Those robots are super nimble and fast as well. See https://www.youtube.com/watch?v=6Bch...ocWBtQthSwqx for an example.

The level of swarm play in RoboCup is very inspiring. Yes, it would be cool to incorporate into FRC, but it is quite difficult, much harder than you make it sound. And the availability of data for the programmers to practice on remains my biggest issue.

I believe the robocup teams are required to mount their own camera. Perhaps your team could incorporate it similarly.

Greg McKaskle

Michael Hill 11-08-2014 22:08

Re: Aerial Camera for FIRST matches
 
Quote:

Originally Posted by Greg McKaskle (Post 1396095)
If you choose not to pay attention to the momentum of a robot or game pieces, that is your choice, but it is a piece of info that predicts future location. As noted, your measurements will lag. You can minimize the lag, but you cannot eliminate it. Knowing the amount of lag will give you a confidence interval on object locations.

RoboCup allows teams to use an omni cam for some of its levels. Those robots are super nimble and fast as well. See https://www.youtube.com/watch?v=6Bch...ocWBtQthSwqx for an example.

The level of swarm play in robocup is very inspiring. Yes it would be cool to incorporate it into FRC, but it is quite difficult, much harder than you make it sound. And the availability of data for the programmers to practice on remains my biggest issue.

I believe the robocup teams are required to mount their own camera. Perhaps your team could incorporate it similarly.

Greg McKaskle

Yay Kalman Filters! The roboRio can handle that for multiple objects....right? Lol

RyanCahoon 12-08-2014 00:59

Re: Aerial Camera for FIRST matches
 
Quote:

Originally Posted by Greg McKaskle (Post 1396095)
I believe the robocup teams are required to mount their own camera. Perhaps your team could incorporate it similarly.

In RoboCup Small Size League (the video you posted), the robot tracking software was moved to a standardized, shared system in 2010 (though most solutions had converged well before that), and it, along with the cameras, is provided by the competition field (though, of course, all teams have practice fields set up in their laboratories). Teams get the processed position/heading information for all robots and the position of the ball from the field. The robots, the planning software/computer, and the communication between them are provided by the teams. Robots are required to have standardized tracking patterns on their top surface.

The Mid-Size League shows what's possible using only on-board sensors; of course, they have the advantage of having multiple views of the field (one from each robot).

Greg McKaskle 12-08-2014 08:18

Re: Aerial Camera for FIRST matches
 
Thanks for the links. The last time I researched RoboCup, that wasn't in place, or was new enough that I didn't find it.

Greg McKaskle

neshera 12-08-2014 19:51

Re: Aerial Camera for FIRST matches
 
I shot UAS ("drone") footage for our recent R2OC event. There is a thread on this here: http://www.chiefdelphi.com/forums/sh...hreadid=130179

Andrew Schreiber and BBray_T1296 raise valid concerns. We all agreed beforehand to not fly over the field during match play, and there were some other safety rules as well.
In addition, I think a truly "aerial" camera will not be stable enough for good registration/tracking of the robots.

So I agree with the notion of either a camera fixed to some element of the arena over the field, or on a cable/pulley system as suggested by Tom Line.

The notion of an electromagnetic "dock" for the UAS is interesting; I am not sure I would want something with GPS antennae at its apex, and with motors, an electronic compass, etc., encountering a strong electromagnet.

faust1706 13-08-2014 14:12

Re: Aerial Camera for FIRST matches
 
The good people of /r/frc pointed out rule R73.

"R73
Any decorations that involve broadcasting a signal to/from the ROBOT, such as remote cameras, must be approved by FIRST (via e-mail to frcparts@usfirst.org) prior to the event and tested for communications interference at the venue. Such devices, if reviewed and approved, are excluded from R61."

So I'm going to email them and ask if this idea would be ok (simply somehow getting a camera pointed at the field from a very high vantage point).

Update: I got a reply (I know, so fast?): "Thanks for your note. I’ve forwarded to the rest of the team and this will be considered when the rules are drafted, however and of course, I can’t promise anything."

My request was to allow us to send a message to our driver station wirelessly about where everything is on the field. A team could do all of the image processing before sending and only send motor values to drive to the designated point on the field, instead of streaming an entire image.

yash101 13-08-2014 18:18

Re: Aerial Camera for FIRST matches
 
And if everything works out properly, it could literally be a matter of background subtraction to get all the robots. Place three color dots on the bumper of each robot and you can then calculate position, direction, velocity, and acceleration!
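With two of those dots distinguishable (say, a "front" dot and a "rear" dot; the names are assumptions for illustration), heading falls straight out of an atan2 on their overhead-image positions:

```cpp
#include <cmath>

// Heading of a robot from two colored marker dots seen in the overhead
// image, in radians: the direction from the rear dot toward the front dot.
double headingFromDots(double frontX, double frontY, double rearX, double rearY)
{
    return std::atan2(frontY - rearY, frontX - rearX);
}
```

Differencing headings and centroid positions across frames then gives direction, velocity, and acceleration.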

NotInControl 13-08-2014 19:05

Re: Aerial Camera for FIRST matches
 
It would be a lot easier, and cheaper, to implement an LPS system instead of trying to extrapolate global position via a camera.

The camera would need to be fixed, so mounting it on a quadcopter is a no-go if you want accuracy, unless you have some way to track the position of the quadcopter relative to a reference point on the field. A single camera will skew the image, so distances will only be accurate if the camera is directly overhead. Plus, lighting conditions and reflective materials unique to each venue will make each site behave differently.

I do not believe a universal system for all fields, at all events can be accomplished in this manner.

An LPS is a local positioning system; it works much like GPS but on a smaller scale. Beacons placed at known locations around the field perimeter each have a unique ID and broadcast the time. A receiver on the robot can calculate the distance between itself and a beacon from the signal's time of flight (it knows when the signal was sent because that is in the data, and it knows the current time). The signal is transmitted via RF, and as such the system is commonly referred to as a WLPS (wireless local positioning system).

Multiple beacons allow for trilateration. If multiple WiFi access points were added to the field, that would be all you needed to set this up.

You can do this in many different spectrums with great range and accuracy. Different systems can be used depending on whether you are indoors or outdoors and on the maximum distance. Google's indoor maps use WiFi signal-strength triangulation from known hotspot locations; for FRC, you could do this with Bluetooth beacons, which would not interfere with the 802.11 protocol we currently use for robot control.

One of the problems with this system would be reducing the TTFF (time to first fix); to make it fair, a match couldn't start until each robot was synced and triangulated.
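In 2D, the beacon-ranging geometry described above reduces to intersecting circles: subtracting the circle equations pairwise cancels the squared unknowns and leaves a small linear system. A minimal sketch (beacon positions and ranges are hypothetical, and a real WLPS would also have to cope with noisy ranges, typically via least squares over more beacons):

```cpp
#include <cmath>
#include <utility>

// Solve for a 2-D position from ranges to three fixed, non-collinear beacons
// (trilateration). Subtracting circle 1's equation from circles 2 and 3
// gives two linear equations, solved here with Cramer's rule.
std::pair<double, double> trilaterate(double x1, double y1, double d1,
                                      double x2, double y2, double d2,
                                      double x3, double y3, double d3)
{
    double a1 = 2 * (x2 - x1), b1 = 2 * (y2 - y1);
    double c1 = d1 * d1 - d2 * d2 + x2 * x2 - x1 * x1 + y2 * y2 - y1 * y1;
    double a2 = 2 * (x3 - x1), b2 = 2 * (y3 - y1);
    double c2 = d1 * d1 - d3 * d3 + x3 * x3 - x1 * x1 + y3 * y3 - y1 * y1;
    double det = a1 * b2 - a2 * b1;  // nonzero when beacons are not collinear
    return { (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det };
}
```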

Just a thought,
Kevin

NotInControl 13-08-2014 19:14

Re: Aerial Camera for FIRST matches
 
Quote:

Originally Posted by yash101 (Post 1396330)
And if everything works out properly, it could literally be a matter of background subtraction to get all the robots. Place three color dots on the bumper of each robot and you can then calculate position, direction, velocity, and acceleration!

Just a note on this: you could only calculate instantaneous velocity and acceleration, using previously stored data for the tracked object.

If you want to determine the track of an object (where the object might go next), then you can only *estimate* the track based on current heading, velocity, etc., using probability theory and other a priori knowledge. An extended Kalman filter will help you out in this scenario, but it will never be perfectly accurate.

Consider that you are programming an autonomous car. You need to track the other cars around you, their positions and velocities. Let's say you want to change lanes: how do you determine that another car is not switching into that same lane at that same moment? No a priori knowledge can tell you whether that car will instantly change course. There is no way to do this with 100% accuracy unless there is communication between all the cars. Without that communication, the best you can do is predict with some level of certainty less than 100%.
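As a concrete (and much simpler) stand-in for the Kalman filtering mentioned above, a fixed-gain alpha-beta tracker shows the same predict-then-correct idea: extrapolate with the current velocity estimate, then blend in each new, lagged measurement. The gains here are illustrative, not tuned:

```cpp
#include <cmath>

// Minimal alpha-beta tracker for one axis: a fixed-gain cousin of the
// Kalman filter. `alpha` corrects position, `beta` corrects velocity.
struct AlphaBetaTracker {
    double pos = 0, vel = 0;
    double alpha = 0.85, beta = 0.005;

    double update(double measured, double dt)
    {
        double predicted = pos + vel * dt;       // constant-velocity prediction
        double residual = measured - predicted;  // innovation
        pos = predicted + alpha * residual;
        vel = vel + (beta / dt) * residual;
        return pos;
    }
};
```

A true (extended) Kalman filter replaces the fixed gains with ones derived from the modeled process and measurement noise, which is what yields the confidence interval Greg mentioned.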

Regards,
Kevin

faust1706 13-08-2014 19:20

Re: Aerial Camera for FIRST matches
 
Quote:

Originally Posted by NotInControl (Post 1396335)
It would be a lot easier, and cheaper, to implement an LPS system instead of trying to extrapolate global position via a camera.

The camera would need to be fixed, so mounting it on a quadcopter is a no-go if you want accuracy, unless you have some way to track the position of the quadcopter relative to a reference point on the field. A single camera will skew the image, so distances will only be accurate if the camera is directly overhead. Plus, lighting conditions and reflective materials unique to each venue will make each site behave differently.

The calculations on the image are not distances, so it doesn't really matter where the camera is as long as it is high enough up. Pardon my crude Paint skills; in case the image doesn't show up: http://imgur.com/U4V1cxm

Imagine that the black rectangle exactly contains the field. All I would be doing is finding the position of the robot with respect to the black rectangle, so it doesn't really matter where the camera is up top. Yes, it would alter the values some, but not by much. I do agree with the lighting-conditions comment; that could be a problem.

As for your other idea, that would be most ideal, but it requires other teams to participate. I want to do this project without having to ask other teams to alter their robots or do any extra work. I could easily see your idea being implemented and used to great success, but it requires other teams to play along.

Quote:

Originally Posted by NotInControl (Post 1396340)

Consider that you are programming an autonomous car. You need to track the other cars around you, their positions and velocities. Let's say you want to change lanes: how do you determine that another car is not switching into that same lane at that same moment? No a priori knowledge can tell you whether that car will instantly change course. There is no way to do this with 100% accuracy unless there is communication between all the cars. Without that communication, the best you can do is predict with some level of certainty less than 100%.

It would be something to have access to the inputs of the other robots on the field, such as their joystick positions. At that point, though, it would seem that a machine learning algorithm (deep learning) could be used to solve the task (given that each and every team had the exact same inputs for every action, which isn't the case).

NotInControl 13-08-2014 20:12

Re: Aerial Camera for FIRST matches
 
Quote:

Originally Posted by faust1706 (Post 1396341)
The calculations on the image are not distances, so it doesn't really matter where the camera is as long as it is high enough up. Pardon my crude Paint skills; in case the image doesn't show up: http://imgur.com/U4V1cxm

Imagine that the black rectangle exactly contains the field. All I would be doing is finding the position of the robot with respect to the black rectangle, so it doesn't really matter where the camera is up top. Yes, it would alter the values some, but not by much. I do agree with the lighting-conditions comment; that could be a problem.

I assume the end result is calculating distances between objects in the frame. In order to do this, you either need to know the distance between the camera and the object, or keep an object of known dimensions in frame at all times. This lets you calculate the scaling factors for the height and width of the image and the distance between pixels.

I assume you would want to calculate lateral distance between objects in the frame, because just knowing that object 1 is at pixel (x1, y1) and object 2 is at (x2, y2) isn't of any value unless you have determined the proper scaling factor (distance per pixel).

It is a lot easier to mount the camera fixed at a constant distance than to assume you can keep an object of known height in the field of view at all times. Note again, even if the field will be in view at all times, as the quadcopter moves around a bit in its watch circle while hovering, the skew of the field-perimeter lines will change, making the scaling factor inaccurate and any distance between objects more inaccurate.
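With a fixed overhead camera, the distance-per-pixel idea collapses to a single scaling factor: a known field dimension spanning a known number of pixels. A sketch (the 54 ft figure is the long dimension of an FRC field; the pixel count is hypothetical):

```cpp
#include <cmath>

// Lateral distance in feet between two tracked objects, given the scaling
// factor implied by the field's known size spanning `fieldPixels` pixels.
double feetBetween(double px1, double py1, double px2, double py2,
                   double fieldFeet, double fieldPixels)
{
    double scale = fieldFeet / fieldPixels;  // feet per pixel
    double dx = (px2 - px1) * scale;
    double dy = (py2 - py1) * scale;
    return std::sqrt(dx * dx + dy * dy);
}
```

This is exactly what breaks when the camera drifts in its watch circle: fieldPixels (and the skew) change from frame to frame.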


Quote:

Originally Posted by faust1706 (Post 1396341)
As for your other idea, that would be most ideal, but it requires other teams to participate in it. I want to do this project without have to ask other teams to alter their robots or do any extra work. I could easily see your idea be implemented and used to great success, but it requires other teams to play along.

Why do you believe this? Just like GPS in your car, you do not need all other cars to have GPS for you to use it. You can use the LPS to track yourself even if no other team uses it.

I assume you are stating this because you would like to know the locations of all the other objects. True, you wouldn't know them, but even if you did, that information would be useful yet still would not let you navigate without local obstacle avoidance.

In either system, you still need to perform local object detection, because neither system can guarantee you won't collide into another non-stationary object.

The overhead camera cannot determine where a non-stationary object is going next. So if you must develop local obstacle avoidance anyway, then you should be able to navigate successfully without other teams broadcasting their locations as well.

However you plan to implement it: cool project, good luck,
Kevin

faust1706 13-08-2014 20:42

Re: Aerial Camera for FIRST matches
 
The "end result" of knowing where everything is on the field is path planning. If you're interested: https://www.dropbox.com/sh/uvmzxrgz8...Bz8k6p_pmR_Zua

All we need to know is where the things are on the field in some coordinate system; then we input their coordinates into our path finding as obstacles. Right now I can track them using a depth camera, then do a linear transformation between the camera's coordinates (3D coordinates with the camera as the origin) and the field coordinates (where the bottom-left corner is the origin). Using an aerial camera eliminates the need for the depth map, which means one less sensor on our robot. And as a bonus, it can see the whole field, unlike a depth map.

As for your idea, we don't need it, though it is clever. For the past three years, we have been able to calculate where we are on the field solely from the vision tapes. (See also http://www.chiefdelphi.com/media/photos/38819: this is a pose estimation. It knows where we are in 3 dimensions with respect to the center of the top hoop, as well as how rotated we are in pitch, roll, and yaw.) I still want to try out your method, though; I see extreme value in it. The only downside is that you'd have to set it up at competition, which could be problematic.

NotInControl 13-08-2014 21:01

Re: Aerial Camera for FIRST matches
 
Quote:

Originally Posted by faust1706 (Post 1396360)
... then I do a linear transformation between the camera's coordinates (3d coordinates with camera being the origin) to the field coordinates (where the bottom left corner is the origin)...

How do you do this linear transformation from local image coordinates to the global coordinates used in your path planning if you have neither a constant object of known dimensions in view at all times nor a fixed camera distance with a fixed focal length? What equation do you use that ignores both of those parameters?


Quote:

Originally Posted by faust1706 (Post 1396360)
... (See also: http://www.chiefdelphi.com/media/photos/38819 this is a pose estimation. It knows where we are in 3 dimensions with respect to the center of the top hoop, as well as how rotated we are in pitch roll and yaw).

Correct me if I am wrong, but is this not localization: keeping a known object of fixed dimensions in frame, determining the scaling factor of your image from it, and then determining your location from the assumed distance between the object and the camera and the angle between the center of the frame and the center of the object?

If that is the case, then once the goals are out of frame you can no longer determine where you are in the world, correct? How do you plan to do something similar just based on pixel values, without either a fixed camera distance or a fixed object of known dimensions in the frame?

I don't know of a method that uses only pixel location without knowing the distance to the object or keeping a fixed-dimension object in frame to determine the scaling value; you need one of those to calculate global position. Also, depending on your camera and field of view, the reason I keep bringing up skew is that going from local to global coordinates is not linear: the edges of the frame will have a different (skewed) distance per pixel than the center of the image. As long as you stay focused on the center of the image, you can use the small-angle approximation to linearize distance per pixel.
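That skew can be made concrete: for a downward-looking camera at height h, a ray theta off vertical hits the floor h*tan(theta) from the point below the camera, while a linearized distance-per-pixel model effectively assumes h*theta. A sketch of the two (names and numbers are illustrative):

```cpp
#include <cmath>

// Exact ground offset of a ray `thetaRad` off vertical, camera at height h.
double groundOffset(double heightFt, double thetaRad)
{
    return heightFt * std::tan(thetaRad);
}

// The small-angle approximation of the same quantity: only good near the
// center of the image, which is exactly the skew effect described above.
double groundOffsetLinear(double heightFt, double thetaRad)
{
    return heightFt * thetaRad;
}
```

Near the frame edges (large theta), tan(theta) pulls away from theta and the constant feet-per-pixel assumption breaks down.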

Keep us posted on the project.

Regards,
Kevin

faust1706 13-08-2014 21:39

Re: Aerial Camera for FIRST matches
 
It isn't pixel coordinates I am transforming, but the real-world coordinates of the objects from the Kinect: x is left/right, y is straight-line distance. Here is an example: http://www.chiefdelphi.com/media/photos/39138

If you're really interested, here is the code:

Code:

#include <cmath>
#include <utility>

std::pair<int, int> Translation(double robotx, double roboty,
                                double depthx, double depthy, double heading)
{
    // Offset the Kinect's reading by the robot's position on the field
    double x = robotx + depthx;
    double y = roboty - depthy;
    // Rotate into field coordinates by the robot's heading (2D rotation matrix)
    int mapx = static_cast<int>(x * cos(heading) + y * sin(heading));
    int mapy = static_cast<int>(-x * sin(heading) + y * cos(heading));
    return std::make_pair(mapx, mapy);
}

It requires knowing where the robot is on the field, which we get from our vision solution. First we assume the robot has a heading of 0 degrees, facing directly to the left. Then I account for heading by multiplying by the rotation matrix (though it doesn't look like it).

You're right, it is localization. It is a little (a lot) more complex than calculating scaling factors; it is called pose estimation (http://docs.opencv.org/modules/calib...struction.html).

You are also right about being blind when the goals are out of frame. In 2012 we had a really high camera (relative to the other robots' heights) that rotated to always face the goal. In 2013 our camera was rather low, but so were most robots, and all we used was distance; those pesky pyramids were also a problem. This year, we used three 120-degree cameras (http://www.geniusnet.com/Genius/wSit...14&ctNode=161), and there were vision tapes in all 4 corners. We made a custom GPS-type triangulation (the intersection of n circles, where 2 < n <= 8). This proved very accurate, but we didn't use it in the game; it was just testing for future years and obtaining knowledge for knowledge's sake. Code can be found here: https://www.dropbox.com/sh/arj7y11wf...QfPB8v0EZaff5a

Skew can be accounted for by calibrating the camera, which we didn't do this year (http://www.chiefdelphi.com/media/photos/39466, and for a read: http://docs.opencv.org/doc/tutorials...libration.html). The pose estimation DOES take into account focal length and whatnot. In 2013 I said "screw it" when developing the program and didn't add a correction feature; same in 2014. If you look closely at the 2012 image I sent, there are purple crosshairs near the center of each target. That is my projection of where the targets are in 3D back onto the screen.

As for linearization, I did a (custom) regression on data of the pixels' y value vs. distance. I don't have the image on this computer; I'll add it later tonight. I think that is what you mean by linearization.

Your method seems more versatile and robust, which is why I am interested in it.

Sorry for the wall of text.

