Re: Aerial Camera for FIRST matches
It would be an interesting challenge. I know we've struggled in the past with real-time tracking; we attempted it in '09 with the trailers, and computation and communication lag made it difficult.
An 18 foot/second robot moving at full speed covers 21.6 inches in 100 ms. Inconsistent communication timing will make that vary, since you'll be waiting for a video stream from the FMS, processing it, and then sending the result to the robot. The camera's height, viewing angle, and lens distortion will also add variability to your measurements. It is something fun to kick around, though.
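The lag arithmetic above can be sketched as a quick back-of-the-envelope helper; the speed and lag numbers come straight from the post, and the jittered value is an illustrative assumption:

```python
# How far a robot travels while a camera measurement is "in flight".
# 18 ft/s and 100 ms are the post's numbers; 150 ms models lag jitter.

def travel_during_lag(speed_ft_per_s: float, lag_ms: float) -> float:
    """Distance in inches covered during the given lag."""
    return speed_ft_per_s * 12.0 * (lag_ms / 1000.0)

print(travel_during_lag(18.0, 100.0))   # 21.6 inches, as in the post
print(travel_during_lag(18.0, 150.0))   # 32.4 inches if lag jitters up 50 ms
```

Any control decision based on the camera is therefore acting on a position that is stale by roughly two robot-lengths at full speed.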
Re: Aerial Camera for FIRST matches
If you choose not to pay attention to the momentum of a robot or game pieces, that is your choice, but it is a piece of info that predicts future location. As noted, your measurements will lag. You can minimize the lag, but you cannot eliminate it. Knowing the amount of lag will give you a confidence interval on object locations.
RoboCup allows teams to use an omni cam for some of its levels. Those robots are super nimble and fast as well. See https://www.youtube.com/watch?v=6Bch...ocWBtQthSwqx for an example. The level of swarm play in RoboCup is very inspiring. Yes, it would be cool to incorporate this into FRC, but it is quite difficult, much harder than you make it sound. And the availability of data for the programmers to practice on remains my biggest issue. I believe the RoboCup teams are required to mount their own camera. Perhaps your team could incorporate it similarly.

Greg McKaskle
Re: Aerial Camera for FIRST matches
The Mid-Size League shows what's possible using only on-board sensors; of course, they have the advantage of having multiple views of the field (one from each robot).
Re: Aerial Camera for FIRST matches
Thanks for the links. The last time I researched RoboCup, that wasn't in place, or was new enough that I didn't find it.
Greg McKaskle |
Re: Aerial Camera for FIRST matches
I shot UAS ("drone") footage for our recent R2OC event. There is a thread on this here: http://www.chiefdelphi.com/forums/sh...hreadid=130179
Andrew Schreiber and BBay_T1296 raise valid concerns. We all agreed beforehand to not fly over the field during match play, and there were some other safety rules as well. In addition, I think a truly "aerial" camera will not be stable enough for good registration/tracking of the robots. So I agree with the notion of either a camera fixed to some element of the arena over the field, or on a cable/pulley system as suggested by Tom Line. The notion of an electromagnetic "dock" for the UAS is interesting; I am not sure I would want something with GPS antennae at its apex, and with motors/electronic compass, etc., encountering a strong electromagnet.
Re: Aerial Camera for FIRST matches
The good people of /r/frc pointed out rule R73.
"R73 Any decorations that involve broadcasting a signal to/from the ROBOT, such as remote cameras, must be approved by FIRST (via e-mail to frcparts@usfirst.org) prior to the event and tested for communications interference at the venue. Such devices, if reviewed and approved, are excluded from R61."

So I'm going to email them and ask if this idea would be OK (simply getting a camera pointed at the field from a very high vantage point, somehow).

Update: I got a reply (I know, so fast?): "Thanks for your note. I’ve forwarded to the rest of the team and this will be considered when the rules are drafted, however and of course, I can’t promise anything."

My request was to allow us to send a message to our driver station wirelessly about where everything is on the field. A team could literally do all of the image processing before sending it, and only send motor values to go to the designated point on the field instead of sending a stream of an entire image.
Re: Aerial Camera for FIRST matches
And if everything works out properly, it could literally be a matter of background subtraction to find all the robots. Place three colored dots on the bumper of each robot and you can then calculate position, direction, velocity, and acceleration!
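A rough sketch of the three-dot idea; the dot placement (two dots on the rear bumper corners, one at the front) and the pixel coordinates are made-up assumptions, not a tested scheme:

```python
import numpy as np

# Pose from three bumper dots: two on the rear corners, one at the front.
# Coordinates below are illustrative pixel positions, not real data.

def robot_pose(rear_left, rear_right, front):
    """Return (center, heading_radians) from three bumper-dot positions."""
    rear_mid = (np.asarray(rear_left, float) + np.asarray(rear_right, float)) / 2.0
    front = np.asarray(front, dtype=float)
    direction = front - rear_mid            # vector pointing "forward"
    heading = np.arctan2(direction[1], direction[0])
    center = (rear_mid + front) / 2.0
    return center, heading

center, heading = robot_pose((0, 0), (0, 2), (3, 1))
print(center, heading)  # robot centered at (1.5, 1), facing along +x

# Velocity and acceleration then follow from finite differences across frames:
#   v = (c_t - c_{t-1}) / dt,   a = (v_t - v_{t-1}) / dt
```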
Re: Aerial Camera for FIRST matches
It would be a lot easier, and cheaper, to implement an LPS instead of trying to extrapolate global position from a camera.
The camera would need to be fixed, so mounting it on a quadcopter is a no-go if you want accuracy, unless you have some way to track the position of the quadcopter relative to a reference point on the field. A single camera will skew the image, so distances will only be accurate if the camera is directly overhead. Plus, lighting conditions and reflective materials unique to each venue will make each site behave differently. I do not believe a universal system for all fields at all events can be accomplished in this manner.

An LPS is a local positioning system; it works much like GPS but on a smaller scale. Beacons placed at known locations around the field perimeter each have a unique ID and broadcast the time. A receiver on the robot can calculate the distance between itself and a beacon from the signal's time of flight (it knows the time the signal was sent, because that is in the data, and it knows the current time). The signal is transmitted via RF, and as such the approach is commonly referred to as WLPS (wireless local positioning system). Multiple beacons allow for triangulation.

If multiple WiFi access points were added to the field, that would be all you needed to set this up. You can do this in many different spectrums with great distance and accuracy, and different systems can be used depending on whether you are indoors or outdoors, and on maximum distance. Google's indoor maps use WiFi signal-strength triangulation from known hotspot locations; for FRC, you could instead use Bluetooth beacons, which would not interfere with the 802.11 protocol we currently use for robot control.

One of the problems with this system would be reducing TTFF (time to first fix): to make it fair, a match couldn't start until each robot was synced and triangulated.

Just a thought,
Kevin
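The time-of-flight and multi-beacon math can be sketched as follows. This is a hedged illustration: the beacon positions, field dimensions, and one-way timing model are made-up assumptions, and real RF ranging of this kind needs nanosecond-grade clock synchronization that the sketch glosses over:

```python
import numpy as np

C = 299_792_458.0  # RF propagation speed, metres per second

def tof_distance(t_sent: float, t_received: float) -> float:
    """Beacon-to-receiver distance from one-way time of flight."""
    return C * (t_received - t_sent)

def trilaterate(beacons, distances):
    """Least-squares 2-D position from three or more beacons at known (x, y)."""
    beacons = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the first range equation from the rest to linearize:
    #   2(x_i - x_0)x + 2(y_i - y_0)y = d_0^2 - d_i^2 + |b_i|^2 - |b_0|^2
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Beacons at three corners of a 16 m x 8 m field; receiver truly at (4, 3):
beacons = [(0, 0), (16, 0), (0, 8)]
truth = np.array([4.0, 3.0])
dists = [np.linalg.norm(truth - np.array(p)) for p in beacons]
print(trilaterate(beacons, dists))  # recovers ~[4. 3.]
```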
Re: Aerial Camera for FIRST matches
If you want to determine the track of an object, i.e., where the object might go next, then you can only *estimate* the track based on current heading, velocity, etc., using probability theory and other a priori knowledge. An extended Kalman filter will help you out in this scenario, but it will never be perfectly accurate.

Consider programming an autonomous car: you need to track the other cars around you, their positions and velocities. Say you want to change lanes; how do you determine that another car is not switching into that same lane at that same moment? No a priori knowledge can tell you whether that car will instantly change course. There is no way to do this with 100% accuracy unless there is communication between all the cars. Without that communication, the best you can do is predict with some level of certainty less than 100%.

Regards,
Kevin
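The estimation point can be made concrete with a minimal constant-velocity Kalman predict step (a plain linear filter, simpler than the extended filter mentioned above; the noise values and state are made-up example numbers). It shows how lag turns directly into a confidence interval on position:

```python
import numpy as np

# Predict where a tracked robot probably is after dt seconds of lag.
# All noise magnitudes here are illustrative assumptions.

dt = 0.1                       # 100 ms of measurement lag
F = np.array([[1, 0, dt, 0],   # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
Q = np.eye(4) * 0.05           # process noise: how far the target may deviate

x = np.array([2.0, 1.0, 5.0, 0.0])   # last estimate: (2, 1) m, 5 m/s along x
P = np.eye(4) * 0.01                 # covariance of that estimate

x_pred = F @ x                 # predicted state: position advances to (2.5, 1)
P_pred = F @ P @ F.T + Q       # uncertainty grows; this is the confidence band

print(x_pred[:2])              # [2.5 1. ]
sigma = np.sqrt(np.diag(P_pred)[:2])
print(sigma)                   # roughly 0.25 m one-sigma uncertainty per axis
```

The filter cannot promise where the robot is; it can only say where it probably is, which is exactly the "certainty less than 100%" in the post.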
Re: Aerial Camera for FIRST matches
Imagine that the black rectangle exactly contains the field. All I would be doing is finding the position of the robot with respect to the black rectangle, so it doesn't really matter exactly where the camera is overhead. Yes, it would alter the values some, but not by much. I do agree with the lighting-conditions comment; that could be a problem.

As for your other idea, that would be the most ideal, but it requires other teams to participate. I want to do this project without having to ask other teams to alter their robots or do any extra work. I could easily see your idea being implemented and used to great success, but it requires other teams to play along.
Re: Aerial Camera for FIRST matches
I assume you would want to calculate the lateral distance between objects in the frame, because just knowing that object 1 is at pixel (x1, y1) and object 2 is at (x2, y2) isn't of any value unless you have determined the proper scaling factor (distance per pixel). It is a lot easier to mount the camera in a fixed spot at a constant distance than to assume you can keep an object of known size in the field of view at all times. Note again: even if the field stays in view at all times, as the quadcopter drifts around in its hover, the skew of the field perimeter lines will change, making the scaling factor inaccurate and any distance between objects more inaccurate.
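The scaling-factor point above can be sketched in a few lines; the field width and pixel counts are illustrative assumptions for a fixed overhead camera, not real calibration values:

```python
# With a fixed camera you calibrate inches-per-pixel once; with a hovering
# camera this factor drifts. Numbers below are made-up example values.

FIELD_WIDTH_IN = 324.0    # e.g. a 27 ft wide field, in inches
FIELD_WIDTH_PX = 1080.0   # pixels the field spans in the fixed image

scale = FIELD_WIDTH_IN / FIELD_WIDTH_PX   # inches per pixel

def lateral_distance(p1, p2):
    """Straight-line distance in inches between two image points."""
    dx = (p2[0] - p1[0]) * scale
    dy = (p2[1] - p1[1]) * scale
    return (dx * dx + dy * dy) ** 0.5

print(lateral_distance((100, 100), (400, 500)))  # 500 px apart -> 150 inches
```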
I assume you are stating this because you would like to know the location of all other objects. That is true, you wouldn't; but even if you did, that information would be useful yet would not let you navigate without local obstacle avoidance. In either system, you still need to perform local object detection, because neither system can guarantee you won't collide with another non-stationary object: the overhead camera cannot determine where a non-stationary object is going next. So if you must develop local obstacle avoidance anyway, you should be able to navigate successfully without other teams broadcasting their locations as well.

However you plan to implement it, cool project. Good luck,
Kevin
Re: Aerial Camera for FIRST matches
The "end result" of knowing where everything is on the field is path planning. If you're interested: https://www.dropbox.com/sh/uvmzxrgz8...Bz8k6p_pmR_Zua
All we need to know is where the things are on the field in some coordinate system; then we input their coordinates into our path finding as obstacles. Right now I can track them using a depth camera: I do a linear transformation from the camera's coordinates (3-D coordinates with the camera as the origin) to the field coordinates (with the bottom-left corner as the origin). Using an aerial camera eliminates the need for the depth map, which means one less sensor on our robot. And as a bonus, it can see the whole field, unlike a depth camera.

As for your idea, we don't need it, though it is clever. For the past three years, we have been able to calculate where we are on the field solely from the vision tapes. (See also http://www.chiefdelphi.com/media/photos/38819: this is a pose estimation. It knows where we are in three dimensions with respect to the center of the top hoop, as well as how rotated we are in pitch, roll, and yaw.) I still want to try out your method, though; I see extreme value in it. The only downside is that you'd have to set it up at competition, which could be problematic.
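The camera-to-field transformation described above might look like this in 2-D; the rotation angle and camera position are made-up example values, not the team's actual calibration:

```python
import numpy as np

# Map a point measured in the camera's frame into field coordinates:
# a rotation (camera yaw relative to the field) plus a translation
# (where the camera sits in the field frame). Both are assumptions.

theta = np.deg2rad(90.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
camera_origin_in_field = np.array([2.0, 5.0])

def camera_to_field(p_camera):
    """Transform a 2-D camera-frame point into the field frame."""
    return R @ np.asarray(p_camera, dtype=float) + camera_origin_in_field

print(camera_to_field([3.0, 0.0]))   # 3 m ahead of the camera -> (2, 8)
```

The same idea extends to 3-D with a 3x3 rotation (or a 4x4 homogeneous matrix), which is presumably what the depth-camera pipeline does.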
Re: Aerial Camera for FIRST matches
If this is the case, then once the goals are out of frame, you can no longer determine where you are in the world, correct? How do you plan to do something similar from pixel values alone, without either a fixed camera distance or a fixed object of known dimensions in the frame? I don't know of a method that uses only pixel location without knowing the distance to the object, or without keeping a fixed-dimension object in frame to determine the scaling value; you need one of those to calculate global position.

Also, depending on your camera and field of view, the reason I keep bringing up skew is that going from local to global coordinates is not linear: the edges of the frame will have a different (skewed) distance per pixel than the center of the image. As long as you stay focused on the center of the image, you can use the small-angle approximation to linearize distance per pixel.

Keep us posted on the project.

Regards,
Kevin
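The skew point can be made concrete with a homography. The sketch below fits a standard DLT solve to four corner correspondences (all the pixel coordinates and the 16 m x 8 m field size are made-up assumptions), then compares the ground distance covered by the same 10-pixel step at the image centre versus near the edge:

```python
import numpy as np

def homography(pixel_pts, field_pts):
    """Solve H such that field ~ H @ pixel (homogeneous), via DLT."""
    A = []
    for (u, v), (x, y) in zip(pixel_pts, field_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)   # null vector of A, reshaped to 3x3

def to_field(H, uv):
    """Apply H to a pixel and dehomogenize to field metres."""
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]

# Four pixel corners of a tilted view of a 16 m x 8 m field (example values):
pixel = [(100, 80), (540, 80), (620, 470), (20, 470)]
field = [(0, 8), (16, 8), (16, 0), (0, 0)]
H = homography(pixel, field)

# Same 10-pixel horizontal step, measured in metres on the ground:
centre = np.linalg.norm(to_field(H, (330, 275)) - to_field(H, (340, 275)))
edge = np.linalg.norm(to_field(H, (60, 100)) - to_field(H, (70, 100)))
print(centre, edge)  # metres-per-10-px differs between centre and edge
```

This is why a single fixed scale factor only works near the image centre (the small-angle regime); elsewhere you need the full projective mapping.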
Copyright © Chief Delphi