Re: Vision Targeting for Aerial Assist 2014
I am seeing at least two strands of assumptions in this thread.
Identifying specific objects and tracking them under ideal conditions is a very interesting, worthwhile, and fairly achievable project within the skill set of many capable teams. Doing so in the real world on a robot, in competition, with intervening fast-moving objects, variable lighting, possible network issues (if processing on the driver station), another team playing heavy defense on you, and then relying on the data to perform a critical dynamic function on your robot is a totally different scale of problem.
Re: Vision Targeting for Aerial Assist 2014
Since autonomous is the only part of the match where a single robot can score competitively by itself, I think it will be even more important than usual to score the maximum possible in auto. At weaker district and regional events, teams that can score their alliance partners' balls in the hot goal will be far ahead of most other teams. To me, vision tracking of balls and hot goals in auto is a top priority.
Re: Vision Targeting for Aerial Assist 2014
Quote:
Video of Simbotics here: https://www.youtube.com/watch?v=4l-Kq_tZ8cA
Re: Vision Targeting for Aerial Assist 2014
Quote:
How do you plan to track other robots?
Re: Vision Targeting for Aerial Assist 2014
Quote:
The approach I am taking is a depth camera. The ball should return a sphere whose closest point is (theoretically) its center, found via an nth-order moment calculation. With that known, you can do a couple of things. If the object you are looking at isn't a circle, it isn't a ball; simple enough, right? I think so. Another check is to take the moment of a given contour (or just a segment of the screen if Canny is chosen), then verify that the centroid is indeed the closest point within the whole contour. I'm going to bet that for most non-ball objects it isn't, which two mentors and I agreed was another fair assumption. Then you can calculate its velocity by recording its position relative to you in the previous frame, calculating it again for the current frame, and doing vector math, since you can calculate how much time passed between the two frames. The same math applies for calculating the velocity of the ball.
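The two checks described above (centroid-is-closest-point, and finite-difference velocity between frames) can be sketched with plain NumPy. This is a hypothetical illustration, not the poster's actual code; the function names, the pixel tolerance, and the synthetic depth values are my own assumptions.

```python
import numpy as np

def looks_like_ball(depth, mask, tol_px=3.0):
    """Sphere test from the post: for a ball facing the camera, the
    closest depth sample inside the object's mask should sit at (or
    very near) the mask's centroid from the first image moments."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return False
    cx, cy = xs.mean(), ys.mean()     # centroid (M10/M00, M01/M00)
    k = np.argmin(depth[ys, xs])      # closest point inside the mask
    return float(np.hypot(xs[k] - cx, ys[k] - cy)) <= tol_px

def velocity(p_prev, p_curr, dt):
    """Finite-difference velocity between two frames: the vector math
    the post describes, (position now - position before) / elapsed time."""
    return (np.asarray(p_curr, float) - np.asarray(p_prev, float)) / dt
```

For a tilted flat surface the nearest point lands on an edge of the mask, far from the centroid, so the test rejects it; for a sphere facing the camera the two coincide.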
Re: Vision Targeting for Aerial Assist 2014
That's true, you would be hindered. But if something is blocking part or all of a ball, then you can't do autonomous retrieval via path planning anyway, because a robot is in the way. I don't know; I haven't put much thought into ball or robot tracking yet. I'm still focusing on tracking the vision tapes, because in our team's eyes that is the most important vision task of the competition. Priorities. Also, I wouldn't even trust my own program to retrieve a ball multiple yards away. In my mind, ball tracking will be used when the driver gets close to a ball; then software can take over for the last meter or so. It would be cool to see a robot play fetch with itself, throwing the ball into the high goal every time. We might do that for a school assembly... an idea for another day.
Re: Vision Targeting for Aerial Assist 2014
You guys are making this too complicated.
In autonomous, you need to know which goal is hot to score the maximum number of points for that ball. Alternatively, you can get 15 points by scoring on a non-hot goal (or possibly 20 points if you're lucky). You already know, to the nearest inch or two, where everything else is on your side of the field, provided your drive team knows how to set up the robot (training them is far easier than writing code around this). If your intake device needs higher precision than that, you will have a very hard time playing this game.

If you choose to be a blocker, you could try to find robots by moving along the wall until you find one. There's a possibility that they are turned, which you can't reliably know until you see the ball's path, and at that point it's probably too late to block it. It's also possible that the one robot you find is not scoring in autonomous.

During teleop, you have these guys who drive the robot, who know how to drive robots. In the time it takes you to calibrate your vision system on the robot, you could train them to play the game faster, with more strategic input from the coach than an autonomous system could provide.

IMHO: find the hot goals, and train your drivers.
Re: Vision Targeting for Aerial Assist 2014
That may be true, but what if a ball-retrieving algorithm is more time-efficient than a human doing it? FIRST is about fostering creativity and wonder about science, technology, math, and engineering in students (and mentors; I've seen mechanical mentors inspired by a program and then found them playing around with an Arduino. You're never too old to learn, especially in a STEM career). I personally would rather build a complex robot, learn a lot, and perform mid- to bottom-table than build a simple robot that relies on driver skill and makes it to eliminations. That's just me, though; to each their own.

My sophomore year I wrote my first program, the team's vision program. I finished it a week before our first regional, and we had five days to implement it on our practice robot. We had communication issues with the field, and maybe a short somewhere in the wiring to our shooter, that prevented us from spinning our wheels at the appropriate speed. We didn't win any matches on our own; our alliance partners won them for us. So a philosopher could argue that the program I wrote was for nothing. But it wasn't. I am now deciding between going to WashU or MIT to study computer science with an emphasis on computer vision (with medical applications in mind). Because my team decided to do something complex and unique, it changed my life.
Re: Vision Targeting for Aerial Assist 2014
Quote:
However, there are plenty of ways to go above and beyond in programming that will be FAR more beneficial to your team than automated path planning for ball retrieval. Can your robot autonomously turn in place to within a degree of the desired angle? Can you keep track of where your robot is (in a field-centric coordinate system) as it drives? If you give your robot a series of waypoints to follow in autonomous mode, can it drive to each of them within a couple of inches, even if you are bumped? Do you have a way to provide such waypoints to your robot that doesn't require re-compiling and downloading code? These may sound simple, but they aren't. Being able to do any of the above places you in the 90th percentile of teams in programming. As it turns out, you would probably need each of these capabilities anyway if you wanted to be fully automated and do path planning. So start simple and build from there. See how far you get. Even if you don't achieve your grandiose vision, at least you will have developed some useful (and still pretty cool) capabilities along the way.
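The middle two capabilities above (field-centric pose tracking and waypoint following) reduce to simple dead-reckoning math. Here is a hedged sketch for a tank drive, assuming per-loop encoder distance deltas; the function names, track width, and tolerance values are illustrative assumptions, not any team's actual code.

```python
import math

def update_pose(x, y, heading, d_left, d_right, track_width):
    """One dead-reckoning step in field-centric coordinates: the
    average of the two encoder deltas is forward travel, and their
    difference over the track width is the heading change.
    Integrating along the midpoint heading reduces arc error."""
    d = (d_left + d_right) / 2.0
    dtheta = (d_right - d_left) / track_width
    mid = heading + dtheta / 2.0
    return x + d * math.cos(mid), y + d * math.sin(mid), heading + dtheta

def at_waypoint(x, y, wx, wy, tol_m=0.05):
    """Waypoint reached when within a couple of inches (~5 cm here)."""
    return math.hypot(wx - x, wy - y) <= tol_m
```

Running `update_pose` every control loop maintains a running field-centric estimate; in practice a gyro heading can replace the encoder-derived rotation for better accuracy.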
Re: Vision Targeting for Aerial Assist 2014
Personally my priority list would be the following:
Priority 1: Find / range the goal. Missing shots will be very costly.
Priority 2: Detect / find a second or missed ball.
Priority 3: Detect a goalie. Same reason as Priority 1.
Priority 4: Detect the hot goal. You have a 50% chance of hitting it anyway.
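For Priority 4: the 2014 hot-goal indicator is a wide horizontal strip of retroreflective tape that is exposed only while the goal is hot, next to a tall static vertical strip. A crude classifier over the bounding boxes of thresholded bright blobs could look like the sketch below; the aspect-ratio threshold is my own assumption, not an official number.

```python
def goal_is_hot(boxes, min_aspect=3.0):
    """Given (width, height) bounding boxes of bright blobs from a
    thresholded camera frame, report hot if any blob is much wider
    than it is tall (the horizontal hot-goal tape). The static
    vertical tape is taller than wide and is ignored."""
    return any(h > 0 and w / h >= min_aspect for w, h in boxes)
```

For example, a frame containing a 120x20 px horizontal blob plus a 15x100 px vertical blob reads as hot, while a frame with only the vertical blob does not.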
Re: Vision Targeting for Aerial Assist 2014
Quote:
http://www.youtube.com/watch?v=JqnwC3eHdZ0 This is merely tracking a primitive solid-color rectangle, and I can say with 100% confidence there is nothing elementary about it. There is nothing elementary about figuring out all of the tricks needed to make that tracking work in a real-time environment. If you do get around to tracking robots, please do like I did and post a demo of it in action. As Tim Jenison once said to me as I tried and failed, "real video is a dirty world." I admire your persistence and I hope you can pull it off... good luck! ;) I do have a few questions for you: what kind of tracking setup are you going to use (e.g. Raspberry Pi, Axis M1011, M1013, etc.)? Will it be on-board, or sent over the network and processed on the driver station?
Re: Vision Targeting for Aerial Assist 2014
Quote:
We are going the route of three 120-degree cameras for a complete view of the field. A case-based pose calculation will be done to compute x, y, and z displacement plus pitch, roll, and yaw, with respect to the middle of the wall at the floor. There are three cases: left corner visible, right corner visible, and both. We are also going to have an ASUS Xtion to do robot and ball detection, maybe two; we're still unsure, and we are prioritizing our tasks in case we run out of time. The Xtions will be for ball and robot detection, and then we can measure the velocity of whatever we see (assuming we aren't moving, and neither is everything around us). I'm going to be handing off some of the code that tracks the vision tape in the next few days. I taught another student some computer vision and he is starting to work on depth tracking with the Xtion. A concern we have with the Xtion is lighting in the arena: the depth works by projecting a pattern of IR dots, but if there is too much ambient light, the depth won't work, so it's a gamble.
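A full six-degree-of-freedom solve like the one described would typically use a PnP-style algorithm on the tape corners. As a much simpler hedged sketch, a pinhole-camera model already gives range and bearing from a single detected tape rectangle; the focal length, widths, and function names below are assumed values for illustration, not this team's actual pipeline.

```python
def distance_to_target(real_width_m, pixel_width, focal_px):
    """Pinhole model via similar triangles:
    distance = focal_length * real_width / apparent_width."""
    return focal_px * real_width_m / pixel_width

def bearing_to_target(cx_px, image_width_px, hfov_deg):
    """Approximate horizontal bearing from the target's pixel offset,
    treating the horizontal field of view as linear in pixels."""
    return (cx_px - image_width_px / 2.0) * hfov_deg / image_width_px
```

With an assumed 600 px focal length, a 0.6 m wide target spanning 60 px sits about 6 m away; a target centroid at x = 480 in a 640 px frame with a 60-degree field of view is roughly 15 degrees to the right.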
Copyright © Chief Delphi