Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Vision Targeting for Aerial Assist 2014 (http://www.chiefdelphi.com/forums/showthread.php?t=123994)

mechanical_robot 04-01-2014 14:35

Vision Targeting for Aerial Assist 2014
 
Alright, what are some ideas for vision targeting this year for the 2014 game? My idea was to use the hot zones as targets during autonomous.

What are your ideas? I'm currently looking at the game/arena setup PDF.

Hypnotoad 04-01-2014 14:49

Re: Vision Targeting for Aerial Assist 2014
 
It seemed pretty clear that that was the intention of the hot zone from the very beginning.

faust1706 04-01-2014 19:28

Re: Vision Targeting for Aerial Assist 2014
 
Track the hot zone; track the balls (calculate their speed and drive to where they will be, autonomously); track another robot and pass the ball so it meets them on their path; track opposing robots so you can autonomously move around them; track the vision tape on the wall and index from that.

Not going to lie, this is very elementary stuff.

N00bfirst 04-01-2014 20:50

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1320719)
Track the hot zone; track the balls (calculate their speed and drive to where they will be, autonomously); track another robot and pass the ball so it meets them on their path; track opposing robots so you can autonomously move around them; track the vision tape on the wall and index from that.

Not going to lie, this is very elementary stuff.

Not for those of us who are new to this.

RufflesRidge 04-01-2014 21:23

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by N00bfirst (Post 1320863)
Not for those of us who are new to this.

http://wpilib.screenstepslive.com/s/3120/m/8731

N00bfirst 04-01-2014 22:00

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by RufflesRidge (Post 1320908)

wow...

thanks a lot!

MikeE 04-01-2014 22:31

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1320719)
Track the hot zone; track the balls (calculate their speed and drive to where they will be, autonomously); track another robot and pass the ball so it meets them on their path; track opposing robots so you can autonomously move around them; track the vision tape on the wall and index from that.

Not going to lie, this is very elementary stuff.

I agree that determining from the vision target if one of the autonomous goals is hot from a relatively defined stationary position is an easy task with already published examples.

But I don't see how you can possibly argue that tracking other robots and/or flying ball(s) is elementary. That betrays either a misunderstanding of how difficult it really is, or such a deep mastery of computer vision that you've forgotten how difficult it is to the inexperienced. I'll assume the latter.
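For reference, the hot-goal check really is the small half of the problem. A hedged sketch of the idea: in 2014 each goal had a static vertical strip of retroreflective tape and a horizontal strip that was only presented while the goal was hot, so once your tape blobs are detected, "hot" reduces to spotting a wide-and-short target. The tape dimensions in the comments are from memory of the game manual; verify them against the official drawings.

```python
# Classify a goal as hot based on the bounding boxes of detected tape blobs.
# As I recall the 2014 field: static target roughly 4 in x 32 in (tall),
# dynamic hot-goal target roughly 23.5 in x 4 in (wide). Verify before use.

def is_horizontal_target(width_px: float, height_px: float) -> bool:
    """The hot-goal target is much wider than tall; the static one is the opposite."""
    return width_px > 3.0 * height_px

def goal_is_hot(bounding_boxes) -> bool:
    """bounding_boxes: (w, h) in pixels for each detected tape blob."""
    return any(is_horizontal_target(w, h) for (w, h) in bounding_boxes)

# Static target only -> not hot; static plus horizontal target -> hot.
print(goal_is_hot([(12, 90)]))            # False
print(goal_is_hot([(12, 90), (70, 12)]))  # True
```

The hard part MikeE describes (moving targets, occlusion, lighting) starts after this point.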

faust1706 05-01-2014 11:18

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by MikeE (Post 1321022)
I agree that determining from the vision target if one of the autonomous goals is hot from a relatively defined stationary position is an easy task with already published examples.

But I don't see how you can possibly argue that tracking other robots and/or flying ball(s) is elementary. That betrays either a misunderstanding of how difficult it really is, or such a deep mastery of computer vision that you've forgotten how difficult it is to the inexperienced. I'll assume the latter.

I'm going to finish it as soon as I can so I can then outsource the code. If you train a cascade on the object, then you can calculate distance via stereo or depth. You can "measure" the distance traveled between 2 frames for the ball or robot, along with the distance on screen. For size, you can assume that the frame dimensions will be almost identical for every robot (or you could save each robot's dimensions when training a cascade on them). You can measure the velocities in the other two axes, then simply add all 3 and now you know its velocity. Then you know how fast you are going to roll the ball at them, so you do a vector problem. This does assume that they will continue at their current speed.

The ball is fairly easy. You already know how big the ball is (and therefore its diameter). You could simply use the apparent size of the ball to calculate how far away it is. It would take some testing, but it is very doable. Then the same math applies as in the previous example.
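The size-to-distance idea above boils down to the pinhole camera model: apparent size shrinks linearly with range. A minimal sketch, where the focal length in pixels and the ball diameter are assumed calibration values rather than official numbers:

```python
# Distance from apparent size via the pinhole model. Both constants are
# assumptions for illustration: measure your own camera against the ball
# at a known range to calibrate the focal length in pixels.

BALL_DIAMETER_IN = 24.0   # the 2014 game ball is roughly two feet across
FOCAL_LENGTH_PX = 700.0   # assumed calibration value, not a real camera spec

def distance_to_ball(pixel_diameter: float) -> float:
    """Pinhole model: range = real_size * focal_px / apparent_size_px."""
    return BALL_DIAMETER_IN * FOCAL_LENGTH_PX / pixel_diameter

# With these constants, a ball spanning 100 px is 24 * 700 / 100 = 168 in away.
print(distance_to_ball(100.0))
```

The "some testing" faust1706 mentions is exactly the calibration of `FOCAL_LENGTH_PX`.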

mechanical_robot 05-01-2014 11:38

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1320719)
Track the hot zone; track the balls (calculate their speed and drive to where they will be, autonomously); track another robot and pass the ball so it meets them on their path; track opposing robots so you can autonomously move around them; track the vision tape on the wall and index from that.

Not going to lie, this is very elementary stuff.

Never asked or said anything about how easy or hard this all was.

yash101 05-01-2014 12:22

Re: Vision Targeting for Aerial Assist 2014
 
Take a look at 341's vision paper. It's well written and has what you need to learn! While the programming isn't necessarily the easiest part of vision, the algorithm still seems quite simple to work with!

yash101 05-01-2014 12:37

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1321398)
I'm going to finish it as soon as I can so I can then outsource the code. If you train a cascade on the object, then you can calculate distance via stereo or depth. You can "measure" the distance traveled between 2 frames for the ball or robot, along with the distance on screen. For size, you can assume that the frame dimensions will be almost identical for every robot (or you could save each robot's dimensions when training a cascade on them). You can measure the velocities in the other two axes, then simply add all 3 and now you know its velocity. Then you know how fast you are going to roll the ball at them, so you do a vector problem. This does assume that they will continue at their current speed.

The ball is fairly easy. You already know how big the ball is (and therefore its diameter). You could simply use the apparent size of the ball to calculate how far away it is. It would take some testing, but it is very doable. Then the same math applies as in the previous example.

The math involved is quite simple. You know the diameter of the ball in real life, and you can get the diameter of the ball in the image from the number of pixels it is wide. Use convexHull to find the balls and process them. You will need a reference image showing how large the ~25-inch ball appears at a known distance; you can use that to extrapolate the size of and distance to the ball. That in turn lets you estimate its speed and acceleration.

This is simpler than it seems. Have you gotten OpenCV installed, with Python bindings or with C++ on Windows?

Read 341's vision whitepaper!!!

topgun 05-01-2014 12:41

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1321398)
I'm going to finish it as soon as I can so I can then outsource the code. If you train a cascade on the object, then you can calculate distance via stereo or depth. You can "measure" the distance traveled between 2 frames for the ball or robot, along with the distance on screen. For size, you can assume that the frame dimensions will be almost identical for every robot (or you could save each robot's dimensions when training a cascade on them). You can measure the velocities in the other two axes, then simply add all 3 and now you know its velocity. Then you know how fast you are going to roll the ball at them, so you do a vector problem. This does assume that they will continue at their current speed.

The ball is fairly easy. You already know how big the ball is (and therefore its diameter). You could simply use the apparent size of the ball to calculate how far away it is. It would take some testing, but it is very doable. Then the same math applies as in the previous example.

If you didn't outsource the code before Kickoff, you are too late and you can't use it on your 2014 robot:

R13: ROBOT elements created before Kickoff are not permitted. ROBOT elements, including software, that are designed before Kickoff are not permitted, unless they or their source files are publicly available prior to Kickoff.

I hope that you did outsource it before Kickoff, as I am interested in seeing your techniques as a learning tool for my team's programmers. Please include a publicly available link to your source code.

Hypnotoad 05-01-2014 13:23

Re: Vision Targeting for Aerial Assist 2014
 
I doubt there will be much practical use in tracking anything but the hot zone. Anything else and you're adding unreliable gimmicks to your code. Letting the drivers practice for a few extra days at picking up and shooting is most likely much more beneficial than a ball tracking algorithm.

What I'm worried about is the brightness of the LED strips around the hot zone. It seems like they will be a bit dim to track. If anyone has seen stills of teams using the reflective tape, you will see that the LEDs have to be blindingly bright for the targets to be that easy to distinguish.

RufflesRidge 05-01-2014 13:24

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by Hypnotoad (Post 1321514)
What I'm worried about is the brightness of the LED strips around the hot zone. It seems like they will be a bit dim to track. If anyone has seen stills of teams using the reflective tape, you will see that the LEDs have to be blindingly bright for the targets to be that easy to distinguish.

Then why not track the reflective targets?

http://www.youtube.com/watch?v=8-vZm...ru_5& index=3

Hypnotoad 05-01-2014 13:33

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by RufflesRidge (Post 1321515)
Then why not track the reflective targets?

http://www.youtube.com/watch?v=8-vZm...ru_5& index=3

Huh. I totally missed that part while looking through the rulebook. I guess I'll read through it one more time. That's one less problem I have to deal with, then. Thanks!

MikeE 10-01-2014 14:43

Re: Vision Targeting for Aerial Assist 2014
 
I am seeing at least two strands of assumptions in this thread.

Identifying specific objects and tracking them under ideal conditions is a very interesting, worthwhile, and fairly achievable project within the skill set of many capable teams.

Doing so in the real world on a robot, in competition, with intervening fast moving objects, variable lighting, possible network issues (if DS processing), another team playing heavy defense on you, then relying on the data to perform a critical dynamic function on your robot is a totally different scale of problem.

Justin Shelley 10-01-2014 18:29

Re: Vision Targeting for Aerial Assist 2014
 
Since auto is the only part where a single robot can score competitively by itself, I think it will be even more important than usual to score the max possible in auto. At weaker districts and regionals, teams that can score their alliance partners' balls in the hot goal will be far ahead of most other teams. To me, vision tracking of balls and hot goals in auto is a top priority.

faust1706 12-01-2014 14:09

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by Justin Shelley (Post 1325215)
Since auto is the only part where a single robot can score competitively by itself, I think it will be even more important than usual to score the max possible in auto. At weaker districts and regionals, teams that can score their alliance partners' balls in the hot goal will be far ahead of most other teams. To me, vision tracking of balls and hot goals in auto is a top priority.

Look at what Simbotics did in Logomotion, 2011. They would hang their uber tube and their partner's. I believe I saw them hang all 3, but I could be mistaken. A team on Einstein that year hung their partner's tube as well. It was crazy and ingenious, because a lot of teams had trouble with autonomous that year (at the regional I went to and on Galileo), us included for the first few matches. But this year autonomous mode is only 10 seconds, so it will be more difficult, though not impossible.

video of simbotics here: https://www.youtube.com/watch?v=4l-Kq_tZ8cA

bs7280 12-01-2014 19:27

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1320719)
Track the hot zone; track the balls (calculate their speed and drive to where they will be, autonomously); track another robot and pass the ball so it meets them on their path; track opposing robots so you can autonomously move around them; track the vision tape on the wall and index from that.

Not going to lie, this is very elementary stuff.


How do you plan to track other robots?

SoftwareBug2.0 12-01-2014 21:37

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by bs7280 (Post 1326095)
How do you plan to track other robots?

He seems to be assuming that all robots are rectangles with bumper colors in the obvious locations. If that's the case and he goes through with his plan for an autonomous robot, then I'll look forward to seeing what his robot does when going up against a robot that looks like this: http://www.idleloop.com/frctracker/p.../2013/2972.jpg

faust1706 13-01-2014 00:15

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by SoftwareBug2.0 (Post 1326170)
He seems to be assuming that all robots are rectangles with bumper colors in the obvious locations. If that's the case and he goes through with his plan for an autonomous robot, then I'll look forward to seeing what his robot does when going up against a robot that looks like this: http://www.idleloop.com/frctracker/p.../2013/2972.jpg

Eh. There are multiple ways to track other robots. One is cascade training, but that would require me, or someone else, going around to every other robot at our regional and taking a multitude of pictures of them, which might not be welcomed. Also, cascades are notoriously slow in terms of algorithm speed.

The approach I am taking is a depth camera. The ball is going to return a sphere with the closest point (theoretically) being its center, via an nth-order moment calculation. So, with this known, you can do a couple of things. If the object you are looking at isn't a circle, it isn't a ball; simple enough, right? I think so. Another thing to do is to take the moment of a given contour (or just a segment of the screen if Canny is chosen), then check whether the center is indeed the closest point within the whole contour. I'm going to bet it isn't, which two mentors and I agreed was another fair assumption.

Then you can calculate its velocity by recording its position relative to you in the previous frame, calculating it for the current frame, and doing vector math, because you can calculate how much time passed between the two frames. The same math applies for calculating the velocity of the ball.
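The two-frame velocity estimate and the "vector problem" for leading a pass can be sketched in a few lines. Positions here are (x, y) in field units relative to the camera, dt is the time between frames, and all the numbers are made up for illustration:

```python
# Two-frame velocity estimate plus constant-velocity lead prediction,
# as described in the posts above. Same caveat as faust1706's: this
# assumes the target keeps its current velocity.

def velocity(p_prev, p_now, dt):
    """Displacement between two frames divided by the frame interval."""
    return ((p_now[0] - p_prev[0]) / dt, (p_now[1] - p_prev[1]) / dt)

def predict(p_now, vel, t):
    """Where the target will be in t seconds at its current velocity."""
    return (p_now[0] + vel[0] * t, p_now[1] + vel[1] * t)

# Partner robot seen at (10, 0), then at (10, 2) a third of a second later.
v = velocity((10.0, 0.0), (10.0, 2.0), 1.0 / 3.0)
aim = predict((10.0, 2.0), v, 0.5)   # lead the pass by half a second
print(v, aim)
```

The hard part, as MikeE notes upthread, is getting reliable `p_prev`/`p_now` measurements in the first place, not this arithmetic.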

Alan Anderson 13-01-2014 09:34

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1326264)
If the object you are looking at isn't a circle, it isn't a ball, simple enough, right? I think so.

What if your view of the ball is obscured by a piece of robot? It won't look like a circle then.

faust1706 13-01-2014 09:42

Re: Vision Targeting for Aerial Assist 2014
 
That's true, you would be hindered. But if something is blocking part or all of a ball, then you can't do autonomous retrieval via path planning anyway, because a robot is in the way. I don't know. I haven't put much thought into ball or robot tracking yet; I'm still focusing on tracking the vision tapes, because that is the most important vision task of the competition in our team's eyes. Priorities. Also, I wouldn't even trust my own program for retrieving a ball multiple yards away. In my mind, ball tracking will be used when the driver gets close to a ball; then software can take over for the last meter or so. It would be cool to see a robot play fetch with itself, throwing the ball into the high goal every time. We might do that for a school assembly... an idea for another day.

apalrd 13-01-2014 09:45

Re: Vision Targeting for Aerial Assist 2014
 
You guys are making this too complicated.

In autonomous, you need to know which goal is hot to score the maximum number of points for that ball. Alternatively, you can settle for 15 points by scoring on a non-hot goal (or possibly 20 points if you're lucky).

You already know precisely (to the nearest inch or two) where everything else on your side of the field is, if your drive team knows how to set up the robot (training them is far easier than writing code around this). If your intake device needs higher precision than that, then you will have a very hard time playing this game.

If you choose to be a blocker, you could try to find robots by moving along the wall until you find one. There's a possibility that they are turned, which you can't reliably know until you see the ball's path, and by then it's probably too late to block it. It's also possible that the one robot you find is not scoring in autonomous.

During teleop, you have these guys who drive the robot, who know how to drive robots. In the time it takes you to calibrate your vision system on the robot, you could train them to play the game faster, with more strategic input from the coach than an autonomous system could provide.

IMHO, find the hot goals, and train your drivers.

faust1706 13-01-2014 09:59

Re: Vision Targeting for Aerial Assist 2014
 
That may be true, but what if a ball-retrieving algorithm is more efficient time-wise than a human doing it? FIRST is about fostering creativity and wonder about science, technology, math, and engineering in students (and mentors; I've seen a few mechanical mentors inspired by a program, and then I see them playing around with Arduino. You're never too old to learn, especially in a STEM career).

I personally would rather make a complex robot, learn a lot, and perform mid-to-bottom table than make a simple robot that relies on driver skill and makes it to eliminations. That's just me, though. To each their own.

My sophomore year I wrote my first program, the team's vision program. I finished it a week before our first regional, and we had 5 days to implement it on our practice robot. We had communication issues to the field, and maybe a short somewhere in the wiring to our shooter that prevented us from spinning our wheels at the appropriate speed. We didn't win any matches on our own; our alliance partners won them for us. So the philosopher could argue that the program I wrote was for nothing. But it wasn't. I am now deciding between going to Wash U or MIT and studying computer science with an emphasis on computer vision (with medical applications in mind). Because my team decided to do something complex and unique, it changed my life.

Sparkyshires 13-01-2014 10:53

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by apalrd (Post 1326351)
IMHO, find the hot goals, and train your drivers.

I agree completely. At absolute most, have an autonomous mode that picks up a second ball, but that's only if you have ample time, as it's not necessary and will really only be game-changing at Worlds.

Jared Russell 13-01-2014 11:12

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1326358)
... snip ...

I admire your passion and sure, sometimes cool things are worth doing just because they are cool.

However, there are plenty of ways to go above and beyond in programming that will be FAR more beneficial to your team than automated path planning for ball retrieval.

Can your robot autonomously turn in place to within a degree of the desired angle?

Can you keep track of where your robot is (in a field-centric coordinate system) as it drives?

If you give your robot a series of waypoints to follow in autonomous mode, can it drive to each of them within a couple inches? Even if you are bumped?

Do you have a way to provide such waypoints to your robot that doesn't require re-compiling and downloading code?

These may sound simple, but they aren't. Being able to do any of the above places you above the 90th percentile of teams in programming. As it turns out, you would probably need each of these capabilities anyhow if you wanted to be fully automated and do path planning. So start simple and build from there. See how far you get. If you don't achieve your grandiose vision, at least you'll have developed some useful (and still pretty cool) capabilities along the way.
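The first item on that list can even be prototyped off-robot. A toy simulation of turning in place to a heading setpoint with a proportional controller on a gyro angle; the gain, timestep, and crude first-order drivetrain model are all invented for illustration, and real tuning happens on the robot:

```python
# Toy closed-loop turn-to-angle simulation. Nothing here is robot-specific:
# kp, dt, and the lagged drivetrain response are made-up illustrative values.

def turn_to(setpoint_deg, heading_deg=0.0, kp=0.04, dt=0.02, steps=500):
    rate = 0.0                                     # turn rate, deg/s
    for _ in range(steps):
        error = setpoint_deg - heading_deg
        command = max(-1.0, min(1.0, kp * error))  # clamp to motor range
        rate += (command * 360.0 - rate) * 0.1     # drivetrain lags the command
        heading_deg += rate * dt                   # integrate the gyro heading
    return heading_deg

final = turn_to(90.0)
print(final)   # settles close to 90 degrees over the simulated 10 seconds
```

The point of Jared's list is that the same heading and pose bookkeeping is a prerequisite for any grander path-planning scheme.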

mwtidd 13-01-2014 11:15

Re: Vision Targeting for Aerial Assist 2014
 
Personally my priority list would be the following:

Priority 1 : Find / Range the goal. Missing shots will be very costly.
Priority 2 : Detect / Find Second or Missed Ball.
Priority 3 : Detect a Goalie. Same reason as priority 1.
Priority 4 : Detect Hot Goal. You have a 50% chance of hitting it anyway.

JamesTerm 13-01-2014 13:27

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1320719)
Track the hot zone; track the balls (calculate their speed and drive to where they will be, autonomously); track another robot and pass the ball so it meets them on their path; track opposing robots so you can autonomously move around them; track the vision tape on the wall and index from that.

Not going to lie, this is very elementary stuff.

We at NewTek have spent years on vision tracking... check this out:
http://www.youtube.com/watch?v=JqnwC3eHdZ0
This is merely tracking a primitive solid color rectangle, and I can say with 100% confidence there is nothing elementary about it. There is nothing elementary about figuring out all of the tricks needed to have that tracking in a real-time environment. If you do get around to tracking robots... please do like I did and post a demo of it in action. As Tim Jenison once said to me as I tried and failed... "real video is a dirty world". I admire your persistence and I hope you can pull it off... good luck! ;)

I do have a few questions for you... what kind of tracking setup are you going to use (e.g. Raspberry pi, m1011, m1013 etc.)? Will it be on-board or over the network and processed on the driver station?

faust1706 13-01-2014 13:44

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by JamesTerm (Post 1326478)

I do have a question for you... what kind of tracking setup are you going to use (e.g. Raspberry pi, m1011, m1013 etc.). Will it be on-board or over the network and processed on the driver station?

After much (semi-)heated discussion between me, another student, our mentor who is an EE, a mentor who is a biomedical engineer doing biotechnology, and our teacher sponsor, we decided on the ODROID-XU. Last year we used the ODROID-X2. It will be on board and will relay info from the XU to the LabVIEW side of things via a UDP message. We might have multiple XUs; we are not sure how much computational power we will need this year. As of right now, the tape-tracking program runs at ~27-28 fps without uncapping the 30 fps limit or threading it.

We are going the route of 3 120-degree cameras for a complete view of the field. A case-based pose calculation will be done to compute x, y, and z displacement and pitch, roll, and yaw, with respect to the middle of the wall on the floor. There are 3 cases: left corner in view, right corner in view, and both.

We are also going to have an ASUS Xtion to do robot and ball detection; maybe 2, still unsure, as we are prioritizing our tasks in case we run out of time. The Xtions will be for ball and robot detection, and then we can measure the velocity of whatever we see (assuming we ourselves aren't moving).

I'm going to be outsourcing some of the code that tracks the vision tape in the next few days. I taught another student some computer vision, and he is starting to work on depth tracking with the Xtion. A concern we have with the Xtion is lighting in the arena: the depth works by projecting a pattern of dots in IR, but if there is too much ambient light, the depth won't work, so it's a gamble.
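The coprocessor-to-robot hop described above (XU sends pose info to the LabVIEW side via UDP) can be sketched with a loopback demo. The comma-separated message format, field order, and addresses here are assumptions for illustration; the real format is whatever the LabVIEW side parses:

```python
import socket

# One small UDP datagram per frame carrying the pose estimate:
# x, y, z displacement plus pitch, roll, yaw. Format and port are
# made up for this demo; match your robot-side parser.

def make_message(x, y, z, pitch, roll, yaw):
    return ("%.2f,%.2f,%.2f,%.2f,%.2f,%.2f" % (x, y, z, pitch, roll, yaw)).encode()

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))     # loopback stand-in for the robot's listening port
rx.settimeout(2.0)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(make_message(1.5, 0.0, 2.25, 0.0, 0.0, 12.5), rx.getsockname())

data, _ = rx.recvfrom(1024)
fields = [float(f) for f in data.decode().split(",")]
print(fields)
tx.close()
rx.close()
```

Keeping each datagram small and self-contained means a dropped packet only costs one frame of data.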

JamesTerm 13-01-2014 14:12

Re: Vision Targeting for Aerial Assist 2014
 
Thanks for providing this information; that is very cool. Perhaps you can show a demo of it in action sometime.

Quote:

Originally Posted by faust1706 (Post 1326491)
It will be on board and will relay info from the xu to the labview side of things via a udp message.

You may know this already, but in case you do not... and for anyone else considering using UDP: there is still a remaining VxWorks bug, which is also the same bug in Winsock (i.e. not Winsock2). If the client sends UDP packets to the robot without the robot being able to receive them (i.e. call recvfrom()), the buffer will overflow and start to corrupt the TCP/IP packets. All of a sudden the driver station will disconnect, reconnect, and blink between those states when that happens. Team 118 experienced this in 2012, as did our team. I worked around the problem by spawning a new task to receive the packets as soon as the robot starts up, but I still feel a bit uneasy about this solution, as we haven't had enough test time in the FMS environment and it can become a race condition. For this season we'll use the NetworkTables code, which uses TCP/IP. We are doing vision processing over the network through the driver station.
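The workaround described above, a task whose only job is to drain incoming datagrams so the receive buffer never backs up while the main loop is busy, looks roughly like this in Python (ports, payloads, and the `quit` sentinel are demo-only inventions; the cRIO version would be a VxWorks task in C++):

```python
import socket
import threading
import queue

# Dedicated receiver thread: drain every incoming UDP datagram into a
# queue immediately, so the OS socket buffer can't overflow while the
# main loop is doing other work.

def start_drain_thread(sock, q):
    def drain():
        while True:
            data, _ = sock.recvfrom(1024)
            if data == b"quit":      # demo-only shutdown sentinel
                return
            q.put(data)
    t = threading.Thread(target=drain, daemon=True)
    t.start()
    return t

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # ephemeral loopback port for the demo
rx.settimeout(2.0)
q = queue.Queue()
t = start_drain_thread(rx, q)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    tx.sendto(("frame %d" % i).encode(), rx.getsockname())
tx.sendto(b"quit", rx.getsockname())
t.join()

msgs = [q.get() for _ in range(q.qsize())]
print(msgs)
tx.close()
rx.close()
```

The main loop then pulls the latest message from the queue at its own pace instead of blocking on the socket.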

faust1706 13-01-2014 16:06

Re: Vision Targeting for Aerial Assist 2014
 
Does anyone have a preference as to where I should outsource the vision tape code? Our team's website people have other more important tasks to do, such as building the robot, so that platform won't work.

As for the UDP, I have no idea how the messages are received on the LabVIEW side, but I do know we have not had trouble with communications via UDP for the past 2 years and the past 5 competitions. This is the first I've heard of this issue. Interesting.

JamesTerm 13-01-2014 16:17

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1326582)
Does anyone have a preference as to where I should outsource the vision tape code? Our team's website people have other more important tasks to do, such as building the robot, so that platform won't work.

As for the UDP, I have no idea how the messages are received on the LabVIEW side, but I do know we have not had trouble with communications via UDP for the past 2 years and the past 5 competitions. This is the first I've heard of this issue. Interesting.

You should try SourceForge or FirstForge; this is what I have used... for example:
http://firstforge.wpi.edu/sf/projects/smartcppdashboard
This is a great setup for releasing source and binaries, and it has a place for documentation and discussion.

As for the UDP... Greg McKaskle is the one who told me of this problem, so I believe NI has already made this problem go away for LabVIEW... 118 was the only other team I know of who used C++ and vision. I do not know how Java teams send messages back to the robot, but I suspect they were probably already using NetworkTables behind the scenes, since it was available on the Java platform.

faust1706 13-01-2014 16:57

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by JamesTerm (Post 1326597)
You should try SourceForge or FirstForge; this is what I have used... for example:
http://firstforge.wpi.edu/sf/projects/smartcppdashboard
This is a great setup for releasing source and binaries, and it has a place for documentation and discussion.

As for the UDP... Greg McKaskle is the one who told me of this problem, so I believe NI has already made this problem go away for LabVIEW... 118 was the only other team I know of who used C++ and vision. I do not know how Java teams send messages back to the robot, but I suspect they were probably already using NetworkTables behind the scenes, since it was available on the Java platform.

Alright, I'm going to have to go in and rename some variables and add comments to make things more readable for third-party viewers. I just realized that. I'll try to have it up before the sun sets on the Midwest.

Ah, Greg McKaskle. I always learn something when I read a post of his. We go C++ to LabVIEW, so that could explain why we haven't experienced it before.

Update 1: project is being submitted for approval. More updates to come.

Update 2: project was approved; trying to figure out how to add .cpp files.

Animal Control 13-01-2014 17:50

Re: Vision Targeting for Aerial Assist 2014
 
If any of you are willing to send the code, it would help my team; we have never really used the camera.

faust1706 13-01-2014 18:01

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by Animal Control (Post 1326643)
if any of you is willing to send the code it would help my team, we have never used the camera really.

I'm trying, and failing. If someone could third-party it for me, that'd be great. I have the project created, but I just can't figure out how to add files to it....

http://firstforge.wpi.edu/sf/project...on_source_code

Just message me and I'll email you a bunch of it with descriptions, and you could post it on the project.

faust1706 13-01-2014 20:04

Re: Vision Targeting for Aerial Assist 2014
 
Another student put this up online for me: https://cmastudios.me/owncloud/publi...74f 6a3486745

It is an HSV code with trackbars. It grabs an image, converts it to HSV, thresholds it, I think dilates it, finds the contours, and colours each contour according to how many sides it has. Enjoy!

Greg McKaskle 13-01-2014 21:11

Re: Vision Targeting for Aerial Assist 2014
 
The UDP problem, as it is being called, is fundamental to how VxWorks did its networking in the version of the networking libraries and OS on the cRIO. It will affect all languages, but it shouldn't affect LV as readily, because LV is a more threaded environment. Team 118 and others who saw this were doing many things in a single thread and were therefore congesting traffic. If you let traffic buffer in LV because you fail to read from a port, you will see symptoms where other network ports fail to operate correctly.

Greg McKaskle

SoftwareBug2.0 14-01-2014 01:51

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1326582)
Does anyone have a preference as to where I should outsource the vision tape code?

How about Bangalore?

Jerry Ballard 14-01-2014 07:46

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1326582)
Does anyone have a preference as to where I should outsource the vision tape code? Our team's website people have other more important tasks to do, such as building the robot, so that platform won't work.

As for the UDP, I have no idea how the messages are received on the LabVIEW side, but I do know we have not had trouble with communications via UDP for the past 2 years and the past 5 competitions. This is the first I've heard of this issue. Interesting.

I would also recommend using GitHub for your code repositories. GitHub has been donating free repository slots (5-10) to FRC teams this year, with no limit for user accounts. You'll just have to send them an email requesting a team account.

Go to http://github.com and search for "FRC team", and you'll see several great examples of FRC teams' code from previous years.

faust1706 14-01-2014 08:57

Re: Vision Targeting for Aerial Assist 2014
 
I posted a bunch of stuff here:

https://cmastudios.me/owncloud/publi...928edef612274f 6a3486745

It has... 2 tutorial-like programs I wrote. One does a bunch of stuff, so it isn't efficient, but it will be good to learn from. The other has to do with camera calibration and doesn't require a camera. There are also some programming textbooks and the OpenCV textbook, as well as 2 research papers: one that describes the solvePnP algorithm, and the other is my attempt at an academic paper I had to write for a class about the 2012 program.

JamesTerm 14-01-2014 10:23

Re: Vision Targeting for Aerial Assist 2014
 
2 Attachment(s)
Quote:

Originally Posted by faust1706 (Post 1326901)
I posted a bunch of stuff here:

https://cmastudios.me/owncloud/publi...928edef612274f 6a3486745

When clicking that link I get the following message:
(See attachment)
Then clicking to continue, I get the second attached message saying the link is gone.

Maybe GitHub is the way to go... I don't like git at all, but at least it works, and it sounds like you can have your own repository for free, which is great.

With FirstForge you can request a Subversion account, and once it is granted you can upload there. Subversion is great in that there is one repository, and it is easy for people to "glv" (get latest version). Git, on the other hand, has many repositories (each client is a repository), and you can't glv; you fetch and merge. I'm sure those who use git every day have a good workflow going, but it is not intuitive. So far all the git experts I have run into are the console Unix types who prefer not to use a UI. Me, on the other hand, I'll use Tortoise for all of them! <OK, James is now off his soapbox about git.>

Sieber 14-01-2014 12:21

Re: Vision Targeting for Aerial Assist 2014
 
All,
Our team is just starting out with vision and object tracking this year. I created a site to document how to do things. It is pretty basic so far, but we will keep updating it. We have been posting code there as well.

https://sites.google.com/site/sieberschool/

faust1706 14-01-2014 12:52

Re: Vision Targeting for Aerial Assist 2014
 
Oh boy. Time to figure out how to use git.

plnyyanks 14-01-2014 13:16

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1327018)
Oh boy. Time to figure out how to use git.

Github has some good tutorials on getting started that you can use to learn the syntax.

JamesTerm 14-01-2014 14:57

Re: Vision Targeting for Aerial Assist 2014
 
Quote:

Originally Posted by faust1706 (Post 1327018)
Oh boy. Time to figure out how to use git.

I'm not sure if you like the command-line approach to source control or the UI approach... if you like a UI solution for git, this is what I use: http://code.google.com/p/tortoisegit/ With it I can use an SVN-like workflow and not need to know any commands. It is great for the typical check-in cases.


All times are GMT -5. The time now is 22:53.

Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
Copyright © Chief Delphi