Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Vision Software comparison (http://www.chiefdelphi.com/forums/showthread.php?t=124283)

NotInControl 26-02-2014 13:59

Re: Vision Software comparison
 
Quote:

Originally Posted by yash101 (Post 1323553)
OpenCV vs. RoboRealm vs. SimpleCV... what's the main difference? Also, what do you teams (with vision) use? I would like to know which is better performance-wise, feature-wise, and simplicity-wise. I currently have OpenCV and RoboRealm.

I think you are asking an incomplete question. Most of these libraries contain similar functions that should suffice for FRC: filters, masks, histograms, etc.

What you should be more interested in is which system a team is using and the kind of performance they are achieving.

E.g., on the dashboard, achieving 20 fps with 500 ms of lag behind real-time; on the cRIO, 4 fps with 2 seconds of lag behind real-time; on a co-processor, 10 fps in real-time.

(Note: I just made these numbers up. Do not take the above to be factual; it is purely for illustrating a point.)

In the above scenario, which vision system would you choose? I know exactly which one I would choose. The performance other teams achieve with their vision setup matters more than which library they use.

Regards,
Kevin

gpetilli 26-02-2014 14:29

Re: Vision Software comparison
 
Quote:

Originally Posted by NotInControl (Post 1350013)
I think you are asking an incomplete question. Most of these libraries contain similar functions that should suffice for FRC: filters, masks, histograms, etc.

What you should be more interested in is which system a team is using and the kind of performance they are achieving.

E.g., on the dashboard, achieving 20 fps with 500 ms of lag behind real-time; on the cRIO, 4 fps with 2 seconds of lag behind real-time; on a co-processor, 10 fps in real-time.

(Note: I just made these numbers up. Do not take the above to be factual; it is purely for illustrating a point.)

In the above scenario, which vision system would you choose? I know exactly which one I would choose. The performance other teams achieve with their vision setup matters more than which library they use.

Regards,
Kevin

We are hoping to use a new camera this year called PIXY from Charmedlabs. The specs claim it finds "blobs" and transmits only the x,y coordinates via RS232 - not the entire image. This allows it to process at 50 fps with 20 ms of lag - which is as fast as the cRIO can consume the data anyway. If we get it in time, we hope to have a ball-tracking goalie in auto.
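
If the camera really does stream plain coordinate pairs, the consuming side gets very simple. Here's a hypothetical Java sketch assuming ASCII "x,y" lines on the serial stream - purely an illustration of the idea, not Pixy's actual wire protocol:

Code:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

// Hypothetical reader for a camera that emits one "x,y" ASCII line per
// detected blob. This is NOT Pixy's real protocol; it only illustrates
// consuming pre-processed coordinates instead of whole frames.
public class BlobSerialReader {
    private final BufferedReader in;

    public BlobSerialReader(InputStream serialPort) {
        this.in = new BufferedReader(new InputStreamReader(serialPort));
    }

    // Blocks until the next coordinate pair arrives; returns {x, y},
    // or null if the stream closed or the line was malformed.
    public int[] readBlob() throws IOException {
        String line = in.readLine();
        if (line == null) return null;       // stream closed
        String[] parts = line.trim().split(",");
        if (parts.length != 2) return null;  // garbled line
        try {
            return new int[] { Integer.parseInt(parts[0].trim()),
                               Integer.parseInt(parts[1].trim()) };
        } catch (NumberFormatException e) {
            return null;
        }
    }
}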

Jerry Ballard 26-02-2014 14:42

Re: Vision Software comparison
 
We're using OpenCV with an ODROID U3, running two USB cameras: one for ball tracking and the other for auton "hot" target detection.

Vision co-processor communication is two-way between the cRIO and the ODROID via UDP ports.
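
A minimal sketch of what one side of a two-way UDP exchange can look like in Java (addresses, ports, and message format are all invented for illustration):

Code:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Minimal one side of a UDP exchange: send a vision result, listen for
// a command. Ports, address, and message format are made up.
public class UdpVisionLink {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(1130); // our listen port
        InetAddress crio = InetAddress.getByName("10.0.0.2");

        // Send one result packet: "targetX,targetY,hot"
        byte[] out = "160,120,1".getBytes();
        socket.send(new DatagramPacket(out, out.length, crio, 1140));

        // Receive one command packet (blocks until a datagram arrives)
        byte[] buf = new byte[256];
        DatagramPacket in = new DatagramPacket(buf, buf.length);
        socket.receive(in);
        String command = new String(in.getData(), 0, in.getLength());
        System.out.println("Received: " + command);
        socket.close();
    }
}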

faust1706 26-02-2014 14:46

Re: Vision Software comparison
 
Quote:

Originally Posted by Jerry Ballard (Post 1350065)
We're using OpenCV with an ODROID U3, running two USB cameras: one for ball tracking and the other for auton "hot" target detection.

Vision co-processor communication is two-way between the cRIO and the ODROID via UDP ports.

I'm glad to see other people using ODROID products.

Bidirectional UDP: just out of curiosity, what are you sending to the U3?

NotInControl 26-02-2014 14:52

Re: Vision Software comparison
 
Quote:

Originally Posted by faust1706 (Post 1350068)
I'm glad to see other people using ODROID products.

Bidirectional UDP: just out of curiosity, what are you sending to the U3?

We use a two-way TCP stream between our cRIO and BeagleBone. This year, the cRIO sends a "MatchStart" flag.

We had to do this because, when you're setting up, you will be looking at a hot target and must wait for it to actuate away (indicating the other side is hot) or stay up (indicating this side is hot). If we didn't send this flag, the co-processor could not differentiate between a hot target seen in disabled mode while setting up and a hot target seen in autonomous mode (when it really counts).

The co-processor uses this flag to wait some steady-state time (the time for the hot goal panel to actuate) and then return a "ValidData" bit, as well as a bunch of useful information about whether the goal is hot, back to the cRIO, which is used to decide how the rest of auton plays out.

You could do this on the cRIO as well and keep the stream one-way; we just opted to do it this way because it makes the cRIO-side interface really simple: just wait for the "ValidData" and "HotOrNot" bits, or time out.

The commands on the cRIO have timeouts, so if for some reason we lose the camera or the stream, our auton doesn't hang; it continues, just assuming the closest goal is hot.
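
The cRIO side boils down to something like this minimal Java sketch (the flag names are from above, everything else is invented, not our real code):

Code:

// Minimal sketch of the cRIO-side decision: wait for the co-processor
// to assert ValidData, then read HotOrNot; fall back after a timeout.
// Assume a comm thread updates these flags from the TCP stream.
public class HotGoalDecision {
    private volatile boolean validData = false;
    private volatile boolean hotOrNot  = false;

    // Called by the TCP reader thread when a message arrives.
    // hotOrNot is written first so validData "publishes" both values.
    public void onVisionMessage(boolean valid, boolean hot) {
        hotOrNot = hot;
        validData = valid;
    }

    // Returns true if our side is hot. Defaults to "closest goal is
    // hot" if the camera/stream is lost and the timeout expires.
    public boolean waitForHotGoal(long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (validData) return hotOrNot;
            try { Thread.sleep(10); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return true; // timed out: assume the closest goal is hot
    }
}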


Hope this helps,
Kevin

mwtidd 26-02-2014 15:01

Re: Vision Software comparison
 
We're using a hybrid of different methods.

At week zero our cRIO-based hot-goal detection was perfect, so we figured: if it isn't broken, why fix it? That will be staying where it is.


This year we're spoiled as far as spare weight goes, so we'll also have a D830 on board with a Logitech C310, an Arduino Uno, and a RadioShack LED strip. The beauty of this setup is that it can be placed on any robot, tuned to the sweet spot, and will be able to provide an "in range" meter. We use OpenCV for Java on this machine.

We also plan on using the PWM output on the Arduino and feeding it into a counter so we can run a PID loop based on the range output by the Uno, but that won't be implemented until our second district event. Essentially it will be a really big "optical" rangefinder.
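
The math on the robot side is just a linear mapping from the measured pulse width to a distance. A sketch with hypothetical calibration numbers (the real mapping depends entirely on the Arduino sketch and must be calibrated):

Code:

// Sketch of turning a measured PWM pulse width into a range estimate.
// Assumes the Uno encodes range linearly as pulse width, e.g.
// 1.0 ms = 0 in, 2.0 ms = 240 in. These numbers are hypothetical.
public class PwmRangeFinder {
    private static final double MIN_PULSE_SEC = 0.001;  // 1.0 ms -> 0 in
    private static final double MAX_PULSE_SEC = 0.002;  // 2.0 ms -> max
    private static final double MAX_RANGE_IN  = 240.0;

    // pulseSec would come from a counter measuring the high semi-period.
    public static double rangeInches(double pulseSec) {
        double t = (pulseSec - MIN_PULSE_SEC) / (MAX_PULSE_SEC - MIN_PULSE_SEC);
        t = Math.max(0.0, Math.min(1.0, t)); // clamp to the valid band
        return t * MAX_RANGE_IN;
    }

    public static void main(String[] args) {
        System.out.println(rangeInches(0.0015)); // 1.5 ms -> 120.0 in
    }
}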

ayeckley 26-02-2014 15:09

Re: Vision Software comparison
 
We've found that the NI Vision solution (running on the driver station laptop) is more than adequate for our needs this year. I promise I'm not just saying that because we are an NI Grant recipient :)

billbo911 26-02-2014 15:12

Re: Vision Software comparison
 
This year we are using both the cRIO vision libraries and OpenCV on a PCDuino co-processor.

At the beginning of auton, we will use the Axis camera and the cRIO to detect the hot goal. After detection, that vision processing is turned off.

Ball detection is done via a USB webcam and the PCDuino; this runs continuously. The cRIO has a "Socket Request" VI that polls the PCDuino at 10 Hz. The PCDuino responds to the socket request by returning only the latest target "X" coordinate. All code on the PCDuino is scripted in Python.
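
The responder side of that pattern is tiny. A sketch in Java for consistency with the rest of the thread (the actual PCDuino code is Python; the port number is invented):

Code:

import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the "reply with the latest X" responder pattern, shown in
// Java for consistency (the real PCDuino code is Python). A vision
// loop would update latestX; each poll gets a one-line answer back.
public class LatestXResponder {
    private static volatile int latestX = -1; // updated by vision thread

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(1180); // made-up port
        while (true) {
            try (Socket poll = server.accept()) {     // cRIO polls at 10 Hz
                OutputStream out = poll.getOutputStream();
                out.write((latestX + "\n").getBytes());
                out.flush();
            }
        }
    }
}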

We are hoping to move ALL vision processing to the PCDuino later this season.

ykarkason 26-02-2014 16:47

Re: Vision Software comparison
 
This year we are using the cRIO as the only vision processor, with the NI Vision libraries in Java.
We use it on demand, so the cRIO handles it pretty well, making our full goal report fairly useful in auton.

faust1706 26-02-2014 19:19

Re: Vision Software comparison
 
Quote:

Originally Posted by NotInControl (Post 1350069)
We use a two-way TCP stream between our cRIO and BeagleBone. This year, the cRIO sends a "MatchStart" flag.

We had to do this because, when you're setting up, you will be looking at a hot target and must wait for it to actuate away (indicating the other side is hot) or stay up (indicating this side is hot). If we didn't send this flag, the co-processor could not differentiate between a hot target seen in disabled mode while setting up and a hot target seen in autonomous mode (when it really counts).

The co-processor uses this flag to wait some steady-state time (the time for the hot goal panel to actuate) and then return a "ValidData" bit, as well as a bunch of useful information about whether the goal is hot, back to the cRIO, which is used to decide how the rest of auton plays out.

You could do this on the cRIO as well and keep the stream one-way; we just opted to do it this way because it makes the cRIO-side interface really simple: just wait for the "ValidData" and "HotOrNot" bits, or time out.

The commands on the cRIO have timeouts, so if for some reason we lose the camera or the stream, our auton doesn't hang; it continues, just assuming the closest goal is hot.


Hope this helps,
Kevin

That's really clever. We thought about doing bidirectional communication but decided against it, mostly due to the small size of our programming team. How complex is it to do? I'd love to figure this out for my team even though I'm graduating.

We have three vision modes: target tracking (to get x, y, and heading on the field relative to the left corner on the side we are scoring on), ball tracking, and robot tracking. All of them currently run at the same time, and it is rather intensive on our XU. It'd be great to hit a button on the DS and switch between the modes when needed. While doing all three we get about 10 fps with minimal lag, and maybe 15 when we don't show any images and just let the program spam UDP packets to the cRIO.

NotInControl 27-02-2014 12:48

Re: Vision Software comparison
 
Quote:

Originally Posted by faust1706 (Post 1350208)
That's really clever. We thought about doing bidirectional communication but decided against it, mostly due to the small size of our programming team. How complex is it to do? I'd love to figure this out for my team even though I'm graduating.

We have three vision modes: target tracking (to get x, y, and heading on the field relative to the left corner on the side we are scoring on), ball tracking, and robot tracking. All of them currently run at the same time, and it is rather intensive on our XU. It'd be great to hit a button on the DS and switch between the modes when needed. While doing all three we get about 10 fps with minimal lag, and maybe 15 when we don't show any images and just let the program spam UDP packets to the cRIO.

It's not complicated, but it does take some effort to do it well. We program the robot in Java, and the FRC version of the JVM running on the robot does not support UDP, so we are forced to use TCP for all communications. TCP is an open stream and is inherently bi-directional. It opens a socket, which can simply be viewed as an I/O stream, so you can write to it or read from it from either end (client or server). Once the stream is up, it stays up until it is closed or lost.

This is different from UDP, where you send a datagram with no persistent connection and have no confidence that your message was delivered.

With that said, where TCP does get complicated, in the "doing it well" part, is multithreading. We have multiple threads running on both the robot and the BeagleBone to handle asynchronous communications. Three threads run on the robot side: one to listen for incoming connections, one to read from the stream, decompose the strings, and save them in local variables, and another to send data from the robot to the bone. We have mutex locks that allow all of these threads to share data. Right now, each of these threads runs in its own periodic loop, just sending and receiving data.
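
A minimal sketch of what the "read, decompose, store under a lock" thread can look like (the "key:value" message format and the names are invented for illustration, not our real code):

Code:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;

// Sketch of a reader thread: block on the stream, parse each line,
// and store the parsed values under a lock for other threads to read.
public class TcpReaderThread extends Thread {
    private final Socket socket;
    private final Object lock = new Object();   // guards shared fields
    private boolean validData;
    private boolean hotOrNot;

    public TcpReaderThread(Socket socket) { this.socket = socket; }

    public void run() {
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {  // blocks for data
                String[] kv = line.split(":");
                if (kv.length != 2) continue;
                synchronized (lock) {
                    if (kv[0].equals("ValidData")) validData = kv[1].equals("1");
                    if (kv[0].equals("HotOrNot"))  hotOrNot  = kv[1].equals("1");
                }
            }
        } catch (Exception e) {
            // stream lost: thread dies; a supervisor restarts listening
        }
    }

    public boolean isHot() {
        synchronized (lock) { return validData && hotOrNot; }
    }
}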

On the BeagleBone, which we program in C++, we open three threads for TCP as well: one for the client, which wakes up and tries to connect to the robot once every second until a connection is established (this takes care of not having to worry about who turns on first), and two others that do similar send and receive work.

You also need to manage these threads in the event the stream is lost, so everything terminates gracefully. We have code on both the cRIO and the bone so that if the connection is lost, the threads die and then restart: the cRIO goes back to listening, and the bone tries to reconnect once a second. This helps us re-establish comms in the event of power failures or other events.
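
The reconnect side can be as simple as this (shown in Java for consistency, though the bone side is actually C++):

Code:

import java.net.Socket;

// Sketch of the client-side reconnect loop: try once a second until
// the robot's listener is up, so boot order doesn't matter.
public class ReconnectingClient {
    public static Socket connectToRobot(String host, int port) {
        while (true) {
            try {
                return new Socket(host, port); // succeeds once server is up
            } catch (Exception e) {
                try { Thread.sleep(1000); }    // wait 1 s and retry
                catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return null;
                }
            }
        }
    }
}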

The bone runs three additional threads: one running FFmpeg, grabbing real-time frames from the Axis camera and saving them in a shared, mutable variable; another, running at a slower frequency, doing our image processing and grabbing the latest frame from the FFmpeg thread when needed; and a third which is a counter. The counter thread gets spawned when the MatchStart flag is sent from the cRIO. It counts to 5 seconds using the hardware counter on the bone and changes a shared variable to indicate the hot target should have switched to the other side, changes a shared variable again after 10 seconds to indicate auto is over, and then dies. This allows the bone to keep track of which state of autonomous we are in.
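
Conceptually the counter thread is just this (sleep() standing in for the bone's hardware counter; the state names are invented):

Code:

// Sketch of the auto-state counter thread: spawned on MatchStart,
// flips a flag at 5 s (hot goal swaps sides) and again at 10 s
// (auton over), then exits. Timings from the post; names invented.
public class AutoStateCounter extends Thread {
    public static final int SETUP = 0, FIRST_HALF = 1, SECOND_HALF = 2, DONE = 3;
    private static volatile int state = SETUP; // shared with vision thread

    public void run() {
        try {
            state = FIRST_HALF;
            Thread.sleep(5000);   // hot goal panel actuates at ~5 s
            state = SECOND_HALF;
            Thread.sleep(5000);   // auton ends at 10 s
            state = DONE;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static int getAutoState() { return state; }
}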

You do not have to do it this way, but unless you run multiple threads, your program will block when trying to read from the stream (it waits until data is on the pipe and then pulls it in).

We chose the robot to be the server and the bone to be the client so that when the robot turns on, it just listens for connections; nothing else needs to happen until a client connects. The robot doesn't have to wake up and call out to anything, so in the event we lose the connection, or we test the cRIO without the onboard processor attached, the cRIO is not doing any additional processing.

Right now we send data from the cRIO to the bone 5 times a second, and from the bone to the cRIO 15 times a second. We are still adjusting our timing loops, but these rates are working for us at the moment.

Hope this helps,
Kevin

NotInControl 27-02-2014 13:30

Re: Vision Software comparison
 
Quote:

Originally Posted by faust1706 (Post 1350208)
While doing all three we get about 10 fps with minimal lag, and maybe 15 when we don't show any images and just let the program spam UDP packets to the cRIO.


Just a reminder: be careful when spamming the robot with packets. The VxWorks operating system that all of our cRIOs run, regardless of the language used to program them, has a hardware buffer for incoming network messages. If the software doesn't wake up in time, or enters an infinite loop for some reason and fails to read the messages off the buffer, the network buffer will fill up and new packets will be lost. This will result in a loss of communication, because the driver station packets will not get through. This was discovered back in 2012.

This is a very real problem with very devastating results. You lose comms, but more importantly, you can't even reset the cRIO from the driver station, because no packets will get through, rendering your robot motionless for the remainder of the match. Any team doing co-processor network communications should be aware of this.

Our alliance partner and good friends that year had trouble with this. They are not only a team we highly respect, but one highly respected across the entire FRC community.

This was discussed in the FMS/Einstein Investigation Report whitepaper released after Championships in 2012.
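
For what it's worth, one generic defensive pattern on a receiver is to drain everything pending on every loop pass and keep only the newest message, so the buffer can never back up. A minimal Java sketch (port number invented):

Code:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketTimeoutException;

// Sketch: each loop iteration, drain ALL pending datagrams and keep
// only the newest, so the receive buffer never fills even if the
// sender is fast. Port and loop period are made up.
public class DrainingReceiver {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(1150);
        socket.setSoTimeout(1); // near-non-blocking: 1 ms wait at most
        byte[] buf = new byte[256];
        while (true) {
            String latest = null;
            try {
                while (true) { // drain everything queued right now
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    socket.receive(p);
                    latest = new String(p.getData(), 0, p.getLength());
                }
            } catch (SocketTimeoutException e) {
                // queue empty: fall through with the newest message
            }
            if (latest != null) {
                // act on the freshest vision data only
            }
            Thread.sleep(20); // robot loop period
        }
    }
}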

Hope this helps,
Kevin

mhaeberli 28-02-2014 19:36

Re: Vision Software comparison
 
We're still figuring it out. We have played a bit with the cRIO-based vision sample and with RoboRealm and OpenCV. We do have a working homebrew interface from Visual Studio C++ to NetworkTables on the robot (which is the way we're thinking of going). It appears that the Python NetworkTables library that can be found here is also a complete implementation.
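
On the robot side, reading published values is only a few lines. A rough Java sketch, assuming the NetworkTables Java API of that era (the table and key names are invented; check the class and method names against your WPILib version):

Code:

import edu.wpi.first.wpilibj.networktables.NetworkTable;

// Rough sketch of the robot-side reads (Java shown; the poster's
// client is Visual Studio C++). Whatever table and keys the vision
// program publishes under must match on both ends.
public class VisionTableReader {
    private final NetworkTable table = NetworkTable.getTable("vision");

    public double getTargetX() {
        // Second argument is the default if the key isn't published yet.
        return table.getNumber("targetX", 0.0);
    }

    public boolean isHot() {
        return table.getBoolean("hot", false);
    }
}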

faust1706 28-02-2014 19:54

Re: Vision Software comparison
 
Quote:

Originally Posted by NotInControl (Post 1350550)
Just a reminder: be careful when spamming the robot with packets. The VxWorks operating system that all of our cRIOs run, regardless of the language used to program them, has a hardware buffer for incoming network messages. If the software doesn't wake up in time, or enters an infinite loop for some reason and fails to read the messages off the buffer, the network buffer will fill up and new packets will be lost. This will result in a loss of communication, because the driver station packets will not get through. This was discovered back in 2012.

This is a very real problem with very devastating results. You lose comms, but more importantly, you can't even reset the cRIO from the driver station, because no packets will get through, rendering your robot motionless for the remainder of the match. Any team doing co-processor network communications should be aware of this.

Our alliance partner and good friends that year had trouble with this. They are not only a team we highly respect, but one highly respected across the entire FRC community.

This was discussed in the FMS/Einstein Investigation Report whitepaper released after Championships in 2012.

Hope this helps,
Kevin

I understand the concern. When I say spam the cRIO, I simply mean sending the vision solutions. This isn't even the fastest vision program in the team's history: last year we were at 27 fps with an ODROID-X2 during competition, and 33 fps at school on a much more powerful computer. Sorry for the poor wording.

I remember hearing about that. It would be the absolute worst to make it to Einstein and then not be able to compete at full capacity due to network errors. I remember in 2011 a robot never moved in either finals match on Einstein. I think FIRST is doing better at addressing this issue with the practice matches, but it is sad that it happens.

blujackolantern 09-03-2014 15:56

Re: Vision Software comparison
 
We use the NI Vision libraries in Java. All we do is filter the particles based on color and size and then wait for a horizontal target. We detect that by comparing the lengths of the top and sides relative to each other.
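
The comparison itself is just an aspect test on the particle's side lengths: the 2014 horizontal target is wide and short, the vertical one tall and narrow. A trivial sketch (names invented):

Code:

// Sketch of the horizontal-vs-vertical test by comparing side lengths
// of a particle's bounding box. Wider than tall means the horizontal
// "hot" strip; taller than wide means the static vertical strip.
public class TargetClassifier {
    public static boolean isHorizontal(double topLengthPx, double sideLengthPx) {
        return topLengthPx > sideLengthPx;
    }

    public static void main(String[] args) {
        System.out.println(isHorizontal(120, 18)); // true: wide and short
        System.out.println(isHorizontal(20, 130)); // false: tall and narrow
    }
}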

