Re: Vision Software comparison
Quote:
What you should be more interested in is what system a team is using and the type of performance they are achieving. For example: on the dashboard, 20 fps with 500 ms of lag behind real-time; on the cRIO, 4 fps with 2 seconds of lag behind real-time; on a co-processor, 10 fps in real-time. (Note: I just made these up; do not take the above to be factual, it is purely for illustrating a point.) In the above scenario, which vision system would you choose? I know exactly which one I would choose. The performance other teams achieve with their vision setup goes beyond what library they use. Regards, Kevin
Re: Vision Software comparison
We're using OpenCV with an ODROID-U3, running two USB cameras: one for ball tracking and the other for auton "hot" target detection.
Vision co-processor communication is two-way between the cRIO and the ODROID via UDP ports.
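A minimal sketch of what the co-processor side of that two-way UDP link might look like, in Python. The port numbers, packet format, and cRIO address below are assumptions for illustration, not the team's actual protocol.
Code:
import socket

CRIO_ADDR = ("10.0.0.2", 1130)   # assumed cRIO address/port, not the team's values
LISTEN_PORT = 1140               # assumed port the ODROID listens on

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LISTEN_PORT))
sock.settimeout(0.1)             # don't block forever if the cRIO stops sending

while True:
    # Receive the latest state packet from the cRIO, if any arrived.
    try:
        data, _ = sock.recvfrom(1024)
        robot_state = data.decode().strip()   # e.g. "AUTON" or "DISABLED"
    except socket.timeout:
        robot_state = None

    # ... run the vision pipeline here, producing target info ...
    target_x = 0.0   # placeholder result

    # Send the result back to the cRIO.
    sock.sendto(("X=%.2f" % target_x).encode(), CRIO_ADDR)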
Re: Vision Software comparison
Quote:
Bidirectional UDP: what are you sending to the U3? Just out of curiosity.
Re: Vision Software comparison
Quote:
We had to do this because, when you're setting up, you will be looking at a hot target and must wait for it either to actuate away (indicating the other side is hot) or stay up (indicating this side is hot). If we didn't send this data, the co-processor could not differentiate between a hot target seen in disabled mode while setting up and a hot target seen in autonomous mode (when it really counts).

The co-processor uses this data to wait some steady-state time (the time for the hot-goal panel to actuate) and then return a "ValidData" bit, along with a bunch of useful information about whether the goal is hot, back to the cRIO, which is used to make decisions on how the rest of auton is played out.

You could do this on the cRIO as well and keep the stream one-way; we just opted to do it this way because it makes the cRIO-side interface really simple: just wait for the "ValidData" bit and the "HotOrNot" bit, or time out. The commands on the cRIO have timeouts so that if for some reason we lose the camera or the stream, our auton doesn't hang; it still continues, just assuming the closest goal is hot.

Hope this helps,
Kevin
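A rough sketch of that decision logic on the co-processor side, in Python. Only the ValidData/HotOrNot idea comes from the post; the settle time and the helper callables are assumptions made up for illustration.
Code:
import time

SETTLE_TIME = 1.0   # assumed steady-state time for the hot-goal panel to actuate

def detect_hot_goal(auton_started, see_hot_target):
    """Wait for auton, let the panel settle, then report hot-or-not.

    auton_started:  callable, True once the cRIO signals match start
    see_hot_target: callable, True if a hot target is currently in view
    """
    # Ignore anything seen while disabled/setting up: trust the camera
    # only after the match-start flag arrives from the cRIO.
    while not auton_started():
        time.sleep(0.02)

    # Give the hot-goal panel time to actuate into its real state.
    time.sleep(SETTLE_TIME)

    valid_data = True
    hot_or_not = see_hot_target()
    return valid_data, hot_or_not   # sent back to the cRIO over the socket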
Re: Vision Software comparison
We're using a hybrid of different methods.
At Week Zero our cRIO-based hot detection was perfect, so we figured if it isn't broken, why fix it? That will be staying there. This year we're spoiled as far as spare weight goes, so we'll also have a D830 on board with a Logitech C310, an Arduino Uno, and a RadioShack LED strip. The beauty of this setup is that it can be placed on any robot, tuned to the sweet spot, and will be able to provide an "in range" meter. We use OpenCV for Java on this machine. We also plan on using the PWM output on the Arduino and feeding that into a counter so we can run a PID loop on the range output by the Uno, but that won't be implemented until our second district event. Essentially it will be a really big "optical" rangefinder.
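As a sketch of the "optical rangefinder" idea, here is a pinhole-camera range estimate in Python with OpenCV (the team uses OpenCV for Java; this is an analogue, not their code). The target width, focal-length constant, and HSV bounds are placeholders you would calibrate at the sweet spot.
Code:
import cv2

TARGET_WIDTH_FT = 2.0     # placeholder: real width of the vision target
FOCAL_LENGTH_PX = 700.0   # placeholder: calibrate from a known distance

def estimate_range(frame):
    """Return estimated distance (ft) to the largest bright blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Placeholder HSV bounds for retroreflective tape under an LED ring light.
    mask = cv2.inRange(hsv, (40, 50, 50), (90, 255, 255))
    # The [-2] index picks the contour list on both OpenCV 3 and 4.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    _, _, w, _ = cv2.boundingRect(biggest)
    # Pinhole model: distance = real_width * focal_length / pixel_width.
    return TARGET_WIDTH_FT * FOCAL_LENGTH_PX / w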
Re: Vision Software comparison
We've found that the NI Vision solution (running on the driver station laptop) is more than adequate for our needs this year. I promise I'm not just saying that because we are an NI Grant recipient :)
Re: Vision Software comparison
This year we are using both the cRIO vision libraries and OpenCV on a PCDuino co-processor.
At the beginning of auton, we use the Axis camera and the cRIO to detect the hot goal. After detection, that vision processing is turned off. Ball detection is done via a USB webcam and the PCDuino; this runs continuously. The cRIO has a "Socket Request" VI that polls the PCDuino at 10 Hz. The PCDuino responds to the socket request by returning only the latest target "X" coordinate. All code on the PCDuino is scripted in Python. We are hoping to move ALL vision processing to the PCDuino later this season.
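Since the PCDuino side is scripted in Python, the poll/response piece might look roughly like the sketch below. The port and the one-line reply format are assumptions; the post only says the cRIO polls at 10 Hz and gets back the latest X coordinate.
Code:
import socket
import threading

latest_x = 0.0                # updated continuously by the vision loop
lock = threading.Lock()

def serve_polls(port=1180):   # assumed port, not from the post
    """Answer each cRIO "Socket Request" with the latest target X."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()          # cRIO polls ~10 times per second
        with lock:                      # guard the shared coordinate
            reply = "%.2f\n" % latest_x
        conn.sendall(reply.encode())
        conn.close()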
Re: Vision Software comparison
This year we are using the cRIO as the only vision processor, using the NIVision libraries in Java.
We use it on demand, so the cRIO handles it pretty well, making our full goals report fairly useful in auton.
Re: Vision Software comparison
Quote:
We have three modes of vision: target tracking, to get X, Y, and heading on the field relative to the left corner of the side we are scoring on; ball tracking; and robot tracking. All of them currently run at the same time, and it is rather intensive on our XU. It'd be great to hit a button on the DS and switch between the modes when needed. While doing all three we get about 10 fps with minimal lag, and maybe 15 when we don't show any images and just let the program spam the UDP packets to the cRIO.
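A minimal sketch, in Python, of the DS-driven mode switch wished for above: the cRIO forwards a mode byte over UDP and the co-processor runs only the selected pipeline. The mode encoding and port are assumptions.
Code:
import socket

MODES = {b"0": "targets", b"1": "balls", b"2": "robots"}   # assumed encoding

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 1150))   # assumed port
sock.setblocking(False)

mode = "targets"
while True:
    # Pick up a mode change if the cRIO relayed one from a DS button.
    try:
        data, _ = sock.recvfrom(16)
        mode = MODES.get(data[:1], mode)
    except BlockingIOError:
        pass

    # Run only the selected pipeline instead of all three at once.
    if mode == "targets":
        pass   # target-tracking pipeline here
    elif mode == "balls":
        pass   # ball-tracking pipeline here
    else:
        pass   # robot-tracking pipeline here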
Re: Vision Software comparison
Quote:
This is different from UDP, where you send data and close the connection with no confidence that your message was delivered. With that said, where TCP does get complicated in the "doing it well" part is multithreading. We have multiple threads running on the robot and the BeagleBone, each to handle asynchronous communications.

Three threads run on the robot side: one to listen for incoming connections; one to read from the stream, decompose the strings, and save them in local variables; and another to send data from the robot to the Bone. We have mutex locks that allow all of these threads to share data. Right now, each of these threads runs in its own periodic loop, just sending and receiving data.

On the BeagleBone, which we program in C++, we open three threads for TCP as well. One is the client, which wakes up and tries to connect to the robot once every second until a connection is established (this takes care of not having to worry about who turns on first). The other two do similar functions of sending and receiving messages.

You also need to manage these threads in the event the stream is lost, and terminate gracefully. We have code on the cRIO and the Bone so that if the connection is lost, the threads die and then restart, where the cRIO goes back to listening and the Bone tries to reconnect once a second. This helps us re-establish comms in the event of power failures or other events.

The Bone runs three additional threads: one running FFmpeg, grabbing real-time frames from the Axis camera and saving them in a shared, mutable variable; another which runs at a slower frequency and does our image processing, grabbing the latest frame from the FFmpeg thread when needed; and a third which is a counter. The counter thread gets spawned when the matchStart flag is sent from the cRIO, counts to 5 seconds using the hardware counter on the Bone, changes a shared variable to indicate the hot target should have switched to the other side, changes a shared variable again after 10 seconds to indicate auto is over, and then dies. This allows the Bone to keep track of what state of autonomous we are in.

Now, you do not have to do it this way, but unless you run multiple threads, your program will block when trying to read from the stream (it waits until data is on the pipe, then pulls it in). We chose the robot to be the server and the Bone to be the client so that when the robot turns on, it just listens for connections; nothing else needs to be done until a client connects. It doesn't have to wake up and call out to anything, so in the event we lose a connection or test the cRIO without the onboard processor, the cRIO is not doing any additional processing.

Right now we send data to the Bone 5 times a second from the cRIO, and data from the Bone to the cRIO 15 times a second. We are still working on adjusting our timing loops, but these are working for us at the moment.

Hope this helps,
Kevin
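The team's Bone-side code is in C++, but the reconnect-once-a-second client pattern described above looks roughly like this Python sketch. The address, message handling, and thread setup are assumptions.
Code:
import socket
import threading
import time

ROBOT_ADDR = ("10.0.0.2", 1160)   # assumed cRIO (server) address/port
shared = {"last_msg": ""}
lock = threading.Lock()

def client_thread():
    """Connect to the robot, retrying once a second; read until the stream dies."""
    while True:
        try:
            conn = socket.create_connection(ROBOT_ADDR, timeout=2.0)
        except OSError:
            time.sleep(1.0)        # robot not up yet; no need to worry who boots first
            continue
        try:
            conn.settimeout(2.0)
            while True:
                data = conn.recv(256)
                if not data:       # connection lost: die and restart
                    break
                with lock:         # mutex-protected shared state
                    shared["last_msg"] = data.decode()
        except OSError:
            pass                   # terminate gracefully on errors/timeouts
        finally:
            conn.close()           # then fall back to reconnecting

threading.Thread(target=client_thread, daemon=True).start()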
Re: Vision Software comparison
Quote:
Just as a reminder: be careful when spamming the robot with packets. The VxWorks operating system that all of our cRIOs run on, regardless of the language used to program them, has a hardware buffer for incoming network messages. If the software doesn't wake up in time, or enters an infinite loop for some reason and fails to read the messages off the buffer, the network buffer will fill up and new packets will be lost. This will result in a loss of communication, because the driver station packets will not get through.

This was discovered back in 2012. It is a very real problem and has very devastating results: you lose comms, but more importantly, you can't even reset the cRIO from the driver station because no packets will get through, rendering your robot motionless for the remainder of the match. Any team doing co-processor-related network communications should be aware of this.

Our alliance partner and good friends that year had trouble with this. They are not only a team we highly respect, but one highly respected amongst the entire FRC community. This was discussed in the FMS/Einstein Investigation Report whitepaper released after Championship in 2012.

Hope this helps,
Kevin
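One defensive pattern for this, as a sketch of the general idea in Python rather than cRIO/VxWorks code: drain every pending packet from the socket on each pass and keep only the newest, so the receive buffer never accumulates a backlog.
Code:
import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 1170))   # assumed port
sock.setblocking(False)

def drain_latest():
    """Read ALL queued packets, returning only the newest (or None)."""
    latest = None
    while True:
        try:
            latest, _ = sock.recvfrom(1024)
        except BlockingIOError:
            return latest          # queue is empty; nothing left to pile up

while True:
    packet = drain_latest()
    if packet is not None:
        pass   # process only the newest data here
    time.sleep(0.02)   # fixed rate; the drain keeps the buffer clear regardless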
Re: Vision Software comparison
We're still figuring it out. We have played a bit with the cRIO-based vision sample and with RoboRealm and OpenCV. We do have a working homebrew interface from Visual Studio C++ to NetworkTables on the robot (which is the way we're thinking of going). It appears that the Python library implementation of NetworkTables that can be found here is also a complete NetworkTables implementation.
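For the Python route, here is a minimal NetworkTables client sketch using the pynetworktables package (assuming its NetworkTables.initialize API; the server address and table/key names are made up).
Code:
from networktables import NetworkTables   # pip install pynetworktables

# Connect to the robot as a client; the address is a placeholder.
NetworkTables.initialize(server="10.0.0.2")
table = NetworkTables.getTable("vision")   # made-up table name

# Publish a vision result for the robot code to read.
table.putNumber("targetX", 123.4)

# Read back a value the robot side published (with a default if absent).
enabled = table.getBoolean("trackingEnabled", False)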
Re: Vision Software comparison
Quote:
I remember hearing about that. It would be the absolute worst to make it to Einstein and then not be able to compete at full capacity due to network errors. I remember in 2011 a robot never moved in the finals on Einstein in both matches. I think FIRST is doing better at addressing this issue with the practice matches, but it is sad that it happens.
Re: Vision Software comparison
We use the NI vision libraries in Java. All we do is separate the particles based on color and size and then wait for a horizontal target. We do that by comparing the lengths of the top and sides relative to each other.
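A sketch of that horizontal-vs-vertical test, in Python with OpenCV rather than the NIVision Java API the team uses: after filtering particles, a bounding box much wider than it is tall is treated as the horizontal target. The ratio threshold is a placeholder.
Code:
import cv2

HORIZONTAL_RATIO = 2.0   # placeholder: tune for the real target geometry

def is_horizontal_target(contour):
    """True if the particle's bounding box is wider than it is tall."""
    _, _, w, h = cv2.boundingRect(contour)
    # A horizontal target is a wide, short strip, so its top edge is much
    # longer than its sides; compare width against height.
    return h > 0 and (w / float(h)) > HORIZONTAL_RATIO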