View Full Version : Active Goal detection - on cRIO or Drive station
Hi,
we are using labview.
For the active goal detection, should I do the vision processing in the autonomous part of the cRIO code, or should I do it on the driver station and just send the active status to the cRIO? Will the driver station interact with the robot during the autonomous stage?
Conor Ryan
28-02-2014, 13:15
The best practice is to do all of the processing on the cRIO: you want to transmit as little data as possible and keep everything local on the cRIO for closer-to-real-time analysis.
Plus, for the autonomous hot goal detection, you only really have to process a picture once near the start. If the indicator isn't on the side you're looking at, your side will be hot for the 5-10 second period.
(Note: this can become more complicated and require additional logic depending on how and where you line up and what your camera can see)
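The check-once approach above can be sketched in a few lines. This is an illustrative Python sketch, not the team's LabVIEW code; the function name, the 5-second period, and the "check once at the start" helper are assumptions based on the 2014 game rules described in this post.

```python
HOT_PERIOD_S = 5.0  # each side is hot for roughly the first 5 seconds of autonomous


def autonomous_hot_goal(camera_side_is_hot: bool, start_time: float, now: float) -> bool:
    """Decide whether the goal we are aimed at is currently hot.

    camera_side_is_hot: result of a single image check at the start of auton.
    If our side was not hot in the first period, it becomes hot afterward,
    so the answer simply flips when the first period ends.
    """
    in_first_period = (now - start_time) < HOT_PERIOD_S
    return camera_side_is_hot if in_first_period else not camera_side_is_hot
```

The point is that one image, processed once, determines the hot/not schedule for the whole autonomous period; no continuous vision loop is needed.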
Alan Anderson
28-02-2014, 17:21
During autonomous stage will drive station interact with the robot?
Communication from the Dashboard program to the cRIO is not blocked during autonomous mode. You will be able to do vision processing on the computer running the Driver Station and send the "hot or not" status to the robot, if you wish.
Brandon_L
28-02-2014, 17:22
The best practice is to do all of the processing on the cRIO: you want to transmit as little data as possible and keep everything local on the cRIO for closer-to-real-time analysis.
I beg to differ. Processing on the cRIO drags it down, and sometimes creates lag as it processes the image.
You're already sending your webcam image to the dashboard. Why not process that? Retrieve whatever goal info you need, and send a string back to the robot.
All we're sending back is a Boolean if the goal we're looking at is hot or not.
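A minimal sketch of that dashboard-side check, in Python rather than the LabVIEW Dashboard code being described. The thresholds and the `is_goal_hot` helper are hypothetical: the real 2014 approach used particle analysis on the retroreflective tape, but the core idea is the same, i.e. reduce the whole image to one boolean before sending anything to the robot.

```python
BRIGHT = 200      # intensity threshold for the lit retroreflective tape (tuned value, assumed)
MIN_PIXELS = 50   # minimum lit-pixel count to call the goal "hot" (tuned value, assumed)


def is_goal_hot(frame):
    """Tiny sketch: the hot goal's extra horizontal tape adds a large bright
    blob, so count bright pixels in a grayscale frame (a list of pixel rows)
    and compare against a tuned minimum."""
    lit = sum(1 for row in frame for px in row if px > BRIGHT)
    return lit >= MIN_PIXELS
```

The dashboard would then send only this single boolean back to the robot, instead of streaming any pixels to the cRIO.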
Greg McKaskle
02-03-2014, 09:01
The tutorial and example code attempted to show how both approaches can be made to work.
You will not bog down the cRIO unless you send it more pixels/second than it can process. How many that is depends on how you decide to process, image size, framerate, etc.
The processor on your laptop is more powerful, but it can pretty easily be overwhelmed by image processing too. Also be careful not to request more images than you are processing, or this can introduce video lag.
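Greg's pixels-per-second point can be made concrete with simple arithmetic. The numbers below are illustrative resolutions and frame rates, not measured limits of any particular processor.

```python
def pixel_rate(width: int, height: int, fps: float) -> float:
    """Pixels per second the processor must handle at a given resolution and frame rate."""
    return width * height * fps


# 640x480 at 30 fps is about 9.2 million pixels/s; dropping to 320x240 at
# 10 fps cuts the processing load by a factor of 12.
full = pixel_rate(640, 480, 30)    # 9,216,000 pixels/s
light = pixel_rate(320, 240, 10)   # 768,000 pixels/s
print(full / light)                # 12.0
```

This is why resolution and frame rate, not the choice of processor alone, decide whether either the cRIO or the laptop bogs down.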
Greg McKaskle
ykarkason
06-03-2014, 05:06
You will not bog down the cRIO if you don't attempt real-time processing.
Our vision system relied on processing the targets on demand and extracting the useful data from the result.
During auton we ran the processing at most 3 times, retrying only if the program hit an error. And that's it: distance, angle, hot-or-not, once, and you're done for auton.
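The retry-limited, on-demand pattern described above might look like this. This is a hypothetical Python sketch, not the team's code; `grab_frame`, `process`, and the use of `ValueError` for a failed detection are assumptions.

```python
MAX_ATTEMPTS = 3  # the post describes running the processing at most 3 times


def detect_with_retries(grab_frame, process):
    """Run the vision pipeline on demand, retrying up to MAX_ATTEMPTS times
    if processing fails. Returns the result (e.g. distance, angle, hot flag)
    or None if every attempt errored."""
    for _ in range(MAX_ATTEMPTS):
        try:
            return process(grab_frame())
        except ValueError:
            continue  # bad frame or no target found; try once more
    return None
```

Because the result is captured once and reused, the pipeline runs for a bounded, tiny slice of autonomous instead of every loop iteration.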
Alpharex
07-03-2014, 10:16
I can send you our code to look at if you want to use it or get an idea of how to do it. We process the dashboard image and then send a true (or false) to the cRIO. This is the method recommended by National Instruments.
Use the dashboard and send the data to the cRIO, or else do it on the cRIO and suffer from very high latency.
cmwilson13
07-03-2014, 23:38
Use the dashboard and send the data to the cRIO, or else do it on the cRIO and suffer from very high latency.
That's simply not true. You just can't process at large resolutions and 30 fps.
SoftwareBug2.0
07-03-2014, 23:54
I recommend a third option: do vision processing on neither the cRIO nor the driver station.
We've done both, and doing it on the driver station has been both easier and more effective this year than when we tried it on the cRIO.