#1
Re: How to run vision processing code on classmate during matches?
I'm not sure what to do about that. OpenCV is definitely an option, and if someone is willing to do wrappers, NI-Vision is too.
Greg McKaskle
#2
|
|||
|
|||
|
Re: How to run vision processing code on classmate during matches?
Also, what I am really trying to ask in this thread is: once the code is written, how do I execute it?
As an aside, the plan my team has is to use the rectangle as a way of positioning the robot, as we plan to have an extremely short-range shooter. Would you still suggest processing at less than 30 FPS, and on the cRIO?
#3
|
||||||
|
||||||
|
Re: How to run vision processing code on classmate during matches?
I don't want to hijack this thread, but I have heard usually reliable sources say that sending pictures back to the Classmate (or whatever Win7 machine you use on your driver station) is a bad idea.
The gist of their argument is that up close the wifi is up to the task, but from across the field, with other robots all talking too, the frame rate drops to the low single digits (3-4 fps). This is the kind of thing that scares me to death, because I can't know whether it is a problem until we get to an actual competition.
So... should I worry about the 10,000 other problems I have to worry about, or should I continue to worry about this one (and perhaps decide that I don't really want to send the camera data to the remote Win7 machine after all)? Do tell.
Joe J
#4
|
||||
|
||||
|
Re: How to run vision processing code on classmate during matches?
Quote:
However, we did have some small issues with the robot hiccuping. We believe the problem was high latency in the Classmate's DS commands (we graphed some data obtained from Wireshark). We have not ruled out the camera directly attached to the wifi with 100% certainty...
#6
|
|||
|
|||
|
Re: How to run vision processing code on classmate during matches?
You can't communicate between the robot and the laptop in hybrid/autonomous, can you? Or are things like the keyboard and joysticks the only things disabled?
#7
|
||||
|
||||
|
Re: How to run vision processing code on classmate during matches?
If you are allowed to put another machine on the robot for image processing, you may as well use the Kinect on it instead of the Axis! I've heard it's very easy to get good data, much more than the Axis could ever give you, and with a computer with USB on the robot it is very possible and legal to do. (The Kinect anyway; I know nothing about the computer on the robot.)
#8
|
|||
|
|||
|
Re: How to run vision processing code on classmate during matches?
I can vouch that grabbing the image stream from the Axis camera works fine. Our team got ~29 fps using the smaller resolution. The 3-4 fps figure, I would imagine, comes from someone trying to use the cRIO to relay the images.
As for communication during hybrid: you can; the only thing disallowed is human input (well, excepting the Kinect).
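For teams grabbing the stream directly: the Axis MJPEG feed is essentially one long HTTP response containing back-to-back JPEGs, so frame extraction is just scanning for JPEG markers. A minimal Python sketch (the `/mjpg/video.mjpg` URL in the usage comment is an assumption; adapt it to your camera's configuration):

```python
def split_mjpeg_frames(buf):
    """Pull complete JPEG frames out of a raw MJPEG byte buffer.

    Scans for the JPEG start-of-image (FF D8) and end-of-image (FF D9)
    markers and returns (frames, leftover) so that a partial trailing
    frame can be completed by the next network read.
    """
    frames = []
    while True:
        start = buf.find(b"\xff\xd8")
        if start == -1:
            break
        end = buf.find(b"\xff\xd9", start + 2)
        if end == -1:
            break  # frame not complete yet; wait for more bytes
        frames.append(buf[start:end + 2])
        buf = buf[end + 2:]
    return frames, buf

# Hypothetical usage against an Axis camera (URL is an assumption):
#
#   import urllib.request
#   stream = urllib.request.urlopen("http://10.0.0.11/mjpg/video.mjpg")
#   leftover = b""
#   while True:
#       leftover += stream.read(4096)
#       frames, leftover = split_mjpeg_frames(leftover)
#       for jpeg in frames:
#           process(jpeg)   # hand off to your vision code
```

Reading the stream yourself like this is how the laptop can hit the camera's full rate without the cRIO in the middle.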
#9
|
|||
|
|||
|
Re: How to run vision processing code on classmate during matches?
Not knowing much about this situation, I can't say for sure what their frame rate was on and off the field, but if they used the default dashboard and the M1011, the frame rate would have been single digits to start with.
The default dashboard, along with the cRIO camera communications, moved from SW-timed requests of individual JPEGs to a HW-timed MJPG stream this year, once it was discovered that the M1011 had poor performance with the JPEG route. Both options are still in the LV palette, but the default is now MJPG. The initial decision was somewhat arbitrary, and C, and therefore Java, were already using MJPG.
Depending on the setup, other factors could have played a part. It is easy to chalk it up to the "field", but I have never witnessed this and it doesn't feel like the actual culprit. As mentioned, it is somewhat difficult to simulate a match in your shop, but on the network you have, you can look at utilization and latency. You can look at how/if it changes when a second robot is added. You can also do some back-of-envelope calculations to see how much of the N-speed network you are using.
My final advice is to look at the elements being measured and use those requirements to determine the rates and resolutions needed, as well as the appropriate sensor to use. Due to slow speeds (30 Hz max), somewhat high latency (>60 ms, often 150 ms), and variable jitter, cameras are not necessarily a good sensor to close a loop with. It is far better to calculate where the target is and use an encoder or pot to turn the turret. If the robot is to be turned, use a gyro. More CPU does little to improve those numbers. Higher-speed cameras exist, but they are not in the kit, their cost is pretty high, and it may be difficult to integrate them. I think the camera is a very valuable sensor, but it all depends on how it is used.
To the original topic: the laptop allows you to bring more CPU to the table, to process images more thoroughly, at a higher resolution, and perhaps at a faster rate. Once you have an algorithm that demands more CPU, this seems like a good step. Until then, ...
Greg McKaskle
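The back-of-envelope calculation Greg mentions is easy to act on. A rough Python sketch; the 15:1 JPEG compression ratio is an assumption (measure your own camera's actual frame sizes to refine it):

```python
def mjpeg_bandwidth_mbps(width, height, fps, bits_per_pixel=24, jpeg_ratio=15.0):
    """Estimate MJPEG bandwidth in megabits per second.

    Takes the raw frame size (width x height x bits per pixel), divides
    by an assumed JPEG compression ratio, and multiplies by frame rate.
    """
    bits_per_frame = width * height * bits_per_pixel / jpeg_ratio
    return bits_per_frame * fps / 1e6

# 320x240 at 30 fps comes out to roughly 3.7 Mbit/s under these
# assumptions; 640x480 quadruples that, which starts to matter on a
# wireless link shared with driver station traffic and other robots.
```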
#10
|
|||
|
|||
|
Re: How to run vision processing code on classmate during matches?
All right, so I have gotten a ton of really helpful information; now what I would like is some help figuring out how to put it all together.
Here is what I am hoping to do, in its entirety:
1. Use the camera on the robot to find the vision target.
2. Export images from the camera to either our driver station computer or a laptop mounted on the robot.
3. Use the NI Vision software to find where the rectangles are.
4. Take the rectangles found by the NI Vision software and calculate the distance from, and angle off of perpendicular with, the backboards.
5. Using these distances and angles, which will be updated throughout the course of the robot's movement, calculate how our joystick would need to be moved in order to move our robot with a short-range shooter up to the backboards.
6. Finally, send these theoretical joystick controls back to the cRIO and use them to move the robot to where it needs to go.
I am starting to get how to get the images from the camera, and I can theoretically do the calculations on the angles and movement required, but what I still need to figure out is how to create the virtual joystick, how to send the virtual joystick controls to the robot, and how to make the program I write run on whatever computer I decide it needs to run on.
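For the distance-and-angle step, a simple pinhole-camera model gets you distance and bearing from one detected rectangle. A sketch in Python; the 320-px image width, the 47° horizontal field of view, and the 0.61 m (24 in) target width are all assumptions to check against your actual camera and the game manual:

```python
import math

def _focal_length_px(image_width_px, horiz_fov_deg):
    """Effective focal length in pixels for a pinhole camera model."""
    return (image_width_px / 2) / math.tan(math.radians(horiz_fov_deg / 2))

def target_distance_m(target_width_px, image_width_px=320,
                      real_width_m=0.61, horiz_fov_deg=47.0):
    """Distance estimate from the apparent pixel width of the target
    rectangle: similar triangles, real_width * focal / pixel_width."""
    f = _focal_length_px(image_width_px, horiz_fov_deg)
    return real_width_m * f / target_width_px

def bearing_deg(target_center_x_px, image_width_px=320, horiz_fov_deg=47.0):
    """Horizontal angle from the camera axis to the target center
    (negative means the target is left of center)."""
    f = _focal_length_px(image_width_px, horiz_fov_deg)
    return math.degrees(math.atan((target_center_x_px - image_width_px / 2) / f))
```

Note that the angle off perpendicular to the backboard is a separate quantity from the bearing; it can be estimated from the rectangle's aspect-ratio distortion, but that measurement is much noisier.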
#11
|
||||
|
||||
|
Re: How to run vision processing code on classmate during matches?
So can you do processing on the robot with a laptop as long as you don't control the robot, or can't you?
#12
|
|||
|
|||
|
Re: How to run vision processing code on classmate during matches?
Currently the rules make it look like you can do the processing on a laptop that is mounted on the robot as long as it doesn't directly control the robot; however, we are not fully sure of this as of right now. A Q&A question, perhaps?
#14
|
|||
|
|||
|
Re: How to run vision processing code on classmate during matches?
I'm looking for exactly the same information. We are using Java on our robot, and are trying to decide which is best: using the cRIO to do the image processing, or the driver station laptop. We had been assuming the laptop, but I'm having a hard time finding info on how exactly you communicate information from LabVIEW back to the Java code running on the robot. Do you customize the driver station software to do the image processing and send it through the packets there? Or do you write a completely standalone LabVIEW application and communicate via TCP/IP somehow?
I'm starting to lean back towards doing it all on the cRIO to avoid that whole issue, but then I'm not sure I can take advantage of the NI Vision Assistant modeling information... So many questions... thanks in advance for any additional guidance!
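One common pattern for the standalone route is to push each vision result to the robot as a small datagram. A Python sketch of the laptop side, under assumptions: the port number and the two-double packet format are made up for illustration, and the robot-side Java code would open a matching socket and read the same two big-endian doubles:

```python
import socket
import struct

def pack_result(distance_m, angle_deg):
    """Encode one vision result as two big-endian doubles (16 bytes).
    Big-endian matches Java's default DataInputStream byte order."""
    return struct.pack(">dd", distance_m, angle_deg)

def unpack_result(payload):
    """Decode a 16-byte payload back into (distance_m, angle_deg)."""
    return struct.unpack(">dd", payload)

def send_result(sock, distance_m, angle_deg, addr):
    """Fire one UDP datagram at the robot. Lost or stale packets are
    simply superseded by newer ones, which suits a control loop."""
    sock.sendto(pack_result(distance_m, angle_deg), addr)

# Hypothetical addressing -- a 2012 cRIO sits at 10.TE.AM.2, and the
# port must be one the field network permits for team use:
# ROBOT_ADDR = ("10.0.0.2", 1130)
```

On the Java side, a sketch of the matching receiver would loop on `DatagramSocket.receive()` and pull the two values out with `DataInputStream.readDouble()`; whether this is run as a custom dashboard or a fully standalone program is exactly the open question in this thread.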
#15
|
|||
|
|||
|
Re: How to run vision processing code on classmate during matches?
So, I was running the Vision Assistant software, and when I told it to acquire an image it asked me for the NI-IMAQdx driver, which I had to download from the NI image acquisition software. Just something to keep an eye out for when other teams are setting something like this up.