#1
Vision CPU Overloading
Hello all.
I've spent about 24 hours trying to figure out how to properly do vision processing on our robot. I basically copy-pasted this year's vision example code so that we can detect the hot targets, and it works great, except that the robot CPU runs at 100% and everything lags by an average of 1.5 seconds because of it (I've tried putting a delay in the while loops, but that doesn't help). I tried moving the processing to the driver station, but that just overloads the laptop's CPU instead. Does anyone know how to properly do vision processing? All I need to know is whether one side is hot or not (booleans!). Is there a tutorial anywhere that I can follow? I'm having the same issue as these people, except that when I put the vision code on the driver station, I pegged the driver station CPU at 100% as well.
Last edited by bspymaster : 08-02-2014 at 16:16. Reason: added more information
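For later readers: a delay in the loop only helps if it actually leaves the CPU idle for the rest of each cycle. Below is a minimal, self-contained sketch of pacing a vision loop at a fixed period; `processFrame` is a hypothetical stand-in for the example's grab-and-process code, not a WPILib call.

```java
public class ThrottledVisionLoop {
    // Target period between iterations, in milliseconds. The Axis camera
    // delivers ~15 fps at typical 2014 settings, so polling much faster
    // than ~66 ms per frame just burns CPU on duplicate images.
    static final long PERIOD_MS = 100;

    // Runs `iterations` passes, sleeping off the remainder of each period
    // so the CPU is idle between frames. Returns total wall time in ms.
    public static long run(int iterations, Runnable processFrame) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            long iterStart = System.nanoTime();
            processFrame.run();  // grab + process one frame
            long elapsedMs = (System.nanoTime() - iterStart) / 1_000_000;
            long sleepMs = PERIOD_MS - elapsedMs;  // leftover time this cycle
            if (sleepMs > 0) {
                try {
                    Thread.sleep(sleepMs);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

If processing a frame takes longer than the period, the loop simply skips the sleep, so it degrades gracefully instead of queueing work.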
#2
Re: Vision CPU Overloading
Vision code is extremely CPU intensive. It is beneficial to offload that processing to your dashboard and just send back the values you need to check. In practice it's just as responsive, and often a little quicker.
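The usual FRC channel for sending values back is NetworkTables/SmartDashboard, but the pattern is simply "process on the laptop, ship back the booleans." As a self-contained illustration of how small that return traffic is, here is a loopback UDP sketch; the class and method names are invented for the example, not a WPILib API.

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class HotResultLink {
    // Dashboard side: pack the two booleans into one byte and send it.
    public static void send(int port, boolean leftHot, boolean rightHot) throws IOException {
        byte payload = (byte) ((leftHot ? 1 : 0) | (rightHot ? 2 : 0));
        try (DatagramSocket sock = new DatagramSocket()) {
            sock.send(new DatagramPacket(new byte[] { payload }, 1,
                    InetAddress.getLoopbackAddress(), port));
        }
    }

    // Robot side: wait for the one-byte result, unpack to {left, right}.
    public static boolean[] receive(DatagramSocket sock) throws IOException {
        DatagramPacket p = new DatagramPacket(new byte[1], 1);
        sock.receive(p);
        return new boolean[] { (p.getData()[0] & 1) != 0, (p.getData()[0] & 2) != 0 };
    }

    // Loopback demo of the whole hand-off on one machine.
    public static boolean[] roundTrip(boolean leftHot, boolean rightHot) {
        try (DatagramSocket sock = new DatagramSocket(0)) {  // any free port
            sock.setSoTimeout(2000);                         // don't hang forever
            send(sock.getLocalPort(), leftHot, rightHot);
            return receive(sock);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The point is that what comes back to the robot is tiny, one byte, compared with the frames going out to the laptop.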
#3
Re: Vision CPU Overloading
What rate are you running the camera or loop at? What resolution is the camera? How long is the loop taking to process an image?
Greg McKaskle
#4
Re: Vision CPU Overloading
What follows is more theory than demonstration, because we haven't fully tested the vision code yet; we will in a couple of days. I anticipated that the vision code would take more than 20 ms to execute, so I have a test program wrapped in a timer to find out just how long it really takes. Other notes: clearly the code will run faster if there are fewer particles to process, so adjusting the threshold to get fewer particles is a good place to start. My Autonomous command starts with just the vision processing; only once the hot-or-not decision is made do I begin to schedule commands to execute, like driving forward. I'm not completely sure that will work, but I want to give the cRIO every chance to concentrate on the vision work without distractions. Lastly, I expect to look for performance improvements in the code. For example, I already eliminated the distance calcs, since we don't need them. We're working in Java, by the way. More as testing proceeds.
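A stopwatch wrapper like the one described can be as small as this; it is self-contained, and `section.run()` stands in for whatever vision call you want to measure.

```java
public class SectionTimer {
    // Runs a code section once and returns elapsed wall time in ms.
    public static double timeMs(Runnable section) {
        long start = System.nanoTime();
        section.run();
        return (System.nanoTime() - start) / 1_000_000.0;
    }
}
```

For example, `SectionTimer.timeMs(() -> detector.process(image))` would tell you whether the processing really exceeds 20 ms (`detector` here is a hypothetical name).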
#5
Re: Vision CPU Overloading
See Tutorial 8 - Integrating Vision into Robot Code.
#6
Re: Vision CPU Overloading
We had this problem before, and we decided to have the robot not do any of the processing and run vision on the dashboard instead. If it's running on the dashboard, then it uses your computer's CPU, which can handle a higher load.
Last edited by killer_rabbit3 : 10-02-2014 at 08:59.
#7
Re: Vision CPU Overloading
Follow-up from my previous post about theory. Here is the reality. Our code is written in Java. The first time AxisCamera.getInstance() is called, it takes our four-slot cRIO, over the D-Link, between 4.3 and 4.7 seconds to deliver the first image. After that, each image takes about 25 ms to deliver. We're running the camera at 320x240 resolution, 15 fps, compression 30. The live image shows up on the SmartDashboard. The threshold and filtered images, along with the original image, are stored in the cRIO flash memory, and are viewable via FTP through a web browser. Because of the time it takes to get an image, we put the getImage() calls inside a loop, like so:

imageWidth = 0;
while (imageWidth == 0) {
    try {
        image = camera.getImage();
        imageWidth = image.getWidth();
    } catch (NIVisionException er) {
        // ...
    } catch (AxisCameraException er) {
        // ...
    }
}

When this loop exits, we have a picture! So we make the initial call to getInstance() in the robotInit() routine. But we can't use the picture from there, because the hot goals aren't indicated until the autonomous period starts. When that happens, we wait 0.1 seconds, get another picture, and process that one for hot goal detection. Processing should take no more than 300 ms. Then we start moving the robot. It would have saved a lot of discovery if the javadocs or some other documentation about vision processing had mentioned that the instancing and getImage() functions let program execution continue before their work is complete. Or did I miss it somewhere?
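One hazard with that loop as written is that it spins forever if the camera never answers. Below is a self-contained sketch of the same idea with a bounded retry; the `FrameSource` interface is a stand-in for the camera object, not the WPILib `AxisCamera` class.

```java
public class CameraWarmup {
    // Stand-in for the camera: returns the grabbed image width,
    // 0 (or an exception) while the camera is still booting.
    public interface FrameSource {
        int grabWidth() throws Exception;
    }

    // Polls until a non-empty frame arrives or maxAttempts runs out.
    // Returns the number of attempts used, or -1 on give-up, so the
    // caller can log how long warm-up took and carry on without vision.
    public static int waitForFirstFrame(FrameSource camera, int maxAttempts, long retryDelayMs) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                if (camera.grabWidth() > 0) {
                    return attempt;  // got a real image
                }
            } catch (Exception e) {
                // camera not up yet; fall through and retry
            }
            try {
                Thread.sleep(retryDelayMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return -1;
            }
        }
        return -1;  // never came up; don't block the robot forever
    }
}
```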
#8
Re: Vision CPU Overloading
Thanks for the details.
Setting up the camera connection is not taxing on the CPU, but the camera and cRIO must both be booted up and exchange data over HTTP before the first image comes back. Until this communication completes, it seems that getImage() times out and returns an empty image. The LabVIEW implementation doesn't do this; it blocks instead. The difference in implementation matches the scheduling characteristics of each language.

Unless this is in its own thread, rather than looping until you get an image, I'd suggest you test the width, skip the processing, and note that vision isn't working yet, but otherwise allow the code to do other things.

Does the code store every image for FTP? While this will obviously work, it is somewhat taxing on the cRIO and the flash. It is certainly good for debugging the vision code, but you don't want to fill up the disk/flash, and perhaps you don't need to store all of them.

Greg McKaskle
#9
Re: Vision CPU Overloading
Greg, great info, thanks. We only store three images on the cRIO, and that's only for getting all the camera and filtering settings right. Once we have them tuned, we can remove them; we might keep one during competition for later diagnostics. Since our plan is to initiate the long image-startup period in the robotInit() method, by the time the game starts we'll be on our way. So effectively we're running vision in its own threads: one for init, and a much shorter one for snapshot processing. We won't ask the bot to do anything else until the hot goal decision is made, which we estimate will take about 300 ms. After that we don't do any further CPU-intensive processing during the match. Fortunately, we can recognize the hot goal without moving the robot from its starting position.
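If it helps anyone reading later: in the 2014 game the hot-goal indicator is a horizontal strip of retroreflective tape, while the always-present static target is vertical, so the hot/not decision can reduce to an aspect-ratio test on the detected particles. A self-contained sketch, with an illustrative (untuned) threshold:

```java
public class HotGoalCheck {
    // The hot-goal indicator tape is wide and short; the static target
    // is tall and thin. A particle's width/height ratio separates the
    // two. The 1.0 threshold is an illustrative choice, not a tuned value.
    public static boolean looksHorizontal(double widthPx, double heightPx) {
        if (heightPx <= 0) {
            return false;  // degenerate particle; ignore it
        }
        return widthPx / heightPx > 1.0;
    }

    // The side is "hot" if any detected particle is a horizontal strip.
    // Each row of `particles` is {widthPx, heightPx}.
    public static boolean isHot(double[][] particles) {
        for (double[] p : particles) {
            if (looksHorizontal(p[0], p[1])) {
                return true;
            }
        }
        return false;
    }
}
```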
#10
Re: Vision CPU Overloading
Sounds great. Post if you have issues.
Greg McKaskle
#11
Re: Vision CPU Overloading
All,
I haven't actually measured this on a real cRIO, but theory and practice strongly suggest that, in addition to the valid concerns about wearing out one's cRIO flash memory, file writes to flash can be much slower than file reads. I may instrument that and report back when I get a chance. If you like, perhaps try measuring how fast you can process images on the cRIO with and without flash writes; I'd be interested in hearing the outcome. Best, Martin
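That measurement is easy to script. Here is a self-contained sketch that times one write and one read of a scratch file; it runs against a temp directory here, whereas on the cRIO you would point it at the flash filesystem.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FlashTiming {
    // Writes `bytes` of zeroes to a scratch file, reads them back, and
    // returns {writeMs, readMs} so the two speeds can be compared.
    public static double[] demo(int bytes) {
        try {
            Path p = Files.createTempFile("flash-timing", ".bin");
            byte[] data = new byte[bytes];

            long w0 = System.nanoTime();
            Files.write(p, data);                    // timed write
            double writeMs = (System.nanoTime() - w0) / 1_000_000.0;

            long r0 = System.nanoTime();
            Files.readAllBytes(p);                   // timed read
            double readMs = (System.nanoTime() - r0) / 1_000_000.0;

            Files.delete(p);                         // clean up scratch file
            return new double[] { writeMs, readMs };
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Note that desktop numbers won't transfer to the cRIO's flash; the value is in running the same harness on the target hardware.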
#12
Re: Vision CPU Overloading
I strongly recommend running vision on your dashboard rather than your cRIO, especially if you have a stronger PC for your driver station (we've switched to a PC with an i7 and 12 GB of RAM to compensate). Personally, I've experienced extreme lag when trying to run complex vision code on a cRIO, but virtually none when running it off the dashboard.