View Full Version : 30fps Vision Tracking on the RoboRIO without Coprocessor


Jaci
14-11-2016, 08:24
Howdy,

A hot topic surrounding the FRC community is Vision Tracking and Processing. Faster and faster, vision processing is becoming more accessible, with community projects, code releases, frameworks and new hardware to play with. There's also a common misconception that the RoboRIO just isn't powerful enough to run a Vision System, with CPU time to spare for your own program. Let's debunk that.

Here (http://imjac.in/ta/post/2016/11/14/08-00-15-generated/) you can find the post I've made on how we can achieve 30fps, 640x480 Vision Processing on the RoboRIO itself without the need for a coprocessor.

In short, we can process 30 frames in about 231ms (7.7ms per frame), which is about 23% of the 33.3ms-per-frame budget at 30fps. This leaves plenty of processing room for the FRC Network Daemon as well as your own user code.

The code used in this investigation is available here (https://github.com/JacisNonsense/neon_vision)

nickbrickmaster
14-11-2016, 09:57
This is pretty cool. Who's the guy on the R thread that was complaining about optimization? :P

Is this feasible to use in competition? How flexible is it? (I have limited experience with vision. Is a threshold the only thing that you need?)
How much CPU time does a typical robot program take up (I don't have one in front of me)? What if I'm running 3-4 control loops on the RIO?

Jaci
14-11-2016, 10:20
This is pretty cool. Who's the guy on the R thread that was complaining about optimization? :P

Is this feasible to use in competition? How flexible is it? (I have limited experience with vision. Is a threshold the only thing that you need?)
How much CPU time does a typical robot program take up (I don't have one in front of me)? What if I'm running 3-4 control loops on the RIO?

1) Feasible, for sure. In a way, it may be a better alternative to a coprocessor for a few reasons, such as freeing up a port on the router, or not having to worry about network latency and bandwidth. I can easily see this being used inside of a competition environment.

2) Flexibility depends on how you want to develop the code further and/or use it. The most expensive operations in computer vision are memory allocations and copies. Thresholding is one of the biggest culprits here, and a threshold is present in just about every algorithm. The assembly can be modified to work on different types of thresholding (less than instead of greater than, or both!), or on other algorithms depending on your use case. The code I've provided is just a stub of all the possibilities. Normal OpenCV functions and operations still apply, leaving it about as flexible as any other vision program. The actual copy function itself only takes 2ms, leaving you with about 31ms per frame to do everything else.
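
For reference, the plain OpenCV equivalent of the single-channel threshold that the assembly accelerates is just a cv::inRange call; the bounds below are illustrative, not the ones used in the repo:

#include <opencv2/core.hpp>

cv::Mat threshold_frame(const cv::Mat& frame) {
    cv::Mat binary;
    // "greater than": keep pixels with intensity in [200, 255]
    cv::inRange(frame, cv::Scalar(200), cv::Scalar(255), binary);
    // a "both ways" (band-pass) variant just narrows the range:
    // cv::inRange(frame, cv::Scalar(120), cv::Scalar(200), binary);
    return binary;
}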

3) The CPU usage of a robot program is pretty hard to judge, as most of it depends on how the code is written. I'll take the closest example I have, which is ToastC++. Running at a 1000Hz update rate, the main process (which interfaces with WPILib) uses about 20% CPU, and the child process (the actual user control) uses about 2% CPU. That 1000Hz loop is updating 4 motors based on 4 axes of a joystick (although the main process actually updates all allocated motors, digital IO, analog IO and joysticks each loop). In a competition I wouldn't recommend a 1000Hz update rate; something like 200Hz is more than plenty, as you likely have a lot more going on. If you design your control loops carefully (that is, running them all in a single loop, see this (https://github.com/JacisNonsense/ToastCPP/blob/master/Toast-Core/src/command.cpp) for implementation details and the sketch below), you should easily be able to satisfy your needs without hitting 100% average CPU. If you're still afraid, thread priorities are your friend. Obviously this depends on a number of factors (what you're doing, whether you're in C++ or Java, etc.)
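
As a rough sketch of the single-loop idea (this is not the ToastC++ scheduler, just the general shape; names are illustrative):

#include <chrono>
#include <functional>
#include <thread>
#include <vector>

// Run every registered control loop from one fixed-rate thread, so only a
// single timer competes for CPU instead of one thread per subsystem.
void run_loops(std::vector<std::function<void()>> loops, double hz) {
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(1.0 / hz));
    auto next = clock::now();
    while (true) {  // a real robot program would check an enabled/running flag
        for (auto& loop : loops) loop();  // drive, shooter, intake, aiming...
        next += period;
        std::this_thread::sleep_until(next);  // sleep, don't spin
    }
}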

euhlmann
14-11-2016, 10:41
Pretty cool!
Now if only all of OpenCV could be NEON-optimized :rolleyes:

Or if somebody could teach me what black magic I need to invoke to get OpenCV GPU acceleration on Android :D

Jaci
14-11-2016, 10:55
Pretty cool!
Now if only all of OpenCV could be NEON-optimized :rolleyes:

Or if somebody could teach me what black magic I need to invoke to get OpenCV GPU acceleration on Android :D

OpenCV does have a NEON and VFP build option, both of which were enabled during these tests, which is part of the reason cv::inRange executed so quickly.
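
For reference, a cross-compile of OpenCV with those options enabled looks roughly like this (option names and the toolchain file can vary between OpenCV versions, so treat it as a sketch):

cmake -D ENABLE_NEON=ON -D ENABLE_VFPV3=ON \
      -D CMAKE_TOOLCHAIN_FILE=platforms/linux/arm-gnueabi.toolchain.cmake ..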

euhlmann
14-11-2016, 11:01
OpenCV does have a NEON and VFP build option, both of which were enabled during these tests, which is part of the reason cv::inRange executed so quickly.

Yes, but few things have been NEON-optimized so far

RyanShoff
14-11-2016, 11:12
Have you looked at how much overhead comes from getting 30fps from a USB camera?

Also findContours() should run faster on non-random data.

Jaci
14-11-2016, 11:23
Have you looked at how much overhead comes from getting 30fps from a USB camera?

Also findContours() should run faster on non-random data.

I don't have a USB camera to test with, and I have to fix my Kinect adapter before I can run this live.

I understand the findContours() method will run faster with non-random data; however, I chose random data to provide a worst-case scenario. Using a real image from a Kinect, it runs somewhat faster.

Andrew Schreiber
14-11-2016, 11:41
First, pretty awesome write up. Running on board removes a lot of risk associated with reliance on vision processing. The communication step is hard.

Second, I'd be curious how you derived the requirement of 640x480. It seems to me that using a lower resolution image would process faster and the quickest win in this whole process would be to compute what the min image resolution required would be.

I've attached some of the test images 125 produced that I've down sampled as an example if folks want to play with it. They were taken at 14 feet away dead straight on and then scaled using imagemagick to 1280x960 -> 80x60. While the 80x60 image is just silly I do believe there are applications where much lower resolutions are just as effective.

It also opens the possibility of using low res images for identifying ROI and then processing just the smaller region in the higher resolutions.

Jared Russell
14-11-2016, 11:49
This is very cool, though I'm not (yet) convinced that you can get 30fps @ 640x480 with an RGB USB camera using a "conventional" FRC vision algorithm. But now you have me thinking...

Why I think we're still a ways off from RGB webcam-based 30fps @ 640x480: Your Kinect is doing several of the most expensive image processing steps for you in hardware.

With a USB webcam, you need to:

1. Possibly decode the image into a pixel array (many webcams encode their images in formats that aren't friendly to processing).

2. Convert the pixel array into a color space that is favorable for background-lighting-agnostic thresholding (HSV/HSL). This is done once per pixel per channel (3*640*480), and each op is the evaluation of a floating point (or fixed point) linear function, and usually also involves evaluating a decision tree for numerical reasons.

3. Do inRange thresholding on each channel separately (3x as many operations as in your example) and then AND together the outputs into a binary image.

4. Run FindContours, filter, etc... These are usually really cheap, since the input is sparse.

So in order to do this with an RGB webcam, we're talking at least 6x as many operations assuming a color space conversion and per-channel thresholding, and likely more because color space conversion is more expensive than thresholding. Plus possible decoding and USB overhead penalties. Even if we ignore that, we're at 7.7 * 6 = 46.2ms per frame, which would be about 15 frames per second at roughly 69% CPU utilization. Anecdotally, I'd expect another 30+ ms per frame of overhead.

The Kinect is doing all of the decoding for you, does not require a color space conversion, and gives you a single channel image that is already in a form that is appropriate for robust performance in FRC venues. No Step 1, No Step 2, and Step 3 is 1/3 as complex when compared to the above.

However...

Great idea hacking the ASM to use SIMD for inRange. I wonder if you could also write an ASM function to do color space conversion, thresholding, and ANDing in a single function that only touches registers (may require fixed point arithmetic; I'm not sure what the RoboRIO register set looks like). This would add several more ops to your program, and have 3x as many memory reads, but would have the same number of memory writes.
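
For concreteness, the conventional pipeline above (steps 2-4) in stock OpenCV 3.x calls looks roughly like the sketch below; the HSV bounds are illustrative only:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<std::vector<cv::Point>> find_targets(const cv::Mat& bgrFrame) {
    cv::Mat hsv, mask;
    // Step 2: per-pixel colour space conversion (the expensive part)
    cv::cvtColor(bgrFrame, hsv, cv::COLOR_BGR2HSV);
    // Step 3: three-channel inRange; OpenCV ANDs the channel results internally
    cv::inRange(hsv, cv::Scalar(50, 100, 100), cv::Scalar(90, 255, 255), mask);
    // Step 4: contour extraction on the (sparse) binary image
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}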

Jared Russell
14-11-2016, 11:57
Second, I'd be curious how you derived the requirement of 640x480. It seems to me that using a lower resolution image would process faster and the quickest win in this whole process would be to compute what the min image resolution required would be.

This is definitely true. The resolution you need is a function of range, target geometry, angle of incidence, camera field of view, the frequency and type of non-target objects that pass the threshold, and required precision. 640x480 has been overkill for all vision challenges to date.

640x480x30 fps is a convenient benchmark, though, as it is achievable with largely unoptimized code by many forms of coprocessors.

Jaci
14-11-2016, 11:58
First, pretty awesome write up. Running on board removes a lot of risk associated with reliance on vision processing. The communication step is hard.

Second, I'd be curious how you derived the requirement of 640x480. It seems to me that using a lower resolution image would process faster and the quickest win in this whole process would be to compute what the min image resolution required would be.

I've attached some of the test images 125 produced that I've down sampled as an example if folks want to play with it. They were taken at 14 feet away dead straight on and then scaled using imagemagick to 1280x960 -> 80x60. While the 80x60 image is just silly I do believe there are applications where much lower resolutions are just as effective.

It also opens the possibility of using low res images for identifying ROI and then processing just the smaller region in the higher resolutions.

Honestly I used 640x480 as a kind of 'boast' as to how much potential this can hold (that, and it's also the default resolution of a Kinect camera @ 30fps). You can actually downscale this image entirely using the VFP, by using vld1.64 to load into the D registers, and a variation of vst to shift back out to memory, interleaved, discarding the extras, or saving them for later use as you proposed in your last paragraph. This is 'effectively' zero cost to the entire algorithm, as it does it 128 bits at a time.
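
As a rough illustration of the idea (written with intrinsics rather than the raw asm, and for a single-channel image), a 2x horizontal decimation can be done by deinterleaving even/odd pixels and keeping only one lane:

#include <arm_neon.h>
#include <stdint.h>

void halve_row(const uint8_t* src, uint8_t* dst, int srcWidth) {
    for (int x = 0; x + 32 <= srcWidth; x += 32) {
        uint8x16x2_t px = vld2q_u8(src + x);  // val[0] = even pixels, val[1] = odd
        vst1q_u8(dst + x / 2, px.val[0]);     // keep only the even pixels
    }
}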

Andrew Schreiber
14-11-2016, 12:05
This is definitely true. The resolution you need is a function of range, target geometry, angle of incidence, camera field of view, the frequency and type of non-target objects that pass the threshold, and required precision. 640x480 has been overkill for all vision challenges to date.

640x480x30 fps is a convenient benchmark, though, as it is achievable with largely unoptimized code by many forms of coprocessors.

Honestly I used 640x480 as a kind of 'boast' as to how much potential this can hold (that, and it's also the default resolution of a Kinect camera @ 30fps). You can actually downscale this image entirely using the VFP, by using vld1.64 to load into the D registers, and a variation of vst to shift back out to memory, interleaved, discarding the extras, or saving them for later use as you proposed in your last paragraph. This is 'effectively' zero cost to the entire algorithm, as it does it 128 bits at a time.

Understood, just wanted to make sure other folks reading the thread didn't get the idea that 640x480 was required.

Jaci
14-11-2016, 12:14
This is very cool, though I'm not (yet) convinced that you can get 30fps @ 640x480 with an RGB USB camera using a "conventional" FRC vision algorithm. But now you have me thinking...

Why I think we're still a ways off from RGB webcam-based 30fps @ 640x480: Your Kinect is doing several of the most expensive image processing steps for you in hardware.

With a USB webcam, you need to:

1. Possibly decode the image into a pixel array (many webcams encode their images in formats that aren't friendly to processing).


This is certainly true. As I mentioned, I don't really have a benchmark to gather data from a conventional USB webcam, so I can't really provide input to this part.


2. Convert the pixel array into a color space that is favorable for background-lighting-agnostic thresholding (HSV/HSL). This is done once per pixel per channel (3*640*480), and each op is the evaluation of a floating point (or fixed point) linear function, and usually also involves evaluating a decision tree for numerical reasons.

3. Do inRange thresholding on each channel separately (3x as many operations as in your example) and then AND together the outputs into a binary image.


These can actually both be turned into one set of instructions if your use case is target-finding.
Most robots use some sort of light source to find the retro-reflective target, most typically a green LED ring (for our Kinect, it's the IR projector). If your image is already in RGB form, you can just isolate the green channel (which you can do extremely simply with SIMD via vld3.8) and proceed onward. Keeping the R and B channels in D registers but never writing them out to RAM saves a lot of time here, and then your thresholding function only has one set of data to work on.

Something similar can be done with HSV/HSL, however this will require a bit more math on the assembly side of things to isolate the Lightness for a specific hue or saturation. Nonetheless, it's still faster than calculating for all 3 channels.
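
A rough intrinsics sketch of the green-channel version (assuming RGB byte order in memory and an illustrative threshold of 200):

#include <arm_neon.h>
#include <stdint.h>

void green_threshold(const uint8_t* rgb, uint8_t* binary, int pixels) {
    uint8x16_t thresh = vdupq_n_u8(200);
    for (int i = 0; i + 16 <= pixels; i += 16) {
        uint8x16x3_t px = vld3q_u8(rgb + 3 * i);        // deinterleave: [0]=R, [1]=G, [2]=B
        uint8x16_t mask = vcgtq_u8(px.val[1], thresh);  // 0xFF lanes where G > threshold
        vst1q_u8(binary + i, mask);                     // R and B never get written back to RAM
    }
}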


However...

Great idea hacking the ASM to use SIMD for inRange. I wonder if you could also write an ASM function to do color space conversion, thresholding, and ANDing in a single function that only touches registers (may require fixed point arithmetic; I'm not sure what the RoboRIO register set looks like). This would add several more ops to your program, and have 3x as many memory reads, but would have the same number of memory writes.


I believe it would be possible to do HS{L,V}/RGB color space conversion with SIMD if you're willing to take on the challenge. I may give this a try when I have some time to burn.
Putting them all into one set of instructions dealing only with the NEON registers is entirely possible; in fact, the thresholding and ANDing are already grouped together, operating on the Q registers (https://github.com/JacisNonsense/neon_vision/blob/master/src/asm/mem.S#L19-L20). I can confirm that the ARM NEON instruction set does include fixed-point arithmetic, although it requires the vcvt instruction to convert to floating point first, which is also handled by the NEON unit.

NotInControl
14-11-2016, 14:19
Interesting work.

We took a look at using the RoboRIO for vision processing back in 2014 under the alpha test of the new hardware. We tried IP and web cams using the same vision detection algorithm we used to find hot goals on our 2014 robot.

This was an OpenCV implementation in C++, compiled with NEON enabled, running on the RoboRIO.

Take a look at our data at the link below, under Vision, at the IP camera test.

We would need to dust it off, but with our complete end-to-end solution I think we could only get 20fps at 320x240 on the RIO.

http://controls.team2168.org/


Over the past few years we have moved to a decoupled, off-board vision system for various reasons we deemed beneficial, but I am glad to see progress in this area.

tcjinaz
14-11-2016, 22:39
No fair heading down so close to bare metal
:)

Gdeaver
15-11-2016, 07:28
No one knows what vision processing will be needed in the future. For this year, we found that feeding the results of processing directly into a control loop did not work well. We take a picture and calculate the degrees of offset from the target, then use this offset and the IMU to rotate the robot. Take another frame and check that we are on target; if not, rotate and check again. If on target, shoot. We did not need a high frame rate and it worked very well. I'll note that our biggest problem was not the vision but the control loop to rotate the bot. There was a thread on this earlier. We hosted the MAR Vision day this past weekend, and it has become very apparent that most teams are struggling with vision. While it's nice to see work like this, I would like to see more of an effort to bring vision to the masses. GRIP helped a lot this year.

KJaget
16-11-2016, 08:25
Given that HSV requires a bunch of conditional code, it's going to be tough to vectorize. You could give our approach from last year a try:

vector<Mat> splitImage;
Mat bluePlusRed;

// split BGR input into per-channel planes: [0]=blue, [1]=green, [2]=red
split(imageIn, splitImage);
// weighted blue + red (the scale factors are tuning constants, in percent)
addWeighted(splitImage[0], _blue_scale / 100.0,
            splitImage[2], _red_scale / 100.0, 0.0,
            bluePlusRed);
// saturating green - (weighted blue + red): bright only where "pure green"
subtract(splitImage[1], bluePlusRed, imageOut);

This converts an RGB image into a grayscale one where higher grayscale values indicate more pure green. Pixels with lots of green and nothing else end up with high values. Pixels with no green end up with small values. Pixels with high green but also lots of blue and red also end up with low values. That last part filters out, say, white or yellow pixels that have high G values but also high values in the other channels.

After that we did a threshold on the newly created single-channel image. We used Otsu thresholding to handle different lighting conditions but you might get away with a fixed threshold as in your previous code.

To make this fast you'd probably want to invert the red_scale and blue_scale multipliers so you could do an integer divide rather than convert to float and back - but you'd have to see which is quicker. Should be able to vdup them into all the uint8 lanes in a q register at the start of the loop and just reuse them. And be sure to do this in saturating math because overflow/underflow would ruin the result.
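
An untested sketch of what that might look like with intrinsics, using fixed 64/256 weights for illustration and the split channel planes from the code above:

#include <arm_neon.h>
#include <stdint.h>

void green_minus_br(const uint8_t* b, const uint8_t* g, const uint8_t* r,
                    uint8_t* out, int pixels) {
    uint8x8_t wb = vdup_n_u8(64), wr = vdup_n_u8(64);    // weights, out of 256
    for (int i = 0; i + 8 <= pixels; i += 8) {
        uint16x8_t acc = vmull_u8(vld1_u8(b + i), wb);   // B*wb, widened to 16 bits
        acc = vmlal_u8(acc, vld1_u8(r + i), wr);         // + R*wr
        uint8x8_t sub = vshrn_n_u16(acc, 8);             // /256, narrow back to 8 bits
        vst1_u8(out + i, vqsub_u8(vld1_u8(g + i), sub)); // saturating G - weighted(B+R)
    }
}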

Oh, and I had some luck getting the compiler to vectorize your C code when it was rewritten to match the ASM: that is, set a mask to either 0 or 0xFF, then AND the mask with the source. Be sure to mark the function args as __restrict__ to get this to work. The generated code was using D and Q regs but seemed a bit sketchy otherwise; still, it might be fast enough that you could avoid coding in ASM.
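
Roughly this shape of C, give or take:

#include <stdint.h>

void threshold_c(const uint8_t* __restrict__ src,
                 uint8_t* __restrict__ dst,
                 int len, uint8_t lo) {
    for (int i = 0; i < len; i++) {
        uint8_t mask = (uint8_t)-(src[i] > lo);  // 0x00 or 0xFF, no branch
        dst[i] = mask & src[i];                  // zero out pixels at or below the threshold
    }
}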

Greg McKaskle
16-11-2016, 10:10
If you web-search for Zynq vision systems or Zynq vision processing, you will see that a number of companies and integrators use this for professional systems. Zynq from Xilinx is the processor of the RoboRIO.

So I think my take on this is ...

You likely don't need 640x480. It is almost as if that were taken into account when the vision targets were designed.

You likely don't need 30 fps. Closing the loop with a slow-noisy sensor is far more challenging than a fast and less-noisy one. Some avoid challenges, but others double-down.

The latency of the image capture and processing is important to measure for any form of movement (robot or target). Knowing the latency is often good enough, and minimizing it is of course good. If there isn't much movement, it is far less important.

The vision challenge has many solutions. Jaci has shown, and I think the search results also show, that many people are successful using Zynq for vision. But this does take careful measurement and consideration of image capture and processing details.

By the way, folks typically go for color and color processing. This is easy to understand and teach, but it is worth pointing out that most industrial vision processing is done with monochrome captures.

Greg McKaskle

adciv
16-11-2016, 10:28
By the way, folks typically go for color and color processing. This is easy to understand and teach, but it is worth pointing out that most industrial vision processing is done with monochrome captures.
Can you explain the reason for this? Are the systems designed to be used with monochrome or is it just worked out until that's all that's necessary?

Jared Russell
16-11-2016, 12:07
Can you explain the reason for this? Are the systems designed to be used with monochrome or is it just worked out until that's all that's necessary?

Most industrial vision system applications use a pretty controlled background, so intensity-based detection and segmentation works well and has few false positives. Pointing a camera towards the ceiling in an arbitrary high school gym or sports arena is not as controlled, so you often need to use other cues to differentiate the target from the background. These cues could include color, shape, size, etc.

Jared Russell
16-11-2016, 12:13
Given that HSV requires a bunch of conditional code it's going to be tough to vectorize.

Full HSV requires evaluating conditions to compute hue, but if you use (e.g.) a green LED ring, you can pretty well assume that if green is not the most abundant component for any given pixel then hue is irrelevant; the pixel is likely not part of the target.

Jaci
16-11-2016, 12:22
Given that HSV requires a bunch of conditional code it's going to be tough to vectorize.

I managed to implement an RGB->HSV colour space conversion by using the VBIT (bitwise insert if true) (http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0489c/CJAJIIGG.html) instruction in combination with VCGT (compare greater than). VBIT is a way to vectorize branching operations to a certain degree.

I'll do a write up on this at some time, but I've got a lot on my plate over the next 2 weeks and I have to clean up the code a bit.
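
In the meantime, the general shape of the trick looks like this (an illustrative intrinsics sketch, not the actual conversion code):

#include <arm_neon.h>

// Per-lane "if": VCGT builds a 0x00/0xFF mask, and the VBSL/VBIT family uses
// it to pick between two results. Here: pick G where G > R, otherwise R.
// The same pattern scales up to the hue-sector selection in RGB->HSV.
uint8x16_t select_larger(uint8x16_t g, uint8x16_t r) {
    uint8x16_t g_gt_r = vcgtq_u8(g, r);  // compare greater than (VCGT)
    return vbslq_u8(g_gt_r, g, r);       // bitwise select (VBSL/VBIT)
}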

Jaci
16-11-2016, 12:32
You likely don't need 640x480. It is almost as if that were taken into account when the vision targets were designed.

You likely don't need 30 fps. Closing the loop with a slow-noisy sensor is far more challenging than a fast and less-noisy one. Some avoid challenges, but others double-down.

I agree with the not-needed 640x480 resolution, however I used it as a test case to provide more of a worst-case scenario (i.e. "hey look at what I can do"). I'll be writing another post some time in the future testing more 'applicable' vision target sizes as well as other things discussed in this thread.

However, I disagree about not needing 30fps. Vision targeting in FRC is very hit-or-miss. While a high update rate might not be needed for alignment operations, I find that sending a 30fps ("""""natural framerate""""") outline of what targets have been found back to the driver station is quite useful. For example, this year I sent back the bounding boxes of the contours our vision system found to the driver station. This had the advantage that the driver had some kind of feedback about just how accurately we were lined up (and could adjust if necessary), and it took next to no bandwidth, as we were only sending back a very small amount of data 30 times a second (per contour). This was insanely useful, and you can see it if you look at our matches (we implemented it between the Aus Regional and Champs).

Jaci
16-11-2016, 12:33
No fair heading down so close to bare metal
:)

It's only unfair if I say I've done it and then leave the whole thing closed source ;)

euhlmann
16-11-2016, 17:06
It's only unfair if I say I've done it and then leave the whole thing closed source ;)

Then you'd just need to make as many things private as you possibly could (except for a single "private" undocumented function that you let WPILib use), document a bunch of error codes but return undocumented ones on errors, and turn your whole project into proprietaryception and you have NIVision :rolleyes:

Greg McKaskle
18-11-2016, 08:28
"Then you'd just need to make as many things private as you possibly could (except for a single "private" undocumented function that you let WPILib use), document a bunch of error codes but return undocumented ones on errors, and turn your whole project into proprietaryception and you have NIVision"

Good job with the hyperbole! At least I hope that isn't how you think things intentionally worked out.

WPILib did a poor job of wrapping NIVision (work NOT done by NI, by the way). The history is that a few folks tried to make a dumbed-down version for the first year, and it was a dud. Then some students hacked at a small class library of wrappers. But the hack showed through.

That doesn't mean NIVision, the real product, is undocumented or trying to be sneaky. NI publishes three language wrappers for NIVision (.NET, C, and LV). The documentation for NIVision is located here -- C:\Program Files (x86)\National Instruments\Vision\Documentation. And one level up is lots of samples, help files, utilities, etc. If the same people did the wrappers on top of OpenCV, it would have been just as smelly. Luckily, good people are involved in doing this newer version of vision for WPILib. But I see no reason to make NIVision into the bad guy here.

If you choose to ignore the WPILib vision stuff and code straight to NIVision libraries from NI, I think you'll find that it is a much better experience. That is what LV teams do, by the way. LV-WPILib has wrappers for the camera, but none for image processing. They just use NIVision directly.

If my time machine batteries were charged up, I guess it would be worth trying to fix the time-line. But I'm still worried about the kids, Marty.

Greg McKaskle

Greg McKaskle
18-11-2016, 09:22
Can you explain the reason for this? Are the systems designed to be used with monochrome or is it just worked out until that's all that's necessary?

Both.

If you can set the camera where it will simplify your task, and put the targets where it will simplify your task, you can simplify the system, lower cost, increase effectiveness, increase throughput, etc. The FRC robot field is not nearly as controllable or predictable, but it is beneficial to spend some time thinking about what you can control.

Also, monochrome cameras can have about 3x frame rate at the same resolution, or higher resolution at the same frame rate. They can have higher sensitivity, allowing faster exposures. Monochrome doesn't have to have a broad spectrum of lighting or capture. Lasers are already monochrome. Filters on your lens or light source make it narrower. Lenses don't have to worry about different refraction for different wavelengths.

The first step most teams' code performs is an HSL threshold -- turning an RGB image into a binary/monochrome one.

So, I'm not saying monochrome is better, but it is different, and powerful, and common. My point is that color cameras aren't a requirement to make a working solution and there are benefits and new challenges in each approach.

As for frame rate:
30fps is based on a human perception threshold. Industrial cameras, and SLRs for that matter, operate at many different exposures and rates. If the 30 fps is to align with a driver feedback mechanism, then it is a good choice. If it is to align with a control feedback mechanism, slower but more accurate may be better, or far faster may be needed. The task should define the requirements; then you do your best to achieve them with the tools you have. It is exciting to see folks reevaluate and sharpen the tools.

Greg McKaskle

tcjinaz
20-11-2016, 22:06
If you web-search for Zynq vision systems or Zynq vision processing, you will see that a number of companies and integrators use this for professional systems. Zynq from Xilinx is the processor of the RoboRIO.

<snip>

Greg McKaskle

So does the FPGA in the FRC RoboRIO have some vision processing in hardware? Jaci seems to be operating (to good effect) at the ARM assembly language level. Is there something of use in the programmable part of the Zynq? If not, it could be that project I'm looking for to allow teams some access to the leftover gates in the FPGA, if there are any.

Tim

Greg McKaskle
21-11-2016, 09:55
The Zynq architecture has a hard ARM CPU and an FPGA on a single chip. The ARM is completely open because the safety code for FRC has been pushed into the hard-realtime FPGA. It is not possible with current tools to easily allow a partial FPGA update. So the FPGA is static for FRC during the regular season. If you want to use tools to change it in offseason, go for it.

The FRC FPGA doesn't currently have any vision processing code in it. It wasn't a priority compared to accumulators, PWM generators, I2C and SPI and other bus implementations. If you get specific enough about how you want the images processed, I suspect that there are some gates to devote. But many times, the advantage of using an FPGA is to make a highly parallel, highly pipelined implementation, and that can take many many gates. And if the algorithm isn't exactly what you need, you are back to the CPU.

So, with today's tools, CPU, GPU, and FPGA are all viable ways to do image processing. All have advantages, and all are challenging. There are many ways to solve the FRC image processing challenges, and none of them are bullet-proof. That is part of what makes it a good fit.

Greg McKaskle

euhlmann
22-11-2016, 07:32
I was digging through the source and found this
https://github.com/opencv/opencv/blob/master/modules/core/src/arithm.cpp#L1536-L1690

So now I'm curious: does the performance boost come from not using CV_NEON in your OpenCV library build, or because NEON intrinsics are significantly slower than using plain assembly?

KJaget
22-11-2016, 13:29
I was digging through the source and found this
https://github.com/opencv/opencv/blob/master/modules/core/src/arithm.cpp#L1536-L1690

So now I'm curious: does the performance boost come from not using CV_NEON in your OpenCV library build, or because NEON intrinsics are significantly slower than using plain assembly?

Ideally intrinsics will get pretty close to ASM. But older versions of GCC had some significant problems with them. Newer versions have improved but can still be caught out by weird code. The optimization efforts were somewhere in the 4.8->4.9 range, so if the code is built with older versions that could be the issue.

An objdump would tell for sure, though, if anyone's up to a challenge.
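
Something along these lines would settle it (using the ARM objdump from whatever cross toolchain built the library):

objdump -d libopencv_core.so > core.asm
# then search core.asm around the inRange symbols for q0-q15 register usage
# to see whether the NEON path was actually compiled in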