Yes, it's blobs, but one robot's blob is another robot's object.
I'm looking at the off-board processing. The camera's processor does most of the heavy lifting and sends results down to the host for further processing, so the overall load on the host is lower, which frees up CPU cycles for other work. Does that open up a chance to use two cameras to get more exact range information?
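If each camera just reports a blob's x-center in pixels, two of them mounted side by side could in principle give range by simple stereo disparity. Here's a minimal sketch of that idea, assuming parallel cameras with a known baseline; the focal length and baseline numbers are placeholders, not from any particular camera's datasheet:

```python
# Rough stereo-range sketch: two cameras, each reporting a blob's
# x-center in pixels. Assumes the cameras are mounted parallel with
# a known baseline. FOCAL_LENGTH_PX and BASELINE_M are hypothetical
# placeholder values.

FOCAL_LENGTH_PX = 320.0   # focal length in pixels (placeholder)
BASELINE_M = 0.20         # distance between camera centers, meters (placeholder)

def range_from_blobs(left_x_px: float, right_x_px: float):
    """Estimate distance to a blob seen by both cameras.

    Standard pinhole-stereo relation: Z = f * B / disparity,
    where disparity is the difference in the blob's x position
    between the left and right images.
    """
    disparity = left_x_px - right_x_px
    if disparity <= 0:
        return None  # blob not matched, or geometry doesn't make sense
    return FOCAL_LENGTH_PX * BASELINE_M / disparity

if __name__ == "__main__":
    # Example: blob at x=210 in the left image, x=170 in the right image
    print(range_from_blobs(210.0, 170.0))  # ~1.6 m with these placeholder numbers
```

Accuracy would depend on how precisely the two cameras are aligned and on matching the same blob in both images, but the host-side math itself is cheap.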
It also doesn't demand knowing OpenCV, so teams with less hard-core programming resources may find it easier to use.
Anyway, it'll be a while before I get one, but I'm excited about the possibilities!