Quote:
|
Greg, when I thought about this I assumed that the Labview dashboard live-stream is already an mpeg stream and that Labview users can pull frames from that and use the Labview example vision code to process it. Is that the case? Is there a reason not to do it that way?
|
Last year, the default dashboard changed from reading individual JPEGs to using an MJPEG stream. As you mention, it has always been possible to branch the image wire and connect it to the vision VIs for processing. Getting the results back to the robot would involve UDP or TCP on the open ports, or possibly the beta-quality NetworkTables.
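As a rough illustration of the UDP option, here is a minimal sketch of the laptop side sending a result back to the robot. Since a text snippet can't show LabVIEW wiring, it's written in Java; the address, port number, and payload format are all placeholders I made up, not anything from the official examples. Substitute your robot's actual address and a port that is open on the field.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class VisionSender {
        public static void main(String[] args) throws Exception {
            // Placeholder robot address and port; use your team's
            // robot IP and an open UDP port per the current rules.
            InetAddress robot = InetAddress.getByName("10.0.0.2");
            int port = 1130; // assumption, not an official assignment

            DatagramSocket socket = new DatagramSocket();

            // Example payload from the vision loop; the format is
            // whatever you choose to parse on the robot side.
            String result = "angle=3.2,distance=8.5";
            byte[] data = result.getBytes("US-ASCII");

            socket.send(new DatagramPacket(data, data.length, robot, port));
            socket.close();
        }
    }

The robot side is the mirror image: open a DatagramSocket on the same port, receive the packet, and parse the string in the control loop.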
The example code for vision, and the tutorial that went with it, supported both the laptop and the cRIO. It didn't integrate into the dashboard, but you pretty much just needed to copy and paste the loop and connect it to the MJPEG wire.
So yeah, no reason not to. If the processing is done only when needed, or on low-resolution images, the cRIO should have plenty of juice to process the images. But the added power of the laptop makes it far easier to get a working solution with less optimization. For reference, the cRIO is about 800 MIPS. Image processing is almost entirely integer math, so MIPS is a pretty good metric to use. The Atom in the Classmates is around 3,000 MIPS.
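Back-of-envelope, assuming a 320x240 frame (76,800 pixels) and a guess of about 50 integer operations per pixel: that's roughly 3.8 million operations per frame, so the cRIO's 800 MIPS gives on the order of 200 frames per second of raw budget, and the Atom closer to 800. Real loops with image copies, thresholding, and particle analysis will eat a large chunk of that, but it shows why low-resolution processing is well within reach on the cRIO.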
Greg McKaskle