In the new WPILib documentation there is a section on doing vision processing onboard the robot. That section is supposed to be filled in within the next couple of days.
I believe that once this is documented, using OpenCV to analyze camera images or Kinect depth images will be much better supported, since you will be taking the intensive processing off of the cRIO.
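For anyone wondering what I mean by OpenCV analysis of the camera feed, this is roughly the kind of thing (just a sketch on my part, not anything from the upcoming docs; the camera index and HSV threshold values are placeholders for whatever your setup actually uses):

```python
# Rough sketch of off-cRIO OpenCV processing of a camera image.
# Camera source and HSV thresholds are placeholders -- tune for your camera and target.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # placeholder: default camera; a network camera URL works here too

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Threshold in HSV to pick out a target color (values are guesses)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([90, 255, 255]))

    # Find contours of candidate targets and report their bounding boxes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 200:  # ignore tiny blobs
            print("target candidate at", x, y, w, h)
```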
A note about the Kinect: you will still need to handle the step of getting the depth image yourself; however, once you have extracted it, the OpenCV algorithms should work roughly the same on a Kinect depth image as they do on a camera image.
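To illustrate that point: once the depth frame is in memory as a plain array (however you grab it, e.g. with libfreenect), OpenCV treats it like any other image. A sketch, assuming a 640x480 16-bit depth frame in millimeters and a made-up distance band:

```python
# Sketch: the Kinect depth frame, once extracted, is just another image to OpenCV.
# depth_mm is assumed to be a 480x640 uint16 array of millimeter distances;
# the extraction step itself is up to you.
import cv2
import numpy as np

depth_mm = np.zeros((480, 640), dtype=np.uint16)  # stand-in for the real depth frame

# Keep only pixels in a made-up distance band (1.0 m to 2.5 m), then blob-detect as usual
mask = ((depth_mm > 1000) & (depth_mm < 2500)).astype(np.uint8) * 255
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print("object in range at", x, y, w, h)
```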
I only hope they provide an example of the reverse communication too (cRIO to laptop), so I can feed robot state back into the vision algorithm, given how I plan on using the on-field Kinect.
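In the meantime, my own guess at how the cRIO-to-laptop direction could work is just a small UDP packet carrying whatever state the vision code wants. This is purely my own sketch, not whatever WPILib ends up documenting; the port number and packet layout are made up, and the cRIO side would just send matching datagrams from the robot code:

```python
# Laptop-side sketch: listen for small status packets from the cRIO so the vision
# algorithm can be steered by robot state. Port and packet layout are invented here.
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 1130))  # arbitrary port for this sketch

while True:
    data, addr = sock.recvfrom(64)
    if len(data) >= 8:
        # e.g. a mode flag (int32) and a turret angle (float32), big-endian
        mode, turret_angle = struct.unpack(">if", data[:8])
        print("robot says mode=%d turret=%.1f deg" % (mode, turret_angle))
        # ...use these to pick which target to track, gate the Kinect processing, etc.
```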