Our 2016 Hack-y Vision Processing

So here is a story:
Last year (our rookie year), our team's programmers were trying to program an autonomous routine (drive under the low bar, turn right 45 degrees, use vision to line up with the goal, drive forward, score in the low goal). We were using RoboRealm but couldn't get the camera image into it. We played with ports but couldn't figure it out. Then one day it was working. How did they do it?

They had the driver station open on the left side of the driver station screen and RoboRealm on the right. They were using RoboRealm's screen capture to grab the image directly from the computer screen. It was a little laggy and touchy to get working, but it worked well enough at our regional.

FYI: this year we are still using RoboRealm, but we're getting the image properly over HTTP instead of scraping the screen.
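
For anyone who wants to try the same idea outside of RoboRealm, here's a minimal Python/OpenCV sketch of pulling frames from a camera's HTTP MJPEG stream. This is just an illustration of the technique, not our team's actual setup, and the stream URL is a hypothetical placeholder; substitute your camera's real address.

```python
# Minimal sketch: reading a camera's MJPEG stream over HTTP with OpenCV.
# The URL below is a hypothetical placeholder, not a real camera address.
import cv2

STREAM_URL = "http://10.0.0.11/mjpg/video.mjpg"  # hypothetical Axis-style stream URL

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError("Could not open stream: " + STREAM_URL)

while True:
    ok, frame = cap.read()       # pull the next frame off the HTTP stream
    if not ok:
        break                    # stream dropped; stop reading
    cv2.imshow("camera", frame)  # show the frame (vision processing would go here)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break                    # press 'q' to quit

cap.release()
cv2.destroyAllWindows()
```

Reading the stream directly like this avoids the lag and fragility of the screen-capture hack, since the frames never have to be rendered and re-captured off the monitor.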