What types of Vision Processing are teams using for 2019

Hey all. I was wondering what kinds of vision processing teams are using to take advantage of the reflective targets this year. I've heard of the Limelight, but it's too expensive for us this year. We're looking into RoboRealm, which seems really cool. Can we use any simple camera for this? We still haven't finished researching, but can anybody comment on this, or share what vision processing they're using this year?

*Edit: I now realize RoboRealm is more expensive.

Unless I am mistaken, RoboRealm looks like software that is more expensive than a Limelight system.
Is there another RoboRealm?
I only did a quick Google search, so maybe you found something else?

You can use a Raspberry Pi 3, which will process roughly 10-15 frames per second. There are many other boards/co-processors you can use, like the NVIDIA Jetson.
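
For example, here's a minimal sketch of what a Pi coprocessor pipeline might look like, assuming OpenCV 4, a USB camera, and Python. The HSV bounds are placeholders you'd tune for your own camera and LED ring:

```python
import cv2

LOWER = (60, 100, 100)   # hypothetical lower HSV bound for green tape
UPPER = (90, 255, 255)   # hypothetical upper HSV bound

cap = cv2.VideoCapture(0)            # first USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # OpenCV 4 return signature; OpenCV 3 returns three values here
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    print(len(contours), "candidate targets")   # send these to the robot instead
```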

Oh shoot, I guess I just looked at it briefly and didn't find that. How about GRIP? Can anybody tell me about that?

I've been toying around with GRIP. It seems quite easy to find the targets; the trick is to pair the correct targets together. I'll be looking into how to do this in the next few days.
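
My rough plan is something like this (untested sketch; the 2019 strips tilt toward each other, so I'm classifying each strip by its lean and scanning left to right, and the angle threshold and sign convention are assumptions I still need to verify against OpenCV):

```python
def pair_targets(rects):
    """Pair tilted tape strips into targets.

    rects: list of cv2.minAreaRect() results for the tape contours.
    """
    strips = []
    for (cx, cy), (w, h), angle in rects:
        side = "L" if angle < -45 else "R"   # which way the strip leans (assumed convention)
        strips.append((cx, side))
    strips.sort()                            # left to right across the image
    pairs = []
    i = 0
    while i < len(strips) - 1:
        if strips[i][1] == "L" and strips[i + 1][1] == "R":
            pairs.append((strips[i][0] + strips[i + 1][0]) / 2)  # pair center x
            i += 2
        else:
            i += 1
    return pairs
```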


Is there anything special I should know about it? Special cameras? Anything?

Try doing a blur on your images. I think it slows down the pipeline a little bit, but it makes it easier to find the targets, IMO.
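
In GRIP that's just a Blur step before the HSV threshold; in OpenCV terms it's one extra call, something like:

```python
import cv2

frame = cv2.imread("sample.png")   # stand-in for a camera frame
# Smooth out sensor noise before thresholding; the 5x5 kernel is a
# common starting point, not a tuned value.
blurred = cv2.GaussianBlur(frame, (5, 5), 0)
```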

Would we be using a USB cam for this, or can we go with an IP cam? Do you have any sample code or anything for this?

You can use a USB cam or an IP cam, whatever you like. We typically take the camera stream from the robot, throw it into GRIP, and post the data to NetworkTables from the computer. You can build a pipeline in GRIP and process everything on the driver station laptop, or generate code and throw it into your robot code. Up to you.
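
If you want a starting point for the laptop side, here's a rough sketch assuming pynetworktables and a GRIP-generated Python pipeline. GripPipeline, the server address, the table name, and the key name are all placeholders for your own setup:

```python
import cv2
from networktables import NetworkTables
# from grip import GripPipeline   # the class GRIP generates for you

NetworkTables.initialize(server="10.TE.AM.2")   # TE.AM = your team number
table = NetworkTables.getTable("vision")        # table name is up to you

cap = cv2.VideoCapture(0)
# pipeline = GripPipeline()

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    # pipeline.process(frame)
    # ...compute the target's horizontal offset from the pipeline output...
    offset_px = 0.0                              # placeholder value
    table.putNumber("target_offset_px", offset_px)
```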

I really like the looks of the JeVois and intend to use it this year. I'm expecting the benefits to be ease of integration and packaging, not to mention the low cost. The downside I expect will be lower frame rates than a Jetson, and probably a less-than-perfect camera.

If anyone knows of any red flags we should watch out for with that device, I would appreciate a heads-up before we go too far down the path.
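
From what I've read so far, the integration side is mostly reading lines off the JeVois's serial-over-USB port, so the receiving code can be very simple. A sketch assuming pyserial; the port name, baud rate, and message format depend on your system and the module you run:

```python
import serial   # pyserial

# Port name and baud rate are assumptions; check what the JeVois
# enumerates as on your machine.
port = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

while True:
    line = port.readline().decode("ascii", errors="ignore").strip()
    if line:
        # The message format depends on the JeVois module you run;
        # parse the target coordinates out of `line` here.
        print(line)
```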


Team 1777 will be using a Jetson TX1 with custom Python code for vision processing. We aren’t sure if we’ll be using it for tracking the vision targets or for tracking the alignment lines (or both), but they both seem to be useful depending on your bot’s design.
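
For anyone weighing the same choice, line tracking looks like it can stay pretty simple: threshold for the bright tape and fit a line to it. A rough sketch of what we're starting from (the threshold value is a placeholder):

```python
import cv2

def line_heading(frame):
    """Estimate the floor line's direction vector (untested sketch)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)  # placeholder cutoff
    pts = cv2.findNonZero(mask)
    if pts is None:
        return None
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).flatten()
    return vx, vy   # unit direction of the tape line in image coordinates
```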
