JeVois with GRIP

A little background: my team wants to use vision processing to detect the reflective tape on the rockets and the white gaffer tape on the floor. We used vision for the first time last year with a basic USB webcam, running code generated from GRIP to detect the Power Up cubes. Since the roboRIO can't handle vision processing and robot code at the same time, we used a Jetson as a separate coprocessor.

This year, we want to use a smarter camera like the JeVois or the Pixy2 (though I've heard better things about the JeVois). I found that both of these cameras run their own object-recognition programs (written in C++ or Python). I'd like to stick with GRIP and use Java. How can I get the advantages of a more integrated camera like the JeVois and still use GRIP? Should I be using both?

Any reference code would be highly appreciated! Thanks.

If I'm not mistaken, there is a white paper that explains the process of using GRIP on the JeVois.
I have never used GRIP with the JeVois myself, so I'm not an expert on the subject.

What did you use instead?

You can take the GRIP pipeline exported as Java and re-implement the OpenCV calls in Python directly on the JeVois.

My team is also using JeVois this season.
Here are some of the resources I’ve been using:

I wrote one of the two papers linked by DeveloperBlue. The other paper, the Design Simple guide, is the one I was trying to think of.

Basically, we code in Python using OpenCV and edit it in Notepad++.

JeVois Inventor is quickly becoming my favorite IDE.

My team and I had trouble using GRIP's autogenerated code with the JeVois. I'm not sure if other teams ran into the same issue, but if so, I'd recommend writing your own code for the JeVois. I used EagleForce 2073's vision code as a basis. The documents above were also helpful.

Thanks for all the replies, but I'm a little confused. What is the difference between using a normal webcam with GRIP and a JeVois with GRIP?

To be honest, I'm not entirely sure, but here's my understanding. Both need to run code that helps them see the vision targets, and that's where GRIP comes in. However, the JeVois has the benefit of its own onboard processor, whereas a normal webcam would need a Raspberry Pi or some other coprocessor to process the image. With the JeVois you would then send your data over serial, and with a webcam setup you would send it via NetworkTables.

Essentially, GRIP is a code generation tool: it creates vision processing pipelines. That code needs to run on some sort of processor. You can typically do that on a Raspberry Pi, or you can do it on the JeVois. If you choose a Raspberry Pi, you attach a webcam to it, and the vision processing pipeline generates target location information. The same is true of the JeVois, except that it does all the processing internally, with no need for an external processor like a Raspberry Pi.
Basically, the JeVois is a webcam and a processor combined in one tiny package.

So in that case, the JeVois can be connected directly to the roboRIO without causing the robot to freeze? (As I said above, we had the camera connected to a Jetson to handle the vision processing.)

Yes, via one of the two USB ports. The JeVois gets power over USB and sends both video and tracking data back over the same cable. It is a simple and clean setup.
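On the receiving end, the robot code just has to read and parse those serial lines. Sketching it in Python for brevity (on the roboRIO you'd write the same logic in Java against a SerialPort), and assuming the camera sends lines like "T <x> <y>", a made-up example format:

```python
def parse_target(line):
    """Parse one serial line from the camera into an (x, y) tuple.

    Assumes a hypothetical 'T <x> <y>' message format; returns None for
    anything that doesn't match, so noise on the line is ignored rather
    than crashing the robot loop.
    """
    parts = line.strip().split()
    if len(parts) != 3 or parts[0] != "T":
        return None
    try:
        return int(parts[1]), int(parts[2])
    except ValueError:
        return None
```

Rejecting malformed lines instead of raising is important here, since serial streams can drop or garble characters mid-message.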