JeVois = video sensor + quad-core CPU + USB video + serial port, all in a tiny, self-contained package (28 cc or 1.7 cubic inches). Insert a microSD card loaded with the provided open-source machine vision algorithms (including OpenCV 3.3 and many others), connect to your desktop, laptop, and/or Arduino, and give your projects the sense of sight immediately.
It’s basically $50 for a ready-to-go vision system similar to the CMUCam/Pixy.
Ordered! I’ll evaluate it before recommending it to 299 or 1072.
EDIT: Just got it in the mail. I’ll post some updates if/when I get it working. Amazon Same-day shipping is available with it for a total of $50 if you have Prime!
You will need to supply your own microSD card and mini-USB (not micro-USB) cable.
Just thought I would give a quick update:
After I ordered it, I dove right in with the basics of getting it set up on a Windows computer. I thought I needed Linux to program it, but it’s possible to do it on Windows if you’re OK with it taking slightly more time.
The steps I followed to get the basics working, using the JeVois Quickstart guide:
1. Find a 16 GB microSD card and a mini-USB cable.
2. Download the JeVois OS image.
3. Flash the image to the microSD card (I used Win32 Disk Imager).
4. Plug the microSD card into the JeVois and connect the mini-USB cable to the computer.
5. Use AMCap to view the JeVois video stream. AMCap lets you switch between camera modes easily.
6. Open the Arduino software and open the Serial Monitor to start the command interface with the JeVois. Make sure to set the “Newline” line ending instead of the default “None”; this confused me for a few minutes.
7. Send the “help” command to get a list of all commands and the current settings. You can control exposure, contrast, gain, brightness, and color balance, and set each to automatic or manual. The settings aren’t saved, so I have to resend them on every bootup.
All of the above took me 2 hours, of which half an hour was finding a microSD card and another half hour was discovering that the Arduino Serial Monitor had to be in “Newline” mode.
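Since those camera settings aren’t persisted across boots, the resend can be scripted instead of retyped in the Serial Monitor. A minimal pyserial sketch; the port name is a hypothetical (COM4 on Windows, typically /dev/ttyACM0 on Linux), and the exact `setcam` parameter names in the usage example should be checked against your own `help` listing:

```python
def jevois_cmd(cmd):
    # JeVois commands are newline-terminated ASCII -- this is exactly why
    # the Arduino Serial Monitor must be set to "Newline".
    return (cmd + "\n").encode("ascii")

def send_commands(port="COM4", commands=("help",)):
    # Hypothetical port name; adjust for your machine.
    import serial  # pyserial; imported here so jevois_cmd works without it
    with serial.Serial(port, 115200, timeout=1) as s:
        for cmd in commands:
            s.write(jevois_cmd(cmd))
            # Echo whatever the camera sends back
            for line in s.readlines():
                print(line.decode(errors="replace").rstrip())
```

For example, `send_commands(commands=("setcam autoexp 1", "setcam absexp 100"))` would switch to manual exposure and set it, assuming those command names match what `help` reports on your firmware.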
If you are using Windows or Mac to program the JeVois, you have to write Python code instead of C++. As far as I know you also can’t easily test that code locally on Windows; you just have to let the JeVois find the runtime errors. This can be frustrating if you are trying to get a lot of custom code working.
I viewed some of the resources on the JeVois website, but for FRC I wanted to take Python code generated by a GRIP pipeline and just deploy it to the JeVois. Specifically, I wanted to take an image, run an HSV filter on it, erode it, dilate it, find the contours, and filter the contours. With the proper bright green LED ring and low exposure settings, this pipeline recognizes retroreflective tape in 99% of situations.

I started from the PythonSandbox example and experimented with copy/paste solutions until I found one that worked every time: copy everything inside GRIP’s “process” function into the “process” function of the JeVois module, move the constants into the constructor, and put all the static methods below that. The filter ran with zero runtime errors, and now all I have to do is plug in the constants GRIP gives me. My LED rings are coming in the mail in a few days, and once I confirm that this works consistently I’ll write up a whitepaper.
All of that took me roughly 5 hours, most of which was tracking down syntax errors and figuring out how to use the JeVois output settings. There are still some things I’m not sure about, such as how to change the resolution and FPS and how to run code outside the “sandbox” directory; I know it’s possible, but I haven’t worked on it in a few days due to time crunch.
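On the resolution/FPS question: per the JeVois docs, the available modes are defined by lines in videomappings.cfg in the config directory on the microSD card, and the host selects one by picking the matching webcam mode (e.g. in AMCap). A sketch of what a line for the sandbox module might look like; treat the exact field layout as an assumption to verify against the file already on your card:

```
# <USB out fmt> <w> <h> <fps>  <camera fmt> <w> <h> <fps>  <Vendor> <Module>
YUYV 320 240 60.0 YUYV 320 240 60.0 JeVois PythonSandbox
```

So changing resolution or frame rate should mostly be a matter of editing or adding one of these lines and reselecting the mode on the host side.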
TL;DR: Jevois works great and I would 8/10 recommend it to teams, particularly those who have Linux programmers. I’ll be putting up a whitepaper soon enough about using this.
I received ours as well, and it took me a bit less time to get rolling. I REALLY don’t like the lack of echo on the serial line, but yeah… this thing is the perfect graduation present for teams who were using the Pixy and now want something with a bit more horsepower. Don’t expect a whitepaper from us about it unless we use it at an event or on a robot… but it’s good.
Also the fan sounds like a pneumatic device after being shot by an FRC mentor for experimentation.
Going to third the above comments. Mine came in Saturday and I haven’t had a TON of time to mess with it, but it seems to work well. If Asid is writing a whitepaper, good.
I can’t stress how small this thing is.
My big complaint with it is that the microSD card slot is in a garbage location (it’s too deep, and the case makes inserting the card much harder than it needs to be). Also, the lack of echo in the terminal makes life harder, but that’s a minor hiccup.
+1, the fan is much louder than I would like. I’m considering replacing it with an aluminum heatsink and a much slower fan, or even just a quieter fan of the same size.
It really is tiny; in terms of effective volume I would argue it’s even smaller than a Pixy. I had to push the microSD in with an SD card, but it’s one of the nice holders where you feel a “click”.
Would anyone who has this please do a benchmark of various OpenCV functions (preferably in both C++ and Python) at various resolutions? A tiny size is great and all, but I feel another important metric is the fps-to-dollar ratio for your run-of-the-mill OpenCV FRC code. That, and the headaches that will inevitably arise when configuring the device for the first time.
I was running my simple pipeline at around 100 fps. I don’t have the know-how to benchmark effectively, but I can ask a 299 or 1072 member to look into it for me. As long as it runs at an appreciable speed, I’m OK with what it does.
I’m a Gadget Geek. I do a LOT of vision processing work with the team. I can afford to add this to my toy list even if it doesn’t make it to our robot.
That said, if it performs on par or better than our RPi vision system, it will be a lock!
It’s better than the RPi from everything I’ve thrown at it so far. I’m not sure what is being optimized on the back end, but something seems to be…
For those who believe frame rates are king and are too cool/busy/whatever to look it up on the website:
Camera sensor: 1.3 MP, supporting:
SXGA (1280 x 1024) up to 15 fps (frames/second)
VGA (640 x 480) up to 30 fps
CIF (352 x 288) up to 60 fps
QVGA (320 x 240) up to 60 fps
QCIF (176 x 144) up to 120 fps
QQVGA (160 x 120) up to 60 fps
QQCIF (88 x 72) up to 120 fps
Those are the advertised frame rates the camera supports, not benchmark results, but if frame rate is all you care about, that’s what it supports.
I do have some concerns about lens distortion but some calibration routines can take care of that easily enough.
For the really cool amongst the crowd: the JeVois shows up as a USB webcam, so you could plug it into a Jetson, do a bunch of pre-filtering on the JeVois, and run something like a neural network on the Jetson over the sampled images.
Not quite 100% accurate: you haven’t had it send target info to a roboRIO via UDP.
I know I’m splitting hairs, but the Pi does have some capabilities the JeVois just doesn’t.
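Fair point: the JeVois only talks serial over USB, so getting UDP to the roboRIO means a host in between (a coprocessor like a Pi, or the driver station laptop) forwarding the serial lines. A sketch of such a bridge; the "TARGET x y" message format is just this example’s convention (it’s whatever your module’s sendSerial() emits), and the address uses the usual FRC 10.TE.AM.2 placeholder:

```python
import socket

def parse_target(line):
    # The message format is whatever your JeVois module sends over serial;
    # "TARGET <x> <y>" is an assumption made for this sketch.
    parts = line.split()
    if len(parts) == 3 and parts[0] == "TARGET":
        return float(parts[1]), float(parts[2])
    return None  # ignore chatter, prompts, and malformed lines

def forward_targets(serial_port, rio_addr):
    # serial_port: an open pyserial Serial on the JeVois; rio_addr: e.g.
    # ("10.TE.AM.2", 5800) with TE.AM filled in for your team number.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        line = serial_port.readline().decode(errors="replace")
        target = parse_target(line)
        if target is not None:
            sock.sendto(("%.2f %.2f" % target).encode(), rio_addr)
```

So the Pi isn’t strictly irreplaceable for UDP, but it does take an extra box and an extra hop that the Pi-only setup avoids.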
This is the usual comparison that happens in engineering all the time: do the advantages outweigh the disadvantages? In other words, is the trade-off worth it?