JeVois Smart Machine Vision

#21

On that we agree 100%!
Honestly, from what I’ve seen so far, it may find its way onto our Robot in 2018. Time will tell.

#22

What was in your pipeline? Capture image, blur, threshold (or inRange for multiple channeled images), some edge detector, and some filtering of edges?

#23

Capture image, HSV threshold, erode once, dilate once, find contours, filter the contours, and draw them on the original image. I like using a median blur but it eats up processing power like crazy. I’ll try it tomorrow.
I just got a 5W green led bulb today. Once I secure some retroreflective tape tomorrow I should be doing my tests.

#24

Nice. You have me sold. Only thing is that there is no room for memory leaks. Better make sure your programmers know how to use valgrind!

#25

Just a quick post with some updates and things to be mindful of.

If your host computer is Win 10, find another host computer. I tried for 2 hours to get images to show up with both OBS Studio and AMCap. The bottom line is, Win 10 doesn’t recognize the JeVois as a USB Web Cam properly. Once I moved to my Win 8 laptop, I had an image in just a couple minutes.

For a serial connection, I use PuTTY. The only caveat is that you have to set local echo to “force” to get it to work.

More updates tomorrow.

#26

Interesting, I’m using AMCap with very good results on Windows 10. The only problem I’m facing right now is the resolution that GRIP selects by default being the wrong one.

#27

Progress update: I have a very nice workflow with GRIP and the Jevois working that lets me tune on my laptop and export to Jevois fairly quickly. 62fps and it works pretty well! I’m using a 5W MR12 LED bulb right now to get green light. Once my LED rings arrive in a couple weeks I’ll try again with those too, but so far my goal of being able to write GRIP code and copy/paste into Jevois is satisfied. I’ll work on getting a tutorial up soon.

Picture of my GRIP pipeline:

And a video of the performance. I want to try to get it working on a robot soon, but that depends on getting access to a RoboRIO.

You’ll notice that there are some artefacts with the performance. This is caused by 1) light reflecting off the target onto my shiny wooden desk and 2) exposure being set low to ward off issues. Once I start testing with a robot I’ll know exactly how suitable it is for FRC, but for my LED setup it isn’t going to get much better than it is now. I’m working on setting up some post-processing to reject any contours that aren’t close to each other in the horizontal axis.
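One way to sketch that post-processing step, reading "close to each other in the horizontal axis" as bounding boxes whose x-centers nearly coincide (which is what stacked retroreflective stripes look like, while a desk reflection shows up alone). The function name and tolerance are mine:

```python
def reject_lonely_contours(rects, x_tol=20):
    """Keep only bounding rects (x, y, w, h) whose horizontal center is
    within x_tol pixels of at least one *other* rect's center.
    Reflections off the desk tend to show up alone, so they get dropped.
    x_tol is a guess -- tune it for your resolution."""
    centers = [x + w / 2.0 for (x, y, w, h) in rects]
    kept = []
    for i, rect in enumerate(rects):
        for j, cj in enumerate(centers):
            if i != j and abs(centers[i] - cj) <= x_tol:
                kept.append(rect)
                break
    return kept
```

Feeding it the bounding rects from `cv2.boundingRect` on each contour would drop any stray reflection that has no partner near the same column.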

Ultimately, the Jevois itself isn’t the source of errors, and it runs fast enough that I’d be comfortable using it on a robot for 2018.

#28

Just want a sanity check that I’m understanding everything right.

It seems that if you wanted to stream video with this camera, you would have to use both USB ports on the roboRIO, correct? Because the JeVois requires a Y cable spanning two ports when running on USB 2.0, which appears to be what the roboRIO’s USB ports are.

Otherwise, I’m assuming the idea is to send the output of the image processing over the serial connection to the roboRio?

#29

That is probably the easiest way to power it but all you need to do is supply more than the 500mA that the ports on the RIO can source. There are a decent number of places to pull that from.

Also, I’m suspecting that the win10 issue posted above might also be related to power but I’m not sure. I haven’t had issues with OSX or Win10.

#30

OK, so I had to revisit the Win 10 issue based on your success.
I have no clue what changed, but as soon as I connected it to my system this morning, it worked perfectly!
Cool, now I have two platforms to experiment on.

#31

I was planning on using something like this: https://i.imgur.com/fhQoDN2h.png

#32

And another option.
https://www.amazon.com/WINGONEER-DC-DC-module-mobile-charger/dp/B06XHJ2RJD/ref=sr_1_5?ie=UTF8&qid=1510269788&sr=8-5&keywords=buck+converter+usb

#33

+1

This will be fun to play with in the upcoming weeks.

Ah, the intricacies of python on windows. Ya gotta love it :wink:

#34

Without a doubt, 2073 WILL be sporting this camera on our 2018 robot, as long as the game lends itself to vision and/or vision tracking! (Actually 2 JeVois)

At this point in time, we intend to release our code some time near the end of December, 2017. In addition, we will put together a “Using JeVois in an FRC environment” whitepaper, unless someone else beats us to the punch.

One thing I find kinda funny is that with our current code and configuration, programming and tuning the tracking parameters takes a combination of:
1 JeVois (obviously)
1 Windows PC (Preferably NOT Win 10)
1 RPi (If we can transcode our GUI tuner to run in Windows, this might be dropped)
1 Arduino Pro Micro (Acts as a Bidirectional USB to TTL converter and stand in for a RoboRio)

#35

From our team discussions, our plan was to try to use one this year as well.

Our hookup design (purely theoretical at this point) was to directly use the RS232 port on the roboRIO (possibly with level shifter) to get target info from the JeVois, and properly configure it at init. We’d use the JeVois to act as a USB webcam (probably with debug annotations), and stream that from the roboRIO with the mjpeg streamer.

Any foreseen issues with this setup?
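The "get target info over serial" half of that plan is mostly a parsing problem. If I'm reading the JeVois docs right, its "Normal" 2D serial style emits one line per target, shaped like `N2 id x y w h`. Here's a small Python sketch of a parser (the roboRIO side would presumably be Java, but the logic ports directly; the function and field names are mine):

```python
def parse_jevois_line(line):
    """Parse one 'Normal'-style 2D message from the JeVois serial
    stream. Per my reading of the JeVois docs these look like:
        N2 id x y w h
    i.e. a target id plus center coordinates and size. Returns a dict,
    or None for anything that doesn't match (status replies, noise)."""
    parts = line.strip().split()
    if len(parts) != 6 or parts[0] != "N2":
        return None
    try:
        x, y, w, h = (float(p) for p in parts[2:])
    except ValueError:
        return None
    return {"id": parts[1], "x": x, "y": y, "w": w, "h": h}
```

Returning `None` for non-matching lines matters in practice, because the same serial channel also carries command echoes and `OK`/`ERR` replies that the robot loop should just skip.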

#36

You can avoid needing to use the level shifter for the RS232 connection if you connect it to the TTL pins (Tx, Rx, 3.3V, and DGND) in the MXP port.
Depending on what you do with the camera to fine tune it for tracking, you might consider not streaming video through the RoboRio and just send your targeting data to the RoboRio via USB.
Just my $.02.

#37

Awesome, thanks for the tips!

I’m still not certain about streaming. I personally would like to have the option (would make debugging easier), but that doesn’t mean we have to leave that enabled on the field.

From reading the docs, I’m wondering if it’s possible to adjust the exposure settings rapidly with serial commands? This way, we could potentially get dual usage out of the camera - one to stream to the drivers to help with navigation (use normal exposure), and the second to do vision processing. Basically, when the operator hits the “vision align request” button, the roboRIO quickly reconfigures the camera for dark exposure, activates the vision processing algorithm, and starts listening for target info. Stuff to experiment with…

#38

Yes, it is quite simple. The following would set manual exposure mode to a value of 50.

setcam autoexp 1
setcam absexp 50

To set it back to Auto Exposure:

setcam autoexp 0

These commands can be found by simply typing “help” into a serial command line. (Via PuTTY or the Arduino Serial Monitor, for example)

#39

Please, beat me to the punch. :stuck_out_tongue: It doesn’t seem like I’ll be able to add RoboRIO integration for a while. I was planning on getting a start to everything tonight.

#40

following…