I’m going to be responsible for figuring out vision over the summer, meaning I’ll have to leave the world of Eclipse/IntelliJ and find a way to program lots of C++ in a Linux environment (Jetson TX1).
I have a fair amount of experience with command line/Linux, and feel comfortable using text editors like nano, but I feel like creating an entire git repository with multiple C++ classes and libraries will just be a huge mess if I try to do it all through the command line. Not to mention, debugging could take a while and installing libraries is always somehow counter-intuitive on Linux.
Any advice for decreasing the learning curve or at least suggestions for alternative text editors which may serve the purpose better?
I usually do my code in IDEs, then SCP it over to the device. I highly recommend WinSCP if you’re on Windows, which also allows you to keep directories synced so you don’t have to think about it.
This is a good idea. Substitute Emacs for vi and it’s an excellent one :).
Side note: what’s the issue with having an IDE on your Jetson? We did all our coprocessor work with Eclipse on a Kangaroo PC running Ubuntu and it worked fine.
TL;DR: No reason to leave IDEs behind if you don’t want to, but you should try it out.
Vim has a surprising number of plugins available, nearly to the point where it could be used as an IDE (code completion, syntax highlighting, linting).
As for building the project entirely through the command line: I’ve never seen vision code that gets particularly complex on its own. Effective vision programs for FRC are generally small enough to fit in a single source file, so building them isn’t that complex of a process; see the sketch below.
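To make “fits in a single source file” concrete, here’s a minimal sketch of the kind of standalone OpenCV pipeline people run on a coprocessor. The HSV bounds and camera index are placeholders (not values from this thread), and the constant names assume an OpenCV 3.x-style build:

```cpp
// vision.cpp -- illustrative single-file pipeline; HSV bounds and camera
// index are placeholder values, not tuned numbers from this thread.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);               // assumes a USB camera at index 0
    if (!cap.isOpened()) {
        std::cerr << "Could not open camera" << std::endl;
        return 1;
    }

    cv::Mat frame, hsv, mask;
    while (cap.read(frame)) {
        // Threshold for a bright target (placeholder HSV range)
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(50, 100, 100), cv::Scalar(90, 255, 255), mask);

        // Grab the external contours and report the largest bounding box
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        double bestArea = 0;
        cv::Rect best;
        for (const auto& c : contours) {
            double area = cv::contourArea(c);
            if (area > bestArea) {
                bestArea = area;
                best = cv::boundingRect(c);
            }
        }
        if (bestArea > 0) {
            std::cout << "Target at x=" << best.x + best.width / 2
                      << " y=" << best.y + best.height / 2 << std::endl;
        }
    }
    return 0;
}
```

Building something like this is a single compiler invocation against the OpenCV libraries already on the Jetson, so there isn’t much project structure to manage from the command line.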
Finally, with the Jetson, you’re going to want to use NVIDIA’s closed-source build of OpenCV for normal vision stuff, because it’s optimized to take advantage of the Jetson’s shared CPU/GPU memory. Those libraries are already installed when you set the Jetson up with JetPack, so that’s not an issue.
Since the intent with this library is to present the exact same API to the programmer as OpenCV normally would, I’ve found you can write and build things on your own machine, then clone the repo on the Jetson and build from there to make sure everything is still hunky-dory. We don’t do much development on the actual Jetson because of this.
This year we also coded our vision with the TX1 in Ubuntu. One way you could organize your code is to use your own laptop, write the code in an editor/IDE of choice like Atom, and then SFTP the files over to the Jetson. Another way is just using nano/gedit on the Jetson. I feel like there is no clearly better text editor option because they are all similar, and I have not found any autocomplete for OpenCV (the C++ version anyways; I think there is one for the Pi though). Go for what’s convenient for you.

Using an IDE on the laptop and keeping all the vision files there might make organization easier. Or you could use SSHFS to mount the Jetson’s file system; then from your laptop’s file system or through SSH you can perform all the git commands. Another option is integrating git into your IDE of choice. There are really lots of ways to go with this, and I would suggest picking what you’re comfortable with, experimenting to find what works best for organization, and going for it. If you guys ever need any help on the Jetson or the robot (flashing the system, OpenCV installation, vision errors, anything at all) feel free to PM me or email us at [email protected].
EDIT: One thing that will help a lot with organization is keeping an up-to-date version of your vision code on GitHub and your laptop. Having outdated versions can be a pain when debugging scripts. If something breaks (which will 100% happen on the Jetson) you’ll also be able to revert and see what happened. Having updated copies also acts as a backup if anything ever goes wrong.
How does this play with the Jetson in particular? Wouldn’t the IDE just find a bunch of errors from libraries which can only be installed on the Jetson?
We are looking into using the ZED camera, which features depth mapping, with the Jetson. We think that splitting the code into multiple classes could simplify the final project, with the depth-mapping calculations in mind.
The ZED SDK has high-level abstractions for extracting depth data. You don’t need to do it yourself.
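For reference, pulling depth out of the SDK looks roughly like this. It’s only a sketch against a recent ZED SDK release; the exact header and enum names have shifted between SDK versions, so treat the identifiers as assumptions and check the SDK build that matches your JetPack/CUDA version:

```cpp
// Sketch of reading depth through the ZED SDK. Identifiers follow a recent
// SDK release (3.x-style); older releases use slightly different enum names.
#include <sl/Camera.hpp>
#include <iostream>

int main() {
    sl::Camera zed;
    sl::InitParameters params;
    params.depth_mode = sl::DEPTH_MODE::PERFORMANCE;

    if (zed.open(params) != sl::ERROR_CODE::SUCCESS) {
        std::cerr << "Could not open ZED" << std::endl;
        return 1;
    }

    sl::Mat depth;
    if (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        // The SDK computes the depth map for you; just ask for the measure.
        zed.retrieveMeasure(depth, sl::MEASURE::DEPTH);

        float d = 0.f;
        depth.getValue(depth.getWidth() / 2, depth.getHeight() / 2, &d);
        std::cout << "Depth at image center: " << d << " (camera units)" << std::endl;
    }
    zed.close();
    return 0;
}
```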
ASIDE: Word of warning, I’ve found the ZED to be quite finicky in any indoor situation, with rather disappointing precision compared to even the Kinect v1. My communications with their rather helpful and knowledgeable support staff/devs have come to the conclusion of ¯\_(ツ)_/¯, that’s the best you’ll get out of it. I wouldn’t expect anything resembling reasonable accuracy past 10m indoors, and I found that SLAM performance was significantly worse than with a Kinect.
Huh, good to know. What do you recommend using instead? We were planning on using the depth mapping feature in combination with encoders for latency compensation.
As for the Kinect… do you use pre-existing libraries for the depth mapping features, or did your team write something custom?
I imagine further discussion on this topic would need its own thread (in fact it’s likely there already is one), but for the moment, here’s what I can tell you from personal experience (as the students on my team have so far been focused on regular vision and controls robustness).
The Kinect v1 at the very least (someone will have to fact-check this regarding the Kinect v2) does depth mapping in hardware, since it extracts depth with a special infrared emitter/camera system. This means no processing on your end. Its range is a bit short, though.
I’ve read good things about the Kinect v2, as it has more range, but I have yet to test it. Now that prices have fallen for used units, I’m going to have to do some experimentation of my own.
As for getting the data, OpenCV supports capturing directly from it using the OpenNI drivers (a sketch follows below), although for the purposes of doing localization, there are ROS packages that do SLAM without you having to touch much code. If you then want to also do vision on those images, ROS has a reasonable API for getting image data that might be needed by separate processes (in this case your own program and RTAB-Map or Cartographer).
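If it helps to see it, grabbing Kinect frames through OpenCV’s OpenNI backend is only a few lines. This sketch assumes OpenCV was built with OpenNI support and uses OpenCV 3.x constant names (2.4 has CV_CAP_OPENNI_* equivalents):

```cpp
// Sketch of capturing Kinect v1 depth + color through OpenCV's OpenNI backend.
// Assumes OpenCV was compiled with OpenNI support and the Kinect driver is installed.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture capture(cv::CAP_OPENNI);   // cv::CAP_OPENNI2 for the newer driver stack
    if (!capture.isOpened()) {
        std::cerr << "Could not open OpenNI device" << std::endl;
        return 1;
    }

    cv::Mat depthMap, bgrImage;
    if (capture.grab()) {
        capture.retrieve(depthMap, cv::CAP_OPENNI_DEPTH_MAP);   // 16-bit depth, in mm
        capture.retrieve(bgrImage, cv::CAP_OPENNI_BGR_IMAGE);   // regular color frame

        // Run the usual OpenCV pipeline on bgrImage and look up distances in depthMap.
        std::cout << "Depth at center: "
                  << depthMap.at<unsigned short>(depthMap.rows / 2, depthMap.cols / 2)
                  << " mm" << std::endl;
    }
    return 0;
}
```

This is the plain-OpenCV route; the ROS packages mentioned above handle the capture themselves if you go that way instead.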
Perhaps I’ve simply been spoiled by hardware with dedicated sensors for distance measurements? Throughout all of my experiments the Kinect outperformed the ZED at every range the Kinect was capable of, and the ZED was entirely unusable outside that range anyway. I thought something was wrong with my unit, but the ZED people said (paraphrased) “yup, that’s how it’s supposed to be in that environment. You can tweak the settings to make it a bit better.” (It did in fact make it a bit better, but not enough.)
Yup, generally people integrate these things into ROS too, so you get a standard way of getting data out of them (in exchange for some overhead, of course).
Back on topic, I’ll agree with what other people have said: you can use an IDE to program the Jetson if you want to, but if you want to use just a text editor, you can make that work as well (we use vim + command line tools for most of our editing).
If you want to use an IDE, you’ll need to do one of the following:
Set it up to cross-compile for the Jetson (quite a pain to do, but it is doable)
Write the code in an IDE, then scp it to the Jetson to be compiled (losing some of the features of the IDE, but letting you use a familiar environment easily)
Run the IDE on the Jetson (although I’ve never done this, so YMMV)
If you don’t want to run an IDE though (and you’re willing to learn), I’d recommend trying out the command line to edit and build your code. Especially if you’re doing this over the summer, since you’ll have a lot more time to invest in figuring out your tools than you will during build season.