Re: NVIDIA Jetson TK1
Since this popped back up on the list today: there were a number of teams that were looking at / got this board to support vision for next year.
How are your boards working out for you?
Re: NVIDIA Jetson TK1
Quote:
Our impressions of the board are favorable (thus why we now have three of them). Running X11 on them can be unstable at times, but it's not bad most of the time; we will disable X11 for competition. One of our student programmers suggested switching to Wayland... sadly, he became an example for the other students and was shot. ;)

The boards run Linux, but they do require some system knowledge to enable certain features (USB 3.0 is not enabled by default -- you have to update the image using dd or similar commands). You will inevitably encounter driver issues with USB devices or other peripherals.

C++ is the way to go if you are going to use these boards, since you are paying for the GPU and writing OpenCV code in any language other than C++ doesn't seem to give you access to it. You of course gain all of the pain associated with C++, including memory management, and that problem is doubled with the GPU because you have to swap images to and from it.

We have not touched on optimization with our students yet, but we will soon. When we do, we are going to start them down the road of threading, running the network code on a different thread than the image-grabbing thread. We also have not started to look at optimizing the offload to the GPU, though we are using it now, which is cool.

We have not put this board on a robot yet; we are still doing bench-top testing with pre-recorded videos, but we will be putting one on a robot before too long. The power draw from one of these boards is capable of overloading the VRM, but based on our calculations it is doubtful that it will.

Our intent is to post a paper about our efforts sometime before the end of the season. If you have specific questions, feel free to ask or PM me. I'm always happy to chat.
All of the above being said, the following is my personal opinion and not necessarily shared by my team: I think the vision challenges from the last 5+ years can be done without this board, using the new RoboRIO, a cheap-ish USB webcam (we are also a beta team), the examples that WPI/NI/FIRST provide, and some dedicated students/mentors looking at the problems and writing some clever color filters.

These boards can do substantially more than that. Vision processing with OpenCV is capable of doing object recognition (think: looking for and recognizing the bumpers of other robots and playing automated defense: "No, I didn't pin them for 5 seconds, it was exactly 4.99 seconds and I have logs to prove it" ;) ). If you are going to use this board, then I suggest you plan on doing something above and beyond the basic vision challenge of tracking an object by color alone or determining if a goal is simply hot/cold. Granted, I'm a bit of an ambitious dreamer and not always a realist, but my students keep surprising me.

EDIT: In no way take my comments above as negative or as saying teams shouldn't try to do awesome stuff with OpenCV. Please, try everything. I want to be amazed, and I know all teams will continue to impress upon me how awesome FRC is for that. I just want to be clear that these boards are both expensive and powerful and can be used for some awesome stuff.
Re: NVIDIA Jetson TK1
Quote:
There are a number of ways of doing object recognition; the most common method is thresholding based on color: the act of classifying every pixel into (usually) two groups, foreground and background. It is an optimization problem if you get down to the roots of it. This method usually works for game piece detection as well as target detection.

A problem occurs when you threshold by color for bumpers. Take the 2014 game, for example: the balls were blue and red, and the bumpers are blue and red. There is not a strict color requirement for bumpers, however. Yes, they have to be red and blue, but they can be different shades. So that leaves training a program to learn what a bumper is. There are a few ways to do this, but all are very computationally intensive; facial recognition programs use these types of algorithms. One such algorithm is the Haar cascade. What this requires from you is to take as many pictures of bumpers as possible, then train your program on the data set. To get the best results, you'd have to go around at competition and take as many pictures as possible of every robot. Then you have to train the program, and it is not uncommon for that to take several hours.

I personally believe that there needs to be an objective (aerial) camera in order for the level of autonomous play to increase. End of irrelevant rant.
Re: NVIDIA Jetson TK1
I just got the OpenNI Kinect drivers and the Point Cloud Library to compile and work on one of these. It might be possible to do some cool on-robot stuff with this setup. I'm just starting to explore it.
Re: NVIDIA Jetson TK1
So I finally spent some time over the weekend unboxing my Jetson and getting it set up. It's been sitting on my shelf for the past month and a half.
My initial plan is to take the same C++ vision binary (using OpenCV) we used on our BeagleBone in the 2014 season and run a comparison test between the BeagleBone, Jetson, and RoboRio. As of right now I don't have anything to show. I am going the much harder route of putting together cross-compiler instructions, and unfortunately the BeagleBone White uses soft floating-point instructions, while the Jetson is a hard floating-point target, so binaries built for one show incompatibility issues on the other.

Recompiling with hard floating point works for simple projects (like hello world), but the binutils tools for hard floating point seem to have a bunch of bugs; I am working through them one by one. The linker currently crashes when I try to cross-compile the version of OpenCV I have on my machine (2.4.6) to use VFP (the soft floating-point OpenCV compiles perfectly). I am trying to avoid compiling directly on the Jetson (for now), just so I can put together an instruction set for setting up a cross-compiler. (But in the interest of time, I can always cheat: compile OpenCV on the Jetson to get a hard-fp version of the libraries and transfer the shared libraries back to my desktop just to get some benchmarking done.) I will make sure all three are using the same source code and OpenCV version.

It looks like my wish of using the same binary won't work, because at a minimum I would have to recompile the binary to use VFP on the Jetson, and eventually the GPU (but I expected that already). I do have the same binary from the BeagleBone running directly on the RoboRio, but I do not have comparison numbers yet, so I will get those posted as soon as I can.

Regards,
Kevin
Re: NVIDIA Jetson TK1
Quote:
We've got our students writing code and compiling on them, so that's why I am asking. It just made sense for us given the number of students and limited workstations; it was easier for us to just use the boards.
Re: NVIDIA Jetson TK1
Quote:
I understand everyone can SSH into the board and have a different session, but that is slow, and it is hard for us because our development boards typically stay at the school (where we don't have remote access through the school's firewall). With cross-compiler tools set up, I can give my students homework where they write code at home, build it, and push to GitHub, and then we can test it on the board later, which saves us a lot of time.

We only had two BeagleBones this past season, one team-owned and one mentor-owned, so it was really important for us to be able to develop off the target. Right now we only have one Jetson, which is mentor-owned. If we get things rolling on this, I will probably just donate it to my team, but that still means having one dev board and multiple programmers. Plus, I haven't come across any really clear tutorials on working with the Jetson in a cross-compiled environment, so I decided to tackle the challenge. Not sure how smart this was just yet, lol.
Re: NVIDIA Jetson TK1
Quote:
That might also complicate your cross-compilation issues.
Re: NVIDIA Jetson TK1
So just an update:
I finally got the cross-compiler for the Jetson working. I cross-compiled OpenCV for the Jetson's ARM hard-float processor, and I now have the same vision code we used last year running on a BeagleBone, on the RoboRio, and on the Jetson.

Setting up the cross-compiler in Eclipse this go-around was a bit of a nightmare, because I was using an older version of OpenCV for the Bone (we wrote that code back in January 2014), which depended on old versions of FFmpeg and GTK as well as libc 2.17. Once I got hold of those old libraries, recompiled them for armhf, and fixed over 100 broken symlinks, the cross-compiler was working. I am running Ubuntu 12.04 on a Dell Latitude for my development; the cross-compiler I am using is arm-linux-gnueabihf-g++ version 4.6.3.

So far the OpenCV I cross-compiled and have running on the Jetson has support for NEON, FFmpeg, and GTK, as well as JPEG and the Python bindings (although I don't use them). It does not have support for CUDA yet. After I run my benchmark tests using the binaries we ran on the BeagleBone last season, I will upgrade to the latest versions of OpenCV, FFmpeg, and GTK and incorporate CUDA; how I set up that cross-compiler in Eclipse is what I will release.

Now that I have this working on all three of my test platforms, I will publish initial test results sometime this weekend and then follow up with a GPU benchmark later on. If anyone has any specific questions about how the BeagleBone White, RoboRio, and Jetson compare, please let me know and I'll see what I can do. Also look for a complete how-to on setting up the cross-compiler in Eclipse with CUDA support. (This should be a lot easier, because all I should really need to do is install the official OpenCV for Tegra released by NVIDIA with CUDA support on the Jetson, transfer those binaries to my laptop, and afterward install the CUDA SDK on my laptop.) Hopefully I can get to this by next week.

Regards,
Kevin

P.S.
I only had to recompile my code for armhf to run on the Jetson; the same binaries and shared libraries I had on the BeagleBone (soft-float ARM) ran directly on the RoboRio without any recompilation (just symlink fixing). So if you currently use a BeagleBone and want to port your code to the RoboRio, it's a no-brainer.
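For anyone trying to reproduce a cross-compile setup along these lines, the essentials can be captured in a CMake toolchain file. This is only a sketch: the sysroot path is an assumption you will need to adapt, and it uses the stock hard-float GNU toolchain rather than my exact Eclipse configuration.

```cmake
# toolchain-jetson-armhf.cmake -- sketch of a hard-float ARM toolchain file.
# The sysroot path below is an assumption; point it at a filesystem copied
# from your own Jetson.
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)

set(CMAKE_C_COMPILER   arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)

# Search the target's root filesystem so the linker finds the board's
# OpenCV/FFmpeg/GTK libraries instead of the host's.
set(CMAKE_SYSROOT /opt/jetson-sysroot)
set(CMAKE_FIND_ROOT_PATH /opt/jetson-sysroot)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)

# Cortex-A15 with NEON and the hard-float ABI (the soft/hard mismatch
# discussed earlier in this thread is controlled by -mfloat-abi).
set(CMAKE_CXX_FLAGS_INIT "-mfloat-abi=hard -mfpu=neon -mcpu=cortex-a15")
```

Used as `cmake -DCMAKE_TOOLCHAIN_FILE=toolchain-jetson-armhf.cmake ..`; building for a soft-float target like the BeagleBone White would instead use `-mfloat-abi=softfp` and the `gnueabi` (not `gnueabihf`) toolchain.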
Re: NVIDIA Jetson TK1
So I know I said I would be posting a comparison of the Jetson's performance against a couple of different devices, including the RoboRio, and I will; I just keep getting pulled into more pressing matters.
I have a ton of data, and most of my testing is done; I just need to sift through it. Here is a draft of the stuff I have documented so far: http://khengineering.github.io/RoboR...on/cameratest/

More data will be posted very shortly (i.e., in a few days). I am also in the process of rewriting our vision code to make use of the Jetson GPU; all tests so far were CPU vs. CPU. That should be done in about two weeks' time.

Regards,
Kevin
Re: NVIDIA Jetson TK1
All,
We have added a few more updates to our performance analysis. So far, based on our testing, the Tegra TK1 is capable of processing 640x480 images at well over 30 frames per second without any lag, just using OpenCV on the CPU, and there is a lot of CPU headroom left.

We need to perform additional tests on the RoboRio. I remember one test where we were able to run 320x240 at 30 frames per second without any noticeable lag with X11 forwarding enabled, but the data for other frame rates do not support that conclusion, so we are doubling back and re-running our tests to ensure accuracy. We also need to make sure that all cores are being used on the Rio. We can safely conclude for the moment, however, that under our test conditions the RoboRio cannot process 640x480 images at 10 fps or higher without experiencing noticeable lag. We are still trying to determine at what frame rate we can achieve lag-free 640x480 processing on the RoboRio; our baseline test suggests 8 fps, but we have not run a performance test to confirm that.

We still have yet to post any processing results from the BeagleBone Black, so look for those soon. The URL where we are documenting these tests is: http://khengineering.github.io/RoboR...on/cameratest/

If you have any questions about our test methods or conclusions, please let me know.
Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
Copyright © Chief Delphi