
Definitive Guide to Using the Jetson TK1/TX1?


Poseidon5817
15-11-2016, 16:10
The title explains itself: has anyone written a guide for how to set up, program, and use the Jetson boards in FRC? If not, this could be a good resource for teams who want to use a good coprocessor but are turned away by the complexity of the project.





(I may or may not be in this group :D)

frcguy
15-11-2016, 16:19
I'm pretty interested in this too - we have a TX1 but haven't set it up yet. We would love to see an FRC-specific guide for it to help us get it running!

marshall
15-11-2016, 16:30
The title explains itself: has anyone written a guide for how to set up, program, and use the Jetson boards in FRC? If not, this could be a good resource for teams who want to use a good coprocessor but are turned away by the complexity of the project.

(I may or may not be in this group :D)

It hasn't been done yet that I know of, but I am far from an authority on the TX1. That said, we released some white papers over the summer and at the end of last season discussing how we used it successfully.

What I will also tell you is that there are lots of groups working on resources to better use the TX1, and ultimately what you are trying to do should determine what hardware and software you use.

If your goal is to get an awesome vision system running on an FRC robot then have you looked at GRIP yet? Or the PIXY cam?

If your goal is to put a TX1 on your robot and use the GPU, then do you realize that OpenCV programming in C/C++ is required, because there is no Python support for the GPU at present? This also means you are responsible for memory management between the CPU and GPU (it's not done automagically, unfortunately). There are tons of great examples for OpenCV though, so don't let that scare you, but it's something to be aware of.
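
For a rough idea of what that memory management looks like, here is a minimal sketch against the cv::gpu API from the OpenCV4Tegra 2.4 builds (the camera index and the grayscale conversion are just placeholder choices, not a real pipeline):

```cpp
// Explicit host<->device memory management with OpenCV's CUDA module.
// OpenCV4Tegra 2.4.x uses the cv::gpu namespace; OpenCV 3.x renamed it
// to cv::cuda.
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

int main() {
    cv::VideoCapture cap(0);              // placeholder: first USB camera
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray;                  // host (CPU) buffers
    cv::gpu::GpuMat d_frame, d_gray;      // device (GPU) buffers

    while (cap.read(frame)) {
        d_frame.upload(frame);            // explicit copy: CPU -> GPU
        cv::gpu::cvtColor(d_frame, d_gray, CV_BGR2GRAY);  // runs on the GPU
        d_gray.download(gray);            // explicit copy: GPU -> CPU
        // ...contour finding etc. happens back on the CPU...
    }
    return 0;
}
```

Every upload/download is a real copy, so the general advice is to keep as much of the pipeline on the GPU as you can between them.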

Something else to consider is the size and power requirements of the TX1. Do you know how large the TX1 carrier board is, and are you prepared to sacrifice that much space on the robot to hold it? Does your CAD/design team know they need to take that space into account?

Do you know how to power the TX1? It's a bit finicky, but with a suggestion from one of the 254 mentors we were able to get ours going last year. Check out the white papers we published; they'll give you a good starting point, but there is really no definitive guide.

Poseidon5817
15-11-2016, 17:25
I meant more of a guide on how to set it up software-wise: how to install the OS, OpenCV, etc. I ask because we already have a TK1 that is unused. We used GRIP last year with some issues due to the latency.

jman4747
15-11-2016, 18:09
If your goal is to get an awesome vision system running on an FRC robot then have you looked at GRIP yet? Or the PIXY cam?


Do you know of any example code for the PIXY on an FRC robot?

R.C.
15-11-2016, 18:19
I meant more of a guide on how to set it up software-wise: how to install the OS, OpenCV, etc. I ask because we already have a TK1 that is unused. We used GRIP last year with some issues due to the latency.

What issues did you have with GRIP? We used GRIP, and it ran on our driver station. Our shot was fairly fast as well.

Andrew_L
15-11-2016, 18:29
I meant more of a guide on how to set it up software-wise: how to install the OS, OpenCV, etc. I ask because we already have a TK1 that is unused. We used GRIP last year with some issues due to the latency.

What issues did you have with GRIP? We used GRIP, and it ran on our driver station. Our shot was fairly fast as well.

If you have some time in the offseason, I'd suggest trying out this method of running vision without a coprocessor (http://imjac.in/ta/post/2016/11/14/08-00-15-generated/) alongside GRIP. Like R.C. said, GRIP worked fantastically for us this season and made fast vision tracking a breeze; if you did experience some latency, maybe coupling it with this method could help speed things up a bit.

Poseidon5817
15-11-2016, 20:08
What issues did you have with GRIP? We used GRIP, and it ran on our driver station. Our shot was fairly fast as well.

The issues we had with GRIP were the framerate and latency, the latter of which was actually not too horrendous. The framerate, however, was not amazing. Since we already have a Jetson TK1 (from FIRST Choice last year, IIRC) and an excess of USB cameras, we wanted to try some development with it in the hope that it would improve our vision, which it would. Just thought I'd drop in and see if anyone had already created a guide for this, as I have no experience in the coprocessor department.

marshall
16-11-2016, 07:43
The issues we had with GRIP were the framerate and latency, the latter of which was actually not too horrendous. The framerate, however, was not amazing. Since we already have a Jetson TK1 (from FIRST Choice last year, IIRC) and an excess of USB cameras, we wanted to try some development with it in the hope that it would improve our vision, which it would. Just thought I'd drop in and see if anyone had already created a guide for this, as I have no experience in the coprocessor department.

So how were you running GRIP? Was it on the driver station? It's possible to run GRIP on a Raspberry Pi and narrow the latency gap when communicating back to the robot.

The TK1 is what came via FIRST Choice last year; it's physically a bit smaller than the TX1 (at least the board is). It's also a slightly different architecture than the TX1.

At any rate, to your original question, there is no definitive guide for how to use a TX1/TK1 for FRC. These boards are meant more as development environments for companies seeking to build products using these chips and not as turn-key hobbyist robot solutions, even though many people are using them for that.

Some resources for you:

Awesome presentation by Greg McKaskle on vision in FRC:
http://wp.wpi.edu/wpilib/files/2016/05/Vision-in-FIRST.pdf

Wiki for Jetson TX1 info (not always current but useful):
http://elinux.org/Jetson_TX1

As close to a getting started guide as I could find for the TX1:
http://www.jetsonhacks.com/2016/08/16/thoughts-on-programming-languages-and-environments-jetson-dev-kits/
http://www.jetsonhacks.com/2016/08/17/thoughts-on-programming-languages-and-environments-part-ii-jetson-dev-kits/
http://www.jetsonhacks.com/2016/08/17/thoughts-on-programming-languages-and-environments-part-iii-jetson-dev-kits/
http://www.jetsonhacks.com/2016/08/18/thoughts-on-programming-languages-and-environments-part-iv-jetson-dev-kits/

One more highly overlooked camera for FRC is the Sony PlayStation Eye:
http://www.jetsonhacks.com/2016/09/29/sony-playstation-eye-nvidia-jetson-tx1/

If you are just trying to improve framerate and latency, try to find the smallest frame size you can use while still getting back the data necessary to perform the task.
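
Requesting a smaller capture size is usually a one-liner; whether the driver honors it varies by camera, so check what you actually get back. A sketch (320x240 is just an assumed starting point):

```cpp
// Request a smaller frame size to cut per-frame processing time.
#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                 // assumed camera index
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);   // request 320x240
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);

    cv::Mat frame;
    if (cap.read(frame))                     // verify what the driver gave us
        std::printf("actual size: %dx%d\n", frame.cols, frame.rows);
    return 0;
}
```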

Someone else above asked about FRC code for the Pixy:
https://github.com/Round-Table-Robotics/FRCDashboard---Pixy_Virtual_Screen

KJaget
16-11-2016, 08:13
I meant more of a guide on how to set it up software-wise: how to install the OS, OpenCV, etc. I ask because we already have a TK1 that is unused. We used GRIP last year with some issues due to the latency.

Depends on what you mean by "etc.", but to install the latest OS on the board, use JetPack: https://developer.nvidia.com/embedded/jetpack. That will give you the latest OS, OpenCV, CUDA, and so on. At that point it is an Ubuntu desktop machine, so "sudo apt-get install" will get you a ton of other stuff if you need it.

And for fixing latency, try maxing out the clocks on the board at startup: http://elinux.org/Jetson/Performance

We've had good luck this year using Python for prototyping vision code. See the tutorials at http://opencv24-python-tutorials.readthedocs.io/en/stable/py_tutorials/py_tutorials.html. Sure, you're not going to get GPU acceleration this way, but realistically you should be able to get goal detection going at a reasonable speed using just the CPU.
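
The basic goal-detection loop is short enough to sketch here. This one is in C++ (the Python version from those tutorials is nearly line-for-line the same), and the HSV bounds are made-up placeholders you would tune for your camera and light ring:

```cpp
// Minimal CPU-only goal detection: HSV threshold, then contours.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame, hsv, mask;

    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, CV_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(50, 100, 100),   // assumed lower bound
                    cv::Scalar(90, 255, 255),        // assumed upper bound
                    mask);

        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(mask, contours, CV_RETR_EXTERNAL,
                         CV_CHAIN_APPROX_SIMPLE);
        // Next step: filter contours by area/aspect ratio and turn the
        // target center into an aiming angle.
    }
    return 0;
}
```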

RyanShoff
16-11-2016, 11:44
I just did this a couple of months ago. Next time we'll try to document it better.

You need a system running Ubuntu 14.04 to install JetPack from. A VM might work, but I haven't tried it. I did manage to get it to work from 16.04, but I would stick to 14.04 if you can.

The current JetPack documentation is here: http://docs.nvidia.com/jetpack-l4t/index.html#developertools/mobile/jetpack/l4t/2.3/jetpack_l4t_install.htm

Make sure you install OpenCV for Tegra and the CUDA toolkit. You might not want to install some of the samples or machine-learning packages.

The install will walk you through putting the board in recovery mode.

Last year we used two builds of our code. One with a GUI to view the processing for calibration and one without the GUI. We would ssh with X forwarding into the TK1 to use the GUI version.

Our code is here: https://github.com/FRC-Team-4143/vision-stronghold
It documents most of the tweaks we made to the standard configuration. I need to clean up a couple of symbolic links in /etc that git didn't follow.

Poseidon5817
16-11-2016, 14:32
I just did this a couple of months ago. Next time we'll try to document it better.

You need a system running Ubuntu 14.04 to install JetPack from. A VM might work, but I haven't tried it. I did manage to get it to work from 16.04, but I would stick to 14.04 if you can.

The current JetPack documentation is here: http://docs.nvidia.com/jetpack-l4t/index.html#developertools/mobile/jetpack/l4t/2.3/jetpack_l4t_install.htm

Make sure you install OpenCV for Tegra and the CUDA toolkit. You might not want to install some of the samples or machine-learning packages.

The install will walk you through putting the board in recovery mode.

Last year we used two builds of our code. One with a GUI to view the processing for calibration and one without the GUI. We would ssh with X forwarding into the TK1 to use the GUI version.

Our code is here: https://github.com/FRC-Team-4143/vision-stronghold
It documents most of the tweaks we made to the standard configuration. I need to clean up a couple of symbolic links in /etc that git didn't follow.

So you do need a system running Ubuntu to use the Jetson?

marshall
16-11-2016, 15:08
So you do need a system running Ubuntu to use the Jetson?

It makes updating it easier because of the JetPack software, and yes, a VM should work without any issues, as that is how I did it. Granted, I was using Workstation Pro on a Windows host, so your mileage may vary.

Once you have it updated, you can do development locally.

KJaget
17-11-2016, 09:10
Our code is here: https://github.com/FRC-Team-4143/vision-stronghold

Nice. Definitely "borrowing" the CUDA inRange implementation for next year.

sanddrag
28-12-2016, 01:53
I was looking for a "Getting started with the Jetson TX1 for Dummies" type of guide and came up empty-handed. Our unit from FIRST Choice arrived today, and I'm already kind of regretting getting it. I understand what you're supposed to be able to do with it, and it seems like it has a lot of potential, but actually using this thing seems way over my head. Other than Team 900, I haven't really found any accounts of a FIRST team successfully using it. Does anyone have any kind of FIRST-specific easy guide for this thing? If not, maybe someone wants to trade me for something simple that I can figure out, like a bunch of hex shaft collars....

Penchant
28-12-2016, 03:46
I was looking for a "Getting started with the Jetson TX1 for Dummies" type of guide and came up empty-handed. Our unit from FIRST Choice arrived today, and I'm already kind of regretting getting it. I understand what you're supposed to be able to do with it, and it seems like it has a lot of potential, but actually using this thing seems way over my head. Other than Team 900, I haven't really found any accounts of a FIRST team successfully using it. Does anyone have any kind of FIRST-specific easy guide for this thing? If not, maybe someone wants to trade me for something simple that I can figure out, like a bunch of hex shaft collars....

I am no expert with the Jetson TX1, but I do have some experience from last year, when the team I was on got one. I will say it certainly requires some time to put to use, but it is quite powerful, and as a student on the team at the time I know I learned a lot from it. As for setup: use JetPack, following one of the guides mentioned above in the thread, to get OpenCV installed, and start small.

We started late because programming was focused on other things (we were switching to C++ that year), and not getting started earlier was definitely a mistake. We also went very ambitious right away, which meant a lot of development time before anything was made; starting small and continually ramping up would have been a much better way to go about it, to ensure we had something ready in time for our competitions.

Now, the above was generic approach advice, but that is mostly because, other than the resources Marshall already recommended above, there isn't much I can specifically point you to without knowing your experience with networking or Linux systems. To give a little specific help: for networking, an option that may be easier for you to integrate is NetworkTables (https://github.com/wpilibsuite/ntcore), although it's not hard to make an argument for pursuing a custom option with TCP or UDP; see the sketch below. Something else I have seen advised is using something like the NavX for timestamps, so that you know exactly how long it was from when the picture was taken to when you act on that data, and can account for robot movement in that time.
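
For reference, the coprocessor side of the NetworkTables option can be as small as this, a rough sketch against the 2016-era ntcore C++ API; the team number, table name, and keys here are all made up for the example:

```cpp
// Publish vision results from a coprocessor to the roboRIO over
// NetworkTables. Team number, table name, and keys are placeholders.
#include "networktables/NetworkTable.h"

int main() {
    NetworkTable::SetClientMode();     // the roboRIO runs the server
    NetworkTable::SetTeam(9999);       // hypothetical team number
    NetworkTable::Initialize();

    auto table = NetworkTable::GetTable("vision");  // assumed table name
    table->PutNumber("angle", 3.2);     // assumed keys and sample values
    table->PutNumber("distance", 84.0);
    return 0;
}
```

The robot side then reads the same keys each loop, which is where the timestamp idea comes in: publish a timestamp alongside the data so the robot can account for how stale the measurement is.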

Beyond that, I would be happy to answer any specific questions you have about getting started with the TX1, within my limited experience. If you are wondering more about how to design your vision program: we did eventually get a successful vision program made, so I can certainly give you a general overview of what I would recommend on that end as well.

JamesBrown
28-12-2016, 17:34
I'm actually about to go pick up our TX1 that came in with our FIRST Choice shipment. This is going to be a bit of a pet project for me, with the plan being to use it on the robot this year if practical. We spent time this off-season exploring other vision options, so this may get delayed if something like the Pixy cam is sufficient for this year.

I will do my best to document as I go, and I will try to be active on here to discuss and answer questions when I can. I don't think there is a single reference source at this point, but I am happy to contribute. There probably won't be a definitive reference for this season, but perhaps the best solution is to have a go-to thread for discussion and help, and then in the offseason we can work to compile a guide.

Poseidon5817
28-12-2016, 19:35
I was looking for a "Getting started with the Jetson TX1 for Dummies" type of guide and came up empty-handed. Our unit from FIRST Choice arrived today, and I'm already kind of regretting getting it. I understand what you're supposed to be able to do with it, and it seems like it has a lot of potential, but actually using this thing seems way over my head. Other than Team 900, I haven't really found any accounts of a FIRST team successfully using it. Does anyone have any kind of FIRST-specific easy guide for this thing? If not, maybe someone wants to trade me for something simple that I can figure out, like a bunch of hex shaft collars....

This is exactly how I feel about it: that it might be a little too complex for some people to figure out without a step-by-step guide, and I'm definitely in that group.