View Full Version : NVIDIA Jetson TK1


jesseclayton
18-06-2014, 19:03
FRC Teams:

As some of you may know, NVIDIA visited the FIRST Championships in St. Louis this year.

While NVIDIA has been involved in various ways in the past, this was my first experience with FIRST.

I was completely blown away. I was amazed by the technical excellence, the competition, the cooperation, the professionalism. Having been in the tech industry for almost 18 years, I can tell you that FIRST embodies the best aspects of science and technology.

One of the reasons NVIDIA was at Championships was to show off the new Jetson TK1 developer platform. It’s a small, low-power, fully functional computer, great for computer vision in robotics. You can learn more about it here: http://developer.nvidia.com/jetson-tk1.

Other relevant information, including list of compatible cameras, is here: http://elinux.org/Jetson_TK1

We thought it would be fun for FRC teams to show off how they would use Jetson TK1 to solve this previous year’s challenges, and share their work with the rest of the community. NVIDIA is offering the Jetson TK1 for $130 (normally $192) to FRC teams. If you are interested please fill out the form here: https://www.surveymonkey.com/s/JetsonTK1-First

Edit: The discount will be available until July 12, 2014.

Thanks

Jesse Clayton
Product Manager, Mobile Embedded | NVIDIA

lucas.alvarez96
18-06-2014, 20:45
Jesse:

It was quite a pleasant surprise to see NVIDIA at the FIRST champs alongside so many other sponsors! Sadly, I only passed by your booth, as the long list of things to do, such as presenting the Chairman's Award, kept me busy throughout the event. Besides presenting the award on behalf of my team, I was very actively involved in programming this season, not just robot programming but off-board vision solutions as well. I just talked with my team's head mentor, and he would be happy to consider purchasing this board for our team so we can further improve our vision work. We have just one problem: the link you provided does not include a "Country" field, which leads me to believe that this product is only available within the United States. Is there any possibility of buying the board from Chile?

Thanks for the support your company is giving to FIRST!

Lucas

Kevin Watson
18-06-2014, 23:45
For years we've all wanted enough compute power to do some very serious computer vision on our 'bots, and now we have it. I've been working with the Tegra TK1 on a project at Google for the last eleven months, and what we're able to do with this device is jaw-dropping. Want to autonomously navigate the field using just a camera and a gyro? You now have enough compute power to do it on your 'bot at video rates. Over the summer I hope teams will take advantage of this cool offer and start developing software to solve the typical problems you face every year on the field that formerly required a human in the loop (e.g., game piece tracking, hazard avoidance, path planning and navigation, etc.). If you do this, and are prepared for the game reveal next January, imagine the amazingly cool things you'll be able to do.

If you've read this far and you're still not convinced that you should learn CUDA, VisionWorks, and OpenCV programming over the summer instead of playing Call of Duty XVII for countless hours, do yourself a favor and spend just sixteen minutes watching this video, starting at 1:20:00:

http://www.gputechconf.com/attendees/keynotes-replay

-Kevin

jesseclayton
19-06-2014, 00:02
The link you provided does not include a "Country" field

That was an oversight. The survey has been updated to include a country field. Thanks for the catch!

Jesse Clayton
Product Manager, Mobile Embedded | NVIDIA

markmcgary
19-06-2014, 11:30
Thank you. Survey form completed.

ehochstein
19-06-2014, 11:52
This looks awesome! Thank you for the opportunity.

sparkytwd
19-06-2014, 17:13
Wow, a very cool devboard. My team has run an onboard computer for the past 3 years. I really like the onboard SATA. Last year we did realtime HD recording from 2 cameras (and a 3rd in standard def), and we hit a major bottleneck in disk bandwidth.

My only request would be to have more USB ports, as we used 5 last year.

Does this have support for accelerated video encoding?

Kevin Watson
19-06-2014, 17:40
My only request would be to have more USB ports, as we used 5 last year.

Aside from the additional board space needed, I suspect the reason there is just one of each type of USB port is that most people would use a powered hub (or hubs) with the board.

Does this have support for accelerated video encoding?

Yes, it has a built-in high-performance H.264 hardware video encoder. I'm not sure how it is exposed to Linux user space, so I would go over to the support forum and ask. Here's a link:

https://devtalk.nvidia.com/default/board/139/embedded-systems/

-Kevin

sparkytwd
19-06-2014, 17:46
Aside from the additional board space needed, I suspect the reason there is just one of each type of USB port is that most people would use a powered hub (or hubs) with the board.


Sure, and that's what we did in 2013. However, having fewer things that need separate wiring and power is always good.


Yes, it has a built-in high-performance H.264 hardware video encoder. I'm not sure how it is exposed to Linux user space, so I would go over to the support forum and ask. Here's a link:

https://devtalk.nvidia.com/default/board/139/embedded-systems/

I'll have to check that out, thanks.


Kevin Watson
25-06-2014, 21:55
...I've been working with the Tegra TK1 on a project at Google for the last eleven months, and what we're able to do with this device is jaw-dropping...

For those who may be curious about the project I mentioned above: we unveiled the Google Project Tango tablet at Google I/O today in San Francisco. I'm sure there will be much more info released about it in the coming days, but this video will give you an idea of what you can do with the Tegra K1-based Jetson TK1 computer:

http://www.youtube.com/watch?v=4KrkW1afnuI

At 0:31 of the video is a short demonstration of odometry that may be of interest to a few teams :-)

-Kevin

sparkytwd
06-07-2014, 16:50
My dev board just showed up, thanks so much to Jesse. When I saw it had a 12V power supply instead of 5V like the other boards, my concern was that this would be an issue on the robot.

Under heavy motor load, even a fully charged battery can dip to 11v, which causes a problem for onboard systems that require a steady 12v. Looks like this won't be an issue: http://developer.download.nvidia.com/embedded/jetson/TK1/docs/Jetson_TK1_FAQ_2014May01_V2.pdf

The range seems to be 9.5V to 16V (13.2V if you're using a SATA drive that uses the 12V rail, though I think only spinning drives regularly use that, and those would already have more issues on a robot).

Joe Ross
06-07-2014, 17:30
My dev board just showed up, thanks so much to Jesse. When I saw it had a 12V power supply instead of 5V like the other boards, my concern was that this would be an issue on the robot.

While it won't help the Jetson (because of the high power draw), the 2015 control system will include a Voltage Regulator Module (VRM) by Cross the Road Electronics that will provide regulated 5V @ 0.5A, 5V @ 2A, 12V @ 0.5A, and 12V @ 2A. We'll have to wait for the 2015 rules to see which rails are required for control system components and which are available for team use.

RufflesRidge
06-07-2014, 19:48
While it won't help the Jetson (because of the high power draw), the 2015 control system will include a Voltage Regulator Module (VRM) by Cross the Road Electronics that will provide regulated 5V @ 0.5A, 5V @ 2A, 12V @ 0.5A, and 12V @ 2A. We'll have to wait for the 2015 rules to see which rails are required for control system components and which are available for team use.

Based on the numbers in the Technical Brief (http://developer.download.nvidia.com/embedded/jetson/TK1/docs/Jetson_platform_brief_May2014.pdf) and Technical FAQ (http://developer.download.nvidia.com/embedded/jetson/TK1/docs/Jetson_TK1_FAQ_2014May01_V2.pdf) the 12V 2A VRM supply may be near the edge, but it is possible it would work for running the Jetson board.
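
To put rough numbers on it: the 12V @ 2A channel gives 12 V x 2 A = 24 W, while the FAQ puts NVIDIA's "reasonably stressful" test loads just under 30 W (12V @ 2.5A). So a fully loaded board could exceed that channel, but lighter vision-only loads should leave some margin.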

If the wireless solution is the 1522 as has been rumored, the 12V rails should be fair game with the radio on the 5V 2A supply.

sparkytwd
06-07-2014, 22:36
Based on the numbers in the Technical Brief (http://developer.download.nvidia.com/embedded/jetson/TK1/docs/Jetson_platform_brief_May2014.pdf) and Technical FAQ (http://developer.download.nvidia.com/embedded/jetson/TK1/docs/Jetson_TK1_FAQ_2014May01_V2.pdf) the 12V 2A VRM supply may be near the edge, but it is possible it would work for running the Jetson board.

I think you would still want to run the Jetson directly from the unregulated power.

The present kit includes a more than ample 12V @ 5A (60W) power supply. NVIDIA is evaluating smaller power supplies for the production kit, since the reasonably stressful applications NVIDIA has tested so far are below 30W (12V @ 2.5A).


They also mention that the 12V rail directly powers only the fan and the SATA power connector, so if you're not running a 12V SATA drive (I'm not aware of any SSDs that use it), the only unregulated load on the Jetson is the fan.

jesseclayton
01-08-2014, 14:10
My dev board just showed up, thanks so much to Jesse. When I saw it had a 12V power supply instead of 5V like the other boards, my concern was that this would be an issue on the robot.

Under heavy motor load, even a fully charged battery can dip to 11v, which causes a problem for onboard systems that require a steady 12v. Looks like this won't be an issue: http://developer.download.nvidia.com/embedded/jetson/TK1/docs/Jetson_TK1_FAQ_2014May01_V2.pdf

The range seems to be 9.5V to 16V (13.2V if you're using a SATA drive that uses the 12V rail, though I think only spinning drives regularly use that, and those would already have more issues on a robot).

In addition to what others have posted, there is some information on using alternative power sources on the Jetson public wiki: http://elinux.org/Jetson/Jetson_TK1_Power .

Notably:

The Jetson TK1 accepts a standard 2.1mm DC barrel plug (center pin positive, outer ring negative) and is rated for 12V DC input, but will actually work with any input voltage between 9.5V and 13.5V. Note that SATA disks require a fairly precise 12V, so you shouldn't run at the edges of that range if you will power SATA hard drives from the Jetson TK1. It is known that the Jetson TK1 board won't turn on at less than 9.5V, and it will likely be damaged at 16V or above. It may also be possible to power the Jetson TK1 board somewhere in the 13.5V to 16V range, but NVIDIA has not tested this.

sparkytwd
04-08-2014, 18:14
Note that SATA disks require a fairly precise 12V, so you shouldn't run at the edges of that range if you will power SATA hard drives from the Jetson TK1.

I believe most SATA SSDs require only 5V, such as the Samsung 840 Pro (http://www.samsung.com/us/pdf/memory-storage/840PRO_25_SATA_III_Spec.pdf).

If you want to be extra sure, you can cut the yellow (12V) wire on a standard Molex adapter. The 12V line is generally there to run the spindle motor, and a spinning drive is a bad idea on a robot even with a stable 12V.

NotInControl
05-08-2014, 16:10
I am having one of my buddies ship me their TK1 dev board so I can test the Jetson TK1 with the Alpha/Beta hardware I have.

We used a BeagleBone running OpenCV this year for vision in auto (it worked very well). With a 320x240 image we were getting 20fps with about 100ms of lag behind real-time (not very noticeable, and well within the requirements of hot goal detection).

Hopefully I can get around to this: I am going to recompile OpenCV with CUDA support and see how much more we can do with the extra compute power.

I also wanted to take the same binary I had on the beagleBone, and run it directly on the RoboRio for comparative purposes, but haven't gotten around to that as of yet. But that is coming as well.

I am going to see how well this 12V board integrates with the RoboRio and new PDP. I may also end up putting it on a buck-boost supply so that when the robot voltage dips, this guy doesn't lose power. But I will only do that if other tests prove it is necessary. (My hunch right now is that some boost supply will be necessary, knowing how low our robots dip in voltage.)

The drivetrain I am running for these tests is an 8-wheel (traction Colson), 4-CIM, single-speed, so it should be possible to get the battery to dip to 8V instantaneously during normal driving.

I will report back whatever I can, as soon as I can. I just wanted to make the community aware that someone with Beta hardware is testing this out.

I am not sure I will buy one of these yet; it also looks like NVIDIA plans to release an upgrade to the TK1 in early 2015 (maybe the new board can support the 2015 build season?!).

-Kevin

Gdeaver
05-08-2014, 20:53
If this board is used for auto only, voltage dip should not be a problem for most teams. However, judging from our voltage logs and those of some other teams this year, the voltage drops would most likely be a problem for many teams. There are many automotive ATX-style power supplies available. They are not cheap, and they come with a standard ATX plug. A 6-24 volt input range is very common, and they are designed to take a hard engine cranking in an automotive environment.

NotInControl
05-08-2014, 21:46
Thanks for the info. I personally probably wouldn't go the automotive ATX route. Those are typically used for car PC/infotainment systems with heavy power usage, which is why they are so expensive. I would be surprised if you could find one under 100 watts, and that much is overkill for this application.

I was thinking more along the lines of a custom power circuit with one of these at its heart, if power conditioning were required (I think it will be).

$10 - http://www.digikey.com/product-detail/en/LT3791EFE%23PBF/LT3791EFE%23PBF-ND/3074261

$11 - http://www.digikey.com/product-detail/en/LTC3780EG%23TRPBF/LTC3780EG%23TRPBFCT-ND/3885241

I haven't down-selected between the two, or even done an exhaustive search, but these two chips should provide more than enough power to support the Jetson TK1 at full performance on an FRC bot.

When I get around to it and have time to compare the features of these and other chips, I will pick one; at the moment I am leaning towards the LED driver chip.

This is a more pocket-friendly and application-specific alternative than an ATX power supply.

- Kevin

Gdeaver
06-08-2014, 08:09
Those are raw chips. Can you, as a DIY project, buy the BOM and make the boards? If you have boards made, is the cost and time less than a COTS automotive power supply? There seems to be a gap in automotive-grade buck-boost modules between small, cheap modules at about 0.5A and expensive ones above 10A. I found a 12 volt 10A single-output module for $57, but nothing smaller until the low-power stuff at $8. If you are going to make your own, TI has some reference designs with everything figured out and ready to make. I think most teams would just want to buy the power supply and not get into custom-made power supplies.

NotInControl
06-08-2014, 12:44
Those are raw chips. Can you, as a DIY project, buy the BOM and make the boards? If you have boards made, is the cost and time less than a COTS automotive power supply? There seems to be a gap in automotive-grade buck-boost modules between small, cheap modules at about 0.5A and expensive ones above 10A. I found a 12 volt 10A single-output module for $57, but nothing smaller until the low-power stuff at $8. If you are going to make your own, TI has some reference designs with everything figured out and ready to make. I think most teams would just want to buy the power supply and not get into custom-made power supplies.

I'm sorry if what I am doing is not clear. Below is my approach, so it is clear what I will try to accomplish and what information I will try to make available to the community along the way.

The Beta Test equipment comes with a Voltage Regulator Module (VRM) that has four different output types available simultaneously: 2 channels of 5V@2A, 2 channels of 5V@500mA, 2 channels of 12V@2A, and 2 channels of 12V@500mA.

It can maintain constant output with input voltages ranging from 4-24VDC. The VRM gets 12V input power from a dedicated port on the 2015 power distribution panel. I believe most teams will use the VRM for many of their applications. I am not sure if you were aware of this module.

The only reasons I would venture into the DIY route for power conditioning are:
1. running the TK1 at 12V@2A turns out to limit full performance of the GPU, and
2. it is not possible to draw 4 amps from the VRM by combining output pins in parallel, as is possible with some other embedded devices (I need to confirm this with CTRE first). (Even if it is possible, it may be illegal under the 2015 FRC rules.)

I was jumping the gun by saying that, for the very few teams that may need to run this board at its full 60W performance (i.e., running SATA drives and such, as indicated by previous posts in this thread), there is a way out by making your own buck-boost converter. The solution I proposed is the one I would take, based on my comfort level with electronics, if I got that far with my very limited time. There are many DIY plans available on the net already for buck-boost regulators. But I think a very small number of teams will need to go that route.

I believe running the TK1 at 2 amps from the 2015 VRM provided in the kit of parts will be more than sufficient for most teams that want to use this board for off-board processing. The small percentage of teams needing to push the board to its limits can venture into the DIY route or buy their own power supply. I will make available whatever I do with this board during the off-season. My previous statement that "I believe some sort of boost converter would be required", if it wasn't clear, assumed the user required full performance and was drawing more than 2 amps continuous from the VRM, in which case the VRM could no longer work and another solution is required.

Hopefully this clears it up for everyone.
Regards,
Kevin

Gdeaver
06-08-2014, 13:08
I wasn't aware of the CTRE specs. There probably will not be that many teams pushing onboard vision to the limit, and the ones who do can find their own solution. I don't think CTRE needs to supply a solution that supports a small population of teams.

Foster
06-08-2014, 15:01
Thanks Kevin for the posts with the info about the CTRE power unit. I'm interested to see what your current levels are when the TK1 is at full load.

marshall
06-08-2014, 15:26
This chart of power draw for vision tasks might also be useful given what teams are likely to use the TK1 for:

http://elinux.org/Tegra/OpenCV_Performance#Power_draw_during_computer_vision_tasks

adciv
07-08-2014, 09:54
Side note: for fairly low cost you can buy these ATX PSUs. Sure, the smallest is 80W, but they are compact and inexpensive. We ran a Kinect on one in 2013. I plan to reuse it with the new boards once I have a chance to start playing with them.
http://www.mini-box.com/DC-DC

sparkytwd
07-08-2014, 10:08
Side note: for fairly low cost you can buy these ATX PSUs. Sure, the smallest is 80W, but they are compact and inexpensive. We ran a Kinect on one in 2013. I plan to reuse it with the new boards once I have a chance to start playing with them.
http://www.mini-box.com/DC-DC

Keep in mind not all of those converters can handle sub-12V input. You need to select one that explicitly mentions either a low acceptable input range (usually down to 6V) or a boost design.

marshall
12-08-2014, 09:16
Would sir care for a more robust design fit for a FIRST team?

http://www.amazon.com/gp/product/B00MHX6TIA/ref=amb_link_423623342_3?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=hero-quick-promo&pf_rd_r=18CFWXXR1SNCHZBS0D3S&pf_rd_t=201&pf_rd_p=1878722382&pf_rd_i=B00MHX6V88

http://www.amazon.com/Acer-Chromebook-CB5-311-T7NN-13-3-inch-NVIDIA/dp/B00MHX6V88/ref=sr_tr_sr_1?s=pc&ie=UTF8&qid=1407849069&sr=1-1&keywords=acer+chromebook+13

Supposedly there will be an OEM/educational variant as well, with 4GB of RAM but only 16GB of storage, which will fall midway between the above two on the price scale and make me very happy.

Kevin Watson
09-09-2014, 16:22
If your team purchased a Jetson board, how is it going? Have you done anything cool with it yet? Do you need help? Feedback would be greatly appreciated.

-Kevin

marshall
09-09-2014, 16:43
If your team purchased a Jetson board, how is it going? Have you done anything cool with it yet? Do you need help? Feedback would be greatly appreciated.

-Kevin

We have a couple. The mentors have been playing with them so far, but we're going to hand one over to some students soon (our season just started). It's a nice development board. In our experience there are stability issues with running X11 on it. We've flashed them a few times, and updating them with a clean image is painfully slow. It boots fast enough for an FRC team to use, but it is Linux, so at some point it's going to get annoyed with being hard-rebooted and will run fsck when you least expect it.

It doesn't do USB 3.0 by default; that requires some bit fiddling, but it's not a big deal.

OpenCV is really fast on it, particularly for our needs. NVIDIA has just released a new version of the CUDA software kit as well, so more updates are probably coming soon.

matan129
22-10-2014, 15:55
NVIDIA is offering the Jetson TK1 for $130 (normally $192) to FRC teams. If you are interested please fill out the form here: https://www.surveymonkey.com/s/JetsonTK1-First

Edit: The discount will be available until July 12, 2014.
Is there any way to get the Jetson at the discounted price even though the offer has ended?

marshall
22-10-2014, 16:01
Is there any way to get the Jetson at the discounted price even though the offer has ended?

Jesse is probably not actively checking CD for comments. That being said, I suspect the discount is still available. I would shoot him an email to ask: jclayton@nvidia.com

Foster
22-10-2014, 18:36
Since this popped back to the top of the list today: a number of teams were looking at or got this board to support vision for next year.

How are your boards working out for you?

marshall
23-10-2014, 09:35
Since this popped back to the top of the list today: a number of teams were looking at or got this board to support vision for next year.

How are your boards working out for you?

I haven't seen too many teams actively talking about their development on these here on CD, but we now have 3 of them for our students to test and play with (4 if you count the mentor-owned one). So far, most of the challenges have been around getting the students acclimated to C++, getting a C++ program on the board to communicate over the network to LabVIEW, and then getting LabVIEW to understand that communication.

Our impressions of the board are favorable, though (hence why we now have 3 of them). Running X11 on them can be unstable at times, but it's not bad most of the time. We will disable X11 for competition. One of our student programmers suggested switching to Wayland... sadly, he became an example for the other students and was shot. ;)

The boards run Linux, but they do require some system knowledge to enable certain features (USB 3.0 is not enabled by default). You have to update the image using dd or similar commands. You will inevitably encounter driver issues with USB devices or other things.

C++ is the way to go if you are going to use these boards: you are paying for the GPU, and writing OpenCV code in any language other than C++ doesn't seem to give you access to it. You of course gain all of the pain associated with C++, including memory management, and that problem is doubled with the GPU because you have to swap images to and from it.
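
To make that swap concrete, here's a minimal (untested) sketch of what every frame ends up doing with the 2.4-era gpu module; the upload/download calls are the CPU-to-GPU copies I'm complaining about:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/gpu/gpu.hpp>

void processFrame(const cv::Mat& frame, cv::Mat& out)
{
    cv::gpu::GpuMat d_frame, d_gray, d_thresh;
    d_frame.upload(frame);                            // host -> device copy
    cv::gpu::cvtColor(d_frame, d_gray, CV_BGR2GRAY);  // runs on the GPU
    cv::gpu::threshold(d_gray, d_thresh, 200, 255, CV_THRESH_BINARY);
    d_thresh.download(out);                           // device -> host copy
}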

We have not touched on optimization with our students yet, but we will soon. When we do, we are going to start them down the road of threading, running the network on a different thread than the image-grabbing thread. We also have not started to look at optimizing the offload to the GPU, though we are using it now, which is cool.
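
The shape we'll probably start them with is something like this sketch (not our actual code; the camera index and the send step are placeholders):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <thread>
#include <mutex>
#include <atomic>
#include <functional>

static cv::Mat latestFrame;
static std::mutex frameLock;
static std::atomic<bool> running(true);

// Grab thread: pull frames as fast as the camera delivers them.
void grabLoop(cv::VideoCapture& cap)
{
    cv::Mat frame;
    while (running && cap.read(frame)) {
        std::lock_guard<std::mutex> guard(frameLock);
        frame.copyTo(latestFrame);  // keep only the newest frame
    }
}

// Network thread: process the most recent frame and ship the results.
void networkLoop()
{
    cv::Mat work;
    while (running) {
        {
            std::lock_guard<std::mutex> guard(frameLock);
            latestFrame.copyTo(work);
        }
        if (!work.empty()) {
            // process `work` and send the result to the LabVIEW side here
        }
    }
}

int main()
{
    cv::VideoCapture cap(0);  // placeholder camera index
    std::thread grabber(grabLoop, std::ref(cap));
    networkLoop();            // shutdown/signaling omitted for brevity
    grabber.join();
    return 0;
}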

We have not put this board on a robot yet. We are still doing bench-top testing with pre-recorded videos. We will be putting one on a robot before too long. The power draw from one of these boards is capable of overloading the VRM, but based on our calculations it is doubtful that it will.

Our intent will be to post a paper sometime before the end of the season about our efforts. If you have specific questions then feel free to ask or PM me. I'm always happy to chat.

All of the above being said, the following is my personal opinion and not necessarily shared by my team:
I think the vision challenges from the last 5+ years can be done without this board, using the new RoboRIO, cheap-ish USB webcams (we are also a beta team), the examples that WPI/NI/FIRST provide, and some dedicated students/mentors looking at the problems and writing some clever color filters.

These boards can do substantially more than that. Vision processing with OpenCV is capable of object recognition (think: looking for and recognizing the bumpers of other robots and playing automated defense: "No, I didn't pin them for 5 seconds, it was exactly 4.99 seconds and I have logs to prove it" ;) ). If you are going to use this board, then I suggest you plan on doing something above and beyond the basic vision challenge of tracking an object by color alone or determining if a goal is simply hot/cold. Granted, I'm a bit of an ambitious dreamer and not always a realist, but my students keep surprising me.

EDIT: In no way take my above comments as negative or as saying teams shouldn't try to do awesome stuff with OpenCV. Please, try everything. I want to be amazed, and I know teams will continue to remind me how awesome FRC is for that. I just want to be clear that these boards are both expensive and powerful, and they can be used for some awesome stuff.

faust1706
24-10-2014, 14:44
These boards can do substantially more than that. Vision processing with OpenCV is capable of object recognition (think: looking for and recognizing the bumpers of other robots and playing automated defense: "No, I didn't pin them for 5 seconds, it was exactly 4.99 seconds and I have logs to prove it" ;) ). If you are going to use this board, then I suggest you plan on doing something above and beyond the basic vision challenge of tracking an object by color alone or determining if a goal is simply hot/cold. Granted, I'm a bit of an ambitious dreamer and not always a realist, but my students keep surprising me.


To expand on this:

There are a number of ways of doing object recognition; the most common method is thresholding based on color: classifying every pixel into (usually) two groups, foreground and background. At its roots it is an optimization problem. Pictorial representation (http://what-when-how.com/wp-content/uploads/2011/06/tmp2F47_thumb1.jpg)
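
In OpenCV terms, that whole method is only a couple of lines (untested sketch; the HSV bounds are made-up placeholders you would tune per target):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

cv::Mat colorThreshold(const cv::Mat& bgr)
{
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, CV_BGR2HSV);
    // keep pixels whose hue/saturation/value fall inside the (placeholder) bounds
    cv::inRange(hsv, cv::Scalar(100, 150, 50), cv::Scalar(130, 255, 255), mask);
    return mask;  // 255 = foreground, 0 = background
}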

This method usually works for game piece detection as well as target detection. A problem occurs when you threshold by color for bumpers. Take the 2014 game for example: the balls were blue and red, and the bumpers are blue and red. There is not a strict color requirement for bumpers, however. Yes, they have to be red and blue, but they can be different shades.

So this leaves a program that learns what a bumper is. There are a few ways to do this, but all are very computationally intensive. Facial recognition programs use these types of algorithms. One such algorithm is the Haar cascade. What this requires from you is to take as many pictures of bumpers as possible, then train your program on the data set. To get the best results, you'd have to go around at competition and take as many pictures as possible of every robot. Then you have to train the program, which commonly takes several hours.
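
For what it's worth, the detection side of a trained cascade is short; the training is where all the time goes. A sketch (untested; "bumpers.xml" is a hypothetical classifier you'd have to train yourself):

#include <opencv2/core/core.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <vector>

std::vector<cv::Rect> findBumpers(const cv::Mat& gray)
{
    static cv::CascadeClassifier cascade("bumpers.xml");  // hypothetical trained cascade
    std::vector<cv::Rect> hits;
    cascade.detectMultiScale(gray, hits, 1.1, 3);         // scale factor, min neighbors
    return hits;
}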

I personally believe that there needs to be an objective (aerial) camera in order for the level of autonomous play to increase.

End of irrelevant rant.

RyanShoff
25-10-2014, 00:00
I just got the OpenNI Kinect drivers and the Point Cloud Library to compile and work on one of these. It might be possible to do some cool on-robot stuff with this setup. I'm just starting to explore it.

NotInControl
03-11-2014, 15:07
So I finally spent some time over the weekend unboxing my Jetson and getting it set up. It's been sitting on my shelf for the past month and a half.

My initial plan is to take the same C++ vision binary (using OpenCV) we used for our BeagleBone in the 2014 season and run a comparison test between the BeagleBone, Jetson, and RoboRIO.

Well, as of right now I don't have anything to show. I am going the much harder route of writing up cross-compiler instructions, and unfortunately the BeagleBone White uses soft-float instructions, and the Jetson is showing incompatibility issues with that architecture (because the Jetson is a hard-float target).

Re-compiling with hard float works for simple projects (like hello world), but the binutils tools for hard float seem to have a bunch of bugs. I am working through them one by one. The linker currently crashes when I try to cross-compile the version of OpenCV I have on my machine (2.4.6) to use VFP. (With soft float, OpenCV compiles perfectly.)

I am trying to avoid compiling directly on the Jetson (for now), just so I can put together instructions for setting up a cross-compiler. (But in the interest of time, I can always cheat: compile OpenCV on the Jetson to get hard-float versions of the libraries, and transfer the shared libraries back to my desktop just to get some benchmarking done.)

I will make sure all three are using the same source code and OpenCV version. It looks like my wish of using the same binary won't work out, because at a minimum I would have to re-compile the binary to use VFP on the Jetson, and eventually the GPU (but I expected that already).

I do have the same binary from the BeagleBone running directly on the RoboRIO, but do not have comparison numbers as of yet. I will get those posted as soon as I can.

Regards,
Kevin

marshall
03-11-2014, 15:27
I am trying to avoid compiling directly on the Jetson (for now), just so I can put together instructions for setting up a cross-compiler...

Aside from speed, what's the concern with compiling directly on the Jetson? Just curious.

We've got our students writing code and compiling on them so that's why I am asking. It just made sense for us given the number of students and limited workstations. It was easier for us to just use the boards.

NotInControl
03-11-2014, 15:45
Aside from speed, what's the concern with compiling directly on the Jetson? Just curious.

We've got our students writing code and compiling on them so that's why I am asking. It just made sense for us given the number of students and limited workstations. It was easier for us to just use the boards.

No real concern other than what works for us. We typically have 1 or 2 development boards and many desktops/laptops. We need to be able to decouple ourselves from the embedded device so that we can be more productive.

I understand everyone can SSH into the board and have their own session, but that is slow, and it's hard for us because our development boards typically stay at the school (where we don't have remote access through the school's firewall). With cross-compiler tools set up, I can give my students homework where they write code at home, build it, and push it to GitHub, and then we test it on the board later - that saves us a lot of time.

We only had 2 BeagleBones this past season, one team-owned and one mentor-owned, so it was really important for us to be able to develop off the target. Right now we only have one Jetson, which is mentor-owned. If we get things rolling on this, I will probably just donate it to my team, but it still means having one dev board and multiple programmers.

Plus, I haven't come across any really clear tutorials for working with the Jetson in a cross-compiled environment, so I decided to tackle the challenge. Not sure how smart this was just yet, lol.

marshall
03-11-2014, 15:50
No real concern other than what works for us. We typically have 1 or 2 development boards and many desktops/laptops. We need to be able to decouple ourselves from the embedded device so that we can be more productive...

Rock on. Keep us posted on how you end up doing with it. I toyed with NVIDIA's cross-compiling tools, but I just got frustrated with them. I'm more of an admin and less of a programmer, though, so YMMV.

RyanShoff
03-11-2014, 15:56
My initial plan is to take the same c++ vision binary (using OpenCV) we used for our BeagleBone in the 2014 season, and run a comparison test between the beaglebone, Jetson, and RoboRio.


I expect the GPU-accelerated version of OpenCV from the NVIDIA website will make a big difference. You might want to benchmark both. In PCL, I saw a 10x increase in framerate on some of the PCL samples.

That might also complicate your cross compilation issues.

NotInControl
03-11-2014, 16:55
Rock on. Keep us posted on how you end up doing with it. I toyed with NVIDIA's cross-compiling tools, but I just got frustrated with them. I'm more of an admin and less of a programmer, though, so YMMV.

Which NVIDIA tools are you referring to? Have a link?

marshall
03-11-2014, 17:13
Which NVIDIA tools are you referring to? Have a link?

This stuff: http://devblogs.nvidia.com/parallelforall/nvidia-nsight-eclipse-edition-for-jetson-tk1/

NotInControl
06-11-2014, 15:33
So just an update:

I finally got the cross-compiler for the Jetson working. I cross-compiled OpenCV for the Jetson's ARM hard-float processor, and I now have the same vision code we used last year on a BeagleBone running on the RoboRIO and the Jetson.

Setting up the cross-compiler in Eclipse this time around was a bit of a nightmare, because I was using an older version of OpenCV for the Bone (we wrote that code back in January 2014) that depended on old versions of FFmpeg and GTK, as well as libc version 2.17.

Once I got hold of those old libraries, recompiled them for armhf, and fixed over 100 broken symlinks, the cross-compiler was working.

I am running Ubuntu 12.04 on a Dell Latitude for my development. The cross-compiler I am using is arm-linux-gnueabihf-g++ version 4.6.3.

So far the OpenCV I cross-compiled and have running on the Jetson just has support for NEON, FFmpeg, and GTK, as well as JPEG and Python bindings (although I don't use them). It does not have support for CUDA yet.

After I run my benchmark tests using the binaries we ran on the BeagleBone last season, I will upgrade to the latest versions of OpenCV, FFmpeg, and GTK and incorporate CUDA; how I set up that cross-compiler in Eclipse will be what I release.

Now that I have this working on all 3 of my test platforms, I will be publishing initial test results sometime this weekend, and then follow up with a GPU benchmark later on.

If anyone has any specific questions about how the BeagleBone White, RoboRIO, and Jetson compare, please let me know and I'll see what I can do.

Also look for a complete how-to on setting up the cross-compiler in Eclipse with CUDA support. (This should be a lot easier, because all I should really need to do is install the official OpenCV for Tegra released by NVIDIA with CUDA support on the Jetson, transfer those binaries to my laptop, and afterwards install the CUDA SDK on my laptop. Hopefully I can get to this by next week.)

Regards,
Kevin

P.S. I only had to recompile my code for armhf to run on the Jetson; the same binaries and shared libraries I had on the BeagleBone (ARM soft-float) ran directly on the RoboRIO without any recompilation (just symlink fixing). So if you currently use a BeagleBone and want to port your code to the RoboRIO, it's a no-brainer.

This stuff: http://devblogs.nvidia.com/parallelforall/nvidia-nsight-eclipse-edition-for-jetson-tk1/

Thanks for the link. I remember coming across that post before and immediately dismissing it because I didn't want to have to install another IDE; I want the cross-compiler to live in the same Eclipse that I use for everything else. It's weird that I can't find an NVIDIA board support package for the Jetson, but it's one of those things where they released the hardware to the community without completing all the support documentation, which I can appreciate as a developer.

NotInControl
04-12-2014, 01:26
So I know I said I would be posting a comparison of the Jetson's performance against a couple of different devices, including the RoboRIO, and I will; I just keep getting pulled into more pressing matters.

I have a ton of data, and most of my testing is done. I just need to sift through it.

Here is a draft of the stuff I have documented so far:

http://khengineering.github.io/RoboRio/vision/cameratest/

More data will be posted very shortly (i.e., in a few days). I am also in the process of rewriting our vision code to make use of the Jetson GPU. All tests so far were CPU vs. CPU. That should be done in about 2 weeks' time.

Regards,
Kevin

NotInControl
09-12-2014, 22:03
All,

We have added a few more updates to our performance analysis. So far, based on our testing, the Tegra TK1 is capable of processing 640x480 images at well over 30 frames per second without any lag, just using OpenCV on the CPU. There is a lot of CPU headroom left.

We need to perform additional tests on the RoboRIO. I remember one test where we were able to run 320x240 at 30 frames per second without any noticeable lag with X11 forwarding enabled, but the data for other frame rates do not support that conclusion. We are doubling back here and re-running our tests to ensure accuracy. We also need to make sure that all cores are being used on the RIO.

We can safely conclude, however, that under our test conditions the RoboRIO cannot process 640x480 images at 10fps or higher without experiencing noticeable lag. We are still trying to determine at what framerate we can achieve lag-free 640x480 processing on the RoboRIO. Our baseline test suggests 8fps, but we have not run a performance test to confirm.
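
For anyone wanting to reproduce the numbers: the per-frame timing itself can be as simple as OpenCV's tick counter (a sketch, not necessarily our exact harness):

#include <opencv2/core/core.hpp>
#include <cstdio>

static double seconds()
{
    return (double)cv::getTickCount() / cv::getTickFrequency();
}

// inside the processing loop:
//   double t0 = seconds();
//   ... grab and process one frame ...
//   std::printf("frame took %.1f ms\n", (seconds() - t0) * 1000.0);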

We have yet to post any processing results from the BeagleBone Black, so look for those soon.

The URL where we are documenting these tests is here: http://khengineering.github.io/RoboRio/vision/cameratest/

If you have any questions about our test methods, or conclusions, please let me know.

Mr. Lim
11-12-2014, 09:14
I'm really interested in the various Jetson TK1 trials teams are doing right now. I probably should've gotten one some time ago.

A few questions for anyone with one of these units:

1) How quickly does it boot up once powered on?

2) Does anything become corrupted if you repeatedly hard power on/off in the middle of operation?

3) Has anyone tried wiring it directly to unregulated 12V on the PDP, and driven a robot hard to see if it browns-out or powers off?

marshall
11-12-2014, 09:58
I'm really interested in the various Jetson TK1 trials teams are doing right now. I probably should've gotten one some time ago.

A few questions for anyone with one of these units:

1) How quickly does it boot up once powered on?

2) Does anything become corrupted if you repeatedly hard power on/off in the middle of operation?

3) Has anyone tried wiring it directly to unregulated 12V on the PDP, and driven a robot hard to see if it browns-out or powers off?

Answers

1) Fast. Less than 20 seconds. Can be tweaked to go even faster.

2) We haven't seen anything become corrupted, but as I have said previously in this thread and others, it's a Linux system. Rebooting it uncleanly over and over is going to cause some pain with fsck at some point, so just be mindful and take the necessary steps to avoid it.

3) Not yet. We will be doing that soon. I would recommend a regulator. For this year, the VRM has some 2A outputs it could be plugged into and should be fine, assuming the rules allow for that.

EDIT: My one new comment, after a recent discussion and some more benchmarking and other nonsense: not all CUDA cores are created equal over at NVIDIA. The CUDA cores on the Tegras are not the CUDA cores on the graphics card in your super-awesome gaming rig. The bottom line is that the extra horsepower is not an excuse for sloppy coding; this is still an embedded system, so efficient code is key. Also, memory management between the CPU and GPU has proven to be tricky.
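
One Tegra-specific thing worth experimenting with (an untested sketch using the CUDA runtime API): the TK1's CPU and GPU share the same physical DRAM, so mapped "zero-copy" host memory can sometimes avoid the explicit image swap entirely:

#include <cuda_runtime.h>
#include <cstddef>

int main()
{
    const size_t bytes = 640 * 480;          // e.g. one 8-bit grayscale frame
    cudaSetDeviceFlags(cudaDeviceMapHost);   // must run before the CUDA context exists
    unsigned char* host = 0;
    cudaHostAlloc((void**)&host, bytes, cudaHostAllocMapped);  // page-locked, GPU-visible
    unsigned char* dev = 0;
    cudaHostGetDevicePointer((void**)&dev, host, 0);  // device-side alias of `host`
    // Kernels launched with `dev` see whatever the CPU wrote to `host`,
    // with no cudaMemcpy; on the TK1 both names point at the same DRAM.
    cudaFreeHost(host);
    return 0;
}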

marshall
16-12-2014, 08:59
First successful test getting data from the Jetson to the RoboRIO and controlling the robot. 900 HQ was a happy place last night.

http://www.chiefdelphi.com/media/img/359/359f280022c8196808b28d33a44796fc_m.jpg
Full Size Image Here (http://www.chiefdelphi.com/media/img/359/359f280022c8196808b28d33a44796fc_l.jpg)

Caboose
02-01-2015, 23:48
To steal a bit of thunder from Marshall, I just made a thread with links to our code on GitHub. THREAD: Team 900 - nVIDIA Jetson TK1 OpenCV Co-Processor (http://www.chiefdelphi.com/forums/showthread.php?p=1419471#1)

Caboose
28-04-2015, 15:10
Hey all,

I just posted links to 900's code, including vision, over here (http://www.chiefdelphi.com/forums/showthread.php?p=1477930). Ask questions if you have any there please.

sparkytwd
28-05-2015, 12:45
For those interested in continuing with TK1 development, I've gotten Ubuntu working on the Acer CB5-311 notebook, this one (http://smile.amazon.com/dp/B00MHX6V88).

Using this script (https://www.dropbox.com/s/zeb24i0jm27go6k/chrubuntu-install-21.3.sh?dl=0) and the regular chrubuntu instructions it was pretty straight forward to get up and running.

Got the CUDA examples building and running locally. It's nice having a portable development machine so students can work on the same platform that runs on the robot.

Quick Edit: This was entered on a CB5-311

faust1706
28-05-2015, 14:03
Are you going to make a script to install Caffe with cuDNN support? (If not, I'll write one up this weekend, or at the least a step-by-step guide.) I feel Caffe + CUDA + cuDNN is a more valuable and different application of CUDA than CUDA-based OpenCV.

My argument: while OpenCV is great, teams have just about exhausted its real-time uses. Even with what 900 did, they were getting 15 fps. It's time to move on if we wish to advance what we are doing. The easiest way to do that, I argue, is to switch entirely to a library that is more encompassing.

marshall
28-05-2015, 14:07
For those interested in continuing with TK1 development, I've gotten Ubuntu working on the Acer CB5-311 notebook...

Way cool! I'm happy to know that laptop works. Having something with a battery onboard the robot solves some logistical problems. Good stuff!

sparkytwd
28-05-2015, 14:18
Are you going to make a script to install Caffe with cuDNN support? (If not, I'll write one up this weekend, or at the least a step-by-step guide.) I feel Caffe + CUDA + cuDNN is a more valuable and different application of CUDA than CUDA-based OpenCV.

My argument: while OpenCV is great, teams have just about exhausted its real-time uses. Even with what 900 did, they were getting 15 fps. It's time to move on if we wish to advance what we are doing. The easiest way to do that, I argue, is to switch entirely to a library that is more encompassing.

I installed the CUDA libraries using the Jetson instructions. I think an automatic script would be a great idea.

The biggest issue for teams with the neural network stuff is going to be collecting good training data and building a useful model. Ideally you'd have targets for recognition in situ, but practice fields are usually unavailable until later in the season.

I wouldn't take a single implementation as setting the bar for what's possible. Even setting aside the CUDA cores, four 2GHz ARMv7 cores are quite capable.

sparkytwd
28-05-2015, 14:27
Way cool! I'm happy to know that laptop works. Having something with a battery onboard the robot solves some logistical problems. Good stuff!

I picked up the laptop due to reliability issues we had this year. We picked up 3 at the start of the year and are down to 1 working reliably. One of them regularly won't get past the bootloader, but if you sit on it with a serial console and spam reset, it will eventually start.

The other refuses to power up at all. I suspect the first case was due to rough handling while building a case for it; the second, due to a miscommunication, was connected to VBatt, not VReg (12V).

The problem with the laptop form factor is the weight. I feel that with a good case, and sufficient QA to make sure the device is connected to the regulated 12V supply, this will be a reliable system for next year.

I'm also working on a UPS that would conform to this year's regulations and give about 30 seconds of power to safely shut down a co-processor. That being said, in the past 3 years we haven't had issues with sudden power removal impacting the coprocessors.

faust1706
28-05-2015, 14:43
I wouldn't take a single implementation as setting the bar for what's possible. Even setting aside the CUDA cores, four 2GHz ARMv7 cores are quite capable.

To my understanding, 900 was the first team to implement a complete machine-learning-based vision solution. OpenCV is not regarded as a machine (deep) learning library. It seems only natural to switch to a library that at least has an emphasis on this, instead of treating it as an afterthought.

Their implementation, cascade training, is an extremely light version of machine learning by comparison, and they were getting 15 fps. Unless teams are going to start putting entire computers* on their robots, struggling to reliably power them off the PDB and dedicating that much space, something has to change. Cost must also be considered for a computer: between a motherboard, memory, CPU, and GPU, it adds up fast.

You could always off-board everything, but then you're limited by the bandwidth cap.

*In 2012, 1706 did have an entire computer on their robot. It had 8 GB of RAM and an i5, and it ran Ubuntu. We were averaging 20 fps (though we were doing a real-time pose calculation, so that's actually really good, all things considered). I personally don't recommend it unless absolutely needed.

marshall
28-05-2015, 15:50
I picked up the laptop due to reliability issues we had this year. We picked up 3 at the start of the year and are down to 1 working reliably...

Rock on! Keep us posted. I'm all for opening up options like this for teams. We're about a day or two away from getting our white paper out about what we worked on this year. Nothing earth-shattering, but we want to share it and make this stuff a little more accessible.

ForeverAlon
28-05-2015, 20:19
Here is a link to team 900's vision whitepaper: http://www.chiefdelphi.com/forums/showthread.php?p=1484741

Caboose
27-11-2015, 12:19
FYI, the Jetson TX1 was recently released.

dusty_nv
09-01-2016, 13:11
The Jetson TK1 is included in the Kit of Parts again this year for FIRST 2016, and in addition, the new 1 TFLOP+ Jetson TX1 is available to FIRST teams at a discount: http://www.chiefdelphi.com/forums/showthread.php?t=141133

sanelss
01-02-2016, 22:53
Anyone else trying to use a ZED camera with a TK1? We got it working, but it's not looking promising.

First off, you CAN'T use any sort of hub with the ZED on USB 3. I've tried two different USB 3 hubs, and it's an issue regardless; if you use a hub with other peripherals, the image corruption makes it completely unusable. I had to buy a mini PCIe USB module to hook the keyboard and mouse up to, so the ZED is the only device on the USB 3 port. Even with that, there is still some occasional image corruption, but at least it's usable.

The other issue: the depth viewer example only gets 5-8 fps... that's abysmally slow. Even other examples without as much heavy lifting don't get much better than that...

Turing'sEgo
02-02-2016, 02:41
I know this probably isn't something you want to hear, but...

OpenCV has stereo camera functions in it. They even have GPU support.

One can fairly easily recreate the fancy ZED camera with two $10 webcams. If you want 1080p, you'll have to spend more, but there is no reason you need 1080p for FRC; 480p is plenty.
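
The GPU stereo path in OpenCV 2.4 is only a few lines. A sketch (untested; it assumes the two cameras are already calibrated, rectified, and frame-synced, which is the genuinely hard part):

#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

cv::Mat disparityFrom(const cv::Mat& leftGray, const cv::Mat& rightGray)
{
    // block-matching stereo on the GPU; 64 disparities, 19x19 window
    static cv::gpu::StereoBM_GPU bm(cv::gpu::StereoBM_GPU::BASIC_PRESET, 64, 19);
    cv::gpu::GpuMat d_left(leftGray), d_right(rightGray), d_disp;
    bm(d_left, d_right, d_disp);   // inputs must be single-channel 8-bit
    cv::Mat disp;
    d_disp.download(disp);
    return disp;
}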

sanelss
02-02-2016, 07:28
I know this probably isn't something you want to hear, but...

OpenCV has stereo camera functions in it. They even have GPU support.

One can fairly easily recreate the fancy ZED camera with two $10 webcams. If you want 1080p, you'll have to spend more, but there is no reason you need 1080p for FRC; 480p is plenty.


I wasn't aware of that at the time, but even so, the ZED comes in a nice, easy-to-mount package; it's pre-calibrated, high-res, and high-fps, and it comes with examples and support. For some that may not be worth it, but we felt it was.

It's just a matter of getting it to work properly. I'm sure even with 2 regular cameras, stereo vision is still going to bog it down majorly anyway.

KJaget
02-02-2016, 08:25
Anyone else trying to use a ZED camera with a TK1? We got it working, but it's not looking promising...

Have you set the CPU and GPU clocks to max? See http://elinux.org/Jetson/Performance.

Have you reduced the resolution, fps, and depth reconstruction quality of the ZED processing? Take a look at the ZED camera constructor and init calls to adjust those parameters.

marshall
02-02-2016, 08:42
OpenCV has stereo camera functions in it. They even have GPU support...

The ZED is a fancy camera, but it was born for this kind of stuff, unlike the Kinect, the RealSense cameras, and even the existing OpenCV libraries, which can't easily sync frames between cameras.

That being said, if you are going to go the OpenCV route, I would suggest looking at the PlayStation Eye cameras. You can also hack them to sync frames. They are very inexpensive, and the frame rates can be jacked up to where they'll work great. I do recommend a reliable USB hub between those and the Jetson, though.

RyanShoff
02-02-2016, 10:31
Our ZED just arrived yesterday. We are still waiting on a TK1; I think it has shipped.

dbbones
12-02-2016, 12:42
Hello jetsons...

Team 4915 is using a TK1 this year (for the first time), and we wondered if anyone has thoughts on best practices for keeping vision software running during competition.

The goal is to ensure that our attached USB camera and associated vision services keep running across power-down/restart events.

Currently we're hoping that installing our software as an init.d service will just work, but we're encountering issues that may be related to intermittent network connectivity (during the reboot sequence). We've also seen some threads suggesting that USB power and auto-login may be contributing.

Our current workaround is to SSH in and restart our services, but this seems fragile in a real competition setting.

Any comments are greatly appreciated!

Thanks!

Dana Batali, Spartronics 4915 mentor

dbbones
20-02-2016, 15:29
For future jetsonians, here's where we've gotten on this. No claims are made that this is the best approach, but it does seem to work for us (so far).

1. We built and installed a custom mjpg-streamer webserver on the Jetson to serve MJPG streams to the driver station. This boots just fine as a standard init.d script (a rough sketch follows below).

2. Our custom-built mjpg-streamer HTTP server has a trivial extension that allows a remote web browser to control parameters of our vision algorithm. It just launches a shell script, which can be modified as desired. We haven't gotten any fancier than this, but we could imagine going further into CGI land or beyond; that would require more webserver hacking and doesn't seem justified at this point.
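
Here's roughly what the init.d wrapper from step 1 looks like. This is a stripped-down sketch; the paths, the /tmp/vision folder, and the mjpg-streamer plugin options are illustrative, not our exact script:

#!/bin/sh
# /etc/init.d/vision-stream -- start/stop the mjpg-streamer webserver
case "$1" in
  start)
    # Serve whatever JPEGs the vision service drops into /tmp/vision on port 80
    /usr/local/bin/mjpg_streamer \
        -i "input_file.so -f /tmp/vision" \
        -o "output_http.so -p 80 -w /usr/local/share/mjpg-streamer/www" &
    ;;
  stop)
    killall mjpg_streamer
    ;;
esac
exit 0

It gets registered to run at boot with "update-rc.d vision-stream defaults".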

The result: after powering on the robot, the webserver is available on port 80 on the Jetson. (We enabled mDNS services on Ubuntu via avahi-set-host-name.) Now we can launch our imaging service, and it drops occasional images for delivery by mjpg-streamer to the SmartDashboard on the driver station.

Issues: if we suffer a brownout on the robot, the Jetson powers off and requires a physical button press to recover. Anyone have any thoughts on that issue?

Finally: to further protect the Jetson, we invested in a power-conditioning device. There is some discussion of this issue in the TX1 thread, but here's the one we landed on, FWIW:

http://www.amazon.com/gp/product/B011KLQNRG

lethc
20-02-2016, 18:43
For future jetsonians, here's where we've gotten on this. No claims are made that this is the best approach, but it does seem to work for us (so far).

1. We built and installed a custom mjpg-streamer webserver on the Jetson to serve MJPG streams to the driver station. This boots just fine as a standard init.d script.

2. Our custom-built mjpg-streamer HTTP server has a trivial extension that allows a remote web browser to control parameters of our vision algorithm. It just launches a shell script, which can be modified as desired. We haven't gotten any fancier than this, but we could imagine going further into CGI land or beyond; that would require more webserver hacking and doesn't seem justified at this point.

The result: after powering on the robot, the webserver is available on port 80 on the Jetson. (We enabled mDNS services on Ubuntu via avahi-set-host-name.) Now we can launch our imaging service, and it drops occasional images for delivery by mjpg-streamer to the SmartDashboard on the driver station.

Issues: if we suffer a brownout on the robot, the Jetson powers off and requires a physical button press to recover. Anyone have any thoughts on that issue?

Finally: to further protect the Jetson, we invested in a power-conditioning device. There is some discussion of this issue in the TX1 thread, but here's the one we landed on, FWIW:

http://www.amazon.com/gp/product/B011KLQNRG

We are using the Jetson TK1 as well, and we also had the issue of needing a physical button press to start it at times. We did some research and found this (https://devtalk.nvidia.com/default/topic/787172/power-on-issues-with-jetson-tk1/). Essentially, you need to remove the C6D4 capacitor from the board to get it to always power on when it's receiving power. We did this with pliers.

Would you be willing to share your code? Your webserver implementation sounds awesome.

sparkytwd
23-03-2016, 11:39
For video streaming this year, we're using GStreamer with NVIDIA's hardware-accelerated H.264 encoder. On the Jetson we run this:

gst-launch -v -e v4l2src device=/dev/video0 -v ! 'video/x-raw-yuv,width=320,height=240,framerate=30/1' ! ffmpegcolorspace ! nv_omx_h264enc bitrate=300000 low-latency=true framerate=30 ! 'video/x-h264,width=424,height=240,framerate=30/1' ! rtph264pay pt=96 ! udpsink host=drivestation.local port=5805 -v
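
For anyone new to GStreamer, here's a rough element-by-element reading of that pipeline:

v4l2src device=/dev/video0                 # capture frames from the USB camera
video/x-raw-yuv,width=320,height=240,...   # caps filter: request raw 320x240 @ 30 fps
ffmpegcolorspace                           # convert into a colorspace the encoder accepts
nv_omx_h264enc bitrate=300000 ...          # NVIDIA's hardware H.264 encoder, ~300 kbps
video/x-h264,...                           # caps on the compressed output
rtph264pay pt=96                           # packetize the H.264 stream into RTP
udpsink host=... port=5805                 # ship the RTP packets over UDP to the drive station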

That spits out the 'caps = ...' line you'll need for the receiving udpsrc.

Something like this: /GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)\"Z0JAKJWgbH+XQA\\=\\=\\,aM48gA\\=\\=\", payload=(int)96, ssrc=(uint)2314783494, clock-base=(uint)4227592485, seqnum-base=(uint)43060

You need the part after 'caps =', though it doesn't seem to need the sprop-parameter-sets field, which causes escaping problems on Windows anyway.

And on the drive station this:

gst-launch-1.0 -vvv udpsrc port=5805 ! $CAPS ! rtph264depay ! avdec_h264 ! d3dvideosink sync=false
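
For example, plugging in the sample caps from above with sprop-parameter-sets stripped out, the full receive command ends up something like this (the values should come from your own caps line; they will differ per run):

gst-launch-1.0 -vvv udpsrc port=5805 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! avdec_h264 ! d3dvideosink sync=false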

Make sure to disable automatic exposure on the camera to keep it at a steady 30 fps.
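
On a typical UVC camera that can be done with v4l2-ctl; control names and value ranges vary by camera, so list yours first:

# Show the camera's controls and their current values
v4l2-ctl -d /dev/video0 -l
# exposure_auto=1 is manual mode on most UVC cameras; then fix the exposure
v4l2-ctl -d /dev/video0 -c exposure_auto=1 -c exposure_absolute=100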

It's robust: streaming comes back up even if the Tegra reboots or the stream is closed on the drive station. Latency is the lowest we've seen from any solution, quality is good, and 300 kbps keeps network overhead low.

marshall
23-03-2016, 11:40
For video streaming this year, we're using GStreamer with NVIDIA's hardware-accelerated H.264 encoder. On the Jetson we run this:

gst-launch -v -e v4l2src device=/dev/video0 -v ! 'video/x-raw-yuv,width=320,height=240,framerate=30/1' ! ffmpegcolorspace ! nv_omx_h264enc bitrate=300000 low-latency=true framerate=30 ! 'video/x-h264,width=424,height=240,framerate=30/1' ! rtph264pay pt=96 ! udpsink host=drivestation.local port=5805 -v

That spits out the 'caps = ...' line you'll need for the receiving udpsrc.

Something like this: /GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)\"Z0JAKJWgbH+XQA\\=\\=\\,aM48gA\\=\\=\", payload=(int)96, ssrc=(uint)2314783494, clock-base=(uint)4227592485, seqnum-base=(uint)43060

You need the part after 'caps =', though it doesn't seem to need the sprop-parameter-sets field, which causes escaping problems on Windows anyway.

And on the drive station this:

gst-launch-1.0 -vvv udpsrc port=5805 ! $CAPS ! rtph264depay ! avdec_h264 ! d3dvideosink sync=false

Make sure to disable automatic exposure on the camera to keep it at a steady 30 fps.

It's robust: streaming comes back up even if the Tegra reboots or the stream is closed on the drive station. Latency is the lowest we've seen from any solution, quality is good, and 300 kbps keeps network overhead low.

Nicely done!

dusty_nv
24-03-2016, 09:16
***Attention Silicon Valley FRC and FTC Teams!***
NVIDIA will be hosting its first-ever FIRST Day at our GPU Technology Conference (GTC) (http://www.gputechconf.com/) on Thursday, April 7th from 10am – 2:30pm at the San Jose Convention Center. Just a few miles from the San Jose State Event Center, GTC is the largest event of the year for developers like you working at the forefront of visual computing and A.I. — you know, the kind of computing that helps robots connect, see, think and learn. April 7th is a practice day at the San Jose Event Center, and we are hoping some of you can join us for this FIRST Day at GTC. Students must be accompanied by an adult, and entrance to the event is provided free of charge.

We’ve got some special events lined up just for FIRST participants:

A Hands-on Lab that includes Deep Learning and Getting Started on NVIDIA Jetson
This will be a practical hands-on session on deep learning and Caffe with GPU acceleration. Each station will include a Jetson for detecting and classifying objects in real time from a live camera using neural networks.

To make sure you leave GTC with everything you need to take your robotics to the next level, each hands-on lab station will include a free Jetson TX1 (http://www.nvidia.com/object/jetson-tx1-dev-kit.html) to take home, compliments of NVIDIA.

Tour of the NVIDIA GTC Exposition Hall
• Come see our intelligent, autonomous machine and robot demos, which include everything from submersibles, to drones that help with search and rescue, to ruggedized autonomous vehicles that use deep learning.
• Experience the VR Village, where you can explore the latest advances in VR technologies and learn all about the visualization power they demand. You’ll see VR demos from 3D gaming, to product design, to cinematic experiences and beyond.
• See demos on artificial intelligence, graphics virtualization, accelerated computing, product design, self-driving cars, media & entertainment, and more!

There will also be a presentation on NVIDIA technology and how GPUs have changed the game in everything from gaming to movies to the new frontier in artificial intelligence.

Space is limited, so please sign up here to reserve your spot: http://goo.gl/forms/wU4MFGqcFI
Hope to see you there!

snekiam
24-03-2016, 09:25
***Attention Silicon Valley FRC and FTC Teams!***

To make sure you leave GTC with everything you need to take your robotics to the next level, each hands-on lab station will include a free Jetson TX1 (http://www.nvidia.com/object/jetson-tx1-dev-kit.html) to take home, compliments of NVIDIA.


It's unfortunate that this is only available to California teams, but I appreciate your ongoing support of the FIRST community!

marshall
24-03-2016, 10:05
It's unfortunate that this is only available to California teams, but I appreciate your ongoing support of the FIRST community!

In fairness to NVIDIA, they invited us out for it (and we are a fair bit away from CA), but we've got a competition to attend. Wish we could be there, though. What an amazing partner for FIRST to have, and I can tell you they are an awesome sponsor for our team.

dusty_nv
24-03-2016, 18:58
It's unfortunate that this is only available to California teams, but I appreciate your ongoing support of the FIRST community!

If you're in the area, RSVP (http://goo.gl/forms/wU4MFGqcFI) and stop by!

ctetrick
13-04-2016, 16:37
The ZED is a fancy camera, but it was born for this kind of stuff, unlike the Kinect, the RealSense cameras, and even the existing OpenCV libraries, which can't easily sync frames between cameras.

That being said, if you are going to go the OpenCV route, then I would suggest looking at the PlayStation Eye cameras. You can also hack them to sync frames. They are very inexpensive, and the frame rates can be jacked up to where they'll work great. I do recommend a reliable USB hub for connecting those to the Jetson, though.

Marshall, got any suggestions for a good USB 3.0 hub? Preferably one that is likely to work with the TK1/ZED combo.