GRIP on NVIDIA Jetson TK1
Hello everyone,
My team and I are interested in using GRIP for one of our off-season competitions. We have decided that we want to run GRIP on an NVIDIA Jetson TK1. I have looked at the GitHub article about deploying to a co-processor, but it only lists the steps for deploying to a Raspberry Pi 2. Do the same steps apply to the NVIDIA Jetson TK1, or are they different? If they are different, could you please explain what those steps are?
Thanks,
Jonathan Daniel
P.S. I will be at Worlds this year with my team, so we can meet face to face and discuss GRIP on a co-processor there.
nickbrickmaster
27-04-2016, 12:51
Be aware that you won't get the full power of the TK1, because GRIP does not take advantage of the GPU.
Anything I can say beyond that is unfounded speculation.
How well did it work for your team?
nickbrickmaster
27-04-2016, 14:45
How well did it work for your team?
We didn't use a TK1 this year. That was just information I picked up on CD. We ran GRIP on the DS laptop.
Was there a significant delay when you ran it on the Driver Station?
nickbrickmaster
27-04-2016, 15:12
Was there a significant delay when you ran it on the Driver Station?
We didn't measure it. However, we found it was difficult to use vision as feedback unless we added delays between movements. If you are using it to generate setpoints and then using gyro/encoder feedback to aim, that should be fine.
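To illustrate that pattern (a rough sketch, not real robot code): sample the camera once to compute a setpoint, then close the loop on the gyro alone. In Python, with the robot-side helpers stubbed out as hypothetical placeholders:

# Rough sketch of "vision sets the setpoint, gyro closes the loop".
# get_vision_angle(), read_gyro(), and drive_turn() are hypothetical
# placeholders for whatever your camera pipeline and drivetrain expose.

def get_vision_angle():
    return 5.0   # placeholder: degrees from current heading to the target

def read_gyro():
    return 0.0   # placeholder: current heading in degrees

def drive_turn(power):
    pass         # placeholder: command a turn at the given power

def capture_setpoint():
    # Sample the camera once; after this, the camera is out of the loop.
    return read_gyro() + get_vision_angle()

def aim_step(setpoint, kP=0.02, tolerance=1.0):
    # One iteration of a simple P loop on the gyro; call this from the
    # robot's periodic loop so camera latency never destabilizes the turn.
    error = setpoint - read_gyro()
    if abs(error) < tolerance:
        drive_turn(0.0)
        return True
    drive_turn(kP * error)
    return False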
billbo911
27-04-2016, 15:15
Hello everyone,
My team and I are interested in using GRIP for one of our off-season competitions. We have decided that we want to run GRIP on an NVIDIA Jetson TK1. I have looked at the GitHub article about deploying to a co-processor, but it only lists the steps for deploying to a Raspberry Pi 2. Do the same steps apply to the NVIDIA Jetson TK1, or are they different? If they are different, could you please explain what those steps are?
Thanks,
Jonathan Daniel
I have no way of knowing why you chose the Jetson over the RPi, but I'm sure you have your reasons.
If I may make a suggestion, you might want to look into using an RPi with OpenCV. With that approach, we are able to process frames faster than the camera can deliver them. Would knowing that help you reconsider your choice?
I have no experience with the Jetson myself. I know teams that have been quite successful with them, but I also know a few that really struggled. Just food for thought.
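For what that approach looks like, here is a minimal sketch assuming Python, an existing OpenCV install, and a USB camera at index 0. The HSV bounds are made-up example values, and the fps printout is there so you can compare processing speed against the camera's delivery rate:

import time
import cv2

cap = cv2.VideoCapture(0)                       # USB camera at index 0; adjust as needed
lower, upper = (60, 100, 100), (90, 255, 255)   # example HSV bounds for a green target

frames, start = 0, time.time()
while frames < 300:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    # [-2] keeps this working across OpenCV versions that return 2 or 3 values
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    frames += 1

print("processed fps:", frames / (time.time() - start))
cap.release()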
axton900
28-04-2016, 05:53
It is possible to run OpenCV on the Jetson as well. In my opinion, GRIP on Linux-based on-board processors seems like a headache; I have only seen a few teams use it successfully.
I have no way of knowing why you chose the Jetson over the RPi, but I'm sure you have your reasons.
If I may make a suggestion, you might want to look into using an RPi with OpenCV. With that approach, we are able to process frames faster than the camera can deliver them. Would knowing that help you reconsider your choice?
I have no experience with the Jetson myself. I know teams that have been quite successful with them, but I also know a few that really struggled. Just food for thought.
Hello, my team already owns a Jetson, so we are trying to go with what is convenient. We also don't know how to use OpenCV and are more comfortable with GRIP. What are the steps to implement it on a Jetson?
billbo911
28-04-2016, 12:37
Hello, my team already owns a Jetson, so we are trying to go with what is convenient. We also don't know how to use OpenCV and are more comfortable with GRIP. What are the steps to implement it on a Jetson?
My understanding is that GRIP is basically a graphical interface to OpenCV. Please correct me if I'm wrong.
So, once you understand the processing you are doing with GRIP and why, transitioning to OpenCV is fairly easy.
How to get OpenCV onto the Jetson would only be a guess for me.
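To make the GRIP-to-OpenCV mapping concrete (an illustrative sketch, not any team's actual pipeline): most GRIP steps correspond to single OpenCV calls. In Python, with made-up threshold and filter values:

import cv2

def grip_style_pipeline(frame):
    # GRIP "Blur" step
    blurred = cv2.blur(frame, (5, 5))
    # GRIP "HSV Threshold" step
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))
    # GRIP "Find Contours" step ([-2] works across OpenCV versions)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    # GRIP "Filter Contours" step (here: minimum area only)
    return [c for c in contours if cv2.contourArea(c) > 100]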
nickbrickmaster
28-04-2016, 13:06
The steps would likely be similar. You may want to follow the Pi tutorial until something doesn't work.
billbo911
28-04-2016, 14:43
Hello, my team already owns a Jetson, so we are trying to go with what is convenient. We also don't know how to use OpenCV and are more comfortable with GRIP. What are the steps to implement it on a Jetson?
Here you go! (http://elinux.org/Jetson/Installing_OpenCV) Instructions on how to install OpenCV on the TK1.
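Once the install finishes, a quick sanity check from Python (just a suggestion, not part of those instructions) is to import cv2 and look at the build information to see whether CUDA support is enabled:

import cv2

print(cv2.__version__)
# Search the build info for the CUDA lines to see if GPU support is compiled in.
info = cv2.getBuildInformation()
print("\n".join(line for line in info.splitlines() if "CUDA" in line))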
RyanShoff
28-04-2016, 16:20
Here you go! (http://elinux.org/Jetson/Installing_OpenCV) Instructions on how to install OpenCV on the TK1.
And then after that, look at https://github.com/FRC-Team-4143/vision-stronghold for one working example of a TK1 and a LifeCam HD-3000.
marshall
28-04-2016, 19:21
For what it's worth, start with basic OpenCV tutorials on the Jetson before strapping it to a robot. Once you have something working to detect your targets, learn how to transfer that data between the Jetson and the RoboRIO. Then move on to putting it on the robot.
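One common route for that Jetson-to-RoboRIO hand-off is NetworkTables. A minimal sketch, assuming pynetworktables is installed on the Jetson; the table name, keys, and values here are arbitrary examples:

from networktables import NetworkTables

# Connect to the RoboRIO; replace XXXX with your team number, or use its IP address.
NetworkTables.initialize(server="roborio-XXXX-frc.local")
table = NetworkTables.getTable("vision")

# Publish whatever your pipeline measured each frame; the robot code reads
# these keys and turns them into setpoints.
table.putNumber("target_x", 123.0)
table.putBoolean("target_found", True)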
Also, before you start any of this, ask yourself some basic questions:
What is the end goal for our team to do vision processing?
Can we simplify our vision processing? (Single USB cam straight to RoboRIO?)
What target are we trying to acquire?
What information are we trying to get from the target?
Are there easier ways to get that information?
Once we have the target, how do we get that information into a form that we can use on the robot?
There are a lot of other questions to consider too but that's a start.
But hey, vision processing is all black and white to me so what do I know? ;)