My team and I are interested in using GRIP for one of our off-season competitions. We have decided that we want to run GRIP on an NVIDIA Jetson TK1. I have looked at the GitHub article about deploying to a co-processor, but it only lists the steps for deploying to the Raspberry Pi 2. Do the same steps apply to the NVIDIA Jetson TK1, or are they different? If the steps are different, can you please explain what they are?
Thanks,
Jonathan Daniel
P.S. I will be at Worlds this year with my team, so we can meet face to face and discuss GRIP on a co-processor there.
We didn’t measure it. However, we found it was difficult to use vision as feedback unless we added delays between movements. If you are using it to generate setpoints and then using gyro/encoder feedback to aim, that should be fine.
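Here’s a rough sketch of that setpoint idea in Python; the image width and field of view are assumed values you’d swap for your own camera’s specs.

```python
# Sketch of the setpoint approach: convert the target's pixel offset
# into an angle once, then aim using gyro feedback alone, so vision
# latency stays out of the control loop.
IMAGE_WIDTH = 640        # pixels (assumed camera resolution)
HORIZONTAL_FOV = 60.0    # degrees (assumed camera field of view)

def heading_setpoint(target_center_x, current_heading):
    """Gyro heading that would point the robot at the target."""
    pixel_offset = target_center_x - IMAGE_WIDTH / 2
    degrees_per_pixel = HORIZONTAL_FOV / IMAGE_WIDTH  # linear approximation
    return current_heading + pixel_offset * degrees_per_pixel
```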
I have no way of knowing why you chose the Jetson over the RPi, but I’m sure you have your reasons.
If I may make a suggestion, you might want to look into using an RPi with OpenCV. Taking that approach, we are able to process frames faster than the camera can deliver them. Would knowing that help you reconsider your choice?
I have no experience with the Jetson myself. I know teams that have been quite successful with them, but I also know a few teams that really struggled. Just food for thought.
It is possible to run OpenCV on the Jetson as well. In my opinion, GRIP on Linux-based on-board processors is a headache; I have only seen a few teams use it successfully.
Hello, my team already owns a Jetson, so we are trying to work with what’s convenient. We also don’t know how to use OpenCV, but we are more comfortable with GRIP. What are the steps to implement it on a Jetson?
My understanding is that GRIP is basically a graphical interface to OpenCV. Please correct me if I’m wrong.
So, once you understand the processing you are doing with GRIP and why, transitioning to OpenCV is fairly easy.
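To give a rough idea of what that transition looks like, here is a minimal OpenCV sketch (Python) of the same HSV-threshold-and-find-contours pipeline GRIP builds for you. The HSV values are placeholders; tune them in GRIP first and copy your numbers over.

```python
# Rough OpenCV equivalent of a basic GRIP pipeline:
# HSV Threshold -> Find Contours.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)             # first USB camera
lower = np.array([60, 100, 100])      # assumed HSV lower bound (green target)
upper = np.array([90, 255, 255])      # assumed HSV upper bound

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)        # GRIP's "HSV Threshold" step
    # [-2] keeps this working across OpenCV versions, whose
    # findContours return values differ
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    for c in contours:                           # GRIP's "Find Contours" step
        x, y, w, h = cv2.boundingRect(c)
        print("target center:", x + w / 2, y + h / 2)
```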
How to get OpenCV onto the Jetson would only be a guess for me.
For what it’s worth, start with basic OpenCV tutorials on the Jetson before strapping it to a robot. Once you have something working that detects your targets, learn how to transfer that data between the Jetson and the roboRIO. Then move on to putting it on the robot.
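For the data-transfer step, NetworkTables is the usual route in FRC. Here is a minimal sketch using pynetworktables; the team number (XXXX) and the table/key names are placeholders you’d replace with your own.

```python
# Minimal sketch of publishing vision results from the Jetson to the
# roboRIO over NetworkTables (pip install pynetworktables).
from networktables import NetworkTables

NetworkTables.initialize(server='roborio-XXXX-frc.local')  # XXXX = team number
table = NetworkTables.getTable('vision')                   # example table name

def publish_target(center_x, center_y, found):
    # The robot code reads these keys each loop and turns them into
    # setpoints, as discussed earlier in the thread.
    table.putBoolean('targetFound', found)
    table.putNumber('centerX', center_x)
    table.putNumber('centerY', center_y)
```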
Also, before you start any of this, ask yourself some basic questions:
What is the end goal for our team to do vision processing?
Can we simplify our vision processing? (Single USB cam straight to the roboRIO?)
What target are we trying to acquire?
What information are we trying to get from the target?
Are there easier ways to get that information?
Once we have the target, how do we get that information into a form that we can use on the robot?
There are a lot of other questions to consider too, but that’s a start.
But hey, vision processing is all black and white to me so what do I know?