Quote:
Originally Posted by KJaget
Two other things we do, each at opposite ends of the spectrum:
1. Build and test on x86 Linux laptops and desktops. With a little bit of extra effort our code is portable, which adds a lot to our ability to work outside lab hours or when the Jetson is being used on the robot to debug other problems (e.g. by our drive team). Included in that is the ability to test using recorded videos or still images, so we don't need the entire robot to test changes. You can't always test everything that way, but you can fix a lot of stuff before moving it over to the Jetson.
2. Export the Jetson X display to a laptop via an ssh tunnel (ssh -Y ubuntu@10.x.y.z to connect; then anything that uses X exports its display back to the Linux system you connected from). This is great for headless debugging when the Jetson is actually on the robot.
|
This past season we developed our algorithms on desktop PCs (running Windows and Visual Studio), then switched to developing and deploying directly on the Jetson from Ubuntu machines later in the season, once it was on the robot. We kept all our code compatible with both platforms, however, and a compile-time switch let us enable or disable GUI windows with sliders, previews, and so on (a rough sketch of that kind of switch is below). Once we got to the point where we needed to deploy to the Jetson, we ran into serious issues with the Nvidia IDE tooling we were using, which essentially left us dead in the water; that's the major learning point I'm trying to remedy. There were definitely a lot of things we could have done better in that workflow had we been a bit smarter!
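To make that concrete, here is a minimal sketch of the sort of compile-time GUI switch I mean, assuming OpenCV; WITH_GUI, the window name, and the slider are placeholders rather than our actual code:

Code:
// Build with -DWITH_GUI on a desktop for sliders and previews;
// leave it off for a headless build on the Jetson.
#include <opencv2/opencv.hpp>

#ifdef WITH_GUI
static int threshold_val = 128;   // tuned live via the slider
#endif

int main() {
#ifdef WITH_GUI
    cv::namedWindow("preview");
    cv::createTrackbar("threshold", "preview", &threshold_val, 255);
#endif
    cv::VideoCapture cap(0);      // could also open a recorded video file
    cv::Mat frame;
    while (cap.read(frame)) {
        // ... processing pipeline goes here ...
#ifdef WITH_GUI
        cv::imshow("preview", frame);
        if (cv::waitKey(1) == 27) break;   // Esc quits the preview loop
#endif
    }
    return 0;
}

The nice part is that the headless build contains no display calls at all, so the exact same pipeline code runs on the robot.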
Quote:
Originally Posted by KJaget
We use cmake to build, and that has support for building CUDA code.
|
I think that is looking like the best option for me. I'm interested to see how the CUDA integration works; I'll be investigating that next (my rough guess at what it might look like is below). Thanks for all the tips! Seeing what others have done successfully makes it a lot easier to figure this stuff out.
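For anyone following along, here is a sketch of a CMakeLists.txt that mixes C++ and CUDA sources using the classic FindCUDA module; the target and file names are made up, and I gather newer CMake (3.8+) can instead declare CUDA as a first-class language via project(... LANGUAGES CXX CUDA):

Code:
cmake_minimum_required(VERSION 2.8)
project(vision)

find_package(CUDA REQUIRED)      # classic FindCUDA module
find_package(OpenCV REQUIRED)

include_directories(${OpenCV_INCLUDE_DIRS})
list(APPEND CUDA_NVCC_FLAGS "-O3")

# cuda_add_executable compiles .cu files with nvcc, the rest with the
# host compiler, and links everything into one target.
cuda_add_executable(vision main.cpp gpu_filter.cu)
target_link_libraries(vision ${OpenCV_LIBS})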
Quote:
Originally Posted by KJaget
This is one of those cases where you need to be sure you're actually speeding things up. That includes making sure that what you're working on is something that's actually slow (i.e. profile it rather than assuming) and making sure that the speed-up will actually matter (i.e. going from 80 to 100 FPS is useless if your camera runs at 30 FPS).
|
IIRC our code was running at somewhere around 10-20 FPS by the end of last season, and sometimes it dropped to more like 5... so I didn't hit the capacity of the camera (although we did hit frame-bandwidth issues with some cameras that would only give us 10 FPS). The algorithm we were using was fundamentally flawed, so I have recently been able to improve it dramatically as a proof of concept.
For profiling, I was actually using a tool that Nvidia packages with their development kit. I think it's primarily targeted at profiling GPU code (it automatically tracks CUDA core utilization, memory transfers, etc.), but by adding calls to your code that label certain events, you can get a very nice breakdown of the time taken by each step of frame processing (a sketch of that labeling is below). That showed me clearly where the slowdown was occurring.
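If it helps anyone else: I believe those event labels are the NVTX markers that ship with the CUDA toolkit (include nvToolsExt.h and link with -lnvToolsExt). Something like this shows up as named ranges on the Visual Profiler timeline; the stage names here are placeholders for an actual pipeline:

Code:
#include <nvToolsExt.h>

void processFrame(/* frame data */) {
    nvtxRangePushA("capture");      // grab a frame from the camera
    // ... capture code ...
    nvtxRangePop();

    nvtxRangePushA("gpu_filter");   // CUDA uploads, kernels, downloads
    // ... GPU processing ...
    nvtxRangePop();

    nvtxRangePushA("detect");       // CPU-side target detection
    // ... detection code ...
    nvtxRangePop();
}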