Quote:
Originally Posted by RyanShoff
It is much less laggy than anything I've been able to do with libpcl on the jetson.
With a little work, I think it could work for autonomous navigation.
I have never written a vision program with a ton of lag. The most lag I personally witnessed firsthand was with 1706's vision solution in 2014. It utilized 3 cameras and solved for the robot's position and orientation on the field, dedicating one core entirely to processing each camera (so 3 of the 4 cores were used to process images). It had roughly half a second of display lag; I forget the exact amount. We tested lag by placing a stopwatch in front of the camera, taking pictures of the vision output alongside the stopwatch, and simply subtracting the two times.
If you want to, go for it. @cmastudios informed me that they are now using vision in autonomous, which is exciting. I wrote MATLAB code that is a basic implementation of A* in 2D. cmastudios converted it to C++, and I then changed his C++ code into a custom path finding algorithm that takes robot width into consideration. That custom path finding algorithm is currently being used by a robotics team at MST.
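For anyone who wants to play with the same idea, here is a rough sketch of what a basic 2D grid A* looks like. This is my own illustration, not the actual MST code; the grid representation (a flat array of blocked/free cells), the 4-connected movement, and the function names are all assumptions.

```cpp
// Rough sketch of A* on a 2D occupancy grid (illustrative, not the team's actual code).
// Grid cells are either free or blocked; movement is 4-connected with unit cost.
#include <vector>
#include <queue>
#include <algorithm>
#include <cstdlib>
#include <cstdint>

struct Cell { int x, y; };

struct Node {
    int idx;      // index into the grid (y * width + x)
    double f;     // g + heuristic
};
struct NodeCmp {  // min-heap ordering on f
    bool operator()(const Node& a, const Node& b) const { return a.f > b.f; }
};

// Returns the path from start to goal as grid cells, or an empty vector if unreachable.
std::vector<Cell> astar(const std::vector<uint8_t>& blocked, int width, int height,
                        Cell start, Cell goal)
{
    auto idx = [&](int x, int y) { return y * width + x; };
    auto h = [&](int x, int y) {           // Manhattan distance heuristic
        return std::abs(x - goal.x) + std::abs(y - goal.y);
    };

    std::vector<double> g(width * height, 1e18);
    std::vector<int> parent(width * height, -1);
    std::priority_queue<Node, std::vector<Node>, NodeCmp> open;

    g[idx(start.x, start.y)] = 0.0;
    open.push({idx(start.x, start.y), (double)h(start.x, start.y)});

    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};

    while (!open.empty()) {
        Node cur = open.top(); open.pop();
        int cx = cur.idx % width, cy = cur.idx / width;
        if (cx == goal.x && cy == goal.y) {
            // Reconstruct the path by walking parent pointers back to the start.
            std::vector<Cell> path;
            for (int i = cur.idx; i != -1; i = parent[i])
                path.push_back({i % width, i / width});
            std::reverse(path.begin(), path.end());
            return path;
        }
        for (int d = 0; d < 4; ++d) {
            int nx = cx + dx[d], ny = cy + dy[d];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            if (blocked[idx(nx, ny)]) continue;      // skip obstacle cells
            double ng = g[cur.idx] + 1.0;
            if (ng < g[idx(nx, ny)]) {               // found a cheaper route to this cell
                g[idx(nx, ny)] = ng;
                parent[idx(nx, ny)] = cur.idx;
                open.push({idx(nx, ny), ng + h(nx, ny)});
            }
        }
    }
    return {};   // no path found
}
```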
There is a step missing between the vision output and the input to path finding: converting the data structure of the vision output into the data structure that A* can operate on. Usually that is simply a list of points in a finite, discrete grid that are deemed untraversable (obstacles). You cannot simply pass the centers of all detected objects to A*, because the objects (in this case totes) have a decent amount of width and length.
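Here is roughly what that conversion step could look like. Everything in it is illustrative: the Detection struct, its fields, and padding by half the robot's width are my assumptions, not anyone's actual code. The point is that you rasterize each object's whole footprint into blocked cells instead of just its center point.

```cpp
// Sketch of the vision-output -> A*-input conversion step (illustrative only).
// Each detection is assumed to give a center and a rough footprint in grid units;
// every cell the object covers is blocked, padded by half the robot's width so the
// planner can treat the robot as a point.
#include <vector>
#include <algorithm>
#include <cstdint>

struct Detection {            // hypothetical vision output, already in grid coordinates
    int cx, cy;               // center cell of the detected tote
    int halfW, halfL;         // half-width and half-length of its footprint, in cells
};

void rasterizeObstacles(const std::vector<Detection>& detections,
                        std::vector<uint8_t>& blocked, int width, int height,
                        int robotHalfWidthCells)
{
    for (const Detection& d : detections) {
        int pad = robotHalfWidthCells;
        int x0 = std::max(0, d.cx - d.halfW - pad);
        int x1 = std::min(width - 1, d.cx + d.halfW + pad);
        int y0 = std::max(0, d.cy - d.halfL - pad);
        int y1 = std::min(height - 1, d.cy + d.halfL + pad);
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                blocked[y * width + x] = 1;   // mark the whole footprint, not just the center
    }
}
```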
A big problem with converting from vision to path finding is precision. Yes, you can return every pixel that is an obstacle, but then your grid becomes enormous, and A* on a grid runs in roughly O(n log n) in the number of cells, if I remember correctly.
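One common way to keep the grid manageable is to bin the pixel-level obstacle mask into coarser cells. This is just a sketch: the cell size is arbitrary, and "blocked if any pixel inside is blocked" is one conservative choice among several.

```cpp
// Sketch of coarsening a pixel-level obstacle mask into a smaller planning grid.
// A coarse cell is blocked if any pixel inside it is an obstacle (conservative).
#include <vector>
#include <cstdint>

std::vector<uint8_t> downsampleMask(const std::vector<uint8_t>& pixelMask,
                                    int pixW, int pixH, int cellSize)
{
    int gridW = (pixW + cellSize - 1) / cellSize;
    int gridH = (pixH + cellSize - 1) / cellSize;
    std::vector<uint8_t> grid(gridW * gridH, 0);
    for (int y = 0; y < pixH; ++y)
        for (int x = 0; x < pixW; ++x)
            if (pixelMask[y * pixW + x])
                grid[(y / cellSize) * gridW + (x / cellSize)] = 1;
    return grid;
}
```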
cma utilized the GNU compiler's optimizer when we were toying with the idea of A* this past summer, and he got a 900x900 grid solved in about 1 ms (I forget the exact time) on a decent laptop.
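If you want to reproduce that kind of number, a minimal timing harness around the astar() sketch above might look like the following. It assumes it is compiled together with that sketch and with optimizations enabled (e.g. g++ -O2); exact timings will of course vary by machine and compiler flags.

```cpp
// Rough benchmark harness for the astar() sketch above on an empty 900x900 grid.
#include <chrono>
#include <cstdio>
#include <vector>
#include <cstdint>

int main() {
    const int W = 900, H = 900;
    std::vector<uint8_t> blocked(W * H, 0);          // no obstacles; path spans the whole grid
    auto t0 = std::chrono::steady_clock::now();
    auto path = astar(blocked, W, H, {0, 0}, {W - 1, H - 1});
    auto t1 = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::printf("path length: %zu cells, solve time: %.2f ms\n", path.size(), ms);
}
```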