Chief Delphi > Technical > Programming
#16
14-02-2015, 15:00
yara92
M.Fawdah Mechanical engineering
AKA: Mohamed
FRC #1946 (Mechka Monster)
Team Role: RoboCoach
 
Join Date: Jan 2007
Rookie Year: 2006
Location: Israel
Posts: 236
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared

Hey everyone,

I have a question about using the Kinect in autonomous, but we can't figure out whether it is supported by the new WPI code (C++).
__________________
TEAM 1946-Tamra
#17
14-02-2015, 15:05
Christopher149
Registered User
FRC #0857 (Superior Roboworks) FTC 10723 (SnowBots)
Team Role: Mentor
 
Join Date: Jan 2011
Rookie Year: 2007
Location: Houghton, MI
Posts: 1,100
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared

Quote:
Originally Posted by yara92 View Post
Hey everyone,

I have a question about using the Kinect in autonomous, but we can't figure out whether it is supported by the new WPI code (C++).
On the robot, or on the driver station? If the driver station, keep in mind rule G21:
Quote:
During AUTO, DRIVE TEAMS must not directly or indirectly interact with ROBOTS or OPERATOR CONSOLES.
VIOLATION: FOUL and YELLOW CARD

FIRST salutes the creative and innovative ways in which Teams have interacted with their ROBOTS during AUTO in previous seasons, making the AUTO period more of a hybrid period due to indirect interaction with the OPERATOR CONSOLE. The RECYCLE RUSH AUTO Period, however, is meant to be truly autonomous and ROBOT or OPERATOR CONSOLE interaction (such as through webcam or Kinect™) are prohibited.
emphasis mine
__________________
2015-present: FTC 10723 mentor
2012-present: 857 mentor
2008-2011: 857 student

2015: Industrial Design, Excellence in Engineering, District Finalist, Archimedes Division (#6 alliance captain)
2014: Judges Award, District Engineering Inspiration, District Finalist, Galileo Division

#18
14-02-2015, 16:31
yara92
M.Fawdah Mechanical engineering
AKA: Mohamed
FRC #1946 (Mechka Monster)
Team Role: RoboCoach
 
Join Date: Jan 2007
Rookie Year: 2006
Location: Israel
Posts: 236
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared

Quote:
Originally Posted by Christopher149 View Post
On the robot, or the driver station? Because if driver station, keep in mind rule G21:

emphasis mine
We mean to use the Kinect as a 3D camera.
__________________
TEAM 1946-Tamra
#19
14-02-2015, 16:37
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared

You have to use the libfreenect library to interface with the Kinect. If you look at our code, cmastudios reduced grabbing the RGB, IR, and depth map from the Kinect to a single line of code each.

https://github.com/rr1706/vision2015/tree/master/lib — the relevant files are free.cpp and free.hpp.

The RGB image is in color, the IR image is grayscale, and the depth map is an interesting type of image: it is grayscale, but each pixel value encodes distance from the camera rather than brightness.
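To give a feel for what those depth pixels mean: on the Kinect v1, raw depth values are 11-bit disparity readings, not distances, so you have to convert them. Here is a minimal sketch using the commonly cited empirical fit (Magnenat's calibration) — the constants are an assumption taken from that public calibration, not from 1706's code:

```cpp
#include <cmath>
#include <cstdint>

// Convert an 11-bit raw Kinect v1 depth value to an approximate distance
// in meters. The constants come from a widely circulated empirical fit
// (Magnenat's calibration) -- treat them as illustrative, not exact.
double rawDepthToMeters(uint16_t raw) {
    if (raw >= 2047) return 0.0;  // 2047 means "no reading" on the Kinect v1
    return 0.1236 * std::tan(raw / 2842.5 + 1.1863);
}
```

A raw value around 500 comes out near 0.6 m, and larger raw values map to larger distances, which matches the "brighter pixel = farther away" look of the grayscale depth image.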
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
#20
25-02-2015, 09:42
RyanShoff
Registered User
FRC #4143 (Mars Wars)
Team Role: Mentor
 
Join Date: Mar 2012
Rookie Year: 2012
Location: Metamora, IL
Posts: 147
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared

Thanks for sharing this. I got your solution working on an Nvidia Jetson board yesterday.
__________________
Ryan Shoff
4143 Mars/Wars
CheapGears.com
#21
25-02-2015, 14:15
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared

I have been stuck on a time-consuming project in the lab, so I haven't had time to configure it for the Jetson board lying on the table next to me. I just stare at it with desire....

How'd it do? On the ODROID we get a manageable frame rate, but it is rather laggy compared to our past vision programs, which were hitting 30 fps.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
#22
25-02-2015, 15:04
RyanShoff
Registered User
FRC #4143 (Mars Wars)
Team Role: Mentor
 
Join Date: Mar 2012
Rookie Year: 2012
Location: Metamora, IL
Posts: 147
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared

Adding up the profile times in robot.log gives me an average frame rate of 3-4 fps. This is with X running and both the color and IR maps displaying on screen. There is a slight lag when you put your hand quickly in front of the camera, but it is probably well under a quarter second. Only two cores are being used.

It is much less laggy than anything I've been able to do with libpcl on the Jetson.

With a little work, I think it could work for autonomous navigation.
__________________
Ryan Shoff
4143 Mars/Wars
CheapGears.com
#23
25-02-2015, 15:25
RyanShoff
Registered User
FRC #4143 (Mars Wars)
Team Role: Mentor
 
Join Date: Mar 2012
Rookie Year: 2012
Location: Metamora, IL
Posts: 147
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared

I misread the profile times in my post above. With image display turned off, I'm now seeing 20 fps.
__________________
Ryan Shoff
4143 Mars/Wars
CheapGears.com
#24
25-02-2015, 16:06
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared

Image display is a rather computationally intensive task, to many people's surprise. I expected a decent jump in fps, but not that much. That is really encouraging to see, actually. Thank you so much for doing this.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
#25
25-02-2015, 16:17
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: 1706 Vision Solution: Tracking totes in depth-map, color, and infrared

Quote:
Originally Posted by RyanShoff View Post
It is much less laggy that anything I've been able to do with libpcl on the jetson.

With a little work, I think it could work for autonomous navigation.
I have never written a vision program with a ton of lag. The most lag I personally witnessed first-hand was with 1706's vision solution in 2014. It used 3 cameras and solved for the robot's position and orientation on the field. It dedicated a full core to processing each camera (so 3 of the 4 cores were used to process images). It had about a half second of display lag; I forget the exact amount. We measured the lag by placing a stopwatch in front of the camera, taking pictures of the vision output alongside the stopwatch, and simply subtracting the two times.

If you want to, go for it. @cmastudios informed me that they are now using vision in autonomous, which is exciting. I wrote MATLAB code that is a basic implementation of A* in 2D. cmastudios converted it to C++, and I then changed his C++ code into a custom path-finding algorithm that takes robot width into consideration. That custom path-finding algorithm is currently being used by a robotics team at MST.

There is a step missing between the vision output and the path-finding input: converting the data structure of the vision output into the data structure that A* can operate on. Usually that is a list of points in a finite, discrete grid that are deemed untraversable (obstacles). You cannot simply pass the centers of all detected objects to A*, because the objects (in this case totes) have a decent amount of width and length.

A big problem with converting from vision to path finding is resolution. Yes, you can mark every obstacle pixel, but then your grid is extremely fine-grained, and path finding is O(n log n) in the number of cells, if I remember correctly.

cmastudios used GCC's optimizer when we were toying with the idea of A* this past summer, and he got a 900x900 grid solved in about 1 ms (I forget the exact time) on a decent laptop.
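The "objects have width and length" step above can be sketched as rasterizing each detection into an occupancy grid before running A*. This is a hypothetical illustration, not 1706's actual code: each tote is marked as a rectangle of blocked cells, inflated by half the robot's width so the planner can then treat the robot as a point.

```cpp
#include <vector>

// Simple occupancy grid for a path planner such as A*.
struct Grid {
    int w, h;
    std::vector<bool> blocked;
    Grid(int w_, int h_) : w(w_), h(h_), blocked(w_ * h_, false) {}
    bool at(int x, int y) const { return blocked[y * w + x]; }
};

// Mark a detected object as untraversable cells.
// cx, cy: object center in grid cells; objW, objL: object footprint in cells;
// robotHalfW: inflation radius in cells (half the robot width).
void markObstacle(Grid& g, int cx, int cy, int objW, int objL, int robotHalfW) {
    int halfW = objW / 2 + robotHalfW;
    int halfL = objL / 2 + robotHalfW;
    for (int y = cy - halfL; y <= cy + halfL; ++y)
        for (int x = cx - halfW; x <= cx + halfW; ++x)
            if (x >= 0 && x < g.w && y >= 0 && y < g.h)
                g.blocked[y * g.w + x] = true;  // clamp to grid bounds
}
```

Inflating obstacles by the robot's half-width is the standard trick that lets a point-robot planner respect the real robot's footprint; grid resolution then becomes the accuracy/speed trade-off discussed above.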
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."