#1 | 24-09-2013, 23:28
Invictus3593 ("time you like wasting is not wasted")
FRC #3593 (Team Invictus) | Team Role: Leadership
Join Date: Jan 2013 | Rookie Year: 2010 | Location: Tulsa, OK | Posts: 318
Dual Cameras - Dual Purposes

Our team programmer is writing vision processing software for the upcoming year in C# using AForge, and he's trying to figure out whether there's any way to display a second camera feed on the Dashboard.

We've used LabVIEW for three years. The plan is to use an onboard ITX PC to process each frame from the Kinect's depth sensor and send the data to the cRIO every 100 ms. What we don't know is how to grab the depth feed from the Kinect and display it on our LabVIEW dashboard.


The second feed, the Kinect depth data, should look something like this:

[depth image attachment]

It would be great if we didn't have to change languages to accomplish any of this, but any suggestions or corrections are awesome!

Let us know if you need more information.
#2 | 25-09-2013, 00:35
billbo911 ("I prefer you give a perfect effort." | AKA: That's "Mr. Bill")
FRC #2073 (EagleForce) | Team Role: Mentor
Join Date: Mar 2005 | Rookie Year: 2005 | Location: Elk Grove, Ca. | Posts: 2,372
Re: Dual Cameras - Dual Purposes

Assuming the game this coming year will have some type of "target", maybe you don't need to send the entire image to the cRIO. You might be able to send just the target parameters you want to exploit.
Send the data as a string every 100 ms, then just have the cRIO decode the string and use the information to align with the "target". Let the ITX PC do all the heavy lifting and let the cRIO operate the robot.
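To make that concrete, here is a minimal sketch of the ITX side in C# (the OP's language). The cRIO address, the port, and the comma-separated format are all assumptions for illustration, not any official protocol:

using System;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class TargetSender
{
    static void Main()
    {
        // Hypothetical cRIO address for team 3593; the port is a placeholder,
        // so check which ports the FMS leaves open before picking one.
        using (var udp = new UdpClient("10.35.93.2", 1130))
        {
            while (true)
            {
                // These values would come from the vision pipeline.
                double distanceMeters = GetTargetDistance();
                double offsetDegrees = GetTargetOffset();

                // Simple comma-separated packet the cRIO can split and parse.
                byte[] packet = Encoding.ASCII.GetBytes(
                    string.Format("{0:F2},{1:F2}\n", distanceMeters, offsetDegrees));
                udp.Send(packet, packet.Length);

                Thread.Sleep(100); // the 100 ms cadence suggested above
            }
        }
    }

    // Stand-ins for the actual vision processing.
    static double GetTargetDistance() { return 0.0; }
    static double GetTargetOffset() { return 0.0; }
}

The cRIO side then just reads the string, splits on the comma, and feeds the numbers to the alignment code.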
__________________
CalGames 2009 Autonomous Champion Award winner
Sacramento 2010 Creativity in Design winner, Sacramento 2010 Quarter finalist
2011 Sacramento Finalist, 2011 Madtown Engineering Inspiration Award.
2012 Sacramento Semi-Finals, 2012 Sacramento Innovation in Control Award, 2012 SVR Judges Award.
2012 CalGames Autonomous Challenge Award winner ($$$).
2014 2X Rockwell Automation: Innovation in Control Award (CVR and SAC). Curie Division Gracious Professionalism Award.
2014 Capital City Classic Winner AND Runner Up. Madtown Throwdown: Runner up.
2015 Innovation in Control Award, Sacramento.
2016 Chezy Champs Finalist, 2016 MTTD Finalist
#3 | 25-09-2013, 08:50
Greg McKaskle
FRC #2468 (Team NI & Appreciate)
Join Date: Apr 2008 | Rookie Year: 2008 | Location: Austin, TX | Posts: 4,752
Re: Dual Cameras - Dual Purposes

Sending the image to the cRIO every 100 ms is step one, displaying it on the dashboard is step two, correct?

To send any data you like from any processor on the robot to the cRIO, I'd open up a socket and use TCP or UDP. It sounds like this is already underway.

To send data from any processor on the robot to the DS is pretty much the same, but you need to ensure that you use ports compatible with the FMS setup. The FMS blocks most ports, and a select few are left open for exactly this sort of communication. Again, use TCP or sockets.

The modification to the Dashboard will be very similar to the code on the robot. Read the image from the correct port and IP. Depending on the format of the image, I'd probably try to convert it to an IMAQ image. The WPI functions do this internally for JPEGs, and it is possible to hand over an array of pixel data and have IMAQ use it for the image data. Then you use the normal dashboard image control and write to the terminal. I've done three camera images at once before, and it works fine as long as the laptop can keep up with the decompression overhead.
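As one illustration of the robot-side half (a sketch only; the 4-byte length prefix is a convention you would invent yourself, not anything WPI-defined), framing each JPEG with its length lets the dashboard know where one image ends and the next begins:

using System;
using System.IO;
using System.Net;

static class FrameWriter
{
    // Write one length-prefixed JPEG frame to an open stream.
    // The dashboard-side reader just has to mirror this convention:
    // read 4 bytes, byte-swap to host order, then read that many bytes.
    public static void WriteFrame(Stream stream, byte[] jpegBytes)
    {
        int lengthBigEndian = IPAddress.HostToNetworkOrder(jpegBytes.Length);
        stream.Write(BitConverter.GetBytes(lengthBigEndian), 0, 4);
        stream.Write(jpegBytes, 0, jpegBytes.Length);
        stream.Flush();
    }
}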

I don't think language choice has much to do with this, and if you have more detailed questions, please ask. Or get the programmer to ask directly.

Greg McKaskle
#4 | 25-09-2013, 10:26
faust1706
FRC #1706 (Ratchet Rockers) | Team Role: College Student
Join Date: Apr 2012 | Rookie Year: 2011 | Location: St Louis | Posts: 498
Re: Dual Cameras - Dual Purposes

Side note: with the depth camera, you are losing a lot of data when you convert it to something we can see and make sense of. I don't know anything about C#, but the following code is in C++:

#include <opencv2/opencv.hpp>
#include "libfreenect_cv.h" // libfreenect's OpenCV sync wrapper

using namespace cv;

int main()
{
    while (true)
    {
        IplImage *raw = freenect_sync_get_depth_cv(0); // 11-bit image, ~10 bits useful
        Mat raw_mat(raw), depth_mat;
        raw_mat = raw_mat - 512;                    // erase unusable close values, basically making it 9-bit
        raw_mat.convertTo(depth_mat, CV_8UC1, 0.5); // convert 9-bit to 8-bit
        imshow("Depth", depth_mat);
        if (waitKey(1) == 27) break;                // imshow needs waitKey to refresh; Esc quits
    }
    return 0;
}

That's just to clean up the depth map a bit. I assume this was a stock program, because the colouring scheme for distance is identical, or nearly so, to the one I found with freenect. I'm sure the above code could easily be converted to C#.

Anyway, back to your question. Have you ever thought about running two programs for this task? One program uses camera A, the other camera B. Because of the bandwidth restriction nowadays (which I am very glad FIRST implemented, since it teaches you not to send a lot of data over a short period of time), you could create a simulation on your driver station. This year, based off the distances and x rotation found by our vision program, the screen on our driver station adjusted a simulated target to fit those constraints. So if you were, say, trying to track frisbees or other robots, you could send the coordinates and size, then recreate them on your driver station, and you could update the display for every solution, not just every 100 ms, so that's a bonus. Just a suggestion. If you are determined to display both, then I'd say write two programs. But I'm not sure how natural one can get at reading a depth image in the heat of a match. That'd be some serious mental training.
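For the receiving end, decoding that kind of packet is trivial in any language; a hypothetical "x,y,width,height" string in C# (your dashboard is LabVIEW, so treat this purely as an illustration of the idea) could be parsed like this:

using System.Globalization;

// A hypothetical "x,y,width,height" packet from the vision program,
// parsed into values a driver-station display can redraw from.
struct Target
{
    public double X, Y, Width, Height;

    public static Target Parse(string packet)
    {
        string[] parts = packet.Split(',');
        return new Target
        {
            X = double.Parse(parts[0], CultureInfo.InvariantCulture),
            Y = double.Parse(parts[1], CultureInfo.InvariantCulture),
            Width = double.Parse(parts[2], CultureInfo.InvariantCulture),
            Height = double.Parse(parts[3], CultureInfo.InvariantCulture)
        };
    }
}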

Just wondering: I assume the other image is from an RGB camera, yes? Is it the one on the Kinect or a webcam/Axis camera?
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
#5 | 26-09-2013, 13:39
Invictus3593 (FRC #3593, Team Invictus)
Re: Dual Cameras - Dual Purposes

Invictus programmer here!

Quote:
You might be able to send just the target parameters you want to exploit.
I'm not completely sure what you mean by "parameters." Just the optimal depth for firing or what?

Quote:
displaying it on the dashboard is step two, correct?
...
To send data from any processor on the robot to the DS is pretty much the same, but you need to ensure that you use ports compatible with the FMS setup. The FMS blocks most ports, and a select few are left open for exactly this sort of communication. Again, use TCP or sockets.

The modification to the Dashboard will be very similar to the code on the robot. Read the image from the correct port and IP. Depending on the format of the image, I'd probably try to convert it to an IMAQ image. The WPI functions do this internally for JPEGs, and it is possible to hand over an array of pixel data and have IMAQ use it for the image data. Then you use the normal dashboard image control and write to the terminal. I've done three camera images at once before, and it works fine as long as the laptop can keep up with the decompression overhead.
First question: yes, that is the second step.
Second: our setup would run USB from the Kinect into the ITX; the ITX would then have an Ethernet connection to the D-Link with an IP like 10.35.93.7 or something. If I wanted to bypass the cRIO and just get the image sent from the ITX to the dashboard, how would I go about doing that? I'm stumped. :/
I had heard that you could only do two camera feeds in C++, but obviously that's not true.

Quote:
Have you ever thought about running two programs for this task? One program uses camera A, the other camera B. Because of the bandwidth restriction nowadays (which I am very glad FIRST implemented, since it teaches you not to send a lot of data over a short period of time), you could create a simulation on your driver station. This year, based off the distances and x rotation found by our vision program, the screen on our driver station adjusted a simulated target to fit those constraints. So if you were, say, trying to track frisbees or other robots, you could send the coordinates and size, then recreate them on your driver station, and you could update the display for every solution, not just every 100 ms, so that's a bonus. Just a suggestion. If you are determined to display both, then I'd say write two programs. But I'm not sure how natural one can get at reading a depth image in the heat of a match. That'd be some serious mental training.

Just wondering: I assume the other image is from an RGB camera, yes? Is it the one on the Kinect or a webcam/Axis camera?
We'd like to be able to display both camera feeds on the same LV dashboard.
To read the depth data during a match, we set the first 24 bits of each 32-bit RGB value according to the depth and step the remaining 8 bits through the colour map; basically, the colour changes as something gets closer, and vice versa.
And I am very conscious of how much bandwidth I'm using; I only want about 15 fps at 50% compression on both displays, so that I'm still "in the green."
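Roughly, the depth-to-colour conversion we're doing looks like this in C# (simplified; the >> 3 shift follows the Kinect SDK v1 depth format, and the greyscale ramp is an arbitrary choice):

using System;

static class DepthColorizer
{
    // Convert raw Kinect depth samples into 32-bit BGRA pixels.
    // In SDK v1 each short packs the depth in millimetres above
    // a 3-bit player index, hence the shift.
    public static byte[] DepthToBgra(short[] depthData, int minMm, int maxMm)
    {
        byte[] pixels = new byte[depthData.Length * 4];
        for (int i = 0; i < depthData.Length; i++)
        {
            int depthMm = depthData[i] >> 3;
            int clamped = Math.Min(Math.Max(depthMm, minMm), maxMm);

            // Map near..far onto 255..0 so closer objects are brighter.
            byte intensity = (byte)(255 - 255 * (clamped - minMm) / (maxMm - minMm));

            pixels[i * 4 + 0] = intensity; // B
            pixels[i * 4 + 1] = intensity; // G
            pixels[i * 4 + 2] = intensity; // R (equal channels = greyscale ramp)
            pixels[i * 4 + 3] = 255;       // A
        }
        return pixels;
    }
}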



Thank you guys so much for the quick replies!

I had one other question. I had heard that the Kinect has an accelerometer in it and I was wondering if anyone has tried to use an accelerometer to compute where their robot is on the field. It may be a fun idea to collaborate on, if no one has done it yet! If there's enough interest, I'll create another thread for this idea, just let me know!
#6 | 26-09-2013, 14:33
Greg McKaskle (FRC #2468, Team NI & Appreciate)
Re: Dual Cameras - Dual Purposes

The existing dashboard code reads directly from the camera IP on port 80. It requests an image stream as a CGI GET and processes the stream from the resulting session. If parameters change, it closes and reopens a new CGI session.

If the "camera" on the ITX could be made to look identical to the Axis, you could use the DB code as is. This would involve making an mjpeg on the ITX and serving it up on port 80 as if you were an Axis camera. While cool hackery, this is not really what I'd recommend, but it is a starting point for understanding one way to do this.

What I'd recommend is making a simple ask/response protocol over TCP. When asked, take the IR or depth image on the robot and write it out. When it is read on the DB, format it as needed and transfer it into an image of some sort. I have some Kinect code in LV that moves the data efficiently into different formats if you find you need it.
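A bare-bones C# version of that ask/response loop might look like the following; the single request byte and the 4-byte length prefix are conventions invented here, and the LabVIEW side just has to mirror them:

using System;
using System.Net;
using System.Net.Sockets;

class AskResponseServer
{
    static void Main()
    {
        // The port is a placeholder; pick one of the ports FMS leaves open.
        var listener = new TcpListener(IPAddress.Any, 1180);
        listener.Start();

        while (true)
        {
            using (TcpClient dashboard = listener.AcceptTcpClient())
            using (NetworkStream stream = dashboard.GetStream())
            {
                int request;
                while ((request = stream.ReadByte()) != -1)
                {
                    // The request byte could select depth vs. IR; here any
                    // byte just triggers one frame.
                    byte[] image = GrabDepthJpeg();
                    byte[] length = BitConverter.GetBytes(
                        IPAddress.HostToNetworkOrder(image.Length));
                    stream.Write(length, 0, 4);
                    stream.Write(image, 0, image.Length);
                }
            }
        }
    }

    static byte[] GrabDepthJpeg() { return new byte[0]; } // placeholder for the pipeline
}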

For display, you can use the LV picture control or the IMAQ display. You could even use the intensity graph if you would like it to do the color mapping as part of the blit.

Using an accelerometer to determine distance is a very hard problem. To do this, you need to know orientation as well as accelerations and you need to have high quality sensors and/or knowledge of how the chassis can be affected.

If you hook an analog sensor to the cRIO and use the DMA interface, you can pull back a high speed signal. This helps demonstrate how a tiny tilt of the sensor accumulates and results in an erroneous velocity as you integrate. You can also play with this if you have the right app on your phone.
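Here is a tiny C# simulation of exactly that effect (illustrative numbers only): a stationary robot whose accelerometer is mounted half a degree off picks up a small gravity component, and double integration turns that bias into metres of phantom travel within seconds.

using System;

class DriftDemo
{
    static void Main()
    {
        const double g = 9.81;      // m/s^2
        const double tiltDeg = 0.5; // a barely visible mounting tilt
        double biasAccel = g * Math.Sin(tiltDeg * Math.PI / 180.0); // ~0.086 m/s^2

        double dt = 0.01;           // 100 Hz samples
        double velocity = 0.0, position = 0.0;

        for (double t = 0.0; t < 10.0; t += dt)
        {
            velocity += biasAccel * dt; // first integration
            position += velocity * dt;  // second integration
        }

        // Error grows with t^2: roughly 4.3 m of phantom travel after 10 s.
        Console.WriteLine("Position error after 10 s: {0:F2} m", position);
    }
}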

Greg McKaskle
#7 | 26-09-2013, 14:36
faust1706 (FRC #1706, Ratchet Rockers)
Re: Dual Cameras - Dual Purposes

Quote:
Originally Posted by Invictus3593 View Post
I had heard that you could only do two camera feeds in C++, but obviously that's not true.

I had one other question. I had heard that the Kinect has an accelerometer in it and I was wondering if anyone has tried to use an accelerometer to compute where their robot is on the field. It may be a fun idea to collaborate on, if no one has done it yet! If there's enough interest, I'll create another thread for this idea, just let me know!
It is very possible to show three camera feeds in C++ (two from the Kinect, one from a webcam), and in C for that matter. I could send you a really good demo program that I wrote for our team meeting last week to teach our new programmers about OpenCV/computer vision. PM me and I'll send it to you. It's kind of long (~100 lines), so it would really stretch out this post and be obnoxious.

You're right about the Kinect having an accelerometer, but I honestly have no idea how to call for its reading. I emailed the mentor who helps me with vision programming and he sent me this link: http://www.youtube.com/watch?v=c9bWpE4tm-o. It doesn't give any info in the description, but the fact that it has been done is encouraging. I don't know much about the Kinect SDK, so I can't help you there.

Instead of using the accelerometer, we use a gyro for orientation. What we have been working on is using our vision solution instead of a gyro, or at the very least as a check for it. To recover all six desired values (x, y and z displacement plus pitch, roll and yaw), I'd suggest using pose estimation. I digress, however. I would love to work on a project like this with anyone interested (that is, using the accelerometer readings in the Kinect).
#8 | 27-09-2013, 09:35
Invictus3593 (FRC #3593, Team Invictus)
Re: Dual Cameras - Dual Purposes

Quote:
What I'd recommend is making a simple ask/response protocol over TCP. When asked, take the IR or depth image on the robot and write it out. When it is read on the DB, format it as needed and transfer it into an image of some sort. I have some Kinect code in LV that moves the data efficiently into different formats if you find you need it.

...

Using an accelerometer to determine distance is a very hard problem. To do this, you need to know orientation as well as accelerations and you need to have high quality sensors and/or knowledge of how the chassis can be affected.
If I used a gyro and the Kinect sensor together and had a small diagram of the field with our robot on it (to scale), couldn't I use the gyro for orientation? On the Kinect side, I was thinking of taking the accelerometer data and calculating speed over time to get the distance traveled, then comparing that to the gyro data and moving the diagrammed robot accordingly.


Quote:
It is very possible to show three camera feeds in C++ (two from the Kinect, one from a webcam), and in C for that matter. I could send you a really good demo program that I wrote for our team meeting last week to teach our new programmers about OpenCV/computer vision. PM me and I'll send it to you. It's kind of long (~100 lines), so it would really stretch out this post and be obnoxious.

...

Instead of using the accelerometer, we use a gyro for orientation. What we have been working on is using our vision solution instead of a gyro, or at the very least as a check for it. To recover all six desired values (x, y and z displacement plus pitch, roll and yaw), I'd suggest using pose estimation. I digress, however. I would love to work on a project like this with anyone interested (that is, using the accelerometer readings in the Kinect).
I have some experience with OpenCV, but I used the .NET wrapper for C# and felt it lacked what I needed. I'd love to see that code, though! I'll be PMing you soon.

Does the gyro provide speed and time? I have never used one, so I have no idea. If it does, there may be no reason to call the Kinect's accelerometer at all; maybe just use the gyro to help plot the diagram?
#9 | 27-09-2013, 10:56
faust1706 (FRC #1706, Ratchet Rockers)
Re: Dual Cameras - Dual Purposes

Quote:
Originally Posted by Invictus3593 View Post
If I used a gyro and the Kinect sensor together and had a small diagram of the field with our robot on it (to scale), couldn't I use the gyro for orientation? On the Kinect side, I was thinking of taking the accelerometer data and calculating speed over time to get the distance traveled, then comparing that to the gyro data and moving the diagrammed robot accordingly.

Does the gyro provide speed and time? I have never used one, so I have no idea. If it does, there may be no reason to call the Kinect's accelerometer at all; maybe just use the gyro to help plot the diagram?

If you aren't familiar with calculus: to get position out of acceleration, you have to integrate the values twice, but that won't be too hard considering how powerful laptops are. If you do this for x, y and z, you get position relative to your starting point. A problem we have with our gyro is that it drifts, and the error starts climbing at an exponential rate. I have a vivid memory of our gyro climbing at 10 rps (revolutions per SECOND) while we were testing our motors in the pit to make sure everything was running smoothly. While the gyro climbing isn't the worst thing in the world, it is when you use the gyro for mecanum wheels. I haven't heard of many teams having this problem, but we have it, so it affects what we do. The gyro we have just gives pitch, roll and yaw based off our initial orientation, which helps us for mecanum driving but nothing else.

I really like the idea of using the accelerometer in the Kinect; it's already there, so why not use it, right?
#10 | 27-09-2013, 12:07
Invictus3593 (FRC #3593, Team Invictus)
Re: Dual Cameras - Dual Purposes

Quote:
Originally Posted by faust1706 View Post
If you aren't familiar with calculus: to get position out of acceleration, you have to integrate the values twice, but that won't be too hard considering how powerful laptops are. If you do this for x, y and z, you get position relative to your starting point.
In my physics class we're learning that kind of stuff, so I thought it would technically just be a matter of plugging variables into the correct equations to get time, speed and position.
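For anyone following along, the constant-acceleration equations involved are v = v0 + a*t and x = x0 + v0*t + (1/2)*a*t^2. With sampled accelerometer data you apply them per timestep (v += a*dt, then x += v*dt), which is the same double integration as in the drift sketch earlier in the thread, so any bias in a grows into position error proportional to t^2.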


Quote:
Originally Posted by faust1706 View Post
...I really like the idea of using the accelerometer in the Kinect; it's already there, so why not use it, right?
I agree! If the accelerometer is sensitive enough, it would be ideal for this. I think it's a matter of getting the right data, though.

Also, I shot you an email because it wouldn't let me PM you, faust.


I'll start a new thread pertaining to the position idea here in a few so we stay on the camera idea in this thread!

Last edited by Invictus3593 : 27-09-2013 at 12:19.
#11 | 30-09-2013, 12:28
adciv ("One Eyed Man")
FRC #0836 (RoboBees) | Team Role: Mentor
Join Date: Jan 2012 | Rookie Year: 2010 | Location: Southern Maryland | Posts: 478
Re: Dual Cameras - Dual Purposes

See this other post for some code. The code isn't as neat as I would like, but if I remember right, I had it feeding the image to the driver station. Short version: I compressed the image to JPEG, sent it over TCP to the driver station, and decoded it for display on the screen.

http://www.chiefdelphi.com/forums/sh....php?p=1205226
__________________
Quote:
Originally Posted by texarkana View Post
I would not want the task of devising a system that 50,000 very smart people try to outwit.
#12 | 01-10-2013, 00:12
Invictus3593 (FRC #3593, Team Invictus)
Re: Dual Cameras - Dual Purposes

Quote:
Originally Posted by adciv View Post
See this other post for some code. The code isn't as neat as I would like, but if I remember right, I had it feeding the image to the driver station. Short version: I compressed the image to JPEG, sent it over TCP to the driver station, and decoded it for display on the screen.

http://www.chiefdelphi.com/forums/sh....php?p=1205226
Is there a PC port of this program? I'm not a fluent Linux user, haha. If not, would it be relatively simple for me to rebuild the same program as a PC executable?
#13 | 04-10-2013, 09:22
Invictus3593 (FRC #3593, Team Invictus)
Re: Dual Cameras - Dual Purposes

I got the C# program written to grab the Kinect depth picture; now I can't figure out how to send the data over TCP to the dashboard. Any ideas?
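For reference, here's the rough shape of what I've been trying on the sending side (a sketch only: the port is a placeholder and BuildDepthBitmap stands in for our actual depth-to-colour code):

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Net;
using System.Net.Sockets;

class DepthStreamer
{
    static void Main()
    {
        // Listen on the ITX; the dashboard connects to us.
        var listener = new TcpListener(IPAddress.Any, 1180); // placeholder port; check FMS rules
        listener.Start();

        while (true)
        {
            using (TcpClient dashboard = listener.AcceptTcpClient())
            using (NetworkStream stream = dashboard.GetStream())
            {
                try
                {
                    while (true)
                    {
                        using (Bitmap frame = BuildDepthBitmap()) // depth-to-colour conversion goes here
                        using (var ms = new MemoryStream())
                        {
                            frame.Save(ms, ImageFormat.Jpeg); // compress before sending
                            byte[] jpeg = ms.ToArray();

                            // 4-byte big-endian length prefix, then the JPEG itself.
                            byte[] len = BitConverter.GetBytes(
                                IPAddress.HostToNetworkOrder(jpeg.Length));
                            stream.Write(len, 0, 4);
                            stream.Write(jpeg, 0, jpeg.Length);
                        }
                        System.Threading.Thread.Sleep(66); // ~15 fps to stay inside the bandwidth cap
                    }
                }
                catch (IOException) { /* dashboard disconnected; go back to accepting */ }
            }
        }
    }

    static Bitmap BuildDepthBitmap()
    {
        return new Bitmap(320, 240); // placeholder for the real Kinect depth image
    }
}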
__________________
Per Audacia Ad Astra