#1
Re: Processed Image > Robot Movement Help Needed
Alright, I've flattened the distance information to a string and done some poking around with SmartDashboard (SD) and NetworkTables (NT), and I'm getting pretty confused. I've got the dashboard writing the string to a SmartDashboard table entry that, like the tutorial shows, is named "Distances". I was seeing some NetworkTables VIs that call for the cRIO IP address to start communicating information, but I know absolutely nothing about them, and I don't know how SD and NT relate to each other, if at all. I think I have a handle on reading the value on the robot once the information is received (as far as I know; I haven't done it before), but what's the next step for establishing a TCP or UDP connection between the Dashboard and the Robot? I feel like it's right there in the NetworkTables palette, but I can't figure it out.

EDIT: I apologize if my battery of questions is annoying. I've spent four years looking into vision, and it's a miracle the dashboard can see shapes as of Tuesday. I'm excited, but I know nothing more about the subject.

Last edited by BenGrapevine : 16-01-2014 at 16:57.
#2
Re: Processed Image > Robot Movement Help Needed
This is unrelated, but do you know if we can put color sensors on the robot?
#3
Re: Processed Image > Robot Movement Help Needed
UPDATE: I found UDP Write, but I don't know what the information on the VI means. What is the difference between Connection ID and Address?

EDIT: Man, I'm stupid; 'Detailed Help' is a life saver.

EDIT 2: It's not as much of a life saver as I thought. I still don't know what they mean.

Last edited by BenGrapevine : 16-01-2014 at 17:14.
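For readers hitting the same question: the Connection ID is essentially the local UDP endpoint handle that UDP Open returns, while the Address (plus remote port) is the destination that UDP Write sends to. A rough Python socket sketch of the same split (the IP and port below are placeholders, not real robot settings):

```python
# Rough analogy to the LabVIEW UDP VIs, sketched with Python sockets.
# The socket object plays the role of the "connection ID" returned by
# UDP Open (your local endpoint); the (host, port) tuple plays the role
# of the Address / Remote Port inputs on UDP Write (the destination).
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # ~ UDP Open -> connection ID
destination = ("10.12.34.2", 1130)                        # ~ Address + Remote Port (placeholders)

sock.sendto(b"distance=72.5", destination)                # ~ UDP Write
sock.close()                                              # ~ UDP Close
```

That said, as the replies below explain, SmartDashboard/NetworkTables already handles the dashboard-to-robot transport, so a hand-rolled UDP link usually isn't needed for this.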
#4
Re: Processed Image > Robot Movement Help Needed
Quote:
#5
Re: Processed Image > Robot Movement Help Needed
As Joe said, NT and SD are already in the template code. So that it isn't magic, let me answer a few of your questions.

NetworkTables is a name for the networked variables. The variables have a path that organizes them hierarchically, and you can think of the elements at the same level of the hierarchy as a table of variables. SmartDashboard is really the same thing, but we put all of its variables into the /SmartDashboard/ table. This simply isolates them from some bookkeeping areas and avoids a bit of clutter.

The LabVIEW Dashboard has a loop that calls the NT Binding function; it is loop 3. It takes in a list of controls and indicators that are then bound to the identically named NT variables. The loop keeps them updating even if the robot disconnects and reappears. The LabVIEW robot template has Robot Main start the NT server; it is just below the loop, towards the bottom of the window.

With those two pieces running, you should simply need to use the SD VIs to read and write data. Make sure the name and datatype match.

On the dashboard there is a second feature where you don't even need to use a read or write. Placing your control in the Operation or Auto tab and naming it will create a variable of that name and keep it synched with the robot, where you can read or write it. This is more useful for simple checkboxes and numerics, but it is not sufficient for more complex types like the cluster you are using.

Greg McKaskle
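Since LabVIEW diagrams don't paste into a forum post, here is the same idea expressed textually with the pynetworktables library. This is not the LabVIEW code itself, just an illustration that SmartDashboard keys are ordinary NetworkTables entries living under the /SmartDashboard/ table (the server address is a placeholder):

```python
# Text-only illustration of the SD-on-top-of-NT relationship.
from networktables import NetworkTables

# The NT client needs to know where the server (the robot) lives.
NetworkTables.initialize(server="10.12.34.2")  # placeholder address

# "SmartDashboard" is just one table (one level of the hierarchy) in NT.
sd = NetworkTables.getTable("SmartDashboard")

# Dashboard side: publish the flattened distance data under a key.
sd.putString("Distances", "72.5,68.0,70.1")

# Robot side: read the same key back; the name and type must match.
distances_str = sd.getString("Distances", "")
print(distances_str)
```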
#6
Re: Processed Image > Robot Movement Help Needed
Okay, I have the dashboard flattening the distance information to a string and then sending it to SD Write String, which should put it into its hierarchy or whatever. Now, in PeriodicTasks.vi, I have placed an SD Read String and made it unflatten the string like this:

(screenshot of the Unflatten From String code attached)

What do I do with this unflattened data in regards to getting it global and moving it to multiple VIs? I've never used globals before, and I can't find any documentation that talks about this step in the vision processing systems.

My end goal is to be able to see boxes (done), identify which boxes mean left and right by using a score system (done), calculate the distance the robot is from the wall (done), send that information from the dashboard to the robot code, interpret the score values it gets, and, on a button press, move to a specific x-y location to line up a shot. I am just starting with distance so I can at least make the robot go to a certain distance from the goal. After that, I would like to look into turning the robot to face the goal straight on, or moving to an exact spot on the field.

Any help with moving forward on this is and will be appreciated.
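To make the flatten/unflatten step concrete in plain text: it is just serialization, and the reader has to use exactly the same type the writer used. A rough Python sketch of the same round trip (the byte layout here is illustrative only and is not claimed to match LabVIEW's actual flattened format):

```python
# Illustrative only: shows the "writer and reader must agree on the type"
# idea behind Flatten To String / Unflatten From String.
import struct

distances = [72.5, 68.0, 70.1]            # the 1D array of distances

# "Flatten": prefix with a count, then pack the doubles.
flat = struct.pack(">I", len(distances)) + struct.pack(f">{len(distances)}d", *distances)

# "Unflatten": the reader uses the *same* layout to get the data back.
(count,) = struct.unpack(">I", flat[:4])
recovered = list(struct.unpack(f">{count}d", flat[4:4 + 8 * count]))
assert recovered == distances
```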
#7
Re: Processed Image > Robot Movement Help Needed
The input to the unflatten node called "type" is used to describe what was flattened in the string. You need to wire the same datatype to type as you wired into the flatten node on the dashboard. It is often easiest to drag and drop directly from one window to the other to ensure they are the same. If you envision this changing, you should read up on typedefs as a means to share types between projects.
The robot project contains a file called robot global.vi. If you open it, you'll see that there are already a few globals in use. To create another, drag your type into robot global and name it. You can now drag robot global from the project into Teleop and Periodic Tasks and select the global to read or write.

Another thing you should do in the loop you show, the one that reads the SD, unflattens, and updates the global, is to add a delay. As written, this parallel loop requests that it run as fast as possible, which will max out the CPU. In reality, this loop probably needs to wait for 30 ms or 50 ms in parallel with the operations it contains. Notice that the other loops do this already, with 10 ms and 100 ms wired up.

For more details on globals, please search the general LabVIEW help documentation.

Greg McKaskle
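The "read SD, unflatten, update the global, and wait" pattern can also be sketched in text form. This is a minimal Python-flavoured sketch, not the LabVIEW code; read_distances_string stands in for the SD Read String + Unflatten steps:

```python
# Sketch of a polling loop that updates a shared value without pegging the CPU.
import threading
import time

latest_distances = []                 # plays the role of the robot global
_lock = threading.Lock()

def read_distances_string():
    # Placeholder for the real SD read + unflatten; returns a list of floats.
    return [72.5, 68.0, 70.1]

def vision_poll_loop():
    global latest_distances
    while True:
        values = read_distances_string()
        with _lock:
            latest_distances = values
        time.sleep(0.05)              # ~50 ms delay so this loop yields the CPU

# Elsewhere (e.g. the teleop code) the global is read, not recomputed:
def current_distance():
    with _lock:
        return latest_distances[0] if latest_distances else 0.0
```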
#8
Re: Processed Image > Robot Movement Help Needed
(dashboardmain.vi — screenshot attached)

(periodictasks.vi — screenshot attached)

Looking at the Flatten To String, it appears the data coming in is a 1D array. I have no clue what you mean by dragging and dropping stuff from there to the 'type' input on the Unflatten From String.

Last edited by BenGrapevine : 18-01-2014 at 10:37.
#9
Re: Processed Image > Robot Movement Help Needed
Actually, I'd suggest right-clicking on the Distances indicator terminal. Choose Create >> Constant. This creates a constant of the same datatype. Use the mouse to drag the constant from your dashboard VI's block diagram to the periodic VI's block diagram. If this is awkward, you can copy and paste. You can then delete the constant from the dashboard diagram.
Then wire the constant to the Type input. Next you can unbundle the data from the Unflatten output, or otherwise share the data via globals.

Greg McKaskle
#10
Re: Processed Image > Robot Movement Help Needed
Okay, so I followed your instructions here and wired the distances in to work on a joystick button, like so:

(screenshot of the joystick button code attached)

On button 3, once it checks that the distance is >= 6 feet, it drives forward. This seems like it would work fine, but I have no idea why it's not calculating distance information:

(screenshot of the dashboard attached)

As you can see on the dashboard, in both places the distance is indicated as 0. What am I doing wrong?
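For anyone following along, the behavior being described, written out as a hedged Python-style sketch. The threshold, speed, and set_drive call are placeholders; the real logic lives in the LabVIEW diagram shown in the screenshot:

```python
# Sketch of the described teleop check; not the actual LabVIEW diagram.
DISTANCE_THRESHOLD_FT = 6.0
FORWARD_SPEED = 0.5                            # placeholder motor power

def teleop_step(button3_pressed: bool, distance_ft: float, set_drive):
    # Drive forward only while the button is held AND the robot is still
    # farther from the goal than the threshold.
    if button3_pressed and distance_ft >= DISTANCE_THRESHOLD_FT:
        set_drive(FORWARD_SPEED)
    else:
        set_drive(0.0)
```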
#11
Re: Processed Image > Robot Movement Help Needed
I've modified the dashboard so that the smaller "Distances" panel next to the HSV controls now displays hot/not hot and lights up when the target is "left". What still doesn't show up is the distance and target-to-center values. Do I need to do something with the computing-distances VI to allow it to calculate?

Last edited by BenGrapevine : 21-01-2014 at 18:43.
#12
Re: Processed Image > Robot Movement Help Needed
I'd suggest opening additional subVI panels to see what is causing the zero distance. The distance calculation is based on the camera type and is pretty simple trig. Does it make more sense now?

Another thing looks odd in the joystick button code you attached: it unbundles a double and compares it to 6, but that double seems to be coming from a constant. I suspect it should instead be the distance value that is being communicated and unflattened.

Greg McKaskle
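For context, here is the general shape of the "simple trig" a camera-based distance estimate usually relies on. This is a generic sketch, not necessarily the exact formula in the FRC distance VI, and the field-of-view and target-size numbers are placeholders:

```python
# Generic camera-distance estimate: a target of known physical size that
# spans fewer pixels must be farther away. Placeholder numbers throughout.
import math

TARGET_WIDTH_FT = 2.0          # known physical width of the vision target
IMAGE_WIDTH_PX = 640           # horizontal resolution of the camera image
HORIZONTAL_FOV_DEG = 47.0      # camera field of view (check your camera's spec)

def estimate_distance_ft(target_width_px: float) -> float:
    half_fov = math.radians(HORIZONTAL_FOV_DEG / 2.0)
    # The full image spans TARGET_WIDTH_FT * IMAGE_WIDTH_PX / target_width_px
    # feet at the target's distance, and that span equals 2*d*tan(half_fov), so:
    return (TARGET_WIDTH_FT * IMAGE_WIDTH_PX) / (2.0 * target_width_px * math.tan(half_fov))

print(round(estimate_distance_ft(target_width_px=80.0), 1))  # ~18 ft with these placeholders
```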
#13
Re: Processed Image > Robot Movement Help Needed
What would you recommend doing? I redid the vision processing to make sure distance works now (which it does, thanks for the help on that).
(screenshot attached)

But now I'm working on integrating it into something I can work with in Teleop. I would like it to read the distance value and, while both the button is pressed and the distance is greater than a certain number, drive forward until that is no longer true. Here is what I have for Teleop.vi at this point with regard to vision (now that I've started over):

(screenshot of Teleop.vi attached)

Last edited by BenGrapevine : 25-01-2014 at 15:05.
#14
Re: Processed Image > Robot Movement Help Needed
I attached an image with a few edits.
You were heading in the right direction, but making a few wrong turns. Your code was reading "Left and Right Motors" from both RobotDrive and also from the Motors list. While this is supported, I suspect you really meant to update RobotDrive again. Please correct me if I'm wrong.

The next thing to think about is what the result is when a motor is given multiple commands during a Teleop execution or in parallel portions of the app. The better approach is to combine the logic and update the motors or other actuators just once. That is what I attempted to show in my modified image. You bring the joystick and the distance info into a single switch statement (case structure) and update the motors in one common location using its outputs. A slight variation that you may see or decide to use is to bring the values together and use a Select (like a ternary ?: operator) to choose which of them goes to the motors.

Greg McKaskle
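A text sketch of the structure being suggested: combine the joystick and vision inputs first, then command the motors exactly once per teleop iteration. The speed value and set_drive call are placeholders, and the conditional expression here stands in for the LabVIEW Select node or case structure:

```python
# One motor update per teleop iteration: decide first, command once.
DISTANCE_THRESHOLD_FT = 6.0
AUTO_FORWARD_SPEED = 0.5                       # placeholder motor power

def teleop_step(joystick_y: float, button3: bool, distance_ft: float, set_drive):
    # Should the vision-assisted drive take over this iteration?
    auto_drive = button3 and distance_ft >= DISTANCE_THRESHOLD_FT

    # The Select / ternary step: choose which command reaches the motors.
    forward = AUTO_FORWARD_SPEED if auto_drive else joystick_y

    set_drive(forward)                         # single, common motor update
```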