NETWORK TABLE HELP! (ROBOREALM)

I was wondering if anyone knows how to get the NetworkTables values from RoboRealm into Eclipse so I can use that data for vision tracking. Right now we are just getting the default values from the NetworkTable. If you need a picture of the network table or the code, email me at [email protected]

Have you gotten a response? My team is having the same issues.

My team is ALSO having the same problem, only with a Raspberry Pi.

We can see the NetworkTables in the LabView dashboard, but according to the code on our RoboRio, our “GRIP” Table does not exist.

We also tried sending the values to a Sub-Table, and just writing the variables in the SmartDashboard table.

No matter where we put them, the LabView dashboard COULD see them, and our RoboRio code COULD NOT.

Any help would be fantastic, our team is completely stumped.

It might be helpful to run the OutlineViewer, attach it to your NetworkTables instance, and then browse the entries to see where they ended up (sometimes they have an extra or missing level of hierarchy compared to what you were expecting).
c:\users\<username>\wpilib\tools\OutlineViewer

Set the address to roborio-####-frc.local (#### being your team number) and run the client. This will let you snoop on the data and see what's going on. If you see nothing, then it's possible that either your vision processing code isn't writing data, or it's writing to a different NetworkTables instance entirely (e.g. to localhost, instead of to the RIO's instance).

In a pinch, you can just jam data into the table that way and see if your robot code sees it -- that might get you moving on the consumption side while someone debugs the producer side in parallel…
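For the coprocessor case (like the Raspberry Pi post above), the usual culprit for "writing to a different instance entirely" is that the vision program's NetworkTables library defaults to running its own local server instead of connecting as a client to the roboRIO. A minimal sketch, assuming the 2016-era Java NetworkTables API (`edu.wpi.first.wpilibj.networktables.NetworkTable`) -- the class/file name `VisionClient` here is just for illustration:

```java
// Sketch of coprocessor-side setup, assuming the 2016-era NetworkTables Java API.
// The client mode/address calls must happen BEFORE the first getTable() call;
// otherwise the library starts its own local server and the roboRIO never sees the data.
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class VisionClient {
    public static void main(String[] args) {
        NetworkTable.setClientMode();                         // connect out; don't serve locally
        NetworkTable.setIPAddress("roborio-####-frc.local");  // #### = your team number

        NetworkTable table = NetworkTable.getTable("GRIP/StrongholdContours");
        table.putNumberArray("centerX", new double[] { 160.0 }); // test value to snoop on in OutlineViewer
    }
}
```

If the value shows up in OutlineViewer after this, the producer side is fixed and the robot-code reads should start working.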

FWIW, here’s the code we were running earlier today to extract the vision data for target tracking.

In Vision.java (autogenerated subsystem from RobotBuilder):


	// create a new constructor
	public Vision() {
		// Get a handle to the NetworkTable.  "StrongholdContours" is the name we entered into the publish box in GRIP
		table = NetworkTable.getTable("GRIP/StrongholdContours");
	}

and here’s the actual function. We had some issues with null sets (if no contours were found) and hit some race conditions where the two array reads got different sizes because of an intervening update, plus some noise. I hacked around those for the moment, but will have to figure out something a little better. But, anyway… this did produce the right X values for left-right tracking. All the other reads were pretty much the same. The xRes variable was the pixel resolution in the X direction, which was 320 for us.


    public double getRawTargetXpos() {
    	double xPos; // return value
    	
    	// 1) Get the current list of targets found.  There might be more than one visible at a time
       	// first, get the vision system data for the target
    	double[] defaultValue = new double[0]; // set up a default value in case the table isn't published yet

    	// Get all needed table items at the same time, in case the table updates between reads.
    	// (could end up with different array sizes)
    	double[] targetX = table.getNumberArray("centerX", defaultValue);
    	double[] areas = table.getNumberArray("area", defaultValue);
		if (targetX.length != areas.length) {
			// here the table updated in the middle; we'll have to punt
			System.out.println("NetworkTable updated in the middle of getRawTargetXpos; returning first valid entry");
	    	if (targetX.length==0)
	    	{
	    		// we didn't find ANY object.  Return a perfectly centered answer so that the system doesn't
	    		// try to adapt
	    		xPos = xRes/2;
	    	}
	    	else xPos = targetX[0];
	    	return xPos;
		}
   	
    	// For initial debug, just print out the table so we can see what's going on
/*
    	System.out.print("centerX: ");
    	for (double xval : targetX) { // for each target found,
    		System.out.print(xval + " ");
    	}
    	System.out.println();
*/    	
    	
    	// 2) Choose the one that has the largest area.  This is PROBABLY the closest target (and most in-line)
    	//    Don't want to choose the one closest to the center because that might actually be the target
    	//    for a different face that's very oblique to our robot position.
		int largestIdx = 0;
    	if (targetX.length > 1) {
    		double largest = 0;
    		for (int c = 0; c < areas.length; c++) {
    			if (areas[c] > largest){
    				largest = areas[c];
    				largestIdx = c;
    			}
    		}
    	}
    	
    	if (targetX.length==0)
    	{
    		// we didn't find ANY object.  Return a perfectly centered answer so that the system doesn't
    		// try to adapt
    		xPos = xRes/2;
    	}
    	else xPos = targetX[largestIdx];
    	
    	return xPos;
    }
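For the race condition mentioned above (the two array reads disagreeing because of an intervening table update), one "little better" option is to retry the paired reads a few times until the lengths agree, and only then pick the largest-area contour. A self-contained sketch of that selection logic -- the `Supplier` parameters stand in for the `table.getNumberArray("centerX", ...)` / `table.getNumberArray("area", ...)` calls so the logic can run and be tested off-robot; `TargetSelector` and `pickTargetX` are names invented here, not part of WPILib:

```java
import java.util.function.Supplier;

public class TargetSelector {
    static final double X_RES = 320; // camera X resolution, as in the post

    // Returns the centerX of the largest-area contour.  Retries when the two
    // reads return different lengths (table updated mid-read); falls back to a
    // centered answer (X_RES/2) when nothing is visible or the reads never agree.
    public static double pickTargetX(Supplier<double[]> centerX,
                                     Supplier<double[]> area,
                                     int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            double[] xs = centerX.get();
            double[] areas = area.get();
            if (xs.length != areas.length) {
                continue; // intervening update; re-read both arrays
            }
            if (xs.length == 0) {
                return X_RES / 2; // no contours found: report "perfectly centered"
            }
            int largestIdx = 0; // largest area is PROBABLY the closest, most in-line target
            for (int c = 1; c < areas.length; c++) {
                if (areas[c] > areas[largestIdx]) {
                    largestIdx = c;
                }
            }
            return xs[largestIdx];
        }
        return X_RES / 2; // reads never agreed; give the safe centered answer
    }
}
```

In robot code you would pass `() -> table.getNumberArray("centerX", defaultValue)` and `() -> table.getNumberArray("area", defaultValue)` as the two suppliers; a small retry bound (2-3) is plenty, since GRIP publishes far less often than the loop runs.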