A Few Questions About WPILib Code

First of all, for the Encoder->GetRate() function, its return type is a double, but what units does it return? Revs/sec, rads/sec? Is there a maximum rate at which this can be called, even though the physical limit at which meaningful data can be obtained is probably around 10,000 revs/sec at most?

Second, for the Timer functions, is it possible to obtain microsecond or better accuracy by subtracting two different timestamp calls, like GetPPCTimeStamp() or GetFPGATimeStamp()? Furthermore, are these timestamps just representations of how many clock cycles have passed, and thus dependent on the speed of the processor, or are they the actual time in seconds elapsed?

GetRate() uses the value you set with SetDistancePerPulse(). It returns distancePerPulse divided by the period, in seconds, between the last two pulses. So if you set distancePerPulse to 1.0 and the FPGA saw three pulses in the last second, it would return 3.0. The units of the number you set with SetDistancePerPulse() are up to you: if 1.0 was an inch, then you are moving at 3 inches/second; if it was a foot, then 3 feet/sec.
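For example, a minimal sketch of how that might look in code (the channel numbers and the pulses-per-foot figure are made up for illustration):

```cpp
#include "WPILib.h"

// Hypothetical example: encoder on digital channels 1 and 2,
// with 250 pulses per foot of travel (made-up numbers).
Encoder driveEncoder(1, 2);

void InitEncoder()
{
    // Each pulse corresponds to 1/250 of a foot, so GetRate()
    // will report feet per second.
    driveEncoder.SetDistancePerPulse(1.0 / 250.0);
    driveEncoder.Start();
}

void PrintSpeed()
{
    double feetPerSecond = driveEncoder.GetRate();
    printf("Drive speed: %f ft/s\n", feetPerSecond);
}
```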

Well I can certainly field a few of these.

It returns the rate in whatever units you'd like: you specify a distance per pulse, which the GetRate() function then uses. Whichever unit you are *implicitly* using in the distance per pulse is the unit GetRate() returns.

The encoder will obviously only have a certain refresh rate due to hardware limitations, but there is no hard-coded maximum rate at which you can call the function.

Not entirely sure, but I frankly believe the accuracy the Timer object provides should be good enough for high-level applications. The Timer object reports the actual time in seconds elapsed.

http://mmrambotics.ca/wpilib/class_timer.html
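Subtracting two timestamps is straightforward; something like this rough sketch (the exact casing of GetFPGATimestamp() may differ by WPILib version, and I wouldn't count on resolution much below a microsecond):

```cpp
#include "WPILib.h"

void TimeSomething()
{
    // The timestamp is in seconds (as a double), so the difference
    // is elapsed wall time, independent of processor speed.
    double start = Timer::GetFPGATimestamp();

    // ... code being timed ...

    double elapsed = Timer::GetFPGATimestamp() - start;
    printf("Elapsed: %f s (%f us)\n", elapsed, elapsed * 1e6);
}
```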

It seems you are doing some interesting data acquisition. Care to elaborate?

EDIT: jwakeman beat me to the first answer, but I’ll keep my post.

My sensor setup this year will likely comprise four encoders: one on each of the two SuperShifters and one per arm controller, although one arm controller will probably use a potentiometer instead, as there won't be enough rotations to get meaningful data. Furthermore, roughly 10 Maxbotix sensors will be positioned around the robot, probably five in front and five in back, with the last two on each side angled 45-90 degrees. Last, all three line sensors will be on a horizontal line at the front of the robot.

By sampling the sensors at such a high rate, I can automate most of the driving with enough precision that I will only need to tell the robot what type of tube it has and which peg to place it on, and presto! The greater the precision in sensing time differences and the more samples I have, the more accurately I can integrate the distances traveled, determine what obstacles I may need to avoid (like other robots, along with their current velocity and position), and get additional references such as how far away the walls are when lining things up. A rough sketch of that integration is below.
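Something along these lines is what I have in mind for the integration; the channel numbers and scaling are placeholders, and it's just a simple rectangle-rule accumulation:

```cpp
#include "WPILib.h"

// Accumulate distance from periodic rate samples.
// Encoder channels and distance-per-pulse are placeholders.
Encoder leftDrive(1, 2);
double totalDistance = 0.0;
double lastTime = 0.0;

void StartIntegration()
{
    leftDrive.SetDistancePerPulse(1.0 / 250.0); // made-up scaling (feet per pulse)
    leftDrive.Start();
    lastTime = Timer::GetFPGATimestamp();
}

void UpdateIntegration()
{
    double now = Timer::GetFPGATimestamp();
    double dt = now - lastTime;
    lastTime = now;

    // distance += rate * dt; more samples and finer timing resolution
    // make this approximation more accurate.
    totalDistance += leftDrive.GetRate() * dt;
}
```

(For a single encoder you could also just read GetDistance() directly; the sampled integration matters more once I'm combining rates and timestamps from several sensors.)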

Sounds ambitious… why not get the camera involved?

Because of the huge resources required for image processing, there would be too much trade-off between speed and usable data, plus there's not much a camera can do for this system other than provide another type of data acquisition. Ultrasonics can perform object detection just as well, at a far greater rate. With a few gyros, potentiometers, and encoders thrown in too, I'll be able to place the arm at the correct height without having to deal with a camera system's code.
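For reference, reading one of the Maxbotix rangefinders and the arm pot is just a couple of analog reads; something like this (the channel numbers and voltage-to-distance scaling are guesses, not real datasheet values):

```cpp
#include "WPILib.h"

// Hypothetical analog channels for one Maxbotix rangefinder and the arm pot.
AnalogChannel frontSonar(1);
AnalogChannel armPot(2);

double GetFrontRangeInches()
{
    // The Maxbotix analog output scales roughly linearly with distance;
    // the volts-per-inch figure here is a placeholder.
    const double voltsPerInch = 0.0098;
    return frontSonar.GetAverageVoltage() / voltsPerInch;
}

double GetArmAngleDegrees()
{
    // Map the pot voltage (0-5 V) onto the arm's travel; the degree range is made up.
    return armPot.GetAverageVoltage() / 5.0 * 270.0;
}
```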

Plus, our camera broke.