
08-02-2009, 19:40
Software Engineer
VRC #0111 (Wildstang)
Team Role: Engineer
Join Date: Feb 2002
Rookie Year: 1995
Location: North Barrington, IL
Posts: 1,366
Re: New class for Logitech Dual Action Gamepad
Quote:
Originally Posted by Phazonmutant
In general, you're absolutely right. But in this case, I've done pretty extensive testing (outputting the raw axis value to the console), and the gamepad will always output -1 or 1.
Edit 1: You'll notice the article is talking about testing floats when doing floating point math. Since we're doing no math, only testing a returned value, it's safe.
Edit 2: Furthermore, it talks about how the binary representation is not 100% precise for some numbers. The binary representation of floating point numbers is implementation-specific, but let's assume IEEE 754 single precision: the first bit is the sign, the next 8 bits are the exponent, and the last 23 bits are the fraction (together with the implicit leading 1, that gives the 24 bits of significand precision IEEE 754 defines for 32-bit, single precision, floats). OK, a decimal 1 looks like:
Code:
00111111100000000000000000000000
or, a 0 for the sign, "01111111" (a biased exponent of 127, i.e. 2^0) for the exponent, and 23 0s for the fraction; the implicit leading 1 in front of the fraction makes the significand exactly 1.
This is precisely 1.
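For what it's worth, that representation is easy to verify directly. A minimal sketch, assuming an IEEE 754 platform (memcpy sidesteps pointer-aliasing issues):
Code:
// Dump the bit pattern of 1.0f; assumes 32-bit IEEE 754 floats.
#include <cstdio>
#include <cstring>
#include <stdint.h>

int main()
{
    float one = 1.0f;
    uint32_t bits;
    std::memcpy(&bits, &one, sizeof(bits)); // copy the raw bytes
    std::printf("0x%08X\n", bits);          // prints 0x3F800000
    // binary: 0 | 01111111 | 00000000000000000000000
    // sign = 0, biased exponent = 127 (i.e. 2^0), fraction = 0;
    // the implicit leading 1 makes the value exactly 1.0
    return 0;
}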
We're using a Logitech gamepad that outputs analog values on the D-Pad. The buttons under the D-Pad on that stick are pressure sensitive and CAN output a value less than 1 if you press them lightly. Also, most people follow "don't test a float for an exact value" as a rule of thumb (so as not to get into trouble) rather than reasoning about the exact implementation and whether it's safe in a particular case. We tend to write code with portability in mind, even if we're not likely to port it, just as a good habit to have.
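So on our hardware the safer pattern is a threshold test rather than an exact compare. A rough sketch of the idea; the threshold value and the DpadPressed name are just placeholders (the axis value would come from whatever accessor your gamepad class provides):
Code:
#include <cmath>

// Treat a pressure-sensitive D-Pad axis as "pressed" only past a
// threshold, rather than testing for exactly -1.0 or 1.0.
static const float kDpadThreshold = 0.5f; // tune for your gamepad

// axisValue would come from something like gamepad.GetRawAxis(axis).
bool DpadPressed(float axisValue)
{
    // fabs covers both directions; a light press stays below threshold
    return std::fabs(axisValue) >= kDpadThreshold;
}
A side benefit is a built-in deadband: an axis that idles at 0.02 instead of exactly 0 won't register as a press either.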