Jeff McCune
21-02-2004, 18:47
Not sure if I missed something in my CIS classes, but can someone offer some insight into this behavior?
int foo;
foo = 127 + 127;
printf ("%16b | %d \r\n", foo, foo);
// prints: "1111 1111 1111 1110 | -2"
Is this a bug in the printf code, or are the compiler and the chip actually representing this value with ones in the 15 most significant bits and a zero in the least significant bit?
It looks as if something is treating the literal values as an 8-bit integer and then padding out to 16 bits with the most significant bit; that "something" could be printf, or the compiler/CPU...?
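To spell out what I mean, here is a rough sketch in standard C using fixed-width types (int8_t standing in for this compiler's 8-bit int and int16_t for the 16-bit value printf sees; that mapping is my assumption). The narrow value should wrap to -2 and then sign-extend to 0xFFFE when widened:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int8_t  narrow = (int8_t)(127 + 127);  /* 254 doesn't fit in a signed 8-bit value; typically wraps to -2 */
    int16_t wide   = narrow;               /* widening a signed value sign-extends the top bit               */
    printf("narrow = %d, wide = %d, bits = 0x%04X\r\n",
           narrow, wide, (unsigned)(uint16_t)wide);
    /* expected: narrow = -2, wide = -2, bits = 0xFFFE (1111 1111 1111 1110) */
    return 0;
}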
This code is my workaround:
unsigned char foo;
foo = 127 + 127;
printf ("%16b | %d \r\n", (int)foo, (int)foo);
// prints: "0000 0000 1111 1110 | 254"
// This also works, keeping all my data types as int
// It also implies it's a bug in how the chip/compiler represents literal values
int foo;
foo = (int)(unsigned char)(127 + 127);
printf("%16b | %d \r\n", foo, foo);
From the last example, it would seem that literal values are padded out (sign-extended) with the most significant bit, but this padding doesn't occur if you cast the data through unsigned char along the way. Am I correct in this guess?
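For comparison, here is a sketch of the two widening paths side by side (again using int8_t/int16_t as stand-ins, which is my assumption): widening the signed 8-bit value directly sign-extends it, while going through unsigned char first clears the high bits and keeps 254:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int8_t  raw       = (int8_t)(127 + 127);          /* typically wraps to -2                     */
    int16_t direct    = raw;                          /* sign-extended: 0xFFFE, prints as -2       */
    int16_t via_uchar = (int16_t)(unsigned char)raw;  /* high bits cleared first: 0x00FE, i.e. 254 */

    printf("direct = %d (0x%04X), via unsigned char = %d (0x%04X)\r\n",
           direct,    (unsigned)(uint16_t)direct,
           via_uchar, (unsigned)(uint16_t)via_uchar);
    /* expected: direct = -2 (0xFFFE), via unsigned char = 254 (0x00FE) */
    return 0;
}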
Any insights? I'd like to just use one data type (int) everywhere, but this is hanging me up, since it makes the output difficult to debug.