21-02-2004, 18:47
Jeff McCune
Alpha Geek
#0677 (The Wirestrippers)
Team Role: Mentor
 
Join Date: Jan 2003
Location: The Ohio State University
Posts: 67
Integer Representation - Huh?!

Not sure if I missed something in my CIS classes, but can someone offer some insight into this behavior?
Code:
int foo;
foo = 127 + 127;
printf ("%16b | %d \r\n", foo, foo);
// prints: "1111 1111 1111 1110 | -2"
Is this a bug in the printf code, or are the compiler and microchip actually representing this value with ones in the 15 most significant bits and a zero in the least significant bit?

It would seem that something might be treating the literal values as an 8-bit integer and then padding with the most significant bit (sign-extending) to get to 16 bits. That "something" may be printf, or the compiler/CPU... ?
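For reference, here is a minimal sketch of what I think is happening, written in standard C for a desktop compiler rather than the PIC toolchain, so it only illustrates the two's-complement sign extension I'm guessing at:
Code:
#include <stdio.h>

int main(void)
{
    /* 127 + 127 = 254 = 0xFE; squeezed into a signed 8-bit type it reads as -2
       on a two's-complement machine */
    signed char narrow = (signed char)(127 + 127);

    /* Widening a signed 8-bit value copies its sign bit into the upper byte,
       so 0xFE becomes 0xFFFE (still -2) as a 16-bit signed integer */
    short wide = narrow;

    printf("narrow = %d (0x%02X)\n", narrow, (unsigned char)narrow);
    printf("wide   = %d (0x%04X)\n", wide,   (unsigned short)wide);
    /* prints:
       narrow = -2 (0xFE)
       wide   = -2 (0xFFFE) */
    return 0;
}
If that is what's going on, then the binary output above is just the sign-extended version of 0xFE.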

This code is my workaround:
Code:
unsigned char foo;
foo = 127 + 127;
printf ("%16b | %d \r\n", (int)foo, (int)foo);
// prints: "0000 0000 1111 1110 | 254"
 
// This also works, keeping all my data types as int
// This also implies it's a bug in how the chip/compiler handles literal values
int foo;
foo = (int)(unsigned char)(127 + 127);
printf("%16b | %d \r\n", foo, foo);
From the last example, it would seem that literal values are padded out with the most significant bit (sign-extended), but this padding doesn't occur if you cast the data along the way. Am I correct in this guess?
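If that guess is right, another workaround that might avoid the extra variable would be to mask off the upper byte instead of casting through unsigned char. I haven't tried this on the controller, so treat it as a sketch:
Code:
int foo;
foo = (127 + 127) & 0x00FF;   // clear the (possibly sign-extended) upper byte
printf("%16b | %d \r\n", foo, foo);
// hoped-for output: "0000 0000 1111 1110 | 254"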

Any insights? I'd like to just use one data type (int) everywhere, but this is hanging me up, since it becomes difficult to debug the output.
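One other thing I may try, assuming the arithmetic really is being carried out at 8 bits: cast at least one operand up to int before the addition, which should make the whole expression 16-bit and let me keep int everywhere. Again, untested on the actual controller:
Code:
int foo;
foo = (int)127 + 127;   // cast one operand so the addition is (hopefully) done at 16 bits
printf("%16b | %d \r\n", foo, foo);
// hoped-for output: "0000 0000 1111 1110 | 254"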
__________________
Team 677 - The Wirestrippers - Columbus School for Girls and The Ohio State University
EMAIL: mccune@ling.ohio-state.edu

...And all you touch and all you see
Is all your life will ever be...

Last edited by Joe Johnson : 21-02-2004 at 21:57. Reason: Typo in the Title