That's normal computer science

It makes sense once you begin thinking of each data type as the bits it is capable of storing, rather than the everyday math you are accustomed to.
Each data type has its own representation, and we have to understand what that representation means, how it affects the values we store in it, and how it affects any calculations or intermediate results along the way.
If you use a data type that can only hold 0-255, what would you expect to happen when adding or subtracting leaves you with a number outside that range? If you look in the LabVIEW palettes, you'll find a dozen different numeric data types you can use. Understanding why you would use each of them is one of those skills we develop.
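Here's a rough sketch of the same idea in C (since LabVIEW is graphical and hard to show in text): unsigned 8-bit arithmetic, like a U8 in LabVIEW, wraps around modulo 256 when the result doesn't fit in 0-255. The variable names are just for illustration.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* An 8-bit unsigned integer (a U8) can only hold 0-255. */
        uint8_t a = 250;
        uint8_t b = 10;

        uint8_t sum  = a + b;   /* 260 doesn't fit, so it wraps around to 4  */
        uint8_t diff = b - a;   /* -240 can't be negative, so it wraps to 16 */
        printf("250 + 10 as U8 = %u\n", sum);
        printf("10 - 250 as U8 = %u\n", diff);

        /* Intermediate calculations matter too: averaging two large U8 values
           goes wrong if the sum is kept in a U8 before the division. */
        uint8_t c = 200, d = 200;
        uint8_t bad_avg = (uint8_t)(c + d) / 2;   /* 400 wraps to 144, then /2 = 72 */
        printf("bad average of 200 and 200 = %u\n", bad_avg);

        return 0;
    }

The "bad average" line is the classic intermediate-overflow trap: do the sum in a wider type (or divide first) and the answer comes out right.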
Integer (computer science):
http://en.wikipedia.org/wiki/Integer_%28computer_science%29
Signed number representations:
http://en.wikipedia.org/wiki/Signed_...epresentations