The compiler we use has the irritating feature of choosing eight-bit arithmetic when adding eight-bit quantities, which often results in overflow and unexpected values. Adding a constant 2000 to the expression forces the compiler to use 16-bit arithmetic, and everything then works as it should.
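For example, the drive-mixing lines in the default code follow this pattern (pwm01, p1_x, p1_y, and Limit_Mix are names from the IFI default code; the exact expression here is quoted from memory):

    /* p1_x and p1_y are unsigned chars (0-255, 127 = joystick centered).
     * Without the 2000, "p1_y + p1_x - 127" would be evaluated in 8 bits
     * by this compiler and could wrap around; the 16-bit constant forces
     * the whole expression into 16-bit arithmetic.                        */
    pwm01 = Limit_Mix(2000 + p1_y + p1_x - 127);   /* left drive  */
    pwm02 = Limit_Mix(2000 + p1_y - p1_x + 127);   /* right drive */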
The Limit_Mix() function subtracts 2000 from its result, neatly compensating for the extra 2000 in its input.
The Limit_Mix function assumes that its input value has had 2000 added in, and thus subtracts it back out before returning.
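For reference, the version in the default code looks roughly like this (quoted from memory, so the exact limits may differ slightly):

    unsigned char Limit_Mix (int intermediate_value)
    {
        static int limited_value;

        if (intermediate_value < 2000)
            limited_value = 2000;
        else if (intermediate_value > 2254)
            limited_value = 2254;
        else
            limited_value = intermediate_value;

        /* Strip the 2000 bias back out before returning a 0-254 PWM value. */
        return (unsigned char) (limited_value - 2000);
    }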
Back in the Basic Stamp days, there were certain circumstances where it was necessary to avoid negative numbers. This Limit_Mix function is a legacy of those days. By rights, the function should now be:
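Something along these lines, with the 2000 bias removed entirely (a sketch of the idea, not the original posting):

    /* Simplified version: same clamping to the 0-254 PWM range, but the
     * caller passes the raw mixed value and no 2000 bias is needed.      */
    unsigned char Limit_Mix (int intermediate_value)
    {
        if (intermediate_value < 0)
            intermediate_value = 0;
        else if (intermediate_value > 254)
            intermediate_value = 254;

        return (unsigned char) intermediate_value;
    }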
OK. I forgot about the 8-bit quirk. Adding the 2000 bias does circumvent this trap, but it's an awfully kludgy way to do it. I assert the following would be clearer (assuming that all the definitions remain eight bits each):
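Something like this call, using an explicit cast instead of the 2000 bias (a sketch of the idea; the original code was not carried over here):

    /* The (int) cast forces 16-bit evaluation of the whole expression, so
     * the 2000 bias is no longer needed.  This pairs with the simplified
     * Limit_Mix sketched above.                                           */
    pwm01 = Limit_Mix((int) p1_y + p1_x - 127);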
The (int) is something called an explicit typecast. It forces the variable right after it to be converted to an integer type, which in our case is a 16-bit signed type. More importantly, having even a single int in the expression causes the compiler to choose 16-bit operations to evaluate the result, even if everything else is only 8 bits in size.
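A minimal illustration of the difference (hypothetical values, assuming the compiler's default non-promoting behavior described above):

    unsigned char a = 200, b = 100;
    int wrapped = a + b;         /* with this compiler: 8-bit add, wraps to 44  */
    int correct = (int) a + b;   /* cast promotes the expression to 16 bits: 300 */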
This “Limit Mix / why add 2000?” question is one that gets asked A LOT by teams up in Canada. I usually refer people to a post on the FIRST Greater Toronto Regional forums: