I’ve seen the AND function used with numeric datatypes, and I really don’t understand:
What the heck does that even mean? Does it convert the integers into boolean arrays, perform the function element by element, and then convert them back?
All processors store numbers as groups of bits. Performing an AND on numeric datatypes simply performs the AND operation on those bits. From the processor's point of view, no conversion is necessary: the data is already sitting there in the right format to AND bit by bit.
This is often useful to grab parts of a number, usually the most or least significant bits.
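For example, here's a minimal C sketch (the values are made up for illustration) showing that ANDing two integers just ANDs their bit patterns directly, with no conversion step:

```c
#include <stdio.h>

int main(void) {
    unsigned char a = 0xF0;  /* bit pattern 1111 0000 */
    unsigned char b = 0x3C;  /* bit pattern 0011 1100 */
    unsigned char c = a & b; /* 0011 0000: only bits set in BOTH inputs survive */
    printf("0x%02X\n", c);   /* prints 0x30 */
    return 0;
}
```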
Be careful about which AND you’re talking about though, because there are two distinct types of AND.
Logical AND - “&&”: This is the boolean AND that you’re probably used to seeing. It can take numeric inputs, but it treats zero as FALSE and any non-zero value as TRUE, and it typically outputs 0 for false and 1 for true.
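In a C-like language, for example, that behavior looks like this (a small sketch, not specific to any one environment):

```c
#include <stdio.h>

int main(void) {
    int a = 6;               /* binary 110, non-zero, so "true" */
    int b = 3;               /* binary 011, non-zero, so "true" */
    printf("%d\n", a && b);  /* logical AND: prints 1 (both operands non-zero) */
    printf("%d\n", a && 0);  /* logical AND: prints 0 (one operand is zero) */
    return 0;
}
```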
Bitwise AND - “&”: This takes two numeric inputs and ANDs each pair of corresponding bits. For instance, with two 8-bit binary numbers: 11111111 & 00001111 = 00001111. Bitwise ANDs are commonly used for “bitmasking,” where you keep only a few bits of a number and strip the rest away (force them to 0).
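Here's what bitmasking looks like in C (the status byte and its field layout are invented for the example):

```c
#include <stdio.h>

int main(void) {
    unsigned char status = 0xB7;                 /* bit pattern 1011 0111 */
    unsigned char low4   = status & 0x0F;        /* mask keeps the low 4 bits: 0111 = 0x7 */
    unsigned char high4  = (status >> 4) & 0x0F; /* shift down, then mask: 1011 = 0xB */
    printf("low = 0x%X, high = 0x%X\n", low4, high4);
    /* Note the contrast with logical AND: 6 & 3 is 2, but 6 && 3 is 1. */
    return 0;
}
```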
In all my years of LabVIEW experience, I don’t think I’ve ever tried to AND a pair of integers, but I just tried it and d*&n if LabVIEW and polymorphism don’t just work hand in hand. I typically use byte-to-boolean array (or similar) for bitwise manipulation of integers, but I’m definitely going to exploit fixed-point boolean “arithmetic” more often.