Floats

I was curious if anybody has any idea on this… but about how many floats can you get away with using before you start running into problems?

Well, if you are just creating floats, they do the same thing as any other variable not in use: they take up memory.

The question should be how many floating point operations per second you can do before you start running into problems. I don’t think it’s in the manual, but I’m sure you can find out somewhere how many instructions it takes the chip to complete each type of floating point operation.

Don’t worry about it until you run into problems. Preemptive optimization almost always goes to waste, as the chip is so fast that you (usually, there are exceptions) can’t feed it data fast enough to use all its processing power. If a variable is best stored as a float, use one. If it’s an integer, use one of those. Saves a lot of headaches. Remember the basic stamps which couldn’t even use floats? Ever want one of them? Or how about signed numbers? :stuck_out_tongue:

:eek: This is definitely NOT a good engineering principle. One of the many unwritten laws of engineering failure is that the problem will occur at the most inopportune time. What if professionals did this? What if the automotive guys said that there’s a problem with the brake system, but we’ll take care of it when the problem shows up? What if the communications guys said that there’s a problem in making phone calls when the system is under load, but we’ll wait until the problem shows up during the Super Bowl to fix it?

The point is that if you know that something will cause a problem in the future, design your system in such a way that you never get yourself into that situation.

Preemptive optimization almost always goes to waste, as the chip is so fast that you (usually, there are exceptions) can’t feed it data fast enough to use all its processing power.

Wanna bet? If you get enough code running in your loop you can easily get the RC to miss packets. Trust me, we’ve been there.

If a variable is best stored as a float, use one. If it’s an integer, use one of those.
This might be the case in a non-embedded environment, but when you have limited resources, you don’t want to go around wasting valuable cycles doing float calculations if you can get away with integer math. For each float operation that you do, you probably add a dozen or more assembly instructions.

Also, floats take up more program space, which is definitely a valuable commodity.

This is definitely NOT a good engineering principle.

While that may be true, the programming part is quite a bit more forgiving - unlike forces on the robot and such, the program (if written well) will do the exact same thing every time - you don’t have hostile robots crashing into your code. Generally, if it works in testing, it will work anywhere - and if it doesn’t, well, it’s much more likely to be a bug in your code than in your choice of variables. According to the IFI website, you have 1800 bytes of variable space. That’s 225 double-precision (8 bytes, right?) variables. Granted, there is a bit of overhead associated with the default code - it has its own state variables and such - but even (just a wild guess) if it used 100 of those (which I’m pretty sure it comes nowhere near) that still leaves you with another 125 for your own use - quite enough for most people.

The point is that if you know that something will cause a problem in the future, design your system in such a way that you never get yourself into that situation.

That’s like saying that we know at some point we’re going to lose a pushing match - so we’d better gear down all our motors 600:1 so that we can win it! At some point, you’re losing more by doing that than if you do eventually run into that limit. I’m not saying that your point isn’t reasonable - just that in this case convenience for the programmer may be more important.

Wanna bet. If you get enough code running in your loop you can easilly get the RC to miss packets. Trust me, we’ve been there.

Actually, I’ve never seen this happen, even though we were running a few PID loops (yes, with floating-point numbers :stuck_out_tongue: ) along with a few interrupts firing quite often to count motor revolutions. It’s interesting that that would happen - I would have thought that as long as you hit the main loop at least every 26.2 ms you would be ok. Doesn’t the master processor buffer the incoming radio data? If you do in fact need to hit that more often, then I may be completely and totally off.

Also, floats take up more program space, which is definitely a valuable commodity.

We have (again according to IFI) 32k of program space. Again some of that is taken up by the default code and libraries and such, but even if you’re left with only a few K of space, you’re not likely to save enough to do anything significant with a few extra bytes here and there.

Of course, everything I’ve said goes right out the window as soon as you start using arrays and structs and such. In that case, you may be much better off using chars or bitpacking :stuck_out_tongue:

I probably don’t know enough about how IFI has this all set up to be totally right, but it seems to me that replacing variables which do in fact hold floating point values with integers, just because it saves you a few bytes of space, isn’t worth it.

Dave brings up good points. I didn’t intend to state that I wanted to use a ridiculous number of floats in my operations, but I’ve found that for certain things, the precision is necessary. However, I know that a few floats won’t kill the controller and make it skip packets unless you go overboard, so there isn’t actually a problem (unless I’m mistaken). If I didn’t have to use floats for certain operations, I wouldn’t, because they do add a lot more strain than any programmer would like to have in a limited-resource system. I guess I’m just curious where the line is drawn between being safe in your use of floats and when you’ll start to observe packet loss (which is something nobody wants). I’ve been coding using the fewest floats I can while still getting the precision I need in some parts (with sensor calculations specifically).

Dave, do you know any other ways to work with the decimal calculations in these sensor calculations so that you run the least risk of having problems while still keeping the accuracy necessary? Currently, I’m just doing my best to use the least number of floats possible. Thanks!

I think you’re missing a key point here: the microprocessor inside the RC does not support floating point operations in hardware. Why is this a big deal? Because now, every time you add two floating point numbers (or divide, or do basically anything with them), the compiler has to generate quite a few extra lines of assembly code to do that (in some cases, several dozen lines of assembly). So, in essence, each time you use a floating point operation, you’re consuming an order of magnitude more program space than if you had used an integer (nearly all integer operations are a single assembly instruction). This also translates to taking much longer to execute. All in all we’re potentially talking about hundreds or maybe even thousands of extra bytes, not just a handful.

Generally, if it works in testing, it will work anywhere

Man, I really do wish that was the case… Why do you think so many professional software programs out there are still buggy? The problem with software is that it is extremely hard to create tests that exercise every bit of a program in every possible running scenario and to test all the different paths through the code that can be taken.

you’re not likely to save enough to do anything significant with a few extra bytes here and there

Our control system was complicated enough last year that we filled up the program space completely. We were able to work with it and take care of it, of course. But we were careful from day 1 to employ good software practices (we’re a team heavily populated with software engineers so it comes naturally). The trouble is that if you take the attitude of “hey, as long as it isn’t causing any trouble it’s fine” you may find yourself with 5 minutes before the match that could mean winning or losing the tournament only to find out that the tweak you need to make to your autonomous code won’t fit because you’re out of space! If you ever find yourself in that situation then I guarantee that you’ll wish you had been more careful about using resources wisely.

Also, if you ever end up developing embedded software for a living, you’ll find that companies generally want to put the smallest, cheapest microprocessor inside something that they can get away with. If your company is going to ship 5 million of the newest, hottest gizmo to all the teenagers in the country, and you tell them that you need to spend an extra $1 or $2 on a faster processor, then that’s $5 million extra cost to produce the product that the company will either need to pass on to the customer (and hope the competition doesn’t do it for less) or subtract from the profits. In that case, writing efficient code will be seen as very important.

I know you’re talking to the other Dave, but… :slight_smile:

One trick that we use frequently is to take a 16-bit number and treat the lower 8 bits as a fractional portion. That way you can stick with simple integer math operations but still obtain higher accuracy. We’ve done this with all of our distance calculations from wheel encoders, angle calculations from the gyro, PID controls, etc. There are always tradeoffs where using floats is justified and a decent idea. I think Dave was trying to make the point that (like any compromise) you just need to be smart about it and not make the decision without taking some time to consider the consequences.

That sounds like a good idea. Could you show me an example of what you mean? I don’t know how you can treat the lower 8 bits as a fractional portion.

I agree with you about the tradeoffs, certainly; that’s why I was posting this thread before I got too deep, to see if there were any alternatives and where I’ll start hitting problems if I don’t. Thanks!

Those are the fixed-point pseudo-floats that I keep telling people to use! (I picked it up programming in Pascal under DOS, where there are similarly annoying memory limits.)

Here’s how it works:
Take a number, say the sine of 75° (or any arbitrary number), and express it as a 64-bit real (which is a float on this platform, if I’m not mistaken). You’re using 64 bits to store a lot of different things–the numerical part, the sign (i.e. + or -) and the exponent (i.e. 10^n). As it happens, the value sin 75° = 9.65925826289068E-0001 is stored as 965925826289068, with an exponent of 10^-1 and a sign of +.

We’ve got about 15 significant digits here (965925826289068 fits in the 52 bits reserved for it); the question is, do we want or even need 15 significant digits? Probably not. So let’s toss some. (This is what he meant.) We can take the most significant few, say 9659, which stores neatly in 15 bits (with lots left over, actually).

While we’re at it, we have that exponent block of 11 bits; it’s no use to us, so long as we can say for sure that all of our values are going to land within a certain range. For sine, it’s between -1 and 1–which is to say, in the neighbourhood of 10^0. So we don’t need 11 bits to represent that either. We instead assume that our data is going to be in that range, and don’t even store the exponent. (This of course means that we can’t use this as easily as a float, because the order of magnitude isn’t stored in the variable itself–it’s in our heads!)

The sign bit stays, because we have both positive and negative numbers.

What we’re left with is 15 + 1 = 16 bits, which happens to be equivalent to an integer type that is directly supported on the PIC (16-bit signed int). So instead of working with the number 9.65925826289068E-0001 (a float), we use the number 9659 to represent the same thing (knowing that we really mean 0.9659). All of our routines, custom-written, of course, make that same assumption, but can therefore store in 16 bits what would have taken 64, sacrificing useless precision and gaining the marked advantage of using the built-in integer operations.

But wait–you can do better. We’ve naïvely assumed that we need to use the 15 bits to store a number up to 10000 (representing 1.0000). Actually, 2^15 is 32768. So, if we want, we can scale everything by this proportion (3.2768) and end up with a little free precision. Once again, our functions have to know that when we input 32767*, we really mean 1.0000. Similarly, sin 75° would be handled internally as 31650.

Is this representation easy to read? No. Is it easy to code? Not trivial, but not hard either. Certainly not beyond what’s expected of a student in any reputable senior high school programming course. Basically, if the mere humans can get their heads around it, they can write a function to printf whatever they please, while letting the PIC do its thing with integers (which it likes).

*Don’t do 32768–that’s actually an overflow, because we need to account for 0 in the integer number line on the interval [-32768, 32767]; it is intentionally asymmetrical.

…oh
That kind of deflates my entire argument. Never mind… :yikes:

Steven,

To expand for the benefit of other interested viewers, I thought I’d chime in…

I am entirely in agreement with Dave, Dave and Tristan on this issue. Never use floating point…

Let’s look at a **sin** function. According to Microchip application note AN660, the math library “sin” function has a performance of 4030 (minimum) to 6121 (maximum) machine cycles. Note: This data is for the PIC16 family. I could not find PIC18 data, but I expect it is similar.

As my fixed point integer representation, I choose 0x0400 = 360 degrees for my integer variable “theta”. This is a granularity of about 1/3 of a degree (I dare you to argue that you need more). A table of integers (0x4000 = 1.000) is generated in Excel for the 1024 data points required and included in the code as a rom const int array named sine_table (2K of program memory used).

The statement

sine = sine_table[theta];

executes in 15 machine cycles. Note that you get cos just as easily from the same array:

cosine = sine_table[(theta + 0x0100) % 0x0400];

If using 2K of program space bothers you, you can use various math techniques such as a Taylor series expansion (see this thread for a discussion).

Regards,

Mike

Post Script: I never saw Seth’s last question in the above link until just now…


// sine_table is an array of 1024 2FX14 integers where 0x4000 = 1.0 and 0xC000 = -1.0
// sine.csv is a slightly modified "comma-separated values" file created by Excel.
// The declaration "rom const" causes this array to exist in the PIC program space.
 
rom const int sine_table[DEG_360] = 
{
 #include "sine.csv"
};

Sorry for not responding sooner Seth… - Mike

Sorry for not responding earlier.

Floats are generally not a good thing in embedded programming. Granted, it’s a little late for you this year, but I am presenting an Advanced C Programming topic at the Conference in Atlanta. In the presentation I will be covering fixed-point math (i.e. implementing fractions with only integer data types). If you are using floats, I would suggest coming and checking out the presentation.

In the meantime, I would suggest searching the web for “fixed point math”. It’s probably a little late to change your code for this year, but you can get a head start for next year.

Just as a side note, our autonomous controller from last year (and this year) mapped the field with X,Y coordinates to a calculation accuracy of less than one inch (of course our real accuracy depended highly on the sensor calibrations, but the calculation accuracy was good). We had to use trig (sin and cos) to do the mapping, and arctan to do a go-to X,Y command. EVERYTHING was done using fixed-point math with integer data types. 95% was done with 16-bit variables and the stuff that needed higher precision was done with 32-bit variables. We’ve NEVER used a float (that statement is also true for everything we do at work). So, it can be done as long as you know what you’re doing.