Time to feel stupid....

OK, seeing as I am fairly new to C, I have one question which I know will make me feel stupid: what are interrupts? Are they as self-explanatory as pointers (i.e. pointers point, so interrupts interrupt)? Also, what would be a practical use for an interrupt, and are they quirky and strange like pointers, where they seem simple enough but are the bane of every programmer's existence?

-Kesich

Err. Maybe not so much the first link, but the other three should help.

Interrupts are not unique to C. I'll do my best to give my two-cent explanation. Interrupts are really a capability of the microprocessor and not a language construct. They are not like pointers or other "C" things.

The PICs in the new RCs allow interrupts, which is to say that the new processors support interrupts.

So, what is an interrupt?

A typical microprocessor crunches and chugs away on a piece of code in a sequential manner. Step 1, 2, 3, ..., N, on and on, diligently following the sequence of N steps that a programmer has laid out.

This straightforward execution model works fine in a very simple environment. However, let's say you are running a program on a computer with a mouse. An N-step program is running, and suddenly the user clicks the mouse at Step 3 of the sequence. If the processor doesn't have interrupts, the user has to wait until the end of the Nth step before his mouse click is processed by the computer.

Now, fortunately, most modern computers use processors with interrupts. So, the mouse click initiates an interrupt. The processor stops at Step 3 and temporarily saves its "state." The processor then jumps to a section of code that handles the "mouse interrupt." When it's done, the processor reloads the saved state and continues with Step 3 of the original program.

So, interrupts are exactly that. When you program an interrupt, the processor halts its current execution whenever the interrupt condition becomes true. Usually an interrupt is triggered by a high or low level on a particular pin of the processor.

Now, where might you want to use interrupts in FIRST robotics? One idea might be a "bumper switch" that tells you when your robot hits a wall. You could tie this switch to one of the interrupt pins. When you bump into something, the processor could then run a special "I've bumped into the wall" piece of code.
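Just as a rough sketch of what that might look like (assuming Microchip's C18 compiler conventions; the pin choice, register names, and function names here are only illustrative, not a definitive recipe):

    #include <p18cxxx.h>

    static volatile unsigned char bumper_hit = 0;   /* flag set by the ISR, read by main code */

    void bumper_isr(void);              /* forward declaration for the vector stub   */

    /* On the PIC18 the high-priority interrupt vector lives at address 0x08. */
    #pragma code high_vector = 0x08
    void high_vector(void)
    {
        _asm goto bumper_isr _endasm
    }
    #pragma code

    #pragma interrupt bumper_isr
    void bumper_isr(void)
    {
        if (INTCONbits.INT0IF)          /* did external interrupt 0 fire?            */
        {
            INTCONbits.INT0IF = 0;      /* clear the flag so it can fire again       */
            bumper_hit = 1;             /* remember the bump for the main loop       */
        }
    }

    void enable_bumper_interrupt(void)  /* call once during initialization           */
    {
        INTCONbits.INT0IE = 1;          /* enable external interrupt 0 (pin RB0)     */
        INTCONbits.GIE    = 1;          /* enable interrupts globally                */
    }

The main loop then only has to look at bumper_hit whenever it gets around to it; the instant the switch closes, the ISR has already recorded the event.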

That seems like a lot of trouble, right? Why not just check the switch at the top of the normal 17 ms control loop? Well, in the case of the bumper switch an interrupt may not be necessary, but it illustrates the idea.
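For comparison, the polled version is just an ordinary check at the top of the loop (the input alias is made up here, and the switch is assumed active-low):

    /* Polled version: the switch is only noticed once per pass through the loop. */
    if (rc_dig_in01 == 0)      /* illustrative digital input alias */
    {
        /* react to the bump here */
    }

The difference is simply that the polled check can only happen as often as the loop runs, while the interrupt happens the moment the pin changes.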

Interrupts are very important to real-time programming, where waiting 17 ms to respond to an event may be an eternity. This is especially true for autonomous mode operations.

I hope this explanation is helpful. Maybe some computer science folks can correct/elaborate further.

Thanks IrisLab. That's exactly what I thought it was, but I wanted to confirm. Seeing as 17 ms is roughly 1/59th of a second, it seems to me that interrupts are overkill and you could make much better use of your programming space (800 lines), but I'll still keep them in mind if they ever become a necessity.

Actually, that 17 ms control loop figure is just off the top of my head. I can't remember the exact time, but I believe that's close. In any case, the number is only approximate: the control loop is not guaranteed to run every 17 ms. If your code becomes too long, you'll easily stretch beyond that 17 ms cycle. The 17 ms is NOT guaranteed.

With interrupts, you are pretty much guaranteed to get things done with greater timing precision.

A common thing to do is to set up an interrupt timer, like a high-precision alarm clock. You could, for example, set the timer to go off every 6 ms. The interrupt will occur at that regular interval, regardless of the length of your code. You are guaranteed to have the interrupt run every 6 ms.
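As a minimal sketch (again assuming C18 conventions, with the ISR hooked to the interrupt vector the same way as in the bumper example; the reload values and T0CON setting are only placeholders you'd recompute for your clock and prescaler):

    #include <p18cxxx.h>

    /* Placeholder reload: 65536 - 60000 = 5536 (0x15A0), which would give a
       6 ms period at 10 MIPS with a 1:1 prescaler -- recompute for your setup. */
    #define TMR0_RELOAD_HIGH  0x15
    #define TMR0_RELOAD_LOW   0xA0

    static volatile unsigned int ticks = 0;   /* incremented once per timer period   */

    #pragma interrupt timer_isr
    void timer_isr(void)
    {
        if (INTCONbits.TMR0IF)          /* Timer0 rolled over?                       */
        {
            INTCONbits.TMR0IF = 0;      /* clear the flag                            */
            TMR0H = TMR0_RELOAD_HIGH;   /* reload the timer: write the high byte...  */
            TMR0L = TMR0_RELOAD_LOW;    /* ...then the low byte to latch it          */
            ticks++;                    /* main code can watch this counter          */
        }
    }

    void enable_timer_interrupt(void)   /* call once during initialization           */
    {
        T0CON = 0x88;                   /* Timer0 on, 16-bit, 1:1 prescale (illustrative) */
        INTCONbits.TMR0IE = 1;          /* enable the Timer0 overflow interrupt      */
        INTCONbits.GIE    = 1;          /* enable interrupts globally                */
    }

The ticks counter then advances on a fixed schedule no matter what the main loop is busy doing.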

The key with interrupts is the GUARANTEE and the precise TIMING. I wouldn't casually write these capabilities off, especially for a good autonomous mode.

The 17 ms figure is correct for the EDU bot (it's ~26 ms for the full-size controller).

However, your code should NOT take that long to execute. If it does, you will start having problems with missing communication packets from the OI. If you take too long (I'm not sure of the exact time), the master microprocessor will shut down the robot (since it figures that the user code is stuck in an infinite loop).

That being said, assuming you keep your user code relatively compact, the user code (specifically the function Process_Data_From_Master_uP() ) should execute fairly close to every 17 ms. From what I can determine, the timing is controlled by the receipt of data from the master microprocessor.
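In other words, your per-loop work lives inside that function, roughly like this (sketched from memory, so treat everything here other than Process_Data_From_Master_uP() itself as approximate):

    void Process_Data_From_Master_uP(void)
    {
        Getdata(&rxdata);     /* read the next packet from the master uP; this paces the loop */

        Default_Routine();    /* the stock joystick-to-PWM mapping                             */

        /* Your extra per-loop code goes here.  Keep it short enough that you
           finish well before the next packet (roughly every 17 ms) arrives.   */

        Putdata(&txdata);     /* hand the results back to the master uP                        */
    }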

I'm not quite sure where you are getting that 800 lines from; would you care to elaborate on it?

Also, while 17 ms is well below human reaction times, it can be (though not always) an eternity in a computer. For instance, in 2002 I was trying to design a system to keep track of what zone the robot was in at all times. (If you're not familiar, in 2002 the playing field was divided into 5 sections by 1" strips of tape.) However, one of the problems that cropped up was that if the robot was moving at any sort of reasonable speed, the sensor would not detect the crossing of the line, because its signal would only be high for a really brief period (5-6 ms, I think).

So unless the program just happened to be checking the sensor during that brief period, it would never know that there had been a line. Now, if there had been interrupts on that controller, I could have used them to know immediately (well, within a microsecond or so) that I had crossed the line.
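The trick is that the interrupt acts as a software latch: the ISR fires during the brief pulse and sets a flag that sticks around until the main code gets to it. A sketch, with the same illustrative C18 conventions and pin choice as the earlier examples (the zone counter is hypothetical):

    #include <p18cxxx.h>

    static volatile unsigned char crossed_line = 0;   /* software "latch" set by the ISR */

    #pragma interrupt line_isr
    void line_isr(void)                 /* vectored to exactly as in the earlier sketch */
    {
        if (INTCONbits.INT0IF)          /* short pulse from the line sensor             */
        {
            INTCONbits.INT0IF = 0;
            crossed_line = 1;           /* stays set until the main code clears it      */
        }
    }

    void update_zone(void)              /* called from the main loop at its own pace    */
    {
        static unsigned char zone = 0;  /* hypothetical zone counter                    */
        if (crossed_line)
        {
            crossed_line = 0;
            zone++;                     /* we crossed one of the tape lines             */
        }
    }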

And while you could design some external hardware involving a latch to detect the pulse, it would add complexity and more things to fail.

Am I correct in saying that the drastic increase in MIPS (from 0.01 to 10.0) means that the length of your code should no longer impact the execution time, so that you will always be able to execute a loop once every 17 (or ~26) ms?

edit: not referring to the local IO loop, just the uP loop.

In general, the code (and code requirements such as I/O) will grow to fill all available space/time. There have already been posts in these forums where users have run out of space (one gentleman could not load his program into ROM and asked what could be deleted) and time (another was using interrupts at a very high repetition rate to decode shaft position).

The task of the design engineer is to choose and implement a design that is within the capabilities of the system constraints. The PIC 18C considerably increases (loosens) the system constraints we must stay within, but not infinitely.

Now I will borrow Car Knack’s Cloak of Prediction from Mr. Beatty:

“Before 3/1/2004, more than one engineer or EIT will complain that the new controller is not powerful enough.”

I suppose that what I'm getting at is this:

Is it true that adding one block of code (obviously not an infinite loop, but, say, another case to a switch) to an already “reasonably” sized Default_Routine will not force you to change the timings of your loop counters as drastically as before?

Quite correct. You can now have many pages of code which will execute with no obvious effect on the system, if it is done carefully.

The easiest trap to fall into is using floating-point arithmetic or calls to the math.h libraries. Since the PIC is an integer machine, a floating-point operation can cause literally thousands of machine operations to be performed. These operations take time, and time adds up.

Even some integer operations can take longer than you think. Since the PIC has no hardware divide, dividing 30,000 by 3 (without optimization) can involve 10,000 machine cycles (subtracting 3 from 30,000 ten thousand times).
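A rough illustration of the kind of rewrite that sidesteps both costs (the scale factors here are made up purely for the example):

    /* Costly on the PIC18: floating-point math is done entirely in software,
       so even one multiply like this can burn thousands of machine cycles.   */
    float feet_float(unsigned int counts)
    {
        return counts * 0.0327f;               /* made-up scale factor */
    }

    /* Cheaper: stay in integers and divide by a power of two, which the
       compiler can turn into shifts instead of a software divide routine.    */
    unsigned int tenths_of_feet(unsigned int counts)
    {
        return (unsigned int)(((unsigned long)counts * 335u) >> 10);   /* ~= counts * 0.327 */
    }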

Don't get me wrong, the PIC is a very powerful microcontroller. But a power drill is a powerful tool too, and it is not great at driving a 10-penny nail.

To borrow from another thread: Choose the path you follow wisely.

Thanks, that’s the type of answer I was hoping for.

When we did our autonomous programming before, if we added a few more lines of code to a specific step or routine, we had to subtract from the number of times the main do-loop was executed to compensate for the additional time spent. It was quite tedious changing values so frequently, which cost us precious rounds (we didn't start programming the autonomous mode until after we shipped, and we had no second controller to test on accurately).

From what I understand, this won't be as big of a problem this year, and code can be written more accurately around theoretical timings rather than "time-trial" timings.

Thanks for your help!