Concerned about bogging down the RC?

Are you concerned that using too many interrupts and other sensors all together will slow down the entire processor?

My team’s a bit concerned about it, but we haven’t had any actual trouble… yet.
We’ve been working with the camera and with some encoders, but it’s all been separate so far. Once we actually put it all together, though…

To an extent. Instead of worrying about it, we just keep the number of interrupts down and hope there’s not a problem.

Last year, we used the camera with three encoders (custom encoder code, though) and had absolutely no issues. We also did not use pwm13-16.

Not concerned at all. We only sense what we need; generally this means a couple of encoders at reasonable PPRs, a camera, and maybe a gyro or other analog sensor.

Even with multi-channel PID controllers, we have never had any problem with either code space or execution time.

Though not really worried about it, we were interested enough in the problem to actually get some hard numbers.

Using the IFI white paper as a guide, and with the help of the C18 library reference, we used the onboard timers to figure out how long Process_Data_From_Master_uP takes to run. Here’s (more or less) what we did:

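/* This goes in the stock user_routines.c, so the usual IFI headers are
   already pulled in; the Timer1 routines below come from C18's timers.h,
   and printf from stdio.h. */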
void Process_Data_From_Master_uP(void)
{
  unsigned long int clocks; /* Variable to record the number of clock cycles
                             that the function takes */
  /* If I remember correctly, once the "new data from master processor" flag
     is set, you then have 26.2ms to do your thing and send back data.
     Hence, timing should begin HERE. */

  /* The timer setup given in the IFI white paper is "good enough" for what
     we're doing here. It overflows every 52ms this way, but I figure that
     if this function takes that long, you have bigger problems to 
     worry about (such as the Master processor shutting you down) */
  OpenTimer1(   TIMER_INT_OFF &
                T1_16BIT_RW &
                T1_SOURCE_INT &
                T1_PS_1_8 &            // 1:8 prescaling - important for below
                T1_OSC1EN_OFF &
                T1_SYNC_EXT_OFF );

  WriteTimer1( 0x0000 );  /* Supposedly, zeroing the timer after calling
                             OpenTimer1 isn't necessary, but paranoia is a
                             good thing. */

  Getdata(&rxdata);   /* Get fresh data from the master microprocessor. */

  /* Do your stuff here. Default_Routine() or whatever */

  Generate_Pwms(pwm13,pwm14,pwm15,pwm16);

  /* Read the elapsed time off the timer */
  clocks = ReadTimer1();
  /* Shut off the timer before it does something exciting, such as trigger an
     interrupt */
  CloseTimer1();

  /* Since the timer value was prescaled 1:8, multiply by 8 to get the number
     of clock cycles */
  clocks <<= 3;

  /* This printf takes a non-trivial amount of time, but oh, well. Shouldn't
     be too bad. */
  printf("LOAD: %8lu cycles, %8lu us, %2lu%% load\r",
         clocks, /* Print the raw number of cycles out */
         clocks / 10, /* Each clock cycle is 10^-7 seconds, so a divide by 10
                         will give you the time in microseconds */
         clocks * 100 / 262000    /* Again, since each clock cycle is 10^-7 sec,
                                   the 26.2ms you have translates to 262000
                                   cycles. This prints out a good "percent load"
                                   value for the function, telling you how
                                   close you are to a timeout */
        );

  /* Perhaps the printf should go after the Putdata? */
  Putdata(&txdata);             /* DO NOT CHANGE! */
}

(Note: I don’t have access to our actual code at the moment, so there may be bugs.)

The same treatment can be applied to your autonomous code, as it uses pretty much the same loop.
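
For instance, the same timer calls wrapped around the autonomous loop would look something like this (untested, and written from memory of the default code’s User_Autonomous_Code, so treat names like autonomous_mode and statusflag.NEW_SPI_DATA as approximate):

void User_Autonomous_Code(void)
{
  unsigned long clocks;

  while (autonomous_mode)          /* maintained by the Master processor */
  {
    if (statusflag.NEW_SPI_DATA)   /* fresh data every 26.2ms, same as above */
    {
      OpenTimer1( TIMER_INT_OFF & T1_16BIT_RW & T1_SOURCE_INT &
                  T1_PS_1_8 & T1_OSC1EN_OFF & T1_SYNC_EXT_OFF );
      WriteTimer1( 0x0000 );

      Getdata(&rxdata);

      /* Your autonomous routine here */

      Generate_Pwms(pwm13,pwm14,pwm15,pwm16);

      clocks = ReadTimer1();
      CloseTimer1();
      clocks <<= 3;                /* undo the 1:8 prescale, as before */
      printf("AUTO LOAD: %8lu cycles\r", clocks);

      Putdata(&txdata);
    }
  }
}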

My line of thinking was that what really matters is how long it takes you to take new data from the Master processor and turn it around into output.

We haven’t really done much measuring with this method yet, but we did find one interesting thing: the Generate_Pwms call takes a whopping 8% of your processing time.

If anyone else has improvements to this method, I’d appreciate suggestions. When we get around to actually timing some functioning code, I’ll post the results here.

Orborde,

Back in 2004/2005 I calculated the processor’s “free time” by counting the number of times the routine Process_Data_From_Local_IO was called in a one-second period. Since this routine gets executed continually whenever there is nothing better to do (and we didn’t have any custom code in it), it’s a good way to see how loaded the processor is. In my case it took 14 instruction cycles to complete the while loop in Main.c plus the call/increment/return of the Process_Data_From_Local_IO routine, as long as there wasn’t new OI data. With the processor running at a 10 MHz instruction clock, it was a simple matter to calculate a percent free time. We posted this to one of the User_Bytes so we could monitor it on the dashboard, and you could see the percent free time change as the number of interrupts varied with motor speeds (drive encoders, mainly).
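
A minimal sketch of that counting trick, reconstructed from memory (the routine name is the real one from the default code, but the once-per-second trigger and the User_Byte plumbing are left to you):

static unsigned long idle_count = 0;

/* Runs whenever the processor has nothing better to do, so every call
   represents ~14 instruction cycles of free time. */
void Process_Data_From_Local_IO(void)
{
  idle_count++;
}

/* Call once per second (e.g. every 38th pass through
   Process_Data_From_Master_uP). A fully idle processor makes
   10,000,000 / 14 = ~714,286 passes per second, so dividing the count
   by ~7,143 yields percent free time. */
unsigned char Percent_Free_Time(void)
{
  unsigned char pct = (unsigned char)( idle_count / 7143UL );
  idle_count = 0;
  return pct;   /* write this to one of the User_Bytes for the dashboard */
}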

As for the 8% for the Generate_Pwms call, that sounds about right. My understanding is that the routine uses a loop to “wait out” the 1-2 msec PWM pulse in software. Assuming a maximum 2 msec pulse executed about 38 times per second, that comes out to roughly 76 msec of waiting per second, or about 8% of the processor.

Mike