
So my problem is related to timers on the ATmega32 in general. I am using timer0 on my ATmega32 as a delay timer, with an interrupt every unit of time specified by the caller. For example, if the application user asks for an interrupt every 1 second, then I initialize timer0 and, based on some equations, I can delay for one second and then call the application user's ISR.

My problem is that the equations themselves require floating-point variables, while the microcontroller in the ATmega32 doesn't have a floating-point unit, so the compiler increases the code size to emulate them.

By the way, I am using my timer in Normal mode; the timer0 section of the datasheet starts at page 69.

Here are the equations I use:

T(tick)           = prescaler / freq(CPU)                -> T(tick) is the duration of one timer tick; freq(CPU) is the frequency of the MCU.
T(max_delay)      = (2^8) * T(tick)                      -> T(max_delay) is the maximum delay the timer can provide until the first overflow; (2^8) = 256 is the maximum number of ticks timer0 can count before overflowing.
Timer(init_value) = (T(max_delay) - T(delay)) / T(tick)  -> Timer(init_value) is the initial value loaded into the TCNT0 register at the start and after every overflow; T(delay) is the delay the user requires.
N(of_overflows)   = ceil(T(delay) / T(max_delay))        -> N(of_overflows) is the number of overflows needed to achieve the application user's delay when it is greater than T(max_delay).

And this is my code, just as a reference:

/*
 *  @fn         -:      -calculateInitValueForTimer0
 *
 *  @params[0]  -:      -a number in milliseconds to delay for
 *  @params[1]  -:      -the prescaler value used for timer0
 *
 *  @brief      -:      -calculate the initial value needed for timer0 to be inserted into the timer
 *
 *  @return     -:      -the initial value to be loaded into timer0 (TCNT0)
 */
static uint8_t calculateInitValueForTimer0(uint32_t args_u32TimeInMilliSeconds, uint16_t args_u16Prescalar)
{
    /*local variable for the time in seconds*/
    double local_f64TimerInSeconds = args_u32TimeInMilliSeconds / 1000.0;

    /*local variable that will contain the initial value for the timer*/
    uint8_t local_u8TimerInit = 0;

    /*local variable that will contain the time for one tick*/
    double local_f64Ttick;

    /*local variable that will contain the time for the max delay*/
    double local_f64Tmaxdelay;

    /*get the tick time; the 1000000.0 assumes F_CPU = 1 MHz*/
    local_f64Ttick = args_u16Prescalar / 1000000.0;

    /*get the max delay (256 ticks until the first overflow)*/
    local_f64Tmaxdelay = 256 * local_f64Ttick;

    /*see which init value to use; compare the doubles directly, since truncating to uint32_t would discard the fractional part*/
    if (local_f64TimerInSeconds == local_f64Tmaxdelay)
    {
        /*exactly one overflow needed*/
        global_ValueToReachCount = 1;
        /*begin counting from the start*/
        local_u8TimerInit = 0;
    }
    else if (local_f64TimerInSeconds < local_f64Tmaxdelay)
    {
        /*only one overflow needed*/
        global_ValueToReachCount = 1;
        /*preload the timer so that a single overflow gives the requested delay*/
        local_u8TimerInit = (uint8_t)((local_f64Tmaxdelay - local_f64TimerInSeconds) / local_f64Ttick);
    }
    else
    {
        /*many overflows needed: N(of_overflows) = ceil(T(delay) / T(max_delay))*/
        double local_f64Overflows = local_f64TimerInSeconds / local_f64Tmaxdelay;
        global_ValueToReachCount = (uint32_t)local_f64Overflows;
        if ((double)global_ValueToReachCount < local_f64Overflows)
        {
            global_ValueToReachCount++;
        }

        /*spread the total ticks evenly over the overflows*/
        local_u8TimerInit = 256 - (uint8_t)((local_f64TimerInSeconds / local_f64Ttick) / global_ValueToReachCount);
    }

    /*return the calculated value*/
    return local_u8TimerInit;
}

Currently I am not handling the case where only one overflow is required, but that is not the point of this question.

My problem is that calculating the timer's initial value and the number of overflows needed for a long delay is all done with double or float variables, and since the microcontroller in the ATmega32 has no FPU, the compiler has to emit extra code to emulate these operations. Is there any other way to calculate the timer initial value and the number of overflows without using double or float variables?
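To illustrate what I mean by avoiding floats, here is a rough sketch of doing all the math in integer timer ticks instead of seconds. It assumes F_CPU = 1 MHz as in the code above; the function name and the out-parameter are hypothetical, and I am not sure this is the right approach:

```c
#include <stdint.h>

/* Hypothetical integer-only variant: all math is done in timer ticks.
 * Assumes F_CPU = 1 MHz, so one prescaled tick lasts exactly
 * 'prescaler' microseconds and no floating point is needed. */
static uint8_t calcInitValueTicks(uint32_t delay_ms, uint16_t prescaler,
                                  uint32_t *overflows)
{
    /* total ticks needed: delay_us / tick_us = (delay_ms * 1000) / prescaler */
    uint32_t total_ticks = (delay_ms * 1000UL) / prescaler;

    /* number of overflows = ceil(total_ticks / 256), done with integers */
    *overflows = (total_ticks + 255UL) / 256UL;
    if (*overflows == 0UL)
    {
        *overflows = 1UL; /* always take at least one interrupt */
    }

    /* spread the ticks evenly over the overflows; preload TCNT0 with
     * 256 - ticks_per_overflow so each overflow takes that many ticks */
    uint32_t ticks_per_ovf = total_ticks / *overflows;
    return (uint8_t)(256UL - ticks_per_ovf);
}
```

For example, with a 1024 prescaler and a 1000 ms delay this gives 4 overflows with TCNT0 preloaded to 12, matching the floating-point version up to rounding.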

abdo Salm
  • All those floating-point arithmetic calculations will be extremely heavy operations on that chip, especially if you use them frequently. For this type of chip, I would suggest that you use the timer in free-running mode with a 1 microsecond resolution, and on each rollover of the timer add that amount to a 32-bit unsigned variable so that you can keep track of time in microseconds. Whenever you want to use a time delay, read that variable and save it as the beginning. Then check the elapsed time repeatedly with `currentTime - beginningTime >= delayTime`. – Kozmotronik Oct 07 '22 at 06:56
  • You can make use of macro tricks to make it more handy. This way is more efficient than calculating the delay at runtime. You will see some similar implementations in [protothread source code](http://dunkels.com/adam/pt/download.html). – Kozmotronik Oct 07 '22 at 07:11
  • You could make some improvements by calculating the reciprocal of the tick period, which is the tick rate. Then instead of multiplying by the tick period you divide by the tick rate. And instead of dividing by the tick period you multiply by the tick rate. (Multiplying by a fraction is equivalent to dividing by its reciprocal.) But this is only going to get you so far and you'll end up with rounding errors. A better design is to set up a fixed timer interrupt rate (perhaps 1 millisecond) and then just (if necessary, calculate and) wait for the requested number of interrupt expirations. – kkrambo Oct 07 '22 at 12:16
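The free-running idea from the first comment could be sketched like this (a minimal sketch under assumptions: the tick period, all names, and the interrupt handling are hypothetical; on a real AVR the 32-bit counter read would also need interrupts masked):

```c
#include <stdint.h>
#include <stdbool.h>

/* Monotonic tick counter, incremented by the timer overflow ISR
 * (e.g. once per millisecond). Name and period are assumptions. */
static volatile uint32_t g_ticks;

/* Body that would run inside the Timer0 overflow ISR. */
void timer_tick_isr(void) { g_ticks++; }

/* On AVR this read should be wrapped in an atomic block,
 * since a 32-bit load is not a single instruction there. */
uint32_t ticks_now(void) { return g_ticks; }

/* Unsigned subtraction handles counter wrap-around correctly. */
bool delay_elapsed(uint32_t start, uint32_t delay)
{
    return (uint32_t)(ticks_now() - start) >= delay;
}
```

The key point is that `currentTime - beginningTime` stays correct even when the counter wraps past 0xFFFFFFFF, because unsigned arithmetic is modular.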

0 Answers