
I want to generate square wave signals on a first-generation Raspberry Pi's GPIO output.

For this purpose I first wanted to use wiringPi.

The implementation language is fixed: it must be C or C++.

Going by wiringPi's blink example, the solution should be straightforward:

#include <wiringPi.h>
int main (void)
{
  wiringPiSetup () ;
  pinMode (0, OUTPUT) ;
  for (;;)
  {
    digitalWrite (0, LOW) ; delay (500) ;
    digitalWrite (0,  HIGH) ; delay (500) ;
  }
  return 0 ;
}

But I want ~600 microsecond pauses between the edges instead.

Therefore I created another delay function:

#include <errno.h>
#include <time.h>

void myDelay(long int usec) {
  struct timespec ts, rem;

  ts.tv_sec = usec / 1000000;
  ts.tv_nsec = (usec % 1000000) * 1000;

  /* retry with the remaining time if interrupted by a signal */
  while (clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, &rem) == EINTR) {
      ts = rem;
  }
}

Then I replaced the two delay(500) calls with myDelay(600).

This mostly works; however, myDelay sometimes waits noticeably longer than 600 microseconds.

Please see this scope image (not reproduced here): most pauses are ~600 µs, but some are visibly longer.

How can I get exactly equal square pulses with C/C++?

I also tried a Python script with pigpio:

import pigpio

pi = pigpio.pi()
pi.wave_add_new()
pi.set_mode(1, pigpio.OUTPUT)
wf = []
for i in range (0, 100):
    wf.append(pigpio.pulse(0, 1<<1, 600))
    wf.append(pigpio.pulse(1<<1, 0, 600))
wf.append(pigpio.pulse(0, 1<<1, 1000))
pi.wave_add_generic(wf)
wid = pi.wave_create()
pi.wave_send_once(wid)
while pi.wave_tx_busy():
    pass
pi.wave_delete(wid)
pi.stop()

And this Python script gives the intended result (i.e. all squares are equal on the scope).

Now the question is: how can I achieve the same result with a pure C/C++ implementation (without having to mess with the gpioWave* functions)?

Daniel
  • http://man7.org/linux/man-pages/man2/clock_nanosleep.2.html Read notes. – KamilCuk Sep 12 '19 at 18:12
  • wiringPi also has http://wiringpi.com/reference/software-pwm-library/ And you can browse it https://github.com/WiringPi/WiringPi/blob/master/wiringPi/softPwm.c – KamilCuk Sep 12 '19 at 18:20

3 Answers


I usually prefer sleeping until an absolute point in time. The remaining time is treated differently on different platforms, so I try to stay away from it.

#include <time.h> // clock_gettime, clock_nanosleep, TIMER_ABSTIME

inline timespec init_clock() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts); // or try using CLOCK_MONOTONIC_RAW
    return ts;
}

inline void add_usec(timespec& ts, long int usec) {
    ts.tv_nsec += usec * 1000;
    time_t sec = ts.tv_nsec / 1000000000;
    ts.tv_sec += sec;
    ts.tv_nsec -= sec * 1000000000;
}

inline void myDelay(long int usec) {
    timespec ts = init_clock();

    add_usec(ts, usec);

    while(clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &ts, nullptr));
}

Another thing could be to measure time since the last loop iteration. That would remove much of the fuzziness caused by other events in the system. To do this, save the clock between calls by making it static:

inline void myDelay(long int usec) {
    static timespec ts = init_clock();

    add_usec(ts, usec);

    while(clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &ts, nullptr));
}

Both the above combinations could also be done using the standard C++ library <chrono>. This example saves the clock between calls for a more accurate square wave:

#include <wiringPi.h>

#include <chrono>
#include <thread>

inline void myDelay2(std::chrono::microseconds sleep_time) {
    static auto cl = std::chrono::steady_clock::now();
    cl += sleep_time;
    std::this_thread::sleep_until(cl);
}

int main() {
    using namespace std::literals::chrono_literals;

    wiringPiSetup();       // needed before any pin I/O
    pinMode(0, OUTPUT);

    while(true) {
        digitalWrite (0, LOW) ; myDelay2(600us) ;
        digitalWrite (0,  HIGH) ; myDelay2(600us) ;
    }
}
Ted Lyngmo
  • Making *timespec ts* static will make the timings more exact by eliminating the overhead of the for-loop and init_clock fn, right? +1 also for the **, will try and accept your answer if working correctly. – Daniel Sep 12 '19 at 20:11
  • 1
    Yes, and it will also smoothen out the inevitable small differences in timing that will occur. A LOW or HIGH 1000000000 periods after the start will happen pretty close to what you could extrapolate by doing the math. When using a normal delay, the inaccuracy would build up over time. – Ted Lyngmo Sep 12 '19 at 20:17
  • The name _delay_ for the function with memory isn't good though since it's not sleeping from the point it's called. A better name is needed :) – Ted Lyngmo Sep 12 '19 at 20:21
  • Unfortunately both ways (myDelay and myDelay2) won't solve the problem. Maybe it is due to some system latency of setting the gpio? – Daniel Sep 13 '19 at 12:02
  • @Daniel and you used the versions with "memory", right? 600µs is pretty short so perhaps you have some system events messing this up. Did you try `gpioDelay`? – Ted Lyngmo Sep 13 '19 at 12:12
  • I tried it yes, but `gpioDelay` won't help, because it is just basically a busy-wait if wait time is less than 100us, otherwise it calls `gpioSleep` which yields to `clock_nanosleep`. – Daniel Sep 13 '19 at 13:16
  • I see. And `sched_setscheduler` `SCHED_FIFO` or `SCHED_RR` with `sched_get_priority_max()` as @sonicwave suggested didn't help either? Then, if you use the same C functions that the python API is wrapping, does that work? If it doesn't, something is very strange. – Ted Lyngmo Sep 13 '19 at 13:34
  • `sched_get_priority_max()` did not help either. The C functions of pigpio (which the Python API wraps) work OK. It also turned out it uses a HW timer to get nice and solid waveforms. – Daniel Sep 13 '19 at 21:47
  • "_Also turned out it uses HW timer to get nice and solid waveforms_" - Aha, ok, then if you want to do something not directly supported by the SDK you may need to dig into those timers too. – Ted Lyngmo Sep 14 '19 at 06:56

Have a look at the description for clock_nanosleep (from http://man7.org/linux/man-pages/man2/clock_nanosleep.2.html, emphasis mine)

clock_nanosleep() suspends the execution of the calling thread until either at least the time specified by request has elapsed, or a signal is delivered that causes a signal handler to be called or that terminates the process.

That is, the only guarantee is that you'll sleep for at least 600 microseconds - but without any upper bound for how long you'll actually end up sleeping.

I'll assume that you're running one of the default Linux distros on your Raspberry Pi. Linux runs a lot of stuff under the hood, apart from your application, and is by default not a so-called real-time operating system. Real-time in this sense does not mean anything about performance (in the sense of how fast it runs or processes data), but is about guaranteeing a maximum upper bound for waits such as the one above.

If you want to get closer to what you need, you can try one or both of the following:

  1. Use a real-time scheduler. This boosts the priority of your thread, above that of basically everything else running in userspace. This is a rather quick thing to try - have a look at sched_setscheduler() (http://man7.org/linux/man-pages/man2/sched_setscheduler.2.html)
  2. Since you still have things running in kernelspace, you'll probably get better performance by switching schedulers, but you'll still have "issues" with the kernel. That's where the PREEMPT-RT patch comes into play - it makes the kernel better suited for things such as this. This will require you to compile your own kernel, which is definitely a bit more complicated than just changing the scheduler, but not impossible at all. A quick google provides lots of hits from other people who have done the same thing.
sonicwave
  • thanks! I have also tried *SCHED_FIFO* with *sched_setscheduler* but it didn't bring much better results. PREEMPT-RT is surely a way to go, but in the question I noted that another implementation of the gpio handling (pigpio) works well on the same hardware with the same kernel. – Daniel Sep 12 '19 at 20:19
  • @Daniel - ah, I've just had a quick look at pigpio, but my guess is that it either does busy waiting (that is, keeps the thread running, instead of sleeping and letting other things run), and/or uses hardware support - for instance through a hardware timer that can generate the waveforms. The first case would probably help, as long as you don't have much load on the system; the second case is more or less guaranteed to work, but is less flexible (timers can't be programmed to generate arbitrary waveforms). Any reason for not just using pigpio and the C interface? – sonicwave Sep 12 '19 at 20:39
  • I think it uses HW support. No reason of not using it, I just want to know how exactly it is working. – Daniel Sep 13 '19 at 13:05
  • That's the great thing about open source though - you can just download the code and have a look ;) – sonicwave Sep 13 '19 at 13:50
  • Yepp, already decoded it.. not super-easy. It uses DMA which (on RPi) can control GPIO directly without CPU. Also uses PWM (or PCM) which only paces DMA operations for precise timing. All this happens without CPU usage therefore no OS functions affect timing. – Daniel Sep 13 '19 at 21:56

That is probably due to the way delay() is implemented. In an operating-system context, it is better to sleep in a task that is delaying for more than a very short interval. That subjects the task to a scheduling delay, which may be longer than requested, in order to give other waiting tasks time to run. Moreover, running tasks are not usually preempted the moment another task's sleep expires. In that context, sleep guarantees a minimum time before the task runs again, not a maximum.

Outside of operating systems, delays are often busy-waits and are therefore reliable as long as they cannot be interrupted.

wallyk