
Can systemd manage a pipeline similar to how the daemontools family does it? And if so, what's the best way to achieve this?

I want to run the equivalent of `service1 | service2`, where both `service1` and `service2` are services (separate or not) managed by systemd.

I would like to be able to restart the service2 process without interrupting service1. In other words, the file descriptor to which service1 is writing must not be closed when service2 exits. When a new instance of service2 starts, it should inherit the existing file descriptor so that stdout from service1 will flow into the new service2. (Much like daemontools maintains a pipe between run and log/run, though the pipeline need not be a service and a logger.)

Perhaps something with a systemd-managed FIFO in between?
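To illustrate the behavior I'm after in plain shell (no systemd involved; the FIFO path below is just for the demo): as long as some process holds the write end open, a reader can exit and a new reader can pick up exactly where the old one stopped.

```shell
#!/bin/sh
# Plain-shell sketch of the desired semantics: a FIFO whose write end
# stays open while the reader "restarts", so no buffered data is lost.
fifo=/tmp/pipeline-demo.fifo   # demo path only
rm -f "$fifo"
mkfifo "$fifo"
# Open the FIFO read-write on fd 3 so neither end blocks or sees EOF.
exec 3<>"$fifo"
printf 'line1\nline2\n' >&3    # "service1" writes while no reader is attached
IFS= read -r a <&3             # first "service2" instance consumes one line
IFS= read -r b <&3             # a "restarted" instance continues with the next
echo "$a $b"                   # prints "line1 line2"
exec 3>&-
rm -f "$fifo"
```

The `read` builtin consumes one byte at a time from a pipe, so the second reader sees exactly the data the first reader left behind.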

Patrick
  • This is not the way to capture log output on a systemd-based system; see below for how to do it correctly. If you're trying to do something else, please be specific as to what it is. – Michael Hampton Aug 27 '16 at 12:46
  • 1
    @MichaelHampton `logger` may be a traditional logger, or it may be something else entirely. The important part is that it's one service in a pipeline. – Patrick Aug 30 '16 at 21:27

2 Answers


Finally had the opportunity and need to work through this myself. My solution requires support for the fd option to StandardOutput=, which is available in (at least) systemd version 232 but not in version 215.

There are three services and two FIFOs. Together they create the pipeline input | filter | output, and any part of the pipeline can be individually restarted without data loss.

The input process writes to a FIFO from which filter reads, which in turn writes to a FIFO that output reads.

input.service

[Unit]
Description=The input process
Requires=filter.socket
After=filter.socket

Wants=filter.service output.service

[Service]
TimeoutStartSec=infinity

Sockets=filter.socket

StandardInput=null
StandardOutput=fd:filter.socket
StandardError=journal
ExecStart=/path/to/input

Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target

filter.service

[Unit]
Description=The filter process
Requires=filter.socket output.socket
After=filter.socket output.socket

[Service]
TimeoutStartSec=infinity

Sockets=filter.socket
Sockets=output.socket

StandardInput=fd:filter.socket
StandardOutput=fd:output.socket
StandardError=journal
ExecStart=/path/to/filter

Restart=always
RestartSec=5s

filter.socket

[Unit]
Description=Filter process reads from this

[Socket]
ListenFIFO=/run/filter
SocketMode=0600
RemoveOnStop=false

output.service

[Unit]
Description=The output process
Requires=output.socket
After=output.socket

[Service]
TimeoutStartSec=infinity

Sockets=output.socket

StandardInput=fd:output.socket
StandardOutput=journal
StandardError=journal
ExecStart=/path/to/output

Restart=always
RestartSec=5s

output.socket

[Unit]
Description=Output process reads from this

[Socket]
ListenFIFO=/run/output
SocketMode=0600
RemoveOnStop=false
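To exercise the pipeline (assuming the five unit files above are installed under /etc/systemd/system; the sketch skips itself when systemd isn't running or the units aren't installed):

```shell
#!/bin/sh
# Bring the pipeline up and restart the middle stage. Restarting filter
# must not lose data, because input holds its write end of the /run/filter
# FIFO open across the restart (RemoveOnStop=false keeps the FIFO around).
if [ -d /run/systemd/system ] && systemctl cat input.service >/dev/null 2>&1
then
    systemctl daemon-reload
    systemctl start input.service        # Wants= pulls in filter and output
    systemctl restart filter.service     # input keeps writing to /run/filter
    systemctl restart output.service     # filter keeps writing to /run/output
else
    echo "pipeline units not installed; nothing to exercise"
fi
```

Each stage can be restarted the same way; the FIFO between any two stages buffers writes while its reader is down, and blocks the writer once the pipe buffer fills.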
Patrick

Have the service write to stdout, and configure StandardOutput in the systemd unit file for the service to write to the journal:

http://0pointer.de/public/systemd-man/systemd.exec.html

This makes logs available to the journald service, which offers other options for log consumption.

http://0pointer.de/public/systemd-man/journald.conf.html

A custom "logger" can be a journald client that pulls directly from the journal; if the logger is unavailable, the upstream service is of course not impacted. The logger can also have its own unit file so that it is managed by systemd.
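As a sketch of such a pull-style logger (the unit name and state-file path below are placeholders), it can persist the journal cursor between runs so a restarted instance resumes where the previous one stopped:

```shell
#!/bin/sh
# Sketch of a resumable journald client. The cursor printed by
# --show-cursor identifies the last entry read; saving it lets the next
# run continue from that point with --after-cursor.
drain_journal() {
    unit=$1          # e.g. service1.service (placeholder)
    cursorfile=$2    # e.g. /var/lib/mylogger/cursor (placeholder)
    if [ -s "$cursorfile" ]; then
        journalctl -u "$unit" --after-cursor="$(cat "$cursorfile")" --show-cursor
    else
        journalctl -u "$unit" --show-cursor
    fi
    # A real logger would parse the trailing "-- cursor: ..." line from
    # the output above and write it back to "$cursorfile" for next time.
}
```

Newer journalctl releases also offer `--cursor-file=`, which loads and saves the cursor in a single option.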

Jonah Benton
  • Can you give an example of a unit file managing the logger? Ideally, it will be a single unix process (rather than, say a shell line `journalctl some args | logger`). It must be able to be stopped and started and pick up where it left off, reading lines from `service` without duplicates. – Patrick Aug 30 '16 at 21:21
  • From the systemd docs, it seems something where `service` has StandardOutput set to socket, and `logger` has StandardInput set to socket is what I want, where the socket might be a systemd socket with ListenFIFO. I'll explore that... – Patrick Aug 30 '16 at 21:25
  • The FIFO the docs are referring to is a file system object, also known as a named pipe. It can be made with `mkfifo`. It is unlikely to be the right vehicle through which to manage the state of a log reader, because FIFOs have limited buffer sizes and writes will block if the FIFO is not read from. A better approach is likely to be for the logger to keep track of the last consumed timestamp, so that it can pull from the journal using --since. – Jonah Benton Aug 31 '16 at 12:26
  • Relatedly, it's not in systemd's domain to manage application functionality requirements, like "read from some data source without duplications". The logger should be seen as its own application, consuming from a data source, managing its own state. – Jonah Benton Aug 31 '16 at 12:33
  • "FIFOs have limited buffer sizes and writes will block if the FIFO is not read from" <--- this is exactly what I want for this case. Please don't read too much into the word "logger". – Patrick Aug 31 '16 at 18:48