
I am trying to send logs to a remote server using curl. I tried the following command and nothing happened:

journalctl -o json -f | curl -H "content-type:application/json" -d @- http://logs

Any ideas?

user1441287

2 Answers


It looks like the correct way to retrieve JSON events over the network is systemd-journal-gatewayd. You'll need to rewrite your other end to pull events from the server, but based on the documentation the server can stream JSON data if you add the follow parameter to the URL:

curl -H 'Accept: application/json' 'http://localhost:19531/entries?follow'

If your receiving application keeps track of the events it has received, it can use the Range header to avoid receiving duplicate events when it has to reconnect:

-H 'Range: cursorname:number_of_events_to_skip'

It's not clear from the documentation what cursorname is supposed to be. I suspect it is a unique name you make up so that gatewayd can keep track of which events it has shown your application versus some other application that may also want log entries.
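If, as the comments suggest, the cursor is actually a journal cursor string (the __CURSOR field present in each JSON entry, or the output of journalctl --show-cursor) rather than a name you invent, resuming after a reconnect might look like the following sketch. The host, port, and cursor value are placeholders:

```shell
# Placeholder: in practice, save the __CURSOR field of the last entry
# you processed, or the output of `journalctl --show-cursor`.
SAVED_CURSOR='s=abcdef0123456789;i=2c4e;b=0011;m=9a8b;t=5f6c;x=dead'

# Fetch the next 100 entries after the saved cursor, skipping 1 so the
# last-seen entry is not repeated.
curl -H 'Accept: application/json' \
     -H "Range: entries=$SAVED_CURSOR:1:100" \
     'http://localhost:19531/entries'
```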

Alternatively, you can use systemd-journal-remote to receive journal entries in the native format on the remote computer, then run journalctl there to read them back out as JSON.
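A rough sketch of that alternative, assuming the defaults documented in the systemd-journal-remote and systemd-journal-upload man pages (port 19532, storage under /var/log/journal/remote/); the http://logs hostname is the placeholder from the question:

```shell
# On the receiving machine: accept journal entries over HTTP on port
# 19532 (the default) and store them under /var/log/journal/remote/.
systemctl enable --now systemd-journal-remote.socket

# On the sending machine: point systemd-journal-upload at the receiver.
printf '[Upload]\nURL=http://logs:19532\n' > /etc/systemd/journal-upload.conf
systemctl enable --now systemd-journal-upload.service

# Back on the receiver: read the uploaded entries as JSON.
journalctl --directory /var/log/journal/remote -o json
```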

DerfK
  • 19,493
  • 2
  • 38
  • 54
  • It seems that cursor is not just any unique name. There are cursors at points in the logs, which can also be shown using `journalctl --show-cursor`. However, it seems that cursorname is optional: `-H'Range: entries=:40:20'` works just fine. – falstaff Oct 12 '21 at 21:03

You may need -X POST in there to be sure how the data is sent to the server, though POST is already implied by -d.

Your use of -f in journalctl makes it behave like tail -f, in that the stream stays open. This tells me your intent is to create a persistent HTTP connection that sends new log lines to the log server over HTTP. Per the curl man page:

In normal work situations, curl will use a standard buffered output stream that will have the effect that it will output the data in chunks, not necessarily exactly when the data arrives.

That behavior doesn't match your intent. You probably want to also run with the -N (--no-buffer) option so each line gets passed along as it comes in.
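One way to get line-at-a-time delivery regardless of curl's buffering is to issue one request per line, so curl never has to wait for the pipe to close before it can compute a Content-Length. A sketch, assuming http://logs from the question is a placeholder for your collector:

```shell
#!/bin/sh
# Post each line read from stdin as its own HTTP request.
post_lines() {
  url=$1
  while IFS= read -r line; do
    curl -sS -X POST -H 'Content-Type: application/json' \
         --data-raw "$line" "$url"
  done
}

# Usage (placeholder URL from the question):
#   journalctl -o json -f | post_lines http://logs
```

The trade-off is one HTTP round trip per log line. --data-raw is used instead of -d so a line beginning with @ is never treated as a filename.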

sysadmin1138
  • `-N` seems to be for curl's output, not for curl's input. I'm about 90% certain that curl is going to sit there and wait for input to finish so that it can calculate the `Content-Length:` header. – DerfK Oct 13 '14 at 21:47
  • That's my fear as well. curl may not start transmission until it has the whole datagram. – sysadmin1138 Oct 13 '14 at 23:23