
I have a Pod with two containers.

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: my-container
    image: google/my-container:v1
  - name: third-party
    image: google/third-party:v1

One container runs my image and the second runs a third-party image whose stdout/stderr I can't control.
I need my-container to access the logs written by the third-party container.
Inside "my-container" I want to collect all the stdout and stderr from the "third-party" container, add some metadata, and write it with my own logger.

I can't use a privileged container with volumeMounts.

If I could do something like this, it would be great:

 containers:
  - name: my-container
    image: google/my-container:v1
    volumeMounts:
    - name: varlog
      mountPath: /var/log

  - name: third-party
    image: google/third-party:v1
    stdout: /var/log/stdout
    stderr: /var/log/stderr

 volumes:
  - name: varlog
    emptyDir: {}

nassi.harel
  • Have you read this? https://kubernetes.io/docs/concepts/cluster-administration/logging/#using-a-node-logging-agent – OneCricketeer Jul 18 '19 at 04:39
  • Yes, but it's not my case. – nassi.harel Jul 18 '19 at 09:01
  • I'm not quite understanding the issue. If your "third party" container logs to stdout/stderr like any other container, then why aren't you able to get its logs via any logging driver that works with those other containers? – OneCricketeer Jul 18 '19 at 14:42

3 Answers


Docker tracks container logs according to the configured logging driver. The default logging driver is json-file, which writes each container's stdout and stderr to log files under /var/lib/docker/containers on the host machine that runs Docker.

In the case of Kubernetes, these logs are exposed under the /var/log/containers folder on each worker node.

What you are probably looking for is a fluentd DaemonSet, which runs a fluentd pod on each worker node and helps you move the logs to S3, CloudWatch, or Elasticsearch. fluentd provides many output sinks; you can use whichever suits your needs. I hope this is what you want to do with your my-container.
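
Roughly, such a DaemonSet could look like the sketch below. The image tag is one of the published fluentd-kubernetes-daemonset variants and is only a placeholder; the output destination, RBAC, and service account are omitted, so adjust it for your cluster.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        # Placeholder image -- pick the fluentd-kubernetes-daemonset variant for your sink.
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        volumeMounts:
        # Node-level log files, including the /var/log/containers symlinks.
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers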

Malathi
  • Thanks, but no, this is not the case. I'm already using fluentd to collect all logs from all containers. Inside my-container I need access to the logs written by third-party. – nassi.harel Jul 18 '19 at 06:09
  • The third party application is not running as a container in your kubernetes cluster? – Malathi Jul 18 '19 at 07:02
  • Did you mean our* ? – Malathi Jul 18 '19 at 08:04
  • Yes, it is running as a container in our kubernetes cluster. – nassi.harel Jul 18 '19 at 08:22
  • Then why not fluentd? You mentioned that the program writes to stdout/stderr. Is it a file and not stdout/stderr? – Malathi Jul 18 '19 at 08:32
  • Inside "my-container" I want to collect all the stdout and stderr from the "third-party" container, add some metadata and write it with my logger. – nassi.harel Jul 18 '19 at 08:51
  • I don't want to rely on fluentd; I need my process to access the logs of another container. – nassi.harel Jul 18 '19 at 11:55
  • In that case, you can use a volumeMount for `my-container` with a `hostPath` value of `/var/log/containers`. Then, from my-container, monitor the third-party container's log file in that folder, do the processing, and send the logs wherever you want. – Malathi Jul 18 '19 at 11:59
  • For that I need privileged: true, which I can't do (e.g. OpenShift). It is also insecure. – nassi.harel Jul 18 '19 at 12:35

Containers inside a Pod can share volumes, so you can mount a shared volume to make that data accessible to both.

For your specific purpose, you'll need to log both streams (stderr and stdout) into files in the volume. Then you need to export them from the main container to whichever logging driver you're running in your cluster.

There is no field in the Pod spec to redirect these streams into a file, though.
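
One way around that, sketched below, is to wrap the third-party container's entrypoint in a shell redirect so both streams land in the shared emptyDir. This assumes the image contains /bin/sh and that /third-party-entrypoint is a placeholder for its real command; you can't change the image, but the Pod spec can override its command.

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: third-party
    image: google/third-party:v1
    # Wrap the image's original entrypoint (placeholder path) so both
    # streams are written into the shared volume.
    command: ["/bin/sh", "-c"]
    args: ["/third-party-entrypoint >> /var/log/app/stdout.log 2>> /var/log/app/stderr.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
  - name: my-container
    image: google/my-container:v1
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: shared-logs
    emptyDir: {}

This keeps the approach unprivileged, at the cost that the third-party container's output no longer reaches `kubectl logs`.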

yyyyahir
  • The third-party container is not my image, so I can't control its stdout and stderr file locations. – nassi.harel Jul 17 '19 at 19:24
  • Probably you can use fluentd, as mentioned in my answer – Malathi Jul 18 '19 at 05:34
  • I don't want to rely on fluentd; I need my process to access the logs of another container. – nassi.harel Jul 18 '19 at 11:55
  • Define whether you're expecting log files or `stdout`/`stderr`. For the first, you'll need to know the log file locations and the `mountPath` directive in both containers for the shared volume. For the second, both streams are [captured via logging driver](https://kubernetes.io/docs/concepts/cluster-administration/logging/#logging-at-the-node-level) and can be accessed via `kubectl logs` or in the log-rotate files inside the node. – yyyyahir Jul 18 '19 at 12:20
  • I'm expecting stdout/stderr. I think I will use the Share Process Namespace between Containers in a Pod approach, using tail on /proc/PID/fd/1 in the third-party container. https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/ – nassi.harel Jul 18 '19 at 18:21
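
A rough sketch of that shared-process-namespace idea is shown below. The process name, the availability of a shell and pgrep in my-container, and the command override are all assumptions for illustration (in practice my-container would run its own logger around this), and if stdout is a pipe, reading /proc/<pid>/fd/1 may compete with the runtime's own log collector.

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  shareProcessNamespace: true   # containers in the Pod see each other's processes
  containers:
  - name: third-party
    image: google/third-party:v1
  - name: my-container
    image: google/my-container:v1
    # Illustrative only: wait for the third-party process (placeholder name),
    # then follow its stdout/stderr file descriptors.
    command: ["/bin/sh", "-c"]
    args:
    - |
      until PID=$(pgrep -o -f third-party-process); do sleep 1; done
      exec tail -F /proc/$PID/fd/1 /proc/$PID/fd/2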

I think I understood your requirement. I stumbled upon Logspout: https://github.com/gliderlabs/logspout

Pull the image:

$ docker pull gliderlabs/logspout:latest

and then run the container like this:

$ docker run \
--volume=/var/run/docker.sock:/var/run/docker.sock \
gliderlabs/logspout \
raw://192.168.10.10:5000

Logspout then attaches to all the containers on the host and routes their logs wherever you want.

Check the link above for details.
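
Since the question is about Kubernetes rather than plain Docker, Logspout would typically be deployed as a DaemonSet rather than with docker run; a rough sketch is below. The routing target is the same placeholder as in the docker run example, and note that this only works on Docker-based nodes and that mounting the Docker socket via hostPath may be restricted in clusters such as OpenShift, as discussed above.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logspout
spec:
  selector:
    matchLabels:
      app: logspout
  template:
    metadata:
      labels:
        app: logspout
    spec:
      containers:
      - name: logspout
        image: gliderlabs/logspout:latest
        # Same "raw://" routing target as the docker run example above (placeholder address).
        args: ["raw://192.168.10.10:5000"]
        volumeMounts:
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock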

rAhulD