I have an issue with Promtail and Loki. On my server I have about 10 Docker containers running across Prod and Dev environments. As I am new to Grafana, I want to scrape the logs of these 10 containers and view them in Grafana using the Loki datasource.

What have I done so far?

Scenario 1: With a Loki and Promtail config file

Step 1: Logged into Grafana Cloud and created a Loki configuration with a new API key

Step 2: Pasted the config file below into /etc/promtail/config.yaml

server:
  # port for the healthcheck
  http_listen_port: 0
  grpc_listen_port: 0
positions:
  filename: /tmp/positions.yaml
client:
  url: https://<user>:<password>@logs-prod-us-central1.grafana.net/api/prom/push
scrape_configs:
- job_name: local
  static_configs:
  - targets:
      - localhost
    labels:
      job: mrp
      __path__: /var/lib/docker/containers/*/*log

Step 3: Ran the Promtail container with docker run:

docker run --name promtail --volume "$PWD/promtail:/etc/promtail" --volume "/var/lib/docker/containers:/var/lib/docker/containers/" grafana/promtail:master -config.file=/etc/promtail/config.yaml -log.level=debug

Step 4: I am able to see logs, but I can't find the container name, image name, or any other metadata; each entry shows up as plain text. --> Can you please help me solve this?
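
From the Promtail docs it looks like Docker service discovery (docker_sd_configs) can attach the container name as a label, which may be what is missing from my static_configs approach. A rough sketch of the scrape_configs section, assuming a Promtail version that supports docker_sd_configs and the Docker socket mounted into the Promtail container (e.g. adding --volume /var/run/docker.sock:/var/run/docker.sock to the docker run above); I have not verified this in my setup:

scrape_configs:
- job_name: docker
  docker_sd_configs:
  - host: unix:///var/run/docker.sock
    refresh_interval: 5s
  relabel_configs:
  # __meta_docker_container_name comes back as "/<name>", so strip the leading slash
  - source_labels: ['__meta_docker_container_name']
    regex: '/(.*)'
    target_label: container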

Scenario 2: Tried with the Loki Docker log driver

Step 1: Installed the Loki Docker log driver plugin on my server

Step 2: Pasted the configuration below into /etc/docker/daemon.json

{
    "debug" : true,
    "log-driver": "loki",
    "log-opts": {
        "loki-url": "https://<user_id>:<password>@logs-us-west1.grafana.net/loki/api/v1/push",
        "loki-batch-size": "400"
    }
}
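
For reference, my understanding is that the same driver options can also be set per container at docker run time instead of daemon-wide (the loki-url and loki-batch-size values mirror the daemon.json above, and <image> is just a placeholder), although already-running containers would still need to be recreated to pick up the driver:

docker run --log-driver=loki \
    --log-opt loki-url="https://<user_id>:<password>@logs-us-west1.grafana.net/loki/api/v1/push" \
    --log-opt loki-batch-size=400 \
    <image>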

Step 3: I need to restart Docker for the daemon.json change to take effect. If I do so, I might lose the running containers (they go into an exited state). --> This is a blocker for me.

Please help me solve this. Thanks in advance.

1 Answer

I think what you are looking for is the live restore option provided by Docker. Ideally you should not run both dev and prod environments on the same machine, but if you have a valid reason for doing so, then you need to add the setting below to your Docker daemon config and try systemctl reload docker instead of doing a restart.

{
  "live-restore": true
}
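
Merged with the daemon.json from your Scenario 2 (keeping your own placeholders), the file would look roughly like this, followed by a reload instead of a restart:

{
    "debug" : true,
    "log-driver": "loki",
    "log-opts": {
        "loki-url": "https://<user_id>:<password>@logs-us-west1.grafana.net/loki/api/v1/push",
        "loki-batch-size": "400"
    },
    "live-restore": true
}

sudo systemctl reload docker

With live-restore enabled, containers keep running while the daemon itself is down, so even a full restart of the Docker service should no longer send them into an exited state.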

More details are documented at: https://docs.docker.com/config/containers/live-restore/