Context: I want to create a docker-compose setup to run ELK + Beats + Kafka for logging purposes.
I had made good progress on this task, then decided to update the docker-compose file from version 2 to version 3. Since then I keep getting:
ERROR: for kibana Cannot start service kibana: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/host_mnt/c/Dockers/megalog-try-1/kibana.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/e1bf99bc19edf4bb68bfad5a76c1e6b9ac1b69f84af85767c2127fd1295c0536/merged\\\" at \\\"/var/lib/docker/overlay2/e1bf99bc19edf4bb68bfad5a76c1e6b9ac1b69f84af85767c2127fd1295c0536/merged/usr/share/kibana/config/kibana.yml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
At first I thought a leftover volume was somehow interfering, so I deleted all volumes and previous containers, but that didn't fix it.
I carefully read "Are you trying to mount a directory onto a file (or vice-versa)?" and checked all of its suggestions, but I didn't get any further. In my case I am not using Oracle VirtualBox at all.
Any suggestion of what to check will be highly appreciated.
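For reference, this is roughly how I cleaned up before retrying, plus a quick check that the single-file bind sources are ordinary files on the host (just a sketch, run from the project folder C:\Dockers\megalog-try-1 that appears in the error):

REM stop the project and remove its containers, networks and volumes
docker-compose down -v
REM the mounted config files should be listed as regular files, not directories
dir kibana.yml filebeat.yml logstash.conf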
My whole docker-compose.yml is:
version: '3'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.2
    volumes:
      - "./kibana.yml:/usr/share/kibana/config/kibana.yml"
    restart: always
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - "./esdata:/usr/share/elasticsearch/data"
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.2
    volumes:
      - "./logstash.conf:/config-dir/logstash.conf"
    restart: always
    command: logstash -f /config-dir/logstash.conf
    ports:
      - "9600:9600"
      - "7777:7777"
    links:
      - elasticsearch
      - kafka1
      - kafka2
      - kafka3
  kafka1:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    links:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_CREATE_TOPICS: "log:3:3"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
  kafka2:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    links:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - "9093:9092"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_CREATE_TOPICS: "log:3:3"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
  kafka3:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    links:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - "9094:9092"
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_CREATE_TOPICS: "log:3:3"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
  zoo1:
    image: elevy/zookeeper:latest
    environment:
      MYID: 1
      SERVERS: zoo1,zoo2,zoo3
    ports:
      - "2181:2181"
  zoo2:
    image: elevy/zookeeper:latest
    environment:
      MYID: 2
      SERVERS: zoo1,zoo2,zoo3
    ports:
      - "2182:2181"
  zoo3:
    image: elevy/zookeeper:latest
    environment:
      MYID: 3
      SERVERS: zoo1,zoo2,zoo3
    ports:
      - "2183:2181"
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.5.2
    volumes:
      - "./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
      - "./apache-logs:/apache-logs"
    links:
      - kafka1
      - kafka2
      - kafka3
    depends_on:
      - apache
      - kafka1
      - kafka2
      - kafka3
  apache:
    image: lzrbear/docker-apache2-ubuntu
    volumes:
      - "./apache-logs:/var/log/apache2"
    ports:
      - "8888:80"
    depends_on:
      - logstash
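Since the errors only complain about the two single-file mounts (kibana.yml and filebeat.yml), here is the isolated test I am using to see whether the bind itself works outside of this compose file (a sketch only; it assumes the same C:\Dockers\megalog-try-1 folder and a throwaway alpine container):

REM if this prints a regular file the bind works; if it shows a directory, Docker Desktop created one in place of the missing/unshared file
docker run --rm -v "C:\Dockers\megalog-try-1\kibana.yml:/tmp/kibana.yml:ro" alpine ls -l /tmp/kibana.yml
docker run --rm -v "C:\Dockers\megalog-try-1\filebeat.yml:/tmp/filebeat.yml:ro" alpine ls -l /tmp/filebeat.yml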
In case it is relevant, filebeat.yml and kibana.yml are:
filebeat.yml
filebeat.prospectors:
  - paths:
      - /apache-logs/access.log
    tags:
      - testenv
      - apache_access
    input_type: log
    document_type: apache_access
    fields_under_root: true
  - paths:
      - /apache-logs/error.log
    tags:
      - testenv
      - apache_error
    input_type: log
    document_type: apache_error
    fields_under_root: true
output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
  topic: 'log'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
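Once the filebeat container can actually start, I also plan to sanity-check this file with Filebeat's own config test (a sketch; I am assuming the 7.5.2 image entrypoint passes these arguments straight through to the filebeat binary):

REM mount the same file and ask filebeat to validate it instead of running
docker run --rm -v "C:\Dockers\megalog-try-1\filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" docker.elastic.co/beats/filebeat:7.5.2 test config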
kibana.yml
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
xpack.monitoring.ui.container.elasticsearch.enabled: false
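To see which configuration Kibana actually ends up with inside the container (the container name is taken from the error messages), I am checking:

REM print the config file as seen from inside the running kibana container
docker exec megalog-try-1_kibana_1 cat /usr/share/kibana/config/kibana.yml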
The whole log is:
C:\Dockers\megalog-try-1>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3366ee0766a8 lzrbear/docker-apache2-ubuntu "apachectl -D FOREGR…" 16 hours ago Up About a minute 0.0.0.0:8888->80/tcp megalog-try-1_apache_1
6fcdcbf8e75e docker.elastic.co/logstash/logstash:7.5.2 "/usr/local/bin/dock…" 16 hours ago Up 15 seconds 0.0.0.0:7777->7777/tcp, 5044/tcp, 0.0.0.0:9600->9600/tcp megalog-try-1_logstash_1
dd854b18aa80 elevy/zookeeper:latest "/entrypoint.sh zkSe…" 16 hours ago Up About a minute 2888/tcp, 3888/tcp, 9010/tcp, 0.0.0.0:2183->2181/tcp megalog-try-1_zoo3_1
498c3d3132fd elevy/zookeeper:latest "/entrypoint.sh zkSe…" 16 hours ago Up About a minute 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, 9010/tcp megalog-try-1_zoo1_1
555279c42b9d elevy/zookeeper:latest "/entrypoint.sh zkSe…" 16 hours ago Up About a minute 2888/tcp, 3888/tcp, 9010/tcp, 0.0.0.0:2182->2181/tcp megalog-try-1_zoo2_1
C:\Dockers\megalog-try-1>docker-compose up -d
Creating megalog-try-1_zoo3_1 ... done
Creating megalog-try-1_zoo2_1 ... done
Creating megalog-try-1_zoo1_1 ... done
Creating megalog-try-1_elasticsearch_1 ... done
Creating megalog-try-1_kibana_1 ... error
Creating megalog-try-1_kafka2_1 ...
Creating megalog-try-1_kafka1_1 ...
Creating megalog-try-1_kafka3_1 ...
ERROR: for megalog-try-1_kibana_1 Cannot start service kibana: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/host_mnt/c/Dockers/megalog-try-1/kibana.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/e1bf99bc19edf4bb68bfad5a76c1e6b9ac1b69f84af
Creating megalog-try-1_kafka2_1 ... done
Creating megalog-try-1_kafka1_1 ... done
Creating megalog-try-1_kafka3_1 ... done
Creating megalog-try-1_logstash_1 ... done
Creating megalog-try-1_apache_1 ... done
Creating megalog-try-1_filebeat_1 ... error
ERROR: for megalog-try-1_filebeat_1 Cannot start service filebeat: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/host_mnt/c/Dockers/megalog-try-1/filebeat.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/bc908c4b9e42c9c3c0a0f2f88387ca1dee1d20b341d18175df4678136a4e7730/merged\\\" at \\\"/var/lib/docker/overlay2/bc908c4b9e42c9c3c0a0f2f88387ca1dee1d20b341d18175df4678136a4e7730/merged/usr/share/filebeat/filebeat.yml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: for filebeat Cannot start service filebeat: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/host_mnt/c/Dockers/megalog-try-1/filebeat.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/bc908c4b9e42c9c3c0a0f2f88387ca1dee1d20b341d18175df4678136a4e7730/merged\\\" at \\\"/var/lib/docker/overlay2/bc908c4b9e42c9c3c0a0f2f88387ca1dee1d20b341d18175df4678136a4e7730/merged/usr/share/filebeat/filebeat.yml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: for kibana Cannot start service kibana: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/host_mnt/c/Dockers/megalog-try-1/kibana.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/e1bf99bc19edf4bb68bfad5a76c1e6b9ac1b69f84af85767c2127fd1295c0536/merged\\\" at \\\"/var/lib/docker/overlay2/e1bf99bc19edf4bb68bfad5a76c1e6b9ac1b69f84af85767c2127fd1295c0536/merged/usr/share/kibana/config/kibana.yml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
*** Edited
I managed to move forward by:
1 - deleting all images. After that I got this other error message (the full console, and a host-side check, are shown further below):
ERROR: for elasticsearch Cannot start service elasticsearch: error while creating mount source path '/host_mnt/c/Dockers/megalog-try-1/esdata': mkdir /host_mnt/c/Dockers/megalog-try-1/esdata: file exists
2 - then I read somewhere a weird workaround: remove the drive from Docker's shared drives, save, restart, share it again, save and restart again. That did work. Honestly, I don't consider this the answer to my question, mainly because everything was working with docker-compose version 2 and now I am jumping from one error to another. Either I did something wrong in the docker-compose file or there is some concept I am missing (I can't delete images and re-share the drive on a daily basis).
3 - now I can't log in to Kibana: the browser shows "Kibana server is not ready yet" and the container logs show the messages below (see the connectivity check after the last console output):
{"type":"log","@timestamp":"2020-02-04T04:03:53Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","@timestamp":"2020-02-04T04:03:56Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"Unable to revive connection: http://localhost:9200/"}
{"type":"log","@timestamp":"2020-02-04T04:03:56Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
Here is the whole console from PowerShell:
PS C:\Users\Cast> $images = docker images -a -q
PS C:\Users\Cast> foreach ($image in $images) { docker image rm $image -f }
...
new error:
C:\Dockers\megalog-try-1>docker-compose up -d
megalog-try-1_zoo3_1 is up-to-date
megalog-try-1_zoo2_1 is up-to-date
Starting megalog-try-1_elasticsearch_1 ...
Starting megalog-try-1_elasticsearch_1 ... error
Starting megalog-try-1_kafka2_1 ...
Starting megalog-try-1_kafka1_1 ...
Starting megalog-try-1_kafka3_1 ...
Starting megalog-try-1_kafka2_1 ... done
Starting megalog-try-1_kafka1_1 ... done
Starting megalog-try-1_kafka3_1 ... done
ERROR: for elasticsearch Cannot start service elasticsearch: error while creating mount source path '/host_mnt/c/Dockers/megalog-try-1/esdata': mkdir /host_mnt/c/Dockers/megalog-try-1/esdata: file exists
ERROR: Encountered errors while bringing up the project.
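As a host-side sanity check for this esdata error (just a sketch; the path comes straight from the error message), I am checking whether something named esdata already exists as a plain file instead of a directory:

REM esdata should be listed as <DIR>; if it shows up as a file, removing it should let Docker recreate the data directory
dir C:\Dockers\megalog-try-1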
After removing the C drive from Docker's shared drives, restarting, sharing the C drive again, and restarting Docker:
C:\Dockers\megalog-try-1>docker-compose up -d
megalog-try-1_zoo3_1 is up-to-date
megalog-try-1_zoo2_1 is up-to-date
megalog-try-1_zoo1_1 is up-to-date
Starting megalog-try-1_elasticsearch_1 ... done
Starting megalog-try-1_kafka2_1 ... done
Starting megalog-try-1_kafka1_1 ... done
Starting megalog-try-1_kafka3_1 ... done
Creating megalog-try-1_kibana_1 ... done
Creating megalog-try-1_logstash_1 ... done
Creating megalog-try-1_apache_1 ... done
Creating megalog-try-1_filebeat_1 ... done
C:\Dockers\megalog-try-1>
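Regarding item 3 above: the "localhost:9200" in the Kibana log makes me suspect Kibana is falling back to its default Elasticsearch address instead of the one in my mounted kibana.yml, so this is the connectivity check I am running (a sketch; it assumes curl is available on the Windows host and uses the container name docker-compose generated):

REM Elasticsearch publishes port 9200 on the host, so this should return the cluster banner if ES is actually up
curl http://localhost:9200
REM check whether Elasticsearch itself started cleanly
docker logs --tail 50 megalog-try-1_elasticsearch_1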
By the way, I wasn't getting any issues at all when using docker-compose version 2, so I am wondering whether the switch of compose file version is really the issue here.