
I'm using this Splunk image on Kubernetes (testing locally with minikube).

After applying the code below I'm facing the following error:

ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?

My Splunk deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunk
  labels:
    app: splunk-app
    tier: splunk
spec:
  selector:
    matchLabels:
      app: splunk-app
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: splunk-app
        tier: splunk
        track: stable
    spec:
      volumes:
      - name: configmap-inputs
        configMap:
          name: splunk-config
      containers:
      - name: splunk-client
        image: splunk/splunk:latest
        imagePullPolicy: Always
        env:
        - name: SPLUNK_START_ARGS
          value: --accept-license --answer-yes
        - name: SPLUNK_USER
          value: root
        - name: SPLUNK_PASSWORD
          value: changeme
        - name: SPLUNK_FORWARD_SERVER
          value: splunk-receiver:9997
        ports:
        - name: incoming-logs
          containerPort: 514
        volumeMounts:
          - name: configmap-inputs
            mountPath: /opt/splunk/etc/system/local/inputs.conf
            subPath: "inputs.conf"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: splunk-config
data:
  inputs.conf: |
    [monitor:///opt/splunk/var/log/syslog-logs]
    disabled = 0
    index=my-index

I also tried adding these environment variables, with no success:

    - name: SPLUNK_HOME
      value: /opt/splunk
    - name: SPLUNK_ETC
      value: /opt/splunk/etc
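For reference, one way to see what the container actually ends up with (if it stays up long enough) is to exec into the pod and inspect both the environment and the file Splunk complains about; the pod name below is just a placeholder:

    kubectl exec -it <splunk-pod-name> -- env | grep -i splunk
    kubectl exec -it <splunk-pod-name> -- ls -l /opt/splunk/etc/splunk-launch.conf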

I've tested the image with the following Docker Compose configuration, and it ran successfully:

version: '3.2'
services:
    splunk-forwarder:
      hostname: splunk-client
      image: splunk/splunk:latest
      environment:
        SPLUNK_START_ARGS: --accept-license --answer-yes
        SPLUNK_USER: root
        SPLUNK_PASSWORD: changeme
      ports:
      - "8089:8089"
      - "9997:9997"

I saw this on the Splunk forum, but the answer did not help in my case.

Any ideas?


Edit #1:

Minikube version: upgraded from v0.33.1 to v1.2.0.

Full error log:

$ kubectl logs -l tier=splunk

splunk_common : Set first run fact -------------------------------------- 0.04s
splunk_common : Set privilege escalation user --------------------------- 0.04s
splunk_common : Set current version fact -------------------------------- 0.04s
splunk_common : Set splunk install fact --------------------------------- 0.04s
splunk_common : Set docker fact ----------------------------------------- 0.04s
Execute pre-setup playbooks --------------------------------------------- 0.04s
splunk_common : Setting upgrade fact ------------------------------------ 0.04s
splunk_common : Set target version fact --------------------------------- 0.04s
Determine captaincy ----------------------------------------------------- 0.04s
ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?

Edit #2: Added the ConfigMap to the code above (it was removed from the original question for the sake of brevity). This turned out to be the cause of the failure.

Rot-man
  • Can you add an error, because I run this deployment and it works? – FL3SH Aug 13 '19 at 22:36
  • @FL3SH, You can see the error in the question: ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong? – Rot-man Aug 13 '19 at 22:43
  • I tried the Deployment YAML you provided, on `minikube version: v1.2.0`, and it worked fine. I didn't set those environment variables but I could exec into the container and see that the `$SPLUNK_HOME` env var is set to `/opt/splunk`, the `$SPLUNK_ETC` env var is not set, and I could successfully cat out `/opt/splunk/etc/splunk-launch.conf`. The error message doesn't imply that those env vars are set wrong; it seems to imply that if unset by you, the default will be to look for the conf file in `/opt/splunk/etc/splunk-launch.conf` and for some reason it's having trouble reading that, ... – Amit Kumar Gupta Aug 13 '19 at 22:52
  • ... and it's guessing that maybe you put the conf file somewhere else and forgot to set the `$SPLUNK_XXX` env vars to tell splunk to look elsewhere. But I assume you just want to use the defaults, and not change anything, which is what I tried, and it worked. What `minikube version` are you running? – Amit Kumar Gupta Aug 13 '19 at 22:53
  • minikube version - v1.13.2. – Rot-man Aug 13 '19 at 22:56
  • @Rotemya, it's not the minikube version, it's rather the kubectl version. What about the result of the `minikube version` command? I've also tested it on minikube v1.2.0 and it works perfectly. Could you also post here the summary which is displayed after running the ansible playbook which sets up Splunk? Are there any errors? – mario Aug 14 '19 at 16:07
  • You can check it by running `kubectl logs <splunk-pod-name>`. – mario Aug 14 '19 at 16:16
  • Thanks @mario, check Edit #1 in the question. Also, after the upgrade to minikube v1.2.0 I'm facing the same error. – Rot-man Aug 14 '19 at 18:09
  • @Amit Kumar Gupta, you were right. For the sake of brevity I didn't specify the ConfigMap I used; now I've added it to the original code. The problem is caused by this configuration and it is related to Splunk only. Can you post your answer and I'll accept it? – Rot-man Aug 14 '19 at 19:06

3 Answers


Based on the direction pointed out by @Amit-Kumar-Gupta, I'll also try to give a full solution.

A Kubernetes change (a PR that landed in v1.9.4) makes it so that containers cannot write to secret, configMap, downwardAPI and projected volumes, since the runtime now mounts them as read-only.
This change can lead to issues for various applications which chown or otherwise manipulate their config files.

When Splunk boots, it registers all the config files in various locations on the filesystem under ${SPLUNK_HOME}, which in our case is /opt/splunk.
The error in my question reflects the fact that Splunk failed to manipulate the relevant files in the /opt/splunk/etc directory because of this change in the mounting mechanism.
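
A quick way to confirm the read-only behaviour (the pod name is a placeholder and the exact error text may vary) is to try to modify the mounted file from inside a pod that uses the same subPath mount:

    kubectl exec -it <splunk-pod-name> -- chown splunk:splunk /opt/splunk/etc/system/local/inputs.conf
    # typically fails with an error like: chown: ...: Read-only file system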


Now for the solution.

Instead of mounting the configuration file directly inside the /opt/splunk/etc directory, we'll use the following setup:

We'll start the Docker container with a default.yml file which will be mounted at /tmp/defaults/default.yml.

For that, we'll create the default.yml file with:
docker run splunk/splunk:latest create-defaults > ./default.yml

Then, we'll go to the splunk: block and add a conf: sub-block under it:

splunk:
  conf:
    inputs:
      directory: /opt/splunk/etc/system/local
      content:
          monitor:///opt/splunk/var/log/syslog-logs:
            disabled : 0
            index : syslog-index
    outputs:
      directory: /opt/splunk/etc/system/local
      content:
          tcpout:splunk-receiver:
            server: splunk-receiver:9997

This setup will generate two files with a .conf suffix (remember that the sub-block starts with conf:), which will be owned by the correct Splunk user and group.

The inputs: section will produce an inputs.conf with the following content:

[monitor:///opt/splunk/var/log/syslog-logs]
disabled = 0
index=syslog-index

In a similar way, the outputs: block will produce the following outputs.conf:

[tcpout:splunk-receiver]
server=splunk-receiver:9997

This replaces passing the environment variable directly, as I did in the original code:

SPLUNK_FORWARD_SERVER: splunk-receiver:9997

Now everything is up and running (:


Full setup of the forwarder.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunk-forwarder
  labels:
    app: splunk-forwarder-app
    tier: splunk
spec:
  selector:
    matchLabels:
      app: splunk-forwarder-app
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: splunk-forwarder-app
        tier: splunk
        track: stable
    spec:
      volumes:
      - name: configmap-forwarder
        configMap:
          name: splunk-forwarder-config

      containers:
      - name: splunk-forwarder
        image: splunk/splunk:latest
        imagePullPolicy: Always
        env:
        - name: SPLUNK_START_ARGS
          value: --accept-license --answer-yes

        - name: SPLUNK_PASSWORD
          valueFrom:
            secretKeyRef:
              name: splunk-secret
              key: password

        volumeMounts:
        - name: configmap-forwarder
          mountPath: /tmp/defaults/default.yml
          subPath: "default.yml"
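
The Deployment above references a splunk-forwarder-config ConfigMap and a splunk-secret Secret that aren't shown here. Assuming the generated default.yml sits next to the manifest (and treating the password value below as a placeholder), they can be created with something along these lines:

    kubectl create configmap splunk-forwarder-config --from-file=default.yml=./default.yml
    kubectl create secret generic splunk-secret --from-literal=password=<your-admin-password>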

For further reading:

https://splunk.github.io/docker-splunk/ADVANCED.html

https://github.com/splunk/docker-splunk/blob/develop/docs/ADVANCED.md

https://www.splunk.com/blog/2018/12/17/deploy-splunk-enterprise-on-kubernetes-splunk-connect-for-kubernetes-and-splunk-insights-for-containers-beta-part-1.html

https://splunk.github.io/splunk-ansible/ADVANCED.html#inventory-script

https://static.rainfocus.com/splunk/splunkconf18/sess/1521146368312001VwQc/finalPDF/FN1089_DockerizingSplunkatScale_Final_1538666172485001Loc0.pdf

Rot-man

There are two questions here: (1) why are you seeing that error message, and (2) how to achieve the behaviour you're trying to express through your Deployment and ConfigMap. Unfortunately, I don't believe there's a "cloud-native" way to achieve what you want, but I can explain (1), explain why it's hard to do (2), and point you to something that might give you a workaround.

The error message:

ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?

does not (necessarily) imply that you've set those environment variables incorrectly; it implies that Splunk is looking for a file in that location and can't read it, and it's hinting that maybe you've put the file somewhere else but forgot to tell Splunk (via the $SPLUNK_HOME or $SPLUNK_ETC environment variables) to look elsewhere.

The reason it can't read /opt/splunk/etc/splunk-launch.conf is that, by default, the /opt/splunk directory would be populated with tons of subdirectories and files with various configurations, but because you're mounting a volume at /opt/splunk/etc/system/local/inputs.conf, nothing can be written to /opt/splunk.

If you simply don't mount that volume, or mount it somewhere else (e.g. /foo/inputs.conf), the Deployment will start fine. Of course, the problem is that Splunk won't know anything about your inputs.conf, and it'll use the default /opt/splunk/etc/system/local/inputs.conf it writes there.
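
For example, a mount like the following (the /foo path is arbitrary, just to illustrate the point) lets the pod start, but Splunk simply ignores the file:

        volumeMounts:
          - name: configmap-inputs
            mountPath: /foo/inputs.conf
            subPath: "inputs.conf"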

I assume what you want is to let Splunk generate all the directories and files it likes, and only set the contents of that one file. While there is a lot of nuance about how Kubernetes deals with volume mounts, in particular those coming from ConfigMaps, and in particular when using subPath, at the end of the day I don't think there's a clean way to do what you want.

I did an Internet search for "splunk kubernetes inputs.conf" and this was my first result: https://www.splunk.com/blog/2019/02/11/deploy-splunk-enterprise-on-kubernetes-splunk-connect-for-kubernetes-and-splunk-insights-for-containers-beta-part-2.html. This is from official splunk.com, and it's advising running things like kubectl cp and kubectl exec to:

"Exec" into the master pod, and run ... commands, to copy (configuration) into the (target) directory and chown to splunk user.


Amit Kumar Gupta

One solution that worked for me in a K8s deployment was:

  1. Amend the image Dockerfile with the lines below (a full Dockerfile sketch follows this list):

      RUN chmod -R 755 /opt/ansible
      RUN echo "  ignore_errors: yes" >> /opt/ansible/roles/splunk_common/tasks/change_splunk_directory_owner.yml
    
  2. Then use that same image in your deployment from your private repo with the env variables below (it has to run as root, otherwise it won't let you write to $SPLUNK_HOME/S):

    env:
    - name: SPLUNK_START_ARGS
      value: --accept-license --answer-yes --no-prompt
    - name: SPLUNK_USER
      value: root
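
Putting item 1 together, a minimal Dockerfile sketch could look like the following (switching to root before the RUN steps is an assumption, in case the base image's default user can't modify /opt/ansible); build it, push it to your private registry, and reference that image in the Deployment instead of splunk/splunk:latest:

    FROM splunk/splunk:latest
    # Assumption: become root so the next steps can modify /opt/ansible
    USER root
    RUN chmod -R 755 /opt/ansible
    RUN echo "  ignore_errors: yes" >> /opt/ansible/roles/splunk_common/tasks/change_splunk_directory_owner.yml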

briefcase