
I have implemented a sidecar container to forward my main application's logs to Splunk, using the splunk/universalforwarder image. After deployment, both the main application and the forwarder appear to be up and running, but no logs are arriving in the Splunk index I specified. To troubleshoot, I looked for splunkd.log (or any other Splunk internal logs) under /var/log, but nothing is there. Can someone please help me enable these Splunk internal logs?

A snippet of my deployment.yaml:

    - name: universalforwarder
      image: <docker-registry>/splunk/universalforwarder:latest
      imagePullPolicy: Always
      env:
        - name: SPLUNK_START_ARGS
          value: "--accept-license --answer-yes"
        - name: SPLUNK_USER
          value: splunk
        - name: SPLUNK_PASSWORD
          value: ****
        - name: SPLUNK_CMD
          value: add monitor /var/log
      resources:
        limits:
          memory: "312Mi"
          cpu: "300m"
        requests:
          memory: "80Mi"
          cpu: "80m"
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log
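
Before chasing the logs, it may be worth confirming that splunkd actually started inside the sidecar. A minimal sketch using kubectl (the pod name below is a placeholder for your own; the container name comes from the snippet above):

```shell
# Placeholder: replace with your application pod's name.
POD=<your-app-pod>
CONTAINER=universalforwarder

# Ask the forwarder whether splunkd is running inside the sidecar.
kubectl exec "$POD" -c "$CONTAINER" -- \
  /opt/splunkforwarder/bin/splunk status
```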

A snippet of my configmap.yml:

  outputs.conf: |-
    [tcpout]
    defaultGroup = idxm4d-bigdata


    [tcpout:idxm4d-bigdata]
    server = <servers>
    clientCert = /opt/splunkforwarder/etc/auth/ca.pem
    sslPassword = password
    sslVerifyServerCert = false
  inputs.conf: |-
    [monitor:/bin/streaming/adapters/logs/output.log]


    [default]
    host = localhost
    index = krp_idx


    [monitor:/bin/streaming/adapters/logs/output.log]
    disabled = false
    sourcetype = log4j
    recursive = True
  deploymentclients.conf: |-
    targetUri = <target-uri>
  props.conf: |-
    [default]
    TRANSFORMS-routing=duplicate_data



    [telegraf]
    category = Metrics
    description = Telegraf Metrics
    pulldown_type = 1
    DATETIME_CONFIG =
    NO_BINARY_CHECK = true
    SHOULD_LINEMERGE = true
    disabled = false
    INDEXED_EXTRACTIONS = json
    KV_MODE = none
    TIMESTAMP_FIELDS = time
    TRANSFORMS-routing=duplicate_data
kind: ConfigMap
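
If the forwarder is running but still silent, one way to sanity-check how it merged these .conf files is btool, which ships with the forwarder. A sketch, to be run inside the forwarder container (paths assume the default install location):

```shell
# Show the effective, merged config and which file each setting came from.
/opt/splunkforwarder/bin/splunk btool outputs list --debug
/opt/splunkforwarder/bin/splunk btool inputs list --debug

# List the indexers the forwarder is actually configured to send to
# (prompts for the admin credentials set via SPLUNK_PASSWORD).
/opt/splunkforwarder/bin/splunk list forward-server
```

If btool shows your stanzas but `list forward-server` shows the servers as inactive, the problem is connectivity or TLS rather than the inputs.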

I am not able to view the splunkd logs to troubleshoot whether Splunk is picking up the logs, or what else the issue might be. Thanks.

  • The Splunk forwarder's internal logs are at /opt/splunkforwarder/var/log/splunk. If there are problems sending to the indexer then it will be reported there. – RichG Aug 23 '22 at 10:16
  • Yep, that's the issue: I am not getting any logs there to troubleshoot. @RichG – Bhagya arer Aug 23 '22 at 11:40
  • If the logs don't exist then the forwarder must not be running. – RichG Aug 23 '22 at 12:06
  • Yes. The custom execution runs successfully from Docker and from the Kubernetes pod (in Rancher, if I start Splunk explicitly, it starts), so it seems the env variables are not configured properly. Not sure what exactly has to be changed. – Bhagya arer Aug 23 '22 at 13:17
  • Check whether Splunk is running or not by checking its status: $ docker exec -it -u splunk /bin/bash – Jerin Joy Nov 17 '22 at 08:29
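
Following up on the path from the comments: the forwarder keeps its internal logs under its own install tree, not /var/log, so the shared-logs mount at /var/log will never contain them. A sketch of checking them from outside the pod (pod name is a placeholder):

```shell
POD=<your-app-pod>

# Internal logs live under the forwarder's install directory.
kubectl exec "$POD" -c universalforwarder -- \
  ls /opt/splunkforwarder/var/log/splunk

# splunkd.log records connection and forwarding errors to the indexers.
kubectl exec "$POD" -c universalforwarder -- \
  tail -n 50 /opt/splunkforwarder/var/log/splunk/splunkd.log
```

If that directory is empty or missing, splunkd never started, which points back at the container's startup environment rather than the outputs config.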
