I have implemented a sidecar container to forward my main application's logs to Splunk, using the `splunk/universalforwarder` image. After deployment, both the main application and the forwarder appear to be up and running, but no logs are arriving in the Splunk index I specified. To troubleshoot, I looked for `splunkd.log` (or any other Splunk internal logs) under `/var/log`, but they are not there. Can someone please help me enable or locate these Splunk internal logs?
Relevant piece of my `deployment.yaml`:
```yaml
- name: universalforwarder
  image: <docker-registry>/splunk/universalforwarder:latest
  imagePullPolicy: Always
  env:
    - name: SPLUNK_START_ARGS
      value: "--accept-license --answer-yes"
    - name: SPLUNK_USER
      value: splunk
    - name: SPLUNK_PASSWORD
      value: "****"
    - name: SPLUNK_CMD
      value: add monitor /var/log
  resources:
    limits:
      memory: "312Mi"
      cpu: "300m"
    requests:
      memory: "80Mi"
      cpu: "80m"
  volumeMounts:
    - name: shared-logs
      mountPath: /var/log
```
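For reference, `shared-logs` is a pod-level volume mounted by both the app container and the forwarder sidecar, so the forwarder can read whatever the app writes under `/var/log`. A minimal sketch of that part of the spec (the `emptyDir` choice is an assumption; it is the usual backing for this sidecar pattern):

```yaml
# Pod-level volumes section (sketch): shared by the app container
# and the universalforwarder sidecar via their volumeMounts.
volumes:
  - name: shared-logs
    emptyDir: {}
```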
Relevant piece of my `configmap.yaml`:
```yaml
outputs.conf: |-
  [tcpout]
  defaultGroup = idxm4d-bigdata

  [tcpout:idxm4d-bigdata]
  server = <servers>
  clientCert = /opt/splunkforwarder/etc/auth/ca.pem
  sslPassword = password
  sslVerifyServerCert = false
inputs.conf: |-
  [monitor:/bin/streaming/adapters/logs/output.log]

  [default]
  host = localhost
  index = krp_idx

  [monitor:/bin/streaming/adapters/logs/output.log]
  disabled = false
  sourcetype = log4j
  recursive = True
deploymentclients.conf: |-
  targetUri = <target-uri>
props.conf: |-
  [default]
  TRANSFORMS-routing = duplicate_data

  [telegraf]
  category = Metrics
  description = Telegraf Metrics
  pulldown_type = 1
  DATETIME_CONFIG =
  NO_BINARY_CHECK = true
  SHOULD_LINEMERGE = true
  disabled = false
  INDEXED_EXTRACTIONS = json
  KV_MODE = none
  TIMESTAMP_FIELDS = time
  TRANSFORMS-routing = duplicate_data
kind: ConfigMap
```
I am not able to view the splunkd logs to check whether Splunk is picking up the files, or what else the issue might be. Thanks!
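These are the commands I have been using to look for the internal logs inside the sidecar container; the pod name is a placeholder, and the `/opt/splunkforwarder` path is the default `$SPLUNK_HOME` for the universal forwarder image (which is why I expected the internal logs there rather than under `/var/log`):

```shell
# Placeholder pod name; the container name matches the deployment above.
POD=<my-app-pod>

# List the default location of the forwarder's internal logs
# ($SPLUNK_HOME/var/log/splunk), where splunkd.log normally lives:
kubectl exec "$POD" -c universalforwarder -- \
  ls /opt/splunkforwarder/var/log/splunk

# Tail splunkd.log directly if it exists there:
kubectl exec "$POD" -c universalforwarder -- \
  tail -n 100 /opt/splunkforwarder/var/log/splunk/splunkd.log

# Also check the container's stdout/stderr for startup errors:
kubectl logs "$POD" -c universalforwarder
```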