
I have a Splunk server with an index containing 650k events. I want to migrate all of that data from one instance to a new instance. I tried using a migration script with the time modifier -27D@d, but I can only migrate 50k events. -27D@d is the point from which the earliest data is available. Can you please help me here? Here's the code:

import splunklib.client as client
import splunklib.results as results
import json
import requests

service = client.connect(host="host1", port=8089, username="admin", password="xxxx")
rr = results.ResultsReader(service.jobs.export('search index=my_index latest=-27D@d'))

# HEC endpoint on the destination instance
url = 'http://host2:8088/services/collector'
authHeader = {'Authorization': 'Splunk 5fbxxxx'}

for result in rr:
    if isinstance(result, results.Message):
        continue
    elif isinstance(result, dict):
        data = dict(result)['_raw']
        send_string = json.dumps({"event": data, "source": "test"}, ensure_ascii=False).encode('utf8')
        # Send the event to Splunk
        response = requests.post(url, headers=authHeader, data=send_string, verify=False)
        if response.status_code == 200:
            print("Successfully pushed the data to Splunk source")
        else:
            print("Failed to push the data to Splunk source")
Mayank Srivastava
    50,000 is the maximum number of results the search command will generate. You may have to find a way to iterate over your indexed data to get all 650k events. The more common way to migrate data is to copy the index files from one host to another. Tell us more about the architecture of the two Splunk servers so we can help with that. Are they standalone? Does either use an indexer cluster? – RichG Oct 17 '21 at 13:22
  • Consider also that "migrating" using this method will consume some of the license on host2. – RichG Oct 17 '21 at 17:16
  • Both are standalone servers with a basic installation of Splunk; no indexer cluster is used. – Mayank Srivastava Oct 18 '21 at 03:18
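Following RichG's suggestion, one way to get past the 50,000-result cap is to iterate over the indexed data in time windows, exporting and re-sending one chunk at a time. A minimal sketch of that idea, reusing the hosts, token, and index name from the question; the window sizes, function names, and date range are illustrative assumptions, not a tested migration tool (if a single day still exceeds 50k events, shrink the window further):

```python
import json
from datetime import datetime, timedelta


def day_windows(start, end):
    """Yield (earliest, latest) epoch-second pairs covering [start, end) one day at a time."""
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=1), end)
        yield int(cur.timestamp()), int(nxt.timestamp())
        cur = nxt


def migrate(service, hec_url, hec_token, index, start, end):
    """Export each one-day window separately so no single search hits the 50k cap."""
    # Imported lazily so the window helper above works without the SDK installed.
    import requests
    import splunklib.results as results

    headers = {"Authorization": "Splunk " + hec_token}
    for earliest, latest in day_windows(start, end):
        job = service.jobs.export(
            'search index=%s earliest=%d latest=%d' % (index, earliest, latest)
        )
        for result in results.ResultsReader(job):
            if isinstance(result, dict):
                payload = json.dumps(
                    {"event": result["_raw"], "source": "test"},
                    ensure_ascii=False,
                ).encode("utf8")
                requests.post(hec_url, headers=headers, data=payload, verify=False)


# Example wiring against a live connection (credentials as in the question):
# import splunklib.client as client
# service = client.connect(host="host1", port=8089, username="admin", password="xxxx")
# migrate(service, "http://host2:8088/services/collector", "5fbxxxx",
#         index="my_index", start=datetime(2021, 9, 20), end=datetime(2021, 10, 17))
```

Epoch seconds are used for `earliest`/`latest` because they are unambiguous regardless of the server's time format settings. Note RichG's caveat still applies: every re-sent event counts against the license on host2.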

2 Answers


If the index my_index does not exist on host2, then just copy the directory $SPLUNK_DB/my_index to host2, add a my_index stanza to indexes.conf, and restart Splunk.
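For reference, the indexes.conf side of this can be as small as one stanza. A sketch assuming default paths; adjust homePath/coldPath if the original index used custom locations:

```ini
# $SPLUNK_HOME/etc/apps/search/local/indexes.conf on host2
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```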

RichG
  • What if the index already exists? (I am using the `main` index.) – serg06 Jan 15 '23 at 21:46
  • If the index already exists, make sure the bucket IDs do not conflict when copying them. The bucket ID is a number in the fourth part of the bucket name `db___`. – RichG Jan 16 '23 at 00:52
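To check for ID collisions before copying, you can compare that fourth field across the two db directories. A small sketch in Python (pure string handling, no Splunk required; the function names and example bucket names are hypothetical):

```python
def bucket_id(name):
    """Return the numeric ID from a bucket name like db_<newest>_<oldest>_<id>, else None."""
    parts = name.split("_")
    if len(parts) == 4 and parts[0] == "db" and parts[3].isdigit():
        return int(parts[3])
    return None  # hot_* and other directory names are skipped


def conflicting_ids(src_names, dst_names):
    """Bucket IDs present in both the source and destination db directories."""
    src = {bucket_id(n) for n in src_names} - {None}
    dst = {bucket_id(n) for n in dst_names} - {None}
    return sorted(src & dst)
```

In practice you would feed it the directory listings of `$SPLUNK_DB/my_index/db` on both hosts (e.g. via `os.listdir`) and renumber any buckets it reports before copying.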

I managed to do this with the Splunk Docker image. I imagine it's the same with a regular installation.

Note: In this example, $SPLUNK_HOME === /opt/splunk

First I backed it up:

mkdir splunk_backup
cd splunk_backup

# Back up index data
mkdir -p ./opt/splunk/var/lib/splunk
sudo docker cp $container:/opt/splunk/var/lib/splunk/defaultdb ./opt/splunk/var/lib/splunk

# Back up index configurations and dashboards
# - config is at      /opt/splunk/etc/apps/search/local/indexes.conf
# - dashboards are at /opt/splunk/etc/apps/search/local/data/ui/views
mkdir -p ./opt/splunk/etc/apps/search
sudo docker cp $container:/opt/splunk/etc/apps/search/local ./opt/splunk/etc/apps/search

# Back up users and reports
mkdir -p ./opt/splunk/etc
sudo docker cp $container:/opt/splunk/etc/users ./opt/splunk/etc

Then I went to the new server, launched Splunk, and stopped it:

sudo docker run --env SPLUNK_START_ARGS="--accept-license" --env SPLUNK_PASSWORD="FILL_THIS_IN" -p 8000:8000 -p 8088:8088 -p 9997:9997 -d --restart unless-stopped splunk/splunk:latest
sudo docker ps  # wait for it to say (healthy) then grab container ID
sudo docker stop $new_container

Then I restored it on the new server:

cd splunk_backup
sudo docker cp ./opt/splunk/ $new_container:/opt

Then I started the new server back up:

sudo docker start $new_container

As far as I can tell, all of my data, indices, users, reports, and dashboards were copied over successfully!

serg06