
I noticed that the Splunk documentation on this site says that this TA should support multiple environments, but looking at the Python scripts in the code, it appears that it doesn't.

SailPoint IIQ version: 8.1p3
Splunk version: 8.0.9
TA version: 2.0.5

After reviewing the Splunk Plugin code (the Python code which Splunk uses to read data from SailPoint), I noticed the following bits of information:

Splunk/etc/apps/Splunk_TA_sailpoint is the directory from which the plugin loads its files. Splunk/etc/apps/Splunk_TA_sailpoint/bin/input_module_sailpoint_identityiq_auditevents.py is the file that caught my attention. The code defines a SINGLE file in which it stores the epoch date, following the logic outlined below:

1.  Initially, the script checks whether the checkpoint file (audit_events_checkpoint.txt) exists by trying to open it for reading
2.  If it doesn't find it, it attempts to create it
3.  If that fails again, it creates the folder structure and then adds the file
4.  After the first three steps, the script opens the file
5.  It then reads the file and pulls in the first value (the Unix/epoch timestamp)
6.  It uses this value as part of its outbound query

#Read the timestamp from the checkpoint file, and create the checkpoint file if necessary
    #The checkpoint file contains the epoch datetime of the 'created' date of the last event seen in the previous execution of the script. 
    checkpoint_file = os.path.join(os.environ['SPLUNK_HOME'], 'etc', 'apps', 'Splunk_TA_sailpoint', 'tmp', "audit_events_checkpoint.txt")
    try:
        file = open(checkpoint_file, 'r')
    except IOError:
        try:
            file = open(checkpoint_file, 'w')
        except IOError:
            os.makedirs(os.path.dirname(checkpoint_file))
            file = open(checkpoint_file, 'w')
            
    with open(checkpoint_file, 'r') as f:
        checkpoint_time = f.readlines()

    #current epoch time in milliseconds 
    # new_checkpoint_time = int((datetime.datetime.utcnow() - datetime.datetime(1970, 1, 1)).total_seconds() *1000)
    
    if len(checkpoint_time) == 1:
        checkpoint_time =int(checkpoint_time[0])
    else:
        checkpoint_time = 1562055000000
        helper.log_info("No checkpoint time available. Setting it to default value.")
    
    #Standard query params, checkpoint_time value is set from what was saved in the checkpoint file
    queryparams= {
         "startTime" : checkpoint_time,
         "startIndex" : 1,
         "count" : 100
    }
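For reference, both the hard-coded fallback 1562055000000 and the checkpoint written to the file are epoch timestamps in milliseconds. A quick sketch of decoding the fallback, plus the same "current epoch time in milliseconds" calculation the script has commented out:

```python
import datetime

# The TA's hard-coded fallback checkpoint, in milliseconds since the Unix epoch.
default_checkpoint_ms = 1562055000000

# Divide by 1000 to get seconds, then convert to a readable UTC datetime.
fallback = datetime.datetime.utcfromtimestamp(default_checkpoint_ms / 1000)
print(fallback)  # 2019-07-02 08:10:00

# The same calculation the script keeps commented out:
# current epoch time in milliseconds
now_ms = int((datetime.datetime.utcnow()
              - datetime.datetime(1970, 1, 1)).total_seconds() * 1000)
print(now_ms)
```

So if the checkpoint file is missing or malformed, every input falls back to querying audit events from July 2019 onward.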

7.  Jumping down to the next reference, we find that the JSON object pulled in is used to derive the new timestamp that the script will use on the next request.
8.  It then writes this value to the file, where it will be reused the next time a call is made.

#Iterate the audit events array and create Splunk events for each one
    invalid_response = isListEmpty(auditEvents)
    if not invalid_response:
        for auditEvent in auditEvents:
 
            data = json.dumps(auditEvent)
            event = helper.new_event(data=data, time=None, host=None, index=helper.get_output_index(), source=helper.get_input_type(), sourcetype=helper.get_sourcetype(), done=True, unbroken=True)
            ew.write_event(event)
 
        #Get the created date of the last audit event in the run and save it as a checkpoint key in the checkpoint file
        list_of_created_date = extract_element_from_json(results, ["auditEvents", "created"])
 
        new_checkpoint_created_date = list_of_created_date[-1]
        helper.log_info("DEBUG New checkpoint date \n{}".format(new_checkpoint_created_date))
 
        #Write new checkpoint key to the checkpoint file
        with open(checkpoint_file, 'r+') as f:
            f.seek(0)
            f.write(str(new_checkpoint_created_date))
            f.truncate()
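That write pattern can be demonstrated in isolation: if two configured inputs share one checkpoint file, the second write simply replaces the first. This minimal sketch (paths and timestamp values are illustrative, not from the TA) mimics the open/seek/write/truncate logic above:

```python
import os
import tempfile

# Simulate two environment inputs writing to the SAME checkpoint file.
checkpoint_file = os.path.join(tempfile.mkdtemp(), "audit_events_checkpoint.txt")

def write_checkpoint(path, value):
    # Create the file on first use, then overwrite in place,
    # mirroring the TA's open('r+') / seek(0) / write / truncate pattern.
    mode = 'r+' if os.path.exists(path) else 'w'
    with open(path, mode) as f:
        f.seek(0)
        f.write(str(value))
        f.truncate()

write_checkpoint(checkpoint_file, 1662000000000)  # "environment A" finishes its run
write_checkpoint(checkpoint_file, 1661000000000)  # "environment B" finishes later

with open(checkpoint_file) as f:
    print(f.read())  # 1661000000000 -- environment A's checkpoint is gone
```

Last writer wins, so whichever environment's input ran most recently dictates the startTime every other environment uses on its next poll.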

So here is my thought as to what is happening: when we enter information into Splunk for the connectors to EACH environment (we have a total of six), the checkpoint_file is being overwritten. I would also assume that each connected environment reads the same timestamp, because they all appear to be pulling it from the same file. Did we miss a configuration, or is this a coding gap?
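If it is a coding gap, one conceivable fix is to derive the checkpoint filename from a per-input identifier so each environment gets its own file. This is only a sketch: `input_name` is a hypothetical parameter, and how to obtain a unique name for each configured input from the TA's helper object is not shown (I have not verified what this TA exposes):

```python
import os

def checkpoint_path(input_name):
    """Build a per-input checkpoint path so each environment keeps its own file.

    input_name is assumed to be a unique identifier for the configured input
    (e.g. the Splunk input stanza name). This is an illustrative sketch, not
    the TA's actual code.
    """
    filename = "audit_events_checkpoint_{}.txt".format(input_name)
    return os.path.join(os.environ.get('SPLUNK_HOME', '/opt/splunk'),
                        'etc', 'apps', 'Splunk_TA_sailpoint', 'tmp', filename)

print(checkpoint_path("prod_us"))   # ...tmp/audit_events_checkpoint_prod_us.txt
print(checkpoint_path("prod_eu"))   # ...tmp/audit_events_checkpoint_prod_eu.txt
```

With distinct paths, one environment's run could no longer clobber another's saved timestamp.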

Bob
  • Try flagging this to the developer (which is SailPoint). It appears the Splunk app is developer-supported, and you can find their details on the TA's page on Splunkbase: https://splunkbase.splunk.com/app/4088/ – Honky Donkey Aug 13 '21 at 13:18
  • Reached out to the developer and posted to SailPoint directly – we'll see what they do... – Bob Aug 19 '21 at 01:28
