Upgrading to Enterprise Security 8.0.x - Configuration and customization

 

This article is part of a comprehensive guide to help you upgrade or migrate pre-8.0.x Splunk Enterprise Security deployments to Splunk Enterprise Security 8.0.x. If you do not feel comfortable completing these steps on your own and would prefer assistance in completing the upgrade, contact our Professional Services experts.

Follow the technical upgrade steps for Splunk Enterprise Security 8.0.x

The step-by-step process of upgrading Splunk Enterprise Security (ES) has not changed from previous versions. However, this release has some caveats that require manual configuration and customization within the application after the technical upgrade process is complete.

Splunk Docs: Upgrade Splunk Enterprise Security

The guidance here only applies to on-premises deployments. Cloud ES will be upgraded by Splunk technical operations.

Manual configuration and customization

The following sections are a collection of manual steps that are required or recommended in order to complete the upgrade to ES 8.0.x.

Required: Validate necessary supporting add-ons (On-premises and CMP)

  1. In the Splunk platform, navigate to Manage Apps and ensure the Splunk Enterprise Security (ES) application and all supporting add-ons (SA-*, DA-*, Mission Control) are on the appropriate versions (8.0.x).
  2. If you upgraded via the web UI and the supporting add-ons were not upgraded or installed along with the ES application, you might need to manually extract them into the apps directory from the install directory inside the ES app directory.
  3. Review the status of the Mission Control application. If it is not already enabled, enable it.
  4. Restart the Splunk platform.
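
As a quick sanity check for step 1, you can also list the installed app versions with a REST search. This is a minimal sketch; the title filters are representative and might need adjusting for your deployment:

    | rest /services/apps/local splunk_server=local
    | search title="SplunkEnterpriseSecuritySuite" OR title="SA-*" OR title="DA-ESS-*" OR title="missioncontrol"
    | table title version disabled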

Required: Create and deploy new required indexes for Enterprise Security 8 (On-premises and CMP)

  1. Download Splunk_TA_ForIndexers with Include index definitions only checked. See Splunk Docs for instructions on downloading this add-on.
  2. From the downloaded app, extract indexes.conf and copy and append it to your existing base app that defines indexes.
  3. Modify the coldPath, homePath and thawedPath settings to reflect the correct volume definitions.
  4. Deploy the indexes app to indexers using the cluster manager or using your normal process.
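
For indexer clusters, the deployment in step 4 typically looks something like the following on the cluster manager. This is a sketch: the app name org_all_indexes is a placeholder, and on Splunk versions prior to 9.0 the directory is master-apps rather than manager-apps.

    # Copy the indexes app into the cluster manager's configuration bundle location
    # (org_all_indexes is a placeholder app name)
    cp -R org_all_indexes $SPLUNK_HOME/etc/manager-apps/

    # Validate and push the bundle to the peer nodes
    $SPLUNK_HOME/bin/splunk validate cluster-bundle
    $SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes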

The new default index definitions that need to be added are:

## missioncontrol

###### MC aux incidents ######
[mc_aux_incidents]
repFactor = auto
coldPath = $SPLUNK_DB/mc_aux_incidents/colddb
homePath = $SPLUNK_DB/mc_aux_incidents/db
thawedPath = $SPLUNK_DB/mc_aux_incidents/thaweddb

###### MC artifacts ######
[mc_artifacts]
repFactor = auto
coldPath = $SPLUNK_DB/mc_artifacts/colddb
homePath = $SPLUNK_DB/mc_artifacts/db
thawedPath = $SPLUNK_DB/mc_artifacts/thaweddb

###### MC investigations ######
[mc_investigations]
repFactor = auto
coldPath = $SPLUNK_DB/mc_investigations/colddb
homePath = $SPLUNK_DB/mc_investigations/db
thawedPath = $SPLUNK_DB/mc_investigations/thaweddb

###### MC events ######
[mc_events]
repFactor = auto
coldPath = $SPLUNK_DB/mc_events/colddb
homePath = $SPLUNK_DB/mc_events/db
thawedPath = $SPLUNK_DB/mc_events/thaweddb

###### MC old incidents ######
[mc_incidents_backup]
repFactor = auto
coldPath = $SPLUNK_DB/mc_incidents_backup/colddb
homePath = $SPLUNK_DB/mc_incidents_backup/db
thawedPath = $SPLUNK_DB/mc_incidents_backup/thaweddb

## SA-ContentVersioning
[cms_main]
homePath   = $SPLUNK_DB/cms_main/db
coldPath   = $SPLUNK_DB/cms_main/colddb
thawedPath = $SPLUNK_DB/cms_main/thaweddb
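
Once the bundle is applied, a quick way to confirm the new indexes exist on the indexers is an eventcount search like the following (a sketch; newly created indexes will show a count of 0 until they receive data):

    | eventcount summarize=false index=mc_* index=cms_main
    | dedup index
    | table index server count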

Required: Correlation search migration - CS to EBD/FBD (On-premises and CMP)

During the upgrade process, all correlation searches that were previously configured should be converted to event-based detections (EBDs). In Splunk Enterprise Security, navigate to Security Content > Content Management, and then filter to show only enabled EBDs. On the right of the screen, all enabled correlation searches should now be labeled as Event-Based Detections.

  • By default, any of these detections that previously had a notable adaptive response action assigned will still create notables, now known as findings. You can spot check these by opening the EBD in the detection editor and ensuring the Finding Output Type is set to Finding.
  • If your correlation search was previously configured only to assign risk, it is set as an Intermediate Finding in the detection editor.
  • After you modify a migrated correlation search, you cannot save it until you specify a risk object (risk annotations are now required). As part of the upgrade process, a good practice is to validate that risk objects and annotations are assigned to each correlation search. Use the following search to identify correlation searches that do not currently have a risk object assigned:

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    ``` Determine what correlation search changes are required to pass ES 8 Content Management validation checks after migration - run on an ES 7.x system ```
    | search disabled=0 is_scheduled=1 action.correlationsearch.enabled=1
    | rename "eai:acl.app" as app
    | eval actions=split(actions,",")
    | eval will_create_a_new_risk_event=if(actions in ("risk","notable"," notable"),0,1)
    | eval needs_risk_object=if(isnull('action.risk.param._risk_message'),1,0)
    | eval needs_description=if(isnull(description) OR description="",1,0)
    | eval needs_notable_title=if(isnull('action.notable.param.rule_title'),1,0)
    | eval needs_notable_description=if(isnull('action.notable.param.rule_description') OR 'action.notable.param.rule_description'="",1,0)
    | eval score=will_create_a_new_risk_event+needs_risk_object+needs_description+needs_notable_title+needs_notable_description
    | sort - score
    | where score > 0
    | table app, title, description, score, will_create_a_new_risk_event, needs_description, needs_risk_object, needs_notable_title, needs_notable_description, actions, action.notable.param.rule_title, action.notable.param.security_domain, action.risk.param._risk, action.risk.param._risk_message, action.notable.param.rule_description

Important: Risk analysis in ES detections and risk notables (now known as findings)

The detection editor in ES 8.0.x now requires the risk analysis (scoring) section to be configured for every detection, regardless of whether you use the risk-based alerting (RBA) methodology for your security monitoring. In addition, if multiple risk objects are configured for a single detection, ES creates a notable and/or risk event for each object identified, which can significantly increase the number of results analysts are accustomed to seeing in their queues. There is currently no workaround for this.

The default event-based detection (not the findings-based detection version) "ATT&CK Tactic Threshold Exceeded for Object over Previous 7 Days" has the Risk Analysis section in the detection editor configured with entities of risk_object_user and risk_object_system. These fields do not exist in the risk index or data model. Because the section is configured (although incorrectly), ES allows you to save and run this detection. However, it errors out when trying to create the corresponding finding in the notable index.

There are two possible workarounds for this issue. You can either change the risk analysis configuration to assign risk to the risk_object field, or you can add evals to account for these fields in the detection SPL. See the screenshots below.

Figure 1. Risk Analysis configuration

Figure 2. Notable creation error in cim_modactions index

Figure 3. Workaround with evals
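
For illustration, the eval workaround shown in Figure 3 might look roughly like the following lines appended to the detection's SPL. This is a hedged sketch rather than the exact shipped fix; it assumes the detection's results carry risk_object and risk_object_type fields:

    ``` Map the generic risk object onto the entity fields the default Risk Analysis configuration expects ```
    | eval risk_object_user=if(risk_object_type="user", risk_object, null())
    | eval risk_object_system=if(risk_object_type="system", risk_object, null())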

Recommended: Custom navigation configuration (On-premises, CMP, and Cloud)

If you have a customized ES navigation bar, it overrides the new navigation bar included in ES 8.0.x. You need to reset the navigation back to default in order to see the new navigation bar pages for version 8.0.x.

  1. Before the upgrade, document or snapshot the menu customizations and back up any custom views (see the sketch after this list). They should not be lost in the upgrade, but backing them up is a best practice.
  2. After the upgrade, navigate to Configure > All Configurations > Navigation, and then click Restore Default Navigation.
  3. To acknowledge the changes, click OK.
  4. After the navigation is set back to default, verify you now see the new ES 8.0.x navigation with “Mission Control”, “Security Content”, etc.
  5. Manually recreate any required custom navigation.
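
For step 1, one simple way to snapshot the existing navigation is to copy the customized navigation XML off the search head before upgrading (a sketch, assuming a default on-premises install where navigation customizations live in the ES app's local directory):

    # Back up the customized ES navigation menu from the search head
    cp $SPLUNK_HOME/etc/apps/SplunkEnterpriseSecuritySuite/local/data/ui/nav/default.xml \
       /tmp/es_nav_backup_$(date +%F).xml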

Recommended: Enable detection versioning (On-premises, CMP, and Cloud)

By default, detection versioning is disabled in Splunk Enterprise Security 8.0.x. If you want this functionality, you need to enable it.

  1. To enable this function, navigate to General Settings > Detection Versions, and then click Turn on.
  2. Wait 10 minutes. Versioning should then be enabled successfully.

Recommended: Migrate Investigations Content (On-premises, CMP, and Cloud)

If you previously leveraged investigations in Enterprise Security 7.x, any existing content will no longer be visible after upgrading to Splunk Enterprise Security 8.0.x. The data is not deleted, but it is no longer accessible from the UI. To restore this content in a read-only state on the upgraded version of ES, follow these instructions:

Pre-upgrade

  1. Create an investigation archive index.
    1. Go to Settings > Indexes > New Index.
    2. Make the index name investigation_archive or a different name according to your naming convention.
    3. Set the size and paths as needed.
  2. Collect data from KV stores related to investigations. Run each of these searches once:
    1. | rest splunk_server=local count=0 /services/storage/investigation/investigation
      | eval _time=create_time
      | collect index=investigation_archive sourcetype=investigation
    2. | rest splunk_server=local count=0 /services/storage/investigation/investigation_attachment
      | eval _time=create_time
      | collect index=investigation_archive sourcetype=investigation_attachment
    3. | rest splunk_server=local count=0 /services/storage/investigation/investigation_event
      | eval _time=create_time
      | collect index=investigation_archive sourcetype=investigation_event
    4. | rest splunk_server=local count=0 /services/storage/investigation/investigation_lead
      | eval _time=create_time
      | collect index=investigation_archive sourcetype=investigation_lead
    5. | rest splunk_server=local count=0 /services/storage/investigation/investigative_canvas
      | eval _time=create_time
      | collect index=investigation_archive sourcetype=investigation_canvas
    6. | rest splunk_server=local count=0 /services/storage/investigation/investigative_canvas_entries
      | eval _time=create_time
      | collect index=investigation_archive sourcetype=investigation_canvas_entries
  3. Verify the collected data is present in the investigation_archive index using the following searches. 
    1. In this one, the result count should match the number of investigations you have. index=investigation_archive sourcetype=investigation | stats dc(id)
    2. If this one returns any count greater than 1 for any unique ID, you have ingested a duplicate investigation or duplicate data. You might have accidentally run the commands in step 2 twice. index=investigation_archive | stats count by id | sort -count

      If you have more than 10K results, the UI might not display potential duplicates. In this case, look at the raw events to verify whether any id value appears more than once.

  4. Export a list of file attachment IDs for all investigations as a CSV using the following search: index=investigation_archive sourcetype=investigation_attachment | table id name | eval ID=replace(id, ".*/", "") | dedup ID
  5. Export the file as a CSV using the UI, name it "attachments.csv" (or a different name according to your naming convention), and copy it to the ES search head.

    If you have more than 10K results, you can append "| outputcsv attachments.csv" to the end of your query to export all results (even though the UI displays only 10K) to $SPLUNK_HOME/var/run/splunk/csv/ on the search head. This can also save you the effort of manually copying the CSV file onto the ES search head.

  6. Use the following script to export attachments and move them to a long-term storage location. You need to move this script to a system that has access to the ES search head, or run it on the ES search head itself.

    If you do not leverage attachments, or only have a handful, it might be easier to just download them via the UI inside of ES one at a time and manually move them to a convenient location. In that case, you do not need to follow the remainder of the steps in this procedure for using the script. Move on to Post-migration.

     
    #!/bin/bash

    # Adjustable variables
    TARGET_DIR="/home/splunk/investigation_attachments" # Directory to save files
    SPLUNK_HOST="myessearchhead.splunk.com" # Change this to your Splunk host
    TARGET_FILE="/home/splunk/attachments.csv" # CSV file containing the IDs and names exported in the step above

    # Ensure the target directory exists; create it if not
    mkdir -p "$TARGET_DIR"

    # Check that the target file exists
    if [ ! -f "$TARGET_FILE" ]; then
      echo "Error: $TARGET_FILE not found!"
      exit 1
    fi

    # Loop through each line in the CSV (skip the header)
    tail -n +2 "$TARGET_FILE" | while IFS=',' read -r ID NAME; do

      # Check that ID and NAME are not empty
      if [ -n "$ID" ] && [ -n "$NAME" ]; then

        # Construct the filename
        FILENAME="${ID}_${NAME}"

        # Run the curl command to download the file
        echo "Downloading attachment for ID: $ID -> $FILENAME"
        curl -u "$SPLUNK_USERNAME:$SPLUNK_PASSWORD" -k -o "$TARGET_DIR/$FILENAME" "https://$SPLUNK_HOST:8089/services/s...&download=true"
      else
        echo "Skipping row with missing ID or NAME."
      fi
    done

    echo "Download process completed. Files saved in: $TARGET_DIR"
    
    
  7. Update the variables as needed and run the script.
    1. Set environment variables for your Splunk username and password.
      1. export SPLUNK_USERNAME="your_username"
      2. export SPLUNK_PASSWORD="your_password"
    2. Update the script variables as needed.
      1. Update the target directory for saving investigation attachments, if needed.
      2. Update the Splunk host name, if needed.
      3. Update the target CSV file to process, if needed.
    3. Ensure the script has appropriate permissions: chmod +x download_attachments.sh
    4. Run the script: ./download_attachments.sh
    5. Verify a successful export and copy your attachments to a secure location for future access. Use this search to count unique attachment IDs, and compare the result against the number of downloaded files (see the file-count check after this list): index=investigation_archive sourcetype=investigation_attachment | stats dc(id)
    6. Unset the environment variables.
      1. unset SPLUNK_USERNAME
      2. unset SPLUNK_PASSWORD
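
To complete the verification in step 5, compare the dc(id) result against the number of files the script downloaded, for example (a sketch; the directory matches the TARGET_DIR placeholder in the script above):

    # Count downloaded attachment files; this should match dc(id) from the verification search
    ls -1 /home/splunk/investigation_attachments | wc -l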

Post-migration

Below is a simple XML dashboard that can be used for querying the imported investigation data. You can install this anywhere that has access to the investigation_archive index:

<form version="1.1" theme="dark">
  <label>Legacy ES Investigations</label>
  <description>Legacy ES Investigation Data</description>
  <fieldset submitButton="false">
    <input type="time" token="time_token" searchWhenChanged="true">
      <label>Time Range</label>
      <default>
        <earliest>0</earliest>
        <latest></latest>
      </default>
    </input>
    <input type="text" token="investigation_id" searchWhenChanged="true">
      <label>Investigation ID</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="text" token="query_token">
      <label>Investigation _raw query</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Number of Investigations</title>
      <single>
        <search>
          <query>index=investigation_archive sourcetype=investigation
| stats dc(id)</query>
          <earliest>$time_token.earliest$</earliest>
          <latest>$time_token.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>Investigations</title>
      <table>
        <search>
          <query>index=investigation_archive sourcetype=investigation title="$investigation_id$" "$query_token$" 
| eval create_time=strftime(create_time, "%Y-%m-%dT%H:%M:%S")|eval mod_time=strftime(mod_time, "%Y-%m-%dT%H:%M:%S")
| rex field=_raw "collaborators=\"(?&lt;collaborators_json&gt;\[{.*?}\])\"" 
| eval clean_collaborators_json = replace(collaborators_json, "[\\{\\}\\[\\]\"\\\\]", "")
| eval clean_collaborators_json = replace(clean_collaborators_json, ", write", "_write")
| eval collaborators = split(clean_collaborators_json, ", ")
| rex field=_raw "status=.{15}(?&lt;status_value&gt;\w+)"
| eval status=status_value 
| rename title as id 
| table create_time,creator,status,collaborators,name,description,id,mod_time,_raw
| sort 0 -create_time</query>
          <earliest>$time_token.earliest$</earliest>
          <latest>$time_token.latest$</latest>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <set token="parent_id">$row.id$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>Investigation Events for $parent_id$</title>
        <search>
          <done>
            <set token="investigation_i">$result.sourcetype$</set>
          </done>
          <query>index=investigation_archive sourcetype=investigation_event parent_id=$parent_id$
| rename title as event_id
| table _time,creator,name,description
| sort -_time</query>
          <earliest>$time_token.earliest$</earliest>
          <latest>$time_token.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>Investigation Attachments for $parent_id$</title>
      <table>
        <title>Investigation Attachments</title>
        <search>
          <query>index=investigation_archive sourcetype=investigation_attachment parent_id IN ($event_id$)
| rename title as attachment_id
| table _time content_type, name, attachment_id
| sort 0 -_time</query>
          <earliest>$time_token.earliest$</earliest>
          <latest>$time_token.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row depends="$foo$">
    <panel>
      <table>
        <title>Event Ids</title>
        <search>
          <done>
            <set token="event_id">$result.id$</set>
          </done>
          <query>index=investigation_archive sourcetype=investigation_event parent_id=$parent_id$ | stats list(title) as id | eval id=mvjoin(id, ",")</query>
          <earliest>$time_token.earliest$</earliest>
          <latest>$time_token.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

Required (SOAR customers only): Pair Splunk SOAR to Enterprise Security (Cloud)

Pairing with Splunk SOAR is not supported for on-premises or CMP deployments of Splunk Enterprise Security (ES) version 8.0.x. Pairing in ES 8.0.x is only supported on Splunk Cloud Platform, with a cloud-to-cloud relationship.

Official guidance for pairing with Splunk SOAR can be found at Pair Splunk SOAR (Cloud) with Splunk Enterprise Security.

  • Written by Randy Trobock and Ted Skinner
  • Professional Services at Splunk