Analyzing your organization's adoption of risk-based alerting
If you are running the Insider Threat Workshop with your team, you are likely still in the early phases of the risk-based alerting (RBA) maturity curve. If you are already using RBA, document the answers to the following questions with your team before beginning this phase:
- Do you have an established incident response or investigation workflow?
- Do you send risk events to the Risk Notable Playbook in SOAR?
- Do you have a process for identifying false positives and tuning accordingly?
- Do you have a high number of "unknown" risk objects over the previous 7-30 days? (A sample search for checking this follows this list.)
- Have you configured drill-down searches for individual risk rules that can be used within risk notables?
- Do you use any type of advanced analytics within configured risk rules?
- Do you have a governance model for your RBA deployment?
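If you are unsure about the volume of unknown risk objects, the following search is a minimal sketch for checking it. It assumes that risk events are written to the risk index and that unmatched entries carry risk_object_type="unknown"; some deployments use "other" instead, so adjust the value and time range to your environment:
index=risk earliest=-30d risk_object_type="unknown" ```some deployments tag unmatched objects as "other" rather than "unknown"``` | stats dc(risk_object) AS unknown_risk_objects count AS unknown_risk_events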
When you have answered these questions, you can begin the following technical review of your Splunk Enterprise Security (ES) implementation. It involves the following steps:
- Identify your security monitoring requirements and map to current risk rule coverage
- Assess effectiveness of risk notables by reviewing current baselines and contributing risk rules
- Assess completeness and effectiveness of assets and identities
- Review applicable data sources to ensure they are onboarded and CIM compliant
- Review and ensure current risk rules are configured correctly
- Ensure RBA related dashboards and investigative workflows are populating correctly
- Document findings and recommendations in the RBA maturity template
As described in the final step in the list above, the expected outcome of this phase is guidance for advancing your RBA maturity, based on established RBA milestones.
This Insider Threat Workshop is available as a 5-day engagement with Splunk Professional Services. If you do not feel comfortable completing this workshop on your own, or would like hands-on training with any of the concepts and processes included in this offering, contact our Professional Services experts.
Identify your security monitoring requirements and map to current risk rule coverage
Your security requirements should already be highlighted in a summary of the work completed in the insider threat use case selection phase. If you haven't done so already, frame those objectives in the context of MITRE ATT&CK tactics and techniques. Then, run the following searches to output your current risk rule coverage across MITRE ATT&CK.
Show how many risk rules are enabled:
| rest /services/saved/searches | search action.correlationsearch.enabled=1 action.risk=1 NOT disabled=1 | stats count
Get a count of sources (risk rules) by MITRE tactics:
| rest /services/saved/searches | search action.correlationsearch.enabled=1 NOT disabled=1 | spath input=action.correlationsearch.annotations | fields title mitre* Tactic | rename "mitre_attack{}" AS mitre_attack | stats dc(title) AS Sources BY mitre_attack
Output all enabled correlation searches and their corresponding actions and annotations, along with the expected values from Splunk Security Essentials (SSE):
| rest /services/saved/searches | search disabled=0 action.correlationsearch.enabled=1 | rex field=title "\w+\s\-\s(?<name>.*)\s\-\s\w+" | join name [| sseanalytics ] | spath input=action.correlationsearch.annotations | rename mitre_attack{} AS risk_annotation, mitre_id AS sse_mitre_id, mitre_tactic_display AS sse_mitre_tactic_display, mitre_technique_display AS sse_mitre_technique_display | fields title action.notable action.risk risk_annotation sse_*
Assess effectiveness of risk notables by reviewing current baselines and contributing risk rules
Do you receive a high number of false positive risk notables? If so, you might need to adjust thresholds based on current risk entries. You might also consider a dynamic threshold based on the priority or category context surrounding the affected risk object; a sketch of that approach follows the list below. Review risk entry activity for both risk scores and MITRE ATT&CK tactics, and note your findings. The following questions help you collect information about the effectiveness of risk notables:
- Are enabled risk rules triggering as expected?
- Is thresholding configured properly for risk incident rules/risk notables?
- Should your thresholding strategy be revisited?
- Are there a large number of risk rules with zero results? Is this expected behavior?
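As a starting point for a dynamic threshold, the following sketch raises the required risk score for standard-priority objects and lowers it for critical ones. It is illustrative only: the lookup join on nt_host, the priority values, and the score cutoffs are all assumptions to adapt to your environment:
| tstats summariesonly=true sum(All_Risk.calculated_risk_score) AS risk_score dc(source) AS source_count FROM datamodel=Risk.All_Risk BY All_Risk.risk_object All_Risk.risk_object_type | rename All_Risk.risk_object AS risk_object All_Risk.risk_object_type AS risk_object_type ```enrich with asset priority; match on the field appropriate to your risk objects``` | lookup asset_lookup_by_str nt_host AS risk_object OUTPUTNEW priority | eval threshold=case(priority=="critical",50, priority=="high",75, true(),100) | where risk_score >= threshold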
Show the percentage of enabled risk rules with zero results over the previous seven days:
| rest /services/saved/searches | search action.correlationsearch.enabled=1 NOT disabled=1 action.risk=1 | fields title actions | join title type=left [search index=risk | rename source AS title | eventstats count BY title | fields title count] | eventstats count(title) AS total_enabled_risk_rules | where isnull(count) | eventstats count(title) AS risk_rules_zero_results | eval percent_zero_results=(round(risk_rules_zero_results/total_enabled_risk_rules,3)*100) | eval percent_zero_results=percent_zero_results+"%" | fields percent_zero_results | head 1
Show the enabled risk incident rules, their corresponding thresholding logic, and the number of times each has triggered within the past seven days:
| rest splunk_server=local /services/saved/searches | search action.correlationsearch.enabled=1 actions=notable NOT disabled=1 | search search IN ("*datamodel=Risk*","*datamodel:\"Risk*","*index=risk*","*datamodel:Risk*") | fields title actions search | rex max_match=5 field=search "where\s(?<threshold>[^\|]+)" | mvexpand threshold | eval threshold="--> "+threshold | stats values(threshold) as "Risk Notable Thresholding Logic" BY title | join title type=left [ search index=notable | rename source AS title | eventstats count BY title | fields title count] | eval count=if(isnull(count),0,count) | rename count as "Trigger Frequency (7 days)"
Risk factor configuration in the RBA framework should also be examined during this review. The following suggested questions can help guide the conversation:
- Are risk factors enabled and configured properly?
- Do you have any custom risk factors created?
- Is expected data available in your environment for any enabled risk factor?
- Do the mathematical operator and associated value align with your risk modifier strategy?
Show all enabled risk factors, including source app, required data and enrichment, and applicable fields, mathematical operations, and multiplier/add value:
| rest /services/alerts/risk_factors | search NOT disabled=1 | fields eai:acl.app title conditions operation_group value | rename eai:acl.app AS app | spath input=conditions output=field path={}.field | mvexpand field | eval required_data=case(field="source","Risk Rules/Index",field="severity","Raw Events",match(field,"(src_|dest_|user_|risk_object_)"),"Assets/Identities",match(field,"(cve|cvss)"),"Vulnerability Data") | rename operation_group AS operation | stats values(field) as field values(required_data) AS required_data BY app title operation value | eval field=mvjoin(field,",") | eval required_data=mvjoin(required_data,",") | rename field AS "field(s)" | fields app title required_data "field(s)" operation value
Assess completeness and effectiveness of assets and identities
The asset and identity framework is a critical component of RBA, as it contains much of the necessary context surrounding affected risk objects (people and systems). This includes, among other things, category and priority context, which analysts can use to make decisions throughout the incident response process. Evaluate the completeness of this data, including the following:
- Are you collecting assets and identities dynamically, and is the framework configured as an input into Splunk Enterprise Security?
- What context is being collected?
- What context is NOT being collected?
- If data is missing, what is the source of truth for that information, and can it be collected into ES?
The following additional questions, with corresponding searches where applicable, can also guide this assessment:
- Is asset/identity correlation configured properly?
- Are assets and identities configured properly?
- Does the assets and identities implementation have complete coverage across your environment?
- Are risk object fields configured properly? Risk objects should be customer-owned objects, not external objects.
- What is the coverage level of the asset list compared to the network traffic data model contents?
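The following sketch estimates that coverage by checking how many source addresses seen in the Network_Traffic data model resolve against the combined asset list. It assumes the data model is accelerated and that the asset lookup's ip field is the appropriate match key:
| tstats summariesonly=true count FROM datamodel=Network_Traffic BY All_Traffic.src | rename All_Traffic.src AS src ```a non-null matched_ip means the host resolved against the asset list``` | lookup asset_lookup_by_str ip AS src OUTPUT ip AS matched_ip | eval in_asset_list=if(isnotnull(matched_ip),"covered","not covered") | stats dc(src) AS hosts BY in_asset_list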
- Does the category field dropdown contain the correct/expected categories?
- Do you use asset macros in your correlation searches?
- In the Threat Artifacts Dashboard (Security Intelligence > Threat Intelligence > Threat Artifacts):
- Are network related indicators present?
- Are endpoint related indicators present?
- Are certificate related indicators present?
- Are email related indicators present?
- What enabled correlation rules are present that use each of the threat artifact types?
- What out-of-the-box ES Content could be enabled to leverage indicator types that are not being used?
- What is the status of the demo asset list?
- What entity zones, if any, are configured?
- What is the status of correlation setup for assets?
| rest /services/data/props/lookups | search attribute IN (*identity_lookup_expanded*) | fields attribute stanza title | stats dc(stanza) AS dc_sourcetypes dc(title) AS dc_lookups dc(attribute) AS dc_attributes values(stanza) AS stanzas, values(title) AS lookups, values(attribute) AS attributes | eval status=case(isnull(stanzas),"Disabled for All Sourcetypes",stanzas="default","Enabled for All Sourcetypes",1==1,"Enabled for Some Sourcetypes") | fields status
- What is the status of applicable global settings for assets (enforce_identityLookup, enforce_macros, enforce_props, enforce_replicate, enforce_transforms, overlay_cidr, entity_merge)?
- What are the contributing asset modular inputs and associated lookup generating searches? Use these to determine whether assets are being populated dynamically.
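One way to approximate the list of lookup generating searches is to query saved searches that follow the conventional naming pattern; the "*Lookup Gen*" filter below is an assumption and might not match custom content:
| rest splunk_server=local /services/saved/searches | search title="*Lookup Gen*" NOT disabled=1 | fields title cron_schedule search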
- What are all the header values in the combined asset list, and what percentage complete is each column?
| rest splunk_server=local /services/search/jobs/export search=" | inputlookup asset_lookup_by_str | foreach * [eval <<FIELD>> = replace(replace(replace(replace(mvjoin(<<FIELD>>,\"@@MVField@@\"), \"\\n\", \"@@NewLine@@\"), \"\\r\", \"@@CarriageReturn@@\"), \"\\\"\", \"@@DoubleQuote@@\"), \"NULL\", \"@@NULL@@\")] | fillnull value=NULL | rename _* AS tmp_*" output_mode=csv | fields value | makemv tokenizer="([^\n]+)" value | eval header=mvindex(value,0), value=mvindex(value,1,mvcount(value)) | makemv tokenizer="(\"[^\"]+\"|[^,]+)" header | mvexpand value | makemv tokenizer="(\"[^\"]+\"|[^,]+)" value | eval tuple=mvzip(header,value,"#####") | fields tuple | eval primarykey=md5(tostring(tuple)) | mvexpand tuple | rex field=tuple "^(?P<field>.*)#{5}(?P<value>.*)$" | eval field=trim(field,"\""), value=if(value=="NULL","",trim(value,"\"")) | fields primarykey field value | eval {field}=value | fields - name, field, value | stats values(*) AS * BY primarykey | fields - primarykey | rename tmp_* AS _* | fieldformat _time=if(isint(_time),strftime(_time, "%s"),_time) | foreach * [ eval <<FIELD>> = split(replace(replace(replace(replace(<<FIELD>>, "@@NewLine@@", " "), "@@CarriageReturn@@", ""), "@@DoubleQuote@@", "\""), "@@NULL@@", "NULL"),"@@MVField@@") ] ```everything above should be contained in a macro``` | eventstats count | rename * AS myfield_* | foreach myfield_* [eval <<FIELD>> = if(isnull(<<FIELD>>),null,1)] | fields - myfield__* | stats sum(*) AS * | rename myfield_count AS count | foreach myfield_* [eval <<FIELD>>=round((<<FIELD>>/count)*100,2) | eval <<FIELD>>=<<FIELD>>+"%"] | rename myfield_* AS * | fields - count
- What is the total number of assets in the combined asset list?
| rest splunk_server=local /services/search/jobs/export search=" | inputlookup asset_lookup_by_str | foreach * [eval <<FIELD>> = replace(replace(replace(replace(mvjoin(<<FIELD>>,\"@@MVField@@\"), \"\\n\", \"@@NewLine@@\"), \"\\r\", \"@@CarriageReturn@@\"), \"\\\"\", \"@@DoubleQuote@@\"), \"NULL\", \"@@NULL@@\")] | fillnull value=NULL | rename _* AS tmp_*" output_mode=csv | fields value | makemv tokenizer="([^\n]+)" value | eval header=mvindex(value,0), value=mvindex(value,1,mvcount(value)) | makemv tokenizer="(\"[^\"]+\"|[^,]+)" header | mvexpand value | makemv tokenizer="(\"[^\"]+\"|[^,]+)" value | eval tuple=mvzip(header,value,"#####") | fields tuple | eval primarykey=md5(tostring(tuple)) | mvexpand tuple | rex field=tuple "^(?P<field>.*)#{5}(?P<value>.*)$" | eval field=trim(field,"\""), value=if(value=="NULL","",trim(value,"\"")) | fields primarykey field value | eval {field}=value | fields - name, field, value | stats values(*) AS * BY primarykey | fields - primarykey | rename tmp_* AS _* | fieldformat _time=if(isint(_time),strftime(_time, "%s"),_time) | foreach * [ eval <<FIELD>> = split(replace(replace(replace(replace(<<FIELD>>, "@@NewLine@@", " "), "@@CarriageReturn@@", ""), "@@DoubleQuote@@", "\""), "@@NULL@@", "NULL"),"@@MVField@@") ] ```everything above should be contained in a macro``` | stats count
- How many CIDR ranges are configured to contribute to the combined asset list?
| rest splunk_server=local /services/search/jobs/export search=" | inputlookup asset_lookup_by_cidr | foreach * [eval <<FIELD>> = replace(replace(replace(replace(mvjoin(<<FIELD>>,\"@@MVField@@\"), \"\\n\", \"@@NewLine@@\"), \"\\r\", \"@@CarriageReturn@@\"), \"\\\"\", \"@@DoubleQuote@@\"), \"NULL\", \"@@NULL@@\")] | fillnull value=NULL | rename _* AS tmp_*" output_mode=csv | fields value | makemv tokenizer="([^\n]+)" value | eval header=mvindex(value,0), value=mvindex(value,1,mvcount(value)) | makemv tokenizer="(\"[^\"]+\"|[^,]+)" header | mvexpand value | makemv tokenizer="(\"[^\"]+\"|[^,]+)" value | eval tuple=mvzip(header,value,"#####") | fields tuple | eval primarykey=md5(tostring(tuple)) | mvexpand tuple | rex field=tuple "^(?P<field>.*)#{5}(?P<value>.*)$" | eval field=trim(field,"\""), value=if(value=="NULL","",trim(value,"\"")) | fields primarykey field value | eval {field}=value | fields - name, field, value | stats values(*) AS * BY primarykey | fields - primarykey | rename tmp_* AS _* | fieldformat _time=if(isint(_time),strftime(_time, "%s"),_time) | foreach * [ eval <<FIELD>> = split(replace(replace(replace(replace(<<FIELD>>, "@@NewLine@@", " "), "@@CarriageReturn@@", ""), "@@DoubleQuote@@", "\""), "@@NULL@@", "NULL"),"@@MVField@@") ] ```everything above should be contained in a macro``` | stats count
- How many asset records have possible excessive merging?
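No single search defines excessive merging, but as one possible heuristic, the following sketch flags combined asset records whose ip, nt_host, or dns fields carry an unusually large number of merged values. The width threshold of 10 is an arbitrary assumption:
| inputlookup asset_lookup_by_str ```flag records where any key field holds more than 10 merged values``` | eval merge_width=max(mvcount(ip), mvcount(nt_host), mvcount(dns)) | where merge_width > 10 | stats count AS possibly_overmerged_assets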
- How many assets are represented by priority?
| rest splunk_server=local /services/search/jobs/export search=" | inputlookup asset_lookup_by_str | foreach * [eval <<FIELD>> = replace(replace(replace(replace(mvjoin(<<FIELD>>,\"@@MVField@@\"), \"\\n\", \"@@NewLine@@\"), \"\\r\", \"@@CarriageReturn@@\"), \"\\\"\", \"@@DoubleQuote@@\"), \"NULL\", \"@@NULL@@\")] | fillnull value=NULL | rename _* AS tmp_*" output_mode=csv | fields value | makemv tokenizer="([^\n]+)" value | eval header=mvindex(value,0), value=mvindex(value,1,mvcount(value)) | makemv tokenizer="(\"[^\"]+\"|[^,]+)" header | mvexpand value | makemv tokenizer="(\"[^\"]+\"|[^,]+)" value | eval tuple=mvzip(header,value,"#####") | fields tuple | eval primarykey=md5(tostring(tuple)) | mvexpand tuple | rex field=tuple "^(?P<field>.*)#{5}(?P<value>.*)$" | eval field=trim(field,"\""), value=if(value=="NULL","",trim(value,"\"")) | fields primarykey field value | eval {field}=value | fields - name, field, value | stats values(*) AS * BY primarykey | fields - primarykey | rename tmp_* AS _* | fieldformat _time=if(isint(_time),strftime(_time, "%s"),_time) | foreach * [ eval <<FIELD>> = split(replace(replace(replace(replace(<<FIELD>>, "@@NewLine@@", " "), "@@CarriageReturn@@", ""), "@@DoubleQuote@@", "\""), "@@NULL@@", "NULL"),"@@MVField@@") ] ```everything above should be contained in a macro``` | stats count BY priority
- How many assets are represented by category?
| rest splunk_server=local /services/search/jobs/export search=" | inputlookup asset_lookup_by_str | foreach * [eval <<FIELD>> = replace(replace(replace(replace(mvjoin(<<FIELD>>,\"@@MVField@@\"), \"\\n\", \"@@NewLine@@\"), \"\\r\", \"@@CarriageReturn@@\"), \"\\\"\", \"@@DoubleQuote@@\"), \"NULL\", \"@@NULL@@\")] | fillnull value=NULL | rename _* AS tmp_*" output_mode=csv | fields value | makemv tokenizer="([^\n]+)" value | eval header=mvindex(value,0), value=mvindex(value,1,mvcount(value)) | makemv tokenizer="(\"[^\"]+\"|[^,]+)" header | mvexpand value | makemv tokenizer="(\"[^\"]+\"|[^,]+)" value | eval tuple=mvzip(header,value,"#####") | fields tuple | eval primarykey=md5(tostring(tuple)) | mvexpand tuple | rex field=tuple "^(?P<field>.*)#{5}(?P<value>.*)$" | eval field=trim(field,"\""), value=if(value=="NULL","",trim(value,"\"")) | fields primarykey field value | eval {field}=value | fields - name, field, value | stats values(*) AS * BY primarykey | fields - primarykey | rename tmp_* AS _* | fieldformat _time=if(isint(_time),strftime(_time, "%s"),_time) | foreach * [ eval <<FIELD>> = split(replace(replace(replace(replace(<<FIELD>>, "@@NewLine@@", " "), "@@CarriageReturn@@", ""), "@@DoubleQuote@@", "\""), "@@NULL@@", "NULL"),"@@MVField@@") ] ```everything above should be contained in a macro``` | stats count BY category
- How many assets are represented by business unit?
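The REST export wrapper used in the searches above handles multivalue and quoting edge cases; as a simplified sketch, you can approximate the business unit breakdown directly, assuming your search head role can read the lookup:
| inputlookup asset_lookup_by_str | fillnull value="unassigned" bunit | stats count BY bunit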
- How many errors are being generated, over time, across the asset/identity correlation logs (modular input, entity merge, and rest handler)?
index=_internal sourcetype=identity_correlation* ERROR | timechart count BY sourcetype
- What are the most recent errors generated by the modular input, entity merge, and rest handler logs?
- Latest modular input errors:
index=_* sourcetype=identity_correlation:modular_input ERROR | bucket _time span=10m | stats values(_raw) as message BY _time | sort - _time | head 1 | fields - _time
- Latest entity merge errors:
index=_* sourcetype=identity_correlation:merge ERROR | bucket _time span=10m | stats values(_raw) AS message BY _time | sort - _time | head 1 | fields - _time
- Latest rest handler errors:
index=_* sourcetype=identity_correlation:rest_handler ERROR | bucket _time span=10m | stats values(_raw) AS message BY _time | sort - _time | head 1 | fields - _time
- What percentage of detected risk objects in the risk rules have a matching record in assets or identities?
index=risk | stats dc(risk_object_asset_id) AS count_risk_object_asset_id dc(risk_object_identity_id) AS count_risk_object_identity_id dc(risk_object) AS total_risk_objects | eval matches=round(((count_risk_object_asset_id+count_risk_object_identity_id)/total_risk_objects)*100,0) | eval percent_matches=matches+"%" | fields percent_matches
Review applicable data sources to ensure they are onboarded and CIM compliant
In this step, you want to answer the following questions:
- Are you collecting all of the data sources prescribed in the insider threat use case selection phase? If there are missing data sources, review with your team to determine why.
- Is that data fully CIM compliant? The Security Essentials app has a utility to review the CIM compliance of your data if you have set up Data Inventory Introspection in the app.
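Beyond the Security Essentials utility, you can spot-check field-level CIM compliance for a single data model. The following sketch, which assumes the Authentication data model is accelerated and uses user and src as example required fields, reports how often those fields are populated per source type:
| tstats summariesonly=true count AS total_events count(Authentication.user) AS has_user count(Authentication.src) AS has_src FROM datamodel=Authentication BY sourcetype | eval pct_user=round(has_user/total_events*100,1)+"%", pct_src=round(has_src/total_events*100,1)+"%" | fields sourcetype total_events pct_user pct_src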
Return a table showing Splunk Enterprise Security relevant data models and the source types found in ES:
| makeresults | eval title="Authentication,Change,Email,Endpoint,Intrusion_Detection,Malware,Network_Resolution,Network_Sessions,Network_Traffic,Vulnerabilities,Web" | makemv title delim="," | mvexpand title | fields - _time | map maxsearches=100 search=" | tstats summariesonly=true allow_old_summaries=true values(sourcetype) AS sourcetype FROM datamodel=$$title$$ | eval datamodel=\"$$title$$\"" | fields datamodel sourcetype | eval sourcetype=mvjoin(sourcetype,",")
Get a list of source types in the Splunk stack, ordered by volume:
index=_internal source=*license_usage.log type=Usage pool=* | stats sum(b) AS b BY st | eval MBs=round(b/1024/1024,2) | rename st AS SourceType MBs AS "MBs Used" | fields SourceType "MBs Used" | sort 20 - "MBs Used"
Visualize the volume of top ingested data sources as a line graph:
index=_internal source=*license_usage.log type=Usage pool=* | eval MBs=round(b/1024/1024,2) | timechart sum(MBs) BY s
Get a list of source types that have not sent data in the last three days (259,200 seconds). There is a commented line in the sample SPL that can be used to report on hosts that have stopped sending data, instead of source types:
noop ```| append [ |metadata type=hosts | table *]``` | append [ |metadata type=sourcetypes | table *] | eval t = now() - lastTime | where t > 259200 | eval name = coalesce(host,sourcetype) | table name t lastTime totalCount type | convert ctime(t) timeformat="%H:%M:%S" | rename t AS "timeSinceEvent" | convert ctime(lastTime) timeformat="%m/%d/%Y %H:%M:%S %z"
Get a list of source types along with a chart of counts of various parsing issues that exist in the data:
index=_internal splunk_server=* source=*splunkd.log* (log_level=ERROR OR log_level=WARN) (component=AggregatorMiningProcessor OR component=DateParserVerbose OR component=LineBreakingProcessor) | rex field=event_message "Context: source(::|=)(?<context_source>[^\\|]*?)\\|host(::|=)(?<context_host>[^\\|]*?)\\|(?<context_sourcetype>[^\\|]*?)\\|" | eval data_source=if((isnull(data_source) AND isnotnull(context_source)),context_source,data_source), data_host=if((isnull(data_host) AND isnotnull(context_host)),context_host,data_host), data_sourcetype=if((isnull(data_sourcetype) AND isnotnull(context_sourcetype)),context_sourcetype,data_sourcetype) | stats count(eval(component=="LineBreakingProcessor" OR component=="DateParserVerbose" OR component=="AggregatorMiningProcessor")) AS total_issues dc(data_host) AS "Host Count" dc(data_source) AS "Source Count" count(eval(component=="LineBreakingProcessor")) AS "Line Breaking Issues" count(eval(component=="DateParserVerbose")) AS "Timestamp Parsing Issues" count(eval(component=="AggregatorMiningProcessor")) AS "Aggregation Issues" BY data_sourcetype | sort - total_issues | rename data_sourcetype AS Sourcetype, total_issues AS "Total Issues"
Review and ensure current risk rules are configured correctly
This section contains a list of risk rule attributes that should be checked, along with suggested searches for each evaluation.
If you install the Security Detection Insights app on your search head, these RBA rule checks can be semi-automated. All of the checks listed in this section are included in a dashboard within the app, which helps this part of the workshop move much faster.
Detection is throttled:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') | fillnull value="NONE ⚠️" action.notable.param.security_domain alert.suppress.fields alert.suppress.period | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | search isEnabled = True | eval isThrottled = if('alert.suppress'==1, "Yes", "No") | rename eai:acl.app AS Application, title AS Name, action.notable.param.security_domain AS Domain, isEnabled AS "Enabled?", isThrottled AS "Throttled?", alert.suppress.fields AS "Throttling By Fields", alert.suppress.period AS "Throttling Window" | table Name, "Enabled?", "Provided By", Application, Domain, "Throttled?", "Throttling By Fields", "Throttling Window"
Detection has a contributing event search:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') | fillnull value="NONE ⚠️" action.notable.param.security_domain | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | search isEnabled = True | eval hasCESearch = if((isnull('action.notable.param.drilldown_search') OR 'action.notable.param.drilldown_search' == "") AND (isnull('action.notable.param.drilldown_searches') OR 'action.notable.param.drilldown_searches' IN ("", "[]")), "No", "Yes") | eval drilldown_search = if(isnotnull('action.notable.param.drilldown_search') AND 'action.notable.param.drilldown_search'!="", 'action.notable.param.drilldown_search', json_extract('action.notable.param.drilldown_searches',"{}.search")) | rename eai:acl.app AS Application, title AS Name, action.notable.param.security_domain AS Domain, isEnabled AS "Enabled?", hasCESearch AS "Has Search?", drilldown_search AS "Contributing Events Search" | table Name, "Enabled?", "Provided By", Application, Domain, "Has Search?", "Contributing Events Search"
Detection has next steps defined:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') | fillnull value="NONE ⚠️" action.notable.param.security_domain | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | search isEnabled = True | eval hasNextSteps = if(isnotnull('action.notable.param.next_steps'), "Yes", "No") | rename eai:acl.app AS Application, title AS Name, action.notable.param.security_domain AS Domain, isEnabled AS "Enabled?", hasNextSteps AS "Has Next Steps?" | table Name, "Enabled?", "Provided By", Application, Domain, "Has Next Steps?"
Detection has multiple contributing events searches:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') | fillnull value="NONE ⚠️" | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | eval searchCount = mvcount(json_array_to_mv(json_extract('action.notable.param.drilldown_searches',"{}.search"))) | eval hasCESearch = if(isnull(searchCount), "No", "Yes") | spath input=action.notable.param.drilldown_searches | search isEnabled = "True" | eval key = mvzip('{}.name', '{}.search', "^") | mvexpand key | streamstats count AS index by title | eval searchIndex = index + " of " + searchCount | eval "Search Name" = mvindex(split(key, "^"), 0) | eval "Search Index" = searchIndex | eval "Contributing Events Search" = mvindex(split(key, "^"), 1) | rename eai:acl.app AS Application, title AS Name, action.notable.param.security_domain AS Domain, isEnabled AS "Enabled?", hasCESearch AS "Has Multiple Searches?", searchIndex AS "Search Index" | table Name, "Enabled?", "Provided By", Application, Domain, "Has Multiple Searches?", "Search Name", "Contributing Events Search"
Detection is mapped to atomic red team tests:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') | fillnull value="NONE ⚠️" action.notable.param.security_domain | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | search isEnabled = "True" | eval hasARTMapping = if(match('action.correlationsearch.annotations',"atomic_red_team"), "Yes", "No") | spath input=action.correlationsearch.annotations | rename atomic_red_team{} AS atomic_red_team | mvexpand atomic_red_team | rename eai:acl.app AS Application, title AS Name, action.notable.param.security_domain AS Domain, isEnabled AS "Enabled?", hasARTMapping AS "Has Atomic Mapping?", atomic_red_team AS "Atomic Red Team Test Mapping" | table Name, "Enabled?", "Provided By", Application, Domain, "Has Atomic Mapping?", "Atomic Red Team Test Mapping"
Detection is mapped to the kill chain:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') | fillnull value="NONE ⚠️" action.notable.param.security_domain | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | search isEnabled = "True" | eval mappedToKillChain = if(match('action.correlationsearch.annotations',"kill_chain_phases"), "Yes", "No") | rename eai:acl.app AS Application, title AS Name, action.notable.param.security_domain AS Domain, isEnabled AS "Enabled?", mappedToKillChain AS "Mapped to KillChain?" | table Name, "Enabled?", "Provided By", Application, Domain, "Mapped to KillChain?"
Detection is mapped to MITRE ATT&CK:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') | fillnull value="NONE ⚠️" action.notable.param.security_domain | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | search isEnabled = "True" | eval mappedToMitre = if(match('action.correlationsearch.annotations',"mitre_attack"), "Yes", "No") | spath input=action.correlationsearch.annotations | rename mitre_attack{} AS mitre_technique | rename eai:acl.app AS Application, title AS Name, action.notable.param.security_domain AS Domain, isEnabled AS "Enabled?", mappedToMitre AS "Mapped to MITRE?", mitre_technique AS "MITRE Technique(s)" | table Name, "Enabled?", "Provided By", Application, Domain, "Mapped to MITRE?", "MITRE Technique(s)"
Risk rule with threat object defined:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') AND match('actions',"risk") | fillnull value="NONE ⚠️" action.notable.param.security_domain | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | search isEnabled = "True" | eval hasThreatObject = if(match('action.risk.param._risk',"threat_object_field"), "Yes", "No") | spath input=action.risk.param._risk | rename eai:acl.app AS Application, title AS Name, action.notable.param.security_domain AS Domain, isEnabled AS "Enabled?", hasThreatObject AS "Has Threat Object?", {}.threat_object_field AS "Threat Object", {}.threat_object_type AS "Threat Object Type" | table Name, "Enabled?", "Provided By", Application, Domain, "Has Threat Object?", "Threat Object", "Threat Object Type"
Detection generates notable events:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') | fillnull value="NONE ⚠️" action.notable.param.security_domain | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | search isEnabled = "True" | eval generatesNotable = if(match('actions',"notable"), "Yes", "No") | rename eai:acl.app AS Application, title AS Name, action.notable.param.security_domain AS Domain, isEnabled AS "Enabled?", generatesNotable AS "Generates Notable?" | table Name, "Enabled?", "Provided By", Application, Domain, "Generates Notable?"
Risk rule with risk object defined:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') AND match('actions',"risk") | fillnull value="NONE ⚠️" action.notable.param.security_domain | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | search isEnabled = "True" | eval hasRiskObject = if(match('action.risk.param._risk',"risk_object_field") OR isnotnull('action.risk.param._risk_object'), "Yes", "No") | spath input=action.risk.param._risk | eval riskObject = coalesce('{}.risk_object_field','action.risk.param._risk_object') | eval riskObjectType = coalesce('{}.risk_object_type','action.risk.param._risk_object_type') | rename eai:acl.app AS Application, title as Name, action.notable.param.security_domain AS Domain, isEnabled AS "Enabled?", hasRiskObject AS "Has Risk Object?", riskObject AS "Risk Object", riskObjectType AS "Risk Object Type" | table Name, "Enabled?", "Provided By", Application, Domain, "Has Risk Object?", "Risk Object", "Risk Object Type"
Detection generates risk:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') | fillnull value="NONE ⚠️" action.notable.param.security_domain | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | search isEnabled = "True" | eval generatesRisk = if(match('actions',"risk"), "Yes", "No") | rename eai:acl.app as Application, title AS Name, action.notable.param.security_domain AS Domain, isEnabled AS "Enabled?", generatesRisk AS "Generates Risk?" | table Name, "Enabled?", "Provided By", Application, Domain, "Generates Risk?"
Detection uses Threat Intelligence Management action:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') | fillnull value="NONE ⚠️" action.notable.param.security_domain | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | search isEnabled = "True" | eval usesTIM = if(match('actions',"trustar_submit_event"), "Yes", "No") | rename eai:acl.app AS Application, title as Name, action.notable.param.security_domain AS Domain, isEnabled as "Enabled?", usesTIM AS "Uses Threat Intelligence Management?" | table Name, "Enabled?", "Provided By", Application, Domain, "Uses Threat Intelligence Management?"
Detection uses other actions:
| rest splunk_server=local count=0 /services/saved/searches | where isnotnull('action.correlationsearch.enabled') | fillnull value="NONE ⚠️" action.notable.param.security_domain | eval "Provided By" = 'eai:acl.app' | eval isEnabled = if(disabled==0, "True", "False") | search isEnabled = "True" | eval usesOtherActions = if(replace(replace(actions, "notable|risk|trustar_submit_event", ""), "(^,\s+|\s*,$)", "")!="", "Yes", "No") | rename eai:acl.app as Application, title as Name, action.notable.param.security_domain AS Domain, isEnabled AS "Enabled?", usesOtherActions AS "Uses other actions?", actions AS Actions | table Name, "Enabled?", "Provided By", Application, Domain, "Uses other actions?", Actions
Ensure RBA related dashboards and investigative workflows are populating correctly
- Review the out-of-the-box dashboards pertaining to risk. Are all panels populating as expected? If not, determine why.
- Review investigative workbench panels for both asset and user. Are all panels populating as expected? If not, determine why.
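If risk-related panels are empty, a quick sanity check is to confirm that the underlying data is populated at all. This sketch assumes the Risk data model is accelerated; if it returns nothing, the dashboards have no risk events to display:
| tstats summariesonly=true count FROM datamodel=Risk.All_Risk BY source | sort - count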
Document findings and recommendations in the RBA maturity template
Download the RBA Maturity Plan template and use it to formulate a report of your RBA assessment.