Building a SOAR playbook for running commands remotely
You want to build a Splunk SOAR playbook that will perform automated troubleshooting remotely and return the results to SOAR for further automation or analysis.
This article assumes that you already know how to write SOAR playbooks. If not, see Create playbooks in Splunk SOAR, and then come back to this article.
How to use Splunk software for this use case
This playbook is designed to be used as a blueprint to outline some basic troubleshooting of Splunk instances, which you can take and adapt to more specific scenarios within your environment.
This automation runs commands that depend on the host operating system (Linux or Windows), using either the SSH app or the Windows Remote Management app in SOAR. In this example, the upstream filter block has already determined that the operating system is Linux, so the commands are run remotely over SSH.
This troubleshooting scenario is designed to test Splunk instances that have been identified as failing to perform as intended. For example, we have:
- A Splunk forwarder: To confirm network connectivity to the Splunk Deployment Server (DS) for remote configuration management
- A Splunk Heavy Forwarder (HF): For event forwarding, as well as functionality of the Splunk agent itself
To achieve this, we will SSH from SOAR to the Splunk instance, and use TELNET as a basic connectivity test to try to establish a connection to the DS and HF, followed by additional troubleshooting steps.
Here are the components needed:
Action block (TELNET): First we will run the TELNET commands against the Deployment Server (TCP8089) and the Heavy Forwarder (TCP9997).
Use the SSH app's execute program function to telnet to the Splunk Deployment Server using the configuration shown in the following screenshot. This leverages a command value that prepends an echo command, required to 'trick' the system into properly performing the subsequent TELNET command to the deployment server (1.2.3.4) on TCP8089, capturing the results of the test.
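As a minimal sketch, the string placed in the SSH app's command value could look like the following, using the article's example deployment server (1.2.3.4) on TCP8089. The leading echo supplies input so the non-interactive telnet session exits instead of hanging the action:

```python
# Sketch of the SSH app "command" value for the TELNET connectivity test.
# The echo feeds input to telnet so the session closes on its own rather
# than waiting interactively and stalling the SOAR action.
ds_host = "1.2.3.4"
ds_port = 8089
telnet_command = f"echo 'quit' | telnet {ds_host} {ds_port}"
print(telnet_command)
```

The same pattern applies to the Heavy Forwarder check, swapping in its address and TCP9997.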
Action block (Splunk status): Next we run a command to check the status of the Splunk agent to ensure it is running on the problem host. In this example, the Splunk instance was not managed by systemd; therefore, the status was checked directly through the Splunk binary and not systemctl.
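Because this instance is not managed by systemd, the command value calls the Splunk binary directly. A minimal sketch, assuming the default /opt/splunk install path (adjust for your hosts):

```python
# Sketch of the SSH app "command" value for the Splunk status check.
# /opt/splunk is an assumption; substitute your actual install path.
splunk_home = "/opt/splunk"
status_command = f"{splunk_home}/bin/splunk status"
print(status_command)
```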
As in the previous block, we use the command value to run the desired command and capture the results.
Format block: Next we format the results of the previous two actions. Note that Drop None is selected to keep your results clean. 
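The format block template could look something like the following sketch, where the numbered placeholders stand in for the datapaths you select from the two action results (the labels and sample values here are illustrative only):

```python
# Illustrative format-block template; {0} and {1} correspond to the
# datapaths chosen from the TELNET and Splunk status action results.
template = (
    "Telnet connectivity results:\n"
    "{0}\n\n"
    "Splunk agent status:\n"
    "{1}\n"
)
# Hypothetical sample values, just to show the rendered output shape.
print(template.format("Connected to 1.2.3.4 on TCP8089", "splunkd is running"))
```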
Utility block: Use the artifact_update utility to add a custom field to the artifact that marks it as old. This is helpful for future use or filtering, as shown in Remote Script Execution, because it allows us to differentiate between artifacts that have already been processed and those that still require the automation to run.
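What the artifact_update utility achieves can also be sketched against the SOAR REST API (POST to /rest/artifact/&lt;id&gt;). The field name automation_status and its value are assumptions; pick whatever your downstream filters key on:

```python
import json
import urllib.request

def build_processed_payload(field="automation_status", value="old"):
    """Build the CEF update marking an artifact as already processed.
    The field name is hypothetical; choose one your filters key on."""
    return {"cef": {field: value}}

def mark_artifact_processed(base_url, auth_token, artifact_id):
    """Rough equivalent of the artifact_update utility block, done
    directly against the SOAR REST API artifact endpoint."""
    req = urllib.request.Request(
        f"{base_url}/rest/artifact/{artifact_id}",
        data=json.dumps(build_processed_payload()).encode(),
        headers={"ph-auth-token": auth_token},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```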
Action block: Update the ES notable with a comment that the automation has been run and what the results of each command were.
While this example focused on troubleshooting a Splunk agent, you can also reuse this functionality to support automated investigations and incident response. You could design playbooks that run native OS commands to gather evidence (ps, netstat, etc.).
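As a hedged sketch, the kinds of evidence-gathering commands such a playbook might run over SSH could include the following (adapt the list to your hosts and investigation needs):

```python
# Example native OS commands a playbook could run remotely to gather
# evidence; each would be passed as an SSH app "command" value.
evidence_commands = [
    "ps -ef",          # running processes
    "netstat -tunap",  # open sockets and the processes that own them
    "last -n 20",      # recent login activity
    "ls -la /tmp",     # files staged in a common scratch directory
]
for cmd in evidence_commands:
    print(cmd)
```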
Additional resources
This article is one of many in a series on using SOAR automation to improve your SOC processes. Check out the additional playbook guidance or some of the links below to continue getting more value out of Splunk SOAR.
- Splunk Lantern Article: Understanding playbook types in SOAR
- Splunk Lantern Article: Improving SOAR playbook design
- Splunk Lantern Article: Applying useful SOAR playbook design features
- Practical SOAR examples from the field (.conf24) [ recording | slides ]
- Practical SOAR examples from the field: Part 2 (.conf25) [ recording | slides ]

