Splunk Lantern

Deploying and managing your Splunk POD environment

Splunk POD is an integrated hardware and software solution that combines Cisco UCS servers, Nexus switches, and the Splunk Enterprise platform to deliver a predictable, high-performance Splunk experience. It uses the Splunk Operator for Kubernetes (SOK) to automate deployment and management, together with a bespoke tool, the Splunk Kubernetes Installer (SKI), to build the Kubernetes and Splunk environment, reducing setup time from weeks to hours.

Core architecture concepts

Your Splunk POD environment consists of a few key node types:

  • Bastion node: This is your management server. It hosts the Kubernetes installer and is where you will run all commands to manage the cluster. It is not part of the Kubernetes cluster itself.
  • Controller nodes (3): These nodes host the Kubernetes control plane, providing orchestration and high availability for the cluster.
  • Worker nodes: These nodes do the heavy lifting, running all the Kubernetes pods for Splunk components (search heads, indexers, etc.) and the supporting object store (SeaweedFS).

All installer commands are run from the bastion node and reference a central configuration file, typically named cluster-config.yaml. This file defines your cluster's topology and is critical for all management operations.

Performing your first deployment

Follow these steps to deploy your Splunk POD cluster for the first time. All actions are performed from the bastion node unless otherwise specified.

  1. System preparation: Stage the required resources and confirm that every node is configured correctly.
  2. Prepare the bastion host: Gather the Kubernetes and Splunk artifacts needed for installation.
  3. Build the static cluster configuration file: Create the YAML file that defines the environment.
  4. Run the installation: Run the SKI with your generated cluster configuration YAML file.
  5. Monitor and validate the cluster: Confirm that what is deployed matches what you defined.

Step 1: System preparation

Before running the installer, ensure that all your servers (bastion, controllers, and workers) meet the pre-installation requirements:

  • Operating system: Red Hat Enterprise Linux (RHEL) 9.5+ is installed.
  • System settings: SELinux and Transparent Huge Pages (THP) are disabled, and time synchronization (NTP/Chrony) is configured on all nodes.
  • SSH access: You have configured pre-authenticated SSH access from the bastion node to all controller and worker nodes. This typically involves creating a dedicated user (for example, splunkadmin) on each node with passwordless sudo privileges and distributing an SSH public key to that user's authorized_keys file.
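
The key distribution in the last bullet can be scripted from the bastion. The following sketch assumes a splunkadmin user already exists on each node with passwordless sudo; the key path and node IP addresses are placeholders for your own values. The helper only prints the commands so that you can review them before executing:

```shell
# Generate the key pair once on the bastion (run manually):
#   ssh-keygen -t ed25519 -f ~/.ssh/splunkpod -N ""

# gen_dist_cmds prints, for each node IP, the ssh-copy-id command that installs
# the public key, plus a follow-up check that passwordless sudo works.
gen_dist_cmds() {
  key="$1"; shift
  for ip in "$@"; do
    echo "ssh-copy-id -i ${key}.pub splunkadmin@${ip}"
    echo "ssh -i ${key} splunkadmin@${ip} 'sudo -n true && echo sudo-ok'"
  done
}

# Review the printed commands, then append "| sh" to execute them:
gen_dist_cmds "$HOME/.ssh/splunkpod" UUU.YYY.XXX.AAD UUU.YYY.XXX.AAE
```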

Step 2: Prepare the bastion host

Place the following required files on your bastion server:

  1. The kubernetes-installer-standalone binary.
  2. Your Splunk Enterprise license file (for example, enterprise.lic).
  3. The SSH private key that corresponds to the public key you distributed in Step 1.
  4. A static cluster configuration file, which you will create in the next step.
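
Before moving on, it can help to confirm that everything is staged. Here is a minimal sketch, assuming the four files sit in the installer's working directory; the file names follow the examples in this article, so adjust them to your environment:

```shell
# check_files reports any of its arguments that do not exist on disk.
check_files() {
  missing=0
  for f in "$@"; do
    [ -e "$f" ] || { echo "MISSING: $f"; missing=1; }
  done
  return "$missing"
}

check_files kubernetes-installer-standalone enterprise.lic \
            cluster-config.yaml nshane-key.pem \
  && echo "all required files present" \
  || echo "stage the missing files before deploying"

# Also ensure the private key is not world-readable, or SSH will reject it:
# chmod 600 nshane-key.pem
```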

Step 3: Build the static cluster configuration file

Create a YAML file named cluster-config.yaml. This file acts as the blueprint for your entire deployment, telling the installer which nodes are controllers, which are workers, and what apps to install.

You must customize this file with the correct IP addresses, SSH user and key paths, and app locations for your environment.

The following example is for a pod-medium-es deployment. The file must begin with the leading “---” document separator, or the YAML will be incomplete.

---
apiVersion: enterprise.splunk.com/v1
kind: KubernetesCluster
profile: pod-medium-es

licenses:
  - /path/to/license_file/enterprise.lic

ssh:
  user: "splunkadmin"
  privateKey: "./nshane-key.pem"

controllers:
  - address: "UUU.YYY.XXX.AAA"
  - address: "UUU.YYY.XXX.AAB"
  - address: "UUU.YYY.XXX.AAC"

workers:
  - address: "UUU.YYY.XXX.AAD"
  - address: "UUU.YYY.XXX.AAE"
  - address: "UUU.YYY.XXX.AAF"
  - address: "UUU.YYY.XXX.AAG"
  - address: "UUU.YYY.XXX.AAH"
  - address: "UUU.YYY.XXX.AAI"
  - address: "UUU.YYY.XXX.AAJ"
  - address: "UUU.YYY.XXX.AAK"
  - address: "UUU.YYY.XXX.AAL"
  - address: "UUU.YYY.XXX.AAM"
  - address: "UUU.YYY.XXX.AAN"
  - address: "UUU.YYY.XXX.AAO"

clustermanager:
  apps:
    cluster:
      - "./apps/Splunk_TA_ForIndexers_8.1.1-176740.tgz"

standalone:
  - name: es-sh
    apps:
      premium:
        - "./apps/splunk_app_es-8.1.1-176740.tgz"
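
A subtle failure mode in hand-edited YAML is stray whitespace inside quoted values (for example, " UUU.YYY.XXX.AAB "), which can produce addresses the installer cannot resolve. The following pre-flight lint is an illustrative sketch, not part of the SKI tooling, and assumes the simple - address: "…" layout used in the example above:

```python
import re

def lint_addresses(text):
    """Return (line_number, value) pairs for quoted addresses that carry
    leading or trailing whitespace."""
    problems = []
    for lineno, line in enumerate(text.splitlines(), 1):
        m = re.search(r'address:\s*"([^"]*)"', line)
        if m and m.group(1) != m.group(1).strip():
            problems.append((lineno, m.group(1)))
    return problems

sample = '''controllers:
  - address: "10.0.0.1"
  - address: " 10.0.0.2 "
'''
print(lint_addresses(sample))  # the second address is flagged
```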

Step 4: Run the installation

With all your files in place, run the installer with the -deploy argument. This command initiates the entire process of setting up Kubernetes, deploying the object store, and installing the Splunk platform.

${path_to_install}/kubernetes-installer-standalone -static.cluster cluster-config.yaml -deploy

On your first run, you will be prompted to accept the Terms and Conditions. This is a one-time action.

Step 5: Monitor and validate the cluster

The installer command will finish in about 5-10 minutes, but the cluster will continue to initialize in the background. The full process for all pods to become ready can take 20-25 minutes.

You can monitor the progress using the following commands:

  • Check node status: Verify all worker nodes join the cluster and show a Ready status.
    ./kubernetes-installer-standalone -static.cluster cluster-config.yaml -status.workers
  • Check pod status: Watch as the pods are created and started. The cluster is ready for use when all pods show a Running status and are 1/1 in the READY column.
    ./kubernetes-installer-standalone -static.cluster cluster-config.yaml -status

After all nodes and pods are ready, your cluster is deployed and operational.
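
If you also have kubectl configured on the bastion (an assumption; the installer's -status commands above are the supported path), you can block until every pod reports Ready instead of polling by hand:

```shell
# Wait up to 30 minutes for all pods in all namespaces to become Ready.
kubectl wait --for=condition=Ready pods --all --all-namespaces --timeout=30m
```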

Day-to-day operations and administration

After your initial deployment, you'll use these commands for routine management.

Accessing Splunk Web and other UIs

  • Retrieve credentials: To get the auto-generated admin password and HEC token, run:
    ./kubernetes-installer-standalone -static.cluster cluster-config.yaml -get.creds
  • Accessing services: By default, all Splunk services are accessible via the IP address of any worker node, using different ports.
    Component               Port   URL example
    Search Head UI          443    https://<ANY_WORKER_IP>
    Cluster Manager UI      1443   https://<ANY_WORKER_IP>:1443
    Monitoring Console UI   3443   https://<ANY_WORKER_IP>:3443
  • Local documentation: Documentation specific to your installation is served by a built-in web server. Start it with the following command (here on port 8080, the default):
    ${INSTALLED_PATH}/kubernetes-installer-standalone -web [-web.port 8080]
  • After it is started, browse to any worker IP on port 8080 (or the port you designated). This serves all the available documentation, as well as the list of connections available through the different ports.

Managing apps

To add or upgrade an app:

  1. Add the path to your app package (.spl, .tgz) in the appropriate section of your cluster-config.yaml.
  2. Re-run the deploy command. The installer will upload the app, and the framework will handle deploying it. (For example, ./kubernetes-installer-standalone -static.cluster cluster-config.yaml -deploy)
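
For example, adding a hypothetical custom TA to the indexer-cluster app list in cluster-config.yaml (the new file name is illustrative, not a real package):

```yaml
clustermanager:
  apps:
    cluster:
      - "./apps/Splunk_TA_ForIndexers_8.1.1-176740.tgz"
      - "./apps/my_custom_ta-1.0.0.tgz"   # newly added app (hypothetical)
```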

Upgrading Splunk POD

  1. Download the new kubernetes-installer-standalone binary to your bastion node.
  2. Run the deploy command. The installer will handle the upgrade for all components. (For example, ./kubernetes-installer-standalone -static.cluster cluster-config.yaml -deploy)

Troubleshooting

Quick command reference

Command Description

./kubernetes-installer-standalone -h
  Displays the help menu and a full CLI reference.

./kubernetes-installer-standalone -version
  Prints the current version of the installer binary.

./kubernetes-installer-standalone -static.cluster <file> -deploy
  Deploys a new cluster or applies updates/upgrades to an existing one.

./kubernetes-installer-standalone -static.cluster <file> -destroy
  Completely removes the cluster and all its components.

./kubernetes-installer-standalone -static.cluster <file> -status
  Checks the high-level status of cluster pods.

./kubernetes-installer-standalone -static.cluster <file> -status -status.verbose
  Shows detailed pod status, equivalent to kubectl get pods -o wide -A.

./kubernetes-installer-standalone -static.cluster <file> -status.workers
  Checks the status and readiness of all worker nodes.

./kubernetes-installer-standalone -static.cluster <file> -get.creds
  Retrieves the auto-generated admin password and HEC token for the cluster.

./kubernetes-installer-standalone -static.cluster <file> -get.logs
  Downloads logs from a specific Splunk pod to the bastion server.

./kubernetes-installer-standalone -static.cluster <file> -get.diag
  Generates and downloads a Splunk diag file from a specific Splunk pod.

./kubernetes-installer-standalone -static.cluster <file> -web
  Serves the built-in documentation on a local web server (default port 8080).

Additional resources

These resources might help you understand and implement this guidance: