The Splunk deployment server is a Splunk Enterprise feature that allows you to manage the configuration update process across sets of other Splunk Enterprise and Universal Forwarder instances.
- If you need to manage fewer than 10,000 endpoints, we recommend dedicating a single deployment server instance exclusively to managing those updates.
- If you need to manage more than 10,000 endpoints, we recommend dedicating multiple deployment server instances exclusively to managing those updates.
There are two options available for running Splunk deployment servers at this scale.
Option 1 - Run multiple single instances of Splunk deployment server
- This option requires manually distributing Universal Forwarder deployment connections by changing the `targetUri` in the `deploymentclient.conf` file to match the target deployment server nodes.
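As an illustration, each forwarder's `deploymentclient.conf` would point at one specific node (the hostname and port below are placeholders, not values from your environment):

```ini
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf
[target-broker:deploymentServer]
# Point this forwarder at one deployment server node; distribute
# forwarders across nodes by varying this value per group of hosts.
targetUri = ds01.example.com:8089
```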
- Deployment servers should be deployed in a 10k:1 ratio.
- No consolidated UI. All deployment server nodes are individually managed in this architecture.
This option might be unsuitable for some customers because we do not recommend using a third-party load balancer or a round-robin (RR) DNS entry to spread Universal Forwarders across multiple deployment server nodes. Using a third-party load balancer has been known to produce several issues:
- Universal Forwarders will fail to negotiate with the deployment server and become unmanageable, due to a known bug. Note that this bug is addressed in Universal Forwarders starting with version 9.0.2 and will be backported to Universal Forwarder releases 8.1.12 and 8.2.9.
- Universal Forwarders relying on Windows DNS have been shown not to honor the TTL set for deployment server round-robin records and will not switch deployment server nodes on demand. This is due in part to how TTLs are handled by the Universal Forwarder and to differences between Linux-based and Windows-based DNS systems.
Option 2 - Use the Splunk Distributed Deployment Server project (SDDS)
With this option, you can:
- Host multiple instances of deployment server on a smaller footprint using Kubernetes to manage threads.
- Deploy and secure a pre-configured load-balancing service to maximize Universal Forwarder connection traffic and security.
- Monitor deployment server CPU/memory load and activity using the new SDDS Monitoring Console app.
The SDDS project is found at https://github.com/splunk/sdds. SDDS:
- requires one or more dedicated instances running a local Kubernetes framework, or a separate Kubernetes deployment capable of running MetalLB services.
- ships with a pre-configured L2/L4 load balancer tested to work consistently with Splunk Universal Forwarder handshaking across legacy Universal Forwarder versions.
- adds centralized monitoring through a new SDDS Monitoring Console app.
- uses Kubernetes to utilize threads more efficiently for better scaling (deployment servers are not multi-threaded by default until version 9.0.2). With this option, deployment servers can also be deployed in a 25k:1 ratio.
The overall architecture looks like this:
The Splunk Distributed Deployment Server runs a set of virtualized copies of a Splunk deployment server on a standard Linux host. This configuration allows for:
- high availability through the use of replicas.
- more efficient use of vCPUs, requiring fewer threads to support the same workloads.
- a pre-configured load balancer to help increase the overall number of Universal Forwarder handshake sessions supported.
The configuration involves mapping a local mount point `/sdss-global-config` to the pod's `splunk/etc/apps` directory within each deployment server replica.
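As an illustrative sketch of this mapping (the resource names, image, and paths here are assumptions, not the actual SDDS manifests, which live at https://github.com/splunk/sdds), the mount can be expressed in a Kubernetes Deployment as hostPath volumes mounted into each replica:

```yaml
# Hypothetical sketch only; the real SDDS manifests may differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunksdds
spec:
  replicas: 3                      # replicas provide high availability
  selector:
    matchLabels:
      app: splunksdds
  template:
    metadata:
      labels:
        app: splunksdds
    spec:
      containers:
        - name: deployment-server
          image: splunk/splunk:latest        # assumed image
          volumeMounts:
            - name: sdss-global-config
              mountPath: /opt/splunk/etc/apps            # pod's splunk/etc/apps
            - name: sdss-deployment-apps
              mountPath: /opt/splunk/etc/deployment-apps
      volumes:
        - name: sdss-global-config
          hostPath:
            path: /sdss-global-config
        - name: sdss-deployment-apps
          hostPath:
            path: /splunk-sdss-deployment-apps
```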
The configuration structure contains two folders:
- The `/default` folder contains a `serverclass.conf` file that defines two global settings: `crossServerChecksum` and `blacklist` (note that it is a recommended best practice to use the global blacklist setting).
- The `/local` folder also contains a `serverclass.conf` file, which defines the server classes and apps to be deployed.
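Laid out on disk, the structure described above might look like this (file names are from the description; exact contents will vary by environment):

```
/sdss-global-config
├── default
│   └── serverclass.conf   # global settings: crossServerChecksum, blacklist
└── local
    └── serverclass.conf   # server classes and apps to be deployed
```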
Another local mount point `/splunk-sdss-deployment-apps` is mapped to the `splunk/etc/deployment-apps` directory within each deployment server replica. This mapping aligns with the default `repositoryLocation` as defined in the `serverclass.conf` file:

```ini
[global]
crossServerChecksum = true
blacklist.0 = *

[serverClass:fwd-app-1]
filterType = whitelist
whitelist.0 = *

[serverClass:fwd-app-1:app:fwd-app-1]
filterType = whitelist
whitelist.0 = *
repositoryLocation = /opt/splunk/etc/deployment-apps
stateOnClient = enabled
restartSplunkd = True
```
When changes are made to either the `/sdss-global-config` configuration or to `sdds-deployment-apps/<apps>`, the checksum changes, which requires a refresh of the Splunk deployment server nodes in the pod:

```
kubectl scale deployment/splunksdds --replicas=0
kubectl scale deployment/splunksdds --replicas=3
```
The SDDS Monitoring Console app contains the following dashboards:
- Monitoring Console (Home)
- Deployment server host/replica metrics
- Splunk Connect for Kubernetes - CPU/memory utilization
- SDDS checksum tracking
- SDDS handshakes/negotiations counts
- SDDS Universal Forwarder download/installation metrics
- Kubernetes: Extended Metrics
- CPU utilization over time by host, namespace, and replicas
- SDDS: Extended Metrics
- Metrics tracking over time for Universal Forwarder handshakes, downloads, and OK/failure installs
Starting with Splunk Enterprise version 9.0.0, the deployment server requires additional restmap.conf and server.conf configurations. You can find more details in Splunk Docs on the restmap.conf and server.conf pages.
These configurations address known security vulnerabilities.
These resources might help you understand and implement this guidance:
- Splunk Docs: About deployment server and forwarder management
- Splunk Docs: Splunk Enterprise distributed deployment manual
Splunk OnDemand Services: Use these credit-based services for direct access to Splunk technical consultants with a variety of technical services from a pre-defined catalog. Most customers have OnDemand Services per their license support plan. Engage the ODS team at OnDemand-Inquires@splunk.com if you require assistance.