Splunk Lantern

Establishing authentication requirements for node scaling automation


Before we get into the details of building dynamic scaling for Splunk Edge Processor, we have to understand and prepare for on-demand authentication.

If you are unfamiliar with the process, it might help to review the node install procedure before continuing.

In the following articles, we’ll examine the Splunk Edge Processor node install script in depth, but for the purposes of this section it’s important to understand that the node install script requires that a valid token be provided. The line in question looks like this:

# Create a token file containing the authentication token that allows the Edge Processor instance to connect to SCS
echo "eyJhbGciOiJSUzI1NiIsImtpZCI6IlFnZlNhQ1NMUj…" > splunk-edge/var/token

When manually provisioning nodes, a valid, unexpired token is included in the script provided by the user interface. This token is dynamically generated and expires after 10 minutes. If you are provisioning nodes manually, this isn't an issue: the interface regenerates the script with a fresh token, which you can copy and paste. For unattended or automated provisioning, however, this is a problem, because the token embedded in the script will expire. The token must be replaced with a current, unexpired version.

There is not currently a RESTful workflow for obtaining this token, nor is there an option to generate a long-lived token. In order to regenerate tokens on demand, we have to use a few special tools.

To get started, you can request access to the API Token Automation Beta program, which will contain the executables used in this process and in the processes outlined in the articles to follow. The package contains three files:

Read the following descriptions carefully.

  • bootstrap: Run once. Creates a long-lived service account that is used to provision tokens later during the authentication process.
  • auth: Retrieves a current authentication token using the details returned by the bootstrap process.
  • cleanup: Optionally removes the service account created by the bootstrap process. This is primarily used to roll the service account credentials as part of ongoing security practices.

This beta process might change over time and is not officially supported by Splunk.


Step 1 - Bootstrap

The first thing we will do is create the required service account using the bootstrap command. By examining the usage of the command, we can see that it requires a tenant name and a current API token.

Usage of ./bootstrap:
  -tenant string
        Your tenant name
  -token string
        Your SCS token from <tenant>/settings

You can run this command on any Linux server with network access to your Splunk Cloud Platform tenant. The output of this command provides a JSON payload with several critical pieces of information. Capture and securely store this information. Consider this data equivalent to a username and password.
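
A minimal sketch of running the bootstrap step and capturing its output (the tenant name, token variable, and output filename here are assumptions, not fixed requirements):

```shell
# Run bootstrap once and capture the JSON payload it prints.
# $SCS_TOKEN is assumed to hold a current token from the tenant settings page.
./bootstrap -tenant my-tenant -token "$SCS_TOKEN" > bootstrap.json

# Treat the output like a password: restrict access to the owner.
chmod 600 bootstrap.json
```

Saving the payload to a file makes it easier to extract individual fields later, but a password vault is equally valid.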


There are two pieces of information from this payload that we will use: the PrivateKey and the ServicePrincipalName. Formatting the payload makes these fields easier to pick out:

{
  "PrivateKey": {
    "kty": "EC",
    "kid": "b5868fd60950f97c",
    "crv": "P-256",
    "alg": "ES256",
    "x": "U0lFkihpgyHDRmUVm4nll5TREzEb2HJMgh6WjAQRiAY",
    "y": "fTcmqf12kK1mo0IuZ2CCRzUG8jqfQ7bqEQhuApLbwq0",
    "d": "NjXY3A8FiGEIkBdsuejMe8WphaEo1jweWqlWh53gyZk"
  },
  "ServicePrincipalName": "aa016512c9de82",
  "RoleName": "epsp.node.onboard",
  "GroupName": "identity.aa016512c9de82"
}
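
If you save the bootstrap output to a file (bootstrap.json is an assumed name here), a tool like jq can pull out just these two fields; a sketch assuming jq is installed:

```shell
# Extract only the fields that auth needs from the saved bootstrap payload.
spn=$(jq -r '.ServicePrincipalName' bootstrap.json)  # plain string
pk=$(jq -c '.PrivateKey' bootstrap.json)             # compact JSON object

echo "service principal: $spn"
```

The -r flag strips the surrounding quotes from the string value, and -c emits the private key as a single compact JSON line, which is the shape the auth command expects.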

In the next steps, we will provide these details to the auth process in order to retrieve a current valid API token.

Step 2 - Auth

With a service account now provisioned, we can use those credentials, along with the auth command, to retrieve a current API token for your tenant. Examining the usage of the auth command, we can see that it requires 3 parameters: your cloud tenant name, the service principal name, and its private key.

Usage of auth:
  -privatekey string
        The private key associated with the service principal from the bootstrapping process
  -serviceprincipal string
        The name of the service principal from the bootstrapping process
  -tenant string
        Your tenant name

As we’ll see in later articles, there are a number of different ways to store and provide these parameters to the auth executable, but for purposes of understanding and testing our work, we will just use simple environment variables.

First, let's store the service principal name and the private key from above in environment variables.

Be sure to use only the private key JSON itself, not the entire JSON payload from the bootstrap output. Wrap the JSON in single quotes so the shell does not interpret the double quotes inside it.

export spn="aa016512c9de82"
export pk='{"kty":"EC","kid":"b5868f0950f97c","crv":"P-256","alg":"ES256","x":"U0lFkiaHDRmUVm4nll5TREzEb2HJMgh6WjAQRiAY","y":"fTcmqf12kK1mo0IuZ2CCRzUG8jqfQ7bqEpLbwq0","d":"NjXY3iGBdsuejMe8WphaweWqlWh53gyZk"}'

With these variables set, we can easily pass them to the auth command.

./auth -privatekey "$pk" -serviceprincipal "$spn" -tenant <your-tenant>

Running this command with your specific values should result in an API token you can use with your provisioning process.
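
For unattended provisioning, the returned token can be written directly to the path the node install script populates (splunk-edge/var/token, from the install snippet earlier); a minimal sketch, assuming the spn and pk variables from above and a tenant named my-tenant:

```shell
# Fetch a fresh 10-minute token and place it where the node expects it.
token=$(./auth -privatekey "$pk" -serviceprincipal "$spn" -tenant my-tenant)

mkdir -p splunk-edge/var
printf '%s' "$token" > splunk-edge/var/token
chmod 600 splunk-edge/var/token
```

Because the token expires after 10 minutes, this fetch should happen immediately before each node is provisioned, not once up front.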

Some notes about storing and using the information used in Step 1 and Step 2:

  • It is very important that the private key and service principal are stored securely, since this information is essentially a persistent username and password that can be used to create API tokens.
  • How you store and subsequently retrieve and use the credentials will largely depend on your automation frameworks. You might choose to store the private key and service principal individually, store the full JSON output from bootstrap and then use tools like jq or python to extract the data as needed, a password vault, or some other mechanism. In the Kubernetes examples that follow, we’ll store the credentials in Kubernetes secrets and retrieve them as part of our provisioning manifests.
  • While the service principal created in Step 1 is persistent, the API token generated by the auth process in Step 2 will still expire after 10 minutes.

The bootstrapping process in Step 1 is meant to create a long-lived service account and only needs to be run one time from an administrative server with network access to Splunk Cloud Platform. The auth process in Step 2 is meant to leverage the service account from Step 1 to create API tokens on demand and can be run as often as needed during the provisioning process.

Optional step - Credential cleanup

Since the service principal created in Step 1 is persistent and represents an account that can be used to provide API tokens, it’s important that we can remove those credentials at a later time. This could be used as part of credential rolling, risk mitigation, retirement of accounts or process, or just general cleanup.

The cleanup process works a little differently from the steps above. The cleanup binary expects two environment variables to be set: SCS_PRINCIPAL and SCS_SYSTEM_TOKEN.

export SCS_PRINCIPAL="<your service principal name from above>"
export SCS_SYSTEM_TOKEN="<a current token from <tenant>/settings>"

After these variables are set, run cleanup. It uses the current API token to remove the supplied service principal from Splunk Cloud Platform, and you must affirmatively answer a prompt before the delete is performed. Here is an example:

This will delete principal: aa016512c9de82
Do you want to continue? (y/n): y
2024/04/01 19:27:42 Authenticated against SCS system as principal:
2024/04/01 19:27:44 Deleted principal aa016512c9de82
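
For scripted credential rolling, the confirmation prompt can be answered on stdin; a sketch using the example principal from above (the SCS_TOKEN variable is an assumption for wherever you keep a current settings-page token):

```shell
export SCS_PRINCIPAL="aa016512c9de82"
export SCS_SYSTEM_TOKEN="$SCS_TOKEN"

# Pipe "y" to answer the confirmation prompt in unattended runs.
printf 'y\n' | ./cleanup
```

If cleanup is part of a regular credential-rolling job, it would typically be followed immediately by a fresh bootstrap run to provision the replacement service account.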

Next steps

Now that we understand how to get API tokens on demand, we can look at building out automation for scaling up our Splunk Edge Processor nodes on demand.