
Following best practices for using SPL2 templates

 

SPL2 is a product-agnostic, intuitive language that includes the best of both query and scripting languages. It supports both traditional SPL and SQL syntax patterns, and is designed to work with the variety of runtimes in the Splunk portfolio. SPL2 makes the search language easier to use, removes infrequently used commands, and improves the consistency of the command syntax.

Despite these benefits, learning a new query language takes time. To help our Splunk Edge Processor and Splunk Ingest Processor customers get up and running faster with SPL2, we've created a number of templates for popular data sources. SPL2 templates streamline data management by optimizing formatting, removing whitespace, and excluding certain fields that might not be useful. Most customers will find these features beneficial.
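
To make this concrete, the following is a minimal illustrative sketch of a reduction pipeline in SPL2. It is not one of the Splunk-provided templates; the filter condition and the field list are assumptions, included only to show the kinds of statements a reduction pipeline chains between the $source and $destination pipeline parameters.

    /* Illustrative sketch only, not an actual Splunk template.
       The filter condition and field list are assumptions. */
    $pipeline = | from $source
        | where isnotnull(_raw) AND _raw != ""                  // drop events that carry no useful content
        | fields _time, host, source, sourcetype, index, _raw   // keep only the fields needed downstream
        | into $destination;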

However, note the following before deciding to use these templates directly in your production environment:

  • In some templates, whole events are dropped when they don't contain any important information.
  • In some templates, the event format is changed, which might affect field extractions written against _raw.
  • Dropping fields and events might affect saved searches and dashboards that depend on those field values or on regexes applied to _raw.
  • Splunk Edge Processor does not currently store dropped events, so they cannot be recovered later if you need them.

This article provides guidelines to help you use these templates safely.

Prerequisites

Before you start using an SPL2 template to reduce log size, you should have the following:

  • Splunk Cloud Platform with Splunk Edge Processor or Splunk Ingest Processor enabled
  • A Splunk destination instance configured to index data after it has been processed through the SPL2 pipeline

Splunk Edge Processor and Splunk Ingest Processor are included with your Splunk platform. Learn more about the requirements to use them (Edge Processor or Ingest Processor) and how to request access if you do not already have it. If this is your first time using these features, see the getting started content (Edge Processor or Ingest Processor).

Solution

Test the SPL2 pipelines with a test environment

This is the recommended method. However, if you do not have a test environment, you can use a production environment instead, as described in the next section.

Create a backup (blank) pipeline

To ensure that you use the template safely in the production environment, keep a backup of the original logs for a few hours or days in the production environment while sending the SPL2-transformed logs to the test environment. You can do this by creating an additional, blank (pass-through) pipeline that forwards the data to the destination unchanged.

Creating a blank pipeline increases license usage for some time because the data is ingested twice: once with original events and again with the transformed events.
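
For reference, a blank pipeline contains no transformation logic at all. Assuming the standard $source and $destination pipeline parameters, it looks something like the following sketch:

    // Minimal pass-through (blank) pipeline: forwards events to the destination unchanged.
    // $source and $destination come from the pipeline's source type and destination settings.
    $pipeline = | from $source | into $destination;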

  1. Create a destination in the Edge Processor using the IP address of the production environment instance where you would like to index the data. The following screenshot shows an example of an S2S destination. For information on creating destinations, see Add or manage destinations.
    (Screenshot: example S2S destination configuration)
  2. Select the source type and destination in the Edge Processor pipeline.
    (Screenshot: source type and destination selected in the pipeline)
  3. Save the pipeline and add it to the Edge Processor or Ingest Processor.
  4. A pop-up appears, asking whether you want to apply the pipeline to the Edge Processor. Click Yes, apply.
  5. Select your Splunk Edge Processor and click Save. A brief message states that your changes are being saved.

Create an SPL2 pipeline using a template

  1. Create a destination in the Edge Processor using the IP address of the test environment instance where you would like to index the SPL2 transformed events.
  2. Create an SPL2 pipeline using the reduction template of your choice and select the destination.
    1. Go to your Data Management tenant > Pipelines > Templates.
    2. Click the template for which you want to create an SPL2 pipeline and, in the column on the right, select whether you want an Ingest Processor or Edge Processor pipeline.
      (Screenshot: template selected, with the option to create an Ingest Processor or Edge Processor pipeline)
    3. You are taken to a screen where the left panel displays all function names, the middle panel contains the SPL2 code, and the source type is preconfigured (for example, ‘ps’ in the right panel of the following screenshot). Some sample events are already inserted into the SPL2 pipeline for testing purposes. To view them, click Inserted Sample in the right panel. You can add more samples, or edit the existing ones, to test the SPL2 pipeline against your own data.

      Make sure you have the source type created in the Splunk Data Management tenant. For information on adding a source type, see Add source types for Edge Processors.


      (Screenshot: pipeline editor showing the function list, SPL2 code, and preconfigured source type)
    4. Run the pipeline by clicking the run (preview) button. The output is visible in the preview pane.
    5. Click Save Pipeline and enter a suitable name.
  3. Add the pipeline to the Edge Processor. For more information on this process, see Apply pipelines to Edge Processors.

Start forwarding the events to the Edge Processor. The pass-through pipeline sends the data to the production environment instance, and the reduction pipeline sends the data to the test environment instance. The transformed events are now in the test environment and the original events are in the production environment.

Monitor the data for a while and follow the safety checks. When you are satisfied that the configuration has passed the safety checks, you can delete the blank pipeline and change the destination in the SPL2 pipeline to use the production environment instead of the test environment.

Test the SPL2 pipelines with a production environment

If you do not have a test environment, you can still test the pipelines using the following steps:

  1. Follow the steps above to create a blank pipeline.
  2. Follow the steps above to create an SPL2 pipeline using the template, and select the destination.
  3. Modify the pipeline to send the events to a different index. You can do this by setting the index field in the SPL2 pipeline, for example, | eval index="temporary_index" (see the sketch after this list). Make sure the index you specify has been created on the destination Splunk instance.
    (Screenshot: pipeline with the index override added)
  4. Save the pipeline and add it to the Edge Processor.
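
The following sketch shows where the index override from step 3 sits, assuming the template's own reduction logic is represented by the omitted commands. The index name is only an example:

    /* Sketch of the index override from step 3; "temporary_index" is an example name.
       Create this index on the destination Splunk instance before applying the pipeline. */
    $pipeline = | from $source
        // ... reduction logic from the template ...
        | eval index = "temporary_index"   // route transformed events to a separate index
        | into $destination;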

Start forwarding the events to the Edge Processor. The pass-through pipeline sends the data to the production environment instance, and the reduction pipeline also sends the data to the production environment instance but to the different index you set in the reduction pipeline.

Monitor the data for a while and follow the safety checks. When you are satisfied that the configuration has passed the safety checks, you can delete the blank pipeline and remove the logic for using the custom index from the reduction template.

Perform safety checks

  • Verify that the events are tagged correctly and that all CIM fields are extracted with the correct values (an example validation search follows this list).
  • Validate that the saved searches written for the applied source type are working correctly.
  • If you have created any dashboards based on the data from the applied source type, validate they are correctly populated with the transformed data.
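
If you tested against a temporary index in the production environment, one way to spot-check the results is to compare event counts between the original and transformed data with a classic SPL search on the destination instance. The index and source type names below are examples; substitute your own. A large difference in counts can indicate that the template drops more events than you expect.

    | tstats count WHERE (index=main OR index=temporary_index) sourcetype=ps BY index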

Next steps

Now that you understand how to use SPL2 templates safely, you can begin implementing them in your environment. We currently have documentation for the following log sources:

These additional Splunk resources might help you understand and implement this use case: