Splunk Lantern

Data extraction with SignalFlow

You might need to use SignalFlow when doing the following:

Prerequisites 

To execute this procedure in your environment, the following data, services, or apps are required:

Example

You want to extract past or streaming time series data that has been sent to Splunk Infrastructure Monitoring. You want to extract “raw” data (that is, the metrics and their values), as well as data that has been processed by Splunk analytics. 

To use SignalFlow for data extraction, you must become familiar with either the Splunk v2 API (used with curl or via a client library) or the CLI. You need to learn which options provide the information you need and how to build the query using the API or the CLI. This includes an understanding of maxDelay, rollups, and resolutions.

 

Splunk Infrastructure Monitoring provides a language called SignalFlow that is primarily used to describe computations for Splunk's real-time analytics engine. The SignalFlow command-line interface (CLI) for the Splunk v2 API outputs historical or streaming data in text format to a live feed, to a simple graphical display, or as CSV-formatted text. It is well suited to the following tasks:

  • Export streaming data
  • Export data with a relative time range (e.g. last 15 minutes)
  • Export raw data (no analytics applied), for a specific past time range, using a default rollup and resolution
  • Export raw data (no analytics applied), for a specific past time range, at a rollup or resolution different from the Splunk defaults
  • Export data with analytics applied in a way that isn’t reflected in a chart 

The advantages of SignalFlow are:

  • It provides powerful capabilities that let you filter data, apply analytics, and specify options for resolution, rollup, and other advanced settings.
  • You can export streaming data, meaning you can stream data directly to another target as it is being sent to Splunk Infrastructure Monitoring.
  • You can specify relative time ranges, such as the last 15 minutes, or from 2 days ago to 1 day ago, rather than only using milliseconds since epoch.
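Relative time ranges such as -15m or -2d ultimately resolve to millisecond offsets from the current time. As a rough illustration of that resolution step, here is a minimal sketch of a parser for such specs. This helper is hypothetical and for illustration only; it is not part of the SignalFlow CLI.

```python
import re

# Hypothetical helper: convert a relative time spec such as "-15m" or "-2d"
# into a signed millisecond offset, the way a relative range is eventually
# resolved against "milliseconds since epoch".
_UNIT_MS = {'s': 1_000, 'm': 60_000, 'h': 3_600_000, 'd': 86_400_000}

def relative_to_ms(spec):
    """Parse a spec like '-15m' into a signed millisecond offset."""
    match = re.fullmatch(r'(-?\d+)([smhd])', spec)
    if not match:
        raise ValueError('unrecognized relative time spec: %r' % spec)
    value, unit = match.groups()
    return int(value) * _UNIT_MS[unit]

# A range "from 2 days ago to 1 day ago" becomes a pair of offsets:
start, stop = relative_to_ms('-2d'), relative_to_ms('-1d')
```

The offsets can then be added to the current epoch time in milliseconds to obtain absolute start and stop timestamps.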

The SignalFlow CLI is not an officially supported tool. It is intended as an example of how to use the SignalFlow analytics language support in the signalfx-python library.

When you invoke SignalFlow, you will see the prompt ->. You can then enter a SignalFlow program (even across multiple lines) and execute it to visualize the results. Press ^C at any time to interrupt the stream, and press it again to exit the client. To actually extract data, you use the publish() method.

Example usage

In this example, we are streaming live data directly to the screen.

$ signalflow
-> data('jvm.cpu.load').mean(by='aws_availability_zone').publish()
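Conceptually, mean(by='aws_availability_zone') groups the incoming time series by a dimension value and averages each group at every time step. The following plain-Python sketch (with made-up data points, not SignalFlow itself) illustrates that grouping step at a single timestamp:

```python
from collections import defaultdict

# Hypothetical data points at one timestamp: (availability zone, metric value).
points = [('us-east-1a', 0.2), ('us-east-1a', 0.4), ('us-east-1b', 0.9)]

# Group values by the dimension, then average each group,
# mirroring what mean(by=...) does per time step.
groups = defaultdict(list)
for zone, value in points:
    groups[zone].append(value)

means = {zone: sum(vals) / len(vals) for zone, vals in groups.items()}
```

In the streaming case, SignalFlow repeats this computation for every resolution window and publishes one output time series per group.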
 

To see the current parameter settings, use the . command.

-> .
{'max_delay': None,
 'output': 'live',
 'resolution': None,
 'start': '-1m',
 'stop': None}
->
 

To set a parameter, use .<parameter> <value>. For example:

-> .start -15m

-> .stop -1m
-> .
{'max_delay': None,
 'output': 'live',
 'resolution': None,
 'start': '-15m',
 'stop': '-1m'}
 

In this example, we use the commands in a program named program.txt to extract non-streaming data from 15 minutes ago to 1 minute ago, outputting it in CSV format and piping it to a tool named csv-to-plot.

$ signalflow --start=-15m --stop=-1m --output=csv < program.txt | csv-to-plot
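Once the CSV output lands in a file or pipe, downstream processing is ordinary CSV handling. The sketch below assumes a hypothetical column layout (a timestamp column followed by one column per time series); the actual layout of your export may differ, so treat the column names here as placeholders.

```python
import csv
import io

# Sample standing in for CSV output piped from the SignalFlow CLI.
# The column layout is an assumption for illustration, not a documented format.
sample = """timestamp,us-east-1a,us-east-1b
1700000000000,0.42,0.51
1700000060000,0.40,0.49
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Average one of the time-series columns across the exported window.
mean_1a = sum(float(r['us-east-1a']) for r in rows) / len(rows)
```

A real pipeline would read from the exported file instead of the inline sample string.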

Troubleshoot SignalFlow

When you use SignalFlow, the data is processed using the full capabilities of the Splunk analytics engine, which includes special handling of jitter and lag in data arrival times. There are two reasons the analytics engine might wait before processing a computation.

The first is "max_delay", the amount of time the analytics engine waits for delayed data before processing. If not specified, or set to None, the value of "max_delay" is determined automatically, based on Splunk's analysis of the incoming data. To avoid delays in getting data from SignalFlow, set the "max_delay" parameter to 1s. This means that even if data is delayed, Splunk Infrastructure Monitoring processes the analytics after 1 second, without the missing data.

$ signalflow
-> .max_delay 1s

If you want to set "max_delay" to a longer period of time, make sure that your "stop" value is further in the past than "max_delay". For example, if you want a "max_delay" of 30s, use a "stop" value of -31s or earlier.

-> .max_delay 30s
-> .stop -31s
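The rule above can be stated as a simple inequality: the stop offset (expressed as negative milliseconds relative to now) must lie further in the past than max_delay. The following hypothetical check, not part of the CLI, makes that concrete:

```python
# Hypothetical sanity check: a stop offset of -31s (=-31_000 ms) clears a
# max_delay of 30s (=30_000 ms) because it is further in the past.
def stop_ok_for_max_delay(stop_ms, max_delay_ms):
    """True if the stop offset is further in the past than max_delay."""
    return -stop_ms > max_delay_ms

stop_ok_for_max_delay(-31_000, 30_000)  # stop of -31s, max_delay of 30s: OK
stop_ok_for_max_delay(-10_000, 30_000)  # stop of -10s is too recent
```

If the check fails, the computation may stall waiting for data that is still within the max_delay window.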

The second reason computations might be delayed is related to job resolution. SignalFlow must wait until the end of the current resolution window before making its computation. For example, if the job resolution is 300000 (5m) and the "stop" value is None (or not specified), SignalFlow waits until it has all data points from the current 5m window before performing any computations.

To avoid delays, make sure your "stop" value is further in the past than the job resolution. For example, if you are looking at data from a few months back, the resolution may be 3600000 (1h). In this case, use a "stop" value of -1h or earlier.

-> .stop -1h
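To see why the stop value matters, it helps to picture how data points fall into resolution windows: each timestamp belongs to a window aligned to the job resolution, and a computation over the current window cannot complete until that window closes. The sketch below (a hypothetical illustration, not SignalFlow internals) computes the window containing a given timestamp:

```python
# Hypothetical illustration of resolution windows: with a 300000 ms (5m)
# job resolution, each timestamp falls into a window aligned to a multiple
# of the resolution, and results for the current window are not final
# until the window closes.
def window_bounds(timestamp_ms, resolution_ms):
    """Return the [start, end) bounds of the resolution window containing timestamp_ms."""
    start = (timestamp_ms // resolution_ms) * resolution_ms
    return start, start + resolution_ms

bounds = window_bounds(1_700_000_123_456, 300_000)
```

A "stop" value further in the past than one full resolution window guarantees the window in question has already closed.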

If a request for the latest data yields data that is a minute old, the issue can also be related to max delay. Instead of using a "stop" value of "None" (or not specifying a value), set the "stop" value to -1m.

-> .stop -1m