Using the Cisco Time Series Model 1.0 on DSDL 5.2.3
Splunk has released the official Cisco Time Series Model (CTSM) 1.0, a pretrained Transformer-based model for time series forecasting, now available on Hugging Face. Compared to the initial preview version released in November 2025, the 1.0 release features lighter model weights for faster inference and additional quantile outputs for improved confidence intervals.
CTSM is available for Splunk Cloud Platform users via the AI Toolkit. For on-premises Splunk Enterprise users, the most effective way to utilize this model is through the Splunk App for Data Science and Deep Learning (DSDL).
We previously published a blog post that demonstrates how to use the preview version with DSDL. With this new release, we have updated the DSDL integration to support both the preview and the current 1.0 versions. In this article, we demonstrate how to use the latest CTSM 1.0 with DSDL 5.2.3 using our updated commands.
Preparation of DSDL
Splunk DSDL is an application that can be installed on Splunk Enterprise and Splunk Cloud Platform. It connects to a customer-provided container backend, running Docker or Kubernetes, to extend the Splunk platform with AI capabilities.
Splunk provides official container images for DSDL to support various capabilities. For CTSM integration, you can use either the Transformers CPU (5.2.3) or the Transformers GPU (5.2.3) image, depending on your specific container runtime.
After DSDL 5.2.3 is properly installed and configured, navigate to the Setup > Containers page to select and start a container image.

Select the Transformers CPU (5.2.3) or Transformers GPU (5.2.3) image, ensuring you choose the proper runtime and cluster target based on your specific environment. Finally, click START to spin up the container.

The container includes a JupyterLab environment for development and testing purposes. Click JUPYTER LAB and enter the password to log in. The default password is Splunk4DeepLearning.
After the container is running, you are ready to use CTSM directly from the Splunk search bar. When you run the model inference search command, the model files are downloaded from Hugging Face or loaded from the local cache.
If your environment is air-gapped (disconnected from the internet) or you wish to avoid frequent downloads, you can manually download the torch_model.pt file from the Hugging Face repository and place it in the appropriate directory within the JupyterLab UI, such as app/model/data.

Search command for inference
To perform inference using the CTSM, use the following SPL command:
```
| fit MLTKContainer algo=ctsm_forecast hf_repo="repo_name" local_path="path/torch_model.pt" value_field="Number" forecast_steps=128 * into app:ctsm_forecast
```
Parameters
- value_field (Required): Specifies the field name of the time series from your upstream search results.
- hf_repo (Optional): The Hugging Face repository from which the model is downloaded. If not specified, it defaults to cisco-ai/cisco-time-series-model-1.0.
- local_path (Optional): The local path where the model file is stored. If you have manually downloaded torch_model.pt and placed it in your JupyterLab directory, use the path /srv/app/model/data/torch_model.pt.
- forecast_steps (Optional): Specifies the forecasting horizon (maximum value is 128).
The command returns a set of fields prefixed with predicted_, representing the mean and percentiles of the forecasted values: mean, P1, P5, P10, P20, P25, P30, P40, P50, P60, P70, P75, P80, P90, P95, and P99.
The output series matches the exact length of the input time series. If your time series data extends to the current timestamp, the CTSM model will "hold back" the last forecast_steps data points and use the preceding data to predict those values.
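This hold-back alignment can be illustrated with a short Python sketch. The function below is purely illustrative index arithmetic, not the CTSM API: it shows which portion of the input the model conditions on and which window the predicted_* fields cover.

```python
# Illustration of CTSM's hold-back behavior: for a series of length n and a
# horizon h, the model conditions on the first n - h points and emits
# predictions aligned with the last h points. (Sketch only; not the CTSM API.)

def split_context_and_target(series, forecast_steps):
    """Return (context, target_window) the way CTSM aligns its output."""
    h = min(forecast_steps, 128)  # CTSM caps the horizon at 128 steps
    context = series[:-h]         # data the model actually conditions on
    target = series[-h:]          # window the predicted_* fields cover
    return context, target

series = list(range(1000))        # toy time series of length 1000
context, target = split_context_and_target(series, 128)
print(len(context), len(target))  # 872 128
```

With a 1,000-point input and forecast_steps=128, the predicted_* values line up with the final 128 input points, which is why padding is required to forecast beyond the last observed timestamp.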
To forecast future states beyond your current data, you must pad your time series with future timestamps before sending it to the CTSM for inference, as shown in the following SPL example.
```
| inputlookup internet_traffic.csv | head 10000 | timechart span=5min avg("bits_transferred") as bits_transferred | eval bits_transferred = bits_transferred / 8 / 1024 / 1024 | sort _time
```
Adding data point padding to continue the time series for forecasting:
```
| append [| makeresults count=128 | eval bits_transferred=0, _time = 0 | streamstats count as pad ]
| eventstats latest(_time) as latest_timestamp
| eval _time=if(pad>0, latest_timestamp + pad*300, _time)
| table _time bits_transferred
```
Forecasting the padded time series:
```
| fit MLTKContainer algo=ctsm_forecast value_field="bits_transferred" forecast_steps=128 * into app:ctsm_forecast
```
In this SPL, we first pad the input time series (for example, bits_transferred) with 128 future steps, using incremental timestamps and setting the series value to 0. After the data is prepared, we use the fit command to run the inference. The model returns a table that contains the forecasted values.
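The same padding step can be sketched outside SPL with pandas. This is a minimal sketch of the append/eval logic above, not part of DSDL; the column names and 5-minute (300-second) span mirror the SPL example.

```python
import pandas as pd

# Pad a 5-minute time series with 128 future timestamps so the model can
# forecast beyond the last observed point (mirrors the SPL append/eval step).
def pad_future(df, value_col, steps=128, span_seconds=300):
    last = df["_time"].max()
    future = pd.DataFrame({
        "_time": [last + span_seconds * i for i in range(1, steps + 1)],
        value_col: 0.0,  # placeholder values, later replaced by predicted_*
    })
    return pd.concat([df, future], ignore_index=True)

df = pd.DataFrame({"_time": [0, 300, 600], "bits_transferred": [1.0, 2.0, 3.0]})
padded = pad_future(df, "bits_transferred")
print(len(padded), padded["_time"].iloc[-1])  # 131 39000
```

The placeholder value is arbitrary (0 here, as in the SPL) because the model only uses the padded rows to define the forecast horizon, not as real observations.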

When plotted on a line chart, these quantiles effectively visualize the model’s confidence intervals for future forecasts.
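Pairing symmetric quantiles gives the confidence band directly. The sketch below assumes the quantile outputs arrive as predicted_P10, predicted_mean, and predicted_P90 fields (the exact field naming is an assumption based on the predicted_ prefix described above):

```python
# Build an 80% confidence band from the P10/P90 quantile outputs.
# Field names are assumed to follow the predicted_ prefix convention.
def confidence_band(rows):
    """Yield (lower, mean, upper) tuples for an 80% interval."""
    return [(r["predicted_P10"], r["predicted_mean"], r["predicted_P90"])
            for r in rows]

rows = [
    {"predicted_P10": 8.0, "predicted_mean": 10.0, "predicted_P90": 12.5},
    {"predicted_P10": 7.5, "predicted_mean": 9.0, "predicted_P90": 11.0},
]
band = confidence_band(rows)
print(band[0])  # (8.0, 10.0, 12.5)
```

Wider pairs (P1/P99) give a more conservative band; narrower pairs (P25/P75) give a tighter one.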

Search command for batched inference
If you have multiple time series of the same length and wish to forecast them simultaneously, you can use the batched inference command:
```
| fit MLTKContainer algo=ctsm_forecast_batched hf_repo="repo_name" local_path="/srv/app/model/data/model.pt" batch_size=32 forecast_steps=128 * into app:ctsm_forecast_batched
```
Parameters
- batch_size: Determines the number of time series processed simultaneously.
- forecast_steps: Specifies the forecasting horizon (maximum value is 128).
- hf_repo / local_path: These function the same as in the single-series inference command, allowing you to specify the model source or a local file path.
Unlike the standard inference command, this method does not require a specific value_field input. The command treats all input fields (excluding _time) from the upstream search results as individual time series and processes them in batches defined by the batch_size parameter.
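The grouping behavior can be sketched in Python: every non-_time field becomes one series, and series are processed in chunks of batch_size. This is a sketch of the behavior as described, not the actual DSDL implementation:

```python
# Group all non-_time fields into batches of series, as the batched
# command is described to do. (Illustrative sketch, not DSDL internals.)
def make_batches(rows, batch_size=32):
    fields = [f for f in rows[0] if f != "_time"]          # one series per field
    return [fields[i:i + batch_size]
            for i in range(0, len(fields), batch_size)]    # chunk by batch_size

# Toy input: 5 rows, 10 series named field_1 .. field_10
rows = [{"_time": t, **{f"field_{i}": float(t) for i in range(1, 11)}}
        for t in range(5)]
batches = make_batches(rows, batch_size=4)
print([len(b) for b in batches])  # [4, 4, 2]
```

With 10 series and batch_size=4, the model runs three times: two full batches and one partial batch of two.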
To simplify the output, only the mean forecast of each time series will be returned. The following SPL showcases how this batched inference works.
```
| inputlookup internet_traffic.csv | timechart span=5min avg("bits_transferred") as bits_transferred
| eval bits_transferred = bits_transferred / 1024 / 1024
| eval field_1=bits_transferred, field_2=bits_transferred, field_3=bits_transferred, field_4=bits_transferred, field_5=bits_transferred, field_6=bits_transferred, field_7=bits_transferred, field_8=bits_transferred, field_9=bits_transferred, field_10=bits_transferred
| fit MLTKContainer algo=ctsm_forecast_batched batch_size=32 forecast_steps=128 * into app:ctsm_forecast_batched
```
For each input time series (field_1 through field_10 in this example), the mean forecasts are returned in fields prefixed with predicted_ (for example, predicted_field_1 through predicted_field_10).

As with the single-series inference, the batched command performs predictions based on the last 128 steps of your input data. If you wish to forecast future states, you must pad each time series with future timestamps and placeholder values before running the inference, following the same methodology outlined in the previous SPL example.
Additional resources
You might find these resources helpful when implementing the guidance in this article:
- Splunkbase: Splunk App for Data Science and Deep Learning
- Splunkbase: Splunk AI Toolkit
- Cisco-ai Repository: Cisco Time Series Model 1.0

