Monitoring physical and natural environments with AI and Splunk Edge Hub
Monitoring remote physical infrastructure and natural landscapes for rare, critical events - such as unauthorized access at a remote communications tower, equipment failure at a utility substation, or specific wildlife behaviors in conservation areas - presents a massive "needle in a haystack" data challenge. Unlike standard IT environments, these outdoor sites are often vast, disconnected, and filled with environmental noise like wind, heavy rain, or shifting light.
Traditional monitoring approaches for these remote locations are often inefficient. Relying on manual inspections or "human spotters" is costly and logistically difficult in rugged terrain. Meanwhile, streaming continuous high-definition video from a mountain peak to a central data center results in huge storage costs and bandwidth strain. High latency in remote areas can mean that by the time an alert is processed, the window to take action has already closed.
To solve this, organizations are shifting to edge processing and AI. By replacing expensive, high-bandwidth video feeds with directional audio or optimized IoT sensors and processing that data locally on Splunk Edge Hub, you can significantly reduce operational costs. This approach allows you to detect physical anomalies in real-time at the source - whether it's the sound of a failing turbine or a security breach at a remote site - and transmit only the relevant metrics to the Splunk platform. This ensures immediate visibility and alerting without the overhead of traditional backhaul.
Prerequisites
To replicate this architecture, you need:
- A Splunk Edge Hub device
- A USB sensor (such as a directional microphone or camera)
- A containerized application to process the data
- (Optional) The Splunk Operational Technology Intelligence app to configure and manage a fleet of devices
How to use Splunk software for this use case
This workflow involves capturing raw data via USB, processing it with an AI model on the Neural Processing Unit (NPU) of Edge Hub, and sending the resulting metrics to the Splunk HTTP Event Collector (HEC).
The Splunk Edge Hub NPU enables you to run machine learning models directly on the device without sending data to the cloud, reducing latency from minutes to seconds. While models created in the Splunk AI Toolkit can be deployed directly to the Edge Hub for traditional machine learning use cases, this article focuses on the Docker container approach. Containers allow you to run advanced deep learning models - such as TensorFlow Lite models for computer vision or audio classification - directly on the NPU.
A key benefit of running AI containers on the edge is dramatically reduced detection latency. When using saved searches in the Splunk platform, the minimum detection interval is one minute due to cron schedule limitations. With edge AI containers, detections happen in real-time - typically within seconds - because inference runs continuously on the device. This is critical for safety and security applications where a one-minute delay could be too long to take effective action.
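To make the container-to-HEC flow concrete, the sketch below shows how a detection loop inside a container might format and forward a metric event. This is an illustrative pattern, not code from this article: the endpoint URL, token, sourcetype, and metric name are all placeholder assumptions you would replace with your own values.

```python
import json
import time

# Hypothetical HEC endpoint and token -- replace with your own values.
HEC_URL = "https://splunk.example.com:8088/services/collector"
HEC_TOKEN = "REPLACE_WITH_YOUR_HEC_TOKEN"

def build_hec_event(metric_name, value, source="edge-hub-container"):
    """Format a single detection result as a Splunk HEC event payload."""
    return {
        "time": time.time(),
        "source": source,
        "sourcetype": "edge:ai:detection",  # placeholder sourcetype
        "event": {metric_name: value},
    }

def send_to_hec(payload):
    """Post one event to HEC. Shown for illustration only; requires the
    `requests` package and network access to your Splunk instance."""
    import requests
    headers = {"Authorization": f"Splunk {HEC_TOKEN}"}
    return requests.post(
        HEC_URL, headers=headers, data=json.dumps(payload), timeout=5
    )

# A detection loop would call build_hec_event() each time the model
# reports an event, then send_to_hec() to forward it.
payload = build_hec_event("people_count", 3)
```

Because inference runs continuously on the device, the container can emit an event the moment a detection occurs, rather than waiting for a scheduled search interval.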
1. Connect the hardware
Splunk Edge Hub acts as an industrial IoT gateway. It supports various inputs, including USB, which allows you to connect external sensors like microphones or cameras directly to the device.
- Connect your USB sensor (for example a microphone array or USB camera) to one of the USB ports on the Edge Hub.
- Ensure the device is powered and positioned to capture the target environment.
2. Identify the USB device ID
To allow a Docker container to access the physical hardware, you must identify the specific device path assigned by the Edge Hub OS.
- Access the Splunk Edge Hub Advanced Configuration web interface through the Splunk Edge Hub IP address.
- Navigate to the Containers tab.
- In the Tools section, click Scan for USB Devices.
- Locate your connected device in the list and copy the device path (for example, /dev/bus/usb/001/002 or a specific audio input stream). You need this for the configuration file.
3. Create a Docker container
To perform advanced detection (like identifying specific sounds or objects) directly on the device, you must build a custom Docker container. This involves preparing a TensorFlow Lite model, building a Docker image for the ARM64 architecture, and configuring the necessary storage mappings. See Use Docker containers with Splunk Edge Hub OS for detailed instructions.
The container-based approach is ideal for use cases requiring computer vision, audio classification, or other deep learning models. You can run any AI model that can be compiled for the ARM64 architecture and TensorFlow Lite runtime, and the NPU accelerates inference to enable real-time detection. See Implement your own AI solution for detailed instructions.
Splunk provides a pre-built people detection container that enables you to implement computer vision on Splunk Edge Hub. This container uses a YOLO (You Only Look Once) model compiled for TensorFlow Lite to detect and count people in a video feed from a connected USB camera.
You can deploy this container as-is for people-counting scenarios, or use it as a template for building your own detection solutions. The same architecture applies whether you're detecting people, vehicles, specific objects, or analyzing audio patterns to identify rare events.
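The post-processing step that turns raw model output into a metric is simple to sketch. Assuming the detection model returns a list of (class_id, confidence) pairs, and assuming class ID 0 means "person" (the COCO label convention used by many YOLO models; verify this against your own model's label map), a people counter might look like this:

```python
PERSON_CLASS_ID = 0  # COCO convention; verify against your model's label map

def count_people(detections, conf_threshold=0.5):
    """Count detections classified as 'person' above a confidence threshold.

    detections: iterable of (class_id, confidence) tuples, a simplified
    stand-in for the output tensors of a TFLite object detection model.
    """
    return sum(
        1
        for class_id, confidence in detections
        if class_id == PERSON_CLASS_ID and confidence >= conf_threshold
    )

# Example: two confident person detections, one low-confidence person,
# and one vehicle (class 2) -> the count is 2.
sample = [(0, 0.91), (0, 0.76), (0, 0.31), (2, 0.88)]
print(count_people(sample))  # prints 2
```

The resulting count is the value the container would report as its metric (for example, people_count) and display on the device tile.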
4. Configure the deployment manifest (edge.json)
The edge.json file tells Splunk Edge Hub how to run your container and maps the physical USB ports to the container's internal environment. See Use Docker containers with Splunk Edge Hub OS for detailed instructions.
The screenshot below shows an example edge.json file structure, including the following fields:
- name: Defines a unique identifier for the containerized application. This is how the application is labeled within the Splunk Edge Hub OS and the Edge Hub Web UI.
- containerArchive: Specifies the exact filename of the Docker image (saved as a .tar file) that the Edge Hub should extract and run.
- usbDevices: Maps physical hardware paths from the Edge Hub (the host) into the container. This allows the AI model inside the container to "see" and collect data from your USB sensors.
- portMap: Defines the network communication bridge between the Edge Hub and the container. It follows the format "HostPort:ContainerPort", allowing external traffic or the Edge Hub OS to communicate with services running inside the Docker environment.
- mappedStorage: The directory path inside the container that is linked to Edge Hub persistent storage. This is used for saving logs, configuration files, or captured data that needs to persist even if the container restarts.
- mappedStorageMb: Sets a hard limit (in megabytes) on how much data the container can write to the mapped storage. This prevents a single application from consuming all available disk space on the device.
- tileConfiguration: A nested object that controls the local user interface (UI) on the physical Splunk Edge Hub screen. It contains the following fields:
  - displayTileOnHub: A boolean (true/false) that determines whether a dedicated visual tile for this application should appear on the built-in display of the Edge Hub.
  - metric_name: The specific variable or data point (such as people_count) that the Edge Hub should track and display in real time on the tile.
  - min_range/max_range: Defines a numerical scale for the data. This helps the Edge Hub render visual elements like gauges or progress bars correctly based on expected data limits.
  - showCameraNavigationButton: When set to true, this adds a button to the Edge Hub UI that allows a user to tap and view the live camera feed associated with the detection logic.
  - sensor_tile_display_name: The human-readable title (for example, "People Detection") that appears at the top of the tile on the device's physical screen.
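Putting those fields together, a minimal edge.json might look like the following. Treat this as a shape sketch rather than a verified manifest: the file name, device path, ports, and ranges are illustrative placeholders, and the exact schema is documented in Use Docker containers with Splunk Edge Hub OS.

```json
{
  "name": "people-detection",
  "containerArchive": "people-detection.tar",
  "usbDevices": ["/dev/bus/usb/001/002"],
  "portMap": ["8080:5000"],
  "mappedStorage": "/data",
  "mappedStorageMb": 512,
  "tileConfiguration": {
    "displayTileOnHub": true,
    "metric_name": "people_count",
    "min_range": 0,
    "max_range": 20,
    "showCameraNavigationButton": true,
    "sensor_tile_display_name": "People Detection"
  }
}
```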

5. Deploy the solution
You can deploy the container directly to a single device or manage deployments across a fleet of devices.
- Single device deployment: Upload your container bundle through the Splunk Edge Hub Web UI. See Use Docker containers with Splunk Edge Hub OS for detailed instructions.
- Fleet management: Deploy and manage containers centrally across multiple Edge Hub devices using the Splunk Operational Technology Intelligence app.
6. Monitor and analyze in the Splunk platform
After the container is running, it processes the environmental data locally. When an event is detected (for example, a specific sound frequency or visual object), the container sends the metrics to the Splunk platform via HEC. You can then create:
- Real-time alerts: Configure alerts in the Splunk platform to notify you immediately when the Edge Hub detects an anomaly.
- Dashboards: Visualize the frequency and intensity of detected events over time.
If network connectivity is lost, the Edge Hub caches data locally and forwards it after the connection is restored, ensuring no data is lost during remote monitoring operations.
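The store-and-forward behavior described above is built into the Edge Hub, but a container can apply the same pattern to its own HEC calls. Below is a minimal sketch of this pattern, not part of any Splunk SDK: send_fn stands in for whatever function actually posts an event to HEC and returns True on success.

```python
from collections import deque

class MetricBuffer:
    """Hold metrics locally while the network is down, then flush in order."""

    def __init__(self, maxlen=10_000):
        # Bounded queue so a long outage cannot exhaust device storage.
        self._queue = deque(maxlen=maxlen)

    def add(self, event):
        """Queue one detection event for delivery."""
        self._queue.append(event)

    def flush(self, send_fn):
        """Send queued events oldest-first; stop at the first failure."""
        sent = 0
        while self._queue:
            event = self._queue[0]
            if not send_fn(event):
                break  # still offline; keep the event for the next flush
            self._queue.popleft()
            sent += 1
        return sent
```

On each detection the container calls add(); a periodic flush() drains the backlog in order once connectivity returns, so no events are dropped during an outage shorter than the buffer capacity.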
Frequently asked questions
Can I connect any USB device to the Splunk Edge Hub?
Yes. Generally, if the device has standard Linux driver support, you can connect it. You can use the "Scan for USB Devices" tool in the Edge Hub configuration page to identify the specific device ID and map it in your edge.json file.
Can I use multiple audio inputs or channels?
Yes. You can connect a multi-channel interface or a USB hub to the Edge Hub. For example, you can connect a microphone hub with multiple inputs to a single USB port on the Edge Hub, allowing you to capture and process multiple audio channels simultaneously.
Can Splunk Edge Hub devices communicate with each other?
Yes. Because the containers run on a standard network stack, you can open ports to transmit data between containers on different Edge Hub devices. This is helpful in use cases like tracking an object or person as they move across the field of view of multiple cameras connected to different hubs.
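As a toy illustration of hub-to-hub traffic, the snippet below sends a JSON-encoded detection event over TCP. In a real deployment the listener would run in a container on a second Edge Hub, with its port opened through a portMap entry; here both ends run locally so the example is self-contained, and the event fields are invented for the demo.

```python
import json
import socket
import threading

def serve_once(sock, received):
    """Accept one connection and record the JSON event it delivers."""
    conn, _ = sock.accept()
    with conn:
        line = conn.makefile().readline()
        received.append(json.loads(line))

# Listener side (on a real deployment: a container on another Edge Hub,
# exposed through its portMap entry).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # ephemeral port for this local demo
listener.listen(1)
port = listener.getsockname()[1]

received = []
t = threading.Thread(target=serve_once, args=(listener, received))
t.start()

# Sender side: forward a detection event to the peer hub so it can pick
# up tracking as the object moves into its camera's field of view.
event = {"camera": "hub-a-cam-1", "object_id": 42, "direction": "east"}
with socket.create_connection(("127.0.0.1", port)) as conn:
    conn.sendall((json.dumps(event) + "\n").encode())

t.join()
listener.close()
print(received[0]["object_id"])  # prints 42
```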
Can I connect a CAN bus (for example for vehicle telemetry)?
Yes. You can use a CAN-to-USB converter to connect a CAN bus interface to the Edge Hub. This allows you to ingest telemetry data from vehicles or industrial machinery directly into the device for processing.
Does this solution replace existing OT systems like an MES?
While you technically can run logic similar to a Manufacturing Execution System (MES) in a container, the primary design goal of Splunk Edge Hub is to sit parallel to your existing infrastructure. It extracts and processes data to provide visibility without disrupting or replacing critical control systems.
What types of AI models can I run in containers on the Edge Hub?
You can run any AI model that can be compiled for the ARM64 architecture and TensorFlow Lite runtime. Common examples include object detection models (like YOLO for people or vehicle detection), audio classification models, and anomaly detection models. The NPU accelerates inference, making real-time detection feasible for many use cases. The pre-built people detection container is a good starting point for understanding how to structure your own AI containers.
Additional resources
The content in this article comes from a .conf25 presentation, one of the thousands of Splunk resources available to help users succeed.
In addition, these resources might help you understand and implement this guidance:
- Splunk Help: Use Docker containers with Splunk Edge Hub OS
- Splunk Help: Set up the Splunk Edge Hub SDK
- Splunkbase App: Splunk App for OT Intelligence

