The Splunk App for Hyperledger Fabric contains a set of dashboards and analytics that give you full visibility into system metrics, application data, and the ledger, so that you can maintain security, stability, and performance for your Hyperledger Fabric deployment.
These dashboards are meant to be a starting point for building analytics around your environment, whether your infrastructure is virtual or physical, on-premises or in the cloud.
To take full advantage of the dashboards provided, you should configure these four data sources:
- Hyperledger Fabric Distributed Ledger - These logs contain transaction information from the ledger itself and provide insight into operations and actions on-chain. The open-source Splunk Connect for Hyperledger Fabric helps you easily ingest Hyperledger Fabric ledger data into Splunk.
- Hyperledger Fabric Application Logs - Application logs provide information about specific Hyperledger components such as the Orderers, Peer Nodes, and other services (CouchDB and Kafka) useful for troubleshooting, debugging, and monitoring application performance.
- Hyperledger Fabric Metrics - These are metrics specific to Hyperledger Fabric components and performance. See the metrics reference in the Hyperledger Fabric documentation for a description of each metric.
- Infrastructure/System Level Metrics and Logs - System metrics such as CPU, memory, disk, and network activity provide insight into the underlying infrastructure that Hyperledger Fabric nodes are running on. These metrics and logs can come from Docker, Kubernetes, IBM IKS, Microsoft Azure, Google’s GCP, and AWS CloudWatch, to name a few. Splunk has different add-ons and connectors for each.
There are a few dashboards provided to get you started with analyzing your Hyperledger Fabric deployment. These include:
- Network Architecture and Channels - See at a glance the number of orderers, peers, and channels in your Hyperledger Fabric network.
- Infrastructure Health and Monitoring - An overview of system health based on metrics such as CPU usage, uptime, and transaction latency. You can see in real time when transactions start to back up or a peer falls behind on blocks.
- Real-time visibility into the transactions being written on each ledger. This dashboard blends ledger data sent from the peers with logs and metrics to give a holistic view of the network’s health.
- High-level visibility into key threat indicators to facilitate detection of attacks on the network. This dashboard is informed by ledger, log, and metric data.
- Query the ledger using specific attributes to get detailed event data.
- A data setup page containing a dashboard that verifies your Splunk environment is receiving all the data the application requires.
Field extractions and aliases
The app provides a number of field extractions and aliases that make searching and investigating Hyperledger Fabric data easier. These include parsing CouchDB logs for actions (GET, PUT, POST, etc.) and documents, parsing chaincode logs for channel and latency metadata, and field aliases for accessing various parts of ledger transactions. To see the full list, look at the props.conf file or go to Settings > Fields in Splunk.
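As an illustration of what such configuration looks like, the following is a hypothetical props.conf stanza in the same spirit; the sourcetype and field names here are placeholders, and the app's bundled props.conf is the authoritative source:

```
# Hypothetical stanza: extract the HTTP action and document path from
# CouchDB access logs, and alias a raw field to a friendlier name.
# The app's actual props.conf stanzas may differ.
[couchdb:log]
EXTRACT-couchdb_action = (?<action>GET|PUT|POST|DELETE)\s+(?<document>\S+)
FIELDALIAS-txid = tx_id AS transaction_id
```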
1. Install the app on a Splunk Enterprise search head that will have access to the data.
2. Open the app and navigate to the Data Setup page from the Introduction page.
3. Follow the instructions for each of the data sources on the Data Setup page to populate the graphs and validate that data is coming in correctly.
- Hyperledger Fabric Ledger Logs - The Splunk Connect for Hyperledger Fabric is an open-source agent that connects to a peer on the Hyperledger Fabric network. See the README on GitHub for deployment instructions. Docker, Kubernetes, and native deployments are all options.
- Hyperledger Fabric Application Logs - Create indexes in Splunk as well as an input mechanism to receive the data. It is recommended to create indexes named “hyperledger_logs” and “hyperledger_metrics” and to enable the Splunk HTTP Event Collector (HEC) to receive data, if it has not been enabled already. You can use the examples provided in the app: rename “indexes.conf.example” to “indexes.conf” to enable the indexes, and rename “inputs.conf.example” to “inputs.conf” to enable the HEC endpoints.
```
$ cd $SPLUNK_HOME/etc/apps/splunk-hyperledger-fabric/default
$ sudo mv inputs.conf.example inputs.conf
$ sudo mv indexes.conf.example indexes.conf
$ cd $SPLUNK_HOME/bin
$ sudo ./splunk restart
```
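For reference, minimal stanzas along these lines might look as follows. This is an illustrative sketch only; the shipped indexes.conf.example and inputs.conf.example files are authoritative, and the token value below is a placeholder:

```
# indexes.conf: an events index for logs and a metrics index
[hyperledger_logs]
homePath   = $SPLUNK_DB/hyperledger_logs/db
coldPath   = $SPLUNK_DB/hyperledger_logs/colddb
thawedPath = $SPLUNK_DB/hyperledger_logs/thaweddb

[hyperledger_metrics]
datatype   = metric
homePath   = $SPLUNK_DB/hyperledger_metrics/db
coldPath   = $SPLUNK_DB/hyperledger_metrics/colddb
thawedPath = $SPLUNK_DB/hyperledger_metrics/thaweddb

# inputs.conf: a HEC token allowed to write to both indexes
[http://hyperledger]
token = 00000000-0000-0000-0000-000000000000
index = hyperledger_logs
indexes = hyperledger_logs,hyperledger_metrics
```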
- Hyperledger Fabric Metrics (Prometheus) - Hyperledger Fabric 2.2 exposes metrics for ingestion using Prometheus, which can be scraped by Splunk Connect for Hyperledger Fabric. Set the following environment variables in your Hyperledger Fabric environment, then configure Splunk Connect for Hyperledger Fabric or the Splunk OpenTelemetry Connector to scrape these metrics. Finally, open the Metrics Workspace to explore and analyze your metrics.
CORE_METRICS_PROVIDER: prometheus CORE_OPERATIONS_LISTENADDRESS: [EXTERNAL-IP]:[PORT] ORDERER_METRICS_PROVIDER: prometheus ORDERER_OPERATIONS_LISTENADDRESS: [EXTERNAL-IP]:[PORT]
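With these variables set, the operations listen address serves metrics in the Prometheus text exposition format, which is what the scraper consumes. As a rough sketch of that format, here is a tiny parser over an illustrative sample (ledger_blockchain_height is one of Fabric's documented metrics, but the values below are made up):

```python
# Parse a small sample of Prometheus text exposition output.
# The sample is illustrative, not real Fabric output.
sample = """\
# HELP ledger_blockchain_height Height of the chain in blocks.
# TYPE ledger_blockchain_height gauge
ledger_blockchain_height{channel="mychannel"} 42
ledger_blockchain_height{channel="ops"} 7
"""

def parse_metrics(text):
    """Return {metric_name_with_labels: value} for each sample line."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE/comment lines
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

print(parse_metrics(sample))
```

In practice the scraper handles this for you; the sketch is only meant to show the shape of the data the endpoint exposes.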
- System Logs/Metrics - On the data setup dashboard is a list of common options that you can use to get your system logs and metrics into Splunk for end-to-end visibility.
4. You can combine node monitoring with additional Splunk solutions to capture logs. Here is a non-exhaustive list of applications you can combine with our offerings:
- Docker: Splunk Docker Logging Driver
- Kubernetes: Splunk Connect for Kubernetes
- Syslog: Monitoring Network Ports in Splunk
- Log File: Monitoring Files and Directories with Splunk
- IBM Cloud Platform: IBM Cloud Platform
- Microsoft Azure: Splunk Add-on for Microsoft Cloud Services
- AWS Cloudwatch: Splunk App for AWS
- GCP Stackdriver: Splunk Add-on for Google Cloud
5. To make logs easier to parse, you might want to configure Fabric to emit logs in JSON format. To do so, set the following environment variable in the running environment of your Fabric cluster:
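Assuming Fabric 1.4 or later, the logging output format is controlled by the FABRIC_LOGGING_FORMAT variable; setting it to json (in the same style as the metrics variables above) switches peer and orderer logs to JSON:

```
FABRIC_LOGGING_FORMAT: json
```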