Getting started with Splunk Connect for Hyperledger Fabric
Splunk Connect for Hyperledger Fabric (aka fabric-logger) sends blocks and transactions from a Hyperledger Fabric distributed ledger to Splunk for analytics. It's recommended (but not required) that you use it with the Splunk App for Hyperledger Fabric. The fabric-logger can also send blocks and transactions to stdout for use with any other system.
Currently the fabric-logger supports connecting to one peer at a time, so you must deploy a separate fabric-logger instance for each peer you want to connect to. Each fabric-logger instance can monitor multiple channels on the peer it is connected to.
Fabric ACLs required for Splunk Connect for Hyperledger Fabric
User authentication in Hyperledger Fabric depends on a private key and a signed certificate. If you are using the `cryptogen` tool, these files are found in the following directories (see the sketch after the list for how they map to fabric-logger's settings):
- Signed certificate: `crypto-config/peerOrganizations/<org-domain>/users/<username>@<org-domain>/msp/signcerts/<username>@<org-domain>-cert.pem`
- Private key: `crypto-config/peerOrganizations/<org-domain>/users/<username>@<org-domain>/msp/keystore/*_sk`
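For illustration, assuming a `cryptogen`-generated `Admin` identity in a hypothetical `example.com` organization, these paths map onto the `FABRIC_CERTFILE` and `FABRIC_KEYFILE` settings used later in this guide roughly as follows:

```bash
# Illustrative only: the org domain, user name, and generated key filename
# depend on your own cryptogen output.
ORG_DOMAIN=example.com
USER_NAME=Admin
MSP_DIR=crypto-config/peerOrganizations/${ORG_DOMAIN}/users/${USER_NAME}@${ORG_DOMAIN}/msp

export FABRIC_CERTFILE=${MSP_DIR}/signcerts/${USER_NAME}@${ORG_DOMAIN}-cert.pem
# The private key filename is generated, so glob for the *_sk file.
export FABRIC_KEYFILE=$(find ${MSP_DIR}/keystore -name '*_sk' -type f)
```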
Additionally, Hyperledger Fabric users depend on ACLs defined in the `configtx.yaml` file in order to listen for events on peers. You can see the ACLs documented here. The only ACL policy required for this app is `event/Block`, which by default is mapped to the policy `/Channel/Application/Readers`. Any user covered by this policy in the organization can be used for the fabric-logger. User membership in policies is defined at the organization level; an example can be seen here.
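For reference, the snippet below mirrors how the default Fabric sample `configtx.yaml` expresses this mapping in its `Application` section; your channel configuration may differ:

```yaml
Application: &ApplicationDefaults
  ACLs:
    # fabric-logger only requires event/Block; any identity that satisfies
    # the mapped policy can receive block events.
    event/Block: /Channel/Application/Readers
```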
Configuration
Fabric Logger uses two files for configuration:
- A connection profile, `network.yaml`, filled in with the appropriate values for your network.
- `fabriclogger.yaml`, which Fabric Logger uses to define the channels, peers, and chaincode events to listen to (a minimal sketch appears below).

For setup guidance, refer to the configuration docs and fabriclogger.yaml.example.
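As a rough sketch only (consult fabriclogger.yaml.example for the authoritative schema), `fabriclogger.yaml` lists the channels and chaincode events to subscribe to, using the same keys that appear in the helm values later in this guide:

```yaml
# Sketch only; refer to fabriclogger.yaml.example for the full schema.
channels:
  - channel1
ccevents:
  - channelName: channel1
    chaincodeId: myChaincodeId
```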
Checkpoints
As Fabric Logger processes blocks and chaincode events, its progress is stored in a `.checkpoints` file. Upon restart, Fabric Logger loads this file and resumes from the last processed block number. The file uses INI format. The following is a sample:
```ini
myChannel=5
mySecondChannel=3

[ccevents.myChannel_myChaincodeId]
channelName=myChannel
chaincodeId=myChaincodeId
block=5
```
Running in Docker, Kubernetes, or locally
Docker
Running the Fabric Logger in Docker is recommended. A sample docker-compose entry looks like this:
```yaml
services:
  fabric-logger.example.com:
    container_name: fabric-logger.example.com
    image: ghcr.io/splunkdlt/fabric-logger:latest
    environment:
      - FABRIC_KEYFILE=<path to private key file>
      - FABRIC_CERTFILE=<path to signed certificate>
      - FABRIC_CLIENT_CERTFILE=<path to client certificate when using mutual tls>
      - FABRIC_CLIENT_KEYFILE=<path to client private key when using mutual tls>
      - FABRIC_MSP=<msp name>
      - SPLUNK_HEC_TOKEN=12345678-ABCD-EFGH-IJKL-123456789012
      - SPLUNK_HEC_URL=https://splunk.example.com:8088
      - SPLUNK_HEC_REJECT_INVALID_CERTS="false"
      - SPLUNK_INDEX=hyperledger_logs
      - SPLUNK_METRICS_INDEX=hyperledger_metrics
      - LOGGING_LOCATION=splunk
      - NETWORK_CONFIG=network.yaml
      - PROMETHEUS_DISCOVERY=true
      - PROMETHEUS_ORDERER_PORT=7060
      - PROMETHEUS_PEER_PORT=7061
    volumes:
      - ./crypto:/usr/src/app/crypto/
      - ./network.yaml:/usr/src/app/network.yaml
      - ./fabriclogger.yaml:/usr/src/app/fabriclogger.yaml
      - ./.checkpoints:/usr/src/app/.checkpoints
    depends_on:
      - orderer.example.com
      - peer0.example.com
      - peer1.example.com
    ports:
      - 8080:8080
    networks:
      - hlf_network
```
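With that entry in place, the logger can be started alongside the rest of the network using standard docker-compose commands, for example:

```bash
docker-compose up -d fabric-logger.example.com
```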
Kubernetes
A helm chart for Kubernetes deployments is also included. First, set your `values.yaml` file. Here is an example configuration (although this will be specific to your environment):

```yaml
splunk:
  hec:
    token: 12345678-ABCD-EFGH-IJKL-123456789012
    url: https://splunk-splunk-kube.splunk.svc.cluster.local:8088
    rejectInvalidCerts: "false"
  index: hyperledger_logs
secrets:
  peer:
    cert: hlf--peer-admincert
    # itemKey can be defined if there is a secret with multiple items stored inside.
    certItem: cert.pem
    key: hlf--peer-adminkey
    keyItem: key.pem
    tls: hlf--peer-tlscert
    tlsItem: tlscacert.pem
    clientCert: hlf--peer-clientcert
    clientCertItem: clientCert.pem
    clientKey: hlf--peer-clientkey
    clientKeyItem: clientKey.pem
fabric:
  msp: PeerMSP
  orgDomain: example.com
  blockType: full
  user: Admin
channels:
  - channel1
  - channel2
ccevents:
  - channelName: channel1
    chaincodeId: myChaincodeId
  - channelName: channel1
    chaincodeId: myChaincodeId
```
Kubernetes: Autogenerating secrets
Alternatively, if you are using `cryptogen` to generate identities, the helm chart can auto-populate secrets for you.
- Download the helm file and untar it locally so you can copy your `crypto-config` into the directory:

  ```bash
  wget https://github.com/splunk/fabric-logger/releases/download/v4.2.2/fabric-logger-helm-4.2.2.tgz
  tar -xf fabric-logger-helm-4.2.2.tgz
  cp -R crypto-config fabric-logger/crypto-config
  ```
- Set the secrets section of `values.yaml` to:

  ```yaml
  secrets:
    peer:
      create: true
  ```
- Deploy using:

  ```bash
  helm install -n fabric-logger-${NS} --namespace ${NS} \
      -f values.yaml -f network.yaml ./fabric-logger
  ```
Kubernetes: Manually populating secrets
Make sure that the peer credentials are stored in the appropriately named secrets in the same namespace. You don't have to use the admin credential for connecting, but make sure to select the appropriate user for your use case.
```bash
NS=default
ADMIN_MSP_DIR=./crypto-config/peerOrganizations/peer0.example.com/users/Admin@peer0.example.com/msp

CERT=$(find ${ADMIN_MSP_DIR}/signcerts/*.pem -type f)
kubectl create secret generic -n ${NS} hlf-peer--peer0-cert --from-file=cert.pem=$CERT

KEY=$(find ${ADMIN_MSP_DIR}/keystore/*_sk -type f)
kubectl create secret generic -n ${NS} hlf-peer--peer0-key --from-file=key.pem=$KEY
```
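To confirm the secrets landed in the expected namespace, a standard kubectl query (not specific to fabric-logger) can be used:

```bash
kubectl get secret -n ${NS} hlf-peer--peer0-cert hlf-peer--peer0-key
```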
A `network.yaml` ConfigMap will automatically be generated using the secrets and channel details set above. You can deploy via helm:

```bash
helm install -n fabric-logger-${PEER_NAME}-${NS} --namespace ${NS} \
    -f https://raw.githubusercontent.com/splunk/fabric-logger/v4.2.2/defaults.fabriclogger.yaml \
    -f values.yaml -f network.yaml \
    https://github.com/splunk/fabric-logger/releases/download/v4.2.2/fabric-logger-helm-4.2.2.tgz
```
Kubernetes: Deleting helm chart
You can delete the helm chart like this:
```bash
helm delete --purge fabric-logger-${PEER_NAME}-${NS}
```
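Note that `--purge` is Helm 2 syntax, matching the install commands above. If your cluster runs Helm 3, the usual equivalent would be:

```bash
helm uninstall fabric-logger-${PEER_NAME}-${NS} --namespace ${NS}
```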
Running locally
- Install dependencies:

  ```bash
  $ yarn install
  ```

- Provide a configuration file `fabriclogger.yaml` or set the appropriate environment variables (see the sketch after this list). Details about fabriclogger's command-line usage can be found in the CLI docs.
- Update the `network.yaml` with appropriate values for your system.
- Start the application:

  ```bash
  $ yarn start
  ```
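As a hedged illustration of the environment-variable route, the same settings shown in the Docker example above could be exported before starting; every value below is a placeholder for your own environment:

```bash
# Placeholder values only; mirror the variables from the Docker example above.
export FABRIC_MSP=PeerMSP
export FABRIC_CERTFILE=./crypto/Admin@example.com-cert.pem
export FABRIC_KEYFILE=./crypto/priv_sk
export NETWORK_CONFIG=network.yaml
export LOGGING_LOCATION=splunk
export SPLUNK_HEC_URL=https://splunk.example.com:8088
export SPLUNK_HEC_TOKEN=12345678-ABCD-EFGH-IJKL-123456789012
export SPLUNK_INDEX=hyperledger_logs

yarn start
```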