Wazuh cluster installation
This document describes the installation of the Wazuh manager and the Splunk forwarder.
Note
Root user privileges are required to run all the commands described below.
Prerequisites
Before installing the Wazuh server and the Splunk forwarder, some extra packages must be installed:
Install all the required utilities. For RPM-based distributions:
# yum install curl
For Debian-based distributions:
# apt install curl apt-transport-https lsb-release gnupg2
For SUSE-based distributions:
# zypper install curl
Installing the Wazuh server
The Wazuh server collects and analyzes data from the deployed Wazuh agents. It runs the Wazuh manager, the Wazuh API, and the Splunk forwarder. The first step to set up Wazuh is adding the Wazuh repository to the server. Alternatively, all the available packages can be found here.
Adding the Wazuh repository
Yum (RPM-based distributions):

- Import the GPG key:

# rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH

- Add the repository:

# cat > /etc/yum.repos.d/wazuh.repo << EOF
[wazuh]
gpgcheck=1
gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
enabled=1
name=EL-\$releasever - Wazuh
baseurl=https://packages.wazuh.com/4.x/yum/
protect=1
EOF

APT (Debian-based distributions):

- Install the GPG key:

# curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -

- Add the repository:

# echo "deb https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list

- Update the package information:

# apt-get update

Zypper (SUSE-based distributions):

- Import the GPG key:

# rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH

- Add the repository:

# cat > /etc/zypp/repos.d/wazuh.repo <<\EOF
[wazuh]
gpgcheck=1
gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
enabled=1
name=EL-$releasever - Wazuh
baseurl=https://packages.wazuh.com/4.x/yum/
protect=1
EOF
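Note the difference between the two here-documents above: with an unquoted EOF delimiter, `$releasever` must be escaped (`\$releasever`) so the shell does not expand it and yum can resolve it at install time; quoting the delimiter (`<<\EOF`) achieves the same effect. This can be sanity-checked against a throwaway path (the `/tmp/wazuh.repo` path here is just for illustration):

```shell
# Write the repo definition to a temporary file to confirm that the
# escaped \$releasever reaches the file literally (hypothetical path).
cat > /tmp/wazuh.repo << EOF
[wazuh]
gpgcheck=1
gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
enabled=1
name=EL-\$releasever - Wazuh
baseurl=https://packages.wazuh.com/4.x/yum/
protect=1
EOF

# The name line should contain the literal string $releasever.
grep 'name=' /tmp/wazuh.repo
```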
Installing the Wazuh manager
Install the Wazuh manager package. For RPM-based distributions:
# yum install wazuh-manager-4.1.5-1
For Debian-based distributions:
# apt-get install wazuh-manager=4.1.5-1
For SUSE-based distributions:
# zypper install wazuh-manager-4.1.5-1
Choose the corresponding tab to configure the installation as a single-node or multi-node cluster:
- Enable and start the Wazuh manager service.

Systemd:

# systemctl daemon-reload
# systemctl enable wazuh-manager
# systemctl start wazuh-manager

SysV init, choose one option according to the operating system used:

RPM-based operating system:

# chkconfig --add wazuh-manager
# service wazuh-manager start

Debian-based operating system:

# update-rc.d wazuh-manager defaults 95 10
# service wazuh-manager start

- Run the following command to check if the Wazuh manager is active.

Systemd:

# systemctl status wazuh-manager

SysV init:

# service wazuh-manager status
One server has to be chosen as the master; the rest will be workers. The section Wazuh server master node must therefore be applied once, on the server chosen for this role. For all the other servers, the section Wazuh server worker nodes must be applied.
Wazuh server master node
- Configure the cluster node by editing the following settings in the /var/ossec/etc/ossec.conf file:

<cluster>
  <name>wazuh</name>
  <node_name>master-node</node_name>
  <node_type>master</node_type>
  <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>wazuh-master-address</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>

Parameters and descriptions:

- name: Name of the cluster.
- node_name: Name of the current node.
- node_type: Specifies the role of the node. It has to be set to master.
- key: Key used to encrypt communication between cluster nodes. The key must be 32 characters long and the same for all of the nodes in the cluster. The following command can be used to generate a random key: openssl rand -hex 16.
- port: Destination port for cluster communication.
- bind_addr: Network IP to which the node is bound to listen for incoming requests (0.0.0.0 for any IP).
- nodes: The address of the master node. It must be specified in all nodes, including the master itself. The address can be either an IP or a DNS name.
- hidden: Shows or hides the cluster information in the generated alerts.
- disabled: Indicates whether the node is enabled or disabled in the cluster. This option must be set to no.
- Once the /var/ossec/etc/ossec.conf configuration file is edited, enable and start the Wazuh manager service.

Systemd:

# systemctl daemon-reload
# systemctl enable wazuh-manager
# systemctl start wazuh-manager

SysV init, choose one option according to the operating system used:

RPM-based operating system:

# chkconfig --add wazuh-manager
# service wazuh-manager start

Debian-based operating system:

# update-rc.d wazuh-manager defaults 95 10
# service wazuh-manager start

- Run the following command to check if the Wazuh manager is active.

Systemd:

# systemctl status wazuh-manager

SysV init:

# service wazuh-manager status
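The 32-character value for the <key> setting can be generated with openssl, as mentioned in the parameter descriptions above. A quick sketch that also sanity-checks the length (the key printed will differ on every run; never reuse a key published in documentation):

```shell
# Generate a random 16-byte key, hex-encoded to 32 characters,
# suitable for the <key> setting of the cluster configuration.
key=$(openssl rand -hex 16)
echo "$key"

# Confirm the length matches what the cluster requires.
echo "${#key}"
```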
Wazuh server worker nodes
- Configure the cluster node by editing the following settings in the /var/ossec/etc/ossec.conf file:

<cluster>
  <name>wazuh</name>
  <node_name>worker-node</node_name>
  <node_type>worker</node_type>
  <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>wazuh-master-address</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>

As shown in the example above, the following parameters have to be edited:

- name: Name of the cluster.
- node_name: Each node of the cluster must have a unique name.
- node_type: It has to be set to worker.
- key: The key created previously for the master node. It has to be the same for all the nodes.
- nodes: It has to contain the address of the master (it can be either an IP or a DNS name).
- disabled: It has to be set to no.
- Once the /var/ossec/etc/ossec.conf configuration file is edited, enable and start the Wazuh manager service.

Systemd:

# systemctl daemon-reload
# systemctl enable wazuh-manager
# systemctl start wazuh-manager

SysV init, choose one option according to the operating system used:

RPM-based operating system:

# chkconfig --add wazuh-manager
# service wazuh-manager start

Debian-based operating system:

# update-rc.d wazuh-manager defaults 95 10
# service wazuh-manager start

- Run the following command to check if the Wazuh manager is active.

Systemd:

# systemctl status wazuh-manager

SysV init:

# service wazuh-manager status
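Once the master and the worker nodes are running, cluster membership can be verified with the cluster_control tool shipped with the Wazuh manager. This is a sketch and requires a running manager; the node names in the output should match the node_name values configured above:

```shell
# List the nodes currently connected to the cluster
# (requires a running Wazuh manager with the cluster enabled).
/var/ossec/bin/cluster_control -l
```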
Install and configure Splunk Forwarder
A Splunk Forwarder is required in order to send alerts to the indexer.
Depending on the type of architecture that you're installing, the Splunk Forwarder is configured differently.
Warning
- On a single-instance architecture, the forwarder must point to the Splunk Enterprise instance where the Wazuh app was installed. 
- On a multi-instance architecture, the forwarder must point to the search peers (or indexers). 
- Download Splunk Forwarder v8.1.3 package from the official website. 
- Install the Splunk Forwarder package. For RPM-based distributions:
# yum install splunkforwarder-package.rpm
For Debian-based distributions:
# dpkg --install splunkforwarder-package.deb
For SUSE-based distributions:
# zypper install splunkforwarder-package.rpm
- Ensure Splunk Forwarder v8.1.3 is installed in /opt/splunkforwarder.
Configuration process
This section explains how to configure the Splunk Forwarder to send alerts to the Indexer component.
- props.conf: In order to consume data inputs, Splunk needs to know what kind of format it will handle.
- inputs.conf: The Splunk Forwarder needs this file to read data from an input. In this case, the Wazuh alerts file.
Configuring props
- Download and insert the props.conf template:

# curl -so /opt/splunkforwarder/etc/system/local/props.conf https://raw.githubusercontent.com/wazuh/wazuh-splunk/v4.1.5-8.1.3/setup/forwarder/props.conf

- Download and insert the inputs.conf template:

# curl -so /opt/splunkforwarder/etc/system/local/inputs.conf https://raw.githubusercontent.com/wazuh/wazuh-splunk/v4.1.5-8.1.3/setup/forwarder/inputs.conf
- Set the Wazuh manager hostname:

# sed -i "s:MANAGER_HOSTNAME:$(hostname):g" /opt/splunkforwarder/etc/system/local/inputs.conf
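The sed command above replaces every occurrence of the MANAGER_HOSTNAME placeholder in the downloaded template with the output of hostname. A minimal sketch of the same edit against a throwaway copy (the /tmp path is just for illustration):

```shell
# Create a throwaway copy containing the placeholder.
printf 'host = MANAGER_HOSTNAME\n' > /tmp/inputs.conf.test

# Replace the placeholder with the actual hostname, as in the step above.
sed -i "s:MANAGER_HOSTNAME:$(hostname):g" /tmp/inputs.conf.test

# The placeholder is gone and the real hostname is in place.
cat /tmp/inputs.conf.test
```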
Set up data forwarding
- Point the Forwarder output to Wazuh's Splunk Indexer with the following command:

# /opt/splunkforwarder/bin/splunk add forward-server <INDEXER_IP>:<INDEXER_PORT>

- INDEXER_IP is the IP address of the Splunk Indexer.
- INDEXER_PORT is the port of the Splunk Indexer. By default, it is 9997.
- Restart the Splunk Forwarder service:

# /opt/splunkforwarder/bin/splunk restart

Warning

If you get an error message about the port 8089 already being in use, you can change it to use a different one.

After installing the Splunk Forwarder, incoming data should appear in the designated Indexer.
- Optional. If you additionally want the Splunk Forwarder service to start at boot time, execute the following command:

# /opt/splunkforwarder/bin/splunk enable boot-start
To configure a forwarder instance in the cluster, first install the Splunk Forwarder. Then configure the three most important files in this instance:
- inputs.conf: Reads alerts from alerts.json 
- outputs.conf: This file points events to certain indexers. It can be a single indexer or a cluster of indexers; in the latter case, load balancing has to be configured in it. 
- props.conf: This file defines the format and field transformations of the data to be indexed. 
Starting with inputs.conf, create it and fill it with the following block:
# touch /opt/splunkforwarder/etc/system/local/inputs.conf
[monitor:///var/ossec/logs/alerts/alerts.json]
disabled = 0
host = MANAGER_HOSTNAME
index = wazuh
sourcetype = wazuh
Next, create outputs.conf:
# touch /opt/splunkforwarder/etc/system/local/outputs.conf
And paste this inside:
[indexer_discovery:cluster1]
pass4SymmKey = changeme
master_uri = https://<master_ip>:<port>
[tcpout:cluster1_tcp]
indexerDiscovery = cluster1
[tcpout]
defaultGroup = cluster1_tcp
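The indexer_discovery stanza above asks the cluster master for the current list of peer indexers. If you would rather point the forwarder at a fixed set of indexers, a static server list in the tcpout group works as well; this is a sketch with hypothetical addresses, and the forwarder load-balances across the entries:

```ini
[tcpout:cluster1_tcp]
; Comma-separated list of indexers; replace with your real
; indexer addresses and receiving ports.
server = indexer1.example.com:9997,indexer2.example.com:9997

[tcpout]
defaultGroup = cluster1_tcp
```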
For the last file, props.conf, follow the same procedure:
# touch /opt/splunkforwarder/etc/system/local/props.conf
[wazuh]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
NO_BINARY_CHECK = true
category = Application
disabled = false
pulldown_type = true
To apply all the changes, restart Splunk:
# /opt/splunkforwarder/bin/splunk restart
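After the restart, you can confirm that the forwarder picked up the output configuration. This is a sketch; the command requires the forwarder to be running and may prompt for the Splunk admin credentials:

```shell
# List the configured forward servers and whether they are active.
/opt/splunkforwarder/bin/splunk list forward-server
```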