Wazuh cluster installation

This document will go through the installation of the Wazuh manager and the Splunk forwarder.

Note

Root user privileges are required to run all the commands described below.

Prerequisites

Before installing the Wazuh server and the Splunk forwarder, some extra packages must be installed:

Install all the required utilities (RPM based OS):

# yum install curl

Install all the required utilities (Debian based OS):

# apt install curl apt-transport-https lsb-release gnupg2

Install all the required utilities (SUSE based OS):

# zypper install curl

Installing the Wazuh server

The Wazuh server collects and analyzes data from the deployed Wazuh agents. It runs the Wazuh manager, the Wazuh API, and the Splunk forwarder. The first step to set up Wazuh is adding the Wazuh repository to the server. Alternatively, all the available packages can be found here.

Adding the Wazuh repository

For RPM based OS (yum):

  1. Import the GPG key:

    # rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
    
  2. Add the repository:

    # cat > /etc/yum.repos.d/wazuh.repo << EOF
    [wazuh]
    gpgcheck=1
    gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
    enabled=1
    name=EL-\$releasever - Wazuh
    baseurl=https://packages.wazuh.com/4.x/yum/
    protect=1
    EOF
    
For Debian based OS (apt):

  1. Install the GPG key:

    # curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
    
  2. Add the repository:

    # echo "deb https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
    
  3. Update the package information:

    # apt-get update
    
For SUSE based OS (zypper):

  1. Import the GPG key:

    # rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
    
  2. Add the repository:

    # cat > /etc/zypp/repos.d/wazuh.repo <<\EOF
    [wazuh]
    gpgcheck=1
    gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
    enabled=1
    name=EL-$releasever - Wazuh
    baseurl=https://packages.wazuh.com/4.x/yum/
    protect=1
    EOF
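
Optionally, confirm that the Wazuh package is visible from the newly added repository before installing. These are standard package-manager queries; run the one that matches your OS:

# yum list available wazuh-manager
# apt-cache policy wazuh-manager
# zypper info wazuh-manager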
    

Installing the Wazuh manager

Install the Wazuh manager package, using the command that corresponds to your OS:

# yum install wazuh-manager
# apt-get install wazuh-manager
# zypper install wazuh-manager

Configure the installation as a single-node or multi-node cluster. For a single-node deployment, follow the steps below; for a multi-node cluster, apply the Wazuh server master node and Wazuh server worker nodes sections instead.

  1. Enable and start the Wazuh manager service:

    Systemd:

    # systemctl daemon-reload
    # systemctl enable wazuh-manager
    # systemctl start wazuh-manager

    SysV init (choose one option according to the OS used):

    a. RPM based OS:

    # chkconfig --add wazuh-manager
    # service wazuh-manager start

    b. Debian based OS:

    # update-rc.d wazuh-manager defaults 95 10
    # service wazuh-manager start
    
  2. Run the following command, according to the init system used, to check if the Wazuh manager is active:

    Systemd:

    # systemctl status wazuh-manager

    SysV init:

    # service wazuh-manager status
    

One server has to be chosen as the master; the rest will be workers. The section Wazuh server master node must be applied once, on the server chosen for this role. The section Wazuh server worker nodes must be applied to all the other servers.

Wazuh server master node

  1. Configure the cluster node by editing the following settings in the /var/ossec/etc/ossec.conf file:

    <cluster>
      <name>wazuh</name>
      <node_name>master-node</node_name>
      <node_type>master</node_type>
      <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
      <port>1516</port>
      <bind_addr>0.0.0.0</bind_addr>
      <nodes>
        <node>wazuh-master-address</node>
      </nodes>
      <hidden>no</hidden>
      <disabled>no</disabled>
    </cluster>
    

    The parameters:

    name

    Name of the cluster.

    node_name

    Name of the current node.

    node_type

    Specifies the role of the node. Has to be set to master.

    key

    Key that will be used to encrypt communication between cluster nodes. The key must be 32 characters long and the same for all of the nodes in the cluster. The following command can be used to generate a random key: openssl rand -hex 16 (see the example after this list).

    port

    Destination port for cluster communication.

    bind_addr

    Network IP address to which the node binds to listen for incoming requests (0.0.0.0 for any IP).

    nodes

    The address of the master node. It must be specified in all nodes (including the master itself). The address can be either an IP address or a DNS name.

    hidden

    Shows or hides the cluster information in the generated alerts.

    disabled

    Indicates whether the node will be enabled or disabled in the cluster. This option must be set to no.
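
    For reference, generating the key looks like this (the value below is only an example output; each deployment should generate its own):

    # openssl rand -hex 16
    c98b62a9b6169ac5f67dae55ae4a9088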

  2. Once the /var/ossec/etc/ossec.conf configuration file is edited, enable and start the Wazuh manager service:

    Systemd:

    # systemctl daemon-reload
    # systemctl enable wazuh-manager
    # systemctl start wazuh-manager

    SysV init (choose one option according to the OS used):

    a. RPM based OS:

    # chkconfig --add wazuh-manager
    # service wazuh-manager start

    b. Debian based OS:

    # update-rc.d wazuh-manager defaults 95 10
    # service wazuh-manager start
    
  3. Run the following command, according to the init system used, to check if the Wazuh manager is active:

    Systemd:

    # systemctl status wazuh-manager

    SysV init:

    # service wazuh-manager status
    

Wazuh server worker nodes

  1. Configure the cluster node by editing the following settings in the /var/ossec/etc/ossec.conf file:

    <cluster>
        <name>wazuh</name>
        <node_name>worker-node</node_name>
        <node_type>worker</node_type>
        <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
        <port>1516</port>
        <bind_addr>0.0.0.0</bind_addr>
        <nodes>
            <node>wazuh-master-address</node>
        </nodes>
        <hidden>no</hidden>
        <disabled>no</disabled>
    </cluster>
    

    As shown in the example above, the following parameters have to be edited:

    name

    Name of the cluster.

    node_name

    Each node of the cluster must have a unique name; see the example after this list for setting it from the command line.

    node_type

    Has to be set to worker.

    key

    The key created previously for the master node. It has to be the same for all the nodes.

    nodes

    Has to contain the address of the master (either an IP address or a DNS name).

    disabled

    Has to be set to no.
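
    As an illustration, the unique node name can also be set from the command line instead of editing the file by hand (worker-node-01 is a hypothetical name; use a different one on each worker):

    # sed -i 's|<node_name>worker-node</node_name>|<node_name>worker-node-01</node_name>|' /var/ossec/etc/ossec.conf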

  2. Once the /var/ossec/etc/ossec.conf configuration file is edited, enable and start the Wazuh manager service:

    Systemd:

    # systemctl daemon-reload
    # systemctl enable wazuh-manager
    # systemctl start wazuh-manager

    SysV init (choose one option according to the OS used):

    a. RPM based OS:

    # chkconfig --add wazuh-manager
    # service wazuh-manager start

    b. Debian based OS:

    # update-rc.d wazuh-manager defaults 95 10
    # service wazuh-manager start
    
  3. Run the following command, according to the init system used, to check if the Wazuh manager is active:

    Systemd:

    # systemctl status wazuh-manager

    SysV init:

    # service wazuh-manager status
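
Once the master and all the workers are running, cluster connectivity can be checked with the cluster_control tool included with the Wazuh manager, which lists the connected nodes:

# /var/ossec/bin/cluster_control -l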
    

Install and configure Splunk Forwarder

A Splunk Forwarder is required in order to send alerts to the indexer.

Depending on the type of architecture that you’re installing, the Splunk Forwarder is configured differently.

Warning

  • On a single-instance architecture, the forwarder must point to the Splunk Enterprise instance where the Wazuh app was installed.

  • On a multi-instance architecture, the forwarder must point to the search peers (or indexers).

  1. Download the Splunk Forwarder v8.1.2 package from the official website.

  2. Install the Splunk Forwarder package, using the command that corresponds to your OS:

# yum install splunkforwarder-package.rpm
# dpkg --install splunkforwarder-package.deb
# zypper install splunkforwarder-package.rpm

Configuration process

This section explains how to configure the Splunk Forwarder to send alerts to the Indexer component.

  • props.conf: Splunk needs this file to know what kind of format it will handle when consuming data inputs.

  • inputs.conf: The Splunk Forwarder needs this file to read data from an input. In this case, the Wazuh alerts file.

Configuring props

  1. Download and insert the props.conf template:

    # curl -so /opt/splunkforwarder/etc/system/local/props.conf https://raw.githubusercontent.com/wazuh/wazuh-splunk/v4.1.4-8.1.2/setup/forwarder/props.conf
    
  2. Download and insert the inputs.conf template:

    # curl -so /opt/splunkforwarder/etc/system/local/inputs.conf https://raw.githubusercontent.com/wazuh/wazuh-splunk/v4.1.4-8.1.2/setup/forwarder/inputs.conf
    
  3. Set the Wazuh manager hostname:

    # sed -i "s:MANAGER_HOSTNAME:$(hostname):g" /opt/splunkforwarder/etc/system/local/inputs.conf
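
    To verify the substitution, print the resulting host entry (the template uses the host field shown in the inputs.conf block later in this document):

    # grep "host = " /opt/splunkforwarder/etc/system/local/inputs.conf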
    

Set up data forwarding

  1. Point the Forwarder output to Wazuh's Splunk Indexer with the following command (a command to verify the result is shown after these steps):

    # /opt/splunkforwarder/bin/splunk add forward-server <INDEXER_IP>:<INDEXER_PORT>
    
    • INDEXER_IP is the IP address of the Splunk Indexer.

    • INDEXER_PORT is the port of the Splunk Indexer. By default it’s 9997.

  2. Restart Splunk Forwarder service:

    # /opt/splunkforwarder/bin/splunk restart
    

    Warning

    If you get an error message saying that port 8089 is already in use, you can configure Splunk to use a different one.

    After installing the Splunk Forwarder, incoming data should appear in the designated Indexer.

  3. Optional. If you want the Splunk Forwarder service to start at boot time, execute the following command:

    # /opt/splunkforwarder/bin/splunk enable boot-start
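
After these steps, the configured destination can be listed to confirm that the forward-server added in step 1 was registered:

# /opt/splunkforwarder/bin/splunk list forward-server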
    

To configure a forwarder instance in the cluster, first install the Splunk Forwarder as described above.

Then, configure the three most important files on this instance:

  • inputs.conf: Reads alerts from alerts.json

  • outputs.conf: This file points events to certain indexers. It can target a single indexer or a cluster of indexers; in the latter case, load balancing has to be configured on it (a sketch is shown after the outputs.conf block below).

  • props.conf: This file defines the format and the field transformations of the data to be indexed.

Starting with inputs.conf, create the file:

# touch /opt/splunkforwarder/etc/system/local/inputs.conf

And fill it with the following block:

[monitor:///var/ossec/logs/alerts/alerts.json]
disabled = 0
host = MANAGER_HOSTNAME
index = wazuh
sourcetype = wazuh
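
Equivalently, the file can be created and filled in one step with a shell heredoc. This is just a convenience sketch; using $(hostname) performs the same substitution as the sed command shown in the previous section:

# cat > /opt/splunkforwarder/etc/system/local/inputs.conf << EOF
[monitor:///var/ossec/logs/alerts/alerts.json]
disabled = 0
host = $(hostname)
index = wazuh
sourcetype = wazuh
EOF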

Now, continue with outputs.conf:

# touch /opt/splunkforwarder/etc/system/local/outputs.conf

And paste this inside:

[indexer_discovery:cluster1]
pass4SymmKey = changeme
master_uri = https://<master_ip>:<port>

[tcpout:cluster1_tcp]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_tcp
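
The configuration above relies on indexer discovery through the cluster master. If indexer discovery is not used, a static list of search peers can be load balanced instead. This is a sketch; the hostnames and ports below are placeholders:

[tcpout]
defaultGroup = cluster1_tcp

[tcpout:cluster1_tcp]
server = indexer1.example.com:9997, indexer2.example.com:9997
autoLBFrequency = 30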

For the last one, props.conf, follow the same procedure:

# touch /opt/splunkforwarder/etc/system/local/props.conf

And fill it with the following block:

[wazuh]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
NO_BINARY_CHECK = true
category = Application
disabled = false
pulldown_type = true

To apply all the changes, restart Splunk:

# /opt/splunkforwarder/bin/splunk restart
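
Once restarted, one way to confirm that the forwarder is reading the alerts file and connecting to the indexers is to check its internal log, located in the default installation path:

# tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log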