Deploying a Wazuh cluster

Cluster nodes configuration

The Wazuh cluster is made up of manager nodes. Only one of them takes the master role; the others take the worker role. For both node types, the configuration file /var/ossec/etc/ossec.conf contains the cluster configuration values. Within the <cluster>...</cluster> tags, the following configuration values can be set:

  • name: Name that will be assigned to the cluster

  • node_name: Name of the current node

  • key: The key must be 32 characters long and must be the same for all of the cluster nodes. You may use the following command to generate a random one:

    # openssl rand -hex 16
    
  • node_type: Set the node type (master/worker)

  • port: Destination port for cluster communication

  • bind_addr: IP address where this node listens for cluster communication (0.0.0.0 means any IP).

  • nodes: The address of the master node, which must be specified in all nodes (including the master itself). The address can be either an IP or a DNS name.

  • hidden: Toggles whether or not to show information about the cluster that generated an alert.

  • disabled: Indicates whether the node will be enabled or not in the cluster.

We are going to configure a cluster with a master node and a single worker node. For the master node, we set the following configuration:

<cluster>
    <name>wazuh</name>
    <node_name>master-node</node_name>
    <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
    <node_type>master</node_type>
    <port>1516</port>
    <bind_addr>0.0.0.0</bind_addr>
    <nodes>
        <node>master</node>
    </nodes>
    <hidden>no</hidden>
    <disabled>no</disabled>
</cluster>

Restart the master node:

# systemctl restart wazuh-manager
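
Optionally, before moving on to the worker, you can confirm that the manager restarted cleanly and that the cluster daemon is up. A minimal check, assuming a systemd based host and the default installation path (the exact daemon list may vary between Wazuh versions):

# systemctl status wazuh-manager --no-pager
# /var/ossec/bin/wazuh-control status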

Now it's time to configure the worker node:

<cluster>
    <name>wazuh</name>
    <node_name>worker01-node</node_name>
    <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
    <node_type>worker</node_type>
    <port>1516</port>
    <bind_addr>0.0.0.0</bind_addr>
    <nodes>
        <node>master</node>
    </nodes>
    <hidden>no</hidden>
    <disabled>no</disabled>
</cluster>

Restart the worker node:

# systemctl restart wazuh-manager

Let's execute the following command (works on both worker and master nodes) to check that everything worked as expected:

# /var/ossec/bin/cluster_control -l
NAME           TYPE    VERSION  ADDRESS
master-node    master  4.4.3   wazuh-master
worker01-node  worker  4.4.3   172.22.0.3

Forwarder installation

  • The Wazuh app (Kibana or Splunk) must be configured to point to the master node's API; see the sketch after this list.

  • All manager nodes need an event forwarder in order to send data to Elasticsearch or Splunk. Install Filebeat if you are using the Elastic Stack, or the Splunk forwarder if you are using Splunk. This is only necessary when the manager node runs on a separate host from Elasticsearch or Splunk.
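
For reference, this is a minimal sketch of how the Wazuh app is typically pointed at the master node's API. The file location and credentials below are assumptions that depend on your app and version (for example, /usr/share/kibana/data/wazuh/config/wazuh.yml on Kibana based installs); 55000 is the default Wazuh API port:

hosts:
  - default:
      url: https://MASTER_NODE_IP
      port: 55000
      username: wazuh-wui        # assumed default API user; adjust to your deployment
      password: <API_PASSWORD>   # placeholder
      run_as: false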

Installing Filebeat:

Type                      | Description
Wazuh single node cluster | Install Filebeat on Wazuh single node cluster.
Wazuh multi node cluster  | Install Filebeat on Wazuh multi node cluster.
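
The installation guides referenced above cover the full setup. As orientation only, the part of /etc/filebeat/filebeat.yml that determines where events are shipped is the Elasticsearch output; ELASTICSEARCH_IP is a placeholder, and TLS and credential settings are omitted here:

output.elasticsearch:
  hosts: ["ELASTICSEARCH_IP:9200"]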

Installing Splunk forwarder:

Type             | Description
RPM/DEB packages | Install Splunk forwarder for RPM or DEB based OS.
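
As an illustrative sketch only, assuming the forwarder is installed under /opt/splunkforwarder and the Splunk indexer listens on the default receiving port 9997, pointing the forwarder at the indexer typically looks like this:

# /opt/splunkforwarder/bin/splunk add forward-server SPLUNK_INDEXER_IP:9997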

Pointing agents to the cluster nodes

Finally, the configuration of the agents has to be modified so that they report to the cluster. In this case, agent 001 will report to the worker01-node node. To achieve this, modify the <client><server> section in the file /var/ossec/etc/ossec.conf, setting the IP address of the node the agent should report to. In our case, it looks like this:

<client>
    <server>
        <address>WORKER01_NODE_IP</address>
        ...
    </server>
</client>

Restart the agent:

# systemctl restart wazuh-agent

The second agent will report to the master-node node. For this, we place the IP address of the master in the agent configuration, as in the previous case:

<client>
    <server>
        <address>MASTER_NODE_IP</address>
        ...
    </server>
</client>

Restart the agent:

# systemctl restart wazuh-agent

Execute the following command on the master node to verify that the Wazuh agents are correctly connected:

# /var/ossec/bin/agent_control -l
Wazuh agent_control. List of available agents:
    ID: 000, Name: agent000 (server), IP: 127.0.0.1, Active/Local
    ID: 001, Name: agent001, IP: 172.18.0.5, Active
    ID: 002, Name: agent002, IP: 172.18.0.6, Active
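
To additionally confirm which cluster node each agent is reporting to, the same cluster_control tool accepts the -a flag in recent Wazuh versions (the exact output columns may vary):

# /var/ossec/bin/cluster_control -a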

Note

We recommend using a load balancer for registering and connecting the agents. This way, agents can register with and report to the nodes in a distributed way, and the load balancer decides which worker each agent reports to. This distributes the load more evenly, and if a worker node goes down, its agents will reconnect to another one.
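
As an illustrative sketch, a TCP load balancer such as NGINX with the stream module can sit in front of the worker nodes. The addresses below are placeholders, and 1514 is assumed to be the agent communication port (agent enrollment on port 1515 can be balanced in the same way or pointed directly at the master):

stream {
    # Distribute agent connections across the worker nodes,
    # keeping each source IP pinned to the same worker.
    upstream wazuh_workers {
        hash $remote_addr consistent;
        server WORKER01_NODE_IP:1514;
        server WORKER02_NODE_IP:1514;
    }
    server {
        listen 1514;
        proxy_pass wazuh_workers;
    }
}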