Distributed deployment
The distributed deployment refers to installing the Wazuh components as separate servers, following either the step-by-step installation guide or the installation assistant (both applicable to the Wazuh indexer, server, and dashboard).
To scale your distributed deployment by adding a node, complete these steps:
Certificate creation
For a distributed deployment, certificates can be generated either by using the pre-existing root CA keys or by generating a new root CA along with a new set of certificates. We recommend using the existing root CA keys to maintain trust between nodes.
Using pre-existing root CA key
Perform the steps below on your existing Wazuh server node to generate the certificates using the pre-existing root CA key.
Note
You will require a copy of the <CERTIFICATE_ARCHIVE>.tar file created during the initial configuration for the Wazuh indexer in steps 4 and 5 or a copy of the root CA keys. If neither is available, you can generate new certificates by following the steps outlined in the next section.
Create a config.yml file in the /root directory to add the new Wazuh server node(s):

# touch /root/config.yml
Edit the /root/config.yml file to include the Wazuh server node name and IP address of the new node:

nodes:
  # Wazuh server nodes
  server:
    - name: <EXISTING_WAZUH_SERVER_NODE_NAME>
      ip: <EXISTING_WAZUH_SERVER_IP>
      node_type: master
    - name: <NEW_WAZUH_SERVER_NODE_NAME>
      ip: <NEW_WAZUH_SERVER_IP>
      node_type: worker
Replace the values with your node names and their corresponding IP addresses.
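For illustration, the file can be created in one step with a heredoc. The node names and IP addresses below are placeholders, and the file is written to /tmp here; on a real deployment, write to /root/config.yml as described above:

```shell
# Write a minimal server-node inventory for wazuh-certs-tool.sh.
# "wazuh-master", "wazuh-worker", and the 10.0.0.x addresses are
# placeholders; substitute your own node names and IPs, and target
# /root/config.yml on a real deployment.
cat > /tmp/config.yml << 'EOF'
nodes:
  # Wazuh server nodes
  server:
    - name: wazuh-master
      ip: 10.0.0.9
      node_type: master
    - name: wazuh-worker
      ip: 10.0.0.10
      node_type: worker
EOF

cat /tmp/config.yml
```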
Note
The existing Wazuh server node name and IP address can be found in the cluster section of /var/ossec/etc/ossec.conf on the existing Wazuh server.

Extract the certificate archive to obtain the root CA keys:
# mkdir wazuh-install-files && tar -xf ./<CERTIFICATE_ARCHIVE>.tar -C wazuh-install-files
Replace <CERTIFICATE_ARCHIVE>.tar with the name of the compressed certificates generated during the certificate creation step (for example, wazuh-certificates.tar or wazuh-install-files.tar).

Download and run wazuh-certs-tool.sh to create the certificates for the new Wazuh server node using the pre-existing root CA keys:

# curl -sO https://packages.wazuh.com/4.14/wazuh-certs-tool.sh
# bash wazuh-certs-tool.sh -A wazuh-install-files/root-ca.pem wazuh-install-files/root-ca.key
23/01/2026 12:33:37 INFO: Generating Admin certificates.
23/01/2026 12:33:37 INFO: Admin certificates created.
23/01/2026 12:33:37 INFO: Generating Filebeat certificates.
23/01/2026 12:34:38 INFO: Wazuh Filebeat certificates created.
Copy the newly created certificates to the wazuh-install-files directory. Make sure you do not replace the admin certificates:

# cp wazuh-certificates/<NEW_WAZUH_SERVER_NODE_NAME>* wazuh-install-files
# cp wazuh-certificates/<EXISTING_WAZUH_SERVER_NODE_NAME>* wazuh-install-files
Compress the certificates directory into a new <CERTIFICATE_ARCHIVE>.tar file and copy it to the new Wazuh server node(s). You can use the scp utility to securely copy the compressed file as follows:

# tar -cvf ./<CERTIFICATE_ARCHIVE>.tar -C ./wazuh-install-files/ .
# scp <CERTIFICATE_ARCHIVE>.tar <TARGET_USERNAME>@<TARGET_IP>:
Replace <CERTIFICATE_ARCHIVE>.tar with the name of the compressed certificates generated during the certificate creation step (for example, wazuh-certificates.tar or wazuh-install-files.tar).

Note
The scp command copies the certificates to the /home directory of the target user on the endpoint. You can modify the command to specify a path to your installation directory.
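Before reusing a root CA, it can be worth confirming that the key and certificate actually belong together, since a mismatched pair will produce certificates the existing nodes do not trust. A minimal sketch of that check follows; it generates a throwaway CA pair for demonstration, whereas in practice you would point the two openssl commands at wazuh-install-files/root-ca.pem and root-ca.key:

```shell
# Demo: confirm a root CA key and certificate belong together by
# comparing their public keys. The throwaway pair below is only for
# illustration; substitute your extracted root-ca.pem and root-ca.key.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-root-ca" \
  -keyout /tmp/root-ca.key -out /tmp/root-ca.pem 2>/dev/null

cert_pub=$(openssl x509 -in /tmp/root-ca.pem -noout -pubkey)
key_pub=$(openssl pkey -in /tmp/root-ca.key -pubout 2>/dev/null)

if [ "$cert_pub" = "$key_pub" ]; then
  echo "MATCH: key and certificate belong together"
else
  echo "MISMATCH: do not reuse this pair"
fi
```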
Generating new certificates
You can follow the steps below to generate new certificates if the pre-existing root CA keys have been deleted or are inaccessible.
Create the /root/config.yml file to reference all your Wazuh server nodes:

nodes:
  # Wazuh indexer nodes
  indexer:
    - name: <WAZUH_INDEXER_NODE_NAME>
      ip: <WAZUH_INDEXER_IP>
  # Wazuh server nodes
  server:
    - name: <EXISTING_WAZUH_SERVER_NODE_NAME>
      ip: <EXISTING_WAZUH_SERVER_IP>
      node_type: master
    - name: <NEW_WAZUH_SERVER_NODE_NAME>
      ip: <NEW_WAZUH_SERVER_IP>
      node_type: worker
  # Wazuh dashboard nodes
  dashboard:
    - name: <WAZUH_DASHBOARD_NODE_NAME>
      ip: <WAZUH_DASHBOARD_IP>
Replace the values with your node names and their corresponding IP addresses.
Note
The existing Wazuh server node name and IP address can be found in the cluster section of /var/ossec/etc/ossec.conf on the existing Wazuh server.

Download and execute the wazuh-certs-tool.sh script to create the certificates:

# curl -sO https://packages.wazuh.com/4.14/wazuh-certs-tool.sh
# bash wazuh-certs-tool.sh -A
Compress the certificates folder and copy it to the new Wazuh server node(s). You can use the scp utility to securely copy the compressed file:

# tar -cvf ./<CERTIFICATE_ARCHIVE>.tar -C ./wazuh-certificates/ .
# scp <CERTIFICATE_ARCHIVE>.tar <TARGET_USERNAME>@<TARGET_IP>:
Replace <CERTIFICATE_ARCHIVE>.tar with the name you want to use for the compressed certificate archive (for example, wazuh-certificates.tar or wazuh-install-files.tar).

Note
The scp command copies the certificates to the /home directory of the target user on the endpoint. You can modify the command to specify a path to your installation directory.
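Before copying the archive, a quick listing confirms it contains everything the new node will need. The sketch below builds a dummy archive to show the pattern; the .pem files are empty stand-ins (wazuh-certs-tool.sh generates the real ones) and "wazuh-worker" is a placeholder node name:

```shell
# Sketch: package a certificates directory and confirm the archive
# holds the files a new server node needs. The .pem files are empty
# stand-ins; wazuh-certs-tool.sh produces the real ones.
mkdir -p /tmp/wazuh-certificates
touch /tmp/wazuh-certificates/root-ca.pem \
      /tmp/wazuh-certificates/wazuh-worker.pem \
      /tmp/wazuh-certificates/wazuh-worker-key.pem

tar -cf /tmp/wazuh-certificates.tar -C /tmp/wazuh-certificates/ .

# Every certificate the worker needs should appear in the listing.
tar -tf /tmp/wazuh-certificates.tar
```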
Configuring existing components to connect with the new node
Before adding a new Wazuh server node, existing components must be adjusted to maintain proper cluster communication. This includes updating relevant configuration files and connection settings so the Wazuh manager, Wazuh indexer, and Wazuh dashboard can integrate the new node.
Deploy the Wazuh server certificates on your existing Wazuh server node by running the following commands. Replace <EXISTING_WAZUH_SERVER_NODE_NAME> with the node name of the Wazuh server you are configuring as defined in /root/config.yml.

# NODE_NAME=<EXISTING_WAZUH_SERVER_NODE_NAME>
# rm -rf /etc/filebeat/certs
# mkdir /etc/filebeat/certs
# tar -xf ./<CERTIFICATE_ARCHIVE>.tar -C /etc/filebeat/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
# mv -n /etc/filebeat/certs/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem
# mv -n /etc/filebeat/certs/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem
# chmod 500 /etc/filebeat/certs
# chmod 400 /etc/filebeat/certs/*
# chown -R root:root /etc/filebeat/certs
Note
If the certificates were recreated as described in the Generating new certificates section above, you must also redeploy the certificates on all your existing Wazuh nodes (indexer and dashboard).
After deploying the new certificate on the server, run the following commands to deploy the certificates to the Wazuh indexer and dashboard:
On the Wazuh indexer node(s):
# NODE_NAME=<WAZUH_INDEXER_NODE_NAME>
# rm -rf /etc/wazuh-indexer/certs
# mkdir /etc/wazuh-indexer/certs
# tar -xf ./<CERTIFICATE_ARCHIVE>.tar -C /etc/wazuh-indexer/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./admin.pem ./admin-key.pem ./root-ca.pem
# mv -n /etc/wazuh-indexer/certs/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem
# mv -n /etc/wazuh-indexer/certs/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
# chmod 500 /etc/wazuh-indexer/certs
# chmod 400 /etc/wazuh-indexer/certs/*
# chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs
On the Wazuh dashboard node:
# NODE_NAME=<WAZUH_DASHBOARD_NODE_NAME>
# rm -rf /etc/wazuh-dashboard/certs
# mkdir /etc/wazuh-dashboard/certs
# tar -xf ./<CERTIFICATE_ARCHIVE>.tar -C /etc/wazuh-dashboard/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
# mv -n /etc/wazuh-dashboard/certs/$NODE_NAME.pem /etc/wazuh-dashboard/certs/wazuh-dashboard.pem
# mv -n /etc/wazuh-dashboard/certs/$NODE_NAME-key.pem /etc/wazuh-dashboard/certs/wazuh-dashboard-key.pem
# chmod 500 /etc/wazuh-dashboard/certs
# chmod 400 /etc/wazuh-dashboard/certs/*
# chown -R wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs
Recommended action: securely save a copy offline for potential future use and scalability. To increase security, you can remove the <CERTIFICATE_ARCHIVE>.tar file on this node by running the command below:

# rm -f ./<CERTIFICATE_ARCHIVE>.tar
Edit the Wazuh indexer configuration file at /etc/wazuh-indexer/opensearch.yml to specify the Wazuh indexer's IP address as specified in the /root/config.yml file:

network.host: "<WAZUH_INDEXER_IP>"
node.name: "<WAZUH_INDEXER_NODE_NAME>"
cluster.initial_master_nodes:
- "<WAZUH_INDEXER_NODE_NAME>"
Edit the Filebeat configuration file /etc/filebeat/filebeat.yml (located on the Wazuh server node) to specify the indexer's IP address:

output.elasticsearch:
  hosts:
    - <WAZUH_INDEXER_IP>:9200
Note
The structure of this section will vary depending on whether you installed Wazuh using the Wazuh installation assistant or the step-by-step guide. Here, we used the Wazuh installation assistant.
Generate an encryption key that will be used to encrypt communication between the Wazuh server cluster nodes:
# openssl rand -hex 16
Save the output of the above command, as it will be used later to configure cluster mode on both Wazuh server nodes.
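The command produces 16 random bytes rendered as 32 hexadecimal characters. As a quick sanity check, the sketch below generates a key and verifies its length and character set (the key is written to /tmp/cluster-key purely for illustration; in practice you paste it into the <key> element of ossec.conf on every server node):

```shell
# Generate the cluster encryption key: 16 random bytes as 32 hex chars.
KEY=$(openssl rand -hex 16)
echo "$KEY" > /tmp/cluster-key

# Sanity checks: correct length, lowercase hex characters only.
[ "${#KEY}" -eq 32 ] && echo "length OK"
echo "$KEY" | grep -Eq '^[0-9a-f]{32}$' && echo "format OK"
```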
Edit the configuration file /etc/wazuh-dashboard/opensearch_dashboards.yml to include the Wazuh indexer node's IP address:

opensearch.hosts: https://<WAZUH_INDEXER_IP>:9200
Edit the /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml file located on the Wazuh dashboard node and replace the url value with the IP address or hostname of the Wazuh server master node:

hosts:
  - default:
      url: https://<EXISTING_WAZUH_SERVER_IP>
      port: 55000
      username: wazuh-wui
      password: <WAZUH-WUI-PASSWORD>
      run_as: false
Edit the Wazuh server configuration file at /var/ossec/etc/ossec.conf to enable cluster mode:

<cluster>
  <name>wazuh</name>
  <node_name><EXISTING_WAZUH_SERVER_NODE_NAME></node_name>
  <node_type>master</node_type>
  <key><ENCRYPTION_KEY></key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node><MASTER_NODE_IP></node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>
The configurable fields in the above section of the /var/ossec/etc/ossec.conf file are as follows:

<name> indicates the name of the cluster.

<node_name> indicates the name of the current node. Replace <EXISTING_WAZUH_SERVER_NODE_NAME> with the name of the existing Wazuh server node as specified in the /root/config.yml file.

<node_type> specifies the role of the node. It has to be set to master.

<key> represents a key used to encrypt communication between cluster nodes. It should be the same on all the server nodes. To generate a unique key, you can use the command openssl rand -hex 16.

<port> indicates the destination port for cluster communication. Leave the default as 1516.

<bind_addr> is the network IP address to which the node is bound to listen for incoming requests (0.0.0.0 means the node will use any IP address).

<nodes> is the address of the master node and can be either an IP address or a DNS hostname. This parameter must be specified on all Wazuh server nodes, including the master itself. Replace <MASTER_NODE_IP> with the IP address of your master node.

<hidden> shows or hides the cluster information in the generated alerts.

<disabled> indicates whether the node is enabled or disabled in the cluster. This option must be set to no.
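A quick grep/sed pass can sanity-check the edited cluster section before restarting the manager. The sketch below runs against a sample snippet written to /tmp; on a real node, point the two check commands at /var/ossec/etc/ossec.conf instead:

```shell
# Illustrative sanity check on a cluster block: the section must not be
# disabled and the key must be a 32-character hex string. The snippet
# below is a sample; run the checks against /var/ossec/etc/ossec.conf
# on a real node.
cat > /tmp/ossec-cluster.conf << 'EOF'
<cluster>
  <name>wazuh</name>
  <node_type>master</node_type>
  <key>0123456789abcdef0123456789abcdef</key>
  <port>1516</port>
  <disabled>no</disabled>
</cluster>
EOF

grep -q '<disabled>no</disabled>' /tmp/ossec-cluster.conf && echo "cluster enabled"
sed -n 's/.*<key>\([0-9a-f]\{32\}\)<\/key>.*/key OK/p' /tmp/ossec-cluster.conf
```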
Run the following commands on your respective nodes to apply the changes.
Wazuh indexer node
# systemctl restart wazuh-indexer
# service wazuh-indexer restart
Wazuh server node(s)
# systemctl restart filebeat
# systemctl restart wazuh-manager
# service filebeat restart
# service wazuh-manager restart
Wazuh dashboard node
# systemctl restart wazuh-dashboard
# service wazuh-dashboard restart
Wazuh server worker node(s) installation
Once the certificates have been created and copied to the new node(s), you can proceed to install and configure the new Wazuh server as a worker node.
Adding the Wazuh repository
This step ensures that the new Wazuh server node can download and install the required Wazuh packages from the official repository.
Import the GPG key:
# rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
Add the repository:
For RHEL-compatible systems version 8 and earlier, use the following command:
# echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo
For RHEL-compatible systems version 9 and later, use the following command:
# echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\npriority=1' | tee /etc/yum.repos.d/wazuh.repo
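For readability, the RHEL 9 one-liner above expands to the repo file shown below. The sketch writes it to /tmp for illustration; the real target is /etc/yum.repos.d/wazuh.repo:

```shell
# Expanded form of the RHEL 9+ repo one-liner. Written to /tmp for
# illustration; the real target is /etc/yum.repos.d/wazuh.repo.
# The quoted heredoc keeps $releasever literal so yum, not the shell,
# expands it.
cat > /tmp/wazuh.repo << 'EOF'
[wazuh]
gpgcheck=1
gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
enabled=1
name=EL-$releasever - Wazuh
baseurl=https://packages.wazuh.com/4.x/yum/
priority=1
EOF

grep '^baseurl=' /tmp/wazuh.repo
```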
For Debian-based systems, install the following packages if missing:
# apt-get install gnupg apt-transport-https
Install the GPG key:
# curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
Add the repository:
# echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
Update the packages' information:
# apt-get update
Installing the Wazuh manager
This step installs the Wazuh manager on the node, which enables it to function as part of the cluster and manage alerts, Wazuh agents, and internal communication.
Install the Wazuh manager package.
# yum -y install wazuh-manager
# apt-get -y install wazuh-manager
Enable and start the Wazuh manager service.
# systemctl daemon-reload
# systemctl enable wazuh-manager
# systemctl start wazuh-manager
RPM-based operating system:
# chkconfig --add wazuh-manager
# service wazuh-manager start
Debian-based operating system:
# update-rc.d wazuh-manager defaults 95 10
# service wazuh-manager start
Check the Wazuh manager status to ensure it is up and running.
# systemctl status wazuh-manager
# service wazuh-manager status
Installing and configuring Filebeat
Install the Filebeat package.
# yum -y install filebeat
# apt-get -y install filebeat
Download the preconfigured Filebeat configuration file:
# curl -so /etc/filebeat/filebeat.yml https://packages.wazuh.com/4.14/tpl/wazuh/filebeat/filebeat.yml
Edit the /etc/filebeat/filebeat.yml configuration file and replace the following value:

hosts: represents the list of Wazuh indexer nodes to connect to. You can use either IP addresses or hostnames. By default, the host is set to localhost: hosts: ["127.0.0.1:9200"]. Replace it with the IP address of your Wazuh indexer. If you have more than one Wazuh indexer node, you can separate the addresses using commas, for example hosts: ["10.0.0.9:9200", "10.0.0.10:9200", "10.0.0.11:9200"]:

# Wazuh - Filebeat configuration file
output.elasticsearch:
  hosts: <WAZUH_INDEXER_IP>:9200
  protocol: https
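The edit can also be scripted with sed. The sketch below works on a copy in /tmp with the placeholder indexer address 10.0.0.9; on a real node, the target file is /etc/filebeat/filebeat.yml and you would substitute your own indexer IP:

```shell
# Sketch: retarget a stock filebeat.yml from localhost to the indexer.
# 10.0.0.9 is a placeholder; the real file is /etc/filebeat/filebeat.yml.
cat > /tmp/filebeat.yml << 'EOF'
output.elasticsearch:
  hosts: ["127.0.0.1:9200"]
  protocol: https
EOF

sed -i 's/127\.0\.0\.1/10.0.0.9/' /tmp/filebeat.yml
grep 'hosts:' /tmp/filebeat.yml
```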
Create a Filebeat keystore to securely store authentication credentials:
# filebeat keystore create
Add the admin user and password to the secrets keystore:
# echo admin | filebeat keystore add username --stdin --force
# echo <ADMIN_PASSWORD> | filebeat keystore add password --stdin --force
If you are running an all-in-one deployment and using the default admin password, you can retrieve it by running the following command:
# sudo tar -O -xvf wazuh-install-files.tar wazuh-install-files/wazuh-passwords.txt
Download the alerts template for the Wazuh indexer:
# curl -so /etc/filebeat/wazuh-template.json https://raw.githubusercontent.com/wazuh/wazuh/v4.14.4/extensions/elasticsearch/7.x/wazuh-template.json
# chmod go+r /etc/filebeat/wazuh-template.json
Install the Wazuh module for Filebeat:
# curl -s https://packages.wazuh.com/4.x/filebeat/wazuh-filebeat-0.5.tar.gz | tar -xvz -C /usr/share/filebeat/module
Deploying certificates
Before adding the new Wazuh server node, deploy the generated certificate archive on that node. This ensures that Filebeat can establish secure TLS communication with the Wazuh indexer and other required components.
Run the following commands in the directory where the wazuh-certificates.tar file was copied to, replacing <NEW_WAZUH_SERVER_NODE_NAME> with the name of the Wazuh server node you are configuring as defined in /root/config.yml. This deploys the SSL certificates to encrypt communications between the Wazuh central components:
Create an environment variable to store the node name:

# NODE_NAME=<NEW_WAZUH_SERVER_NODE_NAME>

Deploy the certificates:

# mkdir /etc/filebeat/certs
# tar -xf ./wazuh-certificates.tar -C /etc/filebeat/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
# mv -n /etc/filebeat/certs/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem
# mv -n /etc/filebeat/certs/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem
# chmod 500 /etc/filebeat/certs
# chmod 400 /etc/filebeat/certs/*
# chown -R root:root /etc/filebeat/certs
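After deployment, a one-line find can confirm no certificate was left with overly permissive modes. The sketch below uses a demo directory in /tmp standing in for /etc/filebeat/certs:

```shell
# Sketch: verify certificate file modes after deployment. The demo
# directory stands in for /etc/filebeat/certs, and the .pem files are
# empty stand-ins.
mkdir -p /tmp/filebeat-certs
touch /tmp/filebeat-certs/filebeat.pem \
      /tmp/filebeat-certs/filebeat-key.pem \
      /tmp/filebeat-certs/root-ca.pem
chmod 500 /tmp/filebeat-certs
chmod 400 /tmp/filebeat-certs/*

# Any file not mode 400 (owner read-only) is listed here; no output is good.
find /tmp/filebeat-certs -type f ! -perm 400
echo "permission check done"
```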
Starting the service
# systemctl daemon-reload
# systemctl enable filebeat
# systemctl start filebeat
RPM-based operating system:
# chkconfig --add filebeat
# service filebeat start
Debian-based operating system:
# update-rc.d filebeat defaults 95 10
# service filebeat start
Run the following command to verify that Filebeat is successfully installed:
# filebeat test output
An example output is shown below:
elasticsearch: https://10.0.0.9:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 10.0.0.9
dial up... OK
TLS...
security: server's certificate chain verification is enabled
handshake... OK
TLS version: TLSv1.2
dial up... OK
talk to server... OK
version: 7.10.2
Configuring the Wazuh server worker nodes
Configure the Wazuh server worker node to enable cluster mode by editing the following settings in the /var/ossec/etc/ossec.conf file:

<cluster>
  <name>wazuh</name>
  <node_name><NEW_WAZUH_SERVER_NODE_NAME></node_name>
  <node_type>worker</node_type>
  <key><ENCRYPTION_KEY></key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node><MASTER_NODE_IP_ADDRESS></node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>
The configurable fields in the above section of the /var/ossec/etc/ossec.conf file are as follows:

<name> indicates the name of the cluster.

<node_name> indicates the name of the current node. Each node of the cluster must have a unique name. Replace <NEW_WAZUH_SERVER_NODE_NAME> with the name specified in the /root/config.yml file.

<node_type> specifies the role of the Wazuh server node. In this instance, it is set as a worker.

<key> represents the key created previously for the master node. It has to be the same for all the nodes. If you already have a distributed infrastructure, copy this key from the master node's /var/ossec/etc/ossec.conf file.

<port> indicates the destination port for cluster communication. Leave the default as 1516.

<bind_addr> is the network IP address to which the node is bound to listen for incoming requests (0.0.0.0 means the node will use any IP address).

<nodes> contains the address of the master node, which can be either an IP address or a DNS hostname. Replace <MASTER_NODE_IP_ADDRESS> with the IP address of your master node.

<hidden> shows or hides the cluster information in the generated alerts.

<disabled> indicates whether the node is enabled or disabled in the cluster. This option must be set to no.
You can learn more about the available configuration options in the cluster reference guide.
Restart the Wazuh manager service.
# systemctl restart wazuh-manager
# service wazuh-manager restart
Testing the cluster
Now that installation and configuration are complete, you can test your cluster to verify that the new Wazuh server node is connected. There are two ways to do this:
Using the cluster control tool
Verify that the Wazuh server cluster is enabled and all the nodes are connected by executing the following command on any of the Wazuh server nodes:
# /var/ossec/bin/cluster_control -l
A sample output of the command:
NAME TYPE VERSION ADDRESS
wazuh-1 master 4.14.2 10.0.0.9
wazuh-2 worker 4.14.2 10.0.0.10
wazuh-3 worker 4.14.2 10.0.0.11
Note that 10.0.0.9, 10.0.0.10, 10.0.0.11 are example IP addresses.
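The tabular output lends itself to quick scripted checks, for example counting nodes per role. The sketch below parses a sample copied from the output above; on a live node, you would pipe /var/ossec/bin/cluster_control -l into the same awk filters instead:

```shell
# Sketch: count nodes per role from cluster_control output. The sample
# mirrors the example above; on a live node, pipe the output of
# /var/ossec/bin/cluster_control -l instead.
cat > /tmp/cluster.txt << 'EOF'
NAME     TYPE    VERSION  ADDRESS
wazuh-1  master  4.14.2   10.0.0.9
wazuh-2  worker  4.14.2   10.0.0.10
wazuh-3  worker  4.14.2   10.0.0.11
EOF

workers=$(awk '$2 == "worker"' /tmp/cluster.txt | wc -l)
masters=$(awk '$2 == "master"' /tmp/cluster.txt | wc -l)
echo "masters: $masters, workers: $workers"
```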
Using the Wazuh API console
You can also check your new Wazuh server cluster by using the Wazuh API Console accessible via the Wazuh dashboard.
Access the Wazuh dashboard using the credentials below.
URL: https://<WAZUH_DASHBOARD_IP>
Username: admin
Password: <ADMIN_PASSWORD>, or admin if you already have a distributed architecture and are using the default password.
Navigate to Server management > Dev Tools. On the console, run the query below:
GET /cluster/healthcheck
This query will display the global status of your Wazuh server cluster with the following information for each node:
Name indicates the name of the Wazuh server node.

Type indicates the role assigned to a node (master or worker).

Version indicates the version of the wazuh-manager service running on the node.

IP is the IP address of the Wazuh server node.

n_active_agents indicates the number of active Wazuh agents connected to the node.
Having completed these steps, the Wazuh infrastructure has been successfully scaled up, and the new server nodes have been integrated into the cluster.
If you want to uninstall the Wazuh server, see Uninstall the Wazuh server documentation.