All-in-one deployment

An all-in-one deployment refers to using the Wazuh installation assistant or the pre-built virtual machine image in Open Virtual Appliance (OVA) format provided by Wazuh. This deployment method installs all the Wazuh central components on a single endpoint.

To add a Wazuh server node to your all-in-one Wazuh deployment, complete the following key steps:

Certificate creation

Wazuh server uses certificates to secure communication between its components and related services, including the Wazuh indexer, Filebeat, Wazuh dashboard, and Wazuh agents. The Wazuh server consists of the Wazuh manager and Filebeat. When adding a new Wazuh server node, a certificate is required for Filebeat on that node to securely communicate with the Wazuh indexer. Certificates can also be used to secure communication between Wazuh agents and the Wazuh manager.

In an all-in-one deployment, new certificates are required when adding nodes because the certificates generated by the quickstart installation are intended for a single-node setup and are not valid for multi-node communication.

Perform the following steps on your existing Wazuh server node to generate the certificates required for secure communication among the Wazuh central components.

  1. Create a config.yml file in the /root directory to add the new Wazuh server node(s):

    # touch /root/config.yml
    

    Edit the /root/config.yml file with its content as follows:

    nodes:
      # Wazuh indexer nodes
      indexer:
        - name: <WAZUH_INDEXER_NODE_NAME>
          ip: <WAZUH_INDEXER_IP>
    
      # Wazuh server nodes
      server:
        - name: <EXISTING_WAZUH_SERVER_NODE_NAME>
          ip: <EXISTING_WAZUH_SERVER_IP>
          node_type: master
        - name: <NEW_WAZUH_SERVER_NODE_NAME>
          ip: <NEW_WAZUH_SERVER_IP>
          node_type: worker
    
      # Wazuh dashboard nodes
      dashboard:
        - name: <WAZUH_DASHBOARD_NODE_NAME>
          ip: <WAZUH_DASHBOARD_IP>
    

    Replace the node names and IP values with your new node names and IP addresses.

    You can assign a different node_type in your installation. In this documentation, we assign the master role to the existing Wazuh server node and the worker role to the new node.

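Before running the certificate tool, it can help to confirm that no angle-bracket placeholders were left unreplaced in config.yml. The sketch below is illustrative: a throwaway sample file stands in for the real /root/config.yml.

```shell
# Sketch: flag any unreplaced <PLACEHOLDER> tokens before generating certificates.
# A throwaway sample file stands in for the real /root/config.yml.
tmpdir=$(mktemp -d)
cat > "$tmpdir/config.yml" <<'EOF'
nodes:
  server:
    - name: wazuh-1
      ip: 10.0.0.9
    - name: wazuh-2
      ip: <NEW_WAZUH_SERVER_IP>
EOF

if grep -nE '<[A-Z_]+>' "$tmpdir/config.yml"; then
  echo "unreplaced placeholders found"
else
  echo "config.yml looks complete"
fi
rm -rf "$tmpdir"
```

Against the real file, the same grep run on /root/config.yml reports any line still carrying a placeholder, with its line number.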
  2. Download and run wazuh-certs-tool.sh from your /root directory to create the certificates for the new Wazuh server node and recreate them for the existing one:

    # curl -sO https://packages.wazuh.com/4.14/wazuh-certs-tool.sh
    # bash wazuh-certs-tool.sh -A
    
    23/01/2026 10:36:55 INFO: Generating the root certificate.
    23/01/2026 10:36:55 INFO: Generating Admin certificates.
    23/01/2026 10:36:55 INFO: Admin certificates created.
    23/01/2026 10:36:55 INFO: Generating Wazuh indexer certificates.
    23/01/2026 10:36:55 INFO: Wazuh indexer certificates created.
    23/01/2026 10:36:55 INFO: Generating Filebeat certificates.
    23/01/2026 10:36:56 INFO: Wazuh Filebeat certificates created.
    23/01/2026 10:36:56 INFO: Generating Wazuh dashboard certificates.
    23/01/2026 10:36:56 INFO: Wazuh dashboard certificates created.
    
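After the tool finishes, you can inspect any generated certificate with openssl before distributing it. In this sketch, a throwaway self-signed certificate stands in for a file under ./wazuh-certificates/:

```shell
# Sketch: inspect a certificate's subject and expiry, as you would for
# ./wazuh-certificates/<NODE_NAME>.pem. A throwaway self-signed certificate
# stands in for the real file.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-node" \
  -keyout "$tmpdir/demo-key.pem" -out "$tmpdir/demo.pem" 2>/dev/null

openssl x509 -in "$tmpdir/demo.pem" -noout -subject -enddate
rm -rf "$tmpdir"
```

With the real files, point openssl x509 at each .pem under ./wazuh-certificates/ and confirm the subject matches the node name from config.yml.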
  3. Compress the certificates folder and copy it to the new Wazuh server node(s). You can use the scp utility to securely copy the compressed file:

    # tar -cvf ./<CERTIFICATE_ARCHIVE>.tar -C ./wazuh-certificates/ .
    # scp <CERTIFICATE_ARCHIVE>.tar <TARGET_USERNAME>@<TARGET_IP>:
    

    Replace <CERTIFICATE_ARCHIVE>.tar with the name you want to use for the compressed certificate archive (for example, wazuh-certificates.tar or wazuh-install-files.tar).

    Note

    The scp command copies the certificate archive to the home directory of the specified user on the target system. You can change this to specify a path to your installation directory.

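Before copying the archive, you can list its contents to confirm that every node's certificate and key made it in. A minimal sketch, with dummy files standing in for the real wazuh-certificates directory:

```shell
# Sketch: list a certificate archive's contents before copying it.
# Dummy files stand in for the real wazuh-certificates directory.
tmpdir=$(mktemp -d)
mkdir "$tmpdir/wazuh-certificates"
touch "$tmpdir/wazuh-certificates/root-ca.pem" \
      "$tmpdir/wazuh-certificates/wazuh-2.pem" \
      "$tmpdir/wazuh-certificates/wazuh-2-key.pem"

tar -cf "$tmpdir/certs.tar" -C "$tmpdir/wazuh-certificates" .
tar -tf "$tmpdir/certs.tar"
rm -rf "$tmpdir"
```

On the real archive, run tar -tf against it and check for ./root-ca.pem plus a .pem and -key.pem pair for each node defined in config.yml.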
Configuring existing components to connect with the new node

Before deploying an additional Wazuh server node, it's essential to reconfigure existing components to ensure communication within the cluster. This step involves updating configuration files and connection parameters so that the Wazuh manager, indexer, and dashboard recognize and properly interact with the newly added Wazuh server node.

  1. Create a file, env_variables.sh, in the /root directory of the existing Wazuh server node where you define your environmental variables as follows:

    export NODE_NAME1=<WAZUH_INDEXER_NODE_NAME>
    export NODE_NAME2=<EXISTING_WAZUH_SERVER_NODE_NAME>
    export NODE_NAME3=<WAZUH_DASHBOARD_NODE_NAME>
    

    Replace <WAZUH_INDEXER_NODE_NAME>, <EXISTING_WAZUH_SERVER_NODE_NAME>, <WAZUH_DASHBOARD_NODE_NAME> with the names of the Wazuh indexer, Wazuh server, and Wazuh dashboard nodes, respectively, as defined in /root/config.yml.

  2. Create a deploy-certificates.sh script in the /root directory and paste the following into it:

    #!/bin/bash
    
    # Source the environmental variables from the external file
    source ~/env_variables.sh
    
    rm -rf /etc/wazuh-indexer/certs
    mkdir /etc/wazuh-indexer/certs
    tar -xf ./<CERTIFICATE_ARCHIVE>.tar -C /etc/wazuh-indexer/certs/ ./$NODE_NAME1.pem ./$NODE_NAME1-key.pem ./admin.pem ./admin-key.pem ./root-ca.pem
    mv -n /etc/wazuh-indexer/certs/$NODE_NAME1.pem /etc/wazuh-indexer/certs/wazuh-indexer.pem
    mv -n /etc/wazuh-indexer/certs/$NODE_NAME1-key.pem /etc/wazuh-indexer/certs/wazuh-indexer-key.pem
    chmod 500 /etc/wazuh-indexer/certs
    chmod 400 /etc/wazuh-indexer/certs/*
    chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs
    
    rm -rf /etc/filebeat/certs
    mkdir /etc/filebeat/certs
    tar -xf ./<CERTIFICATE_ARCHIVE>.tar -C /etc/filebeat/certs/ ./$NODE_NAME2.pem ./$NODE_NAME2-key.pem ./root-ca.pem
    mv -n /etc/filebeat/certs/$NODE_NAME2.pem /etc/filebeat/certs/wazuh-server.pem
    mv -n /etc/filebeat/certs/$NODE_NAME2-key.pem /etc/filebeat/certs/wazuh-server-key.pem
    chmod 500 /etc/filebeat/certs
    chmod 400 /etc/filebeat/certs/*
    chown -R root:root /etc/filebeat/certs
    
    rm -rf /etc/wazuh-dashboard/certs
    mkdir /etc/wazuh-dashboard/certs
    tar -xf ./<CERTIFICATE_ARCHIVE>.tar -C /etc/wazuh-dashboard/certs/ ./$NODE_NAME3.pem ./$NODE_NAME3-key.pem ./root-ca.pem
    mv -n /etc/wazuh-dashboard/certs/$NODE_NAME3.pem /etc/wazuh-dashboard/certs/wazuh-dashboard.pem
    mv -n /etc/wazuh-dashboard/certs/$NODE_NAME3-key.pem /etc/wazuh-dashboard/certs/wazuh-dashboard-key.pem
    chmod 500 /etc/wazuh-dashboard/certs
    chmod 400 /etc/wazuh-dashboard/certs/*
    chown -R wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs
    

    The deploy-certificates.sh script extracts the required certificates from the generated certificate archive and deploys them to the Wazuh indexer, Wazuh server (Filebeat), and Wazuh dashboard directories. It also sets the appropriate ownership and permissions to ensure secure communication between components when adding the new Wazuh server node.

  3. Deploy the certificates by executing the following command:

    # bash /root/deploy-certificates.sh
    

    This deploys the SSL certificates to encrypt communications between the Wazuh central components.

    Recommended action: Save a copy of the certificate files offline for potential future use and scalability. You can then remove the wazuh-certificates directory and the <CERTIFICATE_ARCHIVE>.tar file from this Wazuh server node by running the commands below to increase security:

    # rm -rf ./wazuh-certificates
    # rm -f ./<CERTIFICATE_ARCHIVE>.tar
    
  4. Edit the Wazuh indexer configuration file at /etc/wazuh-indexer/opensearch.yml to specify the Wazuh indexer IP address and node name, as defined in the /root/config.yml file:

    network.host: "<WAZUH_INDEXER_IP>"
    node.name: "<WAZUH_INDEXER_NODE_NAME>"
    cluster.initial_master_nodes:
    - "<WAZUH_INDEXER_NODE_NAME>"
    
  5. Edit the Filebeat configuration file /etc/filebeat/filebeat.yml to specify the Wazuh indexer's IP address:

    output.elasticsearch.hosts:
            - <WAZUH_INDEXER_IP>:9200
    

    Note

    The structure of this section varies based on whether you completed your installation using the Wazuh installation assistant or the step-by-step guide. Here, we used the quickstart script.

  6. Generate a random encryption key that will be used to encrypt communication between the cluster nodes:

    # openssl rand -hex 16
    

    Save the output of the command above, as it will be used later to configure the cluster on the Wazuh server master and worker nodes.

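The command returns 16 random bytes, hex-encoded, so a valid cluster key is always exactly 32 hexadecimal characters. A quick sketch that generates one and validates its format:

```shell
# Generate a cluster encryption key and confirm it is 32 hex characters
# (16 random bytes, hex-encoded).
KEY=$(openssl rand -hex 16)
echo "$KEY"
echo "$KEY" | grep -qE '^[0-9a-f]{32}$' && echo "key format OK"
```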
  7. Edit the configuration file /etc/wazuh-dashboard/opensearch_dashboards.yml to include connection details for the Wazuh indexer node:

    opensearch.hosts: https://<WAZUH_INDEXER_IP>:9200
    
  8. Edit the /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml file and replace the url value with the IP address or hostname of the Wazuh server master node:

    hosts:
      - default:
          url: https://<EXISTING_WAZUH_SERVER_IP>
          port: 55000
          username: wazuh-wui
          password: <WAZUH-WUI-PASSWORD>
          run_as: false
    
  9. Edit the Wazuh server configuration file at /var/ossec/etc/ossec.conf to enable the Wazuh server cluster:

    <cluster>
      <name>wazuh</name>
      <node_name><EXISTING_WAZUH_SERVER_NODE_NAME></node_name>
      <node_type>master</node_type>
      <key><ENCRYPTION_KEY></key>
      <port>1516</port>
      <bind_addr>0.0.0.0</bind_addr>
      <nodes>
          <node><MASTER_NODE_IP></node>
      </nodes>
      <hidden>no</hidden>
      <disabled>no</disabled>
    </cluster>
    

    The configurable fields in the above section of the /var/ossec/etc/ossec.conf file are as follows:

    • <name> indicates the name of the cluster.

    • <node_name> indicates the name of the current Wazuh server node, as defined in the server section of the /root/config.yml file. Replace <EXISTING_WAZUH_SERVER_NODE_NAME> with this exact value.

    • <node_type> specifies the node's role. In this instance, it should be set to master.

    • <key> represents a key used to encrypt communication between cluster nodes. It should be the same on all the server nodes. To generate a unique key, you can use the command openssl rand -hex 16.

    • <port> indicates the destination port for cluster communication. Leave the default as 1516.

    • <bind_addr> is the network IP address to which the node is bound to listen for incoming requests (0.0.0.0 means the node will use any IP address).

    • <nodes> contains the address of the master node, which can be either an IP address or a DNS hostname. This parameter must be specified on all nodes, including the master itself. Replace <MASTER_NODE_IP> with the IP address of your master node.

    • <hidden> shows or hides the cluster information in the generated alerts.

    • <disabled> indicates whether the node is enabled or disabled in the cluster. This option must be set to no.

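After editing ossec.conf, you can quickly confirm which node name and role the cluster block carries. A sketch using sed; the heredoc sample stands in for the real /var/ossec/etc/ossec.conf:

```shell
# Sketch: extract node_name and node_type from a <cluster> block.
# The heredoc sample stands in for the real /var/ossec/etc/ossec.conf.
tmpdir=$(mktemp -d)
cat > "$tmpdir/ossec.conf" <<'EOF'
<cluster>
  <name>wazuh</name>
  <node_name>wazuh-1</node_name>
  <node_type>master</node_type>
  <disabled>no</disabled>
</cluster>
EOF

sed -n 's:.*<node_name>\(.*\)</node_name>.*:\1:p' "$tmpdir/ossec.conf"   # prints wazuh-1
sed -n 's:.*<node_type>\(.*\)</node_type>.*:\1:p' "$tmpdir/ossec.conf"   # prints master
rm -rf "$tmpdir"
```

The same sed expressions run against the real file show at a glance whether the master role and the expected node name are in place before you restart services.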
  10. Restart the Wazuh central components and Filebeat to apply the changes.

    # systemctl restart wazuh-indexer
    # systemctl restart wazuh-manager
    # systemctl restart wazuh-dashboard
    # systemctl restart filebeat
    

Wazuh server worker node(s) installation

Once the certificates have been created and copied to the new Wazuh server node(s), you can proceed to install and configure the new Wazuh server as a worker node.

Adding the Wazuh repository

This step ensures that the new Wazuh server node can download and install the required Wazuh packages from the official repository.

  1. Import the GPG key:

    # rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
    
  2. Add the repository:

    • For RHEL-compatible systems version 8 and earlier, use the following command:

      # echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo
      
    • For RHEL-compatible systems version 9 and later, use the following command:

      # echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\npriority=1' | tee /etc/yum.repos.d/wazuh.repo
      

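Either pipeline writes a yum repository definition. A quick way to see what lands in the file is to run an equivalent command against a temporary path (this sketch uses printf for portability and a temp file in place of /etc/yum.repos.d/wazuh.repo):

```shell
# Sketch: write the repo definition to a temp path instead of
# /etc/yum.repos.d/wazuh.repo, then verify the baseurl line.
tmpdir=$(mktemp -d)
printf '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\npriority=1\n' | tee "$tmpdir/wazuh.repo" > /dev/null
grep -q '^baseurl=https://packages.wazuh.com/4.x/yum/$' "$tmpdir/wazuh.repo" && echo "repo entry OK"
rm -rf "$tmpdir"
```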
Installing the Wazuh manager

This step installs the Wazuh manager on the node, which enables it to function as part of the cluster and manage alerts, Wazuh agents, and internal communication.

  1. Install the Wazuh manager package.

    # yum -y install wazuh-manager
    
  2. Enable and start the Wazuh manager service.

    # systemctl daemon-reload
    # systemctl enable wazuh-manager
    # systemctl start wazuh-manager
    
  3. Check the Wazuh manager status to ensure it is up and running.

    # systemctl status wazuh-manager
    

Install and configure Filebeat

  1. Install the Filebeat package.

    # yum -y install filebeat
    
  2. Download the preconfigured Filebeat configuration file:

    # curl -so /etc/filebeat/filebeat.yml https://packages.wazuh.com/4.14/tpl/wazuh/filebeat/filebeat.yml
    
  3. Edit the /etc/filebeat/filebeat.yml configuration file and replace the <WAZUH_INDEXER_IP> value with the IP address or hostname of your Wazuh indexer node:

    # Wazuh - Filebeat configuration file
    output.elasticsearch:
      hosts: <WAZUH_INDEXER_IP>:9200
      protocol: https
    
    • hosts represents the list of Wazuh indexer nodes to connect to. You can use either IP addresses or hostnames. By default, the host is set to localhost (hosts: ["127.0.0.1:9200"]). Replace it with your Wazuh indexer IP address accordingly.

      If you have more than one Wazuh indexer node, you can separate the addresses using commas. For example: hosts: ["10.0.0.9:9200", "10.0.0.10:9200", "10.0.0.11:9200"].

  4. Create a Filebeat keystore to securely store authentication credentials:

    # filebeat keystore create
    
  5. Add the admin user and password to the secrets keystore:

    # echo admin | filebeat keystore add username --stdin --force
    # echo <ADMIN_PASSWORD> | filebeat keystore add password --stdin --force
    

    If you are running an all-in-one Wazuh deployment and have not changed the default admin password, you can retrieve the current admin password by running the following command:

    # sudo tar -O -xvf wazuh-install-files.tar wazuh-install-files/wazuh-passwords.txt
    
  6. Download the alerts template for the Wazuh indexer:

    # curl -so /etc/filebeat/wazuh-template.json https://raw.githubusercontent.com/wazuh/wazuh/v4.14.4/extensions/elasticsearch/7.x/wazuh-template.json
    # chmod go+r /etc/filebeat/wazuh-template.json
    
  7. Install the Wazuh module for Filebeat:

    # curl -s https://packages.wazuh.com/4.x/filebeat/wazuh-filebeat-0.5.tar.gz | tar -xvz -C /usr/share/filebeat/module
    

Deploying certificates

When adding a new Wazuh server node, deploy the generated certificate archive on that node before starting its services. This ensures secure TLS communication with the Wazuh indexer, Filebeat and other required components.

Run the following commands in the directory into which the wazuh-certificates.tar file was copied, replacing <NEW_WAZUH_SERVER_NODE_NAME> with the name of the Wazuh server node you are configuring, as defined in /root/config.yml. This deploys the SSL certificates to encrypt communications between the Wazuh central components:

  1. Create an environment variable to store the node name:

    NODE_NAME=<NEW_WAZUH_SERVER_NODE_NAME>
    
  2. Deploy the certificates:

    # mkdir /etc/filebeat/certs
    # tar -xf ./wazuh-certificates.tar -C /etc/filebeat/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
    # mv -n /etc/filebeat/certs/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem
    # mv -n /etc/filebeat/certs/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem
    # chmod 500 /etc/filebeat/certs
    # chmod 400 /etc/filebeat/certs/*
    # chown -R root:root /etc/filebeat/certs
    

Starting the service

# systemctl daemon-reload
# systemctl enable filebeat
# systemctl start filebeat

Run the following command to verify that Filebeat is successfully installed:

# filebeat test output

An example output is shown below:

elasticsearch: https://10.0.0.9:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 10.0.0.9
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK
  version: 7.10.2

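If you script this verification, you can gate on the final "talk to server... OK" line rather than reading the whole report. A sketch; the printf stands in for piping the real filebeat test output:

```shell
# Sketch: check connectivity by matching the final line of `filebeat test output`.
# printf stands in for the real command; in practice you would pipe:
#   filebeat test output | grep -q 'talk to server.* OK'
if printf 'talk to server... OK\nversion: 7.10.2\n' | grep -q 'talk to server.* OK'; then
  echo "Filebeat can reach the Wazuh indexer"
fi
```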
Configuring the Wazuh server worker nodes

  1. Configure the Wazuh server worker node to enable cluster mode by editing the following settings in the /var/ossec/etc/ossec.conf file:

    <cluster>
        <name>wazuh</name>
        <node_name><NEW_WAZUH_SERVER_NODE_NAME></node_name>
        <node_type>worker</node_type>
        <key><ENCRYPTION_KEY></key>
        <port>1516</port>
        <bind_addr>0.0.0.0</bind_addr>
        <nodes>
            <node><MASTER_NODE_IP_ADDRESS></node>
        </nodes>
        <hidden>no</hidden>
        <disabled>no</disabled>
    </cluster>
    

    The configurable fields in the above section of the /var/ossec/etc/ossec.conf file are as follows:

    • <name> indicates the name of the cluster.

    • <node_name> indicates the name of the current node. Each node of the cluster must have a unique name. Replace <NEW_WAZUH_SERVER_NODE_NAME> with the name specified in the /root/config.yml file.

    • <node_type> specifies the node's role. In this instance, it should be set to worker.

    • <key> represents the key created previously for the master node. It must be the same on all the nodes. If you already have a distributed infrastructure, copy this key from the master node's /var/ossec/etc/ossec.conf file.

    • <port> indicates the destination port for cluster communication. Leave the default as 1516.

    • <bind_addr> is the network IP address to which the node is bound to listen for incoming requests (0.0.0.0 means the node will use any IP address).

    • <nodes> contains the Wazuh server master node's address, which can be an IP address or a DNS hostname. Replace <MASTER_NODE_IP_ADDRESS> with the IP address of your master node.

    • <hidden> shows or hides the cluster information in the generated alerts.

    • <disabled> indicates whether the node is enabled or disabled in the cluster. This option must be set to no.

    You can learn more about the available configuration options in the cluster reference guide.

  2. Restart the Wazuh manager service.

    # systemctl restart wazuh-manager
    

Testing the cluster

Now that installation and configuration are complete, you can test your cluster to ensure the new Wazuh server node is connected. There are two ways to do this:

Using the cluster control tool

Verify that the Wazuh server cluster is enabled and all the nodes are connected by executing the following command on any of the Wazuh server nodes:

# /var/ossec/bin/cluster_control -l

A sample output of the command:

NAME     TYPE    VERSION  ADDRESS
wazuh-1  master  4.14.2   10.0.0.9
wazuh-2  worker  4.14.2   10.0.0.10
wazuh-3  worker  4.14.2   10.0.0.11

Note that 10.0.0.9, 10.0.0.10, and 10.0.0.11 are example IP addresses.

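If you script this check, counting worker rows in the output is a simple health signal. A sketch; the heredoc stands in for the live command's output:

```shell
# Sketch: count connected worker nodes from `cluster_control -l` style output.
# The heredoc stands in for: /var/ossec/bin/cluster_control -l
workers=$(grep -c ' worker ' <<'EOF'
NAME     TYPE    VERSION  ADDRESS
wazuh-1  master  4.14.2   10.0.0.9
wazuh-2  worker  4.14.2   10.0.0.10
wazuh-3  worker  4.14.2   10.0.0.11
EOF
)
echo "connected workers: $workers"   # prints: connected workers: 2
```

Comparing this count against the number of worker nodes you expect catches a node that installed cleanly but never joined the cluster.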
Using the Wazuh API console

You can also check your new Wazuh server cluster by using the Wazuh API Console accessible via the Wazuh dashboard.

Access the Wazuh dashboard using the credentials below.

  • URL: https://<WAZUH_DASHBOARD_IP>

  • Username: admin

  • Password: <ADMIN_PASSWORD>, or admin if you already have a distributed architecture and are using the default password.

Navigate to Server management > Dev Tools. On the console, run the query below:

GET /cluster/healthcheck

This query will display the global status of your Wazuh server cluster with the following information for each node:

  • Name indicates the name of the server node.

  • Type indicates the role assigned to a node (master or worker).

  • Version indicates the version of the Wazuh manager service running on the node.

  • IP is the IP address of the Wazuh server master or worker node.

  • n_active_agents indicates the number of active Wazuh agents connected to the node.

Having completed these steps, the Wazuh infrastructure has been successfully scaled up, and the new server nodes have been integrated into the cluster.

If you want to uninstall the Wazuh server, see Uninstall the Wazuh server documentation.