Offline installation

You can install Wazuh even when there is no connection to the Internet. Installing the solution offline involves downloading the Wazuh central component packages from a system with Internet access and then installing them on a system with no Internet connection. The Wazuh server, the Wazuh indexer, and the Wazuh dashboard can be installed and configured on the same host in an all-in-one deployment, or each component can be installed on a separate host as a distributed deployment, depending on your environment needs.

For more information about the hardware requirements and the recommended operating systems, check the Requirements section.

Note

You need root user privileges to run all the commands described below.

Prerequisites

  • curl, tar, and setcap need to be installed on the target system where the offline installation will be carried out. On some Debian-based systems, gnupg might need to be installed as well. Typical installation commands are sketched after this list.

  • On some systems, the command cp is an alias for cp -i — you can check this by running alias cp. If this is the case, run unalias cp to avoid being asked for confirmation every time a file is overwritten.
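
  For reference, on systems that still have access to a package repository (local or online), these tools can typically be installed with the distribution package manager. A minimal sketch, assuming standard package names (setcap is usually provided by the libcap package on RPM-based systems and by libcap2-bin on Debian-based systems):

    # yum install curl tar libcap                  # RPM-based systems
    # apt-get install curl tar libcap2-bin gnupg   # Debian-based systems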

Download the packages and configuration files

  1. Run the following commands from any Linux system with Internet connection. They execute a script that downloads all the files required for the offline installation on x86_64 architectures. Select the package format to download; the example below uses rpm, and the deb alternative is shown after it.

    # curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh
    # chmod 744 wazuh-install.sh
    # ./wazuh-install.sh -dw rpm
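
    If your target systems use deb packages, the same script accepts deb as the download format. A minimal variation of the command above:

    # ./wazuh-install.sh -dw deb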
    
  2. Download the certificates configuration file.

    # curl -sO https://packages.wazuh.com/4.7/config.yml
    
  3. Edit config.yml to prepare the certificates creation.

    • If you are performing an all-in-one deployment, replace "<indexer-node-ip>", "<wazuh-manager-ip>", and "<dashboard-node-ip>" with 127.0.0.1.

    • If you are performing a distributed deployment, replace the node names and IP values with the corresponding names and IP addresses. You need to do this for all the Wazuh server, Wazuh indexer, and Wazuh dashboard nodes. Add as many node fields as needed, as in the sketch after this list.
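
    For reference, a minimal sketch of what the node definitions in config.yml might look like for a distributed deployment, assuming example node names and IP addresses (the exact layout may vary slightly between versions, so keep the keys of the file you downloaded and only change the name and ip values):

      nodes:
        indexer:
          - name: node-1
            ip: "10.0.0.1"
        server:
          - name: wazuh-1
            ip: "10.0.0.2"
        dashboard:
          - name: dashboard
            ip: "10.0.0.3"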

  4. Run the ./wazuh-certs-tool.sh script to create the certificates. For a multi-node cluster, these certificates need to be deployed later to all Wazuh instances in your cluster.

    # curl -sO https://packages.wazuh.com/4.7/wazuh-certs-tool.sh
    # chmod 744 wazuh-certs-tool.sh
    # ./wazuh-certs-tool.sh --all
    
  5. Copy or move the wazuh-offline.tar.gz file and the ./wazuh-certificates/ folder to a location accessible from the host(s) where the offline installation will be carried out. This can be done using scp, as in the example below.
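
    For example, from the machine with Internet access you can transfer both to the target host with scp. A sketch in which the user, host address, and destination path are placeholders:

    # scp wazuh-offline.tar.gz root@<target-host>:/tmp
    # scp -r wazuh-certificates root@<target-host>:/tmp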

Install Wazuh components from local files

  1. In the working directory where you placed wazuh-offline.tar.gz and ./wazuh-certificates/, execute the following command to decompress the installation files:

    # tar xf wazuh-offline.tar.gz
    

    You can verify the SHA512 checksums of the decompressed package files in wazuh-offline/wazuh-packages/, as shown below. Find the expected SHA512 checksums in the Packages list.
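
    For example, you can compute the checksum of a package locally and compare it against the published value. A sketch using the Wazuh indexer package; adjust the file name for the package you want to verify:

    # sha512sum wazuh-offline/wazuh-packages/wazuh-indexer*.rpm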

Installing the Wazuh indexer

  1. Run the following commands to install the Wazuh indexer.

    # rpm --import ./wazuh-offline/wazuh-files/GPG-KEY-WAZUH
    # rpm -ivh ./wazuh-offline/wazuh-packages/wazuh-indexer*.rpm
    
  2. Run the following commands replacing <indexer-node-name> with the name of the Wazuh indexer node you are configuring as defined in config.yml. For example, node-1. This deploys the SSL certificates to encrypt communications between the Wazuh central components.

    # NODE_NAME=<indexer-node-name>
    
    # mkdir /etc/wazuh-indexer/certs
    # mv -n wazuh-certificates/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem
    # mv -n wazuh-certificates/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
    # mv wazuh-certificates/admin-key.pem /etc/wazuh-indexer/certs/
    # mv wazuh-certificates/admin.pem /etc/wazuh-indexer/certs/
    # cp wazuh-certificates/root-ca.pem /etc/wazuh-indexer/certs/
    # chmod 500 /etc/wazuh-indexer/certs
    # chmod 400 /etc/wazuh-indexer/certs/*
    # chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs
    

    Here you move the node certificate and key files, such as node-1.pem and node-1-key.pem, to their corresponding certs folder. They're specific to the node and are not required on the other nodes. However, note that the root-ca.pem certificate isn't moved but copied to the certs folder. This way, you can continue deploying it to other component folders in the next steps.

  3. Edit /etc/wazuh-indexer/opensearch.yml and replace the following values:

    1. network.host: Sets the address of this node for both HTTP and transport traffic. The node will bind to this address and will also use it as its publish address. Accepts an IP address or a hostname.

      Use the same node address set in config.yml to create the SSL certificates.

    2. node.name: Name of the Wazuh indexer node as defined in the config.yml file. For example, node-1.

    3. cluster.initial_master_nodes: List of the names of the master-eligible nodes. These names are defined in the config.yml file. Uncomment the node-2 and node-3 lines, change the names, or add more lines, according to your config.yml definitions.

      cluster.initial_master_nodes:
      - "node-1"
      - "node-2"
      - "node-3"
      
    4. discovery.seed_hosts: List of the addresses of the master-eligible nodes. Each element can be either an IP address or a hostname. You may leave this setting commented out if you are configuring the Wazuh indexer as a single node. For multi-node configurations, uncomment this setting and set the addresses of your master-eligible nodes.

      discovery.seed_hosts:
        - "10.0.0.1"
        - "10.0.0.2"
        - "10.0.0.3"
      
    5. plugins.security.nodes_dn: List of the Distinguished Names of the certificates of all the Wazuh indexer cluster nodes. Uncomment the lines for node-2 and node-3 and change the common names (CN) and values according to your settings and your config.yml definitions.

      plugins.security.nodes_dn:
      - "CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US"
      - "CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US"
      - "CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US"
      
  4. Enable and start the Wazuh indexer service.

    # systemctl daemon-reload
    # systemctl enable wazuh-indexer
    # systemctl start wazuh-indexer
    
  5. For multi-node clusters, repeat the previous steps on every Wazuh indexer node.

  6. When all Wazuh indexer nodes are running, run the Wazuh indexer indexer-security-init.sh script on any Wazuh indexer node to load the new certificates information and start the cluster.

    # /usr/share/wazuh-indexer/bin/indexer-security-init.sh
    
  7. Run the following command to check that the installation is successful. Note that this command uses localhost; set your Wazuh indexer address instead if necessary.

    # curl -XGET https://localhost:9200 -u admin:admin -k
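
    For a multi-node cluster, you can additionally confirm that every Wazuh indexer node has joined the cluster using the standard OpenSearch _cat API. A sketch; replace localhost and the default credentials as needed:

    # curl -k -u admin:admin "https://localhost:9200/_cat/nodes?v"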
    

Installing the Wazuh server

Installing the Wazuh manager

  1. Run the following commands to import the Wazuh key and install the Wazuh manager.

    # rpm --import ./wazuh-offline/wazuh-files/GPG-KEY-WAZUH
    # rpm -ivh ./wazuh-offline/wazuh-packages/wazuh-manager*.rpm
    
  2. Enable and start the Wazuh manager service.

    # systemctl daemon-reload
    # systemctl enable wazuh-manager
    # systemctl start wazuh-manager
    
  3. Run the following command to verify that the Wazuh manager status is active.

    # systemctl status wazuh-manager
    

Installing Filebeat

Filebeat must be installed and configured on the same server as the Wazuh manager.

  1. Run the following command to install Filebeat.

    # rpm -ivh ./wazuh-offline/wazuh-packages/filebeat*.rpm
    
  2. Copy the configuration files to their appropriate locations. If prompted, type “yes” to overwrite /etc/filebeat/filebeat.yml.

    # cp ./wazuh-offline/wazuh-files/filebeat.yml /etc/filebeat/ &&\
    cp ./wazuh-offline/wazuh-files/wazuh-template.json /etc/filebeat/ &&\
    chmod go+r /etc/filebeat/wazuh-template.json
    
  3. Edit /etc/filebeat/wazuh-template.json and set the value of "index.number_of_shards" to "1" for a single-node installation. For a distributed installation, adjust this value according to your requirements.

    {
      ...
      "settings": {
        ...
        "index.number_of_shards": "1",
        ...
      },
      ...
    }
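
    If you prefer to make this change non-interactively, a one-line edit with sed is possible. A sketch, assuming the value is stored as a quoted string as in the excerpt above; review the file afterwards:

    # sed -i 's/"index.number_of_shards": *"[0-9]*"/"index.number_of_shards": "1"/' /etc/filebeat/wazuh-template.json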
    
  4. Edit the /etc/filebeat/filebeat.yml configuration file and replace the following value:

    1. hosts: The list of Wazuh indexer nodes to connect to. You can use either IP addresses or hostnames. By default, the host is set to localhost (hosts: ["127.0.0.1:9200"]). Replace it with your Wazuh indexer address.

      If you have more than one Wazuh indexer node, you can separate the addresses using commas. For example, hosts: ["10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200"]

       # Wazuh - Filebeat configuration file
       output.elasticsearch:
         hosts: ["10.0.0.1:9200"]
         protocol: https
         username: ${username}
         password: ${password}
      
  5. Create a Filebeat keystore to securely store authentication credentials.

    # filebeat keystore create
    
  6. Add the username and password admin:admin to the secrets keystore.

    # echo admin | filebeat keystore add username --stdin --force
    # echo admin | filebeat keystore add password --stdin --force
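
    You can optionally confirm that both keys are present in the keystore. This lists the key names only, not their values:

    # filebeat keystore list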
    
  7. Install the Wazuh module for Filebeat.

    # tar -xzf ./wazuh-offline/wazuh-files/wazuh-filebeat-0.3.tar.gz -C /usr/share/filebeat/module
    
  8. Replace <server-node-name> with your Wazuh server node certificate name, the same one used in config.yml when creating the certificates. For example, wazuh-1. Then, move the certificates to their corresponding location.

    # NODE_NAME=<server-node-name>
    
    # mkdir /etc/filebeat/certs
    # mv -n wazuh-certificates/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem
    # mv -n wazuh-certificates/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem
    # cp wazuh-certificates/root-ca.pem /etc/filebeat/certs/
    # chmod 500 /etc/filebeat/certs
    # chmod 400 /etc/filebeat/certs/*
    # chown -R root:root /etc/filebeat/certs
    
  9. Enable and start the Filebeat service.

    # systemctl daemon-reload
    # systemctl enable filebeat
    # systemctl start filebeat
    
  10. Run the following command to make sure Filebeat is successfully installed.

    # filebeat test output
    

    To check the number of shards that have been configured, you can run the following command. Note that this command uses localhost; set your Wazuh indexer address instead if necessary.

    # curl -k -u admin:admin "https://localhost:9200/_template/wazuh?pretty&filter_path=wazuh.settings.index.number_of_shards"
    

Your Wazuh server node is now successfully installed. Repeat the steps of this installation stage for every Wazuh server node in your cluster, then follow the Wazuh cluster configuration for multi-node deployment section below to configure the Wazuh cluster. If you want a Wazuh server single-node cluster, everything is set and you can proceed directly with the Wazuh dashboard installation.

Wazuh cluster configuration for multi-node deployment

After completing the installation of the Wazuh server on every node, you need to configure only one server node as the master and the rest as workers.

Configuring the Wazuh server master node

  1. Edit the following settings in the /var/ossec/etc/ossec.conf configuration file.

    <cluster>
      <name>wazuh</name>
      <node_name>master-node</node_name>
      <node_type>master</node_type>
      <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
      <port>1516</port>
      <bind_addr>0.0.0.0</bind_addr>
      <nodes>
        <node>wazuh-master-address</node>
      </nodes>
      <hidden>no</hidden>
      <disabled>no</disabled>
    </cluster>
    

    Parameters to be configured:

    name

    It indicates the name of the cluster.

    node_name

    It indicates the name of the current node.

    node_type

    It specifies the role of the node. It has to be set to master.

    key

    Key used to encrypt communication between the cluster nodes. The key must be 32 characters long and the same for all of the nodes in the cluster. The following command can be used to generate a random key: openssl rand -hex 16. See the example after this parameter list.

    port

    It indicates the destination port for cluster communication.

    bind_addr

    It is the network IP to which the node is bound to listen for incoming requests (0.0.0.0 for any IP).

    nodes

    It is the address of the master node and can be either an IP or a DNS. This parameter must be specified in all nodes, including the master itself.

    hidden

    It shows or hides the cluster information in the generated alerts.

    disabled

    It indicates whether the node is enabled or disabled in the cluster. This option must be set to no.
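
    For example, a valid 32-character key can be generated with openssl and pasted into the <key> element on every node. Note that the key shown in the configuration above is only an example value:

    # openssl rand -hex 16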

  2. Restart the Wazuh manager.

    # systemctl restart wazuh-manager
    
Configuring the Wazuh server worker nodes

  1. Configure the cluster node by editing the following settings in the /var/ossec/etc/ossec.conf file.

    <cluster>
        <name>wazuh</name>
        <node_name>worker-node</node_name>
        <node_type>worker</node_type>
        <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
        <port>1516</port>
        <bind_addr>0.0.0.0</bind_addr>
        <nodes>
            <node>wazuh-master-address</node>
        </nodes>
        <hidden>no</hidden>
        <disabled>no</disabled>
    </cluster>
    

    Parameters to be configured:

    name

    It indicates the name of the cluster.

    node_name

    It indicates the name of the current node. Each node of the cluster must have a unique name.

    node_type

    It specifies the role of the node. It has to be set to worker.

    key

    The key created previously for the master node. It has to be the same for all the nodes.

    nodes

    It has to contain the address of the master node and can be either an IP or a DNS.

    disabled

    It indicates whether the node is enabled or disabled in the cluster. It has to be set to no.

  2. Restart the Wazuh manager.

    # systemctl restart wazuh-manager
    

Repeat these configuration steps for every Wazuh server worker node in your cluster.

Testing Wazuh server cluster

To verify that the Wazuh cluster is enabled and all the nodes are connected, execute the following command:

# /var/ossec/bin/cluster_control -l

An example output of the command looks as follows:

  NAME          TYPE    VERSION  ADDRESS
  master-node   master  4.7.3    10.0.0.3
  worker-node1  worker  4.7.3    10.0.0.4
  worker-node2  worker  4.7.3    10.0.0.5

Note that 10.0.0.3, 10.0.0.4, and 10.0.0.5 are example IP addresses.

Installing the Wazuh dashboard

  1. Run the following commands to install the Wazuh dashboard.

    # rpm --import ./wazuh-offline/wazuh-files/GPG-KEY-WAZUH
    # rpm -ivh ./wazuh-offline/wazuh-packages/wazuh-dashboard*.rpm
    
  2. Replace <dashboard-node-name> with your Wazuh dashboard node name, the same one used in config.yml to create the certificates. For example, dashboard. Then, move the certificates to their corresponding location.

    # NODE_NAME=<dashboard-node-name>
    
    # mkdir /etc/wazuh-dashboard/certs
    # mv -n wazuh-certificates/$NODE_NAME.pem /etc/wazuh-dashboard/certs/dashboard.pem
    # mv -n wazuh-certificates/$NODE_NAME-key.pem /etc/wazuh-dashboard/certs/dashboard-key.pem
    # cp wazuh-certificates/root-ca.pem /etc/wazuh-dashboard/certs/
    # chmod 500 /etc/wazuh-dashboard/certs
    # chmod 400 /etc/wazuh-dashboard/certs/*
    # chown -R wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs
    
  3. Edit the /etc/wazuh-dashboard/opensearch_dashboards.yml file and replace the following values:

    1. server.host: This setting specifies the host of the back end server. To allow remote users to connect, set the value to the IP address or DNS name of the Wazuh dashboard. The value 0.0.0.0 will accept all the available IP addresses of the host.

    2. opensearch.hosts: The URLs of the Wazuh indexer instances to use for all your queries. The Wazuh dashboard can be configured to connect to multiple Wazuh indexer nodes in the same cluster. The addresses of the nodes can be separated by commas. For example, ["https://10.0.0.2:9200", "https://10.0.0.3:9200", "https://10.0.0.4:9200"]

         server.host: 0.0.0.0
         server.port: 443
         opensearch.hosts: https://localhost:9200
         opensearch.ssl.verificationMode: certificate
      
  4. Enable and start the Wazuh dashboard.

    # systemctl daemon-reload
    # systemctl enable wazuh-dashboard
    # systemctl start wazuh-dashboard
    
  5. Only for distributed deployments: Edit the file /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml and replace the url value with the IP address or hostname of the Wazuh server master node.

    hosts:
      - default:
          url: https://localhost
          port: 55000
          username: wazuh-wui
          password: wazuh-wui
          run_as: false
    
  6. Run the following command to verify the Wazuh dashboard service is active.

    # systemctl status wazuh-dashboard
    
  7. Access the web interface.

    • URL: https://<wazuh_server_ip>

    • Username: admin

    • Password: admin

Upon the first access to the Wazuh dashboard, the browser shows a warning message stating that the certificate was not issued by a trusted authority. An exception can be added in the advanced options of the web browser or, for increased security, the root-ca.pem file previously generated can be imported to the certificate manager of the browser. Alternatively, a certificate from a trusted authority can be configured.

Securing your Wazuh installation

You have now installed and configured all the Wazuh central components. We recommend changing the default credentials to protect your infrastructure from possible attacks.

Follow the instructions below to change the default passwords for both the Wazuh API users and the Wazuh indexer users.

  1. Use the Wazuh passwords tool to change the passwords of all the internal users.

    # /usr/share/wazuh-indexer/plugins/opensearch-security/tools/wazuh-passwords-tool.sh --change-all --admin-user wazuh --admin-password wazuh
    
    INFO: The password for user admin is yWOzmNA.?Aoc+rQfDBcF71KZp?1xd7IO
    INFO: The password for user kibanaserver is nUa+66zY.eDF*2rRl5GKdgLxvgYQA+wo
    INFO: The password for user kibanaro is 0jHq.4i*VAgclnqFiXvZ5gtQq1D5LCcL
    INFO: The password for user logstash is hWW6U45rPoCT?oR.r.Baw2qaWz2iH8Ml
    INFO: The password for user readall is PNt5K+FpKDMO2TlxJ6Opb2D0mYl*I7FQ
    INFO: The password for user snapshotrestore is +GGz2noZZr2qVUK7xbtqjUup049tvLq.
    WARNING: Wazuh indexer passwords changed. Remember to update the password in the Wazuh dashboard and Filebeat nodes if necessary, and restart the services.
    INFO: The password for Wazuh API user wazuh is JYWz5Zdb3Yq+uOzOPyUU4oat0n60VmWI
    INFO: The password for Wazuh API user wazuh-wui is +fLddaCiZePxh24*?jC0nyNmgMGCKE+2
    INFO: Updated wazuh-wui user password in wazuh dashboard. Remember to restart the service.
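
    If you run Filebeat on separate Wazuh server nodes in a distributed deployment, remember to update the admin credentials stored in each Filebeat keystore and restart the service. A minimal sketch, reusing the keystore commands shown earlier; replace <admin-password> with the new value reported by the tool:

    # echo <admin-password> | filebeat keystore add password --stdin --force
    # systemctl restart filebeat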
    

Next steps

Once the Wazuh environment is ready, Wazuh agents can be installed on every endpoint to be monitored. To install the Wazuh agents and start monitoring the endpoints, see the Wazuh agent installation section. If you need to install them offline, you can check the appropriate agent package to download for your monitored system in the Wazuh agent packages list section.

To uninstall all the Wazuh central components, see the Uninstalling the Wazuh central components section.