Offline installation
You can install Wazuh even when there is no connection to the Internet. An offline installation involves downloading the Wazuh central components on a system with Internet access so they can later be installed on a system with no Internet connection. The Wazuh server, the Wazuh indexer, and the Wazuh dashboard can be installed and configured on the same host in an all-in-one deployment, or each component can be installed on a separate host as a distributed deployment, depending on your environment needs.
For more information about the hardware requirements and the recommended operating systems, check the Requirements section.
Note
Root privileges are required to execute all the commands.
Prerequisites
curl, tar, and setcap need to be installed on the target system where the offline installation will be carried out. gnupg might need to be installed as well on some Debian-based systems.

On some systems, the command cp is an alias for cp -i; you can check this by running alias cp. If this is your case, run unalias cp to avoid being asked for confirmation before overwriting files.
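A quick way to confirm the prerequisites are present before starting (a sketch; the tool names are taken from the list above):

```shell
# Check that each required tool is available on the target system.
for tool in curl tar setcap; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any tool reported as MISSING should be installed from local media or an internal mirror before proceeding.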
Download the packages and configuration files
Replace <deb|rpm> in the following command with your choice of package format and run it from a Linux system with Internet connection. This executes a script that downloads all the files required for the offline installation on x86_64 architectures.

# curl -sO https://packages.wazuh.com/4.3/wazuh-install.sh
# chmod 744 wazuh-install.sh
# ./wazuh-install.sh -dw <deb|rpm>
Prepare the certificate configuration file.
All-in-one deployment
If you are performing an all-in-one deployment, do the following:
# curl -sO https://packages.wazuh.com/4.3/config.yml
Edit config.yml and replace <indexer-node-ip>, <wazuh-manager-ip>, and <dashboard-node-ip> with 127.0.0.1.

Distributed deployment
If you are performing a distributed deployment, do the following:
# curl -sO https://packages.wazuh.com/4.3/config.yml
Edit config.yml and replace the node names and IP values with the corresponding names and IP addresses. You need to do this for all the Wazuh server, Wazuh indexer, and Wazuh dashboard nodes. Add as many node fields as needed.
Run ./wazuh-certs-tool.sh to create the certificates. For a multi-node cluster, these certificates need to be later deployed to all Wazuh instances in your cluster.

# curl -sO https://packages.wazuh.com/4.3/wazuh-certs-tool.sh
# chmod 744 wazuh-certs-tool.sh
# ./wazuh-certs-tool.sh --all
Copy or move the wazuh-offline.tar.gz file and the ./wazuh-certificates/ folder to a location accessible to the host(s) where the offline installation will be carried out. This can be done using scp.
Install Wazuh components from local files
Note
In the host where the installation is taking place, make sure to change the working directory to the folder where the downloaded installation files were placed.
In the working directory where you placed wazuh-offline.tar.gz and ./wazuh-certificates/, execute the following command to decompress the installation files:
# tar xf wazuh-offline.tar.gz
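After extracting, you can sanity-check that the expected folders are present (a sketch; the wazuh-files and wazuh-packages subfolder names are inferred from the package paths used in the installation commands later in this guide):

```shell
# Confirm the archive produced the expected directory layout.
for d in wazuh-offline/wazuh-files wazuh-offline/wazuh-packages; do
  if [ -d "$d" ]; then
    echo "$d: present"
  else
    echo "$d: missing"
  fi
done
```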
Installing the Wazuh indexer
Run the following command to install the Wazuh indexer.
# rpm -ivh ./wazuh-offline/wazuh-packages/wazuh-indexer*.rpm
# dpkg -i ./wazuh-offline/wazuh-packages/wazuh-indexer*.deb
Run the following commands, replacing <indexer-node-name> with the name of the Wazuh indexer node you are configuring as defined in config.yml, for example node-1. This deploys the SSL certificates to encrypt communications between the Wazuh central components.

# NODE_NAME=<indexer-node-name>
# mkdir /etc/wazuh-indexer/certs
# mv -n wazuh-certificates/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem
# mv -n wazuh-certificates/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
# mv wazuh-certificates/admin-key.pem /etc/wazuh-indexer/certs/
# mv wazuh-certificates/admin.pem /etc/wazuh-indexer/certs/
# cp wazuh-certificates/root-ca.pem /etc/wazuh-indexer/certs/
# chmod 500 /etc/wazuh-indexer/certs
# chmod 400 /etc/wazuh-indexer/certs/*
# chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs
Edit /etc/wazuh-indexer/opensearch.yml and replace the following values:

network.host: Sets the address of this node for both HTTP and transport traffic. The node binds to this address and also uses it as its publish address. Accepts an IP address or a hostname. Use the same node address set in config.yml to create the SSL certificates.

node.name: Name of the Wazuh indexer node as defined in the config.yml file. For example, node-1.

cluster.initial_master_nodes: List of the names of the master-eligible nodes. These names are defined in the config.yml file. Uncomment the node-2 and node-3 lines, change the names, or add more lines, according to your config.yml definitions.

cluster.initial_master_nodes:
- "node-1"
- "node-2"
- "node-3"

discovery.seed_hosts: List of the addresses of the master-eligible nodes. Each element can be either an IP address or a hostname. You may leave this setting commented if you are configuring the Wazuh indexer as a single node. For multi-node configurations, uncomment this setting and set the addresses of your master-eligible nodes.

discovery.seed_hosts:
- "10.0.0.1"
- "10.0.0.2"
- "10.0.0.3"

plugins.security.nodes_dn: List of the Distinguished Names of the certificates of all the Wazuh indexer cluster nodes. Uncomment the lines for node-2 and node-3 and change the common names (CN) and values according to your settings and your config.yml definitions.

plugins.security.nodes_dn:
- "CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US"
- "CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US"
- "CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US"
Enable and start the Wazuh indexer service.
# systemctl daemon-reload
# systemctl enable wazuh-indexer
# systemctl start wazuh-indexer
Choose one option according to the operating system used.
RPM-based operating system:
# chkconfig --add wazuh-indexer
# service wazuh-indexer start
Debian-based operating system:
# update-rc.d wazuh-indexer defaults 95 10
# service wazuh-indexer start
For multi-node clusters, repeat the previous steps on every Wazuh indexer node. Then proceed to the cluster initialization stage.
When all Wazuh indexer nodes are running, run the indexer-security-init.sh script on any Wazuh indexer node to load the new certificates information and start the cluster:

# /usr/share/wazuh-indexer/bin/indexer-security-init.sh
Run the following command to check that the installation is successful.
# curl -XGET https://localhost:9200 -u admin:admin -k
Expand the output to see an example response.
{
  "name" : "node-1",
  "cluster_name" : "wazuh-cluster",
  "cluster_uuid" : "nRWvWcQsTpuC_PQU9pB3-g",
  "version" : {
    "number" : "7.10.2",
    "build_type" : "rpm",
    "build_hash" : "e505b10357c03ae8d26d675172402f2f2144ef0f",
    "build_date" : "2022-01-14T03:38:06.881862Z",
    "build_snapshot" : false,
    "lucene_version" : "8.10.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}
Installing the Wazuh server
Installing the Wazuh manager
Run the following commands to import the Wazuh key and install the Wazuh manager.
# rpm --import ./wazuh-offline/wazuh-files/GPG-KEY-WAZUH
# rpm -ivh ./wazuh-offline/wazuh-packages/wazuh-manager*.rpm
# apt-key add ./wazuh-offline/wazuh-files/GPG-KEY-WAZUH
# dpkg -i ./wazuh-offline/wazuh-packages/wazuh-manager*.deb
Enable and start the Wazuh manager service.
# systemctl daemon-reload
# systemctl enable wazuh-manager
# systemctl start wazuh-manager
Choose one option according to your operating system:
RPM-based operating system:
# chkconfig --add wazuh-manager
# service wazuh-manager start
Debian-based operating system:
# update-rc.d wazuh-manager defaults 95 10
# service wazuh-manager start
Run the following command to verify that the Wazuh manager status is active.
# systemctl status wazuh-manager
# service wazuh-manager status
Installing Filebeat
Filebeat must be installed and configured on the same server as the Wazuh manager.
Run the following command to install Filebeat.
# rpm -ivh ./wazuh-offline/wazuh-packages/filebeat*.rpm
# dpkg -i ./wazuh-offline/wazuh-packages/filebeat*.deb
Move a copy of the configuration files to the appropriate location. Make sure to type “yes” at the prompt to overwrite /etc/filebeat/filebeat.yml.

# cp ./wazuh-offline/wazuh-files/filebeat.yml /etc/filebeat/ &&\
cp ./wazuh-offline/wazuh-files/wazuh-template.json /etc/filebeat/ &&\
chmod go+r /etc/filebeat/wazuh-template.json
Edit /etc/filebeat/wazuh-template.json and change the value of "index.number_of_shards" to "1" for a single-node installation. When performing a distributed installation, this value can be changed according to your requirements.

{
  ...
  "settings": {
    ...
    "index.number_of_shards": "1",
    ...
  },
  ...
}
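To confirm the edit took effect before starting any services, the configured value can be grepped from the file. This is a sketch: /tmp/wazuh-template-sample.json below is a stand-in created only to illustrate the check; on a real host you would point grep at /etc/filebeat/wazuh-template.json.

```shell
# Create a minimal stand-in for the template file to illustrate the check.
cat > /tmp/wazuh-template-sample.json <<'EOF'
{
  "settings": {
    "index.number_of_shards": "1"
  }
}
EOF

# Extract the configured shard count; expect "1" for a single-node setup.
grep -o '"index.number_of_shards": "[0-9]*"' /tmp/wazuh-template-sample.json
```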
Edit the Filebeat configuration file /etc/filebeat/filebeat.yml:

All-in-one deployment

Change the values of username and password to the configured credentials. The default username and password are admin.

# Wazuh - Filebeat configuration file
output.elasticsearch:
  hosts: ["127.0.0.1:9200"]
  username: admin
  password: admin
Distributed deployment

Change the value of hosts to the IP address of the Wazuh indexer node. If you have more than one Wazuh indexer node, you can separate the addresses using commas. For example, hosts: ["10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200"]. Make sure to replace <indexer-node-*-ip> with the addresses or hostnames of the Wazuh indexer nodes specified in config.yml.

Also change the values of username and password to the configured credentials. The default username and password are admin.

# Wazuh - Filebeat configuration file
output.elasticsearch:
  hosts: ["<indexer-node-1-ip>:9200", "<indexer-node-2-ip>:9200", "<indexer-node-3-ip>:9200"]
  username: admin
  password: admin
Install the Wazuh module for Filebeat.
# tar -xzf ./wazuh-offline/wazuh-files/wazuh-filebeat-0.1.tar.gz -C /usr/share/filebeat/module
Replace <server-node-name> with your Wazuh server node certificate name, the same used in config.yml when creating the certificates. Then, move the certificates to their corresponding location.

# NODE_NAME=<server-node-name>
# mkdir /etc/filebeat/certs
# mv -n wazuh-certificates/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem
# mv -n wazuh-certificates/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem
# cp wazuh-certificates/root-ca.pem /etc/filebeat/certs/
# chmod 500 /etc/filebeat/certs
# chmod 400 /etc/filebeat/certs/*
# chown -R root:root /etc/filebeat/certs
Enable and start the Filebeat service.
# systemctl daemon-reload
# systemctl enable filebeat
# systemctl start filebeat
Choose one option according to the operating system used.
RPM-based operating system:
# chkconfig --add filebeat
# service filebeat start
Debian-based operating system:
# update-rc.d filebeat defaults 95 10
# service filebeat start
Run the following command to make sure Filebeat is successfully installed.
# filebeat test output
Expand the output to see an example response.
elasticsearch: https://127.0.0.1:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.3
    dial up... OK
  talk to server... OK
  version: 7.10.2
To check the number of shards that have been configured, you can run the following command.
# curl -k -u admin:admin "https://localhost:9200/_template/wazuh?pretty&filter_path=wazuh.settings.index.number_of_shards"
Expand the output to see an example response.
{
  "wazuh" : {
    "settings" : {
      "index" : {
        "number_of_shards" : "1"
      }
    }
  }
}
Your Wazuh server node is now successfully installed. Repeat this stage of the installation process for every Wazuh server node in your cluster, then expand the Wazuh cluster configuration for multi-node deployment section below and configure the Wazuh cluster. If you want a Wazuh server single-node cluster, everything is set and you can proceed directly with the Wazuh dashboard installation.
Wazuh cluster configuration for multi-node deployment
After completing the installation of the Wazuh server on every node, you need to configure one server node only as the master and the rest as workers.
Configuring the Wazuh server master node
Edit the following settings in the /var/ossec/etc/ossec.conf configuration file.

<cluster>
  <name>wazuh</name>
  <node_name>master-node</node_name>
  <node_type>master</node_type>
  <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>wazuh-master-address</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>

Parameters to be configured:

name: Indicates the name of the cluster.
node_name: Indicates the name of the current node.
node_type: Specifies the role of the node. It has to be set to master.
key: Key used to encrypt communication between cluster nodes. The key must be 32 characters long and the same for all of the nodes in the cluster. The following command can be used to generate a random key: openssl rand -hex 16.
port: Indicates the destination port for cluster communication.
bind_addr: Network IP to which the node is bound to listen for incoming requests (0.0.0.0 for any IP).
nodes: Address of the master node, which can be either an IP or a DNS name. This parameter must be specified in all nodes, including the master itself.
hidden: Shows or hides the cluster information in the generated alerts.
disabled: Indicates whether the node is enabled or disabled in the cluster. This option must be set to no.

Restart the Wazuh manager.
# systemctl restart wazuh-manager
# service wazuh-manager restart
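The 32-character cluster key referenced above can be generated and sanity-checked as follows (a sketch assuming openssl is available on the host):

```shell
# Generate a random 32-character cluster key (16 random bytes rendered as hex).
KEY=$(openssl rand -hex 16)
echo "$KEY"

# Sanity-check the length: the cluster key must be exactly 32 characters.
if [ "${#KEY}" -eq 32 ]; then
  echo "key length OK"
fi
```

Paste the generated value into the <key> element on every node; the key must be identical across the cluster.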
Configuring the Wazuh server worker nodes
Configure the cluster node by editing the following settings in the /var/ossec/etc/ossec.conf file.

<cluster>
  <name>wazuh</name>
  <node_name>worker-node</node_name>
  <node_type>worker</node_type>
  <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>wazuh-master-address</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>

Parameters to be configured:

name: Indicates the name of the cluster.
node_name: Indicates the name of the current node. Each node of the cluster must have a unique name.
node_type: Specifies the role of the node. It has to be set to worker.
key: The key created previously for the master node. It has to be the same for all the nodes.
nodes: Contains the address of the master node, which can be either an IP or a DNS name.
disabled: Indicates whether the node is enabled or disabled in the cluster. It has to be set to no.

Restart the Wazuh manager.
# systemctl restart wazuh-manager
# service wazuh-manager restart
Repeat these configuration steps for every Wazuh server worker node in your cluster.
Testing Wazuh server cluster
Execute the following command to verify that the Wazuh cluster is enabled and all the nodes are connected.
# /var/ossec/bin/cluster_control -l
Expand the output to see an example response. Note that 10.0.0.3, 10.0.0.4, and 10.0.0.5 are example IPs.

NAME          TYPE    VERSION  ADDRESS
master-node   master  4.2.0    10.0.0.3
worker-node1  worker  4.2.0    10.0.0.4
worker-node2  worker  4.2.0    10.0.0.5
Installing the Wazuh dashboard
Run the following command to install the Wazuh dashboard.
# rpm -ivh ./wazuh-offline/wazuh-packages/wazuh-dashboard*.rpm
# dpkg -i ./wazuh-offline/wazuh-packages/wazuh-dashboard*.deb
Replace <dashboard-node-name> with your Wazuh dashboard node name, the same used in config.yml to create the certificates, and move the certificates to their corresponding location.

# NODE_NAME=<dashboard-node-name>
# mkdir /etc/wazuh-dashboard/certs
# mv -n wazuh-certificates/$NODE_NAME.pem /etc/wazuh-dashboard/certs/dashboard.pem
# mv -n wazuh-certificates/$NODE_NAME-key.pem /etc/wazuh-dashboard/certs/dashboard-key.pem
# cp wazuh-certificates/root-ca.pem /etc/wazuh-dashboard/certs/
# chmod 500 /etc/wazuh-dashboard/certs
# chmod 400 /etc/wazuh-dashboard/certs/*
# chown -R wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs
Edit the /etc/wazuh-dashboard/opensearch_dashboards.yml file and replace the following values:

server.host: This setting specifies the host of the back-end server. To allow remote users to connect, set the value to the IP address or DNS name of the Wazuh dashboard server. The value 0.0.0.0 will accept all the available IP addresses of the host.

opensearch.hosts: The URLs of the Wazuh indexer instances to use for all your queries. The Wazuh dashboard can be configured to connect to multiple Wazuh indexer nodes in the same cluster. The addresses of the nodes can be separated by commas. For example, ["https://10.0.0.2:9200", "https://10.0.0.3:9200", "https://10.0.0.4:9200"].

server.host: 0.0.0.0
server.port: 443
opensearch.hosts: https://localhost:9200
opensearch.ssl.verificationMode: certificate
Enable and start Wazuh dashboard.
# systemctl daemon-reload
# systemctl enable wazuh-dashboard
# systemctl start wazuh-dashboard
Choose one option according to your operating system:
RPM-based operating system:
# chkconfig --add wazuh-dashboard
# service wazuh-dashboard start
Debian-based operating system:
# update-rc.d wazuh-dashboard defaults 95 10
# service wazuh-dashboard start
Only for distributed deployments: Edit the /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml file and replace the url value with the IP address or hostname of the Wazuh server master node.

hosts:
  - default:
      url: https://localhost
      port: 55000
      username: wazuh-wui
      password: wazuh-wui
      run_as: false
Access the web interface.
URL: https://<wazuh_server_ip>
Username: admin
Password: admin
Upon the first access to the Wazuh dashboard, the browser shows a warning message stating that the certificate was not issued by a trusted authority. An exception can be added in the advanced options of the web browser or, for increased security, the root-ca.pem file previously generated can be imported to the certificate manager of the browser. Alternatively, a certificate from a trusted authority can be configured.
Note
It is highly recommended to change the default Wazuh indexer passwords. To perform this action, see the Change the Wazuh indexer passwords section.
To uninstall all the Wazuh central components, see the Uninstalling the Wazuh central components section.
Next steps
Once the Wazuh environment is ready, Wazuh agents can be installed on every endpoint to be monitored. To install the Wazuh agents and start monitoring the endpoints, see the Wazuh agent installation section. If you need to install them offline, you can check the appropriate agent package to download for your monitored system in the Wazuh agent packages list section.