Offline installation
You can install Wazuh even when there is no connection to the Internet. Installing the solution offline involves downloading the Wazuh central components to later install them on a system with no Internet connection. The Wazuh server, the Wazuh indexer, and the Wazuh dashboard can be installed and configured on the same host in an all-in-one deployment, or each component can be installed on a separate host as a distributed deployment, depending on your environment needs.
For more information about the hardware requirements and the recommended operating systems, check the Requirements section.
Note
You need root user privileges to run all the commands described below.
Prerequisites
curl, tar, and setcap need to be installed on the target system where the offline installation will be carried out. gnupg might need to be installed as well on some Debian-based systems.
On some systems, the command cp is an alias for cp -i. You can check this by running alias cp. If this is your case, use unalias cp to avoid being asked for confirmation before overwriting files.
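As a quick sanity check before starting, you can confirm that the prerequisite tools are present. The check_tools helper below is a minimal sketch, not part of the Wazuh tooling:

```shell
#!/bin/sh
# Sketch: report the first missing prerequisite tool, if any.
# "check_tools" is a hypothetical helper, not part of the Wazuh installer.
check_tools() {
  for t in "$@"; do
    command -v "$t" >/dev/null 2>&1 || { echo "missing: $t"; return 1; }
  done
  echo "ok"
}

# On the target system you would run: check_tools curl tar setcap
check_tools tar
```

If a tool is reported missing, install it from your distribution's package repository before continuing.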
Download the packages and configuration files
Run the following commands from any Linux system with an Internet connection. They execute a script that downloads all the files required for the offline installation on x86_64 architectures. Select the package format to download.
# curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh
# chmod 744 wazuh-install.sh
# ./wazuh-install.sh -dw rpm
# curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh
# chmod 744 wazuh-install.sh
# ./wazuh-install.sh -dw deb
Download the certificates configuration file.
# curl -sO https://packages.wazuh.com/4.7/config.yml
Edit config.yml to prepare the certificates creation.
If you are performing an all-in-one deployment, replace "<indexer-node-ip>", "<wazuh-manager-ip>", and "<dashboard-node-ip>" with 127.0.0.1.
If you are performing a distributed deployment, replace the node names and IP values with the corresponding names and IP addresses. You need to do this for all the Wazuh server, Wazuh indexer, and Wazuh dashboard nodes. Add as many node fields as needed.
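For reference, an all-in-one config.yml prepared this way would look roughly as follows. The node names node-1, wazuh-1, and dashboard are illustrative; use the names that fit your deployment:

```yaml
nodes:
  # Wazuh indexer nodes
  indexer:
    - name: node-1
      ip: "127.0.0.1"
  # Wazuh server nodes
  server:
    - name: wazuh-1
      ip: "127.0.0.1"
  # Wazuh dashboard node
  dashboard:
    - name: dashboard
      ip: "127.0.0.1"
```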
Run the ./wazuh-certs-tool.sh script to create the certificates. For a multi-node cluster, these certificates need to be later deployed to all Wazuh instances in your cluster.
# curl -sO https://packages.wazuh.com/4.7/wazuh-certs-tool.sh
# chmod 744 wazuh-certs-tool.sh
# ./wazuh-certs-tool.sh --all
Copy or move the wazuh-offline.tar.gz file and the ./wazuh-certificates/ folder to a location accessible from the host(s) where the offline installation will be carried out. This can be done using scp.
Install Wazuh components from local files
In the working directory where you placed wazuh-offline.tar.gz and ./wazuh-certificates/, execute the following command to decompress the installation files:
# tar xf wazuh-offline.tar.gz
You can check the SHA512 checksums of the decompressed package files in wazuh-offline/wazuh-packages/. Find the official SHA512 checksums in the Packages list.
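A hypothetical helper for that check might look like the following; the package path and expected value are placeholders for the ones in the packages list:

```shell
#!/bin/sh
# Sketch: compare a file's SHA512 against an expected checksum.
# "verify_sha512" is a hypothetical helper; the expected value would come
# from the Wazuh packages list.
verify_sha512() {
  file="$1"
  expected="$2"
  actual=$(sha512sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "OK: $file"
  else
    echo "MISMATCH: $file"
  fi
}

# Example with placeholder values:
# verify_sha512 wazuh-offline/wazuh-packages/<package-file> "<published-sha512>"
```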
Installing the Wazuh indexer
Run the following commands to install the Wazuh indexer.
# rpm --import ./wazuh-offline/wazuh-files/GPG-KEY-WAZUH
# rpm -ivh ./wazuh-offline/wazuh-packages/wazuh-indexer*.rpm
# dpkg -i ./wazuh-offline/wazuh-packages/wazuh-indexer*.deb
Run the following commands, replacing <indexer-node-name> with the name of the Wazuh indexer node you are configuring, as defined in config.yml. For example, node-1. This deploys the SSL certificates that encrypt communications between the Wazuh central components.
# NODE_NAME=<indexer-node-name>
# mkdir /etc/wazuh-indexer/certs
# mv -n wazuh-certificates/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem
# mv -n wazuh-certificates/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
# mv wazuh-certificates/admin-key.pem /etc/wazuh-indexer/certs/
# mv wazuh-certificates/admin.pem /etc/wazuh-indexer/certs/
# cp wazuh-certificates/root-ca.pem /etc/wazuh-indexer/certs/
# chmod 500 /etc/wazuh-indexer/certs
# chmod 400 /etc/wazuh-indexer/certs/*
# chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs
Here you move the node certificate and key files, such as node-1.pem and node-1-key.pem, to their corresponding certs folder. They're specific to the node and are not required on the other nodes. However, note that the root-ca.pem certificate isn't moved but copied to the certs folder. This way, you can continue deploying it to other component folders in the next steps.
Edit /etc/wazuh-indexer/opensearch.yml and replace the following values:
network.host: Sets the address of this node for both HTTP and transport traffic. The node binds to this address and also uses it as its publish address. Accepts an IP address or a hostname. Use the same node address set in config.yml to create the SSL certificates.
node.name: Name of the Wazuh indexer node as defined in the config.yml file. For example, node-1.
cluster.initial_master_nodes: List of the names of the master-eligible nodes. These names are defined in the config.yml file. Uncomment the node-2 and node-3 lines, change the names, or add more lines, according to your config.yml definitions.
cluster.initial_master_nodes:
  - "node-1"
  - "node-2"
  - "node-3"
discovery.seed_hosts: List of the addresses of the master-eligible nodes. Each element can be either an IP address or a hostname. You may leave this setting commented if you are configuring the Wazuh indexer as a single node. For multi-node configurations, uncomment this setting and set the addresses of your master-eligible nodes.
discovery.seed_hosts:
  - "10.0.0.1"
  - "10.0.0.2"
  - "10.0.0.3"
plugins.security.nodes_dn: List of the Distinguished Names of the certificates of all the Wazuh indexer cluster nodes. Uncomment the lines for node-2 and node-3 and change the common names (CN) and values according to your settings and your config.yml definitions.
plugins.security.nodes_dn:
  - "CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US"
  - "CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US"
  - "CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US"
Enable and start the Wazuh indexer service.
# systemctl daemon-reload
# systemctl enable wazuh-indexer
# systemctl start wazuh-indexer
Choose one option according to the operating system used.
RPM-based operating system:
# chkconfig --add wazuh-indexer
# service wazuh-indexer start
Debian-based operating system:
# update-rc.d wazuh-indexer defaults 95 10
# service wazuh-indexer start
For multi-node clusters, repeat the previous steps on every Wazuh indexer node.
When all Wazuh indexer nodes are running, run the indexer-security-init.sh script on any Wazuh indexer node to load the new certificates information and start the cluster.
# /usr/share/wazuh-indexer/bin/indexer-security-init.sh
Run the following command to check that the installation was successful. Note that this command uses localhost; set your Wazuh indexer address if necessary.
# curl -XGET https://localhost:9200 -u admin:admin -k
An example response looks as follows:
{
  "name" : "node-1",
  "cluster_name" : "wazuh-cluster",
  "cluster_uuid" : "095jEW-oRJSFKLz5wmo5PA",
  "version" : {
    "number" : "7.10.2",
    "build_type" : "rpm",
    "build_hash" : "db90a415ff2fd428b4f7b3f800a51dc229287cb4",
    "build_date" : "2023-06-03T06:24:25.112415503Z",
    "build_snapshot" : false,
    "lucene_version" : "9.6.0",
    "minimum_wire_compatibility_version" : "7.10.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}
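The indexer can take a short while to answer after the service starts. A small retry wrapper, sketched below and not part of Wazuh, can be put around the check above:

```shell
#!/bin/sh
# Sketch: retry a command up to N times, one second apart.
# Example use on an indexer node (not run here):
#   retry 10 curl -sk -u admin:admin https://localhost:9200
retry() {
  n="$1"; shift
  i=0
  while [ "$i" -lt "$n" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```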
Installing the Wazuh server
Installing the Wazuh manager
Run the following commands to import the Wazuh key and install the Wazuh manager.
# rpm --import ./wazuh-offline/wazuh-files/GPG-KEY-WAZUH
# rpm -ivh ./wazuh-offline/wazuh-packages/wazuh-manager*.rpm
# dpkg -i ./wazuh-offline/wazuh-packages/wazuh-manager*.deb
Enable and start the Wazuh manager service.
# systemctl daemon-reload
# systemctl enable wazuh-manager
# systemctl start wazuh-manager
Choose one option according to your operating system:
RPM-based operating system:
# chkconfig --add wazuh-manager
# service wazuh-manager start
Debian-based operating system:
# update-rc.d wazuh-manager defaults 95 10
# service wazuh-manager start
Run the following command to verify that the Wazuh manager status is active.
# systemctl status wazuh-manager
# service wazuh-manager status
Installing Filebeat
Filebeat must be installed and configured on the same server as the Wazuh manager.
Run the following command to install Filebeat.
# rpm -ivh ./wazuh-offline/wazuh-packages/filebeat*.rpm
# dpkg -i ./wazuh-offline/wazuh-packages/filebeat*.deb
Move a copy of the configuration files to the appropriate location. Make sure to type "yes" at the prompt to overwrite /etc/filebeat/filebeat.yml.
# cp ./wazuh-offline/wazuh-files/filebeat.yml /etc/filebeat/ &&\
cp ./wazuh-offline/wazuh-files/wazuh-template.json /etc/filebeat/ &&\
chmod go+r /etc/filebeat/wazuh-template.json
Edit /etc/filebeat/wazuh-template.json and set the value of "index.number_of_shards" to "1" for a single-node installation. This value can be changed based on your requirements when performing a distributed installation.
{
  ...
  "settings": {
    ...
    "index.number_of_shards": "1",
    ...
  },
  ...
}
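If you prefer to script that edit, a plain sed substitution can do it. The snippet below operates on a stand-in file it creates itself; on a real node the target is /etc/filebeat/wazuh-template.json, and you should back it up first:

```shell
#!/bin/sh
# Sketch: force "index.number_of_shards" to "1" with sed.
# A tiny stand-in file is used here so the snippet is self-contained;
# on a real node the target is /etc/filebeat/wazuh-template.json.
file="/tmp/wazuh-template-demo.json"
printf '{ "settings": { "index.number_of_shards": "3" } }\n' > "$file"

# Replace whatever shard count is present with "1".
sed -i 's/"index.number_of_shards": *"[0-9]*"/"index.number_of_shards": "1"/' "$file"
grep -o '"index.number_of_shards": "1"' "$file"
```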
Edit the /etc/filebeat/filebeat.yml configuration file and replace the following value:
hosts: The list of Wazuh indexer nodes to connect to. You can use either IP addresses or hostnames. By default, the host is set to localhost with hosts: ["127.0.0.1:9200"]. Replace it with your Wazuh indexer address accordingly.
If you have more than one Wazuh indexer node, you can separate the addresses using commas. For example, hosts: ["10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200"]
# Wazuh - Filebeat configuration file
output.elasticsearch:
  hosts: ["10.0.0.1:9200"]
  protocol: https
  username: ${username}
  password: ${password}
Create a Filebeat keystore to securely store authentication credentials.
# filebeat keystore create
Add the default username and password admin:admin to the secrets keystore.
# echo admin | filebeat keystore add username --stdin --force
# echo admin | filebeat keystore add password --stdin --force
Install the Wazuh module for Filebeat.
# tar -xzf ./wazuh-offline/wazuh-files/wazuh-filebeat-0.3.tar.gz -C /usr/share/filebeat/module
Replace <SERVER_NODE_NAME> with your Wazuh server node certificate name, the same one used in config.yml when creating the certificates. For example, wazuh-1. Then, move the certificates to their corresponding location.
# NODE_NAME=<SERVER_NODE_NAME>
# mkdir /etc/filebeat/certs
# mv -n wazuh-certificates/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem
# mv -n wazuh-certificates/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem
# cp wazuh-certificates/root-ca.pem /etc/filebeat/certs/
# chmod 500 /etc/filebeat/certs
# chmod 400 /etc/filebeat/certs/*
# chown -R root:root /etc/filebeat/certs
Enable and start the Filebeat service.
# systemctl daemon-reload
# systemctl enable filebeat
# systemctl start filebeat
Choose one option according to the operating system used.
RPM-based operating system:
# chkconfig --add filebeat
# service filebeat start
Debian-based operating system:
# update-rc.d filebeat defaults 95 10
# service filebeat start
Run the following command to make sure Filebeat is successfully installed.
# filebeat test output
An example response looks as follows:
elasticsearch: https://127.0.0.1:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.3
    dial up... OK
  talk to server... OK
  version: 7.10.2
To check the number of shards that have been configured, you can run the following command. Note that this command uses localhost; set your Wazuh indexer address if necessary.
# curl -k -u admin:admin "https://localhost:9200/_template/wazuh?pretty&filter_path=wazuh.settings.index.number_of_shards"
An example response looks as follows:
{
  "wazuh" : {
    "settings" : {
      "index" : {
        "number_of_shards" : "1"
      }
    }
  }
}
Your Wazuh server node is now successfully installed. Repeat this stage of the installation process for every Wazuh server node in your cluster, expand the Wazuh cluster configuration for multi-node deployment section below, and then carry on with configuring the Wazuh cluster. If you want a Wazuh server single-node cluster, everything is set and you can proceed directly with the Wazuh dashboard installation.
Wazuh cluster configuration for multi-node deployment
After completing the installation of the Wazuh server on every node, you need to configure one server node only as the master and the rest as workers.
Configuring the Wazuh server master node
Edit the following settings in the /var/ossec/etc/ossec.conf configuration file.
<cluster>
  <name>wazuh</name>
  <node_name>master-node</node_name>
  <node_type>master</node_type>
  <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>wazuh-master-address</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>
Parameters to be configured:
name: It indicates the name of the cluster.
node_name: It indicates the name of the current node.
node_type: It specifies the role of the node. It has to be set to master.
key: The key used to encrypt communication between cluster nodes. The key must be 32 characters long and the same for all of the nodes in the cluster. The following command can be used to generate a random key: openssl rand -hex 16.
port: It indicates the destination port for cluster communication.
bind_addr: It is the network IP to which the node is bound to listen for incoming requests (0.0.0.0 for any IP).
nodes: It is the address of the master node and can be either an IP or a DNS name. This parameter must be specified in all nodes, including the master itself.
hidden: It shows or hides the cluster information in the generated alerts.
disabled: It indicates whether the node is enabled or disabled in the cluster. This option must be set to no.
Restart the Wazuh manager.
# systemctl restart wazuh-manager
# service wazuh-manager restart
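For the key value, the openssl command mentioned above produces exactly the required 32-character key, since 16 random bytes render as 32 hexadecimal characters:

```shell
#!/bin/sh
# Generate a random 32-character hexadecimal cluster key.
# Copy the output into the <key> tag on every node of the cluster.
key=$(openssl rand -hex 16)
echo "$key"
```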
Configuring the Wazuh server worker nodes
Configure the cluster node by editing the following settings in the /var/ossec/etc/ossec.conf file.
<cluster>
  <name>wazuh</name>
  <node_name>worker-node</node_name>
  <node_type>worker</node_type>
  <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>wazuh-master-address</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>
Parameters to be configured:
name: It indicates the name of the cluster.
node_name: It indicates the name of the current node. Each node of the cluster must have a unique name.
node_type: It specifies the role of the node. It has to be set to worker.
key: The key created previously for the master node. It has to be the same for all the nodes.
nodes: It has to contain the address of the master node and can be either an IP or a DNS name.
disabled: It indicates whether the node is enabled or disabled in the cluster. It has to be set to no.
Restart the Wazuh manager.
# systemctl restart wazuh-manager
# service wazuh-manager restart
Repeat these configuration steps for every Wazuh server worker node in your cluster.
Testing Wazuh server cluster
To verify that the Wazuh cluster is enabled and all the nodes are connected, execute the following command:
# /var/ossec/bin/cluster_control -l
An example output of the command looks as follows:
NAME          TYPE    VERSION  ADDRESS
master-node   master  4.7.5    10.0.0.3
worker-node1  worker  4.7.5    10.0.0.4
worker-node2  worker  4.7.5    10.0.0.5
Note that 10.0.0.3, 10.0.0.4, and 10.0.0.5 are example IPs.
Installing the Wazuh dashboard
Run the following commands to install the Wazuh dashboard.
# rpm --import ./wazuh-offline/wazuh-files/GPG-KEY-WAZUH
# rpm -ivh ./wazuh-offline/wazuh-packages/wazuh-dashboard*.rpm
# dpkg -i ./wazuh-offline/wazuh-packages/wazuh-dashboard*.deb
Replace <DASHBOARD_NODE_NAME> with your Wazuh dashboard node name, the same one used in config.yml to create the certificates. For example, dashboard. Then, move the certificates to their corresponding location.
# NODE_NAME=<DASHBOARD_NODE_NAME>
# mkdir /etc/wazuh-dashboard/certs
# mv -n wazuh-certificates/$NODE_NAME.pem /etc/wazuh-dashboard/certs/dashboard.pem
# mv -n wazuh-certificates/$NODE_NAME-key.pem /etc/wazuh-dashboard/certs/dashboard-key.pem
# cp wazuh-certificates/root-ca.pem /etc/wazuh-dashboard/certs/
# chmod 500 /etc/wazuh-dashboard/certs
# chmod 400 /etc/wazuh-dashboard/certs/*
# chown -R wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs
Edit the /etc/wazuh-dashboard/opensearch_dashboards.yml file and replace the following values:
server.host: This setting specifies the host of the backend server. To allow remote users to connect, set the value to the IP address or DNS name of the Wazuh dashboard. The value 0.0.0.0 will accept all the available IP addresses of the host.
opensearch.hosts: The URLs of the Wazuh indexer instances to use for all your queries. The Wazuh dashboard can be configured to connect to multiple Wazuh indexer nodes in the same cluster. The addresses of the nodes can be separated by commas. For example, ["https://10.0.0.2:9200", "https://10.0.0.3:9200", "https://10.0.0.4:9200"]
server.host: 0.0.0.0
server.port: 443
opensearch.hosts: https://localhost:9200
opensearch.ssl.verificationMode: certificate
Enable and start the Wazuh dashboard.
# systemctl daemon-reload
# systemctl enable wazuh-dashboard
# systemctl start wazuh-dashboard
Choose one option according to your operating system:
RPM-based operating system:
# chkconfig --add wazuh-dashboard
# service wazuh-dashboard start
Debian-based operating system:
# update-rc.d wazuh-dashboard defaults 95 10
# service wazuh-dashboard start
Only for distributed deployments: edit the /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml file and replace the url value with the IP address or hostname of the Wazuh server master node.
hosts:
  - default:
      url: https://localhost
      port: 55000
      username: wazuh-wui
      password: wazuh-wui
      run_as: false
Run the following command to verify the Wazuh dashboard service is active.
# systemctl status wazuh-dashboard
# service wazuh-dashboard status
Access the web interface.
URL: https://<wazuh_server_ip>
Username: admin
Password: admin
Upon the first access to the Wazuh dashboard, the browser shows a warning message stating that the certificate was not issued by a trusted authority. An exception can be added in the advanced options of the web browser or, for increased security, the root-ca.pem file previously generated can be imported to the certificate manager of the browser. Alternatively, a certificate from a trusted authority can be configured.
Securing your Wazuh installation
You have now installed and configured all the Wazuh central components. We recommend changing the default credentials to protect your infrastructure from possible attacks.
Select your deployment type and follow the instructions to change the default passwords for both the Wazuh API and the Wazuh indexer users.
Use the Wazuh passwords tool to change all the internal users' passwords.
# /usr/share/wazuh-indexer/plugins/opensearch-security/tools/wazuh-passwords-tool.sh --change-all --admin-user wazuh --admin-password wazuh
INFO: The password for user admin is yWOzmNA.?Aoc+rQfDBcF71KZp?1xd7IO
INFO: The password for user kibanaserver is nUa+66zY.eDF*2rRl5GKdgLxvgYQA+wo
INFO: The password for user kibanaro is 0jHq.4i*VAgclnqFiXvZ5gtQq1D5LCcL
INFO: The password for user logstash is hWW6U45rPoCT?oR.r.Baw2qaWz2iH8Ml
INFO: The password for user readall is PNt5K+FpKDMO2TlxJ6Opb2D0mYl*I7FQ
INFO: The password for user snapshotrestore is +GGz2noZZr2qVUK7xbtqjUup049tvLq.
WARNING: Wazuh indexer passwords changed. Remember to update the password in the Wazuh dashboard and Filebeat nodes if necessary, and restart the services.
INFO: The password for Wazuh API user wazuh is JYWz5Zdb3Yq+uOzOPyUU4oat0n60VmWI
INFO: The password for Wazuh API user wazuh-wui is +fLddaCiZePxh24*?jC0nyNmgMGCKE+2
INFO: Updated wazuh-wui user password in wazuh dashboard. Remember to restart the service.
On any Wazuh indexer node, use the Wazuh passwords tool to change the passwords of the Wazuh indexer users.
# /usr/share/wazuh-indexer/plugins/opensearch-security/tools/wazuh-passwords-tool.sh --change-all
INFO: Wazuh API admin credentials not provided, Wazuh API passwords not changed.
INFO: The password for user admin is wcAny.XUwOVWHFy.+7tW9l8gUW1L8N3j
INFO: The password for user kibanaserver is qy6fBrNOI4fD9yR9.Oj03?pihN6Ejfpp
INFO: The password for user kibanaro is Nj*sSXSxwntrx3O7m8ehrgdHkxCc0dna
INFO: The password for user logstash is nQg1Qw0nIQFZXUJc8r8+zHVrkelch33h
INFO: The password for user readall is s0iWAei?RXObSDdibBfzSgXdhZCD9kH4
INFO: The password for user snapshotrestore is Mb2EHw8SIc1d.oz.nM?dHiPBGk7s?UZB
WARNING: Wazuh indexer passwords changed. Remember to update the password in the Wazuh dashboard and Filebeat nodes if necessary, and restart the services.
On your Wazuh server master node, change the default passwords of the admin users: wazuh and wazuh-wui. Note that the commands below use localhost; set your Wazuh manager IP address if necessary.
Get an authorization TOKEN.
# TOKEN=$(curl -u wazuh-wui:wazuh-wui -k -X GET "https://localhost:55000/security/user/authenticate?raw=true")
Change the wazuh user credentials (ID 1). Select a password between 8 and 64 characters long; it must contain at least one uppercase letter, one lowercase letter, a number, and a symbol. See PUT /security/users/{user_id} to learn more.
curl -k -X PUT "https://localhost:55000/security/users/1" \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{ "password": "SuperS3cretPassword!" }'
{"data": {"affected_items": [{"id": 1, "username": "wazuh", "allow_run_as": true, "roles": [1]}], "total_affected_items": 1, "total_failed_items": 0, "failed_items": []}, "message": "User was successfully updated", "error": 0}
Change the wazuh-wui user credentials (ID 2).
curl -k -X PUT "https://localhost:55000/security/users/2" \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{ "password": "SuperS3cretPassword!" }'
{"data": {"affected_items": [{"id": 2, "username": "wazuh-wui", "allow_run_as": true, "roles": [1]}], "total_affected_items": 1, "total_failed_items": 0, "failed_items": []}, "message": "User was successfully updated", "error": 0}
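A response can also be checked for success from the shell. The api_ok function below is a hypothetical helper that only inspects the error field of the JSON shown above; a real script might use jq instead:

```shell
#!/bin/sh
# Sketch: check whether a Wazuh API JSON response reports success.
# "api_ok" is a hypothetical helper, not part of the Wazuh API or CLI.
api_ok() {
  case "$1" in
    *'"error": 0'*) return 0 ;;
    *)              return 1 ;;
  esac
}

# Example response, abbreviated from the one shown above:
response='{"message": "User was successfully updated", "error": 0}'
api_ok "$response" && echo "password updated"
```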
See the Securing the Wazuh API section for additional security configurations.
Note
Remember to store these passwords securely.
On all your Wazuh server nodes, run the following command to update the admin password in the Filebeat keystore. Replace <ADMIN_PASSWORD> with the random password generated in the first step.
# echo <ADMIN_PASSWORD> | filebeat keystore add password --stdin --force
Restart Filebeat to apply the change.
# systemctl restart filebeat
# service filebeat restart
Note
Repeat steps 3 and 4 on every Wazuh server node.
On your Wazuh dashboard node, run the following command to update the kibanaserver password in the Wazuh dashboard keystore. Replace <KIBANASERVER_PASSWORD> with the random password generated in the first step.
# echo <KIBANASERVER_PASSWORD> | /usr/share/wazuh-dashboard/bin/opensearch-dashboards-keystore --allow-root add -f --stdin opensearch.password
Update the /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml configuration file with the new wazuh-wui password generated in the second step.
hosts:
  - default:
      url: https://localhost
      port: 55000
      username: wazuh-wui
      password: "<wazuh-wui-password>"
      run_as: false
Restart the Wazuh dashboard to apply the changes.
# systemctl restart wazuh-dashboard
# service wazuh-dashboard restart
Next steps
Once the Wazuh environment is ready, Wazuh agents can be installed on every endpoint to be monitored. To install the Wazuh agents and start monitoring the endpoints, see the Wazuh agent installation section. If you need to install them offline, you can check the appropriate agent package to download for your monitored system in the Wazuh agent packages list section.
To uninstall all the Wazuh central components, see the Uninstalling the Wazuh central components section.