Deployment

This section covers deploying Wazuh on Kubernetes for Amazon EKS and Local Kubernetes clusters, from environment preparation to verifying that all components are running correctly.

Clone the Wazuh Kubernetes repository for the necessary services and pods:

$ git clone https://github.com/wazuh/wazuh-kubernetes.git -b v4.14.0 --depth=1
$ cd wazuh-kubernetes

Setup SSL certificates

Perform the steps below to generate the required certificates for the deployment:

  1. Generate self-signed certificates for the Wazuh indexer cluster using the script at wazuh/certs/indexer_cluster/generate_certs.sh or provide your own certificates.

    # wazuh/certs/indexer_cluster/generate_certs.sh
    
    Root CA
    Admin cert
    create: admin-key-temp.pem
    create: admin-key.pem
    create: admin.csr
    Ignoring -days without -x509; not generating a certificate
    create: admin.pem
    Certificate request self-signature ok
    subject=C=US, L=California, O=Company, CN=admin
    * Node cert
    create: node-key-temp.pem
    create: node-key.pem
    create: node.csr
    Ignoring -days without -x509; not generating a certificate
    create: node.pem
    Certificate request self-signature ok
    subject=C=US, L=California, O=Company, CN=indexer
    * dashboard cert
    create: dashboard-key-temp.pem
    create: dashboard-key.pem
    create: dashboard.csr
    Ignoring -days without -x509; not generating a certificate
    create: dashboard.pem
    Certificate request self-signature ok
    subject=C=US, L=California, O=Company, CN=dashboard
    * Filebeat cert
    create: filebeat-key-temp.pem
    create: filebeat-key.pem
    create: filebeat.csr
    Ignoring -days without -x509; not generating a certificate
    create: filebeat.pem
    Certificate request self-signature ok
    subject=C=US, L=California, O=Company, CN=filebeat
    
  2. Generate self-signed certificates for the Wazuh dashboard using the script at wazuh/certs/dashboard_http/generate_certs.sh or provide your own certificates.

    # wazuh/certs/dashboard_http/generate_certs.sh
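
    If you provide your own dashboard certificate instead of running the script, a self-signed pair with the file names expected by the kustomization shown below can be generated with openssl, for example (a minimal sketch; adjust the subject and validity to your environment):

    # openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout wazuh/certs/dashboard_http/key.pem \
        -out wazuh/certs/dashboard_http/cert.pem \
        -subj "/C=US/L=California/O=Company/CN=dashboard"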
    

    The required certificates are imported via secretGenerator in the kustomization.yml file:

    secretGenerator:
        - name: indexer-certs
          files:
            - certs/indexer_cluster/root-ca.pem
            - certs/indexer_cluster/node.pem
            - certs/indexer_cluster/node-key.pem
            - certs/indexer_cluster/dashboard.pem
            - certs/indexer_cluster/dashboard-key.pem
            - certs/indexer_cluster/admin.pem
            - certs/indexer_cluster/admin-key.pem
            - certs/indexer_cluster/filebeat.pem
            - certs/indexer_cluster/filebeat-key.pem
        - name: dashboard-certs
          files:
            - certs/dashboard_http/cert.pem
            - certs/dashboard_http/key.pem
            - certs/indexer_cluster/root-ca.pem
    

Setup storage class (optional for non-EKS cluster)

The storage class provisioner varies depending on your cluster. Edit the envs/local-env/storage-class.yaml file to set the provisioner that matches your cluster type.
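
For example, on a MicroK8s cluster you could point the manifest at the hostpath provisioner with a quick in-place edit (a sketch; the exact field layout of storage-class.yaml in your checkout may differ):

# sed -i 's|provisioner:.*|provisioner: microk8s.io/hostpath|' envs/local-env/storage-class.yaml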

Check your storage class by running kubectl get sc:

# kubectl get sc
NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
elk-gp2                       microk8s.io/hostpath   Delete          Immediate           false                  67d
microk8s-hostpath (default)   microk8s.io/hostpath   Delete          Immediate           false                  54d

In this example, the provisioner column shows microk8s.io/hostpath because the cluster runs on MicroK8s.

Apply all manifests

There are two variants of the manifest: one for EKS clusters, located in envs/eks, and another for all other cluster types, located in envs/local-env.

You can adjust cluster resources by editing the patches in envs/eks/ or envs/local-env/, tuning the CPU, memory, and persistent volume storage of each cluster object. To revert a change, modify the patch values or remove the patch from kustomization.yml.
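
Before applying, you can render the manifests locally to review the effect of your patches, for example:

$ kubectl kustomize envs/local-env/ | less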

Deploy the cluster using the kustomization.yml file:

  • EKS cluster

    # kubectl apply -k envs/eks/
    
  • Other cluster types

    # kubectl apply -k envs/local-env/
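
Once the manifests are applied, you can confirm that the certificate secrets defined in secretGenerator were created (kustomize may append a content hash to the secret names):

$ kubectl get secrets -n wazuh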
    

Verifying the deployment

Namespace

Run the following command to check that the Wazuh namespace is active:

$ kubectl get namespaces | grep wazuh
wazuh         Active    12m

Services

Run the command below to view all running services in the Wazuh namespace:

$ kubectl get services -n wazuh
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP        PORT(S)                          AGE
indexer               ClusterIP      xxx.yy.zzz.24    <none>             9200/TCP                         12m
dashboard             ClusterIP      xxx.yy.zzz.76    <none>             5601/TCP                         11m
wazuh                 LoadBalancer   xxx.yy.zzz.209   internal-a7a8...   1515:32623/TCP,55000:30283/TCP   9m
wazuh-cluster         ClusterIP      None             <none>             1516/TCP                         9m
wazuh-indexer         ClusterIP      None             <none>             9300/TCP                         12m
wazuh-workers         LoadBalancer   xxx.yy.zzz.26    internal-a7f9...   1514:31593/TCP                   9m

Deployments

Run the command below to check the deployments in the Wazuh namespace:

$ kubectl get deployments -n wazuh
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
wazuh-dashboard  1         1         1            1           11m

Statefulset

Run the command below to check the active StatefulSets in the Wazuh namespace:

$ kubectl get statefulsets -n wazuh
NAME                   READY   AGE
wazuh-indexer          3/3     15m
wazuh-manager-master   1/1     15m
wazuh-manager-worker   2/2     15m

Pods

Run the command below to view the pod status in the Wazuh namespace:

$ kubectl get pods -n wazuh
NAME                              READY     STATUS    RESTARTS   AGE
wazuh-indexer-0                   1/1       Running   0          15m
wazuh-dashboard-f4d9c7944-httsd   1/1       Running   0          14m
wazuh-manager-master-0            1/1       Running   0          12m
wazuh-manager-worker-0-0          1/1       Running   0          11m
wazuh-manager-worker-1-0          1/1       Running   0          11m

Accessing Wazuh dashboard

If you created domain names for the services, access the dashboard using the URL https://wazuh.<YOUR_DOMAIN>.com. Otherwise, access the Wazuh dashboard using the external IP address or hostname that your cloud provider assigned.

Check the services to view the external IP:

$ kubectl get services -o wide -n wazuh
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP                      PORT(S)                          AGE       SELECTOR
dashboard             LoadBalancer   xxx.xx.xxx.xxx   xxx.xx.xxx.xxx                   80:31831/TCP,443:30974/TCP       15m       app=wazuh-dashboard
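
To print just the external address, you can query the service status directly; on EKS the address is usually a hostname, while other providers may assign an IP (use .ip instead of .hostname in that case):

$ kubectl get service dashboard -n wazuh -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'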

Note

For a local cluster deployment where the external IP address is not accessible, you can access the Wazuh dashboard using a port-forward as shown below:

# kubectl -n wazuh port-forward --address <KUBERNETES_HOST> service/dashboard 8443:443

Replace <KUBERNETES_HOST> with the IP address of the Kubernetes host.

The Wazuh dashboard is accessible at https://<KUBERNETES_HOST>:8443.

The default credentials are admin:SecretPassword.

Change the password of Wazuh users

Improve security by changing the default passwords of the Wazuh users. There are two categories of Wazuh users: Wazuh indexer users and Wazuh server API users.

Wazuh indexer users

Before starting the password change process, log out of your Wazuh dashboard session. Failing to do so might result in errors when accessing Wazuh after changing user passwords due to persistent session cookies.

To change the password of the default admin and kibanaserver users, do the following.

Warning

If you have custom users, add them to the internal_users.yml file. Otherwise, executing this procedure deletes them.

Setting the new password hash

  1. Start a Bash shell in the wazuh-indexer-0 pod.

    # kubectl exec -it wazuh-indexer-0 -n wazuh -- /bin/bash
    
  2. Run these commands to generate the hash of your new password. When prompted, input the new password and press Enter.

    $ export JAVA_HOME=/usr/share/wazuh-indexer/jdk
    $ bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/hash.sh
    

    Warning

    Do not use the $ or & characters in your new password. These characters can cause errors during deployment.

  3. Copy the generated hash and exit the Bash shell.

  4. Open the wazuh/indexer_stack/wazuh-indexer/indexer_conf/internal_users.yml file. Locate the block for the user whose password you want to change and replace the hash:

    • admin user

      ...
      admin:
          hash: "<ADMIN_PASSWORD_HASH>"
          reserved: true
          backend_roles:
          - "admin"
          description: "Demo admin user"
      
      ...
      

      Replace <ADMIN_PASSWORD_HASH> with the password hash generated in the previous step.

    • kibanaserver user

      ...
      kibanaserver:
          hash: "<KIBANASERVER_PASSWORD_HASH>"
          reserved: true
          description: "Demo kibanaserver user"
      
      ...
      

      Replace <KIBANASERVER_PASSWORD_HASH> with the password hash generated in the previous step.

Setting the new password

  1. Encode your new password in base64 format. Use the -n option with the echo command as shown below so that no trailing newline character is included in the encoded value.

    # echo -n "NewPassword" | base64
    
  2. Edit the indexer or dashboard secrets configuration file as follows. Replace the value of the password field with the base64 encoded password.

    • To change the admin user password, edit the wazuh/secrets/indexer-cred-secret.yaml file.

      ...
      apiVersion: v1
      kind: Secret
      metadata:
          name: indexer-cred
      data:
          username: YWRtaW4=              # string "admin" base64 encoded
          password: U2VjcmV0UGFzc3dvcmQ=  # string "SecretPassword" base64 encoded
      ...
      
    • To change the kibanaserver user password, edit the wazuh/secrets/dashboard-cred-secret.yaml file.

      ...
      apiVersion: v1
      kind: Secret
      metadata:
          name: dashboard-cred
      data:
          username: a2liYW5hc2VydmVy  # string "kibanaserver" base64 encoded
          password: a2liYW5hc2VydmVy  # string "kibanaserver" base64 encoded
      ...
      

Applying the changes

  1. Apply the manifest changes:

    • EKS cluster

      # kubectl apply -k envs/eks/
      
    • Other cluster types

      # kubectl apply -k envs/local-env/
      
  2. Start a new Bash shell in the wazuh-indexer-0 pod.

    # kubectl exec -it wazuh-indexer-0 -n wazuh -- /bin/bash
    
  3. Set the following variables:

    export INSTALLATION_DIR=/usr/share/wazuh-indexer
    export CONFIG_DIR=$INSTALLATION_DIR/config
    CACERT=$CONFIG_DIR/certs/root-ca.pem
    KEY=$CONFIG_DIR/certs/admin-key.pem
    CERT=$CONFIG_DIR/certs/admin.pem
    export JAVA_HOME=/usr/share/wazuh-indexer/jdk
    
  4. Wait for the Wazuh indexer to initialize properly. The waiting time, typically two to five minutes, depends on the size of the cluster, the assigned resources, and the network speed. Then, run the securityadmin.sh script to apply all changes.

    $ bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/securityadmin.sh -cd $CONFIG_DIR/opensearch-security/ -nhnv -cacert $CACERT -cert $CERT -key $KEY -p 9200 -icl -h $NODE_NAME
    
  5. Force a rollout restart of the Wazuh dashboard deployment to update the component credentials.

    $ kubectl rollout restart deploy/wazuh-dashboard -n wazuh
    
  6. Delete all Wazuh manager pods to update the component credentials.

    $ kubectl delete -n wazuh pod/wazuh-manager-master-0 pod/wazuh-manager-worker-0-0 pod/wazuh-manager-worker-1-0
    
  7. Log in to the Wazuh dashboard using the new credentials.
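
You can also confirm the new admin password directly against the Wazuh indexer API, for example from inside the wazuh-indexer-0 pod (assuming curl is available in the indexer image; otherwise run it from any host or pod that can reach the indexer service on port 9200):

$ kubectl exec -it wazuh-indexer-0 -n wazuh -- curl -k -u admin:<NEW_ADMIN_PASSWORD> "https://localhost:9200/_cluster/health?pretty"

Replace <NEW_ADMIN_PASSWORD> with the password you set above.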

Wazuh server API users

The wazuh-wui user is the default user for connecting to the Wazuh server API. Follow the steps below to change its password.

Note

The password for Wazuh server API users must be between 8 and 64 characters long. It must contain at least one uppercase and one lowercase letter, a number, and a symbol.

  1. Encode your new password in base64 format. Use the -n option with the echo command as shown below so that no trailing newline character is included in the encoded value.

    # echo -n "NewPassword" | base64
    
  2. Edit the wazuh/secrets/wazuh-api-cred-secret.yaml file and replace the value of the password field.

    apiVersion: v1
    kind: Secret
    metadata:
        name: wazuh-api-cred
        namespace: wazuh
    data:
        username: d2F6dWgtd3Vp          # string "wazuh-wui" base64 encoded
        password: UGFzc3dvcmQxMjM0LmE=  # string "Password1234.a" base64 encoded
    
  3. Apply the manifest changes. Use envs/local-env/ instead of envs/eks/ if you are not deploying on EKS.

    # kubectl apply -k envs/eks/
    
  4. Restart the Wazuh manager and Wazuh dashboard pods by deleting them so that Kubernetes recreates them with the updated credentials. You can then verify the new credentials against the Wazuh server API, as shown below.

    # kubectl delete -n wazuh pod wazuh-manager-master-0 wazuh-manager-worker-0-0 wazuh-manager-worker-1-0 wazuh-dashboard-f4d9c7944-httsd
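
You can verify the new wazuh-wui password by requesting an authentication token from the Wazuh server API (port 55000, exposed by the wazuh LoadBalancer service shown earlier):

# curl -k -u wazuh-wui:<NEW_PASSWORD> -X POST "https://<WAZUH_MANAGER_IP_OR_HOSTNAME>:55000/security/user/authenticate"

A JSON response containing a token confirms that the new credentials are in effect.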
    

Agents

The Wazuh agent can be deployed directly within your Kubernetes environment to monitor workloads, pods, and container activity. This setup provides visibility into the cluster’s runtime behavior, helping detect threats and configuration issues at the container and node levels.

There are two main deployment models for Wazuh agents in Kubernetes:

  • DaemonSet deployment where one Wazuh agent runs on each node to monitor the node and all containers on that node.

  • Sidecar deployment where the Wazuh agent runs as a companion container in the same pod as a specific application to monitor that application only.

Deploying the Wazuh Agent as a DaemonSet

This is the most common approach for full-cluster monitoring. Each node runs one agent, ensuring complete coverage without manual intervention when new nodes are added.

  1. Create the Wazuh Agent DaemonSet manifest wazuh-agent-daemonset.yaml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: wazuh-daemonset
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: wazuh-agent
      namespace: wazuh-daemonset
    spec:
      selector:
        matchLabels:
          app: wazuh-agent
      template:
        metadata:
          labels:
            app: wazuh-agent
        spec:
          serviceAccountName: default
          terminationGracePeriodSeconds: 20
          containers:
            - name: wazuh-agent
              image: wazuh/wazuh-agent:4.13.1
              imagePullPolicy: IfNotPresent
              env:
                - name: WAZUH_MANAGER
                  value: "<WAZUH_MANAGER_IP_OR_HOSTNAME>"
                - name: WAZUH_PORT
                  value: "1514"
                - name: WAZUH_PROTOCOL
                  value: "tcp"
                - name: WAZUH_REGISTRATION_SERVER
                  value: "<WAZUH_MANAGER_IP_OR_HOSTNAME>"
                - name: WAZUH_REGISTRATION_PORT
                  value: "1515"
                - name: WAZUH_AGENT_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
              volumeMounts:
                - name: varlog
                  mountPath: /var/log
                  readOnly: true
                - name: dockersock
                  mountPath: /var/run/docker.sock
                  readOnly: true
                - name: ossec-data
                  mountPath: /var/ossec
              securityContext:
                runAsUser: 0
                allowPrivilegeEscalation: true
                capabilities:
                  add: ["SETGID","SETUID"]
          volumes:
            - name: varlog
              hostPath:
                path: /var/log
            - name: dockersock
              hostPath:
                path: /var/run/docker.sock
            - name: ossec-data
              emptyDir: {}
    

    Replace <WAZUH_MANAGER_IP_OR_HOSTNAME> with the Wazuh manager IP address or hostname.

  2. Create the namespace:

    # kubectl create namespace wazuh-daemonset
    
  3. Deploy the Wazuh agent:

    # kubectl apply -f wazuh-agent-daemonset.yaml
    
  4. Verify that the Wazuh agent was deployed across all nodes with the following command:

    # kubectl get pods -n wazuh-daemonset -o wide
    
    NAME                READY   STATUS    RESTARTS        AGE   IP            NODE           NOMINATED NODE   READINESS GATES
    wazuh-agent-jmqrx   1/1     Running   2 (3m27s ago)   10h   10.xxx.x.35   minikube       <none>           <none>
    wazuh-agent-xjg7b   1/1     Running   2 (83s ago)     10h   <none>        minikube-m02   <none>           <none>
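
You can also confirm that the agents enrolled with the Wazuh manager by listing them from the manager master pod (agent_control ships with the Wazuh manager; the command below assumes the manager deployed earlier in the wazuh namespace):

$ kubectl exec -n wazuh wazuh-manager-master-0 -- /var/ossec/bin/agent_control -l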
    

Deploying the Wazuh Agent as a Sidecar

The sidecar approach is ideal for targeted monitoring of sensitive applications or workloads that require isolated log collection. Perform the steps below to deploy the Wazuh agent as a sidecar:

  1. Modify your application’s deployment to include the Wazuh agent container. In the example below, we deploy the Wazuh agent alongside an Apache Tomcat application using the wazuh-agent-sidecar.yaml manifest:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: wazuh-sidecar
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: tomcat-wazuh-agent
      namespace: wazuh-sidecar
    spec:
      serviceName: tomcat-app
      replicas: 1
      selector:
        matchLabels:
          app: tomcat-wazuh-agent
      template:
        metadata:
          labels:
            app: tomcat-wazuh-agent
        spec:
          terminationGracePeriodSeconds: 20
          securityContext:
            fsGroup: 999
            fsGroupChangePolicy: OnRootMismatch
    
          initContainers:
            - name: cleanup-ossec-stale
              image: busybox:1.36
              imagePullPolicy: IfNotPresent
              securityContext:
                runAsUser: 0
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  mkdir -p /agent/var/run /agent/queue/ossec
                  rm -f /agent/var/run/*.pid || true
                  rm -f /agent/queue/ossec/*.lock || true
                  echo "Cleanup complete. Ready for next init step."
              volumeMounts:
                - name: wazuh-agent-data
                  mountPath: /agent
    
            - name: seed-ossec-tree
              image: wazuh/wazuh-agent:4.13.0
              imagePullPolicy: IfNotPresent
              securityContext:
                runAsUser: 0
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  if [ ! -d /agent/bin ]; then
                    echo "Seeding /var/ossec into PVC..."
                    tar -C /var/ossec -cf - . | tar -C /agent -xpf -
                  fi
              volumeMounts:
                - name: wazuh-agent-data
                  mountPath: /agent
    
            - name: write-ossec-config
              image: busybox:1.36
              imagePullPolicy: IfNotPresent
              securityContext:
                runAsUser: 0
              env:
                - name: WAZUH_MANAGER
                  value: "<WAZUH_MANAGER_IP_OR_HOSTNAME>"
                - name: WAZUH_PORT
                  value: "1514"
                - name: WAZUH_PROTOCOL
                  value: "tcp"
                - name: WAZUH_REGISTRATION_SERVER
                  value: "<WAZUH_MANAGER_IP_OR_HOSTNAME>"
                - name: WAZUH_REGISTRATION_PORT
                  value: "1515"
                - name: WAZUH_AGENT_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  mkdir -p /agent/etc
                  cat > /agent/etc/ossec.conf <<'EOF'
                  <ossec_config>
                    <client>
                      <server>
                        <address>${WAZUH_MANAGER}</address>
                        <port>${WAZUH_PORT}</port>
                        <protocol>${WAZUH_PROTOCOL}</protocol>
                      </server>
                      <enrollment>
                        <enabled>yes</enabled>
                        <agent_name>${WAZUH_AGENT_NAME}</agent_name>
                        <manager_address>${WAZUH_REGISTRATION_SERVER}</manager_address>
                        <port>${WAZUH_REGISTRATION_PORT}</port>
                      </enrollment>
                    </client>
                    <localfile>
                      <log_format>syslog</log_format>
                      <location>/usr/local/tomcat/logs/catalina.out</location>
                    </localfile>
                  </ossec_config>
                  EOF
                  sed -i \
                    -e "s|\${WAZUH_MANAGER}|${WAZUH_MANAGER}|g" \
                    -e "s|\${WAZUH_PORT}|${WAZUH_PORT}|g" \
                    -e "s|\${WAZUH_PROTOCOL}|${WAZUH_PROTOCOL}|g" \
                    -e "s|\${WAZUH_REGISTRATION_SERVER}|${WAZUH_REGISTRATION_SERVER}|g" \
                    -e "s|\${WAZUH_REGISTRATION_PORT}|${WAZUH_REGISTRATION_PORT}|g" \
                    -e "s|\${WAZUH_AGENT_NAME}|${WAZUH_AGENT_NAME}|g" \
                    /agent/etc/ossec.conf
              volumeMounts:
                - name: wazuh-agent-data
                  mountPath: /agent
    
          containers:
            - name: tomcat
              image: tomcat:10.1-jdk17
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 8080
              volumeMounts:
                - name: application-data
                  mountPath: /usr/local/tomcat/logs
    
            - name: wazuh-agent
              image: wazuh/wazuh-agent:4.13.0
              imagePullPolicy: IfNotPresent
              lifecycle:
                preStop:
                  exec:
                    command: ["/bin/sh", "-lc", "/var/ossec/bin/ossec-control stop || true; sleep 2"]
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  ln -sf /var/ossec/etc/ossec.conf /etc/ossec.conf
                  exec /init
              env:
                - name: WAZUH_MANAGER
                  value: "<WAZUH_MANAGER_IP_OR_HOSTNAME>"
                - name: WAZUH_PORT
                  value: "1514"
                - name: WAZUH_PROTOCOL
                  value: "tcp"
                - name: WAZUH_REGISTRATION_SERVER
                  value: "<WAZUH_MANAGER_IP_OR_HOSTNAME>"
                - name: WAZUH_REGISTRATION_PORT
                  value: "1515"
                - name: WAZUH_AGENT_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
              securityContext:
                runAsUser: 0
                runAsGroup: 0
              volumeMounts:
                - name: wazuh-agent-data
                  mountPath: /var/ossec
                - name: application-data
                  mountPath: /usr/local/tomcat/logs
    
      volumeClaimTemplates:
        - metadata:
            name: wazuh-agent-data
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: standard
            resources:
              requests:
                storage: 3Gi
        - metadata:
            name: application-data
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: standard
            resources:
              requests:
                storage: 5Gi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: tomcat-app
      namespace: wazuh-sidecar
    spec:
      selector:
        app: tomcat-wazuh-agent
      type: NodePort
      ports:
        - protocol: TCP
          port: 80
          targetPort: 8080
          nodePort: 30013
    

    Replace <WAZUH_MANAGER_IP_OR_HOSTNAME> with the Wazuh manager IP address or hostname.

  2. Create the namespace for the Wazuh agent and the Tomcat application:

    # kubectl create namespace wazuh-sidecar
    
  3. Deploy the sidecar setup:

    # kubectl apply -f wazuh-agent-sidecar.yaml
    
  4. Run the command below to confirm that the tomcat-wazuh-agent pod is running:

    # kubectl get pods -n wazuh-sidecar
    
    NAME                   READY   STATUS    RESTARTS   AGE
    tomcat-wazuh-agent-0   2/2     Running   0          18s
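
To check that the agent sidecar started and connected correctly, you can inspect its logs and query its control status inside the pod (the same ossec-control tool used in the preStop hook of the manifest above):

# kubectl logs -n wazuh-sidecar tomcat-wazuh-agent-0 -c wazuh-agent
# kubectl exec -n wazuh-sidecar tomcat-wazuh-agent-0 -c wazuh-agent -- /var/ossec/bin/ossec-control status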