Kubernetes configuration
Pre-requisites
- Kubernetes cluster already deployed.
- Kubernetes can run on a wide range of Cloud providers and bare-metal environments. This repository focuses on AWS; it was tested using Amazon EKS.
- At least two Kubernetes nodes, in order to meet the podAntiAffinity policy.
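For reference, the kind of anti-affinity rule that imposes this requirement looks like the sketch below. The label selector and values are illustrative assumptions, not necessarily the exact ones used in this repository's manifests:

```yaml
# Sketch of a podAntiAffinity rule: pods matching the selector may not be
# co-scheduled on the same node, so at least two nodes are required.
# The app label used here is an assumption for illustration.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: wazuh-manager
        topologyKey: kubernetes.io/hostname
```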
Overview
StatefulSet and Deployment controllers
Like a Deployment, a StatefulSet manages Pods that are based on an identical container specification, but it maintains an identity attached to each of its pods. These pods are created from the same specification, but they are not interchangeable: each one has a persistent identifier maintained across any rescheduling.
It is useful for stateful applications, such as databases, that save data to persistent storage. We want each Wazuh manager, as well as Elasticsearch, to keep its state, so we declare them using StatefulSets to ensure that they maintain their data across restarts and rescheduling.
Deployments are intended for stateless use and are quite lightweight, which makes them appropriate for Kibana and Nginx, where it is not necessary to maintain state.
Persistent volumes are pieces of storage in the provisioned cluster. It is a resource in the cluster just like a node is a cluster resource. Persistent volumes are volume plugins like Volumes but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
Here, we use persistent volumes to store data from both Wazuh and Elasticsearch.
Read more about persistent volumes in the Kubernetes documentation: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
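As a sketch of how this is wired up, a StatefulSet can request per-pod persistent storage through volumeClaimTemplates, as in the example below. The claim name, size, and storage class are assumptions (gp2 is the default EBS-backed class on Amazon EKS), not the exact values used by this repository:

```yaml
# Sketch: each replica of the StatefulSet gets its own PersistentVolumeClaim,
# so its data survives pod rescheduling. Names, size, and storage class are
# illustrative assumptions.
volumeClaimTemplates:
  - metadata:
      name: wazuh-manager-master
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: gp2
      resources:
        requests:
          storage: 10Gi
```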
Pods
You can check how we build our Wazuh docker containers in our repository.
Wazuh master
This pod contains the master node of the Wazuh cluster. The master node centralizes and coordinates worker nodes, making sure the critical and required data is consistent across all nodes. The management is performed only in this node, so the agent registration service (authd) is placed here.
| Image | Controller |
|---|---|
| wazuh/wazuh-odfe:4.0.4_1.11.0 | StatefulSet |
Wazuh worker 0 / 1
These pods contain a worker node of the Wazuh cluster. They will receive the agent events.
| Image | Controller |
|---|---|
| wazuh/wazuh-odfe:4.0.4_1.11.0 | StatefulSet |
Elasticsearch
Elasticsearch pod. It ingests events received from Filebeat.
| Image | Controller |
|---|---|
| amazon/opendistro-for-elasticsearch:1.11.0 | StatefulSet |
Kibana
Kibana pod, the frontend for Elasticsearch. It also includes the Wazuh app.
| Image | Controller |
|---|---|
| wazuh/wazuh-kibana-odfe:4.0.4_1.11.0 | Deployment |
Services
Elastic Stack
| Name | Description |
|---|---|
| wazuh-elasticsearch | Communication for Elasticsearch nodes. |
| elasticsearch | Elasticsearch service. Used by Kibana and Filebeat. |
| kibana | Kibana service. The UI for Elasticsearch. |
Wazuh
| Name | Description |
|---|---|
| wazuh | Wazuh API: wazuh-master.your-domain.com:55000. Agent registration service (authd): wazuh-master.your-domain.com:1515 |
| wazuh-workers | Reporting service: wazuh-manager.your-domain.com:1514 |
| wazuh-cluster | Communication for Wazuh manager nodes. |
Deploy
Deploy Kubernetes
Follow the official guide to deploy a Kubernetes cluster. This repository focuses on AWS, but it should be easy to adapt to another Cloud provider. In case you are using AWS, we recommend Amazon EKS.
Create domains to access the services
We recommend creating domains and certificates to access the services. Examples:
- wazuh-master.your-domain.com: Wazuh API and authd registration service.
- wazuh-manager.your-domain.com: Reporting service.
- wazuh.your-domain.com: Kibana and Wazuh app.
Note
You can skip this step and the services will be accessible using the Load balancer DNS from the VPC.
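If you do create the domains, a certificate for them can be requested from AWS Certificate Manager. The command below is a hedged example with placeholder values, not a required step of this repository:

```
# Request a DNS-validated certificate for the Kibana domain (placeholder name)
$ aws acm request-certificate \
    --domain-name wazuh.your-domain.com \
    --validation-method DNS
```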
Deployment
Clone this repository to deploy the necessary services and pods.
```
$ git clone https://github.com/wazuh/wazuh-kubernetes.git -b v4.0.4_1.11.0 --depth=1
$ cd wazuh-kubernetes
```
3.1. Set up SSL certificates
You can generate self-signed certificates for the ODFE cluster using the script at certs/odfe_cluster/generate_certs.sh, or provide your own. Since Kibana has HTTPS enabled, it requires its own certificate, which can be generated with:
```
$ openssl req -x509 -batch -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.pem
```
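The secretGenerator shown below references the certificates under certs/odfe_cluster/ and certs/kibana_http/, so a reasonable sequence (a sketch, assuming the commands are run from the repository root) is:

```
# Generate the ODFE cluster certificates with the bundled script
$ bash certs/odfe_cluster/generate_certs.sh
# Place the Kibana HTTPS certificate where the kustomization file expects it
$ mkdir -p certs/kibana_http
$ mv cert.pem key.pem certs/kibana_http/
```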
The required certificates are imported via secretGenerator on the kustomization.yml file:
```yaml
secretGenerator:
  - name: odfe-ssl-certs
    files:
      - certs/odfe_cluster/root-ca.pem
      - certs/odfe_cluster/node.pem
      - certs/odfe_cluster/node-key.pem
      - certs/odfe_cluster/kibana.pem
      - certs/odfe_cluster/kibana-key.pem
      - certs/odfe_cluster/admin.pem
      - certs/odfe_cluster/admin-key.pem
      - certs/odfe_cluster/filebeat.pem
      - certs/odfe_cluster/filebeat-key.pem
  - name: kibana-certs
    files:
      - certs/kibana_http/cert.pem
      - certs/kibana_http/key.pem
```
3.2. Apply all manifests using kustomize
By using the kustomization.yml file, we can deploy the whole cluster with a single command.
```
$ kubectl apply -k .
```
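Optionally, you can follow the rollout before moving on to the verification steps below; the resource and namespace names used here are the ones that appear in those steps:

```
$ kubectl rollout status statefulset/wazuh-elasticsearch -n wazuh
$ kubectl rollout status statefulset/wazuh-manager-master -n wazuh
```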
Verifying the deployment
Namespace
```
$ kubectl get namespaces | grep wazuh
wazuh     Active    12m
```
Services
```
$ kubectl get services -n wazuh
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP        PORT(S)                          AGE
elasticsearch         ClusterIP      xxx.yy.zzz.24    <none>             9200/TCP                         12m
kibana                ClusterIP      xxx.yy.zzz.76    <none>             5601/TCP                         11m
wazuh                 LoadBalancer   xxx.yy.zzz.209   internal-a7a8...   1515:32623/TCP,55000:30283/TCP   9m
wazuh-cluster         ClusterIP      None             <none>             1516/TCP                         9m
wazuh-elasticsearch   ClusterIP      None             <none>             9300/TCP                         12m
wazuh-workers         LoadBalancer   xxx.yy.zzz.26    internal-a7f9...   1514:31593/TCP                   9m
```
Deployments
```
$ kubectl get deployments -n wazuh
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
wazuh-kibana   1         1         1            1           11m
```
StatefulSets
```
$ kubectl get statefulsets -n wazuh
NAME                   READY   AGE
wazuh-elasticsearch    3/3     15m
wazuh-manager-master   1/1     15m
wazuh-manager-worker   2/2     15m
```
Pods
```
$ kubectl get pods -n wazuh
NAME                           READY   STATUS    RESTARTS   AGE
wazuh-elasticsearch-0          1/1     Running   0          15m
wazuh-kibana-f4d9c7944-httsd   1/1     Running   0          14m
wazuh-manager-master-0         1/1     Running   0          12m
wazuh-manager-worker-0-0       1/1     Running   0          11m
wazuh-manager-worker-1-0       1/1     Running   0          11m
wazuh-nginx-748fb8494f-xwwhw   1/1     Running   0          14m
```
Accessing Kibana
In case you created domain names for the services, you should be able to access Kibana using the proposed domain name: https://wazuh.your-domain.com. Alternatively, you can access it using the load balancer DNS name (e.g. https://internal-xxx-yyy.us-east-1.elb.amazonaws.com), which you can obtain with:

```
$ kubectl get services -o wide -n wazuh
NAME     TYPE           CLUSTER-IP       EXTERNAL-IP                                    PORT(S)                      AGE   SELECTOR
kibana   LoadBalancer   xxx.xx.xxx.xxx   internal-xxx-yyy.us-east-1.elb.amazonaws.com   80:31831/TCP,443:30974/TCP   15m   app=wazuh-kibana
```
Note
AWS Route 53 can be used to create a DNS record that points to the load balancer, making the service accessible through that name.
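As a hedged sketch of that approach (the hosted zone ID is a placeholder, and the record name and load balancer DNS are the example values used above), such a record can be created with the AWS CLI:

```
# Create a CNAME record pointing the Kibana domain at the load balancer
$ aws route53 change-resource-record-sets \
    --hosted-zone-id <YOUR_HOSTED_ZONE_ID> \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "wazuh.your-domain.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "internal-xxx-yyy.us-east-1.elb.amazonaws.com"}]
        }
      }]
    }'
```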
Agents
Wazuh agents are designed to monitor hosts. To start using them:
1. Register the agent using the registration service: run the authd client with the -m option, specifying the external IP of the Wazuh service that exposes port 1515, or its DNS name if you are using AWS Route 53.
2. Modify the file /var/ossec/etc/ossec.conf, changing the transport protocol to TCP and setting MANAGER_IP to the external IP of the service that exposes port 1514, or to the DNS name provided by AWS Route 53 if you are using it.
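For example, on a Linux agent, and assuming wazuh-master.your-domain.com and wazuh-manager.your-domain.com resolve to the registration (1515) and reporting (1514) services described above, the two steps could look like this (a sketch, not an exact required procedure):

```
# Step 1: register the agent against the authd service exposed on port 1515
$ /var/ossec/bin/agent-auth -m wazuh-master.your-domain.com

# Step 2: point the agent at the reporting service (port 1514) over TCP in
# /var/ossec/etc/ossec.conf:
#   <client>
#     <server>
#       <address>wazuh-manager.your-domain.com</address>
#       <port>1514</port>
#       <protocol>tcp</protocol>
#     </server>
#   </client>

# Restart the agent to apply the changes
$ systemctl restart wazuh-agent
```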