Prior to enabling the Wazuh rules for Amazon Web Services, follow the steps below to configure AWS to generate log messages and store them as JSON data files in an Amazon S3 bucket. A detailed description of each step can be found below.


The integration with AWS S3 can be done at the Wazuh manager (which also behaves as an agent) or directly at a Wazuh agent. This choice depends on how you decide to access your AWS infrastructure in your environment.


Requirements

  • AWS

  • Wazuh >= 3.6

  • Python >= 2.7

  • Pip

  • Boto3

Storing AWS logs on S3

Depending on the AWS service to be monitored, the necessary steps to follow are different.


Bucket encryption and all types of compression are supported, except Snappy.


CloudTrail

  1. From your AWS console, choose “CloudTrail” from the Deployment & Management section:

  2. Create a new trail:

  3. Provide a name for the new S3 bucket that will be used to store the CloudTrail logs (remember the name you provide here, you’ll need to reference it during plugin setup):
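Once the trail is active, CloudTrail writes gzipped JSON objects under a predictable key layout in the bucket (AWSLogs/<account-id>/CloudTrail/<region>/YYYY/MM/DD/<file>.json.gz when no custom prefix is set). As a rough illustration of that layout, a small helper can extract the account, region and date from an object key (the key below is a made-up example):

```python
from datetime import date

def parse_cloudtrail_key(key):
    """Extract account ID, region and log date from a CloudTrail S3 object key.

    Assumes the default layout CloudTrail uses when no custom prefix is set:
    AWSLogs/<account-id>/CloudTrail/<region>/YYYY/MM/DD/<file>.json.gz
    """
    parts = key.split("/")
    if len(parts) < 8 or parts[0] != "AWSLogs" or parts[2] != "CloudTrail":
        raise ValueError("not a CloudTrail log key: %s" % key)
    account_id, region = parts[1], parts[3]
    log_date = date(int(parts[4]), int(parts[5]), int(parts[6]))
    return account_id, region, log_date

# Hypothetical key, just to show the layout:
info = parse_cloudtrail_key(
    "AWSLogs/123456789012/CloudTrail/us-east-1/2018/08/21/"
    "123456789012_CloudTrail_us-east-1_20180821T0000Z_a1b2.json.gz"
)
# info == ("123456789012", "us-east-1", date(2018, 8, 21))
```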

VPC Flow

  1. Go to Services > Storage > S3:

  2. Click on the Create bucket button:

  3. Create a new bucket, giving it a name and clicking on the Create button. Don't forget to save its Bucket ARN, you'll need it later in the process:

  4. Go to Services > Compute > EC2:

  5. Go to Network & Security > Network Interfaces on the left menu. Select a network interface and select Create flow log on the Actions menu:

  6. Change all fields to look like the following screenshot and paste the ARN of the previously created bucket:
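Once flow logs start arriving, each record is a space-separated line in the default VPC Flow Logs format (version, account-id, interface-id, srcaddr, dstaddr, srcport, dstport, protocol, packets, bytes, start, end, action, log-status). As a hedged sketch, a record can be split into named fields like this (the sample line is hypothetical):

```python
# Field names of the default VPC Flow Logs record format (version 2).
FLOW_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line):
    """Split one default-format flow log record into a field dict."""
    values = line.split()
    if len(values) != len(FLOW_FIELDS):
        raise ValueError("unexpected number of fields: %d" % len(values))
    return dict(zip(FLOW_FIELDS, values))

# Hypothetical record: an accepted SSH connection between two instances.
record = parse_flow_record(
    "2 123456789012 eni-abc123de 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK"
)
# record["action"] == "ACCEPT", record["dstport"] == "22"
```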

Other AWS Services (GuardDuty, Macie and IAM)

This section explains how to get logs from GuardDuty, Macie and IAM.

  1. Go to Services > Storage > S3:

  2. Click on the Create bucket button:

  3. Create a new bucket, giving it a name and clicking on the Create button:

  4. Go to Services > Analytics > Kinesis:

     4.1. If it's the first time you're using this service, you'll see the following screen. Just click on Get started:

  5. Click on the Create delivery stream button:

  6. Give your delivery stream a name and click on the Next button at the bottom of the page:

  7. On the next page, leave both options as Disabled and click on Next:

  8. Select Amazon S3 as the destination, then select the previously created S3 bucket and add a prefix under which logs will be stored. AWS Firehose creates a YYYY/MM/DD/HH file structure; if a firehose prefix is used, the resulting file structure would be firehose/YYYY/MM/DD/HH. If a prefix is used, it must also be specified in the Wazuh bucket configuration:

  9. Select the compression type you prefer. Wazuh supports any kind of compression except Snappy. After that, click on Create new or choose:

  10. Give a proper name to the role and click on the Allow button:

  11. The following page is just a summary of the Firehose stream created. Go to the bottom of the page and click on the Create delivery stream button:

  12. Go to Services > Management Tools > CloudWatch:

  13. Select Rules on the left menu and click on the Create rule button:

  14. Select the service you want to get logs from using the Service name selector, then click on the Add target button and add the previously created Firehose delivery stream there. Also, create a new role to access the delivery stream:

  15. Give the rule a name and click on the Create rule button:

  16. Once the rule is created, data will start to be sent to the previously created S3 bucket. Remember to first enable the service you want to monitor, otherwise you won't get any data.
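The steps above mean Firehose writes objects under a YYYY/MM/DD/HH structure beneath any configured prefix. As a minimal sketch (the prefix name is just an example), this is the key prefix a delivery stream would produce for a given hour:

```python
from datetime import datetime

def firehose_key_prefix(prefix, when):
    """Build the S3 key prefix Firehose uses: [<prefix>/]YYYY/MM/DD/HH."""
    date_part = when.strftime("%Y/%m/%d/%H")
    return "%s/%s" % (prefix, date_part) if prefix else date_part

print(firehose_key_prefix("firehose", datetime(2018, 8, 21, 13)))
# firehose/2018/08/21/13
```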

Create an IAM User

Wazuh will need a user with permissions to pull the CloudTrail log data from your S3 bucket. The easiest way to accomplish this is by creating a new IAM user for your account. We will only allow it to read data from the S3 bucket.

  1. Create a new user:

Navigate to Services > IAM > Users

Click on "Next: Permissions" to continue.

  2. Create a policy:

We will attach this policy later to the user we are creating.

Check that your new policy looks like this:

Raw output for the example policy (replace your-s3-bucket with the name of your bucket):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:DeleteObject"
                ],
                "Resource": [
                    "arn:aws:s3:::your-s3-bucket",
                    "arn:aws:s3:::your-s3-bucket/*"
                ]
            }
        ]
    }

The s3:DeleteObject action is only required if the CloudTrail logs will be removed from the S3 bucket by the wodle.
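To double-check that a policy grants everything the module needs, you can compare its Allow statements against the required actions. The helper below is a simplified illustration, not an IAM evaluator: it only handles literal action names, ignoring wildcards and Deny statements:

```python
import json

# s3:DeleteObject is only needed if the wodle removes logs from the bucket.
REQUIRED_ACTIONS = {"s3:GetObject", "s3:ListBucket"}

def missing_actions(policy_json, required=REQUIRED_ACTIONS):
    """Return the required actions not granted by any Allow statement."""
    policy = json.loads(policy_json)
    allowed = set()
    for statement in policy.get("Statement", []):
        if statement.get("Effect") == "Allow":
            actions = statement.get("Action", [])
            allowed.update([actions] if isinstance(actions, str) else actions)
    return required - allowed

# The example policy from above, with a placeholder bucket name:
policy = '''{
  "Version": "2012-10-17",
  "Statement": [{"Sid": "VisualEditor0", "Effect": "Allow",
                 "Action": ["s3:GetObject", "s3:ListBucket", "s3:DeleteObject"],
                 "Resource": ["arn:aws:s3:::your-s3-bucket",
                              "arn:aws:s3:::your-s3-bucket/*"]}]
}'''
print(missing_actions(policy))  # set() -> nothing missing
```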

  3. Attach the policy:

  4. Confirm user creation and get credentials:

Save the credentials; you will use them later to configure the module.

Installing dependencies

The Python Boto3 module is required on the system running the Wazuh module to pull AWS events. This will usually be one of your agents (or your manager).


Pip, the Python package manager, can be used to install the required module. Start by installing this tool:

  • CentOS/RHEL/Fedora:

# yum install python-pip

  • Debian/Ubuntu:

# apt-get update && apt-get install python-pip

  • From sources:

# curl -O https://bootstrap.pypa.io/get-pip.py
# python get-pip.py


Boto3 is the official package supported by Amazon to manage AWS resources. It will be used to download the log messages from the S3 Bucket.

# pip install boto3

Plugin configuration

  1. Open the Wazuh configuration file:

# vi /var/ossec/etc/ossec.conf
  2. Add the following block of configuration to enable the integration. Enter the AWS IAM user credentials you created before and the AWS account ID of the CloudTrail logs to be processed (the values below are placeholders):

<wodle name="aws-s3">
  <bucket type="cloudtrail">
    <name>your-s3-bucket</name>
    <aws_account_id>your_aws_account_id</aws_account_id>
    <access_key>your_access_key</access_key>
    <secret_key>your_secret_key</secret_key>
  </bucket>
</wodle>

To monitor logs for multiple AWS accounts, configure multiple <bucket> options within the aws-s3 wodle. Bucket tags must have a type attribute whose value can be cloudtrail, to monitor CloudTrail logs, or custom, to monitor any other type of logs, for example, Firehose ones.
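For instance, a single wodle monitoring a CloudTrail bucket plus a Firehose bucket from a second account could look like this (bucket names and credentials are placeholders):

```xml
<wodle name="aws-s3">
  <bucket type="cloudtrail">
    <name>company-cloudtrail-bucket</name>
    <access_key>your_access_key</access_key>
    <secret_key>your_secret_key</secret_key>
  </bucket>
  <bucket type="custom">
    <name>company-firehose-bucket</name>
    <access_key>second_access_key</access_key>
    <secret_key>second_secret_key</secret_key>
  </bucket>
</wodle>
```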

Check the user manual reference to read more details about each setting: AWS S3 settings

  3. Restart your Wazuh system to apply the changes:

  • If you're configuring a Wazuh manager:

    For Systemd:

    # systemctl restart wazuh-manager

    For SysV Init:

    # service wazuh-manager restart

  • If you're configuring a Wazuh agent:

    For Systemd:

    # systemctl restart wazuh-agent

    For SysV Init:

    # service wazuh-agent restart

Authenticating options

Credentials can be loaded from different locations: you can specify them directly in the configuration block as shown above, assume an IAM role, or load them from any other Boto3-supported location.

Environment variables

If you're using a single AWS account for all your buckets, this could be the most suitable option for you. You just have to define the following environment variables (values are placeholders):

export AWS_ACCESS_KEY_ID='your_access_key'
export AWS_SECRET_ACCESS_KEY='your_secret_key'
Profiles

You can define profiles in your credentials file (~/.aws/credentials) and specify those profiles in the bucket configuration.

For example, the following credentials file defines three different profiles: default, dev and prod (keys are placeholders):

[default]
aws_access_key_id = default_access_key
aws_secret_access_key = default_secret_key

[dev]
aws_access_key_id = dev_access_key
aws_secret_access_key = dev_secret_key

[prod]
aws_access_key_id = prod_access_key
aws_secret_access_key = prod_secret_key

To use the prod profile in the AWS integration you would use the following bucket configuration:

 <bucket type="cloudtrail">
   <name>your-s3-bucket</name>
   <aws_profile>prod</aws_profile>
 </bucket>

IAM Roles


This authentication method requires credentials to have been previously added to the configuration using one of the other authentication methods.

IAM Roles can also be used to access the S3 bucket. Follow these steps to create one:

  1. Go to Services > Security, Identity & Compliance > IAM.

  2. Select Roles on the left menu and click on the Create role button:

  3. Select the S3 service and click on the Next: Permissions button:

  4. Select the previously created policy:

  5. Click on the Create role button:

  6. Access the role's summary and click on its policy name:

  7. Add permissions so the new role can perform the sts:AssumeRole action:

  8. Go back to the role's summary, go to the Trust relationships tab and click on the Edit trust relationship button:

  9. Add your user to the Principal tag and click on the Update Trust Policy button:

Once your role is created, just paste its ARN into the bucket configuration (values are placeholders):

 <bucket type="cloudtrail">
   <name>your-s3-bucket</name>
   <iam_role_arn>your_role_arn</iam_role_arn>
 </bucket>

IAM roles for EC2 instances

You can use IAM roles and assign them to EC2 instances so there's no need to insert authentication parameters in the ossec.conf file. This is the recommended configuration. Find more information about IAM roles on EC2 instances in the official Amazon AWS documentation.

This is an example configuration:

<wodle name="aws-s3">
  <bucket type="cloudtrail">
    <name>your-s3-bucket</name>
  </bucket>
</wodle>

Considerations for configuration


If the S3 bucket contains a long history of logs and its directory structure is organized by dates, it's possible to filter which logs will be read by Wazuh. There are multiple configuration options to do so:

  • only_logs_after: Allows filtering logs produced after a given date. The date format must be YYYY-MMM-DD; for example, 2018-AUG-21 would filter logs produced after the 21st of August 2018 (that day included).

  • aws_account_id: This option will only work on CloudTrail buckets. If you have logs from multiple accounts, you can filter which ones will be read by Wazuh. You can specify multiple ids separating them by commas.

  • regions: This option will only work on CloudTrail buckets. If you have logs from multiple regions, you can filter which ones will be read by Wazuh. You can specify multiple regions separating them by commas.

  • path: If you have your logs stored under a given path, it can be specified using this option. For example, to read logs stored in the vpclogs/ directory, the path vpclogs needs to be specified. It can also be specified with / or \.
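Combining these options, a bucket entry that only reads CloudTrail logs from two accounts in a single region, starting on the 21st of August 2018, could look like this (all values are placeholders):

```xml
<bucket type="cloudtrail">
  <name>your-s3-bucket</name>
  <only_logs_after>2018-AUG-21</only_logs_after>
  <aws_account_id>123456789012,987654321098</aws_account_id>
  <regions>us-east-1</regions>
</bucket>
```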

Older logs

The aws-cloudtrail wodle only looks for new logs based upon the key of the last processed log object, which includes the datetime stamp. If older logs are loaded into the S3 bucket, or the only_logs_after option date is set to a datetime earlier than previous executions of the wodle, the older log files will be ignored and not ingested into Wazuh.
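This behavior follows from the key layout: because the date is embedded in each object key, tracking the last processed key and skipping anything that does not sort after it also skips older dates. A minimal illustration of that comparison (the keys are hypothetical):

```python
# Keys embed the date, so lexicographic order matches chronological order,
# and anything that sorts <= the last processed key is skipped.
last_processed = "AWSLogs/123456789012/CloudTrail/us-east-1/2018/08/21/log2.json.gz"

candidates = [
    "AWSLogs/123456789012/CloudTrail/us-east-1/2018/08/20/log9.json.gz",  # older date
    "AWSLogs/123456789012/CloudTrail/us-east-1/2018/08/21/log1.json.gz",  # same day, earlier key
    "AWSLogs/123456789012/CloudTrail/us-east-1/2018/08/22/log1.json.gz",  # newer date
]

new_keys = [key for key in candidates if key > last_processed]
print(new_keys)
# Only the 2018/08/22 key would be ingested; the older ones are ignored.
```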