
Configuring an Amazon AWS CloudTrail Log Source by using the Amazon AWS S3 REST API Protocol

If you want to collect AWS CloudTrail logs from Amazon S3 buckets, configure a log source on the JSA Console so that Amazon AWS CloudTrail can communicate with JSA by using the Amazon AWS S3 REST API protocol.

  1. Install the most recent version of the following RPMs on your JSA Console.

    • Protocol Common RPM

    • Amazon AWS S3 REST API Protocol RPM

    • DSMCommon RPM

    • Amazon Web Service RPM

    • Amazon AWS CloudTrail DSM RPM

  2. Choose the method that you will use to configure an Amazon AWS CloudTrail log source on the JSA Console by using the Amazon AWS S3 REST API protocol.

Creating an Identity and Access Management (IAM) User in the AWS Management Console when using the Amazon AWS S3 REST API

An Amazon administrator must create a user and then apply the AmazonS3ReadOnlyAccess policy in the AWS Management Console. The JSA user can then create a log source in JSA.

Note:

Alternatively, you can assign more granular permissions to the bucket. The minimum required permissions are s3:listBucket and s3:getObject.
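Expressed as an IAM policy document, those minimal bucket permissions might look like the following sketch. This is illustrative only; the bucket name is a placeholder, and the canonical action names use AWS's documented capitalization:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::my-example-s3bucket",
        "arn:aws:s3:::my-example-s3bucket/*"
      ]
    }
  ]
}
```

Note that s3:ListBucket applies to the bucket ARN itself, while s3:GetObject applies to the objects inside it (the /* resource).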

  1. Create a user:

    1. Log in to the AWS Management Console as administrator.

    2. Create an Amazon AWS IAM user and then apply the AmazonS3ReadOnlyAccess policy.

  2. Find the S3 bucket name and directory prefix that you use to configure a log source in JSA:

    1. Click Services.

    2. From the list, select CloudTrail.

    3. From the Trails page, click the name of the trail.

    4. Note the name of the S3 bucket that is displayed in the S3 bucket field.

    5. Click the Edit icon.

    6. Click the Advanced icon.

    7. Note the location path for the S3 bucket that is displayed below the Log file prefix field.

Configure the log source in JSA. The S3 bucket name is the value for the Bucket name field. The location path for the S3 bucket is the value for the Directory prefix field.

Create an SQS Queue and Configure S3 ObjectCreated Notifications

Before you can add a log source in JSA, you must create an SQS queue and configure S3 ObjectCreated notifications in the AWS Management Console when using the Amazon AWS S3 REST API protocol.

Complete the following procedures:

  1. Finding or creating the S3 bucket that contains the data that you want to collect.

  2. Creating the SQS queue that is used to receive the ObjectCreated notifications from the S3 bucket that you identified in “Finding or creating the S3 bucket that contains the data that you want to collect”.

  3. Setting up SQS queue permissions.

  4. Creating ObjectCreated notifications.

Finding or creating the S3 bucket that contains the data that you want to collect

You must find or create the S3 bucket that contains the data that you want to collect, and note the region where the bucket is located.

  1. Log in to the AWS Management Console as an administrator.

  2. Click Services, and then go to the S3 Management Console.

  3. From the AWS Region column in the Buckets list, note the region where the bucket that you want to collect data from is located. You need the region for the Region Name parameter value when you add a log source in JSA.

  4. Select the check box beside the bucket name, and then from the panel that opens on the right, click Copy Bucket ARN to copy the value to the clipboard. Save this value or leave it on the clipboard. You need this value when you complete the “Creating the SQS queue that is used to receive ObjectCreated notifications” procedure.

Creating the SQS queue that is used to receive ObjectCreated notifications

You must create an SQS queue and configure S3 ObjectCreated notifications in the AWS Management Console when using the Amazon AWS S3 REST API protocol.

You must complete the “Finding or creating the S3 bucket that contains the data that you want to collect” procedure first.

The SQS Queue must be in the same region as the AWS S3 bucket that the queue is collecting from.

  1. Log in to the AWS Management Console as an administrator.

  2. Click Services, and then go to the Simple Queue Service Management Console.

  3. In the top right of the window, change the region to the one where the bucket is located. You noted this value when you completed the “Finding or creating the S3 bucket that contains the data that you want to collect” procedure.

  4. Select Create New Queue, and then type a value for the Queue Name.

  5. Click Standard Queue, select Configure Queue, and then change the default values for the following Queue Attributes.

    • Default Visibility Timeout - 60 seconds. You can use a lower value, but with load-balanced collection, values of less than 30 seconds might cause duplicate events. This value can't be 0.

    • Message Retention Period - 14 days. You can use a lower value, but if collection is interrupted for an extended period, data might be lost.

    Use the default value for the remaining Queue Attributes.

    More options such as Redrive Policy or SSE can be used depending on the requirements for your AWS environment. These values should not affect collection of data.

  6. Select Create Queue.
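If you script queue creation instead of using the console, the two settings above map to the SQS attribute names VisibilityTimeout and MessageRetentionPeriod, both in seconds (14 days is 1209600 seconds). A sketch of the attributes document, as an assumed example for use with aws sqs create-queue --attributes file://attributes.json:

```json
{
  "VisibilityTimeout": "60",
  "MessageRetentionPeriod": "1209600"
}
```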

Setting up SQS queue permissions

You must set up SQS queue permissions for users to access the queue.

You must complete the “Creating the SQS queue that is used to receive ObjectCreated notifications” procedure first.

You can set the SQS queue permissions by using either the Permissions Editor or a JSON policy document.

  1. Log in to the AWS Management Console as an administrator.

  2. Go to the SQS Management Console, and then select the queue that you created from the list.

  3. From the Properties window, select Details, and record the ARN field value. You need this value when you complete the “Creating ObjectCreated notifications” procedure.

  4. Optional: Set the SQS queue permissions by using the Permissions Editor.

    1. From the Properties window, select Permissions > Add a Permission, and then configure the following parameters:

      Table 1: Permission Parameters

      • Effect - Click Allow.

      • Principal - Click Everybody (*).

      • Actions - From the list, select SendMessage.

    2. Click Add Conditionals (Optional), and then configure the following parameters:

      Table 2: Add Conditionals (Optional) Parameters

      • Qualifier - None

      • Condition - ARNLike

      • Key - Type aws:SourceArn.

      • Value - The ARN of the S3 bucket that you noted when you completed the “Finding or creating the S3 bucket that contains the data that you want to collect” procedure. Example: arn:aws:s3:::my-example-s3bucket

    3. Click Add Condition > Add Permission.

  5. Optional: Set the SQS queue permissions by using a JSON Policy Document.

    1. In the Properties window, select Edit Policy Document (Advanced).

    2. Copy and paste the following JSON policy into the Edit Policy Document window:

      Copy and paste might not preserve the white space in the JSON policy. The white space is required. If the white space is not preserved when you paste the JSON policy, paste it into a text editor and restore the white space. Then, copy and paste the JSON policy from your text editor into the Edit Policy Document window.
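The exact policy text is not reproduced here, but a policy document of the shape that the Permissions Editor settings in Tables 1 and 2 produce might look like the following sketch. The Region, account ID, queue name, and bucket name are illustrative placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:SecureQueue_TEST",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:s3:::my-example-s3bucket"
        }
      }
    }
  ]
}
```

The aws:SourceArn condition restricts the otherwise-open Principal so that only notifications from the named S3 bucket can be delivered to the queue.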

  6. Click Review Policy. Ensure the data is correct, and then click Save Changes.

Creating ObjectCreated notifications

Configure ObjectCreated notifications for the folders that you want to monitor in the S3 bucket.

  1. Log in to the AWS Management Console as an administrator.

  2. Click Services, go to S3, and then select a bucket.

  3. Click the Properties tab, and in the Events pane, click Add notification. Configure the parameters for the new event.

    The following table shows an example of an ObjectCreated notification parameter configuration:

    Table 3: Example: New ObjectCreated Notification Parameter Configuration

    • Name - Type a name of your choosing.

    • Events - Select All object create events.

    • Prefix - AWSLogs/

      Tip: You can choose a prefix that contains the data that you want to find, depending on where the data is located and what data you want to go to the queue. For example: AWSLogs/, CustomPrefix/AWSLogs/, AWSLogs/123456789012/.

    • Suffix - json.gz

    • Send to - SQS queue

      Tip: You can send the data from different folders to the same or different queues to suit your collection or JSA tenant needs. Choose one or more of the following methods:

      • Different folders that go to different queues

      • Different folders from different buckets that go to the same queue

      • Everything from a single bucket that goes to a single queue

      • Everything from multiple buckets that goes to a single queue

    • SQS - SecureQueue_TEST

    In the example configuration that is shown in Table 3, notifications are created for AWSLogs/ from the root of the bucket. With this configuration, all ObjectCreated events trigger a notification. If there are multiple accounts and regions in the bucket, everything gets processed. In this example, the json.gz suffix is used. This file type can change depending on the data that you are collecting. Depending on the content in your bucket, you can omit the suffix or choose a suffix that matches the data that you are looking for in the folders where you have events set up.

    After approximately 5 minutes, the queue that contains data is displayed. You can view the number of available messages in the Messages Available column.

  4. Click Services, and then go to the Simple Queue Service Management Console.

  5. From the SecureQueue_TEST list, select View/Delete Messages to view the messages.

    Sample message:
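The message body is JSON. As an illustrative sketch only (the field names follow the documented S3 event message structure; the bucket name, account ID, and object key are placeholders, not values from this document), a consumer can extract the bucket and object key from each ObjectCreated record like this:

```python
# Sketch: parse an S3 ObjectCreated notification body as delivered to SQS.
# The sample_body content is a hypothetical example, not a captured message.
import json

sample_body = """
{
  "Records": [
    {
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "awsRegion": "us-east-1",
      "s3": {
        "bucket": {"name": "my-example-s3bucket",
                   "arn": "arn:aws:s3:::my-example-s3bucket"},
        "object": {"key": "AWSLogs/123456789012/CloudTrail/us-east-1/example.json.gz",
                   "size": 1024}
      }
    }
  ]
}
"""

def extract_objects(message_body):
    """Return (bucket, key) pairs for each ObjectCreated record in a message."""
    records = json.loads(message_body).get("Records", [])
    return [(r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
            for r in records
            if r.get("eventName", "").startswith("ObjectCreated")]

print(extract_objects(sample_body))
# → [('my-example-s3bucket', 'AWSLogs/123456789012/CloudTrail/us-east-1/example.json.gz')]
```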

  6. Set up a user or role with permission to access the SQS queue and to download from the target bucket. The user or user role must have permission to read from and delete from the SQS queue. After JSA reads the notification and then downloads and processes the target file, the message must be deleted from the queue.

    Sample Policy:

    You can add multiple buckets to the SQS queue. To ensure that all objects are accessed, you must add a trailing /* at the end of each folder path that you added.
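    The sample policy itself is not reproduced here, but a policy of this general shape (queue ARN and bucket name illustrative, with the trailing /* on the object path) might look like the following sketch:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:SecureQueue_TEST"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-example-s3bucket",
        "arn:aws:s3:::my-example-s3bucket/*"
      ]
    }
  ]
}
```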

    You can add this policy directly to a user or a user role, or you can create a minimal access user with sts:AssumeRole permissions only. When you configure a log source in JSA, configure the Assume Role ARN parameter for JSA to assume the role. To ensure that all files waiting to be processed in a single run (emptying the queue) can finish without retries, use the default value of 1 hour for the API Session Duration parameter.

    When you use assumed roles, ensure that the ARN of the user that is assuming the role is in the Trusted Entities for that role. From the Trusted entities pane, you can view the trusted entities that can assume the role. In addition, the user must have permission to assume roles in that (or any) account. In this example, only the test user, no.permissions.user, has this permission.
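As an illustrative sketch (the account ID is a placeholder; the user name follows the example above), the trust relationship on the role might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/no.permissions.user"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```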

Troubleshooting Amazon AWS S3 REST API Log Source Integrations

You configured a log source in JSA to collect Amazon AWS logs, but the log source status is Warn and events are not generated as expected.

Symptom:

Error that is shown in /var/log/qradar.error:

Cause:

This error was probably caused by exporting the Amazon SSL certificate from the incorrect URL or by not using the Automatically Acquire Server Certificate(s) option when you configured the log source.

Environment:

All JSA versions.

Diagnosing the problem:

Verify whether the certificate that is on the whitelist matches the server certificate that is provided by the connection. The server certificate that is sent by Amazon covers the *.s3.amazonaws.com domain. You must export the certificate for the following URL:

The stack trace in JSA indicates the issue with the Amazon AWS S3 REST API Protocol. In the following example, JSA is rejecting an unrecognized certificate. The most common cause is that the certificate is not in the correct format or is not placed in the correct directory on the correct JSA appliance.

Resolving the problem:

If you downloaded the certificate automatically when you created the log source, verify the following steps:

  1. You configured the correct Amazon S3 endpoint URL and the correct bucket name.

  2. You selected the Yes option for Automatically Acquire Server Certificate(s).

  3. You saved the log source.

Note:

The log source automatically downloads the .DER certificate file to the /opt/qradar/conf/trusted_certificates directory. To verify that the correct certificate is downloaded and working, complete the following steps:

  1. From the Navigation menu, click Enable/Disable to disable the log source.

  2. Enable the Amazon AWS CloudTrail log source.

If you manually downloaded the certificate, you must move the .DER certificate file to the correct JSA appliance. The correct JSA appliance is assigned in the Target Event Collector field in the Amazon AWS CloudTrail log source.

Note:

The certificate must have a .DER extension. The .DER extension is case-sensitive and must be in uppercase. If the certificate is exported in lowercase, then the log source might experience event collection issues.

  1. Access your AWS CloudTrail S3 bucket at https://<bucketname>.s3.amazonaws.com

  2. Use Firefox to export the SSL certificate from AWS as a DER certificate file.

  3. Copy the DER certificate file to the /opt/qradar/conf/trusted_certificates directory on the JSA appliance that manages the Amazon AWS CloudTrail log source.

    Note:

    The JSA appliance that manages the log source is identified by the Target Event Collector field in the Amazon AWS CloudTrail log source. This JSA appliance must have a copy of the DER certificate file in the /opt/qradar/conf/trusted_certificates folder.
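    As an alternative to exporting the certificate with Firefox in step 2, the export can be sketched from the command line with openssl. This is an illustrative example that requires network access to AWS; the bucket name and output file name are placeholders:

```shell
# Fetch the server certificate for the bucket endpoint and write it in DER form.
# Replace my-example-s3bucket with your bucket name; the .DER extension must be uppercase.
openssl s_client -connect my-example-s3bucket.s3.amazonaws.com:443 \
  -servername my-example-s3bucket.s3.amazonaws.com </dev/null \
  | openssl x509 -outform DER -out /opt/qradar/conf/trusted_certificates/aws-s3.DER
```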

  4. Log in to JSA as an administrator.

  5. Click the Admin tab.

  6. Click the Log Sources icon.

  7. Select the Amazon AWS CloudTrail log source.

  8. From the navigation menu, click Enable/Disable to disable, then re-enable the Amazon AWS CloudTrail log source.

    Note:

    Forcing the log source from disabled to enabled connects the protocol to the Amazon AWS bucket as defined in the log source. A certificate check takes place as part of the first communication.

  9. If you continue to have issues, verify that the Amazon AWS bucket name in the Log Source Identifier field is correct. Ensure that the Remote Directory path is correct in the log source configuration.