Configuring an Amazon AWS CloudTrail Log Source by using the Amazon AWS S3 REST API Protocol

 

If you want to collect AWS CloudTrail logs from Amazon S3 buckets, configure a log source on the JSA Console so that Amazon AWS CloudTrail can communicate with JSA by using the Amazon AWS S3 REST API protocol.

  1. Install the most recent version of the following RPMs on your JSA Console.

    • Protocol Common RPM

    • Amazon AWS S3 REST API Protocol RPM

    • DSMCommon RPM

    • Amazon Web Service RPM

    • Amazon AWS CloudTrail DSM RPM

  2. Choose which method you will use to configure an Amazon AWS CloudTrail log source by using the JSA Console Amazon AWS S3 REST API protocol.

Configuring an Amazon AWS CloudTrail Log Source that uses an S3 Bucket with an SQS Queue

If you want to collect AWS CloudTrail logs from multiple accounts or regions in an Amazon S3 bucket, configure a log source on the JSA Console so that Amazon AWS CloudTrail can communicate with JSA by using the Amazon AWS S3 REST API protocol and a Simple Queue Service (SQS) queue.

Using the Amazon AWS S3 REST API protocol with a Simple Queue Service (SQS) queue, instead of a directory prefix, has the following advantages:

  • You can use one log source for an S3 bucket, rather than one log source for each region and account.

  • There is a reduced chance of missing files because this method uses ObjectCreated notifications to determine when new files are ready.

  • It is easy to balance the load across multiple Event Collectors because the SQS queue supports connections from multiple clients.

  • File names are not an issue. Unlike the directory prefix method, the SQS queue method does not require file names in the folders to sort in ascending string order based on the full path. File names from custom applications do not always conform to this requirement.

  • You can monitor the SQS queue and set an alarm if it exceeds a certain number of records. The alarm indicates whether JSA is falling behind or not collecting events.

  • You can use IAM Role authentication with SQS, which is Amazon's best practice for security.

  • Certificate handling is improved with the SQS method and does not require the downloading of certificates to the Event Collector.

Procedure

Create an SQS queue and configure S3 ObjectCreated notifications

You must create an SQS queue and configure S3 ObjectCreated notifications in the AWS Management Console when using the Amazon AWS S3 REST API protocol.

Complete the following procedures:

  1. Finding the S3 bucket that contains the data that you want to collect.
  2. Creating the SQS queue that is used to receive ObjectCreated notifications from the S3 Bucket that you used in Step 1.
  3. Setting up SQS queue permissions.
  4. Creating ObjectCreated notifications.

Finding the S3 bucket that contains the data that you want to collect

You must find the S3 bucket that contains the data that you want to collect.

  1. Log in to the AWS Management Console as an administrator.
  2. Click Services, and then navigate to S3.
  3. From the Region column in the S3 buckets list, note the region where the bucket that you want to collect data from is located.
  4. Select the check box beside the bucket name, and then from the panel that opens to the right, click Copy Bucket ARN to copy the value to the clipboard. Save this value or leave it on the clipboard; you need it when you complete Setting up SQS queue permissions.

Creating the SQS queue that is used to receive ObjectCreated notifications

You must create an SQS queue and configure S3 ObjectCreated notifications in the AWS Management Console when using the Amazon AWS S3 REST API protocol.

You must complete Finding the S3 bucket that contains the data that you want to collect. The SQS Queue must be in the same region as the AWS S3 bucket that the queue is collecting from.

  1. Log in to the AWS Management Console as an administrator.
  2. Click Services, and then navigate to the Simple Queue Service Management Console.
  3. In the upper right of the window, change the region to where the bucket is located. You noted this value when you completed the Finding the S3 bucket that contains the data that you want to collect procedure.
  4. Select Create New Queue, and then type a value for the Queue Name.
  5. Click Standard Queue, and then select Configure Queue at the bottom of the window. Change the default values for the following Queue Attributes.
    • Default Visibility Timeout - 60 seconds. (A lower value can be used; however, with load-balanced collection, duplicate events might occur with values of less than 30 seconds. This value can't be 0.)

    • Message Retention Period - 14 days. (A lower value can be used; however, in the event of an extended interruption in collection, data might be lost.)

    Use the default value for the remaining Queue Attributes.

    More options such as Redrive Policy or SSE can be used depending on the requirements for your AWS environment. These values should not affect collection of data.

  6. Select Create Queue.

Setting up SQS queue permissions

You must set up SQS queue permissions for users to access the queue.

You must complete Creating the SQS queue that is used to receive ObjectCreated notifications.

  1. Log in to the AWS Management Console as an administrator.
  2. Go to the SQS Management Console, and then select the queue that you created from the list.
  3. From the Properties window, select Details. Record the ARN field value.

    Example: arn:aws:sqs:us-east-1:123456789012:MySQSQueueName

  4. Set the SQS queue permissions by using either the Permissions Editor or a JSON policy document.
    • Using the Permissions Editor:

      1. From the Properties window, select Permissions > Add a Permission, and then configure the following options.

        Table 1: Permission parameters

        • Effect: Click Allow.

        • Principal: Click Everybody (*).

        • Actions: From the list, select SendMessage.

      2. Click Add Conditionals (Optional), and then configure the following parameters:

        Table 2: Add Conditionals (Optional) parameters

        • Qualifier: None

        • Condition: ArnLike

        • Key: aws:SourceArn

        • Value: The ARN of the S3 bucket, from Finding the S3 bucket that contains the data that you want to collect; for example, arn:aws:s3:::my-examples3bucket
      3. Click Add Condition.

      4. Click Add Permission.

    • Using a JSON Policy Document:

      1. In the Properties window, at the bottom, select Edit Policy Document (Advanced).

      2. Copy and paste the following JSON policy into the Edit Policy Document window:

        Copy and paste might not preserve the whitespace in the JSON policy. The whitespace is required. If the whitespace is not preserved when you paste the JSON policy, paste it into a text editor and restore the whitespace. Then, copy and paste the JSON policy from your text editor into the Edit Policy Document window.
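        The exact policy text is not reproduced here. As a minimal illustrative sketch (an assumption, not the verbatim policy), a queue access policy that matches the Permissions Editor settings described earlier might look like the following, where the queue and bucket ARNs are placeholders that you replace in the next step:

        {
          "Version": "2012-10-17",
          "Id": "ExampleQueuePolicy",
          "Statement": [
            {
              "Sid": "AllowS3ObjectCreatedNotifications",
              "Effect": "Allow",
              "Principal": "*",
              "Action": "sqs:SendMessage",
              "Resource": "arn:aws:sqs:us-east-1:123456789012:MySQSQueueName",
              "Condition": {
                "ArnLike": {
                  "aws:SourceArn": "arn:aws:s3:::my-examples3bucket"
                }
              }
            }
          ]
        }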

      3. Change the Resource value in this policy document to match the ARN of your SQS queue from step 3, and change the "aws:SourceArn" value to match the ARN of the bucket that you recorded when you completed the Finding the S3 bucket that contains the data that you want to collect procedure.

  5. Click Review Policy. Ensure the data is correct, and then click Save Changes.

Creating ObjectCreated notifications

You must create ObjectCreated notifications for the folders that you want to monitor in the bucket.

  1. Log in to the AWS Management Console as an administrator.
  2. Click Services, then navigate to S3.
  3. Select a bucket.
  4. Click the Properties tab.
  5. In the Events pane, click Add notification and then configure the parameters for the new event.

    The following table shows a sample of an ObjectCreated notification parameter configuration:

    Table 3: Sample new ObjectCreated notification parameter configuration

    • Name: Type a name of your choosing.

    • Events: Select All object create events.

    • Prefix: AWSLogs/

      Tip: You can choose a prefix that contains the data that you want to find, depending on where the CloudTrail data is located and which data you want to go to the queue. For example, AWSLogs/, CustomPrefix/AWSLogs/, or AWSLogs/123456789012/.

    • Suffix: json.gz

    • Send to: SQS queue

      Tip: You can send the data from different folders to the same or different queues to suit your collection or JSA tenant needs. Choose one or more of the following methods:

      • Different folders that go to different queues

      • Different folders from different buckets that go to the same queue

      • Everything from a single bucket that goes to a single queue

      • Everything from multiple buckets that go to a single queue

    • SQS: The Queue Name from Creating the SQS queue that is used to receive ObjectCreated notifications.

    Figure 1: Sample Events

    In the preceding sample parameter configuration, notifications are created for AWSLogs/ off the root of the bucket, which is the default CloudTrail location. With this configuration, all ObjectCreated events trigger a notification. If there are multiple accounts and regions in the bucket, everything gets processed. In this example, json.gz files are specified because CloudTrail uses this extension. For data types other than CloudTrail, you can omit the suffix or choose a suffix that matches the data in the folders for which you set up events.

    After approximately 5 minutes, the queue begins to receive data. You can view the number of messages in the Messages Available column.

    Figure 2: Number of available messages
  6. Click Services, then navigate to the Simple Queue Service Management Console.
  7. Right-click the Queue Name from Creating the SQS queue that is used to receive ObjectCreated notifications, then select View/Delete Messages to view the messages.
    Figure 3: SecureQueue TEST list

    Sample message:
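    The original sample is not included here. For reference, an S3 ObjectCreated notification that is delivered to the queue typically resembles the following abbreviated sketch; the bucket name, object key, and timestamp are placeholder values:

    {
      "Records": [
        {
          "eventVersion": "2.1",
          "eventSource": "aws:s3",
          "awsRegion": "us-east-1",
          "eventTime": "2021-01-01T12:00:00.000Z",
          "eventName": "ObjectCreated:Put",
          "s3": {
            "bucket": {
              "name": "my-examples3bucket",
              "arn": "arn:aws:s3:::my-examples3bucket"
            },
            "object": {
              "key": "AWSLogs/123456789012/CloudTrail/us-east-1/2021/01/01/example_CloudTrail_us-east-1.json.gz",
              "size": 1024
            }
          }
        }
      ]
    }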

  8. Click Services, then navigate to IAM.
  9. Set a User or Role permission to access the SQS queue and to download from the target bucket. The user or role must have permission to read and delete from the SQS queue. After JSA reads the notification, and then downloads and processes the target file, the message must be deleted from the queue.

    Sample Policy:
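    The original sample policy is not shown here. The following is a minimal sketch of such a policy, assuming the example queue and bucket names used earlier in this topic; the folder path and names are placeholders:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowQueueReadAndDelete",
          "Effect": "Allow",
          "Action": [
            "sqs:ReceiveMessage",
            "sqs:DeleteMessage",
            "sqs:GetQueueAttributes"
          ],
          "Resource": "arn:aws:sqs:us-east-1:123456789012:MySQSQueueName"
        },
        {
          "Sid": "AllowBucketDownload",
          "Effect": "Allow",
          "Action": [
            "s3:GetObject",
            "s3:ListBucket"
          ],
          "Resource": [
            "arn:aws:s3:::my-examples3bucket",
            "arn:aws:s3:::my-examples3bucket/AWSLogs/*"
          ]
        }
      ]
    }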

    You can add multiple buckets. To ensure that all objects are accessed, you must have a trailing /* at the end of the folder path that you added.

    You can add this policy directly to a user or a user role, or you can create a minimal access user that has sts:AssumeRole permission only. When you configure a log source in JSA, configure the Assume Role ARN parameter for JSA to assume the role. To ensure that all files waiting to be processed in a single run (emptying the queue) can finish without retries, use the default value of 1 hour for the API Session Duration parameter.

    When you use assumed roles, ensure that the ARN of the user that assumes the role is in the Trusted Entities for that role. From the Trusted entities pane, you can view the trusted entities that can assume the role. In addition, the user must have permission to assume roles in that account (or in any account).
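    As an illustration only, a role trust relationship that lets a specific user assume the role generally looks like the following sketch; the account number and user name are hypothetical:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/jsa-collection-user"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }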

    The following image shows a sample Amazon AWS CloudTrail log source configuration in JSA.

    Figure 4: Amazon AWS CloudTrail log source configuration in JSA

Configuring security credentials for your AWS user account

You must have your AWS user account access key and the secret access key values before you can configure a log source in JSA.

  1. Log in to your IAM console.
  2. Select Users from the left navigation pane, and then select your user name from the list.
  3. Click the Security Credentials tab.
  4. In the Access Keys section, click Create access key.
  5. From the window that displays after the access key and corresponding secret access key are created, download the .csv file that contains the keys, or copy and save the keys.

    Note

    Save the Access key ID and Secret access key and use them when you configure a log source in JSA.

    Note

    You can view the Secret access key only when it is created.

Adding an Amazon AWS CloudTrail log source on the JSA Console using an SQS queue

If you want to collect AWS CloudTrail logs from multiple accounts or regions in an Amazon S3 bucket, add a log source on the JSA Console so that Amazon AWS CloudTrail can communicate with JSA by using the Amazon AWS S3 REST API protocol and a Simple Queue Service (SQS) queue.

  1. Use the following table to set the parameters for an Amazon AWS CloudTrail log source that uses the Amazon AWS S3 REST API protocol and an SQS queue.

    Table 4: Amazon AWS S3 REST API protocol log source parameters

    • Log Source Type: Amazon AWS CloudTrail

    • Protocol Configuration: Amazon AWS S3 REST API

    • Log Source Identifier: Type a unique name for the log source.

      The Log Source Identifier can be any valid value and does not need to reference a specific server. It can be the same value as the Log Source Name. If you have more than one Amazon AWS CloudTrail log source configured, you might want to identify the first log source as awscloudtrail1, the second as awscloudtrail2, and the third as awscloudtrail3.

    • Authentication Method: Select one of the following options:

      • Access Key ID / Secret Key: Standard authentication that can be used from anywhere.

      • Assume IAM Role: Authenticate with keys and then temporarily assume a role for access. This option is available only when you select SQS Event Notifications for the S3 Collection Method. The supported S3 Collection Method is Use a Specific Prefix.

      • EC2 Instance IAM Role: If your JSA managed host is running in an AWS EC2 instance, choosing this option uses the IAM role from the metadata that is assigned to the instance for authentication; no keys are required. This method works only for managed hosts that are running within an AWS EC2 container.

    • Access Key: If you selected Access Key ID / Secret Key for the Authentication Method, the Access Key parameter is displayed. This is the Access Key ID that was generated when you configured the security credentials for your AWS user account; it is also used to access the AWS S3 bucket.

    • Secret Key: If you selected Access Key ID / Secret Key for the Authentication Method, the Secret Key parameter is displayed. This is the Secret Key that was generated when you configured the security credentials for your AWS user account; it is also used to access the AWS S3 bucket.

    • Event Format: Select AWS Cloud Trail JSON. The log source retrieves JSON formatted events.

    • S3 Collection Method: Select SQS Event Notifications.

    • SQS Queue URL: Type the full URL, starting with https://, of the SQS queue that is set up to receive notifications for ObjectCreated events from S3; for example, https://sqs.us-east-1.amazonaws.com/123456789012/MySQSQueueName.

    • Region Name: The region that the SQS queue or the S3 bucket is in; for example, us-east-1, eu-west-1, or ap-northeast-3.

    • Use As A Gateway Log Source: Select this option for the collected events to flow through the JSA Traffic Analysis engine and for JSA to automatically detect one or more log sources.

    • Log Source Identifier Pattern: This option is available when Use As A Gateway Log Source is set to Yes.

      Use this option if you want to define a custom Log Source Identifier for events that are being processed. This field accepts key-value pairs to define the custom Log Source Identifier, where the key is the Identifier Format String and the value is the associated regex pattern. You can define multiple key-value pairs by entering each pattern on a new line. When multiple patterns are used, they are evaluated in order until a match is found and a custom Log Source Identifier can be returned.

    • Show Advanced Options: Select this option if you want to customize the event data.

    • File Pattern: This option is available when you set Show Advanced Options to Yes.

      Type a regex for the file pattern that matches the files that you want to pull; for example, .*?\.json\.gz

    • Local Directory: This option is available when you set Show Advanced Options to Yes.

      The local directory on the Target Event Collector. The directory must exist before the AWS S3 REST API protocol attempts to retrieve events.

    • S3 Endpoint URL: This option is available when you set Show Advanced Options to Yes.

      The endpoint URL that is used to query the AWS REST API. If your endpoint URL is different from the default, type your endpoint URL. The default is http://s3.amazonaws.com

    • Use S3 Path-Style Access: Forces S3 requests to use path-style access.

      This method is deprecated by AWS. However, it might be required when you use other S3-compatible APIs. For example, the https://s3.region.amazonaws.com/bucket-name/keyname path style is used automatically when a bucket name contains a period (.). Therefore, this option is not required, but it can be used.

    • Use Proxy: If JSA accesses the Amazon Web Service by using a proxy, enable Use Proxy.

      If the proxy requires authentication, configure the Proxy Server, Proxy Port, Proxy Username, and Proxy Password fields. If the proxy does not require authentication, configure the Proxy Server and Proxy Port fields.

    • Recurrence: How often the Amazon AWS S3 REST API protocol connects to the Amazon cloud API, checks for new files, and retrieves them if they exist. Every access to an AWS S3 bucket incurs a cost to the account that owns the bucket, so a smaller recurrence value increases the cost.

      Type a time interval to determine how frequently the remote directory is scanned for new event log files. The minimum value is 1 minute. The time interval can include values in hours (H), minutes (M), or days (D); for example, 2H = 2 hours, 15M = 15 minutes.

    • EPS Throttle: The maximum number of events per second that are sent to the flow pipeline. The default is 5000.

      Ensure that the EPS Throttle value is higher than the incoming rate, or data processing might fall behind.

  2. To verify that JSA is configured correctly, review the following table to see an example of a parsed event message.

    Table 5: Amazon AWS CloudTrail sample message supported by Amazon AWS CloudTrail

    Event name: Console Login

    Low-level category: General Audit Event

    Sample log message:

    {"eventVersion":"1.02","userIdentity":{"type":"IAMUser","principalId":"XXXXXXXXXXXXXXXXXXXXX","arn":"arn:aws:iam::<Account_number>:user/xx.xx","accountId":"<Account_number>","userName":"<Username>"},"eventTime":"2016-05-04T14:10:58Z","eventSource":"f.amazonaws.com","eventName":"ConsoleLogin","awsRegion":"us-east-1","sourceIPAddress":"<Source_IP_address>","userAgent":"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.1.1 Safari/537.36","requestParameters":null,"responseElements":{"ConsoleLogin":"Success"},"additionalEventData":{"LoginTo":"www.webpage.com","MobileVersion":"No","MFAUsed":"No"},"eventID":"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx","eventType":"AwsConsoleSignIn","recipientAccountId":"<Account_ID>"}

Configuring an Amazon AWS CloudTrail Log Source that uses an S3 Bucket with a Directory Prefix

If you want to collect AWS CloudTrail logs from a single account and region in an Amazon S3 bucket, configure a log source on the JSA Console so that Amazon AWS CloudTrail can communicate with JSA by using the Amazon AWS S3 REST API protocol with a directory prefix.

If you have log sources in an S3 bucket from multiple regions or multiple accounts, see Configuring an Amazon AWS CloudTrail Log Source that uses an S3 Bucket with an SQS Queue instead of using a directory prefix.

Tip

A log source that uses a directory prefix can retrieve data from only one region and one account, so use a different log source for each region and account. Include the region folder name in the file path for the Directory Prefix value when you configure the log source.

Procedure

  1. Finding an S3 bucket name and directory prefix.

  2. Creating an Identity and Access Management (IAM) user in the AWS Management Console.

  3. Configuring security credentials for your AWS user account.

  4. Adding an Amazon AWS CloudTrail log source on the JSA Console using a directory prefix.

Finding an S3 bucket name and directory prefix

An Amazon administrator must create a user and then apply the AmazonS3ReadOnlyAccess policy in the AWS Management Console. The JSA user can then create a log source in JSA.

Note

Alternatively, you can assign more granular permissions to the bucket. The minimum required permissions are s3:listBucket and s3:getObject.

  1. Click Services.
  2. From the list, select CloudTrail.
  3. From the Trails page, click the name of the trail.
  4. Note the name of the S3 bucket that is displayed in the S3 bucket field.
  5. Click the Edit icon.
  6. Note the location path for the S3 bucket that is displayed underneath the Log file prefix field.

Creating an Identity and Access Management (IAM) user in the AWS Management Console

An Amazon administrator must create a user and then apply the s3:listBucket and s3:getObject permissions to that user in the AWS Management Console. The JSA user can then create a log source in JSA.

Note

The minimum required permissions are s3:listBucket and s3:getObject. You can assign other permissions to the user as needed.
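For illustration, a minimal policy sketch that grants only these permissions might look like the following; the bucket name is a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudTrailBucketRead",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-examples3bucket",
        "arn:aws:s3:::my-examples3bucket/*"
      ]
    }
  ]
}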

  1. Log in to the AWS Management Console as an administrator.
  2. Click Services.
  3. From the list, select IAM.
  4. Click Users.
  5. Click Add user.
  6. Create an Amazon AWS IAM user and then apply the AmazonS3ReadOnlyAccess policy.

Configuring security credentials for your AWS user account

You must have your AWS user account access key and the secret access key values before you can configure a log source in JSA.

  1. Log in to your IAM console.
  2. Select Users from the left navigation pane, and then select your user name from the list.
  3. Click the Security Credentials tab.
  4. In the Access Keys section, click Create access key.
  5. From the window that displays after the access key and corresponding secret access key are created, download the .csv file that contains the keys, or copy and save the keys.

    Note

    Save the Access key ID and Secret access key and use them when you configure a log source in JSA.

    Note

    You can view the Secret access key only when it is created.

Adding an Amazon AWS CloudTrail log source on the JSA Console using a directory prefix

If you want to collect AWS CloudTrail logs from a single account and region in an Amazon S3 bucket, add a log source on the JSA Console so that Amazon AWS CloudTrail can communicate with JSA by using the Amazon AWS S3 REST API protocol with a directory prefix.

  1. Use the following table to set the parameters for an Amazon AWS CloudTrail log source that uses the Amazon AWS S3 REST API protocol and a directory prefix.

    Table 6: Amazon AWS S3 REST API protocol log source parameters

    • Log Source Type: Amazon AWS CloudTrail

    • Protocol Configuration: Amazon AWS S3 REST API

    • Log Source Identifier: Type a unique name for the log source.

      The Log Source Identifier can be any valid value and does not need to reference a specific server. It can be the same value as the Log Source Name. If you have more than one Amazon AWS CloudTrail log source configured, you might want to identify the first log source as awscloudtrail1, the second as awscloudtrail2, and the third as awscloudtrail3.

    • Authentication Method: Select one of the following options:

      • Access Key ID / Secret Key: Standard authentication that can be used from anywhere.

      • Assume IAM Role: Authenticate with keys and then temporarily assume a role for access. This option is available only when you select SQS Event Notifications for the S3 Collection Method. The supported S3 Collection Method is Use a Specific Prefix.

      • EC2 Instance IAM Role: If your managed host is running on an AWS EC2 instance, choosing this option uses the IAM role from the instance metadata that is assigned to the instance for authentication; no keys are required. This method works only for managed hosts that are running within an AWS EC2 container.

    • Access Key ID: If you selected Access Key ID / Secret Key for the Authentication Method, the Access Key ID parameter is displayed. This is the Access Key that was generated when you configured the security credentials for your AWS user account; it is also used to access the AWS S3 bucket.

    • Secret Key: If you selected Access Key ID / Secret Key or Assume IAM Role for the Authentication Method, the Secret Key parameter is displayed. This is the Secret Key that was generated when you configured the security credentials for your AWS user account; it is also used to access the AWS S3 bucket.

    • Event Format: Select AWS Cloud Trail JSON. The log source retrieves JSON formatted events.

    • S3 Collection Method: Select Use a Specific Prefix.

    • Bucket Name: The name of the AWS S3 bucket where the log files are stored.

    • Directory Prefix: The root directory location on the AWS S3 bucket from which the CloudTrail logs are retrieved; for example, AWSLogs/<AccountNumber>/CloudTrail/<RegionName>/

      To pull files from the root directory of a bucket, you must use a forward slash (/) as the Directory Prefix file path.

      Note:

      • Changing the Directory Prefix value clears the persisted file marker. All files that match the new prefix are downloaded in the next pull.

      • The Directory Prefix file path cannot begin with a forward slash (/) unless only the forward slash is used to collect data from the root of the bucket.

      • If the Directory Prefix file path is used to specify folders, do not begin the file path with a forward slash (for example, use folder1/folder2 instead).

    • Region Name: The region that the SQS Queue or the S3 Bucket is in; for example, us-east-1, eu-west-1, or ap-northeast-3.

    • Use as a Gateway Log Source: Select this option for the collected events to flow through the JSA Traffic Analysis engine and for JSA to automatically detect one or more log sources.

    • Log Source Identifier Pattern: This option is available when Use as a Gateway Log Source is set to Yes.

      Use this option if you want to define a custom Log Source Identifier for events that are being processed. This field accepts key-value pairs to define the custom Log Source Identifier, where the key is the Identifier Format String and the value is the associated regex pattern. You can define multiple key-value pairs by entering each pattern on a new line. When multiple patterns are used, they are evaluated in order until a match is found and a custom Log Source Identifier can be returned.

    • Show Advanced Options: Select this option if you want to customize the event data.

    • File Pattern: This option is available when you set Show Advanced Options to Yes.

      Type a regex for the file pattern that matches the files that you want to pull; for example, .*?\.json\.gz

    • Local Directory: This option is available when you set Show Advanced Options to Yes.

      The local directory on the Target Event Collector. The directory must exist before the AWS S3 REST API protocol attempts to retrieve events.

    • S3 Endpoint URL: This option is available when you set Show Advanced Options to Yes.

      The endpoint URL that is used to query the AWS REST API. If your endpoint URL is different from the default, type your endpoint URL. The default is http://s3.amazonaws.com

    • Use S3 Path-Style Access: Forces S3 requests to use path-style access.

      This method is deprecated by AWS. However, it might be required when you use other S3-compatible APIs. For example, the https://s3.region.amazonaws.com/bucket-name/keyname path style is used automatically when a bucket name contains a period (.). Therefore, this option is not required, but it can be used.

    • Use Proxy: If JSA accesses the Amazon Web Service by using a proxy, enable Use Proxy.

      If the proxy requires authentication, configure the Proxy Server, Proxy Port, Proxy Username, and Proxy Password fields. If the proxy does not require authentication, configure the Proxy Server and Proxy Port fields.

    • Recurrence: How often the Amazon AWS S3 REST API protocol connects to the Amazon cloud API, checks for new files, and retrieves them if they exist. Every access to an AWS S3 bucket incurs a cost to the account that owns the bucket, so a smaller recurrence value increases the cost.

      Type a time interval to determine how frequently the remote directory is scanned for new event log files. The minimum value is 1 minute. The time interval can include values in hours (H), minutes (M), or days (D); for example, 2H = 2 hours, 15M = 15 minutes.

    • EPS Throttle: The maximum number of events per second that are sent to the flow pipeline. The default is 5000.

      Ensure that the EPS Throttle value is higher than the incoming rate, or data processing might fall behind.

  2. To verify that JSA is configured correctly, review the following table to see an example of a parsed event message.

    Table 7: Amazon AWS CloudTrail sample message supported by Amazon AWS CloudTrail

    Event name: Console Login

    Low-level category: General Audit Event

    Sample log message:

    {"eventVersion":"1.02","userIdentity":{"type":"IAMUser","principalId":"XXXXXXXXXXXXXXXXXXXXX","arn":"arn:aws:iam::<Account_number>:user/xx.xx","accountId":"<Account_number>","userName":"<Username>"},"eventTime":"2016-05-04T14:10:58Z","eventSource":"f.amazonaws.com","eventName":"ConsoleLogin","awsRegion":"us-east-1","sourceIPAddress":"<Source_IP_address>","userAgent":"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.1.1 Safari/537.36","requestParameters":null,"responseElements":{"ConsoleLogin":"Success"},"additionalEventData":{"LoginTo":"www.webpage.com","MobileVersion":"No","MFAUsed":"No"},"eventID":"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx","eventType":"AwsConsoleSignIn","recipientAccountId":"<Account_ID>"}