IBM DB2

The IBM DB2 DSM collects events from an IBM DB2 mainframe that uses IBM Security zSecure.

When you use a zSecure process, events from the System Management Facilities (SMF) can be transformed into Log Event Extended Format (LEEF) events. These events can be sent in near real-time by using the UNIX Syslog protocol, or JSA can retrieve the LEEF event log files by using the Log File protocol and then process the events. When you use the Log File protocol, you can schedule JSA to retrieve events on a polling interval that you define.
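
For reference, a LEEF 1.0 event consists of a pipe-delimited header (LEEF:Version|Vendor|Product|ProductVersion|EventID|) followed by tab-separated key=value attributes. The following Python sketch parses one such line; the sample event and its attribute values are illustrative placeholders, not actual zSecure output.

    # Minimal sketch: split a LEEF 1.0 line into header fields and attributes.
    # The sample line and its attribute values are illustrative placeholders.
    sample = ("LEEF:1.0|IBM|DB2|1.0|AUDIT|"
              "usrName=DBUSER\tsrc=10.0.0.5\tdevTime=2019-01-01 00:00:00")

    def parse_leef(line):
        """Return the header fields and key=value attributes of a LEEF 1.0 line."""
        parts = line.split("|", 5)
        if len(parts) != 6 or not parts[0].startswith("LEEF:"):
            raise ValueError("not a LEEF 1.0 event")
        event = {
            "leef_version": parts[0].split(":", 1)[1],
            "vendor": parts[1],
            "product": parts[2],
            "product_version": parts[3],
            "event_id": parts[4],
        }
        for pair in parts[5].split("\t"):      # attributes are tab-separated
            if "=" in pair:
                key, value = pair.split("=", 1)
                event[key] = value
        return event

    print(parse_leef(sample))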

To collect IBM DB2 events, complete the following steps:

  1. Verify that your installation meets any prerequisite installation requirements.

  2. Configure your IBM DB2 image to write events in LEEF format.

  3. Create a log source in JSA for IBM DB2.

  4. If you want to create a custom event property for IBM DB2 in JSA, see the Custom Event Properties for IBM Z/OS Tech note.

Before You Begin

Before you can configure the data collection process, you must complete the basic zSecure installation process and complete the post-installation activities to create and modify the configuration.

The following prerequisites are required:

  • You must ensure that the parmlib member IFAPRDxx is enabled for IBM Security zSecure Audit on your z/OS image.

  • The SCKRLOAD library must be APF-authorized.

  • If you are using the direct SMF INMEM real-time interface, you must have the necessary software installed (APAR OA49263) and set up the SMFPRMxx member to include the INMEM keyword and parameters. If you decide to use the CDP interface, you must also have CDP installed and running.

  • You must configure a process to periodically refresh your CKFREEZE and UNLOAD data sets.

  • If you are using the Log File protocol method, you must configure an SFTP, FTP, or SCP server on your z/OS image for JSA to download your LEEF event files.

  • If you are using the Log File protocol method, you must allow SFTP, FTP, or SCP traffic on firewalls that are located between JSA and your z/OS image.

Create a Log Source for Near Real-time Event Feed

The Syslog protocol enables JSA to receive System Management Facilities (SMF) events in near real-time from a remote host.

The following DSMs are supported:

  • IBM z/OS

  • IBM CICS

  • IBM RACF

  • IBM DB2

  • CA Top Secret

  • CA ACF2

If JSA does not automatically detect the log source, add a log source for your DSM on the JSA console.

The following table describes the parameters that require specific values for event collection for your DSM:

Table 1: Log Source Parameters

Parameter                 Value
Log Source type           Select your DSM name from the list.
Protocol Configuration    Syslog
Log Source Identifier     Type a unique identifier for the log source.

Creating a Log Source for Log File Protocol

The Log File protocol enables JSA to retrieve archived log files from a remote host for the IBM z/OS, IBM CICS, IBM RACF, IBM DB2, CA Top Secret, and CA ACF2 DSMs.

Log files are transferred, one at a time, to JSA for processing. The Log File protocol can manage plain text event logs, compressed files, or archives. Archives must contain plain-text files that can be processed one line at a time. Multi-line event logs are not supported by the Log File protocol. IBM z/OS with zSecure writes log files to a specified directory as gzip archives. JSA extracts the archive and processes the events, which are written as one event per line in the file.
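
As a rough illustration of this one-event-per-line model, the following Python sketch expands a gzip archive and yields one event per line; the file name is a placeholder.

    import gzip

    # Sketch only: read a gzip-compressed event log and treat each line as one event.
    # "zOS.20190101000000.gz" is a placeholder file name.
    def events_from_archive(path):
        """Yield one event string per line of a gzip-compressed log file."""
        with gzip.open(path, mode="rt", encoding="utf-8", errors="replace") as handle:
            for line in handle:
                line = line.rstrip("\n")
                if line:                      # skip blank lines
                    yield line

    for event in events_from_archive("zOS.20190101000000.gz"):
        print(event)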

To retrieve these events, you must create a log source that uses the Log File protocol. JSA requires credentials to log in to the system that hosts your LEEF formatted event files and a polling interval.

  1. Log in to JSA.

  2. Click the Admin tab.

  3. Click the Log Sources icon.

  4. Click Add.

  5. In the Log Source Name field, type a name for the log source.

  6. In the Log Source Description field, type a description for the log source.

  7. From the Log Source Type list, select your DSM name.

  8. From the Protocol Configuration list, select Log File.

  9. Configure the Log File protocol parameters.

    The following table describes the parameters that require specific values for the DSM event collection. A conceptual sketch of how the file pattern, recurrence, and processed-file tracking work together appears after this procedure:

    Table 2: Log File Protocol Parameters

    Parameter

    Value

    Log Source Identifier

    Type an IP address, host name, or name to identify the event source. IP addresses or host names are suggested because they allow JSA to associate a log file with a unique event source.

    For example, if your network contains multiple devices, such as multiple z/OS images or a file repository that contains all of your event logs, you must specify a name, IP address, or host name for the image or location that uniquely identifies events for the DSM log source. This specification enables events to be identified at the image or location level in your network.

    Service Type

    From the Service Type list, select the protocol that you want to use when retrieving log files from a remote server. The default is SFTP.

    • SFTP - SSH File Transfer Protocol

    • FTP - File Transfer Protocol

    • SCP - Secure Copy

    The underlying protocol that is used to retrieve log files for the SCP and SFTP service type requires that the server that is specified in the Remote IP or Hostname field has the SFTP subsystem enabled.

    Remote IP or Hostname

    Type the IP address or host name of the device that stores your event log files.

    Remote Port

    Type the TCP port on the remote host that is running the selected Service Type. The valid range is 1 - 65535.

    The standard ports are:

    • FTP - TCP Port 21

    • SFTP - TCP Port 22

    • SCP - TCP Port 22

    If the host for your event files is using a non-standard port number for FTP, SFTP, or SCP, you must adjust the port value.

    Remote User

    Type the user name or user ID necessary to log in to the system that contains your event files.

    • If your log files are on your IBM z/OS image, type the user ID necessary to log in to your IBM z/OS. The user ID can be up to 8 characters in length.

    • If your log files are on a file repository, type the user name necessary to log in to the file repository. The user name can be up to 255 characters in length.

    Remote Password

    Type the password necessary to log in to the host.

    Confirm Password

    Confirm the password necessary to log in to the host.

    SSH Key File

    If you select SCP or SFTP as the Service Type, this parameter gives you the option to define an SSH private key file. When you provide an SSH Key File, the Remote Password field is ignored.

    Remote Directory

    Type the directory location on the remote host from which the files are retrieved, relative to the user account you are using to log in.

    Recursive

    If you want the file pattern to search subfolders in the remote directory, select this check box. By default, the check box is clear.

    If you configure SCP as the Service Type, the Recursive option is ignored.

    FTP File Pattern

    If you select SFTP or FTP as the Service Type, you can configure the regular expression (regex) needed to filter the list of files that are specified in the Remote Directory. All matching files are included in the processing.

    The IBM z/OS mainframe that uses IBM Security zSecure Audit writes event files by using the pattern: <product_name>.<timestamp>.gz

    The FTP file pattern that you specify must match the name that you assigned to your event files. For example, to collect files that start with zOS and end with .gz, type the following code:

    zOS.*\.gz

    Use of this parameter requires knowledge of regular expressions (regex). For more information about regex, see Lesson: Regular Expressions (http://download.oracle.com/javase/tutorial/essential/regex/).

    FTP Transfer Mode

    This option displays only if you select FTP as the Service Type. From the list, select Binary.

    The binary transfer mode is needed for event files that are stored in a binary or compressed format, such as zip, gzip, tar, or tar+gzip archive files.

    SCP Remote File

    If you select SCP as the Service Type, you must type the file name of the remote file.

    Start Time

    Type the time of day you want the processing to begin. For example, type 00:00 to schedule the Log File protocol to collect event files at midnight.

    This parameter functions with the Recurrence value to establish when and how often the Remote Directory is scanned for files. Type the start time, based on a 24-hour clock, in the following format: HH:MM.

    Recurrence

    Type the frequency, beginning at the Start Time, that you want the remote directory to be scanned. Type this value in hours (H), minutes (M), or days (D).

    For example, type 2H if you want the remote directory to be scanned every 2 hours from the start time. The default is 1H.

    Run On Save

    If you want the Log File protocol to run immediately after you click Save, select this check box.

    After the Run On Save completes, the Log File protocol follows your configured start time and recurrence schedule.

    Selecting Run On Save clears the list of previously processed files for the Ignore Previously Processed File parameter.

    EPS Throttle

    Type the number of Events Per Second (EPS) that you do not want this protocol to exceed. The valid range is 100 - 5000.

    Processor

    From the list, select gzip.

    Processors enable event file archives to be expanded and their contents to be processed for events. Files are processed after they are downloaded to JSA. JSA can process files in zip, gzip, tar, or tar+gzip archive format.

    Ignore Previously Processed File(s)

    Select this check box to track and ignore files that are already processed by the Log File protocol.

    JSA examines the log files in the remote directory to determine whether a file is previously processed by the Log File protocol. If a previously processed file is detected, the Log File protocol does not download the file for processing. All files that are not previously processed are downloaded.

    This option applies only to FTP and SFTP service types.

    Change Local Directory?

    Select this check box to define a local directory on your JSA for storing downloaded files during processing.

    It is suggested that you leave this check box clear. When this check box is selected, the Local Directory field is displayed, which gives you the option to configure the local directory to use for storing files.

    Event Generator

    From the Event Generator list, select LineByLine.

    The Event Generator applies more processing to the retrieved event files. Each line is a single event. For example, if a file has 10 lines of text, 10 separate events are created.

  10. Click Save.

  11. On the Admin tab, click Deploy Changes.

    The DSM configuration is complete. If your DSM requires custom event properties, see the Custom Event Properties for IBM Z/OS Tech note.
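
The following Python sketch models how the FTP File Pattern, Ignore Previously Processed File(s), Processor, and Recurrence parameters interact during a polling cycle. It is not JSA's implementation; the list_remote_files and download functions are hypothetical stand-ins for the SFTP, FTP, or SCP transfer, and the file names and interval are illustrative.

    import re
    import time

    # Hypothetical stand-ins for the remote transfer; a real collector uses
    # SFTP, FTP, or SCP against the Remote IP or Hostname and Remote Directory.
    def list_remote_files():
        return ["zOS.20190101000000.gz", "zOS.20190101020000.gz", "readme.txt"]

    def download(name):
        print("downloading " + name)

    FILE_PATTERN = re.compile(r"zOS.*\.gz")   # FTP File Pattern from Table 2
    RECURRENCE_SECONDS = 2 * 60 * 60          # a Recurrence value of 2H
    processed = set()                         # Ignore Previously Processed File(s)

    def poll_once():
        for name in list_remote_files():
            if not FILE_PATTERN.fullmatch(name):
                continue                      # name does not match the file pattern
            if name in processed:
                continue                      # already handled on an earlier poll
            download(name)                    # then expand (gzip) and parse line by line
            processed.add(name)

    while True:                               # one directory scan per recurrence interval
        poll_once()
        time.sleep(RECURRENCE_SECONDS)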

Integrating IBM DB2 Audit Events

The IBM DB2 DSM allows you to integrate your DB2 audit logs into JSA for analysis.

The db2audit command creates a set of comma-delimited text files with a .del extension that defines the scope of audit data for JSA when auditing is configured and enabled. The comma-delimited files that the db2audit command creates include the following files; a sketch for reading them appears after this list:

  • audit.del

  • checking.del

  • context.del

  • execute.del

  • objmaint.del

  • secmaint.del

  • sysadmin.del

  • validate.del
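
Because these .del files are comma-delimited and use double quotation marks as the text delimiter (see the notes in the extraction procedures later in this section), any standard CSV reader can process them. The following Python sketch is illustrative only; the column layout of each record depends on your DB2 version, so the sketch simply prints the fields.

    import csv

    # Sketch: read one of the comma-delimited .del files that db2audit produces.
    # "audit.del" is taken from the list above; the meaning of each column is
    # version-dependent and is not interpreted here.
    with open("audit.del", newline="", encoding="utf-8", errors="replace") as handle:
        reader = csv.reader(handle, delimiter=",", quotechar='"')
        for record in reader:
            print(record)                     # one audit record per row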

To integrate the IBM DB2 DSM with JSA, you must:

  1. Use the db2audit command to ensure that IBM DB2 records security events. See your IBM DB2 vendor documentation for more information.

  2. Extract the DB2 audit data that is contained in the instance to a log file, by following the procedure for your version of IBM DB2.

  3. Use the Log File protocol to pull the output log file into JSA on a scheduled basis. JSA then imports and processes the file.

Extracting Audit Data for DB2 V8.x to V9.4

You can extract audit data when you are using IBM DB2 v8.x to v9.4.

  1. Log in to a DB2 account with SYSADMIN privilege.

  2. Type the following start command to audit a database instance:

    db2audit start

    For example, the start command response might resemble the following output:

    AUD00001 Operation succeeded.

  3. Move the audit records from the instance to the audit log:

    db2audit flush

    For example, the flush command response might resemble the following output:

    AUD00001 Operation succeeded.

  4. Extract the data from the archived audit log and write the data to .del files:

    db2audit extract delasc

    For example, an extract command response might resemble the following output:

    AUD00001 Operation succeeded.

    Note:

    Double-quotation marks (") are used as the default text delimiter in the ASCII files; do not change the delimiter.

  5. Remove non-active records:

    db2audit prune all

  6. Move the .del files to a storage location from which JSA can pull the files. Synchronize the movement of the comma-delimited (.del) files with the file pull interval in JSA. A scripted sketch of this sequence follows this procedure.

    You are now ready to create a log source in JSA to collect DB2 log files.
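
If you want to script the recurring part of this sequence (flush, extract, prune, and move), a minimal Python sketch follows. It assumes that the db2audit command is on the PATH of an account with SYSADMIN privilege; the extract directory and the pickup directory that JSA polls are placeholders that you must adjust for your environment.

    import glob
    import shutil
    import subprocess

    EXTRACT_DIR = "."                     # placeholder: where the .del files are written
    PICKUP_DIR = "/var/log/db2_del"       # placeholder: directory that JSA polls

    # Flush the audit records, extract them to .del files, then prune.
    for args in (["db2audit", "flush"],
                 ["db2audit", "extract", "delasc"],
                 ["db2audit", "prune", "all"]):
        subprocess.run(args, check=True)  # raises if the command does not succeed

    # Move the comma-delimited files to the location that JSA pulls from,
    # synchronized with the Log File protocol polling interval.
    for path in glob.glob(EXTRACT_DIR + "/*.del"):
        shutil.move(path, PICKUP_DIR)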

Extracting Audit Data for DB2 V9.5

You can extract audit data when you are using IBM DB2 v9.5.

  1. Log in to a DB2 account with SYSADMIN privilege.

  2. Move the audit records from the database instance to the audit log:

    db2audit flush

    For example, the flush command response might resemble the following output:

    AUD00001 Operation succeeded.

  3. Archive and move the active instance to a new location for future extraction:

    db2audit archive

    The archive command response includes the name of the archived audit log file, for example, db2audit.instance.log.0.200912171528. You use this file name in the extract step.

    Note:

    In DB2 v9.5 and later, the archive command replaces the prune command.

    The archive command moves the active audit log to a new location, effectively pruning all non-active records from the log. An archive command must be complete before an extract can be executed.

  4. Extract the data from the archived audit log and write the data to .del files:

    db2audit extract delasc from files db2audit.instance.log.0.200912171528

    For example, an extract command response might resemble the following output:

    AUD00001 Operation succeeded.

    Note:

    Double-quotation marks (") are used as the default text delimiter in the ASCII files; do not change the delimiter.

  5. Move the .del files to a storage location from which JSA can pull the files. Synchronize the movement of the comma-delimited (.del) files with the file pull interval in JSA. A scripted sketch of this sequence follows this procedure.

    You are now ready to create a log source in JSA to collect DB2 log files.
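
If you want to script this v9.5 sequence, a minimal Python sketch follows. The archived log file name pattern (db2audit.*.log.*) is inferred from the example file name above and must be verified against your instance; the archive directory and the pickup directory that JSA polls are placeholders.

    import glob
    import shutil
    import subprocess

    ARCHIVE_DIR = "."                     # placeholder: where db2audit archive writes logs
    PICKUP_DIR = "/var/log/db2_del"       # placeholder: directory that JSA polls

    subprocess.run(["db2audit", "flush"], check=True)
    subprocess.run(["db2audit", "archive"], check=True)

    # Extract each archived audit log (for example,
    # db2audit.instance.log.0.200912171528) into comma-delimited .del files.
    for archived in glob.glob(ARCHIVE_DIR + "/db2audit.*.log.*"):
        subprocess.run(["db2audit", "extract", "delasc", "from", "files", archived],
                       check=True)

    # Move the .del files to the location that JSA pulls from, synchronized
    # with the Log File protocol polling interval.
    for path in glob.glob("*.del"):
        shutil.move(path, PICKUP_DIR)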