Setting Up Security Director Log Collector
You must use Log Collector Release 20.1R1 builds with Security Director Release 20.3R1; there are no Log Collector builds for the 20.3R1 release. When you upgrade Security Director from Release 19.3R1, 19.4R1, or 20.1R1 to Release 20.3R1, you must use Log Collector Release 20.1R1.
A single Security Director image installs Security Director, Log Director, and Security Director Logging and Reporting applications.
The prerequisites for setting up Log Collector are as follows:
Make sure that the JA2500 appliance or the VM is running a supported release of Junos Space Network Management Platform and Junos Space Security Director.
The Junos Space Network Management Platform must be active and functioning.
The following ports are required for Log Collector to function; they must be open between the Junos Space server and the Log Collector:
Port 8004 (TCP)—For communication between the Junos Space server and the Log Collector node agent.
Port 8003 (TCP)—For log data queries.
Port 9200 (TCP)—For Log Storage nodes.
Port 9300 (TCP)—For communication across the Elasticsearch cluster.
Port 4567 (TCP)—For communication between the Log Receiver node and Log Storage node.
Port 514 (TCP)—For receiving system logs.
Port 514 (UDP)—For receiving system logs.
Port 22 (TCP)—For SSH connectivity.
Port 4514 (TCP)—For TCP forwarding.
The following ports are not required for Log Collector to function, but they are used by other peripheral services:
Port 5671 (TCP)
Port 32803 (TCP)
Port 32769 (UDP)
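The required TCP ports listed above can be spot-checked from the Junos Space server before deployment. The following sketch is illustrative, not part of the product: the `check_lc_ports` helper name and the example address are assumptions, and UDP 514 cannot be probed this way.

```shell
#!/bin/bash
# Probe each TCP port that Log Collector requires, from the Junos Space server.
# Reports "open" if a TCP connection succeeds within 2 seconds.
check_lc_ports() {
    local host=$1 port
    for port in 8004 8003 9200 9300 4567 514 22 4514; do
        if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
            echo "port $port/tcp: open"
        else
            echo "port $port/tcp: blocked or closed"
        fi
    done
}

# Example (replace with your Log Collector address):
# check_lc_ports 192.0.2.10
```

A blocked port here usually points at an intermediate firewall rule rather than the Log Collector itself.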
See the following topics for information about deploying Log Collector.
Specifications for Deploying a Log Collector Virtual Machine
You can use the tables below to decide whether you require a single Log Collector or multiple Log Collectors.
The following tables describe the VM configuration with solid-state drives (SSD) and with non-SSD drives for different Security Director releases. They list the required specifications for deploying a Log Collector VM at various events-per-second (eps) rates. The eps rates shown in the tables were achieved in a testing environment; your results might differ, depending on your configuration and network environment.
Table 1: With Solid State Drives (SSD) for Security Director Release 15.2R1 and 15.2R2
Setup | Log Receiver Nodes | Log Receiver CPU | Log Receiver Memory | Log Indexer Nodes | Log Indexer CPU | Log Indexer Memory | Log Query CPU | Log Query Memory | Cluster Manager CPU | Cluster Manager Memory | Total Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|
4K eps | 1 | 4 | 16 GB | - | - | - | - | - | - | - | 1 |
7K eps | 1 | 4 | 16 GB | 1 | 4 | 32 GB | - | - | - | - | 2 |
10K eps | 2 | 8 | 32 GB | 1 | 8 | 32 GB | - | - | - | 16 GB | 3 |
20K eps | 2 | 16 | 32 GB | 3 | 16 | 32 GB | 8 | 16 GB | 4 | 16 GB | 6 |
Table 2: With Non-Solid State Drives for Security Director Release 15.2R1 and 15.2R2
Setup | Log Receiver Nodes | Log Receiver CPU | Log Receiver Memory | Log Indexer Nodes | Log Indexer CPU | Log Indexer Memory | Log Query CPU | Log Query Memory | Cluster Manager CPU | Cluster Manager Memory | Total Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|
2K eps | 1 | 4 | 16 GB | - | - | - | - | - | - | - | 1 |
5K eps | 1 | 8 | 16 GB | 1 | 4 | 32 GB | - | - | - | - | 2 |
10K eps | 2 | 8 | 32 GB | 1 | 8 | 32 GB | - | - | - | 16 GB | 3 |
20K eps | 2 | 16 | 32 GB | 4 | 16 | 32 GB | 8 | 16 GB | 4 | 16 GB | 8 |
Table 3: With Solid State Drives (SSD) for Security Director Release 16.1 and Later
Setup | Log Receiver Nodes | Log Receiver CPU | Log Receiver Memory | Log Storage Nodes | Log Storage CPU | Log Storage Memory | Total Nodes |
---|---|---|---|---|---|---|---|
4K eps | 1 | 4 | 16 GB | - | - | - | 1 |
10K eps | 1 | 8 | 32 GB | 1 | 8 | 64 GB | 2 |
20K eps | 1 | 8 | 32 GB | 2 | 8 | 64 GB | 3 |
Table 4: With Non-Solid State Drives for Security Director Release 16.1 and Later
Setup | Log Receiver Nodes | Log Receiver CPU | Log Receiver Memory | Log Storage Nodes | Log Storage CPU | Log Storage Memory | Total Nodes |
---|---|---|---|---|---|---|---|
3K eps | 1 | 4 | 16 GB | - | - | - | 1 |
10K eps | 1 | 8 | 32 GB | 2 | 8 | 64 GB | 3 |
20K eps | 1 | 8 | 32 GB | 3 | 8 | 64 GB | 4 |
VMs with 64 GB memory provide better stability for log storage.
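As an illustration only, the SSD sizing in Table 3 can be expressed as a small helper that maps a target eps rate to the tested node layout. The function name is hypothetical; the thresholds and node counts are taken directly from Table 3, and rates above 20K eps were not tested.

```shell
# Map a target events-per-second rate to the node layout tested in Table 3
# (SSD, Security Director Release 16.1 and later).
suggest_nodes_ssd() {
    local eps=$1
    if   [ "$eps" -le 4000 ];  then echo "1 all-in-one node (4 CPU, 16 GB)"
    elif [ "$eps" -le 10000 ]; then echo "1 receiver (8 CPU, 32 GB) + 1 storage node (8 CPU, 64 GB)"
    elif [ "$eps" -le 20000 ]; then echo "1 receiver (8 CPU, 32 GB) + 2 storage nodes (8 CPU, 64 GB)"
    else echo "beyond tested rates in Table 3"
    fi
}
```

For non-SSD deployments, substitute the thresholds and counts from Table 4.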
Deploying Log Collector VM on a VMware ESX Server
Install the VMware vSphere or vCenter client on your local system.
To deploy Log Collector VM on a VMware ESX server:
- Download the latest Log Collector open virtual appliance (OVA) image from the download site.
- Using VMware vSphere or vCenter client, deploy the Log Collector OVA image onto the VMware ESX server.
- Edit the CPU and memory settings as required for your target events per second (eps) rate.
Note: For Security Director Releases 15.2R1 and 15.2R2, see Table 1 and Table 2. For Security Director Release 16.1R1 and later, see Table 3 and Table 4.
- Power on the Log Collector VM.
- Use the default credentials to log in to Log Collector. The username is root and the password is juniper123.
- Change the default password of the VM.
- Select one of the following node types:
Enter 1 to deploy Log Collector as an All-in-One node.
Enter 2 to deploy Log Collector as a Log Receiver node.
Enter 3 to deploy Log Collector as a Log Storage node.
- Configure your network settings.
After setting up the Log Collector, add the Log Collector node to Security Director. See Adding Log Collector to Security Director.
With VMware vSphere Client version 5.5 and earlier, you cannot edit the settings of virtual machines of hardware version 10 or later. See the VMware Knowledge Base.
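If you prefer a scripted deployment over the vSphere client, VMware's ovftool can deploy the OVA from the command line. The sketch below only assembles and prints the command for review; the host, datastore, and image names are placeholders for your environment, not values from this guide.

```shell
# Assemble an ovftool command to deploy the Log Collector OVA unattended.
# All values below are placeholders -- substitute your own.
ESX_HOST="esx.example.net"
DATASTORE="datastore1"
OVA="Log-Collector.ova"

CMD="ovftool --acceptAllEulas --name=log-collector --datastore=$DATASTORE $OVA vi://root@$ESX_HOST"
echo "$CMD"   # review the command, then run it manually
```

After deployment, you still need to adjust CPU and memory per the sizing tables and complete the console setup steps above.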
Deploying Log Collector VM on a KVM Server
Starting in Security Director Release 15.2R2, you can deploy Log Collector VM on a kernel-based virtual machine (KVM) server installed on CentOS Release 6.5.
Before You Begin
The KVM server and supported packages must be installed on a machine running CentOS Release 6.5 with the required kernels and packages. See http://wiki.centos.org/HowTos/KVM.
Install the Virtual Machine Manager (VMM) client on your local system.
Configure the bridge interface according to your environment. You must have at least two static IP addresses that are unused.
We recommend that you install the Log Collector virtual machine on a KVM server by using VMM.
To deploy Log Collector VM on a KVM server:
- Download the Log Collector KVM image from the download site onto the KVM host and extract the tgz file, which contains the system.qcow2 and data.qcow2 files.
- Launch the VMM client by typing virt-manager in your terminal, or from the Applications menu, click System Tools and select Virtual Machine Manager.
The Virtual Machine Manager window appears.
- Select File > New Virtual Machine to install a new virtual machine.
The new VM dialog box appears.
- In the new VM dialog box:
- Select Import existing disk image and click Next.
- Click Browse and then select the system.qcow2 file.
- Select Linux as the operating system and the version as Red Hat Enterprise Linux 6.6 or later.
- Click Forward.
- Set the CPU value to 4, and then select or enter the minimum memory (RAM) value of 16384 MB.
- Click Forward.
- Edit the Name field, select or set up the network for each bridge or interface configured, and select the Customize Configuration Before Install option.
- Click Finish.
- Click Add Hardware, and then select the Storage option from the left navigation of the Add New Virtual Hardware window.
- On the Storage window:
- Click Select managed or other existing storage, and then choose the data.qcow2 file.
- Under Advanced Options, select qcow2 as the storage format.
- Click Finish.
- Select one of the following node types:
Enter 1 to deploy Log Collector as an All-in-One node.
Enter 2 to deploy Log Collector as a Log Receiver node.
Enter 3 to deploy Log Collector as a Log Storage node.
- Click Begin Installation to start the Log Collector VM.
- After the installation, you can configure the IP address, name server, and time zone.
After setting up the Log Collector, add the Log Collector node to Security Director. See Adding Log Collector to Security Director.
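The VMM steps above can also be performed non-interactively with virt-install, which ships with the virt-manager tooling. This is a sketch only: the bridge name and disk paths are placeholders, and the command is printed rather than executed so you can review it first.

```shell
# Build a virt-install command equivalent to the VMM import steps above.
# br0 and the qcow2 paths are placeholders for your environment.
args=(virt-install --import --name log-collector
      --ram 16384 --vcpus 4
      --disk path=system.qcow2,format=qcow2
      --disk path=data.qcow2,format=qcow2
      --os-variant rhel6.6
      --network bridge=br0
      --noautoconsole)
printf '%s ' "${args[@]}"; echo   # print the assembled command for review
```

Running the printed command creates and starts the VM; the node-type selection and network configuration still happen on the VM console.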
Deploying Log Collector on a JA2500 Appliance
Starting in Security Director Release 15.2R2, you can deploy Log Collector on a JA2500 appliance. To install the Log Collector on the JA2500 appliance using a USB flash drive, you must create a bootable USB flash drive, install the Log Collector node using the USB flash drive, and add the Log Collector node to Security Director.
Before creating a bootable USB flash drive, download and install Rufus software on your system.
To create a bootable USB flash drive:
- Plug the USB flash drive into the USB port of a laptop or PC.
- Download the Log Collector ISO image from the download site to your laptop or PC.
If you are using a computer with Microsoft Windows as the operating system, follow these steps to create a bootable USB flash drive:
- Open Rufus software installed on your computer.
The Rufus window opens.
- Select the USB storage device from the Device list.
- In the Format options section, click the open or browse icon next to the Create a bootable disk using option and select the ISO image downloaded in Step 2.
- Click Start.
A progress bar indicates the status of the bootable USB flash drive creation. A success message is displayed once the process completes successfully.
- Click Exit to exit the window.
- Eject the USB flash drive and unplug it from the computer.
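On a Linux workstation, where Rufus is not available, the same bootable USB flash drive can be created with dd. This sketch is guarded so it only writes when USB_DEV is explicitly set; the device path and ISO filename are placeholders, not values from this guide.

```shell
# Write the Log Collector ISO to a USB device with dd.
# WARNING: dd overwrites the target device irrecoverably -- verify it with lsblk first.
ISO="Log-Collector.iso"
USB_DEV="${USB_DEV:-}"   # for example, /dev/sdb (the whole device, not a partition)

if [ -b "$USB_DEV" ]; then
    dd if="$ISO" of="$USB_DEV" bs=4M status=progress && sync
else
    echo "Set USB_DEV to your USB device before running."
fi
```

The sync call ensures all blocks are flushed before you unplug the drive.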
To install Log Collector using a USB flash drive:
- Power down the JA2500 appliance.
- Plug the USB flash drive into the USB port of the JA2500 appliance.
- Perform the following steps to access the JA2500 appliance boot menu:
- Power on the JA2500 appliance.
- While the JA2500 appliance powers on, press the key mapped to send the DEL character in the terminal emulation utility.
Note: Typically, the Backspace key is mapped to send the DEL character.
- The boot menu appears after a few minutes.
- Ensure that the USB boot option is at the top of the appliance boot-priority order.
If USB KEY: CBM USB 2.0 - (USB 2.0) is not at the top of the list, perform the following steps:
- Use the Down Arrow key to select USB KEY: CBM USB 2.0 - (USB 2.0), and use the + key to move the entry to the top of the list.
- Press the F4 key to save your changes and exit the BIOS setup.
- After verifying the BIOS settings, power off the JA2500 appliance.
- Power on the appliance again. The boot menu displays the following options:
- Install Log Collector on Juniper JA2500 Hardware.
- Boot from local drive.
- Select Install Log Collector on Juniper JA2500 Hardware.
- Power off the appliance once the installation is completed.
- Restart the appliance and select Boot from local drive.
- Use the default credentials to log in to the JA2500 appliance. The username is root and the password is juniper123.
- Change the default password.
- After logging in, select the desired Log Collector node type:
Enter 1 to deploy Log Collector as an All-in-One node.
Enter 2 to deploy Log Collector as a Log Receiver node.
Enter 3 to deploy Log Collector as a Log Storage node.
- Configure the IP address and gateway.
- Configure settings for the DNS name server and the NTP server.
After setting up the Log Collector, add the Log Collector node to Security Director. See Adding Log Collector to Security Director.
Installing Integrated Log Collector on a JA2500 Appliance or Junos Space Virtual Appliance
Starting in Security Director Release 16.1R1, you can install an integrated Log Collector on a JA2500 appliance or a Junos Space virtual appliance. The integrated Log Collector is installed on a Junos Space node (JA2500 appliance or virtual appliance) and works as both the Log Receiver node and the Log Storage node.
Integrated Log Collector on a JA2500 appliance or Junos Space virtual appliance supports only 500 eps.
Before You Begin
The integrated Log Collector uses ports 9200, 514, and 4567.
Junos Space Network Management Platform must be configured with Ethernet Interface eth0 and management IP addresses.
OpenNMS must be disabled on Junos Space Network Management Platform.
Ethernet Interface eth0 on the Junos Space Network Management Platform must be connected to the network to receive logs.
The /var directory must have a minimum of 500 GB of disk space for the integrated Log Collector installation to complete.
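The /var prerequisite can be checked up front with a short script, mirroring the installer's own "Insufficient HDD size" check. The helper name is illustrative, not part of the product.

```shell
# Report free space on /var and compare it with the 500-GB requirement.
var_free_kb() { df -Pk "$1" | awk 'NR==2 {print $4}'; }

free_kb=$(var_free_kb /var)
required_kb=$((500 * 1024 * 1024))   # 500 GB expressed in KB
if [ "$free_kb" -ge "$required_kb" ]; then
    echo "/var has enough free space for the integrated Log Collector"
else
    echo "/var has only $((free_kb / 1024 / 1024)) GB free; expand it before installing"
fi
```

If the check fails on a virtual appliance, follow the disk-expansion procedure later in this section before running the installer.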
Table 5 shows the specifications for installing the integrated Log Collector on a JA2500 appliance.
Table 5: Specifications for Installing an Integrated Log Collector on a JA2500 appliance
Component | Specification |
---|---|
Memory | 8 GB. Log Collector uses 8 GB of the available 32-GB system RAM. |
Disk space | 500 GB. This is used from the existing JA2500 appliance disk space. |
CPU | Single core |
These specifications are used internally by the integrated Log Collector on JA2500 appliance.
Table 6 shows the specifications for installing the integrated Log Collector on Junos Space virtual appliance.
Table 6: Specifications for Installing an Integrated Log Collector on a Junos Space Virtual Appliance
Component | Specification |
---|---|
Memory | 8 GB. If the integrated Log Collector is running on the Junos Space virtual appliance, we recommend that you add 8 GB of RAM to maintain Junos Space performance. It uses 8 GB of the total system RAM. |
Disk space | 500 GB. A minimum of 500 GB of free space is required; you can add more disk space as needed. |
CPU | 2 CPUs of 3.20 GHz |
These specifications are used internally by the integrated Log Collector running on the Junos Space virtual appliance.
To install an integrated Log Collector on a JA2500 appliance or virtual appliance:
- Download the integrated Log Collector script from the download site.
- Copy the integrated Log Collector script to a JA2500 appliance or virtual appliance.
- Connect to the CLI of JA2500 appliance or virtual appliance with admin privileges.
- Navigate to the location where you have copied the integrated Log Collector script.
- Change the file permission using the following command:
chmod +x Integrated-Log-Collector-xx.xxx.xxx.sh
For example,
chmod +x Integrated-Log-Collector-20.1R1.xxx.sh
- Install the integrated Log Collector script using the following command:
./Integrated-Log-Collector-xx.xxx.xxx.sh
For example, ./Integrated-Log-Collector-20.1R1.xxx.sh
The installation stops if the following error message is displayed while installing the integrated Log Collector on the virtual appliance. You must expand the virtual appliance disk size to proceed with the installation.
ERROR: Insufficient HDD size, Please upgrade the VM HDD size to minimum 500 GB to install Log Collector
To expand the hard disk size for the Junos Space virtual appliance:
- Add a 500 GB capacity hard disk on the Junos Space virtual appliance through VMware vSphere client.
- Connect to the console of the Junos Space virtual appliance through SSH.
- Select Expand VM Drive Size.
- Enter the admin password and expand /var with 500 GB.
- Once /var is expanded, you are prompted for any further HDD expansion. Select No to reboot the system.
Note: Junos Space Network Management Platform must be active and functioning. You must be able to log in to the Junos Space Network Management Platform and Security Director user interfaces before attempting to run the integrated Log Collector setup script again.
- After the disk size is expanded and the Junos Space Network Management Platform and Security Director user interfaces are accessible, run the following command:
./Integrated-Log-Collector-xx.xxx.xxx.sh
For example, ./Integrated-Log-Collector-20.1R1.xxx.sh
The installation stops if the following error message is displayed while installing the integrated Log Collector on a JA2500 appliance or virtual appliance. You must disable OpenNMS by following the steps mentioned in the error message to proceed with the installation.
ERROR: Opennms is running...
Please try to disable opennms as described below or in document and retry Log Collector installation...
STEPS: Login to Network Management Platform --> Administration --> Applications
Right Click on Network Management Platform --> Manage Services -> Select Network Monitoring and click Stop
Service Status should turn to Disabled
After OpenNMS is disabled, run the following command:
./Integrated-Log-Collector-xx.xxx.xxx.sh
For example, ./Integrated-Log-Collector-20.1R1.xxx.sh
When the integrated Log Collector is installed on the JA2500 appliance or virtual appliance, the following message is displayed:
Shutting down system logger: [ OK ]
Starting jingest ... jingest started.
{"log-collector-node": {"id":376,"ip-address":"x.x.x.x","priority":0,"node-type":"INTEGRATED","cpu-usage":0,"memory-usage":0,"fabric-id":0,"display-name":"Integrated","timestamp":0}}

After the installation is complete, a logging node is automatically added in Administration > Logging Management > Logging Nodes.
Configuring Log Collector Using Scripts
You can configure Log Collector by using the scripts described in Table 7. At the Log Collector CLI, type jnpr- and press Tab to list the available scripts:

[root@NWAPPLIANCE25397 ~]# jnpr-<TAB>
jnpr-configure-node    jnpr-configure-ntp    jnpr-configure-timezone    jnpr-network-script    healthcheckOSLC
Table 7: Description of the Log Collector Script
Script | Description |
---|---|
jnpr-configure-node | Master script for the node configuration and network settings. |
jnpr-configure-ntp | Script for NTP configuration. |
jnpr-configure-timezone | Script for time zone configuration. |
jnpr-network-script | Script for interface configuration. |
healthcheckOSLC | Script for checking issues with the logging infrastructure. |
Configure the IP address of Log Collector nodes only by using the configuration script. If an IP address is configured manually, the Log Collector node cannot be added to Security Director.
Figure 1 shows the configuration options.

Starting in Log Collector Release 19.3, the Update Log Collector database password option is mandatory in the configuration CLI. You cannot exit the configuration CLI without updating the password.
When you upgrade a Log Collector application to Release 19.3 or later and execute configureNode.sh, the configuration CLI prompts you to update the Log Collector database password; you cannot exit the CLI until the password is updated. The password change is required only on the first execution of the configureNode.sh script after a successful upgrade of the Log Collector application. On subsequent executions of the script, changing the password is not mandatory.
While updating Log Collector database password:
For an All-in-One node setup, the password update takes effect as soon as you change the password through the CLI.
For a distributed Log Collector setup, you can update the Log Collector database password on the receiver through the configuration CLI. The update operation succeeds, but for the change to be reflected in the cluster you must add at least one storage node. Add the Log Collector to Security Director only after the password update is reflected in the cluster.
In an existing distributed Log Collector setup, do not modify the Log Collector database password if no storage nodes are available; doing so creates a conflict in the cluster.
Expanding the Size of the VM Disk for Log Collector
You can increase the disk size of your virtual machine (VM) when the log files created by your application become too large.
The default shipping configuration of your VM includes 500 GB of disk space.
Before You Begin
Ensure that the VM is powered off.
Ensure that the VM has no snapshots.
To expand the disk size using VMware vSphere or vCenter:
- Deploy the Log Collector VM on a VMware ESX server.
- Using the vSphere client (either the desktop client or the Web client), right-click the VM.
- Click Edit Settings.
- Set the Hard disk 2 option to 600 GB. The default disk configuration is 12 GB for hard disk 1 and 500 GB for hard disk 2.
- Click Save.
- Power on the VM.
To verify and apply the configuration:
- Log in as a root user from the Log Collector VM.
- Check the current file system state by entering the df -h command.
Filesystem                          Size  Used  Available  Use%  Mounted On
/dev/mapper/data1_vg-elasticsearch  500G  267M  500G       1%    /var/lib/elasticsearch
- Run the /opt/jnpr/bin/resizeFS.sh script.
You see the following sample output:
[root@LOG-COLLECTOR ~]# /opt/jnpr/bin/resizeFS.sh
Physical volume "/dev/sdb" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Extending logical volume elasticsearch to 600.00 GB
Logical volume elasticsearch successfully resized
meta-data=/dev/mapper/data1_vg-elasticsearch isize=256  agcount=4, agsize=32767744 blks
         =                                   sectsz=512 attr=2, projid32bit=0
data     =                                   bsize=4096 blocks=131070976, imaxpct=25
         =                                   sunit=0    swidth=0 blks
naming   =version 2                          bsize=4096 ascii-ci=0
log      =internal                           bsize=4096 blocks=63999, version=2
         =                                   sectsz=512 sunit=0 blks, lazy-count=1
realtime =none                               extsz=4096 blocks=0, rtextents=0
data blocks changed from 131070976 to 157285376
- Enter the df -h command again.
Verify the expanded disk space, which should now be 600 GB.
Filesystem                          Size  Used  Available  Use%  Mounted On
/dev/mapper/data1_vg-elasticsearch  600G  267M  600G       1%    /var/lib/elasticsearch
You must restart the VM after editing the disk size and then execute the resizeFS.sh script.
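If you only need the reported size rather than the full df output, the Size column can be pulled out directly. This is a convenience sketch; the `fs_size` helper is illustrative and not part of Log Collector.

```shell
# Print just the Size column for a given filesystem, as reported by df.
fs_size() { df -Ph "$1" | awk 'NR==2 {print $2}'; }

# On the Log Collector, after running resizeFS.sh:
#   fs_size /var/lib/elasticsearch    # expect 600G
```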
For more information about troubleshooting issues while setting up Log Collector, see the following:
To learn more about enabling vMotion and fault tolerance logging, see Enabling vMotion and Fault tolerance logging.
To learn more about VMware chassis cluster and fault tolerance, see vSphere Availability.
To learn more about configuring vMotion, see Creating a VMkernel port and enabling vMotion on an ESXi/ESX host and Set Up a Cluster for vMotion.