Unusual Process For a Linux Host

Identifies rare processes that do not usually run on individual hosts, which can indicate execution of unauthorized services, malware, or persistence mechanisms. Processes are considered rare when they run only occasionally compared with other processes running on the host.

Rule type: machine_learning

Rule indices: None

Severity: low

Risk score: 21

Runs every: 15m

Searches indices from: now-45m (Date Math format, see also Additional look-back time)

Maximum alerts per execution: 100

References:

Tags:

  • Domain: Endpoint
  • OS: Linux
  • Use Case: Threat Detection
  • Rule Type: ML
  • Rule Type: Machine Learning
  • Tactic: Persistence

Version: 105

Rule authors:

  • Elastic

Rule license: Elastic License v2

Investigation guide

Triage and analysis

Investigating Unusual Process For a Linux Host

Searching for abnormal Linux processes is a good methodology to find potentially malicious activity within a network. Understanding what is commonly run within an environment and developing baselines for legitimate activity can help uncover potential malware and suspicious behaviors.

This rule uses a machine learning job to detect a Linux process that is rare and unusual for an individual Linux host in your environment.
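
The detection itself is performed by the hosted anomaly detection job, so no query is needed to use this rule. As a rough companion for manual hunting or alert validation, the sketch below uses the official Elasticsearch Python client with a rare_terms aggregation over ECS fields (process.name, filtered to a single host.name) against a hypothetical logs-* index pattern. The connection details, index pattern, host name, time range, and threshold are all placeholders, and this is not the rule's actual logic.

    # Rough hunt for process names that occur rarely on one host.
    # Placeholders throughout; not the ML job's actual modeling.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection

    resp = es.search(
        index="logs-*",  # placeholder index pattern
        size=0,
        query={
            "bool": {
                "filter": [
                    {"term": {"host.name": "web-01"}},          # placeholder host
                    {"term": {"event.category": "process"}},
                    {"range": {"@timestamp": {"gte": "now-7d"}}},
                ]
            }
        },
        aggs={
            "rare_processes": {
                "rare_terms": {"field": "process.name", "max_doc_count": 5}
            }
        },
    )

    for bucket in resp["aggregations"]["rare_processes"]["buckets"]:
        print(bucket["key"], bucket["doc_count"])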

Possible investigation steps

  • Investigate the process execution chain (parent process tree) for unknown processes. Examine their executable files for prevalence, and whether they are located in expected locations (a /proc-based sketch follows this list).
  • Investigate other alerts associated with the user/host during the past 48 hours.
  • Consider the user as identified by the user.name field. Is this program part of an expected workflow for the user who ran this program on this host?
  • Validate the activity is not related to planned patches, updates, network administrator activity, or legitimate software installations.
  • Validate whether the activity has a consistent cadence (for example, if it runs monthly or quarterly), as it could be part of a recurring business process.
  • Examine the arguments and working directory of the process. These may provide indications as to the source of the program or the nature of the tasks it is performing.
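
As a concrete illustration of the first and last steps above, the following sketch walks the parent chain of a live process via /proc on the affected Linux host and prints each ancestor's executable path, working directory, and arguments. The starting PID is a placeholder; in practice it comes from the alert's process.pid field, and root privileges may be needed to read other users' processes.

    # Walk the parent process chain via /proc and print exe, cwd, and arguments.
    import os

    def proc_info(pid: int) -> dict:
        with open(f"/proc/{pid}/cmdline", "rb") as f:
            cmdline = f.read().replace(b"\x00", b" ").decode(errors="replace").strip()
        with open(f"/proc/{pid}/status") as f:
            fields = dict(line.rstrip("\n").split(":\t", 1) for line in f if ":\t" in line)
        return {
            "pid": pid,
            "ppid": int(fields["PPid"]),
            "exe": os.path.realpath(f"/proc/{pid}/exe"),   # resolved executable path
            "cwd": os.path.realpath(f"/proc/{pid}/cwd"),   # working directory
            "cmdline": cmdline,
        }

    pid = 12345  # placeholder: take this from the alert's process.pid field
    while pid > 1:
        info = proc_info(pid)
        print(f"{info['pid']:>7}  {info['exe']}  cwd={info['cwd']}  args={info['cmdline']}")
        pid = info["ppid"]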

False Positive Analysis

  • If this activity is related to new benign software installation activity, consider adding exceptions, preferably with a combination of user and command line conditions (see the sketch after this list).
  • Try to understand the context of the execution by thinking about the user, machine, or business purpose. A small number of endpoints, such as servers with unique software, might appear unusual but satisfy a specific business need.
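
As an illustration of the first point, the sketch below creates a rule exception that combines user and command line conditions through Kibana's Exceptions API. Everything here is an assumption to adapt: the Kibana URL, API key, exception list ID, and the example user and command line values. Check the endpoint and payload shape against the Exceptions API documentation for your Kibana version, or add the same conditions through the rule's "Add rule exception" flyout instead.

    # Sketch: add an exception item combining user.name and process.command_line.
    # Endpoint and payload follow the Kibana Exceptions API; verify for your version.
    import requests

    KIBANA = "https://kibana.example.com:5601"   # placeholder
    HEADERS = {
        "kbn-xsrf": "true",
        "Authorization": "ApiKey <api-key>",     # placeholder credentials
    }

    item = {
        "list_id": "my-linux-ml-exceptions",     # placeholder exception list ID
        "name": "Benign installer run by svc_deploy",
        "description": "Known software installation workflow",
        "type": "simple",
        "namespace_type": "single",
        "entries": [
            {"field": "user.name", "operator": "included", "type": "match", "value": "svc_deploy"},
            {
                "field": "process.command_line",
                "operator": "included",
                "type": "match",
                "value": "/opt/vendor/installer --silent",
            },
        ],
    }

    resp = requests.post(f"{KIBANA}/api/exception_lists/items", headers=HEADERS, json=item)
    resp.raise_for_status()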

Response and Remediation

  • Initiate the incident response process based on the outcome of the triage.
  • Isolate the involved hosts to prevent further post-compromise behavior.
  • If the triage identified malware, search the environment for additional compromised hosts.
  • Implement temporary network rules, procedures, and segmentation to contain the malware.
  • Stop suspicious processes (see the sketch after this list).
  • Immediately block the identified indicators of compromise (IoCs).
  • Inspect the affected systems for additional malware backdoors like reverse shells, reverse proxies, or droppers that attackers could use to reinfect the system.
  • Remove and block malicious artifacts identified during triage.
  • Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords for these accounts and other potentially compromised credentials, such as email, business systems, and web services.
  • Run a full antimalware scan. This may reveal additional artifacts left in the system, persistence mechanisms, and malware components.
  • Determine the initial vector abused by the attacker and take action to prevent reinfection through the same vector.
  • Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
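
For the containment steps above, the sketch below shows one way to record and stop a suspicious process on the affected host: hash the on-disk executable for later IoC blocking, then terminate the process. The PID is a placeholder taken from the alert, and whether to kill immediately rather than suspend and capture volatile evidence first is a judgment call for your incident response process.

    # Hash the suspicious binary (for IoC tracking), then terminate the process.
    import hashlib
    import os
    import signal

    pid = 12345  # placeholder: PID of the suspicious process from the alert
    exe_link = f"/proc/{pid}/exe"

    # Read through the /proc link so the hash works even if the file was unlinked.
    with open(exe_link, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    print(f"{os.path.realpath(exe_link)} sha256={sha256}")

    os.kill(pid, signal.SIGSTOP)   # suspend first so the process cannot clean up after itself
    # ... collect any volatile evidence needed (memory, open sockets) ...
    os.kill(pid, signal.SIGKILL)   # then terminate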

Setup

This rule requires the installation of the associated Machine Learning jobs, as well as data coming in from one of the following integrations:

  • Elastic Defend
  • Auditd Manager

Anomaly Detection Setup

Once the rule is enabled, the associated Machine Learning job starts automatically. You can view the linked Machine Learning job under the "Definition" panel of the detection rule. If the job does not start due to an error, the error must be resolved before the job can run. For more details on setting up anomaly detection jobs, refer to the helper guide.
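
If you prefer to check or start the job outside the Kibana UI, a minimal sketch using the official Elasticsearch Python client follows. The connection details and job ID are placeholders; use the exact job ID shown under the rule's "Definition" panel.

    # Check the state of the ML job behind this rule and open it if necessary.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection
    job_id = "rare_process_by_host_linux"  # placeholder: copy the real ID from the rule

    stats = es.ml.get_job_stats(job_id=job_id)
    state = stats["jobs"][0]["state"]
    print(f"{job_id}: {state}")

    if state != "opened":
        es.ml.open_job(job_id=job_id)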

Elastic Defend Integration Setup

Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app.

Prerequisite Requirements:

  • Fleet is required for Elastic Defend.
  • To configure Fleet Server, refer to the documentation.

The following steps should be executed to add the Elastic Defend integration to your system:

  • Go to the Kibana home page and click "Add integrations".
  • In the query bar, search for "Elastic Defend" and select the integration to see more details about it.
  • Click "Add Elastic Defend".
  • Configure the integration name and optionally add a description.
  • Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads".
  • Select a configuration preset. Each preset comes with different default settings for Elastic Agent; you can further customize these later by configuring the Elastic Defend integration policy (see the helper guide).
  • We suggest selecting "Complete EDR (Endpoint Detection and Response)" as the configuration preset, which provides "All events; all preventions".
  • Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. For more details on Elastic Agent configuration settings, refer to the helper guide.
  • Click "Save and Continue".
  • To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. For more details on Elastic Defend, refer to the helper guide. A quick verification sketch follows this list.
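
Once the agent is installed, a quick way to confirm that process events are arriving is to count recent documents in the Elastic Defend process data stream. The sketch below assumes the default logs-endpoint.events.process-* data stream and placeholder connection details.

    # Count Elastic Defend process events received in the last 15 minutes.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection

    resp = es.count(
        index="logs-endpoint.events.process-*",
        query={"range": {"@timestamp": {"gte": "now-15m"}}},
    )
    print("process events in the last 15 minutes:", resp["count"])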

Auditd Manager Integration Setup

The Auditd Manager Integration receives audit events from the Linux Audit Framework which is a part of the Linux kernel. Auditd Manager provides a user-friendly interface and automation capabilities for configuring and monitoring system auditing through the auditd daemon. With auditd_manager, administrators can easily define audit rules, track system events, and generate comprehensive audit reports, improving overall security and compliance in the system.

The following steps should be executed to add the Elastic Agent integration "auditd_manager" to your system:

  • Go to the Kibana home page and click “Add integrations”.
  • In the query bar, search for “Auditd Manager” and select the integration to see more details about it.
  • Click “Add Auditd Manager”.
  • Configure the integration name and optionally add a description.
  • Review optional and advanced settings accordingly.
  • Add the newly installed “auditd manager” to an existing or a new agent policy, and deploy the agent on the Linux systems from which you want to collect audit events.
  • Click “Save and Continue”.
  • For more details on the integration refer to the helper guide.

Rule Specific Setup Note

Auditd Manager subscribes to the kernel and receives events as they occur without any additional configuration. However, if more advanced configuration is required to detect specific behavior, audit rules can be added to the integration in either the "audit rules" configuration box or the "auditd rule files" box, which reads the audit rules from a specified file. No additional audit rules are required for this detection rule.

Framework: MITRE ATT&CK™