Shashank K S

Streamlining Security: Integrating Amazon Bedrock with Elastic

This article will guide you through the process of setting up the Amazon Bedrock integration and enabling Elastic's prebuilt detection rules to streamline your security operations.

Preamble

In the ever-evolving landscape of cloud computing, maintaining robust security while ensuring compliance is a critical challenge for organizations of all sizes. As businesses increasingly adopt the cloud, the complexity of managing and securing data across various platforms grows exponentially.

Amazon Bedrock, with its powerful foundation of machine learning and AI services, offers a scalable, secure environment for organizations to develop and deploy intelligent applications. However, to fully harness the potential of these innovations, it’s essential to implement a streamlined approach to security and compliance.

Integrating Elastic with Amazon Bedrock can significantly enhance security monitoring and compliance management within your cloud environment. This integration leverages Elastic’s search, observability, and security capabilities to optimize how you manage and secure applications and data hosted on Amazon Bedrock.

Elastic’s security information and event management (SIEM) capabilities can be used to analyze logs and monitor events generated by applications running on Amazon Bedrock. This allows for the detection of potential security threats in real-time and automated response actions to mitigate risks.

This article will guide you through the process of setting up Amazon Bedrock integration and enabling our prebuilt detection rules to streamline your security operations. We will cover the following key aspects:

  1. Prerequisites for Elastic Amazon Bedrock Integration: Understanding the core requirements for setting up Elastic Amazon Bedrock integration for cloud security.
  2. Setting Up Amazon Bedrock Integration: Step-by-step instructions to set up Amazon Bedrock in your existing AWS infrastructure.
  3. Enabling Prebuilt Security Rules: How to leverage prebuilt rules to detect high-confidence policy violations and other security threats.
  4. Exploring High-Confidence Misconduct Blocks Detection: An in-depth look at a specific prebuilt rule designed to detect high-confidence misconduct blocks within Amazon Bedrock logs.
  5. Demonstrating an Exploit Case Scenario for Amazon Bedrock: Using a sample Python script to simulate interactions with an Amazon Bedrock model for testing exploit scenarios that could trigger Elastic prebuilt detection rules.

Prerequisites for Elastic Amazon Bedrock Integration

Elastic Integration for Amazon Bedrock

The Amazon Bedrock integration collects Amazon Bedrock model invocation logs and runtime metrics with Elastic Agent. For a deeper dive into the integration, see our documentation.

The following prerequisites are required for a complete and successful configuration of the Amazon Bedrock integration with Elastic:

  • AWS Account Setup
  • Elastic Cloud Requirements
  • Terraform (Optional)

AWS Account Setup

  • Active AWS Account: Ensure you have an active AWS account with the appropriate permissions to deploy and manage resources on Amazon Bedrock.
  • Amazon Bedrock Setup: Confirm that Amazon Bedrock is correctly configured and operational within your AWS environment. This includes setting up AI models, datasets, and other resources necessary for your applications. Refer to Getting started with Amazon Bedrock for additional information on the setup.
  • IAM Roles and Permissions: Create or configure Identity and Access Management (IAM) roles with the necessary permissions to allow Elastic to access Amazon Bedrock resources. These roles should have sufficient privileges to read logs, metrics, and traces from AWS services. Additional details of the requirements can be found in our AWS documentation.

Elastic Cloud Requirements

  • Version: 0.7.0 (Beta)
  • Compatible Kibana version(s): 8.13.0 or higher for integration version 0.2.0 and above; minimum Kibana version 8.12.0
  • Supported Serverless project types: Security, Observability
  • Subscription level: Basic
  • Level of support: Elastic

Note: Since the integration is in the beta release stage, enable Display beta integrations in the Browse integrations section of the Management pane in your Elastic Stack.

Terraform

Terraform is an open source infrastructure-as-code (IaC) tool created by HashiCorp that allows you to define, provision, and manage cloud and on-premises infrastructure in a consistent and repeatable way.

This step is optional but useful, as the next sections of this article use Terraform to set up the required AWS infrastructure. A deep dive into installation and documentation can be found here.

Setting Up Amazon Bedrock Integration

In this section of the article, we will walk through the steps to set up Amazon Bedrock integration with Elastic in two parts:

  1. Setting Up AWS Infrastructure with Terraform: We'll use Terraform to create an S3 bucket, an EC2 instance with the necessary IAM roles and policies to access the S3 bucket, and security groups that allow SSH access. This setup is ideal for scenarios where you need an EC2 instance to interact with S3, such as for data processing or storage.
  2. Elastic Agent and Integration Setup: We'll install Elastic Agent on the AWS EC2 instance and configure the Amazon Bedrock integration.

Setting Up AWS Infrastructure with Terraform

The high-level configuration process will involve the following steps:

  1. Configuring providers.tf
  2. Configuring variables.tf
  3. Configuring outputs.tf
  4. Configuring main.tf

The providers.tf file typically contains the configuration for any Terraform providers you are using in your project. In our example, it includes the configuration for the AWS provider. Here is the sample content of our providers.tf file. The profile mentioned in the providers.tf should be configured in the user’s space of the AWS credentials file (~/.aws/credentials). Refer to Configuration and credential file settings - AWS Command Line Interface, which is also highlighted in the credential section of Elastic’s AWS documentation.
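As an illustration, a minimal providers.tf along these lines would satisfy the configuration described above; the provider version constraint and profile name are assumptions for this sketch, not the exact values used in the article:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # assumed constraint; any recent AWS provider release should work
    }
  }
}

provider "aws" {
  region  = var.aws_region
  profile = "elastic-bedrock-demo" # assumed profile name; must exist in ~/.aws/credentials
}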

The variables.tf file contains the variable definitions used throughout your Terraform configuration. For our scenario, it includes the definition for the aws_region and resource_labels. Here is the sample content of our variables.tf file.
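A sketch of the two variables mentioned above might look like the following; the defaults are illustrative assumptions:

variable "aws_region" {
  description = "AWS region where the resources are provisioned"
  type        = string
  default     = "us-east-1" # assumed default region
}

variable "resource_labels" {
  description = "Labels applied to the provisioned resources"
  type        = map(string)
  default = {
    project = "bedrock-elastic-integration" # assumed labels for illustration
  }
}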

The outputs.tf file typically contains the output definitions for your Terraform configuration. These outputs can be used to display useful information after your infrastructure is provisioned. Here is the sample content of our outputs.tf file.
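For illustration, outputs along these lines would surface the identifiers used later in this article (the bucket ARN, SQS queue URL, and guardrail ID/version); the resource names referenced here are assumptions that must match your main.tf:

output "s3_bucket_arn" {
  description = "ARN of the S3 bucket receiving Bedrock invocation logs"
  value       = aws_s3_bucket.bedrock_logs.arn
}

output "sqs_queue_url" {
  description = "URL of the SQS queue notified of new log objects"
  value       = aws_sqs_queue.bedrock_logs.url
}

output "guardrail_id" {
  description = "ID of the Amazon Bedrock guardrail"
  value       = aws_bedrock_guardrail.demo.guardrail_id
}

output "guardrail_version" {
  description = "Version of the Amazon Bedrock guardrail"
  value       = aws_bedrock_guardrail.demo.version
}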

The main.tf file contains the core resource definitions: data sources; the S3 bucket and bucket policy; the Amazon Bedrock model invocation log configuration; the SQS queue configuration; the IAM role and policies required by the EC2 instance that installs Elastic Agent and streams logs; and the Amazon Bedrock guardrail configuration. Here is the sample content of our main.tf file.
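The full file is environment-specific, but an abbreviated sketch of the resources named above might look like the following; the resource names, bucket policy, and denied-topic wording are assumptions for illustration:

data "aws_caller_identity" "current" {}

# S3 bucket that receives Bedrock model invocation logs
resource "aws_s3_bucket" "bedrock_logs" {
  bucket = "example-bedrock-logs-${data.aws_caller_identity.current.account_id}"
  tags   = var.resource_labels
}

# Allow the Bedrock logging service to write into the bucket
resource "aws_s3_bucket_policy" "bedrock_logs" {
  bucket = aws_s3_bucket.bedrock_logs.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "bedrock.amazonaws.com" }
      Action    = "s3:PutObject"
      Resource  = "${aws_s3_bucket.bedrock_logs.arn}/*"
    }]
  })
}

# Turn on model invocation logging, delivered to the bucket above
resource "aws_bedrock_model_invocation_logging_configuration" "this" {
  logging_config {
    embedding_data_delivery_enabled = true
    text_data_delivery_enabled      = true
    image_data_delivery_enabled     = true
    s3_config {
      bucket_name = aws_s3_bucket.bedrock_logs.id
    }
  }
}

# SQS queue notified when new log objects land in the bucket
resource "aws_sqs_queue" "bedrock_logs" {
  name = "bedrock-invocation-logs-queue"
}

# Guardrail with a denied topic, used later to trigger detections
resource "aws_bedrock_guardrail" "demo" {
  name                      = "example-guardrail"
  blocked_input_messaging   = "This topic is blocked."
  blocked_outputs_messaging = "This response was blocked."
  topic_policy_config {
    topics_config {
      name       = "DeniedTopic"
      type       = "DENY"
      definition = "A topic this model must refuse to discuss."
      examples   = ["Example question about the denied topic"]
    }
  }
}

# IAM role, instance profile, security group, and EC2 instance for the
# Elastic Agent are omitted here for brevity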

Once main.tf is configured according to the requirements, we can initialize, plan, and apply the Terraform configuration.

terraform init    # initializes the directory and sets up state files in the backend
terraform plan    # creates an execution plan
terraform apply   # applies the configuration (the execution step)

To tear down the infrastructure that Terraform has previously created, use the terraform destroy command.

Once the infrastructure setup is complete, the necessary resource identifiers are provided via outputs.tf. We can conduct a basic verification of the created infrastructure using the following steps:

  1. Verify the S3 bucket created by Terraform: either use the AWS CLI command reference list-buckets — AWS CLI 1.34.10 Command Reference or navigate via the AWS console (a hedged CLI sketch follows this list).
  2. Verify the SQS queue created by Terraform: either use the AWS CLI command reference list-queues — AWS CLI 1.34.10 Command Reference or navigate via the AWS console.
  3. Verify the EC2 instance from the AWS console, connect to it via Connect using EC2 Instance Connect - Amazon Elastic Compute Cloud, and run aws s3 ls example-bucket-name to check that the instance has access to the created S3 bucket.
  4. Verify the Amazon Bedrock guardrail created by Terraform: either use the Amazon Bedrock API ListGuardrails - Amazon Bedrock or navigate via the AWS console.
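Here is a hedged sketch of those CLI checks; the bucket name and region are illustrative placeholders, so substitute the values emitted by outputs.tf:

aws s3api list-buckets --query 'Buckets[].Name'   # confirm the S3 bucket exists
aws sqs list-queues                               # confirm the SQS queue exists
aws bedrock list-guardrails --region us-east-1    # confirm the guardrail exists
aws s3 ls s3://example-bucket-name                # run from the EC2 instance to test bucket access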

Setting Up Elastic Agent and the Integration

To install Elastic Agent on the AWS EC2 instance and configure the Amazon Bedrock integration, create an agent policy using the guided steps in Elastic Agent policies | Fleet and Elastic Agent Guide [8.15]. Then log in to the EC2 instance created in the infrastructure setup steps via Connect using EC2 Instance Connect - Amazon Elastic Compute Cloud, and install Elastic Agent using the guided steps in Install Elastic Agents | Fleet and Elastic Agent Guide [8.15]. During the agent installation, remember to select the agent policy created at the beginning of this setup process, and use the installation method relevant to the instance created. Finally, ensure the agent is properly configured and that data is coming in from the agent.

To configure the Amazon Bedrock integration in the newly created policy, add the Amazon Bedrock integration using the guided steps in Add an Elastic Agent integration to a policy. Enable Display beta integrations to use the Amazon Bedrock integration, as displayed in the image below.

Configure the integration with AWS access keys to access the AWS account where Amazon Bedrock is configured. Use Collect logs from S3 bucket and specify the bucket ARN created in the setup step. Note that you should specify either the S3 bucket or the SQS queue URL during setup, not both. Add this integration to the existing policy where the EC2 instance is configured.

Verify Amazon Bedrock Model Invocation Log Ingestion

Once the Elastic Agent and integration setup is completed, we can conduct a basic verification of the integration to determine if the logs are being ingested as expected by using the following example API call:

aws bedrock-runtime converse \
--model-id "anthropic.claude-3-5-sonnet-20240620-v1:0" \
--messages '[{"role":"user","content":[{"text":"Hello "}]}]' \
--inference-config '{"maxTokens":2000,"stopSequences":[],"temperature":1,"topP":0.999}' \
--additional-model-request-fields '{"top_k":250}' \
--region us-east-1

The example API call assumes a working AWS CLI setup and access to the foundation model (Anthropic Claude Messages API - Amazon Bedrock). If you do not have access to the model, request access from the model access page as suggested in Access Amazon Bedrock foundation models, or change the API call to any existing model you can access.

On successful execution of the above API call, Amazon Bedrock model invocation logs are generated, and in Kibana the logs-aws_bedrock.invocation-default data stream should be populated with those invocation logs. We can use the following simple ES|QL query to return recently ingested events.

FROM logs-aws_bedrock.invocation-* | LIMIT 10

Enable Prebuilt Detection Rules

To enable prebuilt detection rules, first log in to your Elastic instance and, from the left navigation pane, navigate to Security → Rules → Detection rules (SIEM). Filter for “Data Source: Amazon Bedrock” in the tags section.

Enable the available prebuilt rules. Each prebuilt rule's Setup information contains a helper guide for setting up guardrails for Amazon Bedrock, which is accomplished in the Setting Up AWS Infrastructure with Terraform step if the example is followed correctly and the Terraform configuration includes the Amazon Bedrock guardrail. Note that this setup is vital for some of the rules to generate alerts; if it was skipped in the infrastructure setup stage, ensure the guardrail is set up before proceeding.

Exploring High-Confidence Misconduct Blocks Detection

Let’s simulate a real-world scenario in which a user queries a topic denied to the Amazon Bedrock model. Navigate to the Amazon Bedrock section of the AWS console, and use the left navigation pane to reach the Guardrails subsection under Safeguards. Use the sample guardrail created during our setup instructions for this exercise, and use the test option to run a model invocation with the guardrail, querying the configured denied topic.

Repeat the query at least six times, as the prebuilt rule is designed to alert on more than five high-confidence blocks. When the rule's schedule runs, an alert populates for Unusual High Confidence Misconduct Blocks Detected.

Demonstrate an Exploit Case Scenario for Amazon Bedrock

To simulate an Amazon Bedrock security bypass, we need an exploit simulation script that interacts with Amazon Bedrock models. The exploit script example we provide simulates the following attack pattern:

  • Attempts multiple successive requests to use denied model resources within AWS Bedrock
  • Generates multiple successive validation exception errors within Amazon Bedrock
  • Consistently generates high input token counts, submits numerous requests, and receives large responses that mimic patterns of resource exhaustion
  • Combines repeated high-confidence 'BLOCKED' actions coupled with specific violation codes such as 'MISCONDUCT', indicating persistent misuse or attempts to probe the model's ethical boundaries
import boto3
from botocore.exceptions import ClientError


class BedrockModelSimulator:
    def __init__(self, profile_name, region_name):
        # Create a Boto3 session client for Bedrock runtime interaction
        session = boto3.Session(profile_name=profile_name, region_name=region_name)
        self.client = session.client("bedrock-runtime")

    def generate_args_invoke_model(self, model_id, user_message, tokens, use_guardrail=False):
        # Generate model invocation parameters; the guardrail ID and version
        # placeholders come from the Terraform outputs (outputs.tf)
        conversation = [
            {
                "role": "user",
                "content": [{"text": user_message}],
            }
        ]
        inference_config = {"maxTokens": tokens, "temperature": 0.7, "topP": 1}
        additional_model_request_fields = {}

        kwargs = {
            "modelId": model_id,
            "messages": conversation,
            "inferenceConfig": inference_config,
            "additionalModelRequestFields": additional_model_request_fields,
        }
        if use_guardrail:
            kwargs["guardrailConfig"] = {
                "guardrailIdentifier": "<<GUARDRAIL_ID>>",
                "guardrailVersion": "<<GUARDRAIL_VERSION>>",
                "trace": "enabled",
            }
        return kwargs

    def invoke_model(self, invocation_arguments, count=10):
        # Repeat the invocation enough times to cross the rules' thresholds
        for _ in range(count):
            try:
                # Invoke the model with the generated invocation arguments
                self.client.converse(**invocation_arguments)
            except ClientError as e:
                # Access-denied and validation errors are expected here; the
                # resulting invocation logs are what the detection rules use
                print(f"Client error: {e}")


def main():
    profile_name = "<<AWS_PROFILE>>"  # profile from ~/.aws/credentials
    region_name = "us-east-1"
    denied_model_id = "<<DENIED_MODEL_ID>>"  # a model this account has NOT been granted access to
    denied_model_user_message = "Sample message"
    available_model_id = "<<AVAILABLE_MODEL_ID>>"  # a model this account can access
    validation_exception_user_message = "Sample message"
    resource_exploit_user_message = "data " * 10000  # a very large message to mimic resource exhaustion
    denied_topic_user_message = "<<DENIED_TOPIC_QUERY>>"  # a query on the configured denied topic

    simulator = BedrockModelSimulator(profile_name, region_name)

    # 1. Multiple successive requests against a denied model resource
    denied_model_invocation_arguments = simulator.generate_args_invoke_model(denied_model_id, denied_model_user_message, 200)
    simulator.invoke_model(denied_model_invocation_arguments)

    # 2. Multiple successive validation exceptions (token count above the model limit)
    validation_exception_invocation_arguments = simulator.generate_args_invoke_model(available_model_id, validation_exception_user_message, 6000)
    simulator.invoke_model(validation_exception_invocation_arguments)

    # 3. High input token counts and large responses mimicking resource exhaustion
    resource_exhaustion_invocation_arguments = simulator.generate_args_invoke_model(available_model_id, resource_exploit_user_message, 4096)
    simulator.invoke_model(resource_exhaustion_invocation_arguments)

    # 4. Repeated high-confidence guardrail blocks on the configured denied topic
    denied_topic_invocation_arguments = simulator.generate_args_invoke_model(available_model_id, denied_topic_user_message, 4096, use_guardrail=True)
    simulator.invoke_model(denied_topic_invocation_arguments)


if __name__ == "__main__":
    main()

Note: The GUARDRAIL_ID and GUARDRAIL_VERSION values can be found in the Terraform outputs (outputs.tf).

When executed in a controlled environment, the provided script simulates an exploit scenario that would generate detection alerts in Elastic Security. When analyzing these alerts using the Elastic Attack Discovery feature, the script creates attack chains that show the relationships between various alerts, giving analysts a clear understanding of how multiple alerts might be part of a larger attack.

Conclusion

Integrating Elastic with Amazon Bedrock empowers organizations to maintain a secure and compliant cloud environment while maximizing the benefits of AI and machine learning. By leveraging Elastic’s advanced security and observability tools, businesses can proactively detect threats, automate compliance reporting, and gain deeper insights into their cloud operations. Increasingly, enterprises rely on opaque data sources and technologies to reveal the most serious threats; our commitment to transparent security is evident in our open artifacts, integrations, and source code.