Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon with a single API, along with a broad set of capabilities you need to build generative AI applications, simplifying development while maintaining privacy and security. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.
In this example, we will demonstrate how to split documents into passages, index these passages into Elasticsearch, and use Amazon Bedrock to answer questions based on the indexed data. This approach enhances the retrieval specificity and ensures comprehensive answers by leveraging relevant passages from the indexed documents.
1. Install packages and import modules
First, we need to install the required packages. Make sure Python 3.8.1 or later is installed.
Then we need to import the modules.
Note: boto3 is part of the AWS SDK for Python and is required to use the Bedrock LLM.
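As a quick sanity check before installing anything, you can verify the Python version from the standard library (the pip command in the comment is illustrative; exact package names may differ by version):

```python
import sys

# Packages used in this example are typically installed with something like:
#   pip install -qU langchain elasticsearch boto3
# (exact package names/versions may differ -- see each project's docs).

# Verify the minimum Python version required by this example.
assert sys.version_info >= (3, 8, 1), "Python 3.8.1+ is required"
print(".".join(str(p) for p in sys.version_info[:3]))
```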
2. Init Amazon Bedrock client
To authorize with AWS, we can either configure credentials in the ~/.aws/config file or pass AWS_ACCESS_KEY, AWS_SECRET_KEY, and AWS_REGION to the boto3 module. We're using the second approach in this example.
3. Connect to Elasticsearch
ℹ️ We're using an Elastic Cloud deployment of Elasticsearch for this notebook. If you don't have an Elastic Cloud deployment, sign up here for a free trial.
Use your Elastic Cloud deployment's Cloud ID and API key to connect to Elasticsearch. We will use ElasticsearchStore to connect to our Elastic Cloud deployment, which makes it easy to create an index and load data into it. In the ElasticsearchStore instance, we will set embedding to BedrockEmbeddings to embed the texts, and set the Elasticsearch index name that will be used in this example.
Getting Your Cloud ID
To find the Cloud ID for your deployment, log in to your Elastic Cloud account and select your deployment. The Cloud ID can be found on the deployment overview page. For detailed instructions, refer to the Elastic Cloud finding your Cloud ID instruction.
Creating an API Key
To create an API key, navigate to the “Management” section of your Elastic Cloud deployment, and select “API keys” under the “Security” tab. Follow the prompts to generate a new API key. For more information, see the Elastic Cloud creating API key instruction.
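The connection wiring can be sketched as below. The placeholder credentials and the index name are assumptions, and depending on your LangChain version the imports may live in langchain_community or langchain_elasticsearch instead:

```python
from langchain.embeddings import BedrockEmbeddings      # or langchain_community.embeddings
from langchain.vectorstores import ElasticsearchStore   # or langchain_elasticsearch

# Placeholder values -- replace with your own deployment's details.
ELASTIC_CLOUD_ID = "<your-cloud-id>"
ELASTIC_API_KEY = "<your-api-key>"

# bedrock_client is the boto3 client created in step 2.
embeddings = BedrockEmbeddings(client=bedrock_client)

vector_store = ElasticsearchStore(
    es_cloud_id=ELASTIC_CLOUD_ID,
    es_api_key=ELASTIC_API_KEY,
    index_name="workplace_index",  # assumption: any index name works here
    embedding=embeddings,
)
```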
4. Download the dataset
Let's download the sample dataset and deserialize the document.
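The dataset is a JSON array of documents; a minimal sketch of the deserialization step, using a small inline stand-in instead of the real download (the field names below mirror Elastic's workplace-documents sample but are an assumption here):

```python
import json

# Inline stand-in for the downloaded file. The real dataset has the same shape:
# a JSON array of objects with fields such as "name" and "content".
raw = """
[
  {"name": "Work From Home Policy", "content": "Employees may work remotely ..."},
  {"name": "Vacation Policy", "content": "Employees accrue vacation days ..."}
]
"""

workplace_docs = json.loads(raw)
print(len(workplace_docs), workplace_docs[0]["name"])
# → 2 Work From Home Policy
```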
5. Split documents into passages
We’ll chunk documents into passages in order to improve the retrieval specificity and to ensure that we can provide multiple passages within the context window of the final question answering prompt.
Here we are using a simple splitter, but LangChain also offers more advanced splitters that reduce the chance of context being lost at chunk boundaries.
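In LangChain this is typically done with CharacterTextSplitter or RecursiveCharacterTextSplitter. As a dependency-free illustration of the idea, here is a naive fixed-size splitter with overlap (the chunk size and overlap values are assumptions, not the notebook's exact settings):

```python
def split_into_passages(text, chunk_size=800, overlap=100):
    """Split text into fixed-size chunks, overlapping consecutive chunks
    so that context at chunk boundaries is not lost."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    passages = []
    start = 0
    while start < len(text):
        passages.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return passages

doc = "word " * 500  # 2500-character stand-in document
chunks = split_into_passages(doc, chunk_size=800, overlap=100)
print(len(chunks), max(len(c) for c in chunks))
# → 4 800
```

Each chunk starts with the last 100 characters of the previous one, which is what preserves cross-boundary context.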
6. Index data into Elasticsearch
Next, we will index the data into Elasticsearch using ElasticsearchStore.from_documents. We will use the Cloud ID, API key, and index name values set in the Connect to Elasticsearch step.
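A sketch of the indexing call, assuming the passages from step 5 and the embeddings and credentials from step 3 (import paths may differ by LangChain version):

```python
from langchain.docstore.document import Document
from langchain.vectorstores import ElasticsearchStore

# chunks: the passages produced in step 5; embeddings: BedrockEmbeddings from step 3.
documents = [Document(page_content=passage) for passage in chunks]

vector_store = ElasticsearchStore.from_documents(
    documents,
    embedding=embeddings,
    es_cloud_id=ELASTIC_CLOUD_ID,
    es_api_key=ELASTIC_API_KEY,
    index_name="workplace_index",  # same assumed index name as in step 3
)
```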
7. Init Amazon Bedrock LLM
Next, we will initialize the Amazon Bedrock LLM. In the Bedrock instance, we will pass bedrock_client and a specific model_id, such as amazon.titan-text-express-v1, ai21.j2-ultra-v1, anthropic.claude-v2, or cohere.command-text-v14. You can see the list of available base models in the Amazon Bedrock User Guide.
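Initialization can be sketched as follows, reusing the bedrock_client from step 2; the model choice below is just one of the IDs listed above, and newer LangChain versions expose this class as BedrockLLM in langchain_aws:

```python
from langchain.llms import Bedrock  # newer versions: from langchain_aws import BedrockLLM

llm = Bedrock(
    client=bedrock_client,                     # boto3 bedrock-runtime client from step 2
    model_id="amazon.titan-text-express-v1",   # any available Bedrock text model ID works
)
```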
8. Asking a question
Now that the passages are stored in Elasticsearch and the LLM is initialized, we can ask a question and get an answer grounded in the relevant passages.
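One way to wire this up is LangChain's RetrievalQA chain, assuming the vector_store from step 6 and the llm from step 7 (the chain retrieves the most relevant passages and feeds them to the model as context):

```python
from langchain.chains import RetrievalQA

# Build a question-answering chain on top of the indexed passages.
qa = RetrievalQA.from_llm(
    llm=llm,                                  # Bedrock LLM from step 7
    retriever=vector_store.as_retriever(),    # Elasticsearch-backed retriever from step 6
)

answer = qa.run("What is our work from home policy?")
print(answer)
```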
Example Output
For the question “What is our work from home policy?”, you might get:
Trying it out
Amazon Bedrock is a powerful tool that can be used in many ways. You can try it out with different base models, different questions, and different datasets to see how it performs. To learn more about Amazon Bedrock, check out the documentation.
You can try to run this example in Google Colab.
Elasticsearch has native integrations with industry-leading Gen AI tools and providers. Check out our webinars on going Beyond RAG Basics, or on building production-ready apps with the Elastic Vector Database.
To build the best search solutions for your use case, start a free cloud trial or try Elastic on your local machine now.