Elasticsearch and LangChain: unlocking the potential of large language models (LLMs)

Today we’re diving into LangChain, a cutting-edge library firmly situated in the world of large language models (LLMs), and its synergy with Elastic. With LangChain's focus on making large language models more usable and versatile, and Elastic's scalable and fast search and analytics capabilities, this combination is opening new doors in application development and search applications. In this blog, we'll explore what LangChain is, who it's for, and how Elastic fits into this picture, as well as discuss a recent contribution to LangChain by Elastic.

Understanding LangChain

When it comes to the world of LLMs, LangChain is a name that's increasingly hard to ignore. Its approach to building interactions with LLMs is transforming the tech landscape and opening new avenues for developers all around the globe. But what exactly is LangChain? How does it empower its users? And why should you, as a part of the tech community, be interested in it?

LangChain is a library designed to leverage the power of LLMs and combine them with other sources of computation or knowledge to create powerful applications. It is a toolbox of sorts, providing a standard interface for LLMs and facilitating their integration with other tools. LangChain makes it easier to develop applications that can answer questions over specific documents, power chatbots, and even create decision-making agents. All of this is done by blending LLMs with other computations (for example, the ability to perform complex math) and knowledge bases (for example, real-time inventory data), thus enabling the development of applications that were previously out of reach.
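
To make that concrete, here is a minimal sketch of composing a prompt and an LLM in LangChain. It assumes an OpenAI API key is configured in your environment, and the model name, context, and question are purely illustrative placeholders rather than anything prescribed by LangChain itself.

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# A prompt template and an LLM composed into a simple chain.
# The model name and example inputs are illustrative placeholders.
prompt = ChatPromptTemplate.from_template(
    "Answer the question using the context below.\n"
    "Context: {context}\n"
    "Question: {question}"
)
llm = ChatOpenAI(model="gpt-3.5-turbo")
chain = prompt | llm

# Supply real-time or proprietary context alongside the question.
response = chain.invoke({
    "context": "Item SKU-123 has 42 units in stock.",
    "question": "How many units of SKU-123 are available?",
})
print(response.content)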

But who is it for? LangChain is geared towards developers who are looking to harness the potential of LLMs to build transformative applications. If you're a developer seeking to enhance your application's abilities by using LLMs in conjunction with other sources of computation or knowledge, LangChain is just the tool you need. Anyone interested in the future of technology and the exciting possibilities of LLMs can find value in LangChain. From question-answering applications to chatbots and intelligent agents, the scope of possibilities with LangChain is truly vast.

Elasticsearch and LangChain

One of the exciting aspects of LangChain is its ability to interact seamlessly with powerful tools like Elasticsearch. The Elasticsearch Relevance Engine™ (ESRE™) provides capabilities for creating highly relevant AI search applications. It's capable of storing, searching, and analyzing large volumes of data in near real time, making it an ideal partner for LangChain's capabilities. More specifically, Elastic's ability to handle hybrid scoring with BM25, approximate k-nearest neighbors (kNN), or Elastic’s out-of-the-box Learned Sparse Encoder model adds a layer of flexibility and precision to the applications developed with LangChain.
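
As a rough illustration of how that pairing looks in code, the sketch below uses LangChain's ElasticsearchStore to index a few texts and run a similarity search against them. The Elasticsearch URL, index name, example texts, and embedding model are assumptions made for demonstration; the store also accepts different retrieval strategies, which is where options like BM25, kNN, or ELSER come into play.

from langchain_elasticsearch import ElasticsearchStore
from langchain_openai import OpenAIEmbeddings

# Index a few example texts into Elasticsearch as vectors.
# The URL, index name, and embedding model are placeholders.
vector_store = ElasticsearchStore.from_texts(
    texts=[
        "Our return policy allows refunds within 30 days.",
        "Shipping typically takes 3-5 business days.",
    ],
    embedding=OpenAIEmbeddings(),
    es_url="http://localhost:9200",
    index_name="langchain-demo",
)

# Retrieve the most relevant documents for a query.
results = vector_store.similarity_search("How long does shipping take?", k=1)
for doc in results:
    print(doc.page_content)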

In addition to providing efficient storage and search capabilities, Elastic can also supply LLMs with vital context. Real-time information, proprietary data, and other knowledge sources stored in Elastic can be leveraged by LLMs to generate more accurate and contextually relevant responses. This unique synergy between Elastic and LangChain can make applications more dynamic and capable of handling complex queries and tasks.

Moreover, Elastic can be used to store the chat history between users and LLMs. This feature can be particularly useful in applications like chatbots, where the context of previous interactions can play a crucial role in shaping responses. By storing this history in Elastic, LangChain can retrieve and utilize this information, making conversations more coherent and personalized over time. In essence, the integration of Elastic with LangChain not only enhances the power of LLMs but also paves the way for creating truly responsive and intuitive applications.
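
As a hedged sketch of what that might look like, the snippet below persists a short exchange using LangChain's ElasticsearchChatMessageHistory class. The URL, index name, and session ID are placeholders for your own deployment.

from langchain_elasticsearch import ElasticsearchChatMessageHistory

# Persist a conversation in Elasticsearch, keyed by a session ID.
# The URL, index name, and session ID are placeholders.
history = ElasticsearchChatMessageHistory(
    es_url="http://localhost:9200",
    index="chat-history",
    session_id="session-1234",
)

history.add_user_message("What's the status of my order?")
history.add_ai_message("Your order shipped yesterday and should arrive Friday.")

# Earlier turns can be loaded back in to give the LLM conversational context.
for message in history.messages:
    print(f"{message.type}: {message.content}")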

Because LangChain is designed to be LLM agnostic, LangChain and Elasticsearch work together whether the LLM you are using is hosted by a third party like OpenAI or you are hosting your own open-source model in your own tenant.
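
As a small, illustrative sketch of that flexibility, the same kind of chain can be pointed at a locally served open-source model instead of a hosted API. The Ollama class and the llama2 model name below are assumptions about a local setup, not a requirement of LangChain or Elasticsearch.

from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate

# Swap the hosted LLM for a locally served open-source model.
# The model name assumes an Ollama server running on this machine.
local_llm = Ollama(model="llama2")
prompt = ChatPromptTemplate.from_template("Summarize this in one sentence: {text}")
chain = prompt | local_llm

print(chain.invoke({"text": "Elasticsearch stores and searches data at scale."}))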

Elastic’s first PR

Elastic recently made its first contribution to LangChain, opening a new realm of possibilities for generating embeddings using Elasticsearch models. The Pull Request (PR) added a new module, elasticsearch_embeddings.py. The ElasticsearchEmbeddings class introduced in this PR enables users to generate embeddings for documents and query texts using a model deployed in an Elasticsearch cluster. This simple solution offers a clean and intuitive interface to interact with Elasticsearch’s machine learning (ML) client and simplifies the process of generating embeddings.

The benefits of this addition to the LangChain library are manifold. It allows developers to easily integrate Elasticsearch-generated embeddings, simplifies the process of generating these embeddings, and supports custom input text field names. This means the implementation is compatible with supported models deployed in Elasticsearch that generate embeddings as output.

Below is a sample of how you might use this new feature:

from langchain_elasticsearch import ElasticsearchEmbeddings

# ID of an embedding model already deployed in your Elasticsearch cluster
model_id = 'your_model_id'

# Instantiate ElasticsearchEmbeddings using credentials
embeddings = ElasticsearchEmbeddings.from_credentials(
    model_id,
    es_cloud_id='your_cloud_id',
    es_user='your_user',
    es_password='your_password'
)

# Create embeddings for multiple documents
documents = [
    'This is an example document.',
    'Another example document to generate embeddings for.'
]
document_embeddings = embeddings.embed_documents(documents)

# Print document embeddings
for i, embedding in enumerate(document_embeddings):
    print(f"Embedding for document {i+1}: {embedding}")

# Create an embedding for a single query
query = 'This is a single query.'
query_embedding = embeddings.embed_query(query)

# Print query embedding
print(f"Embedding for query: {query_embedding}")

This code first initializes the ElasticsearchEmbeddings class from Elasticsearch credentials and the model_id of a model deployed in the cluster. Then, it uses the embed_documents() method to generate embeddings for a list of documents. Lastly, the embed_query() method is used to generate an embedding for a single query text. This new functionality opens up new possibilities for developers working with LangChain and Elastic, making it easier than ever to create sophisticated, context-aware applications.
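
Because ElasticsearchEmbeddings implements LangChain's standard embeddings interface, the object created above can also be plugged into other LangChain components. The sketch below, with placeholder connection details and an illustrative index name, hands it to an ElasticsearchStore so that indexed documents and queries are embedded by the model deployed in your cluster.

from langchain_elasticsearch import ElasticsearchStore

# Reuse the ElasticsearchEmbeddings instance from the sample above as the
# embedding function for a LangChain vector store. The connection details
# and index name are placeholders.
vector_store = ElasticsearchStore(
    embedding=embeddings,
    es_cloud_id='your_cloud_id',
    es_user='your_user',
    es_password='your_password',
    index_name='embeddings-demo',
)

vector_store.add_texts(documents)
results = vector_store.similarity_search('example document', k=1)
print(results[0].page_content)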

The intersection of LangChain and Elastic showcases the power of collaboration in technology. With the recent contribution from Elastic, LangChain has extended its capabilities, offering developers an easier and more efficient way to generate embeddings using Elasticsearch models. This is just one of the many ways in which LangChain and Elastic are pushing the boundaries of what's possible with large language models. It’s just the beginning of what we expect to be a long, fruitful collaboration. Stay tuned for more exciting developments in this space!

Learn about privacy-first AI search using LangChain and Elasticsearch.

In this blog post, we may have used third party generative AI tools, which are owned and operated by their respective owners. Elastic does not have any control over the third party tools and we have no responsibility or liability for their content, operation or use, nor for any loss or damage that may arise from your use of such tools. Please exercise caution when using AI tools with personal, sensitive or confidential information. Any data you submit may be used for AI training or other purposes. There is no guarantee that information you provide will be kept secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use.

Elastic, Elasticsearch and associated marks are trademarks, logos or registered trademarks of Elasticsearch N.V. in the United States and other countries. All other company and product names are trademarks, logos or registered trademarks of their respective owners.

Ready to try this out on your own? Start a free trial.
Elasticsearch has integrations for tools from LangChain, Cohere and more. Join our advanced semantic search webinar to build your next GenAI app!