Relevance workbench
In this workbench, you can compare results from our Elastic Learned Sparse Encoder model (with or without Reciprocal Rank Fusion, RRF) against traditional lexical search using BM25.
Start comparing different hybrid search techniques using TMDB's movies dataset as sample data. Or fork the code and ingest your own data to try it out!
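To make the comparison concrete, here is a minimal sketch of the two query styles the workbench runs side by side, written with the Elasticsearch Python client. The index name (movies), the text field (overview), the ELSER tokens field (ml.tokens), and the model ID (.elser_model_2) are assumptions for illustration; adjust them to match your own ingest setup.

```python
# A minimal sketch of the two query styles compared in this workbench.
# Index name, field names, and model_id are assumptions; adjust to your setup.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

query_text = "Superhero animated movies"

# Traditional lexical search: BM25 scoring over the movie overview text.
bm25_hits = es.search(
    index="movies",  # hypothetical index name
    query={"match": {"overview": query_text}},
    size=10,
)

# Semantic search: ELSER text expansion over precomputed sparse vectors.
elser_hits = es.search(
    index="movies",
    query={
        "text_expansion": {
            "ml.tokens": {  # hypothetical field holding the ELSER tokens
                "model_id": ".elser_model_2",  # assumed deployed ELSER model
                "model_text": query_text,
            }
        }
    },
    size=10,
)

for label, resp in (("BM25", bm25_hits), ("ELSER", elser_hits)):
    print(label, [hit["_source"].get("title") for hit in resp["hits"]["hits"]])
```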
Try these queries to get started:
- "The matrix"
- "Movies in Space"
- "Superhero animated movies"
Notice how some queries work well for both search techniques. For example, "The Matrix" performs well with both models. For queries like "Superhero animated movies", however, the Elastic Learned Sparse Encoder model outperforms BM25, because it matches on the meaning of the query rather than on exact keywords.
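When the hybrid option is enabled, the workbench blends the two result lists with Reciprocal Rank Fusion. The sketch below shows the core idea of RRF in plain Python: each document receives 1/(k + rank) from every ranker that returned it, and the fused list is sorted by the summed score. The document IDs are made up, and k = 60 is simply a commonly used default, not a value taken from this demo.

```python
# A minimal sketch of Reciprocal Rank Fusion (RRF), used to blend the BM25
# and ELSER result lists. Document IDs below are illustrative only.
from collections import defaultdict

def rrf(rankings, k=60):
    """Fuse ranked lists of document IDs; a higher fused score ranks higher."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

bm25_ranking = ["tt0111161", "tt0468569", "tt4154796"]   # hypothetical IDs
elser_ranking = ["tt4154796", "tt0111161", "tt2250912"]

print(rrf([bm25_ranking, elser_ranking]))
```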
Explore similar demos
Build a RAG app
Search AI 101: Lesson 4 of 4 - Create a retrieval augmented generation (RAG) application using Elastic’s generative AI capabilities. Learn how to set up Elastic to power a RAG system, ending with a fully functioning RAG chatbot application.
Vector search
Search AI 101: Lesson 3 of 4 - This hands-on lesson will guide you through integrating your custom model to create vector embeddings, configuring Elasticsearch for vector search, and running similarity queries to retrieve contextually relevant results.
Lexical search
Search AI 101: Lesson 1 of 4 - Learn the basics of building a keyword search application with Elasticsearch. This hands-on lesson will guide you through data indexing, setting up simple search queries, and configuring basic search functionalities. Perfect for those starting their journey with search technology.