We Don’t Use No Vector Databases

April 12, 2024

by Vikram Srinivasan, Varun Tulsian

Introduction

In the rapidly evolving world of artificial intelligence and natural language processing, retrieval augmented generation (RAG) has emerged as a powerful technique for building informative and engaging conversational AI systems. While vector databases have gained popularity as the go-to solution for building retrieval systems, Needl.ai has taken a different approach, focusing on traditional information retrieval techniques enhanced with a semantic layer. In this blog post, we'll explore Needl.ai's approach to building retrieval solutions at scale, discussing the design principles behind it and how it achieves a cost-efficient, petabyte-scale solution without relying on vector databases.

The Allure of Vector Databases

Vector databases have become the new shiny toy in the world of RAG, with many companies rushing to build their solutions on top of these databases. The idea behind vector databases is to store "vector representations" or embeddings of entire documents or document chunks, capturing semantic information about the indexed content. While it's relatively easy to build impressive proof-of-concept RAG systems using vector databases, Needl.ai recognized that this approach might not be the best fit for real-world engineering constraints.

Needl.ai's Design Constraints

When Needl.ai set out to build their retrieval system, they had five key design constraints in mind, each discussed below: scalability to petabytes of data, cost efficiency, explainability, maintainability, and data confidentiality.

Challenges with Vector Databases at Scale

Needl.ai realized that a RAG system built over vector databases would not satisfy their design constraints. The most popular technique used in vector databases, hierarchical navigable small world (HNSW) graphs, can be difficult to scale when dealing with billions of vectors. Sharding the datastore and clustering nearest vectors becomes increasingly complex in high-dimensional vector spaces, making it hard to retrieve the most relevant results at scale.
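One way to see the sharding difficulty concretely: because nearby vectors in a high-dimensional space give you no natural shard key, every query against a sharded vector index must fan out to all shards and merge the results. The toy sketch below (purely illustrative, not Needl.ai's code) shows a brute-force version of that fan-out/merge pattern; skipping any shard risks missing a global nearest neighbour.

```python
import math
import random

random.seed(0)
DIM, N_SHARDS, PER_SHARD, K = 16, 4, 200, 5

def rand_vec():
    return [random.gauss(0, 1) for _ in range(DIM)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# A sharded corpus of random vectors and a random query.
shards = [[rand_vec() for _ in range(PER_SHARD)] for _ in range(N_SHARDS)]
query = rand_vec()

def top_k(vectors, q, k):
    # Exact top-k by cosine similarity within one collection.
    scored = sorted(((cosine(v, q), i) for i, v in enumerate(vectors)),
                    reverse=True)
    return scored[:k]

# Fan out: every shard must be searched, because the global nearest
# neighbours may live in any shard. Then merge the per-shard top-k lists.
candidates = [(score, shard_id * PER_SHARD + i)
              for shard_id, vecs in enumerate(shards)
              for score, i in top_k(vecs, query, K)]
merged = sorted(candidates, reverse=True)[:K]

# Sanity check: a global brute-force search over the concatenated
# corpus must agree with the fan-out/merge result.
flat = [v for vecs in shards for v in vecs]
global_top = top_k(flat, query, K)
```

Real ANN indexes like HNSW avoid the brute-force scan within a shard, but the fan-out-to-every-shard-and-merge step remains, which is part of why distributing them over billions of vectors is hard.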

Cost Considerations

Embedding billions of vectors can be prohibitively expensive, requiring the use of GPUs at scale. While some may argue that this is a one-time cost, Needl.ai recognized that their solution would require constant embedding of streaming data and potential re-indexing using different embedding techniques. This ongoing cost and the time required for such operations could be significant, especially for a startup with limited resources.
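A rough back-of-envelope sketch makes the ongoing-cost concern tangible. Every number below is an assumption for illustration only (none come from the post): corpus size, tokens per chunk, per-GPU embedding throughput, and GPU price all vary widely in practice, and the point is only that the cost recurs on every re-index.

```python
# Back-of-envelope: cost of one full embedding pass over a large corpus.
# All figures are illustrative assumptions, not measurements.
num_chunks = 1_000_000_000          # one billion chunks to embed (assumed)
tokens_per_chunk = 500              # assumed average chunk length
throughput_tokens_per_sec = 50_000  # assumed per-GPU embedding throughput
gpu_cost_per_hour = 2.0             # assumed cloud GPU price, USD

total_tokens = num_chunks * tokens_per_chunk
gpu_hours = total_tokens / throughput_tokens_per_sec / 3600
cost = gpu_hours * gpu_cost_per_hour
print(f"{gpu_hours:,.0f} GPU-hours, ~${cost:,.0f} per full (re-)embedding pass")
```

Under these assumptions a single pass already costs thousands of GPU-hours, and switching embedding models means paying it again, on top of continuously embedding streaming data.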

Explainability and Bizarre Results

Although embedding-based retrieval techniques can produce impressive results, they can also generate bizarre and hard-to-explain outputs. Needl.ai sought a solution that would provide more control over the underlying algorithms and facilitate the generation of explainable results.

Maintainability

Maintainability poses a formidable challenge when managing a vector database at billion-vector scale. For Needl.ai, ensuring the system remains agile and efficient over time is paramount. Indexing, updating, and deleting vectors in a billion-scale vector database is extremely challenging, and vector databases have yet to mature to the point where they can perform these operations efficiently at such scale.

Data Stewardship and Confidentiality

Needl.ai's promise is to never use customers' business data and conversations for training. Vector databases, however, depend on building highly meaningful vector representations from each chunk of data, which would require a very sophisticated embedding model with a nuanced understanding of every subject Needl.ai's customers work on. It is nearly impossible to build such a model without training on that data, whereas using an inferior general-purpose model would result in sub-par retrieval performance.

Needl.ai's Innovative Approach

Instead of relying on vector databases, Needl.ai decided to use traditional information retrieval techniques, enhanced with a semantic layer built using cutting-edge AI techniques. Their solution uses a keyword-based index, with a semantic understanding layer for document understanding, query understanding, and query expansion, plus a reranking layer. This approach offers advantages across all five constraints: scalability, cost, explainability, maintainability, and data confidentiality.
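The shape of that pipeline can be sketched in a few lines. The toy example below is not Needl.ai's implementation: the hand-written synonym map stands in for the semantic query-expansion layer, term-match counting stands in for a real keyword scorer like BM25, and the final sort stands in for a learned reranker. It only shows how the pieces compose.

```python
from collections import defaultdict

# A tiny document collection and a keyword inverted index over it.
docs = {
    1: "quarterly revenue grew on strong cloud sales",
    2: "the earnings report shows revenue growth",
    3: "new office opened in berlin",
}

index = defaultdict(set)  # term -> set of doc ids containing it
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

# Hypothetical expansion table standing in for the semantic layer.
SYNONYMS = {"earnings": ["revenue"], "growth": ["grew"]}

def expand(query_terms):
    # Query understanding / expansion: add semantically related terms.
    expanded = list(query_terms)
    for t in query_terms:
        expanded += SYNONYMS.get(t, [])
    return expanded

def retrieve(query):
    terms = expand(query.lower().split())
    # Keyword retrieval: score each doc by how many query terms it matches.
    scores = defaultdict(int)
    for t in terms:
        for doc_id in index.get(t, ()):
            scores[doc_id] += 1
    # Reranking: here just match count (ties by doc id); a real system
    # would apply a semantic reranker to this candidate list.
    return sorted(scores, key=lambda d: (-scores[d], d))

print(retrieve("earnings growth"))
```

Without the expansion step, "earnings growth" would match only document 2; with it, document 1 is also recalled, which is the role the semantic layer plays on top of the plain keyword index. Each stage here is an ordinary, inspectable index or scoring step, which is where the explainability and maintainability advantages come from.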

Conclusion

Needl.ai's innovative approach to building retrieval solutions demonstrates that vector databases are not the only way to achieve impressive results. By combining traditional information retrieval techniques with a semantic layer powered by advanced AI, Needl.ai has created a cost-effective, scalable, and explainable solution that meets the demands of real-world engineering constraints. As the field of conversational AI continues to evolve, Needl.ai's approach serves as a compelling alternative to the vector database hype, offering a practical and efficient path forward for businesses and developers alike.
