Teradata has announced a significant expansion of its platform with the introduction of an Enterprise Vector Store, an in-database offering designed to enable and augment use cases that depend on retrieval-augmented generation (RAG), such as agentic AI. The move marks a notable step in the company's efforts to support enterprises adopting generative AI.
The Enterprise Vector Store is designed to provide vector data management capabilities spanning embedding generation, indexing, metadata management, and intelligent search. It also supports frameworks such as LangChain, and planned temporal vector embedding capabilities are intended to aid explainability by tracking how data changes over time, thereby improving accuracy and decision-making.
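To illustrate the kind of workflow such a vector store underpins, the sketch below shows a generic retrieval step using LangChain. It is not Teradata-specific code: the in-memory store and OpenAI embedding model are stand-ins for the Enterprise Vector Store's own embedding, indexing, and search functions, and the document contents are invented for the example.

```python
# Minimal RAG retrieval sketch using LangChain's generic interfaces.
# The in-memory store and OpenAI embeddings are illustrative stand-ins,
# not the Teradata Enterprise Vector Store API.
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings  # requires OPENAI_API_KEY

# 1. Embed and index documents (hypothetical enterprise content).
docs = [
    Document(page_content="Q3 revenue grew 12% year over year.",
             metadata={"source": "quarterly_report.pdf"}),
    Document(page_content="The warranty covers parts for 24 months.",
             metadata={"source": "warranty_policy.docx"}),
]
store = InMemoryVectorStore(embedding=OpenAIEmbeddings())
store.add_documents(docs)

# 2. Intelligent search: retrieve the chunks most similar to a question,
#    then hand them to an LLM as grounding context (the "RAG" step).
question = "How long is the warranty period?"
for doc in store.similarity_search(question, k=1):
    print(doc.metadata["source"], "->", doc.page_content)
```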
Industry analysts see Teradata's move as part of a broader trend, with nearly all data platform providers adding support for storing and processing vectors. The trend is driven by enterprises' growing need to support generative AI applications, which require the ability to store and process high-dimensional data in the form of vectors. Rivals including Snowflake, Databricks, and Google (with BigQuery) have added vector storage capabilities, and Oracle has unveiled HeatWave GenAI with in-database LLMs and a vector store.
Teradata has also partnered with Nvidia to accelerate RAG use cases via NIM, a set of containerized inference microservices designed to speed the deployment and improve the runtime performance of applications, agents, and assistants built on generative AI models. The partnership will let developers fine-tune NeMo Retriever microservices, part of the NIM family, in combination with community or custom models to build scalable document ingestion and RAG applications that connect to proprietary data wherever it resides.
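For context on what deployment looks like, NIM microservices expose an OpenAI-compatible HTTP API once a container is running, so applications can call a locally hosted model much as they would a cloud endpoint. The sketch below assumes a NIM container already serving at localhost:8000 and uses an illustrative model name; both details are assumptions, not taken from the announcement.

```python
# Querying a deployed NIM microservice through its OpenAI-compatible API.
# Assumes a NIM container is already serving at localhost:8000; the model
# name below is an illustrative example, not from the Teradata announcement.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local NIM endpoint (assumed)
    api_key="not-used",                   # local deployments typically ignore the key
)

# In a RAG application, passages retrieved from the vector store would be
# injected into the prompt as grounding context before this call.
response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # hypothetical example model
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": "Context: ...retrieved passages...\n\nQuestion: ..."},
    ],
)
print(response.choices[0].message.content)
```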
According to analysts, NIM is proving attractive to data management software providers because it shortens the path to high-performance operationalization of AI models in a containerized architecture that can be deployed on-premises or in the cloud. That matters for enterprises pursuing generative AI, which requires processing and storing large volumes of vectors.
The Enterprise Vector Store is currently in private preview, with general availability expected in July. Teradata has not confirmed which of its platforms will carry it, but several analysts expect it to be integrated into the VantageCloud and IntelliFlex offerings by the second half of this year. If the vector store reaches IntelliFlex, Teradata would become the first, and so far only, data platform provider to offer a vector store on-premises, a significant advantage for existing customers in hybrid and on-premises environments looking to tap into generative AI capabilities.
Looking ahead, analysts predict that almost all enterprises developing generative AI applications will invest in data platforms offering vector search and retrieval-augmented generation to complement foundation models with proprietary data and content. Teradata's move positions it to capitalize on that demand, and the Nvidia partnership should give its AI capabilities a further boost.
In conclusion, the Enterprise Vector Store and the Nvidia partnership mark a milestone in Teradata's efforts to support enterprise adoption of generative AI. As demand for these capabilities continues to grow, the company can offer its customers the tools they need to succeed in an increasingly AI-driven world.