Twelve Labs Raises $30M to Unlock Video Analysis Potential with AI Models

Max Carter

December 12, 2024 · 3 min read

Twelve Labs, a startup pioneering video analysis with AI models, has raised $30 million in new funding to further develop its technology and expand its reach. The round, which brings the company's total raised to $107.1 million, was led by strategic partners including Databricks, Snowflake, SK Telecom, and HubSpot Ventures, along with In-Q-Tel, the nonprofit venture capital firm that invests in support of U.S. intelligence capabilities.

The company's AI models, which can search through videos for specific moments, summarize clips, and answer questions like "When did the person in the red shirt enter the restaurant?", have attracted big-name backers including Nvidia, Samsung, and Intel. According to Jae Lee, co-founder and CEO of Twelve Labs, the company's focus on video analysis sets it apart from general-purpose multimodal models developed by companies like Google and OpenAI.

Twelve Labs' technology has far-reaching implications for various industries, including media, entertainment, and security. The company's models can drive applications such as ad insertion, content moderation, and auto-generating highlight reels from clips. Developers can create apps on top of Twelve Labs models to search across video footage and more.
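To make the developer story concrete, here is a minimal sketch of what a moment-search call against an indexed video library could look like. The endpoint path, headers, and response fields are illustrative assumptions rather than Twelve Labs' documented API; consult the official docs for the real contract.

```python
# Minimal sketch of a natural-language video search call.
# The endpoint path, parameters, and response shape are illustrative assumptions,
# not the documented Twelve Labs API.
import os
import requests

API_KEY = os.environ["TWELVE_LABS_API_KEY"]   # assumed environment variable
BASE_URL = "https://api.twelvelabs.io/v1"     # hypothetical base path

def search_video(index_id: str, query: str) -> list[dict]:
    """Return moments in an indexed video library that match a text query."""
    resp = requests.post(
        f"{BASE_URL}/search",
        headers={"x-api-key": API_KEY},
        json={
            "index_id": index_id,
            "query_text": query,
            "search_options": ["visual", "audio"],
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Each hit is assumed to carry a video id, start/end timestamps, and a score.
    return resp.json().get("data", [])

if __name__ == "__main__":
    for hit in search_video("my-index-id", "person in a red shirt entering a restaurant"):
        print(hit["video_id"], hit["start"], hit["end"], hit["score"])
```

A highlight-reel or ad-insertion app would build on the same primitive: query for the relevant moments, then cut or splice around the returned timestamps.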

Bias is a key concern for any AI model, and Twelve Labs' models are no exception. Lee acknowledged that the company has yet to release model-ethics benchmarks and data sets, but said that bias tests are conducted on every model prior to release. The company trains its models on a mix of public-domain and licensed data, and does not use customer data for training.

In addition to video analysis, Twelve Labs is branching out into areas like "any-to-any" search and multimodal embeddings. The company's Marengo model can search across images and audio in addition to video, and accept a reference audio recording, image, or video clip to help guide a search. The Embed API creates multimodal embeddings for videos, text, images, and audio files, useful for applications like anomaly detection.
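As an illustration of how such embeddings lend themselves to anomaly detection, the sketch below flags clips whose vectors sit far from the centroid of "normal" footage. It assumes the clip embeddings have already been retrieved (for example via the Embed API) and substitutes random vectors for demonstration; nothing here is a documented SDK call.

```python
# Sketch: flagging anomalous clips by cosine distance from a "normal" centroid.
# Real embeddings would come from an embedding service such as the Embed API;
# the random vectors below are stand-ins for demonstration only.
import numpy as np

def flag_anomalies(embeddings: np.ndarray, threshold: float = 0.35) -> np.ndarray:
    """Return indices of clips whose cosine distance from the centroid exceeds threshold."""
    # Normalize rows so cosine similarity reduces to a dot product.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    centroid = unit.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    cosine_distance = 1.0 - unit @ centroid
    return np.where(cosine_distance > threshold)[0]

# Example: 50 similar "normal" clips plus one outlier pointing the opposite way.
rng = np.random.default_rng(0)
normal = rng.normal(loc=1.0, scale=0.05, size=(50, 1024))
outlier = rng.normal(loc=-1.0, scale=0.05, size=(1, 1024))
print(flag_anomalies(np.vstack([normal, outlier])))   # expected to flag index 50
```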

Twelve Labs has secured clients in the enterprise, media, and entertainment spaces, with major partners including Databricks and Snowflake. Both companies have integrated Twelve Labs tooling into their offerings, with Databricks developing an integration that lets customers invoke Twelve Labs' embedding service from existing data pipelines, and Snowflake creating connectors to Twelve Labs models in Cortex AI, its fully managed AI service.
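The Databricks integration described above follows a familiar pattern: call an external embedding service from inside an existing Spark pipeline. The sketch below shows one way that pattern could look using a pandas UDF; the endpoint, auth header, payload, and table names are illustrative assumptions, not the shipped connector.

```python
# Sketch of calling an external embedding service from a Spark pipeline.
# The endpoint, auth header, response field, and table names are assumptions
# for illustration, not Databricks' actual Twelve Labs connector.
import os
import requests
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import ArrayType, FloatType

API_KEY = os.environ["TWELVE_LABS_API_KEY"]        # assumed environment variable
EMBED_URL = "https://api.twelvelabs.io/v1/embed"   # hypothetical endpoint

@pandas_udf(ArrayType(FloatType()))
def embed_video_url(urls: pd.Series) -> pd.Series:
    """Fetch a multimodal embedding for each video URL in the column."""
    def call(url: str) -> list:
        resp = requests.post(
            EMBED_URL,
            headers={"x-api-key": API_KEY},
            json={"video_url": url},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["embedding"]             # assumed response field
    return urls.map(call)

spark = SparkSession.builder.getOrCreate()
df = spark.table("media.video_catalog")             # hypothetical source table with a "url" column
(df.withColumn("embedding", embed_video_url("url"))
   .write.mode("overwrite")
   .saveAsTable("media.video_embeddings"))          # hypothetical destination table
```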

The company has also announced the addition of Yoon Kim, former CTO of SK Telecom and a key architect behind Apple's Siri, as president and chief strategy officer. Kim will spearhead Twelve Labs' aggressive expansion plan, driving future growth with key acquisitions, expanding the company's global presence, and aligning teams toward ambitious goals.

Looking ahead, Twelve Labs aims to grow into new and adjacent verticals, such as automotive and security, over the next few years. While Lee wouldn't confirm it outright, the investment from In-Q-Tel suggests that security, and possibly defense, work may be on the horizon. The company says it remains committed to pursuing opportunities where its technology can have a positive, meaningful, and responsible impact in line with its ethical guidelines.
