Rockset Says It’s Ready for Real-Time AI

Shrinking decision windows and faster data generation set the table for the rise of real-time analytics as a product category. And now with large language models and vector databases paving the path toward enterprise AI, we’ve suddenly entered the era of real-time AI systems, according to Rockset CEO and Co-founder Venkat Venkataramani.

Rockset’s claim to fame up to this point has been developing a relational database that enables users to run SQL queries continuously on large amounts of fresh incoming data. This has been a Holy Grail of sorts in advanced analytics, and something that many big data developers, from traditional data warehouse vendors to real-time stream processors, have struggled to do, for one reason or another.

Rockset addresses the real-time analytics need with a slew of capabilities built on the open-source RocksDB key-value store, which Rockset CTO and Co-founder Dhruba Borthakur helped create at Facebook. These include Rockset’s powerful converged indexing, as well as its schemaless data ingestion, time-series optimization, query planning, and cloud-based architecture.

The goal up to this point has been to give real-time applications access to the freshest, most up-to-date data arriving over a Kafka pipe. As with other database companies chasing the real-time analytics dream (Imply, ClickHouse, and StarTree), there’s no single brilliant feature that suddenly enables you to run tens of thousands of SQL queries per second on massive amounts of incoming data. Instead, it’s a capability enabled through tireless engineering.

But the goalposts moved in April when Rockset rolled out its initial support for vector search functionality in the database. The new capability allows Rockset not only to store and index vector embeddings in its database, but to combine those vector embeddings with metadata filtering, keyword searches, and vector similarity scores.
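
To make that combination concrete, here is a rough Python sketch of a hybrid query: filter on ordinary metadata columns first, then rank the survivors by vector similarity. The catalog, field names, and similarity function here are illustrative assumptions, not Rockset’s actual SQL or APIs.

import numpy as np

# Toy catalog: each product carries structured metadata plus an embedding vector.
# In a database with vector support, the embedding is simply another column.
products = [
    {"id": 1, "category": "outdoor", "price": 40.0, "embedding": np.array([0.1, 0.9, 0.2])},
    {"id": 2, "category": "outdoor", "price": 25.0, "embedding": np.array([0.8, 0.1, 0.3])},
    {"id": 3, "category": "kitchen", "price": 15.0, "embedding": np.array([0.2, 0.8, 0.1])},
]

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_search(query_vec, category, max_price, top_k=2):
    # Step 1: metadata filtering on ordinary columns.
    candidates = [p for p in products if p["category"] == category and p["price"] <= max_price]
    # Step 2: rank what survives by similarity to the query embedding.
    candidates.sort(key=lambda p: cosine_sim(query_vec, p["embedding"]), reverse=True)
    return candidates[:top_k]

print(hybrid_search(np.array([0.1, 0.9, 0.1]), category="outdoor", max_price=50.0))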

These new vector-related features will unlock real-time AI use cases for customers, with a particular focus on product recommendations, personalization, and fraud detection, Venkataramani says.

“The old word for this is predictive analytics. I want to predict what is about to happen,” he says. “Nobody says those words anymore. It’s all real-time AI. But essentially the corpus of use cases is very similar to what people would have done.”

Since ChatGPT emerged late last year, companies have started rethinking how and where they can apply AI. New technologies and techniques based on neural networks and vector embeddings are upending machine learning approaches that were considered cutting-edge just five years ago, Venkataramani says.

For example, take product recommendation, a time-tested application for data scientists. The traditional approach is a painstaking process of identifying the most predictive features and attributes, building a pipeline to automatically extract them, and then carefully constructing a machine learning model to infer consumer preferences at runtime. With the advent of LLMs, companies can now basically throw all of that data into a text document and let the neural nets sort it out, Venkataramani says.

“Previously, the machine learning models will try to extract attributes about your product, color of the product, manufacturer, what category it is in, etc.,” he says. “But now, you can just give these AI models and these neural nets just a BLOB of text. You could just give a catalog of images for every product, and you don’t need to tell it ‘Go and tag these images saying this is blue in color, this falls in this category.’

“Now you can feed all the products that the user is looking at, and an AI model can understand the likings and the disliking of the user without having to codify it in terms of particular attributes and particular rules,” Venkataramani continues. “So you can feed and build a vector for the user, and that vector represents all the potential products that they have a higher chance of liking or buying.”
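
A minimal Python sketch of that idea, using a hypothetical catalog of precomputed product embeddings: average the embeddings of the products a user has viewed into a single user vector, then rank the rest of the catalog by similarity to it. Real systems would get the embeddings from an AI model and run this over far larger catalogs.

import numpy as np

# Hypothetical precomputed product embeddings (product id -> vector).
catalog = {
    "tent":         np.array([0.9, 0.1, 0.0]),
    "sleeping_bag": np.array([0.8, 0.2, 0.1]),
    "blender":      np.array([0.1, 0.1, 0.9]),
}

def user_vector(viewed_ids):
    # Represent the user as the mean of the embeddings of products they looked at.
    return np.mean([catalog[i] for i in viewed_ids], axis=0)

def recommend(viewed_ids, top_k=2):
    u = user_vector(viewed_ids)
    # Score every product the user has not yet seen by cosine similarity to the user vector.
    scores = {pid: float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
              for pid, v in catalog.items() if pid not in viewed_ids}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(recommend(["tent"]))  # -> ['sleeping_bag', 'blender']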

This is dramatically lowering the bar for using AI in production and enabling companies to do much more with it, says Venkataramani, a 2022 Datanami Person to Watch. It could theoretically enable a company to perform predictive analytics on 100,000 items in its catalog, instead of limiting it to the top 1,000 items, he says.

“With AI, it’s almost like some bot is observing all the behavior of the user, and have understood every product at a much deeper level and then building the recommendation in real time when the user is there on the website, not an hour later, not a day later or a week later,” he says. “The level to which you can personalize has gone through the roof because you can now automate all of this.”

Rockset doesn’t create vector embeddings, which are condensed representations of large amounts of unstructured text or image data. But it does allow users to treat vector embeddings as basically another data type in the database, and to perform actions upon them, such as similarity search.

“What models you use to take unstructured data and turn that into a vector, we don’t care,” Venkataramani says. “Think of it as another data type, another column in your table. You need to now do similarity searches on them. You need to say, given a vector, find me all the other vectors that are closer to this thing that I’m searching for.”

For example, say a customer wanted to identify all images that resemble a daisy in the incoming stream of data (replace “daisy” with “gun” or “knife” if your use case is public safety instead of garden tours).

“The vector that I’m looking for is a daisy, but here are all the other images represented as vector,” Venkataramani explains. “Now you need an index on that. If you do a brute force search on the whole thing, it’ll take 10 days for this question to be answered. I want this to be done in 100 milliseconds. How do you do it? This is where indexing is the name of the game.”

Running nearest-neighbor algorithms, such as k-nearest neighbor (KNN) or approximate nearest neighbor (ANN) search, against the index of vector embeddings dramatically speeds up the identification of daisies and daisy-adjacent images in the incoming data.
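
Conceptually, the exact (brute-force) version of that search looks like the short Python sketch below. It is illustrative only: the full scan it performs over every vector is precisely the cost an approximate nearest neighbor index is built to avoid, which is how a query over a large collection can come back in milliseconds rather than hours or days.

import numpy as np

rng = np.random.default_rng(0)
image_vectors = rng.normal(size=(100_000, 128))   # stand-in embeddings for incoming images
query = rng.normal(size=128)                      # embedding of the reference "daisy" image

def knn_brute_force(query, vectors, k=5):
    # Exact k-nearest-neighbor by cosine similarity: score every vector, keep the top k.
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = v @ q
    top = np.argpartition(-sims, k)[:k]           # unordered top-k candidates
    return top[np.argsort(-sims[top])]            # indices of the k most similar vectors

print(knn_brute_force(query, image_vectors))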

“No one is exactly looking for this vector in the database. They’re looking for all the ones that are closer, or the closest, and that’s where the indexes are a lot more mathematically complex than building indexes on numbers or strings or dates or time,” Venkataramani says. “That’s why vector search is a very different capability and that’s what we’ve added.”

Related Items:

Vector Databases Emerge to Fill Critical Role in AI

Home Depot Finds DIY Success with Vector Search

Did Rockset Just Solve Real-Time Analytics?
