
The new paradigm: Architecting the data stack for AI agents





Source: VentureBeat



The launch of ChatGPT two years ago was nothing less than a watershed moment in AI research. It gave new meaning to consumer-facing AI and spurred enterprises to explore how they could apply GPT or similar models to their respective business use cases. Fast-forward to 2024: there’s a flourishing ecosystem of language models, which both nimble startups and large enterprises are leveraging in conjunction with approaches like retrieval-augmented generation (RAG) for internal copilots and knowledge search systems.

The use cases have multiplied, and so has the investment in enterprise-grade gen AI initiatives. After all, the technology is expected to add $2.6 trillion to $4.4 trillion annually to the global economy. But here’s the thing: what we have seen so far is only the first wave of gen AI.

Over the last few months, multiple startups and large-scale organizations – like Salesforce and SAP – have started moving to the next phase of so-called “agentic systems.” These agents transition enterprise AI from a prompt-based system capable of leveraging internal knowledge (via RAG) to answer business-critical questions into an autonomous, task-oriented entity. They can make decisions based on a given situation or set of instructions, create a step-by-step action plan and then execute that plan within digital environments on the fly, using online tools, APIs and more.
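To make that plan-and-execute loop concrete, here is a minimal Python sketch. Everything in it is a hypothetical stand-in: `call_llm` represents whatever chat-completion API is in use, the JSON action format is invented, and the two tools echo the ticket-booking and data-moving examples from this article.

```python
import json

def call_llm(messages: list[dict]) -> str:
    """Hypothetical stub for any chat-completion API; returns JSON text."""
    raise NotImplementedError

# Toy tools mirroring the article's examples; real agents would wrap real APIs.
TOOLS = {
    "book_ticket": lambda args: f"Booked ticket to {args['destination']}",
    "copy_records": lambda args: f"Copied table {args['table']} to the target database",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        # The model answers with JSON: either a tool call or a final answer.
        step = json.loads(call_llm(messages))
        if step.get("final"):
            return step["answer"]
        observation = TOOLS[step["tool"]](step["arguments"])
        # Feed the tool result back so the next step can react to it.
        messages.append({"role": "tool", "content": observation})
    return "Stopped: step budget exhausted"
```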

The transition to AI agents marks a major shift from the automation we know. It could give enterprises an army of ready-to-deploy virtual coworkers that handle tasks – be it booking a ticket or moving data from one database to another – and save a significant amount of time. Gartner estimates that by 2028, 33% of enterprise software applications will include AI agents, up from less than 1% at present, enabling 15% of day-to-day work decisions to be made autonomously.

But if AI agents are on track to be such a big deal, how does an enterprise bring them into its technology stack without compromising on accuracy? No one wants an AI-driven system that fails to understand the nuances of the business (or specific domain) and ends up executing incorrect actions.

The answer, as Google Cloud’s VP and GM of data analytics Gerrit Kazmaier puts it, lies in a carefully crafted data strategy.

“The data pipeline must evolve from a system for storing and processing data to a ‘system for creating knowledge and understanding’. This requires a shift in focus from simply collecting data to curating, enriching and organizing it in a way that empowers LLMs to function as trusted and insightful business partners,” Kazmaier told VentureBeat.

Building the data pipeline for AI agents

Historically, businesses relied heavily on structured data – organized in the form of tables – for analysis and decision-making. It was the easily accessible 10% of the data they actually had. The remaining 90% was “dark,” stored across silos in varied formats like PDFs and videos. However, when AI sprang into action, this untapped, unstructured data became an instant store of value, allowing organizations to power a variety of use cases, including generative AI applications like chatbots and search systems.

Most organizations today already have at least one data platform (many with vector database capabilities) to collate all structured and unstructured data in one place for powering downstream applications. The rise of LLM-powered AI agents marks the addition of another such application to this ecosystem.
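As a rough illustration of that pattern, the sketch below keeps unstructured snippets in a toy in-memory vector store that downstream applications, agents included, could query. The hash-based `embed` function is a deliberate placeholder for a real embedding model, and the sample snippets are invented; this shows the shape of the idea, not any vendor's platform.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Placeholder embedding: deterministic hash buckets, not a real model."""
    vec = [0.0] * dim
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

store: list[tuple[list[float], str]] = []  # (vector, original snippet)

def ingest(snippet: str) -> None:
    store.append((embed(snippet), snippet))

def search(query: str, k: int = 3) -> list[str]:
    q = embed(query)
    # Rank stored snippets by cosine similarity (vectors are unit-normalized).
    ranked = sorted(store, key=lambda item: -sum(a * b for a, b in zip(q, item[0])))
    return [text for _, text in ranked[:k]]

ingest("Q3 invoice backlog fell 12% after the automation rollout")
ingest("Contract PDF: renewal terms for enterprise accounts")
print(search("invoice backlog"))
```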

So, in essence, a lot remains unchanged. Teams don’t have to set up their data stack from scratch; they can adapt it, with a focus on certain key elements, to make sure the agents they develop understand the nuances of their business and industry, the intricate relationships within their datasets and the specific semantic language of their operations.

According to Kazmaier, the ideal way to make that happen is to recognize that data, AI models and the value they deliver (the agents) are part of the same value chain and need to be built up holistically. That means opting for a unified platform that brings all the data – from text and images to audio and video – into one place, topped with a semantic layer that uses dynamic knowledge graphs to capture evolving relationships. This layer encodes the business metrics and logic an AI agent needs to understand organization- and domain-specific context before taking action.

“A crucial element for building truly intelligent AI agents is a robust semantic layer. It’s like giving these agents a dictionary and a thesaurus, allowing them to understand not just the data itself, but the meaning and relationships behind it…Bringing this semantic layer directly into the data cloud, as we’re doing with LookML and BigQuery, can be a game-changer,” he explained.
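Kazmaier’s dictionary-and-thesaurus analogy can be sketched in a few lines. The snippet below is plain Python rather than LookML, and every table, column and metric name in it is invented for illustration; the point is only that an agent resolves a business phrase to a governed definition instead of guessing at raw columns.

```python
# A toy semantic layer: business terms mapped to governed definitions,
# plus the relationships an agent needs to join tables correctly.
SEMANTIC_LAYER = {
    "metrics": {
        "churn_rate": {
            "sql": "COUNTIF(status = 'cancelled') / COUNT(*)",
            "table": "subscriptions",
            "synonyms": ["attrition", "customer churn"],
        },
    },
    "relationships": [
        ("subscriptions.customer_id", "customers.id"),
    ],
}

def resolve_metric(term: str) -> dict | None:
    """Map a business phrase to its governed definition, synonyms included."""
    term = term.lower()
    for name, spec in SEMANTIC_LAYER["metrics"].items():
        if term == name or term in spec["synonyms"]:
            return {"metric": name, **spec}
    return None

print(resolve_metric("attrition"))  # -> the governed churn_rate definition
```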

While organizations can take manual approaches to generating business semantics and creating this crucial layer of intelligence, Kazmaier notes the process can easily be automated with the help of AI.

“This is where the magic truly happens. By combining these rich semantics with how the enterprise has been using its data and other contextual signals in a dynamic knowledge graph, we can create a continuously adaptive and agile intelligent network. It’s like a living knowledge base that evolves in real-time, powering new AI-driven applications and unlocking unprecedented levels of insight and automation,” he explained.

But training the LLMs that power agents on the semantic layer (contextual learning) is just one piece of the puzzle. The AI agent should also understand how things really work in the digital environment in question, covering aspects that are not always documented or captured in data. This is where building observability and strong reinforcement loops comes in handy, according to Gevorg Karapetyan, the CTO and co-founder of AI agent startup Hercules AI.

Speaking with VentureBeat at WCIT 2024, Karapetyan said they are taking this exact approach to bridge the last mile with AI agents for their customers.

“We first do contextual fine-tuning, based on personalized client data and synthetic data, so that the agent can have the base of general and domain knowledge. Then, based on how it starts to work and interact with its respective environment (historical data), we further improve it. This way, they learn to deal with dynamic conditions rather than a perfect world,” he explained.
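A minimal sketch of that observe-and-improve loop might look like the following: every agent run is logged as a trace, and failed traces become candidates for the next fine-tuning round. The names and the failure-harvesting heuristic here are illustrative assumptions, not Hercules AI’s actual pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class Trace:
    task: str
    steps: list[str]
    succeeded: bool

@dataclass
class FeedbackLoop:
    traces: list[Trace] = field(default_factory=list)

    def record(self, trace: Trace) -> None:
        self.traces.append(trace)

    def training_candidates(self) -> list[Trace]:
        """Failed runs are the most informative examples for the next round."""
        return [t for t in self.traces if not t.succeeded]

loop = FeedbackLoop()
loop.record(Trace("renew contract", ["fetch terms", "draft email"], True))
loop.record(Trace("migrate records", ["connect db", "timeout"], False))
print(len(loop.training_candidates()))  # -> 1 trace flagged for improvement
```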

Data quality, governance and security remain just as important

With the semantic layer and historical data-based reinforcement loop in place, organizations can power strong agentic AI systems. However, it’s important to note that building a data stack this way does not mean downplaying the usual best practices. 

This essentially means the platform being used should ingest and process data in real time from all major sources (empowering agents to adapt, learn and act instantly according to the situation), have systems in place to ensure the quality and richness of that data, and enforce robust access, governance and security policies to ensure responsible agent use.
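One of those systems, a quality gate, can be as simple as the sketch below: a record must pass schema and freshness checks before it ever reaches an agent’s context. The field names and the 24-hour threshold are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"id", "updated_at", "body"}
MAX_AGE = timedelta(hours=24)  # illustrative freshness bar for real-time use

def passes_quality_gate(record: dict) -> bool:
    # Schema completeness: every required field must be present.
    if not REQUIRED_FIELDS <= record.keys():
        return False
    # Freshness: stale records should not inform an agent acting "instantly".
    return datetime.now(timezone.utc) - record["updated_at"] <= MAX_AGE

fresh = {"id": 1, "updated_at": datetime.now(timezone.utc), "body": "..."}
print(passes_quality_gate(fresh))  # -> True
```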

“Governance, access control, and data quality actually become more important in the age of AI agents. The tools to determine what services have access to what data become the method for ensuring that AI systems behave in compliance with the rules of data privacy. Data quality, meanwhile, determines how well (or how poorly) an agent can perform a task,” Naveen Rao, VP of AI at Databricks, told VentureBeat.

He said missing out on these fronts in any way could prove “disastrous” for both the enterprise’s reputation as well as its end customers. 

“No agent, no matter how high the quality or impressive the results, should see the light of day if the developers don’t have confidence that only the right people can access the right information/AI capability. This is why we started with the governance layer with Unity Catalog and have built our AI stack on top of that,” Rao emphasized. 
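Rao’s point about access control reduces to a deny-by-default check in front of every tool call. The sketch below is a generic Python illustration, not Unity Catalog’s API; the policy table and resource names are invented.

```python
# Invented policy table: which tables each agent principal may read.
POLICIES = {
    "finance_agent": {"tables": {"invoices", "payments"}},
    "support_agent": {"tables": {"tickets"}},
}

def authorize(principal: str, table: str) -> bool:
    """Deny by default; allow only tables explicitly granted to the principal."""
    return table in POLICIES.get(principal, {}).get("tables", set())

def read_table(principal: str, table: str) -> str:
    if not authorize(principal, table):
        raise PermissionError(f"{principal} may not read {table}")
    return f"rows from {table}"  # placeholder for the real query

print(read_table("finance_agent", "invoices"))  # allowed
# read_table("support_agent", "payments")       # raises PermissionError
```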

Google Cloud, on its part, is using AI to handle some of the manual work that has to go into data pipelines. For instance, the company is using intelligent data agents to help teams quickly discover, cleanse and prepare their data for AI, breaking down data silos and ensuring quality and consistency. 

“By embedding AI directly into the data infrastructure, we can empower businesses to unlock the true potential of generative AI and accelerate their data innovation,” Kazmaier said.

That said, while the rise of AI agents represents a transformative shift in how enterprises can leverage automation and intelligence to streamline operations, the success of these projects will directly depend on a well-architected data stack. As organizations evolve their data strategies, those prioritizing the seamless integration of a semantic layer, with a specific focus on data quality, accessibility, governance and security, will be best positioned to unlock the full potential of AI agents and lead the next wave of enterprise innovation.

In the long run, these efforts, combined with advances in the underlying language models, are expected to drive nearly 45% compound annual growth for the AI agent market, propelling it from $5.1 billion in 2024 to $47.1 billion by 2030.


