Enterprises worldwide are still wrestling with the challenge of taking AI agents from polished demos to working production systems. Databricks thinks it has cracked the problem with two key innovations: Agent Bricks, a framework for building domain-specific AI agents, and Lakebase, an AI-native operational database. Both are part of the company’s larger push to unify data, analytics, and AI.
The company recently announced that it has hit a $4 billion revenue run-rate, with more than $1 billion of that coming from AI, and that it is raising a $1 billion Series K at a valuation of over $100 billion to expand Agent Bricks, Lakebase, and its global AI business.
Nick Eayrs, VP of field engineering for APJ at Databricks, told AIM in an exclusive interaction that he believes the missing piece has been automation in evaluation and optimisation.
He explained that most enterprises are “flying blind” when building agents, without reliable ways to measure quality or balance costs. The complexity, he added, makes progress slow and expensive.
Solving the Agent Bottleneck
Agent Bricks was introduced to address these hurdles directly. Rather than forcing teams to manually tweak prompts, models, and retrieval pipelines, Databricks’ system allows users to simply declare the task in natural language and let the framework auto-optimise. The platform automatically generates evaluation suites, applies techniques such as prompt engineering or reward models, and balances quality with cost.
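In rough terms, that declare-then-auto-optimise loop could look something like the sketch below. This is a hypothetical illustration only; the AgentTask, generate_eval_suite, and optimise names are assumptions made for this article, not the actual Agent Bricks API.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of a declarative agent workflow. This is NOT the real
# Agent Bricks API; it only illustrates the "declare, then auto-optimise" idea.

@dataclass
class AgentTask:
    description: str                                      # task declared in natural language
    examples: list[dict] = field(default_factory=list)    # optional sample inputs

def generate_eval_suite(task: AgentTask) -> list[dict]:
    """Stand-in for an auto-generated evaluation suite derived from the task."""
    return [{"input": ex["input"], "expected": ex.get("expected")} for ex in task.examples]

def optimise(task: AgentTask,
             candidates: list[Callable],
             judge: Callable,
             cost: Callable) -> Callable:
    """Pick the candidate agent with the best quality-per-cost trade-off."""
    suite = generate_eval_suite(task)
    best, best_score = None, float("-inf")
    for agent in candidates:
        quality = sum(judge(agent(case["input"]), case["expected"]) for case in suite) / len(suite)
        score = quality - 0.1 * cost(agent)   # crude quality/cost balance for illustration
        if score > best_score:
            best, best_score = agent, score
    return best
```

The point of the pattern is that the candidates (prompt variants, models, retrieval settings) and the evaluation cases are produced and compared by the platform rather than hand-tuned by engineers.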
Eayrs said, “Agent Bricks delivers both higher quality and lower cost through many breakthroughs from our Mosaic AI research team.”
Eayrs pointed to a few of these breakthroughs. Test-time Adaptive Optimisation (TAO), for instance, teaches models to get better at a task using past input examples, often bringing open-source models up to the quality of expensive proprietary ones.
Similarly, Prompt-Guided Reward Models allow rules to be updated with prompts while still delivering reliable assessments.
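One way to picture a prompt-guided reward model is as an LLM judge whose grading rules live in an editable prompt, so the rules can change without retraining the judge. The snippet below is a generic sketch of that idea; call_llm is a placeholder for whatever judge-model endpoint is available, not a Databricks function.

```python
# Generic sketch of a prompt-guided reward model: the grading rules live in a
# prompt that can be edited, while the underlying judge model stays fixed.

RUBRIC = """You are a strict reviewer. Score the answer from 0 to 1.
Rules (editable without retraining the judge):
- The answer must cite the source document.
- The answer must not reveal customer identifiers.
"""

def call_llm(prompt: str) -> str:
    """Placeholder for a call to any judge-model endpoint (assumption)."""
    raise NotImplementedError

def reward(question: str, answer: str) -> float:
    prompt = f"{RUBRIC}\nQuestion: {question}\nAnswer: {answer}\nScore:"
    return float(call_llm(prompt).strip())
```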
What sets Agent Bricks apart is the role of human feedback. With Agent Learning from Human Feedback, domain experts can guide agent behaviour without needing deep technical knowledge. As Eayrs put it, this “democratises agent development,” making it possible for subject experts to shape outcomes directly.
The result, Databricks claims, is that enterprises can iterate faster, reduce costs, and bring agents into production with far less friction. Crucially, the framework continues to re-optimise agents even after deployment, ensuring adaptability in real-world settings.
A Database for the AI-First Era
On the data side, Databricks is challenging the dominance of decades-old operational databases with Lakebase. Built on Postgres and powered by Neon technology, Lakebase is designed for the fast, concurrent data demands of AI applications and agents.
“Traditional operational databases (OLTP) are a $100-billion-plus market, but they are based on decades-old architecture designed for slowly changing apps, making them difficult to manage, expensive, and prone to vendor lock-in,” Eayrs said.
Lakebase, by contrast, sits within the lakehouse architecture and offers seamless autoscaling.
By converging operational and analytical layers, Lakebase reduces latency and gives enterprises real-time access to current information.
This is key, Eayrs noted, because “every AI application, agent, recommendation, and automated workflow needs fast, reliable data at the speed and scale of AI agents.”
Governance and security are also central to the design. Lakebase integrates with Unity Catalog to enforce consistent permissions, separates storage and compute for efficient scaling, and supports real-time synchronisation between operational and analytical systems.
This combination, Eayrs said, ensures enterprises can move faster without compromising compliance or security.
He explained, “At its core, Lakebase is built upon proven open source technologies. Unlike proprietary systems, it avoids vendor lock-in, promotes transparency, and enables community-driven innovation.”
“Lakebase leverages Postgres, which is widely used by developers and has seen rapid adoption over the last few years,” he said.
With many large language models already trained on large volumes of Postgres-related code and documentation, the system aligns naturally with the workflows of AI-native applications.
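Because Lakebase speaks standard Postgres, existing drivers and SQL tooling should connect in the usual way. The snippet below is a minimal sketch using the open-source psycopg driver; the host, database, and credentials are placeholders rather than a real Lakebase endpoint.

```python
import psycopg  # open-source Postgres driver; any Postgres-compatible client should work

# Placeholder connection details, not a real Lakebase endpoint.
with psycopg.connect(
    host="my-lakebase-instance.example.com",
    dbname="appdb",
    user="agent_service",
    password="***",
    sslmode="require",
) as conn, conn.cursor() as cur:
    # An agent reading fresh operational state with ordinary SQL.
    cur.execute(
        "SELECT order_id, status FROM orders "
        "WHERE updated_at > now() - interval '5 minutes'"
    )
    for order_id, status in cur.fetchall():
        print(order_id, status)
```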
The Road Ahead
Together, Agent Bricks and Lakebase sketch out Databricks’ vision of a unified platform where data, analytics, and AI converge.
Eayrs sees them as “key pillars” of the company’s strategy, with both expected to drive adoption in different ways. While Agent Bricks tackles the complexity of deploying agents, Lakebase reimagines the database for an AI-first world.
If enterprises do manage to move beyond demos, it will likely be through systems that blend simplicity, scalability, and openness. Databricks is betting that the combination of intelligent agents and AI-ready databases is exactly what the market has been waiting for.