EPAM is challenging the status quo of legacy data infrastructure, highlighting how traditional warehouses and batch-driven pipelines have become roadblocks to realising the full potential of generative AI.
To move from experimentation to enterprise-scale impact, organisations must rethink their data foundations for agility, real-time intelligence and AI-native design. In this shift, the company is pushing for intelligent data platforms and touchless engineering to power real-time AI agents.
At DES 2025, Srinivasa Rao Kattuboina, Senior Director and Head of the Data and Analytics Practice at EPAM Systems Inc. (EPAM), delivered a compelling session arguing that the era of agentic AI demands a radical revamp of how data platforms are architected, shifting from traditional batch processing towards real-time, intelligent and open infrastructures.
“AI is not just an application layer,” Kattuboina said. “We must now look at data platforms themselves as intelligent systems that integrate AI at their core.”
Why Current Platforms are Missing the Mark
Kattuboina noted that most current enterprise data platforms, built over decades through data warehouses, data lakes and lakehouses, are crumbling under the demands of generative AI and agentic systems.
These legacy systems, heavily reliant on batch processing, are unable to support real-time decision-making or autonomous agents that depend on fresh, clean and reliable data to function effectively.
He described this as a transition from traditional platforms to what he calls intelligent data platforms. These systems are designed not just to store and manage data but also to automate insights, deliver real-time recommendations, and align closely with a company’s AI goals.
One of the standout points Kattuboina emphasised was “dark data”, which refers to enterprise data that has been collected but remains unused.
“Every time we build a model, we only look at a portion of our data,” he said. “Terabytes are sitting in lakes and warehouses, untouched. With agentic systems, even SQL queries can now explore that dark data.”
He argued that the advent of AI assistants and agent-based architectures means organisations can finally start tapping into this hidden potential. But to do that, the data must be real-time, accessible and intelligently integrated across the pipeline.
Rethinking the Stack: From Batch to Real-Time
The shift to agentic AI brings with it new technological imperatives. Kattuboina explained that traditional data engineering practices, like automated pipelines and metadata-driven orchestration, are no longer sufficient.
Instead, he proposed reconfiguring the data architecture, highlighting the need for real-time processing, open architectures, minimal layering and embedded intelligence across the pipeline. Technologies like Apache Iceberg, Flink and Kafka are increasingly becoming the backbone of this transformation.
“With the pace at which Iceberg is evolving, you may not even need Snowflake or Databricks in the future. Open formats and compute frameworks can do much of the heavy lifting,” he added. Such platforms could dramatically cut AI implementation timelines from months or years to just weeks.
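To picture what such an open, streaming-first stack looks like in practice, the sketch below uses PyFlink to land events from a Kafka topic into an Apache Iceberg table. The topic, schema, broker address and S3 warehouse path are illustrative assumptions, not details from EPAM’s deployment.

```python
# A minimal PyFlink sketch of a real-time pipeline: Kafka source -> Iceberg sink.
# Assumes the Flink Kafka connector and iceberg-flink-runtime jars are on the
# classpath; topic, schema, broker and warehouse locations are placeholders.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Source: a Kafka topic of raw order events in JSON.
t_env.execute_sql("""
    CREATE TABLE orders_raw (
        order_id STRING,
        amount   DOUBLE,
        event_ts TIMESTAMP(3)
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'broker:9092',
        'scan.startup.mode' = 'latest-offset',
        'format' = 'json'
    )
""")

# Sink: an Iceberg table in an open-format warehouse (Hadoop-style catalog on S3).
t_env.execute_sql("""
    CREATE CATALOG lake WITH (
        'type' = 'iceberg',
        'catalog-type' = 'hadoop',
        'warehouse' = 's3://example-bucket/warehouse'
    )
""")
t_env.execute_sql("CREATE DATABASE IF NOT EXISTS lake.gold")
t_env.execute_sql("""
    CREATE TABLE IF NOT EXISTS lake.gold.orders (
        order_id STRING,
        amount   DOUBLE,
        event_ts TIMESTAMP(3)
    )
""")

# Continuously land cleaned events into the 'golden' Iceberg table;
# wait() keeps the client attached to the streaming job.
t_env.execute_sql("""
    INSERT INTO lake.gold.orders
    SELECT order_id, amount, event_ts
    FROM orders_raw
    WHERE order_id IS NOT NULL
""").wait()
```

Because the source and sink are declared through open connectors and table formats rather than a proprietary warehouse service, a definition like this is not tied to any single vendor, which is the portability the quote above points to.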
The intelligent data platform, as envisioned by Kattuboina, automates not just data ingestion but also transformation, feature engineering and MLOps workflows. “You connect your source, and your data is processed to the golden layer without manual intervention,” he said.
“You don’t have to insert metadata or orchestrate flows manually. That’s the level of intelligence we’re aiming for,” he added.
Eliminating Redundant Layers with AI Assistants
Kattuboina also explored the enterprise use cases driving this shift. A common scenario is the desire to replace thousands of static reports with a single AI assistant capable of querying real-time data.
However, such a vision is only feasible if the underlying platform is intelligent, nimble and built for AI from the ground up. “Everyone wants to eliminate the reporting layer with an assistant. But that requires the right data representation and infrastructure,” he emphasised.
Too often, organisations treat AI initiatives as separate threads, duplicating data into new stores rather than upgrading existing platforms. The key, he argued, is to bring intelligence into existing infrastructure so that it can serve both traditional analytics and emerging AI use cases.
To illustrate what is possible, Kattuboina described a project delivered on AWS using Snowflake, Kafka, Flink and Iceberg. The architecture enabled “touchless” data engineering, where engineers only had to configure table names and layer targets.
The system automatically took care of ingestion, transformation and orchestration. “You just configure what you want to process,” he said. “The entire pipeline, using Flink, Kafka and Iceberg, runs without human touch. That’s the future,” he added.
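The “configure table names and layer targets” workflow can be pictured as a thin declarative layer on top of such a pipeline. The following sketch is hypothetical rather than EPAM’s implementation: a small Python config lists each feed’s source topic, target table and layer, and a helper expands it into the streaming SQL that would otherwise be written and orchestrated by hand.

```python
# Hypothetical sketch of "touchless" configuration: engineers declare only table
# names and layer targets; the ingestion SQL is generated for them. All names,
# topics and layers below are illustrative assumptions, not EPAM's design.
from typing import Dict, List

PIPELINES: List[Dict[str, str]] = [
    {"source_topic": "orders",   "target_table": "lake.gold.orders",     "layer": "gold"},
    {"source_topic": "payments", "target_table": "lake.silver.payments", "layer": "silver"},
]

def generate_insert_sql(cfg: Dict[str, str]) -> str:
    """Expand one declared feed into its continuous INSERT statement."""
    return (
        f"INSERT INTO {cfg['target_table']} "
        f"SELECT * FROM {cfg['source_topic']}_raw"
    )

if __name__ == "__main__":
    # In a real deployment these statements would be submitted to a Flink
    # cluster; here they are printed to show what the configuration expands into.
    for cfg in PIPELINES:
        print(f"[{cfg['layer']}] {generate_insert_sql(cfg)}")
```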
Kattuboina concluded with a call for nimble, simple and real-time platforms that can integrate across multiple AI protocols and cloud ecosystems, from AWS to Azure and GCP. “We are seeing massive pressure to deliver AI fast. The question is, do you need six months to deploy a model, or can you do it in a few weeks?”
For more information on EPAM’s data and analytics capabilities, visit https://welcome.epam.in/