As models grow in size, one crucial element remains elusive: explainable AI. The larger they get, the harder it becomes to understand their inner workings, and for models at 405 billion parameters and beyond, tracing how they arrive at specific outcomes becomes extremely difficult.
Vinay Kumar Sankarapu, CEO of Arya.AI, perfectly encapsulates the challenge: “Capabilities are fine—you can say your model is 99.99% accurate, but I want to know why it is 0.01% inaccurate. Without transparency, it becomes a black box, and no one will trust it enough to put billions of dollars into it.”
His statement cuts to the core of the black box dilemma: trust demands understanding.
The Problem Goes Beyond Hallucinations
A recent study by the University of Washington revealed significant racial, gender, and disability biases in how LLMs ranked job applicants’ names. The research found that these models favoured names associated with White individuals 85% of the time, and names perceived as Black male were never preferred over White male names.
This study highlights the complex interplay of race and gender in AI systems and underscores the importance of considering intersectional identities when evaluating AI fairness.
When we spoke to Mukundha Madhavan, tech lead at DataStax, about model hallucination and model size, he said, “Foundation models, their training, and architecture—it feels like they are external entities, almost victims of their own complexity. We are still scratching the surface when it comes to understanding how these models work, and this applies to both small and large language models.”
He added that size alone doesn’t matter. Whether a model has 40 billion or 4 billion parameters, these are just numbers; the real challenge is making sense of them and understanding what they represent.
Sankarapu pointed out a paradox in AI development: “We are creating more complicated models that are harder to understand while saying alignment is required for them to become mainstream.” He noted that while AI systems have scaled through brute force—using more data and layers—this approach has plateaued. Now, efficiency and explainability need to scale alongside model complexity.
Arya.AI has been working on solving this issue through its proprietary ‘Backtrace’ technique, which provides accurate explanations for deep learning models across various data types. Sankarapu explained, “We want to create explainability that is very accurate and can work for any kind of model so that LLMs are no longer black boxes but white boxes.”
This effort aligns with emerging regulations like the EU’s AI Act, which mandates explainability for high-risk AI applications.
Sankarapu added, “Once you start understanding these models, you can do tons of things around them—like improving efficiency or unlocking new research areas.” This positions explainability as a catalyst for both innovation and operational efficiency.
In response to these challenges, Arya.AI has developed AryaXAI, which provides precise, granular explanations for AI decisions across various model architectures, making it particularly valuable for enterprise applications.
AryaXAI stands out by offering feature-level explanations that help users understand exactly which inputs influenced a model’s decision and to what extent. The platform can analyse both structured and unstructured data, providing explanations for decisions made by complex neural networks, including those processing images, text, and tabular data.
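Backtrace itself is proprietary and its internals are not public, but the general idea of feature-level attribution can be illustrated with the open-source SHAP library, which assigns each input feature a contribution score for a given prediction. The dataset, model, and library below are illustrative assumptions for this sketch, not AryaXAI’s implementation.

```python
# Illustrative feature-level attribution with SHAP on a public tabular dataset.
# This is a generic analogue, not AryaXAI's proprietary Backtrace method.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Train a small model on public tabular data
data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer estimates how much each feature pushed a prediction
# up or down relative to the model's average output
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# Per-feature contributions for the first sample: which inputs drove the decision
for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.4f}")
```

In production-style explainability tooling, scores like these are typically mapped into plain-language statements, which is the kind of human-readable output described below.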
A key differentiator is AryaXAI’s ability to provide explanations in real time, making it practical for production environments where quick decision-making is crucial. The platform generates natural language explanations that are easily understood by both technical and non-technical stakeholders, bridging the gap between AI capabilities and business requirements.
For financial institutions, specifically, AryaXAI offers detailed audit trails and compliance documentation, addressing regulatory requirements while maintaining model performance. The platform’s ability to explain decisions in human-readable terms has made it particularly valuable in sectors where transparency is non-negotiable, such as banking, insurance, and healthcare.
What Should Be the Vision for Autonomous Systems?
On the future of autonomous systems in banking, Sankarapu shared Arya.AI’s focus on aligning models with end goals through feedback loops and explainability. “To build truly autonomous agents capable of handling complex tasks like banking transactions, they must be explainable and aligned with user expectations,” he said.
Madhavan proposed a three-pronged approach to reducing hallucinations. The first is model explainability research, which focuses on uncovering how AI models make decisions by analysing their embeddings, attention mechanisms, and other internal processes, which are often opaque; a minimal illustration of this kind of inspection follows the three approaches below. This research is essential for building trust and transparency in AI.
The second is model alignment, which ensures that AI models behave as intended by aligning their outputs with human values and reducing issues like hallucinations or unintended biases.
Finally, practical implementation prioritises creating reliable systems for real-world applications by incorporating safeguards and guardrails that allow models to operate effectively within specific business contexts, even if complete transparency is unattainable.
Together, these approaches aim to balance the growing complexity of AI systems with operational reliability and ethical considerations.
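To give a concrete sense of what “analysing attention mechanisms” can look like in practice, the sketch below pulls attention weights out of a small open-source encoder using Hugging Face Transformers. The model name and example sentence are illustrative choices for this sketch, not part of Madhavan’s or Arya.AI’s tooling.

```python
# Minimal sketch: inspecting a transformer's attention weights with Hugging Face
# Transformers. Model and input sentence are illustrative assumptions only.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # any encoder that can return attentions
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

inputs = tokenizer("The loan application was rejected.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, heads, seq_len, seq_len)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
last_layer = outputs.attentions[-1][0]   # (heads, seq_len, seq_len)
avg_attention = last_layer.mean(dim=0)   # average over attention heads

# For each token, show which other token it attends to most strongly
for i, token in enumerate(tokens):
    j = int(avg_attention[i].argmax())
    print(f"{token:>12} -> {tokens[j]}")
```

Attention maps like this are only one window into a model’s internals, which is why researchers pair them with embedding analysis and alignment work rather than treating them as full explanations.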
Arya.AI is also exploring advanced techniques like contextual deep Q-learning to enable agents to handle tasks that require memory and planning. However, Sankarapu cautioned against over-focusing on futuristic visions at the expense of current market needs. “Sometimes you get caught up with too much future vision and lose sight of current realities,” he concluded.
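Arya.AI has not published details of its contextual variant. As a rough orientation only, the following is a minimal, generic deep Q-learning update step in PyTorch, with synthetic transitions standing in for a real environment; the network sizes, state dimension, and data are assumptions made purely for illustration.

```python
# Minimal, generic deep Q-learning update in PyTorch.
# Synthetic transitions stand in for a real environment; this is not
# Arya.AI's contextual method, only a sketch of the basic learning loop.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 8, 4, 0.99

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# Replay buffer: a simple memory of past (state, action, reward, next_state) steps
buffer = deque(maxlen=10_000)
for _ in range(256):  # placeholder transitions instead of a real environment
    buffer.append((torch.randn(STATE_DIM), random.randrange(N_ACTIONS),
                   random.random(), torch.randn(STATE_DIM)))

batch = random.sample(list(buffer), 64)
states = torch.stack([b[0] for b in batch])
actions = torch.tensor([b[1] for b in batch])
rewards = torch.tensor([b[2] for b in batch], dtype=torch.float32)
next_states = torch.stack([b[3] for b in batch])

# Standard DQN target: reward plus discounted value of the best next action
with torch.no_grad():
    target = rewards + GAMMA * target_net(next_states).max(dim=1).values
prediction = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

loss = nn.functional.mse_loss(prediction, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"TD loss: {loss.item():.4f}")
```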