For over a decade, India’s fintech ecosystem has been synonymous with frictionless payments and fast onboarding. But the next phase will be defined by underwriting logic and recovery discipline. The system’s success will hinge on whether technology can help India lend and recover smarter, not just faster.
Superficially, the financial system appears strong. Gross non-performing assets (GNPAs) have fallen to around 2.6% in FY25, the lowest level in a decade, from 9.11% in FY21.
Yet, dig deeper and much of that “clean-up” came from ₹1.7 trillion in loan write-offs in FY24, according to Reserve Bank of India (RBI) data, not necessarily better credit selection. Even the Insolvency and Bankruptcy Code (IBC), India’s proudest reform, has delivered average recoveries of just 31% and extended timelines.
While fintech has figured out the front end of finance, the back end, where credit becomes a consequence, still needs a structural rewrite.
Compliance, Telemetry and Reality Checks
Collections, the least glamorous segment of finance, remains the weakest link in India’s credit chain. It is dominated by fragmented agencies, manual processes, and minimal oversight. Yet, the RBI has made clear that this era is ending. Its digital lending norms now demand that every intermediary, from field agents to recovery partners, maintain traceable, compliant data trails.
Ranjan Agrawal, CEO of Collectedge, a managed marketplace for debt recovery, is building for this shift. “Most fintech innovation in India has focused on lending and disbursement because that’s the visible, growth-oriented part of the value chain,” he says. “But we chose to build for the hardest part [collections], precisely because it’s where the real risk lies and where lenders’ profitability is ultimately decided.”
Collectedge’s platform uses AI to decode borrower intent and optimise recovery strategy, combining machine scoring with field-level visibility. Its models analyse voice tone, repayment patterns, and follow-up behaviour to distinguish borrowers with an intent to pay from genuine defaulters.
“Our models study tone, responsiveness, and repayment patterns… every field visit feeds back into our AI engine, improving accuracy over time,” Agrawal says, adding that accuracy now exceeds 80% in predicting likely repayments.
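The idea of fusing behavioural signals into a repayment-likelihood score can be sketched roughly as a weighted logistic model. This is an illustrative assumption only: the feature names, weights, and functional form below are hypothetical and not Collectedge's actual model.

```python
# Hypothetical sketch: combine 0-1 behavioural signals into a
# repayment-likelihood probability via a simple logistic model.
import math

def repayment_likelihood(signals: dict) -> float:
    """Map behavioural signals (each scaled 0-1) to a probability of repayment."""
    weights = {
        "responsiveness": 1.8,          # answers calls, replies to follow-ups
        "repayment_regularity": 2.4,    # past EMIs paid on schedule
        "tone_cooperation": 0.9,        # cooperative tone in recorded calls
        "field_visit_engagement": 1.1,  # acknowledged field visits
    }
    bias = -3.0  # base rate: overdue accounts skew toward default
    z = bias + sum(w * signals.get(k, 0.0) for k, w in weights.items())
    return 1 / (1 + math.exp(-z))  # logistic squash to (0, 1)

score = repayment_likelihood({
    "responsiveness": 0.9,
    "repayment_regularity": 0.7,
    "tone_cooperation": 0.8,
    "field_visit_engagement": 1.0,
})
print(round(score, 2))
```

In practice, as the article notes, such weights would be learned and continually refined as each field visit feeds outcomes back into the model.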
But the bigger innovation is compliance. “Our technology ensures every borrower interaction is monitored, auditable, and compliant,” Agrawal says.
“Each field agent uses an app that captures call logs, location trails, and borrower acknowledgements,” he adds. This creates a digital chain of custody, a transformation from the opaque, agent-driven world of collections to a system that can finally pass a regulatory audit.
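One way such a "digital chain of custody" can be made tamper-evident is to hash-chain each interaction record to the one before it, so any later edit breaks the chain. A minimal sketch, with hypothetical field names (this is not a description of Collectedge's implementation):

```python
# Illustrative hash-chained audit log: each record stores the previous
# record's hash, so retroactive tampering is detectable on verification.
import hashlib
import json

def append_event(chain: list, event: dict) -> None:
    """Append an interaction record whose hash covers the event and prior hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list) -> bool:
    """Recompute every hash; return False if any record was altered."""
    prev = "0" * 64
    for rec in chain:
        body = {"event": rec["event"], "prev_hash": rec["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append_event(log, {"type": "call", "agent": "A102", "ts": "2025-01-10T09:30"})
append_event(log, {"type": "field_visit", "agent": "A102", "gps": "12.97,77.59"})
print(verify(log))   # intact chain verifies
log[0]["event"]["agent"] = "A999"  # simulate tampering
print(verify(log))   # verification now fails
```

The same property, every record auditable and immutable after the fact, is what allows a collections trail to survive a regulatory audit.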
Still, AI cannot solve the macro-structural problem alone. Even as technology improves field efficiency, IBC recoveries continue to plateau, and write-offs remain large.
Agrawal envisions a broader remedy: a “centralised recovery infrastructure layer,” a UPI-like framework for collections. It would standardise lender interfaces, benchmark agent performance, and give regulators an API window into real-time recovery data. It’s an ambitious vision, and one that India’s financial ecosystem badly needs.
Underwriting After Bureaus
If recoveries determine how finance ends, underwriting determines how it begins. For decades, India’s credit system has relied on bureau scores, lagging indicators that offer little insight into a borrower’s current financial behaviour.
“Bureau scores tell you who someone was six months ago. AI lets you see who they are today,” says Joydip Gupta, APAC head at Scienaptic AI, an AI-powered credit underwriting company. Its engine processes over ₹85,000 crore in decisions per quarter, evaluating borrowers through live financial signals, GST data, payment patterns, cash flows, and account behaviour.
In Gupta’s framing, the future of lending is continuous scoring, not static approval. “Static credit limits make no sense when someone’s financial reality changes every month,” he says. “The future is credit that adapts in real time.”
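What "credit that adapts in real time" might mean mechanically is a limit recomputed each cycle from live signals rather than fixed at sanction. The multipliers and thresholds below are illustrative assumptions, not Scienaptic's actual rules:

```python
# Hedged sketch of continuous scoring: recompute a credit limit each
# month from recent cash-flow behaviour instead of a static approval.

def adaptive_limit(base_limit: float, monthly_inflow: float,
                   debt_utilisation: float, missed_payments: int) -> float:
    """Scale the sanctioned limit up or down from recent behaviour."""
    limit = base_limit
    if missed_payments > 0:
        limit *= 0.5 ** missed_payments  # halve per recent missed payment
    if debt_utilisation > 0.8:
        limit *= 0.7                     # heavy utilisation: tighten further
    # cap at a multiple of observed monthly inflow (income proxy)
    return round(min(limit, 3 * monthly_inflow), 2)

# Income proxy of ₹40,000/month, high utilisation, one missed EMI:
# the limit contracts in the next cycle instead of staying static.
print(adaptive_limit(200_000, monthly_inflow=40_000,
                     debt_utilisation=0.85, missed_payments=1))
```

The inputs here (inflows, utilisation, payment history) are exactly the kind of live signals the Account Aggregator rails discussed below are meant to deliver with consent.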
This adaptive lending logic depends on Account Aggregator (AA) networks, India’s newest data infrastructure that securely pipes banking and financial data to lenders with user consent. Yet, adoption remains uneven.
Scienaptic’s focus on explainability also reflects a rising regulatory concern. “Every decision comes with a clear explanation that both regulators and lenders can understand,” Gupta notes. “When we decline an application, we can point to specific factors like irregular income patterns or high debt utilisation, not just ‘the algorithm said no.’”
It’s an overdue shift. In a space where AI-driven credit models can determine inclusion or exclusion, transparency is not optional. The RBI has explicitly warned that algorithmic opacity in credit scoring could introduce systemic bias, calling for model audits and risk governance frameworks comparable to financial audits.
“The technology exists and regulators are surprisingly progressive,” Gupta admits. “The real barrier is the institutional mindset… treating AI like a fancy calculator rather than a decision-making partner.”
Behavioural Biases and Financial Decision-Making
The same human biases that distort credit underwriting also skew capital-market behaviour. Yashas Khoday, chief product officer at FYERS, an online trading and investment platform, observes this parallel. “Many investors make decisions based on impulse rather than logic. Human minds are naturally full of biases… We often look for data that supports what we already want to believe,” he says.
FYERS’ work in building AI-assisted investor education isn’t just about trading; it’s a mirror for the financial system itself. Its FIA GPT tool is built around responsibility, not speculation. “FIA is not a recommendation engine. It’s designed to assist, not advise. Transparency is a core principle. FIA is not a black box,” Khoday explains.
That distinction, assistive versus advisory, applies equally to AI underwriting and AI recoveries. Both must remain decision aids, not decision proxies. Just as FYERS refuses to let its models execute trades autonomously, lenders must resist the temptation to let opaque AI systems approve or deny loans without human oversight.
Khoday highlights that “Unlike a human, AI has no emotion or personal bias. It can process vast amounts of market data, news, and patterns in real time and offer insights that are hard to catch manually.”
According to him, AI provides an objective evaluation of expert opinions, helping users assess whether the advice aligns with their risk profiles, goals, and timelines. Essentially, it serves as an extra layer of insight, enabling users to trade with more clarity and confidence.
If India’s financial modernisation is to sustain lending without rebuilding its NPA pile, AI must be held accountable at both ends of the credit chain: every agent interaction in recoveries logged and auditable, and every underwriting decision engine treated as a regulated instrument.
The post When AI Learns the Language of Risk appeared first on Analytics India Magazine.