AI Projects Can’t Run like R&D Experiments in Pharma

Pharmaceutical companies often find themselves at a crossroads with every new technology—eager to lead, yet cautious about its reliability.

AI promises faster drug discovery, smarter clinical trials and more personalised patient care. But as regulators like the FDA roll out detailed guidance on AI- and machine-learning-based medical devices, the industry faces a sharper question: how can it innovate at speed without tripping over compliance?

According to Manish Mittal, managing principal and India business head at Axtria, the answer lies in embedding compliance into the DNA of AI programmes rather than treating it as an afterthought.

“Compliance should be baked in, not bolted on at the end,” he said. “Doing so not only reduces duplicated work and accelerates approvals, but it also safeguards patient safety and trust—two assets regulators now treat as non-negotiable.”

The FDA, along with its global counterparts, has moved from broad principles to lifecycle expectations. Frameworks like Good Machine Learning Practices, predetermined change control plans and laws such as the EU AI Act have raised the bar for everyone. “AI projects can no longer be run like R&D experiments,” Mittal emphasised.

Building Compliance and Trust Together

AI in pharma needs enterprise-grade governance, not lab-style exploration. From day one, regulatory, clinical, data science, quality and legal experts must work as one team. “Cross-functional AI product teams—bringing together regulatory, clinical, data science, quality and legal expertise—help ensure compliance and innovation evolve together,” Mittal said.

Automation plays a significant role. Version-controlled datasets, continuous integration pipelines, and secure audit trails make regulatory reviews faster and cleaner. Early regulator engagement, through pre-submission meetings or sandboxes, helps prevent last-minute obstacles. Post-market analytics close the loop by tracking data drift and safety issues, keeping systems accountable long after deployment.
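As a rough illustration of the post-market monitoring described above, the sketch below compares a production feature sample against the distribution the model was trained on using a two-sample Kolmogorov–Smirnov test. The threshold, feature names and alerting step are hypothetical placeholders, not any company's actual pipeline.

```python
# Hypothetical post-market drift check: compares a production feature sample
# against the distribution the model was trained on. Threshold and feature
# names are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # assumed significance level for raising an alert

def check_drift(train_col: np.ndarray, live_col: np.ndarray, name: str) -> bool:
    """Return True if the live data appears to have drifted from training data."""
    stat, p_value = ks_2samp(train_col, live_col)
    drifted = p_value < DRIFT_P_VALUE
    if drifted:
        # In a regulated setting this would write to a secure audit trail
        # and trigger human review, not just print a message.
        print(f"Drift alert for '{name}': KS={stat:.3f}, p={p_value:.4f}")
    return drifted

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    train_age = rng.normal(55, 12, 5_000)   # ages seen during training
    live_age = rng.normal(63, 12, 1_000)    # older population after launch
    check_drift(train_age, live_age, "patient_age")
```

In practice, a check like this would run on every scheduled scoring batch and log its result to the same audit trail reviewers inspect, which is what keeps a deployed model accountable over time.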

Still, technology alone doesn’t create trust. As Mittal pointed out, trust in AI rests on three pillars: privacy, fairness and accountability. For pharma, these are not optional values—they’re the foundation of adoption.

Privacy begins with clarity on data rights and obligations. Companies must map datasets to consent requirements, conduct impact assessments, and design privacy-first architectures using pseudonymisation, encryption, and federated learning to limit exposure. Accountability is built through transparent governance, independent audits and clear consent protocols that make patients partners, not subjects.
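To make the privacy-first idea concrete, here is a minimal pseudonymisation sketch: patient identifiers are replaced with keyed hashes so analysts never handle raw IDs, while the key holder can still re-link records if a safety review requires it. The key handling and field names are assumptions for illustration, not a production design.

```python
# Minimal pseudonymisation sketch: replace direct identifiers with a keyed
# hash (HMAC-SHA256). In practice the secret key would live in a key vault
# or hardware security module; the field names here are hypothetical.
import hmac
import hashlib

SECRET_KEY = b"replace-with-managed-key"  # assumption: fetched from a vault

def pseudonymise(patient_id: str) -> str:
    """Deterministic pseudonym so the same patient links across datasets."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "IN-2024-00123", "age": 61, "arm": "treatment"}
safe_record = {**record, "patient_id": pseudonymise(record["patient_id"])}
print(safe_record)  # identifier is now a stable, non-identifying token
```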

“If you want people to trust a new medicine, you show them it’s safe. AI is no different. We have to show, in plain sight, that it’s fair, protects privacy and helps doctors make better choices, not mysterious ones,” Mittal said.

Firms that do this well can even turn trust into an advantage—by publishing transparency metrics or offering privacy-preserving deployment options, they can signal leadership in ethical AI.

Fairness is Non-Negotiable

Bias is the biggest ethical fault line in AI-driven healthcare. Models trained on skewed data can harm underrepresented patient groups. Mittal insisted on diverse datasets that reflect real-world populations across ethnicity, gender, age and income. Validation must go beyond accuracy—testing performance across subgroups, monitoring algorithmic drift and conducting fairness audits regularly.
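One way to operationalise that subgroup validation is to report a model's metrics per demographic group rather than a single headline number. The sketch below does this with scikit-learn's recall score; the group labels, tolerance and data are placeholders chosen only to show the pattern.

```python
# Illustrative fairness-audit step: report recall per subgroup instead of a
# single aggregate score. Group labels, tolerance and data are hypothetical.
import numpy as np
from sklearn.metrics import recall_score

def recall_by_group(y_true, y_pred, groups, max_gap=0.05):
    """Compute recall per subgroup and flag gaps above an assumed tolerance."""
    scores = {}
    for g in np.unique(groups):
        mask = groups == g
        scores[g] = recall_score(y_true[mask], y_pred[mask])
    gap = max(scores.values()) - min(scores.values())
    return scores, gap, gap > max_gap

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 1])
groups = np.array(["A", "A", "B", "B", "A", "B", "A", "B"])
scores, gap, flagged = recall_by_group(y_true, y_pred, groups)
print(scores, f"gap={gap:.2f}", "review needed" if flagged else "within tolerance")
```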

“AI learns from what we feed it, so if we only show it one kind of person, it won’t know how to help everyone. The goal isn’t just to make things faster; it’s to make them fair for every patient,” Mittal added.

Fairness also depends on openness. Publishing methodologies, validation processes and data sources allow scrutiny and correction. Involving underrepresented communities in development makes systems more relevant and inclusive.

Human Oversight Anchors It All

Generative AI is now being used in clinical trial design—optimising protocols, selecting sites and identifying patient cohorts. It can cut costs and timelines, but it also raises new ethical challenges, which is why AI must remain under human supervision. Ethics committees and investigators should assess every AI-driven recommendation to protect patient safety and scientific validity.

Mittal stressed that efficiency should never outrun ethics. “AI outputs need human oversight,” he said. “Patient-centricity must guide every step, safeguarding autonomy and informed consent.”

Independent oversight from review boards and safety monitoring panels ensures AI supports, not replaces, human judgement. Training clinicians to interpret AI outputs and understand system limits is equally vital.

One Compliance Standard, Many Jurisdictions

The lack of global alignment adds complexity. The EU AI Act, US FDA guidance, and other national rules differ, creating uncertainty for cross-border trials and devices. Mittal advises aligning with the toughest standards available.

“Every country has its own rulebook for AI, but patients everywhere deserve the same care. If we build AI safely enough for the toughest rules, we build it safely enough for everyone. Good governance is the confidence that your AI can stand up to scrutiny in any market,” he said.

By embedding compliance, fairness and transparency throughout the AI lifecycle, pharma companies can stay ahead of shifting rules instead of scrambling to meet them.

The Human Bridge

Perhaps the most important lesson, Mittal said, is that AI should never function in isolation. “Think of responsible AI as building a suspension bridge. Data scientists lay the cables, regulators inspect the beams, and clinicians test the path. The bridge only holds if everyone builds together, with transparency and trust as the anchors.”

AI in pharma is no longer a concept—it’s a reality. But its promise will only hold if innovation moves hand in hand with compliance, fairness and human oversight. Embedding responsibility into every layer of AI is not a choice. It’s the difference between progress that heals and innovation that risks losing public trust.
