Modern technology is omnipresent in our lives, yet its adoption in healthcare remains slow. One researcher working to change that is Ziad Obermeyer, a professor at the University of California, Berkeley, who has focused on the intersection of machine learning and health since 2017.
A number of moonshots, large-scale initiatives backed by governments or conglomerates, have promised to revolutionise the sector but have yet to deliver an affordable solution. Obermeyer, who made it to the TIME AI 100 list, has charted a different course.
He is working with MIT researchers on a research project to build AI-based diagnostics that can run on wearable devices.
“Given how cheap and reliable technology is for acquiring data for laymen through a smartphone or a wearable device, it opens up a whole new way to access healthcare, outside the traditional medical system,” he told AIM in an exclusive interview.
For countries like India, which have technological talent but a strained healthcare system, Obermeyer believes there is potential to leapfrog the cumbersome electronic health records and messy data problems seen in the US. Through the ongoing project, Obermeyer and his team are training algorithms that run on smartphones to diagnose conditions such as heart attacks, Alzheimer’s dementia and other cognitive problems. They are also testing whether these diagnostic tools can be deployed by community health workers in people’s homes, outside of government facilities, hospitals or other established institutions, to deliver healthcare at much lower cost, he explained.
Tech-ing it Personally
As a researcher, Obermeyer has often found it difficult to access health data. “The data lives inside silos that are controlled by hospitals, governments and similar agencies. It’s hard to collaborate for research, build healthcare-based AI products and evaluate how they work,” he said, pointing to the growing dominance of a handful of big tech companies that have access to huge amounts of medical data.
“Ultimately, that shouldn’t be the case for AI in healthcare or anywhere else. That’s not really how we want markets to work or deliver value,” he stated.
To address this, Obermeyer launched the nonprofit Nightingale Open Science project in 2021 with $6 million in philanthropic funding from Eric Schmidt’s foundation. “We used that to build up datasets in collaboration with health systems and governments around the world, making sure it is ethical and secure. We put that data on our cloud platform, where any researcher in the world can access it for free. That is Nightingale’s open-science mentality,” he explained.
While there is no way to reduce the risk to zero, the team at Nightingale has multiple layers of process in place to protect patients’ privacy and ensure the data is used ethically.
First, the data is de-identified before it is taken out of the health system, in compliance with both US and European Union laws. Second, the data is kept on Nightingale’s own cloud platform, which allows the team to monitor how it is being used. Obermeyer highlighted that this doesn’t compromise the client’s IP. “Everything is logged and stored, so if there are any allegations of malfeasance, we can always go back to the record to figure out exactly what happened,” he said.
Beyond these, there are two additional layers of ethical oversight — internal and external.
Internally, Nightingale evaluates proposals to ascertain alignment with its own ethical standards. Externally, the health systems that serve as data sources wield veto power, ensuring that nothing happens that is not in the best interest of the patients whose data is being used, he explained.
The Hallucination Issue
One cannot talk about AI models without discussing hallucinations—a concern not lost on Obermeyer.
As a layperson, there is no way to check the output, and it is incredibly dangerous for people to use chatbots without the resources to evaluate what they produce, he noted. The broader problem is how these AI models are currently evaluated. “We need meaningful benchmarks that can’t be gamed or memorised, but give us a real view of how things work,” Obermeyer suggested.
In 2020, he co-founded Dandelion Health, which offers a free public service for evaluating algorithms, starting with electrocardiograms and moving on to other data modalities, including clinical notes. “Soon we’ll be able to evaluate large language models,” he revealed.
“Anyone can upload an algorithm to the environment where their intellectual property is protected. We will run that algorithm on our data and give them feedback on the performance of their algorithm on any chosen metric,” he elaborated.
“But they can’t do this on their own because they don’t have access to our data. Independent third-party evaluation is also important, to know what these models are good at and where they can be really dangerous,” he added.
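To make the idea of metric-based, third-party evaluation concrete, here is a minimal sketch in Python. It assumes a hypothetical setup in which a developer submits a prediction function and the evaluator runs it against held-out clinical labels, reporting only a chosen metric such as AUROC; none of the names or interfaces below come from Dandelion’s actual platform.

```python
# Minimal sketch of third-party model evaluation on held-out data.
# Hypothetical interface only; not Dandelion Health's actual API.
from typing import Callable, Sequence
from sklearn.metrics import roc_auc_score, average_precision_score

# Metrics the evaluator is willing to report back to the developer.
METRICS: dict[str, Callable] = {
    "auroc": roc_auc_score,
    "auprc": average_precision_score,
}

def evaluate_submission(
    predict: Callable[[Sequence[dict]], Sequence[float]],
    records: Sequence[dict],   # de-identified clinical records held by the evaluator
    labels: Sequence[int],     # ground-truth outcomes, never shared with the developer
    metric: str = "auroc",
) -> float:
    """Run the submitted model on held-out data and return only the chosen metric."""
    scores = predict(records)  # the developer's code runs inside the evaluator's environment
    return METRICS[metric](labels, scores)

if __name__ == "__main__":
    # Toy submission: a stand-in for an uploaded algorithm.
    def toy_model(records):
        return [0.9 if r["age"] > 60 else 0.2 for r in records]

    data = [{"age": 70}, {"age": 45}, {"age": 66}, {"age": 30}]
    outcomes = [1, 0, 1, 0]
    print(f"AUROC: {evaluate_submission(toy_model, data, outcomes):.2f}")
```

The design point the sketch tries to capture is that the developer only ever sees aggregate performance numbers, while the data and labels stay inside the evaluator’s environment.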
The startup has agreements with a handful of US health systems through which the team can access their clinically generated data: electronic health records, images, waveforms, sleep monitoring studies, everything.
“We de-identify the data and make curated subsets available to AI product developers, startups and companies that want to build products for better healthcare. A company can come to us and say they need a specific dataset, and we can create it for them to build AI algorithms and then develop them for clinics,” he said.
The two separate projects are working towards the same goal: to get more people building, using and validating AI products, Obermeyer concluded.