AI is actively transforming the healthcare sector, especially medical imaging. This data-driven approach is helping doctors diagnose and treat patients more quickly and accurately. The technology speeds up the imaging process and supports personalised treatment plans for each patient.
What is AI’s role in the imaging process?
Through segmentation, AI highlights specific structures in images, aiding the early and accurate detection of diseases. Preprocessing techniques further improve image quality by reconstructing incomplete or noisy computed tomography (CT) data.
Beyond diagnostics, predictive analytics enables doctors to anticipate the rate of progression of a disease and suggest potential treatments. Quality control safeguards ensure images are clear and artefact-free for reliable use. Meanwhile, continuous imaging allows for ongoing monitoring, so treatments can be adjusted as and when needed.
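To make the segmentation idea above concrete, here is a deliberately simplified sketch: highlighting a bright structure in a synthetic 2D scan by intensity thresholding. Real clinical segmentation uses trained neural networks rather than a fixed threshold; the image and threshold here are illustrative assumptions, not part of any product described in this article.

```python
import numpy as np

# Minimal sketch of segmentation: highlight a bright "structure"
# in a synthetic 2D scan by intensity thresholding. Real clinical
# segmentation uses trained models; this array and threshold are
# illustrative assumptions.
image = np.zeros((8, 8))
image[2:5, 3:6] = 1.0                                    # bright region standing in for an organ
image += 0.1 * np.random.default_rng(0).random((8, 8))   # mild background noise

mask = image > 0.5              # binary segmentation mask
num_pixels = int(mask.sum())
print(f"segmented {num_pixels} pixels")  # 9 pixels in the 3x3 region
```

The resulting binary mask is the kind of output that, in a real pipeline, would be overlaid on the scan to draw a radiologist's attention to the highlighted structure.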
Direct Applications of AI in CT Scans
CT scans, a type of 3D imaging, play a crucial role in detecting conditions like lung cancer, neurological issues, and trauma. Over 70 million exams are conducted annually in the US alone. Tech giant Google recently announced the release of CT Foundation, its new medical foundation tool for 3D CT volumes.
According to an official blog post, CT Foundation builds on Google’s prior expertise in 2D medical imaging for chest radiographs, dermatology, and digital pathology.
This tool, built on VideoCoCa, simplifies processing DICOM format CT scans by creating a 1,408-dimensional vector that captures key details about organs, tissues, and abnormalities.
Announcing CT Foundation, a new medical imaging embedding tool that accepts a computed tomography (CT) volume as input and returns a small, information-rich numerical embedding that can be used to rapidly train models. Learn more and try it out yourself → https://t.co/AFXAj5edTE pic.twitter.com/hXKN8uTh4V
— Google AI (@GoogleAI) October 21, 2024
CT Foundation allows researchers to train AI models more efficiently with less data, significantly reducing the computational resources required, as compared to traditional methods. Researchers can also use its API for free.
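The efficiency gain comes from the embedding: once each CT volume is reduced to a single 1,408-dimensional vector, a lightweight classifier can be trained on top of it instead of a full 3D network. The following is a hedged sketch of that idea using synthetic stand-in embeddings and plain logistic regression; the data, the separation direction, and the training setup are all assumptions for illustration, not CT Foundation's actual output.

```python
import numpy as np

# Hypothetical illustration: given a 1,408-dimensional embedding per
# CT volume (as CT Foundation is reported to return), a small linear
# model can be trained on very little data. The embeddings and labels
# below are synthetic stand-ins.
rng = np.random.default_rng(0)
EMBED_DIM = 1408

# Simulate 40 "abnormal" and 40 "normal" volumes whose embeddings
# differ slightly along one random direction.
direction = rng.normal(size=EMBED_DIM)
X = np.vstack([
    rng.normal(size=(40, EMBED_DIM)) + 0.5 * direction,
    rng.normal(size=(40, EMBED_DIM)) - 0.5 * direction,
])
y = np.array([1] * 40 + [0] * 40)

# Plain logistic regression via gradient descent -- no deep network
# needed, because the embedding already encodes the image content.
w, b, lr = np.zeros(EMBED_DIM), 0.0, 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)       # gradient step on weights
    b -= lr * float(np.mean(p - y))          # gradient step on bias

accuracy = float(np.mean(((X @ w + b) > 0) == (y == 1)))
print(f"training accuracy: {accuracy:.2f}")
```

The design point is that the heavy lifting (turning a 3D volume into a compact vector) is done once by the foundation model, so downstream training is cheap.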
Integrating AI into the complex task of interpreting 3D CT scans provides advanced tools for efficient analysis, helping radiologists spot even the smallest abnormalities that might otherwise be missed.
For example, AI-driven methods now streamline blood flow assessment in stroke patients, providing real-time insights to accelerate treatment decisions during instances of critical care.
In COVID-19 research conducted by Rafał Obuchowicz and colleagues, 3D CT analysis revealed fibrotic lung changes in cancer patients post-infection, enhancing the general understanding of infection-induced vulnerabilities.
Generative Adversarial Networks (GANs) are used to enhance CT image reconstruction by filling in missing data. Additionally, UnetU, a deep learning tool, denoises images and enhances material differentiation, reducing processing time and supporting more detailed analysis.
Deep learning-based segmentation provides thorough diagnostic insights, replacing manual annotation and increasing workflow efficiency, ultimately improving patient outcomes through enhanced diagnostic clarity.
How do LLMs Analyse Medical Scans?
According to the National Library of Medicine, LLMs have the capacity to enhance transfer learning efficiency, integrate multimodal data, facilitate clinical interactivity, and optimise cost-efficiency in healthcare.
The paper also states that transformer architecture, which is key for LLMs, is gaining prominence in the medical domain.
Potential flow chart for clinical application of LLMs by NIH
Spotlight on ChatGPT
According to another paper published by the National Library of Medicine earlier this year, ChatGPT plays an essential role in enhancing clinical workflow efficiency and diagnosis accuracy. It caters to multiple areas of medical imaging, such as image captioning, report generation and classification, extracting findings from reports, answering visual questions, and making interpretable diagnoses.
The report also establishes that collaboration between researchers and clinicians is needed to fully leverage the use of LLMs in imaging.
LLMs in Radiology
In January this year, the Radiological Society of North America released a paper about Chatbots and Large Language Models in Radiology. The paper discusses LLMs, including multimodal models that consider both text and images. “Such models have the potential to transform radiology practice and research but must be optimised and validated before implementation in supervised settings,” it states.
The paper also mentions hallucinations, knowledge cutoff dates, poor complex reasoning, a tendency to perpetuate bias, and stochasticity as some of the major limitations in radiology currently.
Rising Pace of Development
Two UCLA researchers, Eran Halperin and Oren Avram, recently developed an AI-powered foundation model that can accurately analyse medical 3D imagery. The model, SLIViT (Slice Integration by Vision Transformer), can analyse MRIs and CT scans in much less time than human experts.
SLIViT leverages knowledge from abundantly annotated 2D medical images to perform effectively on 3D imaging tasks, even with limited 3D training data.https://t.co/9VjIyrAq9L
— Simona Cristea (@simocristea) November 2, 2024
Google’s CT Foundation entered a domain already traversed by Microsoft with its Project InnerEye, an open-source software for medical imaging AI used for deep learning research. This project was also covered in Microsoft’s blog on ‘biomedical imaging’, which addressed the challenges of speed, quantification, and cost of medical imaging using AI.
The blog also discusses various research focus areas, namely machine learning for image reconstruction, radiotherapy image segmentation, ophthalmology, digital pathology, pandemic preparedness, and Microsoft’s Connected Imaging Instrument project.
CT Foundation is Just for Research
Alongside the tool’s launch, Google also shared a demo Python notebook for training models on public data, including one for lung cancer detection.
Google also tested the model across six clinical tasks relevant to the head, chest, and abdominopelvic regions, including detecting intracranial haemorrhage, lung cancer, and multiple abdominal abnormalities.
The results indicated that the models achieved area under the curve (AUC) scores above 0.8 even with limited training data. AUC ranges from 0.0 to 1.0, where 1.0 indicates a perfect model and 0.5 represents random chance. Regardless, the tool is not yet ready for medical diagnosis.
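AUC has a simple interpretation: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. The toy scores below are made up purely to illustrate the metric's endpoints.

```python
# Toy illustration of the AUC metric: the probability that a randomly
# chosen positive case is scored higher than a randomly chosen
# negative case (ties count as half). The scores are made-up examples.

def auc(scores_pos, scores_neg):
    """Pairwise win rate of positive scores over negative scores."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# A model that separates the classes completely is perfect -> AUC = 1.0
print(auc([0.9, 0.8, 0.7], [0.3, 0.2, 0.1]))  # 1.0

# Scores that cannot distinguish the classes are chance-level -> AUC = 0.5
print(auc([0.5, 0.5], [0.5, 0.5]))  # 0.5
```

On this scale, the reported scores above 0.8 sit well clear of chance but short of the perfection that clinical deployment would demand.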
“We developed this model for research purposes only and, as such, it may not be used in patient care and is not intended to be used to diagnose, cure, mitigate, treat, or prevent a disease. For example, the model and any embeddings may not be used as a medical device,” Google said.
As AI in healthcare continues to develop, it promises more accurate diagnoses, fewer mistakes, and better outcomes, ultimately elevating medical imaging to unprecedented levels.
The post With Google’s Latest Breakthrough, AI Reaches the Core of 3D Medical Imaging appeared first on Analytics India Magazine.