Revolutionizing Medical Image Segmentation with ScribblePrompt
For those unfamiliar with the intricacies of medical imaging, images such as MRI or X-ray scans may appear as formless collections of black-and-white shapes, making it difficult to distinguish one anatomical structure from another. However, advancements in artificial intelligence (AI) are streamlining this process significantly.
AI systems can be trained to identify and delineate areas of interest within these images, allowing healthcare professionals to monitor diseases and abnormalities more efficiently. This alleviates the burden of manually tracing anatomical boundaries across multiple images, potentially saving valuable time in clinical settings.
A significant hurdle in the implementation of such AI systems is the overwhelming requirement for labeled image datasets. To train a model effectively, countless images need to be annotated. For instance, labeling different shapes of the cerebral cortex in various MRI scans is essential for a model to understand the anatomical variations.
Addressing this challenge, a collaborative team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts General Hospital (MGH), and Harvard Medical School has introduced the “ScribblePrompt” framework. This innovative tool facilitates rapid segmentation of medical images, even those that the system has never encountered before.
Rather than manually annotating each image, the researchers created a simulation to model how users would annotate over 50,000 different scans, encompassing MRIs, ultrasounds, and other modalities covering structures such as the eyes, brain, and tissues. By employing algorithms to simulate human interactions with these images, they generated synthetic data that made ScribblePrompt proficient enough to handle actual segmentation tasks across diverse medical images.
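The core idea of this approach is to derive plausible user inputs automatically from existing ground-truth masks. The sketch below is an illustrative simplification, not the authors' actual simulation algorithm: it samples a few random pixels inside a labeled region and connects them with straight segments to produce a crude stand-in for a human scribble.

```python
import numpy as np

def simulate_scribble(mask: np.ndarray, n_points: int = 4, rng=None) -> np.ndarray:
    """Simulate a user scribble inside a ground-truth segmentation mask.

    Hypothetical sketch: pick a few random foreground pixels and join
    them with line segments, keeping only pixels that stay on the
    labeled structure.
    """
    rng = np.random.default_rng() if rng is None else rng
    ys, xs = np.nonzero(mask)
    scribble = np.zeros_like(mask)
    if len(ys) == 0:
        return scribble
    idx = rng.choice(len(ys), size=min(n_points, len(ys)), replace=False)
    pts = list(zip(ys[idx], xs[idx]))
    for (y0, x0), (y1, x1) in zip(pts, pts[1:]):
        steps = max(abs(int(y1) - int(y0)), abs(int(x1) - int(x0))) + 1
        for t in np.linspace(0.0, 1.0, steps):
            y = int(round(y0 + t * (int(y1) - int(y0))))
            x = int(round(x0 + t * (int(x1) - int(x0))))
            if mask[y, x]:  # keep the scribble on the structure
                scribble[y, x] = 1
    return scribble
```

Pairing such synthetic scribbles with their source masks yields training examples at a scale no manual annotation effort could match.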
“AI has substantial potential for enhancing the analysis of medical images, aiding professionals in their tasks more effectively,” stated Hallee Wong, a PhD student at MIT and the lead author of a recent paper detailing ScribblePrompt. Wong emphasized the tool’s purpose: to augment, rather than replace, the work of healthcare professionals. ScribblePrompt stands out for its simplicity and efficiency, achieving a 28 percent reduction in annotation time compared to existing interactive segmentation methods, such as Meta’s Segment Anything Model (SAM).
The user interface of ScribblePrompt is designed for ease of use; individuals can either scribble over an area or click on it, prompting the tool to highlight the structure of interest. For instance, a user can pinpoint specific veins in a retinal scan or outline a kidney in an ultrasound using bounding boxes combined with additional scribbles for precision. The system also allows for corrections via “negative scribbles,” enabling users to refine segments as necessary.
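The interactions described above (clicks, bounding boxes, and positive or negative scribbles) must all be fed to the model in a common form. One conventional way to do this, shown below purely as a hypothetical encoding rather than ScribblePrompt's actual input format, is to rasterize each prompt type into an image-sized channel that a segmentation network can consume alongside the scan.

```python
import numpy as np

def encode_prompts(shape, pos_clicks=(), neg_clicks=(), box=None,
                   pos_scribble=None, neg_scribble=None) -> np.ndarray:
    """Encode mixed user prompts as image-sized channels.

    Hypothetical layout: channel 0 holds positive hints (clicks and
    scribbles), channel 1 holds negative corrections, channel 2 marks
    the bounding box region.
    """
    h, w = shape
    enc = np.zeros((3, h, w), dtype=np.float32)
    for y, x in pos_clicks:
        enc[0, y, x] = 1.0
    for y, x in neg_clicks:
        enc[1, y, x] = 1.0
    if pos_scribble is not None:
        enc[0] = np.maximum(enc[0], pos_scribble.astype(np.float32))
    if neg_scribble is not None:
        enc[1] = np.maximum(enc[1], neg_scribble.astype(np.float32))
    if box is not None:  # (y0, x0, y1, x1), inclusive corners
        y0, x0, y1, x1 = box
        enc[2, y0:y1 + 1, x0:x1 + 1] = 1.0
    return enc
```

Because every prompt type lands in the same tensor, a user can freely mix a bounding box with a few corrective scribbles, and the model sees one consistent input.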
A user study at MGH revealed ScribblePrompt’s popularity among neuroimaging researchers; 93.8 percent preferred it over the SAM baseline for enhancing segmentation accuracy through corrections. Similarly, 87.5 percent favored ScribblePrompt for click-based edits.
Training involved simulating user interactions across 54,000 images drawn from 65 datasets, covering a wide array of medical images, including CT scans and micrographs. The model learned to recognize 16 different types of medical images, making it versatile across multiple clinical applications.
“Traditional methods struggle with user interactions like scribbling because training models to respond accurately is complex,” Wong commented. “Our approach trains a foundational model using diverse data, allowing it to generalize effectively across various tasks.”
Following extensive training, ScribblePrompt was assessed on 12 datasets it had never encountered and demonstrated superior performance compared to four existing segmentation methods in both efficiency and accuracy.
“Segmentation is pivotal in biomedical image analysis, integral both to clinical practice and research,” said Adrian Dalca, a senior author and CSAIL research scientist. “ScribblePrompt is designed with the practical needs of clinicians and researchers in mind, significantly accelerating this critical step in medical imaging.”
Harvard Medical School’s Bruce Fischl, who did not participate in the research, noted that traditionally, segmentation algorithms relied heavily on manual annotation, particularly challenging with 3D medical images. ScribblePrompt streamlines this process considerably, allowing for quicker, more precise interactions with imaging data.
Wong and her team, which includes John Guttag and Marianne Rakic, received support from various institutions, including Quanta Computer Inc., and presented their findings at the 2024 European Conference on Computer Vision. Earlier this year, they were recognized with the Bench-to-Bedside Paper Award at the DCAMI workshop at the Computer Vision and Pattern Recognition Conference for ScribblePrompt’s potential clinical impact.
Source: news.mit.edu