The scarcity of annotated medical images has long hindered healthcare innovation. A solution is emerging as healthcare professionals share anonymised images and insights on public platforms, including X (formerly Twitter). This sharing has enabled the creation of OpenPath, a dataset of over 200,000 pathology images paired with natural language descriptions, making it the largest public dataset of its kind.
Researchers used OpenPath to train Pathology Language–Image Pre-training (PLIP), a multimodal AI model. PLIP has demonstrated impressive zero-shot and transfer-learning performance when classifying new pathology images across a range of tasks. It also lets users retrieve similar cases by image or natural-language query, encouraging knowledge sharing.
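To give a sense of how CLIP-style zero-shot classification works in practice, here is a minimal sketch. It assumes PLIP is published as a Hugging Face CLIP-compatible checkpoint; the model identifier "vinid/plip", the image file name, and the candidate prompts are illustrative assumptions, not details confirmed by this article.

```python
# Minimal sketch of CLIP-style zero-shot classification with PLIP.
# Checkpoint name, file path, and prompts are assumptions for illustration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("vinid/plip")       # assumed checkpoint id
processor = CLIPProcessor.from_pretrained("vinid/plip")

image = Image.open("tissue_patch.png")                 # hypothetical image
candidate_labels = [
    "an H&E image of benign tissue",
    "an H&E image of malignant tissue",
]

inputs = processor(text=candidate_labels, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns
# them into a probability distribution over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(candidate_labels, probs[0].tolist())))
```

Because the labels are plain text prompts, the same pipeline can score any new diagnostic category without retraining.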
The researchers collected over 240,000 public pathology images using popular pathology-related hashtags and expanded the collection with data from other online sources. After rigorous quality checks, they assembled the result into OpenPath: more than 200,000 pathology image-text pairs, which served as the training corpus for PLIP.
PLIP outperformed previous models on tasks such as zero-shot classification, linear probing, and text-to-image and image-to-image retrieval. Unlike most machine learning methods in digital pathology, PLIP can adapt to new datasets and produce zero-shot predictions from arbitrary text inputs, making it a flexible tool for recognising potential new disease subtypes.
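As a rough illustration of linear probing, the sketch below embeds labelled images with the frozen image encoder and fits a simple linear classifier on top. The checkpoint identifier, file paths, and labels are placeholder assumptions.

```python
# Sketch of linear probing: freeze the image encoder, embed a labelled
# dataset, and train a linear classifier on the embeddings.
# Checkpoint id and data paths are illustrative assumptions.
import torch
from PIL import Image
from sklearn.linear_model import LogisticRegression
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("vinid/plip").eval()  # assumed checkpoint id
processor = CLIPProcessor.from_pretrained("vinid/plip")

def embed(paths):
    """Return L2-normalised image embeddings from the frozen encoder."""
    images = [Image.open(p) for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return (feats / feats.norm(dim=-1, keepdim=True)).numpy()

# Placeholder training data: image paths with binary labels.
train_paths, train_labels = ["a.png", "b.png"], [0, 1]
clf = LogisticRegression(max_iter=1000)
clf.fit(embed(train_paths), train_labels)

print(clf.predict(embed(["c.png"])))  # predict on a held-out image
```

The same normalised embeddings can drive image-to-image retrieval: rank a candidate set by cosine similarity (a dot product, after normalisation) against the query embedding, and text-to-image retrieval works identically with text embeddings as the query.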
The study noted some limitations, including residual noise in the image-text pairs and the difficulty of accounting for varying magnification levels and staining styles. Even so, the researchers are optimistic that PLIP can adapt to images with diverse magnifications and staining protocols, and they expect OpenPath and PLIP to advance AI in pathology and to encourage a data-centric approach in the field.