An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization.
Publication Type | Academic Article |
Authors | Shen Y, Wu N, Phang J, Park J, Liu K, Tyagi S, Heacock L, Kim S, Moy L, Cho K, Geras K |
Journal | Med Image Anal |
Volume | 68 |
Pagination | 101908 |
Date Published | 2020 Dec 16 |
ISSN | 1361-8423 |
Keywords | Breast Neoplasms |
Abstract | Medical images differ from natural images in their significantly higher resolution and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we propose a novel neural network model that addresses these unique properties of medical images. The model first uses a low-capacity, memory-efficient network on the whole image to identify the most informative regions. It then applies a higher-capacity network to collect details from the chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, our model outperforms ResNet-34 and Faster R-CNN in classifying breasts with malignant findings, achieving an AUC of 0.93. On the CBIS-DDSM dataset, it achieves an AUC of 0.858, on par with state-of-the-art approaches. Compared to ResNet-34, our model is 4.1x faster at inference while using 78.4% less GPU memory. Furthermore, in a reader study, our model surpasses radiologist-level AUC by a margin of 0.11. |
DOI | 10.1016/j.media.2020.101908 |
PubMed ID | 33383334 |
PubMed Central ID | PMC7828643 |
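The two-stage idea summarized in the abstract — a cheap global pass produces a saliency map, and the highest-scoring regions are then cropped out for a higher-capacity local network — can be sketched in NumPy. This is an illustrative sketch of greedy top-k patch selection only, not the authors' implementation; the function name, patch sizes, and suppression strategy are assumptions for demonstration.

```python
import numpy as np

def top_k_patches(saliency, patch, k):
    """Greedily pick k non-overlapping patch corners with highest summed saliency.

    saliency : 2-D array, e.g. output of a low-capacity global network
    patch    : (height, width) of the crops to extract
    k        : number of regions to pass to the high-capacity local network
    """
    s = saliency.astype(float).copy()
    H, W = s.shape
    ph, pw = patch
    corners = []
    for _ in range(k):
        # Brute-force scan for clarity; an integral image would be faster.
        best, best_ij = -np.inf, None
        for i in range(H - ph + 1):
            for j in range(W - pw + 1):
                v = s[i:i + ph, j:j + pw].sum()
                if v > best:
                    best, best_ij = v, (i, j)
        i, j = best_ij
        corners.append((i, j))
        # Suppress the chosen window so the next pick cannot overlap it.
        s[i:i + ph, j:j + pw] = -np.inf
    return corners
```

In the pipeline the abstract describes, these corners would index crops fed to the local network, and a fusion module would then aggregate the global prediction with the patch-level predictions; only the selection step is shown here.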