Published Paper

Mass Detection in Mammogram Images Based on the Enhanced RetinaNet Network with the INbreast Dataset

 

Authors Wang M, Liu R, Luttrell IV J, Zhang C, Xie J

Received 30 August 2024

Accepted for publication 10 January 2025

Published 7 February 2025 Volume 2025:18 Pages 675–695

DOI https://doi.org/10.2147/JMDH.S493873


Editor who approved publication: Prof. Dr. Krzysztof Laudanski

Mingzhao Wang,1 Ran Liu,1 Joseph Luttrell IV,2 Chaoyang Zhang,2 Juanying Xie1 

1School of Computer Science, Shaanxi Normal University, Xi’an, People’s Republic of China; 2School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, USA

Correspondence: Juanying Xie, School of Computer Science, Shaanxi Normal University, No. 620, West Chang’an Avenue, Chang’an District, Xi’an, 710119, Shaanxi, People’s Republic of China, Tel +86 13088965815, Email xiejuany@snnu.edu.cn; Chaoyang Zhang, School of Computing Sciences and Computer Engineering, University of Southern Mississippi, 118 College Drive, Hattiesburg, MS, 39406-0001, USA, Email chaoyang.zhang@usm.edu

Purpose: Breast cancer is one of the most common and serious public health problems affecting women worldwide. To date, analyzing mammogram images remains the primary method doctors use to detect and diagnose breast cancer. However, this process depends heavily on the experience of radiologists and is very time-consuming.
Patients and Methods: We introduce deep learning technology into this process to facilitate computer-aided diagnosis (CAD) and to address the challenges of class imbalance, improve the detection of small masses and multiple targets, and reduce false positives and false negatives in mammogram analysis. To this end, we adopted and enhanced RetinaNet to detect masses in mammogram images. Specifically, we introduced a modification to the network structure in which the feature map M5 is processed by the ReLU function before the original convolution kernel. This adjustment is designed to prevent the loss of resolution of small mass features. Additionally, we introduced transfer learning into the training process by leveraging pre-trained weights from other RetinaNet applications and fine-tuned our improved model on the INbreast dataset.
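To make the placement of that ReLU concrete, the sketch below is a minimal, hypothetical PyTorch rendering of a feature pyramid head. The paper does not provide code, so the framework choice, the class name SimpleFPN, and the layer names are all assumptions; only the insertion of the ReLU on M5 before its 3×3 output convolution reflects the modification described above. The backbone, anchor generation, and the classification/regression subnets of RetinaNet are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFPN(nn.Module):
    """Minimal feature pyramid head (assumed names) illustrating the
    described modification: the top-level map M5 passes through ReLU
    *before* the original 3x3 output convolution."""

    def __init__(self, c3_channels, c4_channels, c5_channels, out_channels=256):
        super().__init__()
        # 1x1 lateral convolutions producing M3, M4, M5
        self.lat3 = nn.Conv2d(c3_channels, out_channels, kernel_size=1)
        self.lat4 = nn.Conv2d(c4_channels, out_channels, kernel_size=1)
        self.lat5 = nn.Conv2d(c5_channels, out_channels, kernel_size=1)
        # 3x3 output convolutions producing P3, P4, P5
        self.out3 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.out4 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.out5 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, c3, c4, c5):
        # Standard top-down pathway: lateral projections plus upsampled coarser maps
        m5 = self.lat5(c5)
        m4 = self.lat4(c4) + F.interpolate(m5, size=c4.shape[-2:], mode="nearest")
        m3 = self.lat3(c3) + F.interpolate(m4, size=c3.shape[-2:], mode="nearest")
        # Modification described in the abstract: apply ReLU to M5 first,
        # then feed it to the original 3x3 convolution.
        p5 = self.out5(F.relu(m5))
        p4 = self.out4(m4)
        p3 = self.out3(m3)
        return p3, p4, p5
```

Under this reading, only the M5 branch is changed; the P3 and P4 branches keep the standard FPN behavior, which matches the abstract's statement that the adjustment targets the resolution of small mass features at the coarsest level.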
Results: The aforementioned innovations give the enhanced RetinaNet model superior performance on the public INbreast dataset, as evidenced by a mAP (mean average precision) of 1.0000 and a TPR (true positive rate) of 1.00 at 0.00 FPPI (false positives per image).
Conclusion: The experimental results demonstrate that our enhanced RetinaNet model outperforms existing models, showing better generalization performance than other published studies, and that it can be applied to other types of patients to assist doctors in making a proper diagnosis.

Keywords: computer-aided diagnosis, deep learning, object detection, RetinaNet, transfer learning