Full Text

Simple Summary

Deep learning has achieved great success in medical image-based cancer diagnosis. To help readers better understand the current research status and ideas, this article provides a detailed overview of the working mechanisms and use cases of commonly used radiological imaging and histopathology, the basic architectures of deep learning, classical pretrained models, common methods for overcoming overfitting, and the applications of deep learning in medical image-based cancer diagnosis. Finally, data, labels, models, and radiomics are discussed in detail, and current challenges and future research hotspots are analyzed.

Abstract

(1) Background: The application of deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in the fields of artificial intelligence and computer vision. Because deep learning methods are developing rapidly, cancer diagnosis demands very high accuracy and timeliness, and medical imaging is inherently particular and complex, a comprehensive review of relevant studies is necessary to help readers better understand the current research status and ideas. (2) Methods: Five types of radiological images, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architectures of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural networks that have emerged in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and the vision transformer (ViT), are introduced. Four overfitting prevention methods are summarized: batch normalization, dropout, weight initialization, and data augmentation. The applications of deep learning technology in medical image-based cancer analysis are then surveyed. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard cancer databases are needed. Pretrained models based on deep neural networks still have room for improvement, and special attention should be paid to research on multimodal data fusion and supervision paradigms. Technologies such as ViT, ensemble learning, and few-shot learning are expected to bring further advances to medical image-based cancer diagnosis.
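
As a minimal illustration of the four overfitting countermeasures named in the abstract, the following sketch combines batch normalization, dropout, weight initialization, and a data augmentation pipeline in a small image classifier. It is not taken from the reviewed paper; PyTorch is assumed as the framework, and all layer sizes, hyperparameters, and names are illustrative choices.

    # Illustrative sketch only: a small CNN applying the four overfitting
    # countermeasures named in the abstract. All hyperparameters are assumptions.
    import torch
    import torch.nn as nn
    from torchvision import transforms

    # Data augmentation: random flips/rotations enlarge the effective training
    # set; this pipeline would be passed to a Dataset during training.
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(degrees=10),
        transforms.ToTensor(),
    ])

    class SmallCNN(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),
                nn.BatchNorm2d(16),   # batch normalization stabilizes activations
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.BatchNorm2d(32),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Dropout(p=0.5),    # dropout randomly zeroes units in training
                nn.Linear(32, num_classes),
            )
            self.apply(self._init_weights)  # explicit weight initialization

        @staticmethod
        def _init_weights(m: nn.Module) -> None:
            # He (Kaiming) initialization suits ReLU networks.
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
                if m.bias is not None:
                    nn.init.zeros_(m.bias)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    model = SmallCNN(num_classes=2)
    dummy = torch.randn(4, 1, 64, 64)   # e.g., four grayscale 64x64 image patches
    print(model(dummy).shape)           # torch.Size([4, 2])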

Details

Title
Deep Learning for Medical Image-Based Cancer Diagnosis
Author
Jiang, Xiaoyan 1; Hu, Zuojin 1; Wang, Shuihua 2; Zhang, Yudong 2

1 School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China; [email protected] (X.J.); [email protected] (Z.H.)
2 School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK; [email protected]
First page
3608
Publication year
2023
Publication date
2023
Publisher
MDPI AG
e-ISSN
2072-6694
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2843041919
Copyright
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.