REVIEW article

Mammography with deep learning for breast cancer detection.

Lulu Wang*

  • Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China

X-ray mammography is currently considered the gold-standard method for breast cancer screening; however, it has limitations in terms of sensitivity and specificity. With the rapid advancement of deep learning techniques, it is becoming possible to tailor mammography to each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper reviews recent achievements of deep learning-based mammography for breast cancer detection and classification, and highlights the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, the challenges associated with implementing this technology in clinical settings must be addressed. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability so that deep learning-assisted mammography can be successfully integrated into routine breast cancer screening programs. It is hoped that these findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate, sensitive, and specific breast cancer diagnosis.

1 Introduction

Breast cancer is one of the most prevalent cancers among females worldwide ( 1 ). Several factors, including gender, age, family history, obesity, and genetic mutations, contribute to the development of breast cancer ( 2 ). Early diagnosis with prompt treatment can significantly improve the 5-year survival rate of breast cancer ( 3 ). Medical imaging techniques like mammography and ultrasound are widely used for breast cancer detection ( 4 , 5 ). Mammography utilizes low-dose X-rays to generate breast images that aid radiologists in identifying abnormalities like lumps, calcifications, and distortions ( 6 ). Mammography is recommended for women over 40, particularly those with a family history of breast cancer, as it effectively detects early-stage breast cancer ( 7 ). However, mammography has limitations, such as reduced sensitivity in women with dense breast tissue. To overcome these limitations, various imaging methods, such as digital breast tomosynthesis (DBT), ultrasound, magnetic resonance imaging (MRI), and positron emission tomography (PET), have been investigated as alternative tools for breast cancer screening.

DBT uses X-rays to generate three-dimensional breast images, which is particularly useful for detecting breast cancer in dense breasts ( 8 ). Compared to mammography, DBT provides higher accuracy and sensitivity in detecting breast cancer lesions. However, the interpretation of DBT images still faces inter-observer variability, which can affect its accuracy. Ultrasound imaging uses high-frequency sound waves to produce detailed images of breast tissue. Unlike mammography, ultrasound does not involve radiation, making it a safe method for detecting breast abnormalities, especially in women with dense breast tissue. Ultrasound helps evaluate abnormalities detected on a mammogram and can be used to monitor disease progression and assess treatment effectiveness ( 9 ). MRI has been recommended for women with high risks of breast cancer ( 10 ). PET utilizes a radioactive tracer to create breast images and is often used in conjunction with other imaging techniques, such as CT or MRI, to identify areas of cancer cells ( 11 ). Each of these imaging methods has its own set of advantages and disadvantages ( 12 ).

Artificial intelligence (AI) technologies have been extensively investigated to develop cancer prediction models ( 13 , 14 ). AI-based models, such as machine learning (ML) algorithms, can analyze medical image datasets and patient characteristics to identify breast cancer or predict the risk of developing breast cancer. ML algorithms can extract quantitative features from medical images, such as mammograms or ultrasound images, through radiomics. AI-based prediction models can incorporate various cancer risk factors, including genetics, lifestyle, and environmental factors, to establish personalized imaging and treatment plans. In recent years, deep learning (DL) algorithms have emerged as promising AI tools to enhance the accuracy and efficiency of breast cancer detection ( 15 ). These data-driven techniques have the potential to revolutionize breast imaging by leveraging large amounts of data to automatically learn and identify complex patterns associated with malignancy.

This paper provides an overview of the recent developments in DL-based approaches and architectures used in mammography, along with their strengths and limitations. Additionally, the article highlights challenges and opportunities associated with integrating DL-based mammography to enhance breast cancer screening and diagnosis. The remaining sections of the paper are as follows: Section 2 describes the most popular medical imaging application for breast cancer detection. Section 3 discusses DL-based mammography techniques. Section 4 describes breast cancer prediction using DL techniques. Section 5 highlights the challenges and future research directions of DL approaches in mammography. Finally, Section 6 concludes the present study.

2 Medical imaging techniques for breast cancer detection

Medical imaging techniques have become essential in the diagnosis and management of breast cancer. This section provides an overview of several commonly used medical imaging techniques for breast cancer detection. Table 1 compares the most widely utilized medical imaging methods for breast cancer.


Table 1 Comparison of medical imaging methods for breast cancer.

2.1 Mammography

This section presents the working principle, recent advancements, advantages, and disadvantages of mammography. Mammography is a well-established imaging modality used for breast cancer screening. It is a non-invasive technique that utilizes low-dose X-rays to generate high-resolution images of breast tissue. Mammography operates based on the principle of differential X-ray attenuation. The breast tissue is compressed between two plates, and a low-dose X-ray beam is directed through the breast to create an image. Different types of breast tissues, such as fatty, glandular, and cancerous tissue, attenuate X-rays differently. The X-rays that pass through the breast tissue are detected by a digital detector, and an image of the breast is formed. The resulting image is a two-dimensional projection of the breast tissue. In recent years, mammography has undergone significant advancements. Digital mammography has replaced film-screen mammography, leading to improved image quality and reduced radiation dose. Digital breast tomosynthesis (DBT), a 3D mammography technique, has enhanced breast cancer detection rates and reduced false positives. Automated breast ultrasound (ABUS) is another imaging modality used in conjunction with mammography for breast screening, particularly in women with dense breast tissue.
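The differential attenuation principle above can be illustrated with the Beer–Lambert law, I = I₀·exp(−μ·t), where μ is the tissue's linear attenuation coefficient and t its thickness. The sketch below uses placeholder μ values chosen only to show that glandular and cancerous tissue transmit fewer X-ray photons than fat; they are not calibrated clinical data.

```python
import math

def transmitted_intensity(i0, mu, thickness_cm):
    """Beer-Lambert law: X-ray intensity after passing through a uniform material."""
    return i0 * math.exp(-mu * thickness_cm)

# Illustrative linear attenuation coefficients (cm^-1) at mammographic
# X-ray energies -- placeholder values for demonstration only.
MU = {"fat": 0.45, "glandular": 0.80, "tumor": 0.85}

i0 = 1000.0  # incident intensity (arbitrary units)
transmitted = {tissue: transmitted_intensity(i0, mu, 4.0)
               for tissue, mu in MU.items()}
# Fat transmits the most photons, so it appears darkest on the detector,
# while denser glandular and cancerous tissue appear brighter.
```

This difference in transmitted intensity is what produces the contrast between tissue types in the two-dimensional projection image.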

Numerous studies have investigated the effectiveness of mammography for breast cancer screening, demonstrating that it can reduce breast cancer mortality rates, especially for women aged 50–74 years. Additional screening with MRI or ultrasound may be recommended for women at higher risk of breast cancer, such as those with a family history or genetic predisposition. Several leading companies and research groups have achieved significant advancements in the past decade. For example, Hologic’s Genius 3D mammography technology provides higher-resolution 3D images, increasing detection rates while reducing false positives ( 20 ). However, it entails higher radiation exposure and higher costs compared to traditional mammography.

Other developments include GE Healthcare and Siemens Healthineers’ contrast-enhanced spectral mammography (CESM), which combines mammography with contrast-enhanced imaging to improve diagnostic accuracy ( 21 ). Artificial intelligence tools developed by companies like iCAD and ScreenPoint Medical have been utilized to enhance mammography interpretation, leading to earlier breast cancer detection ( 22 ). Gamma Medica and Dilon Technologies have introduced new breast imaging technologies, such as molecular breast imaging and breast-specific gamma imaging, which utilize different types of radiation to provide more detailed images of breast tissue ( 23 ).

The University of Chicago has made strides in contrast-enhanced mammography (CEM), which is more accurate in detecting invasive breast cancers than traditional mammography alone. CEM provides detailed functional images of breast tissue, though it is not widely available and may not be covered by insurance ( 24 ). The Karolinska Institute’s work on breast tomosynthesis has shown that it is more sensitive in detecting breast cancer than traditional mammography. Tomosynthesis provides a 3D image of the breast, facilitating the detection of small tumors and reducing the need for additional imaging tests. However, it exposes patients to slightly more radiation, takes longer to perform, and is more expensive ( 25 ).

Mammography has certain limitations, including limited sensitivity in women with dense breast tissue, false positives leading to unnecessary procedures, radiation exposure that accumulates over time, inability to distinguish between benign and malignant lesions, inaccuracy in detecting small cancers or cancers in certain breast regions, and limited utility in detecting specific types of breast cancer, such as inflammatory breast cancer. To address these limitations, various new imaging technologies, such as DBT, ultrasound elastography, and molecular breast imaging, have been proposed and investigated. These technologies aim to provide more accurate and reliable breast cancer detection, particularly in high-risk individuals. Future research directions for mammography include improving test accuracy, utilizing AI for image interpretation, and developing new techniques utilizing different radiation or contrast agents.

2.2 Digital breast tomosynthesis

DBT was first introduced in the early 2000s. Unlike traditional mammography, DBT generates three-dimensional images, leading to more accurate breast cancer detection by reducing tissue overlap. DBT is particularly effective in detecting small tumors and reducing false-positive results compared to mammography ( 26 ). However, DBT delivers a slightly higher radiation dose than standard two-dimensional mammography, is more expensive, and may not be covered by insurance for all patients. It also requires specialized equipment and training for interpretation, which may not be widely available in all areas.

2.3 Ultrasound

Ultrasound imaging is a non-invasive, relatively low-cost imaging technique that does not involve exposure to ionizing radiation. It can be used as an adjunct to mammography for breast cancer screening, especially in women with dense breast tissue. Nakano et al. ( 27 ) developed real-time virtual sonography (RVS) for breast lesion detection. RVS combines the advantages of ultrasound and MRI and can provide real-time, highly accurate images of breast lesions. However, RVS requires specialized equipment and software, and its diagnostic accuracy may depend on the operator. Standardization of RVS protocols and operator training may improve its accuracy and accessibility.

Zhang et al. ( 28 ) conducted a study on a computer-aided diagnosis (CAD) system called BIRADS-SDL for breast cancer detection using ultrasound images. BIRADS-SDL was compared with conventional stacked convolutional auto-encoder (SCAE) and semi-supervised deep learning (SDL) methods using original images as inputs, as well as an SCAE using BIRADS-oriented feature maps (BFMs) as inputs. The experimental results showed that BIRADS-SDL performed the best among the four networks, with classification accuracy of around 92.00 ± 2.38% and 83.90 ± 3.81% on two datasets. These findings suggest that BIRADS-SDL could be a promising method for effective breast ultrasound lesion CAD, particularly with small datasets. CAD systems can enhance the accuracy and efficiency of breast cancer detection while reducing inter-operator variability. However, CAD systems may produce false-positive or false-negative results, and their diagnostic accuracy may depend on the quality of the input images. Integrating CAD systems with other imaging modalities and developing algorithms to account for image quality variations may improve their accuracy and reliability ( 29 ).

GE Healthcare (USA) developed the Invenia Automated Breast Ultrasound (ABUS) 2.0, which improves breast cancer detection, especially in women with dense breasts, by providing high-resolution 3D ultrasound images ( 30 ). Siemens Healthineers (Germany) developed the ACUSON S2000 Automated Breast Volume Scanner (ABVS), which also provides high-resolution 3D ultrasound images for accurate breast cancer detection, particularly in women with dense breasts ( 31 ). These automated systems enhance breast cancer detection rates, improve workflow, and reduce operator variability.

Canon Medical Systems (Japan) developed the Aplio i-series ultrasound system with the iBreast package, which offers high-resolution breast imaging, leading to improved diagnostic performance for breast cancer detection. Invenia ABUS 2.0 and ACUSON S2000 ABVS are automated systems, while Aplio i-series with iBreast package requires manual scanning. The advantages of ABUS 2.0 and ACUSON S2000 ABVS include enhanced image quality, improved workflow, and reduced operator variability. However, they are more expensive than traditional mammography, and image interpretation may be time-consuming. Ultimately, the choice of system depends on the needs and preferences of healthcare providers and patients. Future research is likely to focus on improving the accuracy of ultrasound imaging techniques, developing new methods for detecting small calcifications, and reducing false-positive results.

2.4 Magnetic resonance imaging

MRI utilizes strong magnetic fields and radio waves to generate images of the body’s internal structures, making it one of the most important diagnostic tools. It has various applications, including the diagnosis and monitoring of neurological, musculoskeletal, cardiovascular, and oncological conditions. Its ability to image soft tissues makes it well-suited for breast imaging. Breast MRI is a non-invasive technique used for the detection and monitoring of breast cancer. It is often used in conjunction with mammography and ultrasound to provide a comprehensive evaluation of breast tissue.

Kuhl et al. ( 32 ) were the first to investigate an abbreviated breast MRI protocol based on post-contrast subtracted images and maximum-intensity projections for breast cancer screening. This approach offers advantages in terms of speed, cost-effectiveness, and patient accessibility. However, abbreviated MRI has limitations, including lower specificity and the potential for false positives. Mann et al. ( 33 ) studied ultrafast dynamic contrast-enhanced MRI for assessing lesion enhancement patterns; the use of new MRI sequences and image reconstruction techniques improved specificity in distinguishing between malignant and benign lesions. Zhang et al. ( 34 ) explored a deep learning-based segmentation technique for breast MRI, which demonstrated accurate and consistent segmentation of breast regions. However, this method has limitations, such as its reliance on training data and potential misclassification.

MRI has several advantages, including the absence of ionizing radiation and increased accuracy in detecting small tumors within dense breast tissue. However, it is expensive, time-consuming, and associated with a higher false-positive rate. Future research directions involve developing faster and more efficient MRI techniques and utilizing AI techniques to enhance image analysis and interpretation.

Dynamic contrast-enhanced MRI (DCE-MRI) has recently become a crucial method in clinical practice for the detection and evaluation of breast cancer. Figure 1 illustrates the workflow of unsupervised analysis based on DCE-MRI radiomics features in breast cancer patients ( 35 ). Ming et al. ( 35 ) utilized DCE-MRI to calculate voxel-based percentage enhancement (PE) and signal enhancement ratio (SER) maps of each breast. This study collected two independent radiogenomics cohorts (n = 246) to identify and validate imaging subtypes. The results demonstrated that these imaging subtypes, with distinct clinical and molecular characteristics, were reliable, reproducible, and valuable for non-invasive prediction of the outcome and biological functions of breast cancer.
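PE and SER maps of the kind used by Ming et al. are commonly derived from pre-contrast (S0), early post-contrast (S1), and late post-contrast (S2) signal intensities. The sketch below shows the voxel-wise computation under those standard definitions; the toy signal values and the small epsilon guard against division by zero are illustrative choices, not values from the study.

```python
def pe_ser_maps(s0, s1, s2, eps=1e-6):
    """Voxel-wise percentage enhancement (PE) and signal enhancement
    ratio (SER) from three DCE-MRI time points:
        PE  = 100 * (S1 - S0) / S0
        SER = (S1 - S0) / (S2 - S0)
    """
    pe, ser = [], []
    for v0, v1, v2 in zip(s0, s1, s2):
        pe.append(100.0 * (v1 - v0) / (v0 + eps))
        ser.append((v1 - v0) / ((v2 - v0) + eps))
    return pe, ser

# Toy signals for two voxels (arbitrary units): the first voxel washes
# out (SER > 1, suspicious kinetics), the second keeps enhancing.
s0 = [100.0, 120.0]
s1 = [180.0, 150.0]
s2 = [160.0, 170.0]
pe, ser = pe_ser_maps(s0, s1, s2)
```

In practice these maps are computed over full 3D volumes and fed into the radiomics feature extraction stage.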


Figure 1 Workflow of unsupervised analysis based on DCE-MRI features in breast cancer patients ( 35 ).

2.5 Positron emission tomography

PET is an advanced imaging technique that has made significant contributions to the diagnosis and treatment of breast cancer. It is a non-invasive procedure that provides healthcare professionals with valuable information about the spread of cancer to other parts of the body, making it an essential tool in the fight against breast cancer. With ongoing technological advancements, PET plays a crucial role in the detection and treatment of breast cancer.

PET utilizes radiotracers to generate three-dimensional images of the interior of the body. It operates by detecting pairs of gamma rays emitted by the radiotracer as it decays within the body. PET imaging was first introduced in the early 1950s, and the first PET scanner was developed in the 1970s. Since then, PET has become an indispensable tool for cancer detection. It is commonly used to diagnose and stage cancer and to assess the effectiveness of cancer treatments, and it is also utilized in cardiology, neurology, psychiatry, and breast imaging.

PET is more sensitive than mammography and ultrasound in detecting small breast tumors, and it can also distinguish between benign and malignant lesions with higher accuracy ( 36 ). The advantages of PET include its non-invasive nature and safety for repeated use. However, PET does have limitations, including limited availability, higher cost compared to mammography and ultrasound, a higher rate of false positives, and the requirement for radiotracer injection.

3 Deep learning-based mammography techniques

Several DL architectures, including convolutional neural networks (CNN), transfer learning (TL), ensemble learning (EL), and attention-based methods, have been developed for various applications in mammography. These applications include breast cancer detection, classification, segmentation, image restoration and enhancement, and computer-aided diagnosis (CAD) systems.

CNN is an artificial neural network that has shown impressive results in image recognition tasks. It recognizes image patterns using convolutional layers that apply filters to the input image. These filters extract features from the input image, which then pass through fully connected layers to classify the image. In mammography, several CNN-based methods, such as DenseNet, ResNet, and VGGNet, have been proposed for breast tumor detection. For example, Wang et al. ( 37 ) applied CNN with transfer learning in ultrasound for breast cancer classification, achieving an area under the curve (AUC) value of 0.9468 with five-fold cross-validation. The sensitivity and specificity were 0.886 and 0.876, respectively. Shen et al. ( 38 ) proposed a deep CNN in mammography to classify between benign and malignant tumors, achieving an accuracy of 0.88, higher than that of radiologists (0.83). Yala et al. ( 39 ) developed a CNN-based mammography system to classify mammograms as low or high risk for breast cancer, achieving an AUC of 0.84, higher than that of radiologists (0.77). These studies demonstrated that CNN had a lower false-positive rate than radiologists, showing promise in improving the accuracy of mammography screening. CNN offers advantages over traditional mammography screening, including higher accuracy, faster processing, and the ability to identify subtle changes in mammograms. However, CNN requires large amounts of data to train the network and may not be able to detect all types of breast cancer. Further research is needed to investigate the use of CNN in mammography.
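The feature-extraction step performed by a convolutional layer can be sketched as sliding one filter over an image patch. In a trained CNN the filter weights are learned from data; here a hand-crafted vertical-edge filter and a toy patch stand in, purely to show how a filter response map is produced.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most DL frameworks)
    of a single filter over a single-channel image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# Toy patch with a sharp vertical intensity step, as a mass boundary
# might appear in a (greatly simplified) mammogram region.
patch = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
edge_filter = [[-1, 1]]          # responds where intensity jumps left-to-right
fmap = conv2d(patch, edge_filter)  # strong response only at the boundary column
```

A real CNN stacks many such filters, interleaved with nonlinearities and pooling, before the fully connected classification layers.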

TL utilizes pre-trained DL models to train on small datasets. TL-based methods have shown promising results in improving the accuracy of mammography for breast tumor detection. EL combines multiple DL models to improve the accuracy of predictions. EL-based approaches, such as stacking, boosting, and bagging, have been proposed in mammography for breast tumor detection.
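One simple EL strategy, soft voting, can be sketched as averaging the per-lesion malignancy probabilities of several models. The three model outputs below are hypothetical scores invented for illustration.

```python
def soft_vote(probabilities):
    """Average class probabilities from several models (soft voting),
    one basic form of ensemble learning."""
    n_models = len(probabilities)
    n_cases = len(probabilities[0])
    return [sum(model[i] for model in probabilities) / n_models
            for i in range(n_cases)]

# Hypothetical per-lesion malignancy scores from three trained models.
model_a = [0.90, 0.20, 0.55]
model_b = [0.80, 0.30, 0.45]
model_c = [0.85, 0.10, 0.65]
ensemble = soft_vote([model_a, model_b, model_c])
labels = [int(p >= 0.5) for p in ensemble]  # threshold the averaged score
```

Averaging tends to cancel out uncorrelated errors of the individual models, which is why ensembles often reduce false positives relative to any single network.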

Attention-based methods use attention mechanisms to focus on critical features of the image. Several attention-based methods, such as SE-Net and Channel Attention Networks (CAN), have been proposed for breast tumor detection in mammography. DL is a type of ML that uses neural networks to learn and make predictions. DL methods have gained popularity in recent years due to their ability to work with large datasets and extract meaningful patterns and insights.
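A heavily simplified sketch of channel attention in the SE-Net spirit follows: each channel is globally average-pooled ("squeeze"), a gate in (0, 1) is computed from the pooled value ("excitation"), and the channel is rescaled by its gate. Real SE blocks use a two-layer bottleneck MLP for the excitation step; here a single learned scalar per channel stands in, purely for illustration.

```python
import math

def channel_attention(feature_maps, weights):
    """Toy SE-style channel attention: pool each channel to a scalar,
    gate it through a sigmoid, and rescale the channel by the gate.
    `weights` holds one illustrative learned scalar per channel."""
    out = []
    for ch, w in zip(feature_maps, weights):
        pooled = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        gate = 1.0 / (1.0 + math.exp(-w * pooled))  # sigmoid gate in (0, 1)
        out.append([[v * gate for v in row] for row in ch])
    return out

# Two identical 2x2 channels; the learned weights keep the first channel
# (large positive weight) and suppress the second (large negative weight).
fmaps = [[[2.0, 2.0], [2.0, 2.0]],
         [[2.0, 2.0], [2.0, 2.0]]]
gated = channel_attention(fmaps, weights=[5.0, -5.0])
```

This channel re-weighting is what lets attention-based networks emphasize feature maps that respond to suspicious structures and down-weight the rest.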

DL methods have revolutionized the field of machine learning and are being used in an increasing number of applications, ranging from self-driving cars to medical imaging. As datasets and computing power continue to grow, these methods are expected to become even more powerful and prevalent in the future.

4 Breast cancer prediction using deep learning

This section presents recent developments in DL methods for breast cancer prediction. A DL-based breast cancer prediction pipeline typically involves the following steps:

● Data Collection: Breast datasets are obtained from various sources such as medical institutions, public repositories, and research studies. These datasets consist of mammogram images, gene expression profiles, and clinical data.

● Data Preprocessing: The collected datasets are preprocessed to eliminate noise, normalize, and standardize the data. This step involves data cleaning, feature extraction, and data augmentation.

● Model Building: DL models, such as CNNs, RNNs, DBNs, and autoencoders, are developed using the preprocessed breast cancer datasets. These models are trained and optimized using training and validation datasets.

● Model Evaluation: The trained DL models are assessed using a separate test dataset to determine their performance. Performance metrics, including sensitivity, specificity, accuracy, precision, F1 score, and AUC, are used for evaluation.

● Model Interpretation: The interpretability of the DL models is evaluated using techniques such as Grad-CAM, saliency maps, and feature visualization. These techniques help identify which features of the input data are utilized by the DL models for making predictions.

● Deployment: The DL model is deployed in a clinical setting to predict breast cancer in patients. The performance of the model is regularly monitored and updated to enhance accuracy and efficiency.

By utilizing DL techniques, breast cancer prediction can be significantly improved, leading to better detection and treatment outcomes.

4.1 Data preprocessing techniques and evaluation

4.1.1 Preprocessing techniques

When applying DL algorithms to analyze breast images, noise can have a negative impact on the accuracy of the image classifier. To address this issue, several image denoising techniques have been developed. These techniques, including the Median filter, Wiener filter, Non-local means filter, Total variation (TV) denoising, Wavelet-based denoising, Gaussian filter, anisotropic diffusion, BM3D denoising, CNN, and autoencoder, aim to reduce image noise while preserving important features and structures that are relevant for breast cancer diagnosis.
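One of the simplest denoisers listed above, the median filter, can be sketched in a few lines: each interior pixel is replaced by the median of its neighborhood, which suppresses impulse ("salt-and-pepper") noise while preserving edges better than linear smoothing. Production pipelines would use an image-processing library; this pure-Python version is illustrative.

```python
def median_filter(image, k=3):
    """Apply a k x k median filter to the interior pixels of a 2-D image
    (borders are left unchanged in this minimal sketch)."""
    r = k // 2
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(r, h - r):
        for j in range(r, w - r):
            neigh = sorted(image[i + di][j + dj]
                           for di in range(-r, r + 1)
                           for dj in range(-r, r + 1))
            out[i][j] = neigh[len(neigh) // 2]  # median of the neighborhood
    return out

# A flat patch with a single "salt" noise pixel in the center.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
clean = median_filter(noisy)  # the outlier is replaced by the local median
```

Because the median ignores extreme outliers, the spurious bright pixel is removed without blurring the surrounding intensities, which matters when the features of interest are small calcifications.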

After denoising, a normalization method, such as min-max normalization, is typically employed to rescale the images and reduce the complexity of the image datasets before feeding them into the DL model. This normalization process ensures that the model can effectively learn meaningful patterns from the images and improve its ability to accurately classify them.
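The min-max normalization step above can be sketched as a linear rescaling of pixel intensities into a fixed range (here [0, 1]); the example intensities are arbitrary 8-bit values.

```python
def min_max_normalize(pixels, new_min=0.0, new_max=1.0):
    """Linearly rescale intensities into [new_min, new_max]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        # Constant image: no contrast to rescale, map everything to new_min.
        return [new_min for _ in pixels]
    scale = (new_max - new_min) / (hi - lo)
    return [new_min + (p - lo) * scale for p in pixels]

raw = [0, 64, 128, 255]        # example 8-bit pixel intensities
norm = min_max_normalize(raw)  # values now lie in [0.0, 1.0]
```

Normalizing every image to the same range keeps gradient magnitudes comparable across training examples, which typically speeds up and stabilizes DL training.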

4.1.2 Performance metrics

Several performance metrics are utilized to evaluate DL algorithms for breast screening. The selection of a specific metric depends on the task at hand and the objectives of the model. Some of the most commonly employed metrics include:

● Accuracy: measures the proportion of correct predictions made by the model.

● Precision: measures the proportion of true positive predictions out of all positive predictions made by the model.

● Sensitivity: measures the proportion of true positive predictions out of all actual positive cases in the dataset.

● F1 score: a composite metric that balances precision and sensitivity.

● Area under the curve (AUC): measures the model’s ability to distinguish between positive and negative cases across a range of threshold values.

● Mean Squared Error (MSE): measures the average squared difference between predicted and actual values in a regression task.

● Mean Absolute Error (MAE): measures the average absolute difference between the predicted and actual values in a regression task.

The commonly used equation for calculating accuracy, as stated in reference ( 40 ), is:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP and TN are the numbers of true positives and true negatives, and FP and FN are the numbers of false positives and false negatives, respectively.

AUC is typically computed by plotting the true positive rate against the false positive rate at different threshold values and then calculating the area under this curve. MSE and MAE are computed as:

MSE = (1/n) Σ (y_true − y_pred)²

MAE = (1/n) Σ |y_true − y_pred|

where y_true is the true value, y_pred is the predicted value, and n is the number of samples.

Equations 1 – 6 provide a general idea of how performance metrics are computed, but the actual implementation may vary depending on the specific task and the software.
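The metrics above can be computed directly from confusion-matrix counts and model scores; the sketch below shows accuracy, precision, sensitivity, F1, and a threshold-sweeping trapezoidal AUC. The counts and toy scores are illustrative.

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, sensitivity, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, f1

def auc_trapezoidal(labels, scores):
    """AUC: sweep the decision threshold over descending scores, trace the
    ROC curve, and integrate it with the trapezoidal rule."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]  # (false positive rate, true positive rate)
    for _, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Illustrative counts: 40 TP, 45 TN, 5 FP, 10 FN.
acc, prec, sens, f1 = classification_metrics(40, 45, 5, 10)
# Perfectly separated toy scores give an AUC of 1.0.
auc = auc_trapezoidal([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2])
```

Sensitivity is usually the metric emphasized in screening, since a missed cancer (false negative) is costlier than a recalled benign finding.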

4.2 Datasets

Breast datasets play a crucial role in evaluating DL approaches. These datasets offer a comprehensive collection of high-quality and labelled breast images that can be utilized for training and testing DL algorithms. Table 2 presents commonly utilized publicly available breast datasets in mammography for breast screening.


Table 2 Breast image dataset.

4.3 Breast lesion segmentation

The Nottingham Histological Grading (NHG) system is currently the most commonly utilized tool for assessing the aggressiveness of breast cancer ( 50 ). According to this system, breast cancer scores are determined based on three significant factors: tubule formation ( 51 ), nuclear pleomorphism ( 52 ), and mitotic count ( 53 ). Tubule formation is an essential assessment factor in the NHG grading system for understanding the level of cancer. Before identifying tubule formation, detection or segmentation tasks need to be performed. Pathologists typically conduct these tasks visually by examining whole slide images (WSIs). Medical image segmentation assists pathologists in focusing on specific regions of interest in WSIs and extracting detailed information for diagnosis. Conventional and AI methods have been applied in medical image segmentation, utilizing handcrafted features such as color, shapes, and texture ( 54 – 56 ). Traditional manual tubule detection and segmentation techniques have been employed in medical images. However, these methods are challenging, prone to errors, exhaustive, and time-consuming ( 57 , 58 ).

Table 3 provides a comparison of recently developed DL methods in mammography for breast lesion segmentation. These methods include the Conditional Random Field model (CRF) ( 59 ), Adversarial Deep Structured Net ( 60 ), Deep Learning using You-Only-Look-Once ( 61 ), Conditional Residual U-Net (CRU-Net) ( 62 ), Mixed-Supervision-Guided (MS-ResCU-Net) and Residual-Aided Classification U-Net Model (ResCU-Net) ( 63 ), Dense U-Net with Attention Gates (AGs) ( 64 ), Residual Attention U-Net Model (RU-Net) ( 65 ), Modified U-Net ( 66 ), Mask RCNN ( 67 ), Full-Resolution Convolutional Network (FrCN) ( 68 ), U-Net ( 69 ), Conditional Generative Adversarial Networks (cGAN) ( 70 , 71 ), DeepLab ( 72 ), Attention-Guided Dense-Upsampling Network (AUNet) ( 73 ), FPN ( 74 ), modified CNN based on U-Net Model ( 76 ), deeply supervised U-Net ( 77 ), modified U-Net ( 78 ), and Tubule-U-Net ( 79 ). Among these DL methods, U-Net is the most commonly employed segmentation method.


Table 3 Deep learning approaches in mammography for breast lesion segmentation.

Naik et al. ( 80 ) developed a likelihood method for the segmentation of lumen, cytoplasm, and nuclei based on a constraint: a lumen area must be surrounded by cytoplasm and a ring of nuclei to form a tubule. Tutac et al. ( 81 ) introduced a knowledge-guided semantic indexing technique and symbolic rules for the segmentation of tubules based on lumen and nuclei. Basavanhally et al. ( 82 ) developed the O’Callaghan neighborhood method for tubule detection, allowing for the characterization of tubules with multiple attributes. The process was tested on 1226 potential lumen areas from 14 patients and achieved an accuracy of 89% for tubule detection. In reference ( 83 ), the authors applied a k-means clustering algorithm to cluster pixels of nuclei and lumens. They employed a level-set method to segment the boundaries of the nuclei surrounding the lumen, achieving an accuracy of 90% for tubule detection. Romo-Bucheli et al. ( 84 ) developed a CNN-based detection and classification method to improve the accuracy of nuclei detection in tubules, achieving an accuracy of 90% for tubule nuclei detection. Hu et al. ( 85 ) proposed a breast mass segmentation technique using a fully convolutional neural network (FCNN), which showed promising results with high accuracy and speed. Abdelhafiz et al. ( 86 ) studied the application of deep CNN for mass segmentation in mammograms and found improved performance in terms of accuracy. Tan et al. ( 87 ) recently developed a tubule segmentation method that investigates geometrical patterns and regularity measurements in tubule and non-tubule regions. However, such approaches, based on handcrafted features and conventional segmentation techniques, are less effective and efficient for tubule structures because of their complex, irregular shapes and orientations and their weak boundaries.
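Alongside the accuracy figures reported above, segmentation studies commonly evaluate overlap with the Dice similarity coefficient, Dice = 2|A∩B| / (|A|+|B|). A minimal sketch on toy binary masks (the masks are illustrative):

```python
def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient between two binary masks, the standard
    overlap metric for lesion/tubule segmentation (1.0 = perfect overlap)."""
    pred = [v for row in pred_mask for v in row]   # flatten 2-D masks
    true = [v for row in true_mask for v in row]
    intersection = sum(p * t for p, t in zip(pred, true))
    total = sum(pred) + sum(true)
    return 2.0 * intersection / total if total else 1.0

# Toy 2x3 masks: the prediction covers the lesion plus one extra pixel.
pred = [[1, 1, 0],
        [0, 1, 0]]
true = [[1, 1, 0],
        [0, 0, 0]]
score = dice_coefficient(pred, true)
```

Unlike pixel accuracy, Dice is insensitive to the large true-negative background, which is why it is preferred when the lesion occupies a small fraction of the image.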

4.4 Deep learning approaches in mammography for breast lesion detection and classification

DL approaches have garnered considerable attention in mammography for the detection and classification of breast lesions, primarily due to their ability to automatically extract high-level features from medical images. Numerous popular DL algorithms have been employed in mammography for breast screening, including convolutional neural networks (CNN), deep belief networks (DBN), recurrent neural networks (RNN), autoencoders, generative adversarial networks (GAN), capsule networks (CN), convolutional recurrent neural networks (CRNN), attention mechanisms, multiscale CNN, and ensemble learning (EL).

CNNs are highly effective at extracting image features and classifying them into distinct categories. DBNs are particularly advantageous for identifying subtle changes in images that may be challenging for human observers to discern. RNNs use feedback loops to facilitate predictions, aiding the analysis of sequential data. Autoencoders support unsupervised feature learning, which aids the detection and classification of mammography images. GANs are highly effective for generating synthetic mammography images to train DL models. CNs preserve spatial relationships between image features, which benefits the detection and classification of mammography images. CRNNs combine CNNs and RNNs, making them particularly useful for analyzing sequential data. Attention mechanisms focus on specific areas of mammography images, which is beneficial for images containing intricate structures and patterns. Multiscale CNNs analyze images at multiple scales, which is invaluable when complex structures and patterns appear at varying scales. EL combines multiple DL models to enhance accuracy and reduce false positives.
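As a concrete illustration of EL, a common scheme is soft voting, which averages the class probabilities produced by several models before taking the highest-scoring class. The sketch below uses invented probabilities for a single mammogram and is not taken from any cited system:

```python
def soft_vote(prob_lists):
    """Average class-probability vectors from several models (soft voting)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    return [sum(p[c] for p in prob_lists) / n_models
            for c in range(n_classes)]

# Three hypothetical models scoring one mammogram as (benign, malignant)
model_probs = [[0.70, 0.30], [0.40, 0.60], [0.55, 0.45]]
avg = soft_vote(model_probs)
predicted = max(range(len(avg)), key=avg.__getitem__)
print([round(v, 2) for v in avg], predicted)  # [0.55, 0.45] 0
```

Averaging smooths out the disagreement of the middle model; in practice, ensembles of this kind are one way DL systems reduce false positives at screening.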

Table 4 analyzes recently developed DL methods for breast lesion detection using mammography. These methods have the potential to greatly enhance the accuracy and efficiency of breast cancer diagnosis. However, it is important to note that most DL methods for biomedical imaging applications come with certain limitations, including the need for large training datasets, applicability restricted to specific imaging modalities, and high computational cost.


Table 4 DL-based mammography for breast tumor detection.

Table 5 presents a comprehensive list of the latest DL-based mammogram models developed for breast lesion classification. DL models offer numerous benefits, including high accuracy and strong performance with relatively few parameters. However, existing DL methods for breast tumor classification on mammograms also have limitations: training them demands substantial computational power and extensive datasets, making the process expensive, intricate, and time-consuming.


Table 5 DL-based mammography for breast tumor classification.

5 Challenges and future research directions

The emergence of DL techniques has revolutionized medical imaging, offering immense potential to enhance the diagnosis and treatment of various diseases. DL algorithms present several advantages over traditional ML methods. For instance, they can be trained on powerful hardware such as graphics processing units (GPUs) and tensor processing units (TPUs), greatly accelerating the training process; this has enabled researchers to train large DL models with billions of parameters, yielding impressive results across diverse tasks. However, to fully leverage the potential of DL in medical imaging, several challenges must be addressed. One of the primary challenges is data scarcity. DL algorithms require abundant, high-quality data for effective training, yet acquiring medical imaging data is often difficult, particularly for rare diseases or cases requiring long-term follow-up, and data privacy regulations and concerns can further restrict its availability. Another challenge lies in annotation quality. DL algorithms typically demand substantial amounts of annotated data for effective training, but annotating medical imaging data can be subjective and time-consuming, leading to issues with annotation quality and consistency. This can significantly degrade the performance of DL algorithms, particularly where accurate annotations are vital for diagnosing or treating specific conditions. Additionally, imbalanced classes pose another challenge in medical imaging.

In many cases, the prevalence of certain conditions is relatively low, resulting in imbalanced datasets that degrade the performance of DL algorithms. This is a significant challenge, especially for rare diseases or conditions with limited data availability. Another crucial concern in medical imaging is model interpretability. Although DL algorithms have achieved remarkable performance across various medical imaging tasks, their lack of interpretability can hinder adoption: clinicians often require explanations for a model's predictions in order to make informed decisions, and the opacity of DL models makes this difficult.
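One common mitigation for imbalanced classes is to weight the training loss toward the rare class. The sketch below shows a class-weighted binary cross-entropy in pure Python; the labels, predicted probabilities, and weight are illustrative assumptions rather than values from any cited study:

```python
import math

def weighted_bce(y_true, p_pred, pos_weight):
    """Binary cross-entropy with a weight on the positive (rare) class."""
    losses = []
    for y, p in zip(y_true, p_pred):
        w = pos_weight if y == 1 else 1.0
        losses.append(-w * (y * math.log(p) + (1 - y) * math.log(1 - p)))
    return sum(losses) / len(losses)

# 1 malignant case among 4: weight positives by the inverse class ratio (3/1)
y = [0, 0, 0, 1]
p = [0.1, 0.2, 0.1, 0.6]
print(round(weighted_bce(y, p, pos_weight=3.0), 3))  # 0.492
```

Upweighting the positive term penalizes missed malignancies more heavily than false alarms, which shifts the trained model's operating point toward higher sensitivity on the rare class.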

Data privacy is a paramount concern in medical imaging. Medical images contain confidential patient information, and stringent regulations govern how such data may be used and shared. Effective DL training requires access to extensive medical imaging data, introducing challenges for data privacy and security. Additionally, computational resources pose another challenge: DL algorithms demand substantial resources for the effective training and deployment of models, which can be particularly problematic given the size and complexity of medical images. Finally, DL algorithms can be vulnerable to adversarial attacks, in which small perturbations to input data cause significant changes in the model's output. This is especially problematic in medical imaging, where even small changes to an image can have substantial implications for diagnosis and treatment.
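The adversarial-attack risk can be made concrete with the fast gradient sign method (FGSM) applied to a toy logistic model; all weights, inputs, and the perturbation budget below are invented for illustration:

```python
import math

def fgsm_perturb(x, w, b, y, eps):
    """FGSM on a logistic model p = sigmoid(w.x + b): move each input
    feature by eps in the sign of the loss gradient w.r.t. the input."""
    z_val = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z_val))
    # d(cross-entropy)/dx_i = (p - y) * w_i for logistic regression
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x, y = [0.5, 0.2], 1          # correctly classified positive example
x_adv = fgsm_perturb(x, w, b, y, eps=0.3)

z = lambda v: sum(wi * vi for wi, vi in zip(w, v)) + b
print(z(x), z(x_adv))  # the small perturbation flips the score's sign
```

Here a perturbation of only 0.3 per feature pushes the decision score from positive to negative, i.e., flips the predicted class; for deep networks on images, far smaller and visually imperceptible perturbations can have the same effect.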

Several strategies can be employed to address these challenges effectively. One is to develop transfer learning techniques, enabling DL models to be trained on smaller datasets by leveraging information from related tasks or domains; this holds particular promise in medical imaging, where data scarcity is a significant obstacle. Another is to develop annotation tools and frameworks that improve the quality and consistency of annotations, which is especially important where annotations play a critical role in diagnosing or treating specific conditions. Furthermore, improved data sharing and collaboration between institutions can help alleviate both data scarcity and privacy concerns: by pooling resources and sharing data, it becomes feasible to construct larger, more diverse datasets for training DL models more effectively. Enhancing the interpretability of DL models in medical imaging is another critical area of research; explainable AI techniques can give clinicians valuable insight into the factors underlying a model's predictions. Lastly, bolstering the robustness of DL models is a crucial focus, entailing adversarial training techniques as well as ensemble methods and other strategies to enhance the overall robustness and generalizability of DL models.
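Transfer learning with a frozen feature extractor can be sketched in a few lines: the fixed `features` function below stands in for a pretrained network, and only the small linear head is trained on a tiny synthetic task. All data, names, and hyperparameters are illustrative assumptions:

```python
import math

def features(x):
    """Frozen 'pretrained' extractor (stands in for a pretrained CNN)."""
    return [x, x * x]

# Tiny synthetic target task: label = 1 when x > 0
data = [(-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.5

for _ in range(200):                  # only the head (w, b) is updated
    for x, y in data:
        f = features(x)
        p = 1.0 / (1.0 + math.exp(-(w[0] * f[0] + w[1] * f[1] + b)))
        g = p - y                     # logistic-loss gradient
        w = [wi - lr * g * fi for wi, fi in zip(w, f)]
        b -= lr * g

correct = sum(
    (1.0 / (1.0 + math.exp(-(w[0] * x + w[1] * x * x + b))) > 0.5) == (y == 1)
    for x, y in data)
print(correct, "of", len(data))  # 4 of 4
```

Because the extractor's parameters never change, the number of trainable parameters shrinks to the head alone, which is why this recipe works with far smaller labeled datasets than training end to end.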

DL techniques have the potential to revolutionize medical imaging. However, to fully leverage this potential, it is crucial to address several challenges. These challenges encompass data scarcity, annotation quality, imbalanced classes, model interpretability, data privacy, computational resources, and algorithm robustness. By prioritizing strategies to tackle these challenges, it becomes possible to develop DL models that are more effective and reliable for various medical imaging applications.

6 Conclusion

This paper has examined recent advancements in DL-based mammography for breast cancer screening, investigating the potential of DL techniques to enhance the accuracy and efficiency of mammography and addressing the challenges that must be overcome for their successful adoption in clinical practice.

Author contributions

LW: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This research was funded by the International Science and Technology Cooperation Project of the Shenzhen Science and Technology Innovation Committee (GJHZ20200731095804014).

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1. Chon JW, Jo YY, Lee KG, Lee HS, Kweon HY. Effect of silk fibroin hydrolysate on the apoptosis of mcf-7 human breast cancer cells. Int J Ind Entomol (2013) 27(2):228–36. doi: 10.7852/ijie.2013.27.2.228


2. Habtegiorgis SD, Getahun DS, Telayneh AT, Birhanu MY, Feleke TM, Mingude AB, et al. Ethiopian women's breast cancer self-examination practices and associated factors a systematic review and meta-analysis. Cancer Epidemiol (2022) 78:102128. doi: 10.1016/j.canep.2022.102128


3. Ginsburg O, Yip CH, Brooks A, Cabanes A, Caleffi M, Dunstan Yataco JA, et al. Breast cancer early detection: A phased approach to implementation. Cancer (2020) 126:2379–93. doi: 10.1002/cncr.32887

4. Jalalian A, Mashohor SB, Mahmud HR, Saripan MIB, Ramli ARB, Karasfi B. Computer-aided detection/diagnosis of breast cancer in mammography and ultrasound: a review. Clin Imaging (2013) 37(3):420–6. doi: 10.1016/j.clinimag.2012.09.024

5. Gilbert FJ, Pinker-Domenig K. (2019). Diagnosis and staging of breast cancer: when and how to use mammography, tomosynthesis, ultrasound, contrast-enhanced mammography, and magnetic resonance imaging. In: Diseases of the chest, breast, heart and vessels 2019-2022. Springer, Cham (2019). pp. 155–66. doi: 10.1007/978-3-030-11149-6_13

6. Alghaib HA, Scott M, Adhami RR. An overview of mammogram analysis. IEEE Potentials (2016) 35(6):21–8. doi: 10.1109/MPOT.2015.2396533

7. Monticciolo DL, Newell MS, Moy L, Niell B, Monsees B, Sickles EA. Breast cancer screening in women at higher-than-average risk: recommendations from the ACR. J Am Coll Radiol (2018) 15(3):408–14. doi: 10.1016/j.jacr.2017.11.034

8. Alabousi M, Zha N, Salameh JP, Samoilov L, Sharifabadi AD, Pozdnyakov A, et al. Digital breast tomosynthesis for breast cancer detection: a diagnostic test accuracy systematic review and meta-analysis. Eur Radiol (2020) 30:2058–71. doi: 10.1007/s00330-019-06549-2

9. Brem RF, Lenihan MJ, Lieberman J, Torrente J. Screening breast ultrasound: past, present, and future. Am J Roentgenol (2015) 204(2):234–40. doi: 10.2214/AJR.13.12072

10. Heller SL, Moy L. MRI breast screening revisited. J Magnetic Resonance Imaging (2019) 49(5):1212–21. doi: 10.1002/jmri.26547

11. Schöder H, Gönen M. Screening for cancer with PET and PET/CT: potential and limitations. J Nucl Med (2007) 48(1):4S–18S. doi: 10.1016/S0148-2963(03)00075-4

12. Narayanan D, Berg WA. Dedicated breast gamma camera imaging and breast PET: current status and future directions. PET Clinics (2018) 13(3):363–81. doi: 10.1016/j.cpet.2018.02.008

13. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med (2019) 25:30–6. doi: 10.1038/s41591-018-0307-0

14. Kim KH, Lee SH. Applications of artificial intelligence in mammography from a development and validation perspective. J Korean Soc Radiol (2021) 82(1):12. doi: 10.3348/jksr.2020.0205

15. Hamed G, Marey MAER, Amin SES, Tolba MF. (2020). Deep learning in breast cancer detection and classification. In: Proceedings of the international conference on artificial intelligence and computer vision, advances in intelligent systems and computing. Springer, Cham (2020) 1153:322–33. doi: 10.1007/978-3-030-44289-7_30

16. Wang J, Gottschal P, Ding L, Veldhuizen DAV, Lu W, Houssami N, et al. Mammographic sensitivity as a function of tumor size: a novel estimation based on population-based screening data. Breast (2021) 55:69–74. doi: 10.1016/j.breast.2020.12.003

17. Stein RG, Wollschläger D, Kreienberg R, Janni W, Wischnewsky M, Diessner J, et al. The impact of breast cancer biological subtyping on tumor size assessment by ultrasound and mammography-a retrospective multicenter cohort study of 6543 primary breast cancer patients. BMC Cancer (2016) 16(1):1–8. doi: 10.1186/s12885-016-2426-7

18. Chen HL, Zhou JQ, Chen Q, Deng YC. Comparison of the sensitivity of mammography, ultrasound, magnetic resonance imaging and combinations of these imaging modalities for the detection of small (≤2 cm) breast cancer. Medicine (2021) 100(26):e26531. doi: 10.1097/MD.0000000000026531

19. Gunther JE, Lim EA, Kim HK, Flexman M, Altoé M, Campbell JA, et al. Dynamic diffuse optical tomography for monitoring neoadjuvant chemotherapy in patients with breast cancer. Radiology (2018) 287(3):778–86. doi: 10.1148/radiol.2018161041

20. Movik E, Dalsbø TK, Fagerlund BC, Friberg EG, Håheim LL, Skår Å. (2017). Digital breast tomosynthesis with hologic 3d mammography selenia dimensions system for use in breast cancer screening: a single technology assessment . Oslo, Norway: Knowledge Centre for the Health Services at The Norwegian Institute of Public Health.


21. Marion W. Siemens Healthineers, GE HealthCare Race To Develop Next-Gen AI Solutions For Personalized Care (2023). Available at: https://medtech.pharmaintelligence.informa.com/MT147893/Siemens-Healthineers-GE-HealthCare-Race-To-Develop-NextGen-AI-Solutions-For-Personalized-Care .

22. NEWS BREAST IMAGING, ScreenPoint Medical: Transpara Breast AI Demonstrates Value in Real-world Clinical Usage (2023). Available at: https://www.itnonline.com/content/screenpoint-medical-transpara-breast-ai-demonstrates-value-real-world-clinical-usage .

23. Gong Z, Williams MB. Comparison of breast specific gamma imaging and molecular breast tomosynthesis in breast cancer detection: evaluation in phantoms. Med Phys (2015) 42(7):4250. doi: 10.1118/1.4922398

24. Łuczyńska E, Heinze-Paluchowska S, Hendrick E, Dyczek S, Ryś J, Herman K, et al. Comparison between breast MRI and contrast-enhanced spectral mammography. Med Sci Monit (2015) 21:1358–67. doi: 10.12659/MSM.893018

25. Cohen EO, Perry RE, Tso HH, Phalak KA, Leung JWT. Breast cancer screening in women with and without implants: retrospective study comparing digital mammography to digital mammography combined with digital breast tomosynthesis. Eur Radiol (2021) 31(12):9499–510. doi: 10.1007/s00330-021-08040-3

26. Gilbert FJ, Tucker L, Young KC. Digital breast tomosynthesis (DBT): a review of the evidence for use as a screening tool. Clin Radiol (2016) 71(2):141–50. doi: 10.1016/j.crad.2015.11.008

27. Nakano S, Fujii K, Yorozuya K, Yoshida M, Fukutomi T, Arai O, et al. P2-10-10: a precision comparison of breast ultrasound images between different time phases by imaging fusion technique using magnetic position tracking system. Cancer Res (2011) 71(24):P2–10-10-P2-10-10. doi: 10.1158/0008-5472.SABCS11-P2-10-10

28. Zhang E, Seiler S, Chen M, Lu W, Gu X. Birads features-oriented semi-supervised deep learning for breast ultrasound computer-aided diagnosis. Phys Med Biol (2020) 65(12):125005. doi: 10.1088/1361-6560/ab7e7d

29. Santos MK, Ferreira Júnior JR, Wada DT, Tenório APM, Barbosa MHN, Marques PMA. Artificial intelligence, machine learning, computer-aided diagnosis, and radiomics: advances in imaging towards to precision medicine. Radiol Bras (2019) 52(6):387–96. doi: 10.1590/0100-3984.2019.0049

30. GE Healthcare. Invenia ABUS 2.0 . Available at: https://www.gehealthcare.com/en-ph/products/ultrasound/abus-breast-imaging/invenia-abus .

31. Siemens Healthineers. ACUSON S2000 ABVS System HELX Evolution with Touch Control . Available at: https://shop.medicalimaging.healthcare.siemens.com.sg/acuson-s2000-abvs-system-helx-evolution-with-touch-control/ .

32. Kuhl CK, Schrading S, Strobel K, Schild HH, Hilgers RD, Bieling HB. Abbreviated breast magnetic resonance imaging (MRI): first postcontrast subtracted images and maximum-intensity projection-a novel approach to breast cancer screening with MRI. J Clin Oncol (2014) 32(22):2304–10. doi: 10.1200/JCO.2013.52.5386

33. Mann RM, Mus RD, van Zelst J, Geppert C, Karssemeijer NA. Novel approach to contrast-enhanced breast magnetic resonance imaging for screening: high-resolution ultrafast dynamic imaging. Invest Radiol (2014) 49(9):579–85. doi: 10.1097/RLI.0000000000000057

34. Zhang Y, Chen JH, Chang KT, Park VY, Kim MJ, Chan S, et al. Automatic breast and fibroglandular tissue segmentation in breast MRI using deep learning by a fully-convolutional residual neural network U-net. Acad Radiol (2019) 26(11):1526–35. doi: 10.1016/j.acra.2019.01.012

35. Ming W, Li F, Zhu Y, Bai Y, Gu W, Liu Y, et al. Unsupervised analysis based on DCE-MRI radiomics features revealed three novel breast cancer subtypes with distinct clinical outcomes and biological characteristics. Cancers (2022) 14(22):5507. doi: 10.3390/cancers14225507

36. Marcus C, Jeyarajasingham K, Hirsch A, Subramaniam RM. PET/CT in the management of thyroid cancers. Am Roentgen Ray Soc Annu Meeting (2014) 202:1316–29. doi: 10.2214/AJR.13.11673

37. Wang Y, Choi EJ, Choi Y, Zhang H, Jin GY, Ko SB. Breast cancer classification in automated breast ultrasound using multiview convolutional neural network with transfer learning - ScienceDirect. Ultrasound Med Biol (2020) 46(5):1119–32. doi: 10.1016/j.ultrasmedbio.2020.01.001

38. Shen L, Margolies LR, Rothstein JH, Fluder E, McBride R, Sieh W. Deep learning to improve breast cancer detection on screening mammography. Sci Rep (2019) 9:12495. doi: 10.1038/s41598-019-48995-4

39. Yala A, Lehman C, Schuster T, Portnoi T, Barzilay R. A deep learning mammography-based model for improved breast cancer risk prediction. Radiology (2019) 292(1):182716. doi: 10.1148/radiol.2019182716

40. Wang L. Deep learning techniques to diagnose lung cancer. Cancers (2022) 14:5569. doi: 10.3390/cancers14225569

41. Heath M, Bowyer K, Kopans D, Kegelmeyer P Jr, Moore R, Chang K. Current Status of the digital database for screening mammography. In: Digital mammography. computational imaging and vision. (Springer, Dordrecht) (1998) 11:457–60. doi: 10.1007/978-94-011-5318-8_75

42. Li B, Ge Y, Zhao Y, Guan E, Yan W. Benign and malignant mammographic image classification based on convolutional neural networks. In: 10th international conference on machine learning and computing. (Beijing, China: Association for Computing Machinery) (2018). pp. 11–15. doi: 10.1145/3195106.3195163

43. Inês CM, Igor A, Inês D, António C, Maria JC, Jaime SC. Inbreast: toward a full-field digital mammographic database. Acad Radiol (2012) 19:236–48. doi: 10.1016/j.acra.2011.09.014

44. Araújo T, Aresta G, Castro E, Rouco J, Aguiar P, Eloy C, et al. Classification of breast cancer histology images using convolutional neural networks. PloS One (2017) 12(6):e0177544. doi: 10.1371/journal.pone.0177544

45. Lee R, Gimenez F, Hoogi A, Miyake K, Gorovoy M, Rubin D. A curated mammography data set for use in computer-aided detection and diagnosis research. Sci Data (2017) 4:170177. doi: 10.1038/sdata.2017.177

46. Ramos-Pollán R, Guevara-López M, Suárez-Ortega C, Díaz-Herrero G, Franco-Valiente J, Rubio-del-Solar M, et al. Discovering mammography-based machine learning classifiers for breast cancer diagnosis. J Med Syst (2011) 1:11. doi: 10.1007/s10916-011-9693-

47. Heath M, Bowyer K, Kopans D, Moore R, Kegelmeyer WP. (2001). Current status of the digital database for screening mammography. In: Digital mammography. computational imaging and vision. Springer Netherlands (1998) 13:457–60. doi: 10.1007/978-94-011-5318-8_75

48. Yoon WB, Oh JE, Chae EY, Kim HH, Lee SY, Kim KG. Automatic detection of pectoral muscle region for computer-aided diagnosis using MIAS mammograms. BioMed Res Int (2016) 2016:5967580. doi: 10.1155/2016/5967580

49. Lopez MG, Posada N, Moura DC, Pollán RR, Valiente JMF, Ortega CS, et al. (2012). BCDR: a breast cancer digital repository. In: International conference on experimental mechanics. Porto, Portugal (2012). pp. 113–20.

50. Dalle J-R, Leow WK, Racoceanu D, Tutac AE, Putti TC. (2008). Automatic breast cancer grading of histopathological images. In: Annu Int Conf IEEE Eng Med Biol Soc. Vancouver, BC, Canada: IEEE (2008). pp. 3052–5. doi: 10.1109/IEMBS.2008.4649847

51. Lee S, Fu C, Salama P, Dunn K, Delp E. Tubule segmentation of fluorescence microscopy images based on convolutional neural networks with inhomogeneity correction. Int Symp Electr Imaging (2018) 30:199–1–199–8. doi: 10.2352/ISSN.2470-1173.2018.15.COIMG-199

52. Kumar N, Verma R, Sharma S, Bhargava S, Vahadane A, Sethi AA. Dataset and a technique for generalized nuclear segmentation for computational pathology. IEEE Trans Med Imaging (2017) 36(7):1550–60. doi: 10.1109/TMI.2017.2677499

53. Saha M, Chakraborty C, Racoceanu D. Efficient deep learning model for mitosis detection using breast histopathology images. Comput Med Imaging Graph (2018) 64:29–40. doi: 10.1016/j.compmedimag.2017.12.001

54. Ronneberger O, Fischer P. Brox, T. U-net: Convolutional networks for biomedical image segmentation. In: Medical image computing and computer-assisted intervention. Springer Switzerland (2015). p. 234–41. doi: 10.1007/978-3-319-24574-4_28

55. Borgli H, Thambawita V, Pia Helén S, Hicks SA, Lange TD. A comprehensive multi-class image and video dataset for gastrointestinal endoscopy. Sci Data (2020) 7:1–14. doi: 10.1038/s41597-020-00622-y

56. Mamonov AV, Figueiredo IN, Figueiredo PN, Tsai YH. Automated polyp detection in colon capsule endoscopy. IEEE Trans Med Imaging (2013) 33:1488–502. doi: 10.1109/TMI.2014.2314959

57. Bellens S, Probst GM, Janssens M, Vandewalle P, Dewulf W. Evaluating conventional and deep learning segmentation for fast X-ray CT porosity measurements of polymer laser sintered am parts. Polym Test (2022) 110:107540. doi: 10.1016/j.polymertesting.2022.107540

58. Yuan X, Yuxin W, Jie Y, Qian C, Xueding W, Carson PL. Medical breast ultrasound image segmentation by machine learning. Ultrasonics (2018) 91:1–9. doi: 10.1016/j.ultras.2018.07.006

59. Dhungel N, Carneiro G, Bradley AP. Tree RE-weighted belief propagation using deep learning potentials for mass segmentation from mammograms. In: 2015 IEEE 12th international symposium on biomedical imaging Brooklyn, NY, USA: IEEE (2015). pp. 760–3. doi: 10.1109/ISBI.2015.7163983

60. Zhu W, Xiang X, Tran TD, Hager GD, Xie X. Adversarial deep structured nets for mass segmentation from mammograms. In: International symposium on biomedical imaging. Washington, DC, USA: IEEE (2018). 847–50. doi: 10.1109/ISBI.2018.8363704

61. Al-antari MA, Al-masni MA, Choi MT, Han SM, Kim TS. A fully integrated computer-aided diagnosis system for digital x-ray mammograms via deep learning detection, segmentation, and classification. Int J Med Inform (2018) 117:44–54. doi: 10.1016/j.ijmedinf.2018.06.003

62. Li H, Chen D, Nailon WH, Davies ME, Laurenson D. Image analysis for moving organ, breast, and thoracic images. In: Improved breast mass segmentation in mammograms with conditional residual u-net . Cham: Springer (2018). p. 81–9.

63. Shen T, Gou C, Wang J, Wang FY. Simultaneous segmentation and classification of mass region from mammograms using a mixed-supervision guided deep model. IEEE Signal Process Lett (2019) 27:196–200. doi: 10.1109/LSP.2019.2963151

64. Li GDS, Dong M, Xiaomin M. Attention dense-u-net for automatic breast mass segmentation in digital mammogram. IEEE Access (2019) 7:59037–47. doi: 10.1109/ACCESS.2019.2914873

65. Abdelhafiz D, Nabavi S, Ammar R, Yang C, Bi J. Residual deep learning system for mass segmentation and classification in mammography. In: Proceedings of the 10th ACM international conference on bioinformatics. Association for Computing Machinery, New York, NY, USA: Computational Biology and Health Informatics (2019). pp. 475–84. doi: 10.1145/3307339.3342157

66. Hossain MS. Microcalcification segmentation using modified u-net segmentation network from mammogram images. J King Saud University Comput Inf Sci (2022) 34(2):86–94. doi: 10.1016/j.jksuci.2019.10.014

67. Min H, Wilson D, Huang Y, Liu S, Crozier S, Bradley AP, et al. (2020). Fully automatic computer-aided mass detection and segmentation via pseudo-color mammograms and mask R-CNN. In: 17th international symposium on biomedical imaging (ISBI), Iowa City, IA, USA: IEEE (2020). pp. 1111–5. doi: 10.1109/ISBI45749.2020.9098732

68. Al-antari MA, Al-masni MA, Kim TS. Advances in experimental medicine and biology. In: Deep learning computer-aided diagnosis for breast lesion in digital mammogram . Cham: Springer (2020). p. 59–72.

69. Abdelhafiz D, Bi J, Ammar R, Yang C, Nabavi S. Convolutional neural network for automated mass segmentation in mammography. BMC Bioinf (2020) 21(S1):1–19. doi: 10.1186/s12859-020-3521-y

70. Saffari N, Rashwan HA, Abdel-Nasser M, Singh VK, Puig D. Fully automated breast density segmentation and classification using deep learning. Diagnostics (2020) 10(11):988. doi: 10.3390/diagnostics10110988

71. Kumar Singh V, Rashwan HA, Romani S, Akram F, Pandey N, Kamal Sarker MM, et al. Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural network. Expert Syst Appl (2020) 139:112855. doi: 10.1016/j.eswa.2019.112855

72. Ahmed L, Iqbal MM, Aldabbas H, Khalid S, Saleem Y, Saeed S. Images data practices for semantic segmentation of breast cancer using deep neural network. J Ambient Intell Humanized Comput (2020) 14:15227–243. doi: 10.1007/s12652-020-01680-1

73. Sun H, Cheng L, Liu B, Zheng H, Feng DD, Wang S. AUNet: attention-guided dense-upsampling networks for breast mass segmentation in whole mammograms. Phys Med Biol (2020) 65(5):55005. doi: 10.1088/1361-6560/ab5745

74. Bhatti HMA, Li J, Siddeeq S, Rehman A, Manzoor A. (2020). Multi-detection and segmentation of breast lesions based on mask RCNN-FPN. In: International conference on bioinformatics and biomedicine (BIBM). Seoul, Korea (South): IEEE (2020). pp. 2698–704. doi: 10.1109/BIBM49941.2020.9313170

75. Zeiser FA, da Costa CA, Zonta T, Marques NMC, Roehe AV, Moreno M, et al. Segmentation of masses on mammograms using data augmentation and deep learning. J Digital Imaging (2020) 33(4):858–68. doi: 10.1007/s10278-020-00330-4

76. Tsochatzidis L, Koutla P, Costaridou L, Pratikakis I. Integrating segmentation information into cnn for breast cancer diagnosis of mammographic masses. Comput Methods Programs Biomed (2021) 200:105913. doi: 10.1016/j.cmpb.2020.105913

77. Ravitha Rajalakshmi N, Vidhyapriya R, Elango N, Ramesh N. Deeply supervised u-net for mass segmentation in digital mammograms. Int J Imaging Syst Technol (2021) 31(1):59–71. doi: 10.1002/ima.22516

78. Salama WM, Aly MH. Deep learning in mammography images segmentation and classification: automated cnn approach. Alexandria Eng J (2021) 60(5):4701–9. doi: 10.1016/j.aej.2021.03.048

79. Tekin E, Yazıcı Ç, Kusetogullari H, Tokat F, Yavariabdi A, Iheme LO, et al. Tubule-U-Net: a novel dataset and deep learning-based tubule segmentation framework in whole slide images of breast cancer. Sci Rep (2023) 13:128. doi: 10.1038/s41598-022-27331-3

80. Naik S, Doyle S, Agner S, Madabhushi A, Tomaszewski J. (2008). Automated gland and nuclei segmentation for grading of prostate and breast cancer histopathology. In: International Symposium on Biomedical Imaging: From Nano to Macro. Paris, France: IEEE (2008) pp. 284–7. doi: 10.1109/ISBI.2008.4540988

81. Tutac AE, Racoceanu D, Putti T, Xiong W, Cretu V. (2008). Knowledge-guided semantic indexing of breast cancer histopathology images. In: International conference on biomedical engineering and informatics. China: IEEE (2008). pp. 107–12. doi: 10.1109/BMEI.2008.166

82. Basavanhally A, Summers RM, Van Ginneken B, Yu E, Xu J, Ganesan S, et al. Incorporating domain knowledge for tubule detection in breast histopathology using o'callaghan neighborhoods. Proc SPIE Int Soc Optical Eng (2011) 7963:796310–796310-15. doi: 10.1117/12.878092

83. Maqlin P, Thamburaj R, Mammen JJ, Nagar AK. Automatic detection of tubules in breast histopathological images. In: Proceedings of Seventh International Conference on Bio-Inspired Computing: Theories and Applications. Springer, India: Adv Intelligent Syst Comput. (2013) 202:311–21. doi: 10.1007/978-81-322-1041-2_27

84. Romo-Bucheli D, Janowczyk A, Gilmore H, Romero E, Madabhushi A. Automated tubule nuclei quantification and correlation with oncotype dx risk categories in er+ breast cancer whole slide images. Sci Rep (2016) 6:32706. doi: 10.1038/srep32706

85. Hu Y, Guo Y, Wang Y, Yu J, Li J, Zhou S, et al. Automatic tumor segmentation in breast ultrasound images using a dilated fully convolutional network combined with an active contour model. Med Phys (2019) 46(1):215–28. doi: 10.1002/mp.13268

86. Abdelhafiz D, Yang C, Ammar R, Nabavi S. Deep convolutional neural networks for mammography: advances, challenges and applications. BMC Bioinf (2019) 20:1–20. doi: 10.1186/s12859-019-2823-4

87. Tan XJ, Mustafa N, Mashor MY, Rahman KSA. A novel quantitative measurement method for irregular tubules in breast carcinoma. Eng Sci Technol an Int J (2022) 31:101051. doi: 10.1016/j.jestch.2021.08.008

88. Jiao Z, Gao X, Wang Y, Li J. A deep feature based framework for breast masses classification. Neurocomputing (2016) 197:221–31. doi: 10.1016/j.neucom.2016.02.060

89. Huynh BQ, Li H, Giger ML. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks. J Med Imaging (2016) 3(3):034501. doi: 10.1117/1.JMI.3.3.034501

90. Arevalo J, Gonzalez FA, Ramos-Pollan R, Oliveira JL, Guevara Lopez MA. Representation learning for mammography mass lesion classification with convolutional neural networks. Comput Methods Programs Biomed (2016) 127:248–57. doi: 10.1016/j.cmpb.2015.12.014

91. Leod PM, Verma B. (2015). Polynomial prediction of neurons in neural network classifier for breast cancer diagnosis. In: Proceedings of the international conference on natural computation (ICNC). Zhangjiajie, China: IEEE (2015). pp. 775–80. doi: 10.1109/ICNC.2015.7378089

92. Nascimento CDL, Silva SDS, da Silva TA, Pereira WCA, Costa MGF, Costa Filho CFF. Breast tumor classification in ultrasound images using support vector machines and neural networks. Rev Bras Engenharia Biomedica (2016) 32(3):283–92. doi: 10.1590/2446-4740.04915

93. Zhang Q, Xiao Y, Dai W, Suo JF, Wang CZ, Shi J, et al. Deep learning based classification of breast tumors with shear-wave elastography. Ultrasonics (2016) 72:150–7. doi: 10.1016/j.ultras.2016.08.004

94. Sun W, Tseng TL, Zhang J, Qian W. Enhancing deep convolutional neural network scheme for breast cancer diagnosis with unlabeled data. Comput Med Imaging Graphics (2017) 57:4–9. doi: 10.1016/j.compmedimag.2016.07.004

95. Dhungel N, Carneiro G, Bradley AP. A deep learning approach for the analysis of masses in mammograms with minimal user intervention. Med Image Anal (2017) 37:114–28. doi: 10.1016/j.media.2017.01.009

96. Samala RK, Chan HP, Hadjiiski LM, Helvie MA, Cha K, Richter C. Multi-task transfer learning deep convolutional neural network: application to computer aided diagnosis of breast cancer on mammograms. Phys Med Biol (2017) 62(23):8894–908. doi: 10.1088/1361-6560/aa93d4

97. Jadoon MM, Zhang Q, Ul Haq I, Butt S, Jadoon A. Three-class mammogram classification based on descriptive CNN features. BioMed Res Int (2017) 2017:3640901. doi: 10.1155/2017/3640901

98. Antropova N, Huynh BQ, Giger ML. A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets. Med Phys (2017) 44(10):5162–71. doi: 10.1002/mp.12453

99. Qiu Y, Yan S, Gundreddy RR, Wang Y, Zheng B. A new approach to develop computer-aided diagnosis scheme of breast mass classification using deep learning technology. J X-Ray Sci Technol (2017) 25(5):751–63. doi: 10.3233/XST-16226

100. Gardezi SJS, Awais M, Faye I, Meriaudeau F. (2017). Mammogram classification using deep learning features. In: Proceedings of the 2017 IEEE international conference on signal and image processing applications (ICSIPA). Kuching, Malaysia: IEEE (2017). pp. 485–8. doi: 10.1109/ICSIPA.2017.8120660

101. Kumar I, Bhadauria HS, Virmani J, Thakur S. A classification framework for prediction of breast density using an ensemble of neural network classifiers. Biocybernetics Biomed Eng (2017) 37(1):217–28. doi: 10.1016/j.bbe.2017.01.001

102. Zheng Y, Jiang Z, Xie F, Zhang H, Ma Y, Shi H, et al. Feature extraction from histopathological images based on nucleus-guided convolutional neural network for breast lesion classification. Pattern Recognit (2017) 71:14–25. doi: 10.1016/j.patcog.2017.05.010

103. Han S, Kang HK, Jeong JY, Park MH, Kim W, Bang WC, et al. A deep learning framework for supporting the classification of breast lesions in ultrasound images. Phys Med Biol (2017) 62(19):7714–28. doi: 10.1088/1361-6560/aa82ec

104. Yu S, Liu LL, Wang ZY, Dai GZ, Xie YQ. Transferring deep neural networks for the differentiation of mammographic breast lesions. Sci China Technol Sci (2018) 62(3):441–7. doi: 10.1007/s11431-017-9317-3

105. Ribli D, Horv'ath A, Unger Z, Pollner P, Csabai I. Detecting and classifying lesions in mammograms with deep learning. Sci Rep (2018) 8(1):85–94. doi: 10.1038/s41598-018-22437-z

106. A l-masni MA, Al-antari MA, Park JM, Gi G, Kim TY, Rivera P, et al. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system. Comput Methods Programs Biomed (2018) 157:85–94. doi: 10.1016/j.cmpb.2018.01.017

107. Chougrad H, Zouaki H, Alheyane Q. Deep convolutional neural networks for breast cancer screening. Comput Methods Programs Biomed: An International Journal Devoted to the Development, Implementation and Exchange of Computing Methodology and Software Systems in Biomedical Research and Medical Practice (2018) 157:19–30. doi: 10.1016/j.cmpb.2018.01.011

108. Jiao Z, Gao X, Wang Y, Li J. A parasitic metric learning net for breast mass classification based on mammography. Pattern Recognit (20118) 75:292–301. doi: 10.1016/j.patcog.2017.07.008

109. Mohamed AA, Berg WA, Peng H, Luo Y, Jankowitz RC, Wu S. A deep learning method for classifying mammographic breast density categories. Med Phys (2018) 45(1):314–21. doi: 10.1002/mp.12683

110. Ribli D, Horváth A, Unger Z, Pollner P, Csabai I. Detecting and classifying lesions in mammograms with deep learning. Sci Rep (2018) 8(1):4165. doi: 10.1038/s41598-018-22437-z

111. Mendel K, Li H, Sheth D, Giger M. Transfer learning from convolutional neural networks for computer-aided diagnosis: a comparison of digital breast tomosynthesis and full-field digital mammography. Acad Radiol (2019) 26(16):735–43. doi: 10.1016/j.acra.2018.06.019

112. Wang H, Feng J, Zhang Z, Su H, Cui L, He H, et al. Breast mass classification via deeply integrating the contextual information from multi-view data. Pattern Recognit (2018) 80:42–52. doi: 10.1016/j.patcog.2018.02.026

113. Charan S, Khan MJ, Khurshid K. (2018). Breast cancer detection in mammograms using convolutional neural network. In: Proceedings of the 2018 international conference on computing, mathematics and engineering technologies (iCoMET). Sukkur, Pakistan: IEEE (2018). pp. 1–5. doi: 10.1109/ICOMET.2018.8346384

114. Feng Y, Zhang L, Yi Z. Breast cancer cell nuclei classification in histopathology images using deep neural networks. Int J Comput Assist Radiol Surg (2018) 13(2):179–91. doi: 10.1007/s11548-017-1663-9

115. Bardou D, Zhang K, Ahmad SM. Classification of breast cancer based on histology images using convolutional neural networks. IEEE Access (2018) 6: 24680–93. doi: 10.1109/access.2018.2831280

116. Nahid AA, Kong Y. Histopathological breast-image classification using local and frequency domains by convolutional neural network. Inf (Switzerland) (2018) 9(1):19. doi: 10.3390/info9010019

117. Touahri R, AzizI N, Hammami NE, Aldwairi M, Benaida F. (2019). Automated breast tumor diagnosis using local binary patterns (LBP) based on deep learning classification. In: Proceedings of the 2019 international conference on computer and information sciences (ICCIS). Sakaka, Saudi Arabia: IEEE (2019). pp. 1–5. doi: 10.1109/ICCISci.2019.8716428

118. Abdel Rahman AS, Belhaouari SB, Bouzerdoum A, Baali H, Alam T, Eldaraa AM. (2020). Breast mass tumor classification using deep learning. In: Proceedings of the 2020 IEEE international conference on informatics, iot, and enabling technologies (ICIoT). Doha, Qatar: IEEE (2020). pp. 271–6. doi: 10.1109/ICIoT48696.2020.9089535

119. Khamparia A, Khanna A, Thanh DNH, Gupta D, Podder P, Bharati S, et al. Diagnosis of breast cancer based on modern mammography using hybrid transfer learning. Multidim Syst Sign Process (2021) 32:747–65. doi: 10.1007/s11045-020-00756-7

120. Kavitha T, Mathai PP, Karthikeyan C, Ashok M, Kohar R, Avanija J, et al. Deep learning based capsule neural network model for breast cancer diagnosis using mammogram images. Interdiscip Sci Comput Life Sci (2022) 14:113–29. doi: 10.1007/s12539-021-00467-y

121. Frazer HM, Qin AK, Pan H, Brotchie P. Evaluation of deep learning-based artificial intelligence techniques for breast cancer detection on mammograms: results from a retrospective study using a breastscreen victoria dataset. J Med Imaging Radiat Oncol (2021) 65(5):529–37. doi: 10.1111/1754-9485.13278

122. Kim KH, Nam H, Lim E, Ock CY. Development of AI-powered imaging biomarker for breast cancer risk assessment on the basis of mammography alone. J Clin Oncol (2021) 39(15):10519–9. doi: 10.1200/JCO.2021.39.15_suppl.10519

123. Li H, Chen D, Nailon WH, Davies ME, Laurenson DI. Dual convolutional neural networks for breast mass segmentation and diagnosis in mammography. IEEE Trans Med Imaging (2022) 41(1):3–13. doi: 10.1109/TMI.2021.3102622

124. Shams S, Platania R, Zhang J, Kim J, Lee K, Park SJ. (2018). Deep generative breast cancer screening and diagnosis. In: Proceedings of the international conference on medical image computing and computer-assisted intervention, Granada, Spain. Granada, Spain: Springer (2018). 11071:859–67. doi: 10.1007/978-3-030-00934-2_95

125. Shen L, Margolies LR, Rothstein JH, Fluder E, McBride R, Sieh W. Deep learning to improve breast cancer detection on screening mammography. Sci Rep (2019) 9:1–12. doi: 10.1038/s41598-019-48995-4

126. Tsochatzidis L, Costaridou L, Pratikakis I. Deep learning for breast cancer diagnosis from mammograms—A comparative study. J Imaging (2019) 5:37. doi: 10.3390/jimaging5030037

127. Agnes SA, Anitha J, Pandian SIA, Peter JD. Classification of mammogram images using multiscale all convolutional neural network (MA-CNN) J. Med Syst (2020) 44:1–9. doi: 10.1007/s10916-019-1494-z

128. Kaur P, Singh G, Kaur P. Intellectual detection and validation of automated mammogram breast cancer images by multi-class SVM using deep learning classification. Inform Med Unlocked (2019) 16:100151. doi: 10.1016/j.imu.2019.01.001

129. Ting FF, Tan YJ, Sim KS. Convolutional neural network improvement for breast cancer classification. Expert Syst Appl (2019) 120:103–15. doi: 10.1016/j.eswa.2018.11.008

130. Falconi LG, Perez M, Aguilar WG, Conci A. Transfer learning and fine tuning in breast mammogram abnormalities classification on CBIS-DDSM database. Adv Sci Technol Eng Syst (2020) 5:154–65. doi: 10.25046/aj050220

131. Ansar W, Shahid AR, Raza B, Dar AH. Breast cancer detection and localization using mobilenet based transfer learning for mammograms. In: International symposium on intelligent computing systems. Sharjah, United Arab Emirates: Springer, Cham (2020) 1187:11–21. doi: 10.1007/978-3-030-43364-2_2

132. Zhang H, Wu R, Yuan T, Jiang Z, Huang S, Wu J, et al. DE-Ada*: A novel model for breast mass classification using cross-modal pathological semantic mining and organic integration of multi-feature fusions. Inf Sci (2020) 539:461–86. doi: 10.1016/j.ins.2020.05.080

133. Shayma'a AH, Sayed MS, Abdalla MI, Rashwan MA. Breast cancer masses classification using deep convolutional neural networks and transfer learning. Multimed Tools Appl (2020) 79:30735–68. doi: 10.1007/s11042-020-09518-w

134. Al-Antari MA, Han SM, Kim TS. Evaluation of deep learning detection and classification towards a computer-aided diagnosis of breast lesions in digital X-ray mammograms. Comput Methods Programs Biomed (2020) 196:105584. doi: 10.1016/j.cmpb.2020.105584

135. El Houby EM, Yassin NI. Malignant and nonmalignant classification of breast lesions in mammograms using convolutional neural networks. Biomed Signal Process Control (2021) 70:102954. doi: 10.1016/j.bspc.2021.102954

136. Zahoor S, Shoaib U, Lali IU. Breast cancer mammograms classification using deep neural network and entropy-controlled whale optimization algorithm. Diagnostics (2022) 12(2):557. doi: 10.3390/diagnostics12020557

137. Chakravarthy SS, Rajaguru H. Automatic detection and classification of mammograms using improved extreme learning machine with deep learning. IRBM (2021) 43:49–61. doi: 10.1016/j.irbm.2020.12.004

138. Mudeng V, Jeong JW, Choe SW. Simply fine-tuned deep learning-based classification for breast cancer with mammograms. Comput Mater Contin (2022) 73(3):4677–93. doi: 10.32604/cmc.2022.031046

Keywords: breast cancer, classification, X-ray mammography, artificial intelligence, machine learning, deep learning, medical imaging, radiology

Citation: Wang L (2024) Mammography with deep learning for breast cancer detection. Front. Oncol. 14:1281922. doi: 10.3389/fonc.2024.1281922

Received: 23 August 2023; Accepted: 19 January 2024; Published: 12 February 2024.


Copyright © 2024 Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Lulu Wang, [email protected] ; [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.


Open access | Published: 15 February 2023

Efficient breast cancer mammograms diagnosis using three deep neural networks and term variance

  • Ahmed S. Elkorany 1 &
  • Zeinab F. Elsharkawy 2  

Scientific Reports volume 13, Article number: 2663 (2023)


Subjects: Biomedical engineering, Computer science

Breast cancer (BC) incidence continues to rise, so early discovery can save a patient's life. Mammography is frequently used to diagnose BC, and the classification of mammography region-of-interest (ROI) patches (i.e., normal, malignant, or benign) is the most crucial phase in this process, since it helps medical professionals identify BC. In this paper, a hybrid technique that performs fast and precise classification suitable for a BC diagnosis system is proposed and tested. Three deep learning (DL) convolutional neural network (CNN) models, namely Inception-V3, ResNet50, and AlexNet, are used in the current study as feature extractors. To extract useful features from each CNN model, the suggested method applies the Term Variance (TV) feature selection algorithm. The TV-selected features from each CNN model are combined, and a further selection is performed to obtain the most useful features, which are then sent to the multiclass support vector machine (MSVM) classifier. The Mammographic Image Analysis Society (MIAS) image database was used to test the effectiveness of the suggested classification method. The ROI of each mammogram is retrieved and labeled as image patches. Based on tests of several TV feature subsets, the 600-feature subset with the highest classification performance was discovered. Higher classification accuracy (CA) is attained than in previously published work: the average CA is 97.81% when 70% of the data are used for training, 98% for 80% training, and reaches its optimal value for 90% training. Finally, an ablation analysis is performed to emphasize the role of the proposed network's key parameters.


Introduction

In every country in the world, rich or developing, women can develop BC at any age after puberty, and incidence rates rise with age 1 . BC remains the second most common cancer worldwide and is still fatal to women 2 . BC is a condition in which the breast's cells grow abnormally. Both men and women can develop BC, but women are far more likely to do so. A breast has three basic components: connective tissue, ducts, and lobules. Blood and lymph vessels are two routes by which BC can travel outside the breast; BC is said to have metastasized when it spreads to other body regions. The malignant growth is initially restricted to the duct or lobule, where it often exhibits no symptoms and has a low risk of spreading 1 . Over time, these tumors may invade the surrounding breast tissue and spread to neighboring lymph nodes or other body organs. Widespread metastases are the cause of breast cancer deaths in women. Treatment for breast cancer can be quite successful, especially if the disease is discovered early, and routine screening increases the likelihood of surviving BC.

Various imaging modalities have been created and used for image acquisition over time. DL approaches have been applied to medical imaging data, including X-ray and magnetic resonance imaging (MRI) images, demonstrating their effectiveness in identifying and tracking illnesses 2 , 3 , 4 , 5 , 6 , 7 . Standard metrics such as sensitivity and specificity are provided to assess the usefulness of the various imaging modalities. Mammograms are the primary topic of this research 3 , 4 .

Digital mammogram analysis is a reliable early-detection technique 2 , 3 , 4 , 5 , 6 , 7 . BC comes in a wide variety of forms, making classification challenging 8 . The kind of BC is determined by which breast cells develop into cancer, and precise classification of the kind of BC enables the most effective therapy approach. Since human classification is not always exact, an automated, accurate breast cancer diagnosis may be advantageous.

Several techniques have been used to classify BC using the MIAS database 9 , such as Bayesian neural networks 10 , relevance feedback (RF) and relevance feedback extreme learning machine (RF-ELM) 11 , optimized kernel extreme learning machine (KELM) 12 , k-nearest neighbor (KNN) 13 , discrimination potentiality (DP) 14 , SVM 15 , 16 , 17 , and DL CNNs 18 . DL, a branch of machine learning, is primarily focused on automatically extracting and classifying image features; as a result, DL is now a fundamental component of automated clinical decision-making 4 , 18 . Residual Neural Network (ResNet) 19 , 20 , 21 , Inception-V3 20 , ShuffleNet 22 , SqueezeNet 22 , DenseNet 23 , GoogLeNet 21 , 24 , AlexNet 21 , 24 , 25 , VGG 21 , and Xception 26 are some of the most practical DL architectures that have lately demonstrated the best performance across a variety of machine learning systems.

This work's goal is to offer a precise automated BC classification method using deeply learned features from three different CNN architectures, with the TV algorithm as a feature selector to obtain the images' important features and hence improve the CA. The TV algorithm, previously used for feature selection in text mining and clustering 27 , 28 , has never been used in BC diagnosis applications. The proposed approach combines features from the ResNet50, Inception-V3, and AlexNet architectures; the TV model is then used to decrease the number of features by picking those with the highest rankings. This increases the classification accuracy and results in a more efficient BC diagnosis system. The suggested system outperformed previously published findings on the same BC image dataset.

The current proposed study involves the following stages.

Patches of interest (ROI): Instead of using whole images, patches are employed to optimize the analysis; this aids efficient computation as well as improved performance. From the 322 images in MIAS, 416 image patches have been extracted.

Feature extraction: To extract features, a hybrid model employs three different pretrained DL architectures, namely ResNet50, Inception-V3, and AlexNet.

TV Feature selection: For the first time, TV is employed as a feature selector, selecting the appropriate features from the combined features of the BC image patches.

Classification process: The TV-selected features are used to train and test the MSVM classifier.

The rest of the paper is structured as follows: the related work is outlined in “Related work”. The methods employed in the suggested strategy are discussed in “The methodology”. The suggested method's experimental setup is shown in “Experimental setup”. The proposed CNNs + TV + SVM results are shown and discussed in “Results and discussion”. “Conclusion” summarizes the conclusion.

Related work

Recently, numerous studies using the publicly available MIAS mammography images for BC diagnosis and classification have been proposed in the literature. In the last ten years, several computer-aided diagnosis (CAD) models have been presented for classifying digital mammograms based on three crucial steps: feature extraction, feature reduction, and image classification. Researchers have put forth a variety of feature extraction strategies, with improvements made in the detection and classification stages 4 , 5 .

A Medical Active leaRning and Retrieval (MARRow) method was put forth in 29 as a means of assisting BC detection. This technique, based on varying degrees of diversity and uncertainty, follows the relevance feedback (RF) paradigm in the content-based image retrieval (CBIR) process; a precision of 87.3% was attained. An automated mass detection algorithm based on Gestalt psychology was presented by Wang et al. 30 . Its three modules are sensation, semantic integration, and validation. This approach blends aspects of human cognition with the visual features of breast masses; using 257 images, a sensitivity of 92% was reached. In 31 , a hybrid CAD framework was proposed for mammogram classification. This framework contains four modules: ROI generation using a cropping operation, texture feature extraction using the contourlet transform, a forest optimization algorithm (FOA) to select features, and classifiers such as k-NN, SVM, C4.5, and naive Bayes for classification.

In 32 , an efficient technique for detecting ambiguous areas in digital mammograms was introduced. It relies on Electromagnetism-like Optimization (EML) for image segmentation after a 2D median noise-filtering step, and the extracted features are passed to an SVM classifier. With just 56 images, an accuracy of 78.57% was achieved. By combining a deep CNN (DCNN) and an SVM, a CAD system for breast mammography was presented in 33 : the DCNN was employed to extract features and the SVM was used for classification. This system achieved accuracy, sensitivity, and specificity of 92.85%, 93.25%, and 90.56%, respectively.

In 34 , the CNN Improvement for BC Classification (CNNI-BCC) algorithm was proposed. This method improves BC lesion classification for benign, malignant, and healthy patients with a sensitivity of 89.47% and an accuracy of 90.5%. Hassan et al. presented an automated algorithm for BC mass detection based on feature matching of different areas using Maximally Stable Extremal Regions (MSER) 35 . The system was evaluated on 85 MIAS images and was 96.47% accurate in identifying the locations of masses. Patil et al. introduced an automated BC detection method 36 based on a combination of a recurrent neural network (RNN) and a CNN. The Firefly-updated chicken-based CSO (FC-CSO) was used to increase segmentation accuracy and optimize the RNN-CNN combination, yielding 90.6% accuracy, 90.42% sensitivity, and 89.88% specificity. In 37 , a BC classification method named BDR-CNN-GCN was introduced, which combines dropout (DO), batch normalization (BN), and two advanced networks: a CNN and a graph convolutional network (GCN). Run ten times on the MIAS breast dataset, the BDR-CNN-GCN algorithm yielded 96.00% specificity, 96.20% sensitivity, and 96.10% accuracy.

For the early diagnosis of BC, Shen et al. introduced a CAD system 38 . To extract features, the GLCM is combined with discrete wavelet decomposition (DWD), and a Deep Belief Network (DBN) is used for classification; the sunflower optimization technique was applied to enhance the DBN's CA. The findings demonstrated that the suggested model achieves accuracy, specificity, and sensitivity of 91.5%, 72.4%, and 94.1%, respectively. In 39 , an automated DL-based BC diagnosis (ADL-BCD) algorithm using mammograms was introduced. The feature extraction step used a pretrained ResNet34 whose parameters were optimized with the chimp optimization algorithm (COA), and the classification stage was performed with a wavelet neural network (WNN). The average accuracy was 93.17% for 70% training and 96.07% for 90% training.

In 6 , a CNN ensemble model based on transfer learning (TL) was introduced to classify benign and malignant tumors in breast mammograms. To improve prediction performance, the pretrained CNNs (VGG-16, ResNet-50, and EfficientNetB7) were integrated using TL. The findings revealed 99.62% accuracy, 99.5% precision, 99.5% specificity, and 99.62% sensitivity.

A CNN model was developed by Muduli et al. to distinguish between benign and malignant BC mammography images 40 . The model has only five learnable layers: four convolutional layers and one fully connected layer. It achieved 96.55% accuracy in distinguishing benign from malignant tumors. Alruwaili et al. presented an automated TL-based algorithm for BC identification 41 ; with ResNet50 the model achieved an accuracy of 89.5%, while with the NASNet-Mobile network it achieved 70%. The transferable texture CNN (TTCNN) was introduced in 42 to improve BC categorization: deep features recovered from eight DCNN models were fused, and robust features were chosen to distinguish between benign and malignant breast tumors. The results showed a sensitivity of 96.11%, a specificity of 97.03%, and an accuracy of 96.57%.

Oza et al. 5 provide a review of image analysis techniques for detecting questionable regions in mammography. Their paper examines many scientific approaches and methods for identifying questionable areas in mammograms, ranging from those based on low-level image features to the most recent algorithms. Research shows that the size of the training set has a significant impact on the performance of deep learning methods; as a result, many deep learning models are susceptible to overfitting and cannot produce generalizable output. Data augmentation is one of the most prominent solutions to this issue 7 .

According to empirical analysis, the best results are obtained when 70–90% of the initial data are used for training and the rest for testing 43 , 44 . In addition, 70%, 80%, and 90% dataset splitting ratios are most frequently used for training, as seen in 12 , 13 , 18 , 23 , 31 , 39 , and 16 , 30 , 39 , 41 , respectively.
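A minimal sketch of such a train-test partitioning (a hypothetical helper, using a simple random rather than stratified split) might look like this:

```python
import random

def split_indices(n, train_frac, seed=0):
    """Shuffle n patch indices and split them at the given training fraction."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * train_frac)
    return idx[:cut], idx[cut:]

# 416 MIAS patches, 70% for training (the paper also uses 0.8 and 0.9).
train_idx, test_idx = split_indices(416, 0.7)
```

The same helper applied with `train_frac=0.8` or `0.9` reproduces the other two splitting ratios discussed above.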

Considering this, numerous researchers have examined BC detection and classification and have proposed various solutions. However, most fell short of the required high accuracy, particularly for cases belonging to the three classes of benign, malignant, and healthy. The proposed study therefore aims to improve the automatic classification of breast mammography patches as normal, benign, or malignant. This is achieved by combining features from three separate pretrained deep learning architectures; the robust high-ranking features are then extracted using the TV feature selection approach and fed to the MSVM classifier to complete the classification task.

The methodology

The goal of this work was to enhance a mammogram-based BC diagnosis model employing 3-class cases. Following is a detailed explanation of the prepared dataset and the suggested methodology.

The applied digital mammography dataset was created and provided by MIAS; it is widely utilized and freely accessible online for research. The images are provided in Portable Gray Map (PGM) format. Each Mini-MIAS mammogram shows a left- or right-oriented breast and is classified as normal, benign, or malignant. The collection contains three forms of breast background tissue: fatty (F), dense-glandular (D), and fatty-glandular (G). The ground truth provides radiologists' estimates of each abnormality's center and a rough radius of the circle enclosing the abnormality. Because this indicates where the lesion is, a cropping operation is performed on the mammograms taken from the standard dataset to extract the ROI of any abnormal area. Mammogram abnormalities (ROIs) are extracted and labeled as image patches; for normal mammograms, the ROI is chosen randomly. Table 1 lists the segregated ROI image patches.
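The cropping step can be sketched as follows. The helper below is hypothetical (not the authors' code): it assumes a top-left image origin (Mini-MIAS ground-truth coordinates are given from the bottom-left, which a full implementation would convert), and `pad` is an assumed safety margin around the annotated circle.

```python
import numpy as np

def extract_roi_patch(image, cx, cy, radius, pad=8):
    """Crop a square patch around an abnormality given its ground-truth
    centre (cx, cy) and approximate radius, clipped to the image borders."""
    r = radius + pad
    y0, y1 = max(cy - r, 0), min(cy + r, image.shape[0])
    x0, x1 = max(cx - r, 0), min(cx + r, image.shape[1])
    return image[y0:y1, x0:x1]

# Mini-MIAS images are 1024 x 1024 PGM files; a blank array stands in here.
img = np.zeros((1024, 1024), dtype=np.uint8)
patch = extract_roi_patch(img, cx=500, cy=400, radius=40)
```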

The proposed approach

A method for automatically detecting and categorizing BC in mammograms based on deeply learned features is suggested. The pretrained feature extraction models, i.e., ResNet50, AlexNet, and Inception-V3, are employed. ResNet50 is a 50-layer neural network trained on the ImageNet dataset; it creates shortcuts between layers to avoid degradation as the network grows deeper and more complicated. AlexNet is a widely recognized CNN with five convolution layers, pooling layers, and three fully connected (FC) layers. Inception-V3 was created with DL techniques to aid object detection and image analysis; it has 48 deep layers trained on the ImageNet dataset, including convolution, max-pooling, and FC layers. The TV feature selection algorithm is then used to pick the most reliable features, and the MSVM is employed to perform the classification task. Table 2 lists the introduced network parameters; all DL networks use the Adam optimizer.

TV is one of the most basic filter-based unsupervised feature selection approaches 27 , 28 . Features are ranked by their variance in the feature matrix; the variance along each dimension reflects that dimension's representative power, so TV can be used as a criterion for feature selection and extraction. It has already been used to select features from a face database for clustering 27 , as well as for text mining 28 . To determine which features to employ, this approach calculates the variance of each feature across the image patches and searches the matrix for features that satisfy both the non-uniform feature distribution and the high feature frequency criteria. The process is implemented by calculating the variance of each feature, \({f}_{j}\) , in the feature matrix. TV is a variance score calculated using the following formula:

\(TV\left({f}_{j}\right)=\frac{1}{N}\sum_{i=1}^{N}{\left({f}_{ij}-\overline{{f}_{j}}\right)}^{2}\)

where \(N\) and \(M\) are the feature matrix dimensions: \(N\) represents the number of BC patches and \(M\) the number of features. \(\overline{{f}_{j}}\) is the mean of \({f}_{j}\) . A discriminative feature receives a high variance score (high TV).

The architecture of the suggested classification algorithm is shown in Fig.  1 . In the first stage, the input images from the prepared dataset are scaled to fit each pretrained network. From each input image, the Inception-V3 and ResNet50 networks each generate 2048 features at their global average pooling (Avg-pool) layers, and AlexNet generates 4096 features at its FC layer. The features from each CNN are submitted to the TV algorithm for selection: TV retains 1500, 600, and 1400 features for Inception-V3, ResNet50, and AlexNet, respectively. These features were fed individually to the MSVM classifier to evaluate the classification performance of each DL CNN. In the next stage, the 3500 selected features gathered from the three DL CNN architectures were grouped into a single feature vector, and the TV algorithm was applied once more to further reduce the number of features. The 600 features with the best predictive ability were identified and examined in subsets increasing in steps of 100: subsets of 100, 200, 300, 400, 500, and 600 features were fed to the MSVM classifier, and the classification performance was assessed. In total, the TV algorithm first reduced 8192 features to 3500 and then to 600. Based on the obtained classification performance and comparison with other published approaches, the 600-feature selection achieved the highest classification performance.
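The dimensionality bookkeeping of this two-stage selection can be sketched as follows; random arrays (and a reduced patch count) stand in for the real CNN activations, and the final 600-feature matrix would then feed the MSVM classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches = 32  # stand-in for the 416 MIAS patches

def tv_select(X, k):
    """Keep the k columns of X with the highest variance (TV score)."""
    keep = np.argsort(X.var(axis=0))[::-1][:k]
    return X[:, keep]

# Stand-in feature matrices; in the paper these come from the Avg-pool /
# FC layers of Inception-V3, ResNet50, and AlexNet respectively.
inception = rng.normal(size=(n_patches, 2048))
resnet = rng.normal(size=(n_patches, 2048))
alexnet = rng.normal(size=(n_patches, 4096))

# Stage 1: per-CNN TV selection (1500 + 600 + 1400 = 3500 fused features).
fused = np.hstack([tv_select(inception, 1500),
                   tv_select(resnet, 600),
                   tv_select(alexnet, 1400)])

# Stage 2: a second TV pass reduces the fused vector to the final 600 features.
final = tv_select(fused, 600)
```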

figure 1

The concept of the suggested method.
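The two-stage feature fusion described above can be sketched as follows. The random arrays merely stand in for real CNN features, and the helper name `tv_select` is ours; this is an illustration of the data flow, not the authors' implementation:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def tv_select(X, k):
    """Rank features by their variance (the TV score) and keep the top k."""
    idx = np.argsort(X.var(axis=0))[::-1][:k]
    return X[:, idx]

# Stand-ins for the deep features; in the paper these come from the
# Avg-pool layers of Inception-V3 and ResNet50 (2048 each) and the
# FC layer of AlexNet (4096). Random arrays are used for illustration.
n_patches = 416
y = rng.integers(0, 3, n_patches)            # normal / benign / malignant
inception = rng.normal(size=(n_patches, 2048))
resnet = rng.normal(size=(n_patches, 2048))
alexnet = rng.normal(size=(n_patches, 4096))

# Stage 1: per-network TV selection (1500 / 600 / 1400 features),
# then fusion into a single 3500-dimensional feature vector.
stage1 = np.hstack([tv_select(inception, 1500),
                    tv_select(resnet, 600),
                    tv_select(alexnet, 1400)])

# Stage 2: TV again, down to 600 features, then a multiclass SVM
# (scikit-learn's SVC handles the multiclass case internally).
fused = tv_select(stage1, 600)
clf = SVC(kernel="linear").fit(fused, y)
print(stage1.shape, fused.shape)  # (416, 3500) (416, 600)
```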

Experimental setup

Implementation.

Our study identifies and classifies BC in the MIAS dataset in two stages. At each stage, 416 mammography image patches are used to train the suggested models and to divide mammogram patches into three groups: normal, malignant, and benign. This scenario is repeated for the F, D, and G tissue types and for their combination (All). In the first stage, features are extracted with the three individually pretrained DL CNNs (Inception-V3, ResNet50, and AlexNet) and reduced with the TV algorithm; the selected features feed the MSVM for the classification task. In the second stage, the proposed method is applied to the 3500 features chosen from the three pretrained CNNs. The top 600 features with the best predictive ability are divided into subsets in steps of 100, and the MSVM classifier receives inputs of 100, 200, 300, 400, 500, and 600 features. All models are trained on 70, 80, and 90% of the dataset.
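The three training rates can be reproduced with a standard stratified split, for example with scikit-learn; this is a sketch, as the paper does not specify its splitting code:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(416, 600))   # 416 patches x 600 selected features
y = rng.integers(0, 3, 416)       # normal / benign / malignant labels

# Train on 70, 80, and 90% of the patches, as in the experiments.
for train_rate in (0.70, 0.80, 0.90):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_rate, stratify=y, random_state=0)
    print(train_rate, len(X_tr), len(X_te))
```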

Evaluation metrics

To illustrate the suggested model's performance, the receiver operating characteristic (ROC) curve and confusion matrix (CM) are used. The performance of the CNNs and the suggested network is also assessed using the following metrics:

$$\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN},\quad \mathrm{Recall}=\frac{TP}{TP+FN},\quad \mathrm{Specificity}=\frac{TN}{TN+FP},$$

$$\mathrm{Precision}=\frac{TP}{TP+FP},\quad \mathrm{F1}\text{-}\mathrm{Score}=\frac{2\times \mathrm{Precision}\times \mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}},$$

where \(TN\) and \(TP\) are the sums of all true negatives and true positives, respectively, and \(FN\) and \(FP\) are the sums of all false negatives and false positives, respectively.
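As a sketch, these metrics can be computed directly from the confusion-matrix counts; the function name is ours:

```python
def classification_metrics(tp, tn, fp, fn):
    """The standard metrics, computed from confusion-matrix counts
    (in the multiclass case the counts are taken per class,
    one-vs-rest)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)          # sensitivity
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, specificity, precision, f1

# Example: 40 true positives, 50 true negatives, 5 FP, 5 FN.
acc, rec, spec, prec, f1 = classification_metrics(40, 50, 5, 5)
print(round(acc, 2), round(rec, 3))  # 0.9 0.889
```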

Results and discussion

Table 3 and Fig.  2 present the findings of the first stage of our experiment. The table compares the CA of the individually pretrained Inception-V3, ResNet50, and AlexNet DL CNN models with that of the suggested model at a 70% training rate. For the F and G types, the proposed model achieved the optimal CA of 100%; higher performance is also obtained for the other types (D and All) compared with the individually pretrained DL CNN models. Using the suggested model, an average CA of 97.81% is attained. Table 4 presents the results of the second stage at a 70% training rate. The table, also visualized in Fig.  3 , clarifies the impact of the number of selected features on the CA of the suggested model: 600 features achieve the highest CA. As seen in Fig.  3 , the performance increases quickly as the number of features grows from 100 to 200 and rises only slightly with each further subset. As a result, instead of the original 8192 features, which were reduced to 3500 in the first stage and to 600 in the second, the combined CNNs with the TV feature selection algorithm achieved the highest performance with only 600 features.

figure 2

Accuracy comparison between the CNN models and the proposed model.

figure 3

Accuracy of the CNN-TV-SVM method with different sub-feature sets.

Specificity, recall, precision, accuracy, F1-score, and AUC are the key parameters used to assess the effectiveness of the suggested method. The average performance of the proposed model for the D, F, and G breast tissue types, and for all of them together, is presented in Table 5 . According to the table, the proposed model performs best for the F and G types and somewhat lower for the remaining types, where an acceptable average performance is still reached. The suggested model's results are depicted as a CM and ROC curves in Figs. 4 and 5 , respectively. Of the 42 mammography patches examined for each of the G and F types, every patch receives the correct classification. Of the 41 patches of the D type, only one is incorrectly classified, and 8 of the 117 patches of the All type are misclassified. The ROC curves in Fig.  5 show that the F tissue type achieves the best classifier performance, though good performance is attained for the other types as well.

figure 4

Confusion matrices of the proposed method.

figure 5

ROC curves of the proposed method.

Table 6 and Fig.  6 describe the average per-class performance of the proposed method for all breast tissue types ("All") across the normal, malignant, and benign classes. As seen in the figure, the performance rate decreases from the normal to the benign to the malignant class: the normal class achieves the highest performance, whereas the malignant class achieves the lowest.

figure 6

Average performance of the proposed method for all tissue types.

Finally, different training rates, 70%, 80%, and 90% of the prepared dataset, were used to train the individual DL CNNs and the suggested model. Table 7 displays the performance of the DL CNN models in comparison with the suggested model. The table clearly shows that the suggested model beats the existing CNN models in CA, achieving an optimal CA of 100% at a 90% training rate.

Additionally, as shown in Table 8 , the proposed strategy has been compared with other current state-of-the-art studies that use the MIAS mammography dataset. The table reports the CA of each study for the 3-class case. The results indicate that the suggested model outperforms the other models.

Ablation analysis

To examine the contribution of the key elements (i.e., the CNN networks and TV) in our proposed architecture, we conducted ablation studies; the numerical outcomes are shown in Table 9 . In each ablation study, only the component under examination is removed from the proposed system while the others remain. First, the impact of eliminating each of the three pretrained networks is investigated: in each trial, two CNNs are employed, 600 features are chosen using the TV model, and the features are supplied to the MSVM for classification. As shown in Table 9 , the CA drops without (W/o) ResNet50, Inception-V3, or AlexNet, with the largest reduction occurring without ResNet50. The effect of the TV feature selection model is also examined: here, all 8192 features extracted by the three CNNs are sent directly to the MSVM classifier, which performs the classification. As the table shows, the worst CA is reached without the TV, and the best CA is realized only when the full proposed network is used.

This paper proposes and tests a new automated BC detection and classification algorithm that uses the fewest possible features. The effective DL features used in this model are provided by three of the most popular pretrained architectures: the Inception-V3, ResNet50, and AlexNet CNN models. Across the two stages of the experiment, the TV algorithm is applied twice to select robust, high-ranking features. In the first stage, features are chosen from each individual DL CNN model using the TV algorithm and provided to the MSVM classifier independently; 3500 robust features are retained from the original 8192. These features are subjected to the TV algorithm once more, which reduces them to 600 weighted features that drive classification performance. The MSVM is used to classify the top 100, 200, 300, 400, 500, and 600 features with the highest feature weight. The newly proposed hybrid technique, which combines CNNs + TV + MSVM, obtained a CA of 97.81% when training on 70% of the data, 98% when training on 80%, and the ideal value of 100% when training on 90%. Compared with the separate DL CNN models, i.e., Inception-V3, ResNet50, and AlexNet, as well as with other studies in the literature, the suggested hybrid technique achieves the highest performance for BC diagnosis. The importance of the proposed network's key components is highlighted by the ablation analysis.

Data availability

The datasets analyzed during the current study are publicly available in the (mammographic image analysis homepage) repository, ( https://www.mammoimage.org/databases/ ).

WHO. Breast cancer. https://www.who.int/news-room/fact-sheets/detail/breast-cancer (Accessed 23 Aug 2022).

Sannasi Chakravarthy, S. R. & Rajaguru, H. Automatic detection and classification of mammograms using improved extreme learning machine with deep learning. Irbm 43 (1), 49–61. https://doi.org/10.1016/j.irbm.2020.12.004 (2022).


Oza, P., Sharma, P., Patel, S. & Kumar, P. Computer-aided breast cancer diagnosis: Comparative analysis of breast imaging modalities and mammogram repositories. Curr. Med. Imaging 19 (5), 456–468. https://doi.org/10.2174/1573405618666220621123156 (2023).

Oza, P., Sharma, P., Patel, S. & Kumar, P. Deep convolutional neural networks for computer-aided breast cancer diagnostic: A survey. Neural Comput. Appl. 34 (3), 1815–1836. https://doi.org/10.1007/s00521-021-06804-y (2022).

Oza, P., Sharma, P., Patel, S. & Bruno, A. A bottom-up review of image analysis methods for suspicious region detection in mammograms. J. Imaging https://doi.org/10.3390/jimaging7090190 (2021).


Oza, P., Sharma, P. & Patel, S. Deep ensemble transfer learning-based framework for mammographic image classification. J. Supercomput. https://doi.org/10.1007/s11227-022-04992-5 (2022).

Oza, P., Sharma, P., Patel, S., Adedoyin, F. & Bruno, A. Image augmentation techniques for mammogram analysis. J. Imaging 8 (5), 1–22. https://doi.org/10.3390/jimaging8050141 (2022).

Elkorany, A. S., Marey, M., Almustafa, K. M. & Elsharkawy, Z. F. Breast cancer diagnosis using support vector machines optimized by whale optimization and dragonfly algorithms. IEEE Access 10 (June), 1–1. https://doi.org/10.1109/access.2022.3186021 (2022).

Mammographic Image Analysis Society (MIAS). https://www.mammoimage.org/databases/ (Accessed 20 May 2021).

Martins, L. D. O., Santos, A. M., Silva, C. & Paiva, A. C. Classification of normal, benign and malignant tissues using co-occurrence matrix and bayesian neural network in mammographic images. In 2006 Ninth Brazilian Symposium on Neural Networks (SBRN'06), 24–29 https://doi.org/10.1109/SBRN.2006.14 (2006).

Ghongade, R. D. & Wakde, D. G. Detection and classification of breast cancer from digital mammograms using RF and RF-ELM algorithm. In 1st International Conference on Electronics, Materials Engineering and Nano-Technology (IEMENTech), 1–6 https://doi.org/10.1109/IEMENTECH.2017.8076982 (2017).

Mohanty, F., Rup, S. & Dash, B. Automated diagnosis of breast cancer using parameter optimized kernel extreme learning machine. Biomed. Signal Process. Control 62 , 102108. https://doi.org/10.1016/j.bspc.2020.102108 (2020).

Kaur, P., Singh, G. & Kaur, P. Intellectual detection and validation of automated mammogram breast cancer images by multi-class SVM using deep learning classification. Inform. Med. Unlocked 16 , 100151. https://doi.org/10.1016/j.imu.2019.01.001 (2019).

Shastri, A. A., Tamrakar, D. & Ahuja, K. Density-wise two stage mammogram classification using texture exploiting descriptors R. Expert Syst. Appl. 99 , 71–82. https://doi.org/10.1016/j.eswa.2018.01.024 (2018).

Vijayarajeswari, R., Parthasarathy, P., Vivekanandan, S. & Basha, A. A. Classification of mammogram for early detection of breast cancer using SVM classifier and Hough transform. Measurement 146 , 800–805. https://doi.org/10.1016/j.measurement.2019.05.083 (2019).


Benzebouchi, N. E., Azizi, N. & Ayadi, K. A computer-aided diagnosis system for breast cancer using deep convolutional neural networks. Comput. Intell. Data Mining Adv. Intell. Syst. Comput. 711 , 583–593. https://doi.org/10.1007/978-981-10-8055-5 (2019).

Arafa, A. A., Asad, A. H., Hefny, H. A. & Authority, A. E. Computer-aided detection system for breast cancer based on GMM and SVM. Arab J. Nucl. Sci. Appl. 52 (2), 142–150. https://doi.org/10.21608/ajnsa.2019.7274.1170 (2019).

Hepsağ P. U., Özel, S. A. & Yazıcı, A. Using deep learning for mammography classification. In 2017 International Conference on Computer Science and Engineering (UBMK), 418–423 https://doi.org/10.1109/UBMK.2017.8093429 (2017).

Chakravarthy, S. R. S. & Rajaguru, H. Automatic detection and classification of mammograms using improved extreme learning machine with deep learning. IRBM 43 (1), 49–61. https://doi.org/10.1016/j.irbm.2020.12.004 (2022).

Narin, A., Kaya, C. & Pamuk, Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Anal. Appl. 24 , 1207–1220. https://doi.org/10.1007/s10044-021-00984-y (2021).

Oyelade, O. N. & Ezugwu, A. E. A deep learning model using data augmentation for detection of architectural distortion in whole and patches of images. Biomed. Signal Process. Control 65 (2020), 102366. https://doi.org/10.1016/j.bspc.2020.102366 (2021).

Elkorany, A. S. & Elsharkawy, Z. F. COVIDetection-Net: A tailored COVID-19 detection from chest radiography images using deep learning. Optik 231 , 166405–166405. https://doi.org/10.1016/j.ijleo.2021.166405 (2021).


Yu, X., Zeng, N., Liu, S. & Dong, Y. Utilization of DenseNet201 for diagnosis of breast abnormality. Mach. Vis. Appl. 30 (7), 1135–1144. https://doi.org/10.1007/s00138-019-01042-8 (2019).

Samala, R. K., Chan, H. P., Hadjiiski, L. M., Helvie, M. A. & Richter, C. D. Generalization error analysis for deep convolutional neural network with transfer learning in breast cancer diagnosis. Phys Med. Biol. https://doi.org/10.1088/1361-6560/ab82e8 (2020).

Oyelade, O. N. & Ezugwu, A. E. Biomedical signal processing and control a deep learning model using data augmentation for detection of architectural distortion in whole and patches of images. Biomed. Signal Process. Control 65 , 102366–102366. https://doi.org/10.1016/j.bspc.2020.102366 (2021).

Ahmed, L. et al. Images data practices for Semantic Segmentation of Breast Cancer using Deep Neural Network. J. Ambient Intell. Human. Comput. https://doi.org/10.1007/s12652-020-01680-1 (2020).

He, X., Cai, D. & Niyogi, P. Laplacian score for feature selection. Adv. Neural Inf. Process. Syst. 507–514 (2005).

Wang, H. & Hong, M. Distance variance score: An efficient feature selection method in text classification. Math. Probl. Eng. 2015 (1), 1–10. https://doi.org/10.1155/2015/695720 (2015).

Bressan, R. S., Bugatti, P. H. & Saito, P. T. M. Breast cancer diagnosis through active learning in content-based image retrieval. Neurocomputing 357 , 1–10. https://doi.org/10.1016/j.neucom.2019.05.041 (2019).

Wang, H. et al. Breast mass detection in digital mammogram based on gestalt psychology. J. Healthc. Eng. https://doi.org/10.1155/2018/4015613 (2018).

Mohanty, F., Rup, S., Dash, B. & Swamy, B. M. M. N. S. Mammogram classification using contourlet features with forest optimization-based feature selection approach. Multimed. Tools Appl. 78 , 12805–12834. https://doi.org/10.1007/s11042-018-5804-0 (2019).

Soulami, K. B., Saidi, M. N., Honnit, B., Anibou, C. & Tamtaoui, A. Detection of breast abnormalities in digital mammograms using the electromagnetism-like algorithm. Multimed. Tools Appl. 78 , 12835–12863. https://doi.org/10.1007/s11042-018-5934-4 (2019).

Jaffar, M. A. Deep learning based computer aided diagnosis system for breast mammograms. Int. J. Adv. Comput. Sci. Appl. 8 (7), 286–290 (2017).


Ting, F. F., Tan, Y. J. & Sim, K. S. Convolutional neural network improvement for breast cancer classification. Expert Syst. Appl. 120 , 103–115. https://doi.org/10.1016/j.eswa.2018.11.008 (2019).

Hassan, S. A., Sayed, M. S., Abdalla, M. I. & Rashwan, M. A. Detection of breast cancer mass using MSER detector and features matching. Multimed. Tools Appl. 78 , 20239–20262. https://doi.org/10.1007/s11042-019-7358-1 (2019).

Patil, R. S. & Biradar, N. Automated mammogram breast cancer detection using the optimized combination of convolutional and recurrent neural network. Evol. Intell. 14 , 1459–1474. https://doi.org/10.1007/s12065-020-00403-x (2021).

Zhang, Y.-D., Chandra, S. & Guttery, D. S. Improved breast cancer classification through combining graph convolutional network and convolutional neural network. Inf. Process. Manag. 58 , 102439. https://doi.org/10.1016/j.ipm.2020.102439 (2021).

Shen, L., He, M., Shen, N., Yousefi, N. & Wang, C. Optimal breast tumor diagnosis using discrete wavelet transform and deep belief network based on improved sunflower optimization method. Biomed. Signal Process. Control 60 , 101953. https://doi.org/10.1016/j.bspc.2020.101953 (2020).

Escorcia-Gutierrez, J. et al. Automated deep learning empowered breast cancer diagnosis using biomedical mammogram images. Comput. Mater. Continua 71 (2), 4221–4235. https://doi.org/10.32604/cmc.2022.022322 (2022).

Muduli, D., Dash, R. & Majhi, B. Automated diagnosis of breast cancer using multi-modal datasets: A deep convolution neural network based approach. Biomed. Signal Process. Control 71 , 102825. https://doi.org/10.1016/j.bspc.2021.102825 (2022).

Alruwaili, M. & Gouda, W. Automated breast cancer detection models based on transfer learning. Sensors https://doi.org/10.3390/s22030876 (2022).

Maqsood, S., Damaševičius, R. & Maskeliūnas, R. TTCNN: A breast cancer detection and classification towards computer-aided diagnosis using digital mammography in early stages. Appl. Sci. 12 (7), 1–27. https://doi.org/10.3390/app12073273 (2022).


Gholamy, A., Kreinovich, V. & Kosheleva, O. Why 70/30 or 80/20 relation between training and testing sets: A pedagogical explanation. Departmental Technical Reports (CS), 1–6 (2018). https://scholarworks.utep.edu/cs_techrep/1209/

Joseph, V. R. Optimal ratio for data splitting. Stat. Anal. Data Min. 15 (4), 531–538. https://doi.org/10.1002/sam.11583 (2022).



Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).

Author information

Authors and affiliations.

Department of Electronics and Electrical Comm. Eng., Faculty of Electronic Engineering, Menoufia University, Menouf, 32952, Egypt

Ahmed S. Elkorany

Engineering Department, Nuclear Research Center, Egyptian Atomic Energy Authority, Cairo, Egypt

Zeinab F. Elsharkawy


Contributions

A.S.E., Z.E.: participation in preparing software, suggesting techniques used in research, reviewing and evaluating results, and participating in research paper writing.

Corresponding author

Correspondence to Zeinab F. Elsharkawy .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Elkorany, A.S., Elsharkawy, Z.F. Efficient breast cancer mammograms diagnosis using three deep neural networks and term variance. Sci Rep 13 , 2663 (2023). https://doi.org/10.1038/s41598-023-29875-4


Received : 08 November 2022

Accepted : 11 February 2023

Published : 15 February 2023

DOI : https://doi.org/10.1038/s41598-023-29875-4




  • Open access
  • Published: 24 July 2023

Deep learning applications to breast cancer detection by magnetic resonance imaging: a literature review

  • Richard Adam,
  • Kevin Dell’Aquila,
  • Laura Hodges,
  • Takouhie Maldjian &
  • Tim Q. Duong

Breast Cancer Research volume 25, Article number: 87 (2023)


Deep learning analysis of radiological images has the potential to improve the diagnostic accuracy of breast cancer, ultimately leading to better patient outcomes. This paper systematically reviews the current literature on deep learning detection of breast cancer based on magnetic resonance imaging (MRI). The literature search was performed from 2015 to Dec 31, 2022, using Pubmed. Other databases included Semantic Scholar, the ACM Digital Library, Google search, Google Scholar, and pre-print depositories (such as Research Square). Articles that were not deep learning (such as texture analysis) were excluded. PRISMA guidelines for reporting were followed. We analyzed the different deep learning algorithms, methods of analysis, experimental designs, MRI image types, types of ground truth, sample sizes, numbers of benign and malignant lesions, and performance reported in the literature. We discuss lessons learned, challenges to broad deployment in clinical practice, and suggested future research directions.

Breast cancer is the most common cancer and the second leading cause of cancer death in women. One in eight American women (13%) will be diagnosed with breast cancer in their lifetime, and one in 39 women (3%) will die from breast cancer (American Cancer Society Statistics, 2023). The American Cancer Society recommends yearly screening mammography for early detection of breast cancer for women, which may begin at age 40 [ 1 ]. About 2%–5% of women in the general population in the US have a lifetime risk of breast cancer of 20% or higher [ 1 ], although it can vary depending on the population being studied and the risk assessment method used. The ACS recommends yearly breast magnetic resonance imaging (MRI) in addition to mammography for women with 20–25% or greater lifetime risk [ 1 ]. Early detection and treatment are likely to result in better patient outcomes.

MRI is generally more sensitive and offers more detailed pathophysiological information but is less cost effective compared to mammography for population-based screening [ 2 , 3 ]. Breast MRI utilizes high-powered magnets and radio waves to generate 3D images. Cancer yield from MRI alone averages 22 cancers for every 1000 women screened, a rate of cancer detection roughly 10 times that achieved with screening mammography in average-risk women, and roughly twice the yield achieved with screening mammography in high-risk women [ 4 ]. Many recent studies have established contrast-enhanced breast MRI as a screening modality for women with a hereditary or familial increased risk for the development of breast cancer [ 5 ].

Interpretation of breast cancer on MRI relies on the expertise of radiologists. The growing demand for breast MRI and the shortage of radiologists have resulted in an increased workload for radiologists [ 6 , 7 ], leading to long wait times and delays in diagnosis [ 8 , 9 ]. Machine learning methods show promise in assisting radiologists, improving the accuracy of breast MRI interpretation, supporting clinical decision-making, and improving patient outcomes [ 10 , 11 ]. By analyzing large datasets of MRIs, machine learning algorithms can learn to identify and classify suspicious areas, potentially reducing the number of false positives and false negatives [ 11 , 12 ] and thus improving diagnostic accuracy. A few studies have shown that machine learning can outperform radiologists in detecting breast cancer on MRIs [ 13 ]. Machine learning could also help prioritize worklists in a radiology department.

In recent years, deep learning (DL) methods have revolutionized the field of computer vision, with a wide range of applications from image classification and object detection to semantic segmentation and medical image analysis [ 14 ]. Deep learning is superior to traditional machine learning because of its ability to learn from unstructured or unlabeled data [ 14 ]. Unlike traditional machine learning algorithms, which require time-consuming data labeling, deep learning algorithms are more flexible and adaptable because they can learn from data that are not labeled or structured [ 15 ]. There have been a few reviews on deep learning breast cancer detection. Oza et al. reviewed detection and classification on mammography [ 16 ]. Saba et al. [ 17 ] presented a compendium of state-of-the-art techniques for diagnosing breast and other cancers. Hu et al. [ 18 ] provided a broad overview of the research and development of artificial intelligence systems for clinical breast cancer image analysis, discussing the clinical role of artificial intelligence in risk assessment, detection, diagnosis, prognosis, and treatment response assessment. Mahoro et al. [ 10 ] reviewed the applications of deep learning to breast cancer diagnosis across multiple imaging modalities. Sechopoulos et al. [ 19 ] discussed the advances of AI in the realm of mammography and digital tomosynthesis. AI-based workflows integrating multiple datastreams, including breast imaging, can support clinical decision-making and help facilitate personalized medicine [ 20 ]. To our knowledge, there is currently no review that systematically compares different deep learning studies of breast cancer detection using MRI. Such a review is important because it could help delineate the path forward.

Figure  1 shows a graphic representation of a deep learning workflow. The input layer represents the breast cancer image that serves as input to the CNN. The multiple convolutional layers are stacked on top of the input layer. Each convolutional layer applies filters or kernels to extract specific features from the input image. These filters learn to detect patterns such as edges, textures, or other relevant features related to breast cancer. After each convolutional layer, activation functions like rectified linear unit (ReLU) are typically applied to introduce nonlinearity into the network. Following some of the convolutional layers, pooling layers are used to downsample the spatial dimensions of the feature maps. Common pooling techniques include max-pooling or average pooling. Pooling helps reduce the computational complexity and extract the most salient features. After the convolutional and pooling layers, fully connected layers are employed. These layers connect all the neurons from the previous layers to the subsequent layers. Fully connected layers enable the network to learn complex relationships between features. The final layer is the output layer, which provides the classification or prediction. In the case of breast cancer detection, it might output the probability or prediction of malignancy or benignity.

figure 1

The input layer represents the breast cancer image that serves as input to the CNN. The multiple convolutional layers are stacked on top of the input layer. Pooling layers are used to downsample the spatial dimensions of the feature maps. Fully connected layers are then employed to connect all the neurons from the previous layers to the subsequent layers. The final layer is the output layer, which provides the classification
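The layer sequence described above (convolutional filters with ReLU activations, pooling for downsampling, fully connected layers, and an output layer) can be sketched as a minimal PyTorch module. The class name and all layer sizes are illustrative only and do not come from any of the reviewed papers:

```python
import torch
import torch.nn as nn

class TinyBreastCNN(nn.Module):
    """Minimal CNN mirroring the workflow in Fig. 1: convolution +
    ReLU feature extraction, pooling to downsample, fully connected
    layers, and an output layer scoring malignant vs. benign."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # edge/texture filters
            nn.ReLU(),                                   # nonlinearity
            nn.MaxPool2d(2),                             # downsample 2x
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 32),                 # fully connected
            nn.ReLU(),
            nn.Linear(32, n_classes),                    # output layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One grayscale 64x64 "image" through the network yields two class scores.
model = TinyBreastCNN()
logits = model(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```

In practice the output scores would be passed through a softmax to obtain the probability of malignancy or benignity.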

The goal of this study was to review the current literature on deep learning detection of breast cancer using breast MRI. We included literature in which DL was used both in the primary screening setting and as a supplemental detection tool. We compared different deep learning algorithms, methods of analysis, types of ground truth, sample sizes, numbers of benign and malignant lesions, MRI image types, and performance indices, among others. We also discuss lessons learned, challenges of deployment in clinical practice, and suggested future research directions.

Materials and methods

No ethics committee approval was required for this review.

Search strategy and eligibility criteria

PRISMA guidelines for reporting were adopted in our systematic review. The literature search was performed from 2017 to Dec 31, 2022, using the following key words: “breast MRI,” “breast magnetic resonance imaging,” “deep learning,” “breast cancer detection,” and “breast cancer screening.” The database included Pubmed, Semantic Scholar, ACM Digital Library, Google search, Google Scholar, and pre-print depositories (such as Research Square). We noted that many of the computing or machine learning journals were found on sites other than Pubmed. Some were full-length peer-reviewed conference papers, in contrast with small conference abstracts. Articles that were not deep learning (such as texture analysis) were excluded. Only original articles written in English were selected. Figure  2 shows the flowchart demonstrating how articles were included and excluded for our review. The search and initial screening for eligibility were performed by RA and independently verified by KD and/or TD. This study did not review DL prediction of neoadjuvant chemotherapy which has recently been reviewed [ 21 ].

figure 2

PRISMA selection flowchart

The Pubmed search yielded 59 articles, of which 22 were review articles, 30 were not related to breast cancer detection on MRI, and two had unclear or unconventional methodologies, leaving five articles after exclusion (Fig.  2 ). In addition, 13 articles were found in databases outside of Pubmed, because many computing and machine learning journals are not indexed by Pubmed. A total of 18 articles were included in our study (Table 1 ). Two of the studies stated that the patient populations were moderate/high risk [ 22 , 23 ] or high risk [ 23 ], while the remaining papers did not state whether the dataset was from screening or supplemental MRI.

In this review, we first summarized individual papers and followed by generalization of lessons learned. We then discussed challenges of deployment in the clinics and suggested future research directions.

Summary of individual papers

Adachi et al. [ 13 ] performed a retrospective study using RetinaNet as the CNN architecture to detect breast cancer in maximum intensity projections (MIPs) of fat-suppressed DCE-MRI images. Images of breast lesions were annotated with a rectangular region-of-interest (ROI) and labeled as “benign” or “malignant” by an experienced breast radiologist. The AUCs, sensitivities, and specificities of four readers were also evaluated, as well as those of the readers combined with the CNN. RetinaNet alone had a higher area under the curve (AUC) and sensitivity (0.925 and 0.926, respectively) than any of the readers. In two cases, the AI system misdiagnosed normal breast as malignancy, which may be the result of variations in normal breast tissue. An invasive ductal carcinoma near the axilla was missed by the AI, possibly because it was confused with a normal axillary lymph node. A wider variety of data and larger training datasets could alleviate these problems.

Antropova et al. [ 24 ] compared MIPs derived from the second post-contrast subtraction T1-weighted image to the central slice of the second post-contrast image, with and without subtraction. The ground truth was ROIs based on radiology assessment with biopsy-proven malignancy. MIP images showed the highest AUC. Feature extraction and classifier training for every slice of a DCE-MRI sequence, with slices numbering in the hundreds, would have been computationally expensive at the time. MIP images, in widespread clinical use, contain enhancement information throughout the tumor volume: they summarize volume data without processing a plethora of slices and are, therefore, faster and less computationally expensive. The MIP (AUC = 0.88) outperformed the single-slice DCE image, and the subtracted DCE image (AUC = 0.83) outperformed the unsubtracted single-slice DCE image (AUC = 0.80). The subtracted DCE image is derived from two timepoints, the pre-contrast image subtracted from the post-contrast image, which explains its higher AUC. Using multiple slices and/or multiple timepoints could further increase the AUC with DCE images, possibly even exceeding that of the MIP image (0.88); this would be an area for further exploration.
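
The MIP and subtraction operations described above are simple array reductions; a minimal numpy sketch (with illustrative array shapes and random values, not the paper's data) might look like:

```python
import numpy as np

# Illustrative DCE-MRI series: (timepoints, slices, height, width);
# timepoint 0 is pre-contrast, later timepoints are post-contrast.
rng = np.random.default_rng(0)
dce = rng.random((3, 8, 64, 64)).astype(np.float32)

# Subtraction image: post-contrast minus pre-contrast, which
# suppresses non-enhancing background tissue.
subtraction = dce[2] - dce[0]  # shape (8, 64, 64)

# The MIP collapses the slice axis, keeping the brightest (most
# enhancing) voxel along each projection ray.
mip = subtraction.max(axis=0)  # shape (64, 64)
```

By construction the MIP is at least as bright as every individual slice, so a single 2D image preserves the peak enhancement of the whole volume.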

Ayatollahi et al. [ 22 ] performed a retrospective study using 3D RetinaNet as a CNN architecture to analyze and detect breast cancer in ultrafast TWIST DCE-MRI images. They used 572 images (365 malignant and 207 benign) taken from 462 patients. Bounding boxes drawn around the lesion in the images were used as ground truth. They found a detection rate of 0.90 and a sensitivity of 0.95 with tenfold cross validation.
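
Detection models such as RetinaNet are typically scored by the overlap between predicted and annotated bounding boxes; a minimal sketch of the standard intersection-over-union (IoU) measure, with hypothetical box coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction counts as a detection when its IoU with the annotated
# lesion box exceeds a threshold (0.5 is a common choice).
ground_truth = (10, 10, 50, 50)
prediction = (20, 20, 60, 60)
print(iou(ground_truth, prediction))  # IoU ≈ 0.39
```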

Feng et al. [ 23 ] implemented the Knowledge-Driven Feature Learning and Integration (KFLI) model using DWI and DCE-MRI data from 100 high-risk female patients with 32 benign and 68 malignant lesions, segmented by two experienced radiologists, and reported an accuracy of 0.85. The model comprises a sequence division module and an adaptive weighting module: the sequence division module, based on lesion characteristics, performs feature learning, while the adaptive weighting module performs automatic feature integration and improves cooperative diagnosis performance. The model reports the contribution of each sub-sequence, guiding radiologists toward characteristic-related sequences with a high contribution to lesion diagnosis; this can save radiologists time and helps them better understand the output of the deep networks. The model can thus extract sufficient and effective features from each sub-sequence for a comprehensive diagnosis of breast cancer. This ensemble of a deep network and domain knowledge achieved high sensitivity, specificity, and accuracy.

Fujioka et al. [ 25 ] used MIPs from the early phase (1–2 min) of axial fat-suppressed DCE images, comparing the performance of CNN models to two human readers (Reader 1, a breast surgeon with 5 years of experience, and Reader 2, a radiologist with 20 years of experience) in distinguishing benign from malignant lesions. The highest AUC achieved with deep learning was 0.895, with the InceptionResNetV2 CNN model. The mean AUC across the different CNN models was 0.830 (range 0.750–0.895), comparable to the human readers. False-positive masses tended to be relatively large with a fast pattern of strong enhancement, and false-negative masses tended to be relatively small with a medium-to-slow pattern of enhancement. One false-positive and one false-negative non-mass-enhancing lesion were also incorrectly diagnosed by the human readers. The main limitation of the study was its small sample size.

Haarburger et al. [ 26 ] performed an analysis of 3D whole-volume images on a larger cohort ( N  = 408 patients), yielding an AUC of up to 0.89 and accuracy of 0.81, further establishing the feasibility of using whole 3D DCE images. Their method fed DCE images from five timepoints (one pre-contrast and four post-contrast) and T2-weighted images to the algorithms. The multiscale curriculum ensemble consisted of a 3D CNN that generates feature maps and a classification component that classifies the aggregated feature maps. Depending on the CNN model used, AUCs ranged from 0.50 to 0.89. Multiscale curriculum training improved a simple 3D ResNet18 from an AUC of 0.50 to an AUC of 0.89 (ResNet18 curriculum). A radiologist with 2 years of experience demonstrated an AUC of 0.93 and accuracy of 0.93. An advantage of the multiscale curriculum ensemble is that it eliminates the need for pixelwise segmentation of individual lesions: only coarse localization coordinates (in 3D here) are needed for Stage 1 training, and one global label per breast for Stage 2 training, which in this study involved predictions on whole 3D images. The high performance of this model can be attributed to the large amount of context and global information provided. Their 3D data use whole breast volumes without time-consuming and cost-prohibitive lesion segmentation. A major drawback of 3D images is that they require more RAM and many patients to train the model.

Herent et al. [ 27 ] used T1-weighted fat-suppressed post-contrast MRI in a CNN model that detected and then characterized lesions ( N  = 335). Lesion characterization consisted of diagnosing malignancy and classifying the lesion. The model, therefore, performed three tasks, making it a multitask technique, which limits overfitting. A ResNet50 neural network performed feature extraction from the images, and the algorithm’s attention block learned to detect abnormalities. In a second branch, features were averaged over the selected regions and then fitted to a logistic regression to produce the output. On an independent test set of 168 images, a weighted mean AUC of 0.816 was achieved. The training dataset comprised 17 different histopathologies, most represented as very small percentages of the whole dataset of 335; several lesion types represented less than 1% of the training data, which invites overfitting. The authors note that validating the algorithm on 3D images in an independent dataset, rather than the single 2D images they used, would show whether the model is generalizable, and that training on larger databases and with multiparametric MRI would likely increase accuracy. This study shows good performance of a supervised attention model with deep learning for breast MRI.

Hu et al. [ 28 ] used multiparametric MR images (a DCE-MRI sequence and a T2-weighted sequence) in a CNN model including 616 patients with 927 unique breast lesions, 728 of which were malignant. A pre-trained CNN extracted features from both the DCE and T2w sequences, and support vector machine classifiers labeled the depicted lesions as benign or malignant. The sequences were integrated at different levels using image fusion, feature fusion, and classifier fusion. Feature fusion from multiparametric sequences outperformed DCE-MRI alone, with an AUC of 0.87, sensitivity of 0.78, and specificity of 0.79. CNN models that combined separate T2w and DCE images into RGB images, or that aggregated the probability-of-malignancy outputs of the DCE and T2w classifiers, did not perform significantly better than the DCE-only CNN model. Although other studies have demonstrated that single-sequence MRI is sufficient for high CNN performance, this study demonstrates that multiparametric MRI (as fusion of features from DCE-MRI and T2-weighted MRI) offers enough information to outperform single-sequence MRI.
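
Feature-level fusion of this kind amounts to concatenating per-sequence feature vectors before classification. A toy numpy sketch with synthetic features and a stand-in logistic classifier (Hu et al. used an SVM; the features and labels here are fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in CNN feature vectors for each lesion (as if extracted by a
# pre-trained backbone), one set per MRI sequence.
n_lesions, n_feat = 200, 32
feat_dce = rng.normal(size=(n_lesions, n_feat))
feat_t2w = rng.normal(size=(n_lesions, n_feat))
labels = (feat_dce[:, 0] + feat_t2w[:, 0] > 0).astype(float)  # toy labels

# Feature-level fusion: concatenate per-sequence features into one vector.
fused = np.concatenate([feat_dce, feat_t2w], axis=1)  # (200, 64)

# Stand-in linear classifier (logistic regression via gradient descent);
# any classifier, including an SVM, fits this fusion scheme.
w = np.zeros(fused.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-fused @ w))
    w -= 0.1 * fused.T @ (p - labels) / n_lesions

pred = (1.0 / (1.0 + np.exp(-fused @ w)) > 0.5).astype(float)
accuracy = (pred == labels).mean()
```

With real data the feature vectors would come from a pre-trained CNN applied to each sequence's lesion crop.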

Li et al. [ 29 ] used 3D CNNs on DCE-MR images to differentiate between benign and malignant tumors in 143 patients. For the 2D and 3D DCE-MRI inputs, a region-of-interest (ROI) and volume-of-interest (VOI) were segmented, and enhancement ratios for each MR series were calculated. The AUC of 0.801 for the 3D CNN was higher than the 0.739 for the 2D CNN, and the 3D CNN also achieved higher accuracy, sensitivity, and specificity (0.781, 0.744, and 0.823, respectively). The DCE-MRI enhancement maps yielded higher accuracy because they carry more diagnostic information. These results demonstrate that 3D CNNs on breast MR imaging can be used for breast cancer detection and can reduce manual feature extraction.
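
Per-voxel enhancement ratios of the kind mentioned here can be computed directly from pre- and post-contrast signals; a sketch with synthetic signals (the exact ratio definitions used in the paper may differ):

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 1e-6  # avoid division by zero in non-enhancing voxels

# Illustrative single-slice DCE signals: pre-contrast, early
# post-contrast, and delayed post-contrast.
s_pre = rng.uniform(100, 200, size=(64, 64))
s_early = s_pre * rng.uniform(1.0, 2.5, size=(64, 64))
s_delay = s_pre * rng.uniform(1.0, 2.5, size=(64, 64))

# Per-voxel enhancement ratio relative to the pre-contrast baseline.
enhancement = (s_early - s_pre) / (s_pre + eps)

# Signal enhancement ratio (SER): early uptake relative to delayed
# uptake, a common washout descriptor in DCE-MRI.
ser = (s_early - s_pre) / (s_delay - s_pre + eps)
```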

Liu et al. [ 30 ] used a CNN to detect breast cancer on T1 DCE-MRI images from 438 patients, 131 from the I-SPY clinical trials and 307 from Columbia University. Segmentation was performed through an automated fuzzy C-means process after seed points were manually indicated. This study included analysis of commonly excluded image features such as background parenchymal enhancement, slice images of breast MRI, and axilla/axillary lymph node involvement. The methods also minimized pixel-level annotation to maximize automation of visual interpretation. These choices increased efficiency, decreased subjective bias, and allowed complete evaluation of the whole image. Obtaining images at multiple timepoints from multiple institutions made the algorithm more generalizable. The CNN model achieved an AUC of 0.92, accuracy of 0.94, sensitivity of 0.74, and specificity of 0.95.
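
Fuzzy C-means assigns each voxel a soft membership in every cluster rather than a hard label, which suits partial-volume boundaries. A minimal 1-D implementation on toy intensities (not the authors' code) might look like:

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=50):
    """Minimal fuzzy C-means on 1-D intensities x of shape (n,)."""
    # Initialize centers spread across the intensity range.
    centers = np.linspace(x.min(), x.max(), n_clusters)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        # Center update: fuzzily weighted mean of the intensities.
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return centers, u

# Toy intensities: dim background near 10, bright "lesion" near 100.
x = np.concatenate([np.full(50, 10.0), np.full(20, 100.0)])
x += np.random.default_rng(1).normal(0, 1, size=x.shape)
centers, memberships = fuzzy_c_means(x)
```

In practice the manually placed seed points would initialize or constrain the clusters; here the centers are simply seeded at the intensity extremes.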

Marrone et al. [ 31 ] used CNNs to evaluate 42 malignant and 25 benign lesions in 42 women. ROIs were obtained by an experienced radiologist, and manual segmentation was performed. Accuracy of up to 0.76 was achieved. An AUC as high as 0.76 was seen with pre-trained AlexNet, versus 0.73 when fine-tuning pre-trained AlexNet with the last trained layers replaced by untrained layers; the latter method could reduce the number of training images needed. In a third configuration, AlexNet pre-trained on the ImageNet database was used to extract a feature vector from the last internal CNN layer, on which a new supervised classifier was trained; this yielded the lowest AUC of 0.68 and accuracy of 0.55.

Rasti et al. [ 32 ] analyzed DCE-MRI subtraction images from MRI studies ( N  = 112) using a mixture ensemble CNN (ME-CNN) functioning as a CAD system to distinguish benign from malignant masses, achieving 0.96 accuracy. The ME-CNN is a modular, image-based ensemble that can stochastically partition the high-dimensional image space through simultaneous and competitive learning of its modules. It also has the advantages of fast execution in both training and testing and a compact structure with a small number of free parameters. Among several promising directions, the ME-CNN approach could be extended to the pre-processing stage by combining it with recent advances in fully autonomous CNNs for semantic segmentation.

Truhn et al. [ 33 ] used T2-weighted images with one pre-contrast and four post-contrast DCE images in 447 patients with 1294 enhancing lesions (787 malignant and 507 benign), manually segmented by a breast radiologist. Deep learning with a CNN demonstrated an AUC of 0.88, inferior to prospective interpretation by one of three breast radiologists (7–25 years of experience) reading cases in equal proportion (AUC 0.98). When only half of the dataset was used for training ( n  = 647), the AUC was 0.83. The authors speculate that training on a greater number of cases could improve the model’s performance.

Wu et al. [ 34 ] trained a CNN model to detect lesions on DCE T1-weighted images from 130 patients, 71 with malignant lesions and 59 with benign tumors. A fuzzy C-means clustering-based algorithm automatically segmented 3D tumor volumes from the DCE images after rectangular regions-of-interest were placed by an expert radiologist. An objective of the study was to demonstrate that single-sequence MRI at multiple timepoints provides sufficient information for CNN models to accurately classify lesions.

Yurttakal et al. [ 35 ] utilized DCE images of 98 benign and 102 malignant lesions, producing 0.98 accuracy, 1.00 sensitivity, and 0.96 specificity. The multi-layer CNN architecture consisted of six groups of convolutional, batch-normalization, and rectified linear unit (ReLU) activation layers, with five max-pooling layers, followed by one dropout layer, one fully connected layer, and one softmax layer.

Zheng et al. [ 36 ] used a dense convolutional long short-term memory network (DC-LSTM) on a dataset of lesions obtained through a university hospital ( N  = 72). The method, inspired by DenseNet and built on convolutional LSTM, first uses a three-layer convolutional LSTM to encode DCE-MRI as sequential data and extract time-intensity information, then adds a simplified dense block to reduce the amount of information being processed and improve feature reuse. This lowered the variance and improved accuracy. Compared to a ResNet-50 model trained only on the main task, the combined DC-LSTM + ResNet model improved accuracy from 0.625 to 0.847 on the same dataset. Additionally, the authors proposed a latent-attributes method to efficiently use the information in diagnostic reports and accelerate the convergence of the network.

Jiejie Zhou et al. [ 37 ] evaluated 133 lesions (91 malignant and 62 benign) using ResNet50, similar to the ResNet18 used by Truhn et al. [ 33 ] and Haarburger et al. [ 26 ]. Their investigation demonstrated that deep learning produced higher accuracy than ROI-based and radiomics-based models in distinguishing benign from malignant lesions. They compared metrics across five different bounding boxes and found that the tumor alone and smaller bounding boxes yielded the highest AUC of 0.97–0.99. They also found that including a small amount of peritumoral tissue improved accuracy (0.91 in the testing dataset) compared to tumor-alone boxes or larger input boxes that include tissue more remote from the tumor. The tumor microenvironment influences tumor growth, and the tumor itself can alter its microenvironment to become more supportive of growth; the immediate peritumoral tissue, which includes the tumor microenvironment, was therefore important in guiding the CNN to accurately differentiate benign from malignant tumors. This dynamic peritumoral ecosystem can be influenced by the tumor directing heterogeneous cells to aggregate and promote angiogenesis, chronic inflammation, tumor growth, and invasion. Recognizing features displayed by biomarkers of the tumor microenvironment may help to identify and grade the aggressiveness of a lesion. This complex interaction between the tumor and its microenvironment may also be a predictor of outcomes and should be included in DL models that require segmentation. In DL models using whole images without segmentation, the peritumoral tissue is already included, precluding the need for precise bounding boxes.
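
Including a fixed peritumoral margin can be sketched as a simple bounding-box expansion clipped to the image bounds (the margin and image size below are arbitrary, not the values used in the study):

```python
def expand_box(box, margin, height, width):
    """Expand an (x1, y1, x2, y2) box by `margin` pixels on each side,
    clipped to the image bounds, to include peritumoral tissue."""
    x1, y1, x2, y2 = box
    return (max(0, x1 - margin), max(0, y1 - margin),
            min(width, x2 + margin), min(height, y2 + margin))

tumor_box = (30, 40, 60, 70)  # tumor-alone annotation
peri_box = expand_box(tumor_box, margin=8, height=128, width=128)
print(peri_box)  # → (22, 32, 68, 78)
```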

Juan Zhou et al. [ 38 ] used 3D deep learning models to classify and localize malignancy in breast MRI cases ( N  = 1537). Deep 3D densely connected networks were trained under image-level (weak) supervision. Because 3D weakly supervised approaches were not well studied compared to 2D methods, the purpose of this study was to develop a 3D deep learning model that could distinguish malignant cancers from benign lesions and localize the cancer. Both model configurations, global average pooling (GAP) and global max-pooling (GMP), achieved over 0.80 accuracy, with AUCs of 0.856 (GMP) and 0.858 (GAP), demonstrating the effectiveness of the 3D DenseNet method for diagnosing breast cancer on MRI scans. The model ensemble achieved an AUC of 0.859.

Summary of lessons learned

Most studies were single-center studies, but they came from around the world, the majority from the US, Asia, and Europe. All studies except one [ 33 ] were retrospective. The sample size of each study ranged from 42 to 690 patients, generally small for DL analysis. Sample sizes for patients with benign and malignant lesions were comparable and were not skewed toward either normal or malignant cases, suggesting that these datasets were not from high-risk screening populations, because a high-risk screening dataset would have a very low (typically < 5%) proportion of positive cases.

Image types

Most studies used private datasets as their image source. The I-SPY 1 data were the only public dataset noted ( https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=20643859 ). Most studies involved DCE data acquisition, but most analyses included only a single post-contrast MRI. Those that used multiple post-contrast dynamics mostly fed each dynamic into a separate independent channel, which does not optimally exploit the relationships between imaging dynamics. Some studies used subtraction of post- and pre-contrast images or the signal enhancement ratio (SER) [ 24 , 32 , 35 ]. Three studies utilized MIP DCE images to minimize computational cost [ 13 , 24 , 25 ]. However, collapsing images by MIP has drawbacks (e.g., enhancing vascular structures collapsed into a single plane may be mistaken for cancer). Only five studies [ 23 , 26 , 28 , 33 , 36 ] utilized multiparametric data types (i.e., DCE, T2-weighted, and DWI). Although combining multiple types of MRI should improve performance, this has not been conclusively demonstrated in practice.

Types of DL architectures

RetinaNet and KFLI are optimized for object detection, while VGGNet, InceptionResNet, and AlexNet are designed for image classification (see reviews [ 16 , 17 , 39 ]). LSTM is used for time-series modeling. DenseNet, on the other hand, can be used for a wide range of tasks, including image classification, object detection, and semantic segmentation. Ensemble methods, which combine multiple models, are useful for boosting the overall performance of a system. U-Net and R-Net are specialized deep learning models for semantic segmentation tasks in medical image analysis. U-Net uses an encoder–decoder architecture to segment images into multiple classes, while R-Net is a residual network that improves the accuracy and efficiency of the segmentation task.

The most commonly used algorithms are CNNs or CNN-based models. There is no consensus that certain algorithms are better than others; because different algorithms were tested on different datasets, it is not possible to conclude that a particular DL architecture performs best, and performance indices could be misleading. Careful comparison of multiple algorithms on the same datasets is needed. We have, therefore, only discussed the potential advantages and disadvantages of each DL architecture.

Although each model has its own architecture and design principles, most of the above methods utilize convolutional layers, pooling layers, activation functions, and regularization techniques (such as dropout and batch normalization) for model optimization. Additionally, pre-trained models and transfer learning have become increasingly popular, allowing knowledge learned from large datasets such as ImageNet to be leveraged to improve performance on smaller, specialized datasets. However, the literature on transfer learning for breast cancer MRI detection is limited. A relatively new deep learning architecture, the transformer, has found exciting applications in medical imaging [ 40 , 41 ].

Ground truths

Ground truths were based on pathology (i.e., benign versus malignant), radiology reports, radiologist annotation (an ROI contoured on images), or a bounding box, with reference to pathology or clinical follow-up (i.e., absence of a positive clinical diagnosis). While the gold standard is pathology, imaging or clinical follow-up without adverse change over a prescribed period has been used as empiric evidence of non-malignancy and is an acceptable form of ground truth.

Only four of the 18 studies provided heatmaps of the regions that the DL algorithms considered important. Heatmaps present these data visually in color, showing whether the area of activity makes sense anatomically or is artifactual (e.g., a biopsy clip, a motion artifact, or a region outside the breast). Heatmaps are important for the interpretability and explainability of DL outputs.
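
Gradient-based heatmaps require access to the trained network, but a model-agnostic occlusion-sensitivity map illustrates the same idea: mask each region and record how much the classifier score drops. A toy sketch with a stand-in scoring function in place of a trained CNN:

```python
import numpy as np

def occlusion_heatmap(image, score_fn, patch=8):
    """Model-agnostic saliency: occlude each patch and record how much
    the classifier score drops; large drops mark influential regions."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy "classifier": scores an image by mean intensity in a bright
# lesion-like region (a stand-in for a CNN's malignancy probability).
image = np.zeros((32, 32))
image[8:16, 8:16] = 1.0  # bright "lesion"
score = lambda img: img[8:16, 8:16].mean()

heat = occlusion_heatmap(image, score)
# The hottest cell coincides with the lesion location.
```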

Performance

All studies included some performance indices, most commonly AUC, accuracy, sensitivity, and specificity. AUC ranged from 0.5 to 1.0, with the majority around 0.8–0.9; other metrics also varied widely. DL training methods varied and included the leave-one-out method, the hold-out method, and dataset splitting (such as 80%/20% training/testing) with cross validation. Most studies utilized five- or tenfold cross validation for performance evaluation, but some used a single hold-out, and some did not include cross validation. Cross validation is important to avoid unintentional skewing of results caused by how the data are partitioned for training and testing, and different training methods could affect performance. These metrics should be interpreted with caution given possible reporting bias, small sample sizes, and overfitting, among other issues. High performance indices are necessary for clinical adoption of a DL algorithm, but they are not sufficient: other measures, such as heatmaps, and accumulated experience to build trust are needed for widespread clinical adoption.
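
The k-fold scheme used by most of these studies can be sketched in a few lines of plain Python: shuffle the indices once, cut them into k folds, and let each fold serve as the test set exactly once:

```python
import random

def k_fold_splits(n_samples, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs so that every sample is
    tested exactly once across the k splits."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds if f is not test for j in f]
        yield train, test

# Every sample lands in exactly one test fold.
all_test = sorted(j for _, test in k_fold_splits(100, k=5) for j in test)
assert all_test == list(range(100))
```

Averaging the metric over the k test folds reduces the dependence of the reported performance on any single train/test partition.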

DL detection of axillary lymph node involvement

Accurate assessment of axillary lymph node involvement in breast cancer patients is also essential for prognosis and treatment planning [ 42 , 43 ]. Current radiological staging of nodal metastasis has poor accuracy. DL detection of lymph node involvement is challenging because of the small size of lymph nodes and the difficulty of obtaining ground truths. Only a few studies have reported the use of DL to detect lymph node involvement [ 44 , 45 , 46 ].

Challenges for DL to achieve routine clinical applications

Although deep learning is a promising tool in the diagnosis of breast cancer, several challenges need to be addressed before it can be broadly adopted in routine clinical practice.

Data availability: One of the major challenges in medical image diagnosis (and breast cancer MRI in particular) is the availability of large, diverse, and well-annotated datasets. Deep learning models require a large amount of high-quality data to learn from, but, in many cases, medical datasets are small and imbalanced. High-quality annotations are essential yet time-consuming and costly to obtain: annotating medical images requires specialized expertise, and there may be inconsistencies between experts, which hinders building accurate and generalizable models. Medical image datasets can also lack diversity, which leads to biased models; for example, a private dataset from a single institution with inadequate representation of racial or ethnic subgroups may not be broadly generalizable. Publicly available data are unfortunately limited; one source is the Cancer Imaging Archive (cancerimagingarchive.net). Collaborative training of DL models across institutions without sharing raw data, and thus without breaching privacy, can be accomplished with federated learning [ 47 ].
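
The core of federated learning is that only model weights, never raw images, leave each institution; a central server aggregates them, classically via FedAvg's sample-size-weighted average. A toy numpy sketch of one aggregation round (the weights and site sizes are fabricated for illustration):

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """One FedAvg aggregation round: average the model weights from
    several sites, weighted by each site's sample count, so that raw
    images never leave the institutions."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Toy model weights from three hospitals after local training.
w_a = np.array([1.0, 2.0])
w_b = np.array([3.0, 4.0])
w_c = np.array([5.0, 6.0])
global_w = federated_average([w_a, w_b, w_c], site_sizes=[100, 100, 200])
print(global_w)  # → [3.5 4.5]
```

In a real deployment this round would repeat: the server broadcasts `global_w` back to the sites, each site trains locally, and the updated weights are aggregated again.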

Interpretability, explainability, and generalizability [ 48 ]: Deep learning models are often seen as “black boxes” that are difficult to interpret. This is especially problematic in medical image diagnosis, where it is important to understand why a particular diagnosis was made. Recent research has focused on methods to explain the decision-making process of deep learning models, such as attention mechanisms or heatmaps highlighting relevant regions of the MRI image. Despite these efforts, the explainability of these models remains limited [ 49 ], which can make it difficult for clinicians to understand and trust a model's decision. Deep learning models may also perform well on the datasets on which they were trained but fail to generalize to new datasets or to patients with different characteristics, complicating deployment in real-world settings.

Ethical concerns: Deep learning models can be used to inform life-or-death decisions, such as the diagnosis of cancer. This raises ethical concerns about the safety, responsibility, privacy, fairness, and transparency of these models [ 50 ]. There are also social implications (including but not limited to equity) of using artificial intelligence in health care. These issues need to be addressed as we develop increasingly powerful DL algorithms.

Perspectives and conclusions

Artificial intelligence has the potential to revolutionize breast cancer screening and diagnosis, helping radiologists to be more efficient and more accurate and ultimately leading to better patient outcomes. It can also help reduce the need for biopsy and for unnecessary testing and treatment. However, several challenges preclude broad clinical deployment to date. Large, diverse, and well-annotated image datasets need to be readily available for research, and deep learning results need to become more accurate, interpretable, explainable, and generalizable. One future research direction is incorporating other clinical data and risk factors into the models, such as age, family history, or genetic mutations, to improve diagnostic accuracy and enable personalized medicine. Another is assessing the impact of deep learning on health outcomes, which would encourage more investment from hospital administrators and other stakeholders. Finally, it is important to address the ethical, legal, and social implications of using artificial intelligence.

Availability of data and materials

Not applicable.

Saslow D, Boetes C, Burke W, Harms S, Leach MO, Lehman CD, Morris E, Pisano E, Schnall M, Sener S, et al. American Cancer Society guidelines for breast screening with MRI as an adjunct to mammography. CA Cancer J Clin. 2007;57(2):75–89.

Feig S. Comparison of costs and benefits of breast cancer screening with mammography, ultrasonography, and MRI. Obstet Gynecol Clin North Am. 2011;38(1):179–96.

Kumar NA, Schnall MD. MR imaging: its current and potential utility in the diagnosis and management of breast cancer. Magn Reson Imaging Clin N Am. 2000;8(4):715–28.

Lehman CD, Smith RA. The role of MRI in breast cancer screening. J Natl Compr Canc Netw. 2009;7(10):1109–15.

Mann RM, Kuhl CK, Moy L. Contrast-enhanced MRI for breast cancer screening. J Magn Reson Imaging. 2019;50(2):377–90.

Batchu S, Liu F, Amireh A, Waller J, Umair M. A review of applications of machine learning in mammography and future challenges. Oncology. 2021;99(8):483–90.

Wuni AR, Botwe BO, Akudjedu TN. Impact of artificial intelligence on clinical radiography practice: futuristic prospects in a low resource setting. Radiography (Lond). 2021;27(Suppl 1):S69–73.

Skegg D, Paul C, Benson-Cooper D, Chetwynd J, Clarke A, Fitzgerald N, Gray A, St George I, Simpson A. Mammographic screening for breast cancer: prospects for New Zealand. N Z Med J. 1988;101(852):531–3.

Wood DA, Kafiabadi S, Busaidi AA, Guilhem E, Montvila A, Lynch J, Townend M, Agarwal S, Mazumder A, Barker GJ, et al. Deep learning models for triaging hospital head MRI examinations. Med Image Anal. 2022;78: 102391.

Mahoro E, Akhloufi MA. Applying deep learning for breast cancer detection in radiology. Curr Oncol. 2022;29(11):8767–93.

Deo RC, Nallamothu BK. Learning about machine learning: the promise and pitfalls of big data and the electronic health record. Circ Cardiovasc Qual Outcomes. 2016;9(6):618–20.

Meyer-Base A, Morra L, Tahmassebi A, Lobbes M, Meyer-Base U, Pinker K. AI-enhanced diagnosis of challenging lesions in breast mri: a methodology and application primer. J Magn Reson Imaging. 2021;54(3):686–702.

Adachi M, Fujioka T, Mori M, Kubota K, Kikuchi Y, Xiaotong W, Oyama J, Kimura K, Oda G, Nakagawa T, et al. Detection and diagnosis of breast cancer using artificial intelligence based assessment of maximum intensity projection dynamic contrast-enhanced magnetic resonance images. Diagnostics (Basel). 2020;10(5):330.

LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44.

Koh DM, Papanikolaou N, Bick U, Illing R, Kahn CE Jr, Kalpathi-Cramer J, Matos C, Marti-Bonmati L, Miles A, Mun SK, et al. Artificial intelligence and machine learning in cancer imaging. Commun Med (Lond). 2022;2:133.

Oza P, Sharma P, Patel S, Bruno A. A bottom-up review of image analysis methods for suspicious region detection in mammograms. J Imaging. 2021;7(9):190.

Saba T. Recent advancement in cancer detection using machine learning: systematic survey of decades, comparisons and challenges. J Infect Public Health. 2020;13(9):1274–89.

Hu Q, Giger ML. Clinical artificial intelligence applications: breast imaging. Radiol Clin North Am. 2021;59(6):1027–43.

Sechopoulos I, Teuwen J, Mann R. Artificial intelligence for breast cancer detection in mammography and digital breast tomosynthesis: state of the art. Semin Cancer Biol. 2021;72:214–25.

Sheth D, Giger ML. Artificial intelligence in the interpretation of breast cancer on MRI. J Magn Reson Imaging. 2020;51(5):1310–24.

Khan N, Adam R, Huang P, Maldjian T, Duong TQ. Deep learning prediction of pathologic complete response in breast cancer using MRI and other clinical data: a systematic review. Tomography. 2022;8(6):2784–95.

Ayatollahi F, Shokouhi SB, Mann RM, Teuwen J. Automatic breast lesion detection in ultrafast DCE-MRI using deep learning. Med Phys. 2021;48(10):5897–907.

Feng H, Cao J, Wang H, Xie Y, Yang D, Feng J, Chen B. A knowledge-driven feature learning and integration method for breast cancer diagnosis on multi-sequence MRI. Magn Reson Imaging. 2020;69:40–8.

Antropova N, Abe H, Giger ML. Use of clinical MRI maximum intensity projections for improved breast lesion classification with deep convolutional neural networks. J Med Imaging (Bellingham). 2018;5(1): 014503.

Fujioka T, Yashima Y, Oyama J, Mori M, Kubota K, Katsuta L, Kimura K, Yamaga E, Oda G, Nakagawa T, et al. Deep-learning approach with convolutional neural network for classification of maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging. Magn Reson Imaging. 2021;75:1–8.

Haarburger C, Baumgartner M, Truhn D, Broeckmann M, Schneider H, Schrading S, Kuhl C, Merhof D: Multi scale curriculum CNN for context-aware breast mri malignancy classification. In: Medical Image Computing and Computer Assisted Intervention—MICCAI ; 2019: 495–503.

Herent P, Schmauch B, Jehanno P, Dehaene O, Saillard C, Balleyguier C, Arfi-Rouche J, Jegou S. Detection and characterization of MRI breast lesions using deep learning. Diagn Interv Imaging. 2019;100(4):219–25.

Hu Q, Whitney HM, Giger ML. A deep learning methodology for improved breast cancer diagnosis using multiparametric MRI. Sci Rep. 2020;10(1):10536.

Li J, Fan M, Zhang J, Li L: Discriminating between benign and malignant breast tumors using 3D convolutional neural network in dynamic contrast enhanced-MR images. In: SPIE Medical Imaging : SPIE; 2017: 10138.

Liu MZ, Swintelski C, Sun S, Siddique M, Desperito E, Jambawalikar S, Ha R. Weakly supervised deep learning approach to breast MRI assessment. Acad Radiol. 2022;29(Suppl 1):S166–72.

Marrone S, Piantadosi G, Fusco R, Petrillo A, Sansone M, Sansone C. An investigation of deep learning for lesions malignancy classification in breast DCE-MRI. In: International Conference on Image Analysis and Processing (ICIAP); 2017. p. 479–89.

Rasti R, Teshnehlab M, Phung SL. Breast cancer diagnosis in DCE-MRI using mixture ensemble of convolutional neural networks. Pattern Recogn. 2017;72:381–90.

Truhn D, Schrading S, Haarburger C, Schneider H, Merhof D, Kuhl C. Radiomic versus convolutional neural networks analysis for classification of contrast-enhancing lesions at multiparametric breast MRI. Radiology. 2019;290(2):290–7.

Wu W, Wu J, Dou Y, Rubert N, Wang Y. A deep learning fusion model with evidence-based confidence level analysis for differentiation of malignant and benign breast tumors using dynamic contrast enhanced MRI. Biomed Signal Process Control. 2022;72: 103319.

Yurttakal AH, Erbay H, İkizceli T, Karaçavuş S. Detection of breast cancer via deep convolution neural networks using MRI images. Multimed Tools Appl. 2020;79:15555–73.

Zheng H, Gu Y, Qin Y, Huang X, Yang J, Yang G-Z. Small lesion classification in dynamic contrast enhancement MRI for breast cancer early detection. In: International Conference on Medical Image Computing and Computer-Assisted Intervention; 2018.

Zhou J, Zhang Y, Chang KT, Lee KE, Wang O, Li J, Lin Y, Pan Z, Chang P, Chow D, et al. Diagnosis of benign and malignant breast lesions on DCE-MRI by using radiomics and deep learning with consideration of peritumor tissue. J Magn Reson Imaging. 2020;51(3):798–809.

Zhou J, Luo LY, Dou Q, Chen H, Chen C, Li GJ, Jiang ZF, Heng PA. Weakly supervised 3D deep learning for breast cancer classification and localization of the lesions in MR images. J Magn Reson Imaging. 2019;50(4):1144–51.

Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, Santamaria J, Fadhel MA, Al-Amidie M, Farhan L. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. J Big Data. 2021;8(1):53.

Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal. 2023;85: 102762.

Moutik O, Sekkat H, Tigani S, Chehri A, Saadane R, Tchakoucht TA, Paul A. Convolutional neural networks or vision transformers: Who will win the race for action recognitions in visual data? Sensors (Basel). 2023;23(2):734.

Chang JM, Leung JWT, Moy L, Ha SM, Moon WK. Axillary nodal evaluation in breast cancer: state of the art. Radiology. 2020;295(3):500–15.

Zhou P, Wei Y, Chen G, Guo L, Yan D, Wang Y. Axillary lymph node metastasis detection by magnetic resonance imaging in patients with breast cancer: a meta-analysis. Thorac Cancer. 2018;9(8):989–96.

Ren T, Cattell R, Duanmu H, Huang P, Li H, Vanguri R, Liu MZ, Jambawalikar S, Ha R, Wang F, et al. Convolutional neural network detection of axillary lymph node metastasis using standard clinical breast MRI. Clin Breast Cancer. 2020;20(3):e301–8.

Ren T, Lin S, Huang P, Duong TQ. Convolutional neural network of multiparametric MRI accurately detects axillary lymph node metastasis in breast cancer patients with pre neoadjuvant chemotherapy. Clin Breast Cancer. 2022;22(2):170–7.

Golden JA. Deep learning algorithms for detection of lymph node metastases from breast cancer: helping artificial intelligence be seen. JAMA. 2017;318(22):2184–6.

Gupta S, Kumar S, Chang K, Lu C, Singh P, Kalpathy-Cramer J. Collaborative privacy-preserving approaches for distributed deep learning using multi-institutional data. Radiographics. 2023;43(4): e220107.

Miotto R, Wang F, Wang S, Jiang X, Dudley JT. Deep learning for healthcare: review, opportunities and challenges. Brief Bioinform. 2018;19(6):1236–46.

Holzinger A, Langs G, Denk H, Zatloukal K, Muller H. Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip Rev Data Min Knowl Discov. 2019;9(4): e1312.

Smallman M. Multi scale ethics: why we need to consider the ethics of AI in healthcare at different scales. Sci Eng Ethics. 2022;28(6):63.

Author information

Authors and Affiliations

Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA

Richard Adam, Kevin Dell’Aquila, Laura Hodges, Takouhie Maldjian & Tim Q. Duong

Contributions

RA performed literature search, analyzed data, and wrote paper. KD performed literature search, analyzed data, and edited paper. LH analyzed literature and edited paper. TM analyzed literature and edited paper. TQD wrote and edited paper, and supervised.

Corresponding author

Correspondence to Tim Q. Duong.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Adam R, Dell'Aquila K, Hodges L, et al. Deep learning applications to breast cancer detection by magnetic resonance imaging: a literature review. Breast Cancer Res 25, 87 (2023). https://doi.org/10.1186/s13058-023-01687-4

Received: 09 March 2023

Accepted: 11 July 2023

Published: 24 July 2023

DOI: https://doi.org/10.1186/s13058-023-01687-4

  • Machine learning
  • Artificial intelligence
  • Texture feature analysis
  • Convolutional neural network
  • Dynamic contrast enhancement

Breast Cancer Research

ISSN: 1465-542X

A Systematic Review on Breast Cancer Detection Using Deep Learning Techniques

  • Review article
  • Published: 25 April 2022
  • Volume 29, pages 4599–4629 (2022)

  • Kamakshi Rautela,
  • Dinesh Kumar &
  • Vijay Kumar (ORCID: orcid.org/0000-0002-3460-6989)

Breast cancer is a common health problem in women, with roughly one in eight women developing the disease in her lifetime. Many women forgo diagnostic screening out of concern over exposure to ionizing radiation, and existing screening techniques involve trade-offs among invasiveness, radiation safety, and the specificity of tumor diagnosis. Deep learning techniques are now widely used in medical imaging. This paper provides a detailed survey of breast cancer screening techniques, including their pros and cons, and examines the applicability of deep learning to breast cancer detection. Performance measures and breast cancer datasets are also investigated, along with future research directions. The primary aim is to provide a comprehensive study of this field and to help motivate innovative researchers.

Bahramiabarghouei H, Porter E, Santorelli A, Gosselin B, Popovíc M, Rusch LA (2015) Flexible 16 antenna array for microwave breast cancer detection. IEEE Trans Biomed Eng 62(10):2516–2525. https://doi.org/10.1109/TBME.2015.2434956

Xu J et al (2016) Stacked sparse autoencoder (SSAE) for nuclei detection on breast cancer histopathology images. IEEE Trans Med Imaging 35(1):119–130. https://doi.org/10.1109/TMI.2015.2458702

National Cancer Registry Programme (2008) Report of population based cancer registries 2012–2018. National Centre for Disease Informatics and Research, Indian Council of Medical Research, Bangalore. https://www.ncdirindia.org/Reports.aspx. Accessed 3 Dec 2020

Eismann J et al (2019) Interdisciplinary management of transgender individuals at risk for breast cancer: case reports and review of the literature. Clin Breast Cancer 19(1):e12–e19. https://doi.org/10.1016/j.clbc.2018.11.007

Stone JP, Hartley RL, Temple-Oberle C (2018) Breast cancer in transgender patients: a systematic review. Part 2: female to male. Eur J Surg Oncol. https://doi.org/10.1016/j.ejso.2018.06.021

Gooren LJ, van Trotsenburg MAA, Giltay EJ, van Diest PJ (2013) Breast cancer development in transsexual subjects receiving cross-sex hormone treatment. J Sex Med 10(12):3129–3134. https://doi.org/10.1111/jsm.12319

De Blok CJM et al (2019) Breast cancer risk in transgender people receiving hormone treatment: Nationwide cohort study in the Netherlands. BMJ. https://doi.org/10.1136/bmj.l1652

Nikolic DV et al (2012) Importance of revealing a rare case of breast cancer in a female to male transsexual after bilateral mastectomy. World J Surg Oncol 10:2–5. https://doi.org/10.1186/1477-7819-10-280

Chen D, Huang M, Li W (2019) Knowledge-powered deep breast tumor classification with multiple medical reports. IEEE/ACM Trans Comput Biol Bioinf. https://doi.org/10.1109/tcbb.2019.2955484

Duffy SW et al (2002) The impact of organized mammography service screening on breast carcinoma mortality in seven Swedish counties: a collaborative evaluation. Cancer 95(3):458–469. https://doi.org/10.1002/cncr.10765

Watine J (2002) Prognostic factors for patients with small lung carcinoma. Cancer 94(2):576–578. https://doi.org/10.1002/cncr.10243

Duffy S, Tabár L, Smith RA (2002) The mammographic screening trials: commentary on the recent work by Olsen and Gøtzsche. J Surg Oncol 81(4):68–71. https://doi.org/10.1002/jso.10193

Ferlay J et al (2015) Cancer incidence and mortality worldwide: sources, methods and major patterns in GLOBOCAN 2012. Int J cancer 136(5):E359–E386

US Preventive Services Task Force (2002) Clinical guidelines. Ann Intern Med 137(11):917–933

Boyd NF et al (2006) Body size, mammographic density, and breast cancer risk. Cancer Epidemiol Biomarkers Prev 15(11):2086–2092. https://doi.org/10.1158/1055-9965.EPI-06-0345

Carney PA, Miglioretti DL, Yankaskas BC, Kerlikowske K, Rosenberg R, Rutter CM (2003) Erratum: Individual and combined effects of age, breast density, and hormone replacement therapy use on the accuracy of screening mammography (Ann Intern Med 138:168–175). Ann Intern Med 138(9):771. https://doi.org/10.7326/0003-4819-138-9-200305060-00025

Elter M, Horsch A (2009) CADx of mammographic masses and clustered microcalcifications: a review. Med Phys 36(6):2052–2068. https://doi.org/10.1118/1.3121511

Marmot MG, Altman DG, Cameron DA, Dewar JA, Thompson SG, Wilcox M (2013) The benefits and harms of breast cancer screening: an independent review. Br J Cancer 108(11):2205–2240. https://doi.org/10.1038/bjc.2013.177

Gu X et al (2020) Age-associated genes in human mammary gland drive human breast cancer progression. Breast Cancer Res 22(1):1–15. https://doi.org/10.1186/s13058-020-01299-2

Ferguson NL et al (2013) Prognostic value of breast cancer subtypes, Ki-67 proliferation index, age, and pathologic tumor characteristics on breast cancer survival in caucasian women. Breast J 19(1):22–30. https://doi.org/10.1111/tbj.12059

Rana P, Ratcliffe J, Sussman J, Forbes M, Levine M, Hodgson N (2017) Young women with breast cancer: needs and experiences. Cogent Med 4(1):1–11. https://doi.org/10.1080/2331205x.2017.1278836

Anderson BO et al (2008) Guideline implementation for breast healthcare in low-income and middle-income countries: Overview of the breast health global initiative Global Summit 2007. Cancer 113(8 Suppl.):2221–2243. https://doi.org/10.1002/cncr.23844

Hopp T, Duric N, Ruiter NV (2015) Image fusion of Ultrasound Computer Tomography volumes with X-ray mammograms using a biomechanical model based 2D/3D registration. Comput Med Imaging Graph 40:170–181. https://doi.org/10.1016/j.compmedimag.2014.10.005

Pisano ED et al (2008) Diagnostic accuracy of digital versus film mammography: Exploratory analysis of selected population subgroups in DMIST. Radiology 246(2):376–383. https://doi.org/10.1148/radiol.2461070200

Yassin NIR, Omran S, El Houby EMF, Allam H (2018) Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: a systematic review. Comput Methods Programs Biomed 156:25–45. https://doi.org/10.1016/j.cmpb.2017.12.012

Gupta NP, Malik PK, Ram BS (2020) A review on methods and systems for early breast cancer detection. In: Proceedings of international conference on computation, automation and knowledge management (ICCAKM 2020), 42–46. https://doi.org/10.1109/ICCAKM46823.2020.9051554

Lu Y, Li JY, Su YT, Liu AA (2018) A review of breast cancer detection in medical images. In: VCIP 2018—IEEE international conference on visual communications and image processing conference, pp 11–14. https://doi.org/10.1109/VCIP.2018.8698732

Huppe AI, Mehta AK, Brem RF (2018) Molecular breast imaging: a comprehensive review. Semin Ultrasound CT MRI 39(1):60–69. https://doi.org/10.1053/j.sult.2017.10.001

Oyelade ON, Ezugwu AES (2020) A state-of-the-art survey on deep learning methods for detection of architectural distortion from digital mammography. IEEE Access 8:148644–148676. https://doi.org/10.1109/ACCESS.2020.3016223

Al Husaini MAS, Habaebi MH, Hameed SA, Islam MR, Gunawan TS (2020) A systematic review of breast cancer detection using thermography and neural networks. IEEE Access 8:208922–208937. https://doi.org/10.1109/ACCESS.2020.3038817

Hartley RL, Stone JP, Temple-Oberle C (2018) Breast cancer in transgender patients: a systematic review. Part 1: male to female. Eur J Surg Oncol. https://doi.org/10.1016/j.ejso.2018.06.035

Page MJ et al (2021) The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 372(71):1–9

Agarwal T, Kumar V (2021) A systematic review on bat algorithm: theoretical foundations, variants, and applications. Arch Comput Methods Eng. https://doi.org/10.1007/s11831-021-09673-9

Yin T, Ali FH, Reyes-Aldasoro CC (2015) A robust and artifact resistant algorithm of ultrawideband imaging system for breast cancer detection. IEEE Trans Biomed Eng 62(6):1514–1525. https://doi.org/10.1109/TBME.2015.2393256

Li Q et al (2015) Direct extraction of tumor response based on ensemble empirical mode decomposition for image reconstruction of early breast cancer detection by UWB. IEEE Trans Biomed Circuits Syst 9(5):710–724. https://doi.org/10.1109/TBCAS.2015.2481940

Alexandrou G et al (2021) Detection of multiple breast cancer ESR1 mutations on an ISFET based Lab-on-chip platform. IEEE Trans Biomed Circuits Syst 15(3):380–389. https://doi.org/10.1109/TBCAS.2021.3094464

Suckling J. The mini-MIAS database of mammograms. http://peipa.essex.ac.uk/info/mias.html

Kopans D. DDSM: digital database for screening mammography. http://www.eng.usf.edu/cvprg/Mammography/Database.html . Accessed 24 Dec 2020

Moreira IC, Amaral I, Domingues I, Cardoso A, Cardoso MJ, Cardoso JS (2012) Inbreast: toward a full-field digital mammographic database. Acad Radiol 19(2):236–248

Prapavesis S, Fornage BD, Weismann CF, Palko A, Zoumpoulis P (2003) Breast ultrasound and US-guided interventional techniques: a multimedia teaching file. Thessaloniki, Greece

Yap MH et al (2018) Automated breast ultrasound lesions detection using convolutional neural networks. IEEE J Biomed Health Inf 22(4):1218–1226

Al-Dhabyani W, Gomaa M, Khaled H, Fahmy A (2020) Dataset of breast ultrasound images. Data Br 28:104863

Man Ng HH et al (2019) Hepatitis B virus-associated intrahepatic cholangiocarcinoma has distinct clinical, pathological and radiological characteristics: a systematic review. Surg Gastroenterol Oncol 24(1):5. https://doi.org/10.21614/sgo-24-1-5

Commean P. RIDER breast MRI. https://wiki.cancerimagingarchive.net/display/Public/RIDER+Breast+MRI

Kirby J. QIN breast DCE-MRI. https://wiki.cancerimagingarchive.net/display/Public/QIN+Breast+DCE-MRI

Silva LF et al (2014) A new database for breast research with infrared image. J Med Imaging Heal Inf 4(1):92–100. https://doi.org/10.1166/jmihi.2014.1226

Bhowmik MK, Gogoi UR, Majumdar G, Bhattacharjee D, Datta D, Ghosh AK (2017) Designing of ground-truth-annotated DBT-TU-JU breast thermogram database toward early abnormality prediction. IEEE J Biomed Heal Inf 22(4):1238–1249

Mahrooghy M et al (2015) Pharmacokinetic tumor heterogeneity as a prognostic biomarker for classifying breast cancer recurrence risk. IEEE Trans Biomed Eng 62(6):1585–1594. https://doi.org/10.1109/TBME.2015.2395812

Singh VP, Srivastava S, Srivastava R (2017) Effective mammogram classification based on center symmetric-LBP features in wavelet domain using random forests. Technol Health Care 25(4):709–727. https://doi.org/10.3233/THC-170851

Ribli D, Horváth A, Unger Z, Pollner P, Csabai I (2018) Detecting and classifying lesions in mammograms with Deep Learning. Sci Rep 8(1):16–20. https://doi.org/10.1038/s41598-018-22437-z

Ekici S, Jawzal H (2020) Breast cancer diagnosis using thermography and convolutional neural networks. Med Hypotheses 137:109542. https://doi.org/10.1016/j.mehy.2019.109542

Roslidar R, Saddami K, Arnia F, Syukri M, Munadi K (2019) A study of fine-tuning CNN models based on thermal imaging for breast cancer classification. In: Proceedings of Cybernetics 2019—2019 International conference on cybernetics and computational intelligence: towards a smart and human-centered cyber world, pp 77–81. https://doi.org/10.1109/CYBERNETICSCOM.2019.8875661

Wang Z, Zhang L, Shu X, Lv Q, Yi Z (2020) An end-to-end mammogram diagnosis: a new multi-instance and multi-scale method based on single-image feature. IEEE Trans Cogn Dev Syst. https://doi.org/10.1109/TCDS.2019.2963682

Wang Y et al (2020) Deeply-supervised networks with threshold loss for cancer detection in automated breast ultrasound. IEEE Trans Med Imaging 39(4):866–876. https://doi.org/10.1109/TMI.2019.2936500

Shu X, Zhang L, Wang Z, Lv Q, Yi Z (2020) Deep neural networks with region-based pooling structures for mammographic image classification. IEEE Trans Med Imaging 39(6):2246–2255. https://doi.org/10.1109/TMI.2020.2968397

Dogra N, Kumar V (2021) A comprehensive review on deep synergistic drug prediction techniques for cancer. Arch Comput Methods Eng 29:1443–1461

Helwan A, Abiyev R (2016) Shape and texture features for the identification of breast cancer. Lecture notes in computational science and engineering, vol 2226, pp 542–547

Pandit VR, Bhiwani RJ (2015) Image fusion in remote sensing applications: a review. Int J Comput Appl 120(10):22–32. https://doi.org/10.5120/21263-3846

El Hami A, Pougnet P (2019) Embedded mechatronic system 2: analyses of failures, modeling, simulation and optimization. Elsevier, Amsterdam, p 300

Yang Y, Wan W, Huang S, Yuan F, Yang S, Que Y (2016) Remote sensing image fusion based on adaptive IHS and multiscale guided filter. IEEE Access 4:4573–4582. https://doi.org/10.1109/ACCESS.2016.2599403

Naidu VPS, Raol JR (2008) Pixel-level image fusion using wavelets and principal component analysis. Def Sci J 58(3):338–352. https://doi.org/10.14429/dsj.58.1653

Murtaza G et al (2020) Deep learning-based breast cancer classification through medical imaging modalities: state of the art and research challenges. Artif Intell Rev 53(3):1655–1720. https://doi.org/10.1007/s10462-019-09716-5

Agarwal S (2014) Data mining: data mining concepts and techniques. In: 2013 International conference on machine intelligence and research advancement (ICMIRA)

Edman MC, Marchelletta RR, Hamm-Alvarez SF (2010) Lacrimal gland overview. In: Encyclopedia of the eye, pp 522–527. https://doi.org/10.1016/B978-0-12-374203-2.00049-X

Barnes NLP, Ooi JL, Yarnold JR, Bundred NJ (2012) Ductal carcinoma in situ of the breast. BMJ 344(7846):1430–1441. https://doi.org/10.1136/bmj.e797

Schnitt SJ (2010) Classification and prognosis of invasive breast cancer: from morphology to molecular taxonomy. Mod Pathol 23:60–64. https://doi.org/10.1038/modpathol.2010.33

Gupta A, Shridhar K, Dhillon PK (2015) A review of breast cancer awareness among women in India: cancer literate or awareness deficit? Eur J Cancer 51(14):2058–2066. https://doi.org/10.1016/j.ejca.2015.07.008

Brooks AD et al (2009) Modern breast cancer detection: a technological review. Int J Biomed Imaging. https://doi.org/10.1155/2009/902326

Nam KJ et al (2015) Comparison of full-field digital mammography and digital breast tomosynthesis in ultrasonography-detected breast cancers. Breast 24(5):649–655. https://doi.org/10.1016/j.breast.2015.07.039

Abdelhafiz D, Yang C, Ammar R, Nabavi S (2019) Deep convolutional neural networks for mammography: advances, challenges and applications. BMC Bioinf. https://doi.org/10.1186/s12859-019-2823-4

Heidari M, Mirniaharikandehei S, Liu W, Hollingsworth AB, Liu H, Zheng B (2020) Development and assessment of a new global mammographic image feature analysis scheme to predict likelihood of malignant cases. IEEE Trans Med Imaging 39(4):1235–1244. https://doi.org/10.1109/TMI.2019.2946490

Shen S et al (2015) A multi-centre randomised trial comparing ultrasound vs mammography for screening breast cancer in high-risk Chinese women. Br J Cancer 112(6):998–1004. https://doi.org/10.1038/bjc.2015.33

Brem RF, Lenihan MJ, Lieberman J, Torrente J (2015) Screening breast ultrasound: past, present, and future. Am J Roentgenol 204(2):234–240. https://doi.org/10.2214/AJR.13.12072

Kolb TM, Lichy J, Newhouse JH (2002) Comparison of the performance of screening mammography, physical examination, and breast US and evaluation of factors that influence them: an analysis of 27,825 patient evaluations. Radiology 225(1):165–175. https://doi.org/10.1148/radiol.2251011667

Wang X, Chen X, Cao C (2019) Hierarchically engineering quality-related perceptual features for understanding breast cancer. J Vis Commun Image Represent 64:102644. https://doi.org/10.1016/j.jvcir.2019.102644

Nyayapathi N et al (2020) Dual scan mammoscope (DSM)—a new portable photoacoustic breast imaging system with scanning in craniocaudal plane. IEEE Trans Biomed Eng 67(5):1321–1327. https://doi.org/10.1109/TBME.2019.2936088

Mann RM, Cho N, Moy L (2019) Breast MRI: state of the art. Radiology 292(3):520–536. https://doi.org/10.1148/radiol.2019182947

Sun P, Wang D, Mok VC, Shi L (2019) Comparison of feature selection methods and machine learning classifiers for radiomics analysis in glioma grading. IEEE Access 7:102010–102020. https://doi.org/10.1109/access.2019.2928975

Wu N et al (2020) Deep neural networks improve radiologists’ performance in breast cancer screening. IEEE Trans Med Imaging 39(4):1184–1194. https://doi.org/10.1109/TMI.2019.2945514

Kale MC, Clymer BD, Koch RM, Heverhagen JT, Sammet S, Stevens R, Knopp MV (2008) Multispectral co-occurrence with three random variables in dynamic contrast enhanced magnetic resonance imaging of breast cancer. IEEE Trans Med Imaging 27(10):1425–1431

Srivastava S, Sharma N, Singh SK, Srivastava R (2014) Quantitative analysis of a general framework of a CAD tool for breast cancer detection from mammograms. J Med Imaging Heal Inf 4(5):654–674. https://doi.org/10.1166/jmihi.2014.1304

Haeri Z, Shokoufi M, Jenab M, Janzen R, Golnaraghi F (2016) Electrical impedance spectroscopy for breast cancer diagnosis: clinical study. Integr Cancer Sci Ther 3(6):1–6

Huerta-Nuñez LFE et al (2019) A biosensor capable of identifying low quantities of breast cancer cells by electrical impedance spectroscopy. Sci Rep 9(1):1–12. https://doi.org/10.1038/s41598-019-42776-9

Lederman D, Zheng B, Wang X, Wang XH, Gur D (2011) Improving breast cancer risk stratification using resonance-frequency electrical impedance spectroscopy through fusion of multiple classifiers. Ann Biomed Eng 39(3):931–945. https://doi.org/10.1007/s10439-010-0210-4

Ward LC, Dylke E, Czerniec S, Isenring E, Kilbreath SL (2011) Confirmation of the reference impedance ratios used for assessment of breast cancer-related lymphedema by bioelectrical impedance spectroscopy. Lymphat Res Biol 9(1):47–51. https://doi.org/10.1089/lrb.2010.0014

Etehadtavakol M, Chandran V, Ng EYK, Kafieh R (2013) Breast cancer detection from thermal images using bispectral invariant features. Int J Therm Sci 69:21–36. https://doi.org/10.1016/j.ijthermalsci.2013.03.001

Gonzalez-Hernandez JL, Recinella AN, Kandlikar SG, Dabydeen D, Medeiros L, Phatak P (2020) An inverse heat transfer approach for patient-specific breast cancer detection and tumor localization using surface thermal images in the prone position. Infrared Phys Technol 105:103202. https://doi.org/10.1016/j.infrared.2020.103202

Mambou SJ, Maresova P, Krejcar O, Selamat A, Kuca K (2018) Breast cancer detection using infrared thermal imaging and a deep learning model. Sensors (Switzerland). https://doi.org/10.3390/s18092799

Casalegno F et al (2019) Caries detection with near-infrared transillumination using deep learning. J Dent Res 98(11):1227–1233. https://doi.org/10.1177/0022034519871884

Klemm M, Leendertz JA, Gibbins D, Craddock IJ, Preece A, Benjamin R (2009) Microwave radar-based breast cancer detection: Imaging in inhomogeneous breast phantoms. IEEE Antennas Wirel Propag Lett 8:1349–1352. https://doi.org/10.1109/LAWP.2009.2036748

Grzegorczyk TM, Meaney PM, Kaufman PA, Diflorio-Alexander RM, Paulsen KD (2012) Fast 3-D tomographic microwave imaging for breast cancer detection. IEEE Trans Med Imaging 31(8):1584–1592. https://doi.org/10.1109/TMI.2012.2197218

Tunçay AH, Akduman I (2015) Realistic microwave breast models through T1-weighted 3-D MRI data. IEEE Trans Biomed Eng 62(2):688–698. https://doi.org/10.1109/TBME.2014.2364015

Botterill T, Lotz T, Kashif A, Chase JG (2014) Reconstructing 3-D skin surface motion for the DIET breast cancer screening system. IEEE Trans Med Imaging 33(5):1109–1118. https://doi.org/10.1109/TMI.2014.2304959

Kao TJ et al (2008) Regional admittivity spectra with tomosynthesis images for breast cancer detection: preliminary patient study. IEEE Trans Med Imaging 27(12):1762–1768. https://doi.org/10.1109/TMI.2008.926049

Baran P et al (2018) High-resolution X-ray phase-contrast 3-d imaging of breast tissue specimens as a possible adjunct to histopathology. IEEE Trans Med Imaging 37(12):2642–2650. https://doi.org/10.1109/TMI.2018.2845905

McKnight AL, Kugel JL, Rossman PJ, Manduca A, Hartmann LC, Ehman RL (2002) MR elastography of breast cancer: preliminary results. Am J Roentgenol 178(6):1411–1417. https://doi.org/10.2214/ajr.178.6.1781411

Goddi A, Bonardi M, Alessi S (2012) Breast elastography: a literature review. J Ultrasound 15(3):192–198

Landhuis E (2020) Deep learning takes on tumours. Nature 580(7804):551–553. https://doi.org/10.1038/d41586-020-01128-8

Bengio Y (2009) Learning deep architectures for AI. Found Trends Mach Learn 2(1):1–27. https://doi.org/10.1561/2200000006

Fu B, Liu P, Lin J, Deng L, Hu K, Zheng H (2019) Predicting invasive disease-free survival for early stage breast cancer patients using follow-up clinical data. IEEE Trans Biomed Eng 66(7):2053–2064. https://doi.org/10.1109/TBME.2018.2882867

Dheeba J, Selvi ST (2011) A CAD system for breast cancer diagnosis using modified genetic algorithm optimized artificial neural network. Lecture notes in computer science (including subseries: Lecture notes in artificial intelligence and lecture notes in bioinformatics), vol 7076, Part 1, pp 349–357. https://doi.org/10.1007/978-3-642-27172-4_43

Gubern-Mérida A, Kallenberg M, Mann RM, Martí R, Karssemeijer N (2015) Breast segmentation and density estimation in breast MRI: a fully automatic framework. IEEE J Biomed Health Inf 19(1):349–357. https://doi.org/10.1109/JBHI.2014.2311163

Bándi P et al (2019) From detection of individual metastases to classification of lymph node status at the patient level: the CAMELYON17 challenge. IEEE Trans Med Imaging 38(2):550–560. https://doi.org/10.1109/TMI.2018.2867350

Le H et al (2020) Utilizing automated breast cancer detection to identify spatial distributions of tumor-infiltrating lymphocytes in invasive breast cancer. Am J Pathol 190(7):1491–1504. https://doi.org/10.1016/j.ajpath.2020.03.012

Shen L, Margolies LR, Rothstein JH, Fluder E, McBride R, Sieh W (2019) Deep learning to improve breast cancer detection on screening mammography. Sci Rep. https://doi.org/10.1038/s41598-019-48995-4

Graziani M, Andrearczyk V, Müller H (2018) Regression concept vectors for bidirectional explanations in histopathology. Lecture notes in computer science (including subseries: Lecture notes in artificial intelligence and lecture notes in bioinformatics), vol 11038, pp 124–132. https://doi.org/10.1007/978-3-030-02628-8_14

Jonnalagedda P, Schmolze D, Bhanu B (2018) MVPNets: multi-viewing path deep learning neural networks for magnification invariant diagnosis in breast cancer. In: Proceedings of 2018 IEEE 18th international conference on bioinformatics and bioengineering (BIBE 2018), pp 189–194. https://doi.org/10.1109/BIBE.2018.00044

Arya N, Saha S (2020) Multi-modal classification for human breast cancer prognosis prediction: proposal of deep-learning based stacked ensemble model. IEEE/ACM Trans Comput Biol Bioinform. https://doi.org/10.1109/TCBB.2020.3018467

Sanyal R, Kar D, Sarkar R (2021) Carcinoma type classification from high-resolution breast microscopy images using a hybrid ensemble of deep convolutional features and gradient boosting trees classifiers. IEEE/ACM Trans Comput Biol Bioinform. https://doi.org/10.1109/TCBB.2021.3071022

Wang D, Khosla A, Gargeya R, Irshad H, Beck AH (2016) Deep learning for identifying metastatic breast cancer, pp 1–6. http://arxiv.org/abs/1606.05718

Reis S et al (2017) Automated classification of breast cancer stroma maturity from histological images. IEEE Trans Biomed Eng 64(10):2344–2352. https://doi.org/10.1109/TBME.2017.2665602

Feng X et al (2019) Accurate prediction of neoadjuvant chemotherapy pathological complete remission (PCR) for the four sub-types of breast cancer. IEEE Access 7:134697–134706. https://doi.org/10.1109/ACCESS.2019.2941543

Siegel RL, Miller KD, Jemal A (2019) Cancer statistics, 2019. CA Cancer J Clin 69(1):7–34. https://doi.org/10.3322/caac.21551

Hajela P, Pawar AV, Ahirrao S (2018) Deep learning for cancer cell detection and segmentation: a survey. In: 1st International conference on data science and analytics (PuneCon 2018)—proceedings, pp 1–6. https://doi.org/10.1109/PUNECON.2018.8745420

Quellec G, Lamard M, Cozic M, Coatrieux G, Cazuguel G (2016) Multiple-instance learning for anomaly detection in digital mammography. IEEE Trans Med Imaging 35(7):1604–1614. https://doi.org/10.1109/TMI.2016.2521442

Sehgal CM, Weinstein SP, Arger PH, Conant EF (2006) A review of breast ultrasound. J Mammary Gland Biol Neoplasia 11(2):113–123. https://doi.org/10.1007/s10911-006-9018-0

Youk JH, Gweon HM, Son EJ (2017) Shear-wave elastography in breast ultrasonography: the state of the art. Ultrasonography 36(4):300–309. https://doi.org/10.14366/usg.17024

Roslidar R et al (2020) A review on recent progress in thermal imaging and deep learning approaches for breast cancer detection. IEEE Access 8:116176–116194. https://doi.org/10.1109/ACCESS.2020.3004056

Hooker KA (2014) Installing concrete anchors. Concr Constr World Concr 59(3):25–30

Baker MJ et al (2014) Using Fourier transform IR spectroscopy to analyze biological materials. Nat Protoc 9(8):1771–1791. https://doi.org/10.1038/nprot.2014.110

Gonzalez-Hernandez JL, Recinella AN, Kandlikar SG, Dabydeen D, Medeiros L, Phatak P (2019) Technology, application and potential of dynamic breast thermography for the detection of breast cancer. Int J Heat Mass Transf 131:558–573. https://doi.org/10.1016/j.ijheatmasstransfer.2018.11.089

Meaney PM, Kaufman PA, Muffly LS, Click M, Poplack SP, Wells WA, Schwartz GN, di Florio-Alexander RM, Tosteson TD, Li Z, Geimer SD (2013) Microwave imaging for neoadjuvant chemotherapy monitoring: initial clinical experience. Breast Cancer Res 15(2):1–16







  • Paquette, M.; Lavallée, É.; Phoenix, S.; Ouellet, R.; Senta, H.; van Lier, J.E.; Guérin, B.; Lecomte, R.; Turcotte, E.E. Improved estrogen receptor assessment by PET using the novel radiotracer (18)F-4FMFES in estrogen receptor-positive breast cancer patients: An ongoing phase II clinical trial. J. Nucl. Med. 2018 , 59 , 197–203. [ Google Scholar ] [ CrossRef ]
  • Lee, J.H.; Zhou, H.-B.; Dence, C.S.; Carlson, K.E.; Welch, M.J.; Katzenellenbogen, J.A. Development of [F-18]fluorine-substituted Tanaproget as a progesterone receptor imaging agent for positron emission tomography. Bioconjugate Chem. 2010 , 21 , 1096–1104. [ Google Scholar ] [ CrossRef ]
  • Kochanny, M.J.; VanBrocklin, H.F.; Kym, P.R.; Carlson, K.E.; O’Neil, J.P.; Bonasera, T.A.; Welch, M.J.; Katzenellenbogen, J.A. Fluorine-18-labeled progestin ketals: Synthesis and target tissue uptake selectivity of potential imaging agents for receptor-positive breast tumors. J. Med. Chem. 1993 , 36 , 1120–1127. [ Google Scholar ] [ CrossRef ]
  • Buckman, B.O.; Bonasera, T.A.; Kirschbaum, K.S.; Welch, M.J.; Katzenellenbogen, J.A. Fluorine-18-labeled progestin 16 alpha, 17 alpha-dioxolanes: Development of high-affinity ligands for the progesterone receptor with high in vivo target site selectivity. J. Med. Chem. 1995 , 38 , 328–337. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Kym, P.R.; Carlson, K.E.; Katzenellenbogen, J.A. Progestin 16 alpha, 17 alpha-dioxolane ketals as molecular probes for the progesterone receptor: Synthesis, binding affinity, and photochemical evaluation. J. Med. Chem. 1993 , 36 , 1111–1119. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Vijaykumar, D.; Mao, W.; Kirschbaum, K.S.; Katzenellenbogen, J.A. An efficient route for the preparation of a 21-fluoro progestin-16 alpha,17 alpha-dioxolane, a high-affinity ligand for PET imaging of the progesterone receptor. J. Org. Chem. 2002 , 67 , 4904–4910. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Jeselsohn, R.; Yelensky, R.; Buchwalter, G.; Frampton, G.; Meric-Bernstam, F.; Gonzalez-Angulo, A.M.; Ferrer-Lozano, J.; Perez-Fidalgo, J.A.; Cristofanilli, M.; Gómez, H.; et al. Emergence of constitutively active estrogen receptor-α mutations in pretreated advanced estrogen receptor-positive breast cancer. Clin. Cancer Res. 2014 , 20 , 1757–1767. [ Google Scholar ] [ CrossRef ]
  • Dehdashti, F.; Laforest, R.; Gao, F.; Aft, R.L.; Dence, C.S.; Zhou, D.; Shoghi, K.I.; Siegel, B.A.; Katzenellenbogen, J.A.; Welch, M.J. Assessment of progesterone receptors in breast carcinoma by PET with 21-18F-fluoro-16α,17α-[(R)-(1′-α-furylmethylidene)dioxy]-19-norpregn-4-ene-3,20-dione. J. Nucl. Med. 2012 , 53 , 363–370. [ Google Scholar ] [ CrossRef ]
  • Dehdashti, F.; Wu, N.; Ma, C.X.; Naughton, M.J.; Katzenellenbogen, J.A.; Siegel, B.A. Association of PET-based estradiol-challenge test for breast cancer progesterone receptors with response to endocrine therapy. Nat. Commun. 2021 , 12 , 733. [ Google Scholar ] [ CrossRef ]
  • Dijkers, E.C.; Oude Munnink, T.H.; Kosterink, J.G.; Brouwers, A.H.; Jager, P.L.; De Jong, J.R.; Van Dongen, G.A.; Schroder, C.P.; Lub-de Hooge, M.N.; de Vries, E.G. Biodistribution of 89Zr-trastuzumab and PET imaging of HER2-positive lesions in patients with metastatic breast cancer. Clin. Pharmacol. Ther. 2010 , 87 , 586–592. [ Google Scholar ] [ CrossRef ]
  • Ulaner, G.A.; Hyman, D.M.; Lyashchenko, S.K.; Lewis, J.S.; Carrasquillo, J.A. 89Zr-trastuzumab PET/CT for detection of human epidermal growth factor receptor 2-positive metastases in patients with human epidermal growth factor receptor 2-negative primary breast cancer. Clin. Nucl. Med. 2017 , 42 , 912–917. [ Google Scholar ] [ CrossRef ]
  • Gebhart, G.; Lamberts, L.E.; Wimana, Z.; Garcia, C.; Emonts, P.; Ameye, L.; Stroobants, S.; Huizing, M.; Aftimos, P.; Tol, J.; et al. Molecular imaging as a tool to investigate heterogeneity of advanced HER2-positive breast cancer and to predict patient outcome under trastuzumab emtansine (T-DM1): The ZEPHIR trial. Ann. Oncol. 2016 , 27 , 619–624. [ Google Scholar ] [ CrossRef ]
  • Tamura, K.; Kurihara, H.; Yonemori, K.; Tsuda, H.; Suzuki, J.; Kono, Y.; Honda, N.; Kodaira, M.; Yamamoto, H.; Yunokawa, M.; et al. 64Cu-DOTA-trastuzumab PET imaging in patients with HER2-positive breast cancer. J. Nucl. Med. 2013 , 54 , 1869–1875. [ Google Scholar ] [ CrossRef ]
  • Mortimer, J.E.; Bading, J.R.; Colcher, D.M.; Conti, P.S.; Frankel, P.H.; Carroll, M.I.; Tong, S.; Poku, E.; Miles, J.K.; Shively, J.E.; et al. Functional imaging of human epidermal growth factor receptor 2-positive metastatic breast cancer using (64)Cu-DOTA-trastuzumab PET. J. Nucl. Med. 2014 , 55 , 23–29. [ Google Scholar ] [ CrossRef ]
  • Mortimer, J.E.; Bading, J.R.; Park, J.M.; Frankel, P.H.; Carroll, M.I.; Tran, T.T.; Poku, E.K.; Rockne, R.C.; Raubitschek, A.A.; Shively, J.E.; et al. Tumor uptake of (64)Cu-DOTA-trastuzumab in patients with metastatic breast cancer. J. Nucl. Med. 2018 , 59 , 38–43. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Mortimer, J.E.; Bading, J.R.; Frankel, P.H.; Carroll, M.I.; Yuan, Y.; Park, J.M.; Tumyan, L.; Gidwaney, N.; Poku, E.K.; Shively, J.E.; et al. Use of (64)Cu-DOTA-trastuzumab PET to predict response and outcome of patients receiving trastuzumab emtansine for metastatic breast cancer: A pilot study. J. Nucl. Med. 2022 , 63 , 1145–1148. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Ulaner, G.A.; Lyashchenko, S.K.; Riedl, C.; Ruan, S.; Zanzonico, P.B.; Lake, D.; Jhaveri, K.; Zeglis, B.; Lewis, J.S.; O’donoghue, J.A. First-in-human human epidermal growth factor receptor 2-targeted imaging using (89)Zr-pertuzumab PET/CT: Dosimetry and clinical application in patients with breast cancer. J. Nucl. Med. 2018 , 59 , 900–906. [ Google Scholar ] [ CrossRef ]
  • Ulaner, G.A.; Carrasquillo, J.A.; Riedl, C.C.; Yeh, R.; Hatzoglou, V.; Ross, D.S.; Jhaveri, K.; Chandarlapaty, S.; Hyman, D.M.; Zeglis, B.M.; et al. Identification of HER2-positive metastases in patients with HER2-negative primary breast cancer by using HER2-targeted (89)Zr-pertuzumab PET/CT. Radiology 2020 , 296 , 370–378. [ Google Scholar ] [ CrossRef ]
  • Yeh, R.; O’donoghue, J.A.; Jayaprakasam, V.S.; Mauguen, A.; Min, R.; Park, S.; Brockway, J.P.; Bromberg, J.F.; Zhi, W.I.; Robson, M.E.; et al. First-in-human evaluation of site-specifically labeled (89)Zr-pertuzumab in patients with HER2-positive breast cancer. J. Nucl. Med. 2024 , 65 , 386–393. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Arslan, E.; Ergül, N.; Beyhan, E.; Erol Fenercioglu, Ö.; Sahin, R.; Cin, M.; Battal Havare, S.; Can Trabulus, F.D.; Mermut, Ö.; Akbas, S.; et al. The roles of 68 Ga-PSMA PET/CT and 18 F-FDG PET/CT imaging in patients with triple-negative breast cancer and the association of tissue PSMA and claudin 1, 4, and 7 levels with PET findings. Nucl. Med. Commun. 2023 , 44 , 284–290. [ Google Scholar ] [ CrossRef ]
  • Andryszak, N.; Świniuch, D.; Wójcik, E.; Ramlau, R.; Ruchała, M.; Czepczyński, R. Head-to-head comparison of [(18)F]PSMA-1007 and [(18)F]FDG PET/CT in patients with triple-negative breast cancer. Cancers 2024 , 16 , 667. [ Google Scholar ] [ CrossRef ]
  • Sörensen, J.; Velikyan, I.; Sandberg, D.; Wennborg, A.; Feldwisch, J.; Tolmachev, V.; Orlova, A.; Sandström, M.; Lubberink, M.; Olofsson, H.; et al. Measuring HER2-receptor expression in metastatic breast cancer using [68Ga]ABY-025 Affibody PET/CT. Theranostics 2016 , 6 , 262–271. [ Google Scholar ] [ CrossRef ]
  • Alhuseinalkhudhur, A.; Lindman, H.; Liss, P.; Sundin, T.; Frejd, F.Y.; Hartman, J.; Iyer, V.; Feldwisch, J.; Lubberink, M.; Rönnlund, C.; et al. Human epidermal growth factor receptor 2-targeting [(68)Ga]Ga-ABY-025 PET/CT predicts early metabolic response in metastatic breast cancer. J. Nucl. Med. 2023 , 64 , 1364–1370. [ Google Scholar ] [ CrossRef ]
  • Alhuseinalkhudhur, A.; Lindman, H.; Liss, P.; Frejd, F.Y.; Feldwisch, J.; Brun, N.C.; Lubberink, M.; Velikyan, I.; Sörensen, J. [68Ga]Ga-ABY-025 PET in HER2-positive breast cancer: Benefits and pitfalls in staging of axillary disease. J. Clin. Oncol. 2024 , 42 (Suppl. 16), 1035. [ Google Scholar ] [ CrossRef ]
  • Rathore, Y.; Shukla, J.; Laroiya, I.; Deep, A.; Lakhanpal, T.; Kumar, R.; Singh, H.; Bal, A.; Singh, G.; Thakur, K.G.; et al. Development 68Ga trastuzumab Fab and bioevaluation by PET imaging in HER2/neu expressing breast cancer patients. Nucl. Med. Commun. 2022 , 43 , 458–467. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Rathore, Y.; Shukla, J.; Bhusari, P.; Laroiya, I.; Sood, A.; Kumar, R.; Singh, H.; Singh, G.; Mittal, B.; Bal, A. Clinical evaluation of Ga-68 NOTA-Fab in HER2 expressing breast cancer patients. J. Nucl. Med. 2019 , 60 (Suppl. S1), 1228. [ Google Scholar ]
  • Yue, T.T.C.; Ge, Y.; Aprile, F.A.; Ma, M.T.; Pham, T.T.; Long, N.J. Site-specific (68)Ga radiolabeling of trastuzumab Fab via methionine for immunoPET imaging. Bioconjugate Chem. 2023 , 34 , 1802–1810. [ Google Scholar ] [ CrossRef ]
  • Richter, A.; Knorr, K.; Schlapschy, M.; Robu, S.; Morath, V.; Mendler, C.; Yen, H.-Y.; Steiger, K.; Kiechle, M.; Weber, W.; et al. First in-human medical imaging with a PASylated (89)Zr-labeled anti-HER2 Fab-fragment in a patient with metastatic breast cancer. Nucl. Med. Mol. Imaging 2020 , 54 , 114–119. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Keyaerts, M.; Xavier, C.; Heemskerk, J.; Devoogdt, N.; Everaert, H.; Ackaert, C.; Vanhoeij, M.; Duhoux, F.P.; Gevaert, T.; Simon, P.; et al. Phase I study of 68Ga-HER2-Nanobody for PET/CT assessment of HER2 expression in breast carcinoma. J. Nucl. Med. 2016 , 57 , 27–33. [ Google Scholar ] [ CrossRef ]
  • Gondry, O.; Caveliers, V.; Xavier, C.; Raes, L.; Vanhoeij, M.; Verfaillie, G.; Fontaine, C.; Glorieus, K.; De Grève, J.; Joris, S.; et al. Phase II trial assessing the repeatability and tumor uptake of [(68)Ga]Ga-HER2 single-domain antibody PET/CT in patients with breast carcinoma. J. Nucl. Med. 2024 , 65 , 178–184. [ Google Scholar ] [ CrossRef ]
  • Xu, Y.; Wang, L.; Pan, D.; Yu, C.; Mi, B.; Huang, Q.; Sheng, J.; Yan, J.; Wang, X.; Yang, R.; et al. PET imaging of a (68)Ga labeled modified HER2 affibody in breast cancers: From xenografts to patients. Br. J. Radiol. 2019 , 92 , 20190425. [ Google Scholar ] [ CrossRef ]
  • Savir-Baruch, B.; Schuster, D.M. Prostate cancer imaging with 18F-fluciclovine. PET Clin. 2022 , 17 , 607–620. [ Google Scholar ] [ CrossRef ]
  • Ulaner, G.A.; Goldman, D.A.; Corben, A.; Lyashchenko, S.K.; Gönen, M.; Lewis, J.S.; Dickler, M. Prospective clinical trial of (18)F-fluciclovine PET/CT for determining the response to neoadjuvant therapy in invasive ductal and invasive lobular breast cancers. J. Nucl. Med. 2017 , 58 , 1037–1042. [ Google Scholar ] [ CrossRef ]
  • Ma, G.; Goldman, D.A.; Corben, A.; Lyashchenko, S.K.; Gönen, M.; Lewis, J.S.; Dickler, M. (18)F-FLT PET/CT imaging for early monitoring response to CDK4/6 inhibitor therapy in triple negative breast cancer. Ann. Nucl. Med. 2021 , 35 , 600–607. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Rousseau, C.; Metz, R.; Kerdraon, O.; Ouldamer, L.; Boiffard, F.; Renaudeau, K.; Ferrer, L.; Vercouillie, J.; Doutriaux-Dumoulin, I.; Mouton, A.; et al. Pilot feasibility study: 18 F-DPA-714 PET/CT macrophage imaging in triple-negative breast cancers (EITHICS). Clin. Nucl. Med. 2024 , 49 , 701–708. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Sathekge, M.; Lengana, T.; Modiselle, M.; Vorster, M.; Zeevaart, J.; Maes, A.; Ebenhan, T.; Van de Wiele, C. (68)Ga-PSMA-HBED-CC PET imaging in breast carcinoma patients. Eur. J. Nucl. Med. Mol. Imaging 2017 , 44 , 689–694. [ Google Scholar ] [ CrossRef ]
  • Venema, C.M.; Mammatas, L.H.; Schröder, C.P.; Van Kruchten, M.; Apollonio, G.; Glaudemans, A.W.J.M.; Bongaerts, A.H.; Hoekstra, O.S.; Verheul, H.M.; Boven, E.; et al. Androgen and estrogen receptor imaging in metastatic breast cancer patients as a surrogate for tissue biopsies. J. Nucl. Med. 2017 , 58 , 1906. [ Google Scholar ] [ CrossRef ]
  • Stoykow, C.; Erbes, T.; Maecke, H.R.; Bulla, S.; Bartholomä, M.; Mayer, S.; Drendel, V.; Bronsert, P.; Werner, M.; Gitsch, G.; et al. Gastrin-releasing peptide receptor imaging in breast cancer using the receptor antagonist (68)Ga-RM2 and PET. Theranostics 2016 , 6 , 1641–1650. [ Google Scholar ] [ CrossRef ]
  • Vorster, M.; Hadebe, B.P.; Sathekge, M.M. Theranostics in breast cancer. Front. Nucl. Med. 2023 , 3 , 1236565. [ Google Scholar ] [ CrossRef ]
  • Choi, H.; Kim, K. Theranostics for triple-negative breast cancer. Diagnostics 2023 , 13 , 272. [ Google Scholar ] [ CrossRef ]
  • Altena, R.; Tzortzakakis, A.; Burén, S.A.; Tran, T.A.; Frejd, F.Y.; Bergh, J.; Axelsson, R. Current status of contemporary diagnostic radiotracers in the management of breast cancer: First steps toward theranostic applications. EJNMMI Res. 2023 , 13 , 43. [ Google Scholar ] [ CrossRef ]


Haidar, M.; Rizkallah, J.; El Sardouk, O.; El Ghawi, N.; Omran, N.; Hammoud, Z.; Saliba, N.; Tfayli, A.; Moukadem, H.; Berjawi, G.; et al. Radiotracer Innovations in Breast Cancer Imaging: A Review of Recent Progress. Diagnostics 2024, 14, 1943. https://doi.org/10.3390/diagnostics14171943


Ph.D. Student Receives Patent for Thermographic Breast Cancer Detection Device

Aug 30, 2024

Mammograms can be an effective resource for detecting breast cancer, but for some women they can be an invasive and uncomfortable experience.

That’s why Gianna Slusher, a Ph.D. student in the George W. Woodruff School of Mechanical Engineering, developed a device that could serve as an effective alternative to traditional early detection methods for breast cancer.

Slusher and her partner, Caitlin Reina, received an official patent for inventing a mounted thermographic imaging system that can be used at home to detect medical issues such as breast cancer.

The device includes a mount that attaches to a wall and a clamp that holds a smartphone or tablet. Through an app programmed by the pair, it uses thermal images as a non-invasive, radiation-free way to capture the changes in breast temperature associated with cancerous tumors. The mount can be positioned at multiple discrete angles, allowing for consistent imaging. If an anomaly is detected, the app instructs the user to see a doctor.
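The story does not describe the app's detection algorithm, but one common heuristic in breast thermography is flagging left-right temperature asymmetry. The sketch below is a minimal, hypothetical illustration of that idea; the function name, threshold, and data are invented for this example and are not taken from the patented device.

```python
import numpy as np

def asymmetry_flag(thermal, threshold_c=1.5):
    """Flag a 2D thermal image (degrees Celsius) when any mirrored
    left/right pixel pair differs by more than threshold_c degrees.
    Hypothetical logic for illustration only."""
    w = thermal.shape[1]
    left = thermal[:, : w // 2]
    right = np.fliplr(thermal[:, w - w // 2:])
    diff = np.abs(left - right)
    return bool(diff.max() > threshold_c), float(diff.max())

base = np.full((8, 8), 34.0)                 # perfectly symmetric: no flag
hot = base.copy()
hot[3, 1] += 2.5                             # localized hotspot on one side
flagged, _ = asymmetry_flag(base)            # False
flagged_hot, max_diff = asymmetry_flag(hot)  # True, 2.5
```

A real system would also need camera calibration, registration of the two image halves, and clinical validation; this only shows the shape of the computation.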

Breast cancer screening device

Slusher and Reina began working on the project at the Invention Factory – a summer program they attended at Cooper Union for the Advancement of Science and Art in New York City while the pair were working towards their bachelor's degrees in mechanical engineering.

Slusher hopes the thermal imaging system and its ease of use can help women catch all types of breast cancer in the early stages from the convenience of their own homes.

During the Invention Factory summer in which the device was created, Slusher’s aunt was diagnosed with breast cancer, which Slusher says deeply influenced her work. “Her journey inspired the creation of this device, and I am pleased to share that she is now healthy!”

Now Slusher hopes the invention can help other women gain easier access to a solution to a problem that many women will face in their lifetime.

“As a woman in mechanical engineering, I have strived to use my education and research to contribute to efforts that benefit other women,” she says.

After graduating from Cooper Union, Slusher was inspired to continue her research at the Georgia Institute of Technology through the bioengineering Ph.D. program under the supervision of Andrei Fedorov, who serves as associate chair for graduate studies, professor, Rae S. and Frank H. Neely Chair, and Regents' Entrepreneur in the Woodruff School.

The patent was filed independently by Slusher and Reina. However, Slusher credits her advisor, Fedorov, as a significant source of support and inspiration when it comes to innovation and design throughout her research.

Fedorov says Slusher embodies the Georgia Tech motto of “Progress and Service,” and is grateful the graduate program can attract such brilliant and caring students.

“Becoming a lead inventor on a patented technology speaks volumes about the student’s thoughtfulness and ingenuity, as well as fearlessness of an innovator,” Fedorov says. “It takes not only the engineering talent and confidence in one’s ability to innovate and invent, but also the passion for helping others.”

The next stages of the invention involve refining the technology, conducting clinical trials if necessary, and ultimately bringing the innovation to market. Slusher hopes the patent gains recognition and interest from potential collaborators and investors.

Slusher continues to research cancer technologies in her Ph.D. studies, but at a micro-level, focusing on therapeutic cells and microfluidic device design and fabrication. She is designing and fabricating devices aimed at enabling rapid processing and analyses of cell therapies, thereby making this life-changing treatment more easily monitored, manufactured, affordable, and accessible to all.

Slusher is undecided on her plans after completing her Ph.D., but hopes to continue working in a capacity that allows her the freedom to research and design topics that inspire her, and where she can contribute meaningfully to advancements in her field.  

By Mikey Fuller


Kennesaw State doctoral graduate advances breast cancer research

KENNESAW, Ga. | Sep 10, 2024

Linglin Zhang

Zhang, who recently defended her dissertation to earn a Ph.D. in Data Science and Analytics from Kennesaw State University, delves into the detection of abnormalities in screening mammography, which can help find breast cancer early.

“Imagine your body is a big city, and the cells are the people living in it,” Zhang said. “Sometimes, some people (cells) start acting strange and causing trouble, just like bad guys in the city. My job is to find out from the ‘forensic photography’ of where these bad guys are starting to cause trouble and figure out ways to prevent them so everyone can be happy and healthy again.”

Her research, guided by faculty at KSU and Emory, involves using AI to classify images of breast cancer. Zhang has utilized a decade's worth of material from Emory University to enhance her research, providing a comprehensive dataset for analyzing mammogram images.

“Artificial intelligence is like having a super-smart assistant who can help the radiologists to analyze a vast amount of data much faster and more accurately,” she said.

AI helps to detect patterns and abnormalities in screening mammograms, enabling earlier detection and more personalized treatments. This innovative approach could elevate breast cancer research by advancing treatment development and improving patient outcomes.

Zhang’s recent findings have been featured at the Society for Imaging Informatics in Medicine annual meeting. Her research found that the AI model isn’t equally accurate for all groups of people. The model's predictions vary depending on the patient's background, health conditions, and the details in the images.

Breast cancer is the most common cancer among women and causes about 42,000 deaths annually in the U.S. Getting regular mammograms can lower the risk of dying from breast cancer by up to 48 percent because they help spot problems like lumps or unusual changes early on.

A recent study by Zhang found that when mammograms show certain types of changes, like architectural distortion in the breast tissue, the risk of missing a cancer diagnosis is increased. This means the screening might overlook cancer cases when these changes are present, which could delay early detection and treatment. Zhang's research leverages ten years of data from Emory University, enhancing the accuracy and depth of her analysis.

“Science is a journey of discovery,” she said. “Each step we take brings us closer to understanding this complex disease and finding new ways to combat it.”

Zhang’s academic journey began in China where she earned a bachelor’s degree in biological sciences from Hubei University. Zhang then earned two master’s degrees in chemical biology and bioinformatics at the University of Michigan and the Georgia Institute of Technology. Her educational foundation prepared her for advanced research at KSU, where she discovered her passion in data science.

“I decided to switch to data science because I wanted to use the most efficient tools to do research in healthcare,” Zhang said. “I’m really interested in using data science and machine learning to build better tools that could improve people’s lives.”

In addition to her research, Zhang mentors undergraduate students and participates in outreach programs, emphasizing the importance of diversity in science.

“I’m a researcher, but more to that I’m a people person; I cherish people and appreciate their support. It’s the strong motivation of improving people’s living that drove my passion for research, and all the support I got from people at KSU continuously fuels that enthusiasm,” she said.

Zhang’s commitment also extends to ensuring fairness and accuracy in AI models used in her research.

“We’re training the computer to be intelligent and accurate enough to help detect abnormalities in mammograms and assess the risk of breast cancer through them,” she said. “AI can be a strong helper to the medical professionals if being used appropriately, and to ensure the ethics of AI tools in completing the tasks, our goal is to ensure the model is fair and unbiased across all demographic groups.”

Zhang identified biases in her AI model by analyzing where it failed. She found the model was less accurate with dense tissue and images with structural changes. Using these insights, she is now developing a model that performs equally accurately in all patient subgroups for breast cancer detection and hopes her work will aid other researchers.
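Stratifying a metric by patient subgroup, the core of the bias analysis described above, can be sketched in a few lines. The example below is illustrative only, with synthetic labels and made-up group tags rather than Zhang's model or data.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute classification accuracy separately for each subgroup
    (e.g. breast-density category) to expose uneven performance."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

# Synthetic example where the model does worse on dense tissue
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["fatty"] * 4 + ["dense"] * 4
acc = accuracy_by_group(y_true, y_pred, groups)  # {'fatty': 1.0, 'dense': 0.5}
```

In practice the same stratification would be applied to sensitivity and specificity as well, since overall accuracy can hide the failure modes that matter clinically.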

Her experience at KSU has been overwhelmingly positive.

“Everyone here is so friendly and supportive,” Zhang said. “From the PhD program department to the School of Data Science and Analytics, the College of Computing and Software Engineering, and the Graduate College, everyone has been incredibly helpful.”

She also values her collaboration with industry partners like Equifax, which has allowed her to apply her academic knowledge in real-world settings.

MinJae Woo, Zhang’s former advisor at KSU, highlighted the rigorous demands of medical research and praised Zhang's perseverance.

“Working on medical research is often more challenging than it seems. We frequently ditch projects after putting in months of effort if they end up showing no potential benefit for patients. Dr. Zhang did not give up despite these challenges and discouragement. Her perseverance was undoubtedly the key to her success as a researcher.”

Looking ahead, Zhang plans to continue her research and apply innovative ideas to healthcare and beyond. Her work aims to advance the understanding of breast cancer and improve patient care and outcomes, demonstrating her dedication to making a significant impact in the fight against the disease.

– Story by Raynard Churchwell

Photos by Matt Yung


Cancer Chat | Cancer Research UK


Found a white spot on my breast. Should I be worried?

Zake

I am a 66-year-old woman. This morning I noticed a small white spot on my right nipple. I looked on the internet and found some information saying a fat spot like this is usually associated with women who are breastfeeding. I am not in this category. Should I worry and contact my GP?


Healthcare (Basel)

A Comparative Analysis of Breast Cancer Detection and Diagnosis Using Data Visualization and Machine Learning Applications

In the developing world, cancer death is one of the major problems for humankind. Even though some cancers can be prevented, certain cancer types still have no effective treatment. Breast cancer is one of the most common cancer types, and early, accurate diagnosis is the most important factor in its treatment. The literature contains many studies on predicting the type of breast tumors. In this research paper, data about breast cancer tumors collected by Dr. William H. Wolberg at the University of Wisconsin Hospital were used to predict breast tumor types. Data visualization and machine learning techniques, including logistic regression, k-nearest neighbors, support vector machine, naïve Bayes, decision tree, random forest, and rotation forest, were applied to this dataset. R, Minitab, and Python were used to implement these machine learning and visualization techniques. The paper aimed to make a comparative analysis of data visualization and machine learning applications for breast cancer detection and diagnosis. The diagnostic performances of the applications were comparable for detecting breast cancers. Data visualization and machine learning techniques can provide significant benefits and impact cancer detection in the decision-making process. The logistic regression model with all features included achieved the highest classification accuracy (98.1%), and the proposed approach revealed an enhancement in accuracy performance. These results indicate the potential to open new opportunities in the detection of breast cancer.

1. Introduction

Data science has become one of the most popular research areas in the world. Many datasets can be useful in different domains such as marketing, transportation, social media, and healthcare [ 1 ]. However, only a few of them have been interpreted by data science researchers, who believe that these datasets can be useful for predictions. Nowadays, many marketers have started to analyze their datasets because of the large amount of information they have on hand, and they want to turn these data into meaningful information for future predictions. By doing that, marketers can apply new tactics or change their goals [ 2 ].

Data mining and machine learning techniques are straightforward and effective ways to understand data and predict future data. Dealing with large data manually is almost impossible [ 3 ]. Therefore, data visualization is a very important step for forming a general idea about given data. Data analysis techniques are popular in many companies and have an impact on different study areas. For instance, Facebook’s News Feed uses machine learning by following user patterns [ 4 ]. Another study optimized energy consumption in large-scale buildings [ 5 ]. Customer relationship management systems also use machine learning techniques [ 6 ]. In addition to all these different studies, machine learning studies in healthcare are very popular [ 7 ]. Data mining techniques and clustering methods are used for different types of diseases to make data understandable and to teach the computer to predict current data.

Cancer death is one of the major issues for the healthcare environment, and it is one of the most significant causes of women’s deaths [ 8 ]. Breast cancer is the most common type of cancer in women, particularly those with denser breast tissue due to its physiological features. Detecting this disease in its early stages can help to curb the rising number of deaths [ 8 ]. According to the Globocan 2018 data, one of every four cancer cases diagnosed in women worldwide is breast cancer, and it ranks fifth among the causes of cancer death worldwide [ 9 ]. According to the same data, the age-standardized worldwide incidence of breast cancer in 2018 was 23.7 per 100,000, whereas the mortality rate due to breast cancer was reported as 6.8 per 100,000 [ 9 ]. Despite the increase in the number of medical studies and technological developments that contribute to the treatment of cancer, there are still problems in the diagnosis of cancer. After lung cancer, breast cancer is the major cause of cancer death in women [ 9 ]. Breast cancer originates from breast tissue, most commonly from the inner lining of milk ducts or the lobules that supply the ducts with milk [ 10 , 11 ]. A mutation or modification of DNA or RNA can force normal cells to transform into cancer cells, and these mutations can occur due to an increase in entropy, nuclear radiation, chemicals in the air, bacteria, fungi, electromagnetic radiation, viruses, parasites, heat, water, food, mechanical cell-level injury, free radicals, evolution, and the aging of DNA and RNA [ 12 ]. It is important to diagnose tumors accurately. Most tumors are the result of benign (non-cancerous) changes within the breast, but if a malignant tumor is diagnosed as benign, it will cause serious problems. Early detection of breast cancer and access to modern cancer treatment are the most important strategies for preventing deaths from breast cancer; early, small, and non-spreading breast cancers can be treated successfully. The most reliable way to find breast cancer early is by having regular screening tests.

Age, family history, genetics, race, ethnicity, being overweight, drinking alcohol, and lack of exercise are risk factors associated with breast cancer [ 13 ]. Healthcare is an open-ended environment with very rich information, yet very poor knowledge: there is a huge amount of data in healthcare systems, and it is important to discover the relationships hidden within them. The main causes of death were classified into five broad groups according to the International Classification of Diseases (ICD), and breast cancer was included in two groups [ 14 ]. A report from McKinsey states that the volume of data is growing at a rate of 50% per year [ 15 ]. Data science has now officially become a very significant field, even though the term was first coined in the early 1990s. One study defined data science as a focus on data and, by extension, statistics: a systematic study of the organization, properties, and analysis of data and their role in inference [ 16 ]. In previous data mining research about healthcare, various methods were applied to different types of diseases and genes, including analytical, collecting, sharing, and compressing methods applied to healthcare datasets [ 17 ]. Even though multiple disciplines can be applied to data science, machine learning methods are those most often applied to healthcare datasets. Machine learning is a data analysis technique that teaches a computer to produce an output using different algorithms. Decision trees, k-means clustering, and neural networks are the most common algorithms for machine learning applications [ 18 ]. While there is no single best way to diagnose breast cancer, early diagnosis can be accepted as the first step of treatment and risk assessment: it allows a person to control risk factors, although some breast cancer risk factors cannot be changed.

In this study, public data about breast cancer tumors from Dr. William H. Wolberg of the University of Wisconsin Hospital were used for data visualization, classification, and machine learning algorithms, including logistic regression, k-nearest neighbors, support vector machine, and decision tree [ 19 ]. The public data included samples taken from patients with solid breast masses, analyzed with the user-friendly graphical program Xcyt. This study aimed to establish an adequate model by revealing the predictive factors of early-stage breast cancer patients from a wider perspective and to compare the strength of the model using accuracy measures.

The organization of the remaining sections of the current study is as follows. A literature review that contains recent related studies on breast cancer detection and diagnosis is in Section 2 . In Section 3 and Section 4 , methods and application of the proposed method to a dataset are given. The results, discussion, and comparative analysis are demonstrated in the last section.

2. Literature Review

The healthcare environment is one of the most suitable fields for data science applications due to the amount of data it contains and the nature of those data. The flow of data in hospitals is a continuous process and generally includes numerical values. Healthcare is an open system for improvement through studies of data mining and machine learning techniques. Dhar claims that computational expertise yields significant results and the possibility of predicting the future from given data [ 14 ]. Many studies have been done on breast cancer datasets, and most of them achieve sufficient classification accuracy [ 20 , 21 ].

Aruna et al. [ 22 ] used naïve Bayes, support vector machine, and decision trees to classify a Wisconsin breast cancer dataset and obtained the best result using the support vector machine (SVM), with an accuracy score of 96.99%. Chaurasia et al. [ 23 ] compared the performance of supervised learning classifiers on a Wisconsin breast cancer dataset, applying naïve Bayes, SVM, neural network, and decision tree methods. According to their results, SVM gave the most accurate result with a score of 96.84%. Asri et al. [ 24 ] also used the same data and compared the performance of machine learning algorithms: SVM, decision tree (C4.5), naïve Bayes, and k-nearest neighbors. The study aimed to classify the data in terms of efficiency and effectiveness by comparing the accuracy, precision, sensitivity, and specificity of each algorithm. The experimental results showed that SVM had the best score, with an accuracy of 97.13%. Delen et al. [ 25 ] studied the prediction of breast cancer with 202,932 patient records. The dataset was divided into two groups, survived (93,273) and not survived (109,659), and then naïve Bayes, neural network, and C4.5 decision tree algorithms were applied. The results showed that the C4.5 decision tree performed better than the other techniques.

Qu et al. [ 26 ] compared naïve Bayes, decision tree, and random tree classifiers on a diabetic disease dataset. From the findings of this study, naïve Bayes was judged the best classifier, with a score of 76.3%. Srinivas et al. [ 27 ] studied a one-dependency augmented naïve Bayes classifier and a naïve credal classifier to make predictions on heart attacks using medical profiles such as age, sex, blood pressure, and blood sugar. The results indicated that naïve Bayes performed better. Bernal et al. [ 28 ] used clinical data from medical intensive care units. Machine learning techniques such as logistic regression, neural networks, decision tree, and k-nearest neighbors were applied to predict patient deaths in the hospital within 24 hours. The highest accuracy scores on the training data were obtained with logistic regression and the k-nearest neighbors (KNN, k = 5) technique. Bernal [ 28 ] pointed out that, to obtain better accuracy, it is more important to choose the parameters than the algorithm.

Wang et al. [ 29 ] sought the best approach for breast cancer prediction by applying data mining methods to several record sets. They applied support vector machine (SVM), artificial neural network (ANN), naïve Bayes classifier, and AdaBoost tree methods. Reducing the feature space was discussed, and Principal Component Analysis (PCA) was applied for that purpose. To evaluate the performance of the models, they used two datasets, the Wisconsin Breast Cancer Database (1991) and Wisconsin Diagnostic Breast Cancer (1995) [ 18 , 30 ], and provided a detailed evaluation of the models and their test errors.

Williams et al. [ 31 ] studied risk prediction for breast cancer using data mining classification techniques. Breast cancer is the most common cancer type among women throughout Nigeria, and there are limited services to predict it before it is too late to help, so an efficient way to predict breast cancer was needed. The two data mining techniques used in their study were naïve Bayes and J48 decision trees.

Nithya et al. [ 32 ] argue that the main problem in breast cancer is classifying the breast tumor. Computer-Aided Diagnosis (CAD) has been used for the detection and characterization of breast cancer. Their main idea was to improve breast cancer prediction using data mining methods. Bagging, MultiBoost, and random subspace methods were applied to the classification performance of naïve Bayes, support vector machine with sequential minimal optimization (SVM-SMO), and multilayer perceptron.

Oyewola et al. [ 33 ] investigated breast cancer biopsy predictions from mammographic diagnosis. Logistic regression (LR), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), random forest (RF), and support vector machine (SVM) classifiers were used in their study.

Agarap [ 34 ] used GRU-SVM (a Gated Recurrent Unit combined with an SVM), linear regression, multilayer perceptron, nearest neighbor, softmax regression, and support vector machine techniques. The most reliable result was obtained from the multilayer perceptron, with an accuracy score of 99.4%.

Westerdijk [ 35 ] studied several machine learning techniques for the prediction of breast cancer cells, testing the performance of the models by looking at their accuracies, sensitivities, and specificities. The accuracy scores of LR, random forest, SVM, neural network, and ensemble models were compared, with the aim of improving the accuracy of breast cancer prediction.

Vard et al. [ 36 ] studied a robust method for predicting eight cancer types, including breast, lung, and ovarian cancer. In their research, they first used Particle Swarm Optimization to normalize the datasets and statistical feature selection methods to select features on the normalized data. They then applied decision tree, support vector machine, and multilayer perceptron neural network classifiers.

Kourou et al. [ 37 ] investigated the classification of cancer patients’ risk groups in two types as low and high. ANN, Bayesian networks (BNs), SVM, and decision tree (DT) techniques were used to present a model for cancer risks or patient outcomes.

Pratiwi [ 38 ] considered breast cancer to be the most common cause of death for women and used machine learning techniques to diagnose it. Pratiwi developed an intelligent breast cancer prediction system in Java, showing that all functionalities worked well and without significant delay.

Shukla et al. [ 39 ] studied a robust data analytical model applicable to breast cancer datasets, covering the survivability of patients and tumors. They used data from the Surveillance, Epidemiology, and End Results (SEER) program and applied the Self-Organizing Map (SOM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) for clustering. Table 1 summarizes recent related studies, their aims, and approaches.

Table 1. Recent related studies.

| Study | Approach and Methods Used |
| --- | --- |
| Dhar [ 14 ] | Literature reviews |
| Aruna et al. [ 22 ] | Naïve Bayes, support vector machine, decision tree |
| Chaurasia et al. [ 23 ] | Naïve Bayes, SVM, neural networks, decision tree |
| Asri et al. [ 24 ] | SVM, decision tree (C4.5), naïve Bayes, k-nearest neighbors |
| Delen et al. [ 25 ] | Naïve Bayes, neural network, C4.5 decision tree |
| Qu et al. [ 26 ] | Naïve Bayes, decision tree, random tree |
| Srinivas et al. [ 27 ] | One-dependency augmented naïve Bayes, naïve Bayes |
| Bernal et al. [ 28 ] | Logistic regression, neural networks, decision tree, k-nearest neighbors |
| Wang et al. [ 29 ] | Support vector machine (SVM), artificial neural network (ANN), naïve Bayes classifier, AdaBoost tree |
| Williams et al. [ 31 ] | Naïve Bayes, J48 decision trees |
| Nithya et al. [ 32 ] | Naïve Bayes, support vector machine with sequential minimal optimization, decision tree, multilayer perceptron |
| Oyewola et al. [ 33 ] | Logistic regression, linear discriminant analysis, quadratic discriminant analysis, random forest, support vector machine |
| Agarap [ 34 ] | GRU-SVM, linear regression, multilayer perceptron, nearest neighbor, softmax regression, support vector machine |
| Westerdijk [ 35 ] | Logistic regression, random forest, support vector machine, neural network, ensemble models |
| Vard et al. [ 36 ] | Particle swarm optimization (PSO), support vector machines (SVMs), decision tree, multilayer perceptron neural network |
| Kourou et al. [ 37 ] | Artificial neural networks (ANNs), Bayesian networks (BNs), support vector machines (SVMs), decision trees (DTs) |
| Pratiwi [ 38 ] | Extreme learning machine methods |
| Shukla et al. [ 39 ] | Self-organizing map (SOM), density-based spatial clustering of applications with noise (DBSCAN), multilayer perceptron (MLP), SEER program |

Clarification of data science [ 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 ] and prediction of breast cancer using data mining techniques [ 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 ] are the two main categories of objectives in the recent studies. Experimental results showed that the best score had an accuracy of 97.13% [ 19 ].

The main contributions of this paper are provided in the following:

  • To establish an adequate model by revealing the predictive factors of early-stage breast cancer patients from a broader perspective and to compare the robustness of the model using accuracy measures;
  • To provide a more comprehensive comparison and analysis using data visualization and machine learning applications for breast cancer detection and to validate the model;
  • To observe which features are most effective in predicting breast cancer and to understand general trends;
  • To achieve better prediction of breast cancer by using data mining methods.

3. Methods and Application

3.1. Logistic Regression

Logistic regression is a technique that was first used for biological studies in the early twentieth century and later became widespread in social studies as well. It is one of the predictive analyses and is appropriate when there is one binary dependent variable and one or more independent variables. Linear and logistic regression differ in the type of dependent variable: linear regression is more appropriate for continuous variables. Figure 1 indicates the visualization of logistic regression steps.

Figure 1. Visualization of logistic regression steps.

Logistic regression has two phases: forward propagation and backward propagation. The first step of forward propagation is multiplying the weights by the features. Initially, since the weights are unknown, random values can be assigned. A sigmoid function then assigns a probability between 0 and 1, and the prediction is performed according to a threshold value. After prediction, the predicted value is compared with the observed value, and a loss function is generated. The loss function indicates how far the predicted value is from the real value. If the loss function value is very high, backward propagation is applied; its aim is to update the weight values according to the cost function by taking the derivative [ 40 ]. The sigmoid function is shown below:

σ(z) = 1 / (1 + e^(−z))
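The two phases can be sketched in a few lines of NumPy. The tiny dataset, zero-initialized weights, and learning rate below are hypothetical, chosen only to make one forward pass, one loss evaluation, and one gradient update concrete:

```python
import numpy as np

def sigmoid(z):
    """Map any real value to a probability between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0.2, 0.7], [0.9, 0.1], [0.4, 0.5]])  # 3 samples, 2 features
y = np.array([0.0, 1.0, 0.0])                        # observed labels
w = np.zeros(2)                                      # initial weights
b = 0.0
lr = 0.1                                             # learning rate

# Forward propagation: weighted sum of features, then sigmoid -> probability.
p = sigmoid(X @ w + b)

# Cross-entropy loss measures how far predictions are from the labels.
loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Backward propagation: gradient of the loss w.r.t. weights and bias.
grad_w = X.T @ (p - y) / len(y)
grad_b = np.mean(p - y)

# One gradient-descent update of the weights.
w -= lr * grad_w
b -= lr * grad_b
```

With zero weights every prediction starts at 0.5, so the initial loss equals ln 2; repeating the update loop drives the loss down.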

3.2. K-Nearest Neighbor (KNN)

KNN is a supervised learning technique, meaning the labels of the data are identified before making predictions. Classification and regression are its two main uses. K represents the number of nearest neighbors considered. The KNN algorithm does not have a training phase; predictions are made based on the Euclidean distance to the k nearest neighbors. This technique can be applied to the breast cancer dataset since it already has labels, malignant and benign, and a sample is classified according to the class labels of its nearest neighbors. A representation of the KNN algorithm is shown in Figure 2 .

Figure 2. Representation of KNN algorithm [ 12 ].
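As an illustration of KNN on labeled breast cancer data, the sketch below uses scikit-learn's bundled copy of the Wisconsin diagnostic dataset; the split, scaling, and choice of k = 10 are illustrative, not the study's exact setup:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Euclidean distance is scale-sensitive, so features are normalized first.
scaler = MinMaxScaler().fit(X_train)

# k = 10 nearest neighbors vote on the label of each test sample.
knn = KNeighborsClassifier(n_neighbors=10)
knn.fit(scaler.transform(X_train), y_train)
accuracy = knn.score(scaler.transform(X_test), y_test)
```

Because KNN has no training phase, `fit` merely stores the (scaled) training samples; all the distance computation happens at prediction time.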

3.3. Support Vector Machine

Support vector machine is one of the most common machine learning techniques. The objective of the algorithm is to find a hyperplane in N-dimensional space that separates the data points; the major part of the algorithm is finding the plane that maximizes the margin. The dimension N varies with the number of features: separating two features can be done smoothly, but with several features classification is not always so straightforward. Maximizing the margin provides more accurate prediction results [ 41 ]. Figure 3 indicates the visualization of SVM.

Figure 3. Support vector visualization [ 8 ].

SVM involves a tradeoff between a large margin and exact classification. If exact classification is enforced without sacrificing any individual sample, the margin can become very narrow, which can lead to lower accuracy on new data. On the other hand, when the margin between the classes is maximized to obtain better accuracy, the support vectors closest to the hyperplane may end up classified among the members of the other class.
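This tradeoff is exposed through the `C` regularization parameter of scikit-learn's `SVC`: a small `C` tolerates misclassified points and widens the margin, while a large `C` fits the training data more tightly. The dataset, kernel, and `C` values below are illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# SVMs are sensitive to feature scale, so standardize first.
scaler = StandardScaler().fit(X_tr)

# Sweep C across three orders of magnitude to see the margin tradeoff.
scores = {}
for C in (0.01, 1.0, 100.0):
    clf = SVC(C=C, kernel="linear").fit(scaler.transform(X_tr), y_tr)
    scores[C] = clf.score(scaler.transform(X_te), y_te)
```

Comparing the entries of `scores` shows how loosening or tightening the margin shifts test accuracy on this dataset.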

3.4. Naïve Bayes

Naïve Bayes is a straightforward and fast classification algorithm whose working process is based on Bayes’ theorem, represented below:

P(A|B) = P(B|A) P(A) / P(B)

The algorithm assumes that each variable contributes to the outcome independently and equally; that is, the features do not depend on each other and affect the output with the same weight. This independence assumption rarely holds in real-life problems, so it is possible to obtain low accuracies with this algorithm. Gaussian naïve Bayes is one kind of naïve Bayes application. It assumes that the features follow a normal distribution, so the conditional probability of each feature given the class is modeled as Gaussian:

P(x_i | y) = (1 / √(2πσ_y²)) exp(−(x_i − μ_y)² / (2σ_y²))
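A minimal Gaussian naïve Bayes sketch with scikit-learn, where each feature is modeled with a per-class normal distribution as described above; the dataset and split are illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# fit() estimates a per-class mean and variance for every feature;
# predict() combines them under the conditional-independence assumption.
nb = GaussianNB().fit(X_tr, y_tr)
accuracy = nb.score(X_te, y_te)
```

Despite the strong independence assumption, Gaussian naïve Bayes is fast to train and often serves as a reasonable baseline on this kind of tabular data.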

3.5. Decision Tree

A decision tree (DT) is one of the most common supervised learning techniques; regression and classification are its two main uses. It solves problems by constructing a tree: features become decision nodes, and outputs become leaf nodes. Feature values are treated as categorical in the decision tree algorithm. At the very beginning of the algorithm, it is essential to choose the best attribute, place it at the top of the tree, and then split the tree. The Gini index and information gain are two methods for selecting features.

The randomness or uncertainty of a feature X is defined as entropy and can be calculated as follows:

H(X) = −Σ p(x) log₂ p(x)

Entropy values are calculated for each candidate attribute, and the information gain is obtained by subtracting the weighted entropy of the resulting subsets from the entropy before the split. A higher information gain makes an attribute better and places it nearer the top of the tree.

The Gini index measures how often a randomly chosen element would be incorrectly classified; therefore, a lower Gini index value means a better attribute. The Gini index can be found with the given formula:

Gini = 1 − Σ pᵢ²
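As a worked check of the two split criteria, the entropy and Gini index of a hypothetical node holding 4 benign and 6 malignant samples can be computed directly:

```python
import math

# Hypothetical node: 4 benign samples and 6 malignant samples.
counts = [4, 6]
total = sum(counts)
probs = [c / total for c in counts]  # [0.4, 0.6]

# Entropy: -sum p * log2(p); maximal (1.0) for a 50/50 split, 0 for a pure node.
entropy = -sum(p * math.log2(p) for p in probs)

# Gini index: 1 - sum p^2; lower means a purer node.
gini = 1 - sum(p * p for p in probs)
```

For this 40/60 node the entropy comes out to about 0.971 and the Gini index to 0.48, both close to their maxima, confirming that the node is still quite impure and worth splitting further.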

A decision tree is easy to understand. However, if the data contain many features, the tree may suffer from a problem called overfitting, so it is crucial to know when to stop growing the tree. Two methods are typical for restricting the model from overfitting: pre-pruning, which stops growth early but makes it hard to choose a stopping point; and post-pruning, which uses cross-validation to check whether expanding the tree will make improvements or lead to overfitting [ 42 , 43 ]. The DT structure consists of a root node, splitting, decision nodes, terminal nodes, sub-trees, and parent nodes [ 4 ]. There are two main phases of the DT induction process: the growth phase and the pruning phase. The growth phase involves recursive partitioning of the training data, resulting in a DT whose natural “if”, “then”, “else” construction makes it fit easily into a programmatic structure [ 44 ].

3.6. Random and Rotation Forest

Random forest is an ensemble learning model that can be used for both regression and classification. A random forest consists of many decision trees; therefore, in some cases it is more logical to use a random forest rather than a single decision tree.

The rotation forest algorithm consists of generating a classifier that is based on the extraction of attributes. The attribute set is randomly grouped into K different subsets. It aims to create accurate and significant classifiers [ 45 ].

In decision trees, feature selection is the main problem, and there are different approaches to it. A random forest searches for the best feature among a random subset of features, instead of searching for the most prominent feature while splitting nodes. It is possible to make it even more random by using random thresholds for each feature rather than seeking the best one [ 46 ].

Some built-in function parameters can make the model faster or more accurate. max_features, n_estimators, and min_samples_leaf are used for increasing predictive power, while n_jobs and random_state are generally used for making models faster and reproducible. In this study, n_estimators, which determines the number of trees to grow, and random_state were used to increase the accuracy and speed of the model. Table 2 lists and explains the parameters used in the dataset.
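A minimal sketch of this random forest configuration, using scikit-learn's bundled copy of the Wisconsin diagnostic dataset; the train/test split and the random_state value are illustrative choices rather than the study's exact setup:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# n_estimators sets how many trees to grow; random_state fixes the
# randomness of feature subsampling so results are reproducible.
rf = RandomForestClassifier(n_estimators=50, random_state=42).fit(X_tr, y_tr)
accuracy = rf.score(X_te, y_te)
```

Each tree votes on the class of a test sample, and the forest reports the majority vote; fixing `random_state` makes the accuracy repeatable across runs.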

Table 2. Wisconsin breast cancer dataset.

| 1 | ID | 9 | symmetry mean | 17 | smoothness se | 25 | perimeter worst |
| 2 | diagnosis | 10 | concavity mean | 18 | compactness se | 26 | area worst |
| 3 | radius mean | 11 | concave points mean | 19 | concavity se | 27 | smoothness worst |
| 4 | texture mean | 12 | fractal dimension mean | 20 | concave points se | 28 | compactness worst |
| 5 | perimeter mean | 13 | radius se | 21 | symmetry se | 29 | concavity worst |
| 6 | area mean | 14 | texture se | 22 | fractal dimension se | 30 | concave points worst |
| 7 | smoothness mean | 15 | perimeter se | 23 | radius worst | 31 | symmetry worst |
| 8 | compactness mean | 16 | area se | 24 | texture worst | 32 | fractal dimension worst |

The dataset contains 32 parameters, all of which can be useful for classifying cancer; relatively large values of these parameters can be a sign of malignant tissue. The first parameter is the ID, a number used for identification [ 30 ]. The second parameter is the diagnosis, which takes one of two values for the tissue: malignant or benign. It is necessary to determine the correct diagnosis, since the two require different treatments. The remaining parameters come in three forms: the estimated mean, the standard error (se), and the largest (“worst”) value. The radius mean indicates the mean distance between the center and points on the perimeter, radius se is its estimated standard error, and radius worst is the largest such value. It is essential to know the distance between the center and the perimeter because surgery depends on the size, and very large tumors may not be operable. The texture mean represents the standard deviation of the gray-scale values, texture se its standard error, and texture worst the highest such value. Gray-scale is commonly used to find the tumor location, and the standard deviation is essential for capturing the variation of the data and describing how spread out the values are. The perimeter mean represents the mean perimeter of the core tumor, perimeter se its standard error, and perimeter worst the largest value. Area mean, area se, and area worst give the corresponding values for the areas of the cancer cells. Smoothness mean is the mean of local variation in radius lengths, smoothness se its standard error, and smoothness worst the largest mean value.

Compactness mean is an estimate combining the perimeter and area, compactness se its standard error, and compactness worst the highest mean value of the calculation. Concavity mean measures the severity of concave portions of the contour, and concave points mean is the number of concave portions of the contour; concavity se and concave points se are the corresponding standard errors, while concavity worst and concave points worst are the largest mean values. Fractal dimension mean is the mean value of the “coastline approximation”, fractal dimension se is its standard error, and fractal dimension worst is the largest mean value [ 17 , 18 , 30 ].

R, Matlab, Stata, SAS, WEKA, and Python are popular tools for data analysis and visualization; WEKA, in particular, is a favorite tool in data science. In recent years, the R programming language, Minitab, and Python have moved a step ahead of their competitors thanks to their packages and libraries. Matlab, meanwhile, has created its own data science tools to operate efficiently with data and features.

In this study, the bestglm, car, corrplot, gplots, leaps, ROCR, and MASS libraries in R, as well as the NumPy, Pandas, Matplotlib, Seaborn, Plotly, and Scikit-learn libraries in Python, were used for data visualization and for implementing the machine learning algorithms. As the first step of this study, the dataset was imported into R, Minitab, and Python as a data frame and examined. After inspecting the data, two unnecessary columns were removed from the dataset; this process can be called data cleaning. The label column, which shows whether a tumor is benign or malignant, was then transferred to a separate data frame to simplify the later plotting steps.

It is necessary to identify whether the data are balanced or unbalanced. The dataset was not well balanced: the number of benign tumors was almost twice that of malignant tumors. In the next step, a heat map was constructed to indicate the correlations between all features; it is given in Figure 4 .

Figure 4. Heat map of correlations between independent variables.

In the dataset, some features had large numerical values while others had much smaller ones. Plotting charts and relations among features with such unbalanced numerical ranges would not give adequate outcomes; consequently, the numerical values were normalized. The normalization of a numerical value can be computed as

x_normalized = (x − x_min) / (x_max − x_min)
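Min-max normalization can be applied column-wise in NumPy; the feature values below are hypothetical, chosen so that the two columns have very different scales:

```python
import numpy as np

# Hypothetical feature matrix: column 1 spans 2-6, column 2 spans 100-500.
X = np.array([[2.0, 100.0],
              [4.0, 300.0],
              [6.0, 500.0]])

# Rescale each column to [0, 1]: subtract its minimum, divide by its range.
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
```

After normalization both columns span exactly [0, 1], so no single feature dominates distance-based methods or plots simply because of its units.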

After normalizing the numerical values of the features, a box plot was created to help clean the data; Figure 5 shows the box plot of the features. Based on it, some features were excluded because they contained too many outliers, which made them insufficient for use in classification.

Figure 5. Box plot of features.
A swarm plot provides better visualization than a box plot for classification purposes. Based on the first swarm plot, redundant features were excluded, and a final swarm plot was generated. Before applying the machine learning techniques, the features were grouped under three main categories (positively correlated, negatively correlated, and uncorrelated) and drawn as scatter plots in Figure 6 .

Figure 6. Pairwise plots of Breast Cancer Dataset attributes.

Supervised learning comprises algorithms that learn from labeled data and make predictions for future data. Classification and regression are the two categories under this approach. Classification determines the label of the data and is used for discrete responses, unlike regression. In the classification process, the first step is to read the given data; different classification algorithms are then usually compared in machine learning applications. In this study, logistic regression, k-nearest neighbors, support vector machine, random forest, decision tree, and naïve Bayes classifiers were created, and accuracy scores were obtained for each. Each algorithm was applied to three different datasets: the first covered all independent features, the second included only highly correlated features, and the third included only weakly correlated features. The three datasets were used separately for each machine learning technique, and the accuracy results were compared.

Logistic regression is one of the most common algorithms for solving classification problems. It measures the relationship between a categorical dependent variable and the independent variables using a sigmoid function. Before the application, the dataset was divided into a training set and a testing set: eighty percent of the data was used for training and the rest for testing, in order to obtain reliable results from the classification algorithm. The logistic regression algorithm was applied with an optimal cut-off threshold using different libraries, and accuracy results were obtained as 98.06%, 95.61%, and 93.85%.
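The described workflow (an 80/20 split, fitting a logistic regression, and scoring accuracy) can be sketched with scikit-learn; the bundled Wisconsin diagnostic dataset and the preprocessing below are illustrative and do not reproduce the study's three feature subsets:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# 80% of the samples train the model; the held-out 20% measure accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardizing features helps the solver converge.
scaler = StandardScaler().fit(X_tr)
lr = LogisticRegression(max_iter=1000).fit(scaler.transform(X_tr), y_tr)
accuracy = lr.score(scaler.transform(X_te), y_te)
```

By default `predict` uses a 0.5 probability threshold; the cut-off can be tuned by thresholding `predict_proba` instead, as the study's optimal-threshold step suggests.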

The second model applied was k-nearest neighbor (KNN). The three datasets were used with 10, 12, and 10 neighbors, respectively. As in logistic regression, the random state was set to 42. Accuracies of 96.49%, 95.32%, and 94.69% were obtained on the three datasets.

The SVM technique was applied to obtain better accuracy scores. Before running the algorithm, the random state was set to 1 rather than 42, since this gave better results. SVM accuracies of 96.49%, 96.49%, and 93.85% were obtained.

The naïve Bayes method gave the worst accuracy compared to the other methods, with 94.73%, 92.98%, and 93.85% on the three datasets.

The decision tree algorithm belongs to the family of supervised learning algorithms. Its working principle involves random selections: the positions of the features are chosen randomly, so running the function several times can yield different accuracies. In this study, the best decision tree results were 95.61%, 93.85%, and 92.10%.
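
A minimal illustration of this run-to-run variability, assuming the scikit-learn implementation: the seed controls the random permutation of candidate features at each split, which can break ties between equally good splits differently:

```python
# Sketch: the same decision tree fit under several seeds can yield
# different test accuracies when splits tie.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=42)

scores = {DecisionTreeClassifier(random_state=s).fit(X_tr, y_tr)
          .score(X_te, y_te) for s in range(5)}
print(sorted(scores))   # one or more distinct accuracies
```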

The random forest algorithm follows the same working principle as a decision tree but combines n decision trees. In this application, n was set to 50, and accuracies of 95.61%, 94.73%, and 92.98% were obtained for the three datasets.

The rotation forest algorithm builds an ensemble classifier based on feature extraction: the attribute set is randomly split into K different subsets, and the extracted components form the rotated feature axes for each base classifier. In this application, the rotation forest algorithm gave competitive results: 97.4%, 95.89%, and 92.99% on the three datasets.

In the given dataset, 62.7% of the women had a benign tumor type, while 37.3% had a malignant tumor type. This distribution shows that the data were imbalanced, with benign cases over-represented. The heat map illustrates the pairwise correlation between features; it contains 900 (30 features × 30 features) relationships covering every feature pair. Darker blue indicates a clear positive correlation for the benign class, while lighter blue indicates negative or no correlation; similarly, darker red indicates a clear positive correlation for the malignant class, and lighter red indicates negative or no correlation. For instance, radius mean and perimeter mean had a strong positive correlation with a coefficient of 1.0.
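
The 900-coefficient matrix behind the heat map can be reproduced with pandas; feature names here follow scikit-learn's copy of the Wisconsin dataset:

```python
# Sketch: the 30 x 30 (= 900 coefficient) correlation matrix.
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer(as_frame=True)
features = data.frame.drop(columns=["target"])
corr = features.corr()

print(corr.shape)                                      # (30, 30)
print(round(corr.loc["mean radius", "mean perimeter"], 3))
# seaborn.heatmap(corr, cmap="coolwarm") would render the colour map.
```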

Boxplots were created to give insight into the basic statistics of the data and its outliers. The tumors were divided by label, and a boxplot was constructed for each feature. Based on the boxplot results, the features that discriminated tumor types well were selected for further analysis. The graphical demonstrations in Figure 5 and Figure 6 highlight the benefit of distinguishing normal-shaped distributions (e.g., smoothness mean) from other distributions. All of the reviewed boxplot variations used the median; they rarely presented summary statistics around the mean or a value close to it.
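
The per-class statistics that the boxplots summarize (median and quartiles) can be sketched with a pandas groupby; the feature choice here is an illustrative assumption:

```python
# Sketch: quartiles of one discriminative feature per tumor class;
# df.boxplot(column="mean radius", by="diagnosis") would draw the boxplot.
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer(as_frame=True)
df = data.frame
df["diagnosis"] = df["target"].map({0: "malignant", 1: "benign"})

summary = df.groupby("diagnosis")["mean radius"].describe()
print(summary[["25%", "50%", "75%"]])
```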

Figure 6 shows the distribution of each attribute with respect to tumor type. Figure 6 a,b show the distributions of the attribute means and standard errors, while Figure 6 c,d show the standard errors and worst values of the attributes. For instance, the radius mean distribution for benign tumors is symmetric, while the area se distribution for malignant tumors is right-skewed. Data values varied widely for some attributes.

In Figure 7 , the correlation was investigated in more detail by visualizing each individual observation with a swarm plot. Features were divided into two groups, correlated and uncorrelated, representing the high- and low-correlation features across the dataset. The plot indicates that most of the data fell between −2 and +2.
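
The −2 to +2 range follows from z-score standardization. A sketch of the data preparation behind such a swarm plot, where the five-feature slice is an arbitrary illustration:

```python
# Sketch: z-score standardization (hence values mostly between -2 and +2)
# and a reshape to long format, which a swarm plot consumes.
import pandas as pd
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer(as_frame=True)
X = data.frame.drop(columns=["target"])
X_std = (X - X.mean()) / X.std()          # z-score normalization

long_df = pd.concat([X_std.iloc[:, :5], data.frame["target"]], axis=1)
long_df = long_df.melt(id_vars="target", var_name="feature",
                       value_name="value")
# seaborn.swarmplot(x="feature", y="value", hue="target", data=long_df)
print(long_df.shape)    # (2845, 3): 569 samples x 5 features
```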

Figure 7. Swarm plot after the first elimination.

In Figure 8 , the correlated features can be observed after dropping the uncorrelated ones. A comparatively denser and higher correlation was observed, with the points staying close to each other.

Figure 8. Final swarm plot after the second elimination.

In Figure 9 , the correlation between features is clear and high; the correlated features can be seen after eliminating the uncorrelated ones. Figure 9 c shows the highest correlation between variables, while Figure 9 a displays the weakest relationship between two variables. Figure 9 d displays a relatively stronger correlation than Figure 9 b. Observations became increasingly correlated as the points drew closer together.

Figure 9. Positively correlated scatter plots 1–4.

In Figure 10 , the correlation between features is moderate and positive. Concavity mean and radius worst ( Figure 10 b) had the highest correlation coefficient magnitude, while compactness mean and area mean ( Figure 10 a) had the lowest.

Figure 10. Positively correlated scatter plots 5–6.

In Figure 11 a–d, the correlation between features is weak and irregular, meaning these features were uncorrelated and did not affect each other.

Figure 11. Uncorrelated scatter plots 1–4.

In Figure 12 a–d and Figure 13 a,b, the correlation between features is negative, ranging from weak to strong; the correlated features can be seen after eliminating the uncorrelated ones. Radius mean and fractal dimension mean ( Figure 12 b) had the weakest correlation, while smoothness standard error and perimeter mean ( Figure 12 d) had the strongest correlation magnitude.

Figure 12. Negatively correlated scatter plots 1–4.

Figure 13. Negatively correlated scatter plots 5–6.

After the data cleaning process, the positively correlated, uncorrelated, and negatively correlated groups were created and are represented in Figure 8 , Figure 9 , Figure 10 , Figure 11 and Figure 12 . Under the positively correlated category, the radius mean, concavity mean, concave points mean, perimeter worst, area worst, and compactness worst features were grouped and plotted. Under the negatively correlated category, smoothness mean, texture mean, radius mean, fractal dimension mean, texture worst, symmetry se, symmetry mean, and area mean were grouped, and their graphs are shown in Figure 13 a,b. Breast cancer diagnosis using logistic regression reached 98.60% accuracy for the malignant tumor type and 97.17% for the benign tumor type, for an average accuracy of 98.07%. Figure 14 compares the machine learning techniques by their accuracy results.

Figure 14. Applied machine learning techniques with accuracy results.

In Figure 14 , the accuracy results for all feature groups and machine learning algorithms are given. According to the experimental results, the worst scenario was the decision tree algorithm on the low-correlation features. Logistic regression with all features included provided the best accuracy of all scenarios, at 98.1%. The test accuracies with all features were 98.1%, 96.9%, 95.9%, 95.6%, 95.6%, and 95.6%, from best to worst. The highly correlated test accuracies were 97.4%, 96.5%, 95.6%, 94.7%, 94.7%, 93.8%, and 93.0%, from best to worst. The low-correlation test accuracies were 95.6%, 93.9%, 93.9%, 93.9%, 93.0%, 93.0%, and 92.1%, from best to worst.

5. Discussion

Data science is a multidisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data. Statistics, data mining, data visualization, machine learning, deep learning, and artificial intelligence are its main subtopics. Even though data science was born in the 1990s, its importance is only being fully realized today. Several studies note that the amount of data in the world is increasing rapidly and that unstructured data still accounts for more than half of the total. Data science has therefore become essential in any field that needs to make its data understandable. Healthcare is a natural environment for data science applications, since big data is part of it: the volume of data collected in healthcare is enormous, yet a reported 80% of it is unorganized. The number of data science studies in healthcare has increased significantly [ 45 ].

The objectives of this study were to analyze the Wisconsin breast cancer dataset through visualization and to classify tumor types as benign or malignant using machine learning algorithms. As a first step, the dataset was prepared for visualization by removing non-numerical values and normalizing each numeric value. Then, heat maps, boxplots, swarm plots, and scatter plots were created using RStudio, Minitab, and Python. Visualizing the data helped clarify the correlation between features and exposed unnecessary features that were not essential for making predictions [ 47 ]. After visualization, three datasets were generated: the first covered all features, the second included the highly correlated features, and the third included the features with low correlation. Machine learning algorithms, namely logistic regression, k-nearest neighbor, support vector machine, naïve Bayes, decision tree, random forest, and rotation forest, were applied to classify the tumor type, and the accuracy results for the three datasets were tabulated [ 48 ]. Logistic regression gave better accuracy than the other methods. The main advantage of LR is that it is very efficient to train; in addition, it produced more accurate results than the more complex algorithms.

Attributing the cause of death in those with breast cancer may depend on numerous factors related to the patient's characteristics, and addressing any specific cause can reduce the risk of death from breast cancer. Nonetheless, these results underline the significance of early diagnosis for both current and former patients with a history of breast cancer. Our study reinforces the importance of early, highly accurate diagnosis in women with breast cancer.

In this paper, seven distinct machine learning techniques were investigated for breast cancer diagnosis. Competitive performance was demonstrated even when dealing with imbalanced data (98.1% accuracy). However, the dataset must be pre-processed before running the algorithms, since they do not handle missing values and perform better when learning from a dataset with discretized nominal values.

This research received no external funding.

Conflicts of Interest

There is no conflict of interest in this study.
