Dual-mode artificially-intelligent diagnosis of breast tumours in shear-wave elastography and B-mode ultrasound using deep polynomial networks
Introduction
Breast cancer is the most common cancer in women, causing approximately 508,000 deaths annually [1], [2]. Early diagnosis is crucial to improving prognosis and prolonging survival [3]. Differentiating benign from malignant breast tumors is therefore of great value for the diagnosis and treatment of breast cancer. At present, there are two main diagnostic approaches: pathology and imaging. Pathology is the gold standard for the diagnosis of breast cancer; however, it is invasive and thus unsuitable for breast screening [4].
Ultrasound imaging has been recognized as the main method for early diagnosis of breast cancer because of its non-ionizing, non-invasive, and low-cost nature, its capability for real-time dynamic imaging, and its ability to image dense breast tissue [5]. Traditional ultrasound imaging such as B-mode ultrasound provides useful information pertaining to the number, size, shape, and boundary of a breast tumor [6], [7]. Shear-wave elastography (SWE) has emerged as an effective imaging tool for measuring breast tissue elasticity and detecting breast cancer early, based on the observation that changes in breast tissue elasticity may precede morphological changes [8].
The SWE imaging system often provides dual-modal visualization of breast tumors, consisting of both a B-mode image and an elastogram. However, current methods for differentiating benign from malignant breast tumors mainly use a single modality, either B-mode or elastography, and combining the two modalities for diagnosis remains limited. In this paper, we focus on dual-modal discrimination between benign and malignant breast tumors by combining the complementary diagnostic information provided by SWE and B-mode [9].
The diagnosis of breast tumors with ultrasound often relies on visual interpretation of ultrasound images by experienced radiologists, which is subjective, time-consuming, and tedious, and which limits diagnostic accuracy. It is therefore desirable to develop artificial intelligence (AI) approaches that interpret dual-modal ultrasound images and distinguish malignant from benign breast tumors more objectively, accurately, and efficiently [7]. In this paper, we propose an AI-based architecture for breast tumor classification on dual-modal ultrasound. The contributions of this work are two-fold: (i) Contourlet-based texture features and morphological features are extracted from dual-modal ultrasound images, following an improved tumor segmentation model with the reaction-diffusion (RD) level set. (ii) Dual-modal features are combined using a deep learning method, the deep polynomial network (DPN), to facilitate feature learning and yield accurate classification of breast tumors. To the best of our knowledge, this study is among the first to propose an AI-based architecture for breast cancer diagnosis on dual-modal ultrasound, namely SWE and B-mode.
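To illustrate the DPN idea mentioned above: each layer mixes linear nodes with product nodes, so stacking layers raises the polynomial degree of the learned representation. The following is a minimal toy forward pass under our own assumptions (random, untrained weights; layer widths chosen arbitrarily; the 82-feature input size is taken from the Methods), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dpn_layer(x, width, rng):
    """One illustrative DPN layer: concatenate linear nodes with
    product nodes formed from pairs of linear projections."""
    d = x.shape[-1]
    W_lin = rng.standard_normal((width, d)) / np.sqrt(d)
    W_a = rng.standard_normal((width, d)) / np.sqrt(d)
    W_b = rng.standard_normal((width, d)) / np.sqrt(d)
    linear = x @ W_lin.T                 # keeps polynomial degree
    product = (x @ W_a.T) * (x @ W_b.T)  # doubles polynomial degree
    return np.concatenate([linear, product], axis=-1)

features = rng.standard_normal((4, 82))  # e.g. 82 dual-modal features
h1 = dpn_layer(features, 16, rng)        # degree-2 representation, shape (4, 32)
h2 = dpn_layer(h1, 8, rng)               # degree-4 representation, shape (4, 16)
```

In the actual DPN, the weights are learned layer-wise rather than drawn at random; the sketch only shows how polynomial feature degrees compound with depth.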
Materials and methods
Our method comprised four steps, as shown in Figure 1. First, the dual-modal ultrasound images were preprocessed for subsequent analysis. Second, tumor segmentation was conducted on the B-mode images with the reaction-diffusion (RD) level set model combined with the Gabor-based anisotropic diffusion (GAD) algorithm, named RD-GAD, and the segmented tumor locations were then mapped back to the paired SWE images. Third, 82 quantitative features were extracted from B-mode and SWE
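As an illustration of the diffusion component underlying GAD, the classic Perona-Malik scheme (which GAD extends by deriving its conductance from Gabor filter responses) can be sketched as follows. This is a generic sketch, not the authors' GAD code: parameter values are illustrative, and the wrap-around boundary handling via np.roll is a simplification.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, step=0.2):
    """Perona-Malik diffusion: smooths homogeneous regions while
    preserving edges (conductance falls where gradients are large)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences to the four neighbours (boundaries wrap)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # exponential conductance g(|grad u|) = exp(-(|grad u|/kappa)^2)
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

noisy = np.random.default_rng(1).normal(100.0, 10.0, (64, 64))
smoothed = anisotropic_diffusion(noisy)
```

Speckle suppression of this kind is a common preprocessing step before level set segmentation of ultrasound images, since it reduces spurious edges without blurring the tumor boundary.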
Results of tumor segmentation
The proposed segmentation model, RD-GAD, was based on the RD algorithm; we compared RD-GAD with the traditional RD method for breast tumor segmentation.
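For readers unfamiliar with the RD level set, the reaction-diffusion formulation in the level set literature typically augments the force term of a conventional level set model with a small diffusion term, which removes the need for periodic re-initialization of the level set function. A commonly cited form (taken from the general RD level set literature, not quoted from this paper) is:

```latex
\frac{\partial \phi}{\partial t} \;=\; \varepsilon\,\Delta\phi \;+\; \frac{1}{\varepsilon}\,L(\phi), \qquad \varepsilon > 0 \text{ small},
```

where \(\Delta\phi\) is the Laplacian regularization term and \(L(\phi)\) is the evolution force of the underlying level set model. In practice this is solved by operator splitting: a reaction step that advances \(\phi\) under \(L(\phi)\), followed by a diffusion step that regularizes the level set function.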
Figure 3(a)–(d) shows segmentation results for a benign breast tumor. The border of the breast tumor was discontinuous and broken at the 4 to 5 o'clock position. The RD-GAD model extracted the broken edges more effectively and accurately than the traditional RD method, and its segmentation result was closer to that of manual segmentation. Figure 3
Discussion
The most important contribution of this work is the introduction of an AI architecture for breast cancer diagnosis with dual-modal ultrasound. This study used a deep learning algorithm, the DPN, to achieve more accurate, efficient, and convenient classification. The dual-modal AI architecture was superior to all other compared methods and improved the classification performance for breast tumors, indicating that the AI-based technique could approach the results of breast tumor biopsies and
Conclusion
In conclusion, we propose a dual-modal AI-based framework for diagnosis of breast tumors. The experimental results show that the dual-modal DPN was superior to all other frameworks, indicating that the AI architecture can assist in more effective and more convenient classification of breast tumors.
Conflict of interest
There is no conflict of interest.
Ethical approval
This was a retrospective study approved by the Institutional Review Board and informed consent of all patients was obtained.
Acknowledgments
The work was funded by the National Natural Science Foundation of China (Nos. 61671281, 81627804, and 61471231).
References (35)
- et al. DNA repair genetic polymorphisms and risk of colorectal cancer in the Czech Republic. Mutat Res Fund Mol Mech Mutag (2008).
- et al. A case-oriented web-based training system for breast cancer diagnosis. Comput Methods Progr Biomed (2018).
- et al. WFUMB guidelines and recommendations for clinical use of ultrasound elastography: Part 2: breast. Ultrasound Med Biol (2015).
- et al. Dual-modal computer-assisted evaluation of axillary lymph node metastasis in breast cancer patients on both real-time elastography and B-mode ultrasound. Eur J Radiol (2017).
- et al. Quantification of elastic heterogeneity using contourlet-based texture analysis in shear-wave elastography for breast tumor classification. Ultrasound Med Biol (2015).
- et al. Magnetic resonance-based automatic air segmentation for generation of synthetic computed tomography scans in the head region. Int J Rad Oncol Biol Phys (2015).
- et al. Vegetation segmentation robust to illumination variations based on clustering and morphology modelling. Biosyst Eng (2014).
- et al. Sonoelastomics for breast tumor classification: a radiomics approach with clustering-based feature selection on sonoelastography. Ultrasound Med Biol (2017).
- et al. Computer-aided quantification of contrast agent spatial distribution within atherosclerotic plaque in contrast-enhanced ultrasound image sequences. Biomed Signal Process Control (2014).
- et al. Stacked deep polynomial network based representation learning for tumor classification with small ultrasound image dataset. Neurocomputing (2016).
- Computer-aided tumor diagnosis using shear wave breast elastography. Ultrasonics.
- On the choice of the parameters for anisotropic diffusion in image processing. Pattern Recogn.
- Deep learning based classification of breast tumors with shear-wave elastography. Ultrasonics.
- Diagnostic performances of shear-wave elastography and B-mode ultrasound to differentiate benign and malignant breast lesions: the emphasis on the cutoff value of qualitative and quantitative parameters. Clin Imag.
- Breast cancer: prevention and control. World Health Stat Ann.
- Identification of the breast cancer susceptibility gene BRCA2. Nature.
- T category and operable breast cancer prognosis. Tumori.