Medical Engineering & Physics

Volume 64, February 2019, Pages 1-6

Dual-mode artificially-intelligent diagnosis of breast tumours in shear-wave elastography and B-mode ultrasound using deep polynomial networks

https://doi.org/10.1016/j.medengphy.2018.12.005

Highlights

  • AI-based diagnosis for breast cancer is proposed using deep polynomial network (DPN).

  • Dual-modal methods outperform single-modal ones for breast tumor classification.

  • DPN achieves 97.8% sensitivity, 94.1% specificity, 95.6% accuracy, and 0.961 AUC.

  • DPN outperforms conventional feature learning methods.

Abstract

The main goal of this study is to build an artificial intelligence (AI) architecture for automated extraction of dual-modal image features from both shear-wave elastography (SWE) and B-mode ultrasound, and to evaluate the AI architecture for classification between benign and malignant breast tumors. In this AI architecture, ultrasound images were segmented by the reaction diffusion level set model combined with the Gabor-based anisotropic diffusion algorithm. Then morphological features and texture features were extracted from SWE and B-mode ultrasound images at the contourlet domain. Finally, we employed a framework for feature learning and classification with the deep polynomial network (DPN) on dual-modal features to distinguish between malignant and benign breast tumors. With the leave-one-out cross validation, the DPN method on dual-modal features achieved a sensitivity of 97.8%, a specificity of 94.1%, an accuracy of 95.6%, a Youden's index of 91.9% and an area under the receiver operating characteristic curve of 0.961, which was superior to the classic single-modal methods, and the dual-modal methods using the principal component analysis and multiple kernel learning. These results have demonstrated that the dual-modal AI-based technique with DPN has the potential for breast tumor classification in future clinical practice.
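The metrics reported above can be computed directly from a binary confusion matrix. The sketch below shows the standard definitions; the counts passed in are hypothetical, since the paper reports only the derived percentages, not the raw confusion matrix.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Binary diagnostic metrics of the kind reported in the abstract."""
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    youden = sensitivity + specificity - 1  # Youden's index J
    return sensitivity, specificity, accuracy, youden

# Illustrative counts only (not taken from the paper):
sens, spec, acc, j = diagnostic_metrics(tp=44, fn=1, tn=48, fp=3)
```

The AUC, by contrast, is computed from the continuous classifier scores (e.g. by sweeping the decision threshold), not from a single confusion matrix.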

Introduction

Breast cancer is the most common cancer in women and causes approximately 508 000 deaths annually [1], [2]. The early diagnosis of patients with breast cancer is crucial to improve their prognosis and prolong their survival [3]. Differentiating between benign and malignant breast tumors is therefore of great value for the diagnosis and treatment of breast cancer. At present, there are two main methods for diagnosis of breast cancer: pathology and imaging. Pathology is the gold standard for diagnosis of breast cancer. However, it is invasive and thus not suitable for breast screening [4].

Ultrasound imaging technology has been recognized as the main method for early diagnosis of breast cancer because of its non-ionizing, non-invasive, and low-cost nature, as well as its capability of real-time dynamic imaging and imaging of dense breast tissue [5]. Traditional ultrasound imaging such as B-mode ultrasound provides useful information pertaining to the number, size, shape and boundary of a breast tumor [6], [7]. Shear-wave elastography (SWE) has emerged as an effective imaging tool for measurement of breast tissue elasticity and early detection of breast cancer, based on the fact that changes in breast tissue elasticity may precede morphological changes [8].

The SWE imaging system often provides dual-modal visualization of breast tumors consisting of both a B-mode image and an elastogram. However, the current diagnosis methods for differentiating benign and malignant breast tumors mainly use single modality, either B-mode or elastography, and combination of dual modalities for diagnosis is limited. In this paper, we focus on dual-modal discrimination between benign and malignant breast tumors by combining complementary diagnostic information provided by SWE and B-mode [9].

The diagnosis of breast tumors with ultrasound often relies on visual interpretation of ultrasound images by experienced radiologists, which is subjective, time-consuming, tedious, and also limits the diagnosis accuracy. Thus, it is desirable to develop approaches using artificial intelligence (AI) for more objectively, accurately, and efficiently interpreting dual-modal ultrasound images and distinguishing between malignant and benign breast tumors [7]. In this paper, we have proposed an AI-based architecture for breast tumor classification on dual-modal ultrasound. The contributions of this work are two-fold: (i) Contourlet-based texture features and morphological features have been extracted from dual-modal ultrasound images, following an improved tumor segmentation model with the reaction diffusion (RD) level set. (ii) Dual-modal features have been combined by using a deep learning method called the deep polynomial network (DPN) to facilitate feature learning and to yield accurate classification of breast tumors. To the best of our knowledge, this study is among the first to propose an AI-based architecture on dual-modal ultrasound namely SWE and B-mode for breast cancer diagnosis.
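The core idea of a polynomial network is that each layer combines linear projections of its input with element-wise products of those projections, so stacking layers produces learned features that are higher-degree polynomials of the raw inputs. The following is a minimal sketch of one such layer; the function name, weight shapes, and the feature dimension of 82 (taken from the methods section) are illustrative, not the authors' actual implementation.

```python
import numpy as np

def dpn_layer(X, W1, W2):
    """One polynomial-network layer (sketch): linear projections of the
    input plus element-wise products of pairs of projections, which
    raises the polynomial degree of the learned representation."""
    A = X @ W1                                 # first set of linear projections
    B = X @ W2                                 # second set of linear projections
    return np.concatenate([A, A * B], axis=1)  # degree-1 and degree-2 terms

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 82))   # 82 dual-modal features per tumor, as in the paper
W1 = rng.normal(size=(82, 16))
W2 = rng.normal(size=(82, 16))
H = dpn_layer(X, W1, W2)       # shape (5, 32): 16 linear + 16 quadratic features
```

In the paper's pipeline the output of such stacked layers is then fed to a conventional classifier to separate benign from malignant tumors.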

Section snippets

Materials and methods

Our method comprised four steps, as shown in Figure 1. First, the dual-modal ultrasound images were preprocessed for the subsequent analysis. Second, tumor segmentation was conducted on B-mode images with the reaction diffusion (RD) level set model combined with the Gabor-based anisotropic diffusion (GAD) algorithm, named RD-GAD, and the segmented tumor locations were then mapped back to the paired SWE images. Third, 82 quantitative features were extracted from B-mode and SWE
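The four-step pipeline can be sketched as follows. Every stage body here is a trivial stand-in, and all function names and signatures are hypothetical; the sketch only shows how the stages compose, not the authors' actual algorithms.

```python
import numpy as np

# Placeholder stages; names, signatures, and bodies are illustrative only.
def preprocess(img):
    return img / max(img.max(), 1e-9)          # stand-in: intensity normalisation

def segment_rd_gad(img):
    return img > img.mean()                    # stand-in for the RD-GAD level set

def extract_features(b_img, s_img, mask):
    # stand-in for the 82 contourlet texture and morphological features
    return np.array([b_img[mask].mean(), s_img[mask].mean(), mask.mean()])

def dpn_classify(feats):
    return "malignant" if feats.sum() > 1.5 else "benign"  # stand-in for DPN + classifier

def classify_breast_tumor(b_mode_img, swe_img):
    b_pre, s_pre = preprocess(b_mode_img), preprocess(swe_img)  # step 1: preprocessing
    mask = segment_rd_gad(b_pre)                                # step 2: segmentation on B-mode
    feats = extract_features(b_pre, s_pre, mask)                # step 3: feature extraction
    return dpn_classify(feats)                                  # step 4: feature learning + classification

rng = np.random.default_rng(1)
label = classify_breast_tumor(rng.random((64, 64)), rng.random((64, 64)))
```

The key design point is that segmentation is performed once on the B-mode image, where tumor boundaries are clearest, and the resulting mask is reused on the co-registered SWE image.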

Results of tumor segmentation

The proposed segmentation model RD-GAD was based on the RD algorithm and we compared the RD-GAD with the traditional RD for breast tumor segmentation.

Figure 3(a)–(d) shows segmentation results for a benign breast tumor. The border of the breast tumor was discontinuous and broken at the 4 to 5 o'clock position. The RD-GAD model extracted the broken edges more effectively and accurately than the traditional RD method, and its segmentation result was closer to the result of the manual segmentation. Figure 3
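The edge-preserving smoothing underlying the GAD step derives from classic Perona–Malik anisotropic diffusion, sketched below. The paper's variant steers the diffusion with Gabor filter responses; this sketch uses the plain gradient-based conductance instead, so it illustrates the principle rather than the authors' exact algorithm, and the parameter values are illustrative.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=20.0, dt=0.15):
    """Perona-Malik anisotropic diffusion: smooths speckle noise while
    preserving edges, since the conductance shrinks where gradients are large."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite-difference gradients toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conductance: near 0 across strong edges, near 1 in homogeneous regions
        c = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u
```

On a homogeneous region the update terms vanish, so flat areas are left untouched while speckle and weak edges are progressively smoothed.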

Discussion

The most important contribution of this work is the introduction of an AI architecture for breast cancer diagnosis with dual-modal ultrasound. This study used a deep learning algorithm, the DPN, to achieve more accurate, efficient and convenient classification. The dual-modal AI architecture was superior to all other compared methods and could improve the classification performance of breast tumors, which indicated that the AI-based technique could achieve results close to those of breast tumor biopsies and

Conclusion

In conclusion, we propose a dual-modal AI-based framework for diagnosis of breast tumors. The experimental results show that the dual-modal DPN was superior to all other frameworks, indicating that the AI architecture can assist in more effective and more convenient classification of breast tumors.

Conflict of interest

There is no conflict of interest.

Ethical approval

This was a retrospective study approved by the Institutional Review Board and informed consent of all patients was obtained.

Acknowledgments

The work was funded by the National Natural Science Foundation of China (Nos. 61671281, 81627804, and 61471231).

References (35)

  • Moon W.K. et al.

    Computer-aided tumor diagnosis using shear wave breast elastography

    Ultrasonics

    (2017)
  • Tsiotsios C. et al.

    On the choice of the parameters for anisotropic diffusion in image processing

    Pattern Recogn

    (2013)
  • Zhang Q. et al.

    Deep learning based classification of breast tumors with shear-wave elastography

    Ultrasonics

    (2016)
  • Song E.J. et al.

    Diagnostic performances of shear-wave elastography and B-mode ultrasound to differentiate benign and malignant breast lesions: the emphasis on the cutoff value of qualitative and quantitative parameters

    Clin Imaging

    (2018)
  • Breast cancer: prevention and control

    World Health Stat Ann

    (2012)
  • Wooster R. et al.

    Identification of the breast cancer susceptibility gene BRCA2

    Nature

    (1995)
  • Ciatto S. et al.

    T category and operable breast cancer prognosis

    Tumori

    (2015)