Predictive features for early cancer detection in Barrett's esophagus using Volumetric Laser Endomicroscopy

https://doi.org/10.1016/j.compmedimag.2018.02.007

Highlights

  • First study on CAD for cancer detection in VLE images.

  • CAD methods clearly outperform trained human experts.

  • Simple clinically-inspired features outperform established alternatives.

  • An optimal scan depth for cancer detection is identified.

  • Exhaustive benchmark of widely-used methods for comparison.

Abstract

The incidence of Barrett's cancer is increasing rapidly and current screening protocols often miss the disease at an early, treatable stage. Volumetric Laser Endomicroscopy (VLE) is a promising new tool for finding this type of cancer early, capturing a full circumferential scan of Barrett's Esophagus (BE) up to a depth of 3 mm. However, the interpretation of these VLE scans can be complicated, due to the large number of cross-sectional images and the subtle grayscale variations. Therefore, algorithms for automated analysis of VLE data can offer a valuable contribution to its overall interpretation. In this study, we broadly investigate the potential of Computer-Aided Detection (CADe) for the identification of early Barrett's cancer using VLE. We employ a histopathologically validated set of ex-vivo VLE images for evaluating and comparing a considerable set of widely-used image features and machine learning algorithms. In addition, we show that incorporating clinical knowledge in feature design leads to superior classification performance and additional benefits, such as low complexity and fast computation time. Furthermore, we identify an optimal tissue depth for classification of 0.5–1.0 mm, and propose an extension to the evaluated features that exploits this phenomenon, improving their predictive properties for cancer detection in VLE data. Finally, we compare the performance of the CADe methods with the classification accuracy of two VLE experts. With a maximum Area Under the Curve (AUC) in the range of 0.90–0.93 for the evaluated features and machine learning methods, versus an AUC of 0.81 for the medical experts, our experiments show that computer-aided methods can achieve considerably better performance than trained human observers in the analysis of VLE data.

Introduction

Patients suffering from gastric reflux over an extended period of time are prone to developing Barrett's Esophagus (BE). This is a condition in which the normal lining of the esophageal wall upwards from the gastroesophageal junction has been replaced by an acid-resistant cell type similar to that of the small intestine (Shaheen and Richter, 2009). It has been estimated that 5.6% of the adult population of the US suffers from BE (Hayeck et al., 2010) and, with obesity and smoking as risk factors for its development (Cook et al., 2012, Lagergren, 2011), a strong increase in its incidence has been observed in recent years (van Soest et al., 2005). Patients with BE have a more than thirty-fold increased risk of developing Esophageal Adenocarcinoma (EAC) (Solaymani-Dodaran et al., 2004). If this type of cancer is detected at an early stage, it can be removed endoscopically, leading to an excellent prognosis (Ell et al., 2007). Typically, patients with BE undergo regular endoscopic surveillance, in which the BE segment is examined and random biopsies are obtained to detect developing cancer (Reid et al., 2000). However, this surveillance protocol is not optimal, since early cancer is often missed due to its subtle appearance upon visual inspection and the biopsy sampling error (Peters et al., 2008, Corley et al., 2013). Hence, a considerable number of early cancerous lesions go unnoticed, so that the cancer is only detected at a later stage, for which the prognosis is substantially worse.

Volumetric Laser Endomicroscopy (VLE) offers an attractive solution for efficiently finding these early cancerous lesions in BE (Wolfsen et al., 2015). With VLE imaging, a balloon is inflated in the esophagus and a full circumferential scan of the esophageal wall is captured over a segment of 6 cm, up to a depth of 3 mm, using second-generation Optical Coherence Tomography (OCT) (Gonzalo et al., 2010). This unique capability allows the physician to analyze the underlying tissue layers, theoretically enabling better detection of early cancer. However, due to the large volume and subtle nature of the grayscale cross-sectional data, the interpretation of the VLE images remains a challenging task for gastroenterologists. Although a recent clinical prediction model has shown reasonable detection accuracy (Swager et al., 2016d), the identification of early cancer on VLE images using current criteria remains very complex (Swager et al., 2016b, Swager et al., 2016c). Hence, an automated system for the analysis of VLE scans would be highly desirable for supporting the physician. However, the applicability of computer-aided methods to the interpretation of VLE data remains largely unexplored. Therefore, in this study, we investigate the basic feasibility and the potential of computer-aided methods for the analysis of VLE imagery. We establish a benchmark employing a considerable set of widely-used image features and classification methods on a histopathologically-validated dataset of ex-vivo VLE images. In addition, we propose three clinically-inspired features based on a recent clinical prediction model: two completely novel features and a third derived from a widely-applied texture feature. To validate and compare the performance of the evaluated methods, we use a thorough validation procedure and compare the results to the classification performance of two VLE experts on the same set of VLE images.
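To make the role of scan depth concrete: since a VLE frame maps image rows to tissue depth, restricting analysis to a particular depth band reduces to a simple cropping operation. Below is a minimal sketch, assuming a hypothetical axial pixel pitch and tissue-surface row (neither value is taken from the study):

```python
# Sketch: cropping a VLE B-scan (rows = depth) to a given depth band
# below the tissue surface. Pixel pitch and surface row are assumed
# for illustration only.
import numpy as np

UM_PER_PIXEL = 5.0   # assumed axial resolution (micrometers per pixel row)
SURFACE_ROW = 40     # assumed row index of the tissue surface

def depth_window(scan: np.ndarray, top_mm: float, bottom_mm: float) -> np.ndarray:
    """Return the rows of `scan` lying between top_mm and bottom_mm of depth."""
    top = SURFACE_ROW + int(top_mm * 1000 / UM_PER_PIXEL)
    bottom = SURFACE_ROW + int(bottom_mm * 1000 / UM_PER_PIXEL)
    return scan[top:bottom, :]

scan = np.zeros((700, 1024))          # stand-in cross-sectional VLE frame
band = depth_window(scan, 0.5, 1.0)   # rows corresponding to 0.5-1.0 mm depth
```

Features would then be computed on `band` rather than on the full frame.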

The remainder of this paper is organized as follows. We first provide an overview of related work in Section 2 and continue with a comprehensive description of the employed methods in Section 3, in which we elaborate on the data acquisition (Sections 3.1 and 3.2), the evaluated features and classification methods (Sections 3.3 and 3.4) and a detailed description of our validation procedure (Sections 3.5–3.8). The results of this study are presented in Section 4 leading to our conclusions as outlined in Section 5.

Section snippets

Related work

Earlier work of Qi et al. (2010) has shown promising results on the detection of dysplasia using Endoscopic OCT (EOCT), achieving a detection accuracy of 84% and maximal Area Under the Curve (AUC) of 0.84. In contrast to VLE, EOCT is a probe-based system with a relatively small scanning surface (Rollins et al., 1998, Rollins et al., 1999). Qi et al. used the working channel of the endoscope for the EOCT probe and a suction cap for taking a biopsy, ensuring a histopathology correlation between
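For reference, both figures of merit cited above (detection accuracy and AUC) can be computed with scikit-learn; the labels and scores below are purely illustrative and not data from Qi et al.:

```python
# Sketch: accuracy at a fixed threshold and threshold-free AUC for a
# binary (dysplastic vs. non-dysplastic) classifier. Data is made up.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                     # histopathology labels
y_score = np.array([0.1, 0.65, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])   # classifier scores

auc = roc_auc_score(y_true, y_score)          # area under the ROC curve
acc = accuracy_score(y_true, y_score >= 0.5)  # accuracy at threshold 0.5
```

AUC is preferred for comparing methods because it does not depend on a particular operating threshold.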

Patients and image acquisition

The images used in this study are derived from a previously established database, consisting of correlated ex-vivo VLE images and histology slides (Swager et al., 2016a). In this subsection, we provide a short overview of the construction of this VLE-histology database (figures derived from (Swager et al., 2016a)). Two groups of patients were eligible for this study: patients with Non-Dysplastic Barrett's Esophagus (NDBE) undergoing surveillance endoscopy, and patients referred for work-up and

Image features

Table 3 presents the classification performance for all combinations of features and classification methods, where results with an AUC of 0.80 or higher are printed in boldface. From this table, we observe that the proposed clinically-inspired features outperform the state-of-the-art alternatives for all classification methods, except for Neural Networks. The maximum AUC for the clinically-inspired features is 0.90, which is achieved by LH features in combination with a linear
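The kind of grid that Table 3 summarizes, every feature set crossed with every classifier and scored by cross-validated AUC, can be sketched as follows; the feature vectors, labels, and classifier settings here are placeholder assumptions, not the study's actual pipeline:

```python
# Sketch of a feature-by-classifier benchmark scored by cross-validated
# AUC. The data and model choices are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16))      # stand-in feature vectors per VLE region
y = rng.integers(0, 2, size=60)    # stand-in histopathology labels

classifiers = {
    "linear SVM": SVC(kernel="linear"),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
# Mean AUC over 5 stratified folds, one entry per classifier
results = {name: cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
           for name, clf in classifiers.items()}
```

In the study, each feature set would supply its own `X` and the same protocol would be repeated per combination, yielding one AUC cell per table entry.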

Discussion and conclusions

In this study, we have investigated the use of computer-aided methods for the automated analysis of Volumetric Laser Endomicroscopy (VLE) to detect early Barrett's cancer. We have evaluated commonly used image analysis features (e.g. Local Binary Patterns, Histogram of Oriented Gradients and Gray-Level Co-occurrence Matrix features), in combination with popular classification methods (e.g. Support Vector Machine, Random Forest and Neural Networks). In this evaluation, we have included features that

Conflict of interest

The authors have no conflicts of interest to declare.

Acknowledgments

The authors gratefully acknowledge the support from NinePoint Medical (NinePoint Medical Inc., Bedford, MA, USA), who granted us the permission to use the VLE scans for this work.

References (39)

  • D.A. Corley et al., Impact of endoscopic surveillance on mortality from Barrett's esophagus-associated esophageal adenocarcinomas, Gastroenterology (2013)

  • C. Cortes et al., Support-vector networks, Mach. Learn. (1995)

  • N. Dalal et al., Histograms of oriented gradients for human detection

  • J.D.J. Deng et al., ImageNet: a large-scale hierarchical image database, 2009 IEEE Conf. Comput. Vis. Pattern Recognit. (2009)

  • I. Fogel et al., Gabor filters as texture discriminator, Biol. Cybern. (1989)

  • Y. Freund et al., Computational learning theory, Comput. Learn. Theory (1995)

  • M.J. Gora et al., Tethered capsule endomicroscopy enables imaging of gastrointestinal tract microstructure, Nat. Med. (2013)

  • R.M. Haralick et al., Textural features for image classification, IEEE Trans. Syst. Man Cybern. Syst. (1973)

  • T.J. Hayeck et al., The prevalence of Barrett's esophagus in the US: estimates from a simulation model confirmed by SEER data, Dis. Esophagus (2010)