Original Research
Deep learning outperformed 136 of 157 dermatologists in a head-to-head dermoscopic melanoma image classification task

https://doi.org/10.1016/j.ejca.2019.04.001
Open access under a Creative Commons license

Highlights

  • A convolutional neural network (CNN) received enhanced training with 12,378 open-source dermoscopic images.

  • In a head-to-head comparison, the CNN outperformed 136 of 157 participating dermatologists.

  • The CNN outperformed dermatologists across all hierarchical subgroups (from junior to chief physicians) in dermoscopic melanoma image classification.

Abstract

Background

Recent studies have demonstrated dermatologist-level classification of suspicious lesions by deep-learning algorithms trained on extensive proprietary image databases and benchmarked against limited numbers of dermatologists. For the first time, the performance of a deep-learning algorithm trained exclusively on open-source images is compared with that of a large number of dermatologists covering all levels of the clinical hierarchy.

Methods

We used enhanced deep-learning methods to train a convolutional neural network (CNN) with 12,378 open-source dermoscopic images. We used 100 images to compare the performance of the CNN with that of 157 dermatologists from 12 German university hospitals. Outperformance of the dermatologists by the deep neural network was measured in terms of sensitivity, specificity and receiver operating characteristic (ROC) analysis.
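
As an illustration only (not the authors' code), the following sketch shows how such a reader-versus-algorithm comparison can be set up in Python with NumPy and scikit-learn; the array names (y_true, cnn_scores, derm_calls) and the random placeholder data are hypothetical stand-ins for the 100-image test set, the CNN's melanoma probabilities and the dermatologists' binary calls.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Hypothetical placeholders for the 100-image test set: binary ground truth
    # (1 = melanoma), CNN melanoma probabilities and each dermatologist's
    # binary call (rows = 157 readers, columns = 100 images).
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=100)
    cnn_scores = rng.random(100)
    derm_calls = rng.integers(0, 2, size=(157, 100))

    # ROC curve of the CNN across all decision thresholds.
    fpr, tpr, thresholds = roc_curve(y_true, cnn_scores)
    print("CNN ROC AUC:", roc_auc_score(y_true, cnn_scores))

    # Per-reader sensitivity (true-positive rate) and specificity
    # (true-negative rate) from the binary calls.
    pos, neg = y_true == 1, y_true == 0
    sens = (derm_calls[:, pos] == 1).mean(axis=1)
    spec = (derm_calls[:, neg] == 0).mean(axis=1)

    # CNN specificity at the dermatologists' mean sensitivity: take the first
    # ROC point whose TPR reaches that sensitivity and report 1 - FPR there.
    target_sens = sens.mean()
    idx = np.argmax(tpr >= target_sens)
    print(f"CNN specificity at sensitivity {target_sens:.1%}: {1 - fpr[idx]:.1%}")

Reading off the CNN's sensitivity at a fixed specificity works the same way, with the roles of the true-positive and false-positive rates swapped.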

Findings

The dermatologists achieved a mean sensitivity of 74.1% (range 40.0%–100%) and a mean specificity of 60.0% (range 21.3%–91.3%) with dermoscopic images. At the dermatologists' mean sensitivity of 74.1%, the CNN exhibited a mean specificity of 86.5% (range 70.8%–91.3%). At the dermatologists' mean specificity of 60.0%, the CNN achieved a mean sensitivity of 87.5% (range 80%–95%). Among the dermatologists, the chief physicians showed the highest mean specificity of 69.2% at a mean sensitivity of 73.3%. At the same specificity of 69.2%, the CNN had a mean sensitivity of 84.5%.
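
A head count such as "136 of 157" can be obtained, for example, by checking whether each dermatologist's operating point (sensitivity, specificity) lies below the CNN's ROC curve; the sketch below follows that assumed rule, which is not necessarily the exact procedure used in the study, and reuses fpr, tpr, sens and spec from the previous snippet.

    # A reader counts as outperformed when the CNN, operating at the reader's
    # sensitivity, reaches a strictly higher specificity, i.e. the reader's
    # (sensitivity, specificity) point lies below the CNN's ROC curve.
    def cnn_specificity_at(sensitivity, fpr, tpr):
        idx = np.argmax(tpr >= sensitivity)
        return 1.0 - fpr[idx]

    outperformed = sum(
        cnn_specificity_at(s, fpr, tpr) > sp for s, sp in zip(sens, spec)
    )
    print(f"CNN outperformed {outperformed} of {len(sens)} dermatologists")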

Interpretation

A CNN trained exclusively on open-source images outperformed 136 of 157 dermatologists, as well as dermatologists at every level of experience (from junior to chief physicians), in terms of average sensitivity and specificity.

Keywords

Melanoma
Skin cancer
Artificial intelligence

1 These authors contributed equally to this work.

2 These collaborators are listed in the acknowledgement section.