Abstract
Effective and rapid detection of lesions in the Gastrointestinal (GI) tract plays a critical role in how fast gastroenterologists can respond to life-threatening diseases. Capsule Endoscopy (CE) has revolutionized the traditional endoscopy procedure by allowing gastroenterologists to visualize the entire GI tract non-invasively. Once the tiny capsule is swallowed, it captures a sequence of images as it is propelled down the GI tract. A single video can last up to 8 h, producing between 30,000 and 100,000 images. Automating the detection of frames containing specific lesions in a CE video would relieve gastroenterologists of the arduous task of reviewing the entire video before making a diagnosis. Convolutional Neural Network (CNN) based models have been very successful in various image classification tasks. However, they suffer from excessive parameters, are sample-inefficient, and rely on very large amounts of training data. Deploying a CNN classifier for the lesion detection task would require periodic fine-tuning to generalize to any unforeseen category. In this paper, we propose a meta-learning framework for few-shot lesion recognition in CE video. The meta-learning framework is designed to establish similarity or dissimilarity between concepts, while few-shot learning (FSL) aims to identify new concepts from only a small number of examples. We train a feature extractor to learn a representation for different small bowel lesions using meta-learning. At the testing stage, the category of an unseen sample is predicted from only a few support examples, thereby allowing the model to generalize to a new category that has never been seen before. We demonstrated the efficacy of this method on real patient CE images. We conducted experiments to evaluate the impact of the number of support samples and compared performance across multiple CNN networks. Our experiments showed that this approach performs competitively with baseline models and is effective for few-shot lesion recognition in CE images.
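The testing-stage procedure described in the abstract — predicting the category of an unseen sample from a few support examples embedded by a trained feature extractor — can be sketched as a nearest-prototype classifier. This is an illustrative sketch only: the function name, the use of class-mean prototypes, and the Euclidean distance metric are assumptions for exposition, not the paper's exact method.

```python
import numpy as np

def few_shot_predict(support_emb, support_labels, query_emb):
    """Classify a query embedding by its nearest class prototype.

    support_emb: (n_support, d) embeddings of the few labeled support
    samples, produced by the meta-learned feature extractor.
    support_labels: (n_support,) integer class labels.
    query_emb: (d,) embedding of the unseen sample.
    """
    classes = np.unique(support_labels)
    # Prototype = mean embedding of each class's support samples.
    prototypes = np.stack(
        [support_emb[support_labels == c].mean(axis=0) for c in classes]
    )
    # Assign the query to the class whose prototype is nearest.
    dists = np.linalg.norm(prototypes - query_emb, axis=1)
    return classes[np.argmin(dists)]
```

Because prediction reduces to comparing embeddings against a handful of support examples, adding a new lesion category only requires a few labeled images of it, with no retraining of the feature extractor.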
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Adewole, S. et al. (2022). Lesion2Vec: Deep Meta Learning for Few-Shot Lesion Recognition in Capsule Endoscopy Video. In: Arai, K. (eds) Proceedings of the Future Technologies Conference (FTC) 2021, Volume 2. FTC 2021. Lecture Notes in Networks and Systems, vol 359. Springer, Cham. https://doi.org/10.1007/978-3-030-89880-9_57
DOI: https://doi.org/10.1007/978-3-030-89880-9_57
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-89879-3
Online ISBN: 978-3-030-89880-9
eBook Packages: Intelligent Technologies and Robotics (R0)