A Comprehensive Guide to Artificial Intelligence in Endoscopic Ultrasound

Background: Endoscopic ultrasound (EUS) is widely used for the diagnosis of bilio-pancreatic and gastrointestinal (GI) tract diseases, for the evaluation of subepithelial lesions, and for the sampling of lymph nodes and solid masses located next to the GI tract. The role of artificial intelligence (AI) in healthcare is growing. This review aims to provide an overview of the current state of AI in EUS, from imaging to pathological diagnosis and training. Methods: AI algorithms can assist in lesion detection and characterization in EUS by analyzing EUS images and identifying suspicious areas that may require further clinical evaluation or biopsy sampling. Deep learning techniques, such as convolutional neural networks (CNNs), have shown great potential for tumor identification and subepithelial lesion (SEL) evaluation by extracting important features from EUS images and using them to classify or segment the images. Results: AI models with new features can increase the accuracy of diagnoses, provide faster diagnoses, identify subtle differences in disease presentation that may be missed by human eyes, and provide more information and insights into disease pathology. Conclusions: The integration of AI in EUS images and biopsies has the potential to improve diagnostic accuracy, leading to better patient outcomes and to a reduction in repeated procedures in the case of non-diagnostic biopsies.


Introduction
Endoscopic ultrasound (EUS) has revolutionized the field of gastrointestinal (GI) endoscopy by providing high-resolution imaging of the gastrointestinal tract and adjacent anatomical structures. EUS has been widely used for the diagnosis of bilio-pancreatic diseases, staging of GI tract tumors, evaluation of subepithelial lesions, and sampling of lymph nodes and solid masses [1]. EUS-guided fine-needle aspiration (FNA) and biopsy (FNB) have enabled the diagnosis of various malignancies and have greatly improved patient outcomes [2].
However, the accuracy of EUS-guided FNA and FNB largely depends on the skills and experience of the endoscopist and of the pathologist. In recent years, artificial intelligence (AI) has emerged as a promising tool for improving the accuracy and efficiency of EUS-guided tissue sampling and pathological diagnosis [3].
AI refers to the use of computer algorithms to analyze large amounts of data and identify patterns or make predictions. In healthcare, AI has been applied to various tasks, including image recognition, natural language processing, and clinical decision-making [4]. AI algorithms can analyze EUS images and assist with the interpretation of findings, as well as predict the pathological diagnosis of tissue samples obtained by EUS-guided FNA and FNB.
The objective of this review article is to provide an overview of the current state of AI in EUS imaging and pathology on the final pathological diagnosis. We will discuss the various AI techniques used for EUS image analysis and pathological diagnosis, their strengths and limitations, and their potential impact on clinical practice. A section is dedicated to the application of AI in training program to improve the knowledge of EUS and the recognition of anatomical structures.

AI Algorithms and Image Acquisition
The use of artificial intelligence in EUS image interpretation has shown great potential for improving the accuracy and efficiency of the diagnostic process. AI algorithms can be divided into two main categories: deep learning techniques and machine learning techniques.
Deep learning techniques involve the use of neural networks to learn and recognize patterns in EUS images ( Figure 1). These networks are composed of multiple layers of interconnected nodes that allow for the processing of large amounts of data. Convolutional neural networks (CNNs) are a commonly used deep learning technique for image analysis in healthcare [5]. They are designed to identify and extract important features from EUS images and use them to classify or segment the images.
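The feature-extraction step performed by a CNN's convolutional layers can be illustrated with a minimal sketch. The toy "image" and the Sobel-style kernel below are purely illustrative assumptions, not part of any cited system:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a grayscale image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "image": dark left half, bright right half (a vertical boundary).
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# Sobel-like kernel that responds strongly to vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

fmap = conv2d(img, sobel_x)
print(fmap)  # large values only where the boundary lies
```

In a trained CNN, many such kernels are learned from data rather than hand-designed, and their stacked responses form the features used for classification or segmentation.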
By combining recognized EUS-image features for pancreatic lesion diagnosis with measurements of non-Euclidean anatomical features, significant progress can be made in distinguishing diverse sub-types (A, B, and C) that have varying outcomes. The utilization of fractal geometry, specifically the surface fractal dimension, as a measure of the space-filling property of an irregularly shaped structure, can be effectively merged as a feature within an AI-based neural network classification system, to achieve a more precise anatomical classifier. Machine learning techniques, on the other hand, involve the use of algorithms that can learn from data and make predictions based on that learning. These techniques can be supervised, unsupervised, or semi-supervised. Supervised learning involves the use of labelled data to train an algorithm to recognize patterns in EUS images.
Unsupervised learning involves the use of unlabeled data to discover patterns and relationships in the data. Semi-supervised learning combines both supervised and unsupervised learning [6]. In addition to image interpretation, AI can also be used to enhance the acquisition of EUS images. This includes automatic segmentation and image quality improvement.
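The difference between supervised and unsupervised learning can be sketched with a toy example. The 1-D "echotexture" feature values and the benign/malignant labels below are invented purely for illustration:

```python
import numpy as np

# Hypothetical 1-D echotexture features extracted from six EUS images.
features = np.array([0.1, 0.2, 0.15, 0.9, 0.85, 0.95])

# --- Supervised: labels are known; learn class centroids, then predict.
labels = np.array([0, 0, 0, 1, 1, 1])  # 0 = benign, 1 = malignant (toy labels)
centroids = np.array([features[labels == c].mean() for c in (0, 1)])

def predict(x):
    """Assign a new case to the nearest class centroid."""
    return int(abs(x - centroids[1]) < abs(x - centroids[0]))

print(predict(0.8))  # closer to the "malignant" centroid -> 1

# --- Unsupervised: no labels; discover two clusters with a tiny k-means.
centers = np.array([features.min(), features.max()])
for _ in range(10):
    assign = np.abs(features[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([features[assign == k].mean() for k in (0, 1)])
print(assign)  # the same two groups emerge without any labels
```

Semi-supervised methods combine the two: a small labelled set anchors the classes while a larger unlabelled set refines the decision boundary.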
Automatic segmentation involves the use of AI algorithms to identify and separate different structures in EUS images [7]. This can help to improve the accuracy and efficiency of EUS-guided procedures by providing better visualization of the target area. For example, AI algorithms can be used to automatically segment the pancreas or the lymph nodes in EUS images, allowing for more precise targeting during EUS-guided biopsy [8].
Image quality improvement involves the use of AI algorithms to enhance the clarity and resolution of EUS images. This can help to improve the accuracy of image interpretation and diagnosis. For example, AI algorithms can be used to reduce noise, improve contrast, and sharpen edges in EUS images. AI-enhanced image quality can also help to reduce the variability in image quality between different endoscopists and ultrasound machines, improving the consistency of diagnosis and treatment [9].
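Classic operations of this kind can be sketched in a few lines. The median filter and linear contrast stretch below are generic image-processing building blocks chosen for illustration, not the proprietary algorithms of any EUS platform:

```python
import numpy as np

def median_denoise(img):
    """3x3 median filter: suppresses speckle-like impulse noise."""
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

def contrast_stretch(img):
    """Linearly rescale intensities to the full [0, 1] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

img = np.full((5, 5), 0.5)
img[2, 2] = 1.0                # a single bright noise spike
clean = median_denoise(img)
print(clean[2, 2])             # spike replaced by the neighborhood median
```

Deep-learning-based enhancement replaces these fixed filters with learned mappings, but the goal is the same: less noise and more consistent contrast across machines and operators.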

Tumor Identification
Tumor identification is a crucial step in the diagnosis and staging of GI neoplasia, especially in the case of pancreatic cancers, which may be isoechoic with the surrounding parenchyma or may be hidden by signs of chronic pancreatitis. AI algorithms can assist with tumor identification by analyzing EUS images and identifying suspicious areas that may require biopsy sampling and microscopy observation or further clinical evaluation [10]. Deep learning techniques, such as CNNs, have shown great potential for tumor identification by extracting important features from EUS images and using them to classify or segment the images [11]. Supervised machine learning techniques can also be used to train AI algorithms to recognize specific tumor features, such as shape, size, and vascularity [12,13]. A recent meta-analysis of 10 studies, which involved 1871 patients, evaluated the diagnostic accuracy of AI applied to EUS in detecting pancreatic cancer (Table 1). The results showed that AI had a high diagnostic sensitivity of 0.92 and specificity of 0.90, with an area under the summary receiver operating characteristics (SROC) curve of 0.95 and a diagnostic odds ratio of 128.9 [14]. These findings suggest that AI-assisted EUS could become an essential tool for the computer-aided diagnosis of pancreatic cancer. However, the relatively small number of studies and enrolled patients makes generalization difficult, and further research is needed to validate these results on a larger scale.
It is known that there are several advantages to incorporating new features in classifying pathological changes using AI, among them the following:
(a) Increased accuracy: the addition of new features to AI models can increase their accuracy in diagnosing and classifying pathological changes.
(b) Faster diagnosis: AI models with new features can analyze large amounts of data quickly and accurately, allowing for early diagnosis and better patient outcomes with reduced healthcare costs.
(c) Personalized treatment: AI models with new features can identify subtle differences in disease presentation that may be missed by human eyes, leading to treatment plans tailored to the specific needs of each patient.
(d) Improved decision-making: AI models with new features can provide clinicians with more information and insights into disease pathology, allowing them to make more informed decisions about patient care.
(e) Scalability: AI models can be easily scaled to analyze large datasets or to monitor patient health over time, which can improve population health management and disease surveillance.
Diseases of various origins, such as inflammatory disorders, tumors, and functional diseases, can result in changes in structural complexity and dynamic activity patterns. One way to quantify this structural complexity is by measuring the fractal dimension, among other parameters.
The human body is composed of intricate systems and networks, including its most complex structures. It is now widely accepted that the architecture of anatomical entities and their activities exhibit non-Euclidean properties. Natural fractals, including those found in anatomy, possess four distinct characteristics: (a) irregular shape, (b) statistical self-similarity, (c) non-integer or fractal dimension, and (d) scaling properties that depend on the scale of measurement. As anatomical structures do not conform to regular Euclidean shapes, their dimensions are expressed as non-integer values between two integer topological dimensions [17]. Fractal geometry has been shown to be useful in evaluating the geometric complexity of anatomic and imaging patterns observed in both benign and malignant masses (Figure 1).
Recently, Carrara et al. introduced a new estimator, called the surface fractal dimension, to evaluate the complexity of EUS-elastography images in differentiating solid pancreatic lesions [18]. The study showed that the surface fractal dimension can distinguish malignant tumors from neuroendocrine tumors (NETs), the unaffected tissues surrounding malignant tumors from those surrounding NETs, and NETs from inflammatory lesions. This study highlights the importance of incorporating fractal analysis into AI algorithms for the diagnosis and categorization of the diverse array of pancreatic lesions.
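A common way to estimate a fractal dimension from a binary image, such as a segmented lesion mask, is box counting. The sketch below is a generic estimator for illustration, not the specific surface-fractal-dimension implementation of Carrara et al.:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a binary mask.

    Counts the number N(s) of s-by-s boxes containing any foreground pixel;
    the dimension is the slope of log N(s) versus log(1/s).
    """
    counts = []
    for s in sizes:
        n = mask.shape[0] // s
        boxes = mask[:n * s, :n * s].reshape(n, s, n, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is 2-D, a straight line is 1-D.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
print(round(box_counting_dimension(square), 2))  # ~2.0
print(round(box_counting_dimension(line), 2))    # ~1.0
```

An irregular lesion boundary yields a non-integer value between these two extremes, and that value can be fed into a classifier as an extra feature alongside conventional EUS-image features.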

Subepithelial Lesion Evaluation
Subepithelial lesions (SELs) are a common indication for EUS, and their diagnosis can be challenging. AI algorithms can assist with SEL evaluation by analyzing EUS images and identifying suspicious lesions that may require biopsy. Deep learning techniques, such as CNNs, have shown great potential for SEL evaluation by extracting important features from EUS images and using them to classify or segment the images. The study by Hirai et al. suggests that an AI system has higher diagnostic performance than experts in differentiating SELs on EUS images [19]. The AI system's accuracy for classifying five different types of SELs was 86.1%, significantly better than that of all endoscopists. In particular, the sensitivity and accuracy of the AI system for detecting gastrointestinal stromal tumors (GISTs) were higher than those of all endoscopists. These findings suggest that AI technology can be a valuable tool to assist in the diagnosis of SELs on EUS images and may help improve clinical decision-making [19].

Diagnostic Accuracy
The diagnostic accuracy of AI algorithms may be affected by several factors, such as the quality of the EUS images, the size and location of the lesion, and the expertise of the endoscopists and the pathologists. The results of previous studies have been controversial. In a recent meta-analysis, Xiao et al. identified seven studies to assess the diagnostic accuracy of AI-based EUS in distinguishing GISTs from other SELs [20]. The combined sensitivity and specificity of AI-based EUS were 0.93 and 0.78, respectively, with an overall diagnostic odds ratio of 36.74 and an area under the summary receiver operating characteristic curve (AUROC) of 0.94. These results suggest that AI-based EUS has high diagnostic ability in differentiating GISTs from other SELs and could set a premise for extending AI-based diagnostic capabilities to other diseases evaluated under EUS.

Clinical Impact and Limitations
The clinical impact of AI algorithms for lesion detection and characterization in EUS-guided pathological diagnosis is still under investigation. However, studies have reported that AI algorithms can improve the accuracy and efficiency of EUS-guided procedures, reduce the need for unnecessary biopsies, and assist with treatment planning [21]. AI algorithms can also help to reduce inter-observer variability in lesion detection and characterization between different endoscopists [22]. However, the implementation of AI algorithms in clinical practice may be limited by several factors, such as the availability and cost of AI software, the need for specialized training, concerns about data privacy and security, and, most importantly, the need for larger studies to establish the accuracy of such systems.

Digital Histopathological Diagnosis
The advancement of digital pathology has revolutionized the field of pathology by enabling the acquisition, management, and interpretation of pathological information in a digital format. This transition has been fueled by advances in whole slide imaging (WSI) technology, which allows for the digitization of glass slides at high resolution [23]. The adoption of digital pathology offers numerous advantages, such as improved efficiency, reduced turnaround times, remote consultation, and easy access to archived cases [23]. The ability to store images from tissue acquisition, as already happens with radiological imaging, lays the foundation for sharing and exploiting a wealth of knowledge from pathological anatomy. Moreover, it sets the stage for the application of AI algorithms to facilitate and enhance diagnostic accuracy in the field of EUS.
WSI involves the scanning of entire histological glass slides to create high-resolution digital images. These digital images can be zoomed in or out and navigated as easily as a glass slide under a microscope. WSI technology has been instrumental in overcoming the challenges of data management in digital pathology, as it allows for efficient storage, retrieval, and sharing of massive amounts of image data [23]. Furthermore, WSI facilitates the standardization of image quality and provides an ideal platform for the application of AI algorithms to analyze the digital images, thereby supporting the development of novel diagnostic tools in endoscopic ultrasound.
In the context of histopathological image analysis, CNNs can be trained to automatically detect and classify the multifarious tissue structures, cellular patterns, and pathological alterations. CNNs are composed of multiple layers of interconnected "neurons", including convolutional, pooling, and fully connected layers. The hierarchical structure of CNNs allows them to learn complex, high-level features from raw image data, thereby making them particularly suitable for the analysis of intricate histopathological images in endoscopic ultrasound [24][25][26].
In addition to CNNs, other machine learning techniques have been employed in histopathological image analysis for endoscopic ultrasound. These include support vector machines (SVM), random forests, and decision trees, among others. These algorithms can be used to extract and analyze peculiar features from histopathological images, such as texture, shape, and color. By leveraging the strengths of multiple machine learning techniques, ensemble models can be created to improve overall performance and address potential limitations of individual algorithms [27].
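The ensemble idea can be sketched as a majority vote over simple feature-based classifiers. The feature names, thresholds, and values below are invented purely for illustration; real systems would use trained SVMs, random forests, or decision trees on measured texture, shape, and color features:

```python
# Each toy "classifier" thresholds one handcrafted feature (values invented).
def texture_clf(case):
    return case["texture_entropy"] > 0.5

def shape_clf(case):
    return case["irregularity"] > 0.4

def color_clf(case):
    return case["echo_heterogeneity"] > 0.6

def ensemble_predict(case):
    """Majority vote over the individual feature-based classifiers."""
    votes = [texture_clf(case), shape_clf(case), color_clf(case)]
    return sum(votes) >= 2  # True = "suspicious"

case = {"texture_entropy": 0.7, "irregularity": 0.3, "echo_heterogeneity": 0.8}
print(ensemble_predict(case))  # two of three votes -> True
```

The vote lets a strong signal in two feature families outweigh a weak or noisy third, which is the practical benefit of combining heterogeneous algorithms.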
The integration of AI algorithms, particularly CNNs, into the field of EUS has the potential to revolutionize the diagnosis and management of various GI tract disorders. As research progresses and these techniques become more refined, AI-based tools are expected to play an increasingly prominent role in the field of EUS and digital pathology.

AI Applications in Anatomical Pathology
Tumor grading and staging are critical steps in the management of GI malignancies. Accurate tumor grading and staging are necessary for determining the appropriate treatment plan and predicting patient outcomes. AI algorithms can aid in tumor grading and staging by analyzing EUS images and identifying features that correspond to different tumor stages and grades (Figure 1). Deep learning techniques, such as CNNs, can identify subtle differences in tissue structure and morphology that may not be apparent to the human eye. For example, CNNs can potentially analyze EUS images of pancreatic cancer and differentiate between early-stage and advanced-stage tumors based on changes in tissue texture and vascularity [15,16,28]. AI algorithms can also predict the presence of lymph node metastasis by analyzing EUS images and identifying characteristic features, such as size, shape, and echogenicity (Table 2). In a study by Săftoiu et al., contrast-enhanced harmonic EUS (CEH-EUS) with time-intensity curve (TIC) analysis and artificial neural network (ANN) processing were used to differentiate pancreatic carcinoma (PC) and chronic pancreatitis (CP) cases [29]. Parameters obtained through TIC analysis were able to differentiate between PC and CP cases and showed good diagnostic results in an automated computer-aided diagnostic system.
Prognostic and predictive biomarker analysis is essential for predicting patient outcomes and determining the most appropriate treatment plan. Prognostic biomarkers are associated with patient outcomes, such as survival or recurrence, while predictive biomarkers are associated with response to specific therapies [30]. Kurita et al. investigated the diagnostic ability of carcinoembryonic antigen (CEA), cytology, and AI using cyst fluid in differentiating malignant from benign pancreatic cystic lesions [31].
AI using deep learning showed higher sensitivity and accuracy in differentiating malignant from benign pancreatic cystic lesions than CEA and cytology.
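The kind of parameters extracted in TIC analysis (peak enhancement, time to peak, wash-out slope) can be sketched on a synthetic curve. The curve below is an invented wash-in/wash-out shape for illustration, not data from the cited study:

```python
import numpy as np

# Hypothetical time-intensity curve: contrast intensity sampled once per second.
t = np.arange(0, 60, 1.0)
intensity = 100.0 * (t / 8.0) * np.exp(1.0 - t / 8.0)  # synthetic wash-in/wash-out

peak = intensity.max()                 # peak enhancement
time_to_peak = t[intensity.argmax()]   # time of maximal enhancement
auc = float(intensity.sum())           # crude area under the curve (dt = 1 s)

# Wash-out slope: linear fit on the descending limb after the peak.
post = t >= time_to_peak
washout_slope, _ = np.polyfit(t[post], intensity[post], 1)

print(time_to_peak, round(peak, 1), washout_slope < 0)  # 8.0 100.0 True
```

Scalar parameters of this kind form a compact feature vector that an ANN can use to separate lesion types, as in the Săftoiu et al. approach.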

Integrating AI in EUS-Guided Tissue Acquisition
EUS-guided fine needle aspiration (EUS-FNA) and EUS-guided fine needle biopsy (EUS-FNB) are commonly used techniques for obtaining tissue samples for pathological diagnosis. The accuracy of EUS-guided tissue acquisition largely depends on the skills and experience of the operator, and the quality and size of the tissue samples obtained can vary. The integration of AI in EUS-guided tissue acquisition has the potential to improve the accuracy and efficiency of the procedure, leading to better patient outcomes (Table 3). In a study by Inoue et al., an automatic visual inspection method based on supervised machine learning was proposed to assist rapid on-site evaluation (ROSE) of EUS-FNA biopsies. The proposed method was effective in assisting on-site visual inspection of cellular tissue in ROSE for EUS-FNA, indicating highly probable areas containing tumor cells [33].
Hashimoto et al. evaluated the diagnostic performance of their computer-aided diagnosis system using deep learning in EUS-FNA cytology of pancreatic ductal adenocarcinoma. The deep learning system showed promising results in improving diagnostic performance by step-by-step learning, with higher training volume and more efficient system development required for optimal CAD performance in ROSE of EUS-FNA cytology [32].
Ishikawa et al. developed a new AI-based method for evaluating EUS-FNB specimens in pancreatic diseases using deep learning and contrastive learning. The AI-based evaluation method using contrastive learning was comparable to macroscopic on-site evaluation (MOSE) performed by EUS experts and can be a novel objective evaluation method for EUS-FNB [21].
AI algorithms can potentially assist in EUS-FNA and EUS-FNB by providing real-time feedback to the endoscopist during the procedure. AI algorithms can analyze EUS images in real time and provide guidance on the optimal location and depth of needle insertion, as well as feedback on the quality of the tissue sample obtained. AI algorithms can also assist in the selection of the appropriate needle size and type based on the characteristics of the target lesion, such as diameter and location. Defining how far AI can actually reduce variability in an endoscopic procedure will be central to translating these systems into improved procedural outcomes [3].

Clinical Validation of AI-Enhanced Pathological Diagnosis
The accuracy of AI-enhanced pathological diagnosis has not yet been thoroughly evaluated in the EUS setting. Traditional pathology approaches have been crucial in diagnosing diseases. Additionally, several qualitative and semi-quantitative grading and staging scoring systems have been widely adopted, albeit with well-recognized limitations. Semi-quantitative scores are not "measures" but only "labels" of severity [34]. An observer assigns semi-quantitative scores to tissue changes based on predefined morphologic criteria. These scores are whole numbers and are less precise than quantitative scores because they only approximate relative changes. However, the advantage of semi-quantitative scoring is that it can be applied to both macroscopic and microscopic tissue changes, generating robust data that can be statistically analyzed and used to evaluate experimental groups [35].
However, the development of digital pathology and AI solutions has allowed for more quantitative pathologic assessments, which are particularly useful in translational research [36]. These approaches provide invaluable opportunities for biomarker discovery and patient selection, aiding in the identification of optimal treatment regimens based on patient profiles [37]. Despite these benefits, challenges still exist in implementing AI-based methods in clinical settings. In the endoscopic setting specifically, with the ever-growing expansion of AI into clinical practice, many clinicians expect that integrating datasets and patient profiles will enhance the pathological diagnosis of a given disease, but such systems do not yet exist.
The clinical impact of AI-enhanced pathological diagnosis has yet to be fully realized. However, the potential benefits include improved diagnostic accuracy, reduced inter-observer variability, and more efficient use of healthcare resources. AI-enhanced pathological diagnosis may also lead to the development of new biomarkers and treatment strategies for gastrointestinal malignancies. Limitations to the use of AI in pathological diagnosis exist: the accuracy of AI algorithms depends on the quality and quantity of the data used for training, and their development requires large amounts of data, which may be difficult to obtain for rare or uncommon gastrointestinal malignancies. Additionally, the use of AI in pathological diagnosis may raise ethical concerns regarding the role of technology in healthcare decision-making, an area that remains largely unexplored amid AI's massive incorporation into healthcare.

EUS Training
Learning EUS requires time and practice in a high-volume center, with an experienced endosonographer as teacher. This is because trainees must learn not only endoscopy but also acquire excellent knowledge of ultrasound anatomy and of the diseases of each anatomical region [38].
As in other areas, in EUS AI ideally aims to improve the quality of the examination, helping to distinguish the type of lesions found, reducing procedural times, and providing real-time decision support and guidance in the execution of operative procedures [39]. It could also support the training of beginners, speeding up the learning process and reducing the need for a mentor. Furthermore, AI could provide quality control, standardizing performance between trainees and experts (Table 4). Often one of the first obstacles for the trainee approaching EUS is the recognition of anatomical structures, as these are visualized from an unusual perspective that varies according to the position of the endoscope and the station being examined. At ESGE Days 2021, a pilot study was awarded "best procedural innovation of the year"; it proposed an AI system based on two convolutional neural networks that recognizes anatomical structures in both radial and linear EUS. This software had already achieved a recognition accuracy of 85% during the development phase [43].
Similarly, another study proposed the use of a CNN consisting of two branches, one for voice data and one for image data. EUS image labels were assigned based on simple verbal inputs indicating anatomical landmarks provided by experienced operators during the procedures [42]. The prediction accuracy after the first system training reached 76% at the image level on a dataset with five different labels. Moreover, voice tagging, instead of manual annotation, saves considerable time [42]. These results are encouraging from the point of view of providing support to beginners; however, data on actual improvement in the learning curve are scarce [42].
The BP MASTER (pancreaticobiliary master) system was specifically designed by a joint collaboration (between Renmin Hospital of Wuhan University, Wuhan Union Hospital of Huazhong University of Science and Technology, Wuhan Puai Hospital, and Wuhan EndoAngel Medical Technology Company) for training in EUS and examination quality control. The system includes a station classification model and a pancreas/abdominal aorta/portal confluence segmentation model. It was validated both internally and externally, reaching, in the latter, an accuracy of 82.4% in station classification and a Dice score of 0.72 in segmentation. The accuracy in classification and the Dice score in segmentation were also comparable to those of experienced operators. In a crossover study, it was tested whether the system could increase the accuracy of station recognition in trainees, showing an improvement from 67.2% to 78.4% (p < 0.01) [45].
The same research group subsequently extended BP MASTER by incorporating four deep convolutional neural networks (DCNNs) to obtain additional functions: providing transducer location information and real-time operating instructions, and annotating the common bile duct anatomy and measuring its caliber on freeze frames [44]. At internal and external validation, the model confirmed its accuracy values, comparable to those of an expert.
Another crossover study was performed to evaluate the improvement in trainees' accuracy in interpreting images when assisted by AI, which rose from 60.8% to 76.3% (p < 0.01) [44]. Finally, another study proposed using AI to improve learning of the CH-EUS technique, which is particularly useful in identifying pancreatic masses and notoriously difficult to learn.
The system (CH-EUS MASTER), which includes a real-time acquisition and segmentation model, was adequately validated. A crossover trial was then conducted to assess the impact on trainees' learning curve, using intersection over union (IoU) and time to lesion finding as indicators. Beginners supported by CH-EUS MASTER showed an improvement in mean IoU from 0.80 to 0.87 (p = 0.002) and a reduction in mean lesion identification times from 22.75 to 17.98 s (p < 0.01) and from 34.21 to 25.92 s (p < 0.01) in the pancreatic body-tail and head-uncinate process, respectively [41].
CH-EUS MASTER also seems to be a valid tool for guiding EUS-FNA, with an improvement in the first-pass diagnostic yield [40]. The development of more effective and articulated AI systems is desirable to allow trainees to speed up the training process and improve their performance. Ideally, integrating AI assistance systems with the use of simulators, up to virtual reality, could almost make the mentor unnecessary, but further data are needed [46,47].
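The two overlap metrics cited in these training studies, the Dice score and the IoU, are computed from binary segmentation masks as follows; the two overlapping square masks below are toy examples:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union (Jaccard index) between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

gt = np.zeros((10, 10), dtype=bool); gt[2:8, 2:8] = True      # 36-pixel square
pred = np.zeros((10, 10), dtype=bool); pred[3:9, 3:9] = True  # same size, shifted

d, j = dice(pred, gt), iou(pred, gt)
print(round(d, 3), round(j, 3))  # 0.694 0.532

# The two scores are deterministically linked: IoU = Dice / (2 - Dice)
assert abs(j - d / (2 - d)) < 1e-12
```

Both range from 0 (no overlap) to 1 (perfect agreement); IoU is always the stricter of the two, which is worth remembering when comparing scores across studies.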

Future Directions and Conclusions
The integration of AI in EUS images and biopsy microscopy analysis has the potential to improve the diagnostic accuracy, leading to better patient outcomes and to a reduction in repeated procedures in case of non-diagnostic biopsies. AI algorithms can also aid in tumor grading, staging, and prognostic analysis. However, the clinical impact of AI-enhanced pathological diagnosis has yet to be established. Further research is needed to evaluate the long-term benefits and limitations of AI in EUS-imaging and biopsies.