Robotic large‐area optical biopsy imaging for automated detection of gastrointestinal cancers tested in tissue phantoms and ex vivo porcine bowel

Gastrointestinal endoscopy is a subjective procedure that frequently requires tissue samples for diagnosis. Contact optical biopsy (OB) techniques aim to provide direct diagnosis of endoscopic areas without excising tissue samples but lack the wide-area coverage required for locating and resecting lesions. This article presents a large-area robotically deployed OB imaging platform for endoscopic detection of colorectal cancer as an add-on for conventional endoscopes. In vitro, in silicone colon phantoms, the platform achieves an optical resolution of 0.5 line pairs per millimeter while resolving simulated cancer lesions down to 0.75 mm diameter across large-area images (55-103 cm²). Large-area OB images were also generated in an ex vivo porcine colon. The platform allows centimeter-sized large-area OB imaging in vitro and ex vivo with submillimeter resolution, including automatic data segmentation of simulated cancer areas. Robotic actuation and spectrum collection are also demonstrated on ex vivo animal colon. If successful, this technology could widen access to user-independent, high-quality endoscopy and early detection of gastrointestinal cancers.

precancerous lesions and early-stage superficial cancers. [3] However, the endoscopic detection and removal of these lesions depend heavily on the endoscopist's skills and cognitive load, and display significant interoperator variability. [4][5][6] These human factors affect the sensitivity of colonoscopy: reportedly, 22% to 45% of colonoscopies miss lesions. [4] Furthermore, ≈2% to 9% of all CRC patients correspond to false-negative colonoscopies, [7] with up to 70% of post-colonoscopy CRC cases due to missed lesions. [8,9] Most missed lesions are flat and small (<10 mm); [4,10] some of these have a higher risk of malignancy and are harder to detect than protruding lesions (polyps), [10,11] while most diminutive polyps (1-5 mm) are benign. [12] Removal of benign lesions may not be cost-effective nor justify the increased risk of complications and unnecessary histopathology. [13,14] Unfortunately, conventional white-light (WL) colonoscopy does not allow endoscopists to (a) easily spot flat and small lesions; (b) differentiate benign from potentially malignant lesions; or (c) determine their histology or stage (depth and submucosal invasion). This information ultimately determines further referral for endoscopic or open surgical removal, and the time interval until the next follow-up endoscopy. Consequently, expert guidelines recommend that endoscopists remove all lesions found for further histopathological assessment. [15] Since the pioneering work of Kudo, [16] multiple image-enhanced endoscopy (IEE) techniques have aimed to reduce the number of missed lesions by increasing the contrast between normal and pathological tissue. The most widely used IEE techniques employ topical dyes (chromoendoscopy), digital magnification, or alternative illumination schemes to enable autofluorescence or narrow-band imaging.
[17] IEE intensifies morphological patterns that can be used to predict the malignancy or even the histology type via subjective classification systems. [18,19] Consequently, guidelines recognize that some IEE techniques, if used by expert endoscopists, are sensitive and specific enough to replace histopathology and adopt the "resect-and-discard" workflow: if the endoscopist using IEE characterizes the lesion as benign, the lesion is resected and discarded. Conversely, if superficial malignancy is suspected, the lesion is resected and sent for histopathological assessment. [20][21][22] This workflow could reduce the logistical and financial burden of the classical colonoscopy workflow. [23] Nevertheless, IEE is still limited because (a) it does not outperform conventional endoscopy alone; [24] (b) its benefit is only valid when an expert operator performs the procedure; [22,25,26] and (c) it does not reduce the risk of complications inherent to endoscopic resection of benign lesions. [27] Hence, expert committees proposed the "diagnose-and-leave" workflow, which avoids the resection of benign lesions via in situ IEE characterization and histology prediction, [20] reducing the risk of complications due to the resection of lesions. [27] Still, to date, "diagnose-and-leave" might be feasible only for lesions located in the rectosigmoid colon. [28] Despite their promising cost-effectiveness, the "resect-and-discard" and "diagnose-and-leave" workflows face multiple challenges to wide adoption.
Computer-assisted detection (CADe) attempts to reduce the dependence on the user's expertise while improving sensitivity by automatically red-flagging suspected lesions on the endoscopic video feed in real time. [29][30][31] Interestingly, as with humans, current WL endoscopy-based CADe research studies struggle to detect small and flat lesions. [32] At the same time, IEE has a role to play in improving the sensitivity of CADe systems and streamlining a "resect-and-discard" workflow. [33] Contact-based optical biopsy (OB) complements IEE by providing in situ histopathological characterization via morphological and functional imaging of narrow-field tissue areas. [28] Commercially available OB approaches, like confocal laser endomicroscopy and endocytoscopy, are not widely used due to (a) high capital costs; [34] (b) user-dependent (subjective) interpretation of OB images; [28] (c) the requirement for a stable probe-tissue interaction (constant force and minimization of tissue deformation); [35] and (d) a limited field-of-view (FoV; <1 mm² [36]), which hinders spatial situational awareness and retargeting. [37] IEE and OB modalities can have different levels of subjectivity, mostly related to how much of the interpretation of the IEE/OB data depends on the skills and experience of the user. Some of these challenges are being addressed using, for instance, computer-assisted diagnosis (CADx) for automated interpretation of OB data, [29,38] balloon tissue stabilization, and noncontact techniques for scanning large endoscopic areas at high speed. [39,40] Hyperspectral OB, such as diffuse reflectance spectroscopy (DRS), a direct-contact-based optical sensing modality, allows the quantification of changes present in the early stages of malignant transformation using relatively simple and low-cost hardware. [41,42] Multiple studies show that DRS-OB allows early detection of malignant transformation and cancer in several tissues.
[43,44] For CRC, DRS-OB performance, as per expert societies' guidelines, reached sensitivity and negative predictive values that could allow "resect-and-discard" and "diagnose-and-leave" workflows. [20,45] A study by Rodriguez-Diaz et al. [45] showed (sensitivity/specificity) 92%/92% for neoplastic/non-neoplastic polyps, and 87%/91% for polyps <5 mm, in a total of 83 patients in vivo. Dhar et al. [46] obtained 80%/86% for malignant vs normal tissue and 80%/75% for malignant vs adenomatous polyps in a total of 45 patients in vivo. However, DRS-OB adoption has been limited, as current clinical implementations of DRS do not provide images because they use single-point probes, [47] and arrays of DRS-OB detectors do not match the requirements for endoscopic use.

FIGURE 2 Robotic actuation setup. (A) The device is axially translated and rotated along any conventional colonoscope via an overtube coupled to a hollow rotary actuator mounted on top of a linear actuator. As the endoscope is fixed in space, any linear or rotary actuation applied by the actuators will translate/rotate the device along/around the endoscope. The diffuse reflectance spectroscopy (DRS)-optical biopsy (OB) fiber-optic bundle and Bowden cables (for deployment) run externally along the overtube. The current in vitro phantom setup uses a rigid brass overtube (white-dashed rectangle) for torque and translation transmission. (B) Zoomed-in area from A showing a side view of the OB probes (white arrowheads). (C) Deformable silicone phantoms (∅ = 50 mm) with simulated adenomas were manufactured for in vitro validation. (D-F) In vitro scanning setup: clamped acrylic cylinders hold the in vitro and ex vivo targets at their proximal and distal ends, while the OB probes scan the target directly. Probes trans-illuminating the phantom are visible in (D; yellow arrows). Target areas are marked with white dashed lines. The scanning sequence is visible in Media S1.
[48][49][50] Recent proof-of-concept studies using robotic actuation of single contact OB probes showed accurate position tracking, actuation, and a stable probe-tissue interface, [51][52][53] achieving OB scanning coverage, via digital stitching (mosaicking), ranging from 3 mm² [54] and 25 mm² [51] up to 160 mm². [39] We believe that robot-assisted OB endoscopic scanning represents a logical step towards centimeter-sized large-area OB coverage. Preliminary versions of this work, which introduced the overall concept and tested the scanning mechanism in a rigid tube and a silicone phantom, have been reported in brief. [55,56] In this article, we propose a robotic large-area OB imaging platform as an accessory for any conventional colonoscope. Here, we characterize and validate in vitro a prototype of such a platform while testing its ex vivo feasibility on animal tissue. To the best of our knowledge, the endoscopic deployment and manipulation of an array of OB sensors have not been demonstrated before.

2 | MATERIALS AND METHODS
The platform is designed as an accessory to any conventional endoscope. The accessory is an overtube with a deployable array of OB sensors at its distal end (Figure 1F). The proposed overtube device is introduced into the GI tract by sliding along an endoscope, which is used as a "guidewire." In this article, for its in vitro and ex vivo validation, the device was autonomously actuated along a section of simulated and porcine ex vivo large intestine (adjacent to the rectum). After reaching the region of interest, the array of OB probes (ELUXI Ltd.) was deployed from its collapsed configuration by externally pulling tendons via Bowden force-transmission conduits (0.75 mm internal ∅, Asahi Intecc) to bring them into direct contact with the target (Figure 1C-E). Two-dimensional DRS-OB data were acquired directly along the length of target cylindrical areas (typically up to 70 mm; Figure 1). Each DRS-OB probe (Figure 1F, zoomed-in oval area) contained a pair of illumination and collection fibers perpendicular to and in direct contact with the tissue (core:cladding = 200:220 μm, numerical aperture = 0.22). The illumination fiber [I] emits broadband light (from an external light source, HL-2000-HP, Ocean Optics), which is diffusely scattered inside the tissue, while a parallel fiber [C] collects and transmits it to a spectrograph (V10, SPECIM Spectral Imaging Ltd.; Figure 1G), where it is diffracted into its spectrum and finally captured by an sCMOS sensor (40 ms exposure time, optiMOS, QImaging). From each spectral image acquired, a computer-vision subroutine extracted the eight raw reflectance spectra from known positions (Figure 1H, bottom).
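The subroutine that extracts the eight raw spectra from each spectrograph frame is not detailed in the text (the article's implementation is in MATLAB). A minimal Python sketch of one plausible version, assuming the frame maps fiber position to rows and wavelength to columns, with hypothetical `row_centers`:

```python
import numpy as np

def extract_probe_spectra(spectral_image, row_centers, half_width=2):
    """Extract one raw reflectance spectrum per DRS-OB probe from a
    spectrograph frame (rows = fiber positions, columns = wavelengths)
    by averaging a small band of rows around each known fiber position."""
    spectra = []
    for r in row_centers:
        band = spectral_image[r - half_width : r + half_width + 1, :]
        spectra.append(band.mean(axis=0))
    return np.vstack(spectra)  # shape: (n_probes, n_wavelengths)
```

The row band per probe and its width would in practice be calibrated against the known fiber geometry of the bundle.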
2.1 | Actuation, scanning, and spatial resolution testing and classification

The raster scanning sequence comprised a rotation of 48° (clockwise and counterclockwise angular actuation, step-size = 1°, total coverage with eight probes = 360°, 3° overlap between adjacent probes) while the device was advanced axially along the tubular target (step-size = 0.4 mm; scanning sequence visible in Media S1). Paper resolution targets (USAF 1951, Edmund Optics) were used to characterize the optical resolution. The targets were glued on the inner surface of an acrylic cylinder and scanned (scan length typically 63 mm, ∅ = 53 mm; Figure 2, right). Deformable phantoms were constructed from silicone (Ecoflex 00-30, Smooth-On) and included eight pigmented patterns of simulated flat lesions (∅ = 0.5-6.0 mm, height = 0.7 mm) over silicone with a simulated-mucosa background color. One such pattern was constructed for each probe's workspace (Figure 2C). The deformable phantoms were mounted by their proximal and distal edges on the outer surface of two fixed acrylic cylinders, while the OB probes remained in direct contact with the phantom (Figure 2D-F, also seen in Media S1). The position of each probe was derived from the angular and axial actuation coordinates, which were then co-registered with the corresponding raw reflectance spectrum. Angular actuation errors were calculated as the absolute difference between the position reported by the actuator's controller and the rounded pixel coordinate used to reconstruct the images. DRS spectra in the visible range were calculated from the raw reflectance data by normalizing against white-standard reference spectra acquired by the corresponding DRS-OB probe. Grayscale pixel intensities were calculated by numerical integration of each DRS spectrum. RGB pixel intensities were calculated by convolution of DRS spectra with color-matching functions acquired from a reference color camera.
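The raster sequence above (48 angular steps of 1° per probe, alternating rotation direction, 0.4 mm axial steps) can be sketched as a trajectory generator. A Python sketch, assuming the rotation direction alternates on every axial step (the text does not state the exact interleaving):

```python
import numpy as np

def raster_trajectory(scan_length_mm, angular_steps=48, angular_step_deg=1.0,
                      axial_step_mm=0.4):
    """Generate the (angle_deg, z_mm) visiting order for one probe of the
    raster scan: sweep the angular range, advance one axial step, then
    sweep back, and so on (boustrophedon pattern)."""
    n_axial = round(scan_length_mm / axial_step_mm)
    path = []
    for i in range(n_axial):
        angles = np.arange(angular_steps) * angular_step_deg
        if i % 2 == 1:  # reverse direction on alternate axial steps
            angles = angles[::-1]
        for a in angles:
            path.append((float(a), i * axial_step_mm))
    return path
```

With eight probes covering 45° each plus the 3° overlap, running this trajectory for one probe tiles the full 360° circumference.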
[57] Large-area images were generated by inserting pixel intensities into a 2D matrix as per their original spatial location (as seen in Media S1). Scanning speeds and processing times were calculated by measuring the time required per area scanned from live video recordings.
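The per-pixel chain described above (white-reference normalization, spectral integration for grayscale, color-matching-function weighting for RGB, then placement into the 2D image) can be sketched in Python; the rectangle-rule integration and the per-channel inner product are assumptions about the MATLAB implementation:

```python
import numpy as np

def drs_to_pixels(raw, white_ref, cmf=None):
    """Normalize a raw spectrum against the white-standard reference of
    the same probe, then reduce the DRS spectrum to a grayscale value
    (numerical integration) and, optionally, an RGB triplet via
    color-matching functions (cmf shape: 3 x n_wavelengths)."""
    drs = np.asarray(raw, float) / np.asarray(white_ref, float)
    gray = drs.sum()                      # rectangle-rule integration
    rgb = None if cmf is None else np.asarray(cmf, float) @ drs
    return gray, rgb

def place_pixel(image, angle_idx, axial_idx, value):
    """Insert a pixel intensity into the large-area image at its
    original spatial (angular, axial) location."""
    image[angle_idx, axial_idx] = value
    return image
```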
Repeatability of the spatial reconstruction was measured by sequentially scanning the same phantom area and calculating the structural similarity index (SSI) map (MATLAB, MathWorks) between the first large-area image and each of the repetitions (n = 3). An average SSI map was generated from the three repetitions, where a pixel SSI of 1.0 means 100% structural similarity. [58] Actuation, data acquisition, processing, and visualization are fully integrated into a MATLAB framework (Windows 7 PC). A user interface allowed near real-time visualization of acquired raw and processed data as 2D and pseudo-3D maps representing the GI tract (as seen in Media S1).
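The repeatability metric can be reproduced with a pure-NumPy sketch of the pixelwise SSI map. The article uses MATLAB's ssim; the window size and stabilizing constants below are the common SSIM defaults, not values from the article:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def ssi_map(x, y, win=7, data_range=1.0):
    """Pixelwise structural similarity (SSI) map over sliding win x win
    windows; C1 and C2 are the standard SSIM stabilizing constants."""
    C1, C2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    wx = sliding_window_view(x, (win, win))
    wy = sliding_window_view(y, (win, win))
    mx, my = wx.mean(axis=(-1, -2)), wy.mean(axis=(-1, -2))
    vx, vy = wx.var(axis=(-1, -2)), wy.var(axis=(-1, -2))
    cov = (wx * wy).mean(axis=(-1, -2)) - mx * my
    return ((2 * mx * my + C1) * (2 * cov + C2)) / (
        (mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def mean_ssi_map(reference, repeats, win=7):
    """Average the SSI maps between a reference large-area image and each
    repeated scan; a pixel value of 1.0 means 100% structural similarity."""
    return np.mean([ssi_map(reference, r, win) for r in repeats], axis=0)
```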
For the purposes of data visualization, DRS-OB data from the silicone phantom were pre-processed (smoothing and detrending) to remove high-frequency noise (above the spectral resolution of the sensor) and a wavelength-dependent offset in near real-time (no hardware clock or real-time operating system). [55] After preprocessing, a support vector machine (SVM; linear kernel and sequential minimal optimization, svmtrain, MATLAB implementation) was trained using pigmented stretchable targets with known color classes (ade, red, or bkg, pink pigment). These two classes were used as the training dataset, providing a total of 21,500 DRS spectra for the stretchable silicone phantom. The full acquired spectral range (≈420-1200 nm) was used to maximize the number of features available to train the SVM classifiers.

FIGURE 4 Training dataset for machine learning classification on a silicone phantom: 11,500 diffuse reflectance spectroscopy (DRS) spectra (x̄ ± σ) of simulated adenomas (class ade, dashed red line) vs 9000 DRS spectra of simulated mucosa (class bkg, blue line).

FIGURE 5 Performance of machine learning classification per experiment for class ade. Color-coded pixels: white, true positives; green, false positives; magenta, false negatives; black, true negatives.
The trained machine learning (ML) classifier spatially correlated the classified DRS data to generate near real-time segmented maps/images (as seen in Media S1), with accuracy determined by reference to ground-truth class maps.
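Applying the trained linear-kernel SVM pixel by pixel to build the segmented map amounts to evaluating a linear decision function per spectrum. A Python sketch of that inference step, where the weights `w` and bias `b` are hypothetical stand-ins for the trained MATLAB svmtrain model:

```python
import numpy as np

def segment_map(spectra_grid, w, b):
    """Evaluate a linear SVM decision function over a grid of DRS
    spectra (shape: angular x axial x n_features) and return a binary
    segmentation map: True = class ade (simulated adenoma),
    False = class bkg (simulated mucosa)."""
    scores = np.tensordot(spectra_grid, w, axes=([-1], [0])) + b
    return scores > 0
```

Because the kernel is linear, per-pixel classification is a single dot product, which is what makes near real-time segmentation tractable.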

2.2 | Actuation and data collection test on porcine colon
Under appropriate local licence, one acute postmortem porcine colon (within 1 h of euthanasia) was mounted between two fixed acrylic tubes and scanned using the same scanning setup shown in Figure 2D. The probes acquired DRS-OB data through a transparent waterproofing plastic sheet (40 μm thick, 16-1006, 365 Healthcare) in direct contact with the tissue. The rest of the device was covered by a waterproof layer of black-colored silicone (as seen in the scanning sequence shown in the Media S1).

3 | RESULTS
The platform deployed the DRS-OB probe array into contact with rigid and deformable tubular targets, as designed. The angular actuation and linear translation of the device allowed the acquisition and spatial reconstruction of centimeter-sized large-area DRS-OB images (103 cm² in vitro, 8.9 cm² ex vivo).
The scanning speeds for all the experiments in this article are given in Table 1. The mean angular actuation error of the current device was 0.043° (σ = 0.013°), as shown in Table 2. Angular actuation error was highest during the first five steps of each scanning cycle (x̄ = 0.48°, σ = 0.15°), whereas for the rest of the cycle (Steps 6-47) the average error decreased to 0.001° (σ = 0.0002°). The maximum angular actuation error was 0.93°, which for the scanned 52-mm-diameter phantom translates into a 2-mm arc-length. This peak occurred in the first three steps of each scanning cycle.
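The conversion from an angular error to the arc length it subtends on the scanned tube follows standard circle geometry. A one-line Python sketch of that generic conversion (not necessarily the exact computation behind the figures reported above):

```python
import math

def arc_length_mm(angular_error_deg, tube_diameter_mm):
    """Arc length subtended on a tube's inner surface by an angular
    actuation error: circumference (pi * d) scaled by the fraction of a
    full revolution (error / 360)."""
    return math.pi * tube_diameter_mm * angular_error_deg / 360.0
```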
The optical resolution was 0.5 line pairs per mm while scanning a ≈103 cm² area USAF test target (Figure 3A-E, also shown in Media S1).
This resolution was consistent with the smallest spatial feature resolved while scanning a deformable and stretchable phantom (0.75 mm diameter feature; Figure 3F). The spatial reconstruction was repeatable, as per the SSI map calculation comparing four sequential large-area scans (SSI map global minimum = 0.9476), as shown in Figure 3H.
The DRS-OB data of the silicone tissue phantom were processed in near real-time to generate grayscale and color large-area images (16.6 × 3.3 cm, 55 cm²; Figure 3F). A binary ML classifier was trained in advance with two DRS-OB data classes (Figure 4). The binary SVM classifier was integrated into the framework for near real-time automated generation of segmentation maps (Figure 5, also shown in Media S1).
Performance of the ML classification was as follows: 99.6% sensitivity, 99.9% specificity, 96.4% positive predictive value, and 99.9% negative predictive value (mean values over the four images shown in Figure 5, excluding edge areas ±2 pixels).
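The four figures of merit follow directly from the pixelwise confusion counts behind the color-coded maps of Figure 5 (white = TP, green = FP, magenta = FN, black = TN). In Python:

```python
def classification_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from pixelwise
    confusion-matrix counts (TP/FP/FN/TN)."""
    return (tp / (tp + fn),   # sensitivity (recall)
            tn / (tn + fp),   # specificity
            tp / (tp + fp),   # positive predictive value
            tn / (tn + fn))   # negative predictive value
```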
The pixel-wise processing time (required to acquire, process, and visualize a single pixel) in all the experiments ranged from 50.9 ms for color maps (including actuation, OB data acquisition, preprocessing, color convolution, and visualization) up to 86 ms for near real-time classification (adding machine learning classification and its visualization).
The current raster scanning was feasible over acute postmortem porcine colon. The DRS-OB probes were able to acquire DRS-OB data and generate grayscale and color large-area images (≈899 mm²; Figure 6A-C; as seen in Media S1). The intensity of the DRS signal decreased by only 20% when the probes were covered by the transparent waterproofing plastic sheet (data not shown).

4 | DISCUSSION
In this article, we validated a novel robotic wide-area OB scanning platform for GI endoscopy as an accessory to augment conventional endoscopes. We described its design and actuation principles, optical resolution, and large-area coverage, and showed a prototype computer-aided detection (CADe) system for automated generation of segmented OB maps in vitro. Finally, we showed the endoscopic feasibility of the approach on freshly excised ex vivo animal colon.
The platform allows centimeter-sized large-area OB scanning of simulated and ex vivo GI tract. This is achieved by the robotic deployment and actuation of an array of DRS-OB probes. Its design as an add-on attachment will allow concurrent use with conventional endoscopes while keeping working channels free. This is key for one-stop diagnostic and therapeutic GI endoscopy, which is not possible with current working-channel OB probes, [34] OB capsules, and endoscope-integrated OB imaging. [39,40] The device is designed as a sliding overtube for easy navigation, stable actuation, and scanning along the GI tract, thanks to the centralizing effect of its radially deployed probes. As the endoscope is assumed to be fixed in space, linear or rotary forces applied to a torsionally stiff overtube will convert into device translation/rotation along/around the endoscope.

FIGURE 7 Current and future workflows in colonoscopy. Widely used: classic screening endoscopy (A-1a); classic IEE-assisted (A'-1a). Used in selected centers only: IEE "resect and discard" (B-O-1b); IEE "diagnose and leave" (B-2); OB "resect and discard" (O-1b); OB "diagnose and leave" (O-2). Proposed future workflows: large-area OB CADe "resect and discard" (C-1b); large-area OB CADe "diagnose and leave" (C-2). Long-term workflows: large-area OB CADx "diagnose, confirm, and discard" (D-1b); large-area OB CADx "diagnose, confirm, and leave" (D-2). CADe, computer-assisted detection; CADx, computer-assisted diagnosis; IEE, image-enhanced endoscopy; OB, contact-based optical biopsy; WL, white-light conventional endoscopy.
The platform actuates the array of OB probes in 2 degrees of freedom (rotation and translation along the longitudinal axis of the tubular target area), while automated radial deployment is possible. The angular actuation error during the entire scanning cycle seems consistently low (< x̄ + 2σ = 0.053°, or ≈117 μm arc-length for a tube of ∅ = 52 mm), while the linear actuation error reported by the manufacturer was 122 μm.
The platform achieves submillimeter optical resolution in rigid targets (0.5 line pairs per mm; Figure 3A-D) and stretchable phantoms (resolving features ≥0.75 mm ∅; Figure 3F). That resolution would be suitable for detecting lesions smaller than 5 mm in diameter, which are usually missed during colonoscopy. [4] The described spatial resolution matches the expected sensing area of our DRS-OB probes (≈500 μm sensing diameter). Limited FoV is one of the main limitations of OB, hindering the spatial awareness of OB users and making it difficult to identify the margins of detected lesions in the macroscopic video feed from the endoscope. [37] The FoV of commercially available endomicroscopy probes ranges between 200 × 200 and 500 × 500 μm. [36] Areas as large as 160 mm² can be scanned by integrating robotic actuation of OB probes and digital stitching (mosaicking). [39] Our large-area OB platform scans centimeter-sized areas: 103 cm² in vitro and 8.9 cm² ex vivo. Finally, the cadaveric animal study shows the feasibility of using the platform under endoscopic conditions, allowing the generation of DRS-OB color images (area 8.9 cm²; Figure 6C).
The work presented here is at preclinical stage and requires further research to achieve clinical feasibility, which will require extensive in vitro and in vivo validation. This future work will focus on reducing the footprint, optimizing scanning and classification speed while ensuring a safe and stable probe-tissue interface and scanning.
The device's current diameter of 32 mm is smaller than the average diameter of the sigmoid colon (narrowest section, x̄ = 33 mm, σ = 6 mm) [59] and matches the deployed diameter of endoscope accessories already in clinical use (32.6 mm [60]). However, our device cannot collapse further due to its rigidity; therefore, additional miniaturization and flexibility (via softer materials) are required to ensure safety and adaptability to folds and flexures.
We recognize scanning speed as a requirement for feasible clinical adoption. The design of the current prototype is not intended for fast scanning. Fast, non-contact OB imaging capsules are promising; [40] however, our platform aims to provide concurrent diagnostic OB imaging and therapeutic GI endoscopy in a single procedure. Future design iterations should consider a trade-off between (a) the size of the target area to scan; (b) the required imaging speed; (c) the required spatial and spectral resolution; and (d) the size and number of OB probes in the array. A radial array of OB probes scanning via direct contact with the colon mucosa seems feasible and safe, considering that passive endoscopic accessory devices with similar geometry and footprint are in clinical use in colonoscopy without significant adverse events (eg, Endocuff, ARC Medical Design, UK: used to immobilize and hold folds, increasing spatial awareness and the adenoma detection rate [61]).
Adding more probes would help to maintain the spatial sampling frequency at the cost of increasing the complexity and footprint of the device. We believe that, to achieve scanning speeds compatible with the clinical workflow, we need to evolve from the current step-by-step raster scanning actuation towards continuous actuation with faster OB data acquisition. Future work should address processing speed as a possible bottleneck for near real-time classification. Currently, we achieve a total pixel-wise time of 86 ms per classified and visualized pixel (SVM classification). In our current work with ex vivo human DRS-OB data, off-line SVM classification takes from 0.71 to 1.85 ms per pixel (601 features, 33 classes, linear-kernel SVM, parallel 4-core processor). These speeds can be reduced down to 0.003 ms per pixel by using parallel hardware processing (SVMs on brain cancer hyperspectral data). [62] We believe that a hybrid approach using faster hardware with hyperspectral compressed sensing [63] could allow real-time classification and segmentation at a human-ergonomic frame rate (≥60 fps), compatible with the endoscopic live feed.
Previous studies reported how a stable OB probe-tissue interface determines the quality of OB data. [35,64] Our results replicate this observation, as the generated in vitro and ex vivo images contain some contrast and color artifacts caused by a loss of probe-tissue contact due to eccentric angular actuation (in vitro, visible in Figure 3G and minute 1:08 of Media S1) or probe occlusion (ex vivo, as seen in Figure 6C). Probe occlusion, loss of contact, and over-pressing must be avoided because they diminish OB data quality. Loss of contact affects OB data quality because (a) it provides a weaker signal intensity; (b) the OB signal comes mostly from the superficial layers of the tissue; and (c) it reduces the optical resolution due to collection of light from outside the expected FoV (or blurring, in the case of fixed depth-of-view imaging OB probes). Over-pressing is also unwanted because (a) it collapses the vessels, reducing the blood present in the interrogated volume (the DRS-OB signal is highly dependent on blood presence); and (b) it could artificially increase the scattering of the interrogated area due to tissue compaction. Our current approach tries to stabilize the probe-tissue interface via passive force control (spring-based). [55] However, this cannot compensate for a loss of contact, nor sense and control the applied force. The direct correlation between probe-tissue contact and raw DRS-OB signal intensity could be used to sense probe-tissue contact and its stability based on time-resolved intensity changes. Future iterations should also consider active force control, as shown by recent studies, [54] although this might increase probe-holder complexity. An interesting option to explore is the use of flexible arrays of DRS-OB probes, [65] which could adapt to irregularities of the area being scanned.
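The suggested use of time-resolved DRS intensity to sense the probe-tissue interface could be prototyped as a simple per-sample classifier. A hedged Python sketch, where the median baseline and the low/high threshold fractions are illustrative choices, not values from the article, and flagging high intensity as over-pressing is itself an assumption drawn from the blood-collapse argument above:

```python
import numpy as np

def contact_state(intensities, low_frac=0.5, high_frac=1.5):
    """Flag probe-tissue contact problems from the raw DRS signal
    intensity over time: loss of contact weakens the signal, while
    over-pressing (collapsed vessels, compacted tissue) can raise it."""
    intensities = np.asarray(intensities, dtype=float)
    baseline = np.median(intensities)          # illustrative baseline
    state = np.full(intensities.shape, "ok", dtype=object)
    state[intensities < low_frac * baseline] = "loss_of_contact"
    state[intensities > high_frac * baseline] = "over_pressing"
    return state.tolist()
```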
Tissue deformation is a major challenge in real clinical environments. Tissue moves and deforms due to physiological events (breathing, the cardiac cycle, or gastrointestinal peristalsis) or through direct interaction with the user via instruments or their hands. Tissue deformation can be reduced and tracked. For our concept, we believe the simplest and best-proven approach is to minimize deformation by immobilizing the tissue areas of interest. In our experiments, spatial artifacts above the achieved resolution might be related to deformation of the stretchable targets due to friction forces during scanning. This can be observed in Figure 3H, where artifacts along the angular axis are worse than those along the longitudinal axis of the image. The optical resolution and large-area coverage shown here might not have been possible had we not limited tissue deformation by mounting the in vitro and ex vivo targets between acrylic tubes. However, the need for and degree of stabilization will have to be assessed in future trials, as the in vitro and ex vivo experiments lack the stabilizing effect of the mesocolon. Direct GI tract stabilization for OB scanning has already been proposed by others. [39,40] In future iterations, we plan to integrate balloon tissue immobilization to minimize tissue deformation and stabilize the probe-tissue interface while protecting the probes from colonic content and probe occlusion. At the same time, different raster scanning patterns (eg, helical or longitudinal) could be explored.
We recognize that pigmented silicone cannot represent the rich heterogeneity found in hyperspectral signals from live tissues. Our future work focuses on improving the level of validation by testing future iterations on live models of colorectal cancer [66] and improving our phantoms. [67] Re-targeting of suspicious areas during large-area OB imaging represents an active technical challenge. [68] There are several solutions to explore, such as physical (tattooing or laser ablation) [69] and virtual (feature-based) approaches. [70] The process shown here, which generates large-area synthetic color images by convolving DRS-OB data, makes the DRS data easy for human operators to visualize while allowing co-registration with the live endoscopic video feed to ease retargeting of lesions. Future work in this area will explore simultaneous localization and mapping techniques. This computer-vision approach will not only ease the navigation and retargeting of the lesions found, but also allow visual servoing for controlling the scanning and actuation of the device over folds and along flexures. This approach might be feasible via automatic detection of vascular features for registration. [71] At the same time, large-area DRS-OB could allow IEE "on-the-fly" via convolution of DRS data into narrow-band imaging.
Most of the current limitations of the proposed platform are related to an unstable probe-tissue interface due to the rigid, fixed-diameter scanning approach used. To address most of these challenges, we are developing a smaller, second-generation, part-disposable, soft-robotic inflatable deployment device for variable-diameter and compliant scanning along a flexible path. [56] The platform's concept is OB sensing-modality agnostic. Therefore, high-sensitivity, low-cost OB modalities, such as DRS, could be complemented with a high-specificity modality, such as molecular-marker fluorescence, fluorescence lifetime imaging microscopy, or Raman spectroscopy.
The significance of this article lies in the demonstrated submillimeter optical resolution and the proposed DRS-OB large-area imaging CADe concept, which in the future could be used for automated diagnosis (CADx) of hard-to-detect flat and small lesions. We foresee that CADe OB imaging could be deployed in vivo at low resolution as soon as data allow robust ML training and the scanning and processing speeds improve. As proposed in Figure 7, single- or multi-modality large-area OB CADx would allow cost-efficient endoscopic workflows: (a) eliminating the need for histopathological analysis of benign lesions, thereby reducing the current risks and costs associated with unnecessary histopathology of benign tissue samples; and (b) widening access to high-quality endoscopy, regardless of the endoscope used or the level of experience of the operator. Non-gastroenterologist operators could achieve sensitivity similar to that of experts, who could then focus on interventional endoscopy. Pan-esophageal, enteric, or colonic OB imaging may become feasible in the future, facilitated by robotic navigation.

5 | CONCLUSIONS
In this article, we described and validated in vitro a novel robotic platform for endoscopic large-area OB imaging. We showed that the prototype can acquire large-area DRS-OB data from rigid and deformable GI phantoms via robotic actuation of a radial array of DRS-OB probes.
We demonstrated how the DRS-OB data can be reconstructed into centimeter-sized images while achieving submillimeter spatial resolution. We also validated in vitro how the DRS-OB data can be automatically segmented to generate near real-time heat maps pinpointing the location of simulated suspicious areas. Finally, we showed ex vivo feasibility by scanning freshly excised porcine colon.