A heterogeneous computing system for coupling 3D endomicroscopy with volume rendering in real-time image visualization☆
Introduction
Modern biomedical imaging systems rely heavily on visualization methods to provide additional insight into the captured data. As one of the core branches of the imaging disciplines, volume rendering is the fundamental visualization method for volumetric datasets. Its central principle is the projection of three-dimensional (3D) objects onto a flat, two-dimensional screen for display. Such methods provide viewing with at least six degrees of freedom, together with artificial processing such as highlighting for analysis. This offers a better understanding of the presented medical conditions and improves the quality of healthcare. As such, research on volume visualization has garnered increasing attention within the scientific community in recent years.
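The projection principle described above can be pictured with a minimal pinhole-camera sketch in plain Python; the focal length and sample points below are arbitrary illustrative values, not parameters from the presented system:

```python
# Minimal sketch of projecting a 3D point onto a 2D screen
# (pinhole model; focal length f is an arbitrary illustrative value).

def project(point, f=1.0):
    """Project a 3D point (x, y, z) onto the z = f image plane."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the camera (z > 0)")
    # Perspective division: screen coordinates shrink with distance.
    return (f * x / z, f * y / z)

# A point twice as far away lands twice as close to the screen centre.
near = project((1.0, 1.0, 2.0))   # -> (0.5, 0.5)
far = project((1.0, 1.0, 4.0))    # -> (0.25, 0.25)
```

Rotating or translating the points before projection is what yields the six degrees of freedom mentioned above.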
In biomedical imaging, volumetric datasets which represent spatial data are acquired in 3D, ranging from full-body scans to cellular microscopy. Various imaging methods have been derived and common macro-scale modalities include Computed Tomography (CT) scanning, Magnetic Resonance Imaging (MRI) and 3D ultrasound scanning. With the advent of increasingly sophisticated, microscopic-scale imaging techniques, for instance, the Laser Scanning Confocal Microscopy (LSCM), real-time in vivo micro-imaging is possible.
An emerging variation of LSCM is known as the Laser Scanning Confocal Endomicroscope (LSCEM). The light delivery platform of the LSCM is replaced with a more versatile pen-sized probe that is effective for endoscopic in vivo microscopic imaging. Targeted at imaging through the mucosal layer, modalities such as LSCEM provide alternatives to the more invasive and compromising acquisition methods found in ex vivo microscopy. By eliminating tissue sampling through biopsy incisions, in vivo methods prevent infections, cell damage and other complications that may impair the well-being of patients and affect the accuracy of the obtained information.
The term ‘confocal microscopy’ was first coined in 1979 by Brakenhoff et al. [1], in an experiment showing that consecutive planes at incrementing depths can be captured to obtain a 3D representation of a specimen. Since then, the modality has undergone much development, especially to improve the portability of the imaging device. In Ref. [2], fiber optic technology is used to reduce the bulkiness of the LSCM, replacing the optical design without affecting image performance. This design pioneered the portable scanning head present in current LSCM models, enabling probe navigation into hard-to-reach areas.
Using a single pinhole, an ultra-compact design is described in Ref. [3], which suggested a novel approach to exclude out-of-focus light signals and produce clear slices without interference from other plane levels. Since the advent of portable LSCM designs for medical-based imaging applications, many procedures have used this modality to obtain volume datasets [4], [5], [6]. In recent work, LSCM datasets have also been combined with reconfigurable computing to provide real-time imaging for instantaneous cancer diagnosis and tissue assessment [7].
The LSCEM is an endoscopic complement to the LSCM, enabling in vivo imaging of tissue through the surface, commonly targeted at oral mucosal areas [8]. The light path and scanning mechanisms are integrated within a miniature pen-sized probe [9], which is effective in capturing image data from the deeper subsurface layers of tissue. This opens up access to a range of clinical applications such as virtual biopsy and non-invasive optical exploration [10]. Due to its in vivo application, it is commonly chosen as an aiding tool for targeted biopsy procedures, thus reducing the number of biopsies and medical incisions [11], [12], [13]. Apart from that, it is also used for image-guided surgery by providing assessment of the tissue condition before, during or after surgical procedures [14].
However, although current LSCEM microscopes enable in vivo imaging, they are limited to capturing and displaying images from a single viewing configuration or focal plane at a time. This viewing plane is fixed perpendicular to the probe direction, and the only way to gain perspective from other angles is to physically tilt the probe. Apart from that, LSCEM microscopes are inconvenient to operate, as the user must hold the probe steadily against the tissue area while manually controlling the captures. Addressing the lack of an embedded automation system is the core motivation of this work.
Moreover, the advancement of biomedical imaging techniques is not fully saturated. There remain several challenges toward a high-performance imaging mechanism, which comprises acquisition, reconstruction and rendering. Firstly, acquiring a full high-resolution dataset requires a significant amount of time; this is inherent to confocal microscopy systems, as acquisition is pixel-wise and sequential. Due to the nature of in vivo capturing, artifacts and distortions may appear over a prolonged period. These problems arise from involuntary probe movement or natural bodily functions, and their effects are further magnified at a microscopic scale.
Confocal imaging systems are well known for capturing images beneath the tissue surface thanks to their effective isolation of signals. Consequently, it is possible to build a volume dataset by using the LSCEM to capture consecutive 2D slices across incrementing z-depths. However, achieving this requires manual human input to control the depth level and trigger each capture, typically using a footswitch. A single human operator can perform this, but mitigating movement while controlling the probe is a challenging task.
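The z-stack acquisition described above can be sketched as a simple loop that steps the focal plane and stacks the resulting 2D slices into a volume. `capture_slice` is a hypothetical stand-in for the LSCEM capture, and the 4 µm depth step and 4×4 slice size are illustrative assumptions only:

```python
# Sketch of assembling a volume from consecutive 2D slices captured at
# incrementing z-depths. `capture_slice` is a hypothetical stand-in for
# the LSCEM acquisition; here it returns a dummy slice whose pixel
# values equal the capture depth.

def capture_slice(z_um, size=4):
    """Hypothetical capture of one 2D intensity slice at depth z_um."""
    return [[z_um for _ in range(size)] for _ in range(size)]

def acquire_volume(depths, step_um=4):
    """Step the focal plane through `depths` levels and stack the slices."""
    volume = []
    for level in range(depths):
        z_um = level * step_um          # physical depth of this slice
        volume.append(capture_slice(z_um))
    return volume                       # volume[z][y][x] indexing

vol = acquire_volume(depths=8)
```

In the actual system this loop is what the embedded controller automates, replacing the manual footswitch trigger at each depth level.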
Furthermore, conventional imaging and visualization protocols require the imaging process to be complete prior to dataset visualization. Thus, by the time a dataset is fully rendered and visualized, the imaging process has ceased, preventing further changes and capturing adjustments from being made. These changes may include light settings and intensity measures which could yield better rendering results. The outcome is not versatile, and repeating the capturing process to incorporate modifications is costly and time-consuming. Therefore, there is a clear need to merge the acquisition system with an online visualization system that renders volumetric datasets on-the-fly, preventing repetition due to unsatisfactory acquisition.
In our opinion, the inclusion of computing methods is key to empowering imaging techniques with augmented capabilities. The approach presented in this paper improves the LSCEM modality using heterogeneous computing, complementing the existing imaging pipeline with an automated controller and a customized renderer. Here, we describe in detail a novel computing architecture that realizes an online volume rendering procedure while dataset acquisition and reconstruction are being performed. We call this an online incrementally accumulated rendering system, which processes acquired data online in an incremental fashion and simultaneously renders the dataset accumulatively.
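The accumulative rendering idea can be illustrated with the standard front-to-back "over" compositing operator: each newly acquired (deeper) slice is folded into the running image, so previously composited slices never need to be revisited. The sketch below uses one scalar per pixel and an illustrative constant opacity per slice; it is a simplified model, not the architecture's actual compositing scheme:

```python
# Sketch of incrementally accumulated compositing: each newly acquired
# slice is folded into the running pixel with the standard front-to-back
# "over" operator, so earlier slices are never revisited.
# One scalar per pixel; the per-slice opacity of 0.5 is illustrative.

def composite_slice(acc_color, acc_alpha, slice_color, slice_alpha):
    """Fold one new (deeper) slice into the accumulated pixel value."""
    weight = (1.0 - acc_alpha) * slice_alpha   # remaining transparency
    return (acc_color + weight * slice_color,  # accumulated colour
            acc_alpha + weight)                # accumulated opacity

color, alpha = 0.0, 0.0
for slice_value in [0.8, 0.5, 0.2]:            # slices arriving over time
    color, alpha = composite_slice(color, alpha, slice_value,
                                   slice_alpha=0.5)
```

Because the operator only reads the accumulated colour and opacity, a partially rendered image can be displayed after every slice arrival, which is the essence of rendering during acquisition.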
The visualization of LSCM datasets was presented as early as 1996 by Sakas et al. [15], where various methods were proposed to render these datasets, including surface reconstruction and illumination models. Subsequently, a visualization system for rendering time-dependent LSCM data was introduced [16], showing chromatin condensation and de-condensation during mitosis. The authors concluded that no generalized visualization technique fits every dataset, and that different methods should be used according to their respective strengths and weaknesses. In Ref. [17], the need for interactive visualization of LSCM datasets is discussed, and the ray-casting algorithm is presented as the main choice for this task. It is also shown that fast visualization provides interactive examination of datasets to improve the quality of captured LSCM images.
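For reference, the ray-casting algorithm mentioned above can be sketched for a single ray: samples along the ray are composited front to back, and the walk stops early once the pixel is nearly opaque. The opacity and cutoff values are illustrative assumptions, not parameters from Ref. [17]:

```python
# Sketch of ray casting along one ray: composite scalar samples front
# to back with the "over" operator, and apply early ray termination
# once the accumulated opacity passes a cutoff. All constants are
# illustrative values only.

def cast_ray(samples, opacity=0.3, cutoff=0.95):
    """Composite samples along a ray, with early ray termination."""
    color, alpha = 0.0, 0.0
    steps = 0
    for s in samples:
        weight = (1.0 - alpha) * opacity
        color += weight * s
        alpha += weight
        steps += 1
        if alpha >= cutoff:        # early termination: deeper samples
            break                  # can no longer change the pixel
    return color, alpha, steps

color, alpha, steps = cast_ray([1.0] * 20)
```

Early ray termination is one reason ray casting supports the fast, interactive examination described above: rays through dense regions finish after only a few samples.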
Conventionally, software-based (CPU) methods [18] are popular due to the high penetration of computing systems and the solid foundation for software development and research. However, they are relatively slow, insufficient for real-time volume rendering, and require hardware acceleration to produce high-performance results. Common hardware accelerators mainly consist of graphics processors (GPU), application-specific integrated circuits (ASIC), digital signal processors (DSP) and field-programmable gate arrays (FPGA). Cluster computing and distributed systems are also prevalent in the research scene.
Customized hardware rendering systems such as VolumePro [19], VIZARD [20] and VIZARD II [21] have been proposed to improve the visualization process. However, while these architectures are intended for rendering volume datasets, they are not designed to deal with a dynamic dataset that acquires new slices during imaging. There is also no real-time renderer that directly interfaces with the LSCEM imaging system in a standalone fashion to replace its controls, as mentioned in the previous subsection. Of course, an independent terminal can be added alongside the imaging device to visualize the dataset, but this defeats the purpose of an integrated solution with efficient space and power resource consumption.
Typically, these existing custom hardware renderers are dependent on an underlying platform such as a motherboard to provide interfacing peripherals and process management. In most cases, they use a PCI interface targeted for use on consumer PCs. Noticeably, the use of embedded, reconfigurable computing systems is an emerging development trend targeting significant performance speedups. These systems, mainly represented by the FPGA, also provide cheaper and shortened development cycles for generating ASIC solutions through their reconfigurability and prototyping methods. To speed up calculations, basic DSP blocks have also been fitted into modern FPGA prototyping boards [22] and have become an essential component for boosting performance. Adhering to these values, we made use of the FPGA in our previous works [10], [23], [24].
Apart from that, a contributing factor to the choice of FPGA is its ability to support real-time controls, allowing automated capturing mechanisms to be realized within the LSCEM system. These automated controls substitute the present manual footswitch and keyboard controls, which are difficult for a single user to operate simultaneously. Of the abovementioned alternatives, none apart from the ASIC and FPGA is flexible enough for embedded control signaling alongside computation. Ultimately, the FPGA, whose prototyping features are suitable for building a customized parallelizable hardware system, can produce a design to be fabricated as an optimized ASIC chip for embedded integration.
We have presented a snapshot of the state of the art within the multi-disciplinary fields of confocal medical imaging, visualization and reconfigurable computing. Section 2 gives a detailed theoretical background of the methods used in this work. In Section 3, we present our proposed incrementally accumulated imaging-rendering system, and we demonstrate the results in Section 4 with an empirical review. In the final section, we conclude our work with an overview and future propositions.
Section snippets
Confocal imaging with embedded FPGA controller
The Optiscan FIVE1 LSCEM device (Optiscan, Australia) is used to develop the prototype system. This device is coupled with a pen-sized, hand-held rigid probe intended for imaging the oral cavity. Its main signal source is a laser which transmits 488 nm light signals through a single optic fiber. The use of a single fiber for transmission also doubles as a pinhole for confocal imaging. Within the rigid probe, a miniaturized resonant mechanism can be found, which scans the x–y plane in a raster
System development
The system is mainly composed of five parts: (1) an electronic imaging controller which replaces manual controls required by the user with automated progressive acquisition, (2) an effective allocator module to isolate memory for the imaging and rendering processes, (3) an efficient memory organization scheme which may significantly reduce the amount of memory accesses by up to eight-fold, compared to unoptimized per-voxel organization, (4) a novel compositing scheme to render the accumulated
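The up-to-eight-fold reduction in memory accesses from item (3) can be pictured as packing eight 8-bit voxels into one 64-bit word matching the SDRAM bus width reported for the prototyping board, so that one bus read delivers eight voxels. The exact layout used by the architecture is not detailed in this snippet; the little-endian voxel order below is an assumption for illustration:

```python
# Sketch of the kind of memory packing that yields up to eight-fold
# fewer accesses: eight 8-bit voxels share one 64-bit word, matching a
# 64-bit memory bus, so a single read delivers eight voxels.
# Little-endian voxel order within the word is an assumption.

def pack_voxels(voxels):
    """Pack eight 8-bit voxel values into a single 64-bit word."""
    assert len(voxels) == 8 and all(0 <= v < 256 for v in voxels)
    word = 0
    for i, v in enumerate(voxels):
        word |= v << (8 * i)       # voxel i occupies bits 8i..8i+7
    return word

def unpack_voxel(word, i):
    """Recover voxel i (0..7) from a packed 64-bit word."""
    return (word >> (8 * i)) & 0xFF

word = pack_voxels([10, 20, 30, 40, 50, 60, 70, 80])
```

Compared with per-voxel organization, a renderer traversing voxels in packing order then issues one memory access per eight voxels instead of one per voxel.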
Empirical results
The presented architecture is implemented on the Celoxica RC340E FPGA prototyping board. This board contains the Xilinx Virtex-4 4VLX160-10 FPGA and various interfacing features including multiple data transfer ports, additional control peripheral support and an integrated LCD touchscreen display. The board also has a dual in-line memory module (DIMM) which supports a 256-Mbyte SDRAM with a 64-bit bus. This provides fast additional external memory which is crucial for storing the subcube
Discussion and conclusion
Modern medical imaging techniques have increasing prominence in the field of medical and biological studies and are crucial to the quality of healthcare and research. However, current techniques for these imaging technologies have limitations and thus there is a need to enhance their capabilities. We demonstrated several ideas and experimental results to address these problems [23], [24] in previous work. In this paper, we further describe a novel architecture targeted to enhance the imaging
Acknowledgements
We are grateful to our clinical partners, Prof. Soo Khee Chee, Prof. Malini Olivo and Dr. Patricia Thong of National Cancer Centre Singapore for the LSCEM dataset used in the experiments. This work was partially supported by the following grants: SBIC RPC-010/2006 from the A*STAR Biomedical Research Council, Singapore, two funds M4080106.020 and M4080634.B40 from Nanyang Technological University, Singapore, and MOE2011-T2-2-037 from the Ministry of Education, Singapore.
Wei Ming Chiew received his B.E. degree from the School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore, in 2008. He is currently a Ph.D. scholar at the School of Computer Engineering of the same university. His main research interests include biomedical imaging, volume rendering, image registration, embedded systems and high-performance computing.
References (44)
- et al., Virtual histology, Best Practice & Research Clinical Gastroenterology (2008)
- et al., A review of high-level synthesis for dynamically reconfigurable FPGAs, Microprocessors and Microsystems (2000)
- Imaging modes in confocal scanning light microscopy (CSLM), Journal of Microscopy (1979)
- et al., Confocal microscopy through a fiber-optic imaging bundle, Optics Letters (1993)
- et al., A single-pinhole confocal laser scanning microscope for 3-D imaging of biostructures, IEEE Engineering in Medicine and Biology Magazine (1999)
- et al., General-purpose object recognition in 3D volume data sets using gray-scale invariants – classification of airborne pollen-grains recorded with a confocal laser scanning microscope
- et al., Laser confocal endomicroscopy as a novel technique for fluorescence diagnostic imaging of the oral cavity, Journal of Biomedical Optics (2007)
- et al., Ultrahigh resolution 3D model of murine heart from micro-CT and serial confocal laser scanning microscopy images
- et al., Embedded computing for fluorescence confocal endomicroscopy imaging, Journal of Signal Processing Systems (2009)
- et al., Hypericin fluorescence imaging of oral cancer: from endoscopy to real-time 3-dimensional endomicroscopy, Journal of Medical Imaging and Health Informatics (2011)
- Fiber-optics in scanning optical microscopy
- Toward real-time virtual biopsy of oral lesions using confocal laser endomicroscopy interfaced with embedded computing, Journal of Biomedical Optics
- Confocal microscopy and molecular-specific optical contrast agents for the detection of oral neoplasia, Technology in Cancer Research & Treatment
- Real-time histology with the endocytoscope, World Journal of Gastroenterology
- Real time intraoperative confocal laser microscopy-guided surgery, Annals of Surgery
- Case study: visualization of laser confocal microscopy datasets
- Visualization of time dependent confocal microscopy data
- Interactive visualization technique for confocal microscopy images
- Fast multi-resolution volume rendering
- The VolumePro real-time ray-casting system
- VIZARD – visualization accelerator for realtime display
- VIZARD II: a reconfigurable interactive volume rendering system
Feng Lin is currently an Associate Professor, the Director of Bioinformatics Research Centre and the Programme Director of M.Sc. (Digital Media Technology) at the School of Computer Engineering, Nanyang Technological University Singapore. His research interests include biomedical informatics, biomedical imaging and visualization, computer graphics and high-performance computing.
Kemao Qian is currently an Assistant Professor at the School of Computer Engineering, Nanyang Technological University Singapore. His research interests include optical metrology, image processing and computer animation.
Hock Soon Seah is a Professor and Director of the Multi-plAtform Game Innovation Centre (MAGIC) at the School of Computer Engineering, Nanyang Technological University. He has more than 20 years of experience in computer graphics and imaging research. His research areas are in geometric modeling, image sequence analysis with applications to digital film effects, automatic in-between frame generation from hand-drawn sketches, augmented reality and advanced medical visualization.
☆ This paper originally formed part of the Special Issue on “3D Imaging in Industry”, guest-edited by Melvyn L. Smith and Lyndon N. Smith, and was published in issue 64/9 (2013). As the special issue was published before the acceptance of this article, it is now included in this regular issue. The Guest Editors acknowledge the contribution and effort of the author Prof. Wei Ming Chiew in the preparation of this paper.