Optical wide-field tomography of sediment resuspension

We present a wide-field imaging approach to optically sense underwater sediment resuspension events. It uses wide-field multi-directional views and diffuse backlight. Our approach algorithmically quantifies the amount of resuspended material and its spatiotemporal distribution. The suspended particles affect the radiation that reaches the cameras, and hence the captured images. By measuring the radiance during and prior to resuspension, we extract the optical depth along the line of sight per pixel. Using computed tomography (CT) principles, the optical depths yield an estimate of the extinction coefficient of the suspension, per voxel. The suspended density is then derived from the reconstructed extinction coefficient. © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


Introduction
We propose an approach for underwater optical computed tomography (CT) for the study of marine sedimentation. Sedimentation affects physical, chemical, and biological processes in the sea [1][2][3]. Sediment resuspension is the transport of previously settled particles from the seafloor back into the overlying water. Such an event occurs when near-seafloor currents exceed a threshold velocity. The currents are induced by physical forces, such as waves, winds, or tides; human activities, such as fishing, dredging, and trawling; and biological activity. Biological resuspension occurs when fish and other animals search for food and shelter at the seafloor.
There is a gap of knowledge regarding the rate and extent of biological sediment resuspension in the ocean [4]. Moreover, the relevance of biological sediment resuspension to biochemical underwater processes is not fully known. Recent publications [5][6][7][8] suggest that, due to its frequent occurrence, biological activity may be a more significant contributor to sediment resuspension than physical forces and human activity. For these reasons it is important to devise methods for measuring these events. However, this is challenging [4,9,10]. Understanding resuspension requires a wide set of methods [4,11-13]. Existing in-situ approaches for quantifying sediment resuspension or fluids are highly localized and limited to the cm scale [14][15][16][17][18]. We seek methods that least disrupt resuspension events, hence we seek measurements from a distance. In aquatic environments, a sensing range of several meters is practical. In addition, the evolving sediment clouds have a meter scale as well. Therefore, we seek multi-meter-scale far-field measurements of these events using cameras. The presented imaging approach (a) observes from a distance the water medium above the seafloor, (b) senses sediment resuspension events, and (c) algorithmically quantifies the resuspension.
The spatial distribution of the particles is three-dimensional (3D). Hence, we develop a 3D tomographic imaging system. To achieve this, the evolving sediment plume is imaged against a diffuse backlight. Imaging is done simultaneously from multiple directions, as illustrated in Fig. 1. The resuspended particles affect the light that reaches the cameras. Image analysis uses a CT principle, motivated by medical CT. On a small scale, optical CT was used in laboratory studies of the dynamics of fluids and particles [19][20][21], and in a recent in-situ study of marine microscopic and mesoscopic organisms [22]. Recently, CT has expanded to large-scale 3D atmospheric sensing of aerosols and clouds [23][24][25]. This work proposes a meter-scale CT to optically quantify underwater resuspension. We develop optical and algorithmic techniques that function despite the challenges posed by the underwater environment [26][27][28][29]. Initial partial results of our work were presented in [30].

[Fig. 1. The concept of an underwater optical tomography system, under diffuse illumination. The volume includes water and a resuspended sediment cloud. There are n voxels; a ray segment of a line of sight traverses each voxel. The line of sight corresponding to pixel p is LOS_p.]

Image formation model
When a scene is only under ambient illumination, the image measures radiance i_p^Ambient per pixel p. An active illumination screen has radiance i_p^(0). Pixel p corresponds to a line of sight denoted LOS_p, see Fig. 1. The extinction coefficient of bulk water at the site is assumed to be spatially homogeneous. It is denoted β^Water, in units of m^-1. For sediment-free water, according to the Beer-Lambert law, the transmitted radiance reaching p is

    i_p^Water = i_p^(0) exp( -β^Water |LOS_p| ) + i_p^Ambient ,          (1)

where |LOS_p| is the length of the water path along LOS_p. We note that off-axis scattering affects the images. In preliminary lab experiments these off-axis scattering components were significantly lower than the model in Eq. (1). The attenuation of transmitted radiance is induced by absorption and scattering of light. Therefore, the water extinction coefficient satisfies

    β^Water = β_absorb^Water + β_scatter^Water ,                         (2)

where β_absorb^Water and β_scatter^Water are the absorption and scattering coefficients of the bulk water, respectively. Let β^Sed(X) be the volumetric extinction coefficient of suspended sediment particles at location X. Then the radiance measured through the suspension is

    i_p = i_p^(0) exp( -∫_{LOS_p} [ β^Water + β^Sed(X) ] dX ) + i_p^Ambient .   (4)

The unitless sediment optical depth at pixel p is

    τ_p = ∫_{LOS_p} β^Sed(X) dX .                                        (5)

From Eqs. (1) and (4),

    τ_p = -ln [ ( i_p - i_p^Ambient ) / ( i_p^Water - i_p^Ambient ) ] .  (6)
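As a minimal numerical sketch of the per-pixel optical depth in Eq. (6) (the radiance values and array names here are hypothetical, not from the paper):

```python
import numpy as np

def optical_depth(i_p, i_water, i_ambient):
    """Per-pixel sediment optical depth, Eq. (6):
    tau_p = -ln[(i_p - i_ambient) / (i_water - i_ambient)]."""
    return -np.log((i_p - i_ambient) / (i_water - i_ambient))

# Hypothetical radiances for two pixels: the screen seen through a sediment
# cloud (pixel 0) and through sediment-free water (pixel 1).
i_water = np.array([200.0, 180.0])   # active screen, before resuspension
i_amb   = np.array([10.0, 10.0])     # screen off (ambient only)
i_p     = np.array([120.0, 180.0])   # during the resuspension event

tau = optical_depth(i_p, i_water, i_amb)
```

An unobstructed pixel yields τ_p = 0, while attenuation by the suspension yields τ_p > 0.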

Tomographic reconstruction -inverse model
Let a_{p,v} be the length [m] of the ray segment of LOS_p in voxel v, as noted in Fig. 1. Let β_v^Sed be the sediment extinction coefficient of voxel v. The vector β^Sed ∈ R^{n×1} represents the extinction coefficients of all voxels v ∈ 1..n, in column-stack form. A finite-sum approximation of Eq. (5) is

    τ_p ≈ Σ_{v=1}^{n} a_{p,v} β_v^Sed .                                  (7)

Tomographic setups have multidirectional LOSs through the scene (Fig. 1). Let N_views cameras observe the scene, each having a resolution of N_width × N_height pixels. The total number of pixels observing the scene is m = N_views × N_width × N_height. Then τ ∈ R^{m×1} represents the sampled optical depths at all pixels p ∈ 1..m. Define A ∈ R_+^{m×n} as the projection matrix, whose elements are a_{p,v}. Then

    τ ≈ A β^Sed .                                                        (8)
Let α ≥ 0 be a regularization parameter, and let L be a 3D Laplacian operator which defines the smoothness term of β^Sed. Consider the regularized cost

    E(β^Sed) = || A β^Sed - τ ||_2^2 + α || L β^Sed ||_2^2 .             (9)

Then the volumetric extinction coefficient can be estimated by

    β̂^Sed = argmin_{β^Sed ≥ 0} E(β^Sed) .                               (10)

The extinction of light depends on the density and optical properties of the suspended particles.
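A minimal sketch of a regularized inversion of this kind, substituting a stacked linear least-squares solve (scipy's `lsqr`) plus a crude clipping step for the non-negativity constraint; the toy matrices and parameter values are ours, not the paper's:

```python
import numpy as np
from scipy.sparse import eye, diags, vstack
from scipy.sparse.linalg import lsqr

def reconstruct(A, tau, L, alpha):
    """Minimize ||A b - tau||^2 + alpha ||L b||^2 by stacking the smoothness
    rows under A, then clip to b >= 0 (a crude surrogate for the constraint)."""
    K = vstack([A, np.sqrt(alpha) * L]).tocsr()
    rhs = np.concatenate([tau, np.zeros(L.shape[0])])
    beta = lsqr(K, rhs)[0]
    return np.maximum(beta, 0.0)

# Toy 1D problem: n voxels, each crossed by one unit-length ray,
# with a 1D Laplacian as the smoothness operator L.
n = 8
A = eye(n, format='csr')
L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
beta_true = np.array([0., 0., 1., 2., 2., 1., 0., 0.])
tau = A @ beta_true
beta_hat = reconstruct(A, tau, L, alpha=1e-3)
```

With a small α the smooth phantom is recovered almost exactly; raising α trades data fidelity for smoothness.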
The extinction coefficient is

    β_v^Sed = σ_ext ρ_v^# ,                                              (11)

where σ_ext [m^2] is the effective extinction cross-section of a single particle and ρ_v^# is the particle number density in voxel v. The particle mass density ρ_v^Mass is linearly proportional to the particle number density ρ_v^#. Thus, from Eq. (11),

    β_v^Sed = b ρ_v^Mass .                                               (12)

The coefficient b, in units of m^2/gr, can be calibrated (see Appendix A). Let ϑ be the voxel volume. From Eqs. (10) and (12), the estimated mass of suspended particles at voxel v is

    μ̂_v^Sed = ϑ ρ̂_v^Mass = ϑ β̂_v^Sed / b ,                             (13)

and the total sediment cloud mass is

    μ̂^Sed = Σ_{v=1}^{n} μ̂_v^Sed .                                       (14)
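Given a reconstruction, the mass estimates of Eqs. (13)-(14) amount to a scaling and a sum. A sketch with hypothetical extinction values (the voxel volume and green-channel b match the values quoted later in the text):

```python
import numpy as np

vartheta = 8e-6                        # voxel volume: 8 cm^3, in m^3
b = 9.6e-3                             # calibrated coefficient [m^2/gr], green channel
beta_hat = np.array([0.0, 1.2, 3.3])   # hypothetical reconstructed extinction [1/m]

mass_per_voxel = vartheta * beta_hat / b   # Eq. (13), in gr
total_mass = mass_per_voxel.sum()          # Eq. (14)
```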

Simulations
We tested this concept using both lab experiments and simulations. The simulation environment contained an underwater 3D scene, a submerged diffuse illuminating screen, machine vision cameras, and a 3D sediment cloud. Using a radiation transfer solver [31,32], we synthesized the observed underwater images. Then, we performed 3D tomographic reconstruction. The simulated ground truth helped design the imaging configuration, by considering how camera specifications (type, number, and poses) affect the reconstruction quality.

Renderings
During the simulations we set i_p^Ambient = 0. The volumetric domain has n = 128 × 50 × 128 voxels. Similarly to [23,25,33], the radiative transfer simulations relied on volumetric optical parameters: the extinction coefficient β(X), the single-scattering albedo ϖ(X), and the anisotropy parameter g_HG of a Henyey-Greenstein scattering phase function. We specifically used typical clear ocean water optical properties [34,35]. In these waters, corresponding to the RGB channels, β^Water ≈ (0.583, 0.16, 0.15) m^-1, ϖ^Water ≈ (0.228, 0.625, 0.667), and g_HG^Water ≈ 0.9. The sediment extinction coefficient β^Sed(X) is spatially heterogeneous and spectrally uniform. As a proxy for a sediment cloud, we used an open-source smoke phantom [32]. We aimed to simulate a dense sediment cloud whose spatially averaged extinction is 3.3 m^-1, corresponding to a 30 cm visibility range. Thus we scaled the phantom's range of extinction coefficients to β(X) ∈ [0, 12.2] m^-1. In the simulations, we set ϖ^Sed(X) = 0.
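For reference, the Henyey-Greenstein phase function used in these renderings can be evaluated directly. This is the standard closed form; the function name is ours:

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein scattering phase function, normalized over the sphere:
    p(theta) = (1 - g^2) / (4*pi * (1 + g^2 - 2*g*cos(theta))^(3/2))."""
    return (1.0 - g * g) / (4.0 * np.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

# For g ~ 0.9, as in the clear ocean water used here, scattering is
# strongly forward-peaked.
g = 0.9
forward = henyey_greenstein(1.0, g)    # cos(theta) = 1, forward direction
backward = henyey_greenstein(-1.0, g)  # cos(theta) = -1, back-scatter
```

For g = 0 the function reduces to the isotropic value 1/(4π).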
The imaging sensor follows a perspective camera model, with a set field of view, image resolution, and Bayer pattern. These parameters are set by the specifications of an off-the-shelf machine vision camera, e.g., the IDS UI3260xCP-C. In particular, these specifications enabled us to render realistic noise in the simulated images. Let ν_e^P be the photo-electron signal generated by the Monte-Carlo simulations of light propagation. The maximum photon signal generates N_well photo-electrons in a saturated pixel. Let γ_e be the number of photo-electrons per camera gray level. To induce noise we took the following steps:
1. An effect similar to photon noise is induced by introducing zero-mean Gaussian noise whose variance equals ν_e^P, in photo-electron counts.
2. To emulate readout noise, zero-mean Gaussian noise with fixed variance σ_read_e^2 is added. Thus the pixel intensity of a noisy image is i_e ∼ N(ν_e^P, ν_e^P + σ_read_e^2), in photo-electron counts.
3. We introduced quantization noise by converting photo-electron counts to gray levels using i_p = ⌊i_e / γ_e⌋ ∈ N.
For example, for the IDS UI3260xCP-C camera, the applied specifications [36], in photo-electron counts, are N_well = 32870, σ_read_e = 6.2, γ_e ≈ 128.4. The renderings were performed on an M4.16xlarge instance of the Amazon Elastic Compute Cloud [37].
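The noise steps above can be sketched as follows (the function name is ours; the constants are the camera specifications quoted in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# IDS UI3260xCP-C specifications quoted in the text, in photo-electron counts.
N_WELL, SIGMA_READ, GAMMA_E = 32870, 6.2, 128.4

def add_sensor_noise(nu_pe):
    """Emulate shot, readout, and quantization noise on a noise-free
    photo-electron image nu_pe."""
    shot = rng.normal(0.0, np.sqrt(nu_pe))            # Gaussian surrogate for photon noise
    read = rng.normal(0.0, SIGMA_READ, nu_pe.shape)   # readout noise, fixed variance
    i_e = np.clip(nu_pe + shot + read, 0.0, N_WELL)   # full-well saturation
    return np.floor(i_e / GAMMA_E)                    # quantization to gray levels

# Hypothetical uniform irradiance at roughly half of the full well.
nu = np.full((64, 64), 16000.0)
img = add_sensor_noise(nu)
```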
We solved the tomographic problem using the Simultaneous Algebraic Reconstruction Technique (SART) [40]. Reconstruction quality compares the estimated β̂^Sed to the original phantom β^Sed, in terms of a unitless global measure δ [23] and a local measure ε [24]. Here we describe representative results of the simulation process for the scenario illustrated in Fig. 2(a). The cameras are uniformly spaced on a 125° horizontal arc at a height of 0.5 m, facing the cloud from a 3 m distance. The phantom is of size 1 × 0.39 × 1 m^3 with a voxel resolution of 0.78 cm, and the camera type is the IDS UI3260xCP-C. Each column in Fig. 2(b) shows rendered images and optical depths for three different views. The reconstructed volumetric extinction coefficient of the cloud, retrieved from eight cameras, is presented in Fig. 2(c). The reconstruction error ε as a function of the number of cameras is presented in Fig. 2(d). We obtained similar results for other camera types and positions. The local error ε dropped as the number of cameras increased. The global error δ saturated at δ = 0.08.
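A toy sketch of a SART iteration of the kind referenced above: the update uses the standard SART row/column normalizations and a non-negativity clip; the matrix, parameters, and names are ours, not the paper's implementation:

```python
import numpy as np

def sart(A, tau, n_iter=200, lam=1.0):
    """Simultaneous Algebraic Reconstruction Technique (SART).
    A: (m, n) ray-length matrix; tau: (m,) optical depths."""
    row_sum = A.sum(axis=1)
    col_sum = A.sum(axis=0)
    row_sum[row_sum == 0] = 1.0
    col_sum[col_sum == 0] = 1.0
    beta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (tau - A @ beta) / row_sum        # row-normalized residual
        beta += lam * (A.T @ residual) / col_sum     # normalized back-projection
        beta = np.maximum(beta, 0.0)                 # keep extinction non-negative
    return beta

# Toy check: two voxels observed by three rays of known segment lengths.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
beta_true = np.array([0.5, 1.5])
beta_hat = sart(A, A @ beta_true)
```

For this small consistent system, the iteration converges to the true voxel values.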

System and method
We performed experiments in a research seawater pool of dimensions 6 × 3 × 3 m^3, at The Leon H. Charney School of Marine Sciences, University of Haifa, Israel. Inspired by the communicating-vessels principle, we built an injection system, illustrated in Fig. 3(a), connected at its top to a 10 L source tank. The source tank contained MP SILICA particles suspended in water. The particle size range is 12-26 µm. This range suits particles of silt, clay, and fine sand, which exist along the Israeli Mediterranean shelf [42] at sites deeper than 30 m. The source tank was partially drained, creating a resuspended cloud emanating from the middle of the lighting screen.
The optical system contained eight machine vision cameras having a linear radiometric response. We used IDS UI3260xCP-C cameras with Tamron M112FM12 12 mm lenses, sealed inside designated housings having flat ports (windows), as shown in Fig. 3(b). According to [27], when a perspective camera resides in an air chamber having a flat port and is embedded in a water medium, refraction causes the imaging system to have a non-single viewpoint. We note that dome ports, which we did not use here, can mitigate refraction distortions if the dome center aligns with the lens' center of projection. Nonetheless, it is possible to approximate flat-port systems as having a single viewpoint [27] by setting a tight lens-to-port distance. In such conditions, refractions induce two-dimensional image distortions, which can be accommodated digitally using camera calibration. Therefore, each camera was placed inside its housing while keeping a tight lens-to-port distance. The cameras are mounted on a frame above a lighting screen, Fig. 4(a). Each camera was directed to the volume of interest and set to have a ∼2.7 m working distance from the middle of the screen. The illumination screen is composed of sealed LEDs mounted between two diffuse white PVC boards, emitting a total of 24000 lumens, Fig. 4.
We used a calibration board, markers on the screen as in Fig. 4(c), OpenCV [43], and Agisoft software [44] to calibrate the system geometry. This led to the sparse projection matrix A used in Eqs. (8) and (9). Before each resuspension event, we imaged the lighting screen when active and inactive, to acquire measurements of i_p^Water and i_p^Ambient, respectively. Throughout each event, we imaged the evolving cloud to acquire measurements of i_p, at 10 frames per second.

Tomography reconstructions
Using Eq. (6), we retrieved the optical depths {τ_p}, p ∈ 1..m, of the suspended sediment cloud through time. Representative images are shown in Fig. 5(a). We performed reconstruction similarly to Section 3.2. The following steps, shown in Fig. 5, improved the reconstruction quality and runtime: (a) Pruning pixels by segmenting [45] and cropping the normalized optical-depth images. (b) Reconstructing an initial solution β̂^Sed(0) using the un-pruned pixels and α = 0, then deriving the visual hull [46] of β̂^Sed(0). (c) Reconstructing β̂^Sed within the visual hull, using α = 0.45. The 3D results in Fig. 5 are for the green channel at a 2 cm voxel resolution, thus having voxel volume ϑ = 8 cm^3.
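The visual-hull derivation in step (b) can be sketched as space carving on a ray-voxel incidence structure: a voxel is kept only if every ray crossing it lies inside the segmented cloud silhouette of its view. This is a simplified representation of ours, not the paper's implementation:

```python
import numpy as np

def visual_hull(A, masks):
    """Space carving: carve away any voxel crossed by a background ray.
    A: (m, n) ray-length matrix; masks: (m,) boolean cloud silhouette per ray."""
    crossed = A > 0                        # which rays intersect which voxels
    carve = crossed[~masks].any(axis=0)    # voxels hit by at least one background ray
    return ~carve

# Toy scene: 3 voxels, 4 rays; ray 3 sees only background and crosses voxel 2.
A = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [1., 1., 0.],
              [0., 0., 1.]])
masks = np.array([True, True, True, False])
hull = visual_hull(A, masks)
```

Only voxels consistent with all silhouettes remain, which restricts the subsequent reconstruction domain.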
Using an independent lab experiment, we calibrated the coefficient b, which relates β_v^Sed to ρ_v^Mass in Eq. (12). This experiment is described in Appendix A. Then, using Eqs. (12)-(14), we calculated the sediment cloud mass density ρ̂^Mass and mass μ̂^Sed. This yielded an estimate of the evolving sediment cloud mass. We compared sediment mass estimation between two different experiments, each having a different density in the source tank: 22.5 mgr/cm^3 and 30 mgr/cm^3. The estimated mass of the clouds during the first 5.5 sec after the cloud's initiation is plotted in Fig. 6(a). Each curve averages two experiment repetitions. Values at corresponding times are scatter-plotted in Fig. 6(b). The linear fit is consistent with the source density ratio 30:22.5 ≈ 1.33.
Ultimately, as in any active system, the system size limits the measurement domain. We noticed that 5.5 sec after the cloud's initiation, the cloud expanded beyond the screen area. Beyond the screen borders, naive use of Eq. (6) erroneously leads to negative values of τ_p. These pixels were pruned in our algorithm. This phenomenon is emphasized by representing τ_p using a false-color palette; Fig. 7 shows τ_p in the optical green channel, 46 sec after the cloud's initiation. Negative values beyond the screen borders are due to scattered light contributing to the measured radiance.

Discussion
Our approach adapts optical CT principles to a multi-meter-scale underwater domain. Our work goes beyond a proposal of the concept, to include a theoretical formulation, computer simulations, engineering of an optical tomographic system, an algorithm, and eventual empirical validation. We envision future developments enabling field work in deep natural waters, which strive to minimize disturbance to nature. Future advancement may obviate active lighting in tomographic setups, for example by relying on the scattering of natural light [47]. It may be beneficial to incorporate measurements of turbidity sensors in conjunction with our approach. Such systems open the door to quantitative in-situ research of marine sedimentation, and other underwater phenomena.
The algorithm we used assumes the resuspended cloud is diluted enough to suit the single-scattering approximation. In a dense and wide cloud, this approximation may bias results. In the calibration described in Fig. 8, when reaching higher sediment densities, the relation between optical density and particle density becomes non-linear. Therefore, for scattering optical thickness satisfying ϖ^Sed βl > 1, we expect biased results. We believe that this bias can be largely mitigated using full 3D radiative transfer scattering tomography as in [23,33,48]. While this requires complex reconstruction algorithms [33,48], the imaging system would still be similar to ours.

Appendix A: Sediment density calibration
Sediment density vs. extinction calibration was done in a small water tank, in a dark room; see Fig. 9. A glass beaker is fixed above a stirring device inside the tank. The beaker contains a suspension of particles in 1 L of water. A magnetic stirring stick is used to maintain a uniform suspension. We used the imaging sensor described in Section 4.1. The lighting array includes: a white LED (1 W, 6500 K, OPA733WD, Optek Technology), a 47 Ω resistor, a 3.335 V power supply (Horizon Electronics DHR), a DVM (34401A Digital Multimeter), an optical mirror, and shutters.
The camera sampled the intensity of the light passing through the beaker. We first took a clear-water image I^Water. Following this, we gradually added particles to the water beaker in roughly constant doses, up to a final weight of 600 mgr. Here too, we used MP SILICA of size range 12-26 µm. For each session, we averaged the intensity inside a square measuring-area of 10 × 10 pixels, at the center of the beam and of the imaging sensor, over one second. Denote by i_rec^Water and i_rec the measurements of clear water and suspension, respectively, similarly to Eqs. (1) and (4). As the density of the suspension increases, the image intensity drops. Thus, for higher suspension concentrations we used longer exposures, and then normalized the measurements accordingly. The non-linear domain is due to multiple scattering [49]. The images demonstrate the intensity attenuation of the transmitted light beam, for increasing particle density.
From measurements of i_rec^Water and i_rec, and Eqs. (1) and (4), the retrieved extinction coefficient is

    β^Sed = -(1/l) ln( i_rec / i_rec^Water ) ,

where l = 0.092 m is the inner diameter of the beaker. A linear relation is extracted between the weight of particles in a volume of 10^-3 m^3 of water and the measured extinction coefficient β^Sed of the suspension, Fig. 8. Following [49], a linear fit should rely only on low concentrations, for which multiple scattering is negligible. We included only measurements within the range of linear response. From the linear fit shown in Fig. 8, the estimated relation corresponding to the RGB channels and used in Eq. (12) is b ≈ (9.9, 9.6, 9.4) · 10^-3 m^2/gr.
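A sketch of this calibration fit on synthetic data: we synthesize beaker intensities from a Beer-Lambert response using the paper's green-channel b, retrieve the extinction per dose, and fit the slope. All numbers besides l and b are hypothetical:

```python
import numpy as np

l = 0.092                                    # beaker inner diameter [m]
b_true = 9.6e-3                              # green-channel coefficient [m^2/gr]

# Hypothetical session: clear-water intensity and particle doses in 1 L of water.
i_water = 200.0
mass_gr = np.array([0.1, 0.2, 0.3])          # added particle mass [gr]
rho = mass_gr / 1e-3                         # mass density [gr/m^3], volume = 10^-3 m^3
i_rec = i_water * np.exp(-b_true * rho * l)  # synthetic Beer-Lambert response

beta_sed = -np.log(i_rec / i_water) / l      # retrieved extinction per dose
b_hat = np.sum(beta_sed * rho) / np.sum(rho * rho)   # least-squares slope through origin
```

On real data, only the low-concentration (linear-response) measurements would enter the fit, as noted above.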