Article

Implementation and Evaluation of Vision-Based Sensor Image Compression for Close-Range Photogrammetry and Structural Health Monitoring

by Luna Ngeljaratan and Mohamed A. Moustafa *
Department of Civil & Environmental Engineering, University of Nevada, Reno, NV 89557, USA
* Author to whom correspondence should be addressed.
Sensors 2020, 20(23), 6844; https://doi.org/10.3390/s20236844
Submission received: 10 October 2020 / Revised: 26 November 2020 / Accepted: 29 November 2020 / Published: 30 November 2020
(This article belongs to the Section Optical Sensors)

Abstract
Much research is still underway to achieve long-term and real-time monitoring using data from vision-based sensors. A major challenge is handling and processing the enormous amount of data and images for image storage, data transfer, and image analysis. To help address this challenge, this study explores and proposes image compression techniques using non-adaptive linear interpolation and wavelet transform algorithms. The effects and implications of image compression are investigated in close-range photogrammetry as well as in realistic structural health monitoring applications. For this purpose, images and results from three different laboratory experiments on three different structures are utilized. The first experiment uses optical targets attached to a sliding bar that is displaced by a standard one-inch steel block. The effect of image compression on the photogrammetry is discussed, and the monitoring accuracy is assessed by comparing the one-inch value with the measurement from the optical targets. The second application is a continuous static test of a small-scale rigid structure, and the last application is a seismic shake table test of a full-scale 3-story building tested at E-Defense in Japan. These tests aimed at assessing the static and dynamic response measurement accuracy of vision-based sensors when images are highly compressed. The results show successful and promising application of image compression for photogrammetry and structural health monitoring. The study also identifies the best methods and algorithms, with which effective compression ratios of up to 20 times with respect to the original data size can be applied while still maintaining displacement measurement accuracy.

1. Introduction

The advancement of non-contact and vision-based sensors in the field of structural health monitoring (SHM), together with the development of optics and computer vision algorithms, has led to a growing demand among the civil and construction engineering communities for long-term, continuous, and real-time vision-based SHM. Currently, monitoring using vision-based sensors incorporates an offline camera calibration and a close-range photogrammetry process while using either artificial markers [1,2,3,4,5,6,7] or the natural features of the structure [8,9,10]. Several previous works have shown the robust development and promising future of vision-based sensor deployment for SHM purposes [9,11,12,13,14,15]. Meanwhile, the development of sensor systems for long-term, continuous, and real-time monitoring is still underway. Vision-based sensors capture two-dimensional data, i.e., raw images, whose size dictates the required storage, which raises the concern of excessively large storage when long-term or continuous monitoring is desired. Using high-resolution cameras is also mandatory in applications that require high accuracy in distance or displacement measurements, but it is not a cost-effective solution. Another concern is the pressing need for fast data transmission and efficient algorithms for image processing and analysis if vision-based sensors are to be deployed for real-time monitoring in the future.
The goal of this study is to contribute towards enabling future real-time or near real-time SHM using vision-based sensors through robust digital image data compression. The objective is to reduce the data storage of vision-based sensors and investigate the viability and feasibility of using compressed digital image data in photogrammetry, displacement measurements, and vibration-based SHM. The accuracy of vision-based sensor metrology depends highly on the image resolution and scale [16]. Therefore, it is essential to assess and quantify the effects and implications of the applied compression when the output images are used for photogrammetry and SHM purposes. Specific definitions of what is meant by photogrammetry and SHM in relation to vision-based sensors are provided in the next section. The applied compression essentially optimizes the data storage by minimizing the number of pixels required to represent an image. For example, a monochrome image with a dimension of 2560 × 2048 pixels and a bit depth of $n = 8$ stores $2^n = 256$ possible black (0)-to-white (255) intensity levels, which results in a 5.2 MB size for a single image. As the SHM duration increases, e.g., 1 h of bridge monitoring under traffic loading at a rate of 30 frames per second (fps), the required data storage grows to 561.6 GB, since 108,000 images have to be stored at the selected fps.
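For concreteness, the storage arithmetic above can be reproduced in a few lines of Python (a minimal sketch; the quoted 561.6 GB follows from the rounded 5.2 MB-per-image figure, while the exact byte count gives roughly 566 GB):

```python
# Storage demand of 8-bit monochrome images at 2560 x 2048 pixels, 30 fps.
width, height, bit_depth = 2560, 2048, 8
fps, duration_s = 30, 3600                          # 1 h of monitoring

bytes_per_image = width * height * bit_depth // 8   # 5,242,880 B (~5.2 MB)
n_images = fps * duration_s                         # 108,000 images

print(f"{bytes_per_image / 1e6:.2f} MB per image")
print(f"{n_images} images per hour")
print(f"{n_images * bytes_per_image / 1e9:.1f} GB per hour")
```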
The idea of data compression and data loss recovery has already been implemented for time-domain signals, or time signals for short, of wireless sensor networks using the compressive sensing (CS) algorithm [17,18,19,20,21,22], for instance, which can be integrated in the monitoring system [21,23]. Wavelet-based transforms are also widely used tools for time-signal compression of vibration-based SHM data for the purpose of civil structure condition monitoring [24] or the identification of structural modal properties [25]. For image compression, however, applications are available mostly in the fields of satellite imagery [26,27,28,29,30,31], medical imaging [32,33,34,35], and aerial photogrammetry [36,37,38,39,40,41]. Each field proposes a compression method considered suitable for its area of application. Hyperspectral image compression, for example, is an essential concept in remote sensing, not only to compress the bulk data but also to store them in detail [42,43,44,45,46,47]. A new standard, JPEG 2000, has also been developed to provide a framework for still image compression with a wide range of applications, including remote sensing and medical imagery [48,49,50,51,52,53,54,55]. Combining hyperspectral image compression with principal component analysis (PCA) and JPEG 2000 yields faster compression that benefits real-time applications [56,57,58,59,60,61,62,63,64]. Meanwhile, very limited studies have been reported on image compression in the field of vibration-based SHM.
The existing literature in SHM that implements image compression methods is mainly aimed at structural health diagnosis or at training purposes using advanced convolutional neural networks (CNNs) or deep learning frameworks. Comprehensive reviews of image and video compression with neural networks can be found in Dony et al. [65], Jiang et al. [66], and Ma et al. [67]. Studies conducted by Yang and Nagarajaiah [68] and Huang et al. [69] reconstructed SHM images based on the CS theory, exploiting the fact that crack information in a structure is pronounced and exhibits sparsity in the image. Xu et al. [70] resized rectangular raw images of damaged reinforced concrete columns into smaller square images to reduce the calculation costs of the supervised learning procedure. Su [71] downsized 100 sampling images used in a training model for concrete pavement to reduce computational time. Xu et al. [72] resized the pixel height of grayscale images using bicubic interpolation and then cropped them into smaller elements as the input of a deep network. These data were used to train the machine learning algorithms of an identification framework for steel surface cracks.
In vibration-based SHM, the latest study by Ri et al. [73] used downsized image data from a bridge field experiment. The study demonstrated that a JPEG image quality of 20% was the threshold that balanced effective compression against measurement accuracy. However, applying image compression using the CS technique to digital images registered with a sub-pixel algorithm, such as in the digital image correlation (DIC) method, was likely to be inefficient [74]. It was found that processing thousands of camera images was prohibitively expensive and should be preceded by another technique, such as a shape description method, for potential application in in-situ data compression. A study by Akcay [75] also stated that compressing images that used sub-pixel registration resulted in significant errors, especially in computing the exterior orientation and generating 3D point clouds, since sub-pixel registration requires high-resolution images. To summarize, digital image compression from vision-based sensors for vibration-based SHM purposes remains an open field for further studies and implementations, especially for vision-based SHM that uses the principle of sub-pixel registration to measure displacement. Thus, there is a need to explore the feasibility of applying various advanced, yet computationally efficient, image compression algorithms, along with novel methods of doing so. A comprehensive assessment of such methods is needed when they are applied to compress large image data files at high compression ratios. There is also a need to study the effect of data compression on digital image photogrammetry and on measurement accuracy for SHM purposes. These knowledge gaps are what this study attempts to fill.
In this study, an overview of the vision-based sensor system using optical points or targets, as well as the sensor deployment in the photogrammetry and SHM process, is given first in Section 2. Section 3 presents the proposed image compression framework and characterizes the selected image compression methods, with the image quality assessed using several statistical indices. Section 4 describes the systematic implementation of the proposed image compression framework, from compression to accuracy assessment, using a one-inch steel block experiment. Section 5 focuses on the static and dynamic displacement measurement accuracy assessment of the proposed framework based on the monitoring of a continuous quasi-static experiment and a unique full-scale seismic shake table experiment. Finally, Section 6 provides the key takeaways and conclusions of this study.

2. Vision-Based Sensor System for Photogrammetry and Structural Health Monitoring

2.1. System Requirements

The vision-based sensor deployed in this study uses two sets of monochrome cameras: consumer-grade digital cameras and specialized high-speed cameras. Each set comprises two cameras to allow three-dimensional (3D) monitoring and measurements. Complementary metal oxide semiconductor (CMOS) image sensors are commonly used in the camera system, with image resolutions ranging from 5 to 18 megapixels or more, depending on the camera specifications. The photogrammetry images are stored in .tiff or .jpg format, and test images are sometimes converted from video format when commercial digital cameras are used. For indoor monitoring, lighting is sometimes required, but it is not necessary for outdoor or field monitoring. The exposure settings and sampling rates vary depending on the environmental conditions and the monitoring type. For vibration SHM purposes, the aperture is normally set to f/16 with a fast shutter speed to avoid motion blur, while lower sampling rates such as 30 fps are sufficient for vibration monitoring or for structural system identification of the monitored structures [4,5]. Several types of optical templates are available, but this system uses a template of a white dot surrounded by white rings printed on a black background, based on the Schneider [76] design, as shown in Figure 1. The geometry of the white rings is varied for ease of identification, as each geometry translates into a different number and is identified as a coded target. The single white dot does not specify a certain number but serves a similar function to the coded targets. The behavior of the structure is detected through the movement of the centers of the optical targets, sometimes referred to as templates, points, or dots. These targets are rigidly attached to the structure and randomly distributed within the field of view (FOV) and area of interest as dictated by the monitoring setup. For large-scale experiments, the monitoring requires a distant setup to accommodate the large FOV, so the system depends entirely on the environment where the experiment is conducted. The different components of the full vision-based system, along with the various processing steps, are presented in Figure 1. As shown in the figure, large-scale monitoring setups might need to be located on a higher platform to monitor a broader area and to get a clean view of larger structures.

2.2. Photogrammetry Procedures

SHM using the proposed system starts with the photogrammetry process, in which the internal and external camera parameters need to be determined. In close-range photogrammetry, the procedure for finding the internal and external camera parameters generally adopts the methods established and used in many studies, such as Tsai [77], Heikkila and Silven [78], Ruther [79], Fraser [80], Gruen [81], and Bouguet [82], to name but a few. This procedure is usually referred to as close-range photogrammetry or photogrammetric calibration. In most SHM applications using vision sensors, camera calibration is specifically dedicated to removing lens distortion [83,84]. The camera calibration in this study is in fact a photogrammetry deliverable whose final goal is to define the relationship between the two cameras by forming a full-projection matrix. In the rest of the paper, the term photogrammetry is used for simplicity to refer to the process described herein.
The internal and external camera parameters defined in the photogrammetry process are listed in Figure 1. These parameters are used to compensate for the systematic error in the camera sensor, either a CCD or a CMOS sensor. They are recovered from a set of photogrammetry images using a self-calibrating bundle adjustment [85]. Bundle adjustment starts with least-squares estimates of the internal orientations. Using these estimates, all parameters are then determined simultaneously through an iterative non-linear collinearity optimization that minimizes the error in each image plane. The additional parameters, i.e., the radial symmetric, asymmetric, and tangential distortions, are also computed in the bundle adjustment process. The overall process requires only a set of static images, as few as five and up to as many as a hundred, taken from multiple views and different angles toward the structure, as shown in Figure 2. Other statistical qualities of the images, such as the ellipse quality, image deviation, or target positions in 2D and 3D coordinates as shown in Figure 2, can also be quantified from the photogrammetry process. The figure shows examples of the additional outputs of photogrammetry: (1) registered targets, i.e., targets selected to track the structure's movement; (2) unregistered targets, i.e., targets used in photogrammetry but neglected in SHM, as their locations are not within the scope of monitoring although their attachment is required for bundle adjustment convergence; (3) ellipse quality, i.e., the grayscale quality of the identified targets; and (4) natural feature points, i.e., points found using the ellipse-finding algorithm whose locations are also not within the scope of monitoring.
The photogrammetry procedure is then followed by the SHM analysis, as illustrated in Figure 1, in which videos or continuous images are recorded and stored in the system. They are post-processed using sub-pixel registration with a pattern or template matching method [86], which tracks the target locations within the image sequences. When the SHM is conducted using video recording, a conversion to image sequences is preferable so the results from the imaging photogrammetry can be used. During conversion, the SHM image resolution sometimes has to be adjusted to match the resolution of the photogrammetry images, or vice versa. Finally, using the relationship between the two cameras (the full-projection matrix) and the change of target location in each image (from the pattern matching method), as outlined in Figure 1, images are translated into time-domain response signals, i.e., displacement, velocity, or acceleration. These signals constitute the SHM data, which are then used to assess and report the condition of the structure, identify structural modal properties, etc., depending on the purpose of the SHM and user needs.
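As an illustration of this tracking step, the sketch below pairs normalized cross-correlation with a parabolic fit of the correlation peak to reach sub-pixel precision. It is a minimal OpenCV-based stand-in, not the specific registration algorithm of Ref. [86]; the function name and workflow are ours:

```python
import cv2

def track_target(frame, template):
    """Locate an optical target template in a frame with sub-pixel precision:
    pixel-level match by normalized cross-correlation, then parabolic
    refinement of the correlation peak in each axis."""
    r = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(r)              # integer-pixel peak

    def offset(f_minus, f_center, f_plus):
        denom = f_minus - 2.0 * f_center + f_plus   # parabola through 3 samples
        return 0.0 if denom == 0 else 0.5 * (f_minus - f_plus) / denom

    dx = offset(r[y, x - 1], r[y, x], r[y, x + 1]) if 0 < x < r.shape[1] - 1 else 0.0
    dy = offset(r[y - 1, x], r[y, x], r[y + 1, x]) if 0 < y < r.shape[0] - 1 else 0.0

    th, tw = template.shape[:2]
    # Sub-pixel center of the matched target in frame coordinates
    return x + dx + (tw - 1) / 2.0, y + dy + (th - 1) / 2.0
```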

3. Proposed Framework for Digital Image Compression

3.1. Overview

This study is not aimed at developing new image compression methods or algorithms, but rather at assessing and extending existing methods when specifically tailored to the application at hand. The next goal is then to systematically explore the effect of image resampling on the photogrammetry and apply it for SHM purposes. In other words, we utilize the most widely applied yet computationally simple resampling algorithms. By quantifying the influence of each algorithm on the compressed image quality, the photogrammetry results and SHM accuracy can be readily tied to the effectiveness of the selected algorithms.
The term “compression” in this study is defined as reducing the size of the digital image, in terms of storage size, by a scale factor, i.e., the ratio between the size of the original data and the compressed data. For up- or down-scaling digital images, two categories of digital image processing are generally used, i.e., resizing and resampling. Image resizing changes the image size while keeping the pixel dimensions constant, whereas image resampling physically changes the number of pixels in the image. Since the compressed images are further used in photogrammetry and vibration-based SHM for the vision-based sensor, image resampling is the suitable option, as it reduces the data storage without degrading the original image quality, e.g., by creating blurry or distorted images. Non-adaptive image interpolation is a resampling technique generally preferred for real-time applications, because adaptive image interpolation algorithms require longer processing times [87] and also consume more hardware resources than non-adaptive algorithms. Several interpolation algorithms have been used for image resampling, from the simplest nearest neighbor to more complex functions such as cubic splines [88]. In the frequency domain, wavelet transforms also offer advantages in compressing and reconstructing images without high hardware cost [89], as they are easier to implement than Fourier methods [90] and provide simpler computation and transform processes [91]. The compression ratio, i.e., the ratio between the original data volume of the full-resolution input image and the compressed data volume obtained after the resampling process, is expressed by Equation (1):
$$\mathrm{ICR} = \frac{\text{original data volume}}{\text{compressed data volume}} \tag{1}$$
The ICR in this study is computed as the ratio for a single image before and after compression, and it likewise applies to the total volume of images recorded from SHM and stored in the system. The proposed framework for digital image data compression is shown in Figure 3. The procedure is outlined for a monochrome image with $n = 8$ bits but can also be extended to colored images with higher bit depths, such as 24 or 32 bits. The process starts by resampling the photogrammetry images, in either .tiff or .jpg format, as taken by the two cameras in the system. For the rest of the paper, the frames taken by the first camera are referred to as the reference frame or reference image, R, and the ones captured by the second camera as the search frame or search image, S. These terms are associated with the sub-pixel pattern matching method used to track the target location between the two frames when processing the SHM images. The term input images refers to the initial images with full resolution of 2560 × 2048 pixels, while compressed images refers to the output images of the applied resampling algorithm.
How the resampling algorithm compresses an image is briefly described as follows. Consider a pixel $i(x, y)$, where $x, y$ are the horizontal and vertical coordinates of pixel $i$ located on an image $I$ of size $X \times Y$ pixels. Image $I$ is resampled using scaling factors $r_x$ and $r_y$ in the $x$ and $y$ directions, with $r_x = r_y$ when the scaling factor is the same in both directions. After resampling, the image is repositioned by rotating it through an angle $\theta$, and the new coordinate of pixel $i(x, y)$ becomes $j(x', y')$. The latter is obtained by adding a translation vector $(t_x, t_y)$ so that pixel $i(x, y)$ shifts to the new location $j(x', y')$ along a straight-line path. The overall process is the geometric transformation expressed in Equation (2):
$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} r_x & 0 \\ 0 & r_y \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix} \tag{2}$$
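As a concrete reading of Equation (2), the sketch below maps an input pixel coordinate to its resampled location (a pure coordinate mapping; the intensity values themselves are filled in afterwards by the interpolation kernels of Section 3.2):

```python
import numpy as np

def transform_pixel(x, y, r_x, r_y, theta=0.0, t_x=0.0, t_y=0.0):
    """Geometric transformation of Equation (2): scale by (r_x, r_y),
    rotate by theta, then translate by (t_x, t_y)."""
    rotation = np.array([[np.cos(theta),  np.sin(theta)],
                         [-np.sin(theta), np.cos(theta)]])
    scaling = np.diag([r_x, r_y])
    return rotation @ scaling @ np.array([x, y]) + np.array([t_x, t_y])

# Pure 50% downscaling (r_x = r_y = 0.5, no rotation or translation):
print(transform_pixel(100.0, 200.0, 0.5, 0.5))      # -> [ 50. 100.]
```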
According to Shannon's first theorem, the lower and upper bounds of the ICR are expressed as $n/(H(I)+1) < ICR < n/H(I)$, in which $n$ refers to the bit depth (8 for a monochrome image) and $H(I)$ is the image entropy, explained later in Section 3.4 when assessing the quality of the compressed images. Even though a high compression ratio is theoretically possible up to $ICR_{max}$, digital images can only be compressed or magnified so far before their pixel values degrade. Therefore, besides the criteria of a fast algorithm and a high compression ratio, maintaining the image quality is essential, especially for photogrammetry, SHM, and the accurate displacement measurement desired in this study. Two resampling methods, based on interpolation and wavelet transforms, are selected and described in detail in the next sections.

3.2. Non-Adaptive Linear Interpolation

Interpolation algorithms are basic resampling techniques that reduce the input image size by estimating the new pixel intensities of the compressed image from the neighboring pixel values. Interpolation is selected in this study because the compressed images can be produced without any visual loss [92]; interpolation techniques are divided into two types, adaptive and non-adaptive. Non-adaptive image interpolation is utilized here, since it is generally preferred for real-time applications. Image interpolation is applied in the two directions $x, y$ and attempts to reach the best approximation of a pixel's intensity based on the values of the surrounding pixels [87].
The non-adaptive interpolation scheme operates with an interpolation kernel $K_1(t)$ with properties such as regularity, moments, and symmetry. For monochrome images, adjacent pixel values from 0 to 255 are used when interpolating. Four non-adaptive linear interpolation functions, i.e., bicubic, bilinear, box kernel, and Lanczos, are selected in this study. The simplest method is the box kernel, which takes a region of bandwidth $h$ that controls the degree of smoothing; $h$ is specified as 0.5 in this study, and the interpolated pixel value is computed as the average of the pixels in the specified region. The algorithms become more complex and require longer computational times as they use more adjacent pixels, but the results are more accurate. Bilinear interpolation is more complex than the box kernel, as it considers the known 2 × 2 neighboring pixel values. Bicubic interpolation considers the closest 4 × 4 pixel values, so the interpolated image is smoother and sharper than with the box kernel or bilinear method. Another interpolation method uses the sinc kernel, which has very high accuracy on smooth data. However, it produces ripples around image edges, since the function $\mathrm{sinc}(t)$ decays at a rate of $1/t$, creating the Gibbs phenomenon, or ripple, throughout the image [93]. The solution is to apply a window function to limit the ripple artifact, and the most commonly used window in image processing is the Lanczos window with a positive integer $n = 2$ or 3. These algorithms are applied to the input images to calculate the interpolated pixel coordinates, and then the convolution operation is performed to obtain the pixel intensity values of the compressed images, as outlined above in Figure 3.
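These four kernels are available in common imaging libraries; the sketch below uses Pillow's BOX, BILINEAR, BICUBIC, and LANCZOS filters as stand-ins (Pillow's Lanczos window uses n = 3, and the file names are hypothetical):

```python
from PIL import Image

# Resample a 2560 x 2048 photogrammetry image to 25% of its size (ICR ~ 20)
# with each of the four non-adaptive kernels discussed above.
img = Image.open("reference_frame.tiff").convert("L")     # hypothetical file name
new_size = (img.width // 4, img.height // 4)              # 2560 x 2048 -> 640 x 512

filters = {
    "box":      Image.Resampling.BOX,
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic":  Image.Resampling.BICUBIC,
    "lanczos":  Image.Resampling.LANCZOS,
}
for name, resample_filter in filters.items():
    img.resize(new_size, resample=resample_filter).save(f"compressed_{name}.tiff")
```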

3.3. Wavelet Compression

Wavelet transforms are widely used in image and video compression applications. The general architecture of wavelet compression is outlined in Figure 3: a wavelet transform, or encoder, is selected first and operated by transforming the data to remove redundancy in the image. The transform coefficients are then quantized, followed by entropy coding of the quantizer output. The wavelet compression algorithms used in this study fall into the categories of progressive coefficients significant methods (PCSM) and coefficients thresholding methods (CTM). PCSMs include the adaptively scanned wavelet difference reduction (ASWDR), embedded zerotree wavelet (EZW), set partitioning in hierarchical trees (SPIHT), spatial-orientation tree wavelet (STW), and wavelet difference reduction (WDR) algorithms, while the CTM uses subband thresholding with Huffman encoding, or LvlMmc.
EZW, first introduced by Shapiro [94], became one of the first and most powerful wavelet encoders for image compression. It forms a tree structure and uses a list of significant coefficients. After a discrete wavelet transform (DWT) is applied to the image, the roots are located in the lowest-frequency sub-band of the wavelet tree, and a predefined threshold value is compared with the wavelet coefficients. The SPIHT encoder [95] follows the same basic progressive concept as EZW but uses three lists: significant pixels, insignificant pixels, and insignificant sets. While EZW was the first encoder to provide significant rate-distortion performance, SPIHT is also a fast algorithm that improves on EZW by proposing a new encoding for the positions of the significant coefficients [96]. STW is essentially the SPIHT algorithm, but it uses a state transition model in organizing the output of its coding. ASWDR [97] is a relatively recent algorithm and an adaptation of WDR [98] that adds one last step of creating new scanning orders. While WDR uses a fixed ordering, ASWDR employs a varying order, so it can adapt to specific image features, improving the compressed image quality.
A note related to the wavelet compression applied in this study is that the pixel dimensions of the input image must be powers of two. Since the width of the input image used in this study is 2560 pixels, which is not a power of two, scaling in the horizontal direction is required before applying the wavelet compression. For this purpose, bilinear interpolation is selected due to its simplicity, and the wavelet compression is then applied after the horizontal scaling, following the procedure in Figure 3.
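The PCSM coders above require dedicated implementations; as a minimal stand-in for the thresholding (CTM) branch of Figure 3, the sketch below uses PyWavelets with the bilinear horizontal pre-scaling and a global hard threshold on the coefficients (quantization and entropy coding are omitted, and the file name is hypothetical):

```python
import numpy as np
import pywt
from PIL import Image

img = Image.open("reference_frame.tiff").convert("L")    # hypothetical file name

# Pre-scale the 2560-pixel width to 2048 (a power of two) with bilinear
# interpolation before applying the wavelet transform, as described above.
img = img.resize((2048, img.height), Image.Resampling.BILINEAR)
data = np.asarray(img, dtype=float)

coeffs = pywt.wavedec2(data, "bior4.4", level=4)         # 4-level 2D DWT
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = np.percentile(np.abs(arr), 95)               # keep ~5% of coefficients
arr = pywt.threshold(arr, threshold, mode="hard")

coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
reconstructed = pywt.waverec2(coeffs, "bior4.4")         # decompressed image
```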

3.4. Compressed Image Quality Index

A histogram of a compressed image quantifies the information contained in the output of the resampling algorithms described above. The Shannon entropy of image $I$, $H(I)$, is calculated using Equation (3), with $p(i)$ as the probability mass function containing the normalized histogram gray-level counts of the compressed image. Entropy is always non-negative and indicates the number of bits required on average to describe the random variable. Higher entropy indicates that the variable contains more information, and in turn, the compressed image is more detailed. Entropy reaches its maximum value when the probability distribution is uniform. Note the relationship between image entropy and compression ratio briefly mentioned before: the lower and upper bounds of the compression ratio based on the Shannon estimate, i.e., $n/(H(I)+1) < ICR < n/H(I)$, directly relate the entropy $H(I)$ to the ICR. The higher the entropy, the lower the achievable compression ratio, and vice versa. That relationship is not explored further here, as the compression ratios applied in this study are much higher than the theoretical bounds:
$$H(I) = -\sum_{i \in I} p(i) \log_2 p(i) \tag{3}$$
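Equation (3) and the Shannon bounds on the ICR discussed above reduce to a few lines (a minimal sketch for 8-bit images):

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy H(I) of an 8-bit monochrome image, Equation (3)."""
    counts = np.bincount(img.ravel(), minlength=256)   # gray-level histogram
    p = counts / counts.sum()                          # probability mass function
    p = p[p > 0]                                       # 0 * log2(0) terms vanish
    return -np.sum(p * np.log2(p))

# Shannon bounds on the achievable ratio, n/(H+1) < ICR < n/H.
# With H(I) = 5.78, as for the photogrammetry image in Section 4.2.2:
n, H = 8, 5.78
print(f"{n / (H + 1):.2f} < ICR < {n / H:.2f}")        # 1.18 < ICR < 1.38
```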
A number of indices have been proposed in the literature to measure the image quality degradation introduced by resampling or compression algorithms. The simplest metric for measuring image quality is the peak signal-to-noise ratio, PSNR, given by Equation (4). The variable $n$ is the bit depth of the image, defined as 8 bits for a monochrome image; accordingly, the PSNR evaluates to approximately 48 dB when MSE = 1 and increases as the MSE decreases. MSE is the mean square error between the compressed and input images. Both PSNR and MSE are computed from pixel-to-pixel differences, so they provide simple computation with a clear physical meaning. Another metric selected in this study is the structural similarity index metric, SSIM, computed using Equation (5). It measures the similarity between two images in terms of luminance, contrast, and structure. It computes a local spatial index in terms of the average pixel values $\bar{x}$, $\bar{y}$; the standard deviations $\sigma_x$, $\sigma_y$; and the covariance $\sigma_{xy}$:
$$\mathrm{PSNR} = 10 \log_{10} \frac{(2^n - 1)^2}{\mathrm{MSE}} \tag{4}$$
$$\mathrm{SSIM} = \frac{4\,\sigma_{xy}\,\bar{x}\,\bar{y}}{\left(\sigma_x^2 + \sigma_y^2\right)\left(\bar{x}^2 + \bar{y}^2\right)} \tag{5}$$
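Both indices can be computed directly from Equations (4) and (5), as sketched below; note that the windowed SSIM found in common libraries adds stabilizing constants absent from the global form used here:

```python
import numpy as np

def psnr(original, compressed, n=8):
    """Peak signal-to-noise ratio, Equation (4), for n-bit images."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10((2 ** n - 1) ** 2 / mse)

def ssim_global(original, compressed):
    """Global similarity index in the form of Equation (5)."""
    x = original.astype(float).ravel()
    y = compressed.astype(float).ravel()
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return (4.0 * cov * x.mean() * y.mean()
            / ((x.var() + y.var()) * (x.mean() ** 2 + y.mean() ** 2)))
```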

3.5. Close-Range Photogrammetry and Structural Health Monitoring

After processing the photogrammetry images through the resampling algorithms and assessing their quality using the index metrics, they are analyzed following the procedures explained in Section 2.2. The internal and external photogrammetry parameters are determined from the compressed reference and search images captured by the two-camera system. When the compressed images are successfully analyzed and the full-projection matrix is computed, the SHM is performed. SHM images are also compressed following the framework in Figure 3 before being processed to generate time signals. The sub-pixel pattern matching method is used to track the locations of the optical targets within the image sequences. The SHM accuracy is computed from the difference between the optical target measurements and the reference values, expressed as the maximum error or the RMSE depending on the experiment.
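These accuracy metrics reduce to a short helper such as the one below; normalizing the percentage error by the peak reference amplitude is our assumption, as the text does not spell out the exact formula:

```python
import numpy as np

def shm_accuracy(measured, reference):
    """RMSE (in measurement units, e.g., mm) and maximum error as a
    percentage of the peak reference amplitude for a displacement history."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((measured - reference) ** 2))
    max_err_pct = np.max(np.abs(measured - reference)) / np.max(np.abs(reference)) * 100.0
    return rmse, max_err_pct
```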

4. Compression Framework Implementation and Validation Using the One-Inch Block Experiment

4.1. Test Setup

The proposed image compression framework was first experimentally evaluated through a one-inch steel block test in the Earthquake Engineering Laboratory (EEL) at the University of Nevada, Reno (UNR). The test considered a sliding bar that can be displaced, along with a selected number of targets, when a standard NIST-certified one-inch block is inserted, as illustrated in Figure 4. It is noted that the sliding bar was installed on a reinforced concrete (RC) column and slab subassembly that was being monitored as part of another study. For the best and widest field of view, the vision-based monitoring station was set on a higher platform, basically the top of one of the shake tables in the EEL, approximately 5 m from the specimen, as shown in Figure 4. The system used two monochrome cameras with the specifications listed in Table 1.
Optical targets were printed on adhesive stickers and carefully attached to both the specimen and the sliding validation bar to avoid losing the targets during the test. The radius of the white targets was 21 mm. With a camera separation of approximately 1 m, all distributed optical targets on the specimen were successfully captured by the cameras. The targets were illuminated by two additional lights. A total of 76 photogrammetry images, 38 captured by each camera, were taken at different positions and orientations toward the specimen, following the procedure described in Section 2.2.
The validation test model has the following components. A sliding validation/verification bar, shown in Figure 4, was attached to the RC specimen. For simplicity, the validation test was conducted only on the bar, which was displaced using the standard block. The distributed targets on the concrete specimen were used for another set of static experiments, but the photogrammetry process was conducted using all targets on the specimen to satisfy the minimum-target requirement in each photogrammetry image for bundle adjustment convergence. Six optical targets, attached to a plate glued to the moving part of the sliding bar as shown in Figure 4, were monitored for the validation test. Three static pictures were taken to validate the vision sensor measurement: an initial one without the steel block, one with the steel block inserted onto the bar, and a final one after the steel block was taken off the bar. When the steel block was inserted, the aluminum plate with the glued optical targets was displaced by precisely 1 inch (25.4 mm), following the dimension of the steel block. The pictures taken before inserting and after removing the steel block were used as the reference initial and end states of the validation test; they measured a value of zero, as no net movement was induced on the optical targets between these static pictures. Therefore, the accuracy measured from this test was based on an absolute single value of 25.4 mm, against which the measurements of the three optical targets shown in Figure 4 were compared.

4.2. Compressed Image Quality

4.2.1. Histogram and Output Image Visualization

Before analyzing the photogrammetry results, the compressed photogrammetry image quality was first assessed to verify that the images were neither distorted nor blurred by the applied resampling algorithms. It was also important to check whether the input image information was still preserved in the compressed image. An example photogrammetry image with the original size of 2560 × 2048 pixels is shown in Figure 5.
The histogram shows that most pixels have rather low intensity values, such that the pixel values are not distributed over the entire gray-level intensity range. From the histogram, the inferred image information indicates that the captured image was a low-contrast image, i.e., the object being viewed was mostly dark in color and located in a dark environment, and the specimen was illuminated with low-intensity light. More details are observed when the image is zoomed into a local area of 72 × 305 pixels surrounding the three optical targets, or points, used for the accuracy assessment, as shown in Figure 5. The figure shows that the low pixel intensities result not only from the dark environment but from the black color of the optical targets as well. The vision-based sensor setup was designed such that most of the image information served as background, for ease of separation from the white dots of the optical targets. Even though the brightness level of the white dots was low, for example within the range of 60–80 intensity levels as shown in Figure 5, they could still be identified in the photogrammetry process.
The photogrammetry images were compressed by the resampling algorithms from the original size of 2560 × 2048 pixels to 75% (1920 × 1536 pixels), 50% (1280 × 1024 pixels), and 25% (640 × 512 pixels) of the original size. These percentages were selected to keep the compressed image dimensions as integers. The data size after compression was reduced from 5.2 MB for a single image to 2.6 MB, 1.04 MB, and 250 KB for the 75%, 50%, and 25% resampling, respectively. These data size reductions correspond to compression ratios ICR of 2, 5, and 20. Examples of the photogrammetry image after compression using non-adaptive linear interpolation and wavelet transform are shown in Figure 6. The clear observation is that the dynamic pixel counts of the input (original) image are reduced to a smaller number of pixels as an effect of quantization during compression. The compressed image also does not contain gray values that did not exist in the input image, as indicated by the histogram of the local area surrounding the three points in Figure 6. The gray-level intensity of the compressed images from both resampling methods is still within the range of the input image intensity levels.
Inspecting the histograms of the local area of the three optical targets with ICR = 2 in Figure 6, both resampling methods resulted in identical histograms of their compressed images. Besides the quantization effect on the number of pixels per gray-level intensity, no spatial artifact occurs in the low compression ratio images, but artifacts come into effect at the higher compression ratio ICR = 20, as shown in Figure 7. The resampling causes the pixels surrounding an edge to look averaged, so the edge appears as a large block, generally known as staircase noise in photography. When the image is enlarged, this effect becomes visible on the sharp edges, especially the edge of the white dot. Another spatial artifact shown in Figure 7 is color bleeding, where the edge of the white dot overlaps, or bleeds, into another gray-level intensity. This artifact appears in all compressed images using either non-adaptive interpolation or wavelet compression, but the most apparent color bleeding occurs with the coefficient thresholding (LvlMmc) wavelet transform. The color bleeding therefore intensifies the pixel counts at several gray levels, as shown in the histograms in Figure 6. However, the intensity still falls within the range of the input image, i.e., the gray-level information from the input image is still preserved. When the brightness and contrast of the image are enhanced as shown in Figure 7, small artifacts around the edge of the white dots become visible as dots, the so-called ringing or edge halos. They form due to the loss of sharpness, as the compressed image oscillates at a fading rate around sharp edges during intensity transitions.
Overall, image compression by the resampling methods, using either non-adaptive linear interpolation or wavelet compression, reduces the pixel counts within the distributed gray-level intensities. The output images also show spatial artifacts such as staircase noise, color bleeding, and edge halos. These artifacts are left untreated, as this study is not intended to enhance or correct features that become visible after image compression. The photogrammetry and SHM procedures were continued using these compressed images to understand the implications of image compression, as discussed in Section 4.3 and Section 4.4.

4.2.2. Quantitative Analysis Results

Quantitative analysis was conducted on all compressed photogrammetry images, a total of 2508 images (76 images × 3 compression ratios × 11 resampling operations). Two examples, using the resampling output of bicubic interpolation and the spatial-orientation tree wavelet, are shown in Figure 8. The histogram focuses only on the predominant gray values between 0 and 100 intensity captured by the two cameras. The figure shows the similarity of the gray intensity distributions between the original and compressed images, without any intensity outside the input image range. The pixel counts were also identical between the reference and search images for each compression ratio.
Image entropy defines the randomness contained in the image, so it relates directly to the image histogram. The maximum entropy is equal to the number of bits $n$, i.e., 8. It can be achieved only if the histogram uses the entire available dynamic range from 0 to 255 with equal probability of the pixel counts, i.e., the resulting probability distribution is completely flat. The entropy of all 76 original-size photogrammetry images is shown above in Figure 5c. The input images associated with the example in Figure 8 have entropies of 5.78 and 5.82. After compression, their entropies do not change significantly, even at the higher compression ratios, as they remain within the ranges of 5.76−5.78 and 5.79−5.82 for the reference and search image, respectively. This agrees with the histograms in Figure 8, where the applied resampling algorithms only create quantization without changing the gray-level distribution or adding intensities not present in the input image. Moreover, a quick example confirms the observation mentioned previously regarding the lower and upper bounds of the compression ratio based on the Shannon entropy. Using the relationship in Section 3.4 with the original-size image entropy of 5.78, the bounds are computed as $8/(5.78+1) < ICR < 8/5.78$. Such an image could therefore be compressed only within the range $1.18 < ICR < 1.38$, which provides very low compression values of little practical benefit. The compression ratios in this study are nevertheless stretched to 20, and hence it is recommended not to rely on such low theoretical bounds.
The peak signal-to-noise ratio PSNR and structural similarity index metric SSIM of the sample images compressed with bicubic interpolation and the STW wavelet are also shown in Figure 8. Moreover, the complete entropy, PSNR, and SSIM results for the compressed images used in the photogrammetry for those cases are shown in Figure 9. PSNR measures the quality of the reconstructed image, where the noise is the error introduced by the resampling algorithm. For all photogrammetry images compressed using bicubic interpolation and the spatial-orientation tree wavelet, the results show a high PSNR of more than 40 dB, as shown in Figure 9. As for SSIM, a good quality index is implied in Figure 9, where the SSIM for all photogrammetry images is more than 0.96. The assessments based on entropy, PSNR, and SSIM show that the compressed images still have good reconstruction quality with insignificant noise, even at the higher compression ratios.
The relationship of PSNR with entropy and SSIM for the applied compression ratios is shown in Figure 9 using the bicubic- and STW-compressed images. It is observed that the compression ratio slightly reduces the PSNR for both bicubic interpolation and STW compression. The PSNR distribution using the STW wavelet is concentrated around a nearly constant value, while bicubic interpolation generates more variation in the PSNR distribution.
The relationship between image entropy and PSNR shown in Figure 9 is fairly flat, in that a change in entropy does not significantly affect the PSNR. A non-linear relationship is observed between PSNR and SSIM, which is more visible for bicubic interpolation: the upward slope of the bicubic interpolation curve suggests that increasing the PSNR also increases the SSIM value. Meanwhile, the PSNR–SSIM trend for STW compression is less noticeable, except that the SSIM improves at higher compression ratios, whereas the PSNR declines as the compression ratio increases.

4.3. Effect of Image Compression on the Photogrammetry Process

The compressed images from non-adaptive linear interpolation and wavelet compression were processed following the close-range photogrammetry procedure outlined in Figure 3. Using ICR = 2, 5, and 20, the photogrammetry was successfully conducted, as both internal and external orientation parameters could be identified from the compressed images. This shows that even though the output of the image compression has some artifacts (e.g., the more visible defects from the LvlMmc method at ICR = 20), the photogrammetry still yields meaningful and acceptable results. In this section, the photogrammetry results from the compressed images, such as the principal point ($u_0$, $v_0$) and the radial symmetric, asymmetric, and tangential distortions ($A_1$, $A_2$, $A_3$, $B_1$, $B_2$), are presented. In addition, the effect of image compression using a selected resampling method is explored by relating the change of the photogrammetry parameters to the applied compression ratio. Also, for a given compression ratio, the photogrammetry results from the compressed images are checked for any trend (e.g., a specific ratio or scale) relative to the results from the original full-resolution input images.

4.3.1. Principal Point Location in the Compressed Image Plane

The photogrammetry results of the one-inch block test using input images of size 2560 × 2048 pixels are given in Table 2. The table shows that the principal point locations in the x and y directions ($u_0$, $v_0$) are −24.17, −7.30 and 13.44, −10.21 for the reference image R and search image S, respectively. These positions are plotted in Figure 10, followed by the principal point results using compressed reference R and search S images from the different compression methods and the three compression ratios. For convenience, the figure shows only the pixel range where the principal points are located, instead of the entire image resolution. In all locations, the units of the principal point on both axes are pixels. Figure 10 clearly shows that the principal point shifts along the horizontal and vertical axes with the applied compression ratio. Although the photogrammetry procedure is the same for all compression ratios, the different compression algorithms affect the location of the principal point differently. This shift is expressed by a pixel translation ratio on both axes, $\Delta u_0$ and $\Delta v_0$, i.e., the ratio of a principal point coordinate in the original image to that in its compressed version (see Figure 10 for an illustrative example). The computed translation ratios are summarized in Table 3. The table shows a varied behavior of the principal point translation between the non-adaptive linear interpolation and wavelet compression methods. The principal point translation ratio in the image plane ranged between 0.66 and 1.34 for ICR = 2 and increased to 1.16−2.51 for ICR = 5. Translation ratios become even larger for ICR = 20, where a principal point coordinate shifts within a translation ratio of up to 8 when the ASWDR compression method is used, for instance. It is also noted that the principal point shifts more in the horizontal x direction than in the vertical y direction, even though the image is resampled at the same scale in both directions.

4.3.2. Optical Target Locations in the Compressed Image Plane

A total of 88 optical targets distributed on the RC specimen and the verification bar (see Figure 4) were used in the photogrammetry procedure. Therefore, the output listed in Table 2 was in fact the result of the bundle adjustment of all these targets over the 76 photogrammetry images. It is also noted that all 88 targets were available for SHM purposes. However, the one-inch block test used to quantify the measurement accuracy was conducted using only the three optical targets placed on the sliding bar, as mentioned before. Accordingly, only these three targets, or points, were used to assess the effect of the applied compression on their 2D positions in the image plane.
One of the outputs of the photogrammetry is the 2D location of the optical targets, i.e., the horizontal x and vertical y pixel coordinates in the image plane. The results from the compressed images are shown in Figure 11, plotted together with the 2D positions of the three points obtained when the original-size images are used in the photogrammetry. A clear clustering of these points is observed, and a definite 2D translation of their positions from the original (uncompressed) locations is evident. The change of target position between the original and compressed images is again expressed by the translation ratio, as shown in Table 4. This time, however, the ratios turn out to be the reciprocal of the scaling applied to the digital image when non-adaptive linear interpolation is used, i.e., 75% compression leads to a translation ratio of 1.33, and 50% or 25% leads to 2.0 and 4.0, respectively. The results from wavelet compression differ only because the horizontal axis of the input image needs scaling to fulfill the requirement that the input image size be a power of two. Based on the results in Table 4, it can be confirmed that the 2D point translation in the image plane from the compressed-image photogrammetry using both resampling methods is in fact the inverse of the scaling factor applied to the input images.

4.4. Effect of Image Compression on the Vision System Measurement Accuracy

The objective of the previous subsections was to confirm that the conducted image compression could still lead to a successful photogrammetry process, i.e., that the internal camera parameters could be determined, for instance. This held regardless of some quantified variations between the different compression methods (e.g., principal point or optical target translation). However, those results do not necessarily represent the measurement accuracy after image compression, which is the focus of this subsection. Using the same compression methods as before, the input images for the 1-inch displaced targets were compressed and then used to determine the displacement value. The value used for validation here was the absolute 1 inch (25.40 mm) resulting from inserting the one-inch block onto the sliding bar. Using sub-pixel registration on the original image size, the displacements of points 1, 2, and 3 shown in Figure 4 were measured as 25.48, 25.46, and 25.48 mm, respectively. The measurement accuracy was assessed by computing the error of these values with respect to the absolute value of 25.4 mm. The error was computed as 0.31%, 0.24%, and 0.31% for points 1, 2, and 3, respectively, and the average error over the three points was 0.28% when uncompressed images were used.
The effect of image compression on the vision-based displacement measurement is expressed using two metrics: the errors of the three points with respect to two reference values, (1) the measurements using the original-size images, i.e., 25.48, 25.46, and 25.48 mm for points 1, 2, and 3, and (2) the absolute value of 25.40 mm. Table 5 provides all the measurement values obtained from the different compression methods. In addition, the average error of the three points with respect to the original input image, Δavg (%), is summarized in Table 5.
The results show a very high accuracy at the lower compression ratios, especially when the images are compressed using the non-adaptive linear interpolation methods. At higher compression ratios, the accuracy decreases slightly, and the measurements from wavelet-compressed images show relatively higher errors compared to the original values. Similar trends are observed when comparing the measurements from compressed images with the absolute value of 25.4 mm. The accuracy measured at the lower compression ratios is very close to the accuracy given by the input image, i.e., 0.28% error. The maximum error is associated with the images compressed using the LvlMmc method, at 0.96% with respect to the absolute value of 25.4 mm. Recalling the example of the compressed image with ICR = 20 using the LvlMmc method in Figure 7, the relatively low quality of that output image likely contributed to the lower measurement accuracy of this method. Thus, future studies could further relate the gray value distributions of output images at high compression ratios to the displacement computation using sub-pixel image registration algorithms.
Overall, the results in Table 5 provide the ultimate validation and verification for adopting image compression using either linear interpolation or wavelet transform methods, with displacement measurement errors of less than 1%. There are no clear trends as to which algorithms under the two methods would be preferred for higher accuracy. However, the non-adaptive linear interpolation methods can be argued to provide slightly more accurate results than the wavelet transform methods. Nonetheless, the table highlights the specific cases that can be recommended as the best algorithms based on the error values. Given these promising results from the simple one-inch block test, it was desired to extend the study to more realistic cases, including full-scale building vibration monitoring, which is the focus of the next section.

5. Accuracy Assessment of Image Compression Framework Using Static and Dynamic Tests

The implications of image compression for the photogrammetry process and the vision-based sensor metric accuracy were evaluated in Section 4 using a precisely known single-value test, i.e., the one-inch block test. In this section, the accuracy of the vision-based sensor metric using compressed images is evaluated further using results from two recently conducted laboratory tests. The first case study applies image compression to quasi-static test images. The second case study implements the proposed framework on monitoring images from a signature full-scale three-story RC building tested under seismic loads on the world's largest shake table, located in Japan. While different compression methods and algorithms were used and compared in the previous section, only bicubic interpolation was used for the two case studies discussed in this section, due to its computational efficiency and better accuracy as evaluated in Section 4.

5.1. Quasi-Static Experiment

5.1.1. Experimental Setup and System Configuration

The accuracy of image compression for displacement measurement was experimentally evaluated through a more elaborate validation quasi-static test. A small ANCO shake table was used to apply uniaxial loading to a rigid test model at the Earthquake Engineering Laboratory at the University of Nevada, Reno. The configuration of the two-camera vision-based sensor system is the same as that used in the one-inch block test (Table 1). For this test, the sampling rate was 32 fps, and monitoring/recording continued for 33.5 s, which resulted in a total data size of 27.4 GB. The test model was an aluminum block specifically constructed for this study with dimensions of 100 mm × 10 mm × 900 mm, as shown in Figure 12. In the experiment, the shake table was moved manually by pushing the specimen while leaving the actuator free to follow the motion. The test setup is shown in Figure 12 along with the distribution of the optical targets. For the purpose of photogrammetry, 11 optical targets (numbered 399 to 409) were attached to the specimen, and six targets (numbered 410 to 416) were distributed on the shake table. The targets were printed from photogrammetry software [99] on adhesive stickers and carefully attached to avoid losing them during the tests. The radius of the white circular targets was 3.5 mm. For the best and widest field of view, the monitoring station was located approximately 3.8 m from the specimen. Using a camera setup as shown in Figure 12, with approximately 1.77 m of camera separation, all distributed targets were successfully captured by the cameras. A total of 64 images (32 from each camera) were used in the photogrammetry, and the results using the original and compressed image sizes are given in Table 6. With the given vision sensor configuration and number of images, the photogrammetry could still be processed at high compression ratios. The static accuracy assessment in this experiment is provided only for the maximum applied compression ratio, i.e., ICR = 20.

5.1.2. Static Accuracy Assessment of Vision-Based Sensor Metric

For the validation quasi-static test, monitoring was conducted at only 32 fps, i.e., a 32 Hz sampling rate, with a global electronic shutter of 3000 μs. The test utilized the full-resolution ROI of the cameras, i.e., 2560 × 2048 pixels. A total of 2378 images were recorded from the test, corresponding to a data size of 27.4 GB. For validation purposes, two NIST-calibrated string potentiometers, namely SP top (top channel) and SP bot (bottom channel), were connected to the specimen as shown in Figure 12.
SP top was connected to the specimen close to optical target #401 (TT top) and SP bot was connected close to target #407 (TT bot) for comparison purposes. The displacement measurements from the optical targets and string potentiometers were compared, and the error was computed as the percentage difference between the SP and optical target measurements. With ICR = 20, the test data could be compressed to 1.37 GB. The processed displacement histories of the static test, as obtained from both the original and compressed images with ICR = 20, are shown in Figure 13, where they are also compared to the SP measurements.
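As a bookkeeping check, the reported totals are consistent with reading the image compression ratio as a stored-size ratio (27.4 GB/1.37 GB = 20); a trivial sketch of that check, under this assumed reading of ICR, is:

```python
# Size-based check of the compression ratio (assumes ICR is a stored-size ratio).
original_gb, compressed_gb = 27.4, 1.37
icr = original_gb / compressed_gb
print(icr)  # 20.0
```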
The measurement errors of the results shown in Figure 13 were calculated and are listed in Table 7 for the top and bottom channels. Three accuracy or error metrics are determined based on the selection of the reference sensor. Using the string potentiometer as reference, the accuracy of target tracking (TT) from the original and compressed images is estimated with respect to the SP values. The third error is estimated using the original-image results as the reference for the compressed-image results. Using the original images, the vision sensor metric shows 0.51% and 0.27% error with respect to SP at the top and bottom, respectively. The root mean square error (RMSE) associated with these measurements, when calculated in actual measurement units (mm), is 0.18 mm and 0.31 mm for the top and bottom channels, respectively. Using compressed images, the RMSE increases slightly to 0.75 mm and 0.52 mm for the respective channels. These results show that, for continuous static monitoring, using compressed images only slightly reduces the accuracy of the static measurement. When comparing the vision sensor results from the original and compressed data, an RMSE of 0.79 mm is measured at both monitoring locations. The maximum relative error between the compressed and original images is only around 0.5%. Therefore, similar to the results from the one-inch block test, the accuracy of using compressed images for static displacement measurement is very acceptable. For comparison, the maximum error is about 0.3% and 0.5% when bicubic interpolation is used to compress the one-inch block and quasi-static continuous test images, respectively.
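A minimal sketch of how the error measures in Table 7 can be computed is shown below, assuming `ref` and `test` are synchronized displacement histories in mm (e.g., an SP series and a TT series); normalizing the percentage error by the peak reference value is an assumption made for illustration.

```python
# Sketch of the static accuracy metrics; array names are hypothetical.
import numpy as np

def static_errors(ref, test):
    """Return the maximum error (mm), maximum error (%), and RMSE (mm)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    diff = np.abs(test - ref)
    d_max_mm = diff.max()
    d_max_pct = 100.0 * d_max_mm / np.abs(ref).max()  # assumed normalization by peak
    rmse = float(np.sqrt(np.mean((test - ref) ** 2)))
    return d_max_mm, d_max_pct, rmse

# Usage (hypothetical arrays): static_errors(sp_top, tt_top_compressed)
```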

5.2. Seismic Shake Table Experiment

5.2.1. Vision-Based Sensor System and Monitoring Setup

The last part of this study assesses the accuracy of image compression in measuring the dynamic vibration and displacements of actual structures. For this purpose, a signature seismic shake table test of a full-scale three-story RC building, shown in Figure 14, is used. The test was performed in December 2019 at the world’s largest shake table facility, E-Defense, located at the National Research Institute for Earth Science and Disaster Resilience (NIED) in Kobe, Japan. The test was part of the Tokyo Metropolitan Resilience Project Subproject C [67], and the authors were involved through a US-Japan collaboration to monitor the test using vision-based systems. The general purpose of the test was to improve the resiliency of buildings and to develop SHM techniques that could rapidly assess the safety and post-disaster functionality of buildings after major earthquake events. More details about the project and the building system can be found in Yeow et al. [100].
To monitor the building vibration and seismic performance, SHM was conducted using two types of vision-based sensor systems. The first system used the same high-speed cameras as in the one-inch block and quasi-static tests, shown as Cam 1 and Cam 2 in Figure 14. The second system used consumer-grade digital cameras set to monochrome mode, shown as Cam A and Cam B in Figure 14. No additional lights were used in the monitoring, so both vision systems relied completely on ambient light sources and the setting adjustments in each camera. It is therefore noted that the captured photogrammetry images, as well as the test images, required image processing to enhance their low levels of brightness and contrast before resampling. Since image enhancement is outside the scope of this study, that process is not described here; only the effects of image compression on the dynamic measurements of the vision systems are discussed. Table 8 provides more details on the two vision sensor systems and their configuration. Both systems used CMOS sensors. However, the digital cameras stored .jpg images, as opposed to the .tiff format of the high-speed cameras, and the input image resolutions of the two systems differed. A lower resolution was set for the digital cameras, even though they could capture images at a maximum of 5184 × 3456 pixels (18 MP), because the seismic test was monitored in video recording mode instead of continuous burst image recording. Therefore, an image resolution of 1920 × 1080 pixels was set for the photogrammetry process to match the 1920 × 1080 full HD video. The sampling rate for the seismic test was 32 fps for the high-speed cameras, which resulted in 7680 images for the 120 s recording duration and a total data storage of 38.4 GB across both cameras. The digital cameras monitored the tests through video recording at full HD quality (1920 × 1080 pixels); after video processing, the total image data storage for that system was 2.3 GB.
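The storage figures above can be reproduced by simple arithmetic; the sketch below uses only the quantities quoted in the text, with the per-frame size derived rather than reported (it lands close to the ~5.2 MB single-image size in Table 1).

```python
# Back-of-the-envelope storage check for the high-speed system.
fps, duration_s, n_cameras = 32, 120, 2
n_frames = fps * duration_s * n_cameras       # 7680 frames in total
total_gb = 38.4
mb_per_frame = total_gb * 1024 / n_frames     # ~5.1 MB per .tiff frame
print(n_frames, round(mb_per_frame, 1))       # 7680 5.1
```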
A total of 50 photogrammetry images was captured by each vision sensor system at the respective input image resolutions listed in Table 8. Since an image compression ratio of 20 could be achieved in the previous experiments, a similar step was attempted using the photogrammetry images for the seismic tests. However, the optimum compression ratio that could be achieved with images from both vision systems was found to be ICR = 5. When a compression ratio of 20 was applied and used in the photogrammetry, the bundle adjustment process failed and could not be completed. In short, the failed bundle adjustment could be attributed to relaxing the target identification threshold to 2.0 pixels, which led to the random recognition of natural features, such as window reflections, in some frames only. Selected photogrammetry results, i.e., the principal point locations, obtained from the original and compressed images of both monitoring systems are listed in Table 9. It is noted that the compressed image output at ICR = 2 and ICR = 5 for the digital vision system is 1440 × 810 pixels and 960 × 540 pixels, respectively. As for the high-speed cameras, the same output image sizes as in the previous tests were used.
An example of the compressed image histograms and the quality index metrics for both sensor systems is shown in Figure 15.
The histogram of the high-speed vision system clearly shows the stretching effect of the image enhancement. A reduction in pixel counts is also observed once compression is applied to the image. The image entropy is higher because of the applied image enhancement process, and PSNR values below 40 dB are computed. The SSIM results indicate that the structural information of the compressed images only partially matches that of the input images; the largest difference is observed at the highest compression ratio, where the computed SSIM is 0.69 for the high-speed vision system.
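For reference, the three indices discussed here can be computed with scikit-image as sketched below; this is not the authors’ code, and it assumes the compressed frame is first resampled back to the original resolution, since PSNR and SSIM require equal-sized inputs.

```python
# Quality-index sketch (assumes scikit-image; inputs are 2-D uint8 arrays).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.measure import shannon_entropy
from skimage.transform import resize

def quality_indices(original, compressed):
    """Return PSNR (dB), SSIM, and Shannon entropy of the compressed frame."""
    up = resize(compressed, original.shape, order=3,   # bicubic upsampling back
                preserve_range=True, anti_aliasing=False)
    up = np.rint(up).astype(np.uint8)
    return (peak_signal_noise_ratio(original, up),
            structural_similarity(original, up),
            shannon_entropy(up))
```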
Figure 15 also shows that a better similarity index is measured for the digital camera system, with SSIM computed to be above 0.90. The entropy and PSNR values are close to those of the high-speed vision system. One noticeable feature of the compressed-image histogram from the digital vision system is the pair of high peaks at the beginning and end of the gray-level bins, shown in a zoomed-in view in Figure 15. The image enhancement renders some regions of the image pure black or pure white, an effect known as clipping. The gray values of these regions fall outside the sensor dynamic range, so they are set to the minimum (0) and maximum (255) and appear as peaks in the histogram bins. These peaks carry no image information, so the associated regions will not expose any features even if an optical target is attached there.
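The clipping just described is straightforward to detect programmatically; a small sketch, assuming an 8-bit enhanced frame, is:

```python
# Fraction of pixels clipped to pure black (0) and pure white (255).
import numpy as np

def clipped_fraction(img_u8):
    hist = np.bincount(img_u8.ravel(), minlength=256)
    n = img_u8.size
    return hist[0] / n, hist[255] / n
```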

5.2.2. Dynamic Accuracy Assessment of Vision Sensor Metric

Low-to-high-amplitude seismic input motions, i.e., ground motion excitations, were applied to the RC building, with white noise excitations applied between the seismic tests. A sample of the processed results, i.e., the displacement history from TT at the target identified in Figure 16, under a low-amplitude seismic excitation (20% scale of a Japanese synthetic ground motion [65]) is shown in Figure 16. The high-speed camera is chosen as the reference sensor, and three types of dynamic assessment metrics for the vision systems are evaluated, as explained next.
For the error assessment notation, the displacement values measured at the selected optical target (Figure 16) using the high-speed cameras with full-resolution images and the digital cameras with full HD resolution are denoted a1 and b1, respectively. Using compressed images, the displacement values measured by the high-speed and digital systems are denoted ai and bi, respectively, with i = 2 or 5 for ICR = 2 or 5. The first assessment is the “relative error” Δrel,CR=i, which defines the accuracy of the compressed digital camera results with respect to the compressed high-speed camera results (see Equation (6) for the maximum relative error calculation). The second assessment is the “absolute error” Δabs,CR=i, which defines the accuracy of the original and compressed digital camera results with respect to the reference high-speed system with full-resolution images. The maximum absolute error is computed from Equation (7), where the case i = 1 refers to the error of the digital camera results relative to the high-speed cameras at full resolution. The last assessment evaluates the effect of the applied compression on each vision system relative to its own full-resolution measurement. The maximum “compression error” is calculated from Equations (8) and (9) for the high-speed and digital camera systems, respectively:
$$\Delta_{\mathrm{rel},\,CR=i}\;(\mathrm{mm}) = \max\big[\,\lvert b_i - a_i \rvert\,\big];\qquad \Delta_{\mathrm{rel},\,CR=i}\;(\%) = \frac{\Delta_{\mathrm{rel},\,CR=i}}{a_i}\times 100;\qquad i = 2,\,5 \tag{6}$$

$$\Delta_{\mathrm{abs},\,CR=i}\;(\mathrm{mm}) = \max\big[\,\lvert b_i - a_1 \rvert\,\big];\qquad \Delta_{\mathrm{abs},\,CR=i}\;(\%) = \frac{\Delta_{\mathrm{abs},\,CR=i}}{a_1}\times 100;\qquad i = 1,\,2,\,5 \tag{7}$$

$$\Delta_{\mathrm{comp},a,\,CR=i}\;(\mathrm{mm}) = \max\big[\,\lvert a_i - a_1 \rvert\,\big];\qquad \Delta_{\mathrm{comp},a,\,CR=i}\;(\%) = \frac{\Delta_{\mathrm{comp},a,\,CR=i}}{a_1}\times 100;\qquad i = 2,\,5 \tag{8}$$

$$\Delta_{\mathrm{comp},b,\,CR=i}\;(\mathrm{mm}) = \max\big[\,\lvert b_i - b_1 \rvert\,\big];\qquad \Delta_{\mathrm{comp},b,\,CR=i}\;(\%) = \frac{\Delta_{\mathrm{comp},b,\,CR=i}}{b_1}\times 100;\qquad i = 2,\,5 \tag{9}$$
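A direct transcription of Equations (6) through (9) is sketched below, assuming a1, a2, a5 and b1, b2, b5 are synchronized displacement histories in mm as defined above; normalizing each percentage by the peak of its reference series is an interpretation consistent with the values reported in Tables 10 and 11.

```python
# One helper covers Equations (6)-(9); the series names follow the text.
import numpy as np

def max_error(test, ref):
    """Return max |test - ref| in mm and as a percentage of the reference peak."""
    test, ref = np.asarray(test, float), np.asarray(ref, float)
    d_mm = np.max(np.abs(test - ref))
    return d_mm, 100.0 * d_mm / np.max(np.abs(ref))

# rel_mm, rel_pct = max_error(b2, a2)   # Eq. (6), ICR = 2
# abs_mm, abs_pct = max_error(b2, a1)   # Eq. (7), ICR = 2
# ca_mm,  ca_pct  = max_error(a2, a1)   # Eq. (8), ICR = 2
# cb_mm,  cb_pct  = max_error(b2, b1)   # Eq. (9), ICR = 2
```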
Table 10 summarizes the peak displacement values measured from the original and compressed images of both monitoring systems. The peak displacement measured from the original full-resolution high-speed and digital systems was 618.88 mm and 615.37 mm, respectively. Using compressed image data, the peak displacement was slightly lower, as reported in Table 10. For a proper comparative assessment, the different errors defined above in Equations (6) through (9) are calculated and summarized in Table 11. First, before exploring the image compression effects, the baseline accuracy of the digital system is assessed using Equation (7) (for i = 1), with the high-speed vision system as the reference sensor. The maximum absolute error of the digital system relative to the high-speed system is 6.7 mm (1.08%), which shows that the consumer-grade and high-end high-speed sensor systems are comparable.
The relative error (Equation (6)) of the digital system is reported in Table 11 as 12.73 mm (2.09%) and 18.64 mm (3.15%) for image compression ratios ICR = 2 and ICR = 5, respectively. The absolute error of the digital system is also computed for each compression ratio with respect to the high-speed vision system with original data (full-resolution images). As inferred from Table 11, the accuracy of the digital system declines, i.e., a higher error is observed, at ICR = 5, where the maximum absolute error is 45.91 mm (7.42%). This absolute error is roughly two and a half times the corresponding relative error of 18.64 mm (3.15%). The last dynamic accuracy metric concerns the effect of image compression relative to the original images for each sensor system separately, i.e., an assessment of each system before and after image compression as per Equations (8) and (9). These error values are also reported in Table 11. For the high-speed system, the compression error in measuring dynamic response is 8.51 mm (1.38%) and 27.26 mm (4.41%) for ICR = 2 and ICR = 5, respectively. Comparable values are observed for the digital vision system: 14.62 mm (2.38%) and 26.96 mm (4.38%) for ICR = 2 and ICR = 5, respectively.
In summary, the results in Table 11 show that it is possible to measure the dynamic behavior of actual vibrating structures and other large-scale systems using compressed images from either high-end high-speed cameras or consumer-grade digital systems. Additionally, the computationally efficient bicubic interpolation is demonstrated to be feasible and effective in compressing digital images for both vision systems.

6. Conclusions

This study aimed at implementing two general methods of image compression, namely non-adaptive linear interpolation and wavelet transform, with several algorithms used for each. Successful implementation and demonstration of image compression for vision-based sensors is desired for the feasible storage, data transmission, and processing needed for future real-time or near real-time SHM. The study used several carefully designed experimental tests to quantify the effects of image compression on the accuracy of vision-based monitoring and measurements. While SHM can utilize both acceleration and displacement measurements, this study focused only on displacements, since displacement is the hardest quantity to capture with high accuracy using vision sensors, and accurate displacements lead to correct acceleration values, as established in the literature. The main conclusions and key findings of this study are as follows:
  • Image compression is a technique that can efficiently reduce the storage needs of the data collected from vision-based sensors, but it still needs to be adopted more widely in infrastructure and large-scale SHM applications. Non-adaptive linear interpolation is shown to be more practical in application than wavelet compression, as it does not require any specification of image size. It is therefore suitable for the type of displacement measurements and SHM demonstrated in this study and can be applied directly without any additional processing. Wavelet compression is also shown to be adequate, but it requires the image dimensions to be powers of two, which might require some scaling of the original images before the compression algorithm is applied.
  • Image compression affects the quality of the output images mainly through the total pixel counts, which is known as the quantization effect. This study quantified that effect and its implications for the first time. The compressed image quality was assessed using several metrics, and the results confirmed very adequate reconstruction quality with low noise. Although the output images of some compression methods or algorithms display spatial artifacts, they can still be used successfully for close-range photogrammetry and SHM purposes.
  • Image compression affects the photogrammetry results, i.e., the process of determining the internal and external parameters of the camera sets, especially the identified spatial locations of the principal points and optical targets in the compressed image plane. A clear cluster of 2D translations of the optical targets is observed in the compressed image plane. However, these translations were found to equal the inverse of the applied scaling, with no adverse effect of failing the photogrammetry process for compression ratios up to 20.
  • The high accuracy of static or quasi-static/continuous displacement measurement from compressed images of high-end vision systems is inferred from absolute errors of less than 1% for compression ratios up to 20. The accuracy was shown to drop slightly when assessed in full-scale dynamic response monitoring and realistic SHM. Specifically, the static accuracy from the quasi-static continuous test was within 0.56% error, while the dynamic test showed a maximum error of 4.4% when using compressed versus uncompressed images. These values, determined for the first time in this study, are considered adequate for SHM purposes.
  • Overall, it is concluded that image compression does not have significant adverse effects or implications on close-range photogrammetry and SHM accuracy. The bicubic non-adaptive linear interpolation is shown to be computationally effective and is recommended for vision-based SHM image compression applications. However, future users are cautioned that measurement accuracy seems to deteriorate slightly when very large compression ratios (e.g., more than 20) are desired or when consumer-grade DSLR cameras are used, as opposed to high-end, dedicated high-speed systems, for close-range photogrammetry or SHM purposes. With due care, successful monitoring using highly compressed images is still possible for a wide range of original digital image sizes and vision system hardware with proper selection of the image compression algorithm.

Author Contributions

L.N.: Methodology, Formal analysis, Writing—original draft, Visualization. M.A.M.: Conceptualization, Writing—review & editing, Supervision, Project administration, Funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

The experimental test using the one-inch block was funded by Los Alamos National Laboratory under the column capital test program, and the large-scale seismic shake table test was funded by the National Science Foundation (NSF) RAPID/Collaborative Research Award: Japan-US Collaboration in the Seismic Performance of Reinforced Concrete Structures (Federal Award ID Number: 2000560).

Acknowledgments

We would like to thank Sherif Elfass for his valuable insight on the validation tests.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Ngeljaratan, L.; Moustafa, M.A. Digital Image Correlation for dynamic shake table test measurements. In Proceedings of the 7th International Conference on Advances in Experimental Structural Engineering (7AESE), Pavia, Italy, 6–8 September 2017. [Google Scholar]
  2. Ngeljaratan, L.; Moustafa, M.A. Novel Digital Image Correlation Instrumentation for Large-Scale Shake Table Tests. In Proceedings of the 11th NCEE, Los Angeles, CA, USA, 25–29 June 2018. [Google Scholar]
  3. Ngeljaratan, L.; Moustafa, M.A. System Identification of Large-Scale Bridge Model using Digital Image Correlation from Monochrome and Color Cameras. Struct. Health Monit. 2019. [Google Scholar] [CrossRef]
  4. Ngeljaratan, L.; Moustafa, M.A. System Identification of Large-Scale Bridges Using Target-Tracking Digital Image Correlation. Front. Built Environ. 2019, 5, 85. [Google Scholar] [CrossRef] [Green Version]
  5. Ngeljaratan, L.; Moustafa, M.A. Structural health monitoring and seismic response assessment of bridge structures using target-tracking digital image correlation. Eng. Struct. 2020, 213, 110551. [Google Scholar] [CrossRef]
  6. Feng, D.; Feng, M.Q. Vision-based multipoint displacement measurement for structural health monitoring. Struct. Control. Health Monit. 2016, 23, 876–890. [Google Scholar] [CrossRef]
  7. Feng, D.; Feng, M.Q.; Ozer, E.; Fukuda, Y. A Vision-Based Sensor for Noncontact Structural Displacement Measurement. Sensors 2015, 15, 16557–16575. [Google Scholar] [CrossRef]
  8. Feng, M.Q.; Fukuda, Y.; Feng, D.; Mizuta, M. Nontarget Vision Sensor for Remote Measurement of Bridge Dynamic Response. J. Bridg. Eng. 2015, 20, 04015023. [Google Scholar] [CrossRef]
  9. Xu, Y.; Brownjohn, J.; Kong, D. A non-contact vision-based system for multipoint displacement monitoring in a cable-stayed footbridge. Struct. Control. Health Monit. 2018, 25, e2155. [Google Scholar] [CrossRef] [Green Version]
  10. Yoon, H.; Elanwar, H.; Choi, H.; Golparvar-Fard, M.; Spencer, B.F. Target-free approach for vision-based structural system identification using consumer-grade cameras. Struct. Control. Health Monit. 2016, 23, 1405–1416. [Google Scholar] [CrossRef]
  11. Acikgoz, S.; DeJong, M.J.; Soga, K. Sensing dynamic displacements in masonry rail bridges using 2D digital image correlation. Struct. Control. Health Monit. 2018, 25, e2187. [Google Scholar] [CrossRef] [Green Version]
  12. Feng, D.; Feng, M.Q. Experimental validation of cost-effective vision-based structural health monitoring. Mech. Syst. Signal Process. 2017, 88, 199–211. [Google Scholar] [CrossRef]
  13. Khuc, T.; Catbas, F.N. Completely contactless structural health monitoring of real-life structures using cameras and computer vision. Struct. Control. Health Monit. 2016, 24, e1852. [Google Scholar] [CrossRef]
  14. Kim, S.-W.; Jeon, B.-G.; Kim, N.-S.; Park, J.-C. Vision-based monitoring system for evaluating cable tensile forces on a cable-stayed bridge. Struct. Health Monit. 2013, 12, 440–456. [Google Scholar] [CrossRef]
  15. Kohut, P.; Holak, K.; Uhl, T.; Ortyl, Ł.; Owerko, T.; Kuras, P.; Kocierz, R. Monitoring of a civil structure’s state based on noncontact measurements. Struct. Health Monit. 2013, 12, 411–429. [Google Scholar] [CrossRef]
  16. Fraser, C.S.; Shortis, M. Metric Exploitation of still Video Imagery. Photogramm. Rec. 1995, 15, 107–122. [Google Scholar] [CrossRef]
  17. O’Connor, S.M.; Lynch, J.P.; Gilbert, A.C. Compressed sensing embedded in an operational wireless sensor network to achieve energy efficiency in long-term monitoring applications. Smart Mater. Struct. 2014, 23, 85014. [Google Scholar] [CrossRef]
  18. Meyer, J.; Bischoff, R.; Feltrin, G.; Motavalli, M. Wireless sensor networks for long-term structural health monitoring. Smart Struct. Syst. 2010, 6, 263–275. [Google Scholar] [CrossRef] [Green Version]
  19. Bao, Y.; Li, H.; Sun, X.; Yu, Y.; Ou, J. Compressive sampling–based data loss recovery for wireless sensor networks used in civil structural health monitoring. Struct. Health Monit. 2012, 12, 78–95. [Google Scholar] [CrossRef]
  20. Huang, Y.; Beck, J.L.; Wu, S.; Li, H. Bayesian compressive sensing for approximately sparse signals and application to structural health monitoring signals for data loss recovery. Probabilistic Eng. Mech. 2016, 46, 62–79. [Google Scholar] [CrossRef] [Green Version]
  21. Baraniuk, R.G. A lecture on compressive sensing. J. IEEE Signal Process. Mag. 2007, 24, 118–121. [Google Scholar] [CrossRef]
  22. Mishra, I.; Jain, S. Soft computing based compressive sensing techniques in signal processing: A comprehensive review. J. Intell. Syst. 2020, 30, 312–326. [Google Scholar] [CrossRef]
  23. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling: A sensing/sampling paradigm that goes against the common knowledge in data acquisition. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar]
  24. Zhang, Y.; Li, J. Wavelet-Based Vibration Sensor Data Compression Technique for Civil Infrastructure Condition Monitoring. J. Comput. Civ. Eng. 2006, 20, 390–399. [Google Scholar] [CrossRef]
  25. Perez-Ramirez, C.A.; Valtierra-Rodriguez, M.; Moreno-Gomez, A.; Gonzalez, A.D.; Osornio-Rios, R.A.; Sanchez, J.P.A.; Romero-Troncoso, R.D.J. Wavelet-based vibration data compression technique for natural frequencies identification of civil infrastructure. In Proceedings of the 2017 IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC), Ixtapa, Mexico, 8–10 November 2017. [Google Scholar] [CrossRef]
  26. Wiseman, Y. Enhancement of JPEG Compression for GPS Images. Int. J. Multimed. Ubiquitous Eng. 2015, 10, 255–264. [Google Scholar] [CrossRef]
  27. Ameer, I.M.A.; Abduljabbar, H.M. The Effect of Re-Use of Lossy JPEG Compression Algorithm on the Quality of Satellite Image. NeuroQuantology 2020, 18, 17–25. [Google Scholar] [CrossRef]
  28. Gashnikov, M.V.; Glumov, N.I.; Kuznetsov, A.V.; Mitekin, V.A.; Myasnikov, V.V.; Sergeev, V.V. Hyperspectral remote sensing data compression and protection. Comput. Opt. 2016, 40, 689–712. [Google Scholar] [CrossRef]
  29. Dua, Y.; Kumar, V.; Singh, R.S. Comprehensive review of hyperspectral image compression algorithms. Opt. Eng. 2020, 59, 090902. [Google Scholar] [CrossRef]
  30. Ibn Afjal, M.; Mamun, A.; Uddin, P. Band reordering heuristics for lossless satellite image compression with 3D-CALIC and CCSDS. J. Vis. Commun. Image Represent. 2019, 59, 514–526. [Google Scholar] [CrossRef]
  31. Indradjad, A.; Nasution, A.S.; Gunawan, H.; Widipaminto, A. A Comparison of Satellite Image Compression Methods in the Wavelet Domain; IOP Publishing: Bristol, UK, 2019; Volume 280, p. 012031. [Google Scholar]
  32. Parikh, S.; Ruiz, D.; Kalva, H.; Fernandez-Escribano, G.; Adzic, V. High Bit-Depth Medical Image Compression with HEVC. IEEE J. Biomed. Health Inform. 2018, 22, 552–560. [Google Scholar] [CrossRef]
  33. Mofreh, A.; Barakat, T.M.; Refaat, A.M. A new lossless medical image compression technique using hybrid prediction model. Signal Process. Int. J. (SPIJ) 2016, 10, 20–30. [Google Scholar]
  34. Juliet, S.; Rajsingh, E.B.; Ezra, K. A novel medical image compression using Ripplet transform. J. Real-Time Image Process. 2013, 11, 401–412. [Google Scholar] [CrossRef]
  35. Sathiyanathan, N. Medical Image Compression Using View Compensated Wavelet Transform. J. Global Res. Comput. Sci. 2018, 9, 1–4. [Google Scholar]
  36. Lam, K.W.; Li, Z.; Yuan, X. Effects of Jpeg Compression on the Accuracy of Digital Terrain Models Automatically Derived from Digital Aerial Images. Photogramm. Rec. 2001, 17, 331–342. [Google Scholar] [CrossRef]
  37. Alfio, V.S.; Costantino, D.; Pepe, M. Influence of Image TIFF Format and JPEG Compression Level in the Accuracy of the 3D Model and Quality of the Orthophoto in UAV Photogrammetry. J. Imaging 2020, 6, 30. [Google Scholar] [CrossRef]
  38. Zhilin, L.; Xiuxia, Y.; Lam, K.W.K. Effects of JPEG compression on the accuracy of photogrammetric point determination. Photogramm. Eng. Remote Sens. 2002, 68, 847–853. [Google Scholar]
  39. Re, C.; Simioni, E.; Cremonese, G.; Roncella, R.; Forlani, G.; Langevin, Y.; Da Deppo, V.; Naletto, G.; Salemi, G. Effects of image compression and illumination on digital terrain models for the stereo camera of the BepiColombo mission. Planet. Space Sci. 2017, 136, 1–14. [Google Scholar] [CrossRef]
  40. Feng, C.; Yu, D.; Liang, Y.; Guo, D.; Wang, Q.; Cui, X. Assessment of Influence of Image Processing on Fully Automatic UAV Photogrammetry. ISPRS-Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2019, 269–275. [Google Scholar] [CrossRef] [Green Version]
  41. Marčiš, M.; Fraštia, M. Influence of image compression on image and reference point accuracy in photogrammetric measurement. In Advances and Trends in Geodesy, Cartography and Geoinformatics, Proceedings of the 10th International Scientific and Professional Conference on Geodesy, Cartography and Geoinformatics (GCG 2017), Demänovská Dolina, Low Tatras, Slovakia 10–13 October 2017; CRC Press: Boca Raton, FL, USA, 2018; pp. 77–82. [Google Scholar]
  42. Motta, G.; Rizzo, F.; Storer, J.A. Hyperspectral Data Compression; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  43. Conoscenti, M.; Coppola, R.; Magli, E. Constant SNR, Rate Control, and Entropy Coding for Predictive Lossy Hyperspectral Image Compression. IEEE Trans. Geosci. Remote. Sens. 2016, 54, 7431–7441. [Google Scholar] [CrossRef]
  44. Dusselaar, R.; Paul, M. Hyperspectral image compression approaches: Opportunities, challenges, and future directions: Discussion. J. Opt. Soc. Am. A 2017, 34, 2170–2180. [Google Scholar] [CrossRef]
  45. Guerra, R.; Barrios, Y.; Díaz, M.; Santos, L.; Lopez, S.; Sarmiento, R. A New Algorithm for the On-Board Compression of Hyperspectral Images. Remote Sens. 2018, 10, 428. [Google Scholar] [CrossRef] [Green Version]
  46. Chang, C.-I. Real-Time Progressive Hyperspectral Image Processing; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  47. Fu, C.; Yi, Y.; Luo, F. Hyperspectral image compression based on simultaneous sparse representation and general-pixels. Pattern Recognit. Lett. 2018, 116, 65–71. [Google Scholar] [CrossRef]
  48. Christopoulos, C.; Skodras, A.; Ebrahimi, T. The JPEG2000 still image coding system: An overview. IEEE Trans. Consum. Electron. 2000, 46, 1103–1127. [Google Scholar] [CrossRef] [Green Version]
  49. Hernández-Cabronero, M.; Marcellin, M.W.; Blanes, I.; Serra-Sagrista, J. Lossless Compression of Color Filter Array Mosaic Images with Visualization via JPEG 2000. IEEE Trans. Multimed. 2018, 20, 257–270. [Google Scholar] [CrossRef] [Green Version]
  50. Taubman, D.S.; Rabbani, M.; Marcellin, M.W. JPEG2000: Image Compression Fundamentals, Standards and Practice. J. Electron. Imaging 2002, 11, 286–287. [Google Scholar] [CrossRef]
  51. Marlapalli, K.; Bandlamudi, R.S.B.P.; Busi, R.; Pranav, V.; Madhavrao, B. A Review on Image Compression Techniques; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2020; pp. 271–279. [Google Scholar]
  52. Gungor, M.A.; Gencol, K. Developing a compression procedure based on the wavelet denoising and JPEG2000 compression. Optik 2020, 218, 164933. [Google Scholar] [CrossRef]
  53. Fischer, C.E.; Müller, D.; De Moortel, I. JPEG2000 Image Compression on Solar EUV Images. Sol. Phys. 2016, 292, 16. [Google Scholar] [CrossRef] [Green Version]
  54. Radosavljević, M.; Brkljač, B.; Lugonja, P.; Ćrnojevic, V.; Trpovski, Ž.; Xiong, Z.; Vukobratović, D. Lossy Compression of Multispectral Satellite Images with Application to Crop Thematic Mapping: A HEVC Comparative Study. Remote Sens. 2020, 12, 1590. [Google Scholar] [CrossRef]
  55. Caba, J.; Díaz, M.; Barba, J.; Guerra, R.; López, J.A.D.L.T.A.S. FPGA-Based On-Board Hyperspectral Imaging Compression: Benchmarking Performance and Energy Efficiency against GPU Implementations. Remote Sens. 2020, 12, 3741. [Google Scholar] [CrossRef]
  56. Báscones, D.; González, C.; Mozos, D. Hyperspectral image compression using vector quantization, PCA and JPEG2000. Remote Sens. 2018, 10, 907. [Google Scholar] [CrossRef] [Green Version]
  57. Báscones, D.; González, C.; Mozos, D. An FPGA Accelerator for Real-Time Lossy Compression of Hyperspectral Images. Remote Sens. 2020, 12, 2563. [Google Scholar] [CrossRef]
  58. Du, Q.; Fowler, J.E. Hyperspectral Image Compression Using JPEG2000 and Principal Component Analysis. IEEE Geosci. Remote. Sens. Lett. 2007, 4, 201–205. [Google Scholar] [CrossRef]
  59. Machidon, A.L.; Del Frate, F.; Picchiani, M.; Machidon, O.M.; Ogrutan, P.L. Geometrical Approximated Principal Component Analysis for Hyperspectral Image Analysis. Remote Sens. 2020, 12, 1698. [Google Scholar] [CrossRef]
  60. Fernandez, D.; Gonzalez, C.; Mozos, D.; Lopez, S. FPGA implementation of the principal component analysis algorithm for dimensionality reduction of hyperspectral images. J. Real-Time Image Process. 2019, 16, 1395–1406. [Google Scholar] [CrossRef]
  61. Mei, S.; Khan, B.M.; Zhang, Y.; Du, Q. Low-Complexity Hyperspectral Image Compression Using Folded PCA and JPEG2000. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 4756–4759. [Google Scholar]
  62. Ciznicki, M.; Kurowski, K.; Plaza, A.J. Graphics processing unit implementation of JPEG2000 for hyperspectral image compression. J. Appl. Remote Sens. 2012, 6, 061507. [Google Scholar]
  63. Jeyakumar, S.; Sudha, S. Hybrid hyperspectral image compression technique for non-iterative factorized tensor decomposition and principal component analysis: Application for NASA’s AVIRIS data. Comput. Geosci. 2019, 23, 969–979. [Google Scholar] [CrossRef]
  64. Licciardi, G.; Chanussot, J. Spectral transformation based on nonlinear principal component analysis for dimensionality reduction of hyperspectral images. Eur. J. Remote. Sens. 2018, 51, 375–390. [Google Scholar] [CrossRef] [Green Version]
  65. Dony, R.D.; Haykin, S. Neural network approaches to image compression. Proc. IEEE 1995, 83, 288–303. [Google Scholar] [CrossRef] [Green Version]
  66. Jiang, J. Image compression with neural networks—A survey. Signal Process. Image Commun. 1999, 14, 737–760. [Google Scholar] [CrossRef]
  67. Ma, S.; Zhang, X.; Jia, C.; Zhao, Z.; Wang, S.; Wang, S. Image and Video Compression with Neural Networks: A Review. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 1683–1698. [Google Scholar] [CrossRef] [Green Version]
  68. Yang, Y.; Nagarajaiah, S. Robust data transmission and recovery of images by compressed sensing for structural health diagnosis. Struct. Control. Health Monit. 2016, 24, e1856. [Google Scholar] [CrossRef]
  69. Huang, Y.; Zhang, H.; Li, H.; Wu, S. Recovering compressed images for automatic crack segmentation using generative models. Mech. Syst. Signal Process. 2020, 146, 107061. [Google Scholar] [CrossRef]
  70. Xu, Y.; Wei, S.; Bao, Y.; Li, H. Automatic seismic damage identification of reinforced concrete columns from images by a region-based deep convolutional neural network. Struct. Control. Health Monit. 2019, 26, e2313. [Google Scholar] [CrossRef]
  71. Su, T.-C. Application of Computer Vision to Crack Detection of Concrete Structure. Int. J. Eng. Technol. 2013, 5, 457–461. [Google Scholar] [CrossRef] [Green Version]
  72. Li, H.; Li, S.; Zhang, D.; Jin, Y.; Zhang, F.; Li, N.; Li, H. Identification framework for cracks on a steel structure surface by a restricted Boltzmann machines algorithm based on consumer-grade camera images. Struct. Control. Health Monit. 2018, 25, e2075. [Google Scholar] [CrossRef]
  73. Ri, S.; Tsuda, H.; Chang, K.; Hsu, S.; Lo, F.; Lee, T. Dynamic Deformation Measurement by the Sampling Moiré Method from Video Recording and its Application to Bridge Engineering. Exp. Tech. 2020, 44, 313–327. [Google Scholar] [CrossRef] [Green Version]
  74. Chang, Y.-H.; Wang, W.; Chang, J.-Y.; Mottershead, J.E. Compressed sensing for OMA using full-field vibration images. Mech. Syst. Signal Process. 2019, 129, 394–406. [Google Scholar] [CrossRef]
  75. Akcay, O.; Erenoğlu, R.C.; Avşar, E.Ö. The Effect of JPEG Compression in Close Range Photogrammetry. Int. J. Eng. Geosci. 2017, 2, 35–40. [Google Scholar] [CrossRef] [Green Version]
  76. Schneider, C.T. 3-D Vermessung von Oberflächen und Bauteilen durch Photogrammetrie und Bildverarbeitung. In Proceedings of the IDENT/VISION’91, Stuttgart, Germany, 14–17 May 1991. [Google Scholar]
  77. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef] [Green Version]
  78. Heikkila, J.; Silven, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997; pp. 1106–1112. [Google Scholar]
  79. Rüther, H. An overview of software in non-topographic photogrammetry. In Non-Topographic Photogrammetry, 2nd ed.; Karara, H.M., Adams, L.P., Eds.; American Society for Photogrammetry and Remote Sensing: Bethesda, MD, USA, 1989; Volume 10, pp. 129–145. [Google Scholar]
  80. Fraser, C.S. Digital camera self-calibration. ISPRS J. Photogramm. Remote. Sens. 1997, 52, 149–159. [Google Scholar] [CrossRef]
  81. Gruen, A.; Beyer, H.A. System Calibration through Self-Calibration; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2001; Volume 34, pp. 163–193. [Google Scholar]
  82. Miller, E. How Not to Sort by Average Rating [EB/OL]. 6 February 2009. Available online: https://www.evanmiller.org/how-not-to-sort-by-average-rating.html (accessed on 5 October 2020).
  83. Yoon, H.; Shin, J.; Spencer, B.F. Structural Displacement Measurement Using an Unmanned Aerial System. Comput. Civ. Infrastruct. Eng. 2018, 33, 183–192. [Google Scholar] [CrossRef]
  84. Wang, X.; Wittich, C.E.; Hutchinson, T.; Bock, Y.; Goldberg, D.; Lo, E.; Kuester, F. Methodology and Validation of UAV-Based Video Analysis Approach for Tracking Earthquake-Induced Building Displacements. J. Comput. Civ. Eng. 2020, 34, 04020045. [Google Scholar] [CrossRef]
  85. Duane, C.B. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
  86. Stephen, G.; Brownjohn, J.; Taylor, C. Measurements of static and dynamic displacement from visual monitoring of the Humber Bridge. Eng. Struct. 1993, 15, 197–208. [Google Scholar] [CrossRef] [Green Version]
  87. Mahajan, S.H.; Harpale, V.K. Adaptive and Non-adaptive Image Interpolation Techniques. In Proceedings of the 2015 International Conference on Computing Communication Control and Automation, Pune, India, 26–27 February 2015; pp. 772–775. [Google Scholar]
  88. Parker, J.A.; Kenyon, R.V.; Troxel, D.E. Comparison of Interpolating Methods for Image Resampling. IEEE Trans. Med Imaging 1983, 2, 31–39. [Google Scholar] [CrossRef] [PubMed]
  89. Lewis, A.; Knowles, G. Image compression using the 2-D wavelet transform. IEEE Trans. Image Process. 1992, 1, 244–250. [Google Scholar] [CrossRef] [PubMed]
  90. Knowles, G. VLSI architecture for the discrete wavelet transform. Electron. Lett. 1990, 26, 1184. [Google Scholar] [CrossRef]
  91. Lewis, A.; Knowles, G. VLSI architecture for 2-D Daubechies wavelet transform without multipliers. Electron. Lett. 1991, 27, 171. [Google Scholar] [CrossRef]
  92. Devaraj, S.J. Emerging Paradigms in Transform-Based Medical Image Compression for Telemedicine Environment; Elsevier BV: Amsterdam, The Netherlands, 2019; pp. 15–29. [Google Scholar]
  93. Getreuer, P. Linear Methods for Image Interpolation. Image Process. Line 2011, 1, 238–259. [Google Scholar] [CrossRef] [Green Version]
  94. Shapiro, J.M. Embedded image coding using zerotrees of wavelet coefficients. IEEE Trans. Signal Process. 1993, 41, 3445–3462. [Google Scholar] [CrossRef]
  95. Said, A.; Pearlman, W. A new, fast, and efficient image codec based on set partitioning in hierarchical trees. IEEE Trans. Circuits Syst. Video Technol. 1996, 6, 243–250. [Google Scholar] [CrossRef]
  96. Islam, A.; Pearlman, W.M. Embedded and efficient low-complexity hierarchical image coder. Proc. SPIE 1998, 3653, 294–305. [Google Scholar] [CrossRef]
  97. Walker, J.S. Lossy image codec based on adaptively scanned wavelet difference reduction. Opt. Eng. 2000, 39, 1891. [Google Scholar] [CrossRef]
  98. Tian, J.; Wells, J.R.O. A lossy image codec based on index coding. In Proceedings of the Data Compression Conference—DCC ’96, Snowbird, UT, USA, 31 March–3 April 1996. [Google Scholar]
  99. TRITOP, Optical 3D Coordinate Measuring Machine, GOM. Available online: https://trilion.com/wp-content/uploads/TRITOP-Coordinate-Measuring-System.pdf (accessed on 5 October 2020).
  100. Yeow, T.Z.; Kusunoki, K.; Nakamura, I.; Hibino, Y.; Ohkubo, T.; Seike, T.; Yagi, S.; Mukai, T.; Calvi, P.; Moustafa, M.; et al. The 2019 Tokyo Metropolitan Resilience Project E-Defense Test of a 3-Story Disaster Management Center. In Proceedings of the 17th World Conference on Earthquake Engineering, Sendai, Japan, 14–18 September 2020. [Google Scholar]
Figure 1. SHM framework and various components of vision-based sensor system for laboratory and field deployment using optical targets.
Figure 2. Example of photogrammetry images with typical output; monochrome images captured from a three-story building seismic test monitored using high-speed and digital cameras, and respective photogrammetry results for 3D positions and rays of optical targets in object coordinates.
Figure 3. Proposed framework for digital image data compression.
Figure 4. Test setup for validation of the proposed image compression method using a standard 1-in (25.4-mm) block inserted into a sliding verification bar.
Figure 5. Sample histogram from original-size (full-resolution) images from both cameras: (a) reference image R , and (b) search image S; (c) entropy of 76 photogrammetry images; and (d) local area x × y = 72 × 305 pixels of three optical targets, all from the one-inch block experiment.
Figure 6. Compressed images from non-adaptive linear interpolation using Bicubic algorithm and wavelet transform using Stw compression for three compression ratios ( I C R = 2, 5, and 20) with histograms of the local area of the monitored optical targets from the 1-inch block experiment.
Figure 7. Comparison of a high quality input image (original size) against compressed versions at I C R = 20 obtained using different non-adaptive linear interpolation and wavelet compression (images enlarged by 400% with +20% and −40% adjustment of brightness and contrast for better visualization).
Figure 8. Compressed image histograms from reference, R, and search, S, images using bicubic interpolation and spatial orientation tree wavelet (STW) resampling methods for ICR = 2, 5, and 20, and quality index metrics (the gray intensity value range is limited to 0 to 100 for better visualization).
Figure 9. Evaluation of bicubic and STW compression quality of 76 images (38 each for reference and search images) used in the photogrammetry step.
Figure 10. Principal points u 0 , v 0 location in the input image plane and their translation in the compressed image plane for different compression methods and ratios. The examples are shown for the bicubic interpolation and STW compression methods.
Figure 11. Points 1, 2, and 3 location in the input image plane x , y and their translation in the compressed image plane,   Δ x , y for different compression methods and ratios. The examples are shown for the bicubic interpolation and STW compression methods.
Figure 12. Quasi-static test setup used for experimental validation of image compression along with the vision-based sensor components and optical targets details.
Figure 13. Comparison of measured displacement from SP and target-tracking (TT) using original and compressed images and the relative error between different measurements from quasi-static test.
Figure 14. E-Defense full-scale RC building test and monitoring setup using two vision-based sensor systems. The optical target enclosed in a white circle is used in this study to obtain seismic displacement and relative measurement accuracy using original and compressed images from both sensor systems.
Figure 15. Image quality assessment of compressed images using bicubic interpolation as measured by the two vision-based systems, i.e., the high-speed vision system and the digital camera system.
Figure 16. Displacement history from low-intensity seismic test as obtained from original and compressed images from two different vision-based systems (high-speed and digital cameras).
Table 1. Vision-based system configuration used for one-inch block verification tests.
Camera Type | High-Speed
Color Mode | Monochrome
Data size (single image, test images) | 5.2 MB, 30 MB
Image Format | .tiff
Record duration (sec) | N/A (snapshots)
Resolution, w × h | 2560 × 2048
f (mm) | 35
Table 2. Photogrammetry results using 76 images with original size of 2560 × 2048 pixels.
Internal Parameters | u0, v0 (pix) | A1 | A2 | A3 | B1 | B2
R | −24.17, −7.30 | −4.6 × 10^−9 | −1.3 × 10^−16 | −7.9 × 10^−23 | −8.3 × 10^−8 | −1.0 × 10^−7
S | 13.44, −10.21 | −3.8 × 10^−9 | −7.6 × 10^−16 | 5.8 × 10^−23 | −8.6 × 10^−9 | −1.1 × 10^−7
Table 3. Principal point translation ratios (calculated as shown in Figure 10) in horizontal Δ u 0 and vertical Δ v 0 directions for different image compression methods and compression ratios.
Method | ICR = 2, R (Δu0 / Δv0) | ICR = 2, S (Δu0 / Δv0) | ICR = 5, R (Δu0 / Δv0) | ICR = 5, S (Δu0 / Δv0) | ICR = 20, R (Δu0 / Δv0) | ICR = 20, S (Δu0 / Δv0)
Bicubic | 1.34 / 1.31 | 1.33 / 1.32 | 2.03 / 1.93 | 1.96 / 1.93 | 4.12 / 3.82 | 3.92 / 3.93
Bilinear | 1.34 / 1.32 | 1.33 / 1.34 | 2.01 / 1.85 | 1.95 / 1.95 | 4.06 / 3.56 | 4.03 / 3.69
Box Kernel | 1.34 / 1.30 | 1.33 / 1.32 | 2.02 / 1.96 | 1.96 / 1.94 | 3.57 / 2.49 | 5.81 / 2.48
Lanczos 2 | 1.34 / 1.32 | 1.33 / 1.33 | 2.04 / 1.32 | 1.98 / 1.38 | 3.59 / 2.39 | 5.36 / 2.57
Lanczos 3 | 1.33 / 1.31 | 1.35 / 1.32 | 1.97 / 1.30 | 2.11 / 1.36 | 4.06 / 3.90 | 4.01 / 3.93
ASWDR | 1.00 / 0.69 | 0.97 / 0.71 | 1.91 / 1.19 | 2.27 / 1.19 | 3.14 / 2.13 | 7.98 / 2.77
EZW | 1.02 / 0.69 | 0.95 / 0.71 | 1.82 / 1.16 | 2.43 / 1.21 | 3.23 / 2.07 | 6.95 / 2.77
LvlMmc | 0.91 / 0.67 | 1.18 / 0.66 | 1.88 / 1.17 | 2.51 / 1.24 | 3.23 / 2.07 | 6.95 / 2.77
SPIHT | 0.99 / 0.69 | 1.00 / 0.71 | 1.82 / 1.16 | 2.43 / 1.21 | 3.23 / 2.07 | 6.95 / 2.77
STW | 1.03 / 0.71 | 0.93 / 0.73 | 1.84 / 1.32 | 2.26 / 1.35 | 3.82 / 2.43 | 4.79 / 2.64
WDR | 1.00 / 0.69 | 0.97 / 0.71 | 1.84 / 1.16 | 2.33 / 1.22 | 3.82 / 2.43 | 4.79 / 2.64
Table 4. Translation of points 1, 2, and 3 locations in the compressed image plane in horizontal Δ x and vertical Δ y directions for different compression methods and ratios.
Method | ICR = 2, R (Δx / Δy) | ICR = 2, S (Δx / Δy) | ICR = 5, R (Δx / Δy) | ICR = 5, S (Δx / Δy) | ICR = 20, R (Δx / Δy) | ICR = 20, S (Δx / Δy)
Linear interpolation | 1.30 / 1.30 | 1.30 / 1.30 | 2.00 / 2.00 | 2.00 / 2.00 | 4.00 / 4.00 | 4.00 / 4.00
Wavelet transforms | 1.25 / 1.00 | 1.25 / 1.00 | 2.50 / 2.00 | 2.50 / 2.00 | 5.00 / 4.00 | 5.00 / 4.00
Table 5. Summary of the absolute displacement values (mm) from compressed images along with the average displacement accuracy, Δ a v g ( % ) , with respect to results from uncompressed images and absolute displacement accuracy, Δ a b s (%), with respect to the one inch (25.4 mm) block value.
ICR | Point | (A) | (B) | (C) | (D) | (E) | (F) | (G) | (H) | (I) | (J) | (K)
2 | 1 | 25.49 | 25.48 | 25.46 | 25.53 | 25.49 | 25.42 | 25.43 | 25.49 | 25.45 | 25.47 | 25.43
2 | 2 | 25.48 | 25.46 | 25.50 | 25.47 | 25.47 | 25.49 | 25.51 | 25.44 | 25.48 | 25.48 | 25.51
2 | 3 | 25.48 | 25.49 | 25.51 | 25.47 | 25.49 | 25.52 | 25.52 | 25.43 | 25.53 | 25.47 | 25.54
2 | Δavg (%) | 0.03 | 0.02 | 0.10 | 0.10 | 0.03 | 0.16 | 0.17 | 0.11 | 0.12 | 0.05 | 0.19
2 | Δabs (%) | 0.32 | 0.31 | 0.36 | 0.35 | 0.33 | 0.30 | 0.33 | 0.21 | 0.34 | 0.29 | 0.37
5 | 1 | 25.49 | 25.54 | 25.59 | 25.52 | 25.58 | 25.60 | 25.61 | 25.60 | 25.61 | 25.59 | 25.61
5 | 2 | 25.45 | 25.50 | 25.55 | 25.55 | 25.50 | 25.41 | 25.44 | 25.56 | 25.44 | 25.44 | 25.45
5 | 3 | 25.47 | 25.47 | 25.52 | 25.54 | 25.57 | 25.44 | 25.48 | 25.40 | 25.48 | 25.45 | 25.45
5 | Δavg (%) | 0.06 | 0.15 | 0.30 | 0.23 | 0.29 | 0.29 | 0.21 | 0.40 | 0.21 | 0.23 | 0.23
5 | Δabs (%) | 0.28 | 0.41 | 0.60 | 0.53 | 0.59 | 0.32 | 0.42 | 0.47 | 0.42 | 0.36 | 0.40
20 | 1 | 25.48 | 25.51 | 25.51 | 25.49 | 25.49 | 25.52 | 25.53 | 25.68 | 25.53 | 25.60 | 25.52
20 | 2 | 25.38 | 25.41 | 25.41 | 25.38 | 25.39 | 25.53 | 25.56 | 25.63 | 25.57 | 25.38 | 25.54
20 | 3 | 25.24 | 25.28 | 25.27 | 25.24 | 25.25 | 25.22 | 25.26 | 25.62 | 25.26 | 25.44 | 25.21
20 | Δavg (%) | 0.44 | 0.38 | 0.39 | 0.44 | 0.41 | 0.50 | 0.50 | 0.66 | 0.51 | 0.33 | 0.51
20 | Δabs (%) | 0.35 | 0.30 | 0.32 | 0.35 | 0.32 | 0.58 | 0.57 | 0.96 | 0.59 | 0.35 | 0.59
* (A) Bicubic; (B) Bilinear; (C) Box Kernel; (D) Lanczos 2; (E) Lanczos 3; (F) ASWDR; (G) EZW; (H) LvlMmc; (I) SPIHT; (J) STW; (K) WDR.
Table 6. Results of photogrammetry using input (original) image size and compressed images.
Internal Parameters | Input Image, R | Input Image, S | Compressed Image ICR = 20, R | Compressed Image ICR = 20, S
u0, v0 (pix) | 4.15, 14.88 | 14.01, −8.88 | 0.54, 8.97 | 1.36, −4.55
A1 | −5.1 × 10^−9 | −3.6 × 10^−9 | −2.80 × 10^−8 | −2.10 × 10^−8
A2 | −6.6 × 10^−18 | −1.1 × 10^−15 | −1.10 × 10^−14 | −2.00 × 10^−14
A3 | −1.2 × 10^−22 | 1.7 × 10^−22 | −1.90 × 10^−20 | 1.10 × 10^−20
B1 | −1.6 × 10^−7 | −7.0 × 10^−9 | −3.60 × 10^−7 | −2.20 × 10^−8
B2 | −1.3 × 10^−7 | −1.1 × 10^−7 | −2.70 × 10^−7 | −2.80 × 10^−7
Table 7. Different estimates of displacement measurement error for TT using original and compressed images with respect to string potentiometer.
Error | SP vs. TT original | SP vs. TT compressed | TT original vs. TT compressed
Top channel, Δmax (mm) | 0.19 | 1.51 | 1.38
Top channel, Δmax (%) | 0.51 | 0.56 | 0.52
Top channel, RMSE (mm) | 0.18 | 0.75 | 0.79
Bottom channel, Δmax (mm) | 0.71 | 1.08 | 1.36
Bottom channel, Δmax (%) | 0.27 | 0.41 | 0.51
Bottom channel, RMSE (mm) | 0.31 | 0.52 | 0.79
Table 8. Vision-based systems configuration for the seismic shake table test monitoring.
Sensor System | #1 | #2
Camera Type | High-speed | Digital DSLR
Color Mode | Monochrome | Color, used in monochrome mode
Data size | 38.4 GB | 2.3 GB
Format | .tiff | .jpg
Record duration (sec) | 120 | 120
Resolution, w × h (pixel) | 2560 × 2048 | 1920 × 1080
Sampling rate, fs | 32 fps | 30 fps
Table 9. Selected results of photogrammetry (principal point coordinates) for the two-vision sensor systems using original images and compressed images with applied bicubic interpolation algorithm.
Monitoring System | Input Image, R | Input Image, S | ICR = 2, R | ICR = 2, S | ICR = 5, R | ICR = 5, S
High-speed cameras, u0, v0 (pix) | 22.10, −15.35 | 5.37, −13.03 | 13.4, −11.12 | 3.88, −9.91 | 9.86, −11.84 | 2.10, −9.04
DSLR cameras, u0, v0 (pix) | 64.5, −334.3 | 17.4, −370.6 | 39.23, −242.08 | 12.59, −284.82 | 28.77, −257.86 | 6.82, −257.22
Table 10. Summary of the peak displacement value as measured from original and compressed images from both monitoring systems.
System | Original | ICR = 2 | ICR = 5
High-speed camera system | a1 = 618.88 mm | a2 = 610.37 mm | a5 = 591.61 mm
DSLR camera system | b1 = 615.37 mm | b2 = 600.76 mm | b5 = 575.96 mm
Table 11. Summary of the maximum values for the different calculated assessment errors.
Assessment | Original Full Resolution | ICR = 2 | ICR = 5
Equation (6) | n/a | Δrel,CR=2 = 12.73 mm (2.09%) | Δrel,CR=5 = 18.64 mm (3.15%)
Equation (7) | Δabs,CR=1 = 6.70 mm (1.08%) | Δabs,CR=2 = 21.24 mm (3.43%) | Δabs,CR=5 = 45.91 mm (7.42%)
Equation (8) | n/a | Δcomp,a,CR=2 = 8.51 mm (1.38%) | Δcomp,a,CR=5 = 27.26 mm (4.41%)
Equation (9) | n/a | Δcomp,b,CR=2 = 14.62 mm (2.38%) | Δcomp,b,CR=5 = 26.96 mm (4.38%)