Parameter Measurement of Live Animals Based on the Mirror of Multiview Point Cloud

Scale and standardization are essential to the prosperity of the breeding industry. During large-scale, standardized breeding, the selective breeding of good livestock breeds hinges on the accurate measurement of body parameters for live animals. However, the complex shooting environment brings several urgent problems, such as large amounts of missing local data in the point cloud and the difficulty of automatically acquiring body data. To solve these problems, this paper proposes a method for parameter measurement of live animals based on the mirror of multiview point cloud. Firstly, the acquisition and stitching principles were given for the multiview point cloud data on body parameters of live animals. Next, the authors presented a way to make up for the data missing areas in the point cloud. Finally, this paper acquires the body mirror data of live animals and scientifically calculates the body parameters. The proposed measurement method was proved effective through experiments.


Introduction
Precision animal husbandry refers to the scientific breeding and management of live animals by arranging regular daily rations based on information technology. As an important aspect of intelligent agriculture, precision animal husbandry can improve the output benefit of animal husbandry products and ensure product quality and safety [1][2][3][4]. Large-scale, standardized breeding can effectively improve the output and profit of pigs, cattle, and sheep. During large-scale, standardized breeding, the selective breeding of good livestock breeds hinges on the accurate measurement of body parameters for live animals [5][6][7][8][9][10][11][12]. The manual measurements with tools like calipers and tape measures are greatly affected by subjective human factors. By contrast, measurement based on three-dimensional (3D) body parameters, which cover the geometry of live animals, is relatively accurate. The measured data help to assess the health state of livestock, evaluate their body shapes, and identify their behavioral features [13][14][15][16].
Focusing on parameter measurement based on 3D point cloud data, Jo et al. [17] relied on the point cloud data of a 3D human body model to construct an objective interpolation function, which can describe the morphological changes of the human body (e.g., gender, age, weight, height, and body proportion). Then, the independent elements were weighted reasonably according to the linkage between element changes. In this way, the independent elements were adjusted and updated. After that, the needed human body model was derived from the intermediate human body. Sato [18] proposed a hardware and software system capable of synchronized, precise acquisition of point cloud data on live animals. The system consists of an FM810-GI depth camera and its fixation structure, a point cloud data processing module, and a repeater. Rao et al. [19] improved the stereo calibration method for point cloud data on live animals based on the location relationship between the multiple depth cameras used to collect the data. Then, the three-view point cloud data on live animals underwent stitching and duplicate removal by the iterative closest point (ICP) algorithm and k-means clustering (KMC). Finally, a precise 3D point cloud was established for live animals.
To evaluate the health state of pandas, Turner et al. [20] introduced the skinned multianimal linear (SMAL) model to the 3D model reconstruction of these first-class protected animals in China and obtained the base shape and base pose of the 3D panda model based on principal component analysis (PCA) and bone movements. Further, they derived a parameterized description of the shape and pose of the model. Zhang et al. [21] manually extracted animal contours from two-dimensional (2D) images, set up an objective function of Euclidean clustering between the SMAL model and the contour segmentation maps, and estimated the SMAL parameters by minimizing the objective function. Ahsan et al. [22] provided an effective and accurate way to measure the length, width, and depth of pavement cracks. Specifically, watershed segmentation was adopted to segment and mask the background of damaged pavement images, the coordinate system of pavement cracks was converted point by point, a 3D visual model of the pavement cracks was established in MATLAB, and the computed results were compared with the measured data. To solve the precision product quality problems induced by manufacturing errors, Chen and Wang [23] proposed a 3D point cloud feature calculation method to compute the geometric and physical parameters of workpieces and combined area changes and centroid deviation into a dense layered part evaluation and adaptive stratification algorithm, which can reconstruct workpiece surfaces and adaptively stratify workpieces.
Some results have been achieved on 3D point clouds and body parameter extraction, as well as weight prediction [24][25][26][27]. However, there are often holes in the point cloud, owing to complex environmental factors, e.g., environmental interference (especially the fences of the breeding base) and low equipment precision. These holes severely impede the postprocessing of the point cloud. In addition, it is very difficult to automatically acquire the body data of live animals [28][29][30][31]. To solve these problems, this paper proposes a method for parameter measurement of live animals based on the mirror of multiview point cloud. Section 2 introduces the acquisition and stitching principles of the multiview point cloud data on body parameters of live animals. Section 3 presents a way to make up for the data missing areas in the point cloud. Section 4 acquires the body mirror data of live animals and scientifically calculates the body parameters. The proposed measurement method was proved effective through experiments.

Data Acquisition and Stitching
During the 3D reconstruction of a specific object, the object must be extracted from the background to ensure recognition and analysis accuracy. Owing to the complex environment of the breeding base, the point cloud data extracted from live animals contain the background, noise, and outliers. The data need to be preprocessed to remove the background and noise, facilitating further analysis. By visualizing the point cloud data of live animals, it is possible to obtain the left and right point cloud data of the background, including the ground, cameras, and noise. Considering the complex living environment of the animals, the point cloud data were extracted from live animals in the following steps: (1) crop the point cloud data in the specified coordinate range, using a passthrough filter; (2) remove the ground point cloud data through planar template matching; and (3) eliminate outliers carrying redundant information with a statistical filter. Figure 1 shows the mirroring principle of the multiview point cloud. It can be inferred that the multiview point clouds of live animals lie in multiple coordinate systems related by rotation and translation. Point cloud registration is necessary to unify the coordinates of the multiple point clouds under different coordinate systems.
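The three preprocessing steps above can be sketched in plain NumPy as follows. This is a minimal illustration, not the authors' implementation; the function names, the neighborhood size k, and the distance thresholds are assumptions:

```python
import numpy as np

def passthrough_filter(points, axis, lo, hi):
    # Keep only points whose coordinate along `axis` lies in [lo, hi].
    mask = (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[mask]

def remove_ground_plane(points, normal=(0.0, 0.0, 1.0), d=0.0, tol=0.02):
    # Drop points within `tol` of the plane n.x + d = 0 (the ground template).
    n = np.asarray(normal, float)
    dist = np.abs(points @ n + d) / np.linalg.norm(n)
    return points[dist > tol]

def statistical_outlier_removal(points, k=3, std_ratio=1.0):
    # Remove points whose mean distance to their k nearest neighbours exceeds
    # the global mean by more than `std_ratio` standard deviations.
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    dists.sort(axis=1)
    mean_knn = dists[:, 1:k + 1].mean(axis=1)  # column 0 is the self-distance
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]
```

In practice, libraries such as PCL and Open3D provide optimized equivalents of all three filters; the brute-force pairwise distance matrix above is only suitable for small clouds.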
Let G, p, and U be the rotation matrix, translation vector, and perspective transform vector between two depth cameras, respectively, with U being a zero vector, and let A = 1 be the proportional factor of the multiview point cloud on live animals. Then, the mapping F of point cloud registration can be expressed as the homogeneous transform F = [G, p; U^T, A], i.e., O2 = G·O1 + p.

To directly stitch point clouds on live animals, it is necessary to determine the location relationship between the depth cameras. These parameters can be obtained through stereo calibration in the binocular vision system. Let O_SJ be the coordinate of any point O in the world coordinate system; G1 and G2 be the rotation matrices of cameras 1 and 2 relative to the calibration object, respectively; and p1 and p2 be the translation vectors of cameras 1 and 2 relative to the calibration object, respectively. Under the world coordinate system, the coordinates of the point in the two cameras can be described by O1 = G1·O_SJ + p1 and O2 = G2·O_SJ + p2. The relationship between O1 and O2 can then be established as O2 = G2·G1⁻¹·(O1 − p1) + p2. Combining formulas (2) and (3), G = G2·G1⁻¹ and p = p2 − G2·G1⁻¹·p1.

According to the affine invariance of four-point pairs in 4-point congruent sets (4PCS), the distance ratio g can be fixed with three known collinear points U, V, and W. Suppose U and W fall on one straight line, V and Q fall on another straight line, and the two lines intersect at point H. Then, the distance ratios g1 and g2 can be calculated by g1 = ‖U − H‖/‖U − W‖ and g2 = ‖V − H‖/‖V − Q‖.
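The chaining of the two camera poses into a single camera-to-camera transform can be illustrated as follows. This is a sketch under the definitions above; the function name is hypothetical:

```python
import numpy as np

def relative_transform(G1, p1, G2, p2):
    # Given camera poses relative to the calibration object,
    #   O1 = G1 @ O + p1   and   O2 = G2 @ O + p2,
    # return (G, p) such that O2 = G @ O1 + p, i.e.
    #   G = G2 @ inv(G1)   and   p = p2 - G @ p1.
    G = G2 @ np.linalg.inv(G1)
    p = p2 - G @ p1
    return G, p
```

For true rotation matrices, `np.linalg.inv(G1)` can be replaced by the cheaper transpose `G1.T`.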

Computational Intelligence and Neuroscience
During affine transform, the distance ratios g1 and g2 determined by the four coplanar points of the source point cloud and the corresponding four points in the target point cloud are invariant, i.e., completely the same. If there exists any point pair s1 and s2 in S whose lines intersect at points h1 and h2, and h1 and h2 coincide within a certain error range, then s1 and s2 are the coplanar points corresponding to the given base in the world coordinate system. The intersections can be calculated as h1 = U + g1·(W − U) and h2 = V + g2·(Q − V).

If the point cloud data on live animals are stitched directly using the results of stereo calibration, the registration accuracy needs to be guaranteed through iterations of the precise matching algorithm, ICP. Suppose the point set under the world coordinate system is denoted as X = {x1, x2, ..., xM} and the target point set as Y = {y1, y2, ..., yM}. Under the premise of minimizing the error function error(G, p) = (1/M)·Σᵢ‖yᵢ − (G·xᵢ + p)‖² between the two point sets, the least squares method can be adopted to iteratively perform the optimal coordinate transform and calculate the rotation matrix and translation vector until the preset error threshold or maximum number of iterations is reached.
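The ICP iteration described above, with the closed-form least-squares update (the Kabsch/SVD solution), can be sketched in NumPy as follows. This is a brute-force illustration with hypothetical function names; production code would use a k-d tree for the nearest-neighbour search:

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Closed-form least-squares step: find (G, p) minimising
    # sum ||dst_i - (G @ src_i + p)||^2 via SVD of the cross-covariance.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    G = Vt.T @ D @ U.T
    p = cd - G @ cs
    return G, p

def icp(src, dst, iters=30, tol=1e-10):
    # Minimal ICP loop: nearest-neighbour correspondences, closed-form
    # rigid update, repeated until the mean error stops improving.
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matches = dst[d2.argmin(axis=1)]
        G, p = best_rigid_transform(cur, matches)
        cur = cur @ G.T + p
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return cur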

Repairing Missing Areas
To make up for the large nonclosed missing areas in the point cloud of live animals, this paper proposes a cubic B-spline curve fitting method based on the projections of point cloud slices. The slicing of the point cloud on live animals was carried out along the a-axis. The first step is to determine the minimum distance ε_min between the point cloud center and the other points, as well as the maximum a_max and minimum a_min of the centers along the a-axis. Next, point cloud slices were sampled from a_min in the positive direction of the a-axis, with an interval of ε_min. The sampling number M_S can be calculated by M_S = [(a_max − a_min)/ε_min], where the square brackets stand for the rounding operation. The sampling interval of the i-th point cloud slice O_i can be described as [a_min + (i − 1)ε_min, a_min + i·ε_min]. Then, the maximum b_i-max and minimum b_i-min of O_i along the b-axis were determined, and the point cloud was sliced into M_i parts with an interval of ε_min.

The curve fitting effect is greatly affected by the number of new points introduced when the interval of the point cloud slices is expanded; therefore, the slice interval must be selected carefully. The processed O_i was projected onto plane bOc, and the projection points were then fitted. When restoring the fitted point cloud O*_i to the space, the points in O*_i should be distributed uniformly over the interval along the a-axis.

Suppose the slice plane or space of the point cloud on live animals contains u + v + 1 vertices. Then, O_i has a degree-v parametric curve segment O_jv(p). The degree-v B-spline curve can be derived from the curve segments O_jv(p) above, and the basis function R_iv(p) of the curve can be calculated recursively. Each segment of a degree-v B-spline curve is defined by v + 1 adjacent vertices; for the cubic case (v = 3), each segment is controlled by four vertices. Then, the cubic B-spline curve can be expressed accordingly.
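The slice count M_S and the per-slice intervals along the chosen axis can be computed as in the following small sketch of the sampling step; the function name and the exact definition of the centre are assumptions:

```python
import numpy as np

def slice_bounds(points, axis=0):
    # Number of slices M_S and per-slice intervals along `axis`, using the
    # minimum centre-to-point distance eps_min as the slice thickness.
    a = points[:, axis]
    a_min, a_max = a.min(), a.max()
    centre = points.mean(axis=0)
    eps = np.linalg.norm(points - centre, axis=1)
    eps_min = eps[eps > 0].min()
    M_S = int(round((a_max - a_min) / eps_min))
    intervals = [(a_min + i * eps_min, a_min + (i + 1) * eps_min)
                 for i in range(M_S)]
    return M_S, intervals
```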

The corresponding basis functions of the cubic curve (v = 3) are R_{0,3}(p) = (1 − p)³/6, R_{1,3}(p) = (3p³ − 6p² + 4)/6, R_{2,3}(p) = (−3p³ + 3p² + 3p + 1)/6, and R_{3,3}(p) = p³/6, and the j-th segment of the cubic B-spline curve is their weighted sum over four adjacent vertices. Figure 2 shows the projection and fitting of point cloud slices on the forelimbs of a cow. After the projection and fitting, the distance between adjacent points averaged 6.842 mm, the standard deviation was 1.514 mm, and the approximation error was 0.426 mm. The number of points increased by 67.2% to 270. The fitted range of points was close to the original range of points.
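Assuming the standard uniform cubic B-spline basis above, a slice-projection densification step might look like this. It is illustrative only; treating the ordered projection points directly as control vertices is an assumption:

```python
import numpy as np

def cubic_bspline_segment(V, t):
    # Uniform cubic B-spline segment defined by four control vertices
    # V[0..3], evaluated at t in [0, 1]; the four basis weights sum to 1.
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    return b0 * V[0] + b1 * V[1] + b2 * V[2] + b3 * V[3]

def fit_slice_curve(points2d, samples_per_seg=10):
    # Treat consecutive slice-projection points as control vertices and
    # sample each cubic segment, densifying the sparse slice boundary.
    P = np.asarray(points2d, float)
    out = []
    for j in range(len(P) - 3):
        for t in np.linspace(0, 1, samples_per_seg, endpoint=False):
            out.append(cubic_bspline_segment(P[j:j + 4], t))
    return np.array(out)
```

Because the basis functions form a partition of unity, the sampled curve always stays inside the convex hull of its control vertices, which keeps the repaired boundary close to the original slice.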

Key Point Positioning and Mirror Data Acquisition.
The nearby transition point P3, at which the slope also turns from positive to negative, was identified in a similar manner as point P2. Along the positive direction of axis a, the number M_FK of centers in the cloud segment P1-P4 was calculated with P2(a2, c2) as the starting point. Then, the angle ω_j between axis a and the line from P1(a1, c1) to each center O_j(a_oj, c_oj) (j = 1, 2, ..., M_FK) satisfying c_oj ≥ c1 can be obtained by ω_j = arctan((c_oj − c1)/(a_oj − a1)). After obtaining P2 and P3, the shoulder point P4 of the live animal was determined as the farthest point in the point cloud segment P2-P3 from the line connecting P2 and P3. Then, the point of ischial tuberosity P5 could be obtained as the center of the K points nearest to the point of minimum a. After that, the point of withers P6 could be solved by computing the center coordinates of all the tallest points in the two point cloud slices extended to the left and right of the axis a coordinate of the midpoint of P2 and P4. Finally, the upper point P_U and lower point P_D of the body depth could be solved by computing the center coordinates of all the tallest points and all the lowest points, respectively, in the two point cloud slices extended to the left and right of the axis a coordinate of point P1.
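The shoulder-point criterion (the farthest point in the segment P2-P3 from the line connecting P2 and P3) reduces to a perpendicular-distance search, sketched here in 2D with hypothetical names:

```python
import numpy as np

def farthest_from_line(points, A, B):
    # Return the point in `points` with the largest perpendicular distance
    # to the line through A and B (2D), together with that distance.
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = B - A
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal to AB
    dist = np.abs((np.asarray(points, float) - A) @ n)
    return points[dist.argmax()], dist.max()
```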
To get an accurate plane of symmetry for the body of the live animal, the normal vector c_op of the ground supporting the animal and the horizontal direction vector ξ_p of the animal were aligned with the positive directions of axes a and b, respectively, to normalize the pose. Then, the normal vector ϕ_p of the plane of symmetry is the cross product of c_op and ξ_p: ϕ_p = c_op × ξ_p. The above analysis shows that the tail point of the live animal is the extreme point in the negative direction of axis a, with coordinates (a0, b0, c0). From (a0, b0, c0) and ϕ_p, the equation of the animal's plane of symmetry can be determined as c = c0. The mirror data on one side of the plane of symmetry could be obtained by setting up the homogeneous coordinates of the points O_t1 = {(a, b, c) | c > c0} on that side and reflecting them across the plane. After obtaining the symmetric data, the complete point cloud O_t, i.e., the mirror of the point cloud of the whole animal in 3D space, can be assembled.
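With the symmetry plane normalized to c = c0, the mirroring step is simply a reflection of the visible half of the cloud across that plane; a minimal sketch, with assumed names:

```python
import numpy as np

def mirror_across_plane(points, c0):
    # Reflect the half cloud with c > c0 across the symmetry plane c = c0
    # and merge it with the original, completing the occluded side.
    points = np.asarray(points, float)
    half = points[points[:, 2] > c0]
    mirrored = half.copy()
    mirrored[:, 2] = 2 * c0 - mirrored[:, 2]
    return np.vstack([points, mirrored])
```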

Calculation of Body Parameters.
The Euclidean distance from P4(a4, b4, c4) to P5(a5, b5, c5) was defined as the diagonal body length. The horizontal distance from P4(a4, b4, c4) to the vertical line through P5(a5, b5, c5) was defined as the horizontal body length. The shoulder width was defined as twice the distance from P4(a4, b4, c4) to the plane of symmetry c = μ0* + μ1·a, and the abdominal width as twice the distance from P1(a1, b1, c1) to c = μ0* + μ1·a. The height was defined as the distance from P6(a6, b6, c6) to the ground plane τ_a·a + τ_b·b + τ_c·c + υ = 0, and the depth was defined by the difference between the vertical heights of P_U(a_PU, b_PU, c_PU) and P_D(a_PD, b_PD, c_PD).
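The parameter definitions above translate directly into the following distance computations; an illustrative sketch whose function names are assumptions:

```python
import numpy as np

def diagonal_length(P4, P5):
    # Euclidean distance between shoulder point P4 and ischial point P5.
    return np.linalg.norm(np.asarray(P4, float) - np.asarray(P5, float))

def horizontal_length(P4, P5):
    # Component of the same pair of points along the body axis a.
    return abs(P4[0] - P5[0])

def width_from_symmetry_plane(P, mu0, mu1):
    # Twice the distance from P = (a, b, c) to the plane c = mu0 + mu1*a,
    # i.e. mu1*a - c + mu0 = 0, normalised by sqrt(mu1^2 + 1).
    a, _, c = P
    return 2 * abs(mu1 * a - c + mu0) / np.hypot(mu1, 1.0)

def height_to_ground(P, ta, tb, tc, v):
    # Distance from P to the ground plane ta*a + tb*b + tc*c + v = 0.
    return abs(ta * P[0] + tb * P[1] + tc * P[2] + v) / np.linalg.norm([ta, tb, tc])
```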

Experiments and Result Analysis
During the acquisition of point cloud data from live animals, it is difficult to shoot an image and complete 3D calibration using normal calibration targets. Thus, this paper performs 3D calibration with large and small infrared calibration targets. Depending on the deployment of the depth cameras, the overhead camera was calibrated separately with the left infrared lens of the right camera and the right infrared lens of the left camera. The calibration errors are recorded in Figures 4(a) and 4(b). It can be inferred that the mean reprojection error between the overhead camera and the left infrared lens of the right camera was 1.31 pixels, and that between the overhead camera and the right infrared lens of the left camera was 0.87 pixels. Both results meet the precision requirements.
The axis a coordinates of the chest circumference measuring points on a live pig were recorded in the interactive measuring software for point cloud data. The fitting parameters were set as follows: the order of the curve, 4; the number of iterations, 50; and the number of control points, 100. Figure 5(a) shows the point cloud within 0.005 on either side of the recorded coordinates, obtained by the passthrough filter. Figure 5(b) shows the curve obtained by our cubic B-spline curve fitting method, which is marked in red. The chest circumference could be estimated from the length of the approximate polygon composed of the curve control points.
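Estimating the circumference from the polygon through the ordered control points amounts to summing consecutive segment lengths; a minimal sketch (the function name is an assumption):

```python
import numpy as np

def polygon_circumference(points, closed=True):
    # Approximate a girth measurement as the length of the polygon
    # through the ordered control points of the fitted curve.
    P = np.asarray(points, float)
    segs = np.diff(P, axis=0)
    length = np.linalg.norm(segs, axis=1).sum()
    if closed:
        length += np.linalg.norm(P[0] - P[-1])  # close the loop
    return length
```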
Our point cloud repairing method was applied to the missing areas in the 240 frames of point clouds on 50 pigs. These areas went missing due to the occlusions of railings. The fitting errors of the traditional method and our method are displayed in Figure 6. The mean, maximum, and minimum fitting errors of the traditional cubic B-spline curve were 2.524 mm, 4.452 mm, and 2.346 mm, respectively; those of our method, i.e., cubic B-spline curve fitting based on the projection of point cloud slices, were 1.924 mm, 3.754 mm, and 1.859 mm, respectively. This further confirms that the curve fitted by our method is closer to the original point cloud.
To verify its effectiveness, the proposed algorithm was compared with two other models through experiments. The processing results of the different models are listed in Table 1. Our algorithm achieved relatively good results on segmenting live animals in point cloud data: the recall was as high as 82.7% and the accuracy 88.9%. The recall and accuracy of region growth + threshold judgement were 80.4% and 82.7%, respectively. The recall and accuracy of watershed segmentation were merely 55.1% and 80.4%, respectively. The comparison shows that our algorithm offers good precision and high recall and accuracy, while both contrastive models, region growth + threshold judgement and watershed segmentation, performed worse overall.

To fit the line of symmetry, the point cloud slices between the following points were merged into a point range: the point of maximum abdominal width P1, the shoulder point P4 and its transition points P2 and P3, and the point of ischial tuberosity P5. Then, the outliers were removed from the point range, and the center coordinates were solved. The resulting new point range was fitted by a linear equation. Then, the straight line was translated by the mean distance from every point in the range to the center coordinates, producing the line of symmetry of the live animal. From this line, it is possible to derive the mirror data of the point cloud of that animal. Figure 8 shows the fitted line of symmetry. The body parameters were extracted and measured from the 240 frames of point clouds on 50 pigs. Table 2 presents the measured results, and Table 3 compares the point cloud measurements with the manual measurements.
As shown in Table 3, the MAE of height measurement was minimized at 0.0032. The MAEs of the other parameters were within 0.0270. Specifically, diagonal length and horizontal length had relatively large MAEs (0.0262 and 0.0232), 35% greater than the MAEs of the other parameters. The MRE of height was also minimized, at 0.9127%. The MRE of horizontal length was 5.0327%, above the MREs of all the other five parameters. Regardless of MAE or MRE, the measurement errors of horizontal and diagonal lengths were relatively large, while those of height and depth were small. The main reason is that slight changes in the body position of the live animals during the measurement affect the accuracy of key point positioning.
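The two error metrics used in the table are standard and can be computed as follows (a sketch; the MRE is expressed in percent, matching the figures quoted above):

```python
import numpy as np

def mae(pred, truth):
    # Mean absolute error between point cloud and manual measurements.
    return np.abs(np.asarray(pred, float) - np.asarray(truth, float)).mean()

def mre(pred, truth):
    # Mean relative error, in percent.
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return (100.0 * np.abs(pred - truth) / np.abs(truth)).mean()
```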

Conclusions
This paper develops a parameter measurement method for live animals based on the mirror of multiview point cloud. After being acquired from the target animal, the point cloud data from multiple views were preprocessed and stitched, followed by the elimination of redundant background points. Next, the features of the point cloud data were analyzed, and a 3D point cloud data model was established for live animals. After that, the authors explained how to repair the missing parts of the point cloud data, acquired the mirror data on the animal body, and scientifically computed the body parameters. Experimental results confirm the soundness of calibrating the overhead camera separately with the left infrared lens of the right camera and the right infrared lens of the left camera. In addition, the chest circumference measuring points were fitted into a curve, and the errors of different methods for repairing missing areas in the point cloud were compared. The relevant results demonstrate the effectiveness of our fitting algorithm. Further, the line of symmetry of a live animal was fitted, which proves the feasibility and effectiveness of our point cloud acquisition method. Finally, the measuring errors of the body parameters were presented, suggesting the high accuracy of our body parameter measuring method for live animals.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this paper.