Article

Research on Contactless Detection of Sow Backfat Thickness Based on Segmented Images with Feature Visualization

1 College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
2 Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Shenzhen 518000, China
3 Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, Shenzhen 518000, China
4 Shenzhen Branch, Guangdong Laboratory for Lingnan Modern Agriculture, Shenzhen 518000, China
5 Hubei Hongshan Laboratory, Wuhan 430070, China
6 Key Laboratory of Smart Farming for Agricultural Animals, Ministry of Agriculture and Rural Affairs, Wuhan 430070, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(2), 752; https://doi.org/10.3390/app14020752
Submission received: 19 October 2023 / Revised: 20 November 2023 / Accepted: 27 November 2023 / Published: 16 January 2024
(This article belongs to the Section Mechanical Engineering)

Abstract

Existing methods for detecting sow backfat thickness are stressful for the animals, costly, and unable to measure in real time. To address these problems, this paper proposes a non-contact detection method for sow backfat that combines a residual network with image segmentation guided by neural network feature visualization. We propose removing irrelevant image information to improve the accuracy of the sow backfat thickness detection model. Feature visualization shows that the image regions corresponding to irrelevant features appear in the feature maps with the same high brightness as the relevant feature regions. An image segmentation algorithm is therefore used to isolate the relevant image regions, and model performance before and after segmentation is compared to verify the feasibility of this method. To verify the generalization ability of the model, five datasets were randomly divided; the test results show that the coefficients of determination (R2) of the five groups were all above 0.89, with a mean of 0.91, and the mean absolute error (MAE) values were all below 0.66 mm, with a mean of 0.54 mm, indicating that the model has high detection accuracy and strong robustness. To explain the high accuracy of the backfat thickness detection model and to increase the credibility of its application, feature visualization was used to statistically analyze the irrelevant and relevant features that the residual network extracts from sow back images. The relevant features were the hip edge, the area near the body height point, the area near the backfat thickness measurement point (P2), and the lateral contour edge. The first three are consistent with previous research on sow backfat and thus explain the high accuracy of the detection model; in addition, the lateral contour edge features were found to be effective for predicting backfat thickness. To explore the influence of irrelevant features on model accuracy, UNet was used to segment out the image regions corresponding to irrelevant features, and the resulting sow contour images were used to construct a new backfat thickness detection model. The R2 values of this model were all above 0.91, with a mean of 0.94, and the MAE values were all below 0.65 mm, with a mean of 0.44 mm. Compared with the model before segmentation, the average R2 increased by 3.3% and the average MAE decreased by 18.5%, indicating that irrelevant features reduce the detection accuracy of the model. This method can provide a reference for farmers to dynamically monitor sow backfat and manage their farms precisely.

1. Introduction

China has a high rate of pork consumption, and with population growth, the country's demand for pork continues to increase. In 2021, the Ministry of Agriculture and Rural Affairs issued the "14th Five-Year Plan" National Animal Husbandry and Veterinary Industry Development Plan, which predicts that by 2025, the output value of the national hog farming industry will reach CNY 1.5 trillion. However, China is not yet a strong breeding nation, and its hog industry faces many problems, such as high costs, a low degree of automation, and the weak breeding capacity of sows [1]. In particular, the weak breeding capacity of sows is the key problem restricting large-scale breeding in the hog industry, so improving sow breeding capacity is an urgent task. Sow fertility is affected by many factors, of which backfat is a significant one. Sow backfat is the soft fatty tissue on the back of the sow, and its thickness affects the animal's service life, culling rate, reproductive performance, piglet performance, and intestinal health [2,3]. Different backfat thicknesses have different effects on sow reproductive performance [4], and feeding programs in production are based on the backfat thickness of sows. Therefore, dynamic detection of sow backfat thickness is an important means of improving sow reproductive performance and production capacity, and it is a powerful guarantee of the economic efficiency of pig farms and the realization of large-scale breeding.
Visual inspection, supplemented by pressing, is the original method of sow backfat assessment; it is subjective, causes stress in sows, and makes accurate, real-time measurement difficult [5]. There is also the carcass method, which uses vernier calipers to measure the backfat of the carcass; the measurements are accurate, but the method does not apply to live pigs, and the workload is large and cumbersome [6,7]. In 1945, Hazel invented the probe method, which opened a new chapter in live measurement and improved accuracy compared with visual inspection. However, the probe must pierce the pig's skin, causing a strong stress reaction that is not conducive to production. In the 1950s, Wild applied ultrasonic technology to the detection of biological tissues for the first time, and since then the ultrasonic method has gradually become the preferred method for measuring the backfat thickness of live pigs without mechanical damage. Currently, ultrasound is the main method for measuring backfat thickness in live pigs; it requires professionals to mark the backfat position on an ultrasound image according to specialized theory in order to calculate the backfat thickness. This reduces the accuracy of the measurement results, is time-consuming and inefficient, is constrained by equipment and influenced by the measuring personnel, and cannot achieve real-time, low-cost, contactless measurement of backfat thickness.
With the development of deep learning, deep learning-based computer vision technology has been very successful in detecting the behavior, individual weight, and body size of pigs [8,9,10,11,12]. Because such methods are contactless, real-time, automatic, and objective, it has become possible to establish a contactless measurement model of sow backfat thickness using computer vision. Zhang Lijuan et al. applied fully convolutional networks (FCNs) to determine pig backfat thickness in ultrasound images; the correlation coefficient between the segmentation results and the actual measured backfat thickness reached 0.92, and the backfat thickness determined from FCN-segmented ultrasound images had high accuracy [13]. Basak et al. [14] used ultrasound to measure the backfat thickness of growing–finishing pigs and established a machine learning model of their fat mass and backfat thickness. However, the ultrasound method not only causes a stress response in sows but is also limited by the skill of professionals and by the equipment. Li Qing et al. [15] used image processing technology instead of manual methods to achieve accurate measurement of pork backfat thickness, which improved efficiency but is limited to carcasses. Fernandes et al. explored the correlation between top-view 3D images and the backfat thickness of fattening pigs using deep learning and manually extracted image features, respectively; the deep learning model achieved an R2 of 0.45 and a mean absolute scaled error (MASE) of 13.56 mm between predicted and measured backfat thickness and was the best of the backfat thickness detection models compared [16]. Thus, deep learning algorithms can be utilized to construct a contactless detection model of pig backfat thickness.
The essence of deep learning is training neural networks. In intelligent pig breeding research, studies generally treat the neural network end to end and focus only on its detection accuracy [17]. This approach lacks an understanding of the network's internal working mechanism, making it impossible to determine which image regions affect the network's decision making or to explain why the network achieves high detection accuracy, and thus failing to improve the credibility of applying neural network models. By visualizing the image features extracted by the neural network, both the research-object and non-research-object image regions that affect the network's decision making can be studied. The former allows the feature maps corresponding to decision-relevant image regions to be examined, providing insight into the network's detection performance. The latter can guide the separation of non-research-object image regions, yielding new image data and suggesting data processing methods for improving the model's test accuracy.
The residual network is a landmark deep learning architecture for image classification. It alleviates the degradation and training problems of very deep networks, improves the feature extraction ability of neural networks [18,19], and is widely used as the feature extraction backbone in computer vision tasks, including research on livestock and poultry breeding [20,21,22]. Therefore, to address the problem that real-time, low-cost, contactless detection of sow backfat thickness has not yet been realized, and to explain the backfat thickness detection model, this paper explores the relationship between sow back images and backfat thickness based on residual networks and constructs a contactless detection model for sow backfat thickness.

2. Data Collection and Dataset Construction

The data in this study were collected by the research group from a pig farm in Guangxi Province and cover 48 gestating sows in early and mid-gestation. The sows were kept in single pens, and sows at different gestation stages were kept in different buildings. The video data of each sow were captured by an Azure Kinect camera mounted on a homemade adjustable mobile cart, which recorded the back of each standing sow from an overhead angle. The camera captured video at 30 frames per second for 3 min, and the videos were parsed into RGB images, which were used as the object of this study. The backfat thicknesses of the 48 sows were measured using a Renco (LEAN-METER) backfat meter with a measuring range of 4–35 mm and an error of ±1 mm. The backfat measuring point was the P2 point commonly used in the international pig industry.
The video data of the sow backs were parsed to obtain image data; taking into account the similarity of neighboring frames, one frame was extracted every two seconds. Each sow's video yielded 90 images of size 1280 × 720, giving a total of 4320 RGB images from the videos of the 48 sows. Some of the images are shown in Figure 1; the captured scene contains the background of the sow in the limiting pen, the sows in the neighboring limiting pens, and the drainpipe above. To reduce the influence of the background and to speed up the forward inference and backpropagation of the model, the sow back images were cropped and converted to grayscale so as to retain as much information as possible about the sow in the middle limiting pen and to reduce the information about the sows in the neighboring pens. The resolution of the images was thus reduced from 1280 × 720 to 1080 × 450; part of a cropped image is also shown in Figure 1. The cropped images were combined with the backfat thickness of each sow to construct the dataset.
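As an illustration of this preprocessing step, the following sketch samples one frame every two seconds from a 30 fps video, crops the 1280 × 720 frame to a 1080 × 450 region around the middle pen, and converts it to grayscale. The file paths and crop coordinates are assumptions, since the exact crop window is not given in the paper.

import cv2

def extract_back_images(video_path, out_prefix, fps=30, interval_s=2,
                        crop=(100, 135, 1180, 585)):  # (x0, y0, x1, y1): illustrative 1080 x 450 window
    """Sample one frame every interval_s seconds, crop to the middle pen, and save as grayscale."""
    cap = cv2.VideoCapture(video_path)
    step = fps * interval_s            # 30 fps x 2 s -> keep every 60th frame (90 frames from 3 min)
    frame_idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            x0, y0, x1, y1 = crop
            roi = frame[y0:y1, x0:x1]                     # 1080 x 450 region around the middle pen
            gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)  # grayscale back image
            cv2.imwrite(f"{out_prefix}_{saved:03d}.png", gray)
            saved += 1
        frame_idx += 1
    cap.release()
    return saved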
The body condition scores of the 48 sows are shown in Table 1; the actual measured backfat thicknesses all fell within the backfat thickness intervals corresponding to body condition scores 2, 3, and 4. The dataset was randomly divided into a training set, a validation set, and a test set at a ratio of approximately 8:1:1, with 37, 6, and 5 sows, respectively. To validate the stability of the model's generalization performance across different datasets, this division was repeated five times with non-overlapping test sets; the composition of each dataset is shown in Table 2.
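A minimal sketch of this sow-level division is shown below; the sow ID list and random seed are assumptions for illustration. Splitting by animal rather than by image keeps all 90 images of one sow in the same subset, and the five divisions use disjoint groups of five test sows.

import random

def five_divisions(sow_ids, seed=0):
    """sow_ids: list of 48 sow identifiers; returns five (train, val, test) divisions."""
    rng = random.Random(seed)
    ids = list(sow_ids)
    rng.shuffle(ids)
    divisions = []
    for k in range(5):
        test = ids[k * 5:(k + 1) * 5]                   # 5 test sows, disjoint across divisions
        rest = [i for i in ids if i not in test]
        rng.shuffle(rest)
        divisions.append((rest[:37], rest[37:], test))  # 37 train / 6 validation / 5 test
    return divisions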

3. A Residual Network-Based Model for Detection of Backfat Thickness in Sows

3.1. Construction and Training of Residual Network Model

The residual network constructed in this experiment is shown in Figure 2. The input to the model is a 224 × 224 grayscale image of the sow's back. The two convolutional layers, conv1 and conv2, both used 3 × 3 convolution kernels; the numbers of kernels were set as hyperparameters n1 and n2, respectively, with a stride of 1 and no padding. The two pooling layers used 2 × 2 max pooling with a stride of 1 and no padding. The two convolutional layers inside each residual module also used 3 × 3 kernels; to keep the size of the feature map unchanged, boundary expansion (padding = 1) was used, and the stride was 1. The two convolutional layers within each residual module, Residual1 and Residual2, had the same number of kernels, n1 and n2, respectively. The local shortcut connection of the residual module used the LeakyReLU activation function. The numbers of hidden neurons in the two fully connected layers, Full connection1 and Full connection2, were n3 and n4, respectively; the fully connected layers integrated the features extracted by the preceding layers to obtain the final output. The output layer produced the backfat thickness corresponding to the input sow back image, which was used as the target value of the network.
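A minimal PyTorch sketch of this architecture is given below. The hyperparameters n1, n2, n3, and n4 follow the text; details that Figure 2 specifies but the text does not (for example, the exact position of the pooling layers and the default channel counts used here) are assumptions for illustration only.

import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3 x 3 convolutions with padding = 1 and an identity shortcut, activated by LeakyReLU."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
        self.act = nn.LeakyReLU()

    def forward(self, x):
        out = self.act(self.conv1(x))
        out = self.conv2(out)
        return self.act(out + x)          # local residual (shortcut) connection

class BackfatResNet(nn.Module):
    """Sketch of the Figure 2 network: conv1/Residual1/pooling, conv2/Residual2/pooling, two FC layers."""
    def __init__(self, n1=16, n2=32, n3=128, n4=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, n1, 3, stride=1, padding=0), nn.LeakyReLU(),   # conv1
            ResidualBlock(n1),                                           # Residual1
            nn.MaxPool2d(2, stride=1),                                   # 2 x 2 max pooling, stride 1
            nn.Conv2d(n1, n2, 3, stride=1, padding=0), nn.LeakyReLU(),   # conv2
            ResidualBlock(n2),                                           # Residual2
            nn.MaxPool2d(2, stride=1),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(n3), nn.LeakyReLU(),   # Full connection1
            nn.Linear(n3, n4), nn.LeakyReLU(),   # Full connection2
            nn.Linear(n4, 1),                    # output: backfat thickness (mm)
        )

    def forward(self, x):                        # x: (batch, 1, 224, 224) grayscale back image
        return self.regressor(self.features(x))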

3.2. Test Results and Analysis

In this study, linear regression analysis was performed to quantitatively assess the accuracy of the model by comparing the backfat thickness predicted by the residual network with the actual measured backfat thickness of the sows. The coefficient of determination (R2) and the mean absolute error (MAE) were therefore chosen as the evaluation indexes of the model.
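For reference, the two evaluation indexes can be computed as in the following NumPy sketch. The R2 here is taken as the coefficient of determination of the linear regression of predictions on measurements (the squared Pearson correlation), which is an assumption consistent with the regression-based analysis described above.

import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error (mm) between measured and predicted backfat thickness."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def r2_regression(y_true, y_pred):
    """Coefficient of determination of the linear regression of predictions on measurements."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.corrcoef(y_true, y_pred)[0, 1] ** 2)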
The hyperparameter space for training was set, and OPTUNA [23] was used to search within it, yielding five sets of model parameters, each of which minimized the loss function on the corresponding validation set. The five models were then tested on the five test sets. The prediction comparisons for the test sets are shown in Figure 3, with the corresponding MAE and R2 values given in Table 3, where BF denotes the backfat thickness of the sows.
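A minimal Optuna sketch of this search is shown below. The search ranges, the number of trials, and the train_and_validate helper are illustrative assumptions rather than the paper's exact settings, and the model class follows the sketch in Section 3.1.

import optuna

def objective(trial):
    # Illustrative search ranges; the paper's actual hyperparameter space differs.
    n1 = trial.suggest_int("n1", 8, 64)
    n2 = trial.suggest_int("n2", 16, 128)
    n3 = trial.suggest_int("n3", 32, 256)
    n4 = trial.suggest_int("n4", 16, 128)
    model = BackfatResNet(n1, n2, n3, n4)   # model sketch from Section 3.1
    return train_and_validate(model)        # hypothetical helper: trains and returns validation loss

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)      # number of trials is an assumption
print(study.best_params)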
In Figure 3, the five discrete points on the horizontal axis of each test set represent the five pigs in that test set. The yellow and blue colors represent the true and model-predicted backfat thicknesses, respectively. The results of the evaluation indexes of the test sets are shown in Table 3.
In Table 3, the test set numbers in the first column correspond, in order, to the five test sets in Figure 3. The predictions of the model for the 90 images of each pig were denoised by taking their median, which was used as the final predicted backfat thickness of that pig; combined with the true backfat thickness, this yielded the mean absolute error (MAE) and the coefficient of determination (R2) for the five test sets in Table 3. The R2 values of the five test sets were all higher than 0.89, with a mean of 0.91, and the MAE values were all lower than 0.66 mm, with a mean of 0.54 mm. These results show that the backfat thickness estimated by the residual network model and the measured backfat thickness have a good linear relationship and that the model accuracy is high.
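A minimal sketch of this per-pig median denoising step, combined with the metric functions sketched above, might look as follows; the dictionary structure mapping each pig to its roughly 90 per-image predictions is an assumption for illustration.

import numpy as np

def aggregate_and_evaluate(per_pig_predictions, measured_bf):
    """per_pig_predictions: {pig_id: ~90 per-image predictions (mm)};
    measured_bf: {pig_id: measured backfat thickness (mm)}.
    Returns (MAE, R2) computed after median denoising, using the metric sketches above."""
    pigs = sorted(measured_bf)
    y_pred = np.array([np.median(per_pig_predictions[p]) for p in pigs])  # median denoising
    y_true = np.array([measured_bf[p] for p in pigs])
    return mae(y_true, y_pred), r2_regression(y_true, y_pred)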

4. Research on Backfat Thickness Detection Model Based on Sow Contour Images

4.1. Residual Network to Extract the Image Features of the Sow’s Back Corresponding to Image Site Analysis

The high accuracy of the backfat thickness detection model rests on the residual network. To explain this high accuracy, the feature maps extracted by each layer of the network were analyzed using feature visualization. The saved model was loaded into the residual network, and the back image data of the 48 sows were input. The input image is shown in Figure 4, and the actual measurement position of sow backfat is shown in Figure 5. A program with a visualization function was written to visualize the channel feature maps of each convolutional and residual layer. The process of extracting the back image features is shown in Figure 6, Figure 7, Figure 8 and Figure 9.
Point A in Figure 5 indicates the area near the P2 point of sow backfat, which is where the actual measurement of sow backfat was made [2]. The area around point B is the area near the body height point of the sow, where Xiong Yuanzhu et al. measured backfat thickness; they referred to this location as the thickest part of the shoulder [6].
A highlighted region in a feature map indicates that this part of the map carries more weight: the network pays more attention to the features of these image regions, and they play a greater role in the decision making of the neural network [24,25]. By visualizing the features that the residual network extracts from the sow back image, it is possible to understand how different regions of the image affect the network's decision making and, thus, to analyze the effectiveness of the extracted features in predicting sow backfat [26].
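A minimal sketch of such a channel-feature-map visualization, using PyTorch forward hooks on the model sketched in Section 3.1, is shown below; the choice of layer, colormap, and number of channels displayed are illustrative assumptions.

import torch
import matplotlib.pyplot as plt

def visualize_channels(model, image, layer, n_channels=4, out_path="feature_maps.png"):
    """Capture the activations of `layer` for one back image and save the first channels
    as brightness maps (bright areas are regions the layer responds to most strongly)."""
    feats = {}
    handle = layer.register_forward_hook(lambda module, inp, out: feats.update(out=out.detach()))
    model.eval()
    with torch.no_grad():
        model(image.unsqueeze(0))             # image: (1, H, W) grayscale tensor
    handle.remove()
    fmap = feats["out"][0]                    # (channels, h, w) feature map of the chosen layer
    fig, axes = plt.subplots(1, n_channels, figsize=(3 * n_channels, 3))
    for k, ax in enumerate(axes):
        ax.imshow(fmap[k].cpu().numpy(), cmap="viridis")
        ax.axis("off")
    fig.savefig(out_path)

For example, under the model sketch above, visualize_channels(model, img, model.features[1]) would display channels of the first residual module.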
The image features extracted by a neural network can be divided into relevant features and irrelevant features: relevant features lie in the target area of the image, and irrelevant features lie outside it [27]. In this study, the relevant features are the image features of the target sow in the middle of the back image, and the irrelevant features are the image features outside this target area.
Analyzing the channel feature maps in Figure 6, Figure 7, Figure 8 and Figure 9 together with Figure 4 and Figure 5 shows that, for the images of the 48 sows, the irrelevant features in the channel feature maps extracted by the network correspond to image regions such as the drainpipe and the feeding pen, while the relevant features are the rump edges, the region near the P2 measurement point, the region near the body height point, and the flank contour edges. The analysis indicates that different layers of the residual network extract different features. In addition to features of the sow body itself (the rump edges, the hindquarters region containing the P2 point, the region near the body height point, and the side profile edges), irrelevant features such as the limiting pen bars and the drainpipe are also extracted. These features are finally combined in the fully connected layer to obtain the network output. The irrelevant features influence the decision making of the network and may reduce its accuracy; subsequent work should therefore isolate the image regions corresponding to irrelevant features and explore whether they affect the accuracy of the results.

4.2. Interpretive Analysis of Backfat Thickness Detection Models

The kinds of relevant features in the back images of the 48 sows were counted, and it was found that for every sow the residual network extracted features from four image regions: near the P2 point, near the body height point, the rump edge, and the flank profile edge. The following analyzes, for the back image of one sow, the effectiveness of the extracted feature maps in predicting backfat thickness, in order to explain the high accuracy of the model.
The highlighted areas in the output feature maps of the second residual module layer were roughly similar to those of the first convolutional layer, so only the output feature maps of the first convolutional layer were analyzed. In the channel feature maps of the first convolutional layer, shown in the first and third panels of Figure 6, the residual network extracted features of the sow's hindquarters in the red-circled area; these features act through the convolutional and fully connected layers to output the backfat thickness. This region reflects information related to sow backfat thickness and coincides with the location of the actual measurement of sow backfat thickness (P2). The highlighted area contains the location on the sow's back where backfat is actually measured, which indicates that the features extracted by the network from the hindquarters region are correlated with the sow backfat thickness and that extracting features from this region is effective for predicting backfat thickness.
The highlighted areas of the channel feature maps of the third convolutional layer and of the fourth residual module layer corresponded to roughly the same regions of the sow's back. The first and third panels of the channel feature maps of the fourth residual module layer show that the residual network extracted features of the edge of the sow's rump; these features, after passing through the fully connected layer, output the sow backfat thickness, reflect information on the characteristics of the sow rump, and had a strong influence on the network's decision making. This is in line with the previous research of Teng Guanghui et al. [28], who found correlations between the height-to-width ratio of the sow rump, the rump area, the radius of curvature, and the backfat thickness. These three quantities also reflect information on the sow rump, and the correlation between rump information and backfat thickness suggests that the rump edge features extracted by the residual network are effective for predicting sow backfat thickness.
In the first and third panels of the channel feature maps of the fourth residual module layer, the residual network also extracted features of the area near the sow's body height point, which influenced the decision of the network and, after the fully connected layer, contributed to the output backfat thickness. This area coincides with a location where sow backfat thickness is actually measured, the thickest part of the shoulder [6], where Xiong Yuanzhu et al. measured sow backfat thickness. This indicates that the features extracted by the network near the sow's body height point are correlated with the sow backfat thickness and are effective for predicting it.
In the fourth panel of the channel feature maps of the fourth residual module layer and the third panel of the third convolutional layer, the residual network extracted the left and right half-contour edge features of the sow's back, which, after the fully connected layer, contributed to the output backfat thickness. These edge features reflect information on the sow's lateral contour. No study has yet shown a statistical correlation between the side contour and backfat; however, the side contour features were extracted by the network for all 48 sows, which indicates that they had a considerable influence on the network's decision making and suggests that they are effective for predicting sow backfat thickness.
In summary, the relevant features of the 48 sows extracted by the residual network were statistically compared with previous studies on sow backfat thickness detection. The features extracted from the area near the P2 point, the area near the body height point, and the rump edge are all consistent with previous research and are effective for predicting sow backfat thickness, which demonstrates that the modeling approach used in this experiment is sound, explains the high accuracy of the backfat thickness detection model, and adds credibility to its practical application. In addition, the left and right half-contour edge features, which reflect the lateral contour of the sow, were found to be effective for predicting backfat thickness.

4.3. UNet-Based Sow Contour Segmentation

The analysis of the sow back image features extracted by the residual network explains the high accuracy of the model and increases the credibility of its practical application, but it also revealed irrelevant features in the feature maps that may affect the accuracy of the backfat thickness detection model. To explore their influence, the image regions corresponding to the irrelevant features needed to be separated. To achieve efficient and accurate segmentation, a UNet network [29] was constructed to segment the 4320 back images of the 48 sows and obtain new image data; the segmentation process is shown in Figure 10. Combined with the backfat thicknesses of the 48 sows, a new dataset was constructed. This dataset was divided according to the division scheme in Table 2, so each subset contained the same individual sows as before.
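A minimal sketch of applying such a segmentation model is shown below; it assumes a trained UNet that outputs a single-channel foreground logit map, and the thresholding step is an illustrative choice rather than the paper's exact post-processing.

import torch

def segment_sow(unet, image, threshold=0.5):
    """image: (1, H, W) grayscale tensor in [0, 1]; returns the image with background suppressed."""
    unet.eval()
    with torch.no_grad():
        prob = torch.sigmoid(unet(image.unsqueeze(0)))[0]  # (1, H, W) foreground probability
    mask = (prob > threshold).float()                      # binary sow mask
    return image * mask                                    # drain, pen bars, etc. set to zero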

4.4. Model Parameter Setting and Training

The architecture of the constructed residual network is shown in Figure 2. The input image was converted to grayscale, with a size of 240 × 100. In this experiment, the Adam optimization algorithm and a learning rate decay strategy were used. The number of training epochs was set to 50, the mean squared error (MSE) was used as the loss function, and the batch size was set to 4. The learning rate decay strategy was as follows: the initial learning rate was 0.01 and remained unchanged for the first 10 epochs, was decayed to 0.1 of the initial value for epochs 10–40, and was decayed to 0.01 of the initial value for epochs 40–50. The hyperparameter space for training the residual network model is shown in Table 4. This space was searched with the automated search framework OPTUNA to determine the optimal parameter combinations, and five models with different parameters were obtained.
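A minimal PyTorch sketch of this training configuration is given below; the model class follows the sketch in Section 3.1, and train_loader stands in for a hypothetical DataLoader over the segmented 240 × 100 grayscale images with batch size 4.

from torch import nn, optim

model = BackfatResNet()                      # architecture sketch from Section 3.1
criterion = nn.MSELoss()                     # MSE training loss
optimizer = optim.Adam(model.parameters(), lr=0.01)
# LR held at 0.01 for epochs 0-10, decayed to 0.001 for epochs 10-40 and 0.0001 for epochs 40-50.
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10, 40], gamma=0.1)

for epoch in range(50):                      # 50 training epochs
    for images, backfat in train_loader:     # hypothetical DataLoader, batch size 4
        optimizer.zero_grad()
        loss = criterion(model(images).squeeze(1), backfat)
        loss.backward()
        optimizer.step()
    scheduler.step()                         # epoch-wise learning rate decay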

4.5. Test Results and Analysis

The five models obtained were tested on each of the five test sets, and the prediction comparisons for the five test sets are shown in Figure 11. The MAE and R2 values for the five sets are shown in Table 4, with BF representing sow backfat thickness.
In Figure 11, the five discrete points on the horizontal axis of each test set represent the five pigs in that test set, and the yellow and green colors represent the actual and predicted backfat thickness of each pig, respectively. The results of the test set evaluation indexes are shown in Table 4.
In Table 4, the test set numbers in the first column correspond, in order, to the five test sets in Figure 11. Median denoising was performed on the predictions of the model for the 90 images of each pig, and the median was taken as the final predicted backfat thickness of that pig; combined with the true backfat thickness, this yielded the MAE and R2 of the five test sets shown in Table 4. The R2 values of the five test sets were all greater than 0.91, with a mean of 0.94, and the MAE values were all below 0.65 mm, with a mean of 0.44 mm. These results show that the estimated and measured backfat thicknesses based on the residual network model of the present study had a good linear relationship and that the prediction accuracy of the model was high.

4.6. Comparison of the Performance of Different Models in the Test Set

Table 5 shows the average prediction results of the three models over the five test sets. In terms of R2 and MAE, the ResNet model in this study had the highest R2 and the lowest MAE. VGG16 performs strongly in the classification field [30], but it did not perform well in this study, with an R2 of 0.66 and an MAE of 1.39 mm, which were the lowest and the highest among the three models, respectively. Compared with VGG16 and the model of YU M et al. [31], the R2 of the model in this study increased by 42.4% and 23.6%, respectively, and the MAE decreased by 68.3% and 63.3%, respectively. Therefore, the model in this study outperformed both VGG16 and the model proposed by YU M et al. [31].

4.7. Comparison of the Performance of the Backfat Thickness Detection Model before and after Segmentation Analysis

In Table 6, the test set numbers in the first column correspond, in order, to the five test sets in Figure 3 and Figure 11. Comparing the results of the five test sets before and after segmentation, the MAE after segmentation was equal to that before segmentation for the second test set and smaller for all the other test sets; the R2 after segmentation was the same as before segmentation for the first test set and larger for all the other test sets. The average R2 of the model after segmentation increased by 3.3%, and the average MAE decreased by 18.5%. These results show that irrelevant features reduced the detection accuracy of the model and that feature visualization can provide a reference method for improving model accuracy.

5. Conclusions

To solve the problems of time consumption, inefficiency, equipment constraints, and the influence of measuring personnel, this study proposes a non-contact method for measuring sow backfat thickness based on a residual network and feature visualization. Compared with the VGG16 model and the model proposed by YU M et al. [31], the model in this study achieved an R2 of 0.94 on the test set, higher than that of the other two models, and an MAE of 0.44 mm, lower than that of the other two models, indicating better performance. In addition, compared with the model before segmentation, the R2 increased by 3.3% and the MAE decreased by 18.5%, which verifies the feasibility of removing irrelevant features to improve the accuracy of the sow backfat thickness detection model.
Feature visualization not only provides a method for improving the accuracy of the model but also corroborates the conclusions of earlier researchers on sow backfat thickness. We also found that the lateral contour features of the sow are associated with backfat thickness. Whether building a model on the feature information of these specific areas can improve detection accuracy while reducing model size needs to be studied further.

Author Contributions

Conceptualization, T.C. and D.X.; methodology, T.C.; software, T.C.; validation, X.L. (Xuan Li); formal analysis, X.L. (Xiaolei Liu) and D.X.; investigation, T.C.; resources, X.L. (Xiaolei Liu); data curation, H.L.; writing—original draft preparation, T.C.; writing—review and editing, D.X.; visualization, H.W.; supervision, D.X.; project administration, X.L. (Xuan Li) and D.X.; funding acquisition, X.L. (Xuan Li). All authors have read and agreed to the published version of the manuscript.

Funding

Hubei Provincial Science and Technology Major Project (2022ABA002); Wuhan Biological Breeding Major Project (2022021302024853); Huazhong Agricultural University—Chinese Academy of Agricultural Sciences, Shenzhen Institute of Agricultural Genomics Cooperative Fund Project (SZYJY2022031); Huazhong Agricultural University Independent Science and Technology Innovation Fund (2662022XXYJ009); Intelligent Phenotypic Testing of Livestock and Poultry for Major Agricultural Projects (2022ZD0401802).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Leng, B.; Ji, X.; Zeng, H.; Tu, G. Study on technical efficiency of large-scale pig breeding in China. Zhejiang J. Agric. 2018, 30, 1082–1088. [Google Scholar]
  2. Kou, Z.; Wang, C.; Ren, Z.; Pang, W. Research Progress on Backfat and its Application in Pig Industry. Pig Breed. 2021, 49–54. [Google Scholar] [CrossRef]
  3. Cheng, C.; Wu, X.; Zhang, X.; Peng, J. Obesity of Sows at Late Pregnancy Aggravates Metabolic Disorder of Perinatal Sows and Affects Performance and Intestinal Health of Piglets. Animals 2019, 10, 49. [Google Scholar] [CrossRef] [PubMed]
  4. Ao, W.; Zhai, Y.; Li, N.; Qi, X.; Yang, X. Correlation of backfat thickness and litter performance of reproductive sows. Anim. Husb. Vet. Med. 2022, 54, 9–12. [Google Scholar]
  5. Li, X.; Yu, M.; Xu, D.; Zhao, S.; Tan, H.; Liu, X. Non-Contact Measurement of Pregnant Sows’ Backfat Thickness Based on a Hybrid CNN-ViT Model. Agriculture 2023, 13, 1395. [Google Scholar] [CrossRef]
  6. Feng, J.; Wang, P.; Zhang, C.; Zhang, Z.; Wang, B. Research progress of pig back-fat detection techniques. J. Northeast Agric. Univ. 2019, 50, 87–96. [Google Scholar]
  7. Whiteman, J.V.; Whatley, J.A.; Hillier, J.C. A Further Investigation of Specific Gravity as a Measure of Pork Carcass Value. J. Anim. Sci. 1953, 12, 859–869. [Google Scholar] [CrossRef]
  8. Gan, H.; Ou, M.; Zhao, F.; Xu, C.; Li, S.; Chen, C.; Xue, Y. Automated piglet tracking using a single convolutional neural network. Biosyst. Eng. 2021, 205, 48–63. [Google Scholar] [CrossRef]
  9. Kashiha, M.; Bahr, C.; Ott, S.; Moons, C.P.; Niewold, T.A.; Ödberg, F.O.; Berckmans, D. Automatic weight estimation of individual pigs using image analysis. Comput. Electron. Agric. 2014, 107, 38–44. [Google Scholar] [CrossRef]
  10. Navarro-Jover, J.M.; Alcañiz-Raya, M.; Gómez, V.; Balasch, S.; Moreno, J.R.; Grau-Colomer, V.; Torres, A.G. An automatic color-based computer vision algorithm for tracking the position of piglets. Span. J. Agric. Res. 2009, 7, 535. [Google Scholar] [CrossRef]
  11. Wongsriworaphon, A.; Arnonkijpanich, B.; Pathumnakul, S. An approach based on digital image analysis to estimate the live weights of pigs in farm environments. Comput. Electron. Agric. 2015, 115, 26–33. [Google Scholar] [CrossRef]
  12. Nir, O.; Parmet, Y.; Werner, D.; Adin, G.; Halachmi, I. 3D Computer-vision system for automatically estimating heifer height and body mass. Biosyst. Eng. 2018, 173, 4–10. [Google Scholar] [CrossRef]
  13. Zhang, L.; Chen, L.; Li, L.; Wu, H.; Chen, S.; Wang, K.; Zhang, L.; Wang, J. Fast and accurate estimation of the pig backfat thickness from B-scan image with FCN model. Trans. Chin. Soc. Agric. Eng. 2022, 38, 183–188. [Google Scholar]
  14. Basak, J.K.; Paudel, B.; Deb, N.C.; Kang, D.Y.; Moon, B.E.; Shahriar, S.A.; Kim, H.T. Prediction of body composition in growing-finishing pigs using ultrasound based back-fat depth approach and machine learning algorithms. Comput. Electron. Agric. 2023, 213, 108269. [Google Scholar] [CrossRef]
  15. Li, Q.; Peng, Y. Pork backfat thickness on-line detection methods using machine vision. Trans. Chin. Soc. Agric. Eng. 2015, 31, 256–261. [Google Scholar]
  16. Fernandes, A.F.A.; Dórea, J.R.R.; Valente, B.D.; Fitzgerald, R.; Herring, W.; Rosa, G.J.M. Comparison of data analytics strategies in computer vision systems to predict pig body composition traits from 3D images. J. Anim. Sci. 2020, 98, skaa250. [Google Scholar] [CrossRef] [PubMed]
  17. Xue, H.; Shen, M.; Liu, L.; Chen, J.; Shan, W.; Sun, Y. Estrus Detection Method of Parturient Sows Based on Improved YOLO v5s. Trans. Chin. Soc. Agric. Mach. 2023, 54, 263–270. [Google Scholar]
  18. Siddiqi, M.H.; Alsayat, A.; Alhwaiti, Y.; Azad, M.; Alruwaili, M.; Alanazi, S.; Kamruzzaman, M.M.; Khan, A. A Precise Medical Imaging Approach for Brain MRI Image Classification. Comput. Intell. Neurosci. 2022, 2022, 6447769. [Google Scholar] [CrossRef]
  19. Saqlain, M.; Rubab, S.; Khan, M.M.; Ali, N.; Ali, S. Hybrid Approach for Shelf Monitoring and Planogram Compliance (Hyb-SMPC) in Retails Using Deep Learning and Computer Vision. Math. Probl. Eng. 2022, 2022, 4916818. [Google Scholar] [CrossRef]
  20. Li, X.; Wang, T. Goat posture recognition based on YOLOv3-SE-RE model. Intell. Comput. Appl. 2023, 13, 171–177. [Google Scholar]
  21. Ning, Y.; Yang, Y.; Li, Z.; Wu, X.; Zhang, Q. Detecting and counting pig number using improved YOLOv5 in complex scenes. Trans. Chin. Soc. Agric. Eng. 2022, 38, 168–175. [Google Scholar]
  22. Zhang, H.; Wu, J.; Li, Y. Recognition Method of Feeding Behavior of Multi-target Beef Cattle. Trans. Chin. Soc. Agric. Mach. 2020, 51, 259–267. [Google Scholar]
  23. Akiba, T.; Sano, S.; Yanase, T.; Ohta, T.; Koyama, M. Optuna: A Next-generation Hyperparameter Optimization Framework. arXiv 2019, arXiv:1907.10902. [Google Scholar]
  24. Wang, H.; Naidu, R.; Michael, J.; Kundu, S.S. SS-CAM: Smoothed Score-CAM for Sharper Visual Feature Localization. arXiv 2020, arXiv:2006.14255v3. [Google Scholar]
  25. Fukui, H.; Hirakawa, T.; Yamashita, T.; Fujiyoshi, H. Attention Branch Network: Learning of Attention Mechanism for Visual Explanation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 10697–10706. [Google Scholar]
  26. Yang, P.; Sang, J.; Zhang, B.; Feng, Y.; Yu, J. Survey on Interpretability of Deep Models for Image Classification. J. Softw. 2023, 34, 230–254. [Google Scholar]
  27. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. arXiv 2013, arXiv:1311.2901v3. [Google Scholar]
  28. Teng, G.; Shen, Z.; Zhang, J.; Shi, C.; Yu, J. Non-contact sow body condition scoring method based on Kinect sensor. Trans. Chin. Soc. Agric. Eng. 2018, 34, 211–217. [Google Scholar]
  29. Frizzi, S.; Bouchouicha, M.; Ginoux, J.; Moreau, E.; Sayadi, M. Convolutional neural network for smoke and fire semantic segmentation. IET Image Process. 2021, 15, 634–647. [Google Scholar] [CrossRef]
  30. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar]
  31. Yu, M.; Zheng, H.; Xu, D.; Shuai, Y.; Tian, S.; Cao, T.; Zhou, M.; Zhu, Y.; Zhao, S.; Li, X. Non-contact detection method of pregnant sows backfat thickness based on two-dimensional images. Anim. Genet. 2022, 53, 769–781. [Google Scholar] [CrossRef]
Figure 1. Image of the back of the sow.
Figure 2. Architecture of the ResNet model.
Figure 3. Comparative results of test sample prediction.
Figure 4. Input image of sow back.
Figure 5. Detection position of backfat.
Figure 6. Feature map of the channel of the first convolutional layer. The areas inside the red and yellow circles represent the relevant and irrelevant areas of the image of the sow's back, respectively.
Figure 7. Feature map of the channel of the second residual module layer. The areas inside the red and yellow circles represent the relevant and irrelevant areas of the image of the sow's back, respectively.
Figure 8. Feature map of the output channel of the third convolutional layer. The areas inside the red and yellow circles represent the relevant and irrelevant areas of the image of the sow's back, respectively.
Figure 9. Feature map of the channel of the fourth residual module layer. The areas inside the red and yellow circles represent the relevant and irrelevant areas of the image of the sow's back, respectively.
Figure 10. Segmentation process.
Figure 11. Comparative results of test sample prediction.
Table 1. Sow body condition scores.
Score    Backfat Thickness (mm)
1        <10
2        ≥10~15
3        >15~18
4        >18~22
5        >22
Table 2. Sample division.
Dataset          Gestation    Body Condition Score 2 / 3 / 4    Total
Training set     Early        9 / 5 / 3                         37
                 Late         6 / 8 / 6
Validation set   Early        2 / 1 / 0                         6
                 Late         1 / 1 / 1
Test set         Early        2 / 1 / 0                         5
                 Late         0 / 1 / 1
Table 3. Test set evaluation indicator results.
Test Set    MAE (mm)    R2
1           0.48        0.95
2           0.45        0.89
3           0.66        0.91
4           0.56        0.92
5           0.55        0.95
Table 4. Results of five test sets.
Test Set    MAE (mm)    R2
1           0.46        0.95
2           0.45        0.91
3           0.65        0.92
4           0.26        0.98
5           0.40        0.96
Table 5. Average results for five test sets.
Model                        R2      MAE (mm)
VGG16                        0.66    1.39
YU M et al. [31]             0.76    1.20
ResNet (Residual Network)    0.94    0.44
Table 6. Results of the five test sets before and after segmentation.
Test Set      MAE before Segmentation (mm)    R2 before Segmentation    MAE after Segmentation (mm)    R2 after Segmentation
Test set 1    0.48                            0.95                      0.46                           0.95
Test set 2    0.45                            0.89                      0.45                           0.91
Test set 3    0.66                            0.91                      0.65                           0.92
Test set 4    0.56                            0.92                      0.26                           0.98
Test set 5    0.55                            0.95                      0.40                           0.96
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Cao, T.; Li, X.; Liu, X.; Liang, H.; Wang, H.; Xu, D. Research on Contactless Detection of Sow Backfat Thickness Based on Segmented Images with Feature Visualization. Appl. Sci. 2024, 14, 752. https://doi.org/10.3390/app14020752

