Abstract

Image segmentation plays an important role in daily life. Traditional K-means image segmentation suffers from randomness and easily falls into local optima, which greatly reduces segmentation quality. To alleviate these problems, a K-means image segmentation method based on improved manta ray foraging optimization (IMRFO) is proposed. IMRFO uses Lévy flight to improve the flexibility of individual manta rays, introduces a random walk learning strategy that prevents the algorithm from falling into local optima, and finally adopts the learning idea of particle swarm optimization to enhance convergence accuracy, which improves the global and local optimization ability of the algorithm simultaneously. Because the probability that K-means falls into a local optimum is reduced, the optimized K-means is more stable. On 12 standard test functions, IMRFO is compared with 7 basic algorithms and 4 variant algorithms; the optimization indices and statistical tests show that IMRFO has better optimization ability. Eight underwater images were selected for the segmentation experiment and 11 algorithms were compared. The results show that IMRFO achieves better PSNR, SSIM, and FSIM on each image, and the optimized K-means image segmentation performs better.

1. Introduction

In recent years, image segmentation has attracted much attention from researchers and is of great significance to the future of image processing. As a key step of image processing, image segmentation plays an important role in extracting objects of interest from images and currently has important research value in medicine, agriculture, ocean engineering, and other fields. Image segmentation methods can be divided into four categories: threshold segmentation, region segmentation, edge segmentation, and segmentation methods based on specific theories. The clustering algorithm is a typical unsupervised learning algorithm that solves problems through cluster differentiation; it is simple, easy to understand, and has been successfully applied in many fields [1]. Clustering-based image segmentation has also been studied successfully. K-means is the most common and simplest clustering method among them, but it suffers from large randomness and easily falls into local optima, which makes it impossible to control the cluster centers reasonably. Swarm intelligence algorithms have global optimization performance and strong versatility and are suitable for parallel processing; this type of algorithm can find the optimal solution, or a good approximation to it, within a certain period of time [2]. Intelligent optimization algorithms open up a new way for image segmentation. In terms of clustering segmentation, Hrosik R. C. and others improved the K-means clustering algorithm based on the firefly algorithm, which achieved better average segmentation error, peak signal-to-noise ratio, and structural similarity index on medical images [3]; Li H. and others proposed a K-means clustering algorithm based on dynamic particle swarm optimization (DPSO), which had better visual effects than traditional K-means clustering in image segmentation and obvious advantages in improving segmentation quality and efficiency [4]; Shubham and others applied the grey wolf optimizer (GWO) [5] to the segmentation of satellite images [6]. Therefore, intelligent optimization algorithms are of great significance in the field of image segmentation.

With the rapid development of swarm intelligence algorithms, a variety of new algorithms are emerging. In addition to the algorithms mentioned above, there are others such as monarch butterfly optimization (MBO) [7], elephant herding optimization (EHO) [8], the moth search (MS) algorithm [9], and Harris hawks optimization (HHO) [10]. Manta ray foraging optimization (MRFO) is a swarm intelligence optimization algorithm proposed in 2020. With excellent search ability, few parameters, and a simple, easily understood model, it outperforms particle swarm optimization (PSO) [11, 12], the genetic algorithm (GA) [13, 14], differential evolution (DE) [15, 16], cuckoo search (CS) [17], the gravitational search algorithm (GSA) [18], and ABC [19] on some function optimization problems [20], and it has been successfully applied to solar energy [21, 22], ECG [23], generators [24, 25], power systems [26], cogeneration energy systems [27], geophysical inversion problems [28], directional overcurrent relays [29], feature selection [30], hybrid energy systems [31], and sewage treatment [32]. Although MRFO has good optimization ability, it still has its own defects. In complex problems, its search ability is limited and the diversity of the population cannot be guaranteed, mainly because individuals search in an orderly, highly dependent, and inflexible manner.

At present, researchers have noticed this point and carried out successive studies. For example, Mohamed Abd Elaziz combines fractional calculus with MRFO to correct the direction of manta ray movement; this algorithm has been verified on the CEC 2017 test functions and applied to image segmentation problems with good feasibility [33]. Mohamed H. Hassan combines a gradient-based optimizer with MRFO to reduce the probability that the algorithm falls into a local optimum, which works well in single-objective and multiobjective economic emission scheduling [34]. Haitao Xu uses adaptive weighting and chaos to improve MRFO so as to handle thermodynamic problems efficiently [35]. Essam H. Houssein uses opposition-based learning to initialize the population, increasing its diversity, and applies the method to threshold image segmentation with good segmentation quality [36]. Bibekananda Jena adds an attack capability to MRFO, which allows it to jump out of local optima and find a globally optimal solution; it is then applied to 3D Tsallis image segmentation [37]. Mihailo Micev fuses simulated annealing (SA) with MRFO and applies it to a proportional-integral-derivative (PID) controller, where the fused algorithm is superior to other algorithms [38]. In addition, Serdar and others adopt opposition-based learning and SA to improve the convergence of MRFO, obtaining better control performance when applied to a fractional-order proportional-integral-derivative (FOPID) controller [39]. Although the currently proposed variants of MRFO have achieved some results, the following problems still exist:
(1) Most scholars fuse other algorithms to improve the search ability, but this brings higher time complexity, and the fused algorithms may not complement each other well enough to give satisfactory results.
(2) Opposition-based learning can only solve inversely in a certain space; in complex high-dimensional situations, there are few individual optimization strategies, and they cannot perfectly jump out of the local optimal state.
(3) In the optimization process, the above algorithms cannot completely balance local search and global search capabilities, which results in insufficient convergence accuracy.

Based on the above analysis, this paper presents an improved manta ray foraging algorithm. It uses random walk learning to make individuals wander randomly in space, increasing the diversity of the population and avoiding premature convergence; Lévy flight is then used for alternating long- and short-distance searches to balance the local and global search of the algorithm; finally, the learning idea of particle swarm optimization is introduced, with two learning factors used to improve convergence accuracy. Twelve benchmark functions are used to verify the validity and feasibility of IMRFO, and eight underwater images are then used in K-means image segmentation. The results show that IMRFO has better generalization ability and better segmentation quality.

The innovations and contributions of this paper are as follows:
(i) A random walk learning strategy is designed to increase the diversity of the population and reduce, to a certain extent, the probability of the algorithm falling into local optima.
(ii) Lévy flight and learning factors are introduced to balance the search ability of the algorithm, giving it a good convergence effect.
(iii) On 12 standard test functions, IMRFO is compared with 7 other algorithms to show its superiority and feasibility. Two statistical tests are then used to confirm the optimization performance of the algorithm, it is compared with recently proposed variant algorithms, and finally ablation experiments are performed; all the results show that IMRFO has good search ability.
(iv) IMRFO is applied to K-means underwater image segmentation. The results for 11 algorithms show that IMRFO performs well.

The structure of this paper is as follows: Section 2 introduces the basic MRFO algorithm. Section 3 introduces the improved IMRFO algorithm and the related analysis. Section 4 describes the process of IMRFO optimizing K-means image segmentation. Section 5 tests the performance of IMRFO and compares the related algorithms. Section 6 describes and analyses the performance of each algorithm in K-means image segmentation. Section 7 summarizes the experimental results. Section 8 discusses the advantages and disadvantages of IMRFO and future research directions.

2. Manta Ray Foraging Optimization

Manta rays feed on plankton, mainly microscopic aquatic animals. When feeding, they funnel water and prey into their mouths with their horn-shaped cephalic lobes and then filter the prey out of the water through their modified gill rakers. Individual manta rays work together to find the best food source. Inspired by this behavior, the algorithm is divided into three stages: chain feeding, spiral feeding, and somersault foraging.

2.1. Chain Feeding

At this stage, the manta ray population is arranged in an ordered chain and collaborates in feeding, which maximizes the amount of plankton captured. The mathematical model of the chain feeding process can be expressed as follows:

x_i^d(t+1) = x_i^d(t) + r·(x_best^d(t) − x_i^d(t)) + α·(x_best^d(t) − x_i^d(t)),  i = 1
x_i^d(t+1) = x_i^d(t) + r·(x_{i−1}^d(t) − x_i^d(t)) + α·(x_best^d(t) − x_i^d(t)),  i = 2, …, N    (1)
α = 2r·sqrt(|log(r)|)

In formula (1), x_i^d(t) denotes the d-dimensional information of the location of the i-th manta ray in generation t, r is a random number that obeys a uniform distribution on [0,1], α is the weight factor, and x_best^d(t) is the d-dimensional information of the best location found so far. The manta ray at position i depends on the manta ray at position i−1 and on the best food position found so far, N represents the population size, and the update of the first manta ray depends only on the optimal location.

2.2. Spiral Feeding

When a manta ray finds a good food source in a certain space, each individual not only approaches the manta ray in front of it but also moves spirally toward the food. The spiral-feeding process can be represented by the following mathematical model:

x_i^d(t+1) = x_best^d + r·(x_best^d(t) − x_i^d(t)) + β·(x_best^d(t) − x_i^d(t)),  i = 1
x_i^d(t+1) = x_best^d + r·(x_{i−1}^d(t) − x_i^d(t)) + β·(x_best^d(t) − x_i^d(t)),  i = 2, …, N    (2)
β = 2·exp(r1·(T − t + 1)/T)·sin(2π·r1)

where β is a weight factor representing the spiral motion, T is the maximum number of iterations, and r1 is the rotation factor, a uniform random number on [0,1]. In addition, in order to improve the efficiency of population foraging, MRFO randomly generates a new location in the search space during the optimization process and then performs a spiral search around that location. Its mathematical model is as follows:

x_rand^d = Lb^d + r·(Ub^d − Lb^d)
x_i^d(t+1) = x_rand^d + r·(x_rand^d − x_i^d(t)) + β·(x_rand^d − x_i^d(t)),  i = 1
x_i^d(t+1) = x_rand^d + r·(x_{i−1}^d(t) − x_i^d(t)) + β·(x_rand^d − x_i^d(t)),  i = 2, …, N    (3)

where x_rand^d represents a new random location in the search space and Lb^d and Ub^d are the lower and upper bounds of the d-th dimension.

2.3. Somersault Foraging

When a manta ray finds a food source, its position can be regarded as a pivot. Each manta ray tends to swim around this pivot and somersault to a new location. Its mathematical model is as follows:

x_i^d(t+1) = x_i^d(t) + S·(r2·x_best^d − r3·x_i^d(t)),  i = 1, …, N    (4)

S is the flip factor, which determines the flip distance (S = 2 in the original MRFO), and r2 and r3 are two random numbers uniformly distributed on [0,1]. Depending on the value of S, individual manta rays flip to new locations in the search space between their current position and its symmetric position about the optimal solution.
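For concreteness, a compact Python sketch of one generation of these standard MRFO updates, as reconstructed in formulas (1)-(4), is given below. It is illustrative only and not the authors' implementation (the paper's experiments were run in MATLAB); bound handling and fitness evaluation are omitted, and all variable names are assumptions.

import numpy as np

def mrfo_generation(X, x_best, t, T, lb, ub, rng=None):
    # One generation of basic MRFO for a population X of shape (N, D).
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    X_new = X.copy()
    for i in range(N):
        r = rng.random(D)
        if rng.random() < 0.5:                        # spiral (cyclone) foraging
            r1 = rng.random(D)
            beta = 2 * np.exp(r1 * (T - t + 1) / T) * np.sin(2 * np.pi * r1)
            if t / T < rng.random():                  # formula (3): around a random point
                x_rand = lb + rng.random(D) * (ub - lb)
                ref = x_rand if i == 0 else X_new[i - 1]
                X_new[i] = x_rand + r * (ref - X[i]) + beta * (x_rand - X[i])
            else:                                     # formula (2): around the best point
                ref = x_best if i == 0 else X_new[i - 1]
                X_new[i] = x_best + r * (ref - X[i]) + beta * (x_best - X[i])
        else:                                         # formula (1): chain foraging
            ref = x_best if i == 0 else X_new[i - 1]
            alpha = 2 * r * np.sqrt(np.abs(np.log(r)))
            X_new[i] = X[i] + r * (ref - X[i]) + alpha * (x_best - X[i])
    # formula (4): somersault foraging around the best position, with S = 2
    S = 2.0
    r2, r3 = rng.random((N, D)), rng.random((N, D))
    return X_new + S * (r2 * x_best - r3 * X_new)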

3. Improved Manta Ray Foraging Optimization

From the above formulas, it can be seen that frequent communication between individuals and orderly cooperation improve the search ability of the algorithm and allow a wide-ranging search. On the one hand, however, the lack of individual initiative in the population limits its exploitation ability. On the other hand, updates within the population are tied to the best location; in high-dimensional complex problems the optimal position changes little between iterations, so consecutive updates differ only slightly, which limits the algorithm's optimization ability. Therefore, a flexible update strategy is needed to improve the exploitation ability and local convergence of the algorithm. This paper uses the Lévy flight strategy to reduce blind individual search, random walk learning to prevent the algorithm from falling into a local optimum, and the learning idea of particle swarm optimization to improve the search accuracy of the algorithm.

3.1. Why Is Each Modification Proposed?

MRFO is based on a group of animals collaborating in feeding, which results in a limited set of optimization behaviors and a lack of flexibility and fineness. Therefore, individual initiative is required to increase the diversity of the population in order to find high-quality solutions in the search space.

Therefore, this paper analyses and addresses the defects of the algorithm from the following three points. Firstly, the population individuals should be better distributed over the whole space so as to widen the algorithm's view and improve its global search ability. Lévy flight is a classical strategy that moves through a given space by alternating long and short jumps and has been used by many scholars to improve the search ability of algorithms. Secondly, some individuals need to act independently rather than being constrained by group behavior. Random walk learning is an uncertain way of walking; whereas the traditional random walk only explores local areas, the random walk learning designed in this paper produces large location differences between individuals and improves the population diversity of the algorithm. Finally, information sharing among individuals is needed to improve the local search ability of the algorithm and find high-quality solutions. The learning factors are derived from the particle swarm optimization algorithm and are used to speed up the information exchange within the population, prevent early invalid search, improve local search ability, and improve the accuracy of the solution to a certain extent.

3.2. Lévy Flight Strategy

When manta ray individuals perform the chain search, all individuals follow the population, which leads to a lack of flexibility and prevents a wider search range. Therefore, the Lévy flight strategy [40, 41] is introduced so that individuals can search over both long and short distances, increasing the diversity of the population and allowing individuals to spread through the whole space. The position update that incorporates the Lévy flight strategy is as follows:

x_i(t+1) = x_i(t) + l ⊕ Lévy(β)    (5)

In formula (5), x_i(t) represents the position of the i-th individual at the t-th iteration, ⊕ denotes point-to-point (element-wise) multiplication, l = 0.01(x_i(t) − x_p) is a step length control parameter, and x_p represents the position of the best individual in the population.

The Lévy flight formula is as follows [42]:

Lévy(β) = (r4·σ) / |r5|^(1/β)

where r4 and r5 are random numbers within the range [0,1] and β generally takes the value 1.5. σ is calculated as follows:

σ = [Γ(1+β)·sin(πβ/2) / (Γ((1+β)/2)·β·2^((β−1)/2))]^(1/β)

where Γ(·) denotes the gamma function. A schematic diagram of Lévy flight is shown in Figure 1. Lévy flight can search both long and short distances in a certain space, balancing the global and local search of the algorithm.
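As a concrete illustration, the following Python sketch generates Lévy-flight steps and applies the position update of formula (5). It is not the authors' MATLAB implementation; here the step is generated with the standard Mantegna scheme, in which r4 and r5 are drawn from normal distributions, which is one common way to realize the formula above.

import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    # Mantegna's algorithm for Levy-stable steps with exponent beta.
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    r4 = rng.normal(0.0, sigma, dim)
    r5 = rng.normal(0.0, 1.0, dim)
    return r4 / np.abs(r5) ** (1 / beta)

def levy_update(x_i, x_best, beta=1.5, rng=None):
    # Formula (5): x_i(t+1) = x_i(t) + l (.) Levy(beta), with l = 0.01 * (x_i - x_best).
    l = 0.01 * (x_i - x_best)
    return x_i + l * levy_step(x_i.size, beta, rng)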

3.3. Random Walk Learning

In the optimization process, MRFO may fall into a local optimum, which makes the current best individual unreliable, so it is necessary to disperse the individuals to look for a better solution. Unlike an ordinary random walk, learning factors based on the best and worst positions are introduced so that an individual's escape is directional and unreasonable wandering is reduced. The specific mathematical model of RWL is given by formula (8).

In formula (8), the sinusoidal random factor uses the mathematical properties of the sinusoidal function to fluctuate toward the optimal solution and continuously adjusts the step size based on the worst position of the current population, so that the search path can span the entire solution space. M is the maximum number of iterations, and c1 and c2 represent two learning factors, random numbers that obey a normal distribution; one term controls the step size and the other controls the direction. Figure 2 shows the distribution of individuals without RWL in panel (a) and with RWL in panel (b); the introduction of RWL lets individuals grasp global information, makes the individual distribution more even, and helps find the global optimal solution.
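The exact update is defined by formula (8) in the paper. Purely as an illustration of the idea described above, a generic directed random walk between the current best and worst positions with a shrinking sinusoidal step factor could look as follows in Python; every name, coefficient, and the step form here are assumptions for demonstration, not the authors' RWL.

import numpy as np

def random_walk_learning(X, x_best, x_worst, t, M, rng=None):
    # Illustrative sketch only: each individual takes a directed random step
    # toward the best position and away from the worst one, with a sinusoidal
    # step factor that shrinks over the iterations t = 1..M.
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    c1 = rng.standard_normal((n, 1))          # learning factor toward the best
    c2 = rng.standard_normal((n, 1))          # learning factor away from the worst
    step = np.abs(np.sin(rng.uniform(0, np.pi, (n, 1)))) * (1 - t / M)
    return X + step * (c1 * (x_best - X) - c2 * (x_worst - X))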

3.4. PSO Algorithm Learning Ideas

Two learning factors are used in PSO to exploit local solutions, which can effectively improve the convergence accuracy of the algorithm. Therefore, two learning factors are introduced in formula (9), where b1 and b2 are the learning factors and BestX is the optimal position of the current population. As can be seen from the formula, this strategy exploits the region between the current individual and the optimal individual to enhance the local search of the algorithm.
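The exact update is formula (9) in the paper. As a purely illustrative sketch, a generic PSO-style learning-factor update of the kind described above could look as follows in Python; b1 = 0.2 and b2 = 0.8 follow the parameter settings reported in Section 5.1, and everything else is an assumption.

import numpy as np

def learning_factor_update(X, best_x, b1=0.2, b2=0.8, rng=None):
    # Illustrative only: pull each individual toward the current population
    # optimum BestX with two learning factors, exploiting the region between
    # the current and the best individual.
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    return X + b1 * r1 * (best_x - X) + b2 * r2 * (best_x - X)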

3.5. Improved Manta Ray Foraging Optimization

To improve the local search capability of MRFO and reduce the probability of falling into local optima, an improved manta ray foraging algorithm is presented in this paper. The algorithm applies random walk learning after each iteration to keep the population from stagnating in a local state and to improve its global search ability; the Lévy flight mechanism is then combined to reduce the blindness of the search and to balance the exploration and exploitation of the algorithm; and two learning factors borrowed from particle swarm optimization are used to improve the search accuracy, so that the algorithm is effectively improved in both local and global aspects. The specific pseudocode is shown in Algorithm 1.

Input:
 M: Maximum number of iterations
 N: Population size
Output: Xbest, fg
Initialize the population
t = 1;
While (t < M)
 Update the position of the population according to formula (8) % RWL
 For i = 1 to N do
  If rand < 0.5 then
   If t/M < rand then
    Update the position of the individual according to formula (3)
   Else
    Update the position of the individual according to formula (2)
   End if
  Else
   Update the position of the individual according to formula (1)
  End if
 End for
 Update the position of the population according to formula (5) % Lévy flight
 Get the current best and worst
 For i = 1 to N do
  Update the position of the individual according to formula (4)
 End for
 Update the positions of the individuals according to formula (9) % PSO learning idea
 Get the new Xbest, fg, Xworst;
 t = t + 1;
End while
Get the final Xbest, fg.

4. K-Means Image Segmentation Based on IMRFO

The traditional K-means algorithm selects K cluster centers at random, so the selection is uncertain, the final results vary widely, and the algorithm easily falls into local optima. It is therefore necessary to select appropriate initial cluster centers. Intelligent optimization algorithms have been successfully applied to K-means to mitigate its randomness and its tendency to fall into local optima. The improved manta ray foraging optimization is used to optimize K-means so that the initial cluster centers are well controlled. The objective function is as follows:

f = Σ_{j=1}^{K} Σ_{x_i ∈ C_j} ||x_i − c_j||^2

where x_i is a pixel gray value of the image, c_j is the j-th clustering center, and C_j is the set of pixels assigned to that center. The optimal initial clustering centers are obtained by IMRFO by minimizing the fitness value of this objective function.
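A minimal Python sketch of this fitness function is given below, assuming the within-cluster sum-of-squared-errors objective reconstructed above; the pixel values and candidate centers are NumPy arrays, and the names are illustrative.

import numpy as np

def kmeans_fitness(pixels, centers):
    # pixels:  1-D array of gray values (the flattened image)
    # centers: 1-D array of K candidate cluster centers proposed by IMRFO
    # Each pixel is assigned to its nearest center and the squared distances are summed.
    d = (pixels[:, None] - centers[None, :]) ** 2   # distances to every center
    return np.sum(np.min(d, axis=1))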

K-means image segmentation based on IMRFO is mainly divided into two parts:
(1) Use the global search capability of IMRFO to find the best initial cluster centers in the image pixel set
(2) Run the K-means algorithm with the initial cluster centers output by IMRFO to segment the image

The specific flow chart is shown in Figure 3.
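As an illustration of part (2), a minimal Python sketch using scikit-learn is given below. It is not the authors' MATLAB pipeline, and the variable names are assumptions; the IMRFO-found centers are passed as the fixed initialization of K-means (K = 3 as in Section 6).

import numpy as np
from sklearn.cluster import KMeans

def segment_with_centers(gray_image, init_centers):
    # gray_image:   2-D array of pixel gray values
    # init_centers: array of K initial centers found by IMRFO, shape (K,)
    pixels = gray_image.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=len(init_centers),
                init=np.asarray(init_centers, dtype=float).reshape(-1, 1),
                n_init=1)                     # use the IMRFO centers as-is
    labels = km.fit_predict(pixels)
    # Replace every pixel with its final cluster center to form the segmented image
    segmented = km.cluster_centers_[labels].reshape(gray_image.shape)
    return segmented, labels.reshape(gray_image.shape)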

5. Performance Analysis and Test

5.1. Performance Test

To verify the effectiveness and feasibility of IMRFO, 12 benchmark functions [43, 44] are selected to test its function optimization ability. The specific test function information is shown in Table 1. F1-F6 are unimodal functions, F7-F11 are complex multimodal functions, and F12 is a fixed-dimension function. In addition, F1-F11 are tested in different dimensions to verify the optimization ability of the algorithm in high-dimensional cases. To prove that IMRFO is competitive, seven algorithms are compared: MRFO, the Honey Badger Algorithm (HBA) [45], GWO, PSO, the Whale Optimization Algorithm (WOA) [46], Teaching-Learning-Based Optimization (TLBO) [47], and the Flower Pollination Algorithm (FPA) [48]. HBA is a swarm intelligence algorithm proposed in 2021, while the other algorithms are classical ones that have been extensively studied. The number of iterations and the population size of each algorithm are 500 and 100, respectively. In HBA, β = 6 and C = 2; in FPA, the selection probability p = 0.8; b1 and b2 in IMRFO are 0.2 and 0.8, respectively. The experimental environment is Windows 10 64-bit; the software is MATLAB R2019b; the memory is 16 GB; the processor is an Intel(R) Core(TM) i5-10200H CPU @ 2.40 GHz. The average, optimal value, and standard deviation of the results of 30 runs of each algorithm are calculated; where IMRFO attains the best value, the entry is bolded. The optimization results of each algorithm are shown in Tables 2 and 3.

On the one hand, Tables 2 and 3 show that IMRFO has an obvious advantage in search ability, with results better than the other algorithms on every function, and the increase in dimension does not reduce its search ability. On the other hand, on F1, F6, F8-F10, and F12, MRFO itself already optimizes well and can find the theoretical optimal value; IMRFO achieves the same result, which shows that IMRFO does not weaken the original algorithm's optimization ability. Overall, IMRFO is effectively improved in stability and accuracy, indicating that the introduction of multiple strategies enhances the optimization ability and reduces the probability of being trapped in a local optimum.

5.2. Statistical Test

To verify whether IMRFO and the other seven algorithms differ significantly in global optimization, the 30-dimensional results of each algorithm are tested. The Wilcoxon rank-sum test is used to detect differences between two algorithms. Assume H0: the two algorithms have the same performance; H1: there is an obvious difference between the two algorithms. The P value of the test is used to measure the difference: when P < 0.05, H0 is rejected, showing a significant difference between the two algorithms; when P > 0.05, H0 is accepted, indicating that the two algorithms have the same global optimization performance. For clarity, N/A is used to represent cases with P > 0.05. The Wilcoxon test results are shown in Table 4. At the same time, in order to show the comprehensive optimization ability of IMRFO over the whole test set, the averages and variances of each algorithm are subjected to the Friedman test [49], and the final ranking is calculated to measure the universality of the algorithms on the 12 test functions. The test results are shown in Table 5.

From Table 4, it can be seen that IMRFO differs significantly from the other algorithms; only on a few functions where MRFO itself already searches well is the difference not obvious. From Table 5, IMRFO ranks best over the search results of all functions, which also indicates its good universality.
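For reference, a minimal Python sketch of how such tests can be computed with SciPy is given below. It is only illustrative (the paper's experiments were run in MATLAB), and the arrays are placeholder data standing in for the 30 run results of each algorithm.

import numpy as np
from scipy.stats import ranksums, friedmanchisquare

# Placeholder: 30 independent run results per algorithm on one benchmark function
imrfo_runs = np.random.rand(30)
mrfo_runs = np.random.rand(30)
pso_runs = np.random.rand(30)

# Wilcoxon rank-sum test between two algorithms: p < 0.05 -> significant difference
stat, p = ranksums(imrfo_runs, mrfo_runs)
print("Wilcoxon rank-sum p-value:", p)

# Friedman test across several algorithms for the overall ranking comparison
stat_f, p_f = friedmanchisquare(imrfo_runs, mrfo_runs, pso_runs)
print("Friedman test p-value:", p_f)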

5.3. Comparison with Variants of the Algorithm

To further show the effectiveness and innovation of IMRFO, this paper compares it with the multistrategy serial cuckoo search algorithm (MSSCS) [50], the firefly algorithm with courtship learning (FACL) [51], self-adaptive cuckoo search (SACS) [52], and CSsin [53], all proposed in recent years. These four algorithms are variants of classical algorithms and have been validated on the CEC test sets. The specific parameters of each algorithm are set as follows: in MSSCS, α = 0.01, β = 1.5, Pa = 0.25, C = 0.2, PAmax = 0.35, PAmin = 0.25; in CSsin, Pmax = 0.75, Pmin = 0.25, freq = 0.5; in FACL, α = 0.01, βmin = 0.2, β = 1, γ = 1. The population size and the number of iterations of each algorithm are the same as above. Similarly, where IMRFO attains the best value, the entry is bolded. The results of each algorithm are shown in Table 6.

From Table 6, it is clear that IMRFO achieves the best value on F1-F4, F6, and F9-F12, which shows that it outperforms these algorithms on those functions. The CS variants also obtain good optimization results, especially on F5 and F7, where they reach higher accuracy. FACL performs worst, with poor optimization results but good stability. Generally speaking, IMRFO has clear advantages in function optimization, which verifies the effectiveness and innovation of the algorithm.

5.4. Convergence Analysis

In order to clearly see the optimization and convergence behavior of each algorithm on each function, the average convergence curves of each algorithm are given in Figure 4.

From Figure 4, it can be seen that IMRFO converges well and finds accurate solutions quickly, especially on F1-F4, F6, and F11. The flexible search mechanism enables the algorithm to locate the best solution quickly during the optimization process.

5.5. Ablation Experiment

In order to verify the validity and feasibility of the three strategies and their combinations, different combinations are tested to find the best one. In this paper, the algorithm combining Lévy flight with RWL is recorded as MRFO-I, the algorithm combining Lévy flight with the PSO learning strategy as MRFO-II, and the algorithm combining the PSO learning strategy with RWL as MRFO-III. In addition, the algorithm using Lévy flight alone is recorded as MRFO-IV, the algorithm using RWL alone as MRFO-V, and the algorithm using the PSO learning strategy alone as MRFO-VI. The experimental parameters are consistent with those above, and the test function dimension is 30. Where IMRFO attains the best value, the entry is bolded. The experimental results are shown in Table 7.

As can be seen from Table 7, IMRFO is the best performer among all variants, with the best criteria on every function. Its search accuracy exceeds that of the other variants, and the difference is significant especially on F2, F4, and F11. This shows that integrating the multiple strategies matters and verifies the validity and feasibility of IMRFO.

5.6. Time Complexity Analysis

Time complexity is an important measure of an algorithm; an effective improvement must balance search ability against time complexity. The basic MRFO consists of only three phases, chain foraging, spiral foraging, and somersault foraging, where chain foraging and spiral foraging are in the same loop. Let the population size be N, the maximum number of iterations be T, and the dimension be D. Macroscopically, the time complexity of a swarm intelligence algorithm is the product of the population size, the number of iterations, and the dimension, so the time complexity of MRFO, and likewise of IMRFO, can be summarized as O(TND), just like other algorithms.

Microscopically, MRFO can be written as

O(T·(N·D + N·D)) = O(T·N·D),

where the two N·D terms correspond to the chain/spiral loop and the somersault loop. Let the computation time introduced by RWL be t1, the computation time introduced by Lévy flight be t2, and the computation time of the two learning factors be t3, with other computations ignored.

IMRFO can then be summarized as

O(T·(N·D + N·D) + T·(t1 + t2 + t3)) = O(T·N·D),

since t1, t2, and t3 are each linear in N·D. Therefore, the time complexity of IMRFO does not change fundamentally; the small increase in per-iteration computation can be ignored and is worthwhile given that the optimization capability of the algorithm is effectively improved.

6. Image Segmentation Experiments

At present, image processing has been applied in many fields; images on land have been well studied, but underwater images still have research value, so eight underwater images are selected as test images. Following the literature [54], ten algorithms, PSO, DPSO, the sparrow search algorithm (SSA) [55], the modified sparrow search algorithm (MSSA) [56], ABC, MRFO, WOA, TLBO, FPA, and IMRFO, are used to optimize the K-means algorithm, and the traditional K-means algorithm is also included for image segmentation. MSSA is a newly proposed K-means-based algorithm, and the other algorithms have been successfully applied to image segmentation problems in recent years. Because the K-means clustering algorithm depends strongly on the value of K and an improper choice of K greatly affects the results, K is set to 3 to avoid interference from unrelated factors. The general parameters of the algorithms are a population size of 30 and a maximum of 100 iterations. The segmentation results of each algorithm are shown in Figures 5 and 6, where the first row represents the original images and each subsequent row represents the segmentation result of one algorithm.

It is difficult to distinguish the algorithms' segmentation results by eye alone. Therefore, three commonly used image segmentation metrics, PSNR, SSIM, and FSIM, are selected to measure the quality of each algorithm.

The peak signal-to-noise ratio (PSNR) is mainly used to measure the difference between the segmented image and the original image. The formula is as follows [57]:

RMSE = sqrt( (1/(M×Q)) Σ_{i=1}^{M} Σ_{j=1}^{Q} (I(i, j) − Seg(i, j))^2 )    (12)
PSNR = 20·log10(255 / RMSE)    (13)

In formulas (12) and (13), RMSE represents the root mean square error of the pixels; M×Q represents the size of the image; I(i, j) represents the pixel gray value of the original image; Seg(i, j) represents the pixel gray value of the segmented image. The larger the PSNR value, the better the quality of the segmented image. Generally speaking, a PSNR above 40 dB indicates excellent image quality (very close to the original image), while 30-40 dB usually indicates good quality (the distortion is perceptible but acceptable).
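A minimal Python/NumPy sketch of formulas (12) and (13) is given below; it is illustrative, and the array names are assumptions.

import numpy as np

def psnr(original, segmented):
    # original, segmented: 2-D arrays of gray values in [0, 255], same shape (M x Q)
    rmse = np.sqrt(np.mean((original.astype(float) - segmented.astype(float)) ** 2))
    return 20 * np.log10(255.0 / rmse)   # larger PSNR -> closer to the original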

Structural similarity (SSIM) is used to measure the similarity between the original image and the segmented image; the larger the SSIM, the better the segmentation result. SSIM is defined as follows [58]:

SSIM = ((2·μ_I·μ_seg + C1)·(2·σ_{I,seg} + C2)) / ((μ_I^2 + μ_seg^2 + C1)·(σ_I^2 + σ_seg^2 + C2))    (14)

In formula (14), μ_I and μ_seg represent the mean gray values of the original image and the segmented image; σ_I and σ_seg represent their standard deviations, respectively; σ_{I,seg} represents the covariance between the original image and the segmented image; C1 and C2 are constants used to ensure numerical stability. SSIM takes values in [0,1]; the larger the value, the smaller the image distortion.
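In practice the index can be computed with an existing implementation; the short Python sketch below uses scikit-image and is illustrative only (not the authors' MATLAB code), with dummy arrays standing in for the original and segmented images.

import numpy as np
from skimage.metrics import structural_similarity

# Dummy 2-D gray images standing in for the original and segmented images
original = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
segmented = np.random.randint(0, 256, (128, 128), dtype=np.uint8)

ssim_value = structural_similarity(original, segmented, data_range=255)
print("SSIM:", ssim_value)   # in [0, 1]; larger means less distortion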

The feature similarity index measure (FSIM) measures the feature similarity between the original image and the segmented image and is used to evaluate local structure and contrast information. FSIM takes values in [0,1], and the closer the value is to 1, the better the segmentation effect. FSIM is defined as follows [59]:

FSIM = ( Σ_{X∈Ω} S_L(X)·PC_m(X) ) / ( Σ_{X∈Ω} PC_m(X) )

In the above formula, Ω is the set of all pixel regions of the original image; S_L(X) is the similarity score; PC_m(X) is the phase consistency measure; T1 and T2 are constants; G is the gradient magnitude; E(X) is the response vector magnitude at position X at scale n; ε is a very small value; and A_n(X) is the local amplitude at scale n.

Each algorithm is run 10 times, and the average values of the segmentation metrics and the average running times are shown in Table 8.

To the naked eye, the images segmented by IMRFO in Figures 5 and 6 are clearer, while some algorithms produce rough segmentation with blurring. From Table 8, it can be seen that the segmentation indices of IMRFO have a clear advantage, especially on test01 and test03-test08, where at least two of the three indices are optimal. For example, the FSIM on test07 reaches 0.97 and the SSIM on test08 reaches 0.87, a significant advantage over the other algorithms. Even when an index is not optimal, IMRFO remains close to the optimal value: on test01, the SSIM of WOA is 0.7488 versus 0.7479 for IMRFO, and on test06 the PSNR of ABC is 43.3715 versus 43.1626 for IMRFO. Therefore, both the subjective visual effect and the measured results of IMRFO are better than those of the other algorithms, which demonstrates a good segmentation effect. It also indirectly confirms the good search performance of IMRFO: it alleviates the tendency of MRFO to fall into local optima and the sensitivity of K-means to the initial clustering centers, yielding excellent initial clustering centers and further improving the image segmentation quality. On the other hand, K-means alone has the shortest running time but the worst quality, while the other algorithms require more computation and obtain clearly better results. IMRFO does have a time disadvantage, which is to be expected, as it takes more time to scan the solution space accurately.

7. Summary of Results

MRFO relies on group behavior to find food, so it lacks flexibility and is prone to falling into local optima, and existing work does not solve such problems well. In order to improve the search ability of MRFO, an improved manta ray foraging algorithm is presented that uses Lévy flight, random walk learning, and learning factors.

The current experimental work is summarized as follows:
(1) Comparing IMRFO with several basic algorithms on 12 standard test functions shows that the algorithm has clear advantages.
(2) Two statistical tests verify the universality of IMRFO and show its good search ability.
(3) The convergence curves of each algorithm on each function show that IMRFO has a good convergence rate.
(4) To further verify its performance, IMRFO is compared with recently proposed variant algorithms, and the results show an obvious advantage on most functions.
(5) To verify the validity and value of the three strategies and their combinations, ablation experiments were carried out; the results show that IMRFO is better than the other combinations, highlighting its practical value.
(6) Eight underwater images were used to verify the effect of IMRFO-optimized K-means image segmentation; the results show that the segmentation quality is good and the segmentation indices are reasonable on multiple images.

In summary, several experiments demonstrate that IMRFO is a competitive new variant of the algorithm. IMRFO shows good results on several test functions and in image segmentation, but the optimization results on some functions and images still need to be improved; more work remains to strengthen its optimization capability.

8. Conclusion and Future Works

To address the shortcoming that K-means image segmentation easily falls into local optima, this paper presents a K-means image segmentation method based on IMRFO. IMRFO uses Lévy flight to improve individual search ability, proposes random walk learning to prevent premature convergence, and uses learning factors to improve convergence accuracy, thereby improving the overall search of the algorithm. The validity and feasibility of IMRFO are verified on 12 test functions, and on 8 underwater images IMRFO shows a good segmentation effect and is superior to other recently proposed algorithms under several indicators.

Although IMRFO has clear segmentation advantages on the eight images, it does not achieve the best value of all three criteria on every image: from the experiments, IMRFO is best on all three only on test05, while on the other test images it is usually best on two of them. On the other hand, the running time of every algorithm is large, and accuracy comes at the expense of time. In the future, we will improve the work from the following three aspects:
(1) Comprehensively improve the three performance indicators so that all three are the best
(2) Balance the running time and the search ability of the algorithm to get the best performance within an acceptable time
(3) Apply the method in agricultural, aerospace, medical, and other scenarios so that the algorithm can play a suitable role in different environments

Data Availability

Some data of this study are confidential, so the experimental data cannot be uploaded. These data can be obtained from the corresponding author on request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China under Grant 61672121.