AUTOMATIC INITIALIZATION FOR ACTIVE CONTOUR MODELS BASED ON PARTICLE SWARM OPTIMIZATION AND APPLICATION TO MEDICAL IMAGES

Active contour models (ACMs) are among the most widely used techniques in image segmentation, localization and object tracking. Although several updated versions of ACMs exist, most of the models fail to converge to the desired result on images with complex backgrounds, and their outcome depends on the initial placement of the contour. Among level-set-based methods, the Local Gaussian Distribution Fitting (LGDF) energy and the Local Binary Fitting (LBF) energy are successful algorithms, but both suffer from sensitivity to image contours and to the selection of the initial position, which are considerable demerits of these models. To overcome these difficulties, we present an efficient, socially inspired, population-based stochastic algorithm called particle swarm contour search (PSCS), a modified version of the particle swarm optimization (PSO) algorithm, which is considered one of the most important optimization methods in swarm intelligence. First, we apply smoothing filters to the image to remove high-intensity noise. Second, we utilize the PSCS algorithm to find the dominant points around the object's boundaries. Since PSCS also selects some extra points in parts of the image other than the required object, such points are removed in a post-PSCS step using different morphological operations. Furthermore, we calculate the center position and radius of the object for the initial contour with the help of the points generated by PSCS. The experimental segmentation outcomes indicate that our proposed approach is automatic and fast in its initialization and successfully segments the desired object in medical images.

∗Corresponding author. E-mail address: mahmoodulhassan300@gmail.com
Received October 11, 2020


INTRODUCTION
Segmentation of images is an essential and demanding task in image processing, with many applications in different fields including medical image analysis [1], autonomous vehicles [2], video surveillance [3], etc. Numerous approaches to image segmentation have been introduced in the past few decades, among which ACMs, or simply snake models, are the most successful and popular methods; Kass et al. [4] introduced the idea of segmentation using active contours for the first time. The mainstream ACMs can be subdivided into two key categories: edge-based active contour models [4,5,6,7] and region-based active contour models [8,9,10,11]. Both have their own merits and demerits.
The edge-based ACMs utilize the image gradient as an extra constraint to stop the evolving contours at the targeted border of the area of interest. Most edge-based ACMs use stopping and balloon force terms, which attract the contours to the border of the object and control whether the contours shrink or expand towards the desired object boundaries, respectively. Edge-based active contour models perform poorly on images in which the borders of the object are weak.
Region-based ACMs perform well on images whose object borders are weak or even absent, because the image gradient is not used. Compared with the edge-based ACMs, these models are also less sensitive to the placement of the initial contour.
The Chan-Vese (CV) model [8] is a prominent region-based ACM that has been successfully utilized to segment objects in images whose regions are statistically homogeneous. However, the CV model is ineffective if the regions contain intensity inhomogeneity.
In reality, intensity inhomogeneity occurs in many real-world images; it is most frequently seen in magnetic resonance imaging (MRI), computed tomography (CT) and ultrasound medical images. For instance, intensity inhomogeneity often appears in MRI because of non-uniformity in the radio frequency (RF) coil. The beam hardening effect in CT images produces similar intensity inhomogeneity. Similarly, in ultrasound images it is created by non-uniform beam attenuation within the body. Additionally, MRI, CT, and ultrasound images are usually corrupted by a variety of other noises, which make the segmentation task more challenging. To address these challenges, Li et al. [9] and Wang et al. [11] presented the LBF and LGDF models, respectively, particularly for this purpose, but both are sensitive to their initialization.
The goal of this paper is to overcome the basic deficiencies of the LGDF [11] and LBF [9] ACMs. Furthermore, we propose an automatic, fast and effective technique that is insensitive to the initial contour placement. The technique solves the problem using a metaheuristic optimization-based algorithm: it first finds the best initial contour for the existing LGDF and LBF models using the PSCS algorithm, a modified form of PSO, which is a very popular technique in the metaheuristic optimization category [13]. Applying the basic LGDF and LBF models starting from the resulting contour then yields notably improved segmentations.
The rest of the paper is organized as follows: Section 2 presents the background of several region-based ACMs, including the LGDF [11] and LBF [9] models, the basic PSO algorithm, and their pseudo-codes in detail. Section 3 describes our proposed methodology and the complete algorithm of the method. Section 4 defines the measurement quantities used for comparing results. Section 5 discusses the source and properties of the dataset used in the experiments. Section 6 discusses the experimental results in detail, with comparisons. Section 7 concludes the paper with future research directions.

BACKGROUND
During the last two decades, a variety of ACMs, or simply snake models, have been proposed by many researchers. Originally introduced by Kass et al. [4], they are very successful and commonly used in medical image segmentation. The basic principle of these models is to evolve the contour towards the edges of the object in the image by minimizing an energy functional. The snake model was demonstrated to be effective when prior knowledge of the shape of the object is available [14]. A major downside of conventional snake models is poor convergence to object borders when a complex background is present. In order to obtain better outcomes than Kass's model [4], other efforts have concentrated on enhancing the internal and external energy forces of the contour. However, these models have not addressed the issue of automated initial contour placement, since several parameters need to be set manually. In addition, we argue that fully automated segmentation for medical imaging remains unsolved; a semi-automatic algorithm therefore provides a more efficient and better initial contour for the targeted object in the image.

The Mumford-Shah (M-S) Model.
Let Ω be the two-dimensional image domain, and let I : Ω → R be a grayscale image. Mumford and Shah [15] proposed the energy functional given in Eq (1), where the input image I is to be segmented. They consider a contour C that separates the input image I into disjoint regions containing different artifacts:

E^{MS}(v, C) = ∫_Ω (v − I)² dx + µ ∫_{Ω\C} |∇v|² dx + ν|C|,   (1)

where |C| is the length of the contour and µ, ν are constants. Here v is an approximation of the image I that is smooth inside and outside the contour C within each region. The term ∫_Ω (v − I)² dx is the data fitting term, µ ∫_{Ω\C} |∇v|² dx is the smoothing term, and ν|C| is the penalizing term. Minimizing Eq (1) over the contour C and v results in an optimum contour C that segments I into disjoint regions.

The Chan-Vese (C-V) Model. Chan and Vese introduced a model [8] based on the Mumford-Shah model [15]. Letting I : Ω → R be an input image, C a contour and Φ the associated level set function, they proposed the energy functional given in Eq (2):

E^{CV}(Φ, c₁, c₂) = µ ∫_Ω ½(|∇Φ| − 1)² dx + ν ∫_Ω |∇H(Φ)| dx + λ₁ ∫_Ω |I − c₁|² H(Φ) dx + λ₂ ∫_Ω |I − c₂|² (1 − H(Φ)) dx,   (2)

where c₁ and c₂ are the average intensities inside and outside the contour C, and µ, ν ≥ 0 and λ₁, λ₂ > 0 are parameters: µ weights the level set regularization term and ν weights the length term. Also, λ₁ and λ₂ are the inner and outer weights, respectively.
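As a concrete illustration of the region terms of Eq (2), the sketch below computes the averages c₁, c₂ and the pointwise data force that drives the C-V evolution. This is a minimal sketch with a sharp (non-smoothed) inside/outside split; the function names and the convention Φ ≥ 0 inside are ours, not from the paper.

```python
import numpy as np

def cv_region_means(image, phi):
    """Average intensities c1 (inside, phi >= 0) and c2 (outside, phi < 0)
    of the evolving contour, as used by the Chan-Vese energy."""
    inside = phi >= 0
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[~inside].mean() if (~inside).any() else 0.0
    return c1, c2

def cv_data_force(image, phi, lam1=1.0, lam2=1.0):
    """Pointwise data force -lam1*(I-c1)^2 + lam2*(I-c2)^2 entering the
    C-V curve evolution (positive values push a point into the region)."""
    c1, c2 = cv_region_means(image, phi)
    return -lam1 * (image - c1) ** 2 + lam2 * (image - c2) ** 2
```

On a two-region image the force is positive inside the object and negative outside, which is what moves the zero level set toward the region boundary.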

The LGDF Model. Wang et al. [11] presented a model which relies on the LGDF energy. Their model utilizes more complex statistical characteristics of local intensities, characterizing the distribution of local intensity information via neighborhood partitions. The energy functional of the LGDF model is given in Eq (3):

E^{LGDF} = ∫_Ω ( −Σ_{i=1}^{2} ∫_{Ω_i} ω(x − y) log p_{i,x}(I(y)) dy ) dx,   (3)

where a local circular region centered at a pixel point x is sub-divided into two sub-regions: Ω₁ represents the region inside and Ω₂ the region outside the contour. For a pixel point y in the i-th sub-region with intensity I(y), the posteriori probability is denoted by p_{i,x}(I(y)), and ω(x − y) is a weighting function which depends on the distance between the two points x and y. The posteriori probability and weighting function are given by

p_{i,x}(I(y)) = (1/(√(2π) σ_i(x))) exp( −(I(y) − µ_i(x))² / (2σ_i²(x)) ),   (4)

ω(x − y) = a exp( −|x − y|² / (2ρ²) ) for |x − y| ≤ ρ, and 0 otherwise,   (5)

where a is a normalizing constant and ρ is the radius of the local region.
In the LGDF model, the complete equation for curve evolution is as follows:

∂Φ/∂t = −δ_ε(Φ)(λ₁e₁ − λ₂e₂) + ν δ_ε(Φ) div(∇Φ/|∇Φ|) + µ(∇²Φ − div(∇Φ/|∇Φ|)),   (6)

where e_i(x), i = 1, 2, are defined as follows:

e_i(x) = ∫_Ω ω(y − x) [ log σ_i(y) + (I(x) − µ_i(y))² / (2σ_i²(y)) ] dy.   (7)

In Eq (7), µ_i(x) and σ_i(x), i = 1, 2, are the local intensity means and standard deviations, respectively. The smoothed Heaviside function H_ε and its derivative, the smoothed Dirac delta function δ_ε, are defined as follows:

H_ε(x) = ½ [ 1 + (2/π) arctan(x/ε) ],   (8)

δ_ε(x) = (1/π) · ε / (ε² + x²).   (9)

The whole procedure of the LGDF model is described by Algorithm (1). The initial level set function Φ is simply defined as a binary function.

Algorithm 1 Pseudo-code of the LGDF model
Input: Image, Φ, ε, σ, µ, ν, λ₁, λ₂.
Steps:
(1) Convert the RGB image to a grayscale image.
(2) Build up the binary function as an initial level set function Φ. Set the parameters: level set regularization term, kernel size, inner and outer weights, length and data term weights.
(3) Using Eq (9), calculate the Dirac delta function.
(4) Evolve the level set function Φ using Eq (6) until convergence.
Output: Stop the evolution of the contour and record the final segmentation result Φₙ = Φ.

The LBF Model. The LBF model was introduced by Li et al. [9] and makes effective use of local intensity patterns for segmenting images with intensity inhomogeneity. This model uses two fitting functions f₁(x) and f₂(x) that approximate the image intensities inside and outside the contour C. The LBF energy functional is given in Eq (10):

E^{LBF} = λ₁ ∫_Ω [ ∫_Ω K_σ(x − y)|I(y) − f₁(x)|² H_ε(Φ(y)) dy ] dx + λ₂ ∫_Ω [ ∫_Ω K_σ(x − y)|I(y) − f₂(x)|² (1 − H_ε(Φ(y))) dy ] dx + ν ∫_Ω |∇H_ε(Φ)| dx + µ ∫_Ω ½(|∇Φ| − 1)² dx,   (10)
where µ, ν, λ₁, and λ₂ are positive parameters, K_σ is the Gaussian kernel function, and the constant σ controls the size of the local region. In Eq (10), the first two terms are the data fitting energy terms of the LBF model, the third is the length term, and the fourth is the level set regularization term.
The complete equation for curve evolution of the LBF model is as follows:

∂Φ/∂t = −δ_ε(Φ)(λ₁e₁ − λ₂e₂) + ν δ_ε(Φ) div(∇Φ/|∇Φ|) + µ(∇²Φ − div(∇Φ/|∇Φ|)),   (11)

where e_i(x), i = 1, 2, are defined as:

e_i(x) = ∫_Ω K_σ(y − x) |I(x) − f_i(y)|² dy.   (12)

The fitting functions f_i(x), i = 1, 2, are updated using the following equations:

f₁(x) = [K_σ ∗ (H_ε(Φ) I)](x) / [K_σ ∗ H_ε(Φ)](x),   f₂(x) = [K_σ ∗ ((1 − H_ε(Φ)) I)](x) / [K_σ ∗ (1 − H_ε(Φ))](x),   (13)

where H_ε and δ_ε are the Heaviside and Dirac functions already defined in Eq (8) and Eq (9), respectively. Again, the initial level set function Φ is simply defined as a binary function. The whole procedure of the LBF model is described by Algorithm (2).

Algorithm 2 Pseudo-code of the LBF model
Input: Image, Φ, ε, K_σ, σ, µ, ν, λ₁, λ₂.
Steps:
(1) Convert the RGB image to a grayscale image.
(2) Build up the binary function as an initial level set function Φ.
(3) Set the parameters in the Gaussian kernel.
(4) Using Eq (9), calculate the Dirac delta function.
(5) Update the fitting functions f₁, f₂ via Eq (13) and evolve Φ using Eq (11) until convergence.
Output: Stop the evolution of the contour and record the final segmentation result Φₙ = Φ.
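The quantities shared by the two models above can be sketched numerically as follows: the smoothed Heaviside and Dirac functions of Eqs (8)-(9) and the LBF fitting functions of Eq (13), with the kernel convolutions K_σ ∗ (·) realized by a Gaussian filter. This is a minimal sketch; the function names and the small stabilizing constant in the denominators are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def heaviside(phi, eps=1.0):
    # Smoothed Heaviside H_eps of Eq (8)
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def dirac(phi, eps=1.0):
    # Smoothed Dirac delta of Eq (9), the derivative of H_eps
    return (eps / np.pi) / (eps ** 2 + phi ** 2)

def lbf_fitting_functions(image, phi, sigma=3.0, eps=1.0):
    """LBF fitting functions f1, f2 of Eq (13): Gaussian-weighted local
    averages of the image intensities inside and outside the contour."""
    h = heaviside(phi, eps)
    f1 = gaussian_filter(h * image, sigma) / (gaussian_filter(h, sigma) + 1e-10)
    f2 = gaussian_filter((1 - h) * image, sigma) / (gaussian_filter(1 - h, sigma) + 1e-10)
    return f1, f2
```

On a constant image both fitting functions reduce to that constant, as expected from Eq (13), since the weighted averages of a constant are the constant itself.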

Particle Swarm Optimization (PSO).
PSO is a population-based, socially motivated metaheuristic swarm intelligence technique developed by Kennedy and Eberhart [13] in 1995 by simulating the social behavior of bird flocking and fish schooling. In this method, particles i (i = 1, 2, 3, ..., n) in a swarm S fly through a search space of dimension D. Every particle i constitutes a candidate solution to the optimization problem. In every iteration, each particle's position is updated based on its own search experience and on that of the swarm moving through the search space.
A position u_i(t) and velocity v_i(t) are assigned to each particle i at time step t. In addition, particle i keeps a memory of its own best position so far, p_i(t), and of the best position ever achieved by the whole swarm, g(t), called the local and global bests, respectively. The position of each particle i, and hence of the whole swarm, is updated at each time step t according to Eq (14) and Eq (15):

v_i(t+1) = ξ v_i(t) + q₁ r₁ (p_i(t) − u_i(t)) + q₂ r₂ (g(t) − u_i(t)),   (14)

u_i(t+1) = u_i(t) + v_i(t+1).   (15)

In Eq (14), the parameters r₁ and r₂ are random variables drawn from a uniform distribution between 0 and 1, ξ is the inertia weight that controls the influence of the particle's previous velocity, and q₁, q₂ are the cognitive and social scaling values.

Algorithm 3 Pseudo-code of the PSO algorithm
Input: Objective function f(x), S, q₁, q₂, ξ, r₁, r₂, D.
Steps:
(1) Initialize the positions u_i and velocities v_i of all particles.
(2) Using the objective function f(x), evaluate the fitness value of each particle u_i.
(3) Set the personal best and global best in the swarm.
(4) while Iteration < Maximum Iteration do
      for i = 1 : S do
        Update the velocity v_i using Eq (14).
        Update the position u_i using Eq (15).
        Evaluate f(u_i) and update the personal and global bests.
      end for
    end while
Output: The global best position g and its fitness value.
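The update rules of Eqs (14)-(15) can be sketched in Python as a minimal PSO minimizer; the function name, default parameter values and bounds below are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=100, xi=0.7, q1=1.5, q2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal PSO minimizer: velocity mixes inertia, a pull toward each
    particle's personal best p_i, and a pull toward the swarm best g."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    u = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    p = u.copy()                                  # personal bests
    p_val = np.array([f(x) for x in u])
    g = p[p_val.argmin()].copy()                  # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = xi * v + q1 * r1 * (p - u) + q2 * r2 * (g - u)  # Eq (14)
        u = u + v                                           # Eq (15)
        vals = np.array([f(x) for x in u])
        better = vals < p_val
        p[better] = u[better]
        p_val[better] = vals[better]
        g = p[p_val.argmin()].copy()
    return g, p_val.min()

# e.g. minimizing the sphere function: the swarm converges near the origin
best, val = pso(lambda x: np.sum(x ** 2), dim=2)
```

With the commonly used inertia weight ξ ≈ 0.7 and scaling values q₁ = q₂ ≈ 1.5, the velocity update is contractive on average, which is what makes the swarm settle on the minimizer.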

PROPOSED METHODOLOGY
In this section, we explain our approach, which makes the LGDF [11] and LBF [9] models automatic, fast and robust with respect to their initialization. For this purpose we use several preprocessing techniques; the workflow is shown in Figure (3.1) and proceeds as follows.
• Step-1 (Preprocessing) First of all, we convert the RGB medical image into a grayscale image. Then, we apply a mean filter of size 10 × 5 for smoothing, to reduce high-intensity noise, i.e., the amount of intensity variation between neighboring pixels. After that, we resize the image from high to low dimension for efficient processing, because the images we use are very large (1640 × 1043), and processing the original image takes long to segment the required region. We show experimentally that our proposed algorithm performs similarly even after resizing the image.
• Step-2 (PSCS Algorithm) We then apply the PSCS algorithm [16], which is a modified form of the PSO algorithm [13] designed to locate a binary object's contour in a search field. Here we modify PSCS to handle non-homogeneous objects, like skin lesions in medical images.
The PSCS algorithm consists of three stages for updating the velocity. Stage 1 is the object search stage, in which the swarm S uses an updated CPSO [17] to perform a global search. Stage 2 is the contour search stage, in which a new sub-swarm S_n is generated from a particle's neighborhood each time the particles detect the targeted object of the image, and this sub-swarm investigates the object contour. Stage 3 is the contour trace stage, in which the particles trace the detected object closely along its edge.
In stage 1, the velocities v_i and positions u_i are modified for all particles of the global swarm S_g that have not yet been added to a sub-swarm S_n. The main aim of stage 1 is to find the object in the allocated search space and to assign a sub-swarm S_n to search the contour of that object. The particles in the swarm update their velocities and positions according to Eq (16) and Eq (17). In this context, only the particles of sub-swarms are considered for the computation of the repulsive force, where Q_i and Q_j are constants that define the repulsion intensity.
A new sub-swarm is generated when a particle i's fitness value falls below the specified threshold Θ; the particle then recruits "SubSwarmSize − 1" particles from its neighborhood. For all particles in the new sub-swarm, the corresponding inside location in_i is set to the current particle location u_i. If f(u_i) stays above Θ, the particle instead updates its last known location outside of any object, denoted out_i. When a sub-swarm S_n is created, it enters stage 2 immediately. At the beginning of this stage, a data swap occurs with a small probability between the current and a random particle of the ongoing sub-swarm. This causes particles to cross the contour and helps the algorithm identify possible holes in the contour. Each particle updates its previous inside location in_i or outside location out_i based on its fitness value, instead of tracking the best local and global locations. The velocity update is calculated using the following equations,
where Q_j is a constant repulsion value and T_n denotes the age of the sub-swarm in Eq (20). As either in_i or out_i is always equal to u_i, one of the components of the velocity will be 0. All the elements of the velocity update, including the random variables r₁ and r₂, the inertia weight ξ, and the scaling values q₁ and q₂ of Eq (14), are set to a constant value of 0.5, which yields Eq (21).

In stage 3, we keep the members as close to the contour as possible and build an archive A of locations from which the whole contour can be reconstructed. Stage 3 only starts if, during stage 2, a particle's distance from the contour is less than the threshold Θ. In this stage, the velocity update equation contains an extra variable denoting the direction of the contour edges instead of the repulsive force a_i(t), which is calculated as follows. Under suitable conditions on in_i(t) and out_i(t), the contour is normal to the vector e_i(t) = out_i(t) − in_i(t). When the particle is outside of the contour boundary the update uses in_i(t), and when it is inside the contour boundary it uses out_i(t). With this construction, the particle travels parallel to one of the normals of e_i(t), denoted n_i(t). The step size s determines the accuracy of the approximation to the contour and the time t needed to fully explore the contour. The last phase in stage 3 is to add the particle location to the archive A, which represents the algorithm's approximation of the contour. Algorithm (4) shows the overall structure and pseudo-code of the algorithm, with the initialization Output: A = S_n = ∅.

• Step-3 (Post-PSCS Processing) The PSCS algorithm selects some extra points in parts of the image other than the required object, and such points are removed using morphological operations. The closing of a set M by a structuring element N is the dilation of M by N followed by erosion by N, M • N = (M ⊕ N) ⊖ N. In our approach we use the opening morphological operation, which is the erosion of M by N followed by dilation, M ∘ N = (M ⊖ N) ⊕ N.
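The post-PSCS cleanup by morphological opening can be sketched in a few lines; the function name and the 3 × 3 structuring element are illustrative choices of ours.

```python
import numpy as np
from scipy.ndimage import binary_opening

def clean_pscs_points(mask, structure_size=3):
    """Post-PSCS cleanup sketch: morphological opening (erosion followed
    by dilation) removes small spurious point clusters away from the
    object while preserving the large connected boundary region."""
    structure = np.ones((structure_size, structure_size), dtype=bool)
    return binary_opening(mask, structure=structure)
```

Opening deletes any foreground component smaller than the structuring element, so isolated stray points vanish while the object's point cluster survives essentially unchanged.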
Now, we calculate the center position and radius of the object for the initial circular contour with the help of the points generated by PSCS and the post-PSCS processing, as follows:

Center = (u₁ + u_n)/2   and   radius = (Total length)/(Total number of particles),   (30)

where u₁ and u_n are the first and last particles, respectively.
• Step-4 (LGDF and LBF Models for Final Segmentation) In this step we choose the initial contour as a circle using the center and radius of Eq (30), and then apply the curve evolution of either the LGDF (6) or the LBF (11) model for the final segmentation, as defined in Section 2.
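The center/radius computation of Eq (30) and the resulting circular initial level set can be sketched as follows; the total length is taken here as the summed distances between consecutive particle positions, and the function names and sign convention (positive inside the circle) are ours.

```python
import numpy as np

def initial_circle(points):
    """Initial contour from PSCS points, per Eq (30): center as the midpoint
    of the first and last particle positions, radius as the total contour
    length divided by the total number of particles."""
    pts = np.asarray(points, dtype=float)
    center = (pts[0] + pts[-1]) / 2.0
    total_length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    radius = total_length / len(pts)
    return center, radius

def circle_level_set(shape, center, radius):
    """Binary-like initial level set function: positive inside the circle,
    negative outside, with the zero level on the circle itself."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return radius - np.hypot(yy - center[0], xx - center[1])
```

The zero level set of the returned function is exactly the initial circle handed to the LGDF or LBF evolution in Step-4.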

Algorithm 5 Pseudo-code of Our Proposed Algorithm
Steps:
(1) Read the image and convert it to a grayscale image if it is RGB.
(2) Apply the mean filter for smoothing and resize the image (Step-1).
(3) Run the PSCS algorithm to find the dominant points around the object's boundaries (Step-2).
(4) Remove spurious points by the morphological opening operation (Step-3).
(5) Compute the center and radius using Eq (30) and build the circular initial contour.
(6) Evolve the contour with the LGDF or LBF model until convergence and record the final segmentation (Step-4).

METRICS
For comparison purposes we use two well-known measurement quantities: the Dice similarity coefficient (DSC) [18] and the Jaccard similarity coefficient (JSC), or simply Jaccard Index (JI) [19].
If the segmented image is denoted by I_m and its ground truth by G_t, then DSC and JSC are calculated, respectively, as follows:

DSC = 2|I_m ∩ G_t| / (|I_m| + |G_t|),   JSC = |I_m ∩ G_t| / |I_m ∪ G_t|.
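The two overlap measures can be computed directly from binary masks; this minimal sketch (function names ours) treats nonzero pixels as foreground.

```python
import numpy as np

def dsc(seg, gt):
    """Dice similarity coefficient: 2|S ∩ G| / (|S| + |G|)."""
    seg, gt = np.asarray(seg, dtype=bool), np.asarray(gt, dtype=bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def jsc(seg, gt):
    """Jaccard similarity coefficient: |S ∩ G| / |S ∪ G|."""
    seg, gt = np.asarray(seg, dtype=bool), np.asarray(gt, dtype=bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return inter / union
```

Both coefficients equal 1 for a perfect segmentation and 0 when segmentation and ground truth are disjoint; DSC is always at least as large as JSC on the same pair of masks.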

DATASET
We applied our novel segmentation approach to medical images of cancerous skin taken from

CONCLUSION
Segmentation is one of the hot topics in the image processing research community, and several methods have been proposed. However, mainstream segmentation techniques suffer from heavy processing, low segmentation accuracy, and initial contour placement problems, and most methods rely on manual initialization. In this paper, we proposed novel level set models assisted by PSCS to solve the automated segmentation of medical images.
We applied several strategies to solve the aforementioned problems, including preprocessing using smoothing filters, PSCS to find the dominant points around the object's boundaries, post-PSCS processing using morphological operations, finding the center position and radius of the object, and final segmentation using the LGDF and LBF models. Our proposed method was evaluated on medical images of cancerous skin, and comparisons with existing approaches reveal that the proposed technique outperforms existing methods in performance and convenience for users.
In the future, we will investigate recently emerged deep learning techniques [20,21] for more precise segmentation.

CONFLICT OF INTERESTS
The author(s) declare that there is no conflict of interests.