
Flow characteristics prediction in a flow-focusing microchannel for a desired droplet size using an inverse model: experimental and numerical study

Microfluidics and Nanofluidics

Abstract

Designing a model that predicts the main non-dimensional parameters of the fluids, i.e., the capillary number, the Reynolds number, the flow rate ratio, and the viscosity ratio of the two fluids, required to achieve a desired droplet size is of great importance. We use a soft computing method, a multi-layer perceptron artificial neural network (MLP-ANN), to extract this model. The model is trained on both experimental data and simulation data validated in the COMSOL environment. To optimize the structure of the MLP-ANN, four swarm-based metaheuristic algorithms are used: Particle Swarm Optimization (PSO), the Firefly Algorithm (FA), the Grey Wolf Optimizer (GWO), and the Grasshopper Optimization Algorithm (GOA). The results show that the FA yields the best performance. The optimized network has two hidden layers, with 6 and 14 neurons in the first and second hidden layers, respectively, and logsig transfer functions in both hidden layers. The RMSE and \(R^{2}\) values for the optimized MLP network are 4.0076 and 0.9900, respectively. The inverse model is then used to calculate the optimum fluid parameters that achieve a desired droplet size; this is posed as an optimization problem. A 3D diagram of these optimal parameters is plotted for five desired droplet sizes. The optimal points form distinct zones for each droplet size; in other words, for a desired droplet size there is a confined zone in the 3D space of the capillary number, the flow rate ratio, and the viscosity ratio.



Author information

Correspondence to Mostafa Nazari.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix: Description of the optimization algorithms

A.1: Particle swarm optimization (PSO)

PSO is one of the most famous and powerful algorithms for solving engineering problems. It was introduced in 1995, inspired by the social behavior of bees, birds, and fish (Rao 2019). In this algorithm, each individual is treated as a "particle", and the dynamic movements of the swarm are modeled. Each particle's path depends on its own knowledge along with the swarm's knowledge, both of which are updated over the iterations. If the swarm has \(N_{\text{P}}\) particles, the jth particle at iteration t has a position vector \(X_{j}^{t} = \left( {x_{j1}, x_{j2}, x_{j3}, \ldots, x_{jn}} \right)^{T}\) and a velocity vector \(V_{j}^{t} = \left( {v_{j1}, v_{j2}, v_{j3}, \ldots, v_{jn}} \right)^{T}\), where n is the number of design variables. The following equations define the jth particle's position and velocity (Sardashti et al. 2013; de Almeida and Leite 2019):

$$X_{j} \left( i \right) = X_{j} \left( {i - 1} \right) + V_{j} \left( i \right), j = 1,2, \ldots ,N_{{\text{P}}} ,$$
(22)
$$V_{j} \left( i \right) = \gamma V_{j} \left( {i - 1} \right) + c_{1} u_{1} \left[ {P_{{{\text{best}},j}} - X_{j} \left( {i - 1} \right)} \right] + c_{2} u_{2} \left[ {G_{{{\text{best}}}} - X_{j} \left( {i - 1} \right)} \right], j = 1,2, \ldots ,N_{{\text{P}}} ,$$
(23)

where c1 and c2 are constants known as the cognitive (individual) and social (group) learning coefficients, respectively; both are typically set to 2. Also, u1 and u2 are two random numbers uniformly distributed in [0, 1]. Moreover, Pbest,j is the jth particle's best position found so far, while Gbest is the best position found by the whole swarm. Furthermore, γ(i) is an inertia weight whose value lies in [0.4, 0.9]; it can be held constant or decreased linearly as the iterations proceed. Figure 16 represents the flowchart of the PSO algorithm.

Fig. 16 Steps in the particle swarm optimization algorithm
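The update rules of Eqs. (22)–(23) can be sketched in Python as follows. This is a minimal illustration, not the implementation used in the paper; the function name `pso`, the 2D sphere objective, and the bounds and swarm size are arbitrary demonstration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, lb, ub, n_particles=30, n_iter=200, c1=2.0, c2=2.0):
    """Minimal PSO sketch of Eqs. (22)-(23) with a linearly decreasing inertia weight."""
    n = len(lb)
    X = rng.uniform(lb, ub, size=(n_particles, n))   # particle positions
    V = np.zeros((n_particles, n))                   # particle velocities
    P = X.copy()                                     # personal best positions (P_best)
    P_val = np.array([f(x) for x in X])
    g = P[P_val.argmin()].copy()                     # global best position (G_best)
    for it in range(n_iter):
        gamma = 0.9 - 0.5 * it / n_iter              # inertia weight, 0.9 -> 0.4
        u1, u2 = rng.random((2, n_particles, n))     # uniform random numbers in [0, 1]
        V = gamma * V + c1 * u1 * (P - X) + c2 * u2 * (g - X)   # Eq. (23)
        X = np.clip(X + V, lb, ub)                   # Eq. (22), kept inside the bounds
        vals = np.array([f(x) for x in X])
        better = vals < P_val
        P[better], P_val[better] = X[better], vals[better]
        g = P[P_val.argmin()].copy()
    return g, P_val.min()

# Demonstration on the 2D sphere function (an arbitrary test problem)
best_x, best_f = pso(lambda x: np.sum(x**2),
                     lb=np.array([-5.0, -5.0]), ub=np.array([5.0, 5.0]))
print(best_x, best_f)
```

On this simple convex test problem, the swarm collapses onto the global best as the inertia weight shrinks.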

A.2: Firefly algorithm (FA)

X.S. Yang developed the Firefly Algorithm (FA) in 2008 as a swarm-based optimization algorithm based on the behavior of fireflies (Yang 2010). Fireflies are insects that produce flashing light at night, which they use to attract other fireflies or prey. FA is easy to implement and has fewer equations compared to other metaheuristic algorithms (Johari et al. 2013).

FA is founded on the following assumptions (Yang 2020):

  • Fireflies are unisex; thus, each firefly, regardless of their sex, will be attracted to others.

  • The attraction between two fireflies is proportional to their brightness, and it decreases by increasing their distance. Therefore, the less bright firefly will go towards the brighter one.

  • The objective function's value determines the brightness of a firefly.

Equation (24) gives the movement of the ith firefly at position Xi when it is attracted to a brighter jth firefly at position Xj (Yang 2010, 2020; Johari et al. 2013):

$$X_{i}^{n + 1} = X_{i}^{n} + D_{1} + D_{2} ,$$
(24)

where,

$$\begin{gathered} D_{1} = \beta_{0} e^{{ - \gamma r_{ij}^{2} }} \left( {X_{j}^{n} - X_{i}^{n} } \right) \hfill \\ D_{2} = \alpha \varepsilon_{i}^{n} , \hfill \\ \end{gathered}$$
(25)

\(D_{1}\) is defined based on the attraction phenomenon: \(\beta_{0}\) is the attractiveness at zero distance between two fireflies (\(r=0\)), and γ is the light absorption coefficient. Moreover, \(D_{2}\) is a random term, where \(\alpha\) is the randomization parameter and \(\varepsilon_{i}^{n}\) is a vector of random numbers at iteration n, drawn from a Gaussian distribution. Note that \(D_{1}\) provides the mutation mechanism in this algorithm.
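The movement rule of Eqs. (24)–(25) can be sketched as follows. This is a minimal illustration assuming a standard FA loop with a gradually damped randomization parameter (the damping factor, the objective, and the parameter values are demonstration choices, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)

def firefly(f, lb, ub, n_fireflies=25, n_iter=100, beta0=1.0, gamma=1.0, alpha=0.2):
    """Minimal FA sketch of Eqs. (24)-(25); brightness = lower objective value."""
    n = len(lb)
    X = rng.uniform(lb, ub, size=(n_fireflies, n))
    vals = np.array([f(x) for x in X])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if vals[j] < vals[i]:                 # firefly j is brighter than i
                    r2 = np.sum((X[i] - X[j]) ** 2)   # squared distance r_ij^2
                    D1 = beta0 * np.exp(-gamma * r2) * (X[j] - X[i])  # attraction term
                    D2 = alpha * rng.standard_normal(n)               # Gaussian random term
                    X[i] = np.clip(X[i] + D1 + D2, lb, ub)            # Eq. (24)
                    vals[i] = f(X[i])
        alpha *= 0.97    # damp the randomization parameter over the iterations
    k = vals.argmin()
    return X[k], vals[k]

# Demonstration on the 2D sphere function (an arbitrary test problem)
best_x, best_f = firefly(lambda x: np.sum(x**2),
                         np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(best_x, best_f)
```

Since the brightest firefly never moves, the best solution found so far is preserved while the rest of the swarm is pulled towards it.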

Some algorithms, such as simulated annealing (SA), differential evolution (DE), PSO, and harmony search (HS), can be seen as special cases of the FA (Yang 2020):

  • If \(\gamma\) is increased, the attractiveness or light intensity decreases rapidly. Thus \(D_{1}\) becomes negligible, and FA reduces to the standard SA algorithm.

  • If \(\gamma\) is decreased (\(\gamma \to 0\)), the exponential term in \(D_{1}\) tends to one (\(e^{{ - \gamma r_{ij}^{2} }} \to 1\)). If, in addition, \(\alpha = 0\), FA becomes a variant of the DE algorithm.

  • If \(e^{{ - \gamma r_{ij}^{2} }} = 1\) and \(X_{j}^{n}\) in D1 is replaced by the current global best solution (\(G_{{{\text{best}}}}\)), FA also becomes the accelerated PSO algorithm.

  • If \(\beta_{0} = 0\), and \(\varepsilon_{i}^{n}\) is defined as a function of \(X_{i}^{{}}\), Eq. (24) represents a pitch adjustment variant in the HS algorithm.

Figure 17 represents the flowchart of the FA algorithm.

Fig. 17 Steps in the FA algorithm

A.3: Grey wolf optimizer (GWO)

This swarm-based optimization algorithm is based on the hunting behavior of grey wolves. They live in packs that follow a strict social dominance hierarchy, as shown in Fig. 18 (Mirjalili et al. 2014). Alphas are at the top; they are the pack's leaders and are mainly in charge of decisions about hunting, resting places, time to wake, etc. (David Mech 1999). The next level in the hierarchy is the betas, who help the alphas make decisions and run other pack activities; they are the most suitable candidates to become alphas once the alphas die or grow old. The lowest level is the omegas, who must obey all the other dominant wolves and are the last permitted to eat. The deltas obey the alphas and betas but dominate the omegas. In the algorithm, the best solution is the alpha (\(\alpha\)), while the beta (\(\beta\)) and delta (\(\delta\)) are the second- and third-best solutions, respectively.

Fig. 18 Hierarchy of grey wolves (dominance decreases from top to bottom) (Mirjalili et al. 2014)

The stages of the hunting are considered as follows (David Mech 1999):

  • Track, chase, and approach the prey.

  • Pursue, encircle, and harass the prey till it stops moving.

  • Attack the prey.

For modeling the encircling behavior, the following equations are considered (Mirjalili et al. 2014):

$$\vec{D} = \left| {2r_{2} \vec{X}_{p} (t) - \vec{X}(t)} \right|$$
(26)
$$\vec{X}(t + 1) = \vec{X}_{p} (t) - \left( {2ar_{1} - a} \right)\vec{D}$$
(27)

Here, \(a\) is decreased linearly from 2 to 0 during the optimization process, \(r_{1}\) and \(r_{2}\) are random values in [0, 1], and \(t\) denotes the current iteration.

To simulate the hunting behavior of grey wolves, Eq. (27) is evaluated with the alpha, beta, and delta wolves as leaders, giving \(\vec{X}_{1}\), \(\vec{X}_{2}\), and \(\vec{X}_{3}\); then \(\vec{X}(t + 1)\) is the mean of them (Mirjalili et al. 2014):

$$\vec{X}(t + 1) = \frac{{\vec{X}_{1} + \vec{X}_{2} + \vec{X}_{3} }}{3}$$
(28)

In the exploration (search for prey) phase, grey wolves mainly look for prey based on the alpha, beta, and delta locations. They diverge from each other to search for prey and converge to attack it.

In summary, a random initial population of grey wolves is generated in the first step. Then, through the iterations, the possible position of the prey is estimated by the alpha, beta, and delta wolves, and the wolves update their distance to the prey accordingly. The algorithm terminates once a stop criterion is satisfied. Figure 19 shows the flowchart of the GWO algorithm.
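The update cycle of Eqs. (26)–(28) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name `gwo`, the best-so-far bookkeeping, and the test problem are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

def gwo(f, lb, ub, n_wolves=20, n_iter=200):
    """Minimal GWO sketch of Eqs. (26)-(28)."""
    n = len(lb)
    X = rng.uniform(lb, ub, size=(n_wolves, n))
    best_x, best_val = None, np.inf
    for it in range(n_iter):
        vals = np.array([f(x) for x in X])
        if vals.min() < best_val:                      # track best solution so far
            best_val = vals.min()
            best_x = X[vals.argmin()].copy()
        order = vals.argsort()
        leaders = X[order[:3]].copy()                  # alpha, beta, delta wolves
        a = 2.0 * (1 - it / n_iter)                    # a decreases linearly 2 -> 0
        X_new = np.empty_like(X)
        for i in range(n_wolves):
            candidates = []
            for leader in leaders:
                r1, r2 = rng.random(n), rng.random(n)
                D = np.abs(2 * r2 * leader - X[i])                 # Eq. (26)
                candidates.append(leader - (2 * a * r1 - a) * D)   # Eq. (27)
            X_new[i] = np.mean(candidates, axis=0)                 # Eq. (28)
        X = np.clip(X_new, lb, ub)
    return best_x, best_val

# Demonstration on the 2D sphere function (an arbitrary test problem)
best_x, best_f = gwo(lambda x: np.sum(x**2),
                     np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(best_x, best_f)
```

As \(a\) approaches zero, the coefficient \(2ar_{1}-a\) vanishes and the pack collapses onto the mean of the three leaders, switching from exploration to exploitation.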

Fig. 19 Steps in the GWO algorithm

A.4: Grasshopper Optimization Algorithm (GOA)

The Grasshopper Optimization Algorithm (GOA) was developed in 2017 (Saremi et al. 2017; Feng et al. 2020) as a swarm-based optimization algorithm based on grasshopper swarms in nature. Grasshoppers form the largest swarms of all animals, and the movement of millions of grasshoppers resembles rolling cylinders (Feng et al. 2020). GOA is thus inspired by the long-range, irregular motion of grasshoppers in giant swarms; in addition, each grasshopper has a local motion inside the swarm. If the search process is divided into exploration and exploitation stages, the long-range, sudden motion of the grasshoppers describes exploration, while the local motion for finding better food sources defines exploitation (Saremi et al. 2017; Feng et al. 2020). The swarm motion is influenced by the individuals' local motion, food sources, wind, and gravity. Equation (29) represents this behavior (Saremi et al. 2017):

$$X_{i} = r_{1} S_{i} + r_{2} G_{i} + r_{3} A_{i} ,$$
(29)

where \(X_{i}\) and \(S_{i}\) represent the ith grasshopper's position and its social interaction, respectively, \(G_{i}\) is the gravity force on the ith grasshopper, and \(A_{i}\) represents the wind effect. Besides, \(r_{1}\), \(r_{2}\), and \(r_{3}\) are random numbers that introduce random behavior into the algorithm.

The social interaction, which is the main term of Eq. (29), can be represented as follows (Saremi et al. 2017; Mirjalili et al. 2018):

$$S_{i} = \mathop \sum \limits_{{\begin{array}{*{20}c} {j = 1} \\ {j \ne i} \\ \end{array} }}^{N} s\left( {d_{ij} } \right)\widehat{{d_{ij} }},$$
(30)

where \(d_{ij}\) and \(\widehat{{d_{ij} }}\) are the distance between two grasshoppers (\(d_{ij} = \left| {X_{i} - X_{j} } \right|\)) and its unit vector (\(\widehat{{d_{ij} }} = \frac{{X_{i} - X_{j} }}{{d_{ij} }}\)), respectively. Moreover, \(s\) models the strength of the social forces as follows (Saremi et al. 2017; Mirjalili et al. 2018):

$$s\left( d \right) = fe^{{\frac{ - d}{l}}} - e^{ - d} ,$$
(31)

where \(f\) and \(l\) represent the strength of attraction and the attractive length scale, respectively. Changing these two parameters leads to different social behaviors in the grasshopper swarm. The gravity term \(G_{i}\) in Eq. (29) is obtained as follows:

$$G_{i} = - g\widehat{{e_{{\text{g}}} }},$$
(32)

where \(g\) is the gravitational constant and \(\widehat{{e_{{\text{g}}} }}\) is the unit vector towards the center of the earth. Furthermore, for the wind term \(A_{i}\) in Eq. (29), one can use the following relation:

$$A_{i} = u\widehat{{e_{{\text{w}}} }},$$
(33)

where \(u\) and \(\widehat{{e_{{\text{w}}} }}\) are the constant drift and the unit vector in the wind direction, respectively. The influences of gravity and wind are much weaker than the interactions between grasshoppers in the swarm, so the following equation is used to find the new positions instead of Eq. (29):

$$X_{i} = c\left( {\mathop \sum \limits_{{\begin{array}{*{20}c} {j = 1} \\ {j \ne i} \\ \end{array} }}^{N} c\frac{{{\text{ub}} - {\text{lb}}}}{2}s\left( {\left| {X_{i} - X_{j} } \right|} \right)\frac{{X_{i} - X_{j} }}{{d_{ij} }}} \right) + \widehat{{T_{d} }},$$
(34)

\({\text{ub}}\) and \({\text{lb}}\) denote the upper- and lower-bound vectors of the design variables, and \(\widehat{{T_{{\text{d}}} }}\) is the best solution found so far. In Eq. (34), it is assumed that \(g = 0\) (no gravity) and that the wind direction (\(\widehat{{e_{{\text{w}}} }}\)) coincides with \(\widehat{{T_{{\text{d}}} }}\). Also, \(c\) is a coefficient that balances exploitation and exploration, which decreases as follows:

$$c = c_{\max } - n\left( {\frac{{c_{\max } - c_{\min } }}{{n_{\max } }}} \right),$$
(35)

where \(n\) and \(n_{\max }\) are the current iteration number and the maximum number of iterations, respectively. Moreover, \(c_{\max }\) and \(c_{\min }\) are the maximum and minimum values of \(c\), usually chosen as 1 and 0.00001, respectively. Figure 20 represents the flowchart of the GOA algorithm.
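Equations (30), (31), (34), and (35) can be combined into a minimal sketch. This is an illustration, not the authors' implementation; the step direction here is taken from grasshopper i towards j (as in the original GOA paper), and the objective, swarm size, and parameter values are demonstration choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def s(d, f=0.5, l=1.5):
    """Social force strength, Eq. (31), with typical values f = 0.5, l = 1.5."""
    return f * np.exp(-d / l) - np.exp(-d)

def goa(obj, lb, ub, n_grasshoppers=30, n_iter=200, c_max=1.0, c_min=1e-5):
    """Minimal GOA sketch of Eqs. (34)-(35)."""
    n = len(lb)
    X = rng.uniform(lb, ub, size=(n_grasshoppers, n))
    vals = np.array([obj(x) for x in X])
    T = X[vals.argmin()].copy()              # target: best solution found so far
    T_val = vals.min()
    for it in range(n_iter):
        c = c_max - it * (c_max - c_min) / n_iter      # Eq. (35)
        X_new = np.empty_like(X)
        for i in range(n_grasshoppers):
            S = np.zeros(n)
            for j in range(n_grasshoppers):
                if j != i:
                    d = np.linalg.norm(X[j] - X[i]) + 1e-12
                    # inner c * (ub - lb)/2 shrinks the comfort/attraction zone
                    S += c * (ub - lb) / 2 * s(d) * (X[j] - X[i]) / d
            X_new[i] = c * S + T                        # Eq. (34)
        X = np.clip(X_new, lb, ub)
        vals = np.array([obj(x) for x in X])
        if vals.min() < T_val:                          # update the target
            T_val = vals.min()
            T = X[vals.argmin()].copy()
    return T, T_val

# Demonstration on the 2D sphere function (an arbitrary test problem)
best_x, best_f = goa(lambda x: np.sum(x**2),
                     np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(best_x, best_f)
```

Because \(c\) appears both outside and inside the sum, the sampling radius around the target \(\widehat{T_{d}}\) shrinks roughly quadratically, so the swarm explores early and exploits around the best solution late in the run.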

Fig. 20 Steps in the GOA algorithm


Cite this article

Nazari, M., Varedi-Koulaei, S.M. & Nazari, M. Flow characteristics prediction in a flow-focusing microchannel for a desired droplet size using an inverse model: experimental and numerical study. Microfluid Nanofluid 26, 26 (2022). https://doi.org/10.1007/s10404-022-02529-z
