Abstract
Designing a model that predicts the main non-dimensional parameters of the fluids (the Capillary number, the Reynolds number, the flow rate ratio, and the viscosity ratio of the two fluids) needed to achieve a desired droplet size is of great practical importance. We extract this model using a soft computing method, a multi-layer perceptron artificial neural network (MLP-ANN). The model is trained on both experimental data and simulation data validated in the COMSOL environment. To optimize the structure of the MLP-ANN, four swarm-based metaheuristic algorithms are used: Particle Swarm Optimization, the Firefly Algorithm (FA), the Grey Wolf Optimizer, and the Grasshopper Optimization Algorithm. The results show that the FA yields the best performance. The optimized network has two hidden layers, with 6 and 14 neurons in the first and second hidden layers, respectively, and logsig transfer functions in both hidden layers. The RMSE and \({R}^{2}\) values of the optimized MLP network are 4.0076 and 0.9900, respectively. The inverse model is then used to calculate the optimum fluid parameters that achieve a desired droplet size; this is solved as an optimization problem. A 3D diagram of these optimal parameters is plotted for five desired droplet sizes. The optimal points form a distinct zone for each droplet size; in other words, for a desired droplet size there is a confined zone in the 3D space of the Capillary number, the flow rate ratio, and the viscosity ratio.
References
Abdel-Basset M, Abdel-Fatah L, Sangaiah AK (2018) Metaheuristic algorithms: a comprehensive review. Comput Intell Multimed Big Data Cloud with Eng Appl. https://doi.org/10.1016/B978-0-12-813314-9.00010-4
de Almeida BSG, Leite VC (2019) Particle swarm optimization: a powerful technique for solving engineering problems. Swarm Intell Adv New Perspect Appl
Asproulis N, Drikakis D (2013) An artificial neural network-based multiscale method for hybrid atomistic-continuum simulations. Microfluid Nanofluid 15:559–574. https://doi.org/10.1007/s10404-013-1154-4
Butler C (1992) A primer on the Taguchi method. Comput Integr Manuf Syst 5:246. https://doi.org/10.1016/0951-5240(92)90037-d
Catherine R, Hyewon L, Alison H et al (2010) Microfluidics for medical diagnostics and biosensors. Chem Eng Sci 66:1490–1508
Chong ZZ, Tan SH, Gañán-Calvo AM et al (2016) Active droplet generation in microfluidics. Lab Chip 16:35–58. https://doi.org/10.1039/c5lc01012h
David Mech L (1999) Alpha status, dominance, and division of labor in wolf packs. Can J Zool 77:1196
Ding J, Xu N, Nguyen MT et al (2021) Machine learning for molecular thermodynamics. Chin J Chem Eng. https://doi.org/10.1016/j.cjche.2020.10.044
Feng H, Ni H, Zhao R, Zhu X (2020) An Enhanced grasshopper optimization algorithm to the bin packing problem. J Control Sci Eng. https://doi.org/10.1155/2020/3894987
Fernandes Junior FE, Yen GG (2019) Particle swarm optimization of deep neural networks architectures for image classification. Swarm Evol Comput 49:62–74. https://doi.org/10.1016/j.swevo.2019.05.010
Guckenberger DJ, De Groot TE, Wan AMD et al (2015) Micromilling: a method for ultra-rapid prototyping of plastic microfluidic devices. Lab Chip 15:2364–2378. https://doi.org/10.1039/c5lc00234f
Hong SH, Yang H, Wang Y (2020) Inverse design of microfluidic concentration gradient generator using deep learning and physics-based component model. Microfluid Nanofluidics. https://doi.org/10.1007/s10404-020-02349-z
Joanicot M, Ajdari A (2005) Droplet control for microfluidics. Science 309:887–888. https://doi.org/10.1126/science.1112615
Johari NF, Zain AM, Mustaffa NH, Udin A (2013) Firefly algorithm for optimization problem. Appl Mech Mater 421:512–517. https://doi.org/10.4028/www.scientific.net/AMM.421.512
Jung J, Oh J (2014) Cell-induced flow-focusing instability in gelatin methacrylate microdroplet generation. Biomicrofluidics 8:36503. https://doi.org/10.1063/1.4880375
Kamali R, Binesh AR (2013) A comparison of neural networks and adaptive neuro-fuzzy inference systems for the prediction of water diffusion through carbon nanotubes. Microfluid Nanofluid 14:575–581. https://doi.org/10.1007/s10404-012-1075-7
Kim H, Cheon D, Lim J, Nam K (2019) Robust flow control of a syringe pump based on dual-loop disturbance observers. IEEE Access 7:135427–135438. https://doi.org/10.1109/ACCESS.2019.2942062
Lankford S, Grimes D (2020) Neural architecture search using particle swarm and ant colony optimization. CEUR Workshop Proc 2771:229–240
Lashkaripour A, Abouei Mehrizi A, Rasouli M, Goharimanesh M (2015) Numerical study of droplet generation process in a microfluidic flow focusing. J Comput Appl Mech 46:167–175. https://doi.org/10.2205/jcamech.2015.55101
Lashkaripour A, Goharimanesh M, Abouei Mehrizi A, Densmore D (2018a) An adaptive neural-fuzzy approach for microfluidic droplet size prediction. Microelectron J 78:73–80. https://doi.org/10.1016/j.mejo.2018.05.018
Lashkaripour A, Silva R, Densmore D (2018b) Desktop micromilled microfluidics. Microfluid Nanofluid. https://doi.org/10.1007/s10404-018-2048-2
Lashkaripour A, Rodriguez C, Mehdipour N et al (2021) Machine learning enables design automation of microfluidic flow-focusing droplet generation. Nat Commun. https://doi.org/10.1038/s41467-020-20284-z
Mastiani M, Seo S, Riou B, Kim M (2019) High inertial microfluidics for droplet generation in a flow-focusing geometry. Biomed Microdev. https://doi.org/10.1007/s10544-019-0405-x
Mirjalili S, Lewis A (2016) The Whale optimization algorithm. Adv Eng Softw 95:51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey Wolf optimizer. Adv Eng Softw 69:46–61. https://doi.org/10.1016/j.advengsoft.2013.12.007
Mirjalili SZ, Mirjalili S, Saremi S et al (2018) Grasshopper optimization algorithm for multi-objective optimization problems. Appl Intell 48:805–820. https://doi.org/10.1007/s10489-017-1019-8
Mottaghi S, Nazari M, Fattahi SM et al (2020) Droplet size prediction in a microfluidic flow focusing device using an adaptive network based fuzzy inference system. Biomed Microdev. https://doi.org/10.1007/s10544-020-00513-4
Murshed SMS, Tan SH, Nguyen NT et al (2009) Microdroplet formation of water and nanofluids in heat-induced microfluidic T-junction. Microfluid Nanofluid 6:253–259. https://doi.org/10.1007/s10404-008-0323-3
Park SY, Wu TH, Chen Y et al (2011) High-speed droplet generation on demand driven by pulse laser-induced cavitation. Lab Chip 11:1010–1012. https://doi.org/10.1039/c0lc00555j
Pooyan T, Carlos HH (2017) Liquid-in-gas droplet microfluidics; experimental characterization of droplet morphology, generation frequency, and monodispersity in a flow-focusing microfluidic device. J Micromech Microeng 27:75020
Rao SS (2019) Engineering optimization theory and practice. Eng Optim Theory Pract. https://doi.org/10.1002/9781119454816
Rasouli MR, Mehrizi AA, Lashkaripour A (2015) Numerical study on low Reynolds mixing of t-shaped micro-mixers with obstacles. Transp Phenom Nano Micro Scales. https://doi.org/10.7508/tpnms.2015.02.001
Ray A, Varma VB, Jayaneel PJ et al (2017) On demand manipulation of ferrofluid droplets by magnetic fields. Sensors Actuators B Chem 242:760–768. https://doi.org/10.1016/j.snb.2016.11.115
Sardashti A, Daniali HM, Varedi SM (2013) Optimal free-defect synthesis of four-bar linkage with joint clearance using PSO algorithm. Meccanica 48:1681–1693. https://doi.org/10.1007/s11012-013-9699-6
Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv Eng Softw 105:30–47. https://doi.org/10.1016/j.advengsoft.2017.01.004
Schneider T, Kreutz J, Chiu DT (2013) The potential impact of droplet microfluidics in biology. Anal Chem 85:3476–3482. https://doi.org/10.1021/ac400257c
Song H, Chen DL, Ismagilov RF (2006) Reactions in droplets in microfluidic channels. Angew Chemie Int Ed 45:7336–7356. https://doi.org/10.1002/anie.200601554
Timung S, Mandal TK (2013) Prediction of flow pattern of gas-liquid flow through circular microchannel using probabilistic neural network. Appl Soft Comput J 13:1674–1685. https://doi.org/10.1016/j.asoc.2013.01.011
Wiedemeier S, Eichler M, Römer R et al (2017) Parametric studies on droplet generation reproducibility for applications with biological relevant fluids. Eng Life Sci 17:1271–1280. https://doi.org/10.1002/elsc.201700086
Yang X (2010) Nature-inspired metaheuristic algorithms, 2nd edn. Luniver Press
Yang XS (2020) Nature-inspired optimization algorithms. Academic Press, New York
Yoshimura M, Shimoyama K, Misaka T, Obayashi S (2019) Optimization of passive grooved micromixers based on genetic algorithm and graph theory. Microfluid Nanofluid. https://doi.org/10.1007/s10404-019-2201-6
Appendices
Appendix: Description of the optimization algorithms
A.1: Particle swarm optimization (PSO)
PSO is one of the most famous and powerful algorithms for solving engineering problems. It was introduced in 1995, inspired by the social behavior of bee, bird, and fish swarms (Rao 2019). In this algorithm, each individual is considered a "particle", and the dynamic movement of the swarm is modeled. Each particle's path depends on its own knowledge and information as well as the swarm's knowledge, and both are adapted during the iterations. If the swarm has NP particles, the jth particle at iteration t has a position vector \(X_{j}^{t} = \left( {x_{j1}, x_{j2}, x_{j3}, \ldots, x_{jn}} \right)^{T}\) and a velocity vector \(V_{j}^{t} = \left( {v_{j1}, v_{j2}, v_{j3}, \ldots, v_{jn}} \right)^{T}\), where n is the number of design variables. The following equations define the jth particle's position and velocity (Sardashti et al. 2013; de Almeida and Leite 2019):
where c1 and c2 are constants, the cognitive (individual) and social (group) learning coefficients, respectively; both are generally set to 2. Also, u1 and u2 are two random numbers in the range [0, 1]. Moreover, Pbest,j is the jth particle's best location so far, while Gbest is the whole swarm's best location in each iteration. Furthermore, γ(i) is a weighting coefficient whose value lies in the range [0.4, 0.9]; it can be held constant or decreased linearly as the number of iterations increases. Figure 16 represents the flowchart of the PSO algorithm.
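The velocity and position updates described above can be sketched in a few lines of Python. This is an illustrative minimal implementation, not the authors' code; the function name and defaults are our own choices, with c1 = c2 = 2 and the inertia weight decreasing linearly from 0.9 to 0.4 as quoted in the text:

```python
import numpy as np

def pso(f, lb, ub, n_particles=30, n_iter=200, c1=2.0, c2=2.0,
        w_max=0.9, w_min=0.4, seed=0):
    """Minimize f over the box [lb, ub] with a basic PSO.

    c1, c2: cognitive and social learning coefficients (commonly 2).
    The inertia weight decreases linearly from w_max to w_min.
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    x = rng.uniform(lb, ub, (n_particles, dim))       # positions X_j
    v = np.zeros((n_particles, dim))                  # velocities V_j
    pbest = x.copy()                                  # per-particle best P_best,j
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()              # swarm best G_best
    for t in range(n_iter):
        w = w_max - (w_max - w_min) * t / (n_iter - 1)    # linearly decreasing weight
        u1 = rng.random((n_particles, dim))               # random numbers in [0, 1]
        u2 = rng.random((n_particles, dim))
        v = w * v + c1 * u1 * (pbest - x) + c2 * u2 * (g - x)
        x = np.clip(x + v, lb, ub)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())
```

For example, `pso(lambda p: float((p ** 2).sum()), [-5, -5], [5, 5])` drives the swarm towards the origin of a 2D sphere function.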
A.2: Firefly algorithm (FA)
X.S. Yang first developed the Firefly Algorithm (FA) in 2008 (Yang 2010) as a swarm-based optimization algorithm inspired by the behavior of fireflies. Fireflies are insects that produce flashing lights at night, which they use to attract other fireflies or prey. FA is easy to implement and involves fewer equations compared to other metaheuristic algorithms (Johari et al. 2013).
In reality, FA is founded on the following assumptions (Yang 2020):
-
Fireflies are unisex; thus, each firefly will be attracted to the others regardless of sex.
-
The attraction between two fireflies is proportional to their brightness and decreases with increasing distance; therefore, the less bright firefly moves towards the brighter one.
-
The objective function's value determines the brightness of a firefly.
Equation (24) gives the movement of the ith firefly at position Xi when it is attracted to a brighter jth firefly at position Xj (Yang 2010, 2020; Johari et al. 2013).
where,
\(D_{1}\) is the attraction term: \({\beta }_{0}\) is the attractiveness when the distance between two fireflies is zero (\(r=0\)), and γ is the light absorption coefficient. Moreover, \({D}_{2}\) is a random term, where \(\alpha\) is the randomization parameter and \(\varepsilon_{i}^{n}\) is a vector of random numbers at iteration n, drawn from a Gaussian distribution. Note that \({D}_{1}\) is the term that produces the mutation in this algorithm.
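The movement rule of Eq. (24) can be sketched as follows; this is a minimal illustration rather than the authors' implementation, and the geometric decay of the randomization parameter α is a common refinement (Yang 2010) that is not stated in the text:

```python
import numpy as np

def firefly(f, lb, ub, n=25, n_iter=100, beta0=1.0, gamma=1.0,
            alpha=0.2, alpha_decay=0.97, seed=0):
    """Minimize f over the box [lb, ub] with a basic firefly algorithm.

    Each move of firefly i towards a brighter firefly j is
        x_i <- x_i + D1 + D2,
    with D1 = beta0 * exp(-gamma * r_ij^2) * (x_j - x_i)  (attraction)
    and  D2 = alpha * eps                                 (Gaussian randomization);
    alpha is reduced geometrically each iteration (a common refinement).
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, (n, lb.size))
    light = np.array([f(p) for p in x])      # lower objective = brighter firefly
    a = alpha
    for _ in range(n_iter):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:      # j is brighter, so i moves towards j
                    r2 = float(np.sum((x[i] - x[j]) ** 2))
                    d1 = beta0 * np.exp(-gamma * r2) * (x[j] - x[i])
                    d2 = a * rng.normal(size=lb.size)
                    x[i] = np.clip(x[i] + d1 + d2, lb, ub)
                    light[i] = f(x[i])
        a *= alpha_decay
    k = int(light.argmin())
    return x[k], float(light[k])
```

Note that the brightest firefly is never attracted by any other, so it only moves once a neighbor becomes brighter; this is what preserves the best solution found so far.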
It can be seen that some algorithms such as SA, DE, PSO, and HS are special cases of the FA (Yang 2020):
-
If \(\gamma\) is increased, the attractiveness (light intensity) decreases rapidly, so \(D_{1}\) becomes negligible, and the algorithm reduces to the standard SA algorithm.
-
If \(\gamma\) is decreased (\(\gamma \to 0\)), the exponential term in \(D_{1}\) tends to one (\(e^{{ - \gamma r_{ij}^{2} }} \to 1\)). If, in addition, \({\upalpha } = 0\), FA becomes a variant of the DE algorithm.
-
If \(e^{{ - \gamma r_{ij}^{2} }} = 1\) and \(X_{j}^{n}\) in D1 is replaced by the current global best solution (\(G_{{{\text{best}}}}\)), FA also becomes the accelerated PSO algorithm.
-
If \(\beta_{0} = 0\), and \(\varepsilon_{i}^{n}\) is defined as a function of \(X_{i}^{{}}\), Eq. (24) represents a pitch adjustment variant in the HS algorithm.
Figure 17 represents the flowchart of the FA algorithm.
A.3: Grey wolf optimizer (GWO)
This swarm-based optimization algorithm is based on the hunting behavior of grey wolves. They live in packs that follow a strict social dominance hierarchy, as shown in Fig. 18 (Mirjalili et al. 2014). The alphas are at the top: they are the pack's leaders and are mainly in charge of planning the hunt, the resting place, the time to wake, etc. (David Mech 1999). The next level in the hierarchy is the betas, who help the alphas in decision making and other pack activities; the betas are the most suitable candidates to become alphas once the alphas pass away or grow old. The lowest level in the grey wolf hierarchy is the omegas, who have to obey all the other dominant wolves and are the last wolves permitted to eat. The deltas have to obey the alphas and betas, but dominate the omegas. In the algorithm, the best solution is the alpha (\(\alpha\)), while the beta (\(\beta\)) and delta (\(\delta\)) are the second and third best solutions, respectively.
The stages of the hunting are considered as follows (David Mech 1999):
-
Track, chase, and approach the prey.
-
Pursue, encircle, and harass the prey till it stops moving.
-
Attack towards the prey.
For modeling the encircling behavior, the following equations are considered (Mirjalili et al. 2014):
Moreover, \(a\) decreases from 2 to 0 during the optimization process, \({r}_{1}\) and \({r}_{2}\) are random values in [0, 1], and \(t\) denotes the current iteration number.
To simulate the hunting behavior of grey wolves, Eq. (28) is evaluated for the alpha, beta, and delta wolves to obtain \(\vec{X}_{1}\), \(\vec{X}_{2}\), and \(\vec{X}_{3}\), and \(\vec{X}(t + 1)\) is then taken as their mean value (Mirjalili et al. 2014):
In the exploration (search for prey) phase, grey wolves mainly look for prey based on the alpha, beta, and delta location. They diverge from each other to search for prey and converge to attack prey.
In summary, a random initial population of grey wolves is produced in the first step of the algorithm. Then, through the iterations, the possible position of the prey is estimated by the alpha, beta, and delta wolves, and the wolves update their distances to the prey. The algorithm terminates once a stop criterion is satisfied. Figure 19 shows the flowchart of the GWO algorithm.
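The steps summarized above can be sketched in Python. Since the encircling and hunting equations are only referenced (not reproduced) here, the sketch assumes the standard GWO update of Mirjalili et al. (2014), namely A = 2a·r1 − a, C = 2·r2, D = |C·Xp − X|, and X' = Xp − A·D for each of the three leaders; it is an illustration, not the authors' implementation:

```python
import numpy as np

def gwo(f, lb, ub, n_wolves=30, n_iter=200, seed=0):
    """Minimize f over the box [lb, ub] with a basic grey wolf optimizer.

    Each wolf updates its position towards the mean of three estimates
    guided by the alpha, beta, and delta wolves (the three best solutions).
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    x = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(n_iter):
        vals = np.array([f(p) for p in x])
        order = vals.argsort()
        # alpha, beta, delta: best, second best, and third best wolves
        leaders = (x[order[0]], x[order[1]], x[order[2]])
        a = 2.0 * (1 - t / n_iter)            # a decreases linearly from 2 to 0
        new = np.zeros_like(x)
        for leader in leaders:
            r1 = rng.random((n_wolves, dim))
            r2 = rng.random((n_wolves, dim))
            A = 2.0 * a * r1 - a
            C = 2.0 * r2
            D = np.abs(C * leader - x)        # distance to the leader's estimate
            new += leader - A * D             # position estimate guided by this leader
        x = np.clip(new / 3.0, lb, ub)        # mean of the three estimates
    vals = np.array([f(p) for p in x])
    return x[vals.argmin()], float(vals.min())
```

When |A| > 1 the wolves diverge from the prey estimate (exploration); as a shrinks, |A| < 1 and they converge to attack (exploitation), matching the description above.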
A.4: Grasshopper Optimization Algorithm (GOA)
The Grasshopper Optimization Algorithm (GOA) was developed in 2017 (Saremi et al. 2017; Feng et al. 2020) as a swarm-based optimization algorithm inspired by grasshopper swarms in nature. Grasshoppers form the largest swarms of all animals; the movement of millions of grasshoppers resembles rolling cylinders (Feng et al. 2020). GOA is thus inspired by the long-range, irregular motion of grasshoppers in giant swarms, while each grasshopper also has a local motion inside the swarm. If the search process is divided into exploration and exploitation stages, the grasshoppers' long-range, sudden motion describes exploration, and the local motion towards better food sources describes exploitation (Saremi et al. 2017; Feng et al. 2020). The swarm motion is influenced by the individuals' local motion, food sources, the wind, and gravity. Equation (29) represents this behavior (Saremi et al. 2017):
where \(X_{i}\) and \(S_{i}\) represent the ith grasshopper's position and social interaction, respectively, \(G_{i}\) is the gravity force on the ith grasshopper, and \(A_{i}\) represents the wind effect. Besides, \(r_{1}\), \(r_{2}\), and \(r_{3}\) are random numbers that introduce random behavior into the algorithm.
The social interaction, which is the main term of Eq. (29), can be represented as follows (Saremi et al. 2017; Mirjalili et al. 2018):
where \(d_{ij}\) and \(\widehat{{d_{ij} }}\) are the distance between two grasshoppers (\(d_{ij} = \left| {X_{i} - X_{j} } \right|\)) and its unit vector (\(\widehat{{d_{ij} }} = \frac{{X_{i} - X_{j} }}{{d_{ij} }}\)), respectively. Moreover, \(s\) simulates the strength of the social forces as follows (Saremi et al. 2017; Mirjalili et al. 2018):
where \(f\) and \(l\) represent the attraction strength and the attractive length scale, respectively. Changing these two parameters leads to different social behaviors in the grasshopper swarm. The gravity term (G) in Eq. (29) is obtained as follows:
where \(g\) is the gravitational factor and \(\widehat{{e_{{\text{g}}} }}\) is the unit vector towards the center of the earth. Furthermore, for the wind term (A) in Eq. (29), one can use the following relation:
where \(u\) and \(\widehat{{e_{{\text{w}}} }}\) are the constant drift and the unit vector in the wind direction, respectively. The influences of gravity and wind are significantly weaker than the interactions between grasshoppers in the swarm, so the following equation is used for finding the new positions instead of Eq. (29):
where \({\text{ub}}\) and \({\text{lb}}\) denote the upper and lower bound vectors of the design variables, and \(\widehat{{T_{{\text{d}}} }}\) is the best solution found so far. In Eq. (34), it is assumed that \(g = 0\) (no gravity) and that the wind direction (\(\widehat{{e_{{\text{w}}} }}\)) is towards \(\widehat{{T_{{\text{d}}} }}\). Also, \(c\) is a coefficient that balances exploration and exploitation, decreasing as follows:
where \(n\) and \(n_{{{\text{max}}}}\) are the current iteration number and the maximum number of iterations, respectively, and \(c_{\max }\) and \(c_{\min }\) are the maximum and minimum values of \(c\), usually chosen as 1 and 0.00001, respectively. Figure 20 represents the flowchart of the GOA algorithm.
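The simplified update of Eq. (34) and the decreasing coefficient of Eq. (35) can be sketched as below. This is an illustrative minimal version, not the authors' code; mapping the inter-grasshopper distance into [2, 4) before applying \(s\) is an implementation detail borrowed from common GOA implementations (it keeps \(s\) in its sensitive range) and is not stated in the text:

```python
import numpy as np

def s(r, f=0.5, l=1.5):
    """Social force strength: attraction strength f, attractive length scale l."""
    return f * np.exp(-r / l) - np.exp(-r)

def goa(fun, lb, ub, n=30, n_iter=200, c_max=1.0, c_min=1e-5, seed=0):
    """Minimize fun over [lb, ub] with the simplified GOA update (Eq. 34):
    gravity is dropped (g = 0) and the wind is aligned with the target T_d."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, (n, lb.size))
    vals = np.array([fun(p) for p in x])
    target, t_val = x[vals.argmin()].copy(), float(vals.min())   # T_d, best so far
    for it in range(1, n_iter + 1):
        c = c_max - it * (c_max - c_min) / n_iter                # Eq. (35)
        new = np.empty_like(x)
        for i in range(n):
            total = np.zeros(lb.size)
            for j in range(n):
                if i == j:
                    continue
                d = float(np.linalg.norm(x[j] - x[i]))
                d_hat = (x[j] - x[i]) / (d + 1e-12)              # unit vector
                d_n = 2.0 + d % 2.0                              # map distance into [2, 4)
                total += c * (ub - lb) / 2.0 * s(d_n) * d_hat
            new[i] = np.clip(c * total + target, lb, ub)
        x = new
        vals = np.array([fun(p) for p in x])
        if float(vals.min()) < t_val:                            # update T_d
            t_val, target = float(vals.min()), x[vals.argmin()].copy()
    return target, t_val
```

Because the outer coefficient \(c\) multiplies the whole social term, the swarm's exploration radius around the target shrinks as \(c\) decays, reproducing the exploration-to-exploitation transition described above.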
Cite this article
Nazari, M., Varedi-Koulaei, S.M. & Nazari, M. Flow characteristics prediction in a flow-focusing microchannel for a desired droplet size using an inverse model: experimental and numerical study. Microfluid Nanofluid 26, 26 (2022). https://doi.org/10.1007/s10404-022-02529-z