
A Markov Chain Analysis of Genetic Algorithms: Large Deviation Principle Approach

Published online by Cambridge University Press:  14 July 2016

Joe Suzuki*
Affiliation:
Osaka University
* Postal address: Department of Mathematics, Osaka University, Toyonaka, Osaka 560-0043, Japan. Email address: suzuki@math.sci.osaka-u.ac.jp

Abstract


In this paper we prove that the stationary distribution of populations in genetic algorithms concentrates on the uniform population with the highest fitness value as the selective pressure goes to ∞ and the mutation probability goes to 0. The sufficient condition we obtain is based on the work of Albuquerque and Mazza (2000), who, following Cerf (1998), applied the large deviation principle approach (Freidlin-Wentzell theory) to the Markov chain of genetic algorithms. Our sufficient condition is more general than that of Albuquerque and Mazza, and covers a set of parameters that was not found by Cerf.
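
To illustrate the concentration statement numerically, the following toy simulation is a minimal sketch only, not the construction analysed in the paper: it runs a simple mutation-selection genetic algorithm (Boltzmann selection with a selective-pressure parameter beta, independent bitwise mutation with probability p, and a OneMax fitness; the parameter names and values are illustrative assumptions) and estimates the fraction of time the chain spends in the uniform population consisting of copies of the fittest string. That fraction should grow as beta increases and p decreases, in line with the abstract.

# Illustrative sketch (assumptions: OneMax fitness, Boltzmann selection,
# bitwise mutation); NOT the paper's construction or parameter regime.
import math
import random

ELL = 4           # string length
N_POP = 5         # population size
STEPS = 100_000   # chain length for the empirical estimate

def fitness(x):
    # OneMax: number of ones; the unique maximiser is the all-ones string
    return sum(x)

def boltzmann_select(pop, beta):
    # selection with selective pressure beta: weight proportional to exp(beta * fitness)
    weights = [math.exp(beta * fitness(x)) for x in pop]
    return random.choices(pop, weights=weights, k=len(pop))

def mutate(x, p):
    # flip each bit independently with probability p
    return tuple(b ^ 1 if random.random() < p else b for b in x)

def time_at_best_uniform(beta, p, steps=STEPS):
    # fraction of generations in which the population is uniform on the fittest string
    best = tuple([1] * ELL)
    pop = [tuple(random.randint(0, 1) for _ in range(ELL)) for _ in range(N_POP)]
    hits = 0
    for _ in range(steps):
        pop = [mutate(x, p) for x in boltzmann_select(pop, beta)]
        if all(x == best for x in pop):
            hits += 1
    return hits / steps

if __name__ == "__main__":
    random.seed(0)
    for beta, p in [(0.5, 0.10), (2.0, 0.02), (5.0, 0.005)]:
        frac = time_at_best_uniform(beta, p)
        print(f"beta={beta:4.1f}  p={p:5.3f}  time at best uniform population: {frac:.3f}")

Boltzmann selection is used here only because it exposes an explicit selective-pressure parameter; the conditions treated in the paper are stated for a more general class of mutation-selection chains.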

Type
Research Article
Copyright
Copyright © Applied Probability Trust 2010 

References

[1] Albuquerque, P. and Mazza, C. (2001). Mutation-selection algorithms: A large deviation approach. In Foundations of Genetic Algorithms-6, eds Martin, W. N. and Spears, W. M., Morgan Kaufmann, San Francisco, pp. 227–240.
[2] Catoni, O. (1997). Simulated annealing algorithms and Markov chains with rare transitions. Séminaire de Probabilités XXXIII. Lecture Notes Math. 709, Springer, Berlin, pp. 69–119.
[3] Cerf, R. (1996). The dynamics of mutation-selection algorithms with large population sizes. Ann. Inst. H. Poincaré Prob. Statist. 32, 455–508.
[4] Cerf, R. (1998). Asymptotic convergence of genetic algorithms. Adv. Appl. Prob. 30, 521–550.
[5] Davis, T. E. and Principe, J. C. (1991). A simulated annealing-like convergence theory for the simple genetic algorithm. In Proc. 4th Internat. Conf. Genetic Algorithms, eds Belew, R. K. and Booker, L. B., Morgan Kaufmann, San Mateo, CA, pp. 174–181.
[6] De Silva, U. C. and Suzuki, J. (2005). On the stationary distribution of GAs with positive crossover probability. In Proc. Genetic Evolutionary Comput. Conf. (GECCO 2005), ACM, Washington, DC, pp. 1147–1151.
[7] François, O. (2002). Global optimization with exploration/selection algorithms and simulated annealing. Ann. Appl. Prob. 12, 248–271.
[8] Freidlin, M. I. and Wentzell, A. D. (1984). Random Perturbations of Dynamical Systems. Springer, New York.
[9] Goldberg, D. E. (1988). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA.
[10] Rigal, L. and Truffet, L. (2007). A new genetic algorithm specifically based on mutation and selection. Adv. Appl. Prob. 39, 141–161.
[11] Suzuki, J. (1995). A Markov chain analysis of simple genetic algorithms. IEEE Trans. Systems Man Cybernetics 25, 655–659.
[12] Suzuki, J. (1998). A further result on the Markov chain model of genetic algorithms and its application to a simulated annealing-like strategy. IEEE Trans. Systems Man Cybernetics 28, 95–102.