Optimality and Stability of Symmetric Evolutionary Games with Applications in Genetic Selection

Symmetric evolutionary games, i.e., evolutionary games with symmetric fitness matrices, have important applications in population genetics, where they can be used to model, for example, the selection and evolution of the genotypes of a given population. In this paper, we review the theory for obtaining optimal and stable strategies for symmetric evolutionary games, and provide some new proofs and computational methods. In particular, we review the relationship between the symmetric evolutionary game and the generalized knapsack problem, and discuss the first- and second-order necessary and sufficient conditions that can be derived from this relationship for testing the optimality and stability of the strategies. Some of the conditions are given in forms different from those in previous work and can be verified more efficiently. We also derive computational methods for the evaluation of the conditions that are more efficient than conventional approaches. We demonstrate how these conditions can be applied to justify the strategies and their stabilities for a special class of genetic selection games, including some arising in the study of genetic disorders.

1. Introduction. We consider an n-strategy evolutionary game defined by a symmetric fitness matrix A ∈ R^{n×n}. Let S = {x ∈ R^n : x ≥ 0, Σ_i x_i = 1} be the set of all mixed strategies. The problem is to find an optimal strategy x* ∈ S such that

x*ᵀAx* ≥ xᵀAx* for all x ∈ S. (1)

We call this problem a symmetric evolutionary game, or SEgame for short. The problem has important applications in population genetics, where it can be used to model and study the evolution of genotypes in a given population when their corresponding phenotypes are under selection pressures.
The modeling of genetic selection has a long history [6]. It can be traced back to the earliest mathematical work in population genetics in the early twentieth century, including the Hardy-Weinberg Law of G. H. Hardy and W. Weinberg in 1908 [8,20] and the Fundamental Theorem of Natural Selection of R. A. Fisher in 1930 [7]. The field was especially revived in the 1970s, when J. Maynard Smith introduced game theory to biology and developed evolutionary game theory for the study of the evolution of populations of competing species [10]. In this theory, a genetic selection problem can in particular be modeled as a SEgame [9].
The SEgame has a close relationship with the generalized knapsack problem, or GKproblem for short, which is to find an optimal solution x* ∈ R^n for the following maximization problem:

max_{x ∈ R^n} xᵀAx/2 subject to Σ_i x_i = 1, x ≥ 0. (2)

The GKproblem has been studied extensively, with applications in solving maximum clique problems [11], in convex quadratic programming [15], and especially in game theoretic modeling [2].
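As a concrete illustration (our own sketch, not a method from the paper), the GKproblem in (2) can be handed to any standard constrained solver. The snippet below uses scipy's SLSQP routine to maximize xᵀAx/2 over the simplex for a 2×2 symmetric fitness matrix whose off-diagonal entries exceed its diagonal, so the maximizer is the interior strategy (1/2, 1/2):

```python
import numpy as np
from scipy.optimize import minimize

def solve_gk(A, x0=None):
    """Maximize x^T A x / 2 over the simplex {x >= 0, sum_i x_i = 1}."""
    n = A.shape[0]
    x0 = np.full(n, 1.0 / n) if x0 is None else np.asarray(x0, dtype=float)
    res = minimize(
        lambda x: -0.5 * x @ A @ x,      # minimize the negated objective
        x0,
        jac=lambda x: -(A @ x),          # gradient of the negated objective
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,         # x_i >= 0 (x_i <= 1 is implied)
        constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
    )
    return res.x

# A symmetric fitness matrix with "heterozygote advantage": the off-diagonal
# entries exceed the diagonal ones, so the optimum is the mixed strategy.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
x_star = solve_gk(A, x0=[0.8, 0.2])
print(np.round(x_star, 6))   # approximately [0.5 0.5]
```

Any quadratic-programming solver would do equally well here; SLSQP is used only because it handles the simplex constraints directly.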
In this paper, we review the theory for obtaining optimal and stable strategies for symmetric evolutionary games, and provide some new proofs and computational methods. In particular, we review the relationship between the symmetric evolutionary game and the generalized knapsack problem, and discuss the first- and second-order necessary and sufficient conditions that can be derived from this relationship for testing the optimality and stability of the strategies. Some of the conditions are given in forms different from those in previous work and can be verified more efficiently. We also derive computational methods for the evaluation of the conditions that are more efficient than conventional approaches. We demonstrate how these conditions can be applied to justify the strategies and their stabilities for a special class of genetic selection games, including some arising in the study of genetic disorders.
1.1. Further mathematical background. A two-player game is said to be symmetric if the players share the same fitness matrix and the same set of strategies. Let A ∈ R^{n×n} be the fitness matrix and S = {x ∈ R^n : x ≥ 0, Σ_i x_i = 1} the set of all mixed strategies. Let x ∈ S be the strategy played by player I and y ∈ S that played by player II. Then, the fitness for player I can be defined by a function π(x, y) = xᵀAy and that for player II by π(y, x) = yᵀAx. A pair of strategies (x*, y*) is said to be optimal if x*ᵀAy* ≥ xᵀAy* for all x ∈ S and y*ᵀAx* ≥ yᵀAx* for all y ∈ S, in which case x* and y* are said to be best responses to each other (see Fig. 1).
A special class of symmetric games is to find a strategy x* ∈ S that is the best response to itself, i.e., player I and player II play the same strategy x* and x*ᵀAx* ≥ xᵀAx* for all x ∈ S. This class of games is often used to model the evolution of a population of competing species, with player I being a particular individual and player II a typical individual in the population. A strategy x for player I specifies the species type the particular individual prefers to be. It could be a pure species type, i.e., x = e_i for some i, where e_i is the ith unit vector, or a mixed one, with x_i < 1 for all i. By a mixed species type x we mean that the individual plays species i with frequency x_i. On the other hand, a strategy y for player II specifies the typical species type of an individual in the population, which depends on the species composition of the population. More specifically, if the proportion of species i in the population is y_i, then the chance for a typical individual to be of species i is also y_i. Therefore, y is also a population profile, and xᵀAy is basically the fitness of species x in population y. Such a game is called a population game, an evolutionary game, or a game against the field [16,19]. The goal of the game is to find an optimal strategy x* ∈ S so that in population x*, an individual cannot find a better strategy than x*, i.e., x*ᵀAx* ≥ xᵀAx* for all x ∈ S, which is when the population has reached the so-called Nash equilibrium. Biologically, this is when the population has reached a state in which the optimal strategy for an individual is a species type consistent with the typical species type of the population. If the fitness matrix of a symmetric game is itself symmetric, the game is called a doubly symmetric game [19]. An evolutionary game with a symmetric fitness matrix is a doubly symmetric game, which is what we call a symmetric evolutionary game, i.e., a SEgame as given in (1).

Figure 1. Two-Player Game: A two-player, two-strategy symmetric game. The strategies for player I are given in the vector x = (x_1, x_2)ᵀ and those for player II in y = (y_1, y_2)ᵀ, with x, y ∈ S = {x ∈ R² : Σ_i x_i = 1, x_i ≥ 0, i = 1, 2}. The fitness A_{i,j} of the pure-strategy pair (i, j) is given in the (i, j)-entry of the 2×2 fitness matrix A. A strategy pair (x*, y*) is said to be optimal if x*ᵀAy* ≥ xᵀAy* for all x ∈ S and y*ᵀAx* ≥ yᵀAx* for all y ∈ S, in which case the game is said to have reached a Nash equilibrium.

1.2. Further biological background. SEgames can be used to model genetic selection and, in particular, allele selection. An allele is one of several possible forms of a gene. Most multi-cellular organisms are diploid, i.e., their chromosomes form homologous pairs. Each pair of chromosomes has a pair of alleles at each genetic locus. Thus, n different alleles may form n² different allele pairs, as the two alleles in a pair need not be the same. Different allele pairs are considered to be different genotypes, which may result in different phenotypes, or in other words, different genetic traits (see Fig. 2).
The fitness of all the different allele pairs, or in other words, all the different genotypes at a given genetic locus, can then be given in a matrix with the rows corresponding to the choices for the first allele and the columns to the choices for the second allele in the allele pair. Again, n different alleles give n different choices for both the first and second alleles in the allele pair, and hence an n × n fitness matrix. With such a fitness matrix, a genetic selection game can then be defined with the choices of the first and second alleles in the allele pair at a given genetic locus as the strategies for players I and II. Here, player I can be considered as an individual with a specific choice of allele at the given locus. The choice could be one of the possible alleles or a combination of them, with each selected with some chance. The former corresponds to a pure strategy, the latter to a mixed one. In any case, if there are n different alleles, the strategy for player I can be represented by a vector x ∈ R^n, x ≥ 0, Σ_i x_i = 1. On the other hand, player II can be considered as a typical individual in the given population. This individual could have only one of the possible alleles at the given locus or a combination of them, with each selected with some chance. Similar to player I, if there are n different alleles, the strategy for player II can be represented by a vector y ∈ R^n, y ≥ 0, Σ_i y_i = 1. This strategy y is really the same as the composition of alleles at the given locus in the whole population. Therefore, it is also the allele profile of the population for this particular genetic locus. Let the fitness matrix be given by A ∈ R^{n×n}, and let S = {x ∈ R^n : x ≥ 0, Σ_i x_i = 1}. The average fitness of an allele choice x ∈ S in an allele population y ∈ S is then xᵀAy. We want to find an optimal choice x* ∈ S such that x*ᵀAx* ≥ xᵀAx* for all x ∈ S, i.e., in allele population x*, an individual with allele choice x other than x* will not have a better average fitness than allele choice x* [9]. Note that the fitness for allele pair (i, j) is usually the same as that for (j, i). Therefore, the fitness matrix for genetic selection is typically symmetric, and the corresponding game is then a SEgame.

Figure 2. Genetic Selection: In diploid species, there are always two alleles at each genetic locus. Each pair of alleles determines a certain genotype, which in turn determines a certain phenotype. For example, in Mendel's classical experiment, the color of the flowers depends on the pairing of the alleles at a specific genetic locus, one for pink color and dominant, and another for white and recessive. Let the dominant allele be denoted by A and the recessive one by a. There can be four possible allele pairs: AA, Aa, aA, and aa. Since A is dominant, AA, Aa, and aA will produce pink flowers, while aa will produce white ones. These genotypic and phenotypic outcomes can be summarized in a 2×2 allele-pairing matrix as arranged in the figure.
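The average-fitness computation described above is a single bilinear form. As a toy illustration (the numbers below are hypothetical, chosen only to mimic a dominant/recessive fitness pattern), one can tabulate a 2-allele fitness matrix and evaluate xᵀAy:

```python
import numpy as np

# Hypothetical 2-allele fitness matrix: rows/columns index the first/second
# allele in the pair (allele 1 = A, allele 2 = a).  The values are made up
# for illustration only; the symmetry F[i, j] = F[j, i] reflects that the
# fitness of allele pair (i, j) equals that of (j, i).
F = np.array([[1.0, 1.5],
              [1.5, 0.5]])

def average_fitness(x, y, F):
    """Average fitness x^T F y of allele choice x in allele population y."""
    return x @ F @ y

x = np.array([0.6, 0.4])   # an individual's mixed allele choice
y = np.array([0.7, 0.3])   # the population's allele profile
print(average_fitness(x, y, F))   # 0.6*1.15 + 0.4*1.2 = 1.17
```

The same call works unchanged for any number of alleles n, with F an n × n symmetric matrix and x, y points of the simplex S.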
2. GKproblems vs. SEgames. For an evolutionary game, it is well known that a mixed strategy x* ∈ S is optimal for the game if and only if x*ᵀAx* = (Ax*)_i for all i such that x*_i > 0 and x*ᵀAx* ≥ (Ax*)_i for all i such that x*_i = 0 [16,19]. These conditions also apply to any symmetric evolutionary game, i.e., any SEgame in (1), and can be stated formally as in the following theorem.
Theorem 2.1. Let A ∈ R^{n×n} be a symmetric fitness matrix and S = {x ∈ R^n : x ≥ 0, Σ_i x_i = 1} the set of all mixed strategies. Then, a strategy x* ∈ S is an optimal strategy for the SEgame in (1) if and only if there is a scalar λ* such that

x*_i ≥ 0, λ* − (Ax*)_i ≥ 0, i = 1, . . . , n, (3)
x*_i (λ* − (Ax*)_i) = 0, i = 1, . . . , n. (4)

The proof of the above theorem can be found in many textbooks such as [16,19]. Since it is helpful for understanding the nature of the optimal strategies of the SEgame, we also provide one here to keep the paper self-contained.

Proof. If x* ∈ S satisfies the conditions in (3) and (4), then by adding all the equations in (4) we obtain λ* = x*ᵀAx*. Let x ∈ S be an arbitrary strategy. Multiply the second inequality in (3) by x_i. Then, by adding all the second inequalities in (3), we obtain λ* − xᵀAx* ≥ 0, i.e., x*ᵀAx* ≥ xᵀAx*, since λ* = x*ᵀAx*. Therefore, x* is an optimal strategy for the SEgame in (1).
If x* ∈ S is an optimal strategy for the SEgame in (1), then x*ᵀAx* ≥ xᵀAx* for any x ∈ S, and therefore, with x = e_i, (Ax*)_i ≤ x*ᵀAx* for all i. Let λ* = x*ᵀAx*. Then λ* − (Ax*)_i ≥ 0 for all i, and the conditions in (3) hold. Suppose that the conditions in (4) do not all hold, i.e., x*_i (λ* − (Ax*)_i) > 0 for some i. By adding all the left-hand sides of the equations in (4), we then obtain λ* > x*ᵀAx*, which contradicts the fact that λ* = x*ᵀAx*. Therefore, the conditions in (4) must hold as well.

As we have mentioned in Section 1, the symmetric evolutionary game, i.e., the SEgame in (1), is closely related to the generalized knapsack problem, i.e., the GKproblem in (2). A knapsack problem originally refers to the problem of selecting a set of objects of different sizes and values to put into a given sack of fixed size so as to maximize the total value of the objects in the sack. The problem can be formulated as a linear program with a linear objective function Σ_i a_i x_i for the total value of the sack, where x_i and a_i are the size and unit value of object i, respectively, and with a linear constraint Σ_i x_i ≤ s, x_i ≥ 0, i = 1, . . . , n, on the total size of the objects that can be put into the sack, where n is the number of objects and s the size of the sack. The GKproblem in (2) can therefore be considered as a knapsack problem over n "objects", with the objective function generalized to a symmetric quadratic form xᵀAx/2 and with the "sack" restricted to the simplex S = {x ∈ R^n : x ≥ 0, Σ_i x_i = 1}. If we interpret the "objects" as the species fractions in a given population and the matrix A as the fitness matrix of the species, the objective function of the GKproblem in (2) is exactly half the average fitness of the population in the SEgame in (1). Therefore, the goal of the GKproblem in (2) is basically to maximize the average fitness of the population in the SEgame in (1).
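The conditions (3) and (4) of Theorem 2.1 are straightforward to check numerically for a candidate strategy. A minimal sketch (the function name and tolerances are our own):

```python
import numpy as np

def is_optimal_strategy(A, x, tol=1e-8):
    """Check the conditions of Theorem 2.1: with lam = x^T A x, we need
    (Ax)_i = lam wherever x_i > 0 and (Ax)_i <= lam wherever x_i = 0."""
    Ax = A @ x
    lam = x @ Ax                       # lam* = x*^T A x*
    support = x > tol                  # indices with x_i > 0
    equal_on_support = np.all(np.abs(Ax[support] - lam) < tol)
    bounded_off_support = np.all(Ax[~support] <= lam + tol)
    return bool(equal_on_support and bounded_off_support)

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(is_optimal_strategy(A, np.array([0.5, 0.5])))  # True:  (Ax)_i = 1.5 = lam
print(is_optimal_strategy(A, np.array([1.0, 0.0])))  # False: (Ax)_2 = 2 > lam = 1
```

For this matrix the pure strategy e_1 fails condition (3), while the mixed strategy (1/2, 1/2) satisfies both conditions with λ* = 3/2.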
Based on general optimization theory, an optimal solution to the GKproblem in (2) must satisfy certain conditions. We first consider a general constrained optimization problem

min_x f(x) subject to c_i(x) = 0, i ∈ E, c_i(x) ≥ 0, i ∈ I, (5)

where f(x) is the objective function, c_i(x) the constraint functions, E the set of indices for equality constraints, and I the set of indices for inequality constraints. Assume that f(x) and c_i(x) are all continuously differentiable. Let x be a feasible solution for the problem, i.e., c_i(x) = 0, i ∈ E, and c_i(x) ≥ 0, i ∈ I. Let E_0(x) be the set of indices of the constraints active at x, i.e., E_0(x) = E ∪ {i ∈ I : c_i(x) = 0}, and C_0(x) the Jacobian of the constraints active at x, i.e., C_0(x) = {∇c_i(x) : i ∈ E_0(x)}ᵀ. We then have a set of first-order necessary conditions for an optimal solution to the general constrained optimization problem in (5), as stated in the following theorem. Here, we say that x* ∈ R^n is an optimal solution for the general constrained optimization problem in (5) if x* is feasible, i.e., x* satisfies all the constraints, and if f(x*) ≤ f(x) for all feasible x in a small neighborhood U of x*.

Theorem 2.2 ([14]). Let x* ∈ R^n be an optimal solution to the general constrained optimization problem in (5). Assume that the gradients of the constraints active at x*, i.e., the vectors in C_0(x*), are linearly independent. Then, there must be a set of Lagrange multipliers λ* ∈ R^{|E|} and µ* ∈ R^{|I|} such that

∇_x L(x*, λ*, µ*) = 0, µ*_i ≥ 0, µ*_i c_i(x*) = 0, i ∈ I, (6)

where L(x, λ, µ) is called the Lagrangian function of the problem in (5),

L(x, λ, µ) = f(x) − Σ_{i∈E} λ_i c_i(x) − Σ_{i∈I} µ_i c_i(x).

The conditions in (6) are called the KKT conditions of the general constrained optimization problem in (5), named after W. Karush, H. Kuhn, and A. Tucker, who first discovered and proved them. As stated in Theorem 2.2, an optimal solution x* of the general constrained optimization problem in (5) must satisfy the KKT conditions, but a feasible solution x* that satisfies the KKT conditions, called a KKT point, may not always be an optimal solution.
We now apply Theorem 2.2 to the GKproblem in (2). By changing the maximization problem to a standard minimization problem, we have the objective function f(x) = −xᵀAx/2. If we name the nonnegativity constraints c_i(x) = x_i ≥ 0, i = 1, . . . , n, as the first to the nth constraints and the equality constraint c_{n+1}(x) = 1 − Σ_i x_i = 0 as the (n+1)th constraint, we then have I = {1, . . . , n} and E = {n + 1}. Let x be a feasible solution for the problem, and let E_0(x) be the set of indices of the constraints active at x, i.e., E_0(x) = {i ∈ I : x_i = 0} ∪ {n + 1}. Then C_0(x) is the matrix with {e_iᵀ : i ∈ I, x_i = 0} and −eᵀ as rows, where e_i is the ith unit vector and e = Σ_i e_i. For any x ∈ S, there is at least one i ∈ I such that x_i > 0, since x ≥ 0 and Σ_i x_i = 1. Therefore, E_0(x) includes the index n + 1 and a proper subset of the indices {i ∈ I}, and C_0(x) contains the vector −eᵀ and a proper subset of the vectors {e_iᵀ : i ∈ I}, which are always linearly independent. We then have the following first-order necessary conditions for the GKproblem in (2):

Theorem 2.3. Let A ∈ R^{n×n} be a symmetric fitness matrix and S = {x ∈ R^n : x ≥ 0, Σ_i x_i = 1} the set of all feasible solutions for the GKproblem in (2). If x* ∈ S is an optimal solution for this problem, then there must be a scalar λ* such that

x*_i ≥ 0, λ* − (Ax*)_i ≥ 0, i = 1, . . . , n, (7)
x*_i (λ* − (Ax*)_i) = 0, i = 1, . . . , n. (8)

Proof. The Lagrangian function for the GKproblem in (2) can be written in the following form:

L(x, λ, µ) = −xᵀAx/2 − λ(1 − Σ_i x_i) − Σ_i µ_i x_i,

where x ∈ R^n, λ ∈ R, µ ∈ R^n. Since for this problem the gradients of the active constraints at any x ∈ S, i.e., the vectors in C_0(x), are linearly independent, by Theorem 2.2, if x* ∈ S is an optimal solution to the GKproblem in (2), then there must be λ* ∈ R and µ* ∈ R^n such that

∇_x L(x*, λ*, µ*) = −Ax* + λ*e − µ* = 0, µ* ≥ 0, µ*_i x*_i = 0, i = 1, . . . , n.

By substituting µ* = λ*e − Ax* in all the formulas, we then have

λ* − (Ax*)_i ≥ 0, x*_i (λ* − (Ax*)_i) = 0, i = 1, . . . , n,

which, together with x* ≥ 0, are equivalent to the conditions in (7) and (8).
Note that the conditions in (3) and (4) of Theorem 2.1 and those in (7) and (8) of Theorem 2.3 are the same. However, this does not imply that the SEgame in (1) is equivalent to the GKproblem in (2), because the conditions are necessary and sufficient for an optimal strategy for the SEgame in (1) but only necessary for an optimal solution for the GKproblem in (2). Therefore, an optimal solution for the GKproblem in (2) must be an optimal strategy for the SEgame in (1), while the converse is not necessarily true. We state this conclusion as a corollary of Theorems 2.1 and 2.3 in the following.

Corollary 1. An optimal solution x* ∈ S for the GKproblem in (2) must be an optimal strategy for the SEgame in (1), while an optimal strategy x* ∈ S for the SEgame in (1) is only a KKT point for the GKproblem in (2), which is necessary but not sufficient for being optimal for the GKproblem in (2).
In any case, the above two types of problems are closely related. The properties of the optimal strategies for a SEgame can be investigated by examining the nature of the optimal solutions to the corresponding GKproblem. For example, the existence of an optimal strategy for a general game, which usually requires a more involved theoretical proof [13], becomes much easier to verify for a SEgame based on the relationship between the SEgame and the GKproblem: there is always an optimal solution for the GKproblem in (2), given the fact that the objective function of the problem is continuous and the feasible set is a bounded and closed simplex. Based on Corollary 1, an optimal solution for the GKproblem in (2) is an optimal strategy for the SEgame in (1). The next corollary then follows.

Corollary 2. There is always an optimal strategy, or in other words, a Nash equilibrium, for a given SEgame in (1).
The fact that an optimal strategy for the SEgame in (1) maximizes the objective function of the GKproblem in (2) has been recognized in [16,19] and discussed in great detail in [2]. However, those works focused on the equivalence between the two types of problems when the strategy is evolutionarily stable, weakly or strongly. Here, we have made a clear distinction between them and shown that the optimal strategies for the SEgame in (1) are not necessarily optimal solutions of the GKproblem in (2). When they are not, they can be local minimizers or saddle points of the GKproblem in (2). Though unstable, they can be interesting to analyze as well, as we will mention again in our concluding remarks in Section 8. Besides, we have provided detailed proofs of the necessary and sufficient conditions for both types of problems. Based on these proofs, we have been able to obtain Corollary 2 easily for the existence of the equilibrium state of the SEgame in (1).
3. Second-order optimality conditions. We now focus on the GKproblem in (2) and derive additional second-order necessary and sufficient conditions for its optimal solutions, and extend them to the solutions for the SEgame in (1). These conditions have been mentioned in several places in the literature [16,19] and analyzed in great detail especially in [2]. Here we review the conditions, with some given in forms different from those in [2]. They are in fact weaker conditions, but easier to verify, which is important for the later development of our computational methods for justifying the solutions and their stabilities for the GKproblems as well as the SEgames. We will comment more on these differences at the end of this section.
Consider again the general constrained optimization problem in (5). Let x* be an optimal solution to the problem. Let E_0(x*) be the set of indices of the constraints active at x*, i.e., E_0(x*) = E ∪ {i ∈ I : c_i(x*) = 0}, and C_0(x*) the Jacobian of the constraints active at x*, i.e., C_0(x*) = {∇c_i(x*) : i ∈ E_0(x*)}ᵀ. We then have the following second-order necessary conditions for x* to be an optimal solution to the problem in (5).

Theorem 3.1 ([14]). Let x* ∈ R^n be an optimal solution to the general constrained optimization problem in (5). Assume that C_0(x*) has full row rank m. Let Z_0 ∈ R^{n×(n−m)} be the null space matrix of C_0(x*). Then,

yᵀ Z_0ᵀ ∇²f(x*) Z_0 y ≥ 0 for all y ∈ R^{n−m}, (9)

i.e., the reduced Hessian of f(x) at x*, Z_0ᵀ ∇²f(x*) Z_0, must be positive semi-definite.

Now consider a KKT point x* ∈ R^n for the general constrained optimization problem in (5). Let E^0(x*) be the set of indices of the constraints strongly active at x*, i.e., E^0(x*) = E ∪ {i ∈ I : c_i(x*) = 0 and µ*_i > 0}, and C^0(x*) the Jacobian of the constraints strongly active at x*, i.e., C^0(x*) = {∇c_i(x*) : i ∈ E^0(x*)}ᵀ, where the µ*_i are the Lagrange multipliers for the inequality constraints in the KKT conditions. We then have the following second-order sufficient conditions for x* to be a strict optimal solution to the problem in (5).

Theorem 3.2 ([14]). Let x* ∈ R^n be a KKT point for the general constrained optimization problem in (5). Assume that C^0(x*) has full row rank m, and let Z^0 ∈ R^{n×(n−m)} be the null space matrix of C^0(x*). If

yᵀ (Z^0)ᵀ ∇²f(x*) Z^0 y > 0 for all y ∈ R^{n−m}, y ≠ 0, (10)
i.e., the reduced Hessian of f(x) at x*, (Z^0)ᵀ ∇²f(x*) Z^0, is positive definite, then x* must be a strict optimal solution to the problem in (5).
We now apply Theorems 3.1 and 3.2 to the GKproblem in (2). By changing the maximization problem to a standard minimization problem, we have the objective function f(x) = −xᵀAx/2 for the GKproblem in (2). If we name the nonnegativity constraints c_i(x) = x_i ≥ 0, i = 1, . . . , n, as the first to the nth constraints and the equality constraint c_{n+1}(x) = 1 − Σ_i x_i = 0 as the (n+1)th constraint, we then have I = {1, . . . , n} and E = {n + 1}. Let x* ∈ S be a KKT point for the GKproblem in (2). Let E_0(x*) be the set of indices of the constraints active at x*, i.e., E_0(x*) = {i ∈ I : x*_i = 0} ∪ {n + 1}, so that C_0(x*) is the matrix with {e_iᵀ : x*_i = 0} and −eᵀ as rows, where e_i is the ith unit vector and e = Σ_i e_i. For any x* ∈ S, there is at least one i ∈ I such that x*_i > 0, since x* ≥ 0 and Σ_i x*_i = 1. Therefore, E_0(x*) includes the index n + 1 and a proper subset of the indices {i ∈ I}, and C_0(x*) contains the vector −eᵀ and a proper subset of the vectors {e_iᵀ : i ∈ I} as rows, and is of full row rank. Note also that the Hessian of the objective function is ∇²f(x*) = −A. We then have the following second-order necessary conditions for x* to be an optimal solution to the GKproblem in (2).

Theorem 3.3. Let x* ∈ S be an optimal solution to the GKproblem in (2). Let the row rank of C_0(x*) be equal to m, and Z_0 ∈ R^{n×(n−m)} the null space matrix of C_0(x*). Then,

yᵀ Z_0ᵀ A Z_0 y ≤ 0 for all y ∈ R^{n−m}, y ≠ 0, (11)

i.e., the reduced Hessian of the objective function of the GKproblem in (2) at x*, Z_0ᵀ A Z_0, must be negative semi-definite.

Now consider a KKT point x* ∈ S. Let E^0(x*) be the set of indices of the constraints strongly active at x*, i.e., E^0(x*) = {i ∈ I : c_i(x*) = 0 and µ*_i > 0} ∪ E, and C^0(x*) the Jacobian of the constraints strongly active at x*, i.e., the matrix with {e_iᵀ : x*_i = 0 and µ*_i > 0} and −eᵀ as rows, where the µ*_i = λ* − (Ax*)_i are the Lagrange multipliers for the inequality constraints in the KKT conditions for the GKproblem in (2), e_i is the ith unit vector, and e = Σ_i e_i. Again, for any x* ∈ S, there is at least one i ∈ I such that x*_i > 0, since x* ≥ 0 and Σ_i x*_i = 1.
Therefore, E^0(x*) includes the index n + 1 and a proper subset of the indices {i ∈ I}, and C^0(x*) contains the vector −eᵀ and a proper subset of the vectors {e_iᵀ : i ∈ I} as rows, and is of full row rank. Note also that the Hessian of the objective function is ∇²f(x*) = −A. We then have the following second-order sufficient conditions for x* to be a strict optimal solution to the GKproblem in (2).

Theorem 3.4. Let x* ∈ S be a KKT point for the GKproblem in (2). Let the row rank of C^0(x*) be equal to m, and Z^0 ∈ R^{n×(n−m)} the null space matrix of C^0(x*). Then x* must be a strict optimal solution to the GKproblem in (2) if

yᵀ (Z^0)ᵀ A Z^0 y < 0 for all y ∈ R^{n−m}, y ≠ 0,

i.e., if the reduced Hessian of the objective function of the GKproblem in (2) at x*, (Z^0)ᵀ A Z^0, is negative definite.
Note that the conditions in Theorems 3.3 and 3.4 are either necessary or sufficient but not both. In fact, since the GKproblem in (2) is a quadratic program, it is possible to establish a second-order necessary and sufficient condition for its optimal solutions. For this purpose, we go back to the general constrained optimization problem in (5). Let x ∈ R^n be any feasible solution for the problem. We define the reduced tangent cone T(x) at x to be the set of vectors d ∈ R^n such that ∇c_i(x)ᵀd = 0 for all i ∈ E and all i ∈ I such that c_i is strongly active at x, and ∇c_i(x)ᵀd ≥ 0 for all i ∈ I such that c_i is weakly active at x.
Then, based on general optimization theory, we know that if the general constrained optimization problem in (5) is a quadratic program, a feasible solution x* ∈ R^n will be a strict optimal solution to the problem if and only if it is a KKT point and dᵀ∇²f(x*)d > 0 for all d ∈ T(x*), d ≠ 0. Define T_0(x*) = {d ∈ R^n : C_0(x*)d = 0} and T^0(x*) = {d ∈ R^n : C^0(x*)d = 0}, where C_0 and C^0 are as defined in Theorem 3.1 and Theorem 3.2. Then, clearly, T_0(x*) ⊆ T(x*) ⊆ T^0(x*). In particular, when all the active inequality constraints are strongly active at x*, C_0(x*) = C^0(x*) and T_0(x*) = T(x*) = T^0(x*). It follows that if the general constrained optimization problem in (5) is a quadratic program, then x* will be a strict optimal solution to the problem if and only if dᵀ∇²f(x*)d > 0 for all d ∈ T_0(x*) = T(x*) = T^0(x*), d ≠ 0.

We now consider the GKproblem in (2), which is a typical quadratic program with ∇²f(x*) = −A. Let Z_0 and Z^0 be the null space matrices of C_0(x*) and C^0(x*), respectively. If all the active inequality constraints are strongly active at x*, then C_0(x*) = C^0(x*), T_0(x*) = T^0(x*), and Z_0 = Z^0. Let Z = Z_0 = Z^0. Then, Z ∈ R^{n×(n−m)}, and T(x*) = T_0(x*) = T^0(x*) = {d ∈ R^n : d = Zy, y ∈ R^{n−m}}, where m is the row rank of C_0(x*) and C^0(x*). It follows that x* ∈ S is a strict optimal solution to the problem if and only if yᵀZᵀAZy < 0 for all y ∈ R^{n−m}, y ≠ 0. More precisely, we have:

Theorem 3.5. Let x* ∈ S be a KKT point for the GKproblem in (2). Assume that the active inequality constraints in S are all strongly active at x*. Then, x* ∈ S is a strict optimal solution to the GKproblem in (2) if and only if yᵀZᵀAZy < 0 for all y ∈ R^{n−m}, y ≠ 0, i.e., the reduced Hessian of the objective function of the GKproblem in (2) at x*, ZᵀAZ, is negative definite.
The second-order optimality conditions presented in this section can be useful for checking the optimality of the solutions for the GKproblems, and hence the strategies for the SEgames, beyond the conditions given in Theorems 2.1 and 2.3. In order to apply these conditions, all we need to do is find the null space matrices Z_0 or Z^0 and the eigenvalues of the reduced Hessians Z_0ᵀAZ_0 or (Z^0)ᵀAZ^0, to see if they are negative semi-definite or negative definite. For example, suppose that we have a KKT point x* ∈ S for the GKproblem in (2) at which the only active constraint is the equality constraint 1 − Σ_i x_i = 0. Then, C_0(x*) = C^0(x*) = {−eᵀ}, for which we can construct a null space matrix Z = Z_0 = Z^0 ∈ R^{n×(n−1)} such that Z_{i,j} = 0 for all i and j, except for Z_{i,i} = 1 and Z_{i+1,i} = −1. The optimality of x* can then be tested by checking the eigenvalues of the reduced Hessian ZᵀAZ. If any of the eigenvalues is positive, x* is not optimal, and if all the eigenvalues are negative, x* must be optimal and even strictly optimal. In both cases, x* remains an optimal strategy for the corresponding SEgame in (1). However, the stability of the solution may differ, as we will discuss in greater detail in the next section.
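The eigenvalue test just described can be sketched in a few lines. Assuming, as in the example above, that only the equality constraint is active, the null space matrix Z of −eᵀ is built with Z_{i,i} = 1 and Z_{i+1,i} = −1, and the reduced Hessian ZᵀAZ is classified by its eigenvalues (the function name is ours):

```python
import numpy as np

def reduced_hessian_test(A):
    """For a KKT point at which only the equality constraint sum_i x_i = 1 is
    active, build the null space matrix Z of -e^T (Z[i,i] = 1, Z[i+1,i] = -1,
    as in the text) and classify the reduced Hessian Z^T A Z."""
    n = A.shape[0]
    Z = np.zeros((n, n - 1))
    for i in range(n - 1):
        Z[i, i] = 1.0
        Z[i + 1, i] = -1.0
    assert np.allclose(np.ones(n) @ Z, 0.0)   # columns lie in the null space of e^T
    eig = np.linalg.eigvalsh(Z.T @ A @ Z)     # symmetric matrix -> real eigenvalues
    if np.all(eig < 0):
        return "negative definite"            # strict optimum of the GKproblem
    if np.any(eig > 0):
        return "indefinite or positive"       # not optimal for the GKproblem
    return "negative semi-definite"

A1 = np.array([[1.0, 2.0],
               [2.0, 1.0]])   # mixed KKT point (1/2, 1/2) is a strict maximizer
A2 = np.array([[2.0, 1.0],
               [1.0, 2.0]])   # mixed KKT point (1/2, 1/2) is not a maximizer
print(reduced_hessian_test(A1))   # negative definite
print(reduced_hessian_test(A2))   # indefinite or positive
```

For n = 2, ZᵀAZ is the scalar A_{1,1} − A_{1,2} − A_{2,1} + A_{2,2}, which is −2 for A1 and +2 for A2, matching the two printed classifications.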
Note that the second-order necessary and sufficient conditions for the optimal solutions of the GKproblem in (2) have been discussed in great detail in [2], where, related to our discussion, there are two necessary and sufficient conditions: (1) A feasible solution x* ∈ S for the GKproblem in (2) is a strict optimal solution if and only if it is a KKT point and dᵀAd < 0 for all d ∈ T(x*), d ≠ 0, where T(x*) is the reduced tangent cone of the problem at x*. (2) If all active inequalities for the GKproblem in (2) are strongly active at x*, then x* is a strict optimal solution if and only if ZᵀAZ is negative definite, where T(x*) becomes a linear space defined by the matrix Z. In our analysis, corresponding to (1), we have a necessary condition in Theorem 3.3 and a sufficient condition in Theorem 3.4 separately. They are not equivalent to, but are in fact weaker than, the condition in (1). The reason for doing so is that the condition in (1) is hard to test: it is equivalent to solving a matrix co-positivity problem, which is NP-hard in general [12]. On the other hand, the condition in Theorem 3.3 is equivalent to dᵀAd ≤ 0 for all d ∈ T_0(x*), which is a smaller cone than T(x*) and is actually a linear space, defined by Z_0. Therefore, the condition is equivalent to Z_0ᵀAZ_0 being negative semi-definite, which can be verified in polynomial time [18]. Likewise, the condition in Theorem 3.4 is equivalent to dᵀAd < 0 for all d ∈ T^0(x*), d ≠ 0, which is a larger cone than T(x*) and is actually a linear space, defined by Z^0. Therefore, the condition is equivalent to (Z^0)ᵀAZ^0 being negative definite, which can again be verified in polynomial time. In our analysis, corresponding to (2), we have an equivalent necessary and sufficient condition in Theorem 3.5. They are equivalent because if all active constraints for the GKproblem in (2) are strongly active at x*, then T(x*) = T_0(x*) = T^0(x*) and Z = Z_0 = Z^0.
It follows that dᵀAd < 0 for all d ∈ T(x*), d ≠ 0, is equivalent to ZᵀAZ being negative definite. This condition is verifiable in polynomial time, and we do not need to modify it. The second-order optimality conditions in Theorems 3.3, 3.4, and 3.5 are the basis for the later development of our second-order stability conditions in Section 5 and computational methods in Section 6.

4. Evolutionarily stable states. An important concept in evolutionary game theory is the evolutionary stability of an optimal strategy. It characterizes the ability of a population to resist small changes or invasions when at equilibrium. Let x* ∈ S be an optimal strategy. Then, the population is at the equilibrium state x*. Let x ∈ S be another arbitrary strategy. Mix x* and x ≠ x* so that the population changes to a new state, εx + (1 − ε)x*, for some small fraction ε > 0. Then, x* is said to be evolutionarily stable if it remains a better response to the new "invaded" population state. More precisely, we have the following definition.

Definition 4.1 ([16,19]). An optimal strategy x* ∈ S for an evolutionary game defined by a fitness matrix A is evolutionarily stable if there is a small number ε̄ ∈ (0, 1) such that for any x ∈ S, x ≠ x*,

x*ᵀA(εx + (1 − ε)x*) > xᵀA(εx + (1 − ε)x*) for all ε ∈ (0, ε̄).

Usually, it is not easy to prove the evolutionary stability of the optimal strategies of an evolutionary game directly from this definition. A more straightforward condition is to consider the strategies y in a small neighborhood U of the optimal strategy x* and check that no y ≠ x* prevails over x*, i.e., that there is no y ≠ x* with yᵀAy ≥ x*ᵀAy. It turns out that this condition is necessary and also sufficient:

Theorem 4.2 ([16,19]). An optimal strategy x* ∈ S for an evolutionary game is evolutionarily stable if and only if there is a small neighborhood U of x* such that

yᵀAy < x*ᵀAy for all y ∈ U ∩ S, y ≠ x*. (18)
Note that a SEgame is an evolutionary game, so the condition in (18) also applies to a SEgame. For a SEgame, x*^T A y = y^T A x* since A is symmetric, and y^T A x* ≤ x*^T A x* for all y ∈ S. Then, (18) implies that y^T A y < x*^T A x* for all y ∈ U ∩ S, y ≠ x*. This means that if x* is an evolutionarily stable strategy for a SEgame, it must be a strict local maximizer of the corresponding GKproblem. It turns out that the converse is also true. We state this property in the following theorem, and provide a slightly different proof from those given in [16,19].

Theorem 4.3 (see [16,19]). An optimal strategy x* ∈ S for a SEgame in (1) is evolutionarily stable if and only if it is a strict local maximizer of the corresponding GKproblem in (2).
Proof. Let x* ∈ S be an evolutionarily stable strategy for the SEgame in (1). Then, the necessity follows directly from Theorem 4.2, as we have discussed above.
To prove the sufficiency, assume that x* is a strict local maximizer of the GKproblem in (2). Then, there must be a neighborhood U = {y ∈ R^n : ||y − x*|| < δ} of x*, for some δ ∈ (0, 2), such that y^T A y < x*^T A x* for any y ∈ U ∩ S, y ≠ x*. Let x ∈ S be any mixed strategy, x ≠ x*, and let y = εx + (1−ε)x*, 0 < ε < 1. Note that ||x − x*|| ≤ ||x|| + ||x*|| < 2 and ||y − x*|| = ε||x − x*|| < 2ε. Then, for all ε < δ/2, y ∈ U and y^T A y < x*^T A x*. Note also that, with d = x − x* and A symmetric,

    y^T A y − x*^T A y = ε d^T A x* + ε² d^T A d = (1/4)(z^T A z − x*^T A x*),

where z = 2εx + (1−2ε)x*. Replace δ/2 by δ/4. Then, for all ε < δ/4, z ∈ U ∩ S, z ≠ x*, and therefore y^T A y < x*^T A y. Since y^T A y = ε x^T A y + (1−ε) x*^T A y, it follows that x^T A y < x*^T A y for all ε ∈ (0, ε̄) with ε̄ = δ/4. Since the above inequality holds for all x ∈ S, x ≠ x*, by Definition 4.1, x* must be an evolutionarily stable strategy for the SEgame in (1).
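The algebraic identity at the heart of the sufficiency argument, namely y^T A y − x*^T A y = (1/4)(z^T A z − x*^T A x*) for symmetric A, y = x* + ε(x − x*), and z = x* + 2ε(x − x*), can be sanity-checked numerically. A minimal sketch with randomly generated data (illustrative only):

```python
import random

random.seed(1)
n = 4

# Random symmetric matrix and random simplex points (illustrative data).
M = [[random.random() for _ in range(n)] for _ in range(n)]
A = [[(M[i][j] + M[j][i]) / 2 for j in range(n)] for i in range(n)]

def simplex_point():
    v = [random.random() for _ in range(n)]
    s = sum(v)
    return [vi / s for vi in v]

def quad(u, v):
    return sum(u[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

x_star, x = simplex_point(), simplex_point()
eps = 0.01
y = [x_star[i] + eps * (x[i] - x_star[i]) for i in range(n)]
z = [x_star[i] + 2 * eps * (x[i] - x_star[i]) for i in range(n)]

# Identity used in the proof (A symmetric):
# y^T A y - x*^T A y = (1/4) (z^T A z - x*^T A x*)
lhs = quad(y, y) - quad(x_star, y)
rhs = 0.25 * (quad(z, z) - quad(x_star, x_star))
print(abs(lhs - rhs) < 1e-12)  # prints True
```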

5. Second-order stability conditions. By combining Theorem 4.3 with the second-order optimality conditions for the optimal solutions to the GKproblem in (2) derived in Section 3, we can easily obtain a set of second-order stability conditions for the optimal strategies for the SEgame in (1). Let x* ∈ S be an optimal strategy for the SEgame in (1). Let C_0(x*) be the matrix with rows {e_i^T : x_i* = 0} and {−e^T}, where e_i is the ith unit vector and e = Σ_i e_i.

Theorem 5.1. Let x* ∈ S be an evolutionarily stable strategy for the SEgame in (1). Let the row rank of C_0(x*) be equal to m, and let Z_0 ∈ R^{n×(n−m)} be a null space matrix of C_0(x*). Then, Z_0^T A Z_0 must be negative semi-definite.
Proof. If x* ∈ S is an evolutionarily stable strategy for the SEgame in (1), then by Theorem 4.3, it must be a strict local maximizer of the GKproblem in (2). It then follows from Theorem 3.3 that Z_0^T A Z_0 must be negative semi-definite.

Now, let x* ∈ S be an optimal strategy for the SEgame in (1). Let C^0(x*) be the matrix with rows {e_i^T : x_i* = 0 and μ_i* > 0} and {−e^T}, where e_i is the ith unit vector, e = Σ_i e_i, and μ_i* = x*^T A x* − (Ax*)_i.

Theorem 5.2. Let x* ∈ S be an optimal strategy for the SEgame in (1). Let the row rank of C^0(x*) be equal to m, and let Z^0 ∈ R^{n×(n−m)} be a null space matrix of C^0(x*). If (Z^0)^T A Z^0 is negative definite, then x* must be an evolutionarily stable strategy.
Proof. If x* ∈ S is an optimal strategy for the SEgame in (1), then by Corollary 1, it must be a KKT point of the GKproblem in (2). Therefore, if (Z^0)^T A Z^0 is negative definite, x* must be a strict local maximizer of the GKproblem in (2) by Theorem 3.4, and hence an evolutionarily stable strategy for the SEgame in (1) by Theorem 4.3.
Finally, let x* ∈ S be an optimal strategy for the SEgame in (1). If μ_i* > 0 for all i such that x_i* = 0, i.e., all the active inequalities in S are strongly active at x*, then C_0(x*) = C^0(x*) and Z_0 = Z^0; denote this common null space matrix by Z.

Theorem 5.3. Let x* ∈ S be an optimal strategy for the SEgame in (1). Assume that the active inequalities in S are all strongly active at x*. Then, x* is an evolutionarily stable strategy for the SEgame in (1) if and only if Z^T A Z is negative definite.
Proof. If x* ∈ S is an optimal strategy for the SEgame in (1), then by Corollary 1, it must be a KKT point of the GKproblem in (2). Therefore, by Theorem 3.5, x* is a strict local maximizer of the GKproblem in (2) if and only if Z^T A Z is negative definite, and by Theorem 4.3, x* is an evolutionarily stable strategy for the SEgame in (1) if and only if it is such a strict local maximizer.
Although Theorems 5.1, 5.2, and 5.3 are simple extensions of Theorems 3.3, 3.4, and 3.5, they have great implications in practice, for they can be used to check the evolutionary stability of the optimal strategies for the SEgame in (1) directly. For example, if the fitness matrix A is positive definite, the reduced Hessian Z_0^T A Z_0 can never be negative semi-definite unless the dimension of the null space of C_0(x*) is zero, in other words, unless the row rank of C_0(x*) is n. Then, x_i* = 0 for all but one i, and the optimal and stable strategies of the SEgame in (1) can only be pure strategies. On the other hand, if the fitness matrix A is negative definite, the reduced Hessian (Z^0)^T A Z^0 is always negative definite whenever the dimension of the null space of C^0(x*) is nonzero, and then all optimal and non-pure strategies for the SEgame in (1) are evolutionarily stable. Even when C^0(x*) is only of rank one, i.e., x_i* > 0 for all i and the only row of C^0(x*) is −e^T, x* is still evolutionarily stable. Note that an optimal strategy for the SEgame in (1) must be a KKT point of the GKproblem in (2), but it may not be a local maximizer of the GKproblem in (2): it could be a local minimizer or a saddle point. Even if it is a local maximizer, it may not be evolutionarily stable unless it is a strict local maximizer. In other words, as a KKT point of the GKproblem in (2), an optimal strategy for the SEgame in (1) could be a non-strict local maximizer, a local minimizer, or a saddle point, and in all these cases it is evolutionarily unstable.
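As an illustration of the negative definite case, the sketch below (pure Python, with an assumed negative definite 3 × 3 fitness matrix, not from the paper) forms the reduced Hessian at a completely mixed strategy, where the only row of C^0(x*) is −e^T and Z can be constructed as in Section 6, and confirms that it is negative definite:

```python
# Assumed negative definite 3x3 fitness matrix A (illustrative), and the
# null space matrix Z of the single active constraint sum_i x_i = 1.
A = [[-2.0, 0.5, 0.0],
     [ 0.5, -1.5, 0.3],
     [ 0.0, 0.3, -1.0]]

# Z has columns e_1 - e_3 and e_2 - e_3 (identity with last row = -e^T).
Z = [[ 1.0, 0.0],
     [ 0.0, 1.0],
     [-1.0, -1.0]]

# Reduced Hessian H = Z^T A Z (2x2).
H = [[sum(Z[k][i] * A[k][l] * Z[l][j] for k in range(3) for l in range(3))
      for j in range(2)] for i in range(2)]

# A symmetric 2x2 matrix is negative definite iff its trace is negative
# and its determinant is positive.
tr = H[0][0] + H[1][1]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
print(tr < 0 and det > 0)  # prints True
```

Here H works out to [[-3.0, -0.8], [-0.8, -3.1]], which is negative definite, consistent with Theorem 5.2.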
Since the second-order stability conditions in Theorems 5.1 and 5.2 are derived from Theorems 3.3 and 3.4, they are also in different, and in fact weaker, forms than those given in [2]. As we have mentioned at the end of Section 3, the advantage of introducing these forms is that they can be checked in polynomial time, while the corresponding condition in [2] is equivalent to a matrix copositivity problem and can be NP-hard to verify. The condition in Theorem 5.3 is equivalent to the one given in [2], and it can be verified in polynomial time just as those in Theorems 5.1 and 5.2.

6. Computational methods. As we have discussed in previous sections, in order to test the second-order optimality or stability conditions, all we need to do is to form a reduced Hessian for the objective function of the GKproblem in (2) and check whether it is negative semi-definite or negative definite. The Hessian of the objective function of the GKproblem in (2) is basically the fitness matrix A, while the reduced Hessian is Z_0^T A Z_0 or (Z^0)^T A Z^0, where Z_0 and Z^0 are the null space matrices of C_0(x*) and C^0(x*), respectively, for the strategy x* ∈ S to be tested. There are three major steps to complete a second-order optimality or stability test: (1) Compute the null space matrix Z_0 or Z^0. (2) Form the reduced Hessian Z_0^T A Z_0 or (Z^0)^T A Z^0. (3) Compute the eigenvalues of the reduced Hessian. In step (1), it can be computationally expensive to find the null space matrix for a given matrix using a general approach, say the QR factorization, which typically requires O((n−m)n²) floating-point operations [18] if Z_0 or Z^0 is an n × (n−m) matrix. In step (2), each of the reduced Hessians involves two matrix-matrix multiplications, which also requires O(2(n−m)n²) floating-point operations.
However, because of the special structures of C_0(x*) and C^0(x*), the calculations in steps (1) and (2) can actually be carried out in a very simple way, at little computational cost. First of all, the matrices C_0(x*) and C^0(x*) do not require any computation. They can be constructed straightforwardly as follows: form an (n+1) × n matrix with the ith row equal to e_i^T, i = 1, ..., n, and the last row equal to −e^T, where e_i is the ith unit vector and e = Σ_i e_i. Then, for C_0(x*), remove each row i such that x_i* > 0; for C^0(x*), in addition to each row i such that x_i* > 0, remove each row i such that x_i* = 0 and μ_i* = 0. The resulting matrices thus consist of rows of the form

    ( 0 · · · 0  1  0 · · · 0 )   ⇐ e_i^T, one row for each remaining active index i
    ( −1 −1 · · · −1 )            ⇐ −e^T

Next, given the simple structure of C_0(x*) and C^0(x*), we in fact do not have to compute the null space matrices Z_0 and Z^0, either. They can also be constructed easily: form an n × n identity matrix with row k replaced by −e^T, for some k such that x_k* > 0. Then, remove the kth column; in addition, for Z_0, remove each column j such that x_j* = 0, while for Z^0, remove only each column j such that x_j* = 0 and μ_j* > 0. Each remaining column is thus of the form e_i − e_k, with a 1 in row i and a −1 in row k:

    ( · · · 1 · · · )
    ( · · · −1 · · · )   ⇐ row k such that x_k* > 0

It is easy to see that Z_0 and Z^0 are of full column rank n − m, where m is the row rank of C_0(x*) or C^0(x*), respectively. It is also easy to verify that C_0(x*) Z_0 = 0 and C^0(x*) Z^0 = 0, and therefore Z_0 and Z^0 can indeed be used as null space matrices of C_0(x*) and C^0(x*), respectively. Yet, the construction of Z_0 and Z^0 incurs essentially no computational cost at all.
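The constructions above, together with the difference-based computation of the reduced Hessian described next in this section, can be sketched in pure Python. The data are illustrative; `x_star` is an assumed strategy, and the active set corresponds to the C_0(x*) variant:

```python
# Sketch of the constructions in this section (illustrative data).
x_star = [0.6, 0.4, 0.0]
n = len(x_star)
active = [i for i in range(n) if x_star[i] == 0]

# C_0(x*): rows e_i^T for each active i, plus -e^T.
C = [[1.0 if j == i else 0.0 for j in range(n)] for i in active] + [[-1.0] * n]

# Z_0: identity with row k (some k with x_k* > 0) replaced by -e^T,
# keeping only columns j with j != k and j not active.
k = next(i for i in range(n) if x_star[i] > 0)
cols = [j for j in range(n) if j != k and j not in active]
Z = [[-1.0 if i == k else (1.0 if i == j else 0.0) for j in cols]
     for i in range(n)]

# Verify C_0(x*) Z_0 = 0, so Z_0 is indeed a null space matrix of C_0(x*).
CZ_zero = all(sum(C[i][l] * Z[l][j] for l in range(n)) == 0
              for i in range(len(C)) for j in range(len(cols)))

# Reduced Hessian by differences (each column of Z is e_j - e_k):
# B_j = A[:, j] - A[:, k], then row j of H is row j of B minus row k of B.
A = [[2.0, 0.5, 1.0],
     [0.5, 1.0, 0.0],
     [1.0, 0.0, 3.0]]
B = [[A[i][j] - A[i][k] for j in cols] for i in range(n)]
H = [[B[j][c] - B[k][c] for c in range(len(cols))] for j in cols]

# Cross-check against the full multiplication Z^T A Z.
H_full = [[sum(Z[a][i] * A[a][b] * Z[b][j] for a in range(n) for b in range(n))
           for j in range(len(cols))] for i in range(len(cols))]
print(CZ_zero, H == H_full, H)  # prints True True [[2.0]]
```

The difference-based H agrees with the full product Z^T A Z while using only column and row subtractions, as the flop counts in this section indicate.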
Finally, with Z_0 and Z^0 as given above, the computation of the reduced Hessians Z_0^T A Z_0 and (Z^0)^T A Z^0 does not require full matrix-matrix multiplications. Let H = Z^T A Z with Z = Z_0 or Z^0. We show how H can be calculated at less computational cost. Let B = AZ, so that H = Z^T A Z = Z^T B. Let B_j and Z_j be column j of B and Z, respectively, and note that Z_j = e_i − e_k for some i. Then, B_j = A Z_j can be obtained by subtracting column k of A from column i of A, with n floating-point operations. Since B has only n − m columns, the computation of B requires n(n−m) floating-point operations. Let H_i and Z_i^T be row i of H and Z^T, respectively, and note that Z_i^T = e_j^T − e_k^T for some j. Then, H_i = Z_i^T B can be obtained by subtracting row k of B from row j of B, with n − m floating-point operations. Since H has only n − m rows, the computation of H requires (n−m)² floating-point operations. Putting the calculations for B and H together, we obtain the whole reduced Hessian Z^T A Z in (n−m)(2n−m) floating-point operations, which is much less costly than full matrix-matrix multiplications.

7. Games for genetic selection. A genetic selection problem, and in particular the problem of allele selection at single or multiple genetic loci, can be formulated as a symmetric evolutionary game. Recall that the fitness of different allele pairs, or in other words different genotypes, at a given genetic locus can be given in a matrix with the rows corresponding to the choices for the first allele and the columns to the choices for the second allele in the allele pairs. If there are n different alleles, there will be n different choices for both the first and second alleles, and the fitness matrix will be an n × n matrix.
With such a fitness matrix, the allele selection game can be defined with the choices of the first and second alleles as the strategies for player I and player II of the game, where player I can be considered as a specific individual and player II as a typical individual in the given population. If there are n different alleles, the strategy for player I can be represented by a vector x ∈ R^n, x ≥ 0, Σ_i x_i = 1, and the strategy for player II by a vector y ∈ R^n, y ≥ 0, Σ_i y_i = 1. Let the fitness matrix be given by A ∈ R^{n×n}, and let S = {x ∈ R^n : x ≥ 0, Σ_i x_i = 1}. The average fitness of an allele choice x ∈ S in an allele population y ∈ S will be x^T A y. We then want to find an optimal choice x* ∈ S such that

    x*^T A x* ≥ x^T A x* for all x ∈ S,    (19)

i.e., in allele population x*, any individual with an allele choice x other than x* will not have a better average fitness than allele choice x*. Note that the fitness for allele pair (i, j) usually is the same as that for (j, i). Therefore, the fitness matrix for allele selection is typically symmetric, and the game in (19) is then a SEgame.
As we have discussed in previous sections, the selection game in (19) can be studied with a generalized knapsack problem:

    max_x x^T A x / 2, subject to Σ_i x_i = 1, x ≥ 0.    (20)

By Corollary 1, an optimal strategy of the selection game in (19) is equivalent to a KKT point of the GKproblem in (20), and by Theorem 4.3, if it is evolutionarily stable, it must correspond to a strict local maximizer of the GKproblem in (20), and vice versa. In addition, the optimality and stability conditions derived in previous sections all apply to the selection game in (19). We demonstrate the applications of these conditions with several example selection games, including some from the study of genetic disorders.
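By Corollary 1, a strategy is optimal for the selection game in (19) exactly when it is a KKT point of the GKproblem in (20). A minimal Python sketch of such a check follows; the fitness values are illustrative, and the helper `is_kkt_point` is a hypothetical name, not from the paper:

```python
# Assumed 2x2 fitness matrix (illustrative numbers only).
A = [[0.6, 1.0],
     [1.0, 0.4]]

def is_kkt_point(x, tol=1e-9):
    """Check the KKT conditions of the GKproblem at x on the simplex:
    mu_i = x^T A x - (A x)_i >= 0, with mu_i = 0 whenever x_i > 0."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    v = sum(x[i] * Ax[i] for i in range(n))   # x^T A x
    mu = [v - Ax[i] for i in range(n)]
    feasible = all(xi >= -tol for xi in x) and abs(sum(x) - 1) < tol
    ok = feasible and all(m >= -tol for m in mu) \
        and all(abs(x[i] * mu[i]) < tol for i in range(n))
    return ok, mu

ok_mixed, _ = is_kkt_point([0.6, 0.4])   # interior equalizer of this A
ok_other, _ = is_kkt_point([0.5, 0.5])   # not an equilibrium for this A
print(ok_mixed, ok_other)  # prints True False
```

For this matrix the strategy (0.6, 0.4) equalizes the payoffs of the two alleles and satisfies the KKT conditions, while (0.5, 0.5) does not.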
We first consider a genetic locus with two alleles, one dominant and the other recessive. Many genetic traits are due to the genotypic differences at a specific locus with two alleles. For example, in the well-known Mendel's experiment, the color of the flowers depends on the pair of alleles at a certain genetic locus, one for pink color and dominant, and the other for white and recessive. Let the dominant allele be denoted by A and the recessive one by a. There can be four possible allele pairs, AA, Aa, aA, and aa. Since A is dominant, AA, Aa, and aA will produce pink flowers, while aa will produce white ones (see Fig. 2). According to the Hardy-Weinberg Law, if pink flowers and white flowers have the same selection chance, the distributions of the genotypes AA, Aa, aA, and aa and of the alleles A and a in the population will not change over generations. Otherwise, different genotypes may have different fitness, and some may be selected while others are eliminated [5].
Indeed, some alleles, either dominant or recessive, may cause genetic disorders. When they are dominant, both homozygote and heterozygote pairs containing the dominant allele will cause the disorders. When they are recessive, only the homozygote pairs of two recessive alleles will cause the problem. In either case, the genotypes that cause the genetic disorders will have lower fitness than those that do not. For example, cystic fibrosis is a disease caused by a recessive allele. The normal allele, the dominant one, codes for a membrane protein that supports the transportation of ions for cells. It functions normally even in the heterozygote form with one abnormal allele. However, if both alleles are the recessive ones, there will be no normal membrane protein expression, giving rise to the cystic fibrosis disease. A further example is Huntington's disease, a degenerative disease of the nerve system, caused by a lethal dominant allele. Both homozygote and heterozygote pairs of alleles containing the dominant allele will be harmful. The opposite situation, where the heterozygote pairs are the fittest, could happen for example in the study of malaria infection, where A represents the wild-type gene and a represents the mutated gene. Individuals of the AA type are susceptible to malaria infection, while those of the Aa and aA types appear to be able to resist the infection. However, when aa types are formed, the individuals will develop a serious disease called the sickle cell disease. In any case, with a heterozygote advantage, i.e., A_{1,2} = A_{2,1} > A_{1,1}, A_{2,2} in the fitness matrix of (21), the SEgame in (21) has a single completely mixed solution

    x* = ( (A_{1,2} − A_{2,2})/d, (A_{2,1} − A_{1,1})/d )^T,  d = A_{1,2} + A_{2,1} − A_{1,1} − A_{2,2}.

Since both x*_1 > 0 and x*_2 > 0, it is easy to construct a null space matrix Z = (1, −1)^T and see that Z^T A Z = A_{1,1} + A_{2,2} − A_{1,2} − A_{2,1} < 0. Therefore, by Theorem 3.5, x* must be a strict local maximizer of the GKproblem in (22), and by Theorem 4.3 or 5.3, it is an evolutionarily stable state.
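The two-allele analysis above can be reproduced numerically. A small Python sketch, with assumed heterozygote-advantage fitness values (illustrative, not estimates from the literature):

```python
# Assumed symmetric 2x2 fitness matrix with heterozygote advantage:
# A12 = A21 > A11, A22 (illustrative numbers only).
A = [[0.7, 1.0],
     [1.0, 0.5]]

# Interior solution of the SEgame: (A x*)_1 = (A x*)_2, x*_1 + x*_2 = 1,
# giving x*_1 = (A12 - A22)/d and x*_2 = (A21 - A11)/d,
# with d = A12 + A21 - A11 - A22.
d = A[0][1] + A[1][0] - A[0][0] - A[1][1]
x_star = [(A[0][1] - A[1][1]) / d, (A[1][0] - A[0][0]) / d]

# Stability test (Theorem 5.3): with Z = (1, -1)^T,
# Z^T A Z = A11 + A22 - A12 - A21 < 0 under heterozygote advantage.
zAz = A[0][0] + A[1][1] - A[0][1] - A[1][0]
print(x_star, zAz < 0)
```

For these values, x* = (0.625, 0.375) and Z^T A Z = −0.8 < 0, so the mixed state is evolutionarily stable.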
Next, we consider a more complicated case related to genetic mutations for malaria infections. In Africa and Southeast Asia, where the human population has been exposed to serious malaria infection, certain genetic mutations have survived for a gene that codes for the hemoglobin proteins of blood cells. These mutations resist malaria infection, but may also cause other serious illnesses when in homozygote forms, such as the sickle cell disease. Here we consider three well-studied allele forms of this gene, the wild type, the S-mutation, and the C-mutation, denoted by the W, S, and C alleles. The normal genotype would be WW, but subnormal ones include WS, WC, and SC, which may have malaria resistance functions. Other forms, SS and CC, may cause other illnesses. These effects can be described with a 3 × 3 fitness matrix A, with the rows corresponding to the choices of W, S, and C for the first allele, and the columns to the choices of W, S, and C for the second allele, when forming the allele pairs, or in other words, the genotypes. Based on an estimate given in [17], this fitness matrix can be defined as follows:

    A = ( 0.89  1.00  0.89
          1.00  0.20  0.70
          0.89  0.70  1.31 )

From this matrix, we see that the genotype WS has good fitness, while CC is the best. The genotype WW is not very good because it is susceptible to malaria infection, while SS is the worst because it causes the sickle cell disease. We may wonder how the alleles will eventually distribute in the population under such selection pressures. We have solved a SEgame with this fitness matrix and obtained three solutions: x(1) = (0, 0, 1)^T, x(2) = (0.879, 0.121, 0)^T, and x(3) = (0.832, 0.098, 0.070)^T. The first solution suggests that the population may end up with all C alleles, since the genotype CC seems to have the best fitness. The second solution suggests a large portion of W alleles with a small percentage of S alleles, which increases the resistance to malaria infection yet does not leave a large chance for SS combinations.
The third solution means that the three alleles may co-exist.
We have also solved a corresponding GKproblem with the above matrix A, using a Matlab code. It turned out that we found only two local maximizers for the GKproblem, corresponding to x(1) and x(2). At least computationally, we have not found x(3) as a local maximizer, which suggests that x(1) and x(2) may be evolutionarily stable, while x(3) may not. Indeed, at the solution x(3), the only active constraint for the GKproblem is Σ_i x_i = 1. The null space matrix Z for the Jacobian of this constraint can be constructed as

    Z = (  1   0
           0   1
          −1  −1 )

We then have the reduced Hessian of the GKproblem

    Z^T A Z = ( 0.42  0.72
                0.72  0.11 ),

which is indefinite, so x(3) is not a strict local maximizer of the GKproblem and, by Theorem 4.3, is not evolutionarily stable. Based on the above analysis, we would predict that the state x(3), with all three alleles co-existing in the population, will never arise, because it is unstable. The solution x(1) corresponds to a global maximizer of the GKproblem. Based on our simulation (not shown), it also has a large attraction region, in the sense that most solutions would converge to x(1) unless the initial value for the C allele is very small, say less than 5%. In the current population, the C allele is indeed rare, and therefore the population does not have much chance to evolve to this state. The population typically has a large percentage of W alleles, a small percentage of S alleles, and some rare C alleles, and therefore we would predict that x(2) will be the most likely and stable state of the population in the end.
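The instability analysis at x(3) can be reproduced with a short Python sketch. The fitness values below are taken to be the classical estimates for the WW, WS, WC, SS, SC, and CC genotypes, an assumption consistent with the solutions reported in this section:

```python
# Reduced-Hessian stability test at the interior solution x(3), with the
# assumed fitness values for the W, S, C alleles.
A = [[0.89, 1.00, 0.89],
     [1.00, 0.20, 0.70],
     [0.89, 0.70, 1.31]]
x3 = [0.832, 0.098, 0.070]
n = 3

# At x(3) only sum_i x_i = 1 is active; take Z with columns e_1 - e_3
# and e_2 - e_3 (identity with last row replaced by -e^T).
Z = [[ 1.0, 0.0],
     [ 0.0, 1.0],
     [-1.0, -1.0]]
H = [[sum(Z[a][i] * A[a][b] * Z[b][j] for a in range(n) for b in range(n))
      for j in range(2)] for i in range(2)]

# A symmetric 2x2 matrix with negative determinant is indefinite, so H is
# not negative semi-definite and x(3) cannot be evolutionarily stable.
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
print(det < 0)  # prints True
```

With these values, H is approximately [[0.42, 0.72], [0.72, 0.11]] with a negative determinant, matching the indefiniteness found above.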
8. Concluding remarks. In this paper, we have reviewed the theory for obtaining optimal and stable strategies for SEgames, and provided some new proofs and computational methods. In particular, we have reviewed the relationship between the SEgame and the GKproblem, and discussed the first and second order necessary and sufficient conditions that can be derived from this relationship for testing the optimality and stability of the strategies. Some of the conditions are given in different forms from those in previous work and can be verified more efficiently. We have also derived more efficient computational methods for the evaluation of the conditions than conventional approaches. We have demonstrated how these conditions can be applied to justifying the strategies and their stabilities for a special class of genetic selection games, including some in the study of genetic disorders. Further studies can be pursued in the following possible directions. First, novel methods can be developed for solving special types of SEgames, and especially for obtaining the evolutionarily stable strategies for the games by solving some special classes of GKproblems. For example, if the fitness matrix for a SEgame is negative definite, then the corresponding GKproblem is a strictly convex quadratic program and can be solved efficiently using special algorithms [4]. Further, the solution is guaranteed to be a strict local maximizer for the GKproblem and hence an evolutionarily stable strategy for the SEgame. A more complicated case is when the fitness matrix is positive definite. Then, only pure strategies may be evolutionarily stable, and a special algorithm can be developed to find only the solutions of the GKproblem that correspond to the pure strategies of the SEgame.
Second, in Theorems 3.5 and 5.3, we have stated two optimality and stability conditions. They are necessary and sufficient, but require all active constraints to be strongly active at x*, in which case C_0(x*) = C^0(x*), T_0(x*) = T^0(x*), and Z_0 = Z^0. In practice, however, this assumption may not hold. A more general necessary and sufficient condition, without the above assumption, is to require d^T A d < 0 for all d ∈ T(x*), d ≠ 0, where T(x*) is the reduced tangent cone at x*, as given in [2]. As we have mentioned in previous sections, this condition is not easy to test: it is equivalent to testing the copositivity of a matrix, which is difficult in general [1,12]. Still, an efficient algorithm may be developed for SEgames and GKproblems of small size or with special structures.
Third, it is not hard to verify that the GKproblem is NP-hard in general, because the maximum clique problem can be formulated as a GKproblem [11,15]. However, how to extend this result to the SEgame is not so clear, because the SEgame is not exactly equivalent to the GKproblem. Several related questions arise: Is every maximal clique a local maximizer of the GKproblem for the maximum clique problem? If not, what condition is needed? If yes, is it a strict local maximizer? Is the maximum clique a global maximizer? Is it an evolutionarily stable strategy for the corresponding SEgame? We are interested in all these questions and are trying to find their answers.
Fourth, though the two problems are not equivalent, the correspondence between the SEgame and the GKproblem is interesting. A similar relationship may be found between a class of nonlinear games and nonlinear optimization problems. Indeed, we can define an n-strategy two-player game by a fitness function x^T π(y), with π(y) a nonlinear function; the game then becomes a nonlinear game. If π(y) is a gradient field, i.e., there is a function f(y) such that ∇f(y) = π(y), then an optimal strategy x* ∈ S such that x*^T π(x*) ≥ x^T π(x*) for all x ∈ S corresponds to a solution x* ∈ S such that f(x*) ≥ f(x) for all x ∈ S in a small neighborhood of x*. It would then be interesting to see which additional relationships between the SEgame and the GKproblem extend to these nonlinear cases.
Finally, we have demonstrated the applications of SEgames to allele selection at single genetic loci. They can be extended to alleles at multiple genetic loci if there is no mutation or recombination. In this case, an individual can be identified by a sequence of alleles at the multiple loci; in other words, a selection strategy will be a choice of a specific sequence of alleles. This certainly increases the strategy space substantially. For example, if there are two loci G_1 and G_2, with two possible alleles A and a for G_1 and two other possible alleles B and b for G_2, then there will be four possible sequences of alleles for the two loci, AB, Ab, aB, and ab, each corresponding to one pure strategy. In general, if there are m loci G_i, i = 1, ..., m, with m_i possible alleles for G_i, then there will be n = m_1 m_2 · · · m_m possible sequences of alleles. The number of pure strategies, and hence the dimension of the game, will be n, which can be a large number. In practice, mutation and recombination often are not negligible, and therefore our model must incorporate such effects. The related topics, including other so-called linkage disequilibrium factors, are beyond the scope of this paper [17]. We will pursue these issues in our future efforts.
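For illustration, the pure strategies for multiple loci can be enumerated as allele sequences; a minimal Python sketch for the two-locus example above:

```python
from itertools import product

# Two loci with alleles {A, a} and {B, b}; each pure strategy is one
# sequence of alleles, one allele per locus.
loci = [["A", "a"], ["B", "b"]]
strategies = ["".join(s) for s in product(*loci)]
print(strategies, len(strategies))  # prints ['AB', 'Ab', 'aB', 'ab'] 4
```

In general, with m_i alleles at locus i, the enumeration produces n = m_1 · · · m_m pure strategies, which grows quickly with the number of loci.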