An Interactive Fuzzy Satisficing Method for Multiobjective Nonlinear Integer Programming Problems with Block-angular Structures through Genetic Algorithms with Decomposition Procedures

This paper considers multiobjective nonlinear integer programming problems with block-angular structures. Taking into account the vague nature of the decision maker's judgments, an interactive fuzzy satisficing method is presented. Exploiting the special block-angular structures of the problems, genetic algorithms with decomposition procedures are also proposed.


Introduction
Genetic algorithms (GAs) [13], initiated by Holland, his colleagues and his students at the University of Michigan in the 1970s, are stochastic search techniques based on the mechanism of natural selection and natural genetics. They have received a great deal of attention regarding their potential as optimization techniques for solving discrete optimization problems and other hard optimization problems. Although genetic algorithms were little known at first, since the publication of Goldberg's book [11] they have attracted considerable attention in a number of fields as a methodology for optimization, adaptation and learning. Recent applications of genetic algorithms to optimization problems, especially to various kinds of single-objective discrete optimization problems and other hard optimization problems, show continuing advances [1,2,4,7,8,18,21,22].
Sakawa and his colleagues proposed genetic algorithms with double strings (GADS) [26] for obtaining an approximate optimal solution to multiobjective multidimensional 0-1 knapsack problems. They also proposed genetic algorithms with double strings based on reference solution updating (GADSRSU) [27] for multiobjective general 0-1 programming problems involving both positive and negative coefficients. Furthermore, they proposed genetic algorithms with double strings using linear programming relaxation (GADSLPR) [25] for multiobjective multidimensional integer knapsack problems, and genetic algorithms with double strings using linear programming relaxation based on reference solution updating (GADSLPRRSU) for linear integer programming problems [23]. Observing that existing solution methods address only specialized types of nonlinear integer programming problems [12,16,17], Sakawa and his colleagues [24] proposed genetic algorithms with double strings using continuous relaxation based on reference solution updating (GADSCRRSU) as an approximate solution method for general nonlinear integer programming problems.

In general, however, actual decision making problems formulated as mathematical programming problems involve very large numbers of variables and constraints. Most such large-scale problems in the real world have special structures that can be exploited in solving them. One familiar special structure is the block-angular structure of the constraints, and several kinds of decomposition methods for linear and nonlinear programming problems with block-angular structures have been proposed [15]. Unfortunately, however, for large-scale problems with discrete variables, it seems quite difficult to develop an efficient solution method for obtaining an exact optimal solution.
For multidimensional 0-1 knapsack problems with block-angular structures, Sakawa and his colleagues [21,14] proposed genetic algorithms with decomposition procedures (GADP), which exploit the block-angular structures through a triple string representation. Furthermore, by incorporating the fuzzy goals of the decision maker, they [21] also proposed an interactive fuzzy satisficing method for multiobjective multidimensional 0-1 knapsack problems with block-angular structures.
Under these circumstances, in this paper, as a typical mathematical model of large-scale multiobjective discrete systems optimization, we consider multiobjective nonlinear integer programming problems with block-angular structures. Taking into account the vague nature of the decision maker's judgments, fuzzy goals of the decision maker are introduced, and the problem is interpreted as maximizing an overall degree of satisfaction with the multiple fuzzy goals. For deriving a satisficing solution for the decision maker, we develop an interactive fuzzy satisficing method. Exploiting the block-angular structures, we also propose genetic algorithms with decomposition procedures for nonlinear integer programming problems with block-angular structures.

Problem formulation
Consider multiobjective nonlinear integer programming problems with block-angular structures formulated as (1), where $x^J$, $J = 1, 2, \ldots, P$, are $n_J$-dimensional integer decision variable column vectors and $x = ((x^1)^T, \ldots, (x^P)^T)^T$. The constraints $g(x) = (g_1(x), \ldots, g_{m_0}(x))^T \leq 0$ are called coupling constraints of dimension $m_0$, while each set of constraints $h^J(x^J) = (h^J_1(x^J), \ldots, h^J_{m_J}(x^J))^T \leq 0$, $J = 1, 2, \ldots, P$, is called a block constraint of dimension $m_J$. In (1), it is assumed that $f_l(\cdot)$, $g_i(\cdot)$ and $h^J_i(\cdot)$ are general nonlinear functions. The positive integers $V^J_j$, $J = 1, 2, \ldots, P$, $j = 1, 2, \ldots, n_J$, represent upper bounds for $x^J_j$. In the following, for notational convenience, the feasible region of (1) is denoted by $X$. As examples of nonlinear integer programming problems with block-angular structures in practical applications, Bretthauer et al. [3] formulated health care capacity planning, resource constrained production planning and portfolio optimization with industry constraints.

An interactive fuzzy satisficing method
In order to take into account the vague nature of the decision maker's judgments for each objective function in (1), we introduce fuzzy goals such as "$f_l(x)$ should be substantially less than or equal to a certain value". Then (1) can be rewritten as (2), where $\mu_l(\cdot)$ is the membership function quantifying the fuzzy goal for the $l$th objective function in (1). To be more specific, if the decision maker feels that $f_l(x)$ should be less than or equal to at least $f^0_l$ and that $f_l(x) \leq f^1_l$ $(\leq f^0_l)$ is fully satisfactory, the shape of a typical membership function is as shown in Figure 1. Since (2) is regarded as a fuzzy multiobjective optimization problem, a complete optimal solution that simultaneously minimizes all of the multiple objective functions does not always exist when the objective functions conflict with each other. Thus, instead of a complete optimal solution, as a natural extension of the Pareto optimality concept for ordinary multiobjective programming problems, Sakawa and his colleagues [28,20] introduced the concept of M-Pareto optimal solutions, defined in terms of membership functions instead of objective functions, where M refers to membership.
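A typical linear membership function of the kind shown in Figure 1 can be sketched as follows (a minimal illustration; the function name and signature are our own, with `f1` the fully satisfactory level $f^1_l$ and `f0` the unacceptable level $f^0_l$):

```python
def linear_membership(f_val, f1, f0):
    """Linear membership function for the fuzzy goal "f_l(x) should be
    substantially less than or equal to f0"; fully satisfied at f1 (f1 <= f0).
    Returns a degree of satisfaction in [0, 1]."""
    if f_val <= f1:
        return 1.0          # goal completely attained
    if f_val >= f0:
        return 0.0          # goal completely unattained
    return (f0 - f_val) / (f0 - f1)  # linear interpolation in between
```

The membership degree decreases linearly from 1 at $f^1_l$ to 0 at $f^0_l$, matching the shape in Figure 1.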

Definition 1 (M-Pareto optimality)
A feasible solution $x^* \in X$ is said to be M-Pareto optimal to a fuzzy multiobjective optimization problem if and only if there does not exist another feasible solution $x \in X$ such that $\mu_l(f_l(x)) \geq \mu_l(f_l(x^*))$, $l = 1, 2, \ldots, k$, and $\mu_j(f_j(x)) > \mu_j(f_j(x^*))$ for at least one $j \in \{1, 2, \ldots, k\}$.
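Definition 1 can be checked directly on a finite set of candidate solutions via their membership vectors. The following sketch (hypothetical helper names of our own) filters out M-dominated candidates:

```python
def m_dominates(mu_a, mu_b):
    """True if membership vector mu_a M-dominates mu_b: mu_a is at least
    as large componentwise, and strictly larger in some component."""
    return (all(a >= b for a, b in zip(mu_a, mu_b))
            and any(a > b for a, b in zip(mu_a, mu_b)))

def m_pareto_optimal(candidates):
    """Keep only the membership vectors not M-dominated by any other
    candidate, i.e. the M-Pareto optimal ones among a finite list."""
    return [m for m in candidates
            if not any(m_dominates(other, m) for other in candidates)]
```

For example, the vector (0.4, 0.4) is M-dominated by (0.5, 0.5) and would be removed, while two vectors that trade off against each other are both kept.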
Introducing an aggregation function µ D (x) for k membership functions in (2), the problem can be rewritten as: maximize where the aggregation function µ D (·) represents the degree of satisfaction or preference of the decision maker for the whole of k fuzzy goals. In the conventional fuzzy approaches, it has been implicitly assumed that the minimum operator is the proper representation of the decision maker's fuzzy preferences. However, it should be emphasized here that this approach is preferable only when the decision maker feels that the minimum operator is appropriate. In other words, in general decision situations, the decision maker does not always use the minimum operator when combining the fuzzy goals and/or constraints.
Probably the most crucial problem in (3) is the identification of an appropriate aggregation function that well represents the decision maker's fuzzy preferences. If $\mu_D(\cdot)$ can be explicitly identified, then (3) reduces to a standard mathematical programming problem. However, this rarely happens, and as an alternative, interaction with the decision maker is necessary to find a satisficing solution for (2). In order to generate candidates for a satisficing solution which are M-Pareto optimal, the decision maker is asked to specify aspiration levels of achievement for all membership functions, called reference membership levels. For reference membership levels $\bar{\mu}_l$, $l = 1, 2, \ldots, k$, given by the decision maker, the corresponding M-Pareto optimal solution, which is nearest to the requirements in the minimax sense or better than that if the reference membership levels are attainable, is obtained by solving the following augmented minimax problem (4).
where $\rho$ is a sufficiently small positive real number.
We can now construct an interactive algorithm in order to derive a satisficing solution for the decision maker from among the M-Pareto optimal solution set. The procedure of the interactive fuzzy satisficing method is summarized as follows.

An Interactive Fuzzy Satisficing Method
Step 1: Calculate the individual minimum and maximum of each objective function under the given constraints by solving problems (5) and (6).
Step 2: By considering the individual minimum and maximum of each objective function, the decision maker subjectively specifies membership functions $\mu_l(f_l(x))$, $l = 1, 2, \ldots, k$, to quantify the fuzzy goals for the objective functions.
Step 3: Set the initial reference membership levels $\bar{\mu}_l$, $l = 1, 2, \ldots, k$ (if it is difficult to determine them, set all of them to 1).

Step 4: For the current reference membership levels, solve the augmented minimax problem (4) to obtain the M-Pareto optimal solution and the corresponding membership function values.
Step 5: If the decision maker is satisfied with the current membership levels of the M-Pareto optimal solution, stop; the current M-Pareto optimal solution is the satisficing solution for the decision maker. Otherwise, ask the decision maker to update the reference membership levels $\bar{\mu}_l$, $l = 1, 2, \ldots, k$, by considering the current values of the membership functions, and return to Step 4.
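The interactive steps above can be sketched as a loop (a hypothetical rendering of ours: `solve_minimax` stands for a solver of the augmented minimax problem (4), and `ask_dm` for the dialogue with the decision maker):

```python
def interactive_fuzzy_satisficing(memberships, solve_minimax, ask_dm,
                                  max_iter=10):
    """Sketch of the interactive loop (Steps 3-5).
    solve_minimax(mu_ref) -> (solution, attained membership values);
    ask_dm(mu_vals) -> None if the decision maker is satisfied,
    otherwise updated reference membership levels."""
    k = len(memberships)
    mu_ref = [1.0] * k                       # Step 3: initial reference levels
    for _ in range(max_iter):
        x, mu_vals = solve_minimax(mu_ref)   # Step 4: augmented minimax
        updated = ask_dm(mu_vals)            # Step 5: consult decision maker
        if updated is None:
            return x, mu_vals                # satisficing solution found
        mu_ref = updated                     # update reference levels
    return x, mu_vals                        # fall back after max_iter rounds
```

In the paper, `solve_minimax` is realized by the genetic algorithm with decomposition procedures developed in the next section.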
In the interactive fuzzy satisficing method, it is required to solve the nonlinear integer programming problems with block-angular structures (4), (5) and (6). It is significant to note that these are single-objective nonlinear integer programming problems with block-angular structures, which are difficult to solve exactly. Recognizing this difficulty, in the next section we propose genetic algorithms with decomposition procedures using continuous relaxation based on reference solution updating (GADPCRRSU).

Genetic algorithms with decomposition procedures
As discussed above, in this section we propose genetic algorithms with decomposition procedures using continuous relaxation based on reference solution updating (GADPCRRSU) as an approximate solution method for nonlinear integer programming problems with block-angular structures.
Consider single-objective nonlinear integer programming problems with block-angular structures formulated as (7). Observe that this problem can be viewed as a single-objective version of the original problem (1). Sakawa and his colleagues [24] have already studied genetic algorithms with double strings using continuous relaxation based on reference solution updating (GADSCRRSU) for ordinary nonlinear integer programming problems, in which an individual is represented by a double string. In a double string, as shown in Figure 2, $\nu(j) \in \{1, 2, \ldots, n\}$ represents the index of a variable in the solution space, while $y_{\nu(j)}$, $j = 1, 2, \ldots, n$, is the value, among $\{0, 1, \ldots, V_{\nu(j)}\}$, of the $\nu(j)$th variable $x_{\nu(j)}$.
In view of the block-angular structure of (7), it seems quite reasonable to define an individual $S$ as an aggregation of $P$ subindividuals $s^J$, $J = 1, 2, \ldots, P$, each corresponding to a block constraint $h^J(x^J) \leq 0$, as shown in Figure 3. If these subindividuals are represented by double strings, then for each subindividual $s^J$, $J = 1, 2, \ldots, P$, a phenotype (subsolution) satisfying the corresponding block constraint can be obtained by the decoding algorithm of GADSCRRSU.
Unfortunately, however, a simple combination of these subsolutions does not always satisfy the coupling constraints. To cope with this problem, the triple string representation shown in Figure 4 and a corresponding decoding algorithm are presented as extensions of the double string representation and its decoding algorithm. By using the proposed representation and decoding algorithm, a phenotype (solution) satisfying both the block constraints and the coupling constraints can be obtained for each individual $S = (s^1, s^2, \ldots, s^P)$.

Figure 4: Triple string.
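The triple string of Figure 4 can be rendered as a simple data structure (our own hypothetical Python sketch, not the paper's implementation):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TripleString:
    """Subindividual for block J of a triple string individual:
    r  -- priority of the block, r in {1, ..., P};
    nu -- a permutation of the variable indices {0, ..., n_J - 1};
    y  -- integer values, with 0 <= y[j] <= V[nu[j]] for each position j."""
    r: int
    nu: List[int]
    y: List[int]

# An individual S is an aggregation of P subindividuals, one per block.
Individual = List[TripleString]
```

The priority row `r` is what the upper-string PMX and the decoding algorithm below operate on, while the `nu`/`y` rows correspond to the middle and lower strings of a double string.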
To be more specific, in a triple string representing the subindividual corresponding to the $J$th block, $r_J \in \{1, 2, \ldots, P\}$ represents the priority of the $J$th block, each $\nu_J(j) \in \{1, 2, \ldots, n_J\}$ is the index of a variable in the phenotype, and each $y^J_{\nu_J(j)}$ takes an integer value in $\{0, 1, \ldots, V^J_{\nu_J(j)}\}$. As in GADSCRRSU, a feasible solution, called a reference solution, is necessary for the decoding of triple strings. In the proposed GADPCRRSU, the reference solution is obtained as a solution $x^*$ to a minimization problem of constraint violation. In the following decoding algorithm for triple strings using a reference solution $x^*$, $N$ is the number of individuals and $I$ is a counter for the individual number.
Step 7: If $r > P$, go to Step 8. Otherwise, go to Step 3.
Step 8: If $L = 0$ and $l = 0$, go to Step 10. Otherwise, go to Step 9.
Step 9: Find $J(r)$ such that $r_{J(r)} = r$ for $r = 1, \ldots, L - 1$. Then, let $x$

It is expected that an optimal solution to the continuous relaxation problem is a good approximation of an optimal solution of the original nonlinear integer programming problem. In the proposed method, after obtaining an (approximate) optimal solution $\bar{x}^J_j$, $J = 1, 2, \ldots, P$, $j = 1, 2, \ldots, n_J$, to the continuous relaxation problem, we suppose that each decision variable $x^J_j$ takes exactly or approximately the same value as $\bar{x}^J_j$. In particular, decision variables $x^J_j$ with $\bar{x}^J_j = 0$ are very likely to be equal to 0. To be more specific, the information from the (approximate) optimal solution $\bar{x}$ to the continuous relaxation problem of (7) is used when generating the initial population and performing mutation. In generating the initial population, when we determine the value of each $y^J_{\nu_J(j)}$ in the lowest row of a triple string, we use a Gaussian random variable with mean $\bar{x}^J_{\nu_J(j)}$ and variance $\sigma$. In mutation, when we change the value of $y^J_{\nu_J(j)}$ for some $(J, j)$, we also use a Gaussian random variable with mean $\bar{x}^J_{\nu_J(j)}$ and variance $\tau$.

Various kinds of reproduction methods have been proposed. Among them, Sakawa et al. [26] investigated the performance of six reproduction operators, i.e., ranking selection, elitist ranking selection, expected value selection, elitist expected value selection, roulette wheel selection and elitist roulette wheel selection, and confirmed that elitist expected value selection is relatively efficient for multiobjective 0-1 programming problems incorporating the fuzzy goals of the decision maker. Thereby, elitist expected value selection, which combines elitism with expected value selection, is adopted. Elitism and expected value selection are summarized as follows.
Elitism: If the fitness of the best individual in past populations is larger than that of every individual in the current population, preserve that individual in the current population.
Expected value selection: For a population of size $N$, let $N_n$ denote the expected number of copies of $s^J_n$, i.e., $N$ times the ratio of the fitness of $s^J_n$ to the total fitness of the subpopulation. The integral part $[N_n]$ gives the definite number of copies of $s^J_n$ preserved in the next population, while the decimal part $N_n - [N_n]$ gives the probability with which one additional copy of $s^J_n$, $J = 1, 2, \ldots, P$, is preserved in the next population.
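Expected value selection as described above can be sketched as follows (a minimal illustration with a hypothetical function name; the elitism step, preserving the best-so-far individual, would be applied around it):

```python
import random

def expected_value_selection(population, fitnesses, N=None):
    """Expected value selection: individual n receives N_n = N * f_n / sum(f)
    expected copies; floor(N_n) copies are placed deterministically and one
    extra copy is added with probability equal to the fractional part."""
    N = N or len(population)
    total = sum(fitnesses)
    next_pop = []
    for ind, f in zip(population, fitnesses):
        Nn = N * f / total
        next_pop.extend([ind] * int(Nn))       # definite copies: [N_n]
        if random.random() < Nn - int(Nn):     # probabilistic extra copy
            next_pop.append(ind)
    return next_pop
```

Compared with plain roulette wheel selection, this reduces the sampling variance, which is one reason the elitist expected value variant performed well in [26].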
If a single-point or multi-point crossover is directly applied to the upper or middle string of individuals of triple string type, one element of an offspring's string may take the same value as another element, so that the string is no longer a permutation. The same violation occurs in solving traveling salesman problems or scheduling problems through genetic algorithms. In order to avoid this violation, a crossover method called partially matched crossover (PMX) is modified to be suitable for triple strings. PMX is applied as usual to upper strings, whereas, for each pair of a middle and a lower string, PMX for double strings [26] is applied to every subindividual.
It is now appropriate to present the detailed procedures of the crossover method for triple strings.

Partially Matched Crossover (PMX) for upper string
Let $X$ be the upper string of an individual and $Y$ be the upper string of another individual. Prepare copies of $X$ and $Y$, respectively.
Step 1: Choose two crossover points at random on these strings, say, $h$ and $k$ ($h < k$).
Step 2: Set $i := h$ and repeat the following procedures.
(a) Find $J$ such that $r^X_J = r^Y_i$ in the copy of $X$. Then, interchange $r^X_i$ with $r^X_J$ and set $i := i + 1$.
(b) If $i > k$, stop and let the modified copy of $X$ be the offspring of $X$. Otherwise, return to (a).
Step 2 is carried out for Y in the same manner, as shown in Figure 5.
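The swap-based repair in Step 2 keeps the offspring a valid permutation while copying the priorities of $Y$ into positions $h$ through $k$. A sketch on plain Python lists (0-indexed, hypothetical function name of ours):

```python
def pmx_upper(X, Y, h, k):
    """PMX for upper (priority) strings: return a copy of X that agrees
    with Y on positions h..k and remains a permutation, using swaps."""
    child = list(X)
    for i in range(h, k + 1):
        j = child.index(Y[i])               # position in child holding Y's value
        child[i], child[j] = child[j], child[i]  # swap it into position i
    return child
```

Because every step is a swap of two elements, the multiset of values is preserved, so the offspring is always a permutation of $\{1, \ldots, P\}$.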

Partially Matched Crossover (PMX) for double string

Let $X$ be the pair of middle and lower strings of a subindividual in the $J$th subpopulation, and let $Y$ be that of another subindividual in the $J$th subpopulation. First, prepare copies of $X$ and $Y$, respectively.
Step 1: Choose two crossover points at random on these strings, say, $h$ and $k$ ($h < k$).
Step 2: Set $i := h$ and repeat the following procedures.
(a) Find $j$ such that the index in position $j$ of the middle string of the copy of $X$ equals the index in position $i$ of the middle string of $Y$. Then, interchange the $i$th and $j$th columns (index and value pairs) of the copy of $X$, and set $i := i + 1$.
(b) If $i > k$, go to Step 3. Otherwise, return to (a).
Step 3: Replace the part from $h$ to $k$ of the copy of $X$ with that of $Y$, and let the result be the offspring of $X$.
This procedure is carried out for $Y$ and $X$ in the same manner, as shown in Figure 6. Mutation is considered to play the role of local random search in genetic algorithms. For the lower string of a triple string only, mutation of bit-reverse type is adopted and applied to every subindividual.
Figure 6: An example of PMX for double string.

For the upper string, and for the middle and lower strings of the triple string, inversion, defined by the following algorithm, is adopted:
Step 1: After determining two inversion points h and k (h < k), pick out the part of the string from h to k.
Step 2: Arrange the substring in reverse order.
Step 3: Put the rearranged substring back into the string.

We are now ready to present the computational procedures of the proposed genetic algorithm with decomposition procedures as an approximate solution method for nonlinear integer programming problems with block-angular structures. The outline of the procedures is shown in Figure 8.
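The inversion operator of Steps 1-3 can be sketched as follows (hypothetical function name; applied here to a generic list standing for a string or substring of a triple string):

```python
import random

def inversion(string, p_i, rng=random):
    """Inversion: with probability p_i, pick two points h < k at random
    and reverse the substring between them (inclusive)."""
    s = list(string)
    if rng.random() >= p_i:
        return s                       # no inversion this time
    h, k = sorted(rng.sample(range(len(s)), 2))   # Step 1: pick h < k
    s[h:k + 1] = reversed(s[h:k + 1])  # Steps 2-3: reverse and put back
    return s
```

Since inversion only reorders elements, a permutation string remains a permutation after the operator is applied.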

Computational procedures
Step 1: Set the iteration (generation) index $t = 0$ and determine the parameter values: the population size $N$, the probability of crossover $p_c$, the probability of mutation $p_m$, the probability of inversion $p_i$, the variances $\sigma$ and $\tau$, the minimal search generation $I_{\min}$ and the maximal search generation $I_{\max}$.
Step 2: Generate N individuals whose subindividuals are of triple string type at random.
Step 3: Evaluate each individual (subindividual) on the basis of the phenotype obtained by the decoding algorithm, and calculate the mean fitness $f_{\mathrm{mean}}$ and the maximal fitness $f_{\max}$ of the population. If $t > I_{\min}$ and $(f_{\max} - f_{\mathrm{mean}})/f_{\max} < \varepsilon$, or if $t > I_{\max}$, regard an individual with the maximal fitness as an approximate optimal individual and terminate. Otherwise, set $t = t + 1$ and proceed to Step 4.
Step 4 †: Apply the elitist expected value selection (reproduction) to every subpopulation.

Step 5 †: Apply the PMX for double strings to the middle and lower parts of every subindividual according to the probability of crossover $p_c$.
Step 6 †: Apply the mutation operator of bit-reverse type to the lower part of every subindividual according to the probability of mutation $p_m$, and apply the inversion operator to the middle and lower parts of every subindividual according to the probability of inversion $p_i$.
Step 7: Apply the PMX for upper strings according to p c .
Step 8: Apply the inversion operator for upper strings according to p i and return to Step 3.
It should be noted here that, in the algorithm, the operations in the steps marked with † can be applied to every subindividual of all individuals independently. As a result, it is possible both to reduce the amount of working memory needed to solve the problem and to carry out parallel processing.
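The Gaussian sampling around the continuous-relaxation solution, used when generating the initial population in Step 2 and when mutating in Step 6, can be sketched as follows (hypothetical helper of ours; for simplicity the spread is passed directly as a standard deviation):

```python
import random

def sample_value(x_bar, V, spread, rng=random):
    """Draw an integer value for y by rounding a Gaussian centred at the
    continuous-relaxation value x_bar, then clipping to the range [0, V]."""
    v = int(round(rng.gauss(x_bar, spread)))
    return max(0, min(V, v))
```

In particular, variables whose relaxation value $\bar{x}^J_j$ is 0 are sampled near 0, reflecting the expectation that they are very likely to equal 0 in an optimal integer solution.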

Numerical Examples
In order to demonstrate the feasibility and efficiency of the proposed method, consider the following multiobjective quadratic integer programming problem with block-angular structures.
For comparison, genetic algorithms with double strings using continuous relaxation based on reference solution updating (GADSCRRSU) [24] are also adopted; it is significant to note that decomposition procedures are not involved in GADSCRRSU. For this problem, we set $k = 3$, $P = 5$, $n_1 = n_2 = \cdots = n_5 = 10$, $m_0 = 2$, $m_1 = m_2 = \cdots = m_5 = 5$, and $V^J_j = 30$, $J = 1, 2, \ldots, 5$, $j = 1, 2, \ldots, 10$. The elements of $c$ and $A$ in the objectives and constraints are determined by uniform random numbers on $[-100, 100]$, and those of $b$ in the constraints are determined so that the feasible region is not empty.
Parameter values of GADPCRRSU are set as: population size $N = 100$, crossover rate $p_c = 0.9$, mutation rate $p_m = 0.05$, inversion rate $p_i = 0.05$, variances $\sigma = 2.0$ and $\tau = 3.0$, minimal search generation number $I_{\min} = 500$ and maximal search generation number $I_{\max} = 1000$.
In this numerical example, for the sake of simplicity, linear membership functions are adopted and their parameter values are determined, following [29], as $f^1_l = \min_{x \in X} f_l(x)$ and $f^0_l = \max\{f_l(x^1_{\min}), \ldots, f_l(x^k_{\min})\}$, $l = 1, 2, \ldots, k$, where $x^j_{\min}$ denotes a minimizer of $f_j(x)$ over $X$. For the initial reference membership levels $(1.000, 1.000, 1.000)$, the augmented minimax problem (4) is solved; the obtained solution is shown in the second column of Table 1. Assume that the hypothetical decision maker is not satisfied with the current solution and feels that $\mu_1(f_1(x))$ and $\mu_3(f_3(x))$ should be improved at the expense of $\mu_2(f_2(x))$. The decision maker then updates the reference membership levels to $(1.000, 0.900, 1.000)$. The result for the updated reference membership levels is shown in the third column of Table 1.
Since the decision maker is still not satisfied, the reference membership levels are updated to $(1.000, 0.900, 0.900)$ to obtain a better value of $\mu_1(f_1(x))$. The procedure continues in this way and, in this example, a satisficing solution for the decision maker is derived at the third interaction. Table 1 shows that, at each interaction, the proposed interactive method using GADPCRRSU with decomposition procedures finds an (approximate) optimal solution in a shorter time than the method using GADSCRRSU without decomposition procedures.
Furthermore, in order to see how the computation time grows with the size of block-angular nonlinear integer programming problems, typical problems with 10, 20, 30, 40 and 50 variables are solved by GADPCRRSU and GADSCRRSU. As depicted in Figure 9, the computation time of the proposed GADPCRRSU increases almost linearly with the size of the problem, while that of GADSCRRSU increases rapidly and nonlinearly.

Conclusions
In this paper, as a typical mathematical model of large-scale multiobjective discrete systems optimization, we considered multiobjective nonlinear integer programming problems with block-angular structures. Taking into account the vague nature of the decision maker's judgments, fuzzy goals of the decision maker were introduced, and the problem was interpreted as maximizing an overall degree of satisfaction with the multiple fuzzy goals. An interactive fuzzy satisficing method was developed for deriving a satisficing solution for the decision maker. Exploiting the block-angular structures, we also proposed genetic algorithms with decomposition procedures for solving nonlinear integer programming problems with block-angular structures. Illustrative numerical examples demonstrated the feasibility and efficiency of the proposed method. Extensions to multiobjective two-level integer programming problems with block-angular structures will be considered elsewhere, and extensions to stochastic multiobjective two-level integer programming problems with block-angular structures will be required in the near future.