
Adaptive racing ranking-based immune optimization approach solving multi-objective expected value programming

Abstract

This work investigates a bio-inspired adaptive sampling immune optimization approach for solving a general class of nonlinear multi-objective expected value programming problems without any prior knowledge of the noise distribution. A useful lower bound estimate is first developed to restrict the sample sizes of the random variables. Second, an adaptive racing ranking scheme is designed to identify valuable individuals in the current population, so that high-quality individuals found during the search acquire large sample sizes and high importance levels. Thereafter, an immune-inspired optimization approach is constructed to seek \(\varepsilon \)-Pareto optimal solutions, relying on a novel polymerization degree model. Comparative experiments validate that the proposed approach is an efficient and competitive optimizer.
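
To make the sampling mechanics concrete, the following minimal Python sketch illustrates racing-style adaptive sample allocation under noise using a Hoeffding-type confidence radius (cf. Hoeffding 1963). It is a simplified, single-objective illustration under assumed bounded noise; the names hoeffding_radius and racing_allocation and the parameters n_min, n_max, delta, value_range are placeholders of this sketch, not the paper's adaptive racing ranking scheme, which handles several objectives and assigns importance levels.

```python
import math
import random

def hoeffding_radius(n, value_range, delta):
    """Hoeffding-type confidence radius for the mean of n bounded samples."""
    return value_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def racing_allocation(candidates, noisy_eval, n_min=5, n_max=200,
                      value_range=1.0, delta=0.05):
    """Toy racing loop (minimization): keep sampling candidates whose
    confidence intervals still overlap the current best; retire the rest."""
    samples = {c: [noisy_eval(c) for _ in range(n_min)] for c in candidates}
    active = set(candidates)
    while len(active) > 1 and all(len(samples[c]) < n_max for c in active):
        means = {c: sum(samples[c]) / len(samples[c]) for c in active}
        radii = {c: hoeffding_radius(len(samples[c]), value_range, delta)
                 for c in active}
        best = min(active, key=means.get)
        # retire candidates whose lower bound exceeds the best's upper bound
        active = {c for c in active
                  if means[c] - radii[c] <= means[best] + radii[best]}
        for c in active:              # survivors earn one more sample each
            samples[c].append(noisy_eval(c))
    return {c: (sum(v) / len(v), len(v)) for c, v in samples.items()}

if __name__ == "__main__":
    random.seed(0)
    # hypothetical scalar test problem: true objective x**2 plus bounded noise
    stats = racing_allocation([-1.0, -0.5, 0.0, 0.5, 1.0],
                              lambda x: x * x + random.uniform(-0.3, 0.3))
    for x, (mean, n) in sorted(stats.items()):
        print(f"x = {x:+.1f}   estimated mean = {mean:.3f}   samples = {n}")
```

Candidates whose confidence intervals separate from that of the current best stop receiving samples, so evaluation effort concentrates on promising individuals; this is the intuition behind granting high-quality individuals larger sample sizes during the search.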


References

  • Aickelin U, Dasgupta D, Gu F (2014) Artificial immune systems. Search Methodologies. Springer US, pp 187–211

  • Aydin I, Karakose M, Akin E (2011) A multi-objective artificial immune approach for parameter optimization in support vector machine. Appl Soft Comput 11:120–129

  • Batista LS, Campelo F, Guimarães FG et al (2011) Pareto cone \(\varepsilon \)-dominance: improving convergence and diversity in multiobjective evolutionary approaches. In: Evolutionary multi-criterion optimization, Springer, Berlin, pp 76–90

  • Bui LT et al (2005) Fitness inheritance for noisy evolutionary multi-objective optimization. In: The 7th annual conference on genetic and evolutionary computation, ACM, pp 779–785

  • Cantú-Paz E (2004) Adaptive sampling for noisy problems. In: Genetic and evolutionary computation conference, GECCO2004, pp 947–958

  • Chen CH (2003) Efficient sampling for simulation-based optimization under uncertainty. In: Fourth International symposium on uncertainty modeling and analysis, ISUMA’03, pp 386–391

  • Coello CAC, Cortés NC (2005) Solving multi-objective optimization problems using an artificial immune system. Genet Program Evol Mach 6:163–190

  • Corne DW, Jerram NR, Knowles JD et al (2001) PESA-II: region-based selection in evolutionary multiobjective optimization. In: Genetic and evolutionary computation conference, GECCO’2001, pp 283–290

  • Deb K et al (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6:182–197

  • Drugan MM, Nowe A (2013) Designing multi-objective multi-armed bandits approaches: a study. In: International joint conference on neural networks, IJCNN, pp 1–8

  • El-Wahed WFA, Lee SM (2006) Interactive fuzzy goal programming for multi-objective transportation problems. Omega 34(2):158–166

  • Eskandari H, Geiger CD (2009) Evolutionary multi-objective optimization in noisy problem environments. J Heuristics 15:559–595

  • Even-Dar E, Mannor S, Mansour Y (2006) Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. J Mach Learn Res 7:1079–1105

  • Gong MG, Jiao LC, Du HF, Bo LF (2008) Multiobjective immune algorithm with nondominated neighbor-based selection. Evol Comput 16:225–255

  • Gong MG et al (2013) Identification of multi-resolution network structures with multi-objective immune approach. Appl Soft Comput 13:1705–1717

  • Gutjahr WJ, Pichler A (2016) Stochastic multi-objective optimization: a survey on non-scalarizing methods. Ann Oper Res 236:475–499

  • Higle JL, Zhao L (2004) Adaptive and nonadaptive samples in solving stochastic linear programs: a computational investigation. The University of Arizona, Tucson

  • Hoeffding W (1963) Probability inequalities for sums of bounded random variables. J Am Stat Assoc 58(301):13–30

  • Hu ZH (2010) A multiobjective immune approach based on a multiple-affinity model. Eur J Oper Res 202:60–72

  • Hughes EJ (2001) Constraint handling with uncertain and noisy multi-objective evolution. In: Congress on evolutionary computation 2001, CEC’2001, pp 963–970

  • Jin Y, Branke J (2005) Evolutionary optimization in uncertain environments: a survey. IEEE Trans Evol Comput 9:303–317

  • Lee LH et al (2010) Finding the nondominated Pareto set for multi-objective simulation models. IIE Trans 42:656–674

  • Lee LH, Pujowidianto NA, Li LW et al (2012) Approximate simulation budget allocation for selecting the best design in the presence of stochastic constraints. IEEE Trans Autom Control 57:2940–2945

  • Lin Q, Chen J (2013) A novel micro-population immune multi-objective optimization approach. Comput Oper Res 40:1590–1601

  • Liu B (2009) Theory and practice of uncertain programming. Physica, Heidelberg

  • Marler RT, Arora JS (2010) The weighted sum method for multi-objective optimization: new insights. Struct Multidiscip Optim 41(6):853–862

  • Owen J, Punt J, Stranford S (2013) Kuby immunology, 7th edn. Freeman, New York

  • Park T, Ryu KR (2011) Accumulative sampling for noisy evolutionary multi-objective optimization. In: The 13th annual conference on genetic and evolutionary computation, ACM, pp 793–800

  • Phan DH, Suzuki J (2012) A non-parametric statistical dominance operator for noisy multi-objective optimization. In: Simulated evolution and learning, SEAL’12, pp 42–51

  • Qi Y, Liu F, Liu M et al (2012) Multi-objective immune approach with Baldwinian learning. Appl Soft Comput 12:2654–2674

  • Qi Y, Hou Z, Yin M et al (2015) An immune multi-objective optimization approach with differential evolution inspired recombination. Appl Soft Comput 29:395–410

  • Robert C, Casella G (2013) Monte Carlo statistical methods. Springer, Berlin

  • Shapiro A, Dentcheva D, Ruszczyński A (2009) Lectures on stochastic programming: modeling and theory. SIAM-MPS, Philadelphia

  • Tan KC, Lee TH, Khor EF (2001) Evolutionary algorithms with dynamic population size and local exploration for multiobjective optimization. IEEE Trans Evol Comput 5:565–588

  • Tan KC, Goh CK, Mamun AA et al (2008) An evolutionary artificial immune system for multi-objective optimization. Eur J Oper Res 187:371–392

  • Trautmann H, Mehnen J, Naujoks B (2009) Pareto-dominance in noisy environments. In: IEEE congress on evolutionary computation (CEC'09), pp 3119–3126

  • Van Veldhuizen DA (1999) Multiobjective evolutionary algorithms: classifications, analyses, and new innovations. Ph.D. thesis, Air Force Institute of Technology, Technical Report AFIT/DS/ENG/99-01, Dayton, OH

  • Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11:712–731

  • Zhang ZH, Tu X (2007a) Immune approach with adaptive sampling in noisy environments and its application to stochastic optimization problems. IEEE Comput Intell Mag 2:29–40

  • Zhang ZH, Tu X (2007b) Probabilistic dominance-based multi-objective immune optimization approach in noisy environments. J Comput Theor Nanosci 4:1380–1387

  • Zhang ZH, Wang L, Liao M (2013a) Adaptive sampling immune approach solving joint chance-constrained programming. J Control Theory Appl 11:237–246

  • Zhang ZH, Wang L, Long F (2013b) Immune optimization approach solving multi-objective chance constrained programming. Evol Syst 6:41–53

  • Zhang W, Xu W, Liu G, et al (2015) An effective hybrid evolutionary approach for stochastic multiobjective assembly line balancing problem. J Intell Manuf 1–8. doi:10.1007/s10845-015-1037-5

  • Zheng JH et al (2004) A multi-objective genetic algorithm based on quick sort. In: Advances in artificial intelligence. Springer, Berlin

  • Zitzler E, Deb K, Thiele L (2000) Comparison of multiobjective evolutionary algorithms: empirical results. Evol Comput 8:173–195

  • Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 3:257–271

Acknowledgements

This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61563009.

Author information

Corresponding author

Correspondence to Zhuhong Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Communicated by V. Loia.

Appendices

Appendix 1

Proof of Lemma 1

In the case of \({{\varvec{x}}}\in A_\varepsilon \), there exists at least one subscript \(j_0 \) such that

$$\begin{aligned} E[f_{j_0 } ({{\varvec{x}}},{\varvec{\xi } })]+\varepsilon <E[f_{j_0 } ({{\varvec{y}}},{\varvec{\xi } })]. \end{aligned}$$
(18)

Hence, once \({\hat{{f}}}_{j_0 } ({{\varvec{y}}})+\varepsilon \le {\hat{{f}}}_{j_0 } ({{\varvec{x}}})\), it follows that

$$\begin{aligned} {\hat{{f}}}_{j_0 } ({{\varvec{y}}})+\varepsilon <E[f_{j_0 } ({{\varvec{y}}},{\varvec{\xi } })]-\frac{\varepsilon }{2} \end{aligned}$$

or

$$\begin{aligned} {\hat{{f}}}_{j_0 } ({{\varvec{x}}})>E[f_{j_0 } ({{\varvec{x}}},{\varvec{\xi }})]+\frac{\varepsilon }{2}. \end{aligned}$$

Otherwise, by Eq. (18) we would have

$$\begin{aligned} {\hat{{f}}}_{j_0 } ({{\varvec{y}}})+\varepsilon \ge E[f_{j_0 } ({{\varvec{y}}},{\varvec{\xi } })]-\frac{\varepsilon }{2}>E[f_{j_0 } ({{\varvec{x}}},{\varvec{\xi } })]+\frac{\varepsilon }{2}\ge {\hat{{f}}}_{j_0 } ({{\varvec{x}}}). \end{aligned}$$
(19)

This yields a contradiction. Consequently,

$$\begin{aligned}&\Pr \left\{ {{\hat{{f}}}_{j_0 } ({{\varvec{y}}})+\varepsilon \le {\hat{{f}}}_{j_0 } ({{\varvec{x}}})} \right\} \le \Pr \{{\hat{{f}}}_{j_0 } ({{\varvec{y}}})<E[f_{j_0 } ({{\varvec{y}}},{\varvec{\xi }})] \nonumber \\&\quad -\frac{3\varepsilon }{2}\}+\Pr \{{\hat{{f}}}_{j_0 } ({{\varvec{x}}})>E[f_{j_0 } ({{\varvec{x}}},{\varvec{\xi }})]+\frac{\varepsilon }{2}\}. \end{aligned}$$
(20)

Further, it follows from Eq. (20) and Theorem 1 that

$$\begin{aligned} \Pr \left\{ {{\hat{{f}}}_{j_0 } ({{\varvec{y}}})+\varepsilon \le {\hat{{f}}}_{j_0 } ({{\varvec{x}}})} \right\} \le 2e^{-mc\varepsilon ^{2}}. \end{aligned}$$
(21)

Thus,

$$\begin{aligned}&\Pr \left\{ {{{\varvec{y}}}\prec _{\hat{{\varepsilon }}} {{\varvec{x}}}} \right\} \le \prod _{j=1}^q {\Pr \left\{ {{\hat{{f}}}_j ({{\varvec{y}}}) +\varepsilon \le {\hat{{f}}}_j ({{\varvec{x}}})} \right\} } \\&\quad \le \Pr \left\{ {{\hat{{f}}}_{j_0 } ({{\varvec{y}}})+\varepsilon \le {\hat{{f}}}_{j_0 } ({{\varvec{x}}})} \right\} \le 2e^{-mc\varepsilon ^{2}}.\nonumber \end{aligned}$$
(22)

The proof is completed. \(\square \)

Proof of Theorem 2

For a given \({{\varvec{x}}}\in A_\varepsilon \), if \({{\varvec{x}}}\notin {\hat{{A}}}_\varepsilon \), there exists \({{\varvec{y}}}\in {\hat{{A}}}_\varepsilon \) such that \({{\varvec{y}}}\prec _{\hat{{\varepsilon }}} {{\varvec{x}}}\). Hence,

$$\begin{aligned} \Pr \{{{\varvec{x}}}\notin {\hat{{A}}}_\varepsilon \}\le & {} \Pr \{\exists {{\varvec{y}}}\in {\hat{{A}}}_\varepsilon ,\;\hbox {s.t.}\;{{\varvec{y}}}\prec _{\hat{{\varepsilon }}} {{\varvec{x}}}\}\nonumber \\\le & {} \sum _{{{\varvec{y}}}\in {\hat{{A}}}_\varepsilon } {\Pr \{{{\varvec{y}}}\prec _{\hat{{\varepsilon }}} {{\varvec{x}}}\}} , \end{aligned}$$
(23)

and accordingly,

$$\begin{aligned} \Pr \{A_\varepsilon \not \subset {\hat{{A}}}_\varepsilon \}= & {} \Pr \{\exists {{\varvec{x}}}\in A_\varepsilon ,\;\hbox {s.t.}\;{{\varvec{x}}}\notin {\hat{{A}}}_\varepsilon \} \\\le & {} \Pr \{\exists {{\varvec{x}}}\in A_\varepsilon ,{{\varvec{y}}}\in {\hat{{A}}}_\varepsilon ,\;\hbox {s.t.}\;{{\varvec{y}}}\prec _{\hat{{\varepsilon }}} {{\varvec{x}}}\} \nonumber \\\le & {} \sum _{{{\varvec{x}}}\in A_\varepsilon } {\sum _{{{\varvec{y}}}\in {\hat{{A}}}_\varepsilon } {\Pr \{{{\varvec{y}}}\prec _{\hat{{\varepsilon }}} {{\varvec{x}}}\}} } . \nonumber \end{aligned}$$
(24)

Therefore, combining Lemma 1 with Eq. (24), we obtain

$$\begin{aligned} \Pr \{A_\varepsilon \subseteq {\hat{{A}}}_\varepsilon \}=1-\Pr \{A_\varepsilon \not \subset {\hat{{A}}}_\varepsilon \}\ge 1-2N^{2}e^{-mc\varepsilon ^{2}}. \end{aligned}$$
(25)

This finishes the proof. \(\square \)
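
As an illustrative remark (not part of the original proof), Eq. (25) can be inverted to read off the kind of sample-size lower bound referred to in the abstract: for a prescribed risk level \(\delta \in (0,1)\) (a symbol introduced here only for illustration), the guarantee \(\Pr \{A_\varepsilon \subseteq {\hat{{A}}}_\varepsilon \}\ge 1-\delta \) holds whenever \(2N^{2}e^{-mc\varepsilon ^{2}}\le \delta \), that is,

$$\begin{aligned} m\ge \frac{1}{c\varepsilon ^{2}}\ln \frac{2N^{2}}{\delta }. \end{aligned}$$

For instance, with the purely hypothetical values \(N=100\), \(c=2\), \(\varepsilon =0.1\) and \(\delta =0.05\), this requires \(m\ge 50\ln (4\times 10^{5})\approx 645\) samples per individual.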

Proof of Theorem 3

In the case of \({{\varvec{x}}}\in A_\varepsilon \), we know that

$$\begin{aligned} \Pr \{{{\varvec{x}}}\notin {\hat{{A}}}_\varepsilon \}=\Pr \{\exists {{\varvec{y}}}\in {\hat{{A}}}_\varepsilon ,\;\hbox {s.t.}\;{{\varvec{y}}}\prec _{\hat{{\varepsilon }}} {{\varvec{x}}}\} \le \sum _{{{\varvec{y}}}\in {\hat{{A}}}_\varepsilon } {\Pr \left\{ {{{\varvec{y}}}\prec _{\hat{{\varepsilon }}} {{\varvec{x}}}} \right\} } . \end{aligned}$$
(26)

By Lemma 1, Eq. (21) implies that

$$\begin{aligned} \Pr \{{{\varvec{x}}}\notin {\hat{{A}}}_\varepsilon \}\le 2|{\hat{{A}}}_\varepsilon |e^{-mc\varepsilon ^{2}}. \end{aligned}$$
(27)

It then follows that

$$\begin{aligned} \Pr \{{\hat{{A}}}_\varepsilon \cap A_\varepsilon =\emptyset \}\le \prod _{x\in A_\varepsilon } {\Pr \{x\notin {\hat{{A}}}_\varepsilon \}} \le 2Ne^{-mc\varepsilon ^{2}}. \end{aligned}$$
(28)

Hence, the conclusion holds. \(\square \)

Proof of Theorem 4

In Step 4, after \(\hbox {B}_{n}\) is copied into \(\hbox {M}_{set}\), the \(\beta \)-dominated and redundant B cells in \(\hbox {M}_{set}\) are eliminated, with computational complexity \(\hbox {O}({\vert }\hbox {M}_{set}{\vert }\log {\vert }\hbox {M}_{set}{\vert })\). Since the number of B cells in \(\hbox {M}_{set}\) may reach \(N+M\), the worst-case complexity of Step 4 is \(\hbox {O}((N+M) \log (N+M))\). Step 5 calculates the PDM values of the elements in \(A_{n}\); more precisely, their crowding distances are computed at most \(N\hbox {C}_{\max }\log N\hbox {C}_{\max }\) times and their values on S at most \((N\hbox {C}_{\max }+1)N\hbox {C}_{\max }/2\) times. Step 8 performs at most \(Np\hbox {C}_{\max }\) mutations. In Step 9, the size of \(\hbox {B}_{n}\cup \hbox {E}_{n}\) is at most \(N\hbox {C}_{\max }\); hence, when ARRA is applied to \(\hbox {B}_{n}\cup \hbox {E}_{n}\), the worst-case complexity is \(O(NC_{\max } (M_2 +\log NC_{\max } ))\) by the analysis in Sect. 5.1. In summary, MEIOA's worst-case computational complexity is determined by

$$\begin{aligned} O_c= & {} O\left( {(N+M)\log (N+M)} \right) +O\left( {NpC_{\max } } \right) \\&+O\left( {NC_{\max } (M_2 +\log NC_{\max } )} \right) +O((NC_{\max } )^{2}) \\= & {} O\left( (N+M)\log (N+M)+NC_{\max } (M_2 +p\right. \\&\left. +NC_{\max } ) \right) . \end{aligned}$$

Therefore, the conclusion holds. \(\square \)
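
To see which contribution dominates for concrete sizes, the short Python sketch below evaluates the four terms appearing in the bound above; the function name mioa_cost_terms and the chosen parameter values are purely hypothetical illustrations.

```python
import math

def mioa_cost_terms(N, M, C_max, M2, p):
    """Evaluate the individual contributions to the worst-case bound
    O((N + M) log(N + M) + N C_max (M2 + p + N C_max)) for given sizes."""
    archive_update = (N + M) * math.log(N + M)               # Step 4
    pdm_crowding = (N * C_max) ** 2                          # Step 5
    mutation = N * p * C_max                                 # Step 8
    racing_ranking = N * C_max * (M2 + math.log(N * C_max))  # Step 9
    return archive_update, pdm_crowding, mutation, racing_ranking

# hypothetical sizes, purely for illustration
print(mioa_cost_terms(N=100, M=100, C_max=5, M2=30, p=30))
# at these sizes the (N * C_max)**2 crowding term dominates
```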

Appendix 2

See Table 7.

Table 7 The first test suite (Van Veldhuizen 1999)

Appendix 3

See Table 8.

Table 8 The second test suite (Zitzler et al. 2000)

About this article

Cite this article

Yang, K., Zhang, Z. & Lu, J. Adaptive racing ranking-based immune optimization approach solving multi-objective expected value programming. Soft Comput 22, 2139–2158 (2018). https://doi.org/10.1007/s00500-016-2467-5
