
Multi-objective particle swarm optimization with preference information and its application in electric arc furnace steelmaking process

  • INDUSTRIAL APPLICATION
  • Published in: Structural and Multidisciplinary Optimization

Abstract

In this paper, a multi-objective particle swarm optimization with preference information (MOPSO-PI) is proposed. In the proposed algorithm, information entropy is employed to measure the probability distribution of the particles, and the user's preference information is represented as a ranking of each particle through a possibility matrix. The optimization procedure is guided by this preference information, since the global best performance of the swarm is randomly chosen in each iteration from the non-dominated solutions with higher ranking values. Finally, MOPSO-PI is applied to optimize the steelmaking process; the resulting power supply curve reduces the electric energy consumption, shortens the smelting time, and prolongs the lifespan of the furnace lining. The application results show the effectiveness of the proposed algorithm.



References

  • Alvarez-Benitez JE, Everson RM, Fieldsend JE (2005) A MOPSO algorithm based exclusively on Pareto dominance concepts. In: Evolutionary multi-criterion optimization. Springer, Berlin, pp 459–473

  • Amirjanov A (2006) The development of a changing range genetic algorithm. Comput Methods Appl Mech Eng 195(19):2495–2508

  • Coello CAC, Pulido GT, Lechuga MS (2004) Handling multiple objectives with particle swarm optimization. IEEE Trans Evol Comput 8(3):256–279

  • Deb K, Pratap A, Agarwal S et al (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197

  • Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. Proc Sixth Int Symp Micro Mach Hum Sci 1:39–43

  • Feng L, Mao ZZ, Yuan P (2012) Improved multi-objective particle-swarm algorithm and its application to electric arc furnace in steelmaking process. Control Theory Appl 27(9):1313–1319 (in Chinese)

  • Ishibuchi H, Tsukamoto N, Nojima Y (2008) Evolutionary many-objective optimization: a short review. In: IEEE Congress on Evolutionary Computation, pp 2419–2426

  • Jiao L, Luo J, Shang R et al (2014) A modified objective function method with feasible-guiding strategy to solve constrained multi-objective optimization problems. Appl Soft Comput 14:363–380

  • Kennedy J, Eberhart R (1995) Particle swarm optimization. Proc IEEE Int Conf Neural Netw 4:1942–1948

  • Kennedy J, Eberhart RC (2001) Swarm intelligence. Morgan Kaufmann

  • Lee KB, Kim JH (2013) Multiobjective particle swarm optimization with preference-based sort and its application to path following footstep optimization for humanoid robots. IEEE Trans Evol Comput 17(6):755–766

  • Liou TS, Wang MJJ (1992) Ranking fuzzy numbers with integral value. Fuzzy Set Syst 50(3):247–255

  • Mostaghim S, Teich J (2003) Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO). In: Proc 2003 IEEE Swarm Intelligence Symposium (SIS'03), pp 26–33

  • Purshouse RC, Fleming PJ (2003) Evolutionary many-objective optimisation: an exploratory analysis. In: Proc 2003 Congress on Evolutionary Computation (CEC'03), vol 3, pp 2066–2073

  • Reyes-Sierra M, Coello CAC (2006) Multi-objective particle swarm optimizers: a survey of the state-of-the-art. Int J Comput Intell Res 2(3):287–308

  • Shannon CE (2001) A mathematical theory of communication. ACM SIGMOBILE Mob Comput Commun Rev 5(1):3–55

  • Tanaka M, Watanabe H, Furukawa Y et al (1995) GA-based decision support system for multicriteria optimization. In: Proc IEEE Int Conf Systems, Man and Cybernetics, vol 2, pp 1556–1561

  • Van den Bergh F, Engelbrecht AP (2006) A study of particle swarm optimization particle trajectories. Inform Sci 176(8):937–971

  • Xu ZS (2001) Algorithm for priority of fuzzy complementary judgement matrix. J Syst Eng 16(4):311–314

  • Yuan P, Wang FL, Mao ZZ (2005) Optimized power supply model in melting period of SR-EAF. J Northeast Univ (Nat Sci) 26(10):930–933 (in Chinese)

  • Zapotecas Martínez S, Coello Coello CA (2011) A multi-objective particle swarm optimizer based on decomposition. In: Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation. ACM, pp 69–76

  • Zheng X, Liu H (2010) A scalable coevolutionary multi-objective particle swarm optimizer. Int J Comput Intell Syst 3(5):590–600

  • Zhou A, Qu BY, Li H et al (2011) Multiobjective evolutionary algorithms: a survey of the state of the art. Swarm Evol Comput 1(1):32–49


Acknowledgments

The authors would like to acknowledge the anonymous reviewers for their helpful comments. This work was supported by the State Key Program of National Natural Science Foundation of China (No. 61333006).

Author information

Correspondence to Lin Feng.

Appendix A

1.1 Description of Algorithm 1

Each step of the proposed MOPSO-PI, referred to as Algorithm 1, is described as follows.

  1) Initialize

    The initial population size is Pop; each particle has its own position x_i^t and velocity v_i^t, the values of the ith particle at the tth iteration of the update process. The positions and velocities of all particles are specified by Np×Pop matrices, initialized randomly between the lower and upper bounds, where Np is the number of decision variables of the problem. The personal best performance (Pb_i) of the ith particle is initialized to its own position. The non-dominated solutions found at each iteration are stored in an external archive, which is initialized as an empty set.
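As an illustrative sketch of this step (in Python; names such as `initialize_swarm` are our own, not from the paper):

```python
import numpy as np

def initialize_swarm(pop, n_p, lower, upper, rng=None):
    """Randomly initialize a swarm within the given bounds.

    pop: population size (Pop); n_p: number of decision variables (Np).
    Returns positions, velocities, personal bests, and an empty archive.
    """
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(lower, upper, size=(pop, n_p))  # positions
    v = rng.uniform(lower, upper, size=(pop, n_p))  # velocities
    pb = x.copy()   # each particle's personal best starts at its own position
    archive = []    # external archive of non-dominated solutions, initially empty
    return x, v, pb, archive
```

The paper stores positions and velocities as Np×Pop matrices; the sketch uses the transposed Pop×Np layout for convenience.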

  2) Convert constrained functions

    To apply multi-objective evolutionary algorithms to a constrained optimization problem, the constrained problem must first be converted into a multi-objective one (Amirjanov 2006). The objective of the constrained problem is then split into two parts: the original objective function F(x) and the satisfactory summation function φ(x) over the constraint conditions. The new objective function E(x) is formulated as follows:

    $$ E\left(\boldsymbol{x}\right)=\left(F\left(\boldsymbol{x}\right),\varphi \left(\boldsymbol{x}\right)\right). $$
    (A1)

    The degree to which an individual x satisfies constraint j is defined as follows:

    $$ \varphi_{g_j}(\boldsymbol{x})=\begin{cases}1, & g_j(\boldsymbol{x})<0\\ g_j(\boldsymbol{x})/\delta_j, & 0\le g_j(\boldsymbol{x})\le\delta_j\\ 0, & \text{otherwise}\end{cases}\qquad \varphi_{h_j}(\boldsymbol{x})=\begin{cases}\left|h_j(\boldsymbol{x})\right|-\gamma_j, & \left|h_j(\boldsymbol{x})\right|\le\gamma_j\\ 0, & \text{otherwise}\end{cases} $$
    (A2)

    To find feasible solutions, the usual methods check whether every constraint violation value is less than or equal to zero. Here, Eq. (A2) is adopted instead to measure the satisfaction degree of a solution x. In Eq. (A2), the parameters δ_j and γ_j are tolerance values that relax the strength of the constraints, especially the equality constraints; by admitting some infeasible individuals, these two parameters help maintain the diversity of the particle population.

    Finally, the satisfactory summation function φ(x) is defined as follows:

    $$ \varphi(\boldsymbol{x})=\sum_{j=1}^{l}\varphi_{g_j}(\boldsymbol{x})+\sum_{j=l+1}^{p}\varphi_{h_j}(\boldsymbol{x}) $$
    (A3)
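Equations (A2) and (A3) can be transcribed directly. The sketch below (Python, with hypothetical function names) mirrors the formulas exactly as written, including the tolerance parameters δ_j and γ_j:

```python
def phi_g(g, delta):
    """Satisfaction level of an inequality constraint g(x) <= 0, per Eq. (A2)."""
    if g < 0:
        return 1.0
    return g / delta if g <= delta else 0.0

def phi_h(h, gamma):
    """Satisfaction level of an equality constraint h(x) = 0, per Eq. (A2)."""
    return abs(h) - gamma if abs(h) <= gamma else 0.0

def phi_total(g_vals, deltas, h_vals, gammas):
    """Satisfactory summation function phi(x), per Eq. (A3)."""
    return (sum(phi_g(g, d) for g, d in zip(g_vals, deltas))
            + sum(phi_h(h, c) for h, c in zip(h_vals, gammas)))
```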
  3) Evaluate solutions

    First, all solutions are sorted with a fast dominance sort: the non-dominated particles are saved into the external archive (E_t) and the dominated particles are discarded from P_t. After that, if the current population size is smaller than the preset size, randomly generated new particles are added to the population. Finally, the Pb_i and the global best performance (Gb) are chosen from E_t so that the particles are guided by the user's preference information, as described in Section II.B.
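The dominance test at the heart of this step can be sketched as follows (Python; a minimal illustration for a minimization problem, not the paper's implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(objectives):
    """Indices of the solutions that no other solution dominates."""
    return [i for i, a in enumerate(objectives)
            if not any(dominates(b, a) for b in objectives)]
```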

  4) Update particles

    The velocity matrix and the position matrix are updated according to the following equations:

    $$ {v}_i^{t+1}=\omega {v}_i^t+{c}_1{r}_1^t\left(P{b}_i^t-{x}_i^t\right)+{c}_2{r}_2^t\left(G{b}^t-{x}_i^t\right) $$
    (A4)
    $$ {x}_i^{t+1}={v}_i^{t+1}+{x}_i^t. $$
    (A5)

    where the superscripts t and t + 1 refer to the time index of the current and the next iteration, and ω is the inertia weight, which decreases from 0.9 to 0.1 over the iterations. The acceleration coefficients c_1 and c_2 are the learning factors of the swarm, which control how far a particle moves in a single iteration; usually c_1 = c_2, with values in the range [0, 2]. r_1 and r_2 are random real values uniformly distributed in the interval [0, 1]. The particles update their velocities and positions from the current position and velocity information as given in Eqs. (A4) and (A5).
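A minimal per-particle transcription of Eqs. (A4) and (A5) (Python; the function name and the injectable random source are our own conventions):

```python
import random

def update_particle(x, v, pb, gb, w, c1=2.0, c2=2.0, rng=random):
    """One velocity/position update per Eqs. (A4) and (A5).

    x, v: current position and velocity; pb: personal best; gb: global best;
    w: inertia weight; c1, c2: learning factors; rng: random source.
    """
    r1, r2 = rng.random(), rng.random()   # uniform in [0, 1], drawn per update
    v_new = [w * vi + c1 * r1 * (p - xi) + c2 * r2 * (g - xi)
             for vi, xi, p, g in zip(v, x, pb, gb)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```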

  5) Check termination

    Go back to step 3) until one of the termination conditions is met. When a termination condition is satisfied, the algorithm terminates and exports the solutions; otherwise, the next iteration proceeds.

    Algorithm 1 Multi-objective particle swarm optimization with preference information

    1) Initialize
      t = 0
      for i = 1 : Pop
        for j = 1 : Np
          x_t = rand(Min, Max)
          v_t = rand(Min, Max)
          Evaluate F(x)
        end
      end
      E_t = []

    2) Convert constraint functions
      Calculate φ(x) for the particles
      Generate the new objective function E(x)

    3) Evaluate solutions
      t = t + 1
      Sort the particles using the quick sort method
      E_t = E_t ∪ non-dominated solutions
      for i = 1 : N_E
        Evaluate the particles in E_t
      end
      Choose Gb_t from E_t for the particles

    4) Update particles
      for i = 1 : Pop
        for j = 1 : Np
          Update v_t and x_t by (A4) and (A5)
        end
      end

    5) Check the termination conditions; if none is met, go back to 3)

    The parameters used in the algorithm are described as follows.

    rand(Min, Max): random value between Min and Max; v_t: velocity matrix of the particles; x_t: position matrix of the particles; E_t: external archive; F(x): objective function; φ(x): satisfactory summation function; E(x): new objective function; Pop: initial population size; Np: number of decision variables; N_E: size of the external archive.
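Putting the five steps together, a self-contained toy run might look like the following (Python). Everything here is an illustrative assumption rather than the paper's code: the bi-objective test problem, all names, and the leader selection, which the paper performs by preference-based ranking but is simplified here to a random archive member; the constraint-conversion step 2) is omitted for brevity.

```python
import random

def objectives(x):
    """Toy bi-objective problem (not from the paper): minimize both terms."""
    return (x[0] ** 2, (x[0] - 2) ** 2)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(p <= q for p, q in zip(a, b)) and any(p < q for p, q in zip(a, b))

def mopso_sketch(pop=12, n_p=1, iters=15, lo=-4.0, hi=4.0, seed=1):
    rng = random.Random(seed)
    # 1) Initialize positions, velocities, personal bests, and an empty archive
    x = [[rng.uniform(lo, hi) for _ in range(n_p)] for _ in range(pop)]
    v = [[0.0] * n_p for _ in range(pop)]
    pb = [xi[:] for xi in x]
    archive = []
    for t in range(iters):
        # 3) Keep only non-dominated solutions in the external archive
        pool = archive + [(xi[:], objectives(xi)) for xi in x]
        archive = [(s, f) for s, f in pool
                   if not any(dominates(g, f) for _, g in pool)]
        w = 0.9 - 0.8 * t / max(iters - 1, 1)   # inertia weight 0.9 -> 0.1
        # 4) Update every particle by Eqs. (A4) and (A5)
        for i in range(pop):
            gb = rng.choice(archive)[0]         # simplified leader selection
            r1, r2 = rng.random(), rng.random()
            for j in range(n_p):
                v[i][j] = (w * v[i][j] + 2.0 * r1 * (pb[i][j] - x[i][j])
                           + 2.0 * r2 * (gb[j] - x[i][j]))
                x[i][j] = min(hi, max(lo, x[i][j] + v[i][j]))
            if dominates(objectives(x[i]), objectives(pb[i])):
                pb[i] = x[i][:]
    # 5) Export the archive once the iteration budget is exhausted
    return archive
```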


Cite this article

Feng, L., Mao, Z., Yuan, P. et al. Multi-objective particle swarm optimization with preference information and its application in electric arc furnace steelmaking process. Struct Multidisc Optim 52, 1013–1022 (2015). https://doi.org/10.1007/s00158-015-1276-2
