Applying Autonomous Hybrid Agent-based Computing to Difficult Optimization Problems

Evolutionary multi-agent systems (EMASs) are well suited to difficult, multi-dimensional problems; their efficacy has been proven theoretically based on an analysis of the relevant Markov-chain-based model. The research now continues on introducing autonomous hybridization into EMAS. This paper focuses on a proposed hybrid version of EMAS and covers the selection and introduction of a number of hybrid operators as well as the definition of rules for starting the hybrid steps of the main algorithm. These hybrid steps leverage existing, well-known metaheuristics that have proven to be efficient and integrate their results into the main algorithm. The discussed modifications are evaluated on a number of difficult continuous-optimization benchmarks.


Introduction
Despite the recent increases in computer performance, not all problems can be resolved in a timely manner. Solving problems by deterministic algorithms often takes too long (e.g., exceeding a dozen or so cities in the traveling salesman problem (TSP) is such an example [1]). However, some problems do not have deterministic solutions. In these as well as in other cases, novel stochastic metaheuristics are applied. The main contributions of this paper are as follows:
• We solved the problem of redistributing the agents' energy by using an appropriate redistribution operator.
• Through extensive computer simulations, we show that the proposed HEMAS computing system produces competitive results as compared to EMAS (the version with two hybrid algorithms was better than the one with one hybrid algorithm, yet the version with three hybrid algorithms was not superior to the one with two) by using proper comparison metrics (the numbers of the evaluations of the fitness functions); these are independent from the algorithm, implementation, and hardware.
The outline of the paper is as follows. After the introduction, we describe the original EMAS (with references to the state of the art) and follow with a description of HEMAS. Then, the details related to the hybridization with the selected metaheuristics are given (first the concept, and later the parameters). Finally, the experimental setting and the results are shown and discussed, and the paper is concluded.

Original EMAS
EMAS may be perceived as a "proactive" alternative to classical evolutionary computation techniques, one that can relieve evolutionary metaheuristics of several inconsistencies with real-life evolution; e.g., by removing global control and allowing asynchronous reproduction. In this system, solutions (genotypes) are assigned to agents, which realize several types of actions that are available to them in order to improve their solutions. Agents can meet with each other and either compete or exchange resources. In the former case, only the richer agent is allowed to reproduce (analogous to selection in an EA); in the latter, part of the resources of the poorer agent is allocated to the richer one (analogous to crossover in an EA) [5]. For a schematic view of EMAS, one can refer to Fig. 1. It should be noted that the correctness of EMAS as a global universal optimizer has been formally proven by using Markov chain-based models that were inspired by the theoretical works of Michael Vose [11]. EMAS also has many extensions; e.g., an immunological one [12] that was applied to solve different single-criterion and multi-criteria problems.
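The meeting action described above can be illustrated with a short sketch. The agent fields, the dict-free dataclass representation, and the fixed energy transfer of 1.0 (taken from the experimental settings reported later in the paper) are simplifying assumptions, not the authors' implementation:

```python
from dataclasses import dataclass

MEETING_ENERGY = 1.0  # energy passed to the winner of a meeting (cf. the settings section)

@dataclass
class Agent:
    genotype: list   # real-valued solution vector
    energy: float    # life resource exchanged during meetings
    fitness: float   # cached fitness value (minimization is assumed)

def meet(a: Agent, b: Agent) -> None:
    """The agent with the better (lower) fitness wins the meeting and takes
    a fixed amount of energy from the loser."""
    winner, loser = (a, b) if a.fitness <= b.fitness else (b, a)
    transfer = min(MEETING_ENERGY, loser.energy)  # a loser cannot go negative
    winner.energy += transfer
    loser.energy -= transfer
```

In this way, energy flows toward better solutions over many meetings, which plays the role of selection without any global control.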
The relationship between the elements of the algorithm is presented in Fig. 1. The agency is based on a particular implementation that has been used in numerous projects over the last two decades, and the agents are implemented as entities that undergo event-driven simulations [13]. The detailed EMAS algorithm is summarized in Algorithm 1. It can be noticed that the setting is quite similar to, for example, the classic genetic algorithm. This implementation method of an agent-based computing system makes it possible to retain the idea of agency yet utilize simple technologies (in the discussed case, the system is based on jMetal [14]). The agent-oriented activities are actually realized inside the steps; e.g., meetStep(), reproStep(), and deadStep(). The event-driven simulation approach consists of each agent realizing these steps one at a time when certain conditions are true. Thus, the concurrency is simulated, and the whole approach is very easy to implement and port among different metaheuristic-oriented software frameworks.
The event-driven approach allows for the building of a system that consists of loosely coupled components that communicate by using a simple mechanism of transmitted and received events.
This architecture considerably simplifies the system distribution and makes it more scalable and resistant to failures. One can notice that these are expected features for each computing architecture.
It is also compatible with the agent-computing paradigm, which assumes that loosely coupled independent agents communicate with each other. The implementation of the agent paradigm in the event-driven architecture can be easily realized by treating agents as both emitters and consumers of messages and can be freely configured and adapted to different hybrids of EMAS. (The core of Algorithm 1 is a loop that, while the stopping condition is not reached, executes meetStep(), reproStep(), and deadStep().) EMAS also has many extensions that can be used to solve different single-criterion and multi-criteria problems. Among others, it is worth mentioning immunological EMAS, elitist EMAS for multi-objective optimization, CoEMAS, memetic agent-based continuous optimization, and hybrids of EDA-type and EMAS algorithms. All of these are briefly presented below.
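A self-contained sketch of the main loop of Algorithm 1 is given below. The dict-based agent representation and the simplified bodies of the three steps are illustrative assumptions (the energy thresholds follow the settings reported later in the paper), not the jMetal-based implementation:

```python
import random

DEATH_LEVEL = 0.0    # agents at this energy level are removed (cf. settings)
REPRO_LEVEL = 20.0   # energy required to reproduce (cf. settings)

def emas_loop(population, fitness, max_evaluations):
    """Sketch of the EMAS main loop (cf. Algorithm 1). Each agent is a dict
    holding a genotype, its cached fitness, and an energy level; the three
    steps are simplified stand-ins for meetStep(), reproStep(), deadStep()."""
    evaluations, steps = 0, 0
    while evaluations < max_evaluations and len(population) > 1 and steps < 10_000:
        steps += 1  # safety cap for this sketch
        # meetStep(): two random agents compete; the fitter one wins energy
        a, b = random.sample(population, 2)
        winner, loser = (a, b) if a["fit"] <= b["fit"] else (b, a)
        transfer = min(1.0, loser["energy"])
        winner["energy"] += transfer
        loser["energy"] -= transfer
        # reproStep(): a rich-enough agent spawns a mutated child
        if winner["energy"] >= REPRO_LEVEL:
            child_geno = [g + random.gauss(0.0, 0.1) for g in winner["geno"]]
            child = {"geno": child_geno, "fit": fitness(child_geno), "energy": 10.0}
            winner["energy"] -= 10.0  # the parent passes part of its energy on
            population.append(child)
            evaluations += 1
        # deadStep(): agents with no energy left are removed from the system
        population[:] = [ag for ag in population if ag["energy"] > DEATH_LEVEL]
    return population
```

Note that the total energy in the system is conserved (meetings only move it between agents, and reproduction splits it between parent and child), which is what makes the energy level a meaningful selection signal.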
Immunological EMAS (iEMAS) [12] assumes the introduction of a new group of agents that act as lymphocyte T-cells (following an immunological inspiration). The goal of these special agents is to introduce a negative-selection mechanism into the evolution process that would eliminate other agents with similar genotypes. This can be achieved by either killing or weakening a selected agent by decreasing its strength. For this purpose, a defined matching function is used to calculate the similarity of a tested agent to a lymphocyte-agent (e.g., the percentage of similar genes). Lymphocyte agents are created after the death of an agent and are given a mutated variant of its genotype. Over a period of time, the lymphocyte patterns that recognize "good" agents (possessing high amounts of energy) are eliminated. In this way, the evolution promotes those lymphocyte agents that bear a resemblance to "bad" agents so that they can be continuously removed from the system -thus leaving the "good" agents intact. This approach is useful for applications in which the fitness evaluation is time-consuming, as the operation of lymphocyte agents is less expensive.
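A matching function of the kind mentioned above (the percentage of similar genes) might be sketched as follows; the tolerance threshold for real-valued genes is an illustrative assumption:

```python
def matching(tested_genotype, lymphocyte_pattern, tolerance=0.05):
    """Share of genes in which the tested agent resembles the lymphocyte
    pattern; genes closer than `tolerance` count as similar. The tolerance
    value is an assumption for real-valued genotypes, not the paper's exact
    setting."""
    similar = sum(
        1 for g, p in zip(tested_genotype, lymphocyte_pattern)
        if abs(g - p) <= tolerance
    )
    return similar / len(lymphocyte_pattern)
```

An agent whose matching value against a lymphocyte exceeds some threshold would then be killed or weakened by the negative-selection mechanism.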
The elitist evolutionary multi-agent system for multi-objective optimization is another hybrid of EMAS that was introduced in [15]. The main idea behind this was to create a ranking of the best solutions according to an additional feature called "prestige." Each agent starts its life with a prestige level of zero; then, it increases its prestige level by meeting and comparing itself with other agents using the fitness function. This resource is accumulated by the agent throughout its lifetime and cannot be lost. The second important assumption of this approach is the introduction of a special elite island from where the agents cannot migrate to other islands (and in this way cannot take part in the ongoing evolution process). To migrate to this island, an agent needs to possess a proper level of prestige. The elitist approach allows for a deeper exploration of a solution's frontier and, therefore, is suitable for multi-objective optimization. Another advantage is its elegance and simplicity as well as its ease in providing additional extensions.
The problem with classical evolutionary algorithms and EMAS-type systems is the low diversity of their populations. This is a drawback in the context of solving multi-modal optimization problems, where one needs to create species that are localized around different local optima. These problems can be solved by using another hybrid of EMAS known as a co-evolutionary multi-agent system with co-evolving species (nCoEMAS) [16]. The main idea behind the CoEMAS model is to introduce a multiplicity of species and, thus, their co-evolution. In particular, this approach introduces a special type of computational agent called a "niche." Individual agents live within these niches, where they can interact with other agents. Through this niching mechanism, it is possible to divide a whole population into subpopulations (species) that are located in the zones of particular local maxima of the whole solution area. The evolution of the agents takes place within the individual niches; however, it is also possible to migrate between them if an agent has an adequate amount of resources. This model assumes the merging of niches if their centers are around a given maximum and the creation of new ones if a given agent has not found a suitable niche to migrate to.
Another important hybridization of EMAS is related to the use of memetic search algorithms [17]. Local search algorithms can gradually bring population units closer to local extremes (in this way, enhancing their genotypes). In the EMAS model, agents are autonomous entities and can conduct searches for solutions independently and freely while using the available resources.
Thus, a local search could be used more often than usual (during evaluation or reproduction). A local search produces a number of different solutions (each of which is evaluated), and the best one is selected to replace the genotype of the individual. The conducted research shows that a local search can provide a significant improvement in the obtained results in a shorter amount of time and has further potential for development. On the other hand, the application of memetic search algorithms during an agent's lifetime does not result in improvements in the obtained results. An important aspect of the work is the caching mechanism applied in the search context, which resulted in a significant increase in the number of possible fitness-function evaluations; this clearly allows for the exploration of more-difficult multi-dimensional problems.
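The memetic step described above can be sketched as follows. The neighbourhood size, the step width, and the dict-based cache are illustrative assumptions; the cache mirrors the caching mechanism mentioned in the text, so re-evaluating an already-seen solution costs nothing:

```python
import random

def local_search(genotype, fitness, neighbours=10, step=0.1, cache=None):
    """Sample a few neighbouring solutions, evaluate each one, and return the
    best of them (which then replaces the agent's genotype)."""
    if cache is None:
        cache = {}

    def evaluate(solution):
        key = tuple(solution)
        if key not in cache:
            cache[key] = fitness(solution)  # only truly new evaluations are paid for
        return cache[key]

    best, best_fit = list(genotype), evaluate(genotype)
    for _ in range(neighbours):
        candidate = [g + random.uniform(-step, step) for g in genotype]
        fit = evaluate(candidate)
        if fit < best_fit:  # minimization is assumed
            best, best_fit = candidate, fit
    return best, best_fit
```

Sharing one cache among all agents is what increases the effective budget of fitness-function evaluations.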
Interesting research on EMAS is presented in [18], in which several hybridizations of EDA-type and EMAS algorithms are evaluated. EDA stands for estimation-of-distribution algorithms, which are universal metaheuristics that are built on evolutionary algorithms. One of the successful EDA-type algorithms is COMMAop, which was proposed in [19]. This algorithm includes exploration and exploitation phases by using a population of agents and assigning them different roles.
The population of the agents evolves mostly by mutations that are governed by specific rules based on geometric inspirations. The idea of this research was to extend it with mechanisms of population decomposition, cloning, crossover, and death (all inspired by the EMAS approach). The introduced modifications proved successful and produced promising optimization results as compared to the original version of the COMMAop algorithm. In addition, the modified algorithms seem more reliable, as the dispersion of their results is significantly lower.

EMAS hybridized with classic metaheuristics
A hybrid evolutionary multi-agent system (HEMAS) is an algorithm based on EMAS that assumes an additional hybridization step. The mentioned hybridization step (shown in Fig. 2) incorporates one of the available metaheuristics into the basic EMAS algorithm. As HEMAS and EMAS share the same problem representation, the choice of metaheuristics is arbitrary; should the representation be different, additional translation techniques would be required. The idea of hybridizing EMAS consists of running the regular computing algorithm until certain conditions are met; then, the following three phases of the algorithm are realized:
1. Optimization condition: a number of different rules are available whose satisfaction leads to the firing of the hybridization sequence. The rules may be individual (evaluated by the agents); e.g., an agent's energy dropping below a certain threshold, observing that the agent loses too many fights, or that its offspring die very early. These conditions are checked for a certain number of algorithm steps, and those agents that are willing to participate are marked for the hybrid step. Alternatively, the conditions may be global rather than agent-based; in this case, a whole population is evaluated, and when a rule is satisfied, the hybridizing algorithm is run for the whole population of agents.
2. Running the optimization algorithm: at this stage, the hybrid algorithms are run, incorporating the willing agents' solutions as starting points. There is no limit on the number of algorithms nor on the lengths of their runs; however, it is worth choosing an algorithm that will support EMAS instead of replacing it. These algorithms can proceed in any possible way; however, they should share the same representation of the problem. After the hybrid algorithm finishes, the solutions are changed; therefore, the next step becomes inevitable.
3. Energy redistribution: all of the agents that participated in the previous stage must have their energy modified (as their genotypes have surely changed). Several methods of energy redistribution were devised; namely, proportional, ranking, and tournament (following the well-known selection methods [6]). Based on the results that are discussed in [20], we use a proportional redistribution operator.
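A proportional redistribution operator of the kind referenced above can be sketched as follows. The quality scaling 1 / (1 + fitness) is an illustrative assumption for a minimization problem, not necessarily the exact formula used in [20]; what matters is that the total energy of the participants is preserved while each agent's share becomes proportional to the quality of its new solution:

```python
def proportional_redistribution(agents):
    """Redistribute the pooled energy of the participating agents
    proportionally to the quality of their (new) solutions. Each agent is a
    dict with a cached fitness ("fit") and an energy level ("energy")."""
    total_energy = sum(a["energy"] for a in agents)
    # For minimization, lower fitness means higher quality; the 1/(1+f)
    # scaling is an assumption for this sketch.
    qualities = [1.0 / (1.0 + a["fit"]) for a in agents]
    total_quality = sum(qualities)
    for agent, quality in zip(agents, qualities):
        agent["energy"] = total_energy * quality / total_quality
```

Ranking and tournament variants would differ only in how the shares are computed, mirroring the corresponding selection methods.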
The hybridization of EMAS is depicted in Algorithm 2. The previously known steps (cf. the EMAS algorithm 1) are supplemented with the hybridization step, which verifies the conditions of running certain hybrid algorithms, checks whether there are enough available agents that are willing to perform these steps, and delegates the genotypes of the agents to these hybridizing algorithms.
Finally, the energy is redistributed according to one of the possible schemes, and EMAS continues its run. The hybridization step is realized periodically. Within this paper, we propose a highly autonomous computing system where the decision about hybridization is fully entrusted to the agents (hybrid steps that use a particular algorithm may be run only when a minimal number of agents decide to do so). This paper assumes simple yet efficient implementations of the rules that trigger hybridization; however, these can be even more complex; e.g., machine-learning models (like neural networks) may be employed, trained over the course of many experiments, and later used as triggers for firing the hybridization steps.
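The hybridization step can be sketched as follows. The helper names, the minimal-willing-agents threshold, and the dict-based agent representation are assumptions for illustration; the 2000-step period follows the settings section:

```python
MIN_WILLING = 5       # minimal number of willing agents to trigger a hybrid run (assumed)
HYBRID_PERIOD = 2000  # EMAS steps between hybridization checks (cf. settings)

def hybridization_step(population, step_no, condition, hybrid_algorithm, redistribute):
    """Sketch of the hybridization step of Algorithm 2: periodically gather
    the agents whose rule fires, delegate their genotypes to the hybrid
    metaheuristic, write the improved solutions back, and redistribute the
    energy among the participants."""
    if step_no % HYBRID_PERIOD != 0:
        return  # the step is realized only periodically
    willing = [a for a in population if condition(a, population)]
    if len(willing) < MIN_WILLING:
        return  # not enough agents decided to participate
    new_solutions = hybrid_algorithm([a["geno"] for a in willing])
    for agent, solution in zip(willing, new_solutions):
        agent["geno"] = solution  # genotypes have changed, so ...
    redistribute(willing)         # ... energy must be redistributed
```

Any metaheuristic sharing the problem representation (e.g., PSO or GA) can be plugged in as `hybrid_algorithm`, and any of the redistribution operators as `redistribute`.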

Settings of hybridization
In the discussed setting, we have chosen two classic algorithms for hybridization with EMAS:
1. Particle swarm optimization [21] is an acclaimed global-optimization metaheuristic that is based on particles moving in a feasible solution space and modifying their directions and velocities by using their own knowledge and the knowledge of the whole swarm in order to reach the optimum.
2. The genetic algorithm [22] is a classic global-optimization metaheuristic that follows Charles Darwin's theory of evolution by processing a number of individuals with encoded potential solutions to a problem. These individuals undergo selection, crossover, mutation, and evaluation phases. The used version of the algorithm follows the Michalewicz model of evolution that is embedded in a real-valued search space (no encoding/decoding is necessary).
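A minimal global-best PSO of the kind usable in the hybrid step is sketched below; the willing agents' genotypes become the initial particle positions. The inertia and acceleration coefficients are common defaults, not the paper's settings:

```python
import random

def pso(starting_points, fitness, iterations=50, w=0.7, c1=1.5, c2=1.5):
    """Global-best particle swarm optimization (minimization). Each particle
    keeps a velocity, its personal best, and is attracted toward both its
    personal best and the swarm's global best."""
    positions = [list(p) for p in starting_points]
    velocities = [[0.0] * len(p) for p in positions]
    pbest = [list(p) for p in positions]
    pbest_fit = [fitness(p) for p in positions]
    g = min(range(len(positions)), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = list(pbest[g]), pbest_fit[g]

    for _ in range(iterations):
        for i, pos in enumerate(positions):
            for d in range(len(pos)):
                r1, r2 = random.random(), random.random()
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * r1 * (pbest[i][d] - pos[d])   # own knowledge
                                    + c2 * r2 * (gbest[d] - pos[d]))     # swarm knowledge
                pos[d] += velocities[i][d]
            fit = fitness(pos)
            if fit < pbest_fit[i]:
                pbest[i], pbest_fit[i] = list(pos), fit
                if fit < gbest_fit:
                    gbest, gbest_fit = list(pos), fit
    return gbest, gbest_fit
```

Since both PSO and the real-valued GA operate directly on vectors of reals, the agents' genotypes can be passed in and read back without any translation.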
As one can see, these algorithms are completely different; however, they work on the same type of search space (the classic versions of these algorithms operate in a real-valued search space). Therefore, they can support basic EMAS by helping it evade conditions like a lack of diversity when, for example, one of the conditions listed below is true.
The conditions that are used in the presented research are local (evaluated by an individual agent) and global (evaluated for the whole population). These conditions are constructed in order to detect unwanted situations during the search process; the above-mentioned algorithms are used to evade these situations:
VE0 - Variety Equal 0: the diversity of the solutions in a population is equal to 0.0. The diversity itself is measured as the minimal standard deviation of particular genes, counted over the whole population. A lack of diversity is dangerous for the whole search process and can be detected when the population gets stuck in a local extremum. Executing a hybridizing step is aimed at escaping such a local extremum.
ELQ1 - Energy Less than 1st Quartile: the energy of an agent is lower than the first quartile of the energy of the whole population of agents. Agents with low energy are about to die; such agents may tend to update their solutions by means of a hybrid step so that they can live longer and help explore other promising areas of the search space.
EGQ3 -Energy Greater 3rd Quartile: the energy of an agent is higher than the third quartile of the energy of a whole population. Agents with high energy may also tend to become quite similar; therefore, running a hybrid step may also help introduce higher diversity in the best agents in a population.
VG0.5 - Variety Greater than 0.5: the diversity of the solutions in a population is greater than 0.5. High diversity can result from the fact that a population has very high exploratory power and is not particularly focused on exploitation at a given moment. Running the hybrid step aims at increasing the exploitation power of the whole population in order to focus the search.
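The diversity measure and the four conditions above can be sketched as follows (agents are represented as dicts for illustration; the quartiles are computed with Python's statistics module, whose exact interpolation method may differ from the paper's implementation):

```python
import statistics

def diversity(population):
    """Diversity as described above: the minimal standard deviation among
    particular genes, counted over the whole population."""
    genes = list(zip(*(a["geno"] for a in population)))
    return min(statistics.pstdev(g) for g in genes)

def energy_quartiles(population):
    """First and third quartiles of the agents' energy levels."""
    energies = sorted(a["energy"] for a in population)
    q = statistics.quantiles(energies, n=4)  # returns [Q1, Q2, Q3]
    return q[0], q[2]

def ve0(population):           # VE0: diversity equal to 0.0 (global)
    return diversity(population) == 0.0

def vg05(population):          # VG0.5: diversity greater than 0.5 (global)
    return diversity(population) > 0.5

def elq1(agent, population):   # ELQ1: energy below the first quartile (local)
    return agent["energy"] < energy_quartiles(population)[0]

def egq3(agent, population):   # EGQ3: energy above the third quartile (local)
    return agent["energy"] > energy_quartiles(population)[1]
```

The global rules trigger a hybrid run for the whole population, whereas the local ones only mark the individual agent as willing to participate.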

Experiment design
All of the experiments that are presented in this paper were performed on the Prometheus supercomputer, which is hosted by the Academic Computing Center Cyfronet AGH; it offers 2403 TFlops, runs Linux CentOS 7, and is part of PL-Grid.
The algorithms were implemented and run using the well-known jMetal (ver. 5.6) framework.
The version that was used was extended by Leszek Siwik, who implemented EMAS and used it in several research papers [23] (https://bitbucket.org/lesiwik/modelowaniesymulacja2018).
In the presented research, the well-known multi-dimensional Rastrigin, Ackley, Griewank, and Sphere problems with 100, 300, 500, 1000, and 2000 dimensions were used [24]. These problems have their global optima at the origin of the Cartesian space, with a fitness value equal to 0.0. Statistical processing was realized by using the R system.
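For reference, the four benchmarks in their common formulations (each with a global optimum of 0.0 at the origin) can be written as:

```python
import math

def sphere(x):
    return sum(v * v for v in x)

def rastrigin(x):
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def ackley(x):
    n = len(x)
    return (-20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
            - math.exp(sum(math.cos(2 * math.pi * v) for v in x) / n)
            + 20 + math.e)

def griewank(x):
    s = sum(v * v for v in x) / 4000
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1
```

Rastrigin and Ackley are highly multi-modal (many regularly spaced local optima), Griewank combines a quadratic bowl with an oscillating product term, and Sphere is a unimodal control case.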
All of the experiments were run 30 times, and the results were processed by means of descriptive statistics (means, medians, and box plots are shown) and mathematical statistics (statistical hypothesis testing based on the Kruskal-Wallis and Dunn's tests was applied). The stopping criterion for all of the tested metaheuristics was the maximum number of fitness-function evaluations (100 times the number of dimensions of the problem being tackled; e.g., for a 100-dimensional problem, it was 10,000).
The EMAS and HEMAS algorithms that underwent the experiments had the following parameters:
• Number of agents in the population: 50;
• Total amount of resources (energy) in the population: 500;
• Initial amount of an agent's energy: 10.0;
• Amount of energy passed to the agent that wins a meeting: 1.0;
• Death condition: an agent that reaches the 0 energy level is removed from the system;
• Reproduction condition: an agent that reaches the 20.0 energy level may reproduce;
• Crossover operator: SBXCrossover [14,25] (distribution coefficient: 5.0; probability: 1.0);
• Mutation operator: PolynomialMutation [14] (distribution coefficient: 10.0; probability: 0.01);
• Strong mutation operator: PolynomialMutation [14] (distribution coefficient: 20.0; probability: 1.0); this is used when an agent cannot reproduce with another agent in order to prevent the effects of inbreeding.
Hybridization parameters:
• Condition for running the hybridization algorithms: periodically checking whether any rules are true and whether there are agents willing to participate (the period is 2000 steps of the EMAS algorithm);
• PSO was used as the hybridization algorithm (in the tests with one, two, and three operators); GA was only used in the three-operator version;
• Conditions for running the hybridization: VE0 (one operator); ELQ1 and EGQ3 (two operators); VG0.5, ELQ1, and EGQ3 (three operators);
• The maximum number of hybrid optimization cycles was 3;
• The energy redistribution operator was proportional [20].

Parameters of PSO:
• Swarm size: k;
• Agents utilize the global knowledge of the whole swarm.

Results
In the presented experiments, we focused on comparisons between EMAS and each of its hybrid versions; however, we kept in mind that comparisons between our newly proposed algorithms and classic acclaimed ones were necessary. Indeed, we showed such a comparison in [10], where one can find results that indicate the better efficacy of the proposed hybrid agent-based algorithms as compared to the acclaimed differential evolution algorithm. Several combinations were examined by means of descriptive statistics (including the first [Q1] and third [Q3] quartiles); however, the combination described above obtained the best results. The graph in Fig. 3 shows box-and-whisker plots of the fitnesses averaged over all of the experimental runs of the algorithms. As one can see, both algorithms produced quite similar results during the first 20,000 evaluations of the fitness function; the hybrid version then began to approach the optimum, while the classic version was apparently stuck in some local extremum and was able to increase its quality much more slowly than the hybrid version. Moreover, one can see that the hybrid version had a higher exploratory power than the classic version because of the significantly higher dispersion of the results. To sum up, EMAS did not lose its searching potential, but HEMAS apparently searched in a more efficient and effective manner.

HEMAS with one hybridization operator
In order to have a broader view of the efficacy and efficiency of HEMAS, let us take a look at Fig. 4; this shows the final results of EMAS and HEMAS that were computed for the most difficult benchmarks that were tackled. In all of the observed cases, the final solutions that were obtained by the hybrid version turned out to be visibly better than those of the classic one. Moreover, the observed whiskers did not overlap, which allowed us to expect that the observed differences were statistically significant. One should note that the Rastrigin benchmark (Figs. 4c and 4d) turned out to be the most difficult for both the tested hybrid and the original algorithm; for the other benchmarks (Ackley, Sphere, and Griewank), however, the algorithms were able to finish the search much closer to the global optimum.
These observations can be confirmed when looking at Table 1, where the EMAS and HEMAS final results can be found (along with the necessary descriptive statistics). Both algorithms were able to approach the global extremum (which was equal to 0 for all of the problems tackled); however, HEMAS seemed to be closer. For each test case, the means, medians, minimum values, and maximum values (except for Rastrigin 2000) of the results were better for HEMAS with one optimization operator. EMAS had better standard deviations in most cases, which may result from its faster stalling at local minima.

HEMAS with two hybridization operators
In Fig. 5, the best solutions for EMAS and HEMAS are shown when two PSO hybridization operators were applied. These were executed for those agents whose energy levels were lower than the first quartile of the energy in the system as well as for those agents whose energy levels were higher than the third quartile of the energy in the system. This combination proved to be the best among the tested algorithms (such as PSO and ES) and conditions (EL3 + VE0, EL3 + VG0.5, EL3 + VG1, EG17 + VE0, EG17 + VG0.5, EG17 + VG1, EL3 + EG17, ELQ1 + EGQ3, and SLQ1 + SGQ3). For the observed 2000D Ackley problem (Fig. 5), HEMAS started to beat EMAS very quickly (starting with 20,000 evaluations of the fitness function). EMAS apparently got stuck in a local optimum, while HEMAS retained its explorative power thanks to the application of the hybrid operators. It should be noted that the dispersion of the results was lower than in the case of one hybridization operator; the whole experiment was repeatable (the dispersion of the observed results was reasonable). The rationale for applying these two rules for running PSO was to rescue the worst solutions from imminent death and to improve the best solutions in order to get even closer to the optimum. Taking a look at the graphs in Fig. 6, the dispersion of the final fitnesses for the observed cases was lower than previously. The final fitnesses were also visibly better in the observed case; e.g., solving the Rastrigin benchmark in 2000 dimensions reached ca. 2500, while this was approximately 4500 earlier (see Fig. 6d).
This observation can be confirmed when analyzing Table 2. Generally, the results were similar; however, adding another hybridization operator allowed us to obtain significantly better results than in the case of a single operator (and in the case of the original EMAS). In only four cases (Griewank 100, Rastrigin 100, Rastrigin 300, and Sphere 100) did HEMAS with two optimization operators achieve slightly worse results than HEMAS with one operator. In the remaining cases, adding the second operator helped improve the mean results, medians, and standard deviations as well as the minimum and maximum results.

HEMAS with three hybridization operators
In Fig. 7, the results obtained for HEMAS with three hybridization operators are shown. HEMAS executed PSO as its hybrid step for those agents with energy levels lower than the first energy quartile as well as for those agents with energy levels higher than the third quartile. Moreover, the genetic algorithm was run for the whole population when the diversity was greater than 0.5. Again, this combination proved to be the best of all of the tested ones (VG0.5 + EL3 + EG17, VG0.5 + ELQ1 + EGQ3, VG0.5 + SLQ1 + SGQ3, VG1 + EL3 + EG17, VG1 + ELQ1 + EGQ3, and VG1 + SLQ1 + SGQ3). The obtained results (Fig. 7) were quite similar to the previous ones (the experiment with two hybridization operators). The final fitnesses displayed using box-and-whisker plots (Fig. 8) were also similar to the results obtained in the previous experiment.
Finally, Table 3 gathers the final results of HEMAS with three hybridization operators, while the outcomes of the statistical tests are presented in Tables 4 and 5. All of the p-values that were lower than the significance level (pointing out that the null hypothesis has a very low probability) are typeset in bold. Apparently, the results that were produced by both versions of the HEMAS algorithm were statistically different from those of EMAS; however, the observed means were not different when comparing the two HEMAS versions. This can be interpreted to mean that the tested three-hybridization-operator configuration was not completely necessary; in the tested case, the two-operator version was enough to beat EMAS and the one-operator version. More-complex methods do not always bring better results.

Conclusions
In this paper, we have presented comprehensive results of the application of a novel hybridization concept in agent-based computing; namely, the autonomous hybridization of EMAS with selected classic metaheuristics. This approach assumes that a number of agents decide whether to be subjected to a selected hybrid optimization algorithm based on a predefined rule. Thus, the most important feature of agency (already visible in EMAS) becomes enhanced to a greater extent: now, the agents can autonomously decide whether to undergo a selected hybridization step, thus enhancing their solutions by employing classic metaheuristics such as GA or PSO.
Three rules (based on measuring the diversity of the solutions and the energy of the agents) were tested in the presented setting, and the results produced by three versions of HEMAS were shown and discussed. The presented results point out that introducing two hybrid operators allowed for upgrading the efficiency and efficacy of EMAS; however, it seems that introducing a third operator did not bring any further enhancement in the tested cases.
In order to maintain the homogeneity of the whole algorithm, dedicated methods for redistributing the energy (a resource that controlled EMAS) were applied after returning from the hybrid step [20].
The discussed setting assumes the implementation of various deterministic rules (three selected ones were tested) for selected multi-dimensional problems. The observations proved that introducing the proposed hybridizations into EMAS along with the proper use of an energy-redistribution mechanism increased the efficiency and efficacy of the search with HEMAS as compared to EMAS.
In particular, HEMAS with two hybridizations was better than the version with one (and the version without any) and not significantly worse than the version with three. Thus, it turned out that the two-hybrid version was sufficiently better than the one without hybridization for the setting that was presented in this paper.
Following the results presented in [26], EMAS and HEMAS additionally achieved better results faster than classic algorithms (e.g., the genetic algorithm). This allows us to claim that a hybridization of EMAS that is based on deterministic rules paves the way for the further development of a new promising metaheuristic algorithm that is built on EMAS (whose efficacy was proven theoretically) [8].
In this paper, we concentrated on common benchmarks that are used in metaheuristic-related research (multi-dimensional mathematical problems [24]). The fact that HEMAS is built on EMAS (cf. [8]) suggests that it may be applied to any similar problem; however, designers of such a system should be aware of the intricacies of each of its building blocks (as HEMAS is a complex algorithm that relies heavily on its hybrid parts). Here is an example: even though GA is a universal optimization technique and any possible problem may be approached by using this algorithm (provided that the proper representation is used and appropriate variation operators are selected), solving certain problems (e.g., discrete ones) may be tricky if one chooses PSO as the hybridization algorithm, as this algorithm is not inherently applicable to such problems (though certain modifications exist that can be used outright; see, e.g., [27]). Moreover, the initialization of a hybrid step will not be straightforward when using an algorithm that relies on a completely different representation (e.g., ant colony optimization); however, one can again turn to detailed strategies that make such attempts possible [28].
In future work, we will also apply machine-learning methods as a decision factor for running certain hybrid optimization algorithms. Currently, the agents apply the mentioned rules; in the future, however, they will use a pre-trained machine-learning model (a neural network, for example) that will serve as a decision factor for starting a hybrid optimization step.