A NOTE ON MONOTONE APPROXIMATIONS OF MINIMUM AND MAXIMUM FUNCTIONS AND MULTI-OBJECTIVE PROBLEMS

Abstract. In paper [12] the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal, which is to accomplish one or more objectives, where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are formulated using particular convergent approximations of minimum and maximum functions, depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example of a capture of two evaders by two pursuers is provided. This note presents a preview of the treatment in [12].


1. Introduction. An approach to pursuit-evasion games which is based on differentiable approximations of minimum and maximum functions was first proposed in [8]. The first aim was to provide a way to formulate closed-form control strategies for the players in multi-player pursuit-evasion games when the players' dynamic models are linear and time invariant. The second aim was to design a single scalar goal function for each agent which is based on its multiple objectives. This approach was later generalized in [10] to multi-player pursuit-evasion games governed by nonlinear dynamics which are affine in control; guarantees that the objectives will be accomplished were formulated for both continuous and discrete time observations based on a Liapunov-like approach. In order to formulate even less restrictive guarantees, for either capture of all or some of the evaders as well as for the evasion of all or some of the evaders, differential inequalities and maximal and minimal solutions of the corresponding comparison systems were employed in [11].
In this note we provide a short overview of results on the monotonicity of the convergent and differentiable approximations of minimum and maximum functions of an arbitrary number of arguments, which can be found in [12]. These functions are not only convergent and differentiable but they also converge monotonically to the minimum or maximum function. We also present an application of these functions to a class of multi-agent multi-objective problems. We assume that each agent has a goal, which is to accomplish multiple objectives. Given that these multiple objectives are formulated using nonnegative scalar objective functions, we show how the approximations of the minimum and maximum can be used to formulate agents' goals. Each agent's goal is formulated using a scalar goal function which is designed using appropriate minimum and maximum approximation and objective functions. The design is based on agents' preferences, such as requiring that all objectives be satisfied, that only one of the objectives be satisfied, or a combination of both. In order to illustrate the design and how to establish sufficient conditions under which the goals of a group of chosen agents will be accomplished, we provide a pursuit-evasion game example with two pursuers and two evaders.
2. Properties of minimum and maximum function approximations. In this section we recall properties of functions approximating minimum and maximum functions of an arbitrary number of arguments [12]. Then these functions are used to establish sufficient conditions for accomplishing multiple objectives, as is shown in [12].
2.1. Monotone approximation of the minimum function. In order to approximate the minimum function we recall the functions [11] defined by

σ(δ, x) = ( x_1^(−δ) + ⋯ + x_N^(−δ) )^(−1/δ),

where s^(1/δ) denotes the δ-th root of s for any s ∈ [0, +∞), δ ∈ ℝ₊ = (0, +∞), x = [x_1, . . . , x_N]ᵀ ∈ ℝ₊ᴺ, and N is a positive integer. In addition we define functions of either δ or x, where one is the argument and the other is treated as a given parameter value. These functions are defined by σ_x(δ) = σ(δ, x) for given x ∈ ℝ₊ᴺ, and σ_δ(x) = σ(δ, x) for given δ ∈ ℝ₊. Let m = min{x_1, . . . , x_N}, and let I be a variable with the integer value of the index of a minimum; that is, x_I = m. Notice that if the minimum value is achieved by more than one argument, we can choose any of the corresponding indices without loss of generality. The following theorem provides some important properties of the minimum approximation functions:

Theorem 2.1. The minimum approximation functions satisfy the following properties when N ≥ 2: σ(δ, x) < m for every δ ∈ ℝ₊; for each fixed x the function σ_x(δ) is strictly increasing; and σ_x(δ) → m as δ → +∞.

Proof. Given in [12].
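As a quick numerical illustration of the monotone convergence from below, the following Python sketch evaluates the approximation (Σᵢ x_i^(−δ))^(−1/δ) for increasing δ (the function name `min_approx` is ours, not from [12]):

```python
def min_approx(delta, x):
    """Lower approximation of min(x) for positive x_i:
    sigma(delta, x) = (sum_i x_i^(-delta))^(-1/delta)."""
    assert delta > 0 and all(xi > 0 for xi in x)
    return sum(xi ** (-delta) for xi in x) ** (-1.0 / delta)

x = [3.0, 1.0, 2.0]
# The approximation stays strictly below min(x) and increases with delta.
values = [min_approx(d, x) for d in (0.5, 1.0, 2.0, 5.0, 10.0)]
assert all(v < min(x) for v in values)
assert all(a < b for a, b in zip(values, values[1:]))
```

For δ = 10 the approximation already agrees with min(x) = 1 to about three decimal places, consistent with the convergence property above.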

2.2. Monotone maximum function approximations. In order to approximate the maximum function from below and above, respectively, we recall the functions [11] defined by

σ_L(δ, x) = N^(−1/δ) ( x_1^δ + ⋯ + x_N^δ )^(1/δ) and σ_U(δ, x) = ( x_1^δ + ⋯ + x_N^δ )^(1/δ),

where again δ ∈ ℝ₊ = (0, +∞), x = [x_1, . . . , x_N]ᵀ ∈ ℝ₊ᴺ, and N is a positive integer. Similar to the case of minimum function approximations, we define additional functions of either δ or x in the following way: σ_{L,x}(δ) = σ_L(δ, x) and σ_{U,x}(δ) = σ_U(δ, x) for given x ∈ ℝ₊ᴺ, and σ_{L,δ}(x) = σ_L(δ, x) and σ_{U,δ}(x) = σ_U(δ, x) for given δ ∈ ℝ₊, where σ_U(δ, x) is known as a p-norm of x, denoted ∥x∥_p, when δ = p ∈ [1, +∞].
In what follows, we will denote the Euclidean norm without the subscript, that is, ∥ ⋅ ∥ ≡ ∥ ⋅ ∥_2. Let M = max{x_1, . . . , x_N} and define I as a variable taking the integer value of the index of a maximum; that is, x_I = M. Again, if there is more than one maximum then we can choose any of the corresponding indices without loss of generality. Now, analogous to the case of approximating the minimum, in the following theorem we recall some known properties of the approximation functions [11] and add an important monotonicity property:

Theorem 2.2. The maximum approximation functions satisfy the following properties when N ≥ 2: σ_L(δ, x) ≤ M < σ_U(δ, x) for every δ ∈ ℝ₊; for each fixed x the function σ_{L,x}(δ) is nondecreasing and σ_{U,x}(δ) is strictly decreasing; and both σ_{L,x}(δ) → M and σ_{U,x}(δ) → M as δ → +∞.

Proof. Given in [12].
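The sandwiching of the maximum between the under-approximation N^(−1/δ)(Σᵢ x_i^δ)^(1/δ) and the over-approximation (Σᵢ x_i^δ)^(1/δ) can be checked numerically with a short sketch (the names `max_lower` and `max_upper` are ours):

```python
def max_lower(delta, x):
    """Under-approximation of max(x): N^(-1/delta) * (sum_i x_i^delta)^(1/delta)."""
    n = len(x)
    return n ** (-1.0 / delta) * sum(xi ** delta for xi in x) ** (1.0 / delta)

def max_upper(delta, x):
    """Over-approximation of max(x): the delta-norm (sum_i x_i^delta)^(1/delta)."""
    return sum(xi ** delta for xi in x) ** (1.0 / delta)

x = [3.0, 1.0, 2.0]
# The two approximations bracket max(x) and the bracket tightens as delta grows.
for d in (1.0, 2.0, 5.0, 10.0):
    assert max_lower(d, x) <= max(x) <= max_upper(d, x)
assert max_lower(1.0, x) < max_lower(10.0, x)   # lower bound increases
assert max_upper(10.0, x) < max_upper(1.0, x)   # upper bound decreases
```

At δ = 1 the bracket is [2, 6] for max(x) = 3; by δ = 10 both bounds are within about 0.3 of the true maximum.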
3. Constructing multi-objective or goal functions. The main motivation for the results presented in this section is the recognized complexity of multi-objective optimization problems. As stated in [7], these problems are usually solved by an appropriate scalarization. Our approach is based on scalarization, and the basic assumption is that each objective is represented by a scalar nonnegative function g_{ij}(⋅) : ℝⁿ → [0, +∞), where the subscript i denotes the i-th objective and the subscript j denotes the j-th agent. The arguments of each of the objective functions may be time as well as state variables (here we assume that the state x is of appropriate dimension and represents a concatenation of all agents' state variables).
It is important to note that often the same objective may be represented in different forms. For example, if we want to keep a safe separation between two agents i and j with their position coordinates represented by state vectors x_i ∈ ℝⁿ and x_j ∈ ℝⁿ, this may be formulated in a simple form as ∥x_i − x_j∥ ≥ r, or by the following modified avoidance functions [9] related to the concept of avoidance control [4,5,6]:

g(x) = ( min{ 0, (∥x_i − x_j∥² − R²) / (∥x_i − x_j∥² − r²) } )²,

where x is a concatenation of all state variables and R > r > 0. The sensing or detection regions of the agents are assumed to be of radius R and the avoidance or unsafe regions (in this case determined by the smallest safe distance between the agents) are given by radius r.
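To see how such an avoidance function behaves, here is a small Python sketch, assuming the form common in the avoidance-control literature with g(x) = (min{0, (d² − R²)/(d² − r²)})² for d = ∥x_i − x_j∥ (the function name, the radii values, and the example positions are ours, not from [9]):

```python
def avoidance(xi, xj, R=2.0, r=0.5):
    """Sketch of a modified avoidance function: zero outside the sensing region
    (distance >= R), positive inside it, unbounded as the distance approaches r."""
    d2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return min(0.0, (d2 - R ** 2) / (d2 - r ** 2)) ** 2

assert avoidance((0.0, 0.0), (3.0, 0.0)) == 0.0   # outside sensing radius: no penalty
assert avoidance((0.0, 0.0), (1.0, 0.0)) > 0.0    # inside sensing radius: penalty active
```

Note how the min with zero switches the penalty off outside the sensing region, keeping the function nonnegative everywhere, as required of the objective functions above.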
Each agent is assumed to have a goal which is to accomplish multiple objectives. In order for agents to achieve their goals, we design a goal function for each agent by using monotone approximation functions and objective functions, as shown in [12].

4. Control strategies based on the approximation functions. In this section, we provide some ideas on constructing control strategies based on approximation functions. One approach, as introduced in [8], is related to the concept of control Liapunov functions [2,1]. First we assume that a nonautonomous dynamic system, that is, a system with a control input, is represented by a set of differential equations

ẋ(t) = f(t, x(t), u(t)), x(t₀) = x₀, t₀ ∈ [0, +∞), (15)

where the state variable vector valued function x(⋅) with x(t) ∈ ℝⁿ is assumed to be an absolutely continuous function [3]. The input u(⋅) : [0, +∞) → U is restricted to a set 𝒰 of admissible functions with the image being a compact set U, U ⊂ ℝᵐ. By admissible we mean that any function u(⋅) ∈ 𝒰 produces a unique absolutely continuous solution x(⋅) from any initial condition x₀ starting at the given initial time t₀. Now let us assume that a goal is mathematically formulated as v(t, x) ≤ ε, where v(⋅) is a differentiable function. Then one way to satisfy this goal is to decrease the values of the function v(⋅) along the trajectories of (15) in time until it satisfies the goal condition at some time t = t_f when v(t_f, x(t_f)) ≤ ε. Since the goal function is differentiable, a design of a control strategy can be based on the following minimization procedure:

û(t, x) ∈ arg min_{w∈U} { ∂v(t, x)/∂t + (∂v(t, x)/∂x) f(t, x, w) }, (16)

where ∂v(t, x)/∂t + (∂v(t, x)/∂x) f(t, x, w) is an expression that is equal to the time derivative of v(t, x(t)) computed along the trajectories x(t) of (15) for the given initial condition x₀ and control value u = w. The variable w is an auxiliary variable that takes values in the compact set U. Henceforth we assume that there exists a control which satisfies (16) and is admissible, that is, u(⋅) ≡ û(⋅, x(⋅)) ∈ 𝒰.
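To make the minimization in (16) concrete, consider a hypothetical special case (ours, not a model from [12]): a single-integrator system ẋ = u with the compact control set U = {u : ∥u∥ ≤ u_max}. The minimized expression reduces to the inner product of the gradient of the goal function with u, so the minimizer steers straight down the gradient at full speed:

```python
import math

def greedy_control(grad_v, umax):
    """For xdot = u and U = {u : ||u|| <= umax}, minimizing (dv/dx).u over U
    gives u = -umax * grad_v / ||grad_v|| (zero when the gradient vanishes)."""
    norm = math.hypot(*grad_v)
    if norm == 0.0:
        return tuple(0.0 for _ in grad_v)
    return tuple(-umax * g / norm for g in grad_v)

u = greedy_control((3.0, 4.0), 2.0)
assert abs(math.hypot(*u) - 2.0) < 1e-9   # control lies on the boundary of U
assert u[0] < 0.0 and u[1] < 0.0          # points against the gradient of v
```

This is only a sketch of the idea behind (16); for general control-affine dynamics the pointwise minimization is still over the compact set U but no longer has this closed form.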

5. A numerical example. To illustrate the methodology for computing control strategies as well as establishing sufficient conditions for accomplishing agents' goals in multi-agent situations, we consider a pursuit-evasion problem. We pose and solve the problem involving two pursuers and two evaders, employing a kinematic model with x_i ∈ ℝ² and u_i ∈ ℝ², with i ∈ {1, 2} denoting the evaders and i ∈ {3, 4} denoting the pursuers. Agent i's position in the plane is described by its horizontal coordinate, which is the first entry of x_i, and its vertical coordinate, which is the second entry of x_i. The control strategies are computed using equation (16) and the agents' goal functions, with the maximal speeds of the players being v̄_1 = 4, v̄_2 = 1, v̄_3 = 2, and v̄_4 = 5 (specific details are provided in [12]). All units are assumed to be normalized and thus not specified. The initial time is set to zero and the initial conditions for the players are given by x_10 = [−10, 10]ᵀ, x_20 = [10, 10]ᵀ, x_30 = [−1, 0]ᵀ, and x_40 = [1, 0]ᵀ. This example is constructed as an illustration of a capture of all evaders since capture is easier to depict than evasion. Thus, the pursuers are chosen to be faster and initially placed such that the slower pursuer (agent 3) is closer to the faster evader (agent 1) and the faster pursuer (agent 4) is closer to the slower evader (agent 2). Trajectories of the evaders are plotted using circles and trajectories of the pursuers are plotted using star symbols. All the simulations are done using MATLAB software.
We observe from Figure 1 that agent 3 immediately starts pursuing agent 1. Since agent 1 is faster than agent 3, after some time agent 4 starts pursuing agent 1 too (as seen in Figure 1). Then, for a while, agent 2 is not being pursued, which eventually leads to an increase in the pursuers' goal function values, resulting in agent 3 changing its course and starting to pursue agent 2 (as shown in Figure 2). Complete trajectories are depicted in Figure 3.
In this example we consider a soft capture, with the radius of capture being r_c = 0.1.

6. Conclusions. An approach to formulating the accomplishment of multiple objectives using approximations of minimum and maximum functions, as well as particular nonnegative objective functions, is presented. This approach provides a differentiable scalar goal function and a corresponding goal condition for each agent, where the goal of each agent is to accomplish a number of objectives. This is particularly convenient in the case of continuous-time dynamic systems since each agent wishes to accomplish its goal, which is formulated using a differentiable scalar function. Therefore problems caused by using nondifferentiable minimum and maximum functions, as well as vectors of multiple objective functions, are bypassed. As an illustration, a pursuit-evasion dynamic game problem is provided.