Linear convergence of a primal-dual algorithm for distributed interval optimization

Abstract: In this paper, we investigate a distributed interval optimization problem (DIOP) whose local functions are interval functions rather than scalar functions. For this class of problems, we present a distributed primal-dual algorithm. A criterion is introduced under which linear convergence to a Pareto solution of the DIOP can be achieved without strong convexity. Finally, a numerical simulation is presented to illustrate the linear convergence of the proposed algorithm.


Introduction
Due to its theoretical significance and wide range of applications in areas such as machine learning, multi-agent system coordination, sensor networks, and smart grids, distributed optimization has received considerable attention from researchers in recent years. Various distributed algorithms for solving distributed optimization problems have been introduced, in which agents collaborate with their neighbors to attain global minimization; see the recent works [1][2][3][4][5][6][7].
Given this context, it is natural to consider the design of efficient algorithms for solving distributed interval optimization problems (DIOPs) over multi-agent networks. DIOPs, nevertheless, remain a subject of ongoing research. This may be due to the difficulty of applying line search algorithms (e.g., Wolfe's or Lemke's algorithms [21][22][23][24]) in distributed settings, and very few papers [25] with related theoretical results have been published. In addition, algorithm design is complicated by the partial order of interval functions.
Furthermore, there is growing interest in the convergence rates of distributed algorithms for distributed optimization with scalar functions. When the local objective functions are strongly convex, the algorithms of [2,26,27] achieve linear convergence rates in the centralized and distributed settings. However, in a number of practical applications the local scalar functions are not strongly convex. Several works [1,28,29,30] have therefore investigated substitutes for the strong convexity conditions that dictate linear convergence rates. For example, [1] analyzed four distinct categories of function conditions and deduced the linear convergence of numerous centralized algorithms. The authors of [28] and [29] demonstrated the linear rates of their distributed algorithms under metric sub-regularity and Polyak–Łojasiewicz conditions, respectively.
In this paper, we investigate the Pareto solutions of a DIOP whose local functions are interval functions rather than scalar functions. The DIOP is given as follows:
$$\min_{s \in \Omega} \ \sum_{i=1}^{n} G_i(s) = \Big[ \sum_{i=1}^{n} L_i(s), \ \sum_{i=1}^{n} R_i(s) \Big],$$
where $G_i(s) = [L_i(s), R_i(s)]$ is the local interval function of agent $i$ and $L_i(s) \leqslant R_i(s)$ holds for every given $s$. Each agent can only access the gradient information of its own interval function $G_i$; the global Pareto solution is obtained by means of neighborhood information exchange. The contributions of this paper are summarized as follows: (a) We investigate the Pareto solutions of a DIOP whose local functions are interval functions. By incorporating convexity and well-defined partial orderings of interval functions, we convert the DIOP [11, 20, 31] into a solvable distributed scalarized optimization problem (DSIOP) with convex global constraints. (b) In this reformulation, the optimal solutions of the DSIOP correspond to the Pareto solutions of the DIOP. With this relationship, we propose a distributed primal-dual algorithm to find a Pareto solution of the DIOP. (c) We discuss a crucial criterion on Pareto solutions of a DIOP that weakens the strict or strong convexity required for linear convergence. Given that this paper investigates DIOPs, the supplied criterion differs from those delineated in [1,28,29]. In addition, the criterion is essential for evaluating the convergence of distributed algorithms for DIOPs.
The rest of the paper is organized as follows. The preliminaries are given in Section 2. In Section 3, the DIOP is analyzed and a distributed primal-dual algorithm is given to find a Pareto solution of the DIOP. The main convergence results are established in Section 4, and a numerical example is given in Section 5. Finally, Section 6 concludes the paper.
Notations. Denote by $\mathbb{R}$ the set of real numbers, by $I_n \in \mathbb{R}^{n \times n}$ the identity matrix, and by $\mathbf{1}_n = [1, 1, \ldots, 1]^\top \in \mathbb{R}^n$ the all-ones vector. Denote by $\langle \cdot, \cdot \rangle$ the inner product and by $\| \cdot \|$ the Euclidean norm on $\mathbb{R}^n$.

Preliminaries
In this section, we present an introduction to convex analysis for scalar functions [32], graph theory, and interval optimization [33].

Graph theory
Define $\mathcal{N} = \{1, 2, \ldots, n\}$ as the agent set and $\mathcal{E} \subset \mathcal{N} \times \mathcal{N}$ as the set of edges between agents. The communication among the $n$ agents is described by an undirected graph $\mathcal{G} = (\mathcal{N}, \mathcal{E})$. If $(i, j) \in \mathcal{E}$, then agent $i$ can communicate with agent $j$. Therefore, each agent $i \in \mathcal{N}$ can communicate with the agents in its neighborhood $N_i = \{ j \mid (i, j) \in \mathcal{E} \} \cup \{ i \}$.
Denote by $A \in \mathbb{R}^{n \times n}$ the communication matrix between agents, whose elements satisfy
$$a_{ij} = \begin{cases} a_{ii}, & \text{if } i = j, \\ a_{ij}, & \text{if } i \neq j \text{ and } (i, j) \in \mathcal{E}, \\ 0, & \text{otherwise.} \end{cases} \tag{2.1}$$
Denote by $d_i$ the degree of agent $i$, i.e., $d_i = \sum_{j=1}^{n} a_{ij}$. Further, denote by $D$ the $n \times n$ diagonal degree matrix $D = \mathrm{diag}\big(\sum_{j=1}^{n} a_{1j}, \ldots, \sum_{j=1}^{n} a_{nj}\big)$. Then the associated Laplacian matrix $P \in \mathbb{R}^{n \times n}$ is $P := D - A$.
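As a concrete illustration, the following Python sketch builds $A$, $D$, and $P$ for a small hypothetical graph (a 4-agent ring with unit weights, chosen only for the example) and verifies the standard Laplacian properties used later.

```python
import numpy as np

# Hypothetical 4-agent undirected ring with unit weights a_ij = 1 on edges.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0              # undirected: symmetric weights

D = np.diag(A.sum(axis=1))               # degree matrix D = diag(sum_j a_ij)
P = D - A                                # Laplacian P := D - A

# For a connected graph, P is positive semidefinite with a simple zero
# eigenvalue whose eigenvector is 1_n (the consensus direction).
eigvals = np.linalg.eigvalsh(P)
assert abs(eigvals[0]) < 1e-10 and eigvals[1] > 0
print("sigma (largest eigenvalue of P):", eigvals[-1])
```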
The following assumption on the communication topology $\mathcal{G} = (\mathcal{N}, \mathcal{E})$ underlies the analysis over the network.
Assumption 1. The undirected graph $\mathcal{G}$ is connected.
Assumption 1 is extensively employed, e.g., in [28]; it ensures that the agents' vectors can reach consensus over the network.

Convex analysis
Prior to proceeding with the discussion of interval functions, we define convexity and the Lipschitz continuity of scalar functions.
The following lemma is crucial for the analysis of convergence in distributed optimization problems involving scalar functions and interval functions.
Lemma 1. [32, Lemma 11, Chapter 2.2] Let $\{v_k\}_{k \geqslant 1}$ and $\{w_k\}_{k \geqslant 1}$ be two nonnegative scalar sequences, and let $\{h_k\}_{k \geqslant 1}$ be a scalar sequence that is uniformly bounded from below. If $\sum_{k=1}^{\infty} w_k < \infty$ and $h_{k+1} \leqslant h_k - v_k + w_k$ holds for all $k \geqslant 1$, then $\{h_k\}_{k \geqslant 1}$ converges and $\sum_{k=1}^{\infty} v_k < \infty$.

Interval optimization problems
Let $G : \mathbb{R}^p \rightrightarrows \mathbb{R}$ be an interval-valued map with $G(s) = [L(s), R(s)]$. We consider the following interval optimization problem (IOP):
$$(\mathrm{IOP}) \quad \min_{s \in \Omega} \ G(s).$$
The Pareto optimal solution to an IOP is defined as follows.
Definition 2. [34] A point $s^* \in \Omega$ is said to be a Pareto optimal solution to the IOP iff, whenever $L(s) \leqslant L(s^*)$ and $R(s) \leqslant R(s^*)$ hold for some $s \in \Omega$, it follows that $L(s^*) \leqslant L(s)$ and $R(s^*) \leqslant R(s)$. An example is presented below; the problem it describes has no optimal solution, yet it admits Pareto optimal solutions.
Example 1. The IOP illustrated in Figure 1 does not have an optimal solution. However, every point of $[s_1, s_2]$ is a Pareto optimal solution to the given problem.
According to Definition 2, the points of $[s_1, s_2]$ are Pareto optimal solutions to this problem. To investigate the Pareto solutions of an IOP, consider the IOP in conjunction with its scalarization (SIOP):
$$(\mathrm{SIOP}) \quad \min_{s \in \Omega} \ \lambda L(s) + (1 - \lambda) R(s),$$
where $\lambda \in [0, 1]$. The following lemma from [34] relates Pareto solutions of IOPs to solutions of SIOPs; furthermore, it remains valid in distributed settings.
Lemma 2. [34] Assume that $G$ is compact-valued and convex with respect to $s$. (a) If there exists a real number $\lambda \in (0, 1)$ such that $s^* \in \Omega$ is a solution to the SIOP, then $s^* \in \Omega$ is a Pareto optimal solution of the IOP. (b) If a point $s^* \in \Omega$ is a Pareto optimal solution of the IOP, then there exists a real number $\lambda \in [0, 1]$ such that $s^* \in \Omega$ is an optimal solution of the SIOP.
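To make the scalarization concrete, the following minimal Python sketch minimizes the SIOP objective for a few values of $\lambda$ on a toy one-dimensional interval objective. The endpoint functions $L$ and $R$ are hypothetical, chosen so that $R(s) \geqslant L(s)$ for all $s$; each minimizer is a Pareto candidate by Lemma 2(a).

```python
import numpy as np

# Hypothetical 1-D interval objective G(s) = [L(s), R(s)].
# R(s) - L(s) = (s - 3)^2 >= 0, so R(s) >= L(s) everywhere.
L = lambda s: (s - 1.0) ** 2
R = lambda s: 2.0 * (s - 2.0) ** 2 + 2.0

s_grid = np.linspace(-1.0, 4.0, 50001)
for lam in (0.25, 0.5, 0.75):
    # SIOP objective: lambda * L(s) + (1 - lambda) * R(s)
    s_star = s_grid[np.argmin(lam * L(s_grid) + (1.0 - lam) * R(s_grid))]
    print(f"lambda = {lam:.2f}: Pareto candidate s* = {s_star:.3f}")
```

Sweeping $\lambda$ over $(0, 1)$ traces out a segment of Pareto optimal points between the minimizers of $L$ and $R$, mirroring Example 1.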

Optimization model and algorithm
In this section, we consider a DIOP and introduce its distributed primal-dual algorithm.

Optimization model
Consider the following DIOP:
$$(3.1) \quad \min_{s \in \Omega} \ \sum_{i=1}^{n} G_i(s), \qquad G_i(s) = [L_i(s), R_i(s)].$$
For any given $s_i$, $L_i(s_i) \leqslant R_i(s_i)$ holds, and each agent $i$ knows only its local interval function $G_i$. Define $L(s)$ and $R(s)$ as
$$(3.2) \quad L(s) = \sum_{i=1}^{n} L_i(s), \qquad R(s) = \sum_{i=1}^{n} R_i(s).$$
With (3.2), the definition of Pareto solutions is extended to the DIOP.
Definition 3. [34] $s^* \in \Omega$ is a Pareto solution of the DIOP iff, whenever $L(s) \leqslant L(s^*)$ and $R(s) \leqslant R(s^*)$ hold for some $s \in \Omega$, it follows that $L(s^*) \leqslant L(s)$ and $R(s^*) \leqslant R(s)$.
The existence of Pareto solutions for the DIOP is guaranteed by Assumption 2, which is consistent with the centralized counterpart [35].
Assumption 2. (a) $L_i(s)$ and $R_i(s)$ are strongly convex, continuous functions. (b) Problem (3.1) has at least one Pareto solution. (c) The gradients of $L_i(s)$ and $R_i(s)$ are Lipschitz continuous.
Lemma 2 also establishes a theoretical framework for Pareto solutions of the DIOP. Consider the following scalarization of the DIOP. Define $f_i$ and $F$ as
$$f_i(s_i, z_i) = z_i L_i(s_i) + (1 - z_i) R_i(s_i), \qquad F(s, z) = \sum_{i=1}^{n} f_i(s_i, z_i),$$
where $z = [z_1, z_2, \ldots, z_n]^\top \in (0, 1)^n$ and $s = [s_1^\top, s_2^\top, \ldots, s_n^\top]^\top \in \mathbb{R}^{np}$. Let $z = z^0 \mathbf{1}_n$ with $z^0 \in (0, 1)$. The DIOP (3.1) can then be rewritten as the following distributed scalarized problem:
$$(3.5) \quad (\mathrm{DSIOP}) \quad \min_{s, \, z} \ F(s, z),$$
where each agent $i$ possesses only the following information: $\nabla f_i$, $s_i$, $z_i \in (0, 1)$, and $s_j$ for $j \in N_i$. Problem (3.5) reduces to a distributed optimization problem with scalar objectives [28,36,37] when $z$ is a common vector known to each agent $i$. Additionally, under Assumption 2, the following lemma holds.
Lemma 3. [34,35] (a) $f_i(s_i, z_i)$ is linear with respect to $z_i$ and convex with respect to $s_i$. (b) There are Lipschitz constants $k_{i1}$ and $K_1$ such that the partial gradient $\nabla_s f_i(s_i, z_i)$ is Lipschitz continuous with respect to $s$ with constant $k_{i1}$ and $\nabla_s F(s, z)$ is Lipschitz continuous with respect to $s$ with constant $K_1$. (c) There are Lipschitz constants $k_{i2}$ and $K_2$ such that $f_i(s_i, z_i)$ is Lipschitz continuous with respect to $z$ with constant $k_{i2}$ and $F(s, z)$ is Lipschitz continuous with respect to $z$ with constant $K_2$. (d) There are Lipschitz constants $k_{i3}$ and $K_3$ such that the partial gradient $\nabla_s f_i(s_i, z_i)$ is Lipschitz continuous with respect to $z$ with constant $k_{i3}$ and $\nabla_s F(s, z)$ is Lipschitz continuous with respect to $z$ with constant $K_3$.
It should be noted that although $f_i(s_i, z_i)$ is convex with respect to $s_i$ and with respect to $z_i$ separately, $f_i(s_i, z_i)$ is not jointly convex in $(s_i, z_i)$. Owing to this non-convexity, the standard criteria for linear convergence rates of algorithms are no longer applicable to the DIOP.
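The following minimal Python sketch evaluates the scalarized objective $F(s, z)$ and the partial gradient $\nabla_s f_i$ for hypothetical quadratic endpoint functions; the problem data ($c_i$, $\rho_i$) are invented for illustration, and the weighted form of $f_i$ follows the scalarization above.

```python
import numpy as np

n, p = 4, 2
rng = np.random.default_rng(0)
c = rng.normal(size=(n, p))          # hypothetical centers of L_i
rho = rng.normal(size=(n, p))        # hypothetical centers of the extra term

def L_i(i, s_i):
    return 0.5 * np.sum((s_i - c[i]) ** 2)

def R_i(i, s_i):
    # Adding a nonnegative quadratic keeps R_i(s) >= L_i(s) everywhere.
    return L_i(i, s_i) + 0.5 * np.sum((s_i - rho[i]) ** 2)

def f_i(i, s_i, z_i):
    # Scalarization: f_i = z_i * L_i + (1 - z_i) * R_i.
    return z_i * L_i(i, s_i) + (1.0 - z_i) * R_i(i, s_i)

def grad_s_f_i(i, s_i, z_i):
    # Partial gradient in s_i; note it is linear in z_i (cf. Lemma 3(a)).
    return (s_i - c[i]) + (1.0 - z_i) * (s_i - rho[i])

def F(s, z):
    return sum(f_i(i, s[i], z[i]) for i in range(n))

s = rng.normal(size=(n, p))
z = rng.uniform(0.1, 0.9, size=n)
print("F(s, z) =", F(s, z))
```

The bilinear coupling $z_i L_i(s_i)$ visible in `f_i` is exactly what destroys joint convexity in $(s_i, z_i)$.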

Algorithm
During the distributed optimization process, $s_1, \ldots, s_n, z_1, \ldots, z_n$ are not necessarily equal at all times. Therefore, it is natural to treat these variables separately and impose the soft constraints $s_1 = \cdots = s_n$ and $z_1 = \cdots = z_n$. By using the Laplacian matrix $P$, these constraints are equivalent to $Ps = 0$ and $Pz = 0$. Define
$$\bar{z}^0 = \frac{1}{n} \sum_{i=1}^{n} z_i^0, \qquad \bar{\mathbf{z}}^0 = \bar{z}^0 \mathbf{1}_n,$$
where $z_i^0 \in (0, 1)$ is the initial value of agent $i$. For the vector $\bar{\mathbf{z}}^0$, denote by $S^*$ the optimal solution set of problem (3.6) and by $T^*$ the saddle point set of problem (3.7), respectively. According to Assumption 2, for a properly chosen $z^0$, there exists $t^*$ such that $(s^*, t^*) \in S^* \times T^*$. Moreover, $(s^*, t^*) \in S^* \times T^*$ satisfies Lemma 4 below, which is also a basis for the convergence analysis. With Lemma 4, we introduce the distributed primal-dual algorithm (3.9), in which the step size $h$ satisfies $0 < h < \frac{2}{L + 4\sigma}$ and $\sigma$ is the largest eigenvalue of the Laplacian matrix $P$. At the $k$-th iteration, each agent $i \in \mathcal{N} = \{1, 2, \ldots, n\}$ obtains only a partial gradient of its local objective, and it cooperates with its neighbors to reach a Pareto solution of problem (3.1).
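The explicit updates (3.9a)–(3.9c) appear in the original paper; the Python sketch below shows one plausible saddle-point iteration consistent with the surrounding description (gradient descent on $s$, weight averaging for $z$, gradient ascent on the multiplier $t$). The specific update forms here are assumptions for illustration, not the authors' exact equations.

```python
import numpy as np

def primal_dual_step(s, z, t, h, P, grad_s_F):
    """One hypothetical iteration of a primal-dual scheme for the DSIOP.

    s : (n, p) primal states; z : (n,) scalarization weights;
    t : (n, p) dual variables; h : step size, 0 < h < 2 / (L + 4*sigma);
    P : (n, n) Laplacian; grad_s_F : callable returning the (n, p) array
    of partial gradients grad_s f_i(s_i, z_i).
    """
    # Primal descent: local partial gradient plus consensus and dual feedback.
    s_next = s - h * (grad_s_F(s, z) + P @ s + P @ t)
    # Weight averaging: drives z toward the common value z0_bar (cf. Lemma 5).
    z_next = z - h * (P @ z)
    # Dual ascent on the consensus constraint P s = 0.
    t_next = t + h * (P @ s_next)
    return s_next, z_next, t_next
```

Because $P \mathbf{1}_n = 0$, the $z$-update preserves the average of the agents' weights while damping all disagreement directions, which is the behavior the constraints below require.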
The constraint $P \lim_{k \to \infty} z(k) = 0$ with $z_i \in (0, 1)$ in (3.6) is satisfied through (3.9b) and the initialization $z_i(0) \in (0, 1)$, while the constraint $P \lim_{k \to \infty} s(k) = 0$ and the minimization of $F(s, z)$ are satisfied through (3.9a) and (3.9c). Let $W^*$ denote the primal-dual solution set of problem (3.7). Algorithm (3.9) can be rewritten in a compact form in terms of $\{w, z\}$. We have the following basic result, whose proof is given in the Appendix.
Theorem 1. Under Assumptions 1 and 2, $\{s^k, t^k\}$ converges to the Pareto solution set $W^*$.
Lemma 5. Under Assumption 1, $z^k$, whose elements belong to $(0, 1)$, converges to $\bar{\mathbf{z}}^0$ at a linear rate $\gamma_1 \in (0, 1)$.
Lemma 6 is additionally presented to bound the Lyapunov function $V(w, z)$ from below and above.
Lemma 6. Under Assumptions 1 and 2, $V(w, z)$ is bounded below and above, with bounds involving the Lipschitz constants $K_1$ and $K_2$ given in Lemma 3 and the largest eigenvalue $\sigma$ of the Laplacian matrix $P$.
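Lemma 5 is easy to visualize for the consensus-style weight update assumed in the sketch above: the error $\|z^k - \bar{\mathbf{z}}^0\|$ contracts geometrically with factor $\max_{\lambda \neq 0} |1 - h\lambda|$ over the nonzero Laplacian eigenvalues. The following Python sketch (ring topology and step size are hypothetical) measures this factor empirically.

```python
import numpy as np

# Ring Laplacian for n = 6 hypothetical agents.
n, h = 6, 0.05
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
P = np.diag(A.sum(1)) - A

z = np.random.default_rng(3).uniform(0.1, 0.9, size=n)
z_bar = z.mean() * np.ones(n)                 # limit point z0_bar * 1_n

errs = []
for k in range(50):
    errs.append(np.linalg.norm(z - z_bar))
    z = z - h * (P @ z)                        # assumed weight update

ratios = [errs[k + 1] / errs[k] for k in range(49)]
# Approaches max(|1 - h*lambda|) over nonzero eigenvalues (0.95 here).
print("empirical contraction factor:", max(ratios))
```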
The asymptotic convergence of (3.9) is established by Theorem 1, which is consistent with the results of [28] for distributed optimization. It should be noted that the inclusion of the partial gradient term $\nabla_s F(s, z)$ renders the contraction mapping principle inapplicable. In contrast to numerous distributed algorithms whose proofs rely on the contraction mapping principle [26,28,37,39], this work employs the martingale convergence theorem (Lemma 1) in the proof of Theorem 1.

Main results
In this section, we present our main results. A criterion without strong convexity is first introduced for the DIOP, under which algorithm (3.9) achieves linear convergence. The criterion is as follows.
Criterion. The continuously differentiable function $\mathcal{L} > 0$ has restricted quadratic gradient growth; that is, there exists a constant $\kappa_{\mathcal{L}} > 0$ such that, for any $w$ with $w^* = P_{W^*}(w)$,
$$(4.1) \quad \langle \nabla \mathcal{L}(w) - \nabla \mathcal{L}(w^*), \ w - w^* \rangle \geqslant \kappa_{\mathcal{L}} \| w - w^* \|^2,$$
where $\mathcal{L}$ is the augmented Lagrangian function defined in (3.8) and $P_{W^*}(\cdot)$ denotes the Euclidean projection onto $W^*$.
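Condition (4.1) can be probed numerically on a toy problem. In the sketch below, $\mathcal{L}(w) = \frac{1}{2}\|Bw\|^2$ with a singular matrix $B$, so the solution set is an affine subspace and $\mathcal{L}$ is not strongly convex, yet the growth inequality holds with $\kappa_{\mathcal{L}} = 1$. The test function and projection are hypothetical stand-ins, not the paper's augmented Lagrangian.

```python
import numpy as np

# Toy function: L(w) = 0.5 * ||B w||^2 with singular B, so the solution
# set W* = null(B) is an affine subspace and L is NOT strongly convex.
rng = np.random.default_rng(1)
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])        # rank 2 in R^3: W* is the w3-axis

grad = lambda w: B.T @ (B @ w)

def project_W_star(w):
    # Euclidean projection onto null(B): zero out the first two coordinates.
    return np.array([0.0, 0.0, w[2]])

kappas = []
for _ in range(1000):
    w = rng.normal(size=3)
    w_star = project_W_star(w)
    gap = w - w_star
    num = np.dot(grad(w) - grad(w_star), gap)   # grad(w_star) = 0 here
    kappas.append(num / np.dot(gap, gap))

print("empirical kappa_L lower bound:", min(kappas))  # = 1.0 for this B
```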
The criterion given in this paper differs from the quadratic convex condition given in [1] and the metric sub-regularity condition discussed in [28] for distributed optimization problems with scalar functions, as it is stated for DIOPs. On the other hand, regarding the dynamics (3.9), we show that (4.1) is sufficient for linear convergence.
Theorem 2. Under Assumptions 1 and 2 and criterion (4.1), $\{s^k, t^k\}$ converges linearly to the optimal set $W^*$.
As shown in Theorem 2, (4.1) plays an important role in achieving linear convergence even in the absence of strong convexity of $f_i(s_i, z_i)$. In this paper, we extend the quadratic convex condition given in [1] to (4.1) for interval functions. Criterion (4.1) also describes another linear growth condition on gradients for distributed optimization problems.

Simulation
In this section, we demonstrate the proposed algorithm on a simulation example whose local interval objectives are parameterized by $\upsilon_{1i}, \upsilon_{2i} \in \mathbb{R}$ and $\rho_i \in \mathbb{R}^p$. The problem is motivated by both a centralized IOP [35] and distributed optimization [40]. The communication topology between agents is depicted in Figure 2.
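Since the example's explicit objective did not survive extraction, the following Python sketch reproduces its spirit under stated assumptions: quadratic endpoint functions $L_i(s) = \frac{1}{2}\|s - \rho_i\|^2 + \upsilon_{1i}$ and $R_i(s) = L_i(s) + \upsilon_{2i}$ with $\upsilon_{2i} \geqslant 0$, a hypothetical ring topology standing in for Figure 2, and the plausible primal-dual updates sketched in Section 3.

```python
import numpy as np

# Hypothetical data: L_i(s) = 0.5*||s - rho_i||^2 + v1_i, R_i = L_i + v2_i.
# The constant shifts v1_i, v2_i do not affect gradients, so the partial
# gradient below is independent of z.
n, p, h, iters = 6, 2, 0.05, 2000            # h < 2/(L + 4*sigma) = 2/17
rng = np.random.default_rng(2)
rho = rng.normal(size=(n, p))

A = np.zeros((n, n))
for i in range(n):                            # ring topology (stand-in)
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
P = np.diag(A.sum(1)) - A                     # Laplacian, sigma = 4 here

s = rng.normal(size=(n, p))
z = rng.uniform(0.1, 0.9, size=n)
t = np.zeros((n, p))

def grad_s_F(s, z):
    # grad of f_i = z_i * L_i + (1 - z_i) * R_i; here grad L_i = grad R_i.
    return s - rho

for k in range(iters):
    s = s - h * (grad_s_F(s, z) + P @ s + P @ t)
    z = z - h * (P @ z)
    t = t + h * (P @ s)

print("consensus residual ||P s|| =", np.linalg.norm(P @ s))
print("distance to mean(rho)      =", np.linalg.norm(s - rho.mean(0)))
```

For this toy data the agents' states agree and approach the average of the $\rho_i$, and plotting the errors per iteration shows the straight line on a log scale that characterizes linear convergence.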

Conclusions
In this paper, we have investigated a DIOP in which the local functions are interval functions. Focusing on distributed interval optimization, we have introduced a distributed primal-dual algorithm and proposed a criterion that yields linear convergence to a Pareto solution of the DIOP without strong convexity. Finally, a numerical simulation has been presented to demonstrate the linear convergence of the proposed algorithm. Given that the existing research on DIOPs primarily focuses on interval objective functions, the investigation of distributed problems involving interval constraints is a natural direction for future work.

Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

Figure 1. L(x) and R(x) for vector x.


Figure 5. $s_i^k$ for agent $i$ of the centralized primal-dual algorithm.