OPTIMAL MATCHING PROBLEMS WITH COSTS GIVEN BY FINSLER DISTANCES

In this paper we deal with an optimal matching problem: we want to transport two commodities (modeled by two measures that encode the spatial distribution of each commodity) to a given location, where they will match, minimizing the total transport cost, which in our case is given by the sum of two different Finsler distances, one for each of the two transported measures. We develop a method to approximate the matching measure and the pair of Kantorovich potentials associated with this problem by taking the limit as p → ∞ in a variational system of p−Laplacian type.

1. Introduction. In this paper we continue the study of the optimal matching problem that we began in [16]. An optimal matching problem (see [5], [6]) consists in transporting two commodities optimally to a prescribed location in such a way that they match there. The optimality criterion consists in minimizing the total cost of the operation, measured in terms of the two Finsler distances along which the commodities are transported. We deal with two general Finsler distances that are not necessarily symmetric; therefore the problem presents some extra difficulties that are absent when the cost is given by the sum of two Euclidean distances.
By improving the tools developed in [16] and [17] we approach the problem by taking the limit as p → ∞ in a system of PDEs of p−Laplacian type, which allows us to give an approximation method to obtain a matching measure (which encodes the location where the matching takes place) and the Kantorovich potentials for the involved transports. This procedure for approximating mass transport problems (taking the limit as p → ∞ in a p−Laplacian equation) was first introduced by Evans and Gangbo in [11] and has proved quite fruitful; see [1], [3], [14], [15], [16], [17]. We remark that this limit procedure requires some care, since here the involved PDE system is nontrivially coupled and therefore the estimates for one component are related to those for the other.
Optimal matching problems for uniformly convex costs were analyzed in [4], [5], [6] and have implications in economic theory (hedonic markets and equilibria); see also [7], [8], [9] and the references therein. For the case in which both costs are given by the Euclidean distance see [16].
1.1. Optimal transport problems. Optimal matching problems are closely related to optimal mass transport problems. For notations, concepts and results from the Monge-Kantorovich Mass Transport Theory we refer to [1], [10], [18] and [19]. Below, for the reader's convenience, we just briefly introduce the usual terminology of optimal mass transport theory that we will use in the rest of the paper.
Let us introduce the cost functions we will handle. Given a Finsler structure F on Ω, we define the cost function c_F by

    c_F(x, y) := inf_{σ ∈ Γ^Ω_{x,y}} ∫_0^1 F(σ(t), σ'(t)) dt,    (3)

where, for x, y ∈ Ω, the set Γ^Ω_{x,y} is given by Γ^Ω_{x,y} := {σ ∈ C^1([0, 1]; Ω) : σ(0) = x, σ(1) = y}. We have that c_F is a Finsler distance. We emphasize that c_F is not necessarily symmetric (i.e., c_F(x, y) ≠ c_F(y, x) may happen) because F is merely positively homogeneous. This fact creates new difficulties in the optimal mass transport problem compared with the case in which the cost is given by a norm (which is symmetric).
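As a concrete illustration of the asymmetry (an example added here for orientation, not taken from the text), consider a one-dimensional, x-independent Finsler structure; since F(x, ·) is convex and independent of x, Jensen's inequality shows that the straight segment is an optimal path in (3).

```latex
% Illustrative example: take N = 1 and the x-independent Finsler structure
%   F(x,\xi) = a\,\xi^- + b\,\xi^+, \qquad 0 < a < b,
% with \xi^+ = \max\{\xi,0\}, \ \xi^- = \max\{-\xi,0\}.
% The induced distance from (3) is
\[
  c_F(x,y) =
  \begin{cases}
    b\,(y-x), & y \ge x,\\[2pt]
    a\,(x-y), & y < x,
  \end{cases}
\]
% so c_F(x,y) \neq c_F(y,x) whenever x \neq y: moving to the right is
% strictly more expensive than moving to the left.
```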
1.3. The optimal matching problem. We fix two non-negative compactly supported functions f+, f− ∈ L^∞(R^N), with supports X+, X−, respectively, satisfying the mass balance condition

    ∫_{X+} f+ dx = ∫_{X−} f− dx =: M_0.

We also consider a compact set D (the target set). Then we take a large bounded domain Ω that contains all the relevant sets: the supports X+, X− of f+ and f−, and the target set D. For simplicity we will assume that Ω is a convex C^2 bounded open set. We also assume that the resulting configuration verifies a suitable compatibility condition. Now, we are given two continuous Finsler structures F and G and, associated with them, two Finsler distances, c_F, c_G, given by (3). Let us consider the set of admissible pairs of transport plans

    A(f+, f−) := {(γ+, γ−) : γ± non-negative Radon measures on Ω × Ω, π_1#γ+ = π_1#γ− supported in D, π_2#γ+ = f+, π_2#γ− = f−}.

The optimal matching problem is the minimization problem

    min_{(γ+,γ−) ∈ A(f+,f−)}  ∫_{Ω×Ω} c_F(y, x) dγ+(x, y) + ∫_{Ω×Ω} c_G(y, x) dγ−(x, y).    (4)

If (γ*+, γ*−) is a minimizer of the optimal matching problem (4), we shall call the measure µ* = π_1#γ*+ = π_1#γ*− an optimal matching measure. In terms of optimal transport costs, the problem can be rewritten as

    min_{µ ∈ M(D, M_0)}  W_cF(f+, µ) + W_cG(f−, µ),    (5)

where M(D, M_0) denotes the set of non-negative Radon measures supported in D with total mass M_0, and W_c(ν_1, ν_2) is the optimal transport cost between ν_1 and ν_2 for the cost c.
Note that on the right-hand side of (5) we are considering all possible measures supported in D with total mass M 0 and then we minimize the total transport cost. This is probably the most natural way of looking at the optimal matching problem and, as shown above, it is equivalent to our previous formulation.
The following result shows that there exist minimizing pairs of measures (γ*+, γ*−); consequently, we have existence of optimal matching measures.

Theorem 1.1. The optimal matching problem (4) has a solution, that is, there exists a minimizing pair (γ*+, γ*−) and hence an optimal matching measure.

Proof. Take in (5) a minimizing sequence µ_n ∈ M(D, M_0). By the weak compactness of M(D, M_0) there exists a subsequence, still denoted by µ_n, that converges weakly in the sense of measures to some µ_∞ ∈ M(D, M_0). Hence, by the weak lower semicontinuity of the functionals µ ↦ W_cF(f+, µ) and µ ↦ W_cG(f−, µ), we have

    W_cF(f+, µ_∞) + W_cG(f−, µ_∞) ≤ liminf_{n→∞} ( W_cF(f+, µ_n) + W_cG(f−, µ_n) ),

and thanks to (2) we conclude.
As we have mentioned, our main goal here is to find a pair of Kantorovich potentials and a matching measure by taking the limit as p → ∞ in a system of equations of p−Laplacian type. Let us briefly describe this. Consider the variational problem

    min { (1/p) ∫_Ω F*(x, Du)^p dx + (1/p) ∫_Ω G*(x, Dv)^p dx − ∫_Ω u f+ dx − ∫_Ω v f− dx : (u, v) ∈ W^{1,p}(Ω) × W^{1,p}(Ω), u + v ≤ 0 on D },    (6)

where F* and G* are the dual Finsler structures associated with F and G. Under adequate differentiability conditions on F and G, given a minimizer (u_p, v_p) of (6), there exists a positive Radon measure h_p supported in D such that

    −div( F*(x, Du_p)^{p−1} (∂F*/∂ξ)(x, Du_p) ) = f+ − h_p   in Ω,
    −div( G*(x, Dv_p)^{p−1} (∂G*/∂ξ)(x, Dv_p) ) = f− − h_p   in Ω,

together with Neumann boundary conditions on ∂Ω. Then, our main result reads as follows: along a subsequence p_i → ∞, the minimizers converge uniformly to a pair of functions (v_∞, w_∞), and the measures h_{p_i} converge weakly to a positive Radon measure h_∞ supported in D. These limit functions and the limit measure provide a solution to the optimal matching problem in the sense that they satisfy: h_∞ is an optimal matching measure, v_∞ is a Kantorovich potential for the transport of f+ to h_∞, and w_∞ is a Kantorovich potential for the transport of f− to h_∞.

Remark 1.3. In the case that both costs are given by the Euclidean distance we want to point out the following. First, the corresponding Monge transport problems have a solution (see [16]). In addition, in [16] we also showed that there exist optimal matching measures supported on the boundary of the target set. In the present general setting of costs given by Finsler structures this is not always true, as the following simple example shows (for simplicity we consider discontinuous Finsler structures, but the same example can be adapted easily to provide a continuous one). Let Ω = (−1, 6), f+ = χ_[4,5], f− = χ_[0,1] and D = [2, 3]. For 0 < ϵ < 1, we consider discontinuous Finsler structures F_ϵ and G_ϵ that agree with |ξ| except that F_ϵ(x, ξ) = |ξ|/ϵ² on the strip x ∈ (2, 2 + ϵ) and G_ϵ(x, ξ) = |ξ|/ϵ² on the strip x ∈ (3 − ϵ, 3). Hence, for x ∈ [4, 5], if y ∈ (2, 2 + ϵ) then c_{F_ϵ}(x, y) ≥ (2 + ϵ − y)/ϵ², which is of order 1/ϵ or larger when y ≤ 2 + ϵ/2. It is then clear that, when ϵ is small enough, any optimal matching measure will be supported in the set [2 + ϵ/2, 3 − ϵ/2], since otherwise we have to pay something of order 1/ϵ per unit of mass for one of the two transports, while in the set (2 + ϵ, 3 − ϵ) we have to pay at most 3 for every unit of mass transported there.
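For the reader's orientation, we recall what being a Kantorovich potential means for a (possibly asymmetric) Finsler distance. This is standard duality, sketched here in generic notation; the paper's sign conventions may differ.

```latex
% Kantorovich duality for the (possibly asymmetric) distance c_F reads
\[
  \mathcal{W}_{c_F}(f^+,\mu)
  = \max\Big\{ \int_\Omega u \, d\big(\mu - f^+\mathcal{L}^N\big)
     \;:\; u(y)-u(x) \le c_F(x,y) \ \ \forall\, x,y \in \Omega \Big\},
\]
% and a maximizer u is called a Kantorovich potential for the transport of
% f^+ to \mu. By Lemma 3.1, the admissible class coincides with
% K^*_F(\Omega) = \{u : \operatorname{ess\,sup}_x F^*(x,Du(x)) \le 1\}.
```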
Finally, let us remark that in the particular case in which c_F and c_G are geodesic distances on a geodesically complete, connected Riemannian manifold (see [17] for several examples), by the results in [13] it is known that the corresponding Monge transport problems have solutions, that is, there exist Borel functions T*±: Ω → Ω that are optimal transport maps for the two transports involved.

The contents of the paper are as follows: in Section 2 we collect some properties of Finsler functions that will be used in the core of the paper; Section 3 contains our main results: we deal with the p−Laplacian system and show that we can pass to the limit as p → ∞, obtaining the optimal matching results; in addition, we find transport densities for the transport problems involved.

2. Preliminaries on Finsler functions.
In this section we collect some properties of Finsler functions in R^N that will be used in the sequel. Recall from the introduction that a Finsler function Φ is a non-negative continuous convex function, positively homogeneous of degree 1, that vanishes only at 0. Observe that Φ satisfies

    α|ξ| ≤ Φ(ξ) ≤ β|ξ|   for all ξ ∈ R^N,

for some positive constants α, β.
Note that Finsler functions are extensions of norms. In fact, any norm in R^N is a Finsler function, and any symmetric Finsler function is a norm. Moreover, for a positively 1-homogeneous function, convexity is equivalent to the triangle inequality. Let

    B_Φ := {ξ ∈ R^N : Φ(ξ) ≤ 1}.

This set B_Φ is a closed, bounded and convex set with 0 ∈ int(B_Φ); it is symmetric with respect to the origin if Φ is a norm. Conversely, for any closed bounded convex set K with 0 ∈ int(K),

    φ_K(ξ) := inf{α > 0 : ξ ∈ αK}

is a Finsler function with B_{φ_K} = K; when K is centrally symmetric, φ_K is a norm. In the literature Finsler functions are also called Minkowski norms.

The dual function (or polar function) of a Finsler function Φ is defined as

    Φ*(ξ*) := sup{⟨ξ*, ξ⟩ : Φ(ξ) ≤ 1} = sup_{ξ ≠ 0} ⟨ξ*, ξ⟩ / Φ(ξ).

It is immediate to verify that Φ* is also a Finsler function, and a norm when Φ is a norm. We also have (Φ*)* = Φ. Therefore, the following inequality of Cauchy–Schwarz type holds:

    ⟨ξ*, ξ⟩ ≤ Φ*(ξ*) Φ(ξ)   for all ξ, ξ* ∈ R^N.

If Φ is a norm, we have

    |⟨ξ*, ξ⟩| ≤ Φ*(ξ*) Φ(ξ)   for all ξ, ξ* ∈ R^N.    (2)

Now, for general Finsler functions the inequality (2) is not true. An example of a Finsler function that is not a norm in R is given by Φ(ξ) := aξ− + bξ+, with 0 < a < b, where ξ+ = max{ξ, 0} and ξ− = max{−ξ, 0}. It is not difficult to see that

    Φ*(ξ*) = (1/b)(ξ*)+ + (1/a)(ξ*)−.

Hence, taking ξ = −1 and ξ* = 1, we get |⟨ξ*, ξ⟩| = 1 > a/b = Φ*(ξ*)Φ(ξ), so (2) fails. If we assume that the Finsler function Φ is differentiable at ξ ≠ 0, then by Euler's theorem, Φ(ξ) = ⟨DΦ(ξ); ξ⟩.
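The formula for the polar of Φ(ξ) = aξ− + bξ+ can be checked numerically from the definition. The following sketch (an illustration added here, not part of the paper; the sample values a = 1, b = 2 and the grid are arbitrary) compares the closed-form polar with a brute-force evaluation of the supremum.

```python
# Numerical sanity check of the polar of Phi(xi) = a*xi^- + b*xi^+ (0 < a < b),
# whose closed form is Phi*(eta) = eta^+/b + eta^-/a.

def phi(xi, a=1.0, b=2.0):
    """Finsler function Phi(xi) = a*xi^- + b*xi^+ (not a norm when a != b)."""
    return a * max(-xi, 0.0) + b * max(xi, 0.0)

def phi_star(eta, a=1.0, b=2.0):
    """Closed-form polar: Phi*(eta) = eta^+/b + eta^-/a."""
    return max(eta, 0.0) / b + max(-eta, 0.0) / a

def phi_star_numeric(eta, a=1.0, b=2.0, n=20001):
    """Polar computed from the definition sup_{xi != 0} <eta, xi> / Phi(xi)."""
    best = 0.0
    for i in range(n):
        xi = -1.0 + 2.0 * i / (n - 1)      # sample xi in [-1, 1]
        if xi != 0.0:
            best = max(best, eta * xi / phi(xi, a, b))
    return best

for eta in (-3.0, -1.0, 0.5, 2.0):
    assert abs(phi_star(eta) - phi_star_numeric(eta)) < 1e-9

# The symmetric inequality |<eta, xi>| <= Phi*(eta) * Phi(xi) fails here:
# with a = 1, b = 2, xi = -1, eta = 1 we get |<eta, xi>| = 1 while
# Phi*(eta) * Phi(xi) = (1/2) * 1 = 0.5.
assert phi_star(1.0) * phi(-1.0) == 0.5
```

By 1-homogeneity the supremum is attained on ξ ∈ {−1, 1}, which is why a sample of [−1, 1] suffices.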
From (3) and (4), we also have that, at any point ξ ≠ 0 of differentiability of Φ,

    Φ*(DΦ(ξ)) = 1.

3. The limit as p → ∞ in a p−Laplacian system. In this section we show that we can follow the ideas of Evans and Gangbo in [11] to obtain the matching measure and the Kantorovich potentials at the same time. From now on we will assume that F, G are continuous Finsler structures on Ω satisfying

    α_1 |ξ| ≤ F*(x, ξ) ≤ β_1 |ξ|   for any ξ ∈ R^N and x ∈ Ω,    (7)
    α_2 |ξ| ≤ G*(x, ξ) ≤ β_2 |ξ|   for any ξ ∈ R^N and x ∈ Ω,    (8)

with α_i, β_i positive constants. Without loss of generality we can take α_i = α and β_i = β.
In [17] we proved the following result that will be used later on.
As a consequence of Lemma 3.1, we have that the set of functions

    {u : Ω → R : u(y) − u(x) ≤ c_F(x, y) for all x, y ∈ Ω}

coincides with the set

    K*_F(Ω) := {u : Ω → R : esssup_{x∈Ω} F*(x, Du(x)) ≤ 1}.

3.1. The limit procedure. Take p > N from now on, and recall that, for simplicity, we assumed that Ω is a convex C^2 bounded open set. We will use the following result, whose proof follows standard Functional Analysis arguments.

Lemma 3.2 (A Poincaré type inequality). There exists a constant C > 0 (possibly depending on p) such that every pair (v, w) ∈ W^{1,p}(Ω) × W^{1,p}(Ω) with v + w ≤ 0 on D, normalized so as to fix the free additive constant, satisfies

    ∥v∥_{W^{1,p}(Ω)} + ∥w∥_{W^{1,p}(Ω)} ≤ C ( ∥Dv∥_{L^p(Ω)} + ∥Dw∥_{L^p(Ω)} ).
The constants that appear in the previous inequality need not be uniform in p. It is not our aim here to make this dependence precise; consequently, we cannot use these results in the passage to the limit as p → ∞, and they are used only to show existence of a solution of the variational problem under consideration.
Let us consider the following variational problem:

    min { Ψ_p(v, w) : (v, w) ∈ W^{1,p}(Ω) × W^{1,p}(Ω), v + w ≤ 0 on D },    (9)

where

    Ψ_p(v, w) := (1/p) ∫_Ω F*(x, Dv)^p dx + (1/p) ∫_Ω G*(x, Dw)^p dx − ∫_Ω v f+ dx − ∫_Ω w f− dx.

The next result deals with existence and uniqueness of solutions for the variational problem (9).

Theorem 3.3.
There exists a minimizer (v_p, w_p) of (9). Moreover, when F* and G* are strictly convex we have uniqueness of minimizers up to an additive constant; that is, if (ṽ_p, w̃_p) is another minimizer then there exists a constant c such that ṽ_p = v_p + c and w̃_p = w_p − c.

Proof. Let us begin by observing that, since the functions in W^{1,p}(Ω) (p > N) are continuous, the constraint on D makes sense pointwise. Now, by Lemma 3.2, and having in mind (7) and (8), Ψ_p is a finite, lower semicontinuous and coercive convex functional on the closed convex subset B of W^{1,p}(Ω) × W^{1,p}(Ω) given by

    B := {(v, w) ∈ W^{1,p}(Ω) × W^{1,p}(Ω) : v + w ≤ 0 on D}.

Then, by [2, Corollary 3.23], Ψ_p attains its infimum on B, that is, the infimum in (9) is attained. Uniqueness for strictly convex Finsler structures follows as in [16].
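To build intuition for the variational problem above, the following self-contained sketch minimizes a one-dimensional discrete analogue of Ψ_p by projected gradient descent. It is an illustration only, not the paper's method: the grid, the data f±, the target set D, the choice F* = G* = |·|, the exponent p = 4, the step size, and the simplification of imposing the matching constraint as the equality v + w = 0 on D are all assumptions made here.

```python
# Toy 1-D discretization of the variational problem (9): minimize
#   (1/p) int |v'|^p + (1/p) int |w'|^p - int f+ v - int f- w
# subject to v + w = 0 on the target set D (equality imposed for simplicity).

def solve_matching(n=61, p=4, lr=0.002, iters=5000):
    h = 1.0 / (n - 1)
    xs = [i * h for i in range(n)]
    f_plus  = [1.0 if x < 0.2 else 0.0 for x in xs]          # first commodity
    f_minus = [1.0 if x > 0.8 else 0.0 for x in xs]          # second commodity
    D = [i for i, x in enumerate(xs) if 0.45 <= x <= 0.55]   # target set

    def energy(v, w):
        e = 0.0
        for i in range(n - 1):
            e += (h / p) * (abs((v[i+1] - v[i]) / h) ** p
                            + abs((w[i+1] - w[i]) / h) ** p)
        e -= h * sum(f_plus[i] * v[i] + f_minus[i] * w[i] for i in range(n))
        return e

    def grad(u, f):
        g = [-h * f[i] for i in range(n)]
        for i in range(n - 1):
            d = (u[i+1] - u[i]) / h
            q = abs(d) ** (p - 1) * (1.0 if d >= 0 else -1.0)
            g[i]   -= q          # contribution of edge (i, i+1) to d/du[i]
            g[i+1] += q
        return g

    v, w = [0.0] * n, [0.0] * n
    for _ in range(iters):
        gv, gw = grad(v, f_plus), grad(w, f_minus)
        v = [v[i] - lr * gv[i] for i in range(n)]
        w = [w[i] - lr * gw[i] for i in range(n)]
        for i in D:              # project onto the constraint v + w = 0 on D
            s = 0.5 * (v[i] + w[i])
            v[i] -= s
            w[i] -= s
    return v, w, D, energy

v, w, D, energy = solve_matching()
assert max(abs(v[i] + w[i]) for i in D) < 1e-12   # matching constraint holds
assert energy(v, w) < 0.0                          # below Psi_p(0, 0) = 0
```

Since the pair (0, 0) is admissible with Ψ_p(0, 0) = 0, any reasonable descent scheme produces strictly negative energy, which the final assertion checks.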
Now we prove that we can pass to the limit as p → ∞ along a subsequence of minimizers.

Theorem 3.4. Let (v_p, w_p) be minimizers of (9). Then, there exists a subsequence p_i → +∞ such that v_{p_i} → v_∞ and w_{p_i} → w_∞ uniformly in Ω, and the limit pair is a solution of the limit maximization problem (11).

Proof of Theorem 3.4. Let us take (v_p, w_p) ∈ B a minimizer of (9).
Now, by (10), we can assume that there exists x_p ∈ D such that v_p(x_p) + w_p(x_p) = 0. We can also assume that v_p(z_∞) = 0 for all p, for some fixed z_∞ ∈ Ω. Hence, as p > N, we have, from (12), (7), (8), (13) and (14) and using Hölder's inequality, uniform bounds of the form

    ∥v_p∥_{W^{1,q}(Ω)} ≤ C_1  and  ∥w_p∥_{W^{1,q}(Ω)} ≤ C_1   for each fixed q > N,

with the constants C_i independent of p; see [16] for the details. Therefore, by Morrey's inequality and the Arzelà–Ascoli compactness criterion, there exists a subsequence such that

    v_{p_i} → v_∞ uniformly in Ω  and  w_{p_i} → w_∞ uniformly in Ω.

Finally, passing to the limit in (12), we conclude that the limit pair is admissible for the limit problem. This ends the proof.
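The uniform bound invoked above follows the Evans–Gangbo pattern. Here is a sketch with generic constants (an outline under the assumptions of this section, not the paper's proof; we compare the minimizer with the admissible pair (0, 0)):

```latex
% Since (0,0) \in B, minimality gives \Psi_p(v_p, w_p) \le \Psi_p(0,0) = 0, i.e.
\[
  \frac1p \int_\Omega F^*(x,Dv_p)^p \,dx + \frac1p \int_\Omega G^*(x,Dw_p)^p \,dx
  \;\le\; \int_\Omega v_p\, f^+ \,dx + \int_\Omega w_p\, f^- \,dx
  \;\le\; C\big( 1 + \|Dv_p\|_{L^p(\Omega)} + \|Dw_p\|_{L^p(\Omega)} \big),
\]
% where the last step uses the normalization of (v_p, w_p), Morrey's
% inequality for a fixed exponent q > N, and H\"older's inequality. Together
% with (7)-(8) this yields \|Dv_p\|_{L^p(\Omega)} \le (Cp)^{1/(p-1)} \le C',
% and then, for fixed q > N,
\[
  \|Dv_p\|_{L^q(\Omega)}
  \le |\Omega|^{\frac1q-\frac1p}\,\|Dv_p\|_{L^p(\Omega)} \le C'' ,
\]
% uniformly in p (similarly for w_p), so Morrey's inequality gives the
% equicontinuity needed for Arzel\`a--Ascoli.
```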

Remark 3.5.
Note that the convergence as p → ∞ holds only along a subsequence. The main content of our result is that there is enough compactness to pass to the limit along subsequences and, moreover, that every possible limit is a solution of the limit maximization problem (11).
We now prove some properties of the minimizers that allow us to show that (v_∞, w_∞) is a pair of Kantorovich potentials. Moreover, we will see that this limit procedure gives much more, since it allows us to identify the optimal matching measure.

Lemma 3.6. Assume that F*(x, ·) and G*(x, ·) are C^1(R^N \ {0}). Let (v_p, w_p) be minimizers of problem (9). Then, there exists a positive Radon measure h_p of mass M_0 such that

1. the weak Euler–Lagrange equations hold:

    ∫_Ω F*(x, Dv_p)^{p−1} ⟨(∂F*/∂ξ)(x, Dv_p), Dφ⟩ dx = ∫_Ω φ f+ dx − ∫_Ω φ dh_p   for every φ ∈ W^{1,p}(Ω),
    ∫_Ω G*(x, Dw_p)^{p−1} ⟨(∂G*/∂ξ)(x, Dw_p), Dφ⟩ dx = ∫_Ω φ f− dx − ∫_Ω φ dh_p   for every φ ∈ W^{1,p}(Ω).
Here η is the exterior normal vector on ∂Ω, and ∂F*/∂ξ denotes the gradient of F*(x, ξ) with respect to the second variable ξ; similarly for G*.

2. The positive measure h_p is supported on the contact set {x ∈ D : v_p(x) + w_p(x) = 0}.
Proof. In this proof, for brevity, we omit the x dependence of F* and G*, that is, we write F*(Dv_p) and G*(Dw_p). Recall that since p > N we have W^{1,p}(Ω) ⊂ C(Ω). For any φ, ψ ∈ W^{1,p}(Ω) such that φ + ψ = 0 in D, the pair (v_p + tφ, w_p + tψ) is admissible for every t; hence, since (v_p, w_p) is a minimizer of Ψ_p, the function

    I(t) := Ψ_p(v_p + tφ, w_p + tψ)

has a minimum at t = 0. Therefore, I′(0) = 0, from where it follows that

    ∫_Ω F*(Dv_p)^{p−1} ⟨(∂F*/∂ξ)(Dv_p), Dφ⟩ + ∫_Ω G*(Dw_p)^{p−1} ⟨(∂G*/∂ξ)(Dw_p), Dψ⟩ = ∫_Ω φ f+ + ∫_Ω ψ f−.    (17)

Observe that, taking ψ = −φ in (17), we get

    ∫_Ω F*(Dv_p)^{p−1} ⟨(∂F*/∂ξ)(Dv_p), Dφ⟩ − ∫_Ω G*(Dw_p)^{p−1} ⟨(∂G*/∂ξ)(Dw_p), Dφ⟩ = ∫_Ω φ (f+ − f−).    (18)

Similarly, for any φ ∈ W^{1,p}(Ω), φ ≥ 0, and any t > 0, the pair (v_p − tφ, w_p) is admissible; taking limits in the corresponding incremental quotients as t → 0+ we obtain a non-negative linear functional, which defines the positive Radon measure h_p. Therefore, thanks to (18), taking limits as t → 0 we conclude that

    ∫_Ω F*(Dv_p)^{p−1} ⟨(∂F*/∂ξ)(Dv_p), Dφ⟩ = ∫_Ω φ f+ − ∫_Ω φ dh_p.    (19)

Given now φ ∈ D(R^N), if we take ψ ∈ D(Ω) such that φ + ψ = 0 in D, (17) holds. On the other hand, since ψ ∈ D(Ω) and supp(h_p) ⊂ D, we have

    ∫_Ω G*(Dw_p)^{p−1} ⟨(∂G*/∂ξ)(Dw_p), Dψ⟩ = ∫_Ω ψ f− − ∫_Ω ψ dh_p.

Then, from the two above expressions, by density we obtain that (19) holds for every φ ∈ W^{1,p}(Ω), which shows the first statement in item 1 for the first problem; the second one follows similarly. Finally, taking φ = 1 in (19), we get ∫_Ω dh_p = M_0, and the proof is finished.
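For completeness, the differentiation behind the first-variation identity is elementary (a sketch, assuming F*(x, ·) ∈ C^1(R^N \ {0}) as in the statement of the lemma):

```latex
\[
  \frac{d}{dt}\Big|_{t=0}\, \frac1p \int_\Omega F^*(x, Dv_p + t\,D\varphi)^p \,dx
  = \int_\Omega F^*(x, Dv_p)^{p-1}
      \Big\langle \frac{\partial F^*}{\partial \xi}(x, Dv_p),\, D\varphi \Big\rangle\, dx ,
\]
% and the same computation for the G^*-term and for the linear terms of
% \Psi_p gives exactly the identity used at the start of the proof.
```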
3.2. The optimal matching problem. Let us begin with the following proposition.