On a Model for Mass Aggregation with Maximal Size

We study a kinetic mean-field equation for a system of particles of different sizes, in which two particles may coagulate only if their sizes sum to a prescribed time-dependent value. We prove well-posedness of this model, study the existence of self-similar solutions, and analyze the large-time behavior, mostly by numerical simulations. Depending on the parameter $k_0$, which controls the probability of coagulation, we observe two different scenarios: for $k_0>2$ there exist two self-similar solutions to the mean-field equation, one of which is unstable. In numerical simulations we observe that for all initial data the rescaled solutions converge to the stable self-similar solution. For $k_0<2$, however, no self-similar behavior occurs, as the solutions converge in the original variables to a limit that depends strongly on the initial data. We prove a corresponding rigorous statement for $k_0\in (0,1/3)$. Simulations for the cross-over case $k_0=2$ are not completely conclusive, but indicate that, depending on the initial data, part of the mass evolves in a self-similar fashion while the rest remains in the small particles.


Introduction
Mass aggregation is a fundamental process that appears in a wide range of applications, such as the formation of aerosols, polymerization, clustering of stars, or ballistic aggregation; see for instance [2,6,12]. In all these applications, certain types of 'particles' form clusters that are characterized by their 'size' or 'mass' $x$. Smoluchowski's [11] classical mesoscopic mean-field description of irreversible aggregation processes describes the evolution of the number density $g(t,x)$ of clusters of size $x$ per unit volume at time $t$. Clusters of size $x$ and $y$ can coalesce by binary collisions to form clusters of size $x+y$ at a rate given by a kernel $K(x,y)$, such that the dynamics of $g$ is given by
$$
\partial_t g(t,x) = \frac12\int_0^x K(y,x-y)\,g(t,x-y)\,g(t,y)\,dy \;-\; g(t,x)\int_0^\infty K(x,y)\,g(t,y)\,dy. \tag{1}
$$
An issue of fundamental interest in the mathematical analysis of coagulation processes is the phenomenon of dynamic scaling for homogeneous kernels. This means that for initial data from a certain class the solution to (1) converges to a certain self-similar solution. Unfortunately, except for some special kernels such as the constant one, this question is still only poorly understood (see e.g. [8] for an overview). While it has been common belief in the applied literature that self-similar solutions are unique, it has recently been shown for some special kernels [9] that there is a whole one-parameter family of self-similar solutions. These solutions can be distinguished by their tail behavior, and their respective domains of attraction are characterized by the tail behavior of the initial data.
In contrast to this, very little is known for other homogeneous kernels. The existence of fast-decaying self-similar solutions for a range of kernels is established in [3,4], but both the existence of solutions with algebraic tail and the uniqueness of solutions are still open problems. In general, unless explicit methods such as the Laplace transform work, just proving existence of self-similar solutions to a coagulation equation can be a formidable task.
Thus, despite their fundamental role, many properties of mean-field models for coagulation processes, in particular with respect to dynamic scaling, are not well understood. Motivated by an application of elasto-capillary coalescence of wetted polyester lamellas [1], we investigate this question for a special singular coagulation kernel $K$. This kernel allows only those clusters to coagulate that can form clusters of a given maximal size. To our knowledge this is the first mathematical analysis for this type of kernel. Even though tails of the size distribution do not play a role here, we find that self-similar solutions are still not unique, and the analysis of the long-time behavior turns out to be delicate.
In the model considered here, two particles can coagulate only if the sum of their sizes equals $M(t)$, where $M$ is a given increasing function of time. This means that at time $t$ only particles of size $M(t)$ emerge, while the number of particles of all smaller sizes decreases. This corresponds to $K(x,y,t) \sim \delta_0(x+y-M(t))$, where $\delta_0$ denotes the delta distribution at $0$ and the factor of proportionality may depend on time.
As above, we denote the number density of particles of size $x$ at time $t$ by $g(t,x)$. The total number of particles per unit volume at time $t$ is then $N(t) = \int_0^{M(t)} g(t,x)\,dx$. At time $t$, the density of particles of any size $x < M(t)$ decreases at a rate proportional to the probability density of a particle of size $x$ meeting a particle of size $M(t)-x$. The coagulation process can hence be described by
$$
\partial_t g(t,x) = -\,K(t)\,g(t,x)\,g(t,M(t)-x), \qquad 0<x<M(t), \tag{2}
$$
where $K(t)$ is a rate function proportional to the expected number of coagulation events per unit time. Motivated by [1] we make the ansatz
$$
K(t) = k_0\,\frac{M'(t)}{N(t)}, \tag{3}
$$
where $k_0$ is a constant of proportionality that depends on the particular physical process. The coagulation process described above neither creates nor destroys mass, which is expressed by the mass conservation equation
$$
\int_0^{M(t)} x\,g(t,x)\,dx = \sigma,
$$
where $\sigma$ is a constant. From (2) and (3) we will derive an equivalent condition for $g(t,M(t))$; see Equation (7) below. For $x > M(t)$, $g(t,x)$ does not change, since particles of size greater than $M(t)$ cannot form a particle of size $M(t)$ by coagulation. Hence $\partial_t g(t,x) = 0$ for $x > M(t)$. We are normally interested in processes where the value of $M$ at the starting time exceeds the size of all particles existing initially, and hence we also assume $g(t,x)=0$ for $x>M(t)$. An important feature of Equation (2) is its invariance under reparametrization of time. As a consequence, $M(t)$ is not determined by the initial data but can be prescribed as an arbitrary increasing function of time (see also [7,10], where such an invariance has been crucial in the analysis of a model for min-driven clustering). In the following we choose $M(t)=t$ and also normalize the mass by setting $\sigma = 1$. Consequently, in what follows we study the system of equations
$$
\partial_t g(t,x) = -\,\frac{k_0}{N(t)}\,g(t,x)\,g(t,t-x), \qquad N(t)=\int_0^t g(t,x)\,dx, \qquad \int_0^t x\,g(t,x)\,dx = 1, \tag{4}
$$
with $t\ge 1$ and $0\le x\le t$.
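The reparametrization invariance can be checked by a short computation; the following sketch assumes the rate ansatz $K(t) = k_0 M'(t)/N(t)$ from (3).

```latex
\text{Let } \tau \text{ be increasing and set } h(s,x) := g(\tau(s),x),\quad
\tilde M := M\circ\tau,\quad \tilde N(s) := N(\tau(s)). \text{ Then}
\begin{aligned}
\partial_s h(s,x) = \tau'(s)\,\partial_t g(\tau(s),x)
  = -\,\tau'(s)\,K(\tau(s))\;h(s,x)\,h\big(s,\tilde M(s)-x\big),
\end{aligned}
\text{so } h \text{ solves (2) with the rate } \tilde K(s) = \tau'(s)\,K(\tau(s)).
\text{ Under the ansatz (3),}
\begin{aligned}
\tilde K(s) = \tau'(s)\,k_0\,\frac{M'(\tau(s))}{N(\tau(s))}
            = k_0\,\frac{\tilde M'(s)}{\tilde N(s)},
\end{aligned}
\text{i.e. the ansatz keeps its form, and } M \text{ may be normalized to } M(t)=t.
```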
The first aim of this article is to establish well-posedness of the initial value problem and to study the long time behavior of solutions to (4). To this end, we prove global existence and uniqueness of mild solutions in Section 2, Theorem 1 by rewriting (4) as a fixed point equation.
In contrast to coagulation equations with more regular kernels, for which well-posedness can often be proved in the space of probability measures [5], here we need to work with functions that are continuous up to the points x = 1 and x = t. The reason that the fixed-point argument is not quite straightforward is that at any time there is an influx of particles of the largest size M (t) that leads to a nontrivial boundary condition.
In Section 3 we study the existence of self-similar solutions. Combining rigorous arguments with numerical simulations, we identify $k_0 = 2$ as a critical value: for $k_0 < 2$ no self-similar solutions exist, while for $k_0 > 2$ we find two self-similar solutions which have different shape and stability properties. Section 4 is devoted to numerical simulations of initial value problems. Our results for $k_0 > 2$ suggest that, after a suitable rescaling, each solution converges to the second self-similar solution. For $k_0 < 2$, however, $g$ converges to a steady state $g_\infty(x)$, $x \in \mathbb{R}^+$, whose shape depends strongly on the initial data. We finally prove this assertion in Proposition 6 for $k_0 < 1/3$.

Global existence and uniqueness of solutions
In this section we derive a notion of mild solutions to (4) that relies on an appropriate reformulation of the mass constraint $\int_0^t x\,g(t,x)\,dx = 1$. We then use Banach's Contraction Mapping Theorem to prove the local existence and uniqueness of mild solutions, and finally employ some a priori estimates to show that these mild solutions exist globally in time.

Notion of mild solutions and main result
To reformulate the mass constraint (4)$_3$ we suppose that a piecewise smooth solution $g(t,x)$ to (4)$_1$ and (4)$_2$ is given. We then have
$$
0 = \frac{d}{dt}\int_0^t x\,g(t,x)\,dx = t\,g(t,t) + \int_0^t x\,\partial_t g(t,x)\,dx,
$$
and due to (4)$_1$ we obtain
$$
t\,g(t,t) = \frac{k_0}{N(t)}\int_0^t x\,g(t,x)\,g(t,t-x)\,dx.
$$
Moreover, substituting $x \mapsto t-x$ shows that the integral on the right-hand side equals $\tfrac{t}{2}\int_0^t g(t,x)\,g(t,t-x)\,dx$, and hence
$$
g(t,t) = \frac{k_0}{2N(t)}\int_0^t g(t,x)\,g(t,t-x)\,dx. \tag{7}
$$
This condition is equivalent to (4)$_3$ provided that $g(t,x)$ solves (4)$_1$, and for ease of notation we introduce the operator
$$
B[g](t) := \frac{k_0}{2N(t)}\int_0^t g(t,x)\,g(t,t-x)\,dx.
$$
In what follows we fix some $T>0$ and introduce $X$ as the space of all functions $g$ on $\Omega = \{(t,x)\,:\,1\le t\le T,\ 0\le x\le t\}$ that are continuous up to the lines $x=1$ and $x=t$, where the initial data $g_{\mathrm{ini}}$ are supposed to satisfy (5). To introduce the notion of mild solutions we now define the operator $\Gamma : X \to X$ by
$$
\Gamma[\tilde g](t,x) = \begin{cases}
g_{\mathrm{ini}}(x)\,\exp\Big(-k_0\displaystyle\int_1^t \frac{\tilde g(s,s-x)}{\tilde N(s)}\,ds\Big) & \text{for } 0\le x\le 1,\\[2mm]
B[\tilde g](x)\,\exp\Big(-k_0\displaystyle\int_x^t \frac{\tilde g(s,s-x)}{\tilde N(s)}\,ds\Big) & \text{for } 1\le x\le t.
\end{cases} \tag{8}
$$
Notice that $\Gamma$ maps $X$ into itself, and that for each $\tilde g \in X$ the function $g = \Gamma[\tilde g]$ satisfies in almost all points $(t,x)$ the linear differential equation
$$
\partial_t g(t,x) = -\,\frac{k_0}{\tilde N(t)}\,\tilde g(t,t-x)\,g(t,x) \tag{9}
$$
with $\tilde N(t) = \int_0^t \tilde g(t,x)\,dx$ and initial data $g(1,x) = g_{\mathrm{ini}}(x)$. Our main result can be summarized as follows.
Theorem 1. For given initial data $g_{\mathrm{ini}}$ satisfying (5) and any $T>0$ there exists a unique mild solution $g \in X$ to (4) that satisfies (8) with $\Gamma[g] = g$.

Existence proof for fixed points of Γ
To employ the Contraction Mapping Principle we identify a subset $S \subset X$ that is invariant under $\Gamma$ and a metric such that $X$ is complete, $S$ is closed, and $\Gamma$ is a contraction. In what follows, $S$ consists of the functions in $X$ that are bounded in terms of a weight function $f$ as in (10). Towards (10) we estimate $\tilde N$; therefore, $f$ satisfies (10) provided that condition (12) holds.

Lemma 2. There exists a constant $D > 0$ such that $f(x) = h_0 e^{Dx^3}$ satisfies (12).
Proof. There is nothing to show for $x \le 1$. For $x \ge 1$, condition (12) can be rewritten as a pair of inequalities. For the first inequality we have estimated the integrand in the first and third part of the integral by $1$, and in the second part by its value on the boundaries, which is possible since the integrand is a convex function. The second inequality then follows since $A$, $D$, and $x$ are non-negative. We now choose $A < (4Tk_0)^{-1}$, which implies $\frac{2A}{x} < \frac{1}{2Tk_0}$ for all $x \ge 1$. Next we choose $D$ large enough so that $e^{-3DA} < \frac{1}{2Tk_0}$ and $3DA > 1$. Since the function $xe^{-3DAx}$ is decreasing for $x \ge 1$, we find $xe^{-3DAx} < \frac{1}{2Tk_0}$ for all $x \ge 1$. Hence our choice of $A$ and $D$ guarantees (12).
Lemma 3. We can choose $T > 0$ such that (11) is satisfied for all $\tilde g \in S$.
Proof. For $\tilde g \in S$ we have (11) because $\Gamma[\tilde g]$ is piecewise continuously differentiable in $t$ and satisfies (9). The continuity properties of $\Gamma[\tilde g]$ imply that $\int_0^t x\,\Gamma[\tilde g](t,x)\,dx$ is differentiable in time, and we can estimate its time derivative; for $t = 1$ the corresponding identity holds by the choice of initial data. We have now shown that $S$ is invariant under $\Gamma$, and that there exists a constant $M_T$ such that $\Gamma[\tilde g](t,x) \le M_T$ holds for all $\tilde g \in S$ and all $(t,x) \in \Omega$.
In the next step we construct a norm for $X$ such that $\Gamma$ is a contraction on $S$. To this end, we define two norms $\|\cdot\|_1$ and $\|\cdot\|_2$ and derive an estimate for $\|\Gamma[\tilde g_1] - \Gamma[\tilde g_2]\|_{1,2}$ in terms of $\|\tilde g_1 - \tilde g_2\|_{1,2}$. Afterwards we show that $\Gamma$ is a contraction with respect to a suitable linear combination of these norms.

Lemma 4. There exists a constant $L > 0$ such that the estimates (14) and (15) hold for any $\tilde g_1, \tilde g_2 \in S$.

Proof. Let $(t,x) \in \Omega$. Since $|e^{-a} - e^{-b}| \le |a-b|$ for all $a, b \ge 0$, we obtain the following inequalities. For $x < 1$ we find the estimate (13). Now let $x \ge 1$, and set
$$
E_i(t,x) := \exp\Big(-k_0\int_x^t \frac{\tilde g_i(s,s-x)}{\tilde N_i(s)}\,ds\Big), \qquad i = 1,2.
$$
Then
$$
\Gamma[\tilde g_1](t,x) - \Gamma[\tilde g_2](t,x) = \big(B[\tilde g_1](x) - B[\tilde g_2](x)\big)E_1(t,x) + B[\tilde g_2](x)\big(E_1(t,x) - E_2(t,x)\big),
$$
and we treat the last two summands separately. Analogously to (13) we estimate the first summand; on the other hand, the second summand can be estimated using the uniform bound $M_T$ on $S$. Combining these results we find a constant $L'$ that depends polynomially on $k_0$, $T$, $M_T$ such that (14) holds. To derive the bounds for the second norm we split the interval $[0,T]$, and using (14) we obtain (15). With $L := \max\{TL', 2L'\}$ we then derive the asserted estimates from (14) and (15). The assertions now follow by taking the supremum in the above inequalities.

Now let $\beta \ge 2L$, where $L$ is as in the proof of Lemma 4, and define a norm on $X$ by $\|g\| := \|g\|_1 + \beta\|g\|_2$. For any $\tilde g_1, \tilde g_2 \in S$ we then obtain a Lipschitz estimate for $\Gamma$ with respect to $\|\cdot\|$, and it is possible to choose $T$ such that $\beta(T-1) < 1/2$ and $L(1+\beta)(T-1) < 1/2$. Hence $L(1 + \beta(T-1)) < 3\beta/4$, so $\Gamma$ is a contraction on $S$. Since $X$ equipped with $\|\cdot\|$ is a Banach space and $S$ is a closed and bounded subset, the Banach Fixed Point Theorem guarantees that $\Gamma$ has a unique fixed point $g \in S$. By construction, this fixed point solves the differential equation (4)$_1$ for $t \le T$. Moreover, since $g(t,t) = B[g](t)$, it also satisfies condition (7), which is equivalent to (4)$_3$.
It remains to prove that there exists a solution for all $1 < T < \infty$. This can be done by standard methods, because (i) for each $T > 1$ there exists a constant $D = D(T) > 0$ as in Lemma 2, and (ii) each solution satisfies a priori estimates that allow the local existence result to be iterated.

Self-similar solutions
In this section we describe self-similar solutions to (4). These satisfy
$$
g(t,x) = \alpha(t)\,G(x/t),
$$
where $G : [0,1] \to \mathbb{R}^+_0$ is a continuously differentiable function, and the mass constraint (4)$_3$ requires
$$
\alpha(t)\,t^2\int_0^1 y\,G(y)\,dy = 1.
$$
This means $t^2\alpha(t)$ is a positive constant. By rescaling $\alpha$ and $G$ we can ensure that this constant is $1$, so each mass-preserving self-similar solution takes the form $g(t,x) = t^{-2}G(x/t)$ with $\int_0^1 yG(y)\,dy = 1$. Using this relation in (4), and substituting $y = x/t$, we get
$$
y\,G'(y) + 2G(y) = \frac{k_0}{N}\,G(y)\,G(1-y), \qquad N = \int_0^1 G(z)\,dz. \tag{16}
$$
We first consider the simplified problem
$$
y\,G'(y) + 2G(y) = D\,G(y)\,G(1-y), \tag{17}
$$
where $D > 0$ is an arbitrary constant and we do not impose the mass constraint on $G$. At first, we notice that each solution $G$ to (17) provides a solution $\tilde G$ to (16) via
$$
\tilde G(y) = \frac{G(y)}{\int_0^1 z\,G(z)\,dz}, \qquad k_0 = D\int_0^1 G(z)\,dz. \tag{18}
$$
We also observe that (17) is invariant under the scaling $G \mapsto \lambda G$, $D \mapsto \lambda^{-1}D$. Consequently, in order to characterize the solution set of (16), we have to investigate (17) for only one value of $D$, and then consider how the corresponding solutions transform under (18).

To parametrize the solutions of (17) by their value at $y = 1/2$, we multiply (17) by $y$ and substitute $F(y) = G(y)\,y^2$ to obtain
$$
F'(y) = D\,\frac{F(y)\,F(1-y)}{y\,(1-y)^2}. \tag{19}
$$
Next, we decompose $F$ into its odd and even parts with respect to $1/2$ by setting $F_{\mathrm{o}}(y) = \tfrac12\big(F(y) - F(1-y)\big)$ and $F_{\mathrm{e}}(y) = \tfrac12\big(F(y) + F(1-y)\big)$, which turns (19) into the system (20). Since (20) is locally Lipschitz for $y \in (0,1)$, the local existence and uniqueness of solutions to the corresponding initial value problems is granted.

We next discuss some numerical ODE simulations that illustrate how the solutions of (17) with $D = 1$ depend on the value of $G(1/2)$. For $G(1/2) = 2$ we get the trivial solution $G \equiv 2$. From now on, we refer to solutions with $G(1/2) > 2$ as supercritical and to those with $G(1/2) < 2$ as subcritical. These two types behave rather differently; see Figure 1, which shows the solutions for $G(1/2) \in \{0.2, 0.6, 1.0, 1.4, 1.8\}$ and $G(1/2) \in \{2.0, 2.4, 2.8, 3.2, 3.6, 4.0\}$. Our numerical results indicate that each supercritical solution has precisely one local maximum between $1/2$ and $1$ but no local minimum.
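The qualitative behavior just described can be reproduced with a few lines of code. The following is a minimal shooting sketch for (17) with $D = 1$, not the authors' code: it integrates the pair $u(y) = G(y)$, $v(y) = G(1-y)$ from $y = 1/2$ towards the singular endpoint $y = 1$; the step count and the stopping point $y = 0.995$ are ad hoc choices.

```python
# Sketch: solve the simplified self-similar ODE (17) with D = 1,
#   y G'(y) + 2 G(y) = G(y) G(1 - y),
# via the symmetrized system for u(y) = G(y) and v(y) = G(1 - y) on [1/2, 1).

def rhs(y, u, v):
    du = (u * v - 2.0 * u) / y            # equation (17) evaluated at y
    dv = -(u * v - 2.0 * v) / (1.0 - y)   # equation (17) evaluated at 1 - y
    return du, dv

def solve(g_half, y_end=0.995, n=2000):
    """Integrate from y = 1/2 towards y = 1 with classical RK4."""
    y, u, v = 0.5, g_half, g_half         # symmetry: u(1/2) = v(1/2) = G(1/2)
    h = (y_end - 0.5) / n
    us = [u]
    for _ in range(n):
        k1u, k1v = rhs(y, u, v)
        k2u, k2v = rhs(y + h/2, u + h/2*k1u, v + h/2*k1v)
        k3u, k3v = rhs(y + h/2, u + h/2*k2u, v + h/2*k2v)
        k4u, k4v = rhs(y + h, u + h*k3u, v + h*k3v)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        y += h
        us.append(u)
    return us  # values of G on [1/2, y_end]

# The trivial solution: G(1/2) = 2 gives G identically 2.
flat = solve(2.0)
# A supercritical solution, G(1/2) = 3, first increases and then decays,
# consistent with the single interior maximum seen in the simulations.
sup = solve(3.0)
```

The values of `v` along the way give $G$ on $(0, 1/2]$, so one run recovers the whole profile.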
On the other hand, a subcritical solution has two local maxima, close to $0$ and $1$, and a local minimum between $1/2$ and $1$; compare Figure 2, where 'variation near 1' refers to $G(y) - G(1)$. Based on this, we conjecture that for every $k_0 > 2$ there exist two solutions to (16), one having a subcritical and the other a supercritical shape. For $k_0 = 2$ there is the trivial self-similar solution. For $k_0 < 2$ there seems to be no self-similar solution. To support this conjecture, we now prove that there is no self-similar solution for $k_0 \le 1$: Integrating (16) we find $\lim_{z\to 1} G(z) = N$, so each solution to (16) satisfies $G(y) \sim y^{-2+k_0}$ as $y \to 0$, which contradicts $\int_0^1 G(y)\,dy < \infty$. For $1 < k_0 < 2$, however, this argument does not apply, but our numerical results indicate that there is still no self-similar solution.
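The small-$y$ asymptotics behind this argument can be sketched as follows; we assume the self-similar equation has the form $yG'(y) + 2G(y) = \frac{k_0}{N}G(y)G(1-y)$ with $N = \int_0^1 G(z)\,dz$, which is consistent with the exponent $-2+k_0$ quoted above.

```latex
\begin{aligned}
&\text{As } y\to 0 \text{ we have } G(1-y)\to G(1) = N, \text{ hence to leading order}\\
&\qquad y\,G'(y) + 2\,G(y) = \frac{k_0}{N}\,G(y)\,G(1-y) \approx k_0\,G(y),\\
&\text{so } \frac{G'(y)}{G(y)} \approx \frac{k_0-2}{y}
\quad\Longrightarrow\quad G(y) \sim C\,y^{\,k_0-2} \quad (y\to 0).\\
&\text{For } k_0\le 1 \text{ the exponent is } \le -1, \text{ so } \int_0^1 G(y)\,dy = \infty,
\text{ contradicting } N = \int_0^1 G(z)\,dz < \infty.
\end{aligned}
```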

Long-time behavior
To investigate the long-time behaviour of solutions to (4) by numerical simulations, we derive a discrete "box model" which follows naturally from the physical interpretation. We assume the number of initial boxes $M_b$ is sufficiently large (we choose $M_b = 200$ for all simulations) and consider the discrete times $t = 1 + \varepsilon j$ with $j \in \mathbb{N}$ and $\varepsilon = 1/M_b$. Moreover, we denote by $G(j,i)$ the number of particles with size $x \in (\varepsilon i - \varepsilon, \varepsilon i)$ at time $t = 1 + \varepsilon j$, for all integers $j \ge 0$ and $i = 1, \dots, M_b + j$. The discrete analogue of the evolution equation (4) is then given by
$$
G(j+1,i) = G(j,i)\Big(1 - \varepsilon\,\frac{k_0}{N(j)}\,G(j,\,M_b+j+1-i)\Big), \qquad i = 1,\dots,M_b+j,
$$
where $N(j) = \varepsilon\sum_{i=1}^{j+M_b} G(j,i)$, and the value of $G(j,\,j+M_b+1)$ is determined by the conservation of mass, i.e.,
$$
\varepsilon^2\sum_{i=1}^{M_b+j+1} i\,G(j+1,i) = 1.
$$
We now present our numerical results for three different values of $k_0$ and three different sets of initial data. More precisely, we consider $k_0 = 1, 2, 3$ and assume that the initial data have Gaussian distributions with dispersion $0.3$ and center at either $0.25$, $0.5$, or $0.75$. For $k_0 > 2$ we expect convergence to one of the self-similar solutions. Numerical simulations suggest that the solution converges as $t \to \infty$ to the supercritical self-similar solution that corresponds to $k_0$. The same happens if the initial data are very close to the subcritical self-similar solution, and thus we can conclude that subcritical solutions are unstable. We have not found a rigorous proof of this assertion, but the numerical evidence is strong. In Figure 3, we plot the scaled distributions after 0, 200, 1000 and 25000 steps (0, 1000, 25000 and 150000 if the center is at 0.25) for the three sets of initial data described above. The corresponding times are given by 0, 1, 5, and 125 (0, 5, 125, and 750). Along with these smooth curves we also plot (dotted) the self-similar solution for $k_0 = 3$. As we see, the convergence is slowest when the center of the initial data is at 0.25.
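A minimal implementation of this box model might look as follows. This is an illustrative sketch, not the authors' code: it assumes the update rule reconstructed from the description above (loss term $\varepsilon\,k_0\,G(j,i)\,G(j,M_b+j+1-i)/N(j)$, largest box fixed by mass conservation) and uses a coarser grid, $M_b = 50$, for brevity.

```python
# Sketch of the discrete box model for equation (4).
import math

M_b = 50
eps = 1.0 / M_b
k0 = 3.0

# Gaussian initial data centred at 0.5 with dispersion 0.3 on boxes i = 1..M_b;
# index 0 is unused so that G[i] corresponds to sizes in (eps*(i-1), eps*i].
G = [0.0] + [math.exp(-((eps * i - 0.5) ** 2) / (2 * 0.3 ** 2))
             for i in range(1, M_b + 1)]

# Normalize so that the discrete mass eps^2 * sum_i i*G[i] equals 1.
mass = eps ** 2 * sum(i * g for i, g in enumerate(G))
G = [g / mass for g in G]

def step(G, j):
    """One time step: pairs (i, M_b+j+1-i) coagulate into the new largest size."""
    top = M_b + j + 1                       # index of the newly created size
    N = eps * sum(G[1:])                    # total particle number N(j)
    newG = [0.0] * (top + 1)
    for i in range(1, top):
        newG[i] = G[i] * (1.0 - eps * (k0 / N) * G[top - i])
    # The largest box is fixed by conservation of mass.
    newG[top] = (1.0 / eps ** 2
                 - sum(i * g for i, g in enumerate(newG[:top]))) / top
    return newG

for j in range(200):                        # evolve up to t = 1 + 200*eps = 5
    G = step(G, j)

final_mass = eps ** 2 * sum(i * g for i, g in enumerate(G))
```

By construction the discrete mass stays equal to 1 in every step, which serves as a basic sanity check on the scheme.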
For $k_0 = 2$ we expect convergence to the trivial self-similar solution, but the numerical simulations do not support this assertion. We conclude that either the long-time behaviour is more complicated or the convergence is extremely slow. Figure 5 contains the distributions after 0, 1000, 5000, 25000 and 150000 steps, together with the dotted plot of the self-similar solution.
In the first picture we observe a behavior similar to that for $k_0 < 2$; that is, the mass accumulates near the origin. It may eventually disappear, but this did not happen within any number of steps that we were able to simulate. The same occurs for $k_0 > 2$ if the initial data are more concentrated. For example, if we choose the dispersion equal to $0.2$, then 150000 steps are not enough to see the convergence for $k_0 = 3$ and center at $0.25$. This observation is not surprising, because if the initial data are supported in $[0, 1/2)$, then initially no two particle sizes can sum to the maximal size, and the evolution is slow to set in.