From single-particle stochastic kinetics to macroscopic reaction rates: fastest first-passage time of $N$ random walkers

We consider the first-passage problem for $N$ identical independent particles that are initially released uniformly in a finite domain $\Omega$ and then diffuse toward a reactive area $\Gamma$, which can be part of the outer boundary of $\Omega$ or a reaction centre in the interior of $\Omega$. For both perfect and partial reactions, we obtain explicit formulas for the first two moments of the fastest first-passage time (fFPT), i.e., the time when the first out of the $N$ particles reacts with $\Gamma$. Moreover, we investigate the full probability density of the fFPT. We discuss the significant role of the initial condition in the scaling of the average fFPT with the particle number $N$, namely, a much stronger dependence ($1/N$ and $1/N^2$ for partially and perfectly reactive targets, respectively), in contrast to the well-known inverse-logarithmic behaviour found when all particles are released from the same fixed point. We combine analytic solutions with scaling arguments and stochastic simulations to rationalise our results, which open new perspectives for studying the relevance of multiple searchers in various situations of molecular reactions, in particular, in living cells.


I. INTRODUCTION
Physical kinetics studies the dynamics of molecular chemical reactions in terms of the time dependence of the reactant concentrations [1]. If only one reactant is present with concentration $[A]$ (unimolecular reaction), the associated first-order rate equation is typically written in the form $d[A]/dt = -k[A]$ with the reaction rate $k$. The resulting dynamics is exponential, $[A] = [A]_0 \exp(-kt)$, with initial concentration $[A]_0$ and half-life time $t_{1/2} = \ln(2)/k$. A physical derivation of chemical reaction rates in terms of the diffusivity of the molecular reactants was given in the seminal 1916 paper by Smoluchowski [2] for immediate coagulation. For instance, the Smoluchowski rate of a particle with diffusion coefficient $D$ to hit an immobile spherical target of radius $a$ reads $k_S = 4\pi D a [A]_0$. A more general approach to calculating the reaction rate, combining reaction and diffusion limitation, was formulated by Collins and Kimball [3]. Different diffusion mechanisms can hereby lead to an effective renormalisation of the target size in these theories. Thus, when DNA-binding proteins search for their target site on a DNA chain, the combination of three-dimensional diffusion with one-dimensional diffusion along the DNA causes the target size $a$ to be replaced by an "effective sliding length". In the "facilitated diffusion picture" this quantity is based on the typical distance covered by the protein during sliding while it intermittently binds to the DNA [4][5][6][7]. Such predictions and their generalisations can by now be measured routinely in single-molecule assays [8][9][10][11][12][13][14].
* Electronic address: denis.grebenkov@polytechnique.edu
† Electronic address: rmetzler@uni-potsdam.de; corresponding author
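For orientation, the textbook relations above can be evaluated numerically; all parameter values in the following sketch are illustrative and not tied to any specific system:

```python
import math

# Illustrative parameters (no specific system implied)
D = 1.0e-9          # diffusion coefficient of the searcher, m^2/s
a = 1.0e-9          # radius of the immobile spherical target, m
conc = 6.0e20       # number density [A]_0 of searchers, 1/m^3 (~1 uM)

# Smoluchowski rate for immediate coagulation, k_S = 4*pi*D*a*[A]_0
k_S = 4.0 * math.pi * D * a * conc

# First-order decay [A](t) = [A]_0 exp(-k t) with half-life t_1/2 = ln(2)/k
t_half = math.log(2.0) / k_S

print(f"k_S = {k_S:.3e} 1/s, half-life = {t_half:.3e} s")
```
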
In chemical rate-based formulations of reaction kinetics such as those by Smoluchowski or Collins and Kimball it is tacitly assumed that the reactants are present at sufficiently high abundance, thus effecting smooth concentration levels. Moreover, in a "well-stirred" reaction container spatial coordinates can be neglected. By now the understanding is that such theories provide an adequate description of the chemical kinetics for most systems based on the interplay of molecular reaction and diffusion, upon some refinements and generalisations incorporating various physical factors missed in the original works [15][16][17][18][19][20][21][22][23][24]. Exceptions are provided by several particular reaction schemes in which, under some rather restrictive constraints, so-called fluctuation-induced behaviour emerges, see, e.g., [25][26][27][28][29][30][31][32][33][34] and references therein. One particular question concerns the possible concentration dependence of the effective reaction rates [15,[35][36][37][38]. Indeed, the original Smoluchowski approach and many of its generalisations are only plausible for sufficiently low albeit finite concentrations. At higher concentrations diffusive transport as the rate-controlling factor becomes less important, and concurrently the diffusion coefficients themselves acquire a dependence on the concentrations of reactants and products.
Conversely, in a shift of general interest towards understanding the kinetic behaviour of reactions in diverse biochemical and biophysical systems, low concentrations are increasingly considered to be a salient feature. Indeed, many intracellular processes of signalling, regulation, infection, immune reactions, and metabolism as well as transmitter release in neurons occur upon the arrival of one or few biomolecules to small specific regions [39,40]. The well-stirredness assumption based on simple diffusion models leads to the conclusion that DNA-binding proteins rapidly homogenise in the cell. Experiments show, however, that even in relatively small bacterial cells such transcription factors are localised around their encoding gene, and further inhomogeneity is caused by the nucleoid state [41]. Moreover, genes that are controlled by a specific transcription factor tend to be located next to the gene encoding this transcription factor [42,43]. This is consistent with explicit models for intracellular gene regulation [44,45]. Such gene regulatory signals often rely on nanomolar concentrations [46,47], similar to molecular concentration levels of autoinducer molecules controlling the state of cell colonies in quorum sensing [48][49][50][51][52]. This causes strong fluctuations of regulation events [44,[53][54][55].
The nanomolar concentration range of relevant molecular species renders the concept of reaction rates rather ill-defined. Due to the lack of a sufficiently large number of molecules, reaction times become strongly defocused and one cannot describe the system in terms of a single time scale associated with a reaction, but rather in terms of the full distribution of random times to a reaction event [56][57][58][68][69][70][71]. Indeed, the shortest relevant time scale of the associated first passage of molecules to their reaction target in such situations is "geometry-controlled" [57,70] in terms of "direct paths" from a molecule's initial position to its target. This shortest characteristic time scale is also the most probable in the reaction time distribution [56,57]. The longest time scale, typically orders of magnitude longer than the most probable time scale, corresponds to "indirect" trajectories that on their path hit the boundary of the confined volume and thus lose any signature of their original position [57,70].
Most approaches determining reaction rates or full probability densities of reaction times rely on a single-particle scenario. Yet, despite the small concentrations we alluded to above, typically a given number of particles search for a common target in parallel, for instance, several transcription factors seeking to bind to a specific binding site on the cellular DNA. For particles searching for a common target in parallel two relevant questions can be asked: (i) how much faster is the search process when more than one searcher is present, and (ii) which searcher comes first, for instance, when we think of two different species of transcription factors competing for the same binding site. We could also think of an entire colony of cells, for instance, the many thousands of cells in a bacterial colony [72,73], competing with each other for which cell is able to react first to a common environmental challenge. On a different level such competitive scenarios come into play when sperm cells race towards the egg cell. Indeed, such "particle number" effects on the reaction kinetics of few-molecule diffusion-limited reactions have recently been addressed [74][75][76].
Generally, there is a range of systems in molecular and cellular biology for which the number of species involved lies in some intermediate range: much more than a few, but still much less than a macroscopic number (i.e., of the order of Avogadro's number $N_A \approx 6 \times 10^{23}$) [77]. In particular, neuronal connections often occur on a dendritic spine, where calcium is present in large amounts and the arrival of the fastest of the calcium ions can trigger a transduction [78]. In another instance, the post-synaptic current is generated when the first receptor of a neurotransmitter is activated, a process which is mediated by the release and transport of several thousands of neurotransmitters from the pre-synaptic terminal. In immunology, which hosts a variety of such examples, a gene mechanism responsible for the selection and expression of a specific membrane receptor on B-cells relies on many such receptors, which bind directly to recognise a molecular unit of a pathogen.
A mathematical model in which N Brownian particles start from the same position simultaneously and search for an immobile small target was first analysed by Weiss, Shuler, and Lindenberg almost 40 years ago [79]. Various additional aspects of this problem were subsequently considered [80][81][82][83][84][85][86][87][88][89][90][91]. Specifically, it was already shown by Weiss and colleagues that the mean first arrival time of the fastest of the N Brownian searchers to the target is inversely proportional to $\ln N$ as $N \to \infty$. This means that the change in the "efficiency" of a reaction is quite modest compared to the large investment of requiring a larger number of searchers. It was thus concluded in [79] that the speedup due to many searchers is a comparatively minor effect: namely, to reduce the reaction time scale in a noticeable way, one needs a very large number of searchers. For instance, to achieve a reduction by just a factor of 10, more than 20,000 searchers need to be deployed, an expensive number for many biological scenarios. Indeed, it was recently argued in the context of the tens of millions of sperm cells competing for a single egg in higher mammals that when N such searchers are launched from the same initial location to find a given target, the decisive time scale is the arrival time of the first searcher, that is, the shortest time [74][75][76].
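The inverse-logarithmic law is easy to probe numerically in an exactly solvable setting: for Brownian motion on a half-line with an absorbing origin, the single-particle survival probability is $S(t|x_0) = \mathrm{erf}(x_0/\sqrt{4Dt})$, so the mean fFPT of $N$ walkers released from the same $x_0$ is $\int_0^\infty \mathrm{erf}(x_0/\sqrt{4Dt})^N\,dt$, which decays like $x_0^2/(4D\ln N)$. A sketch (illustrative units; the half-line is used here as a stand-in for the bounded domains of [79]):

```python
import math

def mean_ffpt_same_start(N, x0=1.0, D=1.0, tmax=50.0, steps=200000):
    """Mean fastest FPT of N walkers released at x0 on a half-line with an
    absorbing origin: integral of S(t|x0)^N = erf(x0/sqrt(4 D t))^N."""
    dt = tmax / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt                      # midpoint rule
        total += math.erf(x0 / math.sqrt(4.0 * D * t)) ** N * dt
    return total

# The product T_N * ln(N) should approach x0^2/(4 D) slowly from above:
for N in (10**2, 10**3, 10**4):
    print(N, mean_fffpt := mean_fffpt if False else mean_ffpt_same_start(N) * math.log(N))
```

The slow drift of the printed products towards $x_0^2/(4D) = 0.25$ illustrates how weak the many-searcher speed-up is when all walkers share one starting point.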
In this paper, we address the reaction dynamics between an immobile small target site and N searchers diffusing inside a bounded domain Ω of finite volume |Ω|. A major part of our analysis pertains to a very general geometry of the system, such that the bounded domain may have an arbitrary shape, the only constraint being that the boundary is smooth and thus has a finite area $|\partial\Omega|$. The target Γ with an area |Γ| and a characteristic extent R can be placed either on the boundary or in the bulk. For illustrative purposes, we will use some simple domains.
We take here a broader perspective beyond the mean shortest time associated with the arrival of the fastest among N Brownian searchers. Namely, we analyse the full distribution function of the time of the first reaction event. This comprises three different relevant aspects. First, it is already known from the single-searcher scenario that the full distribution of reaction times spans several distinct characteristic time scales [56][57][58],70]: the above-mentioned most probable time; a crossover from a hump-like region to a plateau-like regime in which all values of the first-passage time are nearly equally probable; and ultimately the mean first-passage time to the reaction event, followed by an exponential decay. When N searchers operate in parallel, we establish the full probability density of the fastest first-passage time.
Second, compared to previous works we include imperfect reactions characterised by a finite intrinsic reaction constant κ (with κ = ∞ corresponding to a perfect reaction). Namely, for a chemical reaction to be successful, it is not sufficient for the diffusing molecule just to arrive at the reaction centre, but a reaction activation barrier needs to be overcome [3,18,[59][60][61][62][63][64][65][66][67]. On top of this, many biomolecules present a specific binding area, further reducing the probability of immediate reaction on encounter. As a consequence repeated collisions with the target are required, leading to repeated excursions in the volume. The resulting further defocusing of the reaction times, analysed in detail for the case N = 1 in [56][57][58], was partially studied in [84,85] for searchers starting from the same location. Here we highlight some additional features.
The third and the most important difference to most previous works is that we consider the scenario in which the searching particles initially are placed at distinct random positions. The characteristic properties are then obtained by averaging over these fixed initial positions, which are assumed to be uniformly spread across the confining domain. Recall that for a target placed away from a boundary, in the thermodynamic limit when both the number of particles N and the volume |Ω| of the confining domain Ω tend to infinity while the concentration N/|Ω| is kept finite (neither very small nor too large), the Smoluchowski approach (see Appendix A) provides an exact solution in the case of a perfect reaction [92][93][94][95][96]. As shown in [97,98] this even holds for the case of imperfect reactions when the finite reactivity is modelled in terms of a stochastic Poisson gating process. In this sense it can be argued that our analysis provides a connection between single-molecule reaction kinetics and standard chemical kinetics based on effective reaction rates, even in the case when targets are located on the boundary. We also stress the conceptual difference in dealing with the uniform initial distribution as compared to a fixed starting position for all searchers (see also [88] for a related discussion). In both cases, the randomness of reaction times follows from random realisations of the particle trajectories. However, an additional source of randomness comes into play due to the random initial placement of each particle within the confining domain. This creates an intriguing new aspect to the problem, and we analyse its impact on the reaction kinetics by studying the typical behaviour and the averaged behaviour, and by quantifying the fluctuations around the latter. A uniform initial condition leads to a drastically different behaviour as compared to the case of a fixed starting point considered previously in [81][82][83][84][85].
In particular, as first noticed in [79] and later explored in [86][87][88], in a finite domain the contribution to the effective rate due to a diffusive search for a target decreases in proportion to $1/N^2$, while the contribution due to a penetration through a barrier against reaction vanishes as $1/N$ in the limit $N \to \infty$. This means that (i) placing N searchers at distinct positions gives a substantial increase in the reaction efficiency, which can be orders of magnitude larger as compared to the case when all N searchers start from the same point; (ii) as compared to a standard Collins-Kimball relation, which is valid in the thermodynamic limit and in which both contributions have the same $1/N$ dependence on the number of searchers, here the contribution due to a diffusive search acquires an additional power of the concentration of searchers (a strong concentration effect); and hence (iii) in the limit $N \gg 1$ reactions become inevitably controlled by chemistry (kinetically-controlled reactions) rather than by diffusion (diffusion-controlled reactions). Overall, the analysis developed here provides a comprehensive insight into the binding kinetics of the extremes of first-passage phenomena in finite systems with multiple diffusing reactants competing for a reactive target.
The paper is organised as follows. In Section II we discuss the implications of fixed versus uniform starting positions. Section III contains our main theoretical results, including a brief overview of general properties for bounded domains (Section III A), approximations for the mean first-reaction time (III B) and the variance (III C), the long-time behaviour of the volume-averaged probability density of the reaction time (III D), the fluctuations between individual realisations (III E), and the typical reaction time density (III G). Section IV is devoted to the discussion of these results and their implications for chemical physics and biological systems. Details of derivations and some additional analyses are presented in the Appendices.

II. FIXED VERSUS RANDOMLY DISTRIBUTED STARTING POINT
Before proceeding to the results we discuss the uniform initial condition analysed in this paper. In the conventional macroscopic description of chemical kinetics, the reaction rate is computed by averaging the probability density $\rho(t|x_0)$ of the first-passage time to the target from the starting point $x_0$,
$$J(t) \propto \int_\Omega dx_0\, c_0(x_0)\, \rho(t|x_0), \qquad (1)$$
where $c_0(x_0)$ is the initial concentration profile of particles. As we assume a uniform initial concentration in the bounded confining domain Ω with finite volume |Ω|, the reaction rate is simply proportional to the volume-averaged probability density, $J(t) \propto \overline{\rho(t)}$, with
$$\overline{\rho(t)} = \frac{1}{|\Omega|} \int_\Omega dx_0\, \rho(t|x_0). \qquad (2)$$
As a consequence, the reaction rate $J(t)$ incorporates two intertwined sources of randomness: a random choice of the initial position $x_0$ and a random trajectory from $x_0$ to the target. Even though both affect the first-passage time distribution, their respective roles are in fact not that well understood, thus deserving a more specific investigation. A volume-averaged description as entering equation (2) is justified whenever the number of diffusing particles is macroscopically large, so that one can actually speak about concentrations. In many of the biologically relevant settings discussed above, however, the number of diffusing particles can be small or moderately large (say, a few tens or a few thousand). One may therefore question whether the above macroscopic approximation is still applicable. In particular, what is the role of stochastic fluctuations between different realisations of the starting points? This question becomes particularly relevant in the short-time limit. In fact, if there is a single searcher, the density $\rho(t|x_0)$ decays exponentially fast at short times, as $e^{-c/t}$, where $c$ is proportional to the squared distance from $x_0$ to the target. In contrast, the volume average in equation (2) superimposes all $\rho(t|x_0)$, among which the contributions from particles that started infinitely close to the target are dominant.
As a result, $\overline{\rho(t)}$ exhibits not an exponential but a power-law behaviour as $t \to 0$, as we detail below. One may therefore question whether $\overline{\rho(t)}$ is representative of the actual behaviour here. But what happens for N particles, in particular, in the large-N limit? In other words, we aim at uncovering the role of stochastic fluctuations related to the random initial positions of the particles as we change N.
In what follows we investigate the gradual transition from the single-particle description, in which stochastic fluctuations are significant, to the macroscopic description. We consider N independent identical particles undergoing Brownian dynamics with diffusion coefficient D that search in parallel for a common target (Fig. 1). The first-passage time (FPT) $\tau_1$ for a single particle, started from $x_1$, is characterised by its survival probability $S(t|x_1) = \mathbb{P}_{x_1}\{\tau_1 > t\}$, from which the probability density is obtained by differentiation,
$$\rho(t|x_1) = -\frac{\partial S(t|x_1)}{\partial t}. \qquad (3)$$
The independence of the diffusing particles immediately implies that the first-passage time $T_N = \min\{\tau_1, \ldots, \tau_N\}$ of the fastest particle among N, the so-called fastest first-passage time (fFPT), follows from the N-particle survival probability
$$\mathbb{P}\{T_N > t\} = S(t|x_1)\, S(t|x_2) \cdots S(t|x_N). \qquad (4)$$
The knowledge of this survival probability for a single particle therefore fully determines the one for multiple non-interacting particles searching in parallel, and the problem of characterising the fFPT may look trivial at first thought.
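The product rule (4) can be illustrated in an exactly solvable case: for Brownian motion on a half-line with an absorbing origin, $S(t|x_0) = \mathrm{erf}(x_0/\sqrt{4Dt})$, and the single-particle FPT can be sampled exactly as $x_0^2/(2DZ^2)$ with a standard normal $Z$. The following seeded sketch (all parameters illustrative) compares the empirical survival of the minimum of $N$ FPTs with $S(t|x_0)^N$:

```python
import math, random

random.seed(42)
x0, D, N = 1.0, 1.0, 5          # illustrative values
trials = 20000

def sample_fpt():
    """Exact FPT of one Brownian particle (diffusivity D) from x0 to an
    absorbing origin: tau = x0^2 / (2 D Z^2) with Z standard normal."""
    z = random.gauss(0.0, 1.0)
    return x0 * x0 / (2.0 * D * z * z)

t_check = 0.2
# Empirical survival of the fastest of N independent walkers ...
hits = sum(1 for _ in range(trials)
           if min(sample_fpt() for _ in range(N)) > t_check)
empirical = hits / trials
# ... versus the product rule P{T_N > t} = S(t|x0)^N
theory = math.erf(x0 / math.sqrt(4.0 * D * t_check)) ** N
print(empirical, theory)
```
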
However, an exact explicit form of the survival probability $S(t|x)$ is known only for a very limited number of simple settings such as, for instance, the FPT to the endpoints of an interval or to the boundary of a sphere [99]. In turn, only the asymptotic behaviour or some approximate forms are known for most practically relevant cases such as the narrow escape problem [100][101][102][103][104][105][106][107][108][109] (see also the review [110]). Moreover, even if $S(t|x)$ is known explicitly, finding the moments of the fFPT remains a challenging and quite involved problem. While most former studies of these extreme first-passage times focused on the case when all particles start from the same fixed point (i.e., $x_1 = \ldots = x_N$) [74,76,79,[83][84][85], we here explore a different direction, namely, the case when the particles start from independent and uniformly distributed points. Qualitatively, as N increases, the particular random realisation of the starting points is expected to become irrelevant, and the macroscopic description should become increasingly accurate. Here, we investigate the transition from the single-particle setting to the macroscopic limit and address the practically important question of when and how such a description becomes applicable.

III. THEORY
To develop the theoretical framework we consider a given realisation of starting points $x_1, \ldots, x_N$. Then the probability density function (PDF) of the fFPT to a target domain Γ follows from equation (4) in the form
$$\rho_N(t|x_1, \ldots, x_N) = -\frac{\partial}{\partial t}\left[S(t|x_1) \cdots S(t|x_N)\right]. \qquad (5)$$
If $(x_1, \ldots, x_N)$ are independent and uniformly distributed points in the bounded confining domain Ω, the volume-averaged density reads
$$\overline{\rho_N(t)} = -\frac{\partial}{\partial t}\left[\overline{S(t)}\right]^N = N \left[\overline{S(t)}\right]^{N-1}\, \overline{\rho(t)}, \qquad (6)$$
where
$$\overline{S(t)} = \frac{1}{|\Omega|} \int_\Omega dx\, S(t|x) \qquad (7)$$
is the volume-averaged survival probability.
A. Summary of general single-particle properties
For a bounded domain Ω of an arbitrary shape with a smooth boundary ∂Ω the survival probability admits in the most general case the spectral expansion [99,111,112]
$$S(t|x) = \sum_{n=1}^{\infty} e^{-D\lambda_n t}\, u_n(x) \int_\Omega dx'\, u_n^*(x'), \qquad (8)$$
where the asterisk denotes the complex conjugate, $\lambda_n$ are non-negative eigenvalues sorted in ascending order, and the $u_n(x)$ are $L^2(\Omega)$-normalised eigenfunctions of the Laplace operator, $-\Delta u_n = \lambda_n u_n$, subject to appropriate boundary conditions. The eigenvalues and eigenfunctions encode all the information about the geometry of the domain, the location of the target and its size [113]. We consider three typical situations (in increasing order of generality): (i) a fully reactive boundary described by the Dirichlet boundary condition $(u_n)|_{\partial\Omega} = 0$ (here, Γ = ∂Ω); (ii) a partially reactive boundary described by the Robin boundary condition $(D\partial_n u_n + \kappa u_n)|_{\partial\Omega} = 0$, where κ is the reactivity and $\partial_n$ is the normal derivative oriented outward from the domain (here, Γ = ∂Ω); and (iii) a partially reactive target Γ on the otherwise reflecting boundary ∂Ω\Γ, described by the mixed boundary condition $(D\partial_n u_n + \kappa u_n)|_{\Gamma} = 0$ and $(\partial_n u_n)|_{\partial\Omega\setminus\Gamma} = 0$. We emphasise that the last situation includes two distinct cases: a target located on the reflecting boundary and a target located in the bulk (Fig. 1). In the latter case, the boundary of the confining domain Ω has two disjoint components: the outer reflecting boundary and the inner reactive target. Even though these two locations of the target are usually distinguished in the physical literature, their mathematical description is the same. Note that the target region Γ does not need to be connected, i.e., one can consider multiple targets.
The volume-averaged survival probability then reads
$$\overline{S(t)} = \sum_{n=1}^{\infty} c_n\, e^{-D\lambda_n t}, \qquad (9)$$
where
$$c_n = \frac{1}{|\Omega|} \left|\int_\Omega dx\, u_n(x)\right|^2. \qquad (10)$$
Similar spectral expansions hold for the PDF and its volume average,
$$\rho(t|x) = \sum_{n=1}^{\infty} D\lambda_n\, e^{-D\lambda_n t}\, u_n(x) \int_\Omega dx'\, u_n^*(x') \qquad (11)$$
and
$$\overline{\rho(t)} = \sum_{n=1}^{\infty} c_n\, D\lambda_n\, e^{-D\lambda_n t}. \qquad (12)$$
From these expansions one can easily derive the volume-averaged moments of the FPT $T_1$,
$$\overline{\langle T_1^m \rangle} = m! \sum_{n=1}^{\infty} \frac{c_n}{(D\lambda_n)^m}. \qquad (13)$$
Throughout the paper we distinguish the volume average over random initial points (denoted by the overline) and the ensemble average over random trajectories (denoted by angular brackets). At long times the survival probability and the PDF decay exponentially fast, with the decay time $1/(D\lambda_1)$ determined by the smallest eigenvalue $\lambda_1$. The volume average does not affect this decay. In contrast, the short-time behaviour is drastically different for fixed-point and volume-averaged quantities. In fact, the volume-averaged survival probability behaves as
$$\overline{S(t)} \simeq 1 - \frac{2|\Gamma|\sqrt{Dt}}{\sqrt{\pi}\,|\Omega|} \qquad (\kappa = \infty), \qquad (14a)$$
$$\overline{S(t)} \simeq 1 - \frac{\kappa|\Gamma|\,t}{|\Omega|} \qquad (\kappa < \infty), \qquad (14b)$$
which follows from the short-time expansion of the heat content [114][115][116][117] (see also Appendix B for the next-order correction). The qualitative arguments behind this asymptotic result are simple. In the diffusion-limited regime (κ = ∞) only those particles in a thin layer of width $\sqrt{Dt}$ near the reactive boundary Γ can reach this boundary and thus disappear within a short time. The relative fraction of such particles is $\sqrt{Dt}\,|\Gamma|/|\Omega|$, where |Γ| is the surface area of Γ and |Ω| is the volume of the domain Ω. In contrast, in the reaction-limited regime, the decay of $\overline{S(t)}$ is limited by κ and is thus proportional to t. An immediate consequence of Eq. (14) is
$$\overline{\rho(t)} \simeq \frac{|\Gamma|\sqrt{D}}{\sqrt{\pi}\,|\Omega|\,\sqrt{t}} \quad (\kappa = \infty), \qquad \overline{\rho(t)} \simeq \frac{\kappa|\Gamma|}{|\Omega|} \quad (\kappa < \infty). \qquad (15)$$
In contrast, $S(t|x_0)$ and $\rho(t|x_0)$ exhibit a fast exponential decay of the form $\propto A\,e^{-\alpha/t}$ as $t \to 0$, with constants A and α (α being proportional to the squared distance between $x_0$ and the target), see [70,76,85,[118][119][120]. As shown in [88], the above short-time asymptotic behaviour of the survival probability for a single particle determines the leading behaviour of the moments of the fFPT and its volume-averaged probability density $\overline{\rho_N(t)}$ in the limit $N \to \infty$.
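As a concrete illustration of the spectral expansion (9): for the interval (0, L) with two absorbing endpoints one has the standard results $\lambda_n = (\pi n/L)^2$ and $c_n = 8/(\pi^2 n^2)$ for odd n (even modes do not contribute). The following sketch, in illustrative units D = L = 1, checks the short-time asymptote (14a) with |Γ|/|Ω| = 2/L:

```python
import math

def S_bar(t, D=1.0, L=1.0, nmax=20001):
    """Volume-averaged survival probability on (0, L) with absorbing ends,
    from the Laplacian spectral expansion: c_n = 8/(pi^2 n^2), odd n only."""
    total = 0.0
    for n in range(1, nmax, 2):                # only odd n contribute
        lam = (math.pi * n / L) ** 2
        total += 8.0 / (math.pi ** 2 * n ** 2) * math.exp(-D * lam * t)
    return total

# Short-time asymptote: S_bar(t) ~ 1 - (4/L) sqrt(D t / pi)
t = 1e-4
print(S_bar(t), 1.0 - 4.0 * math.sqrt(t / math.pi))
```

For the interval the agreement is essentially exact, since (as noted below for Fig. 2(a)) the corrections to (14a) are exponentially small in this geometry.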
In particular, for a uniform distribution of the starting points, $\overline{\rho_N(t)}$ is close to the Weibull density
$$\overline{\rho_N(t)} \approx N B k\, t^{k-1}\, e^{-N B t^k}, \qquad (16)$$
where $k = 1/2$ and $B = 2|\Gamma|\sqrt{D}/(\sqrt{\pi}\,|\Omega|)$ for a perfectly reactive target, while $k = 1$ and $B = |\Gamma|\kappa/|\Omega|$ for a partially reactive target.
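The Weibull form (16) can be integrated in closed form: for $k = 1/2$ the mean is $\int_0^\infty e^{-NB\sqrt{t}}\,dt = 2/(NB)^2$, already exhibiting the $1/N^2$ scaling of the mean fFPT. A numeric sketch (units D = 1 and the interval value |Γ|/|Ω| = 2 are illustrative):

```python
import math

D = 1.0
ratio = 2.0                          # |Gamma|/|Omega| = 2/L for an interval with L = 1
B = 2.0 * ratio * math.sqrt(D) / math.sqrt(math.pi)

def weibull_mean(N, steps=200000):
    """Mean fFPT from the Weibull survival exp(-N B sqrt(t)) by midpoint
    integration (closed form: 2/(N B)^2)."""
    tmax = 100.0 / (N * B) ** 2      # survival is exp(-10) at the cutoff
    dt = tmax / steps
    return sum(math.exp(-N * B * math.sqrt((i + 0.5) * dt)) * dt
               for i in range(steps))

for N in (10, 100, 1000):
    print(N, weibull_mean(N), 2.0 / (N * B) ** 2)
```
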

B. Volume-averaged mean fFPT
We now turn to the many-particle case and obtain a simple approximation for the volume-averaged mean fFPT,
$$\overline{\langle T_N \rangle} = \int_0^\infty dt\, \left[\overline{S(t)}\right]^N, \qquad (17)$$
where we used equation (6). We first split this integral into two parts,
$$\overline{\langle T_N \rangle} = \int_0^T dt\, \left[\overline{S(t)}\right]^N + \int_T^\infty dt\, \left[\overline{S(t)}\right]^N. \qquad (18)$$
The first integral will be evaluated explicitly by using the short-time asymptotic formula (14). Under an appropriate choice of T, the contribution of the second term is greatly attenuated for large N and can be ignored. The "threshold time" T can be chosen to minimise the error of such an approximation, but we will see that the final result does not depend on T, as expected.
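The insensitivity to the threshold T can be probed numerically. With the short-time form $\overline{S(t)} \approx 1 - B\sqrt{t}$, the first integral evaluates (via a Beta-type integral) to approximately $2/(B^2(N+1)(N+2))$ once $NB\sqrt{T} \gg 1$. A sketch, with an illustrative value of B:

```python
import math

B, N = 2.0, 100                     # B = 2|Gamma| sqrt(D)/(sqrt(pi)|Omega|); value illustrative

def split_integral(T, steps=400000):
    """First part of the split: int_0^T (1 - B sqrt(t))^N dt, midpoint rule."""
    dt = T / steps
    return sum(max(0.0, 1.0 - B * math.sqrt((i + 0.5) * dt)) ** N * dt
               for i in range(steps))

closed = 2.0 / (B ** 2 * (N + 1) * (N + 2))
# The result should be insensitive to T as long as z = B sqrt(T) is not tiny:
for T in (0.01, 0.05, 0.25):
    print(T, split_integral(T), closed)
```
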

Perfect reactivity
For a perfectly reactive target (κ = ∞) and a finite domain we use Eq. (14a) to get
$$\int_0^T dt\, \left(1 - B\sqrt{t}\right)^N = \frac{2}{B^2}\, \frac{1 - (1-z)^{N+1}\left[1 + (N+1)z\right]}{(N+1)(N+2)}, \qquad (19)$$
where $B = 2|\Gamma|\sqrt{D}/(\sqrt{\pi}\,|\Omega|)$ and
$$z = B\sqrt{T} = \frac{2|\Gamma|\sqrt{DT}}{\sqrt{\pi}\,|\Omega|} \qquad (20)$$
is a number between 0 and 1 that is set by our choice of T. When
$$N z \gg 1, \qquad (21)$$
the correction term proportional to $(1-z)^{N+1}$ can be neglected, and one gets
$$\overline{\langle T_N \rangle} \approx \frac{2}{B^2 (N+1)(N+2)} = \frac{\pi |\Omega|^2}{2 D |\Gamma|^2 (N+1)(N+2)}. \qquad (22)$$
This approximate expression indeed does not depend on T, as it should. Figure 2(a) illustrates the remarkable accuracy of this approximation in the case of an interval (0, L) with absorbing endpoints (here, |Γ|/|Ω| = 2/L). This high accuracy results from the fact that the asymptotic relation (14) is exponentially accurate for the interval,
$$\overline{S(t)} = 1 - \frac{4\sqrt{Dt}}{\sqrt{\pi}\,L} + O\!\left(e^{-L^2/(4Dt)}\right). \qquad (23)$$
In general, however, the next-order term in the short-time behaviour of the survival probability is of order $O(t)$, which deteriorates the exponential accuracy of Eq. (22). In Appendix B, we compute the correction term of order $O(N^{-3})$ to this formula,
$$\overline{\langle T_N \rangle} \approx \frac{\pi |\Omega|^2}{2 D |\Gamma|^2 N^2}\left[1 + O(1/N)\right], \qquad (24)$$
where the $O(N^{-3})$ correction involves the mean curvature radius $\mathcal{R}$ of the target defined in Eq. (B2) (note that $\mathcal{R}$ can be negative, see below). The leading $1/N^2$ term was first mentioned in the seminal paper [79] and later re-discovered in [86,87]. Moreover, a transition from an intermediate 1/N scaling to the ultimate $1/N^2$ behaviour was analysed in [86]. A mathematical derivation of the leading $1/N^2$ term for a rather general class of diffusion processes was recently reported in [88]. Figure 2(b) shows the relative error of our approximation for diffusion inside a ball of radius L toward its absorbing boundary (i.e., Γ = ∂Ω and $\mathcal{R} = L$). As expected, the accuracy is lower at small N but still remains excellent. Finally, Fig. 2(c) presents the relative error for the case of a perfectly absorbing spherical target of radius R = 0.1L surrounded by a reflecting sphere of radius L (here, the mean curvature radius is negative: $\mathcal{R} = -R$). In contrast to previous examples, the asymptotic regime is established at much larger N, in agreement with the condition (21). In fact, here $z \propto |\Gamma| \propto (R/L)^2$ is 100 times smaller than in the case of an absorbing sphere, i.e., one needs a hundred-fold increase of N to get a comparable accuracy.

[Fig. 2 caption fragment: "... (18) and from our approximation (22) for the interval, and (24) for spheres. The dashed line indicates the relative error for the case when only the leading term is kept in Eq. (24). For the interval the approximation is remarkably accurate because the next-order correction to the survival probability is exponentially small. For a small absorbing target, the asymptotic regime is established at much larger N. Note that $\overline{S(t)}$ was computed here via the spectral expansion (C21) truncated after 10,000 terms to ensure accurate integration at short times."]

How can one rationalise the unexpected $1/N^2$ scaling? As the starting points of the searchers are spread over the domain, the particle closest to the target has the best chance to reach it first. As discussed in Appendix D, the average distance δ between the closest particle and the target is of the order of 1/N (and not $1/N^{1/d}$, as one might expect from a standard estimate of the average inter-particle distance in d dimensions). Moreover, if $N \gg 1$, there are many particles that start from a comparable distance. Even if the closest particle diffuses far away from the target, there is a high chance that one of the other particles that started at a distance of order 1/N will hit the target. In other words, this problem resembles the diffusion of a single particle on the interval (0, δ) with absorption at 0 (on the boundary) and reflection at δ, for which the mean FPT is $\delta^2/(2D) \propto 1/N^2$. We conclude that the $1/N^2$ scaling arises from the fact that many particles can start close to the target. Moreover, the generic properties of the heat kernel ensure that this scaling still holds for non-uniform initial distributions which do not exclude particles from a close vicinity of the target (see Appendix D). As we will discuss in Sec. IV, the decay $\overline{\langle T_N \rangle} \propto N^{-2}$ is much faster than and drastically different from the scaling law $\langle T_N \rangle \propto 1/\ln N$ obtained earlier in the case of a fixed starting point [79].
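The closest-particle argument can be checked in the simplest geometry: for N uniform points in a unit box above a flat absorbing wall, the nearest starting height is the minimum of N uniform random variables, with mean exactly $1/(N+1) \propto 1/N$; the crude one-particle estimate $\delta^2/(2D)$ then scales as $1/N^2$. A seeded sketch (parameters illustrative):

```python
import random

random.seed(1)
D = 1.0
trials = 4000

def mean_fpt_estimate(N):
    """Nearest starting height above an absorbing wall for N uniform points
    in a unit box, and the crude one-particle estimate delta^2/(2 D)."""
    gaps = [min(random.random() for _ in range(N)) for _ in range(trials)]
    delta = sum(gaps) / trials          # exact mean is 1/(N + 1)
    return delta, delta ** 2 / (2.0 * D)

for N in (10, 100, 1000):
    delta, t_est = mean_fpt_estimate(N)
    print(N, delta, t_est)              # delta ~ 1/N, t_est ~ 1/N^2
```
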

Partial reactivity
For a partially reactive target (κ < ∞), Eq. (14b) entails a different scaling of the volume-averaged mean fFPT with N,
$$\overline{\langle T_N \rangle} \approx \int_0^{1/B} dt\, (1 - Bt)^N = \frac{1}{B(N+1)} = \frac{|\Omega|}{\kappa |\Gamma| (N+1)}, \qquad (25)$$
with $B = \kappa|\Gamma|/|\Omega|$, i.e., the contribution to the volume-averaged mean fFPT in the regime of reaction control vanishes only as the first inverse power of N. The derivation of the correction term is presented in Appendix B. The leading 1/N term was recently obtained in [88]. Figure 3 illustrates the quality of this approximation for several values of κ for an interval of length L with a partially reactive endpoint (in which case Γ = {0} and thus |Ω|/|Γ| = L). The accuracy is lower than in the former case of absorbing boundaries, because the error of the approximation, $O(N^{-1})$, decreases more slowly than that of Eq. (24). In addition, we plot the relative error in the case of a spherical partially reactive target of radius R = 0.1L surrounded by a reflecting sphere of radius L. In this setting, the coefficient in front of the correction term, κ|Ω|/(D|Γ|) ≈ 5.8, is not small, so that this correction term turns out to deteriorate the quality of the approximation at small N, as can be seen from the dashed line. In turn, when N is large, the correction term improves the accuracy, as expected.
Since the ratio N/|Ω| in (25) can be interpreted as the mean concentration [A] of diffusing particles, our result is consistent with standard chemical kinetics, in which the reaction rate scales linearly with [A]. Note, however, that we also found the next-order term, which behaves as $[A]^2$. In contrast, expression (24) vanishes as $1/N^2$ and hence is proportional to the squared concentration of particles. This is very different from the Smoluchowski rate, which is linear in the concentration because it appears as a limiting form when both the volume and the number of particles grow to infinity, while their ratio is kept fixed and sufficiently small (see Appendix A). It means that, in contrast to a standard Collins-Kimball relation, in which both rate-controlling factors enter with the same power of the concentration, for bounded domains in the limit $N \to \infty$ the reaction inevitably enters a regime of kinetic control, while the controlling factor of diffusion becomes subdominant and defines only finite corrections. In a way, this situation is similar to the transition to kinetic control upon reduction of the size of the escape window, which was predicted recently for the so-called narrow escape problem [58,109].
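The 1/N law of Eq. (25) can be checked directly: with $\overline{S(t)} \approx 1 - \kappa|\Gamma|t/|\Omega|$, the integral $\int_0^{1/B}(1-Bt)^N\,dt$ equals $|\Omega|/(\kappa|\Gamma|(N+1))$ exactly. A sketch with illustrative parameter values:

```python
import math

kappa, gamma, omega = 0.5, 1.0, 1.0     # illustrative reactivity, |Gamma|, |Omega|
B = kappa * gamma / omega               # short-time slope: S_bar(t) ~ 1 - B t

def mean_ffpt_partial(N, steps=200000):
    """int_0^{1/B} (1 - B t)^N dt, the reaction-limited estimate of the mean fFPT."""
    T = 1.0 / B
    dt = T / steps
    return sum((1.0 - B * (i + 0.5) * dt) ** N * dt for i in range(steps))

for N in (10, 100, 1000):
    print(N, mean_ffpt_partial(N), omega / (kappa * gamma * (N + 1)))
```

Note the contrast with the perfect-reactivity case: halving κ doubles the mean fFPT at any N, while increasing N only buys a 1/N (not 1/N²) speed-up.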

C. Variance of the fFPT
As the fFPT T_N includes two sources of randomness (from the Brownian trajectories and from the random initial positions), there are at least two ways of characterising the fluctuations of T_N. In the first way, one may use the variance defined in Eq. (26), whose second term is just the square of the mean volume-averaged fFPT. From a physical point of view, it is more convenient to consider the volume-averaged conditional variance of Eq. (27): one first evaluates the variance of T_N for a fixed set of initial positions x_1, ..., x_N, and then averages this conditional variance over these positions, distributed uniformly in Ω.
To find both variances, we first evaluate the second moment of T_N with respect to the probability density ρ_N(t|x_1, ..., x_N), and then average it over the starting points. To get the variance in (26), it is sufficient to subtract the squared mean T_N studied in Sec. III B. In turn, for the conditional variance in (27), one needs to subtract the squared first moment T_N, averaged over the starting points. In order to further simplify this expression, we use the spectral expansions (8) and (11), along with the orthonormality of the Laplacian eigenfunctions, to derive the identities (31). In particular, the first identity implies a value which is smaller by a factor of two than T_1², compare equation (13). Using the identity (31a) we complete the above computation, from which the volume-averaged conditional variance of the fFPT follows. To proceed, we again replace S(t) by its short-time approximation (14). For a perfectly absorbing boundary (κ = ∞) we get from equation (29) the expression in which T is the cut-off time and B = 2|Γ|√D/[√π |Ω|] (note that exponentially small corrections due to (1 - B√T)^N are neglected in Eq. (35)). Substituting Eqs. (35) and (24) into Eq. (26), and proceeding similarly for the other terms, we get the volume-averaged conditional variance of the fFPT to leading order in the limit N → ∞. Dividing this variance by the squared volume-averaged mean value of the fFPT produces the coefficient of variation. Similarly, one can also obtain the squared coefficient of variation of T_N with respect to the random variables x_1, ..., x_N. The following point is to be emphasised. In the statistical analysis of first-passage phenomena in bounded domains [68,89,90], the coefficient of variation of the PDF is a meaningful characteristic which probes its "effective" broadness.
Typically, in situations when this parameter is much smaller than unity, one deals with a narrow distribution, and the actual behaviour is well captured by the first moment, the mean first-passage time, which sets a unique time scale. Only in this case are the first-passage times "focused", i.e., concentrated around the mean value. Conversely, when the coefficient of variation is of order unity or even exceeds it, the PDF, despite possessing moments of arbitrary order, exhibits in some respects a behaviour reminiscent of so-called broad distributions, such as heavy-tailed distributions which do not possess all moments. In this case, fluctuations around the mean value are comparable to the mean value itself, and hence it is most likely that the values of the first-passage times observed in two realisations of the process will be disproportionately different. In the case at hand, we notice that the coefficient of variation of the volume-averaged fFPT, Eq. (39), is of order unity, which implies that the fluctuations of this quantity around its mean value are of the order of this value itself. More strikingly, the fluctuations of the fFPT corresponding to randomly distributed initial positions exceed the mean value of the fFPT, see Eq. (40), which signifies that randomness in the initial positions of the searchers has a very pronounced effect on the spread of fFPTs. Therefore, we conclude that for both volume-averaged and "bare" fFPTs no unique time scale exists, and their actual behaviour cannot be fully characterised by their mean values.
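The same short-time device used for the mean also yields the second moment: with S̄(t) ≈ 1 - B√t one finds ⟨T_N²⟩ = ∫₀^∞ 2t [S̄(t)]^N dt = 24/[B⁴(N+1)(N+2)(N+3)(N+4)], so that the total squared coefficient of variation ⟨T_N²⟩/⟨T_N⟩² - 1 = 6(N+1)(N+2)/[(N+3)(N+4)] - 1 tends to 5 as N → ∞, of order unity, in line with the conclusion that the mean alone does not characterise the fFPT. This is our own consistency check of the short-time approximation, not a restatement of Eqs. (35)-(40); B is again an arbitrary illustrative value.

```python
import math

def trapezoid(f, a, b, n=200_000):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

B, N = 2.0, 100   # B is an arbitrary illustrative value

# <T_N^2> = int_0^inf 2 t [1 - B sqrt(t)]^N dt; substitute t = tau^2
second_num = trapezoid(lambda tau: 4 * tau ** 3 * (1 - B * tau) ** N, 0.0, 1.0 / B)
second_closed = 24.0 / (B ** 4 * (N + 1) * (N + 2) * (N + 3) * (N + 4))

mean_closed = 2.0 / (B ** 2 * (N + 1) * (N + 2))
cv2 = second_closed / mean_closed ** 2 - 1.0   # approaches 5 as N grows
```

For N = 100 the squared coefficient of variation is already close to its limiting value 5, i.e. fluctuations exceed the mean, consistent with the discussion above.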

D. Long-time behaviour
Now we consider the volume-averaged probability density for N particles. As usual, the long-time limit presents the simplest setting for bounded domains, due to the exponential decay of the survival probability and of the PDF, as well as of their volume averages. In particular, one gets from equation (9) that the characteristic decay time 1/(Dλ_1) for a single particle is simply reduced by the factor N. This steeper exponential decay shifts the PDF to smaller times. The decay time is a natural timescale of the diffusive exploration of the whole domain. For a single particle, this timescale is usually close to the mean FPT. Surprisingly, this is no longer true when multiple particles search for perfect targets starting from distinct random locations. As evidenced by our result (24), here the mean fFPT scales as 1/N², i.e., it is much smaller than 1/(Dλ_1 N). Therefore, the speedup of the reaction kinetics due to the deployment of N searchers starting at distinct random positions is striking, compared to the much more modest logarithmic increase in efficiency predicted in the case when all of them start from the same point. It is also instructive to compare the exponential decay in (41) to the asymptotic Weibull density (17). For partially reactive targets, k = 1 and the functional form of (17) generally agrees with the result (41), except for the coefficients. In contrast, for perfectly reactive targets, k = 1/2, and (17) exhibits a stretched-exponential decay with time. The difference between these two asymptotic behaviours stems from the order in which the limits are taken: the Weibull density (17) was derived for fixed t as N → ∞ [88], whereas expression (41) corresponds to the limit t → ∞ with fixed N.
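The N-fold reduction of the decay time can be verified directly for the interval (0, L) with two absorbing endpoints, where λ_1 = π²/L² and the volume-averaged single-particle survival probability has the eigenmode series S̄(t) = Σ_{k odd} [8/(πk)²] e^{-Dk²π²t/L²} (this explicit series follows from the standard spectral solution of Appendix C; note that its long-time prefactor 8/π² is exactly the constant c_1 discussed below). A sketch extracting the long-time decay rate of [S̄(t)]^N:

```python
import math

D = L = 1.0
lam1 = math.pi ** 2 / L ** 2          # first Laplacian eigenvalue of the interval

def surv_avg(t):
    """Volume-averaged survival probability on (0, L), absorbing endpoints."""
    return sum(8.0 / (math.pi * k) ** 2 * math.exp(-D * (k * math.pi / L) ** 2 * t)
               for k in range(1, 200, 2))

N = 100
t1, t2 = 2.0, 2.2                      # long times, in units of L^2 / D
# log S_N = N log S_avg, so the decay rate of S_N follows from log-slopes
rate = -N * (math.log(surv_avg(t2)) - math.log(surv_avg(t1))) / (t2 - t1)
expected = N * D * lam1                # N-fold faster than single-particle decay
```

At these times only the ground mode survives, and the fitted rate agrees with N D λ_1 to high precision, even though the mean fFPT itself is far smaller than 1/(N D λ_1).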

E. Fluctuations between individual realisations
When the starting point x_0 is random, the survival probability S(t|x_0) and the PDF ρ(t|x_0) for each fixed t can be considered as random variables themselves. This circumstance was emphasised in the insightful paper [122] studying the survival of an immobile target in the presence of diffusive traps. Moments of the survival probability regarded as a random variable, and the difference between the typical and mean behaviour, were analysed in [123][124][125] for the problem of survival of a diffusive particle in the presence of immobilised traps. The volume-averaged PDF, ρ(t), is simply the mean of ρ(t|x_0). In order to characterise fluctuations, it is instructive to compute the variance of ρ(t|x_0).

Single particle
We characterise fluctuations by the squared coefficient of variation of the random variable ρ(t|x_0), i.e., its variance with respect to the random starting point, divided by its squared mean; the second moment can be evaluated using equation (31c). At long times, we get γ(∞) = 1/c_1 - 1, where c_1 is defined by Eq. (10). For instance, for an interval (0, L) with absorbing endpoints, c_1 = 8/π² and thus γ(∞) = π²/8 - 1 ≈ 0.23 is of order unity. For short times, equation (15) shows that the coefficient of variation diverges in this limit, meaning that the volume-averaged density is no longer representative. This is confirmed by Fig. 5(a), discussed below.

Multiple particles
This illustrative computation can be extended to the case of N particles. Using the identities (31), we find the squared coefficient of variation γ_N(t). At long times, the ratios entering this expression approach 1/c_1. The Hölder inequality, applied to the first (positive) eigenfunction u_1, implies that c_1 cannot exceed 1. As a consequence, γ_N(t) exponentially diverges with N for large t. In section III F below we explain this paradoxical blow-up of fluctuations. At short times, we find the corresponding expansion (note that one can also obtain correction terms O(1)/N from the first expression). As the number of particles increases, there are two effects. On the one hand, the first (divergent) term is progressively attenuated. In other words, for any fixed t, taking N large enough we diminish γ_N(t) and hence get a narrower distribution of the random variable ρ = ρ_N(t|x_1, ..., x_N) (here we disregard the fact that ρ_N is itself a probability density and consider it as a random variable due to the randomness of x_1, ..., x_N).
On the other hand, the second term increases and rapidly reaches a constant contribution 1/2 to the squared coefficient of variation. Even though the second term comes with a negative sign, it cannot render the coefficient of variation negative: indeed, the short-time asymptotic relation (49) is only valid for t small enough that the first term in (49) provides the dominant contribution. Figure 4 shows excellent agreement between the exact expression for γ_N(t), its short-time asymptotic behaviour, and Monte Carlo simulation results for an interval (0, L). Note that we replaced the O(1) term in equation (44) by -1, because for an interval the short-time asymptotic formulas (15) are exponentially accurate, and the only correction, -1, comes from the definition of the coefficient of variation. Figure 5 illustrates the PDFs ρ_N(t|x_1, ..., x_N) for 100 random combinations of the starting points x_1, ..., x_N chosen uniformly and independently on the interval (0, L).
Fluctuations of the PDFs ρ_N(t|x_1, ..., x_N) around their volume-averaged mean ρ_N(t) are present and significant for all N ranging from 1 to 1000. The broadness of these fluctuations is even better seen in figure 6, which illustrates 10,000 random realisations of the starting points. This is a rather counter-intuitive observation, as some sort of convergence to the volume-averaged PDF would be expected as N increases. Note that the asymptotic Weibull density (17) accurately describes the volume-averaged mean ρ_N(t) already for N = 100.
We emphasise that the PDF is rapidly shifted towards smaller times as N increases. Intuitively, one could expect that this shift is controlled by the long-time exponential decay and its timescale, T_1/N, see section III D. However, the appropriate time scale here is the volume-averaged mean fFPT, T_N, which decays much faster, as 1/N², see the vertical dashed lines. We stress again that the decay time here is not related to the mean fastest FPT.
The general shape of the PDFs ρ_N(t|x_1, ..., x_N) differs significantly from the volume average ρ_N(t) in figure 5. For a single realisation the starting points are a finite distance away from the target, which effects the exponential cutoff at very short times and the emergence of the peak (the most probable FPT). At longer times we see the exponential shoulder with time scale T_1/N. In figure 5(a) individual realisations feature a slight bend at intermediate times, between the most probable FPT and the exponential shoulder; this feature is, however, lost with increasing N. In this one-dimensional setting the pronounced plateau in the FPT density uncovered in [57] is not present; a further separation of the most probable time and the longest time scale T_1/N would build up for decreasing reactivity. The volume-averaged PDF ρ_N(t), in strong contrast, includes particles starting arbitrarily close to the target, and here we do not see the initial exponential suppression. Instead, ρ_N(t) decays monotonically, and the only remaining relevant scale is T_N. Nevertheless, the FPT density remains broad, as quantified by the 10%-quantiles in figure 5 and our results for the coefficient of variation.
Note that the small q-quantiles can be approximated by using again the short-time asymptotic formula (14). In fact, writing this relation, which is valid for small t and thus small q, one easily gets the quantile estimate. As can be seen in figure 5, the relative locations of the quantiles hardly change for N = 10, 100, and 1000, but are just shifted to shorter times, as expected. This approximate relation is indeed quite accurate. Here, the scaling is again 1/N², which is a direct consequence of the leading term O(√t) in the short-time expansion of the volume-averaged survival probability.
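To see how the quantiles inherit the 1/N² scaling, one can invert the short-time approximation directly: setting [S̄(t_q)]^N ≈ (1 - B√t_q)^N = 1 - q gives t_q = [(1 - (1-q)^{1/N})/B]² ≈ [q/(NB)]² for small q. The sketch below uses an arbitrary illustrative B, and the inversion is our paraphrase of the approximate relation rather than its verbatim form.

```python
B = 2.0   # illustrative value of B = 2|Gamma| sqrt(D) / (sqrt(pi) |Omega|)

def quantile_exact(q, N):
    """Invert S_N(t) = (1 - B sqrt(t))^N = 1 - q for the q-quantile t_q."""
    return ((1.0 - (1.0 - q) ** (1.0 / N)) / B) ** 2

def quantile_small_q(q, N):
    """Leading small-q behaviour: t_q ~ (q / (N B))^2, i.e. a 1/N^2 shift."""
    return (q / (N * B)) ** 2

t100 = quantile_exact(0.1, 100)
t1000 = quantile_exact(0.1, 1000)
shift = t1000 / t100          # should be close to (100/1000)^2 = 0.01
```

Increasing N tenfold shifts each quantile by a factor close to 1/100, reproducing the rigid 1/N² displacement of the quantile pattern seen in figure 5.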

F. Paradoxical divergence and its rationalisation
The constant c_1 from equation (10) determines the behaviour of the coefficient of variation in the long-time limit. This constant is independent of the size of the domain, i.e., any dilation of the domain does not change c_1. For instance, one has c_1 = 8/π² ≈ 0.81 for an interval with absorbing endpoints (of any length) and c_1 = 6/π² ≈ 0.61 for an absorbing sphere (of any radius). According to inequality (48), this coefficient cannot exceed 1, and it is equal to 1 only when the eigenfunction u_1 is constant, in other words, for a reflecting boundary without any reaction. As a consequence, the coefficient of variation γ_N at long times exhibits an exponential divergence with N. This observation appears paradoxical; we now resolve this counter-intuitive behaviour.
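The quoted values c_1 = 8/π² (interval) and c_1 = 6/π² (ball) can be reproduced numerically. Assuming the definition c_1 = (∫_Ω u_1 dx)²/|Ω| for the L²-normalised ground eigenfunction u_1 (an assumption on our part, chosen because it is consistent with both quoted values; Eq. (10) itself is not reproduced here), a midpoint-rule check reads:

```python
import math

M = 100_000   # quadrature points (midpoint rule)

# interval (0, 1), absorbing endpoints: u1(x) = sqrt(2) sin(pi x)
integral = sum(math.sqrt(2.0) * math.sin(math.pi * (i + 0.5) / M) / M
               for i in range(M))
c1_interval = integral ** 2 / 1.0                     # |Omega| = 1

# ball of radius 1, absorbing surface: u1(r) = beta1 sin(pi r)/(pi r),
# with beta1 = sqrt(pi/2) normalising the integral of |u1|^2 over the ball;
# the 3d volume element turns the integrand into 4 beta1 r sin(pi r)
beta1 = math.sqrt(math.pi / 2.0)
integral3d = sum(4.0 * beta1 * r * math.sin(math.pi * r) / M
                 for i in range(M) for r in [(i + 0.5) / M])
c1_ball = integral3d ** 2 / (4.0 * math.pi / 3.0)     # |Omega| = 4 pi / 3
```

Both quadratures reproduce the quoted constants to high accuracy, and both are manifestly scale invariant, as any dilation cancels between (∫u_1)² and |Ω|.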
First, we recall that taking the limit N → ∞ while keeping the volume |Ω| fixed corresponds to a diverging concentration, which is evidently unphysical. Moreover, in typical applications the diffusing particles search for a small target contained within a much larger confining domain with reflecting boundary. In this setting, the ground eigenfunction u_1(x) is close to a constant, so that c_1 is close to 1. In other words, if the target is fixed but the confining domain grows, c_1 tends to 1, and the double limit N → ∞ and |Ω| → ∞ with fixed concentration N/|Ω| leads to a finite value of γ_N, thus mending this paradoxical divergence. To clarify these issues, it is therefore important to investigate the behaviour of c_1 in the small-target limit. We first analyse an exactly solvable setting and then briefly discuss the general case.
We consider the case of a spherical target of radius R surrounded by a larger reflecting sphere of radius L. In Appendix C 4, we provide the exact formula (C22) for the coefficients c_n, which depend on the solutions α_n of Eq. (C20). For small R the smallest solution α_0 of this equation is expected to be small. Denoting ε = R/L ≪ 1, we consider separately the cases of infinite and finite reactivity.
(i) For infinite reactivity (κ = ∞) one substitutes α_0 ≃ a_1 ε + a_2 ε² + O(ε³) into equation (C20) and expands it into a Taylor series to determine the expansion coefficients a_i, from which the small-ε behaviour of c_1 follows. As a consequence, the coefficient of variation at long times can be approximated in terms of the concentration [A] = N/|Ω| of the diffusing particles.
(ii) In the case of a finite reactivity (κ < ∞) more terms are needed. Substituting α_0 ≃ a_1 ε + a_2 ε² + a_3 ε³ + a_4 ε⁴ + O(ε⁵) into Eq. (C20), we find the coefficients and thus the small-ε form of c_1. To ensure the validity of this asymptotic analysis, the first correction term in equation (54) has to be small, which is realised when the inequality κR² ≪ DL holds. This inequality is evidently valid in the low-concentration limit, when the coefficient of variation vanishes, or when diffusion is very fast. Moreover, it holds in the case of a sufficiently low reactivity κ: in this case, before a reaction eventually takes place, there are multiple encounters with the target interspersed with bulk excursions, such that the initial spatial heterogeneity due to the random distribution of particles gets smeared out, rendering γ_N(∞) small.
[Displaced figure caption fragment: the exact PDF (6), with ρ(t) and S(t) given by (C4a) and (C6a); red filled circles represent the asymptotic Weibull density (17); filled blue triangles show the typical PDF ρ_{1,typ}(t) from equation (58), computed by numerical integration over the starting point; coloured bars at the top present ten 10%-quantiles based on S_N(t).]
In contrast, a larger domain
favours a higher degree of spatial heterogeneity and thus increases γ_N. Note also that, in principle, one can relax this restrictive relation between the parameters and use equations (C22) and (47) directly. This permits one to compute c_1, and hence γ_N(∞), for any values of the parameters.
In the above setting, the target was a small ball located in the bulk and surrounded by a large reflecting sphere. If the target is located on the outer sphere (e.g., a small circular hole), one has to consider mixed Dirichlet-Neumann boundary conditions on the sphere, for which there is no exact solution for the survival probability. In this case, one can either resort to the approximate solution in [58], or rely on the asymptotic analysis in the narrow escape limit. In the latter case, Ward and Keller established for a perfectly reactive target the expansion (56), in which the coefficient E_1 is expressed as an integral of the first-order correction to the ground eigenfunction [121]. This coefficient can also be expressed in terms of the surface Neumann Green function. When the confining domain Ω is a ball, the explicit form of this Green function was given in [105]. As a consequence, one can access the coefficient E_1 and thus the asymptotic behaviour of c_1 in the narrow escape limit. Moreover, this computation is valid for multiple small targets located on the sphere. Importantly, the asymptotic relation (56) remains valid even for nonspherical three-dimensional domains, even though the computation of E_1 is much harder. We emphasise the distinct scaling of 1 - c_1 with ε: it is linear for a target on the boundary and quadratic for a target in the bulk, see Eqs. (56) and (52). In two dimensions, the approach of c_1 to 1 is much slower, where 2εL is the length of the target region; see [104,121] for details. An extension of these asymptotic results to partially reactive targets located on the boundary presents an interesting perspective.

G. Typical PDF
Following reference [122] we aim at determining the typical PDF of the fastest FPT, where τ is a timescale that renders the expression in the logarithm dimensionless. For this purpose, we need to compute the average in (59), where
U(t) ≡ ln[τρ(t|x_1)/S(t|x_1) + ... + τρ(t|x_N)/S(t|x_N)]. (60)
As x_1, ..., x_N are independent, uniformly distributed starting points, the sum in equation (59) contains N identical terms. In turn, the first term U(t) is more involved, as the logarithm couples different particles. Using the following representation for the logarithm we get an expression in terms of
η(z; t) = (1/|Ω|) ∫_Ω dx exp[-zτρ(t|x)/S(t|x)].

Long-time behaviour
The long-time behaviour is easy to determine. As ρ(t|x) = -∂S(t|x)/∂t, the ratio ρ(t|x)/S(t|x) is a positive function that increases monotonically from 0 at t = 0 to Dλ_1 as t → ∞. As a consequence, equation (62) yields the long-time limit. Moreover, we recall that u_1(x) can be defined to be positive (factors |Ω|^{±1/2} were included to render the arguments of the logarithms dimensionless). In this limit, one also has ρ(t|x_0)/S(t|x_0) ≃ Dλ_1. In contrast, the analysis of the short-time behaviour of the typical PDF is much more difficult and remains an open problem for future research.

IV. DISCUSSION
We studied the fFPT distribution for diffusion of N Brownian particles in a finite domain Ω with smooth boundary ∂Ω. The particles react with a surface region Γ of perfect or partial reactivity contained in ∂Ω. We obtained the fFPT moments and probability density for the uniform initial condition. In the previously considered case of a common, fixed initial position, the mean fFPT was shown to exhibit a very slow 1/ln N scaling [74,76,79,[83][84][85]. In strong contrast, when the N particles are released random-uniformly in the domain Ω, the mean fFPT shows the much faster decay 1/N² for a perfectly reactive target and 1/N for a target with finite reactivity. In the former case, this scaling also differs from that of the decay time, T_1/N. We rationalised this strong dependence on the specific initial condition by scaling arguments and supported it by direct numerical computations.
While the scenario of a fixed initial condition bears relevance for the example of the released sperm cells considered in [74][75][76], spread-out initial conditions are relevant in other biological systems. Thus, in biofilms, autoinducer molecules are released at the cell surfaces all over the colony [72,73]. In intracellular gene regulation, so-called global regulatory proteins are distributed approximately evenly throughout the cell volume [43]. Other regulatory proteins may abound around their encoding gene [41], and genes that are controlled by a specific transcription factor tend to be located next to the gene encoding this transcription factor [42,43]. These examples demonstrate the need for a theory of spread-out initial conditions such as the one developed here.
In many biological systems the searching entities, such as sperm cells or regulatory proteins, are produced at the expense of chemical and energetic resources. At the same time, a large number of searchers guarantees a high degree of redundancy, which is of particular importance for singular, crucial events such as reproduction in the case of sperm cells. In contrast, for processes that run continuously, such as gene regulation, the biochemical resources would quickly be drained if excessively many molecules had to be produced. In this sense, and given the above scenarios in which the molecules abound in the vicinity of their target, the 1/N and 1/N² scaling provides a new perspective on why relatively few molecules may already be sufficient to effect comparatively speedy regulation.
While in our discussion we contrasted a fixed initial position with the scenario of random-uniform initial positions, figure 5 indicates that the value of the mean fFPT provides the correct scale even for individual random realisations of starting points. In this sense, one may roughly distinguish two classes of initial conditions: (i) either the particles are allowed to start near their target (distributed in the entire domain or concentrated in a subdomain around the target), (ii) or the initial particle position is characterised by a minimal distance to the target. The former class corresponds to our analysis here (see also [88]), the latter class is described by the results in previous works [74,76,[83][84][85].
In summary, we believe that our results for the statistics of the fFPT for spread-out initial particle positions provide a fresh perspective on the field of many-particle search for an immobile target in a finite domain and its applications. It should be of interest to further develop this approach both from a mathematical point of view and with regard to concrete applications.
Appendix A

In this limit, one gets the limiting form of the reaction rate. For the survival probability of a spherical target of radius R one has an explicit expression (see, e.g., [120]). In the limit t → ∞ the dominant behaviour of F(t) is provided by the first term, which yields the Collins-Kimball relation, where K = 4πR²κ is the intrinsic reaction constant.
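The interplay of the two rate-controlling factors can be made concrete with the textbook Collins-Kimball rate constant k = k_S K/(k_S + K), where k_S = 4πDR is the Smoluchowski rate and K = 4πR²κ the intrinsic reaction constant; we quote this standard harmonic form here rather than reproducing Eq. (A2) verbatim.

```python
import math

def collins_kimball(D, R, kappa):
    """Textbook Collins-Kimball rate: harmonic combination of the
    Smoluchowski (diffusion) rate k_S = 4 pi D R and the intrinsic
    reaction constant K = 4 pi R^2 kappa."""
    k_S = 4.0 * math.pi * D * R
    K = 4.0 * math.pi * R ** 2 * kappa
    return k_S * K / (k_S + K)

D, R = 1.0, 0.1
k_S = 4.0 * math.pi * D * R
k_fast = collins_kimball(D, R, kappa=1e9)   # diffusion-controlled limit
k_slow = collins_kimball(D, R, kappa=1e-6)  # reaction-controlled limit
```

For κ → ∞ the rate approaches the Smoluchowski value k_S, whereas for small κ it reduces to the intrinsic constant K, i.e. the kinetically controlled regime.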
In the limit of an infinitely large intrinsic reactivity (κ → ∞), one retrieves from Eq. (A2) the classical Smoluchowski result.

Appendix B: Improved formula for the mean fFPT

Our computation of the volume-averaged mean fFPT relied on the leading term of the short-time expansion of the survival probability. We can improve this computation by accounting for the next-order correction.

Perfect reactivity
For the perfect reactivity, one has the expansion [114,115] with B = (2/√π)√D |Γ|/|Ω|, where H(s) is the mean curvature of the boundary at the point s. Substituting this expression into the first integral in Eq. (19), we change the integration variable to z = 1 - B√t + Ct. For large N, the main contribution comes from the vicinity of z = 1, so that one can expand the function f(z) into a Taylor series around this point; we also used that f(1) = 0. Integrating term by term, one gets the result with f′(1) = -1/B² and f″(1) = 6C/B⁴. As a consequence, we get an improved formula which takes the form of Eq. (24).

Partial reactivity
For partial reactivity, one has the analogous expansion [116]. Substituting it into the first integral in Eq. (19), we change the integration variable to z = 1 - Bt + Ct^{3/2}, with z_0 = 1 - BT + CT^{3/2} and f(z) = 1/(B - (3C/2)√(t(z))), where t(z) is the inverse of the above function z(t). Even though f(z) can be written explicitly in terms of the root of a cubic polynomial, this is not needed, as we only use its expansion around z = 1. Integrating term by term, one gets the result which takes the form of Eq. (25).

Appendix C: Explicit solutions
In this Appendix, we summarise the well-known explicit solutions for two simple settings that we used for illustrations: an interval and a sphere.

Interval with absorbing endpoints
For the interval (0, L) with absorbing endpoints at 0 and L, the survival probability admits two equivalent explicit expansions. The first relation is numerically more convenient at small t, the second at large t. Note also that the Laplace-transformed probability density is known explicitly, from which one can easily compute positive-order moments.
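The complementarity of the two expansions can be checked numerically: the eigenmode series S(t|x_0) = Σ_{k odd} (4/πk) sin(kπx_0/L) e^{-Dk²π²t/L²} converges quickly at large t, while a method-of-images representation in terms of error functions converges quickly at small t, and both must agree at intermediate times. The image series below is the standard reflection construction, written in our own notation rather than copied from the text.

```python
import math

D = L = 1.0

def surv_eigen(t, x0, kmax=401):
    """Spectral expansion, efficient at large t."""
    return sum(4.0 / (math.pi * k) * math.sin(k * math.pi * x0 / L)
               * math.exp(-D * (k * math.pi / L) ** 2 * t)
               for k in range(1, kmax, 2))

def surv_images(t, x0, nmax=20):
    """Method-of-images expansion (sources at 2nL + x0, sinks at 2nL - x0),
    obtained by integrating the image propagator over (0, L)."""
    s = 2.0 * math.sqrt(D * t)
    total = 0.0
    for n in range(-nmax, nmax + 1):
        total += 0.5 * (math.erf((L - x0 - 2 * n * L) / s)
                        + math.erf((x0 + 2 * n * L) / s)
                        - math.erf((L + x0 - 2 * n * L) / s)
                        - math.erf((2 * n * L - x0) / s))
    return total
```

In practice one switches between the two representations at t of order L²/D, which keeps the number of required terms small in both regimes.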

Interval with a partially reactive endpoint
We also consider the interval (0, L) with a partially reactive endpoint at L and a reflecting endpoint at 0, for which the Laplacian eigenvalues and eigenfunctions are (see, e.g., [126]) λ_n = α_n²/L² and u_n(x) = β_n cos(α_n x/L), where β_n are normalisation constants and the α_n are the positive zeros of the equation α sin α = h cos α, with h = κL/D. For each n = 1, 2, ..., α_n belongs to the interval [π(n - 1), π(n - 1/2)), which facilitates their numerical computation. Note that this problem is equivalent to diffusion on the interval (-L, L), both of whose endpoints are partially reactive. The spectral expansion of the survival probability S(t|x_0) is given in [126]; its volume average follows by integration. The probability density ρ(t|x_0) and its volume average ρ(t) follow by taking the time derivative.
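The bracketing property makes the numerical computation of the α_n elementary, e.g. by bisection of f(α) = α sin α - h cos α on each interval [π(n - 1), π(n - 1/2)). A sketch with the illustrative choice h = κL/D = 1 (for h = 1 the first root solves α tan α = 1, i.e. α_1 ≈ 0.8603):

```python
import math

def alpha_roots(h, n_roots=5, iters=100):
    """Positive zeros of f(a) = a sin(a) - h cos(a); the n-th zero is
    bracketed in [pi (n-1), pi (n-1/2)), as stated in the text."""
    def f(a):
        return a * math.sin(a) - h * math.cos(a)
    roots = []
    for n in range(1, n_roots + 1):
        lo, hi = math.pi * (n - 1), math.pi * (n - 0.5)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            # keep the sign change inside [lo, hi]
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

roots = alpha_roots(h=1.0)
```

The endpoint signs of f alternate between the bracket boundaries for any h > 0, so the bisection is guaranteed to converge for every n.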

Absorbing sphere
We also consider the first-passage problem to the absorbing boundary of a ball of radius L, for which the survival probability is also explicitly known; the PDF follows by differentiation. From these expressions we also obtain the coefficients of the long-time expansion, among them b_1, whose expression involves the Riemann zeta function ζ(z), with ζ(3) ≈ 1.2021. Here we used the following expression for the first eigenfunction: u_1(x) = β_1 sin(π|x|/L)/(π|x|/L), where β_1 = √(π/(2L³)) is the normalisation factor. Note also that c_1 = 6/π².

Spherical target surrounded by a reflecting sphere
We also consider the more elaborate situation of a spherical target of radius R, which is surrounded by a larger reflecting concentric sphere of radius L. The related first-passage time distribution for a single particle was studied in [57]; in particular, the exact spectral expansion for the survival probability was provided there.

Appendix D: Rationalisation of the scaling relation T_N ∝ 1/N²

In this Appendix, we generalise and rationalise the scaling relation T_N ∝ 1/N² in terms of the closest particle to the target. Let us start with the example of the interval (0, L) with an absorbing endpoint at 0 and a reflecting endpoint at L. For N independent particles randomly placed on the interval at positions x_1, ..., x_N, the distance to the target at 0 is δ = min{x_1, ..., x_N}. This is a random variable characterised by the simple law P{δ > x} = (P{x_1 > x})^N = (1 - x/L)^N, where we assumed the uniform distribution for the x_i. As a consequence, the mean shortest distance is
⟨δ⟩ = ∫_0^L dx (1 - x/L)^N = L/(N + 1). (D1)
Similarly, the second, third, etc. closest particles are located on average at distances 2L/(N + 1), 3L/(N + 1), etc. This is expected: if one had to place N particles at equal distances between any two neighbours (and from the endpoints), one would get their positions at kL/(N + 1), with k = 1, 2, ..., N. If N is large, there are many particles that start at a distance of order L/N from the target, and there is a high chance that one of these particles reaches the target within a time of order δ²/D, in agreement with Eq. (22), up to a numerical prefactor.
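The order-statistics argument is easy to check: for N independent uniform points on (0, L), the minimum has mean exactly L/(N + 1). A sketch verifying this both by quadrature of the (D1)-type integral and by a seeded simulation:

```python
import random

L, N, M = 1.0, 50, 100_000
random.seed(7)

# quadrature of <delta> = int_0^L (1 - x/L)^N dx (trapezoidal rule)
h = L / M
quad = h * (0.5 * (1.0 + 0.0) + sum((1.0 - i * h / L) ** N for i in range(1, M)))

# Monte Carlo: mean position of the closest of N uniformly placed particles
trials = 20_000
mc = sum(min(random.uniform(0.0, L) for _ in range(N)) for _ in range(trials)) / trials

exact = L / (N + 1)   # order-statistics result quoted in the text
```

Note that the distribution of the minimum is itself broad (its standard deviation is comparable to its mean), which is why many trials are needed for a tight Monte Carlo estimate.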
The above argument can easily be extended to arbitrary bounded domains with a target having a smooth boundary Γ. Here, we define again the shortest distance to the target as δ = min{|x_1 - Γ|, ..., |x_N - Γ|}. This random variable is again characterised by the law P{δ > x} = (P{|x_1 - Γ| > x})^N due to the independence of the starting points. Even though the probability law for the distance of a single particle is now complicated, its asymptotic behaviour remains simple. Indeed, for small x, P{|x_1 - Γ| < x} is the probability of finding x_1 within a thin boundary layer of width x near the target Γ. For smooth Γ and small x, the volume of this layer can be estimated as x|Γ|, where |Γ| is the surface area of Γ. As a consequence, P{|x_1 - Γ| < x} ≃ x|Γ|/|Ω| as x → 0. The mean shortest distance is then ⟨δ⟩ = ∫_0^L dx (1 - P{|x_1 - Γ| < x})^N, where L is the maximal distance from the target: L = max_{x∈Ω} |x - Γ|. For large N, the main contribution comes from x close to 0, for which P{|x_1 - Γ| < x} is small. Substituting the above approximation in this region, one gets
⟨δ⟩ ≈ ∫_0^{L′} dx (1 - x|Γ|/|Ω|)^N = |Ω|/[|Γ|(N + 1)], (D2)
where the integral was truncated at L′ = |Ω|/|Γ| to ensure that the integrand remains positive. A similar computation shows that the mean distance to the target from the next-to-closest points is of the same order. If δ is small compared to the curvature radius of the target, diffusion in the lateral direction does not matter, and the problem is again reduced to diffusion of a single particle on the interval (0, δ), for which the mean FPT is again δ²/(2D). This rationalises the scaling relation T_N ∝ 1/N² that we derived in Eq. (22).
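The boundary-layer estimate can be probed in three dimensions for a spherical target of radius R inside a reflecting sphere of radius L with uniformly distributed starters, where the prediction reads ⟨δ⟩ ≈ |Ω|/[|Γ|(N + 1)] with |Ω| = (4π/3)(L³ - R³) and |Γ| = 4πR². The sketch below compares this with the exact quadrature ⟨δ⟩ = ∫ (1 - P{|x_1 - Γ| < x})^N dx, where P{|x_1 - Γ| < x} = [(R + x)³ - R³]/(L³ - R³) is our explicit form for this geometry.

```python
import math

R, L, N = 0.5, 1.0, 500

def p_within(x):
    """P{distance from a uniform point in the shell to the target < x}."""
    return ((R + x) ** 3 - R ** 3) / (L ** 3 - R ** 3)

# exact mean shortest distance by midpoint quadrature over [0, L - R]
M = 200_000
h = (L - R) / M
mean_delta = h * sum((1.0 - p_within((i + 0.5) * h)) ** N for i in range(M))

# boundary-layer prediction |Omega| / (|Gamma| (N + 1))
vol = 4.0 * math.pi / 3.0 * (L ** 3 - R ** 3)
area = 4.0 * math.pi * R ** 2
predicted = vol / (area * (N + 1))
```

For N = 500 the mean shortest distance is already far smaller than the curvature radius R, so the linearised boundary-layer probability is accurate to about one per cent.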
An important consequence of this reasoning is that the scaling 1/N² holds far beyond the considered case of the uniform distribution of starting points. First, one realises that only the behaviour of the distribution near the boundary of the target matters in the limit of large N; indeed, the integrand (1 - P{|x_1 - Γ| < x})^N becomes exponentially small when P{|x_1 - Γ| < x} is not small. Second, even in the vicinity of the target, the distribution need not be uniform. For instance, if P{|x_1 - Γ| < x} ≃ (x/A)^α with some scale A > 0 and an exponent α > 0, the integral in Eq. (D2) yields δ ≃ A/(αN + 1), i.e., the scaling 1/N is merely affected by a prefactor. This confirms the generality of the scaling relation (22). The same conclusion can also be reached on a more rigorous basis by considering the short-time asymptotic behaviour of the heat kernel. In fact, the asymptotic relation (14) for the volume-averaged survival probability holds in a much more general setting, in particular, after averaging with a smooth nonuniform distribution of the starting point [114,116,117]. We do not explore this direction in the present paper.
Only if the distribution of the starting point excludes points near the target does the volume-averaged mean fFPT scale differently. A standard example is the Dirac distribution that fixes the starting point, which was studied previously and yields the T_N ∝ 1/ln N behaviour [79] (see also Sec. I for further references). A similar behaviour is expected for any distribution that excludes points from some δ_0-vicinity of the target, i.e., if there exists δ_0 > 0 such that P{|x_1 - Γ| < δ_0} = 0. Moreover, this strict exclusion condition can be relaxed. For instance, if P{|x_1 - Γ| < x} ≃ e^{-A/x} as x → 0 (with some scale A > 0), the integral in Eq. (D2) again yields the 1/ln N behaviour for large N. A more systematic analysis of this problem, and the determination of the sharp exclusion condition, presents an interesting problem for future research.