Quantifying configurational information for a stochastic particle in a flow-field

Flow-fields are ubiquitous systems that transport vital signaling molecules necessary for system function. While information regarding the location and transport of such particles is often crucial, it is not well understood how to quantify the information in such stochastic systems. Using the framework of nonequilibrium statistical physics, we develop theoretical tools to address this question. We observe that rotation in a flow-field does not explicitly appear in the generalized potential that governs the rate of system entropy production. Specifically, in the neighborhood of a flow-field, rotation contributes to the information content only in the presence of strain, and then with a comparatively weaker contribution than strain and at higher orders in time. Indeed, strain, and especially the flow divergence, contribute most strongly to transport properties such as the particle residence time and the rate of information change. These results shed light on how information can be analyzed and controlled in complex artificial and living flow-based systems.


Introduction
Fluid flows are present in diverse settings from microfluidic devices to solid-state and living systems, and they often transport small particles that undergo stochastic motion. Information regarding the location and transport of these particles can be crucial for the function of the system. Living organisms are governed or regulated by signaling molecules in diverse situations, from the response of a slime mold to a nutrient droplet [1] to the healthy development of mammals [2]. Recently, complex flow patterns have been measured in brain ventricles [3], which contain guidance molecules that cue the migration and development of young neurons [4]. Hence, it is of interest to quantify the information that is carried by a small particle in a flow-field [5,6].
Much progress has been made in recent years regarding how to quantify the information content in small fluctuating systems. This stems from the rigorous formulation of stochastic processes and the thermodynamics of information and entropy production [7-13], together with the experimental detection and manipulation of these properties in various systems including colloids [14,15], active [16,17] and living matter [18-21]. These questions, however, have so far not been explored in the ubiquitous setting of flow-fields in which stochastic particles are transported (see figure 1).
To answer these questions, we develop a general expression for the rate of change of information content of a stochastic particle in a flow-field, which features contributions from the flow characteristics as well as the system fluctuations. Previous work on thermodynamics in flow-fields has focused on changes in local particle conformation [5,22] but not on the effects of advection. The joint effects of diffusion and advection have been studied for inertial particles or regular flows such as shear or strain [23][24][25][26]. Here, we develop a formalism for generic flows applicable to complex fields or realistic scenarios.
The effect of anti-symmetric or non-reciprocal terms has been a topic of great interest lately in soft and active matter systems, as they can qualitatively change the behaviour of the system in dramatic ways [18,27-31]. Entropy production in particular has typically been identified with the part of the stochastic action which determines the weight of trajectories that is odd under time reversal [32]. In our analysis, we find that the rotational component of a flow-field does not explicitly appear in the generalized stochastic chemical potential that governs system entropy production.

Figure 1. What is the information content after time t, or the particle residence time, for a stochastic particle in a complex field? The answer is determined by an interplay between diffusion and advection along the flow trajectories v(r).
We analyze information as the difference in entropy [33,34] between a particle in a flow-field and a particle undergoing free diffusion. In a neighborhood of a flow, the rotational component only alters the rate of information change in the presence of a strain component. Even when both strain and rotation are present, rotation provides a quantitatively weaker contribution than the strain, and only at higher orders in time. Indeed, strain, and especially the flow divergence, contribute most strongly to transport properties such as the particle residence time and the rate of information change.
We demonstrate a wide range of possible implementations of our formalism through the study of a flow field in an arbitrary neighborhood. This allows the calculation of the change in information content and residence time scale for various geometries and flow-fields. For instance, we uncover a mechanism for retaining a particle for a longer time than diffusion would typically permit. Our results allow the quantification of information and transport properties for generic flows, which can be applied in various contexts including experimentally measured fields.

Entropy production for a particle in a flow-field
We consider a particle with diffusion coefficient D that undergoes stochastic motion in a d-dimensional position space under the influence of a flow-field v(x), and is characterized by the probability distribution P(x, t). To analyze the information content, we use the system (Shannon) entropy S = −∫ dᵈx P ln P = ⟨s⟩, where s ≡ −ln P is the stochastic entropy of the system for a given finite trajectory [35]. This is an appropriate measure as it quantifies how spread out the distribution is: when the distribution is sharply peaked, the entropy is low, as one can reliably locate the particle.
To provide some intuition, we can calculate the rate of entropy production for a freely diffusive particle, which has the probability distribution P_D(x, t) = (4πDt)^(−d/2) exp(−x²/4Dt). Strikingly, the rate of entropy production for this particle is simply

Ṡ_D(t) = d/(2t),    (1)

depending only on the system dimension d and the time t, but not on the diffusion coefficient D. We see that the rate Ṡ_D(t) has a singularity at t = 0 as the particle shifts from being strictly localized to a diffusive process. This rate then decreases as 1/t while the particle spreads out, reaching 0 as the particle approaches a uniform distribution.
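As a quick numerical sanity check (a sketch of our own, not code from the paper), we can confirm that this rate is independent of D by differentiating the exact Gaussian entropy S_D(t) = (d/2) ln(4πeDt) by finite differences:

```python
import numpy as np

def entropy_free_diffusion(t, D, d):
    """Shannon entropy of the free-diffusion Gaussian P_D(x, t)."""
    return 0.5 * d * np.log(4.0 * np.pi * np.e * D * t)

d, t, h = 3, 2.0, 1e-6
# Central finite difference of S_D(t) for several diffusion coefficients.
rates = [
    (entropy_free_diffusion(t + h, D, d) - entropy_free_diffusion(t - h, D, d)) / (2 * h)
    for D in (0.1, 1.0, 10.0)
]
print(rates)  # each rate equals d/(2t) = 0.75, independent of D
```

The three rates coincide because D enters the entropy only through an additive constant, which drops out of the time derivative.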
In the presence of a flow-field, there is an interplay between diffusion and advection. To identify the unique contributions from different components of the flow, we use the Helmholtz-Hodge decomposition to represent the vector field in terms of conservative and rotational components. This can be written as v(x) = −D∇Φ + w(x), where ∇ · w = 0.
By invoking the continuity equation Ṗ + ∇·J = 0, where the flux is given as J = v(x)P(x, t) − D∇P(x, t), we find a closed-form expression for the rate of entropy production (see Appendix A for details),

Ṡ(t) = −D ⟨∇²m⟩,    (2)

where m ≡ Φ + ln P plays the role of a stochastic generalized chemical potential, which is spatially uniform in equilibrium. We thus find that system entropy production exists only when the system is manifestly out of equilibrium, with a non-zero Laplacian of m. Remarkably, our result shows that the rotational component of the velocity, w, does not explicitly appear in m, which governs the rate of system entropy production. To understand this more clearly, equation (2) can be written in the alternative form

Ṡ(t) = ⟨∇·v⟩ + D ⟨(∇ ln P)²⟩,    (3)

which highlights two things. First, the rotational component of the flow-field drops out of the first term, since w is divergence-free. When studying specific probability distributions for particles in a neighborhood, we will see that the rotational component only contributes to the system entropy production when strain is present, and then only in a comparatively weaker way and at higher orders in time. Intriguingly, while Landauer showed how a variable that is odd under time-reversal (current) makes additional contributions to what one expects from minimal entropy production [36,37], in our regime we find that a variable that is odd under time-reversal (rotation) contributes less prominently to the entropy production.
Second, the entropic contribution (second term) is positive definite. A direct corollary of this result-which equation (3) manifestly highlights-is that the configurational system entropy production is strictly non-negative when the flow-field is divergence-free, which will be the case if the flow corresponds to an incompressible fluid with no sources or sinks.
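To make this decomposition concrete, here is a small consistency check of our own (not code from the paper) for the exactly solvable isotropic flow v = (k/d)x, whose solution stays Gaussian with per-axis variance σ²(t) = (Dd/k)(e^(2kt/d) − 1). For a Gaussian, ⟨(∇ln P)²⟩ = d/σ², so the right-hand side above is k + Dd/σ², which should match the finite-difference rate of the entropy S(t) = (d/2) ln(2πeσ²(t)):

```python
import numpy as np

D, k, d = 0.1, 0.5, 3  # same parameter values as the secreting-field example below

def sigma2(t):
    """Per-axis variance of the Gaussian solution in the isotropic flow v = (k/d) x."""
    return (D * d / k) * (np.exp(2.0 * k * t / d) - 1.0)

def S(t):
    """Shannon entropy of an isotropic d-dimensional Gaussian."""
    return 0.5 * d * np.log(2.0 * np.pi * np.e * sigma2(t))

t, h = 1.5, 1e-6
S_dot = (S(t + h) - S(t - h)) / (2 * h)   # direct rate of entropy change
rhs = k + D * d / sigma2(t)               # <div v> + D <(grad ln P)^2>
print(S_dot, rhs)
```

The two numbers agree, illustrating that for a compressible flow the divergence term and the (positive) entropic term together account for the full rate of system entropy change.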

Total entropy production and entropy of the medium
We can further study the total entropy production Ṡ_tot(t) = Ṡ(t) + Ṡ_m(t), which has contributions from both the system entropy Ṡ(t) and the entropy of the medium Ṡ_m(t) [35]. Following [5], we observe that due to the absence of external forces (conservative or non-conservative) the total entropy production is given by the entropic part only, namely

Ṡ_tot(t) = D ⟨(∇ ln P)²⟩,    (4)

which guarantees that the total entropy production is positive-definite. Consequently, demanding consistency in the thermodynamic description requires the contribution of the medium to the entropy production to be

Ṡ_m(t) = −⟨∇·v⟩.    (5)

We observe that the change of entropy in the medium has an additional contribution when the fluid has a non-zero divergence. To understand this, note that it is not possible to satisfy the continuity equation for an incompressible fluid (i.e. constant density) when ∇·v ≠ 0, unless there are sources and sinks of fluid (leading to addition or removal of fluid particles into or from the system) that are distributed across the medium in proportion to the value of ∇·v. In addition to this contribution to the medium entropy production, there are other sources that contribute to heat production due to dissipation, such as internal friction that is proportional to the viscosity and the square of the strain rate [38]. As we assume that the tracer particles do not modify the fluid flow, these contributions to the medium entropy production do not change in time and can therefore be excluded from our thermodynamic accounting of entropy production.

Probability distribution for a particle in a flow-field
To study the behavior of this expression in realistic flow-fields, we analyze the stochastic dynamics of a particle with trajectory r(t) in the presence of noise and advection due to a vector field v(r) (figure 1). Specifically, in the local neighborhood of the origin, r = 0, where the particle is located at t = 0, the flow-field can be approximated using a Taylor expansion. Up to first order in the expansion, this gives the Langevin equation

ṙ(t) = v₀ + K·r + √(2D) ξ(t),    (6)

where ξ(t) represents white noise (a Gaussian random variable of unit strength), and we denote v₀ ≡ v(0) and K ≡ ∇v evaluated at the origin. This description will allow us to study how the stochastic dynamics of the particle depends on the local characteristics of the flow-field. Using a path-integral method [39], we find that the probability for the particle to be found at a distance x away after time t is (see Appendix B for details)

P(x, t) = [(4πD)ᵈ det M(t)]^(−1/2) exp[−(x − r_d(t))·M(t)⁻¹·(x − r_d(t))/(4D)],    (7)

where r_d(t) is the deterministic trajectory obeying ṙ_d = v₀ + K·r_d with r_d(0) = 0, and

M(t) ≡ ∫₀ᵗ ds e^(Ks) e^(Kᵀs).    (8)

It is useful to decompose K into its symmetric and antisymmetric components: K = E − Ω. Note that E ≡ ½[(∇v) + (∇v)ᵀ] is the strain-rate tensor and Ω ≡ ½[(∇v)ᵀ − (∇v)] is the vorticity tensor. They are the linearly varying components of the more general quantities, D∇Φ and w(x) respectively, that were defined earlier.
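To build intuition for M(t), the following sketch (our own illustration, not code from the paper) evaluates the integral defining M(t) by direct quadrature and checks two limits used below: a purely rotational field gives M(t) = It, while the isotropic field K = (k/3)I gives M(t) = (3/2k)(e^(2kt/3) − 1)I:

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential by Taylor series (adequate for the small matrices used here)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

def M(K, t, steps=2001):
    """Quadrature of M(t) = integral_0^t e^{Ks} e^{K^T s} ds (trapezoid rule)."""
    s = np.linspace(0.0, t, steps)
    vals = np.array([expm(K * si) @ expm(K.T * si) for si in s])
    h = s[1] - s[0]
    return 0.5 * h * (vals[:-1] + vals[1:]).sum(axis=0)

t = 1.0

# Pure vorticity: K = -Omega is antisymmetric, so e^{Ks} e^{K^T s} = I and M(t) = I t.
Omega = np.array([[0.0, -0.5, 0.0], [0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])
M_vort = M(-Omega, t)

# Isotropic compressible flow: K = (k/3) I has closed form M(t) = (3/2k)(e^{2kt/3} - 1) I.
k = 0.5
M_iso = M(np.eye(3) * k / 3.0, t)
M_iso_exact = 3.0 / (2.0 * k) * (np.exp(2.0 * k * t / 3.0) - 1.0)
print(M_vort[0, 0], M_iso[0, 0], M_iso_exact)
```

The antisymmetric case reduces to the free-diffusion covariance kernel, anticipating the result that pure vorticity leaves the information content unchanged.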

Rate of information change and its dependence on flow properties
We can now calculate the average stochastic entropy for this probability distribution and find that equation (2) gives the system entropy production as

Ṡ(t) = ½ tr[M(t)⁻¹ Ṁ(t)].    (9)

Note that this result depends only on the spatial derivative of the flow-field, and is independent of the diffusion coefficient D. In contrast, the entropy production of the medium can contain system-specific coefficients such as the viscous stress and fluid density [38]. While the expressions above are valid for all times, we can gain intuition about the short-time behavior of the result using the series expansion

M(t) = I t + E t² + ⅓(2E² + [E, Ω]) t³ + O(t⁴),

where we denote the commutator [S, T] ≡ ST − TS for any two tensors S and T, and I is the identity tensor. Substituting this in equation (9) gives

Ṡ(t) = d/(2t) + ½ ∇·v + (1/6) tr(E²) t + O(t²),    (10)

where we use the fact that the trace of a commutator vanishes. We can contrast this with our result for a freely diffusing particle in equation (1). By comparing the entropy production of a particle in a flow-field with that of a freely diffusing particle, we obtain the information change due to transport in the flow-field. Hence the rate of change of the information content in a flow-field is

İ(t) ≡ Ṡ_D(t) − Ṡ(t).    (11)

Using equations (10) and (1), we obtain

İ(t) = −½ ∇·v − (1/6) tr(E²) t + O(t²),    (12)

which determines how the local properties of a flow-field (divergence, strain rate, and vorticity) affect the information content of tracer particles in any given region.
We can see that vorticity makes a quantitatively weaker contribution compared to strain, and only at third order in time. In fact, examination of M(t) in equation (8) reveals that for a field with pure vorticity (Ω ≠ 0) and no strain (E = 0), the exponentials in M(t) cancel. In this case, M(t) = It as in free diffusion, and equation (11) predicts that the rate of change of information content will be 0, i.e. no different from that of a diffusive particle.
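This hierarchy can be checked numerically. The sketch below (our own illustration; the tensors E and Ω carry arbitrary example values, with a compressible part so that ∇·v = tr K is non-zero) compares the exact rate Ṡ(t) = ½ tr[M(t)⁻¹Ṁ(t)], with Ṁ(t) = e^(Kt)e^(Kᵀt), against the short-time expansion d/(2t) + ½∇·v + tr(E²)t/6:

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential by Taylor series (fine for these small matrices)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

# Illustrative strain and vorticity tensors (arbitrary example values).
E = np.array([[0.4, 0.2, 0.0], [0.2, -0.1, 0.0], [0.0, 0.0, 0.1]])
W = np.array([[0.0, -0.3, 0.0], [0.3, 0.0, 0.0], [0.0, 0.0, 0.0]])
K = E - W                           # decomposition K = E - Omega used in the text

t = 0.01
s = np.linspace(0.0, t, 4001)
vals = np.array([expm(K * si) @ expm(K.T * si) for si in s])
h = s[1] - s[0]
M = 0.5 * h * (vals[:-1] + vals[1:]).sum(axis=0)   # M(t) by trapezoid quadrature
Mdot = expm(K * t) @ expm(K.T * t)                 # dM/dt = e^{Kt} e^{K^T t}

S_dot_exact = 0.5 * np.trace(np.linalg.solve(M, Mdot))
S_dot_series = 3.0 / (2.0 * t) + 0.5 * np.trace(K) + np.trace(E @ E) * t / 6.0
print(S_dot_exact, S_dot_series)
```

At this short time the two expressions agree to within higher-order corrections; the vorticity tensor W contributes to neither leading term, consistent with the discussion above.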
Note that this short time expansion is consistent with our earlier expansion of the velocity field in a local neighborhood. Further, characterizing information by comparing differences in entropy across regions has been done in other contexts [40,41], including information content in gene expression levels [42][43][44].

Particle transport and residence time scale
Before illustrating these results with specific cases, we analyze transport observables such as the residence time of a particle before it is washed away by the flow-field. This is given by the probability to observe the tracer particle in a region of size ∼σᵈ around the origin after time t, defined as

P(σ, t) ≡ ∫_{|x|≤σ} dᵈx P(x, t).

Using our solution in equation (7), we find a closed-form expression for P(σ, t) (see Appendix C for details). We choose σ² = 2Dt, the length scale set by diffusion, in order to continue comparing the behavior of our particle in a flow to that of free diffusion. Using a short-time expansion (equation (14)) and keeping only the lowest-order term (equation (15)), we obtain a decay governed by a residence time scale τ (equation (16)), the time scale for a particle to remain in a region set by diffusion.
If instead we choose the region of interest to be in the close vicinity of the origin, i.e. σ² ≪ 2Dt, then we obtain a decay that is controlled by the same residence time scale.

Fields with sources or sinks
These results predict that a divergent field will modify the transport behavior strongly from what is expected for pure diffusion. In particular, equation (16) shows that a field with negative divergence, e.g. in an absorbing tissue, can retain a particle for much longer times. We verify this in an example where fluid is secreted or absorbed at rate k, as described by the flow-field v(r) = (k/3)(r − r₀), shown in figure 2(a). Figure 2(b) demonstrates that an absorbing field retains a particle for longer times as compared to diffusion, while the opposite is observed for a secreting field. Furthermore, we can use equation (8) to calculate the Gaussian covariance as M(t) = (3/2k)[e^(2kt/3) − 1] I. As t → ∞, this covariance has different asymptotic behaviors depending on the sign of k. In a secreting field with k > 0, M(t) diverges exponentially at long times, which means that the particle becomes dispersed into the fluid and P(√(2Dt), t) → 0 [red line in figure 2(b)]. Instead, for an absorbing field with k < 0, it saturates at long times to the constant M(t) → (3/2|k|) I. Hence, the probability in this case approaches unity, P(√(2Dt), t) → 1 [blue line in figure 2(b)], a manifestation of the localization of the particle.
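The retention mechanism can also be seen directly in simulation. The following Euler–Maruyama sketch (our own illustration; for simplicity we set r₀ = 0) estimates the residence probability P(√(2Dt), t) for free diffusion and for the absorbing field v(r) = (k/3)r with k < 0:

```python
import numpy as np

rng = np.random.default_rng(0)
D, k, t_final, dt, n = 0.1, -0.5, 2.0, 0.005, 20000
steps = int(round(t_final / dt))

def residence_probability(k_flow):
    """Fraction of n particles found within |r| < sqrt(2 D t) at time t_final."""
    r = np.zeros((n, 3))
    for _ in range(steps):
        # Euler-Maruyama step for dr = (k/3) r dt + sqrt(2D) dW
        r = r + (k_flow / 3.0) * r * dt \
              + np.sqrt(2.0 * D * dt) * rng.standard_normal((n, 3))
    return float(np.mean(np.linalg.norm(r, axis=1) < np.sqrt(2.0 * D * t_final)))

p_diff = residence_probability(0.0)   # baseline: free diffusion
p_abs = residence_probability(k)      # absorbing field (negative divergence)
print(p_diff, p_abs)
```

With these parameters the absorbing run remains measurably more localized than the diffusive baseline, in line with figure 2(b).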
Using equation (12), we note that the change of information content has the initial value

İ(t = 0) = −½ ∇·v = −k/2.

This is positive for the absorbing field (k < 0), which is more localized than diffusion and hence more informative. The converse is true for the secreting field [see figure 2(c)]. At long times, ½ tr[M(t)⁻¹Ṁ(t)] goes to 0 when k < 0 and to |k| when k > 0. Hence, equations (9) and (11) show that the absorbing field saturates to İ(t) = 0, consistent with an effective steady-state distribution. However, the secreting field saturates to İ(t) → −k, describing a probability distribution that spreads out at a faster rate than diffusion. Clearly, this is only physical within the length-scale over which the linear expansion remains valid, l ∼ |K_jk|/|∂_i K_jk|, and within t ∼ (1/k) ln(l/r₀), before boundary effects come into play.

Fields with both strain and vorticity
In fields that have both strain and vorticity components, equations (12) and (14) predict that both the change in information content and the particle residence time will be dominated by strain and not vorticity. To see this, we examine flow-fields with both these components in 3d, while retaining linear flow profiles and parameters similar to the previous examples [see figure 3(a)].

Figure 2. (a) The flow field v(r) = (k/3)(r − r₀), in which fluid is secreted or absorbed at rate k. (b) The probability of observing the particle in a diffusion-limited range as a function of time. The absorbing field retains a particle for longer times (blue) compared to the baseline set by diffusion (green), while the converse is true for the secreting field (red). (c) The rate of change of the information content as a function of time. Initially, the rate of change of the information content is İ(t = 0) = −½∇·v, which is positive, and hence more informative, for the absorbing field (blue) as compared to diffusion (green), and vice versa for the secreting field (red). At long times, the absorbing field saturates to İ(t) = 0, consistent with an effective steady-state distribution, while the secreting field saturates to İ(t) → −k, describing a probability distribution that keeps spreading out. This is only physical within the length-scale over which the linear expansion remains valid. All plots use D = 0.1, |k| = 0.5 and r₀ = 0.1 × (1, 1, 1).

Figure 3. (a) We examine flows that have both strain and vorticity, where the transport and information properties are predicted to be dominated by the strain and not the vorticity. (b) The probability of observing the particle in a diffusion-limited range as a function of time. We observe a shorter residence time for both signs of α (orange and purple lines) as compared to the baseline set by diffusion (green). There is a slightly faster drop-off that depends on the direction of the strain rate E but not on the direction of the vorticity Ω. (c) The rate of change of the information content has no contribution at t = 0, as there is zero divergence. The particle becomes increasingly delocalized with time as compared to diffusion, before saturating at long times to −|α|. It is agnostic to the sign of α (orange and purple lines lie on top of each other). All plots again use D = 0.1 and r₀ = 0.1 × (1, 1, 1), with |α| = 0.5.
As expected from equation (14), the residence time in these fields is shorter than in free diffusion [see figure 3(b)]. The cases with positive and negative values of α have very similar features, although there is a slightly faster drop-off that depends on the direction of the strain rate E but not on the direction of vorticity Ω.
Equation (12) predicts no initial contribution at t = 0 to the change of information content, since these fields are divergence-free. Instead, the leading term in time is linear and negative, İ(t) ∼ −(α²/4) t, hence these fields become increasingly delocalized with time as compared to diffusion [see figure 3(c)]. At long times, İ(t) saturates to −|α| [figure 3(c)].

Fields with only vorticity
As discussed in equations (11) and (12), a field with pure vorticity (Ω ≠ 0, E = 0) has M(t) = It as in free diffusion, independent of Ω. In this case, the rate of change of information content is 0, i.e. no different from that of a freely diffusing particle. While M(t) no longer depends on Ω, we note that r_d, and hence the exponential term in the probability distribution, still depends on Ω, as can be seen from equations (7) and (8). A non-zero vorticity Ω results in e^(Kt) having complex eigenvalues and hence oscillations.
To demonstrate this, we study a flow-field with pure vorticity (Ω ≠ 0, E = 0) [figure 4(a)]. For a particle starting at (x, y) = (0.5, 0) [the blue cross in figure 4(a)], the particle rotates around the vortex center, and hence the probability in that region decreases before returning to the baseline set by diffusion after a full period 2π/c [the blue plot in figure 4(b)]. The probability density around the vortex is plotted at various times within a period to show the oscillatory motion, which simultaneously decays due to diffusion [figure 4(c)].
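The oscillation is visible already in the deterministic part of the trajectory. Taking a rigid-rotation field v = c ẑ × r as a stand-in for the vortex (an assumption on our part; the constant c sets the angular frequency), the mean position r_d(t) = e^(Kt) r_d(0) circles the vortex center and returns to its starting point after a full period 2π/c:

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential by Taylor series (converges well for these small matrices)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

c = 1.0                                # angular frequency of the vortex (illustrative)
K = np.array([[0.0, -c], [c, 0.0]])    # v(r) = K r = c(-y, x): pure vorticity, E = 0
r0 = np.array([0.5, 0.0])              # starting point, as in figure 4(a)

period = 2.0 * np.pi / c
half = expm(K * period / 2.0) @ r0     # after half a period: opposite side of the vortex
full = expm(K * period) @ r0           # after a full period: back at the start
print(half, full)
```

The half-period position sits diametrically opposite the start, while the full-period position coincides with it, matching the periodic return of the probability in figure 4(b).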

More complex fields
As our formalism is applicable in the local neighborhood of a generic flow, it can be applied to various scenarios including more complex flow patterns. To illustrate this, we examine a complex field in which two vortices with opposite vorticity, v(r) = ±ê_z × (r − r_{1,2})/|r − r_{1,2}|³, overlap (see figure 5). Their centers are located at r₁ = (0.5, 0, 0) and r₂ = (0.5, −2, 0), respectively, relative to the origin (red cross).
In this case, we find that the change in information content begins from 0 and decreases with time to saturate, i.e. the probability distribution spreads faster than diffusion. This particle delocalization is what we would expect from flows with both anti-symmetric and symmetric contributions, similar to what we saw in figure 3.

Discussion
We derive analytical expressions for the rate of change of information content that explicitly illustrate its out-of-equilibrium character, through the introduction of a stochastic generalized chemical potential. We find that the leading contribution in time stems from the field divergence, and that the rotational component only makes a subleading contribution. We use this to study the information content and particle transport for a stochastic particle in a flow-field.
In a neighborhood of a flow, vorticity only contributes to the change of information content when there is an additional strain field component, though it produces oscillations in the probability density. Even when both strain and rotation are present, rotation provides a quantitatively weaker contribution than the strain, and only at higher orders in time. Similarly, strain, and especially the flow divergence, are found to contribute much more strongly than vorticity to transport properties such as the particle residence time.
This formalism is applicable in the local neighborhood of a generic flow and can be applied to more complex flow patterns. For instance, it can be used for experimentally-measured flow-fields in each local neighborhood, and hence within complex geometries such as those in biological tissues [3]. The rich transport behavior is further characterized through an expression for the particle residence time, through which we identify a mechanism to retain a particle for longer times compared to diffusion. Our identification of the way in which local flow properties determine the information content and transport properties, can be used to design desired changes in information transmission.
Our work builds upon an earlier paper by Speck and Seifert that examines entropy production in a flow field [5], focusing exclusively on the case ∇·v = 0. While our results are in agreement with reference [5] for ∇·v = 0, when ∇·v ≠ 0, and therefore there are sources and sinks of fluid, we find new contributions to the entropy production and qualitatively different results. Further, as we directly calculate probability distributions, we can explicitly evaluate the ensemble averages to identify the dominant terms and the system behaviour, for both the system entropy and the rate of change of information. In addition, our study of the residence time in various flow fields provides another useful metric for the study of particle transport.
Our methods employ an initial condition that is sharply localized, in order to analyze the transient behaviour of the system within the region of expansion. This approach affords us analytical tractability, and develops predictive tools that decipher the corresponding roles of the different characteristic properties of the flow field. Our study can be complemented with computational approaches that relax some of these approximations and focus on more realistic scenarios, e.g. when the particle moves far from the initial point.
This work opens many new directions as our analysis can be extended to include the presence of different chemical species and gradients [45], or non-conserved particle densities. It would also be of great interest to probe how information can create feedback loops or time-dependent control of the flow field, as well as the possibility of learning from the available information [46,47]. Overall, such work will inform the transmission of information in diverse scenarios, that are relevant for a range of vital chemical and mechanical processes.
Appendix A. Entropy production for a particle in a flow-field

We start from the system entropy S = −∫ dᵈx P ln P = ⟨s⟩, which quantifies how spread out the distribution is, where we have used the definition s ≡ −ln P for the stochastic entropy of the system. We use the continuity equation Ṗ + ∇·J = 0, with the flux given by J = v(x)P − D∇P. The rate of entropy production can now be calculated as

Ṡ(t) = −∫ dᵈx Ṗ ln P = −∫ dᵈx J·∇ ln P,

plus boundary terms, which we assume to be negligible. Further, we use the Helmholtz-Hodge decomposition to split the vector field, v(x) = −D∇Φ + w(x), into its conservative and rotational components, respectively, with ∇·w = 0. Writing J = −DP∇m + wP, with m ≡ Φ + ln P, then gives

Ṡ(t) = D ∫ dᵈx ∇m·∇P − ∫ dᵈx w·∇P = −D ∫ dᵈx P ∇²m,

where we used the fact that ∇·w = 0 to eliminate the rotational contribution.

Appendix B. Probability distribution of a particle in an arbitrary linear flow-field
We start with the Langevin equation (6), ṙ = v₀ + K·r + √(2D) ξ(t). We can use a path-integral method to obtain the solution. First, the linearity of the equation allows the solution to be broken into deterministic and fluctuating contributions, r(t) = r_d(t) + r_f(t), where the deterministic part obeys ṙ_d = v₀ + K·r_d. This has the solution r_d(t) = e^(Kt) r_d(0) + (e^(Kt) − I) K⁻¹ v₀. In the main text we have set r_d(0) = 0.
Besides this, dr_f/dt = K·r_f + √(2D) ξ(t), where we set r_f(0) = 0. Then r_f(t) = √(2D) ∫₀ᵗ ds e^(K(t−s)) ξ(s), which is a Gaussian variable with zero mean and covariance 2D M(t), with M(t) as defined in equation (8). We are interested in the probability for the particle to be found at a distance x away after time t, i.e. P(x, t) = ⟨δ(x − r(t))⟩, which yields equation (7).
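A quick check of this decomposition (our own sketch; the matrix and vector below are arbitrary illustrative values) verifies that the deterministic solution r_d(t) = e^(Kt) r_d(0) + (e^(Kt) − I)K⁻¹ v₀, with r_d(0) = 0, indeed satisfies ṙ_d = v₀ + K r_d:

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential by Taylor series (fine for these small matrices)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

# Illustrative linear flow v(r) = v0 + K r (arbitrary example values, K invertible).
K = np.array([[0.2, -0.4, 0.0], [0.4, 0.1, 0.0], [0.0, 0.0, -0.3]])
v0 = np.array([0.5, -0.2, 0.1])

def r_d(t):
    """Deterministic trajectory r_d(t) = (e^{Kt} - I) K^{-1} v0, with r_d(0) = 0."""
    return (expm(K * t) - np.eye(3)) @ np.linalg.solve(K, v0)

# Check dr_d/dt = v0 + K r_d by a central finite difference.
t, h = 0.7, 1e-6
lhs = (r_d(t + h) - r_d(t - h)) / (2 * h)
rhs = v0 + K @ r_d(t)
print(np.max(np.abs(lhs - rhs)))
```

The residual is at finite-difference precision, confirming the closed-form deterministic trajectory.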

Appendix C. Particle residence time
We would like to analyze the residence time of a tracer particle in a particular location before it is washed away by the flow-field. This can be characterized by the probability to observe the tracer particle in a region of size ∼σᵈ after time t, i.e.

P(σ, t) = ∫_{|x|≤σ} dᵈx P(x, t),    (C.1)

where we use the expression for the probability distribution in equation (7).
To analyze the small-time behavior of this expression, we set r_d(0) = 0, i.e. the particle begins at the origin at t = 0, so that r_d(t) = (I + ½Kt + ...) v₀ t. This gives |r_d(t)|² = v₀² t² + (v₀·E·v₀) t³ + O(t⁴), where we use v₀·Ω·v₀ = 0. The argument of the exponential term in P(σ, t) from equation (C.1) is then −r_d(t)·M(t)⁻¹·r_d(t)/(4D). The full expression for the probability (equation (C.1)) then yields the residence time scale quoted in the main text.