Evolution of neuron firing and connectivity in neuronal plasticity with application to Parkinson’s disease

We unify different modeling structures for neuron state evolution, showing how they can be derived from an invariance requirement imposed on a dissipation inequality under diffeomorphism-based changes of observer in the physical space. This is a non-standard use of the second law of thermodynamics, here in an isothermal setting; the law is otherwise commonly used to determine constitutive restrictions and admissibility conditions, such as those pertaining to shock waves. In this setting, we also consider a time-varying neuronal connectivity and derive the consistent structure of its evolution equation from the same invariance principle. In the case of Parkinson's disease, the connectivity also depends on the distribution of calcium channels that bring dopamine excess. Under special conditions, we show how the connectivity probability distribution changes in time and is influenced by that of the calcium channels. We account for memory effects in the cortical matter, namely dependence on firing and connectivity histories.


A view on modeling brain state variations
Connections between neighboring neurons through electrical impulses and neurotransmitters (amino acids, precisely glutamate, γ-aminobutyric acid, and glycine; monoamines, namely serotonin, histamine, dopamine, epinephrine, and norepinephrine; peptides, namely endorphins; and acetylcholine), released by each axon terminal from synaptic vesicles, determine what we commonly call neuronal firing.
A common measure of the firing state is the excess body potential, or the inter-membrane one, possibly identified with a firing rate per unit time, generically indicated by v_i, with the subscript referring to the ith neuron. It depends on the neuronal connectivity, which ranges from 1 mm to the whole brain [1]. With reference to a spatial scale at which we may model neurons and their interconnections as elements of a discrete lattice, by analogy with electrical circuits, neuron firing suggested input-output membrane voltage models, starting from Hodgkin-Huxley's classical scheme [2]; in such discrete schemes one parameter represents an average delay in the neurotransmission occurring toward the ith neuron, and another the effective connection strength between the ith and hth neurons.
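Input-output schemes of this family can be illustrated numerically. The following is a minimal sketch of a firing-rate caricature of such a discrete network (not the full Hodgkin-Huxley conductance model); the connection weights `w` and external input `ext` are illustrative assumptions:

```python
import math

def sigma(v):
    """Logistic gain mapping membrane state to a normalized firing rate."""
    return 1.0 / (1.0 + math.exp(-v))

def step(v, w, ext, dt=0.01):
    """One explicit-Euler step of dv_i/dt = -v_i + sum_h w[i][h]*sigma(v[h]) + ext[i]."""
    n = len(v)
    return [v[i] + dt * (-v[i] + sum(w[i][h] * sigma(v[h]) for h in range(n)) + ext[i])
            for i in range(n)]

# Two mutually coupled units, one driven by a constant external input.
v = [0.0, 0.0]
w = [[0.0, 0.5], [0.5, 0.0]]   # effective connection strengths (illustrative)
ext = [1.0, 0.0]
for _ in range(2000):
    v = step(v, w, ext)
```

The driven unit settles near a fixed point above its resting value, while the undriven unit is excited only through the coupling.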
If we look at a coarse spatial scale and refer to a window in space of diameter δ, the excess potential can be considered as an average over the neuron population in the window pertaining to its mass center x. We may thus write v = ṽ(x, t), referring to homogenized (mean-field) firing properties of neurons in the pertinent spatial window. In this view, instead of v alone, we can alternatively consider the fraction (density) c = c̃(x, t) of neurons firing beyond a certain threshold within the window considered.
In short, the state variable refers here to the state of neurons (about the notion of neuronal state see also [16]), considered as a mean field for a population of neurons in a space window of size δ. By varying δ, we can shrink the window until it contains a single neuron, in which case we recover a discrete network model, as indicated by Eq. (1), or we can widen the window to include the whole brain.
Rewriting the Wilson-Cowan model in terms of the firing fraction c is straightforward. For example, a version is in Ref. [13]; it reads as an evolution equation in which a monotonically increasing function taking values between 0 and 1 appears, together with the refractory period. Basic assumptions pertinent to Wilson-Cowan's model [11] are as follows:
• All cells receive the same number of excitatory and inhibitory afferents.
• The total number of afferents reaching a cell is sufficiently large to be represented by the convolution integral above.
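Under the two assumptions above, the model reduces to coupled equations for excitatory and inhibitory population activities. The following is a hedged sketch of the classical two-population Wilson-Cowan system with an explicit-Euler integrator; all coupling constants and sigmoid parameters are illustrative, not taken from Refs. [11,13]:

```python
import math

def S(x, a=1.2, theta=2.8):
    """Monotonically increasing response function with values in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta)))

def wilson_cowan_step(E, I, P=1.25, Q=0.0, r=1.0, dt=0.01):
    """Explicit-Euler step of the two-population Wilson-Cowan equations:
    dE/dt = -E + (1 - r*E) * S(wEE*E - wEI*I + P), and analogously for I,
    with r playing the role of the refractory period."""
    wEE, wEI, wIE, wII = 12.0, 4.0, 13.0, 11.0
    dE = -E + (1.0 - r * E) * S(wEE * E - wEI * I + P)
    dI = -I + (1.0 - r * I) * S(wIE * E - wII * I + Q)
    return E + dt * dE, I + dt * dI

E, I = 0.1, 0.05
for _ in range(5000):
    E, I = wilson_cowan_step(E, I)
```

The refractory factors (1 − rE) and (1 − rI) keep both activities confined to the unit interval, consistently with their interpretation as firing fractions.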
The scheme does not account directly for the spatial variability of the firing state.
To overcome this aspect, S. Amari suggested a functional form for the evolution of u that, to within memory terms, reads as a balance between a decay term ruled by a decay rate, a non-local term in which the connectivity ξ weights a non-centered sigmoid, possibly referred only to a specific family of neurons in a subregion of the region occupied by the whole brain, and an external input [17,18]. The connectivity ξ(x, y) is isotropic when it is of the type ξ(x − y). It can refer to only one type of neurons or it can include connections with other families, so it may cover all possible connections pertaining to the family under scrutiny. Furthermore, we can consider the external input to be stochastic, so that the equation acquires an additive, spatially correlated noise over the cortical region. Let the firing state be a random variable with realizations u. The previous equation can then be rewritten in fully stochastic form where, with h a square integrable function over the cortical region, the Nemytskii operator maps h to the pointwise composition of the sigmoid with h; the sigmoid, with values in (0, +∞), is assumed to be globally Lipschitz continuous, as a sigmoid is, so that the Nemytskii operator is a nonlinear Lipschitz continuous operator on L²; the convolution involved is endowed with a non-negative, symmetric, continuous, and square integrable kernel; finally, the covariance operator of the noise is assumed to be Hilbert-Schmidt, non-negative, and symmetric. Eq. (5) admits a gradient-type representation (see the proof in Ref. [19]).
In the scheme of Eq. (3), the sigmoid can be substituted, more generally, by a non-negative non-decreasing gain function, which may involve the Heaviside distribution, and the spatial connectivity ξ can be chosen in the Mexican-hat class, as in Ref. [20].
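A minimal one-dimensional sketch of such a neural-field scheme on a periodic domain, with a difference-of-Gaussians ("Mexican hat") kernel and a steep non-centered sigmoid, illustrates the bump-forming behavior; all parameter values below are illustrative assumptions:

```python
import math

N, L = 64, 10.0            # grid points and length of a periodic one-dimensional ring
dx = L / N
alpha, dt = 1.0, 0.01      # decay rate and Euler time step

def mexican_hat(r, a=2.0, b=1.0, sa=0.5, sb=1.5):
    """Difference-of-Gaussians kernel: short-range excitation, longer-range inhibition."""
    return a * math.exp(-(r / sa) ** 2) - b * math.exp(-(r / sb) ** 2)

def sigmoid(u, beta=5.0, theta=0.3):
    """Steep non-centered sigmoid gain function."""
    return 1.0 / (1.0 + math.exp(-beta * (u - theta)))

# Isotropic connectivity xi(x - y), tabulated on the ring.
xi = [mexican_hat(min(i, N - i) * dx) for i in range(N)]

def step(u, ext):
    """One Euler step of du/dt = -alpha*u + (xi convolved with sigmoid(u)) + ext."""
    conv = [sum(xi[(i - j) % N] * sigmoid(u[j]) for j in range(N)) * dx
            for i in range(N)]
    return [u[i] + dt * (-alpha * u[i] + conv[i] + ext[i]) for i in range(N)]

# A localized external input produces a localized bump of activity.
u = [0.0] * N
ext = [0.5 if abs(i - N // 2) < 4 else 0.0 for i in range(N)]
for _ in range(300):
    u = step(u, ext)
```

Lateral excitation sustains the bump near the input while surround inhibition keeps the far field near rest.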
Spatial non-locality can be coupled with memory, which corresponds essentially to neuron refraction. The combination of time and space non-locality implies schemes in which the kernel ξ(x, t; y, ⋅), considered as a function defined on the real line, is assumed to satisfy a requirement of causality: ξ(x, t; y, t′) = 0 for every t′ > t. For specific choices, such a scheme has been adopted in Refs. [1,21,22].
In non-local integral models, when the point under scrutiny is close to the boundary of the brain region, we have a number of alternatives for cutting the kernel at the boundary, and the solution depends on the pertinent choice, in the absence of a clear physical principle suggesting how to cut ξ at the brain boundary. A way to overcome such difficulties is to consider weak non-locality, that is, non-locality of gradient type. In fact, by adapting B. D. Coleman's and W. Noll's results [23,24] on the approximation of memory functionals to space non-locality, thus expanding the space integral in a series based on Fréchet derivatives, Eq. (6) reduces at first order to a structure of the type

∂u/∂t = −αu + div h(u, ∇u, C; u^t, (∇u)^t, C^t) + w(x, t),   (7)

where C is the counterpart of ξ after a Fréchet-type expansion of the integral, while h is a vector function depending on the present values of u, ∇u, and C, and on their histories, indicated in short by u^t, (∇u)^t, and C^t. When the connectivity C is constant and h depends only on the present values of u and ∇u, the scheme reduces to structures like those analyzed in Refs. [25-27], without histories and with different choices of h. Here, to account for possible (and reasonably occurring) anisotropy in the spatial distribution of synaptic links, we consider C as a second-rank tensor, with components referred to a basis of R³ and to the pertinent dual basis, defined so that the duality pairing of a dual-basis element with a basis element gives Kronecker's delta; such a pairing coincides with the scalar product when the spatial metric is flat. The brain connectivity varies in time along neurodegeneration. This is described by considering the history C^t of C, and taking an evolution equation for it (see different proposals in [28-31]).
In the case of Parkinson's disease, the strength of brain connectivity also depends on the distribution (represented by some of its moments) of calcium channels bringing dopamine excess. Precisely, this distribution may be constant in some interval of time or it can vary according to biophysical-chemical conditions. In the latter case it should satisfy an appropriate evolution equation.

Problems tackled in the present analysis
The schemes described above rest essentially on phenomenological grounds, and they are of increasing complexity: from discrete schemes to time-continuous ones, to spatial non-locality, reduced then to account for weak (gradient-type) non-local effects. A natural question is thus whether we can deduce them from first principles, such as requirements of invariance applied to entities including rather general "objects" such as energy and interactions.
A pertinent path should take into account that neurodegeneration and the related plasticity constitute a dissipative process. An appropriate version of the second law should then be called upon; also, interactions among neighboring neurons should be involved in its expression.
With this program in mind, what we do in this paper is as follows:
• We derive a functional form leading to Eq. (7) from the covariance principle of the second law of thermodynamics introduced in Ref. [32] and later refined [33].
• From the same principle we obtain, at the same time, an evolution equation for the connectivity when neurodegeneration occurs.
• The proof allows us to extend the technique pertaining to the covariance principle to the presence of memory functionals.
• Lastly, we show how imperfect knowledge of the calcium channel distribution affects the probability distribution of the connectivity.
This last item deserves a more detailed specification. It goes as follows. The values of the connectivity, which describe the distribution of neuronal connection strength in a neighborhood of a point at a given time, as is implicit in Eq. (7), are not known exactly. We have, in fact, just an estimate for them. In other words, we can consider the connectivity as the realization of a random variable, characterized by a certain distribution. If we presume to know the distribution at the initial time, its form at a later time pertains to a random variable that is the push-forward of the initial one, induced by the diffeomorphism solving the evolution equation of the connectivity in a time interval, when such an equation admits a strong solution in that interval and such a solution is a diffeomorphism. This is the case of an equation in which memory is ruled by a kernel given by the gamma probability distribution with positive parameters, and the right-hand side is a Lipschitz function depending also on a list of parameters that characterize the calcium channel distribution. In the present case, we derive only an estimate from below for the marginal probability density function of the connectivity, conditional on the calcium channel parameters taking a given value in an appropriate linear space. We specify later the conditions under which Eq. (8) is justified.
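The gamma-type memory kernel mentioned here can be written explicitly; the following sketch (with illustrative parameter values a = 2, b = 1.5) evaluates it and checks that it carries unit total weight, as a probability density must:

```python
import math

def gamma_kernel(s, a=2.0, b=1.5):
    """Gamma-distribution memory kernel k(s) = b**a * s**(a-1) * exp(-b*s) / Gamma(a)."""
    return b ** a * s ** (a - 1) * math.exp(-b * s) / math.gamma(a)

# A probability-density kernel must integrate to 1 over (0, +inf);
# check it by a simple Riemann sum on a truncated interval.
ds, T = 0.001, 30.0
mass = sum(gamma_kernel(i * ds) for i in range(1, int(T / ds))) * ds
```

With a > 1 the kernel vanishes at s = 0 and decays exponentially, so recent history is down-weighted relative to an intermediate past, a qualitative feature of refraction-type memory.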
Remark 1.1. The choice to focus attention on Eq. (7), instead of the strongly non-local version (6), rests on the awareness that Eq. (7) is a structure that avoids problems connected with the way we have to cut the kernel close to the brain boundary and is, on the other hand, sufficiently rich to describe a wide range of brain behaviors.

State of firing and changes of observers in the physical space
The region occupied by the brain, indicated above, is a subset of R³ assumed to be an arcwise connected bounded region endowed with a surface-type boundary oriented by the outward unit normal, to within a (possibly empty) finite set of corners and edges.
We have also adopted the notation u := ũ(x, t), with x in the brain region and t in some time interval, to indicate what describes the firing state of neurons at x in the continuum-scale representation. Having in mind a neuron network, that is, considering a microscopic scale, we interpret u as the average over a population of neurons in a space window of small diameter δ, with mass center at x.
We have already mentioned various characterizations of the state. They can be summarized as follows:
• u can be a scalar or a pseudo-scalar: for example, the excess of body potential or the inter-membrane one, as already mentioned, or the density of prions.
• u can be a list in R^k: for example, when k = 2, u can be identified with the pair of excitatory and inhibitory activities in Wilson-Cowan's model, as already mentioned above. Being a list means that, although u belongs to R^k, it does not change as a vector when we rotate the basis in R^k; in short, it is insensitive to the action of SO(k) over R^k, as in the scalar case.
Of course,
• u can be considered as a (true) vector in R^k or, more generally,
• an element of a finite-dimensional Riemannian manifold with dimension greater than 1.
Among these options, we focus here on the case in which u is a scalar, to cover directly the models mentioned in the introduction, unifying them (analogous analyses can be developed when u is a pseudo-scalar, with rather straightforward adaptations). The technique we use extends straightforwardly to the case in which u is a list, not properly a vector. When u is, more generally, an element of a manifold (considering it a vector in R^k is obviously a special case), technical variants are necessary; pertinent geometrical tools are in Ref. [33].
An observer is here a frame of reference in R³ and a time scale. Since u is a scalar, it is insensitive to changes of observers that are isometric (of rigid-body type). However, when such changes of observers are not isometric, u is sensitive to them.
Consider one such change in which we presume to leave the time scale invariant. Precisely, we take a parameterized family f_s of diffeomorphisms with f_0 equal to the identity.
The derivative with respect to the parameter s defines the infinitesimal generator of the action of the family f_s. Consider u to be differentiable with respect to space and time; indicate as above by u̇ its time derivative. Under the action of f_s on the physical space, and at s = 0, we consider a transformation rule in which a differentiable function of its arguments, evaluated on the generator, corrects the pulled-back rate. When f_s is an isometry, such a correction vanishes: rotations of reference frames do not affect u, and so is for translations, because u does not describe a placement in space; rather, it expresses a property of a neuron placed at a given point at a given time.
Since the connectivity C is a second-rank tensor over R³, it is sensitive to the action of SO(3). It is, however, insensitive to translations of reference frames in space, because connectivity is a relative property between neighboring neurons. Precisely, take a smooth map assigning to the parameter s an element of SO(3), equal to the second-rank unit tensor at s = 0. Under the action of SO(3), the rate of C involves a linear operator, a third-rank tensor whose components combine Ricci's alternating symbol with the components of C. These preliminaries on the behavior of C become useful when we consider a generic change of observer in the physical space as indicated by f_s. In this case, the counterpart of relation (9) reads as in the analyses of Ref. [34], with a second-rank-tensor-valued differentiable map ῡ in place of the scalar one, constructed from a vector field and the operator just introduced. Thus, the rule for changes of observer that we consider here for C follows. Definition 2.1 (Physical acceptability). The changes of observer above are considered physically acceptable when the scalar and tensor-valued generators, together with their spatial gradients, are bounded for every time.

Firing and connectivity histories
Write in short H for the list collecting the firing state, the connectivity, and their first spatial gradients; its values belong to the product of the real line, the set of 3 × 3 real matrices, and the analogous space of three-index arrays, with the proviso of considering the additional dimension.
We assume that admissible values of H are in an open connected set. At every time t and for positive durations, we call a history a map H^t assigning to past times the pertinent values of H. We indicate by the space of histories the set of the maps H^t so defined, possibly partitioned into equivalence classes, the defining relation being described later. We take a functional whose rate involves a continuous linear functional on the tangent space to the history space at H, together with a continuous functional depending linearly on a perturbation J and defined on the closed subspace spanned by the histories J^t such that H^t + J^t remains admissible (see also [35-38]).

Power of actions
The phase fields at hand are so far the firing state and the connectivity, scalar and tensor fields respectively. We consider actions power-conjugated with them: they are those determining the power needed to vary firing state and connectivity in a unit of time. According to a common instance, we distinguish them into bulk and contact actions. The latter class includes only first-neighbor neuronal interactions. The former class accounts for external impulses, those represented by the source term in Eq. (7).
Consider a region b ⊆ , which is arcwise connected and endowed with a surface-type boundary oriented by the outward unit normal  to within a possibly empty finite set of corners and edges -roughly b has the same topological properties of .We define external power over b the functional where the interposed dot means duality pairing, which is coincident with the scalar product when the metric in space is flat and trivial, namely it refers to an orthonormal frame of reference; d 2 is the twodimensional Hausdorff measure;  is a second-rank tensor describing possible bulk actions on the connectivity, while   is also a secondrank tensor that represents first-neighbor (thus contact) interactions and depends on the boundary of b, beyond  and , a circumstance indicated by the subscript ;  and τ are scalar fields: specifically, τ represents first-neighbor interactions associated with relative firing of neurons; it depends on the boundary of b, beyond  and .In short,   b is the power (that is unit energy over unit time) of external actions that determine firing and may alter connectivity variation processes in b.The apparent vagueness in the interpretation of such a definition, as connected with neuronal processes, will become progressively vanishing when we will elaborate its consequences.

Free energy and a peculiar property under the action of f_s
We consider a free energy as a Radon measure that is absolutely continuous with respect to the volume, namely, for the free energy of any part b we write

ψ(b) := ∫_b φ(x, u, ∇u, C, ∇C; u^t, (∇u)^t, C^t, (∇C)^t) dx,

so we presume that the density φ depends on the present values of u, C, and their first gradients, and on the histories of these variables. Also, φ is taken to be a differentiable function of u, C, and their first gradients, and lower semicontinuous with respect to the histories. Consider a generic material body for which we can render explicit the notion of state. Assume there exists a set of operators on the state space such that their action determines the state evolution. Associate with each process in the state space a functional that is continuous over states and additive with respect to the continuation of processes. In this setting, one can prove that the body (whatever its nature be) admits a free energy (see the pertinent proof in [39]).
Per se, φ is a density associated with the volume form. So, according to proposals in Refs. [32,40], we assume that it varies tensorially under the action of f_s on the ambient space. Precisely, by interpreting such tensoriality exactly as in Ref. [32], we consider a counterpart of relations (9) and (10) for φ, so that the change of φ involves a continuous map of its arguments (see the differentiation rule (11)).

Mechanical dissipation inequality and pertinent covariance principle
We refer here to an isothermal setting because the models whose foundations we analyze do not account for temperature variations. We are, however, aware that non-isothermal effects may play a nontrivial role in brain functioning.
The mechanical dissipation inequality we refer to is presumed to hold for any choice of b and of the (time) rate fields involved. Write in short D for the left-hand side of the inequality (13), which thus reads D ≤ 0 as referred to an observer O. Another observer, O′, which differs from O by the action of diffeomorphisms of the ambient space, records an inequality D′ ≤ 0. According to the rules for changes of observers (9), (10), and (12), the pull-back of D′ into the frame of reference defining O gives rise to another inequality, say D• ≤ 0, with D• = D + D†. The term D† involves the generators of the change of observer and their gradients.
If we consider the inverse change, from O′ to O, the pull-back of D into the observer O′ satisfies an analogous inequality with an additional term D‡, which, in principle, is different from D†. We impose the following covariance principle in the dissipative setting, as given in Refs. [32,33]: it states, essentially, that both D† and D‡ are always non-positive. In other words, if a process is dissipative for a given observer, it is so for any other observer related to the first one by a diffeomorphism. Dissipation is thus considered an intrinsic property.
Axiom 1. In any change of observer, the additional term arising after pulling back the Clausius-Duhem inequality, evaluated by the second observer, into a frame defining the first one is always non-positive.

Consequences of the covariance principle in dissipative setting
Consider the change of observer from O to O′. According to the previous notation, the term D† = D• − D furnishes, thanks to the covariance principle above, an inequality whose rather immediate consequences are as follows:
• Set the rotational and translational generators to zero, so that the tensor-valued map ῡ also vanishes and only the scalar contribution survives. The inequality (14) then reduces to a statement involving the contact action τ associated with firing. Assume that this action, the functional F, and the partial derivatives of φ with respect to u and ∇u are bounded. When τ is continuous at every time, by Cauchy's theorem (see, e.g., [41, p. 3]) we realize that τ depends on the boundary of b, and for every b, only on the normal at all points where the normal is well-defined. Precisely, τ is a function of position, time, and the normal, with the additional action-reaction-type property that it changes sign when the normal is reversed. Also, there exists a vector function h, depending on position and time and not on the normal, such that τ equals the pairing of h with the normal.
In addition, if h is continuously differentiable in the interior of the brain region and continuous up to its closure at every time, the inequality (15) reduces to a bulk statement. Consider the bulk term to be a continuous function at every time, and presume that the action power-conjugated with the firing rate is the sum of an energetic component, fully determined by the free energy, and a dissipative component, which per se satisfies a reduced inequality presumed to hold for every choice of the rate u̇; this requirement is compatible with a constitutive structure in which the dissipative component is the rate multiplied by a positive-definite scalar function depending on H, H^t, and possibly their gradients, according to Truesdell's equipresence principle. With these choices, the inequality (16) reduces to a form in which the arbitrariness of the generator and of its gradient implies constitutive restrictions, identifying the energetic components with partial derivatives of the free energy, and a local dissipation inequality, obtained for vanishing rate after pertinent identifications, thanks to the arbitrariness of b. Consequently, Eq. (17) becomes a local balance of actions associated with firing. When the dissipative coefficient coincides with a relaxation time and the free energy is quadratic with respect to u, the local balance (21) of actions associated with firing reduces to Eq. (7), now derived from first principles rather than postulated.
• Set the scalar generator to zero. The inequality (14) then reduces to a statement on the actions associated with connectivity. Maintaining the previous hypotheses, among them the boundedness of F, assume also that the bulk connectivity action and the partial derivatives of φ with respect to C and ∇C are bounded. When the contact connectivity action is continuous at every time, Cauchy's theorem allows us to conclude that it depends on the boundary of b, and for every b, only on the normal at all points where the normal is well-defined, again changing sign under reversal of the normal. Also, there exists a third-rank tensor-valued map, depending on position and time, that applied to the normal (considered as a covector) gives the contact action. In addition, if this map is continuously differentiable in the interior of the brain region and continuous up to its closure at every time, the inequality (22) reduces to a bulk statement involving the formal adjoint of the map, which coincides with the transpose when the spatial metric is flat and trivial. Take the bulk term to be continuous at every time, and presume even in this case that the action power-conjugated with the connectivity rate is the sum of an energetic and a dissipative component, the superscripts having the same meaning adopted before. The dissipative component satisfies per se a reduced inequality, presumed to hold for every choice of the connectivity rate, a requirement compatible with a constitutive structure in which a positive-definite scalar function, depending on H, H^t, and possibly their gradients, multiplies the rate. Then, the inequality (23) reduces to a form in which, once again, the arbitrariness of the generators implies constitutive restrictions. When the gradient of the rotational generator is also continuous, so is the divergence of the adjoint map, and the arbitrariness of the generator and of b implies a further identity. What remains is the local dissipation inequality, which furnishes again the local inequality (20) when the connectivity rate vanishes, after pertinent identifications. Eq. (24) thus reduces to the desired evolution equation for the connectivity.
Remark 4.1. Recall that the dissipative coefficient depends on H and H^t, so the dependence on memory is included in Eq. (28); coupling it with the balance of firing interactions (21) and including a delay, we can consider in the evolution of the connectivity pre-firing and post-firing effects (see, e.g., [28], where first-neighbor interactions of the kind described here are not included).
Remark 4.2. Consider only the external power, over a generic brain part b, of actions associated with connectivity variations. Apply Gauss' theorem to the boundary integral and exploit Eq. (24). The resulting right-hand-side integral is what we call an internal power: the power performed by actions in the connectivity evolution. So, Eq. (27) is nothing more than the condition assuring that the internal power vanishes when the connectivity rate is apparent, meaning that it is determined only by time-dependent rotations of the spatial frames of reference, which affect the connectivity due to its tensor nature. An analogous relation does not hold for the firing state, because it is chosen here to be a scalar, insensitive to the action of the special orthogonal group. When the state is selected to be a vector, not a simple list of scalars, or a tensor of any higher order, a relation analogous to (27) should be derived: it would connect firing and connectivity actions with one another; also, it would involve different linear operators, one for the state, depending on its tensor nature, and another for the connectivity, the one defined above.
Remark 4.3. On the basis of the obtained results, given two histories, we say that they are equivalent when the corresponding constitutive quantities determined above coincide.

Accounting for calcium channels bringing dopamine excess in Parkinson's disease
The scheme developed so far is suitable to describe brain plasticity, intended as the reorganization of neuronal connectivity (see, e.g., remarks in [31]). Parkinson's disease involves the occurrence of a dopamine excess through calcium channels. Their distribution in a neighborhood of a point at an instant is a function ĝ of the direction n ∈ S², where S² is the unit sphere in 3-space. It can be expanded as follows (see [42, Ch. 4]):

ĝ(n) = ĝ0 + G^(1)_{ij} n_i n_j + G^(2)_{ijhk} n_i n_j n_h n_k + ⋯,

where ĝ0 is a scalar, namely the average density, and G^(1) and G^(2) are second-rank and fourth-rank symmetric tensors, respectively. Choosing one or another truncation of the expansion allows us to consider variously refined representations of the channel distribution. For example, to approximate ĝ, we may choose a second-rank tensor g = g_{ij} e_i ⊗ e_j, where g_{ij} := ĝ0 δ_{ij} + G^(1)_{ij} and δ_{ij} is Kronecker's delta.
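The truncated expansion above can be illustrated numerically: given sampled channel directions with weights, the zeroth moment estimates the average density and the second moment gives the second-rank tensor refinement. The sample directions and weights below are purely illustrative:

```python
def moments(dirs, weights):
    """Zeroth and second directional moments of a weighted set of unit vectors.

    g0 approximates the average density; G is the second-rank moment tensor,
    whose trace equals g0 because each direction has unit length.
    """
    n_samples = len(dirs)
    g0 = sum(weights) / n_samples
    G = [[0.0] * 3 for _ in range(3)]
    for n, w in zip(dirs, weights):
        for i in range(3):
            for j in range(3):
                G[i][j] += w * n[i] * n[j] / n_samples
    return g0, G

# Channels concentrated along the x-axis: an anisotropic distribution.
dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]
weights = [2.0, 2.0, 1.0, 1.0]
g0, G = moments(dirs, weights)
```

Here the diagonal entries of G reveal the anisotropy (more weight along the x-axis), information the scalar average g0 alone cannot carry.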
If we consider g to be time-varying, by taking into account that under changes of observer g would vary according to a relation similar to the one for the connectivity, namely (10), because g is a second-rank tensor too, and by modifying relation (12) to account for the presence of g, the path followed so far would furnish an appropriate evolution equation for g, analogous in structure to the local balances obtained so far, namely (21) and (28).

Influence of uncertainties in the calcium channel distribution on homogeneous connectivity evolution at constant firing
We do not undertake the path leading to an evolution equation for g. We just reduce attention to a special case, because it allows us to characterize explicitly the influence of uncertainties pertaining to data about g on the distribution of the connectivity. Our analysis is local, meaning focused at a point, and presumes that, in a neighborhood of it, the connectivity is homogeneous in space; first-neighbor interactions due to connectivity variations in space are thus excluded, and in this case the free energy does not depend on the connectivity gradient and its history.
Analyses pertinent to these ideal assumptions are below.They require some preliminaries.

Pushing forward Probability Density Functions (PDFs)
As is known, the push-forward through a homeomorphism of a probability measure that admits a density with respect to the standard Lebesgue measure is itself a probability measure, but it does not necessarily admit a density. Specific examples of such a lack of density appear, for example, in the treatise [43]: there are, in fact, homeomorphisms of R that map a set of positive measure onto the Cantor set (a dust of points). If we push forward the Lebesgue measure (density 1) through such a map, we assign positive measure to the Cantor set, hence the image measure is clearly not absolutely continuous. If, instead, the push-forward is provided by a diffeomorphism, a density exists and we may give an explicit formula for it. We specify the circumstance in detail because it is the tool we use in analyzing the interactions between calcium channel distribution and neuronal connectivity.
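The explicit formula for the target density is the classical change-of-variables rule: if φ is a diffeomorphism of R^m and X has density f_X, then Y = φ(X) has density

```latex
f_Y(y) \;=\; f_X\!\left(\varphi^{-1}(y)\right)\,
        \left|\det \left(\varphi^{-1}\right)'(y)\right| .
```

The diffeomorphism assumption is essential: it guarantees that the Jacobian factor is well-defined and nowhere degenerate.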
For calculations in the rest of this paper we exploit Voigt's notation, namely we adopt an isomorphism between R³ ⊗ R³ and R⁹, being aware that it is not unique. So, from now on, we consider the connectivity as an element of R⁹. However, our analyses do not change if we assume it belongs to R^m, a choice that could accommodate a different modeling of the connectivity representation (for example, we could consider the connectivity as a fourth-rank tensor with major and minor symmetries, so that the isomorphism would be with R²¹). For this reason, and for the sake of generality, we will write R^m instead of R⁹. We apply the same reasoning to the calcium channel descriptor and will consider it in R^k; so, we possibly keep distinct the tensor ranks of the connectivity and calcium channel distribution descriptions.
Under the ideal assumptions at the beginning of this section, the local balance of actions associated with connectivity variations (28) reduces, when we also neglect the history and the contact terms, to an ordinary differential equation of the form (29), where, with the adopted isomorphism discussed above, the right-hand side is a map from R × R^m to R^m assumed to be locally Lipschitz and such that solutions to Eq. (29) are defined, for each initial condition x in some open subset U of R^m, up to a time T > 0.
Eq. (29) defines the (Poincaré) translation (or flow) operator at time t: it associates to each x ∈ U the value at time t of the unique solution of (29) that equals x at t = 0; that is, the map assigning to t the flow of x solves the corresponding Cauchy problem. We will indicate by y the final point of the process.
Write X for a random variable associated with the initial point of the process. It admits a probability density function on U, assumed to be known, so that formula (30) holds, with the ingredient defined by system (31). Proof. Given a diffeomorphism of R^m, as a direct consequence of the change-of-variables formula for multiple integrals, the density of the image variable at a point equals the initial density evaluated at the preimage, multiplied by the absolute value of the determinant of the Fréchet derivative of the inverse transformation, evaluated at that point. In our setting, the diffeomorphism is merely the map that to any x ∈ R^m associates the solution at time t of Eq. (29) with initial condition x. Its inverse is given by the solution at the final time of the time-reversed system (31). Also, the derivative of the inverse is the solution at that time of the variational equation associated with (31): namely, it solves a matrix-valued Cauchy problem whose coefficient is the derivative of the right-hand side with respect to its second entry, evaluated along the backward solution, with the identity matrix of R^{m×m} as initial condition.
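The variational equation just described can be checked on a linear test case, where Liouville's formula gives the determinant of the flow Jacobian in closed form; the matrix A below is an illustrative example, not a model of connectivity:

```python
import math

# Linear test case x' = A x in R^2: the flow is x -> exp(t A) x and, by
# Liouville's formula, the determinant of the flow Jacobian is exp(t * trace(A)).
# We integrate the variational (matrix) equation M' = A M, M(0) = I, and compare.
A = [[-0.5, 0.2], [0.1, -0.3]]

def euler_step(M, dt):
    """One explicit-Euler step of the matrix equation M' = A M."""
    return [[M[i][j] + dt * sum(A[i][k] * M[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

t_final, dt = 1.0, 1e-4
M = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(int(t_final / dt)):
    M = euler_step(M, dt)

liouville = math.exp((A[0][0] + A[1][1]) * t_final)
# The pushed-forward density at the final point is the initial density
# divided by |det M|, in agreement with the change-of-variables rule.
```

The numerically integrated Jacobian determinant matches the Liouville value to the order of the Euler truncation error.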

P.M. Mariano and M. Spadini
Assume that a probability density function is known for the distribution of the initial values of Eq. (42). On the basis of Proposition 6.1, we consider the associated system; since its right-hand side G depends on the calcium channel parameters, we write G with a pertinent subscript to underline this aspect. System (44) is complemented by the pertinent initial condition; in block-matrix notation, the trace of the second diagonal block equals the full trace minus the first diagonal entry.
Under the same assumptions as in Section 6, we obtain the announced estimate from below for the marginal probability density function of the connectivity.

Additional remarks
Asking for covariance of the second law of thermodynamics, even written only in terms of an isothermal dissipation inequality, allows one to derive from a unique source the evolution equations for firing states and neuronal connectivity in the presence of brain plasticity. When Parkinson's disease occurs, the evolution of the calcium channel distribution or of prion accumulation can be derived along the same path. We proved this in detail by looking at first-neighbor interactions for firing processes and connectivity variations, including related memory effects. The path can be extended to cover second-neighbor or higher-order interactions.
There are uncertainties in the evaluation of the calcium channel distribution at a given instant. They imply uncertainties in the evolution of connectivity. Their general analysis is rather intricate; however, under some ideal conditions, such as those described in Section 6, we can obtain explicit expressions for the marginal probability distribution of the connectivity.

Declaration of competing interest
The Authors have no conflict of interest concerning the present submission.