Online Natural Gradient as a Kalman Filter

We cast Amari's natural gradient in statistical learning as a specific case of Kalman filtering. Namely, applying an extended Kalman filter to estimate a fixed unknown parameter of a probabilistic model from a series of observations is rigorously equivalent to estimating this parameter via an online stochastic natural gradient descent on the log-likelihood of the observations. In the i.i.d. case, this relation is a consequence of the "information filter" phrasing of the extended Kalman filter. In the recurrent (state space, non-i.i.d.) case, we prove that the joint Kalman filter over states and parameters is a natural gradient on top of real-time recurrent learning (RTRL), a classical algorithm to train recurrent models. This exact algebraic correspondence provides relevant interpretations for natural gradient hyperparameters such as learning rates or the initialization and regularization of the Fisher information matrix.

In statistical learning, stochastic gradient descent is a widely used tool to estimate the parameters of a model from empirical data, especially when the parameter dimension and the amount of data are large [BL03] (as is typically the case with neural networks, for instance). The natural gradient [Ama98] is a tool from information geometry, which aims at correcting several shortcomings of the widely used ordinary stochastic gradient descent, such as its sensitivity to rescalings or simple changes of variables in parameter space [Oll15]. The natural gradient modifies the ordinary gradient by using the information geometry of the statistical model, via the Fisher information matrix (see the formal definition in Section 1.2; see also [Mar14]). The natural gradient comes with a theoretical guarantee of asymptotic optimality [Ama98] that the ordinary gradient lacks, and with the theoretical knowledge and various connections from information geometry, e.g., [AN00, OAAH17]. In large dimension, its computational complexity makes approximations necessary, e.g., [LMB07, Oll15, MCO16, GS15, MG15]; this has limited its adoption despite many desirable theoretical properties.
The extended Kalman filter (see e.g. the textbooks [Sim06, Sä13, Jaz70]) is a generic and effective tool to estimate in real time the state of a nonlinear dynamical system from noisy measurements of some part or some function of the system. (The ordinary Kalman filter deals with linear systems.) Its use in navigation systems (GPS, vehicle control, spacecraft...), time series analysis, econometrics, etc. [Sä13], is extensive to the point that it has been described as "one of the great discoveries of mathematical engineering" [GA15].
The goal of this text is to show that the natural gradient, when applied online, is a particular case of the extended Kalman filter. Indeed, the extended Kalman filter can be used to estimate the parameters of a statistical model (probability distribution), by viewing the parameters as the hidden state of a "static" dynamical system, and viewing i.i.d. samples as noisy observations depending on the parameters¹. We show that doing so is exactly equivalent to performing an online stochastic natural gradient descent (Theorem 2).
This results in a rigorous dictionary between the natural gradient objects from statistical learning and the objects appearing in Kalman filtering; for instance, a larger learning rate for the natural gradient descent exactly corresponds to a fading memory in the Kalman filter (Proposition 3). Table 1 lists a few correspondences between objects from the Kalman filter side and from the natural gradient side, following from the theorems and propositions below. Note that the correspondence is one-sided: the online natural gradient is exactly an extended Kalman filter, but only corresponds to a particular use of the Kalman filter for parameter estimation problems (i.e., with static dynamics on the parameter part of the system).
Beyond the static case, we also consider the learning of the parameters of a general dynamical system, where subsequent observations exhibit temporal patterns instead of being i.i.d.; in statistical learning this is called a recurrent model, for instance, a recurrent neural network. We refer to [Jae02] for an introduction to recurrent models in statistical learning (recurrent neural networks) and the afferent techniques (including Kalman filters), and to [Hay01] for a clear, in-depth treatment of Kalman filtering for recurrent models. We prove (Theorem 12) that the extended Kalman filter applied jointly to the state and parameter, amounts to a natural gradient on top of real-time recurrent learning (RTRL), a classical (and costly) online algorithm for recurrent network training [Jae02].
Thus, we provide a bridge between techniques from large-scale statistical learning (natural gradient, RTRL) and a central object from mathematical engineering, signal processing, and estimation theory. Casting the natural gradient as a specific case of the extended Kalman filter is an instance of the provocative statement from [LS83] that "there is only one recursive identification method" that is optimal on quadratic functions. Indeed, the online natural gradient descent fits into the framework of [LS83, §3.4.5].

¹For this we slightly extend the definition of the Kalman filter to include discrete observations, by defining (Def. 5) the measurement error as T(y) − ŷ instead of y − ŷ, where T is the sufficient statistics of an exponential family model for output noise with mean ŷ. This reduces to the standard filter for Gaussian output noise, and naturally covers categorical outputs as often used in statistical learning (with ŷ the class probabilities in a softmax classifier and T a "one-hot" encoding of y).
Table 1: correspondence between extended Kalman filtering and online natural gradient.

i.i.d. (static, non-recurrent) model ŷ_t = h(θ, u_t):

  Extended Kalman filter on static parameter θ  |  Online natural gradient on θ with learning rate η_t = 1/(t+1)
  Covariance matrix P_t                         |  Fisher information matrix J_t = η_t P_t^{-1}
  Bayesian prior P_0                            |  Fisher matrix initialization J_0 = P_0^{-1}
  Fading memory                                 |  Larger or constant learning rate
  Fading memory + constant prior                |  Fisher matrix regularization

Recurrent (state space) model ŷ_t = Φ(ŷ_{t−1}, θ, u_t):

  Extended Kalman filter on (θ, ŷ)              |  RTRL + natural gradient + state correction
  Covariance of θ alone, P^θ                    |  Fisher matrix J_t = η_t (P^θ)^{-1}
  Correlation between θ and ŷ_t                 |  RTRL gradient estimate ∂ŷ_t/∂θ

Arguably, [Ber96] identifies the extended Kalman filter with a Gauss-Newton gradient descent for the specific case of nonlinear regression. [dFNG00] interprets process noise in the static Kalman filter as an adaptive, per-parameter learning rate, thus akin to a preconditioning matrix. [ŠKT01] uses the Fisher information matrix to study the variance of parameter estimation in Kalman-like filters, without using a natural gradient; [BL03] comment on the similarity between Kalman filtering and a version of Amari's natural gradient for the specific case of least squares regression; [Mar14] and [Oll15] mention the relationship between natural gradient and the Gauss-Newton Hessian approximation; [Pat16] exploits the relationship between second-order gradient descent and Kalman filtering in specific cases including linear regression; [LCL+17] use a natural gradient descent over Gaussian distributions for an auxiliary problem arising in Kalman-like Bayesian filtering, a problem independent from the one treated here. For the recurrent (non-i.i.d.) case, our result is that joint Kalman filtering is essentially a natural gradient on top of the classical RTRL algorithm for recurrent models [Jae02].
[Wil92] already observed that starting with the Kalman filter and introducing drastic simplifications (doing away with the covariance matrix) results in RTRL, while [Hay01,§5] contains statements that can be interpreted as relating Kalman filtering and preconditioned RTRL-like gradient descent for recurrent models (Section 3.2).

Perspectives.
In this text our goal is to derive the precise correspondence between natural gradient and Kalman filtering for parameter estimation (Thm. 2, Prop. 3, Prop. 4, Thm. 12), and to work out an exact dictionary between the mathematical objects on both sides. This correspondence suggests several possible avenues for research, which nevertheless are not explored here.
First, the correspondence with the Kalman filter brings new interpretations and suggestions for several natural gradient hyperparameters, such as Fisher matrix initialization, equality between Fisher matrix decay rate and learning rate, or the amount of regularization applied to the Fisher matrix (Section 2.2). The natural gradient can be quite sensitive to these hyperparameters. A first step would be to test the matrix decay rate and regularization values suggested by the Bayesian interpretation (Prop. 4) and see if they help with the natural gradient, or if these suggestions are overridden by the various approximations needed to apply the natural gradient in practice. These empirical tests are beyond the scope of the present study.
Next, since statistical learning deals with either continuous or categorical data, we had to extend the usual Kalman filter to such a setting. Traditionally, non-Gaussian output models have been treated by applying a nonlinearity to a standard Gaussian noise (Section 2.3). Instead, modeling the measurement noise as an exponential family (Appendix and Def. 5) allows for a unified treatment of the standard case (Gaussian output noise with known variance), of discrete categorical observations, or other exponential noise models (e.g., Gaussian noise with unknown variance). We did not test the empirical consequences of this choice, but it certainly makes the mathematical treatment flow smoothly, in particular the view of the Kalman filter as preconditioned gradient descent (Prop. 6).
Neither the natural gradient nor the extended Kalman filter scale well to large-dimensional models as currently used in machine learning, so that approximations are required. The correspondence raises the possibility that various methods developed for Kalman filtering (e.g., particle or unscented filters) or for natural gradient approximations (e.g., matrix factorizations such as the Kronecker product [MG15] or quasi-diagonal reductions [Oll15,MCO16]) could be transferred from one viewpoint to the other.
In statistical learning, other means have been developed to attain the same asymptotic efficiency as the natural gradient, notably trajectory averaging (e.g. [PJ92], or [Mar14] for the relationship to natural gradient) at little algorithmic cost. One may wonder if these can be generalized to filtering problems.
Proof techniques could be transferred as well: for instance, Amari [Ama98] gave a strong but sometimes informal argument that the natural gradient is Fisher-efficient, i.e., the resulting parameter estimate is asymptotically optimal for the Cramér-Rao bound; alternate proofs could be obtained by transferring related statements for the extended Kalman filter, e.g., combining techniques from [ŠKT01,BRD97,LS83].
Organization of the text. In Section 1 we set the notation, recall the definition of the natural gradient (Def. 1), and explain how Kalman filtering can be used for parameter estimation in statistical learning (Section 1.3); the definition of the Kalman filter is included in Def. 5. Section 2 gives the main statements for viewing the natural gradient as an instance of an extended Kalman filter for i.i.d. observations (static systems), first intuitively via a heuristic asymptotic argument (Section 2.1), then rigorously (Thm. 2, Prop. 3, Prop. 4). The proof of these results appears in Section 2.3 and sheds some light on the geometry of Kalman filtering. Finally, the case of non-i.i.d. observations (recurrent or state space model) is treated in Section 3.
Acknowledgments. Many thanks to Silvère Bonnabel, Gaétan Marceau-Caron, and the anonymous reviewers for their careful reading of the manuscript, corrections, and suggestions for the presentation and organization of the text. I would also like to thank Shun-ichi Amari, Frédéric Barbaresco, and Nando de Freitas for additional comments and for pointing out relevant references.
1 Problem setting, natural gradient, Kalman filter

Problem setting
In statistical learning, we have a series of observation pairs (u_1, y_1), …, (u_t, y_t), … and want to predict y_t from u_t using a probabilistic model p_θ. Assume for now that y_t is real-valued (regression problem) and that the model for y_t is a Gaussian centered on a predicted value ŷ_t, with known covariance matrix R_t, namely

  y_t ∼ N(ŷ_t, R_t),    ŷ_t = h(θ, u_t).

The function h may represent any computation, for instance, a feedforward neural network with input u, parameters θ, and output ŷ. The goal is to find the parameters θ such that the prediction ŷ_t = h(θ, u_t) is as close as possible to y_t: the loss function is

  ℓ_t := − ln p(y_t | ŷ_t) = (1/2) (y_t − ŷ_t)^⊤ R_t^{-1} (y_t − ŷ_t)

up to an additive constant. For non-Gaussian outputs, we assume that the noise model on y_t given ŷ_t belongs to an exponential family, namely, that ŷ_t is the mean parameter of an exponential family of distributions² over y_t; we again define the loss function as ℓ_t := − ln p(y_t | ŷ_t), and the output noise R_t can be defined as the covariance matrix of the sufficient statistics of y_t given this mean (Def. 5). For a Gaussian output noise this works as expected. For instance, for a classification problem, the output is categorical, y_t ∈ {1, …, K}, and ŷ_t will be the set of probabilities ŷ_t = (p_1, …, p_{K−1}) to have y_t = 1, …, K−1.
(The last probability p_K is determined by the others via Σ_k p_k = 1 and has to be excluded to obtain a non-degenerate parameterization and an invertible covariance matrix R_t.) This convention allows us to extend the definition of the Kalman filter to such a setting (Def. 5) in a natural way, just by replacing the measurement error y_t − ŷ_t with T(y_t) − ŷ_t, with T the sufficient statistics of the exponential family. (For Gaussian noise this is the same, as T(y) is y.) In neural network terms, this means that the output layer of the network is fed to a loss function that is the log-loss of an exponential family, but this places no restriction on the rest of the model.
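As a concrete illustration, here is a minimal sketch (in Python with NumPy; the values and variable names are ours, not from the text) of the measurement error T(y) − ŷ for a categorical output with K = 3 classes, the last class being dropped as above:

```python
import numpy as np

# Sketch of the exponential-family measurement error T(y) - yhat for a
# categorical ("softmax") output with K classes, the last class dropped.
# K, probs and y_obs are illustrative values, not from the text.
K = 3
probs = np.array([0.7, 0.2, 0.1])   # full class probabilities, sum to 1
yhat = probs[:K - 1]                # mean parameter: first K-1 probabilities

def T(y):
    # sufficient statistics: "one-hot" encoding over the first K-1 classes
    return (np.arange(K - 1) == y).astype(float)

y_obs = 1                           # observed class
error = T(y_obs) - yhat             # replaces y - yhat in the extended filter
print(error)                        # → [-0.7  0.8]
```

For Gaussian output noise, T(y) = y and this reduces to the usual innovation y − ŷ.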

General notation.
In statistical learning, the external inputs or regressor variables are often denoted x. In Kalman filtering, x often denotes the state of the system, while the external inputs are often u. Thus we will avoid x altogether and denote by u the inputs and by s the state of the system.
The variable to be predicted at time t will be y_t, and ŷ_t is the corresponding prediction. In general ŷ_t and y_t may be different objects, in that ŷ_t encodes a full probabilistic prediction for y_t. For Gaussians with known variance, ŷ_t is just the predicted mean of y_t, so in this case y_t and ŷ_t are the same type of object. For Gaussians with unknown variance, ŷ encodes both the mean and second moment of y. For discrete categorical data, ŷ encodes the probability of each possible outcome y.
Thus, the formal setting for this text is as follows: we are given a sequence of finite-dimensional observations (y_t) with each y_t ∈ R^{dim(y)}, a sequence of inputs (u_t) with each u_t ∈ R^{dim(u)}, and a parametric model

  ŷ = h(θ, u_t)

with parameter θ ∈ R^{dim(θ)} and h some fixed smooth function from R^{dim(θ)} × R^{dim(u)} to R^{dim(ŷ)}. We are given an exponential family (output noise model) p(y|ŷ) on y with mean parameter ŷ and sufficient statistics T(y) (see the Appendix), and we define the loss function ℓ_t := − ln p(y_t | ŷ_t).

²An exponential family on y with sufficient statistics T_1(y), …, T_K(y) and parameter β is given by p_β(dy) = exp(Σ_k β_k T_k(y)) λ(dy)/Z(β), where Z(β) is a normalizing constant and λ(dy) is any reference measure on y. For instance, if y ∈ R^K, T_k(y) = y_k and λ(dy) is a Gaussian measure centered on 0, by varying β one gets all Gaussian measures with the same covariance matrix and another mean. y may be discrete, e.g., Bernoulli distributions correspond to λ the uniform measure on y ∈ {0, 1} and a single sufficient statistic T(0) = 0, T(1) = 1. Often, the mean parameter T̄ := E_{y∼p_β} T(y) is a more convenient parameterization than β. Exponential families maximize entropy (minimize information divergence from λ) for a given mean of T.
The natural gradient descent on the parameter θ_t will use the Fisher matrix J_t. The Kalman filter will have posterior covariance matrix P_t.
For multidimensional quantities x and y = f(x), we denote by ∂y/∂x the Jacobian matrix of y w.r.t. x, whose (i, j) entry is ∂f_i(x)/∂x_j. This satisfies the chain rule ∂z/∂y · ∂y/∂x = ∂z/∂x. With this convention, gradients of real-valued functions are row vectors, so that a gradient descent takes the form x ← x − η (∂f/∂x)^⊤. For a column vector u, u^{⊗2} is synonymous with uu^⊤, and with u^⊤u for a row vector.

Natural gradient descent
A standard approach to optimize the parameter θ of a probabilistic model, given a sequence of observations (y_t), is an online gradient descent θ_t ← θ_{t−1} − η_t (∂ℓ_t/∂θ)^⊤ with learning rate η_t. This simple gradient descent is particularly suitable for large datasets and large-dimensional models [BL03], but has several practical and theoretical shortcomings. For instance, it uses the same non-adaptive learning rate for all parameter components. Moreover, simple changes in parameter encoding or in data presentation (e.g., encoding black and white in images by 0/1 or 1/0) can result in different learning performance. This motivated the introduction of the natural gradient [Ama98]. It is built to achieve invariance with respect to parameter re-encodings; in particular, learning becomes insensitive to the characteristic scale of each parameter direction, so that different directions naturally get suitable learning rates. The natural gradient is the only general way to achieve such invariance [AN00, §2.4].
The natural gradient preconditions the gradient descent with J(θ)^{-1} where J is the Fisher information matrix [Kul97] with respect to the parameter θ. For a smooth probabilistic model p(y|θ) over a random variable y with parameter θ, the latter is defined as

  J(θ) := E_{y∼p(y|θ)} (∂ ln p(y|θ)/∂θ)^{⊗2}.

Definition 1 below formally introduces the online natural gradient. If the model for y involves an input u, then an expectation or empirical average over the inputs is introduced in the definition of J [AN00, §8.2]. However, this comes at a large computational cost for large-dimensional models: just storing the Fisher matrix already costs O((dim θ)²). Various strategies are available to approximate the natural gradient for complex models such as neural networks, using diagonal or block-diagonal approximation schemes for the Fisher matrix, e.g., [LMB07, Oll15, MCO16, GS15, MG15].
Definition 1 (Online natural gradient). Consider a statistical model with parameter θ that predicts an output y given an input u. Suppose that the prediction takes the form y ∼ p(y|ŷ) where ŷ = h(θ, u) depends on the input via a model h with parameter θ. Given observation pairs (u_t, y_t), the goal is to minimize, online, the loss function ℓ_t(θ) := − ln p(y_t | h(θ, u_t)) as a function of θ.
The online natural gradient maintains a current estimate θ_t of the parameter θ and a current approximation J_t of the Fisher matrix. At each step, set ŷ_t := h(θ_{t−1}, u_t) and write ℓ(y) := − ln p(y|ŷ_t) for the loss on a hypothetical observation y. The Fisher matrix estimate is first updated as

  J_t ← (1 − γ_t) J_{t−1} + γ_t E_{y∼p(y|ŷ_t)} (∂ℓ(y)/∂θ)^{⊗2}    (1.7)

and the parameter is then estimated by a gradient descent with preconditioning matrix J_t^{-1}, namely

  θ_t ← θ_{t−1} − η_t J_t^{-1} (∂ℓ(y_t)/∂θ)^⊤

with learning rate η_t and Fisher matrix decay rate γ_t.
In the Fisher matrix update, the expectation over all possible values y ∼ p(y|ŷ_t) can often be computed algebraically, but this is sometimes computationally bothersome (for instance, in neural networks, it requires dim(ŷ_t) distinct backpropagation steps [Oll15]). A common solution [APF00, LMB07, Oll15, PB13] is to just use the value y = y_t (outer product approximation) instead of the expectation over y. Another is to use a Monte Carlo approximation with a single sample y ∼ p(y|ŷ_t) [Oll15, MCO16], namely, using the gradient of a synthetic sample instead of the actual observation y_t in the Fisher matrix. These two solutions are often confused; only the latter provides an unbiased estimate of the Fisher matrix, see the discussion in [Oll15, PB13].
The online "smoothed" update of the Fisher matrix in (1.7) mixes past and present estimates (this or similar updates are used in [LMB07, MCO16]). The reason is at least twofold. First, the "genuine" Fisher matrix involves an expectation over the inputs u_t [AN00, §8.2]: this can be approximated online only via a moving average over inputs (e.g., γ_t = 1/t realizes an equal-weight average over all inputs seen so far). Second, the expectation over y ∼ p(y|ŷ_t) in (1.7) is often replaced with a Monte Carlo estimation with only one value of y, and averaging over time compensates for this Monte Carlo sampling.
As a consequence, since θ_t changes over time, the estimate J_t mixes values obtained at different values of θ, and converges to the Fisher matrix only if θ_t changes slowly, i.e., if η_t → 0. The correspondence below with Kalman filtering suggests using γ_t = η_t.
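To make Definition 1 concrete, the following sketch runs the online natural gradient with the rates η_t = γ_t = 1/(t+1) suggested by the Kalman correspondence on a toy linear regression with unit output variance; for this model the per-sample Fisher term E_y (∂ℓ/∂θ)^{⊗2} equals uu^⊤ and is computed exactly rather than by Monte Carlo. The model and data are synthetic assumptions for illustration, not from the text.

```python
import numpy as np

# Online natural gradient (Def. 1) on a toy linear model yhat = theta^T u
# with Gaussian output noise of variance 1, using eta_t = gamma_t = 1/(t+1).
# For this model E_y (dl/dtheta)^{x2} = u u^T, computed algebraically.
rng = np.random.default_rng(0)
dim = 3
theta_true = np.array([1.0, -2.0, 0.5])   # synthetic ground truth
theta = np.zeros(dim)
J = np.eye(dim)                           # Fisher matrix initialization J_0

for t in range(1, 2001):
    eta = gamma = 1.0 / (t + 1)
    u = rng.normal(size=dim)
    y = theta_true @ u + 0.1 * rng.normal()
    yhat = theta @ u
    g = (yhat - y) * u                    # gradient of the square loss in theta
    J = (1 - gamma) * J + gamma * np.outer(u, u)   # smoothed Fisher update (1.7)
    theta = theta - eta * np.linalg.solve(J, g)    # preconditioned step

print(np.round(theta, 1))                 # ≈ theta_true
```

On this linear-Gaussian model, the recursion above coincides with recursive least squares with prior precision J_0, in line with the correspondence of Section 2.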

Kalman filtering for parameter estimation
One possible definition of the extended Kalman filter is as follows [Sim06, §15.1]. We are trying to estimate the current state of a dynamical system s_t whose evolution equation is known but whose precise value is unknown; at each time step, we have access to a noisy measurement y_t of a quantity ŷ_t = h(s_t) which depends on this state. The Kalman filter maintains an approximation of a Bayesian posterior on s_t given the observations y_1, …, y_t. The posterior distribution after t observations is approximated by a Gaussian with mean s_t and covariance matrix P_t. (Indeed, Bayesian posteriors always tend to Gaussians asymptotically under mild conditions, by the Bernstein–von Mises theorem [vdV00].) The Kalman filter prescribes a way to update s_t and P_t when new observations become available.
The Kalman filter update is summarized in Definition 5 below. It is built to provide the exact value of the Bayesian posterior in the case of linear dynamical systems with Gaussian measurements and a Gaussian prior. In that sense, it is exact at first order.
The Kalman filtering viewpoint on a statistical learning problem is that we are facing a system with hidden variable θ, with an unknown value that does not evolve in time, and that the observations y_t bring more and more information on θ. Thus, a statistical learning problem can be tackled by applying the extended Kalman filter to the unknown variable s_t = θ, whose underlying dynamics from time t to time t+1 is just to remain unchanged (f = Id and noise on s is 0 in Definition 5). In such a setting, the posterior covariance matrix P_t will generally tend to 0 as observations accumulate and the parameter is identified better³ (this occurs at rate 1/t for the basic filter, which estimates θ from all t past observations at time t, or at other rates if fading memory is included, see below). The initialization θ_0 and its covariance P_0 can be interpreted as Bayesian priors on θ [SW88, LS83].
We will refer to this as a static Kalman filter. In the static case and without fading memory, the posterior covariance P t after t observations will decrease like O(1/t), so that the parameter gets updated by O(1/t) after each new observation. Introducing fading memory for past observations (equivalent to adding noise on θ at each step, Q t ∝ P t|t−1 in Def. 5) leads to a larger covariance and faster updates.
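A minimal sketch of this static filter on a toy linear-Gaussian model (Def. 5 with s = θ, f = Id, Q = 0; the names and synthetic data are illustrative assumptions, not from the text):

```python
import numpy as np

# Static extended Kalman filter (s = theta, f = Id, Q = 0) for the toy
# linear-Gaussian model y = theta^T u + noise, with R = 1.
rng = np.random.default_rng(1)
dim = 3
theta_true = np.array([1.0, -2.0, 0.5])   # synthetic ground truth
s = np.zeros(dim)                         # state estimate = parameter estimate
P = np.eye(dim)                           # prior covariance P_0
R = 1.0

for t in range(2000):
    u = rng.normal(size=dim)
    y = theta_true @ u + 0.1 * rng.normal()
    H = u                          # Jacobian d yhat / d s for yhat = s^T u
    S = H @ P @ H + R              # innovation variance (scalar output)
    K = P @ H / S                  # Kalman gain
    s = s + K * (y - s @ u)        # measurement update of the parameter
    P = P - np.outer(K, H @ P)     # posterior covariance, shrinks like O(1/t)

print(np.round(s, 1))              # ≈ theta_true
```

On this linear model the static filter coincides with recursive least squares, and the posterior covariance P_t indeed decreases roughly like 1/t.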
An example: Feedforward neural networks. The Kalman approach above can be applied to any parametric statistical model. For instance, [SW88] treat the case of a feedforward neural network. In our setting this is described as follows. Let u be the input of the model and y the true (desired) output. A feedforward neural network can be described as a function ŷ = h(θ, u) where θ is the set of all parameters of the network, where h represents all computations performed by the network on input u, and ŷ encodes the network prediction for the value of the output y on input u. For categorical observations y, ŷ is usually a set of predicted probabilities for all possible classes; while for regression problems, ŷ is directly the predicted value. In both cases, the error function to be minimized can be defined as ℓ(y) := − ln p(y|ŷ): in the regression case, ŷ is interpreted as the mean of a Gaussian model on y, so that − ln p(y|ŷ) is the square error up to a constant.
Training the neural network amounts to estimating the network parameter θ from the observations. Applying a static Kalman filter to this problem [SW88] amounts to using Def. 5 with s = θ, f = Id and Q = 0. At first glance this looks quite different from the common gradient descent (backpropagation) approach for neural networks. The backpropagation operation is represented in the Kalman filter by the computation of H = ∂h(s, u)/∂s (2.17), where s is the parameter. We show that the additional operations of the Kalman filter correspond to using a natural gradient instead of a vanilla gradient.
Unfortunately, for models with high-dimensional parameters such as neural networks, the Kalman filter is computationally costly and requires block-diagonal approximations of P_t (which is a square matrix of size dim θ); moreover, the filter needs H_t = ∂ŷ_t/∂θ, which requires one separate backpropagation for each component of the output ŷ_t.

Natural gradient as a Kalman filter: the static (i.i.d.) case
We now write the explicit correspondence between an online natural gradient to estimate the parameter of a statistical model from i.i.d. observations, and a static extended Kalman filter. We first give a heuristic argument that outlines the main ideas from the proof (Section 2.1). Then we state the formal correspondences.

First, the static Kalman filter corresponds to an online natural gradient with learning rate 1/t (Thm. 2). The rate 1/t arises because such a filter takes into account all previous evidence without decay factors (and with process noise Q = 0 in the Kalman filter), so that the posterior covariance matrix decreases like O(1/t). Asymptotically, this is the optimal rate in statistical learning [Ama98]. (Note, however, that the online natural gradient and extended Kalman filter are identical at every time step, not only asymptotically.)

The 1/t rate is often too slow in practical applications, especially when starting far away from an optimal parameter value. The natural gradient/Kalman filter correspondence is not specific to the O(1/t) rate. Larger learning rates in the natural gradient correspond to a fading memory Kalman filter (adding process noise Q proportional to the posterior covariance at each step, corresponding to a decay factor for the weight of previous observations); this is Proposition 3. In such a setting, the posterior covariance matrix in the Kalman filter does not decrease like O(1/t); for instance, a fixed decay factor for the fading memory corresponds to a constant learning rate.
Finally, a fading memory in the Kalman filter may erase prior Bayesian information (θ 0 , P 0 ) too fast; maintaining the weight of the prior in a fading memory Kalman filter is treated in Proposition 4 and corresponds, on the natural gradient side, to a so-called weight decay [Bis06] towards θ 0 together with a regularization of the Fisher matrix, at specific rates.

Natural gradient as a Kalman filter: heuristics
As a first ingredient in the correspondence, we interpret Kalman filters as gradient descents: the extended Kalman filter actually performs a gradient descent on the log-likelihood of each new observation, with preconditioning matrix equal to the posterior covariance matrix. This is Proposition 6 below. This relies on having an exponential family as the output noise model.
Meanwhile, the natural gradient uses the Fisher matrix as a preconditioning matrix. The Fisher matrix is the average Hessian of the log-likelihood, thanks to the classical double definition of the Fisher matrix as square gradient or Hessian, for any probabilistic model p(y|θ) [Kul97]. Assume that the probability of the data given the parameter θ is approximately Gaussian, p(y_1, …, y_t | θ) ∝ exp(−(θ − θ*)^⊤ Σ^{-1} (θ − θ*)/2) with covariance Σ. This often holds asymptotically thanks to the Bernstein–von Mises theorem; moreover, the posterior covariance Σ typically decreases like 1/t. Then the Hessian (w.r.t. θ) of the total negative log-likelihood of (y_1, …, y_t) is Σ^{-1}, the inverse covariance of θ. So the average Hessian per data point, the Fisher matrix J, is approximately J ≈ Σ^{-1}/t. Since a Kalman filter to estimate θ is essentially a gradient descent preconditioned with Σ, it will be the same as using a natural gradient with learning rate 1/t. Using a fading memory Kalman filter will estimate Σ from fewer past observations and provide larger learning rates.
Another way to understand the link between natural gradient and Kalman filter is as a second-order Taylor expansion of data log-likelihood. Assume that the total data log-likelihood at time t, L_t(θ) := −Σ_{s=1}^t ln p(y_s|θ), is approximately quadratic as a function of θ, with a minimum at θ*_t and a Hessian h_t. Then when new data points become available, this quadratic approximation would be updated as follows (online Newton method):

  h_t ≈ h_{t−1} + ∂²ℓ_t(θ*_{t−1})/∂θ²    (2.1)

  θ*_t ≈ θ*_{t−1} − h_t^{-1} (∂ℓ_t(θ*_{t−1})/∂θ)^⊤    (2.2)

and indeed these are equalities for a quadratic log-likelihood. Namely, the update of θ*_t is a gradient ascent on log-likelihood, preconditioned by the inverse Hessian (Newton method). Note that h_t grows like t (each data point adds its own contribution). Thus, h_t is t times the empirical average of the Hessian, i.e., approximately t times the Fisher matrix of the model (h_t ≈ tJ). So this update is approximately a natural gradient descent with learning rate 1/t.
Meanwhile, the Bayesian posterior on θ (with uniform prior) after observations y_1, …, y_t is proportional to e^{−L_t} by definition of L_t. Under the quadratic approximation L_t(θ) ≈ L_t(θ*_t) + (θ − θ*_t)^⊤ h_t (θ − θ*_t)/2, this is a Gaussian distribution centered at θ*_t with covariance matrix h_t^{-1}. The Kalman filter is built to maintain an approximation P_t of this covariance matrix h_t^{-1}, and then performs a gradient step preconditioned on P_t similar to (2.2).
The simplest situation corresponds to an asymptotic rate O(1/t), i.e., estimating the parameter based on all past evidence; the update (2.1) of the Hessian is additive, so that h_t grows like t and h_t^{-1} in (2.2) produces an effective learning rate O(1/t). Introducing a decay factor for older observations, multiplying the term h_{t−1} in (2.1), produces a fading memory effect and results in larger learning rates.
These heuristics justify the statement from [LS83] that "there is only one recursive identification method". Close to an optimum (so that the Hessian is positive), all second-order algorithms are essentially an online Newton step (2.1)-(2.2) approximated in various ways.
But even though this heuristic argument appears to be approximate or asymptotic, the correspondence between online natural gradient and Kalman filter presented below is exact at every time step.
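The exactness claim for quadratic log-likelihoods can be checked numerically: for per-step quadratic losses, the online Newton updates (2.1)-(2.2) recover the exact minimizer of the cumulative loss at every step. In the sketch below (our own toy setup, not from the text), a tiny initial Hessian h_0 stands in for a negligible quadratic prior.

```python
import numpy as np

# Online Newton updates (2.1)-(2.2) on exactly quadratic per-step losses
# l_t(theta) = (a_t^T theta - b_t)^2 / 2, with synthetic a_t, b_t.
rng = np.random.default_rng(2)
dim = 2
h = 1e-8 * np.eye(dim)            # h_0: tiny ridge standing in for a prior
theta = np.zeros(dim)
A = np.zeros((dim, dim))          # running sum of the Hessians a_t a_t^T
c = np.zeros(dim)                 # running sum of b_t a_t

for t in range(50):
    a = rng.normal(size=dim)
    b = rng.normal()
    grad = (a @ theta - b) * a                 # gradient of l_t at theta_{t-1}
    h = h + np.outer(a, a)                     # (2.1): add the Hessian of l_t
    theta = theta - np.linalg.solve(h, grad)   # (2.2): Newton step
    A += np.outer(a, a)
    c += b * a
    # theta_t exactly minimizes the cumulative loss plus the tiny h_0 term
    assert np.allclose(theta, np.linalg.solve(1e-8 * np.eye(dim) + A, c))

print("online Newton matches the batch minimizer at every step")
```

This is the "only one recursive identification method" phenomenon in miniature: on quadratics, the online second-order update and the batch solution coincide.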

Statement of the correspondence, static (i.i.d.) case
For the statement of the correspondence, we assume that the output noise on y given ŷ is modelled by an exponential family with mean parameter ŷ. This covers the traditional Gaussian case y ∼ N(ŷ, Σ) with fixed Σ often used in Kalman filters. The Appendix contains the necessary background on exponential families.
Theorem 2 (Natural gradient as a static Kalman filter). These two algorithms are identical under the correspondence (θ_t, J_t) ↔ (s_t, P_t^{-1}/(t+1)):

1. The online natural gradient (Def. 1) with learning rates η_t = γ_t = 1/(t+1), applied to learn the parameter θ of a model that predicts observations (y_t) with inputs (u_t), using a probabilistic model y ∼ p(y|ŷ) with ŷ = h(θ, u), where h is any model and p(y|ŷ) is an exponential family with mean parameter ŷ.
2. The extended Kalman filter (Def. 5) to estimate the state s from observations (y_t) and inputs (u_t), using a probabilistic model y ∼ p(y|ŷ) with ŷ = h(s, u) and p(y|ŷ) an exponential family with mean parameter ŷ, with static dynamics and no added noise on s (f(s, u) = s and Q = 0 in Def. 5).
The correspondence is exact only if the Fisher metric is updated before the parameter in the natural gradient descent (as in Definition 1).
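The correspondence can be checked numerically on a small example. The sketch below (made-up data; a linear model ŷ = u·θ with fixed Gaussian output noise, so the "extended" filter is exact) runs the two algorithms side by side; all names and values are illustrative.

```python
import numpy as np

# Numerical check of the correspondence (theta_t, J_t) <-> (s_t, P_t^{-1}/(t+1))
# for a linear model yhat = u.theta with output noise N(yhat, R), R fixed.
rng = np.random.default_rng(1)
dim, T, R = 2, 50, 0.5
theta = np.zeros(dim); J = np.eye(dim)       # natural gradient variables
s = np.zeros(dim); P = np.linalg.inv(J)      # Kalman variables, P_0 = J_0^{-1}
for t in range(1, T + 1):
    u = rng.normal(size=dim)
    y = rng.normal()                          # arbitrary observation
    # --- online natural gradient, eta_t = gamma_t = 1/(t+1) ---
    gamma = eta = 1.0 / (t + 1)
    fisher = np.outer(u, u) / R               # E_y[(dl/dtheta)^{x2}] for Gaussian noise
    J = (1 - gamma) * J + gamma * fisher      # metric updated BEFORE the parameter
    grad = -(y - u @ theta) * u / R           # dl_t(y_t)/dtheta
    theta = theta - eta * np.linalg.solve(J, grad)
    # --- extended Kalman filter, static dynamics (f = Id, Q = 0) ---
    H = u[None, :]                            # Jacobian of h(s, u) = u.s
    K = P @ H.T / (H @ P @ H.T + R)           # Kalman gain
    P = P - K @ H @ P                         # P_t = (Id - K_t H_t) P_{t|t-1}
    s = s + (K * (y - u @ s)).ravel()         # s_t = s_{t|t-1} + K_t E_t
assert np.allclose(theta, s)                  # same parameter estimate
assert np.allclose(J, np.linalg.inv(P) / (T + 1))
```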
The correspondence with a Kalman filter provides an interpretation for various hyperparameters of online natural gradient descent. In particular, J_0 = P_0^{−1} can be interpreted as the inverse covariance of a Bayesian prior on θ [SW88]. This relates the initialization J_0 of the Fisher matrix to the initialization of θ: for instance, in neural networks it is recommended to initialize the weights according to a Gaussian of covariance diag(1/fan-in) (number of incoming weights) for each neuron; interpreting this as a Bayesian prior on weights, one may recommend to initialize the Fisher matrix to the inverse of this covariance, namely, J_0 = diag(fan-in). Indeed, this seemed to perform quite well in small-scale experiments.
Learning rates, fading memory, and metric decay rate. Theorem 2 exhibits a 1/(t + 1) learning rate for the online natural gradient. This is because the static Kalman filter for i.i.d. observations approximates the maximum a posteriori (MAP) of the parameter θ based on all past observations; MAP and maximum likelihood estimators change by O(1/t) when a new data point is observed. However, for nonlinear systems, optimality of the 1/t rate only occurs asymptotically, close enough to the optimum. In general, a 1/(t+1) learning rate is far from optimal if optimization does not start close to the optimum or if one is not using the exact Fisher matrix J t or covariance matrix P t .
Larger effective learning rates are achieved thanks to so-called "fading memory" variants of the Kalman filter, which put less weight on older observations. For instance, one may multiply the log-likelihood of previous points by a forgetting factor (1 − λ_t) before each new observation. This is equivalent to an additional step P_{t−1} ← P_{t−1}/(1 − λ_t) in the Kalman filter, or to the addition of an artificial process noise Q_t proportional to P_{t−1} in the model. Such strategies are reported to often improve performance, especially when the data do not truly follow the model [Sim06, §5.5, §7.4], [Hay01, §5.2.2]. See for instance [Ber96] for the relationship between Kalman fading memory and gradient descent learning rates (in a particular case).
Proposition 3 (Natural gradient rates and fading memory). Under the same model and assumptions as in Theorem 2, the following two algorithms are identical via the correspondence (θ_t, J_t) ↔ (s_t, η_t P_t^{−1}):
• An online natural gradient step with learning rate η_t and metric decay rate γ_t;
• A fading memory Kalman filter with an additional step P_{t−1} ← P_{t−1}/(1 − λ_t) before the transition step; such a filter iteratively optimizes a weighted log-likelihood function L_t of recent observations, with decay (1 − λ_t) at each step, namely
L_t(θ) := ln p_θ(y_t) + (1 − λ_t) L_{t−1}(θ), L_0 :≡ 0,
provided the following relations are satisfied:
γ_t = η_t, λ_t = 1 − (1 − η_t) η_{t−1}/η_t. (2.6)
For example, taking η_t = 1/(t + cst) corresponds to λ_t = 0, no decay for older observations, and an initial covariance P_0 = J_0^{−1}/cst. Taking a constant learning rate η_t = η_0 corresponds to a constant decay factor λ = η_0.
The proposition above computes the fading memory decay factors 1 − λ_t from the natural gradient learning rates η_t via (2.6). In the other direction, one can start with the decay factors λ_t and obtain the learning rates η_t via the cumulated sum of weights S_t: set S_0 := 1/η_0, then S_t := (1 − λ_t) S_{t−1} + 1 and η_t := 1/S_t. This clarifies how λ_t = 0 corresponds to η_t = 1/(t + cst), where the constant is S_0.
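This recursion is simple enough to sketch in code (an illustrative helper, not from the paper):

```python
# Convert fading-memory decay factors lambda_t into learning rates eta_t
# via the cumulated sum of weights: S_0 = 1/eta_0, S_t = (1 - lambda_t) S_{t-1} + 1,
# eta_t = 1/S_t.
def rates_from_decay(lambdas, eta0):
    S = 1.0 / eta0
    etas = []
    for lam in lambdas:
        S = (1.0 - lam) * S + 1.0
        etas.append(1.0 / S)
    return etas

etas_no_decay = rates_from_decay([0.0] * 5, eta0=0.5)  # eta_t = 1/(t + 2)
etas_const = rates_from_decay([0.5] * 5, eta0=0.5)     # constant eta = lambda = 0.5
```

As expected, λ_t = 0 recovers η_t = 1/(t + S_0), and a constant decay factor λ gives a constant learning rate.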
The learning rates also control the weight given to the Bayesian prior and to the starting point θ_0. For instance, with η_t = 1/(t + t_0) and large t_0, the gradient descent will move away slowly from θ_0; in the Kalman interpretation this corresponds to λ_t = 0 and a small initial covariance P_0 = J_0^{−1}/t_0 around θ_0, so that the prior weighs as much as t_0 observations. This result suggests setting γ_t = η_t in the online natural gradient descent of Definition 1. The intuitive explanation is as follows: both the Kalman filter and the natural gradient build a second-order approximation of the log-likelihood of past observations as a function of the parameter θ, as explained in Section 2.1. Using a fading memory corresponds to putting smaller weights on past observations; these weights affect the first-order and the second-order parts of the approximation in the same way. In the gradient viewpoint, the learning rate η_t corresponds to the first-order term (comparing (1.8) and (2.2)), while the Fisher matrix decay rate γ_t corresponds to the rate at which the second-order information is updated. Thus, the setting η_t = γ_t in the natural gradient corresponds to using the same decay weights for the first-order and second-order expansions of the log-likelihood of past observations.
Still, one should keep in mind that the extended Kalman filter is itself only an approximation for nonlinear systems. Moreover, from a statistical point of view, the second-order object J_t is higher-dimensional than the first-order information, so that estimating J_t based on more past observations may be more stable. Finally, for large-dimensional problems the Fisher matrix is always approximated, which affects optimality of the learning rates. So in practice, treating γ_t and η_t as hyperparameters to be tuned independently may still be beneficial, though γ_t = η_t seems a good place to start.

Regularization of the Fisher matrix and Bayesian priors.
A potential downside of fading memory in the Kalman filter is that the Bayesian interpretation is partially lost, because the Bayesian prior is forgotten too quickly. For instance, with a constant learning rate, the weight of the Bayesian prior decreases exponentially; likewise, with η_t = O(1/√t), the filter essentially works with the O(√t) most recent observations, while the weight of the prior decreases like ≈ e^{−√t} (as does the weight of the earliest observations; this weight is the product ∏_{s≤t} (1 − λ_s)). But precisely when working with fewer data points, one may wish the prior to play a greater role.
The Bayesian interpretation can be restored by explicitly optimizing a combination of the log-likelihood of recent points, and the log-likelihood of the prior. This is implemented in Proposition 4.
From the natural gradient viewpoint, this translates both as a regularization of the Fisher matrix (often useful in practice to numerically stabilize its inversion) and as a regularization of the gradient step. With a Gaussian prior N(θ_prior, Id), this manifests as an additional gradient step towards θ_prior and the addition of ε Id to the Fisher matrix, known respectively as weight decay and Tikhonov regularization [Bis06, §3.3, §5.5] in statistical learning.
Proposition 4 (Bayesian regularization of the Fisher matrix). Let π = N(θ_prior, Σ_0) be a Gaussian prior on θ. Under the same model and assumptions as in Theorem 2, the following two algorithms are equivalent:
• A modified fading memory Kalman filter that iteratively optimizes L_t(θ) + n_prior ln π(θ), where L_t is a weighted log-likelihood function of recent observations with decay (1 − λ_t), initialized with P_0 = (η_1/(1 + n_prior η_1)) Σ_0;
• A regularized online natural gradient step with learning rate η_t and metric decay rate γ_t, initialized with J_0 = Σ_0^{−1}, in which the Fisher matrix J_t is replaced with J_t + η_t n_prior Σ_0^{−1} and the gradient acquires the additional term λ_t n_prior Σ_0^{−1}(θ − θ_prior),
provided the relations between η_t, γ_t and λ_t of Proposition 3 are satisfied.
Thus, the regularization terms are fully determined by choosing the learning rates η_t, a prior such as N(0, 1/fan-in) (for neural networks), and a value of n_prior such as n_prior = 1 (the prior weighs as much as n_prior data points). This holds both for the regularization of the Fisher matrix, J_t + η_t n_prior Σ_0^{−1}, and for the regularization of the parameter via the extra gradient step λ_t n_prior Σ_0^{−1}(θ − θ_prior). The relative strength of regularization in the Fisher matrix decreases like η_t. In particular, a constant learning rate results in a constant regularization.
The added gradient step λ_t n_prior Σ_0^{−1}(θ − θ_prior) is modulated by λ_t, which depends on η_t; this extra term pulls towards the prior θ_prior. The Bayesian viewpoint guarantees that this extra term will not ultimately prevent convergence of the gradient descent (as the influence of the prior vanishes when the number of observations increases).
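As a rough illustration of how the two regularization terms enter one step (a plausible reading of the above; the function and all argument names are hypothetical, not the paper's algorithm):

```python
import numpy as np

# Sketch of one regularized natural gradient step: the Gaussian prior
# N(theta_prior, Sigma0) contributes a Tikhonov term eta*n_prior*Sigma0^{-1}
# to the Fisher matrix and a weight-decay pull toward theta_prior.
def regularized_natgrad_step(theta, J, grad, eta, lam, n_prior,
                             theta_prior, Sigma0_inv):
    J_reg = J + eta * n_prior * Sigma0_inv         # Tikhonov-regularized metric
    total_grad = grad + lam * n_prior * Sigma0_inv @ (theta - theta_prior)
    return theta - eta * np.linalg.solve(J_reg, total_grad)

# With zero data gradient, the step shrinks theta toward theta_prior = 0:
theta_new = regularized_natgrad_step(np.array([1.0, -2.0]), np.eye(2), np.zeros(2),
                                     eta=0.1, lam=0.1, n_prior=1.0,
                                     theta_prior=np.zeros(2), Sigma0_inv=np.eye(2))
```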
It is not clear how sensitive these Bayesian-motivated recommendations for natural gradient descent are to using only an approximation of the Fisher matrix.

Proofs for the static case
The proof of Theorem 2 starts with the interpretation of the Kalman filter as a gradient descent (Proposition 6).
We first recall the exact definition and the notation we use for the extended Kalman filter.
Definition 5 (Extended Kalman filter). Consider a dynamical system with state s_t, inputs u_t and outputs y_t,
s_t = f(s_{t−1}, u_t) + N(0, Q_t), ŷ_t = h(s_t, u_t), y_t ∼ p(y|ŷ_t),
where p(·|ŷ) denotes an exponential family with mean parameter ŷ (e.g., y = N(ŷ, R) with fixed covariance matrix R).
The extended Kalman filter for this dynamical system estimates the current state s_t given observations y_1, ..., y_t in a Bayesian fashion. At each time, the Bayesian posterior distribution of the state given y_1, ..., y_t is approximated by a Gaussian N(s_t, P_t), so that s_t is the approximate maximum a posteriori and P_t is the approximate posterior covariance matrix. (The prior is N(s_0, P_0) at time 0.) Each time a new observation y_t is available, these estimates are updated as follows.
The transition step (before observing y_t) is
s_{t|t−1} := f(s_{t−1}, u_t), (2.12)
P_{t|t−1} := F_t P_{t−1} F_t^⊤ + Q_t, where F_t := (∂f/∂s)(s_{t−1}, u_t), (2.13)
and the observation step after observing y_t is
ŷ_t := h(s_{t|t−1}, u_t), (2.14)
E_t := T(y_t) − ŷ_t, (2.15)
R_t := Cov_{y∼p(y|ŷ_t)}(T(y)), (2.16)
(these are just the error E_t = y_t − ŷ_t and the covariance matrix R_t = R for a Gaussian model y = N(ŷ, R) with known R)
H_t := (∂h/∂s)(s_{t|t−1}, u_t), (2.17)
K_t := P_{t|t−1} H_t^⊤ (H_t P_{t|t−1} H_t^⊤ + R_t)^{−1}, (2.18)
P_t := (Id − K_t H_t) P_{t|t−1}, (2.19)
s_t := s_{t|t−1} + K_t E_t. (2.20)
For non-Gaussian output noise, the definition of E_t and R_t above via the mean parameter ŷ of an exponential family differs from the practice of modelling non-Gaussian noise via a nonlinear function applied to Gaussian noise. This allows for a straightforward treatment of various output models, such as discrete outputs or Gaussians with unknown variance. In the Gaussian case with known variance our definition is fully standard.
The proof starts with the interpretation of the Kalman filter as a gradient descent preconditioned by P_t. Compare this result and Lemma 9 to [Hay01, (5.68)-(5.73)].
Proposition 6 (Kalman filter as preconditioned gradient descent). The update of the state s in a Kalman filter can be seen as an online gradient descent on data log-likelihood, with preconditioning matrix P_t. More precisely, denoting ℓ_t(y) := − ln p(y|ŷ_t), the update (2.20) is equivalent to
s_t = s_{t|t−1} − P_t (∂ℓ_t(y_t)/∂s)^⊤,
where in the derivative, ℓ_t depends on s_{t|t−1} via ŷ_t = h(s_{t|t−1}, u_t).
Lemma 7 (Errors and gradients). When the output model is an exponential family with mean parameter ŷ_t, the error E_t is related to the gradient of the log-likelihood of the observation y_t with respect to the prediction ŷ_t by
E_t = −R_t (∂ℓ_t(y_t)/∂ŷ_t)^⊤.
Proof. For a Gaussian y_t = N(ŷ_t, R), this is just a direct computation. For a general exponential family, consider the natural parameter β of the exponential family which defines the law of y, namely, p(y|β) = exp(∑_i β_i T_i(y))/Z(β), with sufficient statistics T_i and normalizing constant Z. An elementary computation (Appendix, (A.3)) shows that
∂ℓ_t(y_t)/∂β = −(T(y_t) − ŷ_t)^⊤ = −E_t^⊤
by definition of the mean parameter ŷ. Thus, E_t = −(∂ℓ_t(y_t)/∂β)^⊤, where the derivative is with respect to the natural parameter β. To express the derivative with respect to ŷ, we apply the chain rule ∂ℓ_t/∂β = (∂ℓ_t/∂ŷ)(∂ŷ/∂β) and use the fact that, for exponential families, the Jacobian matrix ∂ŷ/∂β of the mean parameter is equal to the covariance matrix R_t of the sufficient statistics (Appendix, (A.11) and (A.6)). This yields E_t = −R_t (∂ℓ_t(y_t)/∂ŷ_t)^⊤.

Lemma 8. The extended Kalman filter satisfies K_t = P_t H_t^⊤ R_t^{−1}.
Proof of the lemma. This relation is known, e.g., [Sim06, (6.34)]. Indeed, using the definition (2.18) of K_t and the update (2.19) of P_t,
P_t H_t^⊤ R_t^{−1} = (Id − K_t H_t) P_{t|t−1} H_t^⊤ R_t^{−1}
= P_{t|t−1} H_t^⊤ (H_t P_{t|t−1} H_t^⊤ + R_t)^{−1} ((H_t P_{t|t−1} H_t^⊤ + R_t) R_t^{−1} − H_t P_{t|t−1} H_t^⊤ R_t^{−1})
= P_{t|t−1} H_t^⊤ (H_t P_{t|t−1} H_t^⊤ + R_t)^{−1} = K_t.
Proof of Proposition 6. By definition of the Kalman filter we have s_t = s_{t|t−1} + K_t E_t. By Lemma 7, E_t = −R_t (∂ℓ_t(y_t)/∂ŷ_t)^⊤, so that s_t = s_{t|t−1} − K_t R_t (∂ℓ_t(y_t)/∂ŷ_t)^⊤ = s_{t|t−1} − P_t H_t^⊤ (∂ℓ_t(y_t)/∂ŷ_t)^⊤ by Lemma 8. But by the definition of H_t, the chain rule gives (∂ℓ_t(y_t)/∂s)^⊤ = H_t^⊤ (∂ℓ_t(y_t)/∂ŷ_t)^⊤, since ℓ_t depends on s via ŷ_t = h(s, u_t). Hence s_t = s_{t|t−1} − P_t (∂ℓ_t(y_t)/∂s)^⊤.
The first part of the next lemma is known as the information filter in the Kalman filter literature, and states that the observation step for P is additive when considered on P^{−1} [Sim06, §6.2]: after each observation, the Fisher information matrix of the latest observation is added to P^{−1}.
Lemma 9 (Information filter). The update (2.18)-(2.19) of P_t in the extended Kalman filter is equivalent to
P_t^{−1} = P_{t|t−1}^{−1} + H_t^⊤ R_t^{−1} H_t
(assuming P_{t|t−1} and R_t are invertible). In particular, for static dynamical systems (f(s, u) = s and Q_t = 0), the whole extended Kalman filter (2.12)-(2.20) is equivalent to
P_t^{−1} = P_{t−1}^{−1} + H_t^⊤ R_t^{−1} H_t, s_t = s_{t−1} − P_t (∂ℓ_t(y_t)/∂s)^⊤.
Proof.
The first statement is well-known for Kalman filters [Sim06, (6.33)]. Indeed, expanding the definition (2.18) of K_t in the update (2.19) of P_t, we have
P_t = P_{t|t−1} − P_{t|t−1} H_t^⊤ (H_t P_{t|t−1} H_t^⊤ + R_t)^{−1} H_t P_{t|t−1},
which is equal to (P_{t|t−1}^{−1} + H_t^⊤ R_t^{−1} H_t)^{−1} by the Woodbury matrix identity. The second statement follows from Proposition 6 and the fact that for f(s, u) = s, the transition step of the Kalman filter is just s_{t|t−1} = s_{t−1} and P_{t|t−1} = P_{t−1}.
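The information-filter identity is easy to confirm numerically; the sketch below (random positive-definite matrices, illustrative only) compares the two forms of the covariance update:

```python
import numpy as np

# Sanity check of the information filter: the covariance update
# P_t = (Id - K_t H_t) P_{t|t-1} equals (P_{t|t-1}^{-1} + H^T R^{-1} H)^{-1}.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)); Pprior = A @ A.T + np.eye(4)  # random SPD prior covariance
H = rng.normal(size=(2, 4))
B = rng.normal(size=(2, 2)); R = B @ B.T + np.eye(2)       # random SPD noise covariance
K = Pprior @ H.T @ np.linalg.inv(H @ Pprior @ H.T + R)     # Kalman gain (2.18)
P_post = (np.eye(4) - K @ H) @ Pprior                      # standard update (2.19)
P_info = np.linalg.inv(np.linalg.inv(Pprior) + H.T @ np.linalg.inv(R) @ H)
assert np.allclose(P_post, P_info)                         # information filter form
```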
Lemma 10. For exponential families p(y|ŷ), the term H_t^⊤ R_t^{−1} H_t appearing in Lemma 9 is equal to the Fisher information matrix of y with respect to the state s,
H_t^⊤ R_t^{−1} H_t = E_{y∼p(y|ŷ_t)} (∂ℓ_t(y)/∂s)^⊗2,
where ℓ_t(y) = − ln p(y|ŷ_t) depends on s via ŷ = h(s, u).
Proof. Let us omit time indices for brevity. We have ∂ℓ(y)/∂s = (∂ℓ(y)/∂ŷ) H by the chain rule. Consequently,
E_y (∂ℓ(y)/∂s)^⊗2 = H^⊤ (E_y (∂ℓ(y)/∂ŷ)^⊗2) H,
and E_y (∂ℓ(y)/∂ŷ)^⊗2 is the Fisher matrix of the random variable y with respect to ŷ. Now, for an exponential family y ∼ p(y|ŷ) in mean parameterization ŷ, the Fisher matrix with respect to ŷ is equal to the inverse covariance matrix of the sufficient statistics of y (Appendix, (A.16)), that is, R_t^{−1}.
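For a Gaussian output model this identity can be checked by Monte Carlo (a small sketch with made-up H and R; the tolerance reflects sampling noise):

```python
import numpy as np

# Monte Carlo check of Lemma 10 for a Gaussian output model y ~ N(H s, R):
# the Fisher matrix E_y[(dl/ds)^{x2}] equals H^T R^{-1} H.
rng = np.random.default_rng(3)
H = rng.normal(size=(2, 3))
R = np.diag([0.5, 2.0]); Rinv = np.linalg.inv(R)
s = rng.normal(size=3); yhat = H @ s
samples = yhat + rng.multivariate_normal(np.zeros(2), R, size=500_000)
grads = (samples - yhat) @ Rinv @ H          # (minus) dl/ds for each sample y
fisher_mc = grads.T @ grads / len(samples)   # Monte Carlo average of the outer products
assert np.allclose(fisher_mc, H.T @ Rinv @ H, rtol=0.05, atol=0.1)
```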
Proof of Theorem 2. By induction on t. By the combination of Lemmas 9 and 10, the update of the Kalman filter with static dynamics (s_{t|t−1} = s_{t−1}) is
P_t^{−1} = P_{t−1}^{−1} + E_{y∼p(y|ŷ_t)} (∂ℓ_t(y)/∂s)^⊗2, s_t = s_{t−1} − P_t (∂ℓ_t(y_t)/∂s)^⊤.
Defining J_t = P_t^{−1}/(t + 1), this update is equivalent to
J_t = (1 − 1/(t+1)) J_{t−1} + (1/(t+1)) E_{y∼p(y|ŷ_t)} (∂ℓ_t(y)/∂s)^⊗2, s_t = s_{t−1} − (1/(t+1)) J_t^{−1} (∂ℓ_t(y_t)/∂s)^⊤.
Under the identification s_{t−1} ↔ θ_{t−1}, this is the online natural gradient update with learning rate η_t = 1/(t + 1) and metric update rate γ_t = 1/(t + 1).
The proof of Proposition 3 is similar, with additional factors (1 − λ_t). Proposition 4 is proved by applying a fading memory Kalman filter to a modified log-likelihood
L̃_0 := n_prior ln π(θ), L̃_t := ln p_θ(y_t) + (1 − λ_t) L̃_{t−1} + λ_t n_prior ln π(θ),
so that the prior is kept constant in L̃_t.
3 Natural gradient as a Kalman filter: the state space (recurrent) case

Recurrent models, RTRL
Let us now consider non-memoryless models, i.e., models defined by a recurrent or state space equation
ŷ_t = Φ(ŷ_{t−1}, θ, u_t) (3.1)
with u_t the observations at time t. To save notation, here we dump into ŷ_t the whole state of the model, including both the part that contains the prediction about y_t and all state or internal variables (e.g., all internal and output layers of a recurrent neural network, not only the output layer). The state ŷ_t, or a part thereof, defines a loss function ℓ_t(y_t) := − ln p(y_t|ŷ_t) for each observation y_t. The current state ŷ_t can be seen as a function which depends on θ via the whole trajectory. The derivative of the current state with respect to θ can be computed inductively just by differentiating the recurrent equation (3.1) defining ŷ_t:
∂ŷ_t/∂θ = ∂Φ/∂θ + (∂Φ/∂ŷ_{t−1}) (∂ŷ_{t−1}/∂θ).
Real-time recurrent learning [Jae02] uses this equation to keep an estimate G_t of ∂ŷ_t/∂θ. RTRL then uses G_t to estimate the gradient of the loss function ℓ_t with respect to θ via the chain rule, ∂ℓ_t/∂θ = (∂ℓ_t/∂ŷ_t)(∂ŷ_t/∂θ) = (∂ℓ_t/∂ŷ_t)G_t.
Definition 11 (Real-time recurrent learning). Given a recurrent model ŷ_t = Φ(ŷ_{t−1}, θ_{t−1}, u_t), real-time recurrent learning (RTRL) learns the parameter θ via
ŷ_t = Φ(ŷ_{t−1}, θ_{t−1}, u_t),
G_t = ∂Φ/∂θ + (∂Φ/∂ŷ_{t−1}) G_{t−1},
θ_t = θ_{t−1} − η_t ((∂ℓ_t(y_t)/∂ŷ_t) G_t)^⊤.
Since θ changes at each step, the actual estimate G_t in RTRL is only an approximation of the gradient ∂ŷ_t/∂θ at θ = θ_t, valid in the limit of small learning rates η_t.
In practice, RTRL has a high computational cost due to the necessary storage of G_t, a matrix of size dim θ × dim ŷ. For large-dimensional models, backpropagation through time is usually preferred, truncated to a certain length in the past [Jae02]; [OTC15, TO17] introduce a low-rank, unbiased approximation of G_t.
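The RTRL recursion for G_t can be sketched on a toy scalar model (illustrative only; the parameter is held fixed here, in which case G_t is the exact derivative and can be checked against finite differences):

```python
import numpy as np

# RTRL sketch for a toy scalar recurrent model yhat_t = tanh(a*yhat_{t-1} + b*u_t)
# with fixed parameter theta = (a, b); G_t tracks dyhat_t/dtheta via the
# recursion G_t = dPhi/dtheta + (dPhi/dyhat_{t-1}) G_{t-1}.
def run(theta, us):
    a, b = theta
    yhat, G = 0.0, np.zeros(2)
    for u in us:
        new = np.tanh(a * yhat + b * u)
        d = 1.0 - new**2                         # tanh' at the pre-activation
        G = d * np.array([yhat, u]) + d * a * G  # RTRL update (uses old yhat)
        yhat = new
    return yhat, G

theta = np.array([0.5, 0.8])
us = np.sin(np.arange(20))
yhat, G = run(theta, us)
# Check G against central finite differences of the final state:
eps = 1e-6
num = np.array([(run(theta + eps * e, us)[0] - run(theta - eps * e, us)[0]) / (2 * eps)
                for e in np.eye(2)])
assert np.allclose(G, num, atol=1e-6)
```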

Statement of the correspondence, recurrent case
There are several ways in which a Kalman filter can be used to estimate θ for such recurrent models.
1. A first possibility is to view each ŷ_t as a function of θ via the whole trajectory, and to apply a Kalman filter on θ. This would require, in principle, recomputing the whole trajectory from time 0 to time t using the new value of θ at each step, and using RTRL to compute ∂ŷ_t/∂θ, which is needed in the filter. In practice, the past trajectory is not updated, and truncated backpropagation through time is used to approximate the derivative ∂ŷ_t/∂θ [Jae02, Hay01].
2. A second possibility, studied below, is a joint Kalman filter on the pair (θ, ŷ_t), viewing the parameter as a component of the state with static dynamics. Intuitively, the joint Kalman filter maintains a covariance matrix on (θ, ŷ_t), whose off-diagonal term is the covariance between ŷ_t and θ. This term captures how the current state would change if another value of the parameter had been used. The decomposition (3.13) in the theorem makes this intuition precise in relation to RTRL: the Kalman covariance between ŷ_t and θ is directly given by the RTRL gradient G_t.
Theorem 12 (Kalman filter on (θ, ŷ) as RTRL + natural gradient + state correction). Consider a recurrent model ŷ_t = Φ(ŷ_{t−1}, θ_{t−1}, u_t). Assume that the observations y_t are predicted with a probabilistic model p(y|ŷ_t) that is an exponential family whose mean parameter is a subset of ŷ_t.
Given an estimate G_t of ∂ŷ_t/∂θ, and an observation y, denote
g_t(y) := (∂ℓ_t(y)/∂ŷ_t) G_t (3.6)
the corresponding estimate of ∂ℓ_t(y)/∂θ. Then these two algorithms are equivalent:
• The extended Kalman filter on the pair (θ, ŷ) with transition function (θ, ŷ) ↦ (θ, Φ(ŷ, θ, u_t)), no added noise (Q_t = 0), and initial covariance of the form (3.13) at t = 0 (i.e., no residual uncertainty on ŷ_0 beyond that coming from θ).
• A natural gradient RTRL algorithm with learning rate η_t = 1/(t + 1), defined as follows. The state, RTRL gradient and Fisher matrix have a transition step
ŷ_{t|t−1} = Φ(ŷ_{t−1}, θ_{t−1}, u_t),
G_t = ∂Φ/∂θ + (∂Φ/∂ŷ_{t−1}) G_{t−1},
J_t = (1 − η_t) J_{t−1} + η_t E_{y∼p(y|ŷ_{t|t−1})} g_t(y)^⊗2, (3.9)
and after observing y_t, the state and parameter are updated as
θ_t = θ_{t−1} − η_t J_t^{−1} g_t(y_t)^⊤,
ŷ_t = ŷ_{t|t−1} + G_t (θ_t − θ_{t−1}). (3.12)
Moreover, at each time t, the covariance matrix of the extended Kalman filter over (θ, ŷ) is related to G_t and J_t via
P_t = [ η_t J_t^{−1} , η_t J_t^{−1} G_t^⊤ ; G_t η_t J_t^{−1} , G_t η_t J_t^{−1} G_t^⊤ ]. (3.13)
This result may explain an observation from [Wil92, §4.2] that RTRL can be obtained by introducing some drastic simplifications in the Kalman filter equations (changing the formula of the Kalman optimal gain and neglecting the covariance matrix update).
Again, the expectation for the Fisher matrix in (3.9) may be estimated by a Monte Carlo sample y ∼ p(y|ŷ_t), or by just using the current observation y = y_t, as discussed after Definition 1.
As before, learning rates η_t different from 1/(t + 1) can be obtained by introducing a fading memory (i.e., process noise Q proportional to P) in the joint Kalman filter. We omit the statement for simplicity, but it is analogous to Propositions 3 and 4.
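The recurrent correspondence can also be checked numerically on a toy model. The sketch below (made-up data; a linear recurrence ŷ_t = a ŷ_{t−1} + b u_t with Gaussian output noise, so the extended filter is exact) runs the joint Kalman filter on (θ, ŷ) against natural-gradient RTRL side by side; all names and values are illustrative.

```python
import numpy as np

# Joint EKF on (a, b, yhat) vs. natural-gradient RTRL with eta_t = 1/(t+1),
# for yhat_t = a*yhat_{t-1} + b*u_t and output noise N(yhat, R).
rng = np.random.default_rng(5)
R = 0.7
us = rng.normal(size=30); ys = rng.normal(size=30)
theta = np.array([0.3, 0.5]); yhat = 0.0         # natural-gradient RTRL variables
G = np.zeros(2); J = np.eye(2)
s = np.array([0.3, 0.5, 0.0])                    # EKF state (a, b, yhat)
P = np.zeros((3, 3)); P[:2, :2] = np.linalg.inv(J)  # all uncertainty on theta
H = np.array([[0.0, 0.0, 1.0]])
for t, (u, y) in enumerate(zip(us, ys), start=1):
    eta = 1.0 / (t + 1)
    # --- RTRL + natural gradient ---
    G = np.array([yhat, u]) + theta[0] * G        # RTRL: dPhi/dtheta + dPhi/dyhat * G
    yhat = theta[0] * yhat + theta[1] * u         # state transition
    J = (1 - eta) * J + eta * np.outer(G, G) / R  # Fisher update, metric first
    grad = -(y - yhat) / R * G                    # estimated dl_t(y_t)/dtheta
    dtheta = -eta * np.linalg.solve(J, grad)
    theta = theta + dtheta
    yhat = yhat + G @ dtheta                      # state correction (3.12)
    # --- joint extended Kalman filter ---
    a, b, x = s
    F = np.array([[1, 0, 0], [0, 1, 0], [x, u, a]], dtype=float)
    s = np.array([a, b, a * x + b * u])           # transition, no added noise
    P = F @ P @ F.T
    K = P @ H.T / (H @ P @ H.T + R)               # observation step
    s = s + (K * (y - s[2])).ravel()
    P = P - K @ H @ P
assert np.allclose(theta, s[:2])                  # same parameter
assert np.allclose(yhat, s[2])                    # same state
assert np.allclose(J, eta * np.linalg.inv(P[:2, :2]))  # J_t = eta_t * (P_theta)^{-1}
```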
The algorithm above features a state update (3.12) together with the parameter update; this is not commonly used in online recurrent neural network algorithms. In small-scale experiments, we have not found any clear effect of this; besides, such state updates must be applied cautiously if the range of possible values for the stateŷ is somehow constrained.
In the result above, the Kalman filter is initialized with a covariance matrix in which every uncertainty comes from uncertainty on θ rather than the initial state ŷ_0. This has the advantage of making the correspondence algebraically simple, but is not a fundamental restriction. If modelling an initial uncertainty on ŷ_0 is important, one can always apply the theorem by incorporating the initial condition as an additional component of the parameter θ, with its own variance; in this case, G_0 must be initialized to Id on the corresponding component of θ, namely
θ_{+init} := (θ, ŷ_0)^⊤, G_0 := ∂ŷ_0/∂θ_{+init} = (0, Id), (3.14)
and then Theorem 12 can be applied to θ_{+init}. Actually this operation is often not needed at all: indeed, if the dynamical system is such that the initial condition is forgotten reasonably quickly, then the initial covariance of ŷ_0 decreases (terms W in the proof below) and the Kalman covariance tends to the type (3.13) above exponentially fast, even without using θ_{+init}. This is the case, for instance, for any stable linear dynamical system, as a consequence of Lemmas 13-14, and more generally for any system with geometric memory, in the sense that ∂ŷ_t/∂ŷ_{t−1} is contracting for a fixed parameter and a given input.
The filtering literature contains updates similar to the above for G_t, but more complex [LS83, Hay01]; this is, first, because they are expressed over the variable Cov(ŷ_t, θ) = G_t J_t^{−1} instead of G_t alone; second, because we have initialized the uncertainty on ŷ_0 to 0; and third, because in dual rather than joint filter approaches, higher-order terms depending on second derivatives of Φ are involved.
In terms of computational cost, for recurrent neural networks (RNNs), RTRL alone is already as costly as the joint Kalman filter [Wil92]. Indeed, RTRL requires dim θ forward tangent propagations at each step, each of which costs O(dim θ) for a standard RNN model [Jae02], for a total cost of O((dim θ)^2) per time step. The Fisher matrix is of size (dim θ)^2; if a single Monte Carlo sample y ∼ p(y|ŷ_t) is used, then the Fisher matrix update is rank-one and costs O((dim θ)^2); the inverse Fisher matrix can be maintained at the same cost thanks to the Woodbury matrix identity (as done, e.g., in [LMB07]). Thus, if RTRL is computationally affordable, there is little point in not using the Fisher matrix on top.
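The rank-one maintenance of the inverse Fisher matrix mentioned above can be sketched with the Sherman–Morrison special case of the Woodbury identity (illustrative helper, not the paper's implementation):

```python
import numpy as np

# Rank-one update of the inverse Fisher matrix: if J' = (1 - gamma) J + gamma g g^T,
# then J'^{-1} follows from J^{-1} in O((dim theta)^2) by Sherman-Morrison,
# avoiding a cubic-cost inversion at each step.
def inv_after_rank_one(Jinv, g, gamma):
    Jinv_s = Jinv / (1.0 - gamma)        # inverse of (1 - gamma) J
    u = Jinv_s @ g
    return Jinv_s - gamma * np.outer(u, u) / (1.0 + gamma * g @ u)

rng = np.random.default_rng(4)
d = 5
A = rng.normal(size=(d, d)); J = A @ A.T + np.eye(d)   # random SPD Fisher matrix
g = rng.normal(size=d); gamma = 0.05
Jinv_new = inv_after_rank_one(np.linalg.inv(J), g, gamma)
assert np.allclose(Jinv_new,
                   np.linalg.inv((1 - gamma) * J + gamma * np.outer(g, g)))
```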

Proofs for the recurrent case
We now turn to the proof for the recurrent case, involving a joint Kalman filter on (θ, ŷ). The key is to decompose the Kalman covariance matrix of the pair (θ, ŷ) into three variables (3.17): the covariance of θ, the correlation between θ and ŷ, and the part of the covariance of ŷ that does not come from its correlation with θ (its so-called Schur complement). This provides a nice expression for the transition step of the Kalman filter (Lemma 13).
Then we find that the correlation between θ and ŷ is exactly the gradient G = ∂ŷ/∂θ maintained by RTRL (Corollary 15); meanwhile, θ and its covariance essentially follow a standalone Kalman filter related to the observations via G, which is a natural gradient for the same reasons as in the static case.
In the recurrent case, we are applying an extended Kalman filter to the state s = (θ, ŷ)^⊤ with transition function f = (θ, Φ(ŷ, θ, u))^⊤. Let us decompose the covariance matrix P_t of this system as
P_t = [ P_θ , P_θŷ^⊤ ; P_θŷ , P_ŷ ]. (3.15)
From now on, for simplicity we omit the time indices when no ambiguity is present.
By the theory of Schur complements for positive semidefinite matrices [BV04, Appendix A.5.5], letting P_θ^+ be any generalized inverse of P_θ, we know that P_ŷ − P_θŷ P_θ^+ P_θŷ^⊤ is positive semidefinite and that P_θŷ (Id − P_θ^+ P_θ) = 0. The latter rewrites as P_θŷ = P_θŷ P_θ^+ P_θ. Let us set
W := P_ŷ − P_θŷ P_θ^+ P_θŷ^⊤, G := P_θŷ P_θ^+. (3.16)
Then P_θŷ = G P_θ and W = P_ŷ − G P_θ G^⊤. Thus, at each time t we can decompose P_t as
P_t = [ P_θ , (G P_θ)^⊤ ; G P_θ , G P_θ G^⊤ + W ] (3.17)
without loss of generality, where W is positive semidefinite. This decomposition tells us which part of the covariance of the current state ŷ comes from the covariance of the parameter θ via the dynamics of the system. First, we will show that if W_0 = 0, then W_t = 0 for all t, and that in this case G_t satisfies the RTRL equation.
Lemma 13. Consider the extended Kalman filter on the pair s = (θ, ŷ)^⊤ with transition function f = (θ, Φ(ŷ, θ, u))^⊤ and no added noise (Q_t = 0). Then the Kalman transition step (2.13) on P, expressed in the decomposition (3.17), is equivalent to
P_θ ← P_θ,
G ← ∂Φ/∂θ + (∂Φ/∂ŷ) G, (3.18)
W ← (∂Φ/∂ŷ) W (∂Φ/∂ŷ)^⊤. (3.19)
This equation for G is the RTRL update.
Proof of the lemma. This is a direct computation using the Kalman transition step (2.13) for P. Indeed, the decomposition (3.17) of P rewrites as the product
P = [ Id , 0 ; G , Id ] [ P_θ , 0 ; 0 , W ] [ Id , G^⊤ ; 0 , Id ], (3.21)
and the Jacobian of the transition function is F = [ Id , 0 ; ∂Φ/∂θ , ∂Φ/∂ŷ ]; expanding F P F^⊤ and identifying the blocks yields the three updates of Lemma 13.
Lemma 14. Under the same assumptions as in Lemma 13, the Kalman observation step (2.18)-(2.19) on P, expressed in the decomposition (3.17), is equivalent to
G ← R(R + W)^{−1} G,
P_θ ← P_θ − P_θ G^⊤ (G P_θ G^⊤ + W + R)^{−1} G P_θ, (3.22)
W ← W − W (W + R)^{−1} W, (3.23)
where all right-hand sides use the pre-update values and R is given by (2.16). Moreover, if P_θ and W are invertible, then the updates (3.22) and (3.23) are respectively equivalent to
(P_θ)^{−1} ← (P_θ)^{−1} + G^⊤ (W + R)^{−1} G (3.25)
and
W^{−1} ← W^{−1} + R^{−1}. (3.26)
Thus, the updates (3.19) and (3.23) for W are just the updates of an extended Kalman filter on ŷ alone, with covariance matrix W and measurement noise R. The update for P_θ is identical to that of an extended Kalman filter on θ where measurements are made on ŷ, with ŷ seen as a function of θ with derivative ∂ŷ/∂θ = G, and where the measurement noise on ŷ is R + W (the measurement noise on y plus the residual covariance of ŷ). Thus, these two lemmas relate the joint Kalman filter on (θ, ŷ) to the dual Kalman filter that filters separately θ given ŷ and ŷ given θ, together with an estimate of ∂ŷ/∂θ. As far as we could check, this decomposition is specific to a situation in which one component (the parameter) is supposed to have static underlying dynamics, θ_{t+1} = θ_t.
Proof of the lemma. In our case, the function h of the extended Kalman filter is the function that sends (θ, ŷ) to ŷ. In particular, H_t = (0, Id).
First, if P_θ and W are invertible, then the updates (3.22), (3.23) for P_θ and W follow from the updates (3.25), (3.26) on their inverses, thanks to the Woodbury matrix identity. Since working on the inverses is simpler, we shall prove only the latter. Since (3.22), (3.23) are continuous in P_θ and W, the non-invertible case follows by continuity.
Starting again with the decomposition of P_t as a product (3.21), the inverse of P_t is
P_t^{−1} = [ P_θ^{−1} + G^⊤ W^{−1} G , −G^⊤ W^{−1} ; −W^{−1} G , W^{−1} ]. (3.28)
From Lemma 9, the Kalman observation update for P_t amounts to adding H_t^⊤ R_t^{−1} H_t to P_t^{−1}; since H_t = (0, Id), this yields
[ P_θ^{−1} + G^⊤ W^{−1} G , −G^⊤ W^{−1} ; −W^{−1} G , W^{−1} + R^{−1} ]. (3.29)
To interpret this as an update on P_θ, W and G, we have to introduce new variables W̃, G̃ and P̃_θ such that (3.29) takes the original form (3.28) in these new variables.
Since we have defined W̃ and G̃ by identifying (3.29) with the original form (3.28), we have W̃^{−1} = W^{−1} + R^{−1} and W̃^{−1} G̃ = W^{−1} G by construction. Thus
G̃ = W̃ W^{−1} G = (W^{−1} + R^{−1})^{−1} W^{−1} G = R(R + W)^{−1} G,
and identifying the top-left blocks gives
P̃_θ^{−1} = P_θ^{−1} + G^⊤ (W^{−1} − W^{−1} (W^{−1} + R^{−1})^{−1} W^{−1}) G = P_θ^{−1} + G^⊤ (W + R)^{−1} G,
which is (3.25). Putting the last two lemmas side by side in the case W = 0, we obtain a much simpler update.
Corollary 15. Consider the extended Kalman filter on the pair s = (θ, ŷ)^⊤ with transition function f(s) = (θ, Φ(ŷ, θ, u))^⊤ and no added noise (Q_t = 0). Decompose the covariance P of the state s as in (3.17) using P_θ, G, W. If W = 0 and P_θ is invertible, then performing the Kalman transition update followed by the observation update is equivalent to
G ← ∂Φ/∂θ + (∂Φ/∂ŷ) G,
(P_θ)^{−1} ← (P_θ)^{−1} + G^⊤ R^{−1} G, (3.38)
in that order.
From this, the end of the proof of Theorem 12 essentially proceeds as in the non-recurrent case. Since we initialize W to 0 in Theorem 12, we have W = 0 at all times. As before, for exponential families R^{−1} is equal to the Fisher matrix with respect to ŷ_t, namely, R^{−1} = E_{y∼p(y|ŷ)} (∂ℓ_t(y)/∂ŷ_t)^⊗2 (Appendix). Now, the term E_{y∼p(y|ŷ)} g_t(y)^⊗2 in the Fisher matrix update (3.9) uses g_t(y) = (∂ℓ_t(y)/∂ŷ_t) G_t (3.6) to estimate the derivative of the loss ℓ_t(y) with respect to θ. So the term G^⊤ R^{−1} G in (3.38) coincides with the Fisher matrix update term E_{y∼p(y|ŷ)} g_t(y)^⊗2 in (3.9). (Compare Lemma 10.) So if we just define J_t := η_t (P_θ)_t^{−1} with η_t = 1/(t + 1), the additive update (3.38) on (P_θ)^{−1} translates as the online Fisher matrix update (3.9) on J_t.
Moreover, since the Kalman gradient is an ordinary gradient preconditioned with the covariance matrix P_t (Proposition 6), the update of the pair (θ, ŷ) is
(θ_t, ŷ_t) = (θ_{t−1}, ŷ_{t|t−1}) − P_t (0, ∂ℓ_t(y_t)/∂ŷ_t)^⊤ (3.40)
(indeed ℓ_t does not depend explicitly on θ in recurrent models, only via the current state ŷ_t). Given the decomposition P_t = [ P_θ , (G P_θ)^⊤ ; G P_θ , G P_θ G^⊤ ], this translates as
θ_t = θ_{t−1} − P_θ G^⊤ (∂ℓ_t(y_t)/∂ŷ_t)^⊤ = θ_{t−1} − η_t J_t^{−1} g_t(y_t)^⊤,
ŷ_t = ŷ_{t|t−1} + G (θ_t − θ_{t−1}),
which is the update in Theorem 12.

A Appendix: reminder on exponential families
An exponential family of probability distributions on a variable x (discrete or continuous), with sufficient statistics T_1(x), ..., T_K(x), is the following family of distributions, parameterized by β ∈ R^K:
p_β(dx) = exp(∑_k β_k T_k(x)) λ(dx)/Z(β),
where Z(β) is a normalizing constant, and λ(dx) is any reference measure on x, such as the Lebesgue measure or any discrete measure. The family is obtained by varying the parameter β ∈ R^K, called the natural or canonical parameter. We will assume that the T_k are linearly independent as functions of x (and linearly independent from the constant function); this ensures that different values of β yield distinct distributions. For instance, Bernoulli distributions are obtained with λ the uniform measure on x ∈ {0, 1} and with a single sufficient statistic T(0) = 0, T(1) = 1. Gaussian distributions with a fixed variance are obtained with λ(dx) the Gaussian distribution centered on 0, and T(x) = x.
Another, often convenient parameterization of the same family is the following: each value of β gives rise to an average value T̄ of the sufficient statistics,
T̄_k := E_{x∼p_β} T_k(x).
For instance, for Gaussian distributions with fixed variance this is the mean, and for a Bernoulli variable this is the probability to sample 1. Exponential families satisfy the identities
∂ ln Z(β)/∂β_k = T̄_k, ∂ ln p_β(x)/∂β_k = T_k(x) − T̄_k,
by a simple computation [AN00, (2.33)]. These identities are useful to compute the Fisher matrix J_β with respect to the variable β, as follows [AN00, (3.59)]:
(J_β)_{ij} = E_{x∼p_β} (T_i(x) − T̄_i)(T_j(x) − T̄_j),
or more synthetically
J_β = Cov(T), (A.7)
where the covariance is under the law p_β. That is, for exponential families the Fisher matrix is the covariance matrix of the sufficient statistics. In particular it can be estimated empirically, and is sometimes known algebraically.
In this work we need the Fisher matrix with respect to the mean parameter T̄ rather than β. Since the Fisher matrix transforms like a metric under the change of parameterization β ↦ T̄, whose Jacobian is ∂T̄/∂β = Cov(T), we get
J_T̄ = Cov(T)^{−1},
which in particular can be estimated empirically.
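For the Bernoulli family this inverse-covariance formula can be checked directly (T(x) = x, mean parameter p; a one-dimensional sanity check):

```python
# Bernoulli check: the Fisher matrix with respect to the mean parameter p
# equals the inverse covariance of the sufficient statistic T(x) = x.
p = 0.3
# d/dp ln p_p(x) = (x - p)/(p*(1 - p)) for x in {0, 1}
fisher_mean = sum(px * ((x - p) / (p * (1 - p))) ** 2
                  for x, px in [(0, 1 - p), (1, p)])
cov_T = p * (1 - p)                       # Var(x) for a Bernoulli(p)
assert abs(fisher_mean - 1.0 / cov_T) < 1e-12
```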