Observability and Structural Identifiability of Nonlinear Biological Systems

Observability is a modelling property that describes the possibility of inferring the internal state of a system from observations of its output. A related property, structural identifiability, refers to the theoretical possibility of determining the parameter values from the output. In fact, structural identifiability becomes a particular case of observability if the parameters are considered as constant state variables. It is possible to simultaneously analyse the observability and structural identifiability of a model using the conceptual tools of differential geometry. Many complex biological processes can be described by systems of nonlinear ordinary differential equations, and can therefore be analysed with this approach. The purpose of this review article is threefold: (I) to serve as a tutorial on observability and structural identifiability of nonlinear systems, using the differential geometry approach for their analysis; (II) to review recent advances in the field; and (III) to identify open problems and suggest new avenues for research in this area.


Introduction
A model is observable if it is theoretically possible to infer its internal state by observing its output. Model parameters can be considered as constant state variables. The particular case of parameter observability is called structural identifiability. Both concepts are structural in the sense that they depend only on the model equations; that is, they are completely determined by the system dynamics and output definition. They are not affected by limitations related to the frequency or accuracy of the experimental measurements, in contrast to the related concept of practical identifiability or estimability.
The concept of observability was introduced by Kalman in 1960 for linear time-invariant systems [1,2]. Conditions for checking observability of nonlinear systems were soon developed by several authors [3][4][5][6][7]. At the same time, the interest in parametric identifiability was growing among researchers using biological models, especially in biomedical applications. As a result, the concept of structural identifiability was introduced in 1970, when Bellman and Åström coined the term and presented the Laplace transform method for its study in the context of (linear) compartmental models [8].
Both concepts, observability and structural identifiability, are applicable to dynamic models of any kind: electrical, chemical, mechanical, biological, etc. Observability analysis, as well as the related question of observer design, has been and continues to be frequently investigated by systems and control theorists. In turn, researchers working in biological modelling (e.g., in mathematical biology and, more recently, in the systems biology community) have more often addressed structural identifiability issues. This is due to the fact that biological applications typically have more experimental limitations than engineering ones in terms of which measurements are feasible, making parameter identification a more challenging problem and calling for a deeper study of parametric identifiability issues and methods.
Observability and structural identifiability play a central role in system identification. There are a number of classic books on the subject, such as the ones by Walter and Pronzato [9] and Ljung [10]. In the context of biological modelling a very complete and recent reference is the book by DiStefano [11], which covers thoroughly the topic of identifiability, both from structural and practical points of view. The interested reader is also referred to [12], which reviews the different types of identifiability and related concepts, and to [13,14], which deal specifically with structural identifiability. In a different context, Chatzis and coworkers have reviewed the observability and structural identifiability of nonlinear mechanical systems [15].
The present paper reviews observability and structural identifiability concepts and tools, with the aim of facilitating their application to biological models. Instead of attempting to discuss all the existing methodologies, it focuses on methods that adopt a differential geometry approach [16][17][18]. These properties may also be analysed with other symbolic approaches, such as power series [19][20][21], differential algebra [22][23][24][25][26], or others [27][28][29], to name just a few, as well as with seminumerical [30,31] or numerical approaches [32,33]. A comparison or discussion of the aforementioned methods is out of the scope of the present paper; the interested reader is again referred to [12][13][14]34].
This manuscript begins by motivating the study in Section 2, illustrating the possible consequences of unobservability and unidentifiability. In Section 3 these concepts are analysed with the differential geometry approach, which provides a unified view of observability and structural identifiability and can be applied to a very general class of nonlinear systems. Section 4 reports recent developments in this area, and Section 5 concludes by suggesting some open problems as possible research directions.

Motivation: Implications of Unobservability and Unidentifiability in Biological Models
The importance of structural identifiability analysis has been recently stressed in different areas of biological modelling, such as animal science [36], pharmacodynamics [37], epidemiology [38], environmental modelling [39], physiology [40], neuroscience [41], oncology [42], and many more. On the other hand, assessing observability and structural identifiability can be difficult even for relatively small systems and becomes increasingly complicated as the model complexity increases. Furthermore, some aspects of the theoretical foundations of these analyses are not fully studied yet. These reasons help explain why some modellers are reluctant to analyse these properties of their models [11], which might be understandable taking into account the fact that even the need to determine parameter values has been questioned in the context of biological modelling [43]. However, such analysis is worth the effort, since lack of identifiability and/or observability can compromise the ability of a model to provide biological insight [36,37,[44][45][46]. For example, one of the possible purposes of a model is to infer the values of certain parameters of interest; in such a case, identifiability is obviously desirable per se. Alternatively, the main purpose of the model may be to predict the dynamic behaviour of unmeasured states; in this case one is more interested in state observability than in parameter identifiability (although issues with the latter property may compromise the former).
As an example, consider the model of a possible glucose homeostasis mechanism depicted in Figure 1, which was presented in [35] and analysed in [46]. This so-called βIG model describes the regulation of plasma glucose concentration (G) by means of insulin (I), which is secreted by pancreatic β cells. The model consists of three state variables (β, I, G) whose time courses are defined by nonlinear ordinary differential equations (ODEs) with five parameters. For the sake of the exercise, let us assume that glucose and β-cell mass are the measured outputs. In this case, if the model parameters are unknown, the insulin sensitivity s_i and the insulin secretion rate p are structurally unidentifiable. Figure 1 illustrates this fact by showing that changes in the model outputs (i.e., glucose concentration and β-cell mass) resulting from halving the value of s_i can be compensated by doubling the value of p. Therefore, it is not possible to distinguish between two parameter vectors of the form (s_i, p) and (s_i/2, 2⋅p). This also entails that insulin is an unobservable state, since the impossibility of determining the true parameter vector leads to the impossibility of determining which of the time courses shown in the lower left plot of Figure 1 is the true one. Therefore, the model cannot be used for inferring insulin concentration from measurements of the other variables. This limitation can be overcome if the value of s_i or of p is known.
Such lack of structural identifiability can have important consequences. A nice illustration is given in a recent work [45], where Procopio et al. presented a model of the release of a cardiac damage biomarker, cardiac troponin T, with the purpose of diagnosing acute myocardial infarction in a clinical setting. After the authors realized that the first version of the model was structurally unidentifiable, which could potentially lead to wrong conclusions, they removed the redundancies in their model and obtained an equivalent one that was structurally identifiable.
Structural unidentifiability is related to unobservability, as shown in the βIG model example, in which the inability to estimate s_i and p leads to wrong predictions of I. However, unidentifiability does not always entail unobservability. As a trivial example, consider the case in which the value of s_i is known. Then the βIG model becomes structurally identifiable and observable. If we now modify the model by replacing parameter p with the sum of two new parameters (p → p_1 + p_2), the two new parameters would obviously be structurally unidentifiable, but the unmeasured state I would remain observable. Therefore, it is desirable to analyse both the structural identifiability and observability of a model to decrease the possibility of drawing false conclusions from it.
Before concluding this section, it should be noted that a structurally identifiable model may nevertheless be practically unidentifiable; that is, the numerical estimates of its parameters may contain large errors due to insufficient or bad quality data. A recent example of this scenario is given in [44], where different models of cancer chemotherapy were analysed. The results showed that, although the models were structurally identifiable, they were not practically identifiable. This deficiency could lead to inferring incorrect cell cycle distributions and, as a result, to the choice of suboptimal therapies. It is thus reasonable to ask: if a model can be structurally identifiable and yet unidentifiable in practice, why should we care about analysing its structural identifiability in the first place? The answer is that practical and structural unidentifiability have different causes and also different remedies.

Figure 1: Illustration of observability and structural identifiability issues. Top: diagram and equations of the 'βIG model' of the glucose-insulin system [35], whose insulin dynamics are given by İ = p⋅β⋅G²/(α² + G²) − γ⋅I. If glucose concentration (G) and β-cell mass (β) are measured, the parameters s_i and p are structurally unidentifiable: the bottom plots show that different combinations of s_i and p values yield identical curves of G and β, so it is not possible to distinguish between them as long as the product s_i⋅p, which is structurally identifiable, remains constant. Likewise, in this case insulin concentration (I) is an unobservable state: it is not possible to determine which of the two time courses of I shown in the lower left plot is the true one.

Practical unidentifiability may be
surmounted by using more informative data for calibration, but structural unidentifiabilities cannot be removed in this way (unless the new data involves modifying the output of the model, which strictly speaking entails modifying the model structure). Any attempt to remove a structural unidentifiability by incorporating more experimental data to the calibration (e.g., by sampling more densely or for a longer time) is doomed to fail, leading to a loss of resources and time. Practical identifiability analysis is not covered in this review; the interested reader is referred to [9,11,12]. In summary, it is advisable to analyse the observability and structural identifiability of a model before attempting to obtain insights from it. If this analysis reveals deficiencies, actions must be taken depending on the intended application of the model.
For example, if the intended application is for determining the value of a parameter that turns out to be structurally unidentifiable, it is necessary to eliminate this structural unidentifiability. There are several ways of achieving this. Sometimes it may be possible to determine the unidentifiable parameter by direct measurements, either of the parameter of interest or of the parameter(s) that are correlated with it. However, direct measurements of parameters are seldom possible. It is often more practical to measure additional state variables, which may make the model (or at least the parameter of interest) structurally identifiable; this possibility should be analysed before performing the experiments. Finally, if the experimental setup cannot be modified, or if it is not practical to obtain new experimental data, one can try to modify the model structure by reducing the number of parameters. This can be achieved by fixing some parameters to values taken from the literature or by merging several unidentifiable parameters into an identifiable one.
If the intended application of the model is for determining the system states, as opposed to the parameters, a structurally unidentifiable model may still be useful, as mentioned previously, as long as the states of interest are observable. In this case, lack of observability may be remedied in similar ways as lack of structural identifiability.

Background: Observability and Structural Identifiability
To define observability it is necessary to introduce the notion of distinguishable states.

Definition 1 (distinguishability).
Let M be a model with internal state x and measurable output y. Let y(t, x₀) denote the time evolution of the model output when the model is started from an initial state x₀ at time t₀. Two states x₁ and x₂ are indistinguishable if y(t, x₁) = y(t, x₂) for all t ≥ t₀. The set of states that are indistinguishable from x₀ is denoted I(x₀).

Definition 2 (observability).
A model M is observable if it is possible to distinguish its internal state from any other state; that is, if I(x₀) = {x₀} for every state x₀.

Observability describes the possibility of determining the current state from present and future measurements. A similar concept, reconstructability, refers to determining the current state from present and past measurements.

Observability of Linear Systems.
For illustration purposes, this subsection presents the special case of linear time-invariant (LTI) systems, whose equations can be written as

ẋ(t) = A(θ)⋅x(t) + B(θ)⋅u(t),    y(t) = C(θ)⋅x(t),    (1)

where θ ∈ R^{n_θ} is the parameter vector, u(t) ∈ R^{n_u} the input vector, x(t) ∈ R^{n_x} the state variable vector, and y(t) ∈ R^{n_y} the output vector. A(θ), B(θ), and C(θ) are constant matrices of dimensions n_x × n_x, n_x × n_u, and n_y × n_x, respectively. The dependence on θ may be dropped for ease of notation.
Assessing the observability of M amounts to determining whether it is possible to infer its internal state, x, by observing its output, y. An intuitive way of obtaining a condition for checking observability is the following. The available knowledge consists of the output and its derivatives; that is,

y = C⋅x,
ẏ = C⋅ẋ = C⋅A⋅x + C⋅B⋅u,
⋮
y^(n_x−1) = C⋅A^{n_x−1}⋅x + h(u, u̇, …, u^(n_x−2)),

where h is a known matrix function of the input and its derivatives. Setting u = 0 (which entails no loss of generality, since the observability of an LTI system does not depend on its input) and writing the above equations in matrix form leads to

[y; ẏ; …; y^(n_x−1)] = O_L⋅x,    (2)
where the linear observability matrix

O_L = [C; C⋅A; C⋅A²; …; C⋅A^{n_x−1}]    (3)

has been introduced. One can uniquely obtain x from the knowledge of y and its derivatives as long as rank(O_L) = n_x. This is known as the linear observability rank condition.

Theorem 3 (linear observability rank condition). Given a linear time-invariant model as defined in (1), a necessary and sufficient condition for complete observability is that rank(O_L) = n_x.
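As a minimal numerical sketch of this condition (in Python with NumPy; the example matrices are hypothetical and not taken from any model in this review), the observability matrix can be stacked block by block and its rank checked:

```python
import numpy as np

def linear_observability_matrix(A, C):
    """Stack C, C@A, C@A^2, ..., C@A^(n_x-1) into the observability matrix O_L."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Hypothetical 2-state LTI system in which only the first state is measured.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])  # y = x1

O_L = linear_observability_matrix(A, C)
observable = np.linalg.matrix_rank(O_L) == A.shape[0]  # rank test of Theorem 3
```

For a decoupled system such as A = diag(1, 2) with the same C, the two block rows coincide, the rank drops to 1, and the second state cannot be inferred from the output.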
"Complete" observability means that all the model states can be inferred from observations of the output.

Observability of Nonlinear Systems.
Let us now consider nonlinear ODE models. In their most general form they can be written as

ẋ(t) = f(x(t), u(t), θ),    y(t) = h(x(t), u(t), θ),    (4)

where f and h are analytic vector functions. A special case of (4) is that of nonlinear affine-in-the-input systems:

ẋ(t) = f₀(x(t), θ) + Σᵢ fᵢ(x(t), θ)⋅uᵢ(t),    y(t) = h(x(t), θ).    (5)

Shortly after Kalman's introduction of the concept of observability [1,2], several researchers worked on its application to nonlinear systems of the type defined in (4) and (5). As a result, sufficient and/or necessary conditions for nonlinear observability were obtained [3][4][5][6], allowing the observability rank condition to be extended to this context. For nonlinear models, unlike for LTI models like (1), the derivatives of the output cannot be expressed in terms of the A, B, C arrays. It is therefore necessary to define a nonlinear version of the observability matrix, O; to this end Lie derivatives are used.
The Lie derivative of the output function h along the vector field f is

L_f h(x) = (∂h(x)/∂x)⋅f(x, u).    (6)

Higher order Lie derivatives can be recursively calculated as

L_f^i h(x) = (∂L_f^{i−1} h(x)/∂x)⋅f(x, u),    L_f^0 h(x) = h(x).    (7)
It can be noticed from (3) that the linear observability matrix, O_L, is the partial derivative of the derivatives of the output with respect to the states; that is,

O_L = (∂/∂x)⋅[y; ẏ; …; y^(n_x−1)].    (8)
In a nonlinear model such as (4) with constant input, u(t) = c, the i-th Lie derivative of the output function h(x) coincides with the i-th time derivative of y(t); that is, L_f^i h(x(t)) = y^(i)(t). Thus, Lie derivatives can be used to calculate O for nonlinear models with constant inputs as follows:

O(x) = (∂/∂x)⋅[h(x); L_f h(x); L_f² h(x); …; L_f^{n_x−1} h(x)].    (9)
The nonlinear version of the observability rank condition can be stated as follows.

Theorem 5 (nonlinear observability rank condition, ORC). If the model M given by (4) satisfies rank(O(x₀)) = n_x, with O defined in (9), then M is locally observable around the state x₀.
Two remarks are in order. First, it should be noted that the nonlinear observability rank condition (ORC) is a sufficient, but not strictly necessary, condition for nonlinear observability (unlike the linear case, in which the ORC is both sufficient and necessary). In the nonlinear case, the ORC is "almost necessary" in the sense that, if the model is locally observable, then rank(O(x)) = n_x holds on an open dense subset of the state space [18]. This is a rather technical distinction, and in practice a failure to comply with the ORC is often considered a very strong indication of unobservability. Second, it should also be noted that the ORC determines local observability: if a model satisfies the ORC, it is possible to distinguish between two adjacent states, but there may still be distant states that are indistinguishable. A locally observable model is often, although not always, globally observable too.
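The construction of O in (9) can be sketched with a computer algebra system. The following Python/SymPy snippet uses a two-state toy system introduced here purely for illustration (it does not appear in the works reviewed):

```python
import sympy as sp

def lie_derivative(h, f, x):
    """L_f h = (dh/dx) f, applied to the (vector) output h -- eq. (6)."""
    return h.jacobian(x) * f

def observability_matrix(h, f, x):
    """Stack the Jacobians of h, L_f h, ..., L_f^(n_x-1) h w.r.t. x -- eq. (9)."""
    rows = [h]
    for _ in range(len(x) - 1):
        rows.append(sp.simplify(lie_derivative(rows[-1], f, x)))
    return sp.Matrix.vstack(*[r.jacobian(x) for r in rows])

# Toy nonlinear system (hypothetical): x1' = -x1 + x2**2, x2' = -x2, y = x1.
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([-x1 + x2**2, -x2])
h = sp.Matrix([x1])

O = observability_matrix(h, f, x)
rank = O.rank()   # generic rank n_x = 2 => ORC satisfied almost everywhere
```

Note that this is the generic rank: it drops at x2 = 0, and since x2 enters the output derivatives only through x2², two states differing in the sign of x2 produce identical outputs. The system is thus locally but not globally observable, illustrating the second remark above.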

Structural Local Identifiability as Observability.
In this paper structural identifiability is considered as a particular case of observability. As noted in the preceding Section 3.2, nonlinear observability is a local concept, which means we will study structural local identifiability. The analysis of structural global identifiability requires other approaches [12][13][14]. Note, however, that the definitions provided here do not prevent a locally identifiable model from also being globally identifiable, and this will actually be the case in many practical applications.

Definition 6 (structural local identifiability).
A parameter θᵢ in a model M given by (4) is structurally locally identifiable (s.l.i.) if for almost any parameter vector θ* ∈ R^{n_θ} there is a neighbourhood N(θ*) in which the following property holds:

θ̂ ∈ N(θ*) and y(t, θ̂) = y(t, θ*) ⇒ θ̂ᵢ = θᵢ*.    (10)

Otherwise, θᵢ is structurally unidentifiable (s.u.). Structural identifiability can be considered as a particular case of observability by considering the parameters as state variables with zero dynamics [31,[47][48][49][50]. The augmented state variable vector is x̃ = [x; θ], and its dynamics are given by f̃(x̃, u) = [f(x, u, θ); 0]. Similar to the nonlinear observability matrix of (9), it is possible to define an augmented nonlinear observability-identifiability matrix, O_I(x̃), as

O_I(x̃) = (∂/∂x̃)⋅[h(x̃); L_f h(x̃); L_f² h(x̃); …; L_f^{n_x+n_θ−1} h(x̃)].    (12)

Theorem 10 (observability-identifiability condition, OIC). If the model M given by (4) satisfies rank(O_I(x̃₀)) = n_x + n_θ for a generic point x̃₀, then M is locally observable and structurally locally identifiable around x̃₀.
Remark 11 (identifiability of individual parameters). If the OIC is fulfilled, all the parameters of M are s.l.i. If the OIC does not hold, M is s.u. and at least some parameter(s) are s.u. (and/or some states are unobservable). Since each column in O_I corresponds to the partial derivative with respect to a state or parameter, it is possible to determine which parameters (states) are structurally unidentifiable (unobservable) by removing the corresponding column and recalculating rank(O_I). If deleting the i-th column does not change rank(O_I), then the i-th parameter (state) is structurally unidentifiable (unobservable) [47]. We can thus define a structural identifiability condition for an individual parameter as follows: a parameter θᵢ is s.l.i. if rank(O_I*(x̃₀)) < rank(O_I(x̃₀)), where O_I is defined in (12) and O_I*(x̃₀) is the array that results from removing the column corresponding to ∂/∂θᵢ from O_I(x̃₀).
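These definitions can be sketched programmatically. The following Python/SymPy toy model, ẋ = −θ₁⋅θ₂⋅x with y = x (a hypothetical example in which only the product θ₁⋅θ₂ affects the output), illustrates the augmented matrix O_I of (12) and the column-removal test of Remark 11:

```python
import sympy as sp

# Toy model (hypothetical): x' = -theta1*theta2*x, y = x.
x_, th1, th2 = sp.symbols('x theta1 theta2')
xt = sp.Matrix([x_, th1, th2])        # augmented state: (x, theta1, theta2)
ft = sp.Matrix([-th1*th2*x_, 0, 0])   # parameters have zero dynamics
h = sp.Matrix([x_])                   # measured output

def obs_ident_matrix(h, f, xt, n_der):
    """Stack the Jacobians of h, L_f h, ..., L_f^n_der h w.r.t. the augmented state."""
    rows = [h]
    for _ in range(n_der):
        rows.append(sp.simplify(rows[-1].jacobian(xt) * f))
    return sp.Matrix.vstack(*[r.jacobian(xt) for r in rows])

OI = obs_ident_matrix(h, ft, xt, n_der=len(xt) - 1)
full_rank = OI.rank() == len(xt)      # OIC test (Theorem 10); False here

def is_unidentifiable(i):
    """Remark 11: entry i is s.u./unobservable if deleting its column keeps the rank."""
    reduced = OI[:, [j for j in range(OI.cols) if j != i]]
    return reduced.rank() == OI.rank()
```

For this toy model the OIC fails (rank 2 < 3); the column test flags θ₁ and θ₂ as structurally unidentifiable while the measured state x remains observable.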

Example: Observability and Structural Identifiability Analysis of a Nonlinear Model

The approach described in Section 3.3 is demonstrated here by applying it to the nonlinear model used as motivating example in Section 2, which was briefly described there and in Figure 1. The measured outputs, G and β, yield the first two rows of O_I (their partial derivatives with respect to the augmented state vector); the matrix made up of these two rows has rank equal to two. Subsequent rows are calculated with Lie derivatives as defined in (6) and (7). In principle, n_x + n_θ − 1 = 7 Lie derivatives must be symbolically calculated. However, in practice it may be possible to stop the calculation earlier: if the rank of the matrix does not increase after the addition of a new derivative, it is not necessary to calculate higher order derivatives, since they will not modify the rank.
The third and fourth rows of O_I are obtained from the first Lie derivative of the output: they are the partial derivatives of L_f h(x̃) with respect to x̃. By adding these two rows, the rank of O_I increases from two to three. Proceeding in the same manner, the rank of the matrix increases with every additional Lie derivative until it stops growing: it is equal to 7 when O_I is built with either 5 or 6 Lie derivatives. Thus, with 6 derivatives we know that the model has some observability/identifiability issues, since its matrix does not have full rank (7 < n_x + n_θ = 8).
At this point we can determine the observability of each state and the structural identifiability of each parameter using the procedure described in Remark 11. This yields that the unmeasured state, I, is not observable and that there are two s.u. parameters (s_i, p) and three s.l.i. parameters. It can be noticed that multiplying the dynamic equation of I shown in Figure 1 by s_i leads to a modified model in which the third state is (s_i⋅I) instead of I, and parameter p only appears in the equations as part of the product s_i⋅p. This model formulation highlights the fact that only the products s_i⋅I and s_i⋅p are observable (identifiable).
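The early-stopping strategy described above can be sketched as follows (Python/SymPy; the two-compartment cascade used here is a hypothetical example, not the βIG model of Figure 1):

```python
import sympy as sp

def rank_with_early_stop(h, f, xt, max_der):
    """Append Lie-derivative rows one derivative at a time and stop as soon as
    the rank stops increasing: further derivatives cannot increase it."""
    rows = [h]
    O = h.jacobian(xt)
    rank = O.rank()
    for k in range(1, max_der + 1):
        rows.append(sp.simplify(rows[-1].jacobian(xt) * f))
        O = sp.Matrix.vstack(O, rows[-1].jacobian(xt))
        new_rank = O.rank()
        if new_rank == rank:
            return rank, k   # stopped early at the k-th derivative
        rank = new_rank
    return rank, max_der

# Hypothetical cascade in which only the first compartment is measured:
# x1' = -p1*x1, x2' = p1*x1 - p2*x2, y = x1.
x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2')
xt = sp.Matrix([x1, x2, p1, p2])              # states + parameters
ft = sp.Matrix([-p1*x1, p1*x1 - p2*x2, 0, 0])
h = sp.Matrix([x1])

rank, used = rank_with_early_stop(h, ft, xt, max_der=len(xt) - 1)
```

For this model the rank stalls at 2 after the second derivative, so the third derivative, required in principle, never needs to be computed; the all-zero columns for x₂ and p₂ then reveal them as unobservable and unidentifiable from y = x₁.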

Computational Implementations of the Rank Conditions.
The conditions described in Section 3 involve building observability (O) or observability-identifiability (O_I) matrices and calculating their rank, which can be done with symbolic or numerical software. Several implementations are available. The authors of [49] used semidefinite programming to evaluate the OIC (Theorem 10). They used SOSTOOLS [51], a free MATLAB toolbox that performs a sum of squares decomposition. This technique allows assessing identifiability for all parameter values within an interval; however, the computational cost of the rank calculation quickly becomes high as the problem size increases, which hinders the applicability of this method to medium-to-large models.
Another MATLAB tool is the STRIKE-GOLDD toolbox [52], publicly available software that analyses structural identifiability and observability using the OIC. It includes options such as performing partial analyses and decomposing the models, which can be helpful for analysing large models.
For rational systems, the Exact Arithmetic Rank (EAR) method is a numerical alternative for calculating the rank. It is based on an algorithm originally presented by Sedoglavic [31], which was extended and implemented in Mathematica by Jirstrand and coworkers [30].

Accessibility and the Role of Initial Conditions.
The rank conditions of Theorems 5 and 10 provide results that are valid for "almost all" values of the variables (state and parameter vectors), that is, for all possible values except for a set of measure zero (a "thin set"). Consequently, for specific values there may be loss of identifiability. This was pointed out by Saccomani et al. [53,54], who analysed this phenomenon with a differential algebra approach, tracing its cause to a loss of accessibility from certain initial conditions. Accessibility, also called reachability, is a property that describes the ability to move a system to any state in a neighbourhood of the initial one. Saccomani and coworkers noted that a loss of accessibility from specific initial conditions could lead to loss of structural identifiability. This matter has been recently approached from the differential geometry viewpoint. In [55] it was remarked that loss of accessibility is not the only possible cause of loss of structural identifiability from specific initial conditions: this phenomenon can take place even for models that are not accessible from generic initial conditions. Furthermore, it was also noted that a decrease in rank(O ) at a specific initial condition (0) does not necessarily result in a loss of structural identifiability, even if the system is started at that initial condition. In [55] a method for finding potentially problematic vectors was also suggested, although it scales up poorly with system size.

The Role of Inputs.
The methodology presented in Section 3 assumes that the input vector u is known and constant. Obviously, the same formulation can account for the case of unknown constant inputs simply by considering them as additional parameters, which are unknown and constant by definition. For known, time-varying inputs that are differentiable functions of time, a differential algebra approach would still be valid. However, the differential geometry procedure described in Section 3 needs to be extended in order to cope with this case. To this end it has recently been suggested to use extended Lie derivatives [56], which are defined as follows:

L_f h(x, u) = (∂h/∂x)⋅f(x, u) + Σ_{j≥0} (∂h/∂u^(j))⋅u^(j+1),    (17)

where u^(j) is the j-th derivative of the input u. Higher order extended Lie derivatives are recursively calculated as

L_f^i h(x, u) = (∂L_f^{i−1} h/∂x)⋅f(x, u) + Σ_{j≥0} (∂L_f^{i−1} h/∂u^(j))⋅u^(j+1),    L_f^0 h = h.    (18)

(Note that this definition considers a time-dependent input vector u(t), which is simply written as u for ease of notation.) Unlike the previously defined Lie derivatives of (6) and (7), the extended Lie derivatives are equal to the output derivatives for time-varying inputs: L_f^i h(x(t), u(t)) = y^(i)(t).
Evaluating the OIC with an O_I built with extended Lie derivatives correctly determines the observability and structural identifiability of a model. Some models may require time-varying inputs in order to be identifiable. In [56] it was shown how the extended Lie derivatives can be used for experimental design, by determining the number of nonzero derivatives of the input that are required for structural identifiability.
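A minimal sketch of how such extended Lie derivatives can be implemented symbolically is given below (Python/SymPy). The model ẋ = a + b⋅u with y = x is a hypothetical toy, chosen because it is identifiable only when u̇ ≠ 0; the input derivatives are treated as symbols, with the chain truncated so that derivatives beyond u̇ are implicitly zero (a ramp input):

```python
import sympy as sp

# Hypothetical toy model: x' = a + b*u, y = x; augmented state (x, a, b).
x_, a, b = sp.symbols('x a b')
u0, u1 = sp.symbols('u0 u1')        # u and its first derivative, as symbols
xt = sp.Matrix([x_, a, b])
ft = sp.Matrix([a + b*u0, 0, 0])
h = sp.Matrix([x_])

def extended_lie(expr, xt, ft, u_chain):
    """(d expr/dx) f + sum_j (d expr/du^(j)) u^(j+1); derivatives beyond the
    end of u_chain are implicitly set to zero (ramp-like input)."""
    out = expr.jacobian(xt) * ft
    for uj, ujp1 in zip(u_chain[:-1], u_chain[1:]):
        out += expr.diff(uj) * ujp1
    return sp.simplify(out)

def OI(u_chain, subs=None):
    rows = [h]
    for _ in range(len(xt) - 1):
        rows.append(extended_lie(rows[-1], xt, ft, u_chain))
    M = sp.Matrix.vstack(*[r.jacobian(xt) for r in rows])
    return M.subs(subs or {})

rank_const = OI([u0, u1], subs={u1: 0}).rank()   # constant input: u' = 0
rank_ramp  = OI([u0, u1]).rank()                 # ramp input: u' != 0 generically
```

With u̇ = 0 the last row of O_I vanishes and only the combination a + b⋅u is identifiable (rank 2 of 3); with u̇ ≠ 0 the matrix reaches full rank, mirroring the effect described in [56].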
The identifiability of the βIG model used in Sections 2 and 3.4 does not depend on the input derivatives. Hence, in this section this situation will be illustrated with a different example, a linear two-compartment model taken from [56]. Compartmental models of this type are commonly used to describe physiological processes. Note that, although this model is linear in the states, if the state vector is augmented with the parameters (as needed for structural identifiability analysis) the model becomes nonlinear. The model is structurally unidentifiable from an experiment with a constant input, but becomes structurally identifiable with a continuous time-varying input such as a ramp [56]. This is illustrated in Figure 2. The constant input result can be obtained by applying the procedure described in Remark 11, which determines that the unmeasured state x₂ is observable but all the parameters are s.u. The time-varying input result is obtained by building O_I with the extended Lie derivatives defined in (18); in the corresponding symbolic derivation u̇ is set to a constant value and higher order derivatives (ü and above) are set to zero. This yields rank(O_I) = 6 with 5 derivatives, and the model is observable and s.l.i. These calculations can be performed with STRIKE-GOLDD2 [56] and take less than one second on a standard computer. The difference in the results with u̇ = 0 and u̇ ≠ 0 is due to the presence of terms containing u̇ in some entries of O_I, whose contribution is needed for a full rank. Setting u̇ = 0 removes these terms and decreases the matrix rank, leading to a loss of identifiability. It should be noted that this model can also be analysed with a differential algebra approach; for example, the COMBOS application [57] obtains the same result in comparable time. Compared to the differential geometry approach, the advantages of the differential algebra method are the ability to distinguish between local and global identifiability and to find identifiable combinations.
Its disadvantages are that in principle it cannot consider specific input derivatives being zero (e.g., u̇ ≠ 0 but ü = 0) and that it typically scales up worse computationally for highly nonlinear models.
A different problem arises when the inputs are time-varying and unknown. Such inputs can be viewed as external disturbances, for which there are neither measurements nor information about their dependence on time. Martinelli [58] extended the ORC to account for this situation for the case of nonlinear systems that are affine with respect to the inputs, which must be differentiable but may be known and/or unknown. To this end, the model defined by (5) is augmented in order to include an unknown input vector w as follows:

ẋ(t) = f₀(x(t)) + Σᵢ fᵢ(x(t))⋅uᵢ(t) + Σⱼ gⱼ(x(t))⋅wⱼ(t),    y(t) = h(x(t)).

In [58] it was proposed to extend this model by augmenting the original state x with the unknown input and its derivatives up to order k, that is, [x, w, ẇ, …, w^(k)]. An extended observability rank condition (EORC) was then presented, allowing checking the observability of systems with unknown inputs, although not of the inputs themselves, at least in its published form. Although in [58,59] the structural identifiability problem was not explicitly considered, it is of course possible to apply this idea to a joint observability and structural identifiability analysis.

Model Symmetries and Identifiable Combinations.
If a set of parameters is found to be structurally unidentifiable, a question naturally arises: is it possible to reformulate the model by combining such parameters into an identifiable quantity? The answer to this question entails characterizing the form in which the structurally unidentifiable parameters are correlated. Many methods for structural identifiability analysis are capable of addressing this problem to a certain extent; however, no generally applicable and automatic procedure exists.
One of the first examples, the "exhaustive modelling" method for finding the set of models that are output indistinguishable from a given one, was presented in [60]. This procedure, also known as the similarity transformation approach, can be used to obtain structurally identifiable versions of linear compartmental models. An extension to controlled nonlinear models, which requires testing controllability and observability conditions, was presented in [28], and the case of uncontrolled systems was considered in [61,62].
Differential algebra is a classic approach for the study of observability [63] and structural identifiability [25]. The equivalence between the observability definitions from the algebraic and differential geometric viewpoints was established in [64] for a class of rational systems. DAISY is a software that adopts the differential algebra approach to assess global structural identifiability and observability [65], and COMBOS [57] is a tool specifically developed for finding identifiable parameter combinations using differential algebra concepts such as Gröbner bases [26,66].
Other approaches to this problem use Lie transformations. A method based on the generation of Lie algebras that represent the symmetries of the model equations was presented in [67]. This procedure uses random numerical specializations and is valid for autonomous, rational systems.
Instead of using random specializations, another method described in [68] finds Lie symmetries by transforming rational terms into linear terms. Finally, the aforementioned toolbox STRIKE-GOLDD [52], which uses Lie derivatives to calculate the observability-identifiability matrix O , includes a procedure for finding identifiable parameter combinations that is based on ideas from [47,69,70]. Briefly, it removes from O the columns corresponding to identifiable parameters and calculates a basis for the null space of the resulting matrix. The coefficients of this basis define a set of partial differential equations, whose solutions yield the identifiable combinations.
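The null-space idea can be sketched on a toy model, ẋ = −θ₁⋅θ₂⋅x with y = x (hypothetical; only the product θ₁⋅θ₂ is identifiable). The null space of O_I gives the direction along which the parameters can be varied without affecting the output, and any identifiable combination must have zero directional derivative along it:

```python
import sympy as sp

# Hypothetical toy model: x' = -theta1*theta2*x, y = x.
x_, th1, th2 = sp.symbols('x theta1 theta2')
xt = sp.Matrix([x_, th1, th2])
ft = sp.Matrix([-th1*th2*x_, 0, 0])
h = sp.Matrix([x_])

# Build the observability-identifiability matrix O_I.
rows = [h]
for _ in range(len(xt) - 1):
    rows.append(sp.simplify(rows[-1].jacobian(xt) * ft))
OI = sp.Matrix.vstack(*[r.jacobian(xt) for r in rows])

null = OI.nullspace()          # directions of local output invariance
v = sp.simplify(null[0])

# theta1*theta2 is constant along the null direction, so it is a
# candidate identifiable combination:
combo = th1*th2
invariant = sp.simplify(sp.Matrix([combo]).jacobian(xt) * v)[0] == 0
```

The null vector's coefficients are exactly those of the partial differential equation mentioned above; solving it by characteristics yields θ₁⋅θ₂ = constant, i.e., the product is the identifiable combination.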

Sloppiness, Dynamical Compensation, and Structural Identifiability.
A structurally unidentifiable model can yield the same output for different parameter values. This situation might be interpreted as a sign of robustness of the system to changes in parameter values. However, while lack of identifiability is usually considered an undesirable model property, in certain contexts robustness is seen as a desirable property. This apparent contradiction highlights the subtle character of the relationship between identifiability and robustness. As an illustration of this relationship, this subsection discusses two concepts developed in recent years, sloppiness and dynamical compensation, that are related but not equivalent to unidentifiability.
The first concept, sloppiness, was introduced in [71] to describe models whose output is sensitive to changes in so-called stiff parameters but largely insensitive to changes in sloppy parameters. Sloppiness was defined as the existence of a clear gap between the eigenvalues of the system's Fisher information matrix (FIM), with large eigenvalues corresponding to stiff parameters and small eigenvalues to sloppy ones. It was claimed that sloppiness is a universal feature of systems biology models [43], which would make it impossible to estimate all parameters accurately. More recent publications have provided new insights into sloppiness, as reviewed in [72]. The concept, which has been linked to information theory, highlights the fact that a model's output behaviour may be tightly constrained even when its parameter values are only loosely constrained; it thus provides a viewpoint for studying how distinguishable models are and how they can be reduced. Several papers have clarified the relation between sloppiness and identifiability [73][74][75][76]. It is now understood that sloppiness is related to practical rather than structural identifiability, and that it is not equivalent to unidentifiability of any kind: sloppy models can indeed be identifiable.
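The eigenvalue-gap diagnosis can be sketched numerically. The example below uses an illustrative two-exponential model chosen for this sketch (not a model from the reviewed literature), whose nearly redundant decay rates produce one stiff and one sloppy direction in the FIM:

```python
# Sketch of the eigenvalue-gap diagnosis of sloppiness for the toy model
# y(t) = exp(-k1*t) + exp(-k2*t), whose two decay rates are nearly redundant.
import numpy as np

k1, k2 = 1.0, 1.2                     # nearby decay rates
t = np.linspace(0.0, 10.0, 500)

# Output sensitivities with respect to each parameter
S = np.column_stack([-t * np.exp(-k1 * t),    # dy/dk1
                     -t * np.exp(-k2 * t)])   # dy/dk2

fim = S.T @ S                         # FIM under unit-variance Gaussian noise
eigs = np.sort(np.linalg.eigvalsh(fim))[::-1]

print(eigs[0] / eigs[1])              # large gap: one stiff, one sloppy direction
```

Both eigenvalues are strictly positive, so the model is (locally) identifiable, yet the roughly two-orders-of-magnitude gap means that one parameter combination is far better constrained by the data than the other, which is exactly the sense in which sloppiness differs from unidentifiability.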
The second concept, dynamical compensation (DC), was introduced in [35] as a property found in certain physiological circuits. Originally, DC was defined simply as the invariance of the model output with respect to changes in a parameter value. It was immediately noted that, according to this definition, DC amounted to structural unidentifiability [77,78]. (The glucose homeostasis mechanism discussed in the Introduction was proposed in [35] as a possible mechanism for achieving DC; depending on its formulation, i.e., on which states are measured and which parameters are known, this model can be structurally unidentifiable.) This equivalence between structural unidentifiability and the original definition of DC was not discussed in [35] and was potentially problematic, since the purpose of DC was to describe a phenomenon different from structural unidentifiability. More precisely, DC referred to the capability of a physiological circuit to maintain its dynamic behaviour unchanged after a change in the value of a model parameter, following a transition period. An alternative definition of DC, which provided a more detailed description of the phenomenon and took into account the relationship with structural identifiability, was proposed in [46].

Open Problems and Future Directions
The differential geometry approach adopted in this review has been used to analyse observability and structural identifiability of nonlinear systems for more than forty years. The theoretical and computational advances made in the last decades have increased its applicability. However, there are still many challenges that call for more research in this area.
For example, an intrinsic limitation of the approach is that it yields only local results. Other methods, such as differential algebra, are capable of providing global structural identifiability results; they could serve as an inspiration for extending, or hybridizing, the differential geometry techniques to perform global analyses.
Other desirable developments would consist of advanced implementations to alleviate the computational burden of the analyses. Such improvements, which may benefit from the use of parallelization and high performance computing techniques, would facilitate the application of these methods to the increasingly large models being built in the biological modelling community.
Another possible direction concerns the role of inputs in observability and identifiability analysis. Despite recent advances, several open questions remain regarding this matter. It has been noted that certain models that are structurally unidentifiable from a single constant-input experiment can become identifiable if a continuously time-varying input is used [56]. In some cases the same improvement can be obtained with multiple constant-input experiments [56,79] or, equivalently, with a single experiment with a piecewise constant input. However, the question of when a time-varying input and multiple constant inputs are equivalent for the purpose of structural identifiability has not yet been answered. Likewise, the problem of analysing observability and structural identifiability in the presence of unmeasured inputs has not been fully solved.
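The identifiability gain from multiple constant-input experiments can be illustrated with a toy bilinear model, dx/dt = -(p1 + p2 u)x with output y = x (a hypothetical example, not one analysed in [56,79]). With a single constant input u = 1, only the combination p1 + p2 is identifiable; combining the experiments u = 0 and u = 1, with shared parameters, makes both parameters identifiable:

```python
import sympy as sp

x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2')

def obs_matrix(states, f, h):
    """Stack gradients of successive Lie derivatives of h along f."""
    rows, L = [], h
    for _ in range(len(states)):
        rows.append(L.jacobian(states))
        L = L.jacobian(states) * f
    return sp.Matrix.vstack(*rows)

# Single experiment, constant input u = 1: dynamics dx/dt = -(p1 + p2) x
O1 = obs_matrix(sp.Matrix([x1, p1, p2]),
                sp.Matrix([-(p1 + p2) * x1, 0, 0]),
                sp.Matrix([x1]))
print(O1.rank())   # 2 < 3: only the combination p1 + p2 is identifiable

# Two experiments, inputs u = 0 and u = 1, analysed jointly: the state is
# replicated (x1, x2) while the parameters are shared across experiments
O2 = obs_matrix(sp.Matrix([x1, x2, p1, p2]),
                sp.Matrix([-p1 * x1, -(p1 + p2) * x2, 0, 0]),
                sp.Matrix([x1, x2]))
print(O2.rank())   # 4 = full rank: both parameters become identifiable
```

Replicating the states while sharing the parameters is the standard way to cast a multi-experiment analysis as a single-system rank test.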
Finally, an important open question is the relationship between observability/identifiability and model predictions. It is known that a lack of observability or identifiability can lead to prediction errors, but this is not necessarily the case: unidentifiable models can sometimes still yield accurate predictions. Therefore, further insights into the requisites for accurate predictive modelling would be a valuable contribution.

Conflicts of Interest
The author declares that he has no conflicts of interest.