Computing all identifiable functions of parameters for ODE models

Parameter identifiability is a structural property of an ODE model describing whether the values of the parameters can be recovered from the data (i.e., from the input and output variables). This property is a prerequisite for meaningful parameter identification in practice. In the presence of nonidentifiability, it is important to find all functions of the parameters that are identifiable. The existing algorithms either check whether a given function of parameters is identifiable or, under a solvability condition, find all identifiable functions. However, this solvability condition is not always satisfied, which presents a challenge. Our first main result is an algorithm that computes all identifiable functions without any additional assumptions; to the best of our knowledge, it is the first such algorithm. Our second main result concerns identifiability from multiple experiments (with generically different inputs and initial conditions among the experiments). For this problem, we prove that the set of functions identifiable from multiple experiments is exactly what is computed by input-output equation-based algorithms (whether or not the solvability condition is fulfilled), which was not known before. We give an algorithm that not only finds these functions but also provides an upper bound on the number of experiments needed to identify them. We provide an implementation of the presented algorithms.


Introduction
In this paper, we study structural parameter identifiability of rational ODE systems. Roughly speaking, a parameter is structurally identifiable if its value can be recovered from the observations assuming continuous noise-free measurements and sufficiently exciting inputs (also referred to as the persistence of excitation, see [21,43]). If not all of the parameters of a model are identifiable, the next question usually is which rational functions h(μ) ∈ C(μ) of the parameters μ are identifiable. The knowledge of identifiable functions can be used in the following ways:
• If the functions of interest to the modeler are identifiable, then the lack of identifiability of some parameters is not an issue (sometimes, this is even an advantage [38]).
• Identifiable functions can be used to find an identifiable reparametrization of the model [1,23,24], which is a way of improving the model.
• Knowledge of identifiable functions can be used to discover parameter transformations that preserve the input-output behavior and thus could provide additional insights to the modeler (see Section 5.2).
To the best of our knowledge, all existing approaches to computing identifiable functions extract them from the coefficients of input-output equations (going back to [29]; for a concise summary, we refer to [31, Introduction and Algorithm II.1]). To conclude that the coefficients of an input-output equation are identifiable, one can, for example, verify that the solvability condition [35, Remark 3] holds for the equation. The condition can be checked algorithmically (see [8, Section 4.1] and [28, Section 3.4]) and holds for some classes of models [31]. If the condition does not hold, then this approach to finding identifiable functions of parameters is not applicable, but it is still used by some of the existing software packages, including DAISY and COMBOS. This is a reason why these tools may miss the nonidentifiability of some of the parameters in such systems.
For a simple example of a system for which this condition does not hold, see [16,Example 2.14] (see also Sections 5.2 and 5.3).
Therefore, we are not aware of any prior algorithm that can compute all identifiable functions (e.g., by computing generators of the field of identifiable functions). Note that some existing software can, for any fixed rational function of parameters, check whether it is identifiable [32, Remark 1]. However, when looking for all identifiable functions, one does not know in advance which functions of parameters to test for identifiability, so this approach cannot be used as an algorithm.
The above issues motivate the following questions (which, as far as we know, remained unanswered), which we study in this paper:
(Q1) How can one find the identifiable functions of a model even if the solvability condition does not hold?
(Q2) If the solvability condition does not hold, what is the meaning of the coefficients of the input-output equations?
Our main results are the following answers to these questions:
• We answer (Q1) by providing Algorithm 1 for computing generators of the field of identifiable functions. This is the first such algorithm, and it is based on our theory established in Theorem 11.
• We show in Theorem 19 that the coefficients of the input-output equations are generators of the field of functions identifiable from multiple experiments (with generically different inputs and initial conditions among the experiments [20]), thus answering (Q2). To the best of our knowledge, this natural interpretation of the coefficients of the input-output equations has not been known before despite the popularity of this method. Furthermore, we use this to derive the first upper bound for the number of such experiments, which can be used further for experimental design (e.g., for protocols such as [10, Section 7]). The multi-experiment setup is natural, for example, for models involving constant inputs [42].

The theoretical basis for this work uses differential algebra and commutative algebra. We employ characteristic sets, a tool from computational algebra. The key difference from the prior algorithms based on characteristic sets is that we provide a mathematically sound way to treat a typically ignored case in which the solvability condition is not satisfied. To achieve this, we analyze the Wronskians of the monomials of characteristic sets using methods from linear algebra. Our results are informed by model theory in the sense of mathematical logic, though this does not appear explicitly in our presentation. We elaborate on this connection in a follow-up work [30]. Additional related results on identifiability using input-output equations and differential algebra include [3,7,14,17,25–27,34–36].
The rest of the paper is organized as follows. Section 2 contains the definitions and notation that we use. In Section 3, we give our algorithm for computing generators of the field of identifiable functions, together with the theory it is based on. Section 4 develops the theory of multi-experiment identifiability. We illustrate our methods with examples in Section 5. We prove our main results in Appendix A. In the remaining appendices, we present and prove the correctness of two algorithms used in our main algorithmic contributions, and we also provide a mathematical discussion, illustrated with examples, of our main theorems.
We have implemented Algorithm 1 and an algorithm for computing the bound from Theorem 19 (as in Remark 23) in Maple. A Maple implementation together with the examples from Section 5 is available at https://github.com/pogudingleb/AllIdentifiableFunctions. This implementation has recently been incorporated into a freely available web app https://maple.cloud/app/5710317752942592/SIAN.

Basic notions and notation
In this section, we will present the basic notions and notation from differential algebra and parameter identifiability that are essential for our main results.

Background and notation from differential algebra
Differential algebra has been a standard theory behind identifiability, and we will simply fix the basic notation. General references include [19,33]. For other presentations of these concepts in the context of control theory, see [11,18,21,35].

Notation 1 (Differential rings and ideals).
(a) A differential ring (R, ′) is a commutative ring with a derivation ′ : R → R, that is, a map such that, for all a, b ∈ R, (a + b)′ = a′ + b′ and (ab)′ = a′b + ab′. A differential field is a differential ring that is a field. For i > 0, a^(i) denotes the i-th order derivative of a ∈ R. Const(K) denotes the field of constants of a differential field K.
(b) The ring of differential polynomials in the variables z1, . . . , zn over a differential field (K, ′) is the ring K[zj^(i) | i ≥ 0, 1 ≤ j ≤ n] with the derivation defined by (zj^(i))′ := zj^(i+1). This differential ring is denoted by K{z1, . . . , zn}.
(c) For differential fields F ⊂ L and a1, . . . , an ∈ L, the smallest differential subfield of L that contains F and a1, . . . , an is denoted by F⟨a1, . . . , an⟩.
(d) For a commutative ring R and a subset F ⊂ R, the smallest ideal containing F is denoted by (F).
(e) An ideal I of a differential ring (R, ′) is called a differential ideal if a′ ∈ I for all a ∈ I. For F ⊂ R, the smallest differential ideal containing F is denoted by [F].
(f) For an ideal I and an element a in a ring R, we denote I : a^∞ = {r ∈ R | ∃n : a^n r ∈ I}. This set is also an ideal in R. This will be useful for dealing with ODE systems in which (non-polynomial) rational functions appear.
(g) For a1, . . . , an in a differential ring R, we denote by Wr(a1, . . . , an) the n × n matrix with (i, j)-entry aj^(i−1) and call it the Wronskian of a1, . . . , an. For example, Wr(a1, a2) is the 2 × 2 matrix with rows (a1, a2) and (a1′, a2′).

The rest of the definitions in this section generalize Gaussian elimination to systems of non-linear ODEs. Differential rankings are analogous to the ordering of variables in Gaussian elimination; characteristic sets and presentations are analogous to the row echelon form and reduced row echelon form, respectively.

Definition 2. A differential ranking is a total order > on Z := {zj^(i) | i ≥ 0, 1 ≤ j ≤ n} satisfying, for all u, v ∈ Z: u′ > u, and u > v implies u′ > v′.

Notation 3. For f ∈ K{z1, . . . , zn} \ K and a differential ranking,
• lead(f) is the element of {zj^(i) | i ≥ 0, 1 ≤ j ≤ n} of the highest rank appearing in f. This is partly analogous to the leading variable in Gaussian elimination.
• The leading coefficient of f viewed as a polynomial in lead( f ) is called the initial of f . This is similar to the leading coefficient in Gaussian elimination.
• The separant of f is ∂ f ∂ lead( f ) . One can show that it is equal to the leading coefficient of any derivative of f .
• The rank of f is rank(f) := lead(f)^d, where d = deg_lead(f) f. Ranks are compared first by lead and, in the case of equality, by this degree. This is analogous to the leading variable in Gaussian elimination/leading term in Gröbner bases.
• For S ⊂ K{z 1 , . . . , z n }\K, the product of initials and separants of S is denoted by H S . This is used in handling division with remainder algebraically.
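As a small illustration of Notation 1(g), the Wronskian is easy to form symbolically; the following Python/SymPy sketch (the entries t and t² are our own arbitrary choices) builds the matrix directly from the definition:

```python
import sympy as sp

t = sp.symbols("t")

def wronskian(funcs, var):
    # (i, j)-entry is the (i-1)-st derivative of funcs[j], as in Notation 1(g)
    n = len(funcs)
    return sp.Matrix(n, n, lambda i, j: sp.diff(funcs[j], var, i))

W = wronskian([t, t**2], t)
# First row (t, t**2), second row the derivatives (1, 2*t); det(W) = t**2
```

A nonzero determinant here reflects the standard fact that the Wronskian of functions that are linearly independent over the constants does not vanish identically.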

Definition 4 (Characteristic sets).
• For f, g ∈ K{z1, . . . , zn} \ K, f is said to be reduced w.r.t. g if no proper derivative of lead(g) appears in f and deg_lead(g) f < deg_lead(g) g.
• A subset A ⊂ K{z 1 , . . . , z n }\K is called autoreduced if, for all p ∈ A, p is reduced w.r.t. every element of A \ {p}. Every autoreduced set is finite [19,Section I.9].
• Let A = A1 < . . . < Ar and B = B1 < . . . < Bs be autoreduced sets ordered by their ranks (see Notation 3). We say that A < B if either r > s and rank(Ai) = rank(Bi) for 1 ≤ i ≤ s, or there exists q such that rank(Aq) < rank(Bq) and rank(Ai) = rank(Bi) for all i, 1 ≤ i < q.
• An autoreduced subset of the smallest rank of a differential ideal I ⊂ K{z 1 , . . . , z n } is called a characteristic set of I. One can show that every non-zero differential ideal in K{z 1 , . . . , z n } has a characteristic set.
Definition 5 (Characteristic presentation). (cf. [5,Definition 3]) A polynomial is said to be monic if at least one of its coefficients is 1. This is how monic is typically used in identifiability analysis and not how it is used in [5]. A set of polynomials is said to be monic if each polynomial in the set is monic. Let C be a monic characteristic set of a prime differential ideal P ⊂ K{z 1 , . . . , z n }. Let N(C) denote the set of non-leading variables of C. Then C is called a characteristic presentation of P if all initials of C belong to K[N(C)] and none of the elements of C has a factor in K[N(C)].

Parameter identifiability for ODE models
Consider an ODE system Σ of the form

x̄′ = f̄(x̄, μ̄, ū), ȳ = ḡ(x̄, μ̄, ū), (1)

where x̄ is a vector of state variables, ȳ a vector of output variables, μ̄ a vector of time-independent parameters, ū a vector of input variables, and f̄ and ḡ are tuples of elements of C(x̄, μ̄, ū). Bringing f̄ and ḡ to a common denominator, write f̄ = F̄/Q and ḡ = Ḡ/Q for F1, . . . , Fn, G1, . . . , Gm, Q ∈ C[x̄, μ̄, ū]. Consider the (prime, see [16, Lemma 3.2]) differential ideal

I_Σ := [Q x̄′ − F̄, Q ȳ − Ḡ] : Q^∞.

Note that every solution of (1) is a zero of every element of I_Σ.
Rigorously written definitions of identifiability in analytic terms can be found in [31,Definition 1] and [16,Definition 2.5]. [16,Proposition 3.4] implies that the following is an equivalent definition of identifiability, which we will use.
A set of input-output equations (Definition 8) can be computed by computing the characteristic presentation C of I_Σ with respect to the differential ranking that is compatible with > and in which any derivative from x̄ is greater than any derivative from (ȳ, ū), and returning C ∩ C(μ̄){ȳ, ū} (e.g., by the Rosenfeld-Gröbner algorithm [6]).
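For intuition, this elimination step can be imitated on a toy model with ordinary Gröbner bases; the sketch below (the model x′ = ax, y = x, the single prolongation, and all symbol names are our own assumptions, and this is only a stand-in for the Rosenfeld-Gröbner step, not the actual differential algorithm) recovers the input-output equation y′ − ay:

```python
import sympy as sp

# Toy model (assumed for illustration): x' = a*x, y = x.
# We prolong once (adding y' = x') and eliminate x, x' with a lex
# Groebner basis, with the eliminated variables ranked highest.
a, x, x1, y, y1 = sp.symbols("a x x1 y y1")  # x1, y1 stand for x', y'
gens = [x1 - a*x, y - x, y1 - x1]
G = sp.groebner(gens, x, x1, y1, y, a, order="lex")

# Keep the basis elements free of the state variables: the IO-equations
io = [g for g in G.exprs if not g.has(x) and not g.has(x1)]
# io == [y1 - a*y], i.e., the input-output equation y' = a*y
```
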
In Sections 3 and 4, we present our two main results, Theorems 11 and 19. The former is the main theoretical ingredient of our Algorithm 1 for finding all single-experiment identifiable functions of parameters. The latter is the key to calculating a bound on the number of experiments sufficient for identifying the multi-experiment identifiable functions of parameters. Both main results are used in our software implementations referenced in the introduction. We prove our main results in Appendix A.

Main result: Single-experiment identifiability
In this section, we give an algorithm to compute all functions of the parameters that are identifiable from a single experiment for system (1). We begin with a construction in Section 3.1, which is a refinement of considering Wronskians of monomials (cf. [8,12,28,31]). Using this, we give an algebraic characterization, Theorem 11 (our first main result), of the identifiable functions, which we turn into Algorithm 1. The proof of Theorem 11 can be found in Appendix A.1.

Preparation for Theorem 11
To find the identifiable functions, we will begin with a new construction. Let K be a differential field and k a subfield of constants such that C ⊂ k, and let ā = (a1, . . . , an) ∈ K^n. For p ∈ k{z̄}, where z̄ = (z1, . . . , zn), such that p(ā) = 0, we construct a subfield F(p) ⊂ K as follows: 1. Let W_p denote the Wronskian (see Notation 1(g)) of the monomials of p evaluated at ā.
2. Define F(p) to be the field generated over C by (the nonleading) entries in the reduced row echelon form of W p .
For example, suppose that the monomials of p are z1′, z1z2, and 1. For a suitable ā, their Wronskian evaluated at ā is already in reduced row echelon form with all non-leading entries in C, and so F(p) = C. For examples in which F(p) is strictly greater than C, see Example 14 and Section 5.2. There, the evaluation of the Wronskian at a point is carried out via differential ideal computations.
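A minimal SymPy sketch of the two steps of this construction, for a hypothetical evaluation point of our own choosing:

```python
import sympy as sp

t = sp.symbols("t")

def wronskian(funcs, var):
    n = len(funcs)
    return sp.Matrix(n, n, lambda i, j: sp.diff(funcs[j], var, i))

# Hypothetical evaluation point (our choice, not from the text):
# a = (t**2/2, 2), so the monomials z1', z1*z2, 1 evaluate to t, t**2, 1.
Wp = wronskian([t, t**2, sp.Integer(1)], t)
R, pivots = Wp.rref()
# Here R is the 3x3 identity: there are no non-leading entries, and the
# construction gives F(p) = C.
```
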
For a tuple p̄ ⊂ k{z̄} of differential polynomials, we define F(p̄) to be the field generated by F(p) for p ∈ p̄. Since all entries of each W_p are from C⟨ā⟩, so are the generators of F(p̄); in particular, F(p̄) ⊂ C⟨ā⟩.

Statement of main result
We will now show that the problem of finding the field of identifiable functions is reduced to computing the intersection of fields of constants defined by their generators. This is a key step in Algorithm 1 to find the field of all identifiable functions.
Theorem 11 (Single-experiment identifiability). For system (1), the field of identifiable functions is equal to C(μ̄) ∩ F(p̄), where p̄ is a set of input-output equations of (1) (Definition 8).
Remark 12. Similarly to Theorem 19, the statement of Theorem 11 remains true if, in the calculation of F(p), for each p, one replaces the Wronskian of the monomials evaluated at ā by the Wronskian of any q1, . . . , qn ∈ C{z̄} evaluated at ā such that p = c1 q1 + · · · + cn qn for some c1, . . . , cn ∈ k.

An algorithm for computing all identifiable functions
In this section, we present an algorithm that computes generators of the field of all identifiable functions of system (1). We also give an example following the algorithm step by step.

Algorithm 1 Computing all identifiable functions
Input: a system Σ as in (1).
Output: generators of the field of identifiable functions of Σ.
(Step 1) Compute a set p̄ of input-output equations of Σ (see Definition 8).
(Step 2) For each p ∈ p̄, compute the Wronskian of the monomials of p and evaluate it by replacing each yi^(j) with the j-th Lie derivative of gi with respect to Σ (the gi are the same as in (1)); denote the result by W_p.

(Step 3) For each p ∈ p̄, calculate the reduced row echelon form of the matrix W_p, and let F(p̄) be the field generated over C by the non-leading entries of all of these matrices. By [16, Lemma 3.1] and Remark 25, the generators of F(p̄) belong to C(μ̄, x̄).
(Step 4) Return generators of C(μ̄) ∩ F(p̄), computed using Algorithm 2.
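The Lie-derivative substitution of (Step 2) can be sketched as iterated directional derivatives along the vector field; in the toy sketch below, the dynamics x′ = ax + u and the output y = x are our own assumptions, and the input u is treated as a constant for simplicity (the actual algorithm also accounts for derivatives of the inputs):

```python
import sympy as sp

def lie_derivative(h, states, rhs):
    # Derivative of h along the vector field x' = rhs(x)
    return sp.expand(sum(sp.diff(h, xi) * fi for xi, fi in zip(states, rhs)))

# Toy dynamics (assumed): x' = a*x + u with output y = x
a, x, u = sp.symbols("a x u")
y1 = lie_derivative(x, [x], [a*x + u])     # stands in for y'
y2 = lie_derivative(y1, [x], [a*x + u])    # stands in for y''
# y1 == a*x + u, y2 == a**2*x + a*u
```
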
Remark 13. In practice, the runtime of the algorithm depends on the chosen ranking, and it would be interesting to have a way to choose the ranking based on the problem.

Example 14 (Computing identifiable functions -illustration).
To illustrate, we will follow Algorithm 1 on a variant of the example from [30, Section 5].
(Step 1) For the elimination differential ranking with x1 > x2 > y > u, a calculation shows that a monic characteristic presentation for I_Σ is obtained, and therefore p̄ = (p), where p = yy′′ − auy′′ − y′² + au′y′ − buy′ + bu′y.
(Step 2) The Wronskian W_p = Wr(u′y, uy′, u′y′, y′², uy′′, yy′′) is computed (too large to be displayed here). Then, all derivatives of y are replaced with the corresponding Lie derivatives of x2.
(Step 3) A calculation shows the corresponding reduced row echelon form (omitted here), whose non-leading entries generate F(p̄).
(Step 4) By Theorem 11, the field of identifiable functions is C(μ̄) ∩ F(p̄). Applying Algorithm 2, we find that this intersection is C, so there are no nontrivial identifiable functions in this model.

Main result: Multi-experiment identifiability
In this section, we show that the coefficients of the input-output equations generate the field of multi-experiment identifiable functions, and we derive a generally tight upper bound on the number of independent experiments for system (1) sufficient to recover the field of multi-experiment identifiable functions of parameters. These results are stated in Section 4.1 and proven in Appendix A.2. We apply them to specific examples from the literature in Section 5. The tightness of the bound from the mathematical point of view is discussed in the appendix.

Preparation for Theorem 19
Definition 15 (Input-output identifiable functions). A function of parameters h ∈ C(μ̄) in system (1) is called input-output (IO-)identifiable if it belongs to the subfield of C(μ̄) generated over C by the coefficients of a set of input-output equations of (1) (Definition 8). As shown in [32, Section 4.1], every identifiable function is input-output identifiable, but not every input-output identifiable function is necessarily identifiable.
A function h ∈ C(μ̄) is called multi-experiment (ME-)identifiable if it is identifiable in the N-experiment system Σ_N (see (3)) for some N; we also say that h is N-experiment identifiable in this case.
Example 17 (Illustrating the definition). Consider the system (intentionally simple to illustrate the definition) in which θ is the unknown parameter. By [16, Example 2.14], θ is not (globally) identifiable. Consider now the corresponding 2-experiment system and so is identifiable.
Remark 18. SIAN [15] (see also [32,Remark 2.3]) is software that can check (SE-) global and local identifiability of any given function h ∈ C(μ) of parameters of an ODE model Σ. If h is globally ME-identifiable, then, running SIAN on models of the form Σ N (see (3)) for N = 1, 2, . . ., one will in principle eventually find this out. However, if h is not globally ME-identifiable, one will not be able to conclude this from assessing SE-identifiability of Σ N without a bound on the number of experiments (provided by Theorem 19).
On the other hand, one could use SIAN to find the sufficient number of experiments given a set of generators of the field of ME-identifiable functions. Indeed, for each of these generators, there is an N such that the generator is SE-identifiable in Σ_N, so the sufficient number of experiments can be taken as the maximum of these Ns. However, this approach works only if generators of the field of ME-identifiable functions are known in advance. Theorem 19 and an algorithm to compute IO-equations (Definition 8) yield an algorithm to find such generators.
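The replicated system Σ_N used above can be built mechanically; in the sketch below, the toy model x′ = θx + u with output y = x is our own assumption:

```python
import sympy as sp

def replicate_system(rhs, outputs, states, inputs, N):
    # Build Sigma_N: N copies of the dynamics sharing the parameters but
    # with independent states and inputs (cf. (3))
    big_rhs, big_out = [], []
    for k in range(1, N + 1):
        sub = {v: sp.Symbol(f"{v.name}_{k}") for v in list(states) + list(inputs)}
        big_rhs += [eq.subs(sub) for eq in rhs]
        big_out += [g.subs(sub) for g in outputs]
    return big_rhs, big_out

# Toy model (assumed): x' = theta*x + u, y = x
theta, x, u = sp.symbols("theta x u")
rhs2, out2 = replicate_system([theta*x + u], [x], [x], [u], 2)
# rhs2 == [theta*x_1 + u_1, theta*x_2 + u_2]; theta is shared across copies
```
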

Statement of main result

Theorem 19 (Multi-experiment identifiability). A function of parameters h ∈ C(μ̄) in system (1) is multi-experiment identifiable if and only if it is input-output identifiable in system (1). Moreover, for every N ≥ 1 + max over i, 1 ≤ i ≤ m, of (s_i − r_i), h is identifiable in the N-experiment system, where the s_i and r_i are defined by the following: p1, . . . , pm is a set of input-output equations of system (1), and, for all i, 1 ≤ i ≤ m, we write

p_i = f_{i,0} + c_{i,1} f_{i,1} + · · · + c_{i,s_i} f_{i,s_i}, (5)

where the f_{i,j} ∈ C{ȳ, ū} are linearly independent over C (so, s_i is the length of such a presentation of p_i minus 1), the c_{i,j} ∈ C(μ̄), and r_i is the rank of the Wronskian Wr(f_{i,1}, . . . , f_{i,s_i}) evaluated modulo I_Σ.

Example 20 (Degenerate Wronskian). The goal of this intentionally simple example is to demonstrate that the Wronskians in the theorem can indeed be singular. Consider system (4) again. A calculation shows that p̄ = (y1′, y2 − θy1 − θ²) is a set of IO-equations for (4). Then m = 2, s1 = 0, and s2 = 2. Modulo I_Σ (where y1′ = 0), Wr(y1, 1) is the 2 × 2 matrix with rows (y1, 1) and (0, 0), and so r2 = 1. From [16, Example 2.14], θ is not (globally) identifiable (so, we cannot take N = 1). By Theorem 19, for all N ≥ 1 + (s2 − r2) = 2, the field of ME-identifiable functions C(θ, θ²) = C(θ) is N-experiment identifiable.
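The computation of r2 in Example 20 can be checked mechanically; the sketch below encodes the fact that y1′ = 0 modulo I_Σ (the symbol c for the constant value of y1 is our own notation):

```python
import sympy as sp

# Along any solution of system (4), y1' = 0, so y1 is a constant c
c = sp.symbols("c")

# Wronskian Wr(y1, 1) evaluated modulo I_Sigma: the second row holds
# the derivatives y1' = 0 and 1' = 0
W = sp.Matrix([[c, 1],
               [0, 0]])
r2 = W.rank()
s2 = 2
bound = s2 - r2 + 1
# r2 == 1, so the bound on the number of experiments is 2
```
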
Remark 21. In some works (e.g., [2, Section 3.1]), it was assumed that the Wronskians of monomials in a characteristic set are always of corank one (r_i = s_i in the notation of Theorem 19). As Example 20 (see also Sections 5.2 and 5.3) shows, this is not always the case.

Computational aspects
Remark 22 (Dependence on decomposition (5)). For fixed input-output equations p 1 , . . . , p m , the bound given by Theorem 19 may depend on the choice of decomposition (5). In Appendix D, we give an algorithm to compute a representation yielding the best possible bound (compared to other representations). We use this algorithm in our implementation.
Remark 23 (Computing the bound). The rank of the Wronskian matrix from Theorem 19 can be found by:
1. calculating the Wronskian matrix in ȳ, ū,
2. computing, for each matrix entry, its differential remainder [19, Section I.9] with respect to the characteristic set defined by Σ, and
3. applying a (symbolic) algorithm for rank computation.
The correctness follows from [16, Lemma 3.1]. Before computing the rank, one can evaluate the Wronskian at a point. Since the rank cannot increase after an evaluation, the resulting bound will always be correct, although it might be larger than the bound from Theorem 19.
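The evaluate-then-rank shortcut of Remark 23 can be sketched as follows; the matrix and the evaluation point below are made up for illustration:

```python
import random

import sympy as sp

t, a = sp.symbols("t a")
# A symbolic matrix standing in for a Wronskian (made up for illustration)
M = sp.Matrix([[t, a*t**2],
               [1, 2*a*t]])

# Evaluate at a random rational point first: the rank of the evaluated
# matrix never exceeds the symbolic rank, so the bound s - r + 1 computed
# from it is still correct (possibly larger), as in Remark 23.
random.seed(0)
point = {t: sp.Rational(random.randint(1, 10**6)), a: sp.Rational(3, 5)}
rank_eval = M.subs(point).rank()
# Here det(M) = a*t**2, so the rank is 2 at any point with t and a nonzero
```
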
Remark 24. The bound for N from Theorem 19 can be improved if some of the output variables are constant as discussed in Section 5.3. Constant outputs arise, e.g., to encode the case of constant inputs, which is common in some application domains [42]. The general idea of the refinement is first to treat the constant outputs as parameters, apply Theorem 19 to the rest of the outputs, and then use simultaneous rational interpolation to extract the coefficients with respect to the constant outputs.

Examples
We illustrate our results with four examples. In Section 5.1, a Lotka-Volterra model with control, we show that the SE-identifiable and ME-identifiable functions coincide, so one can find the generators of the field of identifiable functions from the coefficients of the IO-equations. The second example (Section 5.2) is a chemical reaction exhibiting the slow-fast ambiguity [41]. Here, the bound from Theorem 19 is exact and yields that all parameters are identifiable from 2 experiments. In Section 5.3, we show another Lotka-Volterra model, for which some of the parameters become identifiable only after 2 experiments. For other models with more ME-identifiable functions than SE-identifiable ones, we refer to [42, Section III]. Finally, in Section 5.4, we apply our results to the SEIR epidemiological model studied in [40].
All the computations for the examples in this section can be performed automatically using our implementation. The corresponding files can be found in the examples folder in the repository https://github.com/pogudingleb/AllIdentifiableFunctions.

Lotka-Volterra model with control
Consider a Lotka-Volterra system in which a, b, c, d, e are the unknown parameters and u is the input (control). With Theorem 19, we show that, for this model, the fields of SE-identifiable and of ME-identifiable functions coincide. A computation shows that the IO-equation is

p = yy′′ − y′² − dy²y′ + cyy′ + ady³ − beuy² − acy²,

so, in the notation of Theorem 19, m = 1 and, for f1 = y²y′, f2 = yy′, f3 = y³, f4 = uy², f5 = y², and f6 = yy′′ − y′², we have s1 = 5. A computation shows that the corresponding Wronskian has full rank, r1 = 5. By Theorem 19, for any N ≥ 5 − 5 + 1 = 1, the ME-identifiable functions are identifiable from N experiments (cf. [31, Main Results 1 and 2]). In particular, one experiment is sufficient. Hence, by Theorem 19, the field of SE-identifiable functions is C(d, c, ad, be, ac) = C(a, be, c, d).

Slow-fast ambiguity in chemical reactions
In this example, we consider the system [15, Section A.1, equation (3)], which originates from the following chemical reaction network [41, equation (1.1)]. The quantities xA, xB, and xC of the species satisfy an ODE system, and the observed quantities are:
• y1 = xC, the concentration of C;
• y2 = εA xA + εB xB + εC xC, which may represent a property of the mixture, e.g., absorbance or conductivity [41, p. 701].
As explained in [41, p. 701], in practice, x B might be hard to isolate, so ε B is also an unknown parameter, while the values ε A and ε C can be assumed to be known but could depend on A, C, and the details of the experimental setup.
The assumption that εA and εC are known can be encoded into the ODE system by making them state variables with zero derivatives and adding outputs to make them observable. This will yield the following final ODE model (the same as [15, Section A.1, equation (3)]), where x̄ = (xA, xB, xC, εA, εC), ȳ = (y1, y2, y3, y4), and μ̄ = (k1, k2, εB). As noted in [41] (see also [15, Section A.1]), this model has the slow-fast ambiguity: it is possible to recover the pair of numbers {k1, k2} from the observations but impossible to know which one is k1 and which one is k2. A similar phenomenon occurs in epidemiological models, see [40, Proposition 2].
We start by assessing the SE-identifiability of the model (7) using Algorithm 1 to find the field of identifiable functions. For (Step 1), a calculation in Maple shows that a set p̄ = {p1, p2, p3, p4} of IO-equations of (7) is obtained. In (Step 2) and (Step 3), we compute the reduced row echelon forms of W_{p1} = Wr(y2, y1y4, y1′, y1′y3, y1′′y3) and W_{p2} = Wr(y1′′, y1′, y1′′′) modulo the equations Σ and obtain the corresponding matrices (not displayed here). W_{p3} and W_{p4} are 1 × 1 matrices with the reduced row echelon form (1).

Before going to (Step 4), we show that this intermediate result of the computation can provide additional insights, for example, recover the parameter transformation corresponding to the slow-fast ambiguity [41, equation (1.3)]. From the proof of Theorem 11, F(p̄) consists of identifiable constants. So, any parameter transformation induces an automorphism α of the constants over F(p̄). Since k1 + k2 and k1k2 are identifiable, either α(k1) = k1 and α(k2) = k2, or α(k1) = k2 and α(k2) = k1. Consider the latter case. Since εA ∈ F(p̄), we have α(εA) = εA. Hence, we obtain the transformation [41, (1.3)]. Finally, in (Step 4), we compute the field of SE-identifiable functions.

Now we consider model (7) in the multi-experiment setup, in which one is allowed to perform several experiments with the same k1, k2, εB but different initial concentrations and different εA, εC. We will show that, in this setup, the ambiguity can be resolved by one extra experiment. The first part of Theorem 19 implies that the field of ME-identifiable functions is generated by the coefficients of p̄; therefore, all the parameters can be identified from several experiments. Now we use the bound from Theorem 19 to find the number of experiments sufficient to make all the parameters identifiable.
In the notation of the theorem, for i = 1, we take the corresponding decomposition (5); the resulting bound is 2. This bound is tight because, as we demonstrated earlier, neither k1 nor k2 is identifiable from a single experiment.
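The resolution of the slow-fast ambiguity from the identifiable symmetric functions k1 + k2 and k1k2 can be illustrated numerically; the measured values below are invented:

```python
import sympy as sp

# k1 + k2 and k1*k2 are identifiable, so the unordered pair {k1, k2} is
# recoverable as the set of roots of z**2 - (k1 + k2)*z + k1*k2
z = sp.symbols("z")
s_val, p_val = 7, 10      # invented measured values of k1 + k2 and k1*k2
pair = set(sp.solve(z**2 - s_val*z + p_val, z))
# pair == {2, 5}: the values are found, but not which one is k1 -- this is
# exactly the slow-fast ambiguity
```
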

Lotka-Volterra model with "mixed" output
In this example, we will illustrate the refinement of the bound on the number of experiments mentioned in Remark 24 on a variant of the Lotka-Volterra model in which a, b, c, d, e are unknown parameters and f is a known parameter that takes different values if multiple experiments are conducted. In the context of our differential algebra setup, this can be encoded by adding a constant output y2 = f. Our implementation shows that the field of ME-identifiable functions is given by (10). In particular, a, b, c are ME-identifiable; d and e are not, but their ratio d/e is. We now discuss the number of experiments needed for globally identifying the functions (10). A straightforward application of Theorem 19 yields a bound of 35 (a Wronskian of dimension 51 and rank 17). This could be viewed as a rather high number of experiments and is far from the actual number (2, as shown below).
We can get a better bound, equal to 4, using the same Theorem 19 as follows. Observe that, since y2 is constant, there will be the following input-output equations for the model: y2′ = 0 and p = 0, where p is a differential polynomial in y1 and y2 over C(μ̄) of order zero in y2. We observe that, if one replaces y2 in p = 0 with f, the resulting equations will be the input-output equations for the simplified model (11), in which f is considered as a scalar parameter. Our implementation shows that the bound for this model is one, so the SE-identifiable and ME-identifiable functions for this model are the same. In particular, the coefficients of the monic input-output equation of (11) are identifiable from a single experiment. These coefficients are rational functions in f over C(a, b, c, d, e). We write them as C1/C, . . . , Cs/C, where C, C1, . . . , Cs are polynomials in f over C(a, b, c, d, e) and C is monic. We denote the numbers of coefficients of C, C1, . . . , Cs (as polynomials in f) not belonging to C by n, n1, . . . , ns, respectively. Then these coefficients can be determined uniquely from a number of experiments bounded in terms of n, n1, . . . , ns; for this model, this number is 4.

The obtained bound 4 is close to the exact bound 2, which can be obtained using Theorem 19 and SIAN as follows. Using SIAN, we obtain that a, b, c, d/e are only locally identifiable (from one experiment), so N > 1. Running SIAN for 2 experiments shows that the functions (10) are 2-experiment globally identifiable. Since from Theorem 19 we know that these functions generate all ME-identifiable functions, we conclude that N = 2. Replication of the system makes it substantially more challenging for SIAN, so this approach might be impractical if N is large, while computing the bound above may remain feasible.
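The simultaneous-rational-interpolation idea behind this count can be sketched as follows; the "true" coefficient (2f + 3)/(f + 5) and the three experiment values of f are our own toy choices:

```python
import sympy as sp

f, a, b, c = sp.symbols("f a b c")
# Assumed "true" coefficient: a rational function of the known parameter f
target = (2*f + 3)/(f + 5)

# Ansatz (a*f + b)/(f + c) with monic denominator: 3 unknown coefficients,
# so values at 3 generic f (one experiment each) determine them linearly.
eqs = []
for fv in [1, 2, 3]:
    val = target.subs(f, fv)                 # value measured at f = fv
    eqs.append(sp.Eq(a*fv + b, val*(fv + c)))
sol = sp.solve(eqs, [a, b, c], dict=True)[0]
# sol == {a: 2, b: 3, c: 5}: the coefficient is recovered exactly
```
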

SEIR epidemiological model
Structural identifiability of the following epidemiological model has been considered in [40, Equation 2]:
where N is the total population, which is constant and known. The following two setups are considered in [40]: • Prevalence observation. In this case, there is an output y1 = I. We also add y2 = N to account for the fact that N is known. Our implementation shows that the bound from Theorem 19 is equal to one, so the fields of ME-identifiable and SE-identifiable functions coincide. It also finds that these fields are equal to C(αη, α + η, β, N).
• Cumulative incidence observation. In this case, the observed quantity is the cumulative incidence ∫ηE dt. This can be encoded by introducing a new state variable C with C′ = ηE and the outputs y1 = C and y2 = N. Our algorithm again shows that the fields of ME-identifiable and SE-identifiable functions coincide and that they equal C(α, β, η, N), so all of the parameters are globally identifiable.
These results confirm the findings of [40] obtained from the analysis of input-output equations (that is, for ME-identifiability) and show that they are valid for SE-identifiability as well.
Appendix A. Proofs of the main results

Recall (see [22, Section 2]) that a differential field K is differentially closed if, for all m and finite G ⊂ K{w1, . . . , wm}, whenever there exists L ⊃ K such that G = 0 has a solution in L, G = 0 already has a solution in K. Let K^diff be a differential closure of K, that is, a differentially closed field containing K that embeds into any other differentially closed field containing K.
We have K diff ⊃ k acl , the algebraic closure of k, and k acl ∩ K = k. Since b ∉ F(p), there exists an automorphism α : Const(K diff ) → Const(K diff ) such that α| F(p) = id and α(b) ≠ b. We pick such an α, extend it to a differential automorphism of K diff , and denote the extension by α as well.
For a vector K-subspace V of K n and a subfield C ⊂ K, we denote the field of definition of V over C by FD(V). Recall that V has a K-basis e 1 , . . . , e ℓ such that e 1 , . . . , e ℓ ∈ FD(V) n . Fix 1 ≤ i ≤ m. Let V p i denote the right kernel of W p i . Note that V p i is defined over Const(K). Since p i (ā) = 0, the vector of coefficients of p i belongs to V p i . Note that FD(V p i ) = F(p i ). By the preceding paragraph, there exist r i,1 , . . . , r i,N i ∈ FD(V p i ){z 1 , . . . , z n } such that
• for every 1 ≤ j ≤ N i , the vector of the coefficients of r i, j belongs to V p i (in particular, r i, j (ā) = 0);
• p i is a K-linear combination of r i,1 , . . . , r i,N i .
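The key linear-algebra fact used here, that the right kernel of a matrix admits a basis with entries in the field generated by the matrix entries, can be illustrated with a toy matrix (hypothetical, not one of the Wronskians W p i from the text):

```python
import sympy as sp

t = sp.symbols('t')

# A matrix with entries in Q(t); its second row is t times the first,
# so the right kernel is two-dimensional.
W = sp.Matrix([
    [1, t, t + 1],
    [t, t**2, t**2 + t],
])

# sympy returns a kernel basis whose entries again lie in Q(t),
# illustrating that the kernel is defined over the field of the entries.
basis = W.nullspace()
```

Each basis vector is annihilated by W, and its coordinates are rational functions of t, in line with the claim that V p i is defined over the field containing the entries of W p i .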
For the reverse inclusion, let p ∈ p̄ and write p in terms of f 1 , . . . , f s+1 , where, for each i, f i ∈ C{w} and f 1 , . . . , f s are linearly independent over C. By dividing p by an element of k, we may assume that deg f s+1 = deg p. Let A be the matrix with entries f i (ā ℓ ), where, for each ℓ, ā ℓ is the image of w̄ ℓ modulo I Σ N .
We will first show that det A ≠ 0. For this, let M be a minimal (by size) zero minor of A. Let, for some i and ℓ, f i (ā ℓ ) appear in M, and let q ∈ k{w} be the differential polynomial obtained from M by replacing f j (ā ℓ ) with f j (w), 1 ≤ j ≤ s. By the minimality of M and the linear independence of f 1 , . . . , f s , we have q(w) ≠ 0. Since q(ā ℓ ) = 0, there exist q i, j ∈ k{w j } satisfying the corresponding identity. Hence, there exist α, elements q 1 , . . . , q α ∈ I Σ , and b 1 , . . . , b α ∈ k⟨ā 1 , . . . , ā ℓ−1 , ā ℓ+1 , . . . , ā s ⟩ such that q = b 1 q 1 + · · · + b α q α and, for each i, every monomial that appears in q i also appears in q (and, therefore, in p).
Let q̃ be the primitive part of q 1 considered as a polynomial in its leader. Since I Σ is prime, q̃ ∈ I Σ . Since p̄ is autoreduced and q̃ divides a linear combination of the monomials of p, the characteristic set p̃ of (p̄ \ {p}) ∪ {q̃} satisfies rank p̃ ≤ rank p̄. Hence, p̃ is a characteristic set of J, and so of I Σ . Thus, p̃ is a characteristic presentation of I Σ . If q̃ ≠ q, then deg q̃ < deg q. If q̃ = q, then q̃ has fewer monomials than p does. Thus, in either case, p/q ∉ k. However, [5, Theorem 3] implies that p/q ∈ k, which is a contradiction. This shows that det A ≠ 0. Thus, the rows of A are linearly independent.

Appendix B. Intersection for rational function fields
In this section, we will describe how [4,Algorithm 2.38] can be used to compute the intersection L 1 ∩ L 2 , where L 1 = C(μ) and L 2 = F(p), as required by Algorithm 1.
Algorithm 2 below is a version of [4, Algorithm 2.38]. It was shown in [4, pp. 37-38] that the output of the algorithm is correct if the algorithm terminates. Termination was proved only under the assumption that both input fields are algebraically closed in the ambient rational function field. To use the algorithm in our applications, we relax this condition in Proposition 27, requiring only one of the fields to be algebraically closed (namely, C(μ) in C(μ, x)).
Notation: Introduce new variables X := (X 1 , . . . , X n ). In the algorithm, for a set S ⊂ K(x)[X], ⟨S⟩ will denote the ideal generated by S in K(x)[X].
Since J j has a set of generators with coefficients in L, the inclusion b ∈ J j implies that b λ ∈ I j ∩ L[x] for every λ ∈ Λ. Therefore, b λ ∈ I 0 ∩ L[x] for every λ ∈ Λ. Thus, b ∈ J 0 .
Proof of Proposition 27. We will assume that K(f̄) is algebraically closed in K(x). The proof for the case of K(ḡ) being algebraically closed in K(x) is analogous. Assume that the algorithm does not terminate. By construction, I j ⊃ J j for every j ≥ 1. The ideals I 1 and J 1 are radical (the latter is due to [4, Definition 2.16 and Proposition 2.21], since the intersection of a radical ideal with a subring is radical and the extension of a radical ideal is radical). It then follows that all the I i 's and J i 's are radical. For every i ≥ 1, we define d i to be the minimum of the dimensions of the prime components P of J i such that P ⊉ I i . We will show that the sequence d i is strictly increasing, thus arriving at a contradiction.
Fix i ≥ 1. Let P 1 , . . . , P m be the prime components of J i , ordered so that P 1 , . . . , P r are the components of dimension < d i and P r+1 , . . . , P m are the components of dimension ≥ d i . By construction, J i is defined over K(f̄). [4, Proposition 2.37] implies that P 1 , . . . , P m are also defined over K(f̄).
Since I i ⊃ J i and P 1 , . . . , P r contain I i , the ideals P 1 , . . . , P r are exactly the prime components of I i of dimension < d i , so Q := P 1 ∩ . . . ∩ P r is the intersection of the equidimensional components of I i of dimensions < d i . Therefore, since I i is defined over K(ḡ), Q is defined over K(ḡ), and we obtain (B.1). Consider C := {C | C is a prime component of P j ∩ K(ḡ)[X] for some j > r}. By [37], for every j > r, all prime components of P j ∩ K(ḡ)[X] are of the same dimension, so, for all C ∈ C, dim C ≥ d i . For every C ∈ C, denote C ′ := C ∩ K(f̄)[X]. [4, Proposition 2.37] implies that C ′ is prime. If C ≠ C ′ , then dim C ′ > d i . Otherwise, C ′ = C ⊃ I i+1 . Therefore, due to Lemma 28, we have: Since every component of B has dimension at least d i + 1, we deduce that d i+1 > d i .
Example 30 (Achieving the bound in Theorem 19). A natural mathematical question about a bound is whether it is tight in the sense that equality can be reached for all values of the parameters appearing in the bound. We will give an indication of the tightness of the bound from Theorem 19 by providing, for all positive integers h ≤ n, a model with n + 1 monomials in the IO-equations and the corresponding Wronskian having rank h such that every element of the field of IO-identifiable functions is (n − h + 1)-identifiable but not necessarily (n − h)-identifiable.
Fix h ≤ n and consider the system (C.1) with unknown parameters {c i , 1 ≤ i ≤ n}. A calculation produces a set of IO-equations of (C.1). Modulo I Σ , the Wronskian Wr(y 2 , . . . , y n , 1) has rank r 1 . On the one hand, r 1 ≤ h because the matrix has only h non-zero rows. On the other hand, det Wr(y 2 , . . . , y h , 1) ∉ I Σ since Wr(y 2 , . . . , y h , 1) is not reducible (to zero) by p̄. Thus, r 1 = h. Also, s 1 = n and, for all i ≥ 2, s i = 0. So, by Theorem 19, for all N ≥ s 1 − r 1 + 1 = n − h + 1, the field of IO-identifiable functions C(c 1 , . . . , c n ) is N-experiment identifiable.
We will show that it is not (n − h)-experiment identifiable, thus showing the desired tightness of the bound in Theorem 19. For this, consider the corresponding set of IO-equations for the (n − h)-experiment system Σ n−h . Let a j,i denote the image of y j,i modulo I Σ n−h . Since a j,i is constant for all i and j with h + 1 ≤ i ≤ n and 1 ≤ j ≤ n − h, we can define a differential field automorphism ϕ of C⟨ā 1 , . . . , ā n−h ⟩(c 1 , . . . , c n ) over C⟨ā 1 , . . . , ā n−h ⟩ that moves some of the c i . Thus, there exists i ∈ {1, h + 1, . . . , n} such that c i ∉ C⟨ā 1 , . . . , ā n−h ⟩, and so the IO-identifiable parameter c i is not (n − h)-experiment identifiable.
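The rank computation for the Wronskian in this example can be reproduced in miniature: the rank of the Wronskian of f 1 , . . . , f k equals the number of C-linearly independent functions among them. A small sympy sketch with hypothetical functions, evaluated at a generic point to get an exact rank:

```python
import sympy as sp

t = sp.symbols('t')

def wronskian_rank(funcs, point=0):
    # Wronskian matrix: row i holds the i-th derivatives of the functions.
    W = sp.Matrix([[sp.diff(f, t, i) for f in funcs]
                   for i in range(len(funcs))])
    # Evaluate at a point (generic for these examples) for an exact rank.
    return W.subs(t, point).rank()

# Linearly independent functions: full rank.
r_indep = wronskian_rank([sp.exp(t), sp.exp(2*t), sp.exp(3*t)])

# Third function is 3*exp(t) + exp(2*t), a linear combination: rank drops.
r_dep = wronskian_rank([sp.exp(t), sp.exp(2*t), 3*sp.exp(t) + sp.exp(2*t)])
```

Note that evaluating at a single point only lower-bounds the functional rank in general; for these functions the point t = 0 is generic enough.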
Therefore, after the i-th iteration of the loop in (Step 4), the value ∑ (A,B)∈S AB is increased by C i M i . Since it starts at zero, it will be equal to ∑ ℓ j=1 C j M j = P after (Step 4). Therefore, ∑ s i=0 (A i /q) B i = P/q = p. To prove the C-linear independence of the B j 's, for each 1 ≤ j ≤ s, consider the pair (C i , M i ) that was the j-th pair appended in (Step 4b). Then M i will not appear in any of B j+1 , . . . , B s , so B 1 , . . . , B s are C-linearly independent.
The linear independence of A 0 , . . . , A s follows from the fact that lm(A j ) does not appear in A j+1 , . . . , A s for every 0 ≤ j ≤ s; this property is due to the reduction procedure (D.1).
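The invariant maintained by the loop in (Step 4), that the accumulated sum of the pairs (C i , M i ) equals P, rests on splitting a polynomial into monomials in the y-variables with coefficients involving only the parameters. This grouping can be sketched with sympy (the polynomial below is a hypothetical stand-in):

```python
import sympy as sp

a, b, y1, y2 = sp.symbols('a b y1 y2')

# Hypothetical polynomial P in the y-variables, coefficients in C(a, b).
P = a * y1**2 * y2 + (a + b) * y1**2 * y2 + b * y2 + 1

# Group P into pairs (C_i, M_i): M_i a monomial in y1, y2 and C_i its
# coefficient, mirroring the pairs appended in (Step 4).
poly = sp.Poly(P, y1, y2)
pairs = [(coeff, y1**e1 * y2**e2)
         for (e1, e2), coeff in zip(poly.monoms(), poly.coeffs())]

# The sum of C_i * M_i recovers P, which is the loop invariant.
reconstructed = sum(C * M for C, M in pairs)
```

Here the two y1**2 * y2 terms are merged into a single pair with coefficient 2a + b, so the monomials M i are pairwise distinct, as required for the linear-independence argument.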