Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states

Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature.


Introduction
Numerous mechanisms in cells are fueled by the energy contained in concentration gradients that exist across the cell membrane. One such process is the electrical signaling that results from ionic fluxes carried by specialized membrane proteins, collectively termed ion channels. Cell membranes are densely populated by ion channels, which fluctuate between different states in a stochastic manner that depends on the charge difference across the membrane (the membrane potential). These fluctuations between the various states are commonly represented by continuous-time Markov models; see e.g. [1, 2, 3, 4]. Markov models have been successfully used for half a century and offer great flexibility for precisely describing the functional states of a channel. Given the structure and rates of the Markov model, it is straightforward to use simulations to study channel behavior. But the inverse problem, which seeks to infer the structure and rates of the Markov model from single-channel recordings, is much more challenging and remains a field of active research; see e.g. [5, 6, 7, 8, 9].
Single channel recording was first demonstrated in the seminal work by Sakmann and Neher; see [10, 11]. Many of the principles used to deduce the form and rates of Markov models based on single channel data were developed by Colquhoun and Hawkes beginning almost forty years ago; see [12, 13], and are classically summarized in chapter 18 of [14]. More recently, Qin, Auerbach and Sachs [15, 16] developed maximum likelihood (MLE) approaches for defining the most probable hidden Markov model for a given dataset. These algorithms form the core of the open source QuB software package, which is available to the community, and contains tools for automated Markov model construction and parameterization; see Nicolai and Sachs [17]. Furthermore, Markov chain Monte Carlo (MCMC) fitting has been used to solve the inversion problem for models of intracellular calcium channels, which poses the same mathematical problem as the gating of voltage-dependent channels; see [18, 19, 20, 6, 7, 8, 9]. While both the MLE and MCMC approaches have been used to good effect, each has specific shortcomings [7], and a robust method for determining hidden Markov model structure from single-channel data remains a priority for the field.
The problem of identifiability of a model based on recordings of conducting and non-conducting states is of great importance and has been addressed by many authors. Characterizations of identifiable and non-identifiable models have been provided by e.g. Fredkin and Rice [21] and more recently by Siekmann et al. [7] and Hines et al. [8]. These results, however, address the classical problem based on voltage-clamped data, and they are therefore not automatically applicable to the case under consideration in the present paper.
Prior efforts to address the problem of identifying the rates of Markov models have been based on conventional experimental procedures for recording single-channel activity. That is, by fitting the binary changes in transmembrane current associated with channel opening and closure under conditions where transmembrane potential is clamped by the experimenter. The purpose of the present paper is to describe a fundamentally different approach, which involves an analysis procedure for defining a hidden Markov model from measurements of time-varying transmembrane potential in current clamped single ion channels. To the best of our knowledge, this is the first attempt to identify Markov models based on such voltage traces of single channel data. A similar approach has previously been applied for identification of Markov models from voltage recordings made in whole-cells; see e.g. Milescu et al. [22]. Following the tradition in the field, we present the method using simulated data. It turns out that the methodology described here also enables us to address the question of identifiability. We demonstrate use of the method to compute rates of a Markov model, and we will show how to determine whether the computed rates are unique.

Methods
Our aim is to devise an alternative method for inverting Markov models. The inversion will be based on time-traces of the transmembrane potential of single ion channels. The traces used in our analysis will be pseudo-experimental data generated by a dynamical model formulated in terms of an ordinary differential equation including a stochastic term. The stochastic term is governed by a Markov model with rate-functions expressing the probability of going from one state to another. The inversion problem is to determine these rate-functions based on observations of the time-traces of the transmembrane potential.
By running numerous Monte Carlo simulations using the stochastic differential equation, we can compute probability density functions by gathering the simulation results in histograms. The same probability density functions can be computed by solving a deterministic system of partial differential equations; see e.g. Smith [2], Bressloff [3] or [4]. In this report we will use the simulation results based on the stochastic differential equations as pseudo-experimental data in terms of time-dependent voltage traces represented by histograms. The method of inversion is to adjust the parameters of the system of deterministic partial differential equations so that the solution of this system is as close as possible to the histograms representing the pseudo-experimental data. The adjustment will be performed in terms of minimizing a cost-function.
The method is described for Markov models of the potassium channel, but the method is quite general and can, in principle, be applied to Markov models of any ion channel.

Markov models
Let us first consider a Markov model in its simplest possible form,

C ⇌ O,   (1)

where C and O denote the closed and open states, respectively, α is the rate of the transition C → O, and β is the rate of the transition O → C. The rates α and β represent the probability of leaving a state in the sense that, for a small time-step Δt, we have

P(C → O during Δt) ≈ αΔt and P(O → C during Δt) ≈ βΔt,

respectively. The problem of inversion for this simple model is to find the rates α and β such that the Markov model represents the behavior of the channel as well as possible. It is well known that the probabilities o = o(t) and c = c(t) of being in the open (O) or closed (C) state evolve according to the following system of ordinary differential equations (see e.g. Keener and Sneyd [1]),

c′ = βo − αc,
o′ = αc − βo.

Since the probabilities add up to one, we can reduce this system to a single equation that is easily solved. Experimental data on the probability of being in the open or closed state can thus be used to determine the rates α and β.
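As a sanity check on the two-state model, the steady-state open probability predicted by the ODE system, o = α/(α + β), can be compared with a direct stochastic simulation of the scheme. The following sketch uses illustrative rate values (not taken from the paper) and the small-Δt transition probabilities stated above:

```python
import random

def simulate_two_state(alpha, beta, dt=1e-3, n_steps=1_000_000, seed=1):
    """Simulate the two-state C <-> O model; alpha is the C -> O rate and
    beta the O -> C rate. Returns the fraction of time spent in O."""
    rng = random.Random(seed)
    is_open = False
    open_steps = 0
    for _ in range(n_steps):
        if is_open:
            if rng.random() < beta * dt:    # P(O -> C during dt) ~ beta * dt
                is_open = False
        elif rng.random() < alpha * dt:     # P(C -> O during dt) ~ alpha * dt
            is_open = True
        open_steps += is_open
    return open_steps / n_steps

alpha, beta = 2.0, 3.0                      # illustrative rates
p_open = simulate_two_state(alpha, beta)
p_exact = alpha / (alpha + beta)            # steady state of o' = alpha*c - beta*o
```

For sufficiently long simulations, the empirical open fraction converges to the steady state of the ODE system.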
In principle this approach is straightforward to generalize to much more complex Markov models involving many states. Generally, the system of ordinary differential equations governing the vector v of probabilities of occupying the different states is given by

v′ = Av.

Here the matrix A contains the rates of the Markov model, and the problem is to compute these rates. For a matrix with constant coefficients, the solution of this problem can, in principle, always be written in the form

v(t) = e^{At} p_0,

where p_0 is the vector of probabilities at time t = 0. In principle, this formula can be used to deduce the rates of the Markov model, but severe difficulties arise due to inherent instabilities in the inversion process. This approach to inversion is based on experimental data with clamped transmembrane potential. Our method is based on data where the transmembrane potential is recorded.
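The matrix-exponential solution can be illustrated for the two-state model, where the answer is also available in closed form. The rate matrix and values below are illustrative:

```python
import numpy as np

alpha, beta = 2.0, 3.0                     # illustrative rates
# dv/dt = A v with v = (c, o); each column of A sums to zero, so total
# probability is conserved by the dynamics
A = np.array([[-alpha,  beta],
              [ alpha, -beta]])

def propagate(p0, t):
    """Evaluate v(t) = exp(A t) p0 via an eigendecomposition of A."""
    lam, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)) @ p0

p0 = np.array([1.0, 0.0])                  # start in the closed state
p = propagate(p0, 5.0)
# closed-form open probability with o(0) = 0
o_exact = alpha / (alpha + beta) * (1.0 - np.exp(-(alpha + beta) * 5.0))
```

The computed probabilities agree with the closed-form relaxation towards α/(α + β) and sum to one, reflecting conservation of probability.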

Stochastic model of the transmembrane potential
For a single channel, with the initial condition v̄(0) = v_0, the dynamics of the transmembrane potential v̄ can be modeled as follows (see e.g. [3, 2, 4]),

C dv̄/dt = −γ(v̄ − v_K) − g_S(v̄ − v_S) + I_0,   (2)

where C is the specific capacitance of the preparation, v_K and v_S are the Nernst potentials for potassium and for the seal current, respectively, g_S is the conductance of the seal, γ is a stochastic variable depending on the state of the channel (γ equals the open-channel conductance g_K when the channel is conducting and zero when it is not), and I_0 = I_0(t) is an applied current; note also that the bar on v̄ is used to indicate that this is a stochastic variable. The dynamics of γ are governed by a Markov model, and in the forward simulations, the rates of the Markov model are assumed to be given. Additionally, the seal-current term is included in order to simulate conditions as close to a realistic recording as possible, where a proportion of any applied current will be lost to ground through a conductance that exists in parallel to the membrane patch. Intuitively, this conductance represents the current leak between the pipette wall and the membrane, which allows charge to escape directly to ground without traveling through a channel or being directed towards the membrane capacitance. This leak is a major constraint for single channel recordings of all types.
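A minimal forward-Euler realization of a voltage trace of the type described by (2) might look as follows. All parameter values are illustrative placeholders, not the paper's values, and the channel is governed by the simple two-state scheme (1):

```python
import random

# Illustrative parameter values (not the paper's): capacitance, open-channel
# and seal conductances, potassium Nernst potential and applied current.
C, gK, gS = 0.05, 1.0, 0.1
vK, vS, I0 = -85.0, 0.0, 3.0
alpha, beta = 0.5, 0.5                     # C -> O and O -> C rates

def voltage_trace(v0=-40.0, dt=0.01, n_steps=50_000, seed=2):
    """Forward-Euler integration of C dv/dt = -gamma*(v - vK) - gS*(v - vS) + I0,
    where gamma jumps between 0 (closed) and gK (open) as a two-state
    Markov chain."""
    rng = random.Random(seed)
    v, gamma = v0, 0.0
    trace = []
    for _ in range(n_steps):
        if gamma > 0.0 and rng.random() < beta * dt:
            gamma = 0.0                    # channel closes
        elif gamma == 0.0 and rng.random() < alpha * dt:
            gamma = gK                     # channel opens
        v += dt / C * (-gamma * (v - vK) - gS * (v - vS) + I0)
        trace.append(v)
    return trace

trace = voltage_trace()
v_open = (gK * vK + I0) / (gK + gS)        # conducting equilibrium
v_closed = I0 / gS                         # non-conducting equilibrium
```

As discussed below for the bounds on the solution, a trace started between the two equilibrium potentials remains between them.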
In most computations, we used the following parameters: In addition, the applied current will be specified in each simulation, and it will be in the interval 1000 to 3000 fA. If other parameters are used, it will be stated in the text.

Voltage clamp and current clamp recordings
Both voltage and current clamp recordings are based on the same fundamental recording arrangement, in which a portion of cell membrane (and the included channel or channels) is contained between two electrodes, and as such forms part of a circuit between the two electrodes. Feedback control can then be used to prescribe either the current passing between the two electrodes (current clamp) or the potential difference between them (voltage clamp). In the case of voltage clamp, the experimenter prescribes a voltage waveform, and the feedback amplifier then applies the current required to maintain that control. Because single ion channels open and close in a binary and stochastic fashion, the current required to maintain the prescribed voltage reflects those opening and closing events as sharp switch-like changes (Figure 1, left). In current clamp, the feedback control is modified to prescribe a current waveform, and the voltage between the two electrodes (and therefore across the membrane patch) is measured simultaneously. Because the gating of the membrane channels is subject to this voltage, channel opening and closing will occur and result in subsequent changes to the measured voltage (as shown at right in Figure 1). It has become conventional for single channel recordings to be conducted in voltage clamp mode because this provides a direct measurement of single channel current at the prescribed voltage, which is intuitive to experimenters and analysts alike. However, because channel transitions are stochastic, and the rates at which individual channels transition between their closed, open, and other functional states (e.g. inactivated) are voltage-dependent, this approach also requires many sweeps at different test voltages to characterize the voltage-dependencies of these transitions.
In principle, appropriately designed current clamp recordings should allow these constraints to be partially circumvented, because voltage varies continuously with channel opening and closing (thus eliminating the discontinuities implicit in choosing discrete test voltages). However, at this time we are not aware of any data that have been collected for single channels in this recording mode. In part, this is likely due to technical challenges in making the recordings; specifically, the combination of patch capacitance, single channel current, and recording noise is likely to impose significant constraints. Additionally, the analytic framework for interpreting such recordings is poorly defined, and this is a significant disincentive for attempting challenging experiments of this type. With this paper we seek to develop that framework. In Figure 1 the two protocols are illustrated.

Probability density functions for the voltage-gated channel
By running the stochastic model (2) many times, we can summarize the results in probability density functions for the open and closed states. These probability densities can also be computed directly by solving a deterministic system of partial differential equations; see e.g. [3, 2, 4]. For the two-state model (1), this system takes the form

∂ρ_o/∂t + ∂/∂v (α_o ρ_o) = αρ_c − βρ_o,   (3)
∂ρ_c/∂t + ∂/∂v (α_n ρ_c) = βρ_o − αρ_c,

where ρ_o and ρ_c denote the probability density functions of the open and closed states, respectively, and the advection coefficients

α_o(v) = (1/C)(−g_K(v − v_K) − g_S(v − v_S) + I_0),   (4)
α_n(v) = (1/C)(−g_S(v − v_S) + I_0)   (5)

are the right-hand side of (2) in the conducting and non-conducting states, respectively. For a derivation of this system, see e.g. [2, 4]. Numerical methods for solving this system are discussed in detail in [4].
The boundary conditions of this model are set up such that probability does not leak across the boundary. Suppose we solve the system on the interval Ω = [v_1, v_2]; then we require that the flux is zero at both v = v_1 and v = v_2. Here v_1 and v_2 define the lower and upper limits of the transmembrane potential governed by the model (2).
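A sketch of one possible discretization of the two-state system (3) is given below: a first-order upwind finite-volume scheme with zero-flux boundaries. The parameter values are illustrative, the rates are constant, and this is not the paper's solver; the point is that zero boundary fluxes make the scheme conserve total probability exactly:

```python
import numpy as np

# Illustrative setup for the two-state system (3): channel and seal
# conductances, Nernst potential, applied current and constant rates.
C, gK, gS, vK, I0 = 0.05, 1.0, 0.1, -85.0, 3.0
alpha, beta = 0.5, 0.5
v1 = (gK * vK + I0) / (gK + gS)            # conducting equilibrium
v2 = I0 / gS                               # non-conducting equilibrium
n = 200
v = np.linspace(v1, v2, n + 1)             # cell interfaces on [v1, v2]
h = v[1] - v[0]

def speed(u, gamma):
    """Advection coefficient: right-hand side of (2) divided by C."""
    return (-gamma * (u - vK) - gS * u + I0) / C

def step(rho_o, rho_c, dt):
    """One explicit first-order upwind step with zero-flux boundaries."""
    new = []
    for rho, gamma in ((rho_o, gK), (rho_c, 0.0)):
        a = speed(v[1:-1], gamma)          # speeds at interior interfaces
        f = np.where(a > 0, a * rho[:-1], a * rho[1:])
        f = np.concatenate(([0.0], f, [0.0]))   # zero flux at v1 and v2
        new.append(rho - dt / h * (f[1:] - f[:-1]))
    rho_o, rho_c = new
    # explicit treatment of the Markov reaction terms
    return (rho_o + dt * (alpha * rho_c - beta * rho_o),
            rho_c + dt * (beta * rho_o - alpha * rho_c))

rho_o = np.zeros(n)                        # start with the channel closed,
rho_c = np.ones(n) / (n * h)               # uniformly distributed over [v1, v2]
dt = 0.4 * h / (np.abs(speed(v, gK)).max() + np.abs(speed(v, 0.0)).max())
for _ in range(2000):
    rho_o, rho_c = step(rho_o, rho_c, dt)
total = h * (rho_o.sum() + rho_c.sum())    # total probability, should stay 1
```

Because the boundary fluxes are zero and the reaction terms cancel in the sum, the total probability is preserved to machine precision, and the CFL-limited time step keeps the densities non-negative.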

Bounds on the solutions
In order to find the domain Ω = [v_1, v_2] where we want to solve the system (3), we need to derive bounds on the solution of the model (2). When the channel is open (γ = g_K), the model reads

C dv/dt = −g_K(v − v_K) − g_S(v − v_S) + I_0,   (6)

and when the channel is closed (non-conducting, γ = 0), the model reads

C dv/dt = −g_S(v − v_S) + I_0.   (7)

Therefore, the solutions will either converge towards

v_o = (g_K v_K + I_0)/(g_K + g_S)   (8)

(conducting channel) or towards

v_n = I_0/g_S   (9)

(non-conducting channel), where we have used v_S = 0. Furthermore, if the initial condition v_0 is in the interval bounded by v_o and v_n, the solution will always stay in this interval; this holds for any Markov model governing γ. Therefore, by adjusting the value I_0 of the applied current and the initial condition v_0, we can force the solution of the model (2) to cover the interval of interest. It is useful to note that

v_o − v_n = g_K(g_S v_K − I_0)/(g_S(g_K + g_S)),

which is negative for all values of I_0 of interest, and therefore v_1 = v_o and v_2 = v_n.
In order to invert the measured data to find the coefficients of the Markov model, it is important to have data covering the values of the transmembrane potential that are of interest in the Markov model. This can be achieved by adjusting the applied current I_0, the initial condition v_0, or the equilibrium potential v_K.
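The effect of the applied current on the covered voltage interval can be made concrete with the equilibrium expressions above; the conductance and Nernst-potential values here are illustrative:

```python
gK, gS, vK = 1.0, 0.1, -85.0               # illustrative conductances and Nernst potential

def equilibria(I0):
    """Equilibrium potentials of (2) with v_S = 0: the conducting value v_o
    and the non-conducting value v_n."""
    v_o = (gK * vK + I0) / (gK + gS)
    v_n = I0 / gS
    return v_o, v_n

low = equilibria(1.0)                      # smaller applied current
high = equilibria(3.0)                     # larger current shifts both limits upwards
```

Increasing I_0 raises both v_o and v_n, so the experimenter can steer the interval [v_o, v_n] over the voltage range of interest.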

Inversion based on the output least squares method
Suppose we have pseudo-experimental data generated by repeated runs of the model (2). These data can be combined in order to define the probability density functions for the open state and for the non-conducting states. In the special case of the two-state Markov model (1), there is only one non-conducting state (the closed state C), but for more complex Markov models there may be many conducting or non-conducting states. Real experimental data can only distinguish between conducting (open) states and non-conducting states (closed or inactivated states), and therefore we group the pseudo-experimental data into two distributions; ρ̄_o = ρ̄_o(v, t) and ρ̄_n = ρ̄_n(v, t) represent the open and non-conducting distributions, respectively. We assume that the data (ρ̄_o, ρ̄_n) are given. The problem is now to adjust the rates r = (α, β) of the system (3) such that the solution of the system satisfies (ρ_o, ρ_n) ≈ (ρ̄_o, ρ̄_n). We do this by minimizing the cost function

J(r) = ||ρ(r) − ρ̄||².   (10)

Here ρ = ρ(r) = (ρ_o(r), ρ_n(r)) and ρ̄ = (ρ̄_o, ρ̄_n). The norm || · || will be defined below.
The crux of our method is to compute the parameters r by minimizing the cost function J = J(r) given by (10). We will show in detailed examples below how this is done and how it works, but first we will characterize what we mean by an optimal solution.

Characterization of the optimal solution
In general, it is not at all clear that the rates of a Markov model can be computed based on observations of whether the channel is in the conducting or non-conducting mode. As pointed out in e.g. [7], the situation is often complicated, and the problem of deciding whether the rates can be identified is a topic of ongoing research. Siekmann et al. [7] use the MCMC method to detect non-identifiable models. We will use a completely different approach, but our results coincide with their observations. In order to characterize invertible and non-invertible cases, we need to define invertibility in this context. Roughly speaking, we say that a model is invertible by our method if we can find a unique minimum of the cost function (10).
From classical calculus (see e.g. Apostol [23]), we may get an indication of the optimality of a given candidate solution r* by invoking the properties of the function J in the vicinity of r*. By Taylor's theorem, we have

J(r* + εr) = J(r*) + ε∇J(r*)·r + (ε²/2) rᵀH(r*)r + O(ε³)   (11)

for any vector r and small number ε. Here H is the Hessian matrix of the function J. Clearly, H is symmetric; assume in addition that it is positive definite. Then, for any vector y, we have

yᵀHy ≥ λ_0|y|²,

where λ_0 > 0 is the smallest eigenvalue of the Hessian matrix H, and | · | is the usual Euclidean norm. Suppose that r* is a stationary point of J, i.e. ∇J(r*) = 0; then, for any non-zero vector r and small non-zero value ε, we have

J(r* + εr) ≈ J(r*) + (ε²/2) rᵀH(r*)r ≥ J(r*) + (ε²/2)λ_0|r|² > J(r*).

Since, for all small and non-zero values of ε and any non-zero vector r, we have J(r* + εr) > J(r*), the parameters given by r* define a local minimum of the function J. We also note that the smallest eigenvalue is a measure of how well defined the local minimum is. If the smallest eigenvalue is much larger than zero, any step r away from the minimum will increase the value of the function J considerably. Conversely, when the value of λ_0 is very small, the value of J varies little around r*, and a well-defined minimum is therefore hard to find. In the case of λ_0 = 0, we know that there is an associated (non-zero) eigenvector r_0 such that H(r*)r_0 = 0, and thus

J(r* + εr_0) ≈ J(r*).

So given the local minimum r*, we can add εr_0 for a small value of ε, and the value of J is more or less unchanged; the local minimum r* is not unique. It is also straightforward to see that if the smallest eigenvalue λ_0 is negative (then the matrix is not positive definite) at a stationary point r*, that parameter vector is not a local minimum, since

J(r* + εr_0) ≈ J(r*) + (ε²/2)λ_0|r_0|² < J(r*).

Thus, if r* is changed in the direction of the eigenvector r_0 associated with the negative eigenvalue λ_0, the value of the function J is reduced, and therefore r* is not a local minimum.
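The eigenvalue criterion can be illustrated on a toy cost function with a deliberately flat direction; the function below is purely illustrative and stands in for the PDE-based cost (10):

```python
import numpy as np

def J(r):
    """Toy cost with a flat direction: only the sum r[0] + r[1] is constrained,
    mimicking a non-identifiable pair of rates."""
    return (r[0] + r[1] - 1.0) ** 2

def hessian(f, r, h=1e-4):
    """Central finite-difference approximation of the Hessian of f at r."""
    n = len(r)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            rpp = r.copy(); rpp[i] += h; rpp[j] += h
            rpm = r.copy(); rpm[i] += h; rpm[j] -= h
            rmp = r.copy(); rmp[i] -= h; rmp[j] += h
            rmm = r.copy(); rmm[i] -= h; rmm[j] -= h
            H[i, j] = (f(rpp) - f(rpm) - f(rmp) + f(rmm)) / (4.0 * h * h)
    return H

r_star = np.array([0.3, 0.7])              # a stationary point with J(r_star) = 0
lam = np.linalg.eigvalsh(hessian(J, r_star))   # eigenvalues in ascending order
# lam[0] is (numerically) zero: stepping along the associated eigenvector
# leaves J unchanged, so r_star is not a unique minimum.
```

Here the Hessian has eigenvalues 0 and 4; the zero eigenvalue flags the non-identifiable direction exactly as in the analysis above.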

Definition of identifiability using our method
Based on the considerations above, we are in a position to discuss what we mean by identifiability of a Markov model. We will refer to a model as identifiable by our method if the model satisfies three criteria: 1. Stationary point. If we use the Fminsearch algorithm (a Matlab function based on the Nelder-Mead algorithm) to find an optimal parameter vector r*, this parameter vector must be a stationary point of the cost function J = J(r) in the sense that |∇J(r*)| ≈ 0.
2. Local minimum. The candidate optimal solution r* must be a local minimum in the sense that all eigenvalues of the Hessian matrix H evaluated at r* are strictly positive. 3. Global minimum. If the Fminsearch algorithm is run with different initial conditions and results in different local minima, a candidate optimal solution r* must have a value of the cost function J that is significantly lower than that of all the other candidates.
This is a pragmatic definition of identifiability which is useful in practical computations. However, the computational procedure outlined here may still fail; it may identify a minimum that is not really the global minimum, and it may indicate that the solution is unique even if it is not. This is an unsatisfying state of affairs, due to the fact that minimization of complicated non-linear functions is very challenging.

Defining the norm
In the procedure introduced above, it remains to define the norm used in (10) to measure the difference between the pseudo-experimental data ρ̄ and the probability density functions ρ computed by solving the system (3). The issue here is to compare a numerical solution defined on a uniform mesh, representing the solution of a system of partial differential equations of the form (3), with pseudo-experimental data based on repeated solves of the stochastic differential equation (2). We do the comparison by introducing an adaptive mesh that reflects the number of samples of the pseudo-experimental data in a given interval, and then we integrate the difference between the experimental data and the numerical solutions over the interval. Here ρ = (ρ_o(v; r), ρ_n(v; r)) denotes the probability density functions defined by the numerical solution of a system of the form (3) for a given rate vector r. The integration of these solutions is performed by integrating a piecewise linear function defined by the numerical solution.
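One way to realize such a comparison is sketched below: equal-mass (quantile) bins play the role of the adaptive mesh, and the numerical density is evaluated as a piecewise-linear function at the bin centers. The demonstration density (a triangular distribution) and all sizes are illustrative, not the paper's:

```python
import numpy as np

def histogram_pdf(samples, edges):
    """Empirical density on the (possibly non-uniform) bins given by edges."""
    counts, _ = np.histogram(samples, bins=edges)
    return counts / (counts.sum() * np.diff(edges))

def distance(edges, rho_bar, v_grid, rho):
    """Integrated squared difference between a binned density rho_bar and a
    piecewise-linear numerical density rho defined on v_grid."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    rho_at_centers = np.interp(centers, v_grid, rho)
    return float(np.sum((rho_at_centers - rho_bar) ** 2 * np.diff(edges)))

# Demonstration on a known density: triangular on [0, 2].
rng = np.random.default_rng(0)
samples = rng.triangular(0.0, 1.0, 2.0, size=200_000)
edges = np.quantile(samples, np.linspace(0.0, 1.0, 51))  # adaptive, equal-mass bins
v_grid = np.linspace(0.0, 2.0, 401)
rho_exact = np.where(v_grid < 1.0, v_grid, 2.0 - v_grid) # exact triangular density
d = distance(edges, histogram_pdf(samples, edges), v_grid, rho_exact)
```

When the numerical density matches the distribution the samples were drawn from, the integrated difference is small and shrinks with the number of samples; for mismatched densities it stays bounded away from zero.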

Computations
The pseudo-experimental data used for inversion were generated by solving the stochastic equation (2) 100 times for 100 seconds using Δt = 1/1000 ms, giving 10 billion samples. In the optimization, the first 10% of each run is discarded in order to obtain steady-state behaviour. Two different applied currents are used: I_0 = 1000 fA and I_0 = 3000 fA.
For every inversion, 1000 initial guesses were evaluated, and the ten best cases (lowest value of J) were used as start vectors for Fminsearch (Matlab). The best of the ten solutions provided by Fminsearch was defined to be the optimal solution. All computations were performed on a standard desktop computer. The PDE problems typically took a few seconds to solve, and the overall optimization was on the order of hours.
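The multi-start strategy can be sketched as follows, with SciPy's Nelder-Mead implementation standing in for Matlab's Fminsearch and an illustrative toy cost standing in for the full PDE-based cost (10):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the PDE-based cost: recover the two-state rates from the
# steady-state open probability p_inf = a/(a+b) and time constant 1/(a+b).
p_target, tau_target = 0.4, 0.2            # generated with alpha = 2, beta = 3

def J(r):
    a, b = np.exp(r)                       # optimize log-rates to keep rates positive
    return (a / (a + b) - p_target) ** 2 + (1.0 / (a + b) - tau_target) ** 2

# Multi-start strategy: screen random initial guesses, polish the best ten
# with Nelder-Mead, and keep the overall best fit.
rng = np.random.default_rng(3)
guesses = rng.uniform(-2.0, 2.0, size=(1000, 2))
best_starts = sorted(guesses, key=J)[:10]
fits = [minimize(J, g, method="Nelder-Mead") for g in best_starts]
r_opt = min(fits, key=lambda f: f.fun).x
alpha_hat, beta_hat = np.exp(r_opt)        # should recover alpha = 2, beta = 3
```

Screening many cheap cost evaluations before launching the relatively expensive simplex searches keeps the number of local optimizations small, which matters when each cost evaluation requires a PDE solve.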
The eigenvalues of the Hessian matrix are computed using singular value decomposition (the svd function in Matlab). The Hessian matrix is computed using numerical differentiation of sixth order [ 24 ].

Results
We will show results of inversion using the method described above for a series of Markov models. The inversion is based on the fact that the probability density functions of the open and the non-conducting states can be obtained both from pseudo-experimental data and by solving a system of partial differential equations. The parameters defining the Markov model are adjusted in order for the solution of the system of partial differential equations to be as similar to the pseudo-experimental data as possible. We will start by showing that the distribution computed by repeated runs of the stochastic model (2) (Monte Carlo simulations) and the solution of the deterministic system of partial differential equations converge as the mesh parameters are refined and the number of runs of the stochastic model is increased. This result is well known; see e.g. [2, 3, 4].
For ease of reference, we will call the solutions computed by solving the stochastic model (2) Monte Carlo simulations (MC), and the solutions of the associated deterministic system of partial differential equations (PDE) will be called the PDE-solution.

Comparison of the MC and the PDE solutions
We consider the three-state Markov model

C_2 ⇌ C_1 ⇌ O.   (13)

Here the rates (ms⁻¹) are specified on the form

α = exp(α_1 + α_2 v), β = exp(b_1 + b_2 v),

with α_1 = −1.0, α_2 = 0.02, b_1 = −4.5, b_2 = −0.05 and I_0 = 3000 fA. The PDE model associated with this Markov model is given by

∂ρ_o/∂t + ∂/∂v (α_o ρ_o) = αρ_c1 − βρ_o,   (14)
∂ρ_c1/∂t + ∂/∂v (α_n ρ_c1) = βρ_o + αρ_c2 − (α + β)ρ_c1,
∂ρ_c2/∂t + ∂/∂v (α_n ρ_c2) = βρ_c1 − αρ_c2,

where ρ_o, ρ_c1 and ρ_c2 denote the probability density functions of the states O, C_1 and C_2, respectively, and the functions α_o and α_n are given by (4) and (5).
In Figure 2 we compare the results of the solution of the system (14) (red, solid lines) and the solution obtained running MC simulations (histograms) using the stochastic model (2).

Sensitivity with respect to changes in the rates of the Markov model
Clearly, in order to be able to identify the rates of the Markov model, the model output must be sensitive to changes in the rates. For the Markov model (13) with the rates given above, we illustrate the sensitivity with respect to changes in the rates in Figure 3. The figure shows the steady-state probability distribution of the open state (left) and the non-conducting states (right). We observe that by multiplying the rate b_1 by either 0.9 or 1.1, the stationary solution changes significantly. This indicates that it is possible to invert this model.

Inversion of three models with no loops
We first apply our inversion procedure to three Markov models with no loops. The forms of the models are defined in Table 1, and the results are given in Table 2. The Markov models are based on Scheme 4 in Zhou et al. [25]. The CCCCOI model is identical to their model except that the β rate has been altered to fit the standard Boltzmann form. In the CCCCO model, we have simply removed the inactivated state I, and in the CO model we have kept just one closed and one open state.
We observe that for all three Markov models (CO, CCCCO and CCCCOI) we are able to compute the rates based on the pseudo-experimental data. The accuracy of the computed parameters is quite good. At the optimum, the value of the cost function is quite small, and so is the norm of the gradient of the cost function, which means that we are close to a stationary point. Finally, the smallest eigenvalue is positive in all cases. These cases are therefore identifiable according to the definition given above.
In the above problems, all parameters are involved in transitions between conducting and non-conducting states, and the rates are voltage dependent. Next, we try a problem where this is not the case, with constant rates α_i = exp(a_i) and β_i = exp(b_i), i = 1, 2. Parameters and results are given in Table 3. We see that the inversion is successful in this case as well, consistent with the relatively large smallest eigenvalue.

Inversion of a three state model with a loop
It is quite common to use Markov models including a loop of the form illustrated in Figure 4. However, Siekmann et al. [7] conclude that this model is not identifiable. They considered a case with constant rates (independent of the transmembrane potential), and their method is very different from ours. It is therefore interesting to see if the model is identifiable in our framework. The rates of the model are parameterized as follows: α_i = exp(α_i1 + α_i2 v) and β_i = exp(b_i1 + b_i2 v). In order to test if the model is identifiable using our method, we can use data provided by the PDE model and invoke the value of the smallest eigenvalue of the Hessian. We find that the smallest eigenvalue is very close to zero, which indicates that this case is very hard to invert. Also, running the inversion procedure does not result in a converged solution.
We also consider a case where we have removed the voltage dependence of the rates, i.e. we have put α_i2 = b_i2 = 0 in the model. Using PDE-generated data, we again find that the smallest eigenvalue is very close to zero, and thus the model is not invertible. These observations concur with the results of Siekmann et al. [7].

A special case of the three-state model
For a special case of the three state model shown in Figure 4, we can show mathematically that it is impossible to identify the rates based on observation of (ρ̄o, ρn). To see this, we first note that the associate PDE system is given by (15) By adding the two latter equations of the system above, we find the system where ρ n = ρ c + ρ o . If we now consider the special case of α 1 = β 2 =: μ, we get the system (20) (21) From this system we notice that we can compute ρ o and ρ n without any knowledge of the rates α 3 and β 3 connecting the closed and the inactivated states. These parameters are therefore clearly non-identifiable since they do not in any way affect the probability density functions of the conducting (open) and non-conducting states (closed plus inactivated).

The size of the capacitance
The size of the capacitance influences the identifiability of a Markov model. This is illustrated in Figure 5 for the models CO (upper left), CCCCO (upper right), CCCCOI (lower left) and the three-state model including a loop (lower right). The computations are based on data generated by solving the PDE using the correct rates of the Markov model. We observe that for the CO, CCCCO and CCCCOI models, there is a range of values of the capacitance for which the model is invertible. For the three-state model with a loop, however, the model is not identifiable for any value of the capacitance.

17 state Markov model of the potassium channel
The method introduced here is easy to work with for Markov models with few states, but in principle it can also be applied to more complex models. In Figure 6 we show a 17 state model taken from Zhou et al. [25]. The parameters of the model are given in Table 4, and the eigenvalues of the Hessian are given in Table 5. This model has many very small eigenvalues and is therefore clearly not identifiable by our method. In Table 5 we also show the change of the cost function when the parameter vector is changed. We change the parameter vector by a small step along each eigenvector of the Hessian matrix. Each eigenvector is normalized to have Euclidean norm equal to one, and we change each parameter vector by a step of length 10⁻⁵, 10⁻³ and 0.1. We note that for the eigenvectors associated with small eigenvalues, the change of the cost function is very small when the parameter vector is changed. This is exactly as expected from the theory discussed above. Finally, large steps in the direction of eigenvectors with larger eigenvalues result in significant changes of the cost function.
In Table 6 we consider the reduced case where the O 2 -state is removed from the model. The smallest eigenvalue does not change much, indicating that the O 1 -O 2 transition is not the most difficult one to identify.

Discussion
The main purpose of this report is to propose a new method for computing rates of Markov models of voltage-gated ion channels. Other methods are based on experimental data obtained by clamping the transmembrane potential and recording whether the channel is in the conducting (open) or non-conducting (closed or inactivated) mode. Our method is based on repeated recordings of traces of the transmembrane potential of the single channel. The method proposed here also enables analysis of identifiability of the model and it may provide a way of quantifying the uncertainty of the resulting rates of the Markov model.

Inversion
As pointed out by Siekmann et al. [ 7 ], it is straightforward to devise procedures that will provide the rates of Markov models based on experimental data. The difficulty involved is to obtain some measure of the reliability of the computed rates. Clearly, if two sets of completely different rates can provide the same results from a model, the model is not suited to give the insight we are ultimately looking for.

Identifiability using our method
Our method is based on comparing probability density functions from experimental data with probability density functions generated by solving a system of partial differential equations. The experimental data in this study are generated by Monte Carlo (MC) simulations where a given Markov model governs the stochastic state of the ion channel. The challenge is then to tune the parameters of a system of partial differential equations (PDE) such that the solution of the PDE system resembles the solution generated by MC as closely as possible. The model is identifiable only if different sets of rates for the Markov model give different probability density distributions.
We find that the simple CO, CCCCO and CCCCOI models are identifiable, and that the looped model (see Figure 4) is not identifiable using our method. For the latter case, one eigenvalue of the Hessian matrix is very small, and we may thus change the parameter vector r defining the Markov model by a step in the direction of the associated eigenvector and observe little or no change in the cost function.
For the three-state Markov model with a loop (Figure 4), we give an example where the rates between the closed and inactivated states simply disappear from the model, in the sense that we can compute the probability density distributions of the conducting (open) and non-conducting (closed plus inactivated) states without knowledge of these rates. The model is therefore clearly not identifiable in this particular case.
For the complex Markov model given in Figure 6, we find that eight eigenvalues are very small. We can change the parameter vector in the direction of any of the associated eigenvectors and observe very little change in the cost function.
Following Siekmann et al. [7], we have considered the Markov models shown in Figure 7.

The cost function
All the observations regarding identifiability reported in the present paper are based on the particular cost function (10). This cost function can be changed in many different ways: we may vary the applied current I_0 in numerous ways, we can add the time course of each trace of pseudo-experimental data, and we can alter the transmembrane concentration gradient and thus change the associated equilibrium (Nernst) potential of the potassium current. None of these changes have significantly altered the observations reported here, and we have therefore chosen to rely on the simplest version of the cost function.
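As a concrete stand-in for such a cost function, the simplest version can be written as a least-squares mismatch between the probability densities from the pseudo-experimental data and from the PDE solution; the exact form of (10) is not reproduced here, and summing over several applied currents I_0 (one of the variations mentioned above) is a straightforward extension:

```python
import numpy as np

def cost(rho_data, rho_pde):
    """Least-squares mismatch between probability densities from
    pseudo-experimental data and from the PDE model, summed over the
    components (e.g. open and closed). A stand-in for the paper's (10)."""
    return sum(np.sum((a - b) ** 2) for a, b in zip(rho_data, rho_pde))

def combined_cost(data_by_protocol, pde_by_protocol):
    """Accumulate the mismatch over several protocols, e.g. several
    values of the applied current I_0."""
    return sum(cost(d, p) for d, p in zip(data_by_protocol, pde_by_protocol))
```

By construction, the cost is zero when the two sets of densities coincide and grows quadratically with their pointwise difference.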

What is zero?
As demonstrated above, a model is not identifiable if the smallest eigenvalue of the Hessian matrix is zero. In numerical computations, however, one never obtains exactly zero, but rather a very small number. By considering the Taylor series (11), it is evident that a very small eigenvalue implies that the cost function changes very little if the parameter vector is changed in the direction of the associated eigenvector. Therefore, we refer to a model as not identifiable when the smallest eigenvalue is sufficiently small. It may be possible to follow this line of reasoning one step further and use the size of the smallest eigenvalue to quantify more precisely the uncertainty of the rates computed by the method introduced here.
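Assuming the Taylor series (11) takes the standard quadratic form around the computed minimum r*, where the gradient (nearly) vanishes, the effect of a step of length epsilon along a unit eigenvector v_i with eigenvalue lambda_i is:

```latex
J(r^* + \varepsilon v_i)
  = J(r^*) + \varepsilon \, \nabla J(r^*)^T v_i
  + \tfrac{1}{2}\varepsilon^2 \, v_i^T H(r^*) \, v_i + O(\varepsilon^3)
  \approx J(r^*) + \tfrac{1}{2}\varepsilon^2 \lambda_i .
```

For instance, with lambda_i = 10^-8, a step of epsilon = 0.1 changes the cost function by only about 5 x 10^-11, which is indistinguishable from zero in practice.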

Errors in computed parameters
The errors in the computed parameters can be caused by variations in the data, since the data are generated by a stochastic process, and by inaccuracies inherent in the method of inversion. If we perform the inversion based on data created by the deterministic system of partial differential equations, there is no randomness involved, but there are still errors in the computed parameters. This indicates that the method of inversion has inherent inaccuracies (which is very common for methods for solving inverse problems). One source of error is the numerical method used to solve the system of partial differential equations. This part of the error is typically reduced by mesh refinement, but extremely fine meshes are unattractive from a computational point of view. When stochastic data are used, on the other hand, the errors are typically reduced by larger samples, and repeated inversion using the same data but different initial values yields the same results. This confirms that stochastic variations are also a source of errors in the computed parameters.
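The stochastic part of the error can be illustrated in isolation with a fixed-voltage two-state channel, whose stationary open probability alpha/(alpha + beta) is known exactly; the function below is an illustrative sketch, not part of the paper's inversion code:

```python
import numpy as np

def open_probability_error(alpha, beta, n_steps, dt=0.01, seed=1):
    """Estimate the stationary open probability of a fixed-voltage two-state
    C-O channel by Monte Carlo simulation and return the absolute error
    relative to the exact value alpha / (alpha + beta)."""
    rng = np.random.default_rng(seed)
    state, open_steps = 0, 0
    for _ in range(n_steps):
        rate = alpha if state == 0 else beta  # switching rate in current state
        if rng.random() < rate * dt:
            state = 1 - state
        open_steps += state
    return abs(open_steps / n_steps - alpha / (alpha + beta))
```

Running this with increasing n_steps shows the sampling error shrinking roughly like 1/sqrt(n_steps), while the deterministic (discretization) part of the inversion error is unaffected by the sample size.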

Weakness of our approach
Although single channel data have in principle been available for forty years [10, 11], the majority of patch clamp data is still obtained from whole cell recordings. Our method is suitable for inverting data based on single ion channels, but it is not directly applicable to data from whole cell recordings. For a single channel, it is relevant to consider the probability density as a function of the transmembrane potential. For an identifiable Markov model, such distributions can clearly be used to identify the rates of the Markov model. The whole cell situation is completely different, since thousands of single channels contribute to the transmembrane potential, which is therefore a deterministic entity even if the individual channels are stochastic.

Figure caption: The stationary probability distributions computed using the PDE model with I_0 = 0 and the parameters given above (green), and the solutions obtained when the parameter b_1 is multiplied by 1.1 (red) and 0.9 (blue).

Figure 6 caption: Markov model of the potassium channel taken from Zhou et al. [25]. The rates of the model are given in Table 4.

Figure 7 caption: Two four-state models taken from Siekmann et al. [7]; without a loop (left) and with a loop (right).

Table caption: The table defines the name and the form of three Markov models. The rates are written on the form α(v) = exp(α_1 + α_2 v), β(v) = exp(b_1 + b_2 v). The O-I transition is assumed to be voltage independent; γ = exp(c_1), δ =

Table 2 caption: The table shows the correct parameters (True value) of the Markov models, the computed (inverted) parameters denoted by r* for the models CO, CCCCO and CCCCOI, the value of the cost function J(r*), the value of the norm |∇J(r*)|, and the value of the smallest eigenvalue of the Hessian matrix at r*.
Table 3 caption: The table shows the correct parameters (True value) of the Markov model, the computed (inverted) parameters denoted by r* for the model CCO, the value of the cost function J(r*), the value of the norm |∇J(r*)|, and the value of the smallest eigenvalue of the Hessian matrix at r*.

Table 4 caption: The rates and parameters of the 17-state model.

Table 5 caption: Column 1: numbering of eigenvectors. Column 2: eigenvalues (smallest in the first row, then increasing). Columns 3, 4 and 5: the value of the cost function J(r* + δ r_i), where the parameter vector r* is given in Table 4 and r_i denotes the i-th eigenvector. Recall that J(r*) = 0, so the table illustrates how much the cost function is changed by changing the parameter vector along an eigenvector of the Hessian matrix.

Table 6 caption: Column 1: numbering of eigenvectors. Column 2: eigenvalues (smallest in the first row, then increasing). Columns 3, 4 and 5: the value of the cost function J(r* + δ r_i), where the parameter vector r* is given in Table 4 and r_i denotes the i-th eigenvector. Recall that J(r*) = 0, so the table illustrates how much the cost function is changed by changing the parameter vector along an eigenvector of the Hessian matrix.