Structure-preserving model reduction of passive and quasi-active neurons

Published in: Journal of Computational Neuroscience 34, 1–26 (2013)

Abstract

The spatial component of input signals often carries information crucial to a neuron’s function, but models mapping synaptic inputs to the transmembrane potential can be computationally expensive. Existing reduced models of the neuron either merge compartments, thereby sacrificing the spatial specificity of inputs, or apply model reduction techniques that sacrifice the underlying electrophysiology of the model. We use Krylov subspace projection methods to construct reduced models of passive and quasi-active neurons that preserve both the spatial specificity of inputs and the electrophysiological interpretation as an RC and RLC circuit, respectively. Each reduced model accurately computes the potential at the spike initiation zone (SIZ) given a much smaller dimension and simulation time, as we show numerically and theoretically. The structure is preserved through the similarity in the circuit representations, for which we provide circuit diagrams and mathematical expressions for the circuit elements. Furthermore, the transformation from the full to the reduced system is straightforward and depends on intrinsic properties of the dendrite. As each reduced model is accurate and has a clear electrophysiological interpretation, the reduced models can be used not only to simulate morphologically accurate neurons but also to examine computations performed in dendrites.

References

  • Bai, Z., & Skoogh, D. (2006). A projection method for model reduction of bilinear dynamical systems. Linear Algebra and its Applications, 415, 406–425.

  • Braun, M. (1975). Differential equations and their applications. New York: Springer.

  • Bush, P., & Sejnowski, T. (1993). Reduced compartmental models of neocortical pyramidal cells. Journal of Neuroscience Methods, 46, 159–166.

  • Destexhe, A., Mainen, Z., & Sejnowski, T. (1998). Kinetic models of synaptic transmission. In C. Koch, & I. Segev (Eds.), Methods in neuronal modeling (Chapter 1, pp. 1–25). Cambridge: MIT Press.

  • Freund, R. (2000). Krylov-subspace methods for reduced-order modeling in circuit simulation. Journal of Computational and Applied Mathematics, 123, 395–421.

  • Freund, R. (2011). The SPRIM algorithm for structure-preserving order reduction of general RCL circuits. In P. Benner, M. Hinze, & E. ter Maten (Eds.), Model reduction for circuit simulation (pp. 25–52). New York: Springer.

  • Gabbiani, F., & Cox, S. (2010). Mathematics for neuroscientists. Boston: Elsevier/Academic Press.

  • Golding, N., Kath, W., & Spruston, N. (2001). Dichotomy of action-potential backpropagation in CA1 pyramidal neuron dendrites. Journal of Neurophysiology, 86, 2998–3010.

  • Grimme, E. (1997). Krylov projection methods for model reduction. PhD thesis, University of Illinois at Urbana-Champaign, Urbana, Illinois.

  • Gu, C. (2011). QLMOR: A projection-based nonlinear model order reduction approach using quadratic-linear representation of nonlinear systems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 30, 1307–1320.

  • Gugercin, S., Antoulas, A., & Beattie, C. (2008). \({{\cal H}_2}\) model reduction for large-scale linear dynamical systems. SIAM Journal on Matrix Analysis and Applications, 30, 609–638.

  • Hodgkin, A., & Huxley, A. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117, 500–544.

  • Jarsky, T., Roxin, A., Kath, W., & Spruston, N. (2005). Conditional dendritic spike propagation following distal synaptic activation of hippocampal CA1 pyramidal neurons. Nature Neuroscience, 8, 1667–1676.

  • Johnston, D., & Amaral, D. (1998). Hippocampus. In G. Shepherd (Ed.), The synaptic organization of the brain (Chapter 10, pp. 417–458). New York: Oxford University Press.

  • Kellems, A., Chaturantabut, S., Sorensen, D., & Cox, S. (2010). Morphologically accurate reduced order modeling of spiking neurons. Journal of Computational Neuroscience, 28, 477–494.

  • Kellems, A., Roos, D., Xiao, N., & Cox, S. (2009). Low-dimensional, morphologically accurate models of subthreshold membrane potential. Journal of Computational Neuroscience, 27, 161–176.

  • Koch, C. (1999). Biophysics of computation: Information processing in single neurons. New York: Oxford University Press.

  • Krapp, H., & Gabbiani, F. (2005). Spatial distribution of inputs and local receptive field properties of a wide-field, looming sensitive neuron. Journal of Neurophysiology, 93, 2240–2253.

  • Li, P., & Pileggi, L. (2005). Compact reduced-order modeling of weakly nonlinear analog and RF circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 23, 184–203.

  • Li, R., & Bai, Z. (2005). Structure-preserving model reduction using a Krylov subspace projection formulation. Communications in Mathematical Sciences, 3, 179–199.

  • Lin, Y., Bao, L., & Wei, Y. (2009). Order reduction of bilinear MIMO dynamical systems using new block Krylov subspaces. Computers and Mathematics with Applications, 58, 1093–1102.

  • Mohler, R. (1991). Nonlinear systems: Applications to bilinear control. Englewood Cliffs: Prentice Hall.

  • Odabasioglu, A., Celik, M., & Pileggi, L. (1998). PRIMA: Passive reduced-order interconnect macromodeling algorithm. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 17, 645–654.

  • O’Shea, M., & Rowell, C. (1976). The neuronal basis of a sensory analyser, the acridid movement detector system. II. Response decrement, convergence, and the nature of excitatory afferents to the fan-like dendrites of the LGMD. Journal of Experimental Biology, 65, 289–308.

  • Phillips, J. (2000). Projection frameworks for model reduction of weakly nonlinear systems. In Proceedings of DAC 2000 (pp. 184–189).

  • Phillips, J. (2003). Projection-based approaches for model reduction of weakly nonlinear, time-varying systems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 22, 171–187.

  • Pinsky, P., & Rinzel, J. (1994). Intrinsic and network rhythmogenesis in a reduced Traub model for CA3 neurons. Journal of Computational Neuroscience, 1, 39–60.

  • Poznanski, R. (1991). A generalized tapering equivalent cable model for dendritic neurons. Bulletin of Mathematical Biology, 53, 457–467.

  • Rall, W. (1959). Branching dendritic trees and motoneuron membrane resistivity. Experimental Neurology, 1, 491–527.

  • Roychowdhury, J. (1999). Reduced-order modeling of time-varying systems. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 46, 1273–1288.

  • Rugh, W. (1981). Nonlinear system theory. Baltimore: Johns Hopkins University Press.

  • Schierwagen, A. (1989). A non-uniform equivalent cable model of membrane voltage changes in a passive dendritic tree. Frontiers in Neuroscience, 1, 19–42.

  • Spruston, N. (2008). Pyramidal neurons: Dendritic structure and synaptic integration. Nature Reviews Neuroscience, 9, 206–221.

  • Traub, R., Wong, K., Miles, R., & Michelson, H. (1991). A model of a CA3 hippocampal pyramidal neuron incorporating voltage-clamp data on intrinsic conductances. Journal of Neurophysiology, 66, 635–650.

  • Trefethen, L., & Bau, D. (1997). Numerical linear algebra. Philadelphia: Society for Industrial and Applied Mathematics.

  • Villemagne, C., & Skelton, R. (1987). Model reduction using a projection formulation. International Journal of Control, 46, 2141–2169.

  • Yan, B., & Li, P. (2011). Reduced order modeling of passive and quasi-active dendrites for nervous system simulation. Journal of Computational Neuroscience, 31, 247–271.

Acknowledgements

This work is supported by NSF grant DMS-0739420 and by a training fellowship from the Keck Center for Interdisciplinary Bioscience Training of the Gulf Coast Consortia (NIBIB Grant No. 1T32EB006350-01A1).

Author information

Corresponding author

Correspondence to Kathryn R. Hedrick.

Additional information

Action Editor: Brent Doiron

Appendices

Appendix A: Construction of the Volterra series

The Volterra series is a useful representation of nonlinear systems, but thorough accounts of its derivation and convergence are sparse in the literature. In this appendix we derive the Volterra series for the passive cable given synaptic input and establish its convergence by relating it to the well-studied Picard iterates. For an alternative approach to the Volterra series, see Section 5.3 of Mohler (1991).

Consider the full model for the passive cable driven by monosynaptic input,

$$ v'(t) = Av(t) + bg(t) + Bv(t)g(t), \quad v(0) = 0, $$

where by Eq. (33), \(A = -C^{-1}G\), \(b = C^{-1}Ee_p\), and \(B = -C^{-1}N\). Define \(f(t) \equiv \mathrm{e}^{-At} v(t)\). Then,

$$ f'(t) = \mathrm{e}^{-At} \big[ bg(t) + B\mathrm{e}^{At} f(t)g(t) \big], \quad f(0) = 0. $$
(51)
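
To see this explicitly, differentiate \(f(t) = \mathrm{e}^{-At}v(t)\) with the product rule and substitute the full model:

$$ f'(t) = -A\mathrm{e}^{-At}v(t) + \mathrm{e}^{-At}v'(t) = \mathrm{e}^{-At}\big[ bg(t) + Bv(t)g(t) \big], $$

and writing \(v(t) = \mathrm{e}^{At}f(t)\) in the bilinear term gives Eq. (51).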

If g(t) were transient, the product f(t)g(t) would be relatively small, motivating the initial guess

$$ f^1(t) = {\int_0^t} \mathrm{e}^{-As}bg(s) ds. $$

Integrating Eq. (51) shows that its solution satisfies the fixed-point equation

$$ f(t) = f^1(t) + {\int_0^t} K(s)f(s)ds, $$

where

$$ K(t) = \mathrm{e}^{-At}B\mathrm{e}^{At}g(t). $$

Each iterate is used to generate the next guess for f via

$$ f^k(t) = f^1(t) + {\int_0^t} K(s)f^{k-1}(s) ds. $$

This method is known as Picard iteration, or the method of successive approximations, and as both \(f^k\) and \(K\) are continuous, \(f^k(t)\) converges to \(f(t)\), as shown in Section 1.10 of Braun (1975). We construct the Volterra series by defining \(v^1(t) \equiv \mathrm{e}^{At} f^1(t)\) and \(v^k(t) \equiv \mathrm{e}^{At}\left(f^k - f^{k-1}\right)(t)\) for \(k > 1\). Its convergence follows via

$$ \sum\limits_{k=1}^{\infty} v^k(t) = \mathrm{e}^{At}\left( f^1 + \sum\limits_{k=1}^{\infty} \left(f^{k+1} - f^k\right) \right) = \mathrm{e}^{At} f(t) = v(t). $$

Unwrapping each term reveals

$$ v^1(t) = \mathrm{e}^{At} f^1(t) = {\int_0^t} \mathrm{e}^{A(t-s)} b g(s) ds, $$
(52)

implying that \((v^1)'(t) = Av^1(t) + bg(t)\), or

$$ C(v^1)'(t) + Gv^1(t) = Ee_pg(t), \quad v^1(0) = 0. $$
(53)

Similarly, for k ≥ 2,

$$ \begin{array}{rll} v^k(t) &=& \mathrm{e}^{At} {\int_0^t} K(s) e^{-As} v^{k-1}(s) ds \\ &=& {\int_0^t} \mathrm{e}^{A(t-s)} Bg(s) v^{k-1}(s) ds, \end{array} $$
(54)

implying that \((v^k)'(t) = Av^k(t) + Bg(t)v^{k-1}(t)\), or

$$ C(v^k)'(t) + Gv^k(t) = -g(t)Nv^{k-1}(t), \quad v^k(0) = 0. $$
(55)

The series generalizes easily to polysynaptic input by replacing the input of Eq. (53) with \(\sum_{j=1}^m E_{p_j} e_{p_j} g_{p_j}(t)\) and that of Eq. (55) with \(-\sum_{j=1}^m g_{p_j}(t) {N^{(p_j)}} v^{k-1}(t)\).
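
As a purely illustrative numerical sketch (not part of the original derivation), the cascade of linear systems in Eqs. (53) and (55) can be integrated with backward Euler, and the partial sum \(\sum_k v^k\) then approximates the solution of the full bilinear model. The matrices below are hypothetical placeholders standing in for a discretized cable.

```python
import numpy as np

def volterra_terms(C, G, N, Eep, g, dt, nsteps, K=3):
    """Backward-Euler sketch of the Volterra cascade, Eqs. (53) and (55):
    C v1' + G v1 = E e_p g(t),   C vk' + G vk = -g(t) N v^{k-1},   vk(0) = 0.
    C, G, N are (n x n) arrays, Eep is the vector E e_p, and g maps time to a
    scalar conductance. Returns a list of (nsteps+1, n) arrays, one per term."""
    n = C.shape[0]
    Minv = np.linalg.inv(C + dt * G)          # backward-Euler matrix, factored once
    v = [np.zeros((nsteps + 1, n)) for _ in range(K)]
    for i in range(nsteps):
        g1 = g((i + 1) * dt)
        # first term: linear response to the synaptic drive, Eq. (53)
        v[0][i + 1] = Minv @ (C @ v[0][i] + dt * g1 * Eep)
        # higher terms: each driven by the previous term times g(t), Eq. (55)
        for k in range(1, K):
            v[k][i + 1] = Minv @ (C @ v[k][i] - dt * g1 * (N @ v[k - 1][i + 1]))
    return v

# Hypothetical two-compartment example; values are illustrative only.
C = np.eye(2)
G = np.array([[2.0, -1.0], [-1.0, 2.0]])
N = np.diag([1.0, 0.0])                       # synaptic conductance on compartment 1
Eep = np.array([1.0, 0.0])                    # E * e_p for that compartment
g = lambda t: np.exp(-t)                      # transient conductance time course
terms = volterra_terms(C, G, N, Eep, g, dt=0.01, nsteps=500)
v_approx = sum(terms)                         # partial Volterra sum approximating v(t)
```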

Appendix B: Transfer functions for the Volterra series

When the cable is driven by current injections, we construct a reduced system to match the leading moments of the transfer functions for the full and reduced systems. The same procedure can lead to an appropriate reducer for the cable driven by synaptic conductances by considering the transfer functions for each term in the Volterra series, where each transfer function maps the synaptic conductance, g, to \(y^k = {e_{\rm SIZ}}^T v^k\). In this appendix we derive the transfer functions for each Volterra term given monosynaptic input. The transfer functions can be generalized for polysynaptic input.

Assume g(t < 0) = 0 and define the kernel

$$ h_1(t) \equiv \left\{ \begin{array}{ll} \mathrm{e}^{At}b & \mbox{if $t\geq 0,$}\\ 0 & \mbox{otherwise,}\\ \end{array} \right. $$

where \(A = -C^{-1}G\) and \(b = C^{-1}Ee_p\). Equation (52) can be written as

$$ v^1(t) = \int_{-\infty}^\infty h_1(\tau) g(t-\tau) d\tau = (h_1 \star g)(t). $$
(56)

As expected, \(v^1\) depends linearly on \(g\), and by the convolution theorem,

$$ {\cal L} v^1(s) = {\widetilde{H}}_1(s) {\cal L} g(s), {\quad\hbox{where}\quad} {\widetilde{H}}_1 = {\cal L} h_1 $$

and \({\cal L}\) denotes the Laplace transform. The transfer function \({\widetilde{H}}_1\) is then given by

$$ {\widetilde{H}}_1(s) = {\int_0^\infty} \mathrm{e}^{(A-sI)t} b\ dt = (G+sC)^{-1} Ee_p. $$
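
The final equality uses \(\int_0^\infty \mathrm{e}^{(A-sI)t}\,dt = (sI-A)^{-1}\), valid whenever the integral converges, together with \(A = -C^{-1}G\) and \(b = C^{-1}Ee_p\):

$$ (sI-A)^{-1}b = \left(sI + C^{-1}G\right)^{-1}C^{-1}Ee_p = (G+sC)^{-1}Ee_p. $$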

Since the passive system is stable, \({\widetilde{H}}_1\) is well-defined if Re(s) ≥ 0 and provides the mapping

$${\cal L} y^1(s) = H_1(s) {\cal L} g(s), {\quad\hbox{where}\quad} H_1 = e_{\rm SIZ}^T {\widetilde{H}}_1. $$
(57)

We next strive to write the second Volterra term as a convolution, which would allow us to easily compute its transfer function. By Eq. (54), \(v^2\) depends on the product of \(v^1\) and \(g\), both of which depend on \(g\). We thus make the educated guess that \(v^2\) can be written as

$$ v^2(t) = {\int\limits_{\Re^2}} h_2(\tau_1,\tau_2) g(t-\tau_1) g(t-\tau_2) d\tau_1 d\tau_2 $$
(58)

and solve for the kernel \(h_2\). As Eq. (58) is not quite a convolution, we pause to show the advantage of its form. Define \({\overline g}(t_1,t_2) \equiv g(t_1)g(t_2)\), and define

$$ {\overline v}(t_1,t_2) \equiv (h_2 \star {\overline g})(t_1,t_2). $$

Then, \(v^2(t) = {\overline v}(t,t)\), and by the convolution theorem,

$$ \begin{array}{rll} {\cal L} {\overline v}(s_1,s_2) &=& {\widetilde{H}}_2(s_1,s_2) {\cal L} {\overline g}(s_1,s_2) \\ &=& {\widetilde{H}}_2(s_1,s_2) {\cal L} g(s_1) {\cal L} g(s_2), \end{array} $$

where \({\widetilde{H}}_2 = {\cal L} h_2\). Therefore,

$$ v^2(t) = {\cal L}^{-1} \big( {\widetilde{H}}_2(s_1,s_2) {\cal L} g(s_1) {\cal L} g(s_2) \big) (t,t), $$
(59)

and the two-dimensional transfer function \({\widetilde{H}}_2\) does indeed map the input \(g\) to the output \(v^2\) in the frequency domain. We now return to the computation of the kernel \(h_2\) and corresponding transfer function. By Eq. (54),

$$ v^2(t) = \int_0^\infty \mathrm{e}^{A\tau_1} B v^1(t-\tau_1) g(t-\tau_1) d\tau_1, $$

where \(B = -C^{-1}N\). By Eq. (56),

$$ \begin{array}{rll} v^1(t-\tau_1) &=& {\int_0^\infty} h_1(s) g(t-\tau_1-s) ds \\ &=& {\int_0^\infty} h_1(\tau_2-\tau_1) g(t-\tau_2) d\tau_2, \end{array} $$

given the change of variables \(\tau_2 = \tau_1 + s\). Equation (58) is then obtained by defining the kernel

$$ h_2(t_1,t_2) \equiv \left\{ \begin{array}{ll} \mathrm{e}^{At_1} B \mathrm{e}^{A(t_2-t_1)} b & \mbox{ if $0\leq t_1 \leq t_2,$}\\ 0 & \mbox{ otherwise.} \end{array} \right. $$

The transfer function, \({\widetilde{H}}_2 = {\cal L} h_2\), is given by

$$ \begin{array}{rll} {\widetilde{H}}_2(s_1,s_2) &=& {\int_0^\infty} {\int_0^\infty} \mathrm{e}^{-s_1t_1 - s_2t_2} h_2(t_1,t_2) dt_1 dt_2 \\ &=& {\int_0^\infty} \mathrm{e}^{(A-s_1I)t_1} B {{\cal I}}(t_1; s_2) dt_1 \end{array} $$

where

$$ \begin{array}{rll} {{\cal I}}(t_1; s_2) &=& {\int_0^\infty} \mathrm{e}^{-s_2t_2} h_1(t_2-t_1) dt_2 \\ &=& {\int_0^\infty} \mathrm{e}^{-s_2(t_1+\tau)} h_1(\tau) d\tau = \mathrm{e}^{-s_2t_1} {\widetilde{H}}_1(s_2). \end{array} $$

Hence, if \(\mathrm{Re}(s_1 + s_2) \geq 0\), then

$$ \begin{array}{rll} {\widetilde{H}}_2(s_1,s_2) &=& {\int_0^\infty} \mathrm{e}^{(A-(s_1+s_2)I)t_1} B {\widetilde{H}}_1(s_2) dt_1 \\ &=& -(G+(s_1+s_2)C)^{-1} N (G+s_2C)^{-1} Ee_p. \end{array} $$

Finally, \(H_2 \equiv e_{\rm SIZ}^T {\widetilde{H}}_2\) combined with Eq. (59) leads to

$$ y^2(t) = {\cal L}^{-1}\big( H_2(s_1,s_2){\cal L} g(s_1){\cal L} g(s_2) \big) (t,t). $$
(60)

To simplify notation, the transfer function can be written in its regular form, defined in Section 2.3 of Rugh (1981) such that \(H_2(s_1,s_2) = H_2^{{\text{reg}}}(s_1+s_2,s_2),\) or

$$ H_2^{{\text{reg}}}(s_1,s_2) = -e_{\rm SIZ}^T (G+s_1C)^{-1} N (G+s_2C)^{-1} Ee_p. $$

In a similar manner, one can iteratively compute the kernels and corresponding transfer functions for each Volterra term. As each Volterra term has an increasingly nonlinear dependence on g, it can be written as

$$ v^k(t) = (h_k\star {\overline g})(t,\cdots,t), $$

where \({\overline g}(t_1,\cdots,t_k) \equiv \prod_{j=1}^k g(t_j)\). Given \({\overline v}(t_1,\cdots,t_k) \equiv (h_k \star {\overline g})(t_1,\cdots,t_k)\),

$$ {\cal L} {\overline v}(s_1,\cdots,s_k) = {\widetilde{H}}_k(s_1,\cdots,s_k) {\cal L} g(s_1) \cdots {\cal L} g(s_k), $$

and \(v^k(t) = {\overline v}(t,\cdots,t)\). The transfer function for the output \(y^k\) in its regular form is then given by

$$ \begin{array}{rll} H_k^{{\text{reg}}}(s_1,\cdots,s_k) &=&(-1)^{k+1} e_{\rm SIZ}^T \prod\limits_{j=1}^{k-1} \left(\left(G+s_jC\right)^{-1} N\right) \\ &&\times (G+s_kC)^{-1}Ee_p, {\quad\hbox{where}\quad} \end{array} $$
$$ H_k(s_1,\cdots,s_k) = H_k^{{\text{reg}}}(s_1+\cdots+s_k,\ s_2+\cdots+s_k,\cdots,s_k). $$
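
For numerical evaluation, each regular-form transfer function reduces to a nested sequence of linear solves, one factor \((G+s_jC)^{-1}\) per frequency argument. The sketch below is illustrative only; the matrices and the SIZ selection vector are hypothetical placeholders for the discretized cable.

```python
import numpy as np

def Hk_reg(G, C, N, Eep, e_siz, s):
    """Evaluate H_k^reg(s_1, ..., s_k) via nested solves with (G + s_j C).
    s is the tuple (s_1, ..., s_k); G, C, N are (n x n) arrays, Eep = E e_p,
    and e_siz selects the spike initiation zone. Sketch with placeholder inputs."""
    k = len(s)
    x = np.linalg.solve(G + s[-1] * C, Eep)      # innermost factor: (G + s_k C)^{-1} E e_p
    for sj in reversed(s[:-1]):                  # apply N, then (G + s_j C)^{-1}, j = k-1, ..., 1
        x = np.linalg.solve(G + sj * C, N @ x)
    return (-1) ** (k + 1) * (e_siz @ x)

# Illustrative two-compartment example; values are hypothetical.
G = np.array([[2.0, -1.0], [-1.0, 2.0]])
C = np.eye(2)
N = np.diag([1.0, 0.0])                          # synapse on compartment 1
Eep = np.array([1.0, 0.0])
e_siz = np.array([0.0, 1.0])                     # read the potential at compartment 2
H1 = Hk_reg(G, C, N, Eep, e_siz, (0.5,))         # first-order transfer function, Eq. (57)
H2 = Hk_reg(G, C, N, Eep, e_siz, (0.5, 1.0))     # second-order regular form
```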

Cite this article

Hedrick, K.R., Cox, S.J. Structure-preserving model reduction of passive and quasi-active neurons. J Comput Neurosci 34, 1–26 (2013). https://doi.org/10.1007/s10827-012-0403-y
