Adaptive Conductance Control

Neuromodulation is central to the adaptation and robustness of animal nervous systems. This paper explores the classical paradigm of indirect adaptive control to design neuromodulatory controllers in conductance-based neuronal models. The adaptive control of maximal conductance parameters is shown to provide a methodology aligned with the central concepts of neuromodulation in physiology and of impedance control in robotics.


Introduction
Due to the very nature of nervous electrical signals, brain medicine and brain-inspired computing put an increasing demand on completely novel control design methodologies. Neurotechnology interfaces neural tissues with electronic devices via the exchange of electrical signals. While of the same physical nature, those signals are radically different in animals and in machines: neural signals are analogue and spiky, whereas the information processing of actuator and sensor signals is digital and quantised. Current control systems in neurotechnology use classical linear filtering techniques in combination with conventional analogue-to-digital converters from robotics and electronics [1]. They fail to acknowledge the excitable dynamics underlying the generation of spiky signals.
Future developments will call for more compliance and more integration between the biological and technological domains. Such requirements will impose control interfaces that interconnect (or, in the behavioural language of Willems [2], share) signals of the same nature.
The significance of designing a controller that shares the input-output properties of the controlled physical system has a long and rich history in robotics.
Passivity-based control is rooted in the concept of port interconnection, which wires the physical terminals of a passive plant and a passive controller both modelled as mechanical ports [3]. In robotics, impedance control is rooted in the design of a controller that shapes the mechanical impedance of the closed-loop system to comply with the environment, itself modelled in terms of a mechanical impedance [4].
The present paper adopts the same philosophy for the control of a neuronal system. We use the well-established framework of conductance-based modelling both for the controller and for the system to be controlled. We consider conductance-based neural networks in which each neural node is a one-port circuit composed of one leaky capacitor (the 'passive membrane') in parallel with a bank of ohmic current sources of variable conductance. The controller is itself an additional set of ohmic current sources connected in parallel to those of the neuron. Each conductance is voltage-dependent, gating the current flow in a specific temporal and amplitude window.
A key emphasis in the present paper is on the adaptive control of the maximal conductances of a conductance-based model. Each maximal conductance modulates the relative importance of a specific current source. Similar to the parameters of a conventional controller, the maximal conductances shape the total conductance of the controlled neurons. We wish to demonstrate that the online adaptation of maximal conductances provides a versatile framework to control the behaviour of a neuronal system. Online adaptation of the maximal conductances is aligned with the concept of neuromodulation, which is of key importance in the biological control of neuronal systems [5,6,7]. All nervous systems are subject to neuromodulation, and each neuron is potentially the target of multiple neuromodulators [8]. Furthermore, each modulator can act on multiple ionic currents in the same neuron [9].
Our methodology exploits the classical framework of model reference adaptive control [10]. This framework relies on key physical properties of electrical and mechanical circuits: a relative degree one between the two terminal variables of the port (current and voltage in the present paper), and contracting internal dynamics. We show that these properties can be exploited in conductance-based electrical circuits in the same way as they have been exploited in the impedance-based mechanical circuits of robotics. A core element of the proposed adaptive control design is the adaptive observer recently proposed for real-time estimation of conductance-based models in [11].
The paper is organised as follows. Section 2 introduces conductance-based models, including the specific parametrisation we will require. Section 3 summarises the adaptive observer design detailed in [11]. Section 4 employs this observer to solve the basic problems of adaptive reference tracking and adaptive disturbance rejection, as well as showing the relevance of such problems in the control of a simple biophysical neural network. Section 5 discusses the idealised assumptions of the present paper and possible routes to make the theoretical methodology amenable to practical solutions of control problems in electrophysiology or in neuromorphic engineering.

Conductance-Based Modelling
Since the seminal work of Hodgkin and Huxley [12], biophysical models in neurophysiology have been founded on nonlinear electrical circuits known as conductance-based models. A detailed introduction to such models can be found in [13,14]. In this section, we extend the system-theoretic conductance-based modelling framework found in [11].

Conductance-based model of a neuron
A conductance-based model of a neuron is a one-port electrical circuit with the architecture shown in Figure 1: a capacitor of capacitance c > 0 in parallel with a leak current I_leak, several intrinsic ionic currents, and an input (or applied) current u. The latter represents the current injected into the circuit by an intracellular electrode. The capacitor voltage v, which models the neuronal membrane potential, evolves according to Kirchhoff's current law, that is,

c dv/dt = −I_leak − I_int − I_ext + u,    (1)

where I_int = Σ_{ion ∈ I} I_ion is the parallel combination of intrinsic ionic currents, I_ext = Σ_{syn ∈ S} Σ_{p ∈ P} I_syn,p is the parallel combination of synaptic currents, I is the index set of intrinsic ionic currents, S is the index set of synaptic neurotransmitter types, and P is the index set of presynaptic neurons.
Each current in the circuit is ohmic in nature, but with a conductance that can be nonlinear and voltage-dependent. The leak current has a constant conductance and is given by

I_leak = g_leak (v − E_leak),

whereas the intrinsic ionic currents are modelled by

I_ion = µ_ion m_ion^a h_ion^b (v − E_ion),    (2a)

where m_ion and h_ion are activation and inactivation gating variables obeying first-order dynamics of the form

dm_ion/dt = (m_∞,ion(v) − m_ion)/τ_m,ion(v),    (2b)
dh_ion/dt = (h_∞,ion(v) − h_ion)/τ_h,ion(v).    (2c)

The constants µ_ion > 0 and E_ion ∈ R are called (intrinsic) maximal conductances and reversal potentials, respectively. The Hodgkin-Huxley (HH) model (Example 1) only includes two voltage-dependent conductances to parameterise the intrinsic current. Those two types of currents are necessary and sufficient to model an action potential, or spike [13]. This is due to the presence of a fast-activating negative conductance and a slower-activating positive conductance, which act as sources of positive and negative feedback, respectively [15].
The fast negative conductance is provided by the activation of the inward current I Na (due to the activation gating variable m Na ) and the slower positive conductance is provided by both the inactivation of I Na (due to h Na ) and the activation of the outward current I K (due to m K ) [16,17].
Biophysical neurons may exhibit much richer behaviours, including for instance transitions between spiking and bursting patterns. The conductance-based models of such neurons include more conductances, leading to a plethora of single-neuron models differing from each other by the kinetics and activation ranges of their gating variables.
Example 2. The bursting neuron model used throughout this paper augments the sodium and potassium currents of the HH model with the T-type and L-type calcium currents I_T and I_L and the calcium-activated potassium current I_KCa. The remaining intrinsic gating variables evolve according to equations of the type (2b)-(2c). Figure 2 illustrates the simulated behaviour of this model. The precise dynamics of each gating variable are specified in Appendix B.
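As a concrete sketch of this model class, the following Python fragment integrates a two-conductance HH-type neuron of the form (1)-(2) by forward Euler. The sigmoidal kinetics, time constants, and parameter values are illustrative placeholders, not the values used in this paper (which are listed in Appendices A and B).

```python
import numpy as np

# Forward-Euler simulation of a two-conductance neuron of the form (1)-(2).
# Kinetics and parameter values are illustrative placeholders, not those of
# Appendices A and B.
def x_inf(v, v_half, k):
    """Generic sigmoidal steady-state activation function."""
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

def simulate(T=100.0, dt=0.001, u=10.0):
    c, g_leak, E_leak = 1.0, 0.3, -50.0   # passive membrane
    mu_Na, E_Na = 120.0, 55.0             # fast inward (sodium) current
    mu_K, E_K = 36.0, -77.0               # slower outward (potassium) current
    v, m, h, n = -65.0, 0.05, 0.6, 0.3
    vs = []
    for _ in range(int(T / dt)):
        # Intrinsic currents I_ion = mu_ion * m^a * h^b * (v - E_ion)
        I_Na = mu_Na * m**3 * h * (v - E_Na)
        I_K = mu_K * n**4 * (v - E_K)
        I_leak = g_leak * (v - E_leak)
        # Membrane equation (1): c * dv/dt = -I_leak - I_int + u
        v += dt / c * (-I_leak - I_Na - I_K + u)
        # First-order gating dynamics of the type (2b)-(2c),
        # with constant time constants for simplicity
        m += dt / 0.5 * (x_inf(v, -40.0, 9.0) - m)
        h += dt / 8.0 * (x_inf(v, -62.0, -7.0) - h)
        n += dt / 5.0 * (x_inf(v, -53.0, 15.0) - n)
        vs.append(v)
    return np.array(vs)

v_trace = simulate()  # bounded membrane-potential trace
```

Because every ionic current pulls the voltage towards its reversal potential, the trace remains confined between the reversal potentials (up to the offset induced by the applied current), a boundedness property exploited throughout the paper.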
Recall that the neuron in Example 1 exhibits spiking due to the presence of both a fast-activating negative conductance and a slower-activating positive conductance. The model in Example 2 exhibits robust bursting because this pairing is replicated at the timescale of the burst [19]. The T- and L-type calcium currents I_T and I_L provide negative conductance on the slower timescale of calcium activation, and even slower positive conductance is provided by the calcium-activated potassium current I_KCa.
Synaptic currents arise from electrochemical connections between neurons [20,Chapter 7]. They are modelled as current sources in the same way as intrinsic currents; the only difference is that the voltage dependence of their conductances is on the presynaptic neuron, noted v p in the example below.
Example 3. For the sake of illustration, all the models in this paper only use one type of (inhibitory) synapse, called a GABA synapse in neurophysiology, obeying the standard model reproduced from [21]:

I_syn,p = µ_syn,p s_syn,p (v − E_syn),
ds_syn,p/dt = a_1 σ_syn(v_p)(1 − s_syn,p) − a_2 s_syn,p,

with a synaptic activation function σ_syn given by a sigmoid function of the form

σ_syn(v_p) = 1 / (1 + exp(−(v_p − δ_syn)/γ_syn)).

Here, s_syn,p is the synaptic gating variable and v_p is the membrane voltage of the presynaptic neuron. The constants µ_syn,p > 0 and E_syn ∈ R are (synaptic) maximal conductances and reversal potentials, respectively. This model can represent excitatory or inhibitory synapses, depending on the value of E_syn. In the present paper we set E_syn = −90, which models the inhibitory synapses encountered in central pattern generators.

Conductance-based model of a neural network

In addition to synaptic current interconnections, neuronal networks may include electrical gap junctions, modelled as ohmic resistive currents flowing between neurons. Gap junction currents are thus passive components of the network, just as leak currents are passive components in a single neuron. It follows that the model of a conductance-based neural network has a form completely analogous to that of a single neuron, given by (1). The network model is given by the vector equations

C dv/dt = −I_leak(v) − I_int(v, w) − I_ext(v, w) + u,    (6a)
dw/dt = g(v, w),    (6b)

where C is the diagonal matrix of membrane capacitances, w = col(w^(1), ..., w^(n_v)), and each vector w^(i) collects the intrinsic and synaptic gating variables of the i-th neuron, that is, m_ion,i, h_ion,i and s_syn,p,i, as well as the calcium concentration [Ca]_i. Notice that with this notation, P ⊆ N, where N denotes the index set of neurons in the network. The overall leak current I_leak gathers the nodal leak currents and the gap junction currents, both of which are linear in v, and I_int ∈ R^{n_v}, I_ext ∈ R^{n_v} and u ∈ R^{n_v} are formed by gathering the intrinsic and extrinsic currents and inputs of each neuron in the corresponding n_v-dimensional vectors. The addition of gap junction currents extends the system-theoretic modelling framework of [11].
The corresponding vector function g(v, w) in (6b), which represents the internal (gating) dynamics of the network, is formed by stacking the gating dynamics of the individual neurons.

Example 4. A classical two-neuron network is the half-centre oscillator (HCO): a pair of neurons coupled by mutually inhibitory synapses, capable of generating autonomous rhythms for motor control [22].
The HCO model of this paper interconnects two identical bursting neurons modelled according to Example 2 with the synapse model given in Example 3.
The synapse index set is given by S = {G}. The state of the internal dynamics is given by w = col(w^(1), w^(2)), with w^(1) and w^(2) collecting the gating variables and calcium concentration of each neuron. The HCO model is a building block of more complex central pattern generators, such as the example considered in Section 4.3, which includes gap junctions as well as synapses.
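The synaptic interconnection of Example 3 can be sketched in a few lines of Python; the sigmoid threshold, slope, and rate constants a1 and a2 below are illustrative assumptions, not the values of Appendix A.

```python
import numpy as np

# Sketch of the inhibitory synapse of Example 3: the gating variable s_syn
# tracks a sigmoid of the presynaptic voltage v_p. The threshold, slope and
# rate constants below are illustrative assumptions, not those of Appendix A.
def sigma_syn(v_p, v_half=-45.0, k=2.0):
    return 1.0 / (1.0 + np.exp(-(v_p - v_half) / k))

def synaptic_current(v_post, v_pre_trace, dt=0.01, mu_syn=1.0,
                     E_syn=-90.0, a1=2.0, a2=0.1):
    s = 0.0
    I = np.empty_like(v_pre_trace)
    for i, v_p in enumerate(v_pre_trace):
        # ds/dt = a1 * sigma_syn(v_p) * (1 - s) - a2 * s
        s += dt * (a1 * sigma_syn(v_p) * (1.0 - s) - a2 * s)
        # I_syn = mu_syn * s * (v - E_syn); E_syn = -90 makes it inhibitory
        I[i] = mu_syn * s * (v_post - E_syn)
    return I

# Presynaptic neuron silent (-65 mV), then active (0 mV):
v_pre = np.concatenate([np.full(5000, -65.0), np.full(5000, 0.0)])
I_syn = synaptic_current(-65.0, v_pre)
# I_syn stays near zero while the presynaptic neuron is silent
# and turns on once the presynaptic neuron becomes active
```

Note that the conductance of this current depends on the presynaptic voltage only; the postsynaptic voltage enters linearly through the driving term (v − E_syn), which is what makes the current linear in the maximal conductance µ_syn.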

Adaptive observers for conductance-based models
Following the notation of [11], the conductance-based network model (6) can be written in the form

dv/dt = Φ(v, w) θ + b(v, u),    (7a)
dw/dt = g(v, w),    (7b)
dθ/dt = 0,    (7c)

where v(t) ∈ R^{n_v} is a vector of measured output membrane voltages, u(t) ∈ R^{n_v} is a vector of input currents, w(t) ∈ R^{n_w} is a vector of unmeasured internal states, and θ ∈ R^{n_θ} is a vector of biophysical parameters to be estimated; Φ(v, w) is the regressor matrix multiplying the unknown parameters, and b(v, u) collects the remaining known terms. The vector of unknown constant parameters θ is included in the state-space model via (7c).
In the present paper, the vector θ only includes maximal conductances.
Example 5. Consider the HCO described in Example 4. Let µ^(i) denote the vector collecting the intrinsic maximal conductances µ_ion,i, ion ∈ I, and the synaptic maximal conductance µ_syn,p,i of the i-th neuron, for i, p ∈ N = {1, 2} and p ≠ i, as in (8). The estimated parameters of the HCO are chosen as θ = col(µ^(1), µ^(2)), with µ^(1) and µ^(2) given by (8). Letting v = (v_1, v_2)^T and w = col(w^(1), w^(2)), the voltage dynamics of the model can then be written as (7a), where the regressor matrix Φ(v, w) is block-diagonal, with one block per neuron. An important property of the parametrisation in Example 5 is that it is decentralised: the network estimation problem decouples into independent single-neuron estimation problems. This decoupling allows the estimation problem to be scaled to a possibly high-dimensional network.
The recent work [11] provides an adaptive observer to estimate the parameters of the system (7) in real time. This observer has global convergence properties and is based on the recursive least squares algorithm. The state-space equations of the observer are

dv̂/dt = Φ(v, ŵ) θ̂ + b(v, u) + (γI + Ψ P Ψ^T)(v − v̂),    (10a)
dŵ/dt = g(v, ŵ),    (10b)
dθ̂/dt = P Ψ^T (v − v̂),    (10c)

where γ > 0 is a constant gain, and the matrices P and Ψ evolve according to

dP/dt = αP − P Ψ^T Ψ P,  P(0) = P(0)^T > 0,    (11a)
dΨ/dt = −γΨ + Φ(v, ŵ),    (11b)

where α > 0 is a constant forgetting rate, required to discount the initial error between w(0) and ŵ(0). It can be shown that this adaptive observer recursively solves a least-squares regression problem with exponential forgetting, where the regression error is defined by filtering the derivative of v with the first-order filter H(s) = γ/(s + γ). Without loss of generality, we assume Ψ(0) = 0.
The convergence of the above adaptive observer relies on the key property that the internal dynamics (7b) are contracting in w on a positively invariant compact set, uniformly in the voltage v. We refer the reader to [11] for further details, including a contraction-based proof of convergence.
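A discrete-time caricature of the parameter update is recursive least squares with exponential forgetting on a static linear regression; the full observer of [11] additionally filters the regressor through H = γ/(s + γ) and simulates the internal dynamics ŵ. A minimal sketch:

```python
import numpy as np

# Recursive least squares with exponential forgetting: a discrete-time
# caricature of the observer's parameter update. The full observer of [11]
# additionally filters the regressor and simulates the gating dynamics.
def rls_forgetting(Phi, y, lam=0.99, p0=1e3):
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = p0 * np.eye(n)                 # covariance-like gain matrix
    for phi, yk in zip(Phi, y):
        e = yk - phi @ theta           # regression error
        k = P @ phi / (lam + phi @ P @ phi)
        theta = theta + k * e
        P = (P - np.outer(k, phi @ P)) / lam   # exponential forgetting
    return theta

rng = np.random.default_rng(0)
theta_true = np.array([120.0, 36.0, 0.3])   # e.g. three maximal conductances
Phi = rng.normal(size=(500, 3))             # persistently exciting regressors
y = Phi @ theta_true                        # noiseless measurements
theta_hat = rls_forgetting(Phi, y)
```

On noiseless data with persistently exciting regressors, the estimate converges to the true parameter vector; the forgetting factor lam plays the role of the continuous-time forgetting rate α, discounting old data (and hence the initial estimation error).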

Model Reference Adaptive Conductance Control
Adaptive observers are instrumental to the classical design approach called model reference adaptive control (MRAC) [10]. In this section we illustrate the application of model reference adaptive control to conductance-based models. We regard a single neuron as a voltage-controlled circuit. We review the two canonical control problems of adaptive control: the adaptive tracking of a reference signal v r (Section 4.1), and the adaptive disturbance rejection of an external current I d (Section 4.2). We then illustrate the relevance of those elementary control problems in a network example (Section 4.3). See Figure 3 for a block diagram representation of the two problems.

Adaptive reference tracking
The classical problem of reference tracking is to design a controller such that the voltage output v asymptotically converges to the voltage reference v r . If we assume that both the output v and the reference v r are solutions of identical conductance-based models, this tracking problem reduces to the classical problem of synchronisation.
Here we consider the adaptive version of the tracking problem: we assume that the reference generator is a conductance-based model with a constant but unknown parameter vector θ_r, and that the controlled circuit is a conductance-based model with the same model structure and a parameter vector θ that is also unknown. The problem of adaptive synchronisation has been considered in [23,24] in the context of secure communication via encryption in chaotic reference generators. Assuming perfect knowledge of the ion channel kinetics g(v, w), the solution presented here is relevant for tracking the neuromodulation of a neuron in vivo, or for learning the parameters of a silicon neuron from experimental traces.
We will first describe how to solve adaptive reference tracking without synchronisation. This yields an oscillation which follows the reference but with a possible phase shift. The phase shift is then eliminated with an additional resistive element in the controller.
We solve the adaptive reference tracking problem by estimating both the reference parameters θ_r and the parameters θ of the controlled neuron, using two adaptive observers of the form (10)-(11). We use the control law u(t) = I_control(t) + u_r(t), where u_r(t) is the input to the reference neuron and I_control(t) is the certainty-equivalence current defined in (12), built from the rectified and saturated parameter estimates. Here ρ(x) := max(0, x) is the rectified linear (ReLU) function, and ρ̄(x) := min(x, β) is a saturation. Both functions are applied to their arguments elementwise. Together, they ensure that the solutions of the controlled neuron remain in a positively invariant set, as required for convergence of the observer [11]. We require β to satisfy max_j {θ_j} ≤ β ≤ β̄, where β̄ is the largest value that preserves the set. We empirically choose β by setting it to a large value (relative to plausible values of θ) and reducing it if the membrane potential of the controlled neuron diverges.
Exponential convergence of the estimated parameters (θ̂ and θ̂_r) to the true parameters provides a solution to the tracking problem without synchronisation.
The proof relies on the virtual system idea of contraction theory [25], and follows the same lines as in [11, Section V.C]: the estimate x̂_r contracts exponentially fast to the reference x_r, while the estimate x̂ contracts exponentially fast to x. Upon convergence of both the reference and plant observers, we obtain the non-adaptive synchronisation problem between two identical systems.
The solution to this problem is particularly simple for conductance-based models because of their property of output feedback contraction or output feedback incremental passivity [26,27]. Contraction of the error dynamics is ensured by including the output feedback term κ(v_r − v) in the control law, for a sufficiently large gain κ > 0. The circuit realisation of this feedback term is a resistive wire between the reference and controlled circuits, or a gap junction in the language of neurophysiology. When this wire is introduced, Theorem 2 of [26] applies, and v(t) → v_r(t) as t → ∞. The full control law is thus given by

u(t) = I_control(t) + u_r(t) + κ(v_r(t) − v(t)),    (13)

where I_control(t) is given by (12).
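The synchronising action of the resistive term κ(v_r − v) can be illustrated on any output-feedback contracting oscillator; in the sketch below a FitzHugh-Nagumo model stands in for the conductance-based neuron (an illustrative substitution, with standard textbook parameters).

```python
# The gap-junction term kappa*(v_r - v) synchronises a follower neuron to an
# identical reference when kappa is large enough. A FitzHugh-Nagumo oscillator
# stands in here for the conductance-based model (illustrative substitution,
# standard textbook parameters).
def step(v, w, v_ref, kappa, dt=0.01, I=0.5):
    dv = v - v**3 / 3.0 - w + I + kappa * (v_ref - v)
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    return v + dt * dv, w + dt * dw

kappa = 5.0
vr, wr = -1.0, 1.0      # reference oscillator
v, w = 2.0, -0.5        # controlled oscillator, different initial state
err = []
for _ in range(20000):
    vr, wr = step(vr, wr, vr, 0.0)   # reference evolves autonomously
    v, w = step(v, w, vr, kappa)     # follower: resistive (gap-junction) coupling
    err.append(abs(vr - v))
# err decays exponentially: the coupled error dynamics are contracting
```

For this model a simple quadratic contraction argument shows that any κ > 1 dominates the destabilising slope of the voltage nullcline, so the voltage error decays exponentially regardless of the phase of the reference; this mirrors the role of the gain κ in the control law above.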

Adaptive disturbance rejection
The classical problem of disturbance rejection has a solution similar to that of the tracking problem. Given a disturbance current I_d generated by a synaptic current, we wish to design a feedback controller that asymptotically rejects that disturbance. This problem is of relevance in electrophysiology. Electrophysiologists study the properties of a given neuronal circuit in vitro by extracting the circuit from its nervous system and probing its responses to electrical stimuli in an experimental preparation; see e.g. [28]. Classical solutions include pharmacological agents that block specific types of ion channels, thereby reducing the synaptic currents to zero; see e.g. [29]. One downside of using pharmacological agents is that they may affect other properties of the circuit; another is the global effect of such agents within a given preparation. Both may be undesirable when highly specific ion channel blocking is required.

An interesting alternative for targeted synaptic isolation in an experimental preparation is the design of a conductance-based controller for the rejection of synaptic currents. In this case, synaptic currents to be blocked are regarded as disturbances to be rejected.
Assuming that the pre- and post-synaptic voltages v_d and v are measured, a target synaptic current flowing between the two neuronal membranes can be blocked by means of the adaptive disturbance rejection control scheme shown in Figure 6. Note that this is the same circuit as in Figure 1, but with two additional circuit elements connected in parallel.
The first additional element is the disturbance I_d, which is interpreted as the specific synaptic current to be blocked. This disturbance current is modelled as

I_d = µ_syn s (v − E_syn),    (14a)
ds/dt = a_1 σ_s(v_d)(1 − s) − a_2 s,    (14b)

where σ_s is a monotonically increasing activation function and a_1, a_2 > 0 are constant (known) synaptic parameters.
The second additional element is the controller. Its inputs are the measured voltages v and v_d, and it generates a control current I_control designed to cancel I_d. We require that the behaviour of the closed-loop circuit converges to that of an undisturbed conductance-based model, that is, one in which the targeted synaptic connection is absent. We assume that both the pre- and post-synaptic neurons are the bursting neurons of Example 2, and that they are interconnected with the inhibitory synapse of Example 3.
To achieve perfect disturbance rejection, the unknown synaptic maximal conductance µ_syn has to be estimated; this estimate is denoted by µ̂_syn. The disturbance rejection controller can then be designed following the certainty equivalence principle [10]: using the estimate µ̂_syn, the controller is designed to perfectly cancel I_d when µ̂_syn = µ_syn. The estimation problem therefore requires an estimation method such that µ̂_syn → µ_syn as t → ∞. This can be accomplished by the observer given by (10)-(11), using the parametrisation θ̂ = µ̂_syn. The observer also produces an estimate ŝ of the synaptic gating variable such that ŝ → s as t → ∞. Given this observer, the input to the neuron is given by u = I_control + I_d + ū, where ū is an arbitrary new input signal and

I_control = −ρ(µ̂_syn) ŝ (v − E_syn),

where the dynamics of ŝ are driven by σ_s, a_1 and a_2, the same as in (14b). As in Section 4.1, the function ρ ensures that solutions remain in the positively invariant set. Figure 7 illustrates the performance of the disturbance rejection controller.
The model and observer parameters, and the input currents, are provided in Appendix A.
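A scalar caricature of the certainty-equivalence rejection scheme can be sketched as follows: a normalised gradient estimator for µ_syn (standing in for the full observer (10)-(11), an illustrative simplification) drives the residual current I_d + I_control to zero. All signals and constants below are placeholder assumptions, and the residual is treated as directly measurable, which is an idealisation.

```python
import numpy as np

# Scalar caricature of adaptive disturbance rejection: estimate mu in
# I_d = mu * s * (v - E_syn) and cancel it by certainty equivalence,
# I_control = -mu_hat * s_hat * (v - E_syn). Signals, gains and rate
# constants are illustrative assumptions; the residual e is treated as
# directly measurable (an idealisation of the observer error).
def sigma_s(v_d):
    return 1.0 / (1.0 + np.exp(-(v_d + 45.0) / 2.0))

dt, E_syn = 0.01, -90.0
mu_true, mu_hat = 4.0, 0.0
a1, a2, gain = 2.0, 0.1, 5.0
s = s_hat = 0.0
residual = []
for k in range(40000):
    t = k * dt
    v_d = -65.0 + 65.0 * (np.sin(0.5 * t) > 0)   # presynaptic bursts
    v = -60.0 + 10.0 * np.sin(0.3 * t)           # postsynaptic voltage
    # true and simulated synaptic gating, both of the form (14b);
    # s_hat equals s here because the initial states match (idealised)
    s += dt * (a1 * sigma_s(v_d) * (1.0 - s) - a2 * s)
    s_hat += dt * (a1 * sigma_s(v_d) * (1.0 - s_hat) - a2 * s_hat)
    phi = s_hat * (v - E_syn)                    # scalar regressor
    I_d = mu_true * s * (v - E_syn)              # disturbance current
    I_control = -mu_hat * phi                    # certainty-equivalence control
    e = I_d + I_control                          # residual current
    mu_hat += dt * gain * phi * e / (1.0 + phi * phi)  # normalised gradient
    residual.append(abs(e))
```

The residual is linear in the parameter error, so the estimate converges whenever the regressor remains excited by the presynaptic activity; upon convergence the control current exactly cancels the disturbance, leaving the neuron as if the synapse had been pharmacologically blocked.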

Network neuromodulation
The adaptive conductance controller developed in the previous sections for a single cell is by nature decentralised: it can be applied independently to different neurons (nodes) in a network. Each neuron in the network receives synaptic currents that can be adaptively estimated using the measurements of the presynaptic neuron voltages. As a consequence, an observer can be designed for each neuron of the network, and each of those neurons can be controlled to synchronise to a given neuron or to adaptively reject specific synaptic currents.
We illustrate the versatility of this adaptive conductance control in a five-neuron network previously analysed in [30], itself inspired by the stomatogastric ganglion, a crustacean central pattern generator [8]. The network interconnects a fast HCO and a slow HCO, both with the model structure of Examples 4 and 5, through a central "hub" neuron. The connectivity diagram of this network is shown in the centre of Figure 8a. Notice the lack of direct connections between the two HCOs on either side of the hub neuron.
In previous work [30], we have shown that this network can be switched between distinct rhythmic states by the modulation of specific internal conductances. In every possible rhythmic configuration, the network is composed of five neurons generated with the model of Example 2 and interconnected with gap junctions and the inhibitory synapses of Example 3. In an application of model-reference adaptive control, the challenge is to adaptively regulate the network by only modulating the maximal conductance parameters. This can involve up to five distinct observers, assuming measurement of the five neuronal voltages.
As a simple illustration, we show how to decouple the two central pattern generators by disconnecting the central hub, using the disturbance rejection controller of the previous section. This is achieved by controlling the hub (Neuron 3 in Figure 8a). The controller rejects the inhibitory synapse from Neuron 5, using the same control law as in Section 4.2. This is illustrated in Figure 8b by the behaviour of the hub's membrane potential, v_3. During part (a) of the simulation, v_3 expresses a rhythm governed almost entirely by the first HCO (that is, Neurons 1 and 2). As only Neuron 1 inhibits the hub, v_3 is low when v_1 is active, and each burst of v_3 has the same length. In part (b), when the disturbance is introduced, v_3 expresses a 'mixed' rhythm governed equally by both HCOs.
Bursts of v 3 are interrupted whenever v 1 or v 5 is active. Finally, the observer and controller are introduced in part (c), and v 3 converges to the undisturbed rhythm of (a). The simulation parameters are provided in Appendix A.

Discussion
The results of this section rely on idealised assumptions, most notably exact measurements of the membrane voltages. For a rigorous study of how noise impacts the system identification of conductance-based models, see [31]. Robustness of the adaptive control laws against modelling uncertainty and measurement noise will be the scope of future investigation, both analytical and experimental.
We envision at least three distinct angles of attack to increase the robustness and the biological plausibility of the proposed adaptive design:
• Redundancy and degeneracy of conductances seem essential in biological neurons to cope with uncertainty. Biological neurons can use vastly different choices of maximal conductances to exhibit the same behaviour [5]. It has been suggested that this degeneracy plays a key role in homeostasis [32]. Viewing the internal dynamics of a neuron as a bank of nonlinear filters that collectively shape the conductance of the total internal current, the adaptive controller functions very much like a one-layer artificial neural network with recursive least-squares estimation of its parameters (this provides a link between biophysical models and the phenomenological models proposed in [33]). The bank of conductances is however not arbitrary in neurophysiology. Ion channel types identify specific timescales and amplitude ranges of activation that are critical for the excitability thresholds of the neuron. These specific scales of activation have been well documented over a long history of voltage-clamp experiments. The resulting dynamic conductances shape the closed-loop behaviour very much like the zeros and poles of a classical LTI controller shape the sensitivity of the closed-loop system [15,34].
• Voltage measurements are assumed to be exact in this paper, but the spiking nature of the signals suggests that much coarser information about reference or disturbance signals might be sufficient to modulate the rhythm of a neuron. Future work will explore this possibility to reduce the computational demand of the full observer. Here, neurophysiology will also be a useful guide. For instance, it is well known that calcium is an essential second messenger involved in neuromodulation. In the present paper (as in many models in the literature), the intracellular calcium concentration is simply modelled as a low-pass filtered version of the voltage (as in (3)). Earlier models have suggested simple yet general homeostatic principles for the adaptation of cellular conductances using the intracellular calcium concentration as a feedback signal [32]. Here, we also anticipate that the physical nature of the electrical circuit model will allow flexible designs of multi-scale and hierarchical adaptive controllers.

Conclusion
This paper has investigated the classical framework of model-reference adaptive control to design neuromodulatory controllers in conductance-based neuronal models. A key message of the paper is that conductance-based models are linearly parameterised in maximal conductance parameters, and that the adaptive control of individual conductances provides flexible adaptation principles reminiscent of those observed in neurophysiology. The proposed methodology makes use of the adaptive observer recently proposed in [11]. It fundamentally relies on the physical input-output properties of conductance-based models, namely the relative degree one property between currents and voltages, and the contraction property of the internal (gating) dynamics. It is also decentralised, in the sense that the adaptive controller of each node in a network only estimates local states and parameters based on local measurements, that is, the nodal voltage and the voltage of presynaptic neurons.
We have presented adaptive controllers to solve the two key control problems of reference tracking and disturbance rejection. We have also provided a simple illustration of the role of nodal adaptive control in a conceptual network inspired by the stomatogastric ganglion.
The results of this paper are preliminary in that they assume full knowledge of the ion channel kinetics g(v, w), as well as exact measurements of the membrane voltages.

Appendix B. Gating Dynamics
All neurons share the same gating variables (2). The activation and time-constant functions for each gate are provided below. The activation functions are as follows: