K1 The cerebral cortex, a delay coupled oscillator network: Computations in high dimensional dynamic space

Wolf Singer 1

1 Max Planck Institute for Brain Research (MPI), Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation With Max Planck Society, Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main, Germany

Email: w.singer@brain.mpg.de

The supra-granular layers of the cerebral cortex can be considered a delay-coupled recurrent network whose nodes are feature selective and have a propensity to oscillate. Such networks exhibit high-dimensional non-linear dynamics that can be exploited for computation. Results obtained with parallel recordings of neuronal responses in cat and monkey visual cortex suggest that the cerebral cortex exploits this high-dimensional dynamic space for the flexible encoding of relations among features (feature binding), for the acquisition and storage of information about statistical contingencies of features in the environment (priors), for the ultra-fast matching of priors with sensory evidence (predictive coding), and for the classification of stimulus-specific activity vectors by segregation in high-dimensional space. In addition, the network dynamics allow for the generation of stimulus-specific response sequences (temporal codes) and the superposition of information provided by sequentially presented stimuli. These computations complement those realized in multilayer feedforward architectures and allow for the coexistence of rate and temporal codes. It is proposed that differences between the performance of natural and artificial systems, e.g., deep learning networks, are mainly due to the fact that recurrent processing permits exploitation of the temporal dimension for computation. For review, see [1].

References

1. Singer W. Recurrent dynamics in the cerebral cortex: Integration of sensory evidence with stored knowledge. Proceedings of the National Academy of Sciences. 2021 Aug 17;118(33).

K2 Correlations, scaling, and dimensionality

William Bialek 1

1 Princeton University and The CUNY Graduate Center, Princeton, NJ, United States of America

Email: wbialek@Princeton.edu

We confront high dimensional data in thinking about the inputs to our sensory systems, the activity of neural populations, and the behavioral outputs of organisms. It is tempting to simplify by searching for lower dimensional structure, but what are the alternatives?

As an example, the scale-invariant structure of natural images means that one can achieve a thousand-fold reduction of dimensionality with only a two-fold loss of variance, but this would miss important aspects of the natural image ensemble. Renormalization group ideas from statistical physics offer other paths to simplification, reducing the dimensionality of models rather than of the data itself. We have applied this approach to the analysis of the collective activity of 1000+ neurons in the hippocampus, revealing scaling behaviors that are reproducible across animals, sometimes to the second decimal place. For animal behavior, we have tried simplifying by compressing behavioral states while maintaining information about future states, and the simplest models that capture the resulting correlations involve scale-invariant interactions over multiple time scales. We are just scratching the surface, but these results suggest that high dimensional data on brains and behavior can be organized in surprising ways.
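
One concrete version of this program, in the spirit of published coarse-graining analyses of hippocampal populations from the same group, is sketched below: repeatedly pair the most strongly correlated variables, replace each pair by its summed activity, and track how statistics such as the variance behave across levels. The independent-Poisson "data" and the greedy pairing rule are illustrative stand-ins, not the actual analysis.

```python
import numpy as np

def coarse_grain(x):
    """One RG-style coarse-graining step: greedily pair the most correlated
    variables and replace each pair by its sum.
    x: (n_vars, n_samples) activity matrix; returns (n_vars // 2, n_samples)."""
    n = x.shape[0]
    c = np.corrcoef(x)
    np.fill_diagonal(c, -np.inf)                 # never pair a variable with itself
    # candidate pairs ordered by correlation, most correlated first
    order = np.column_stack(np.unravel_index(np.argsort(c, axis=None)[::-1], c.shape))
    free, merged = set(range(n)), []
    for i, j in order:
        if i != j and i in free and j in free:
            free.discard(i); free.discard(j)
            merged.append(x[i] + x[j])
        if len(free) < 2:
            break
    return np.array(merged)

rng = np.random.default_rng(0)
x = rng.poisson(1.0, size=(1024, 5000)).astype(float)   # toy "neurons"
while x.shape[0] >= 2:
    print(x.shape[0], "variables, mean variance:", round(x.var(axis=1).mean(), 3))
    x = coarse_grain(x)
```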

K3 Advances in computational psychiatry: Understanding cognitive control as a network process

Danielle Bassett 1

1 University of Pennsylvania, Department of Bioengineering, Department of Physics and Astronomy, Philadelphia, PA, United States of America

Email: dsb@seas.upenn.edu

The human brain is a complex organ characterized by heterogeneous patterns of interconnections. Non-invasive imaging techniques now allow for these patterns to be carefully and comprehensively mapped in individual humans, paving the way for a better understanding of how wiring supports cognitive processes. While a large body of work now focuses on descriptive statistics to characterize these wiring patterns, a critical open question lies in how the organization of these networks constrains the potential repertoire of brain dynamics. Here I describe an approach for understanding how perturbations to brain dynamics propagate through complex wiring patterns, driving the brain into new states of activity. Drawing on a range of disciplinary tools – from graph theory to network control theory and optimization – I identify control points in brain networks and characterize trajectories of brain activity states following perturbation to those points. Finally, I describe how these computational tools and approaches can be used to better understand the brain's intrinsic control mechanisms and their alterations in psychiatric conditions.
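
To make the network-control step concrete, here is a minimal sketch of one widely used diagnostic, average controllability, for a linear dynamics model x(t+1) = A x(t) + B u(t) on a toy connectome. The stability normalization follows common practice in this literature, and the random symmetric weight matrix is purely illustrative, not the author's pipeline.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def average_controllability(A):
    """Average controllability of each node in x(t+1) = A x(t) + B u(t),
    taken as the trace of the controllability Gramian with single-node
    input B = e_i. A is first scaled to be Schur-stable."""
    A = A / (1.0 + np.abs(np.linalg.eigvals(A)).max())  # enforce stability
    n = A.shape[0]
    ac = np.zeros(n)
    for i in range(n):
        B = np.zeros((n, 1)); B[i] = 1.0
        W = solve_discrete_lyapunov(A, B @ B.T)   # solves A W A' - W + B B' = 0
        ac[i] = np.trace(W)
    return ac

# hypothetical structural connectome: symmetric random weights
rng = np.random.default_rng(1)
A = rng.random((20, 20)); A = (A + A.T) / 2.0
print(average_controllability(A).round(2))        # one score per brain region
```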

K4 Significant spatio-temporal spike patterns in macaque monkey motor cortex

Sonja Grün 1,2

1 Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA Brain Institute I (INM-10), Jülich, Germany

2 Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany

Email: s.gruen@fz-juelich.de

The cell assembly hypothesis [1] postulates that neurons coordinate their activity through the formation and repetitive co-activation of groups. While the classical theory of neural coding revolves around the concept that information is encoded in firing rates, we assume that assembly activity is expressed by the occurrence of precisely timed spatio-temporal patterns (STPs) of spikes emitted by neurons that are members of the assembly, e.g., a synfire chain. We first report on a method capable of detecting significant STPs in massively parallel spike trains (SPADE [2–4]), and then present pattern results from the analysis of Utah array recordings from the pre-/motor cortex of monkeys. SPADE first identifies repeating STPs using Frequent Itemset Mining [5], and then evaluates the detected patterns for significance through comparison to patterns found in surrogate data. Various surrogate techniques can be used to evaluate significance, and their correct choice is crucial to ensure that chance patterns are not classified as significant [6]. The final step of the method removes false positive patterns that arise as by-products of true patterns combined with background activity.

Here, we evaluate how six different surrogate techniques affect the results of SPADE, in terms of the general statistics of the generated surrogates and the number of false positives. We conclude that spike-train shifting [7] is the preferable choice for our type of data, which typically show a CV < 1 and have a dead time after each spike of 1.6/1.2 ms induced by the spike sorter (Plexon). Uniform dithering, in contrast, leads to a high false positive rate.

In a next step, we evaluate whether cell assemblies are active in relation to motor behavior [2]. To this end, we analyze 20 experimental sessions of about 15 min each, consisting of parallel spike data recorded with a 10x10-electrode Utah array in the pre-/motor cortex of two macaque monkeys performing a reach-to-grasp task [8,9]. The monkeys have four possible behavioral conditions of grasping and pulling an object, consisting of combinations of two grip types (precision or side grip) and two amounts of force required to pull the object (low or high). We segment each session into 6 periods of 500 ms duration and analyze them independently for the occurrence of STPs. Each significant STP is identified by its neuron composition, its number and times of occurrence, and the delays between its spikes. We find that significant STPs indeed occur in all phases of the behavior. Their size ranges between 2 and 6 neurons, and their maximal temporal extent is 60 ms. The STPs are specific to the behavioral context, i.e., within the different trial epochs and across conditions (different grip and force combinations). This suggests that different assemblies are active in the context of different behaviors. Within a recording session, we typically find one neuron that is involved in all STPs of that session. The neurons involved in STPs within a session are not clustered on the cortex, but may be far apart (up to 3.6 mm). We further plan to investigate the spatial arrangement of the patterns on the Utah array, to determine whether there are preferred spatial directions of pattern spike sequences, as found in [2] for synchronous patterns. Finally, we plan to investigate whether the grip type can be better decoded on the basis of the type of STPs or by using the firing rates of the neurons.
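
To make the surrogate logic concrete, below is a minimal sketch of spike-train shifting and of a count-based significance test for a single pattern. This is not the SPADE implementation (SPADE is available in the Elephant Python toolbox); the spike trains, the pattern, and the bin and shift parameters are invented for illustration.

```python
import numpy as np

def shift_surrogate(spiketrains, t_stop, max_shift, rng):
    """Spike-train shifting: displace each neuron's entire train by an
    independent random offset (wrapped into [0, t_stop)). Within-train
    structure (ISIs, CV, dead time) is preserved exactly; only the
    across-neuron spike timing is destroyed."""
    return [np.sort((st + rng.uniform(-max_shift, max_shift)) % t_stop)
            for st in spiketrains]

def count_pattern(spiketrains, pattern, bin_size):
    """Count occurrences of an STP given as (neuron, lag-in-bins) pairs,
    anchored on the first listed neuron's spikes."""
    binned = [np.unique((st // bin_size).astype(int)) for st in spiketrains]
    occupied = [set(b) for b in binned]
    n0, _ = pattern[0]
    return sum(all(t + lag in occupied[n] for n, lag in pattern[1:])
               for t in binned[n0])

# hypothetical example: p-value of one 3-neuron pattern vs. 1000 surrogates
rng = np.random.default_rng(0)
trains = [np.sort(rng.uniform(0.0, 10.0, 60)) for _ in range(10)]  # seconds
pattern = [(0, 0), (3, 2), (7, 5)]        # (neuron, delay in 5-ms bins)
obs = count_pattern(trains, pattern, 0.005)
null = [count_pattern(shift_surrogate(trains, 10.0, 0.05, rng), pattern, 0.005)
        for _ in range(1000)]
print("p =", (1 + sum(c >= obs for c in null)) / (1 + len(null)))
```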

References

1. Hebb DO. The organization of behavior: a neuropsychological theory. New York: Science Editions; 1949.

2. Torre E, Quaglio P, Denker M, Brochier T, Riehle A, et al. Synchronous spike patterns in macaque motor cortex during an instructed-delay reach-to-grasp task. Journal of Neuroscience. 2016 Aug 10;36(32):8329–40.

3. Quaglio P, Yegenoglu A, Torre E, Endres DM, Grün S. Detection and evaluation of spatio-temporal spike patterns in massively parallel spike train data with spade. Frontiers in computational neuroscience. 2017 May 24;11:41.

4. Stella A, Quaglio P, Torre E, Grün S. 3d-SPADE: Significance evaluation of spatio-temporal patterns of various temporal extents. Biosystems. 2019 Nov 1;185:104022.

5. Porrmann F, Pilz S, Stella A, Kleinjohann A, Denker M, et al. Acceleration of the SPADE Method Using a Custom-Tailored FP-Growth Implementation. Frontiers in Neuroinformatics. 2021:48.

6. Stella A, Bouss P, Palm G, Grün S. Generating surrogates for significance estimation of spatio-temporal spike patterns. bioRxiv. 2021 Jan 1.

7. Pipa G, Wheeler DW, Singer W, Nikolić D. NeuroXidence: reliable and efficient analysis of an excess or deficiency of joint-spike events. Journal of computational neuroscience. 2008 Aug 1;25(1):64–88.

8. Brochier T, Zehl L, Hao Y, Duret M, Sprenger J, et al. Massively parallel recordings in macaque motor cortex during an instructed delayed reach-to-grasp task. Scientific data. 2018 Apr 10;5(1):1–23.

9. Riehle A, Wirtssohn S, Grün S, Brochier T. Mapping the spatio-temporal structure of motor cortical LFP and spiking activities during reach-to-grasp movements. Frontiers in neural circuits. 2013 Mar 27;7:48.

F1 Cortical oscillations support sampling-based computations in spiking neural networks

Andreas Baumbach 1 , Agnes Korcsak-Gorzo 2 , Michael G. Müller 3 , Luziwei Leng 4 , Oliver Julien Breitwieser 1 , Sacha van Albada 2 , Walter Senn 5 , Karlheinz Meier 1 , Robert Legenstein 3 , Mihai A. Petrovici 6

1 Heidelberg University, Heidelberg, Germany

2 Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6, INM-10) and Institute for Advanced Simulation (IAS-6), Jülich, Germany

3 Graz University of Technology, Graz, Austria

4 Heidelberg University and University of Bern, Heidelberg, Germany

5 University of Bern, Bern, Switzerland

6 University of Bern and Heidelberg University, Bern, Switzerland

Email: andreas.baumbach@kip.uni-heidelberg.de

Humans and animals are confronted with incomplete information in a world marked by uncertainty. Their brains have thus faced evolutionary pressure to develop representations of this uncertainty that enable appropriate responses and decisions. This requires considering different interpretations of available sensory input or multiple solutions to an encountered problem. In a neural network, the different coherent interpretations correspond to different attractor states that usually lie far apart in the network's state space. Switching between multiple attractors is thus very difficult, and this mixing problem is particularly challenging for high-dimensional, complex distributions.

We show that cortical oscillations, a ubiquitous phenomenon in the brain [1], can help overcome this problem. We consider biologically plausible, mechanistic models of neural sampling in spiking networks [2,3]. These networks use the cortical background as a means of attaining stochasticity, with the background intensity determining the sensitivity of neuronal transfer functions. Increasing background levels decrease neuronal sensitivity, which renders the probability landscape sampled by the network more uniform. Formally, this creates a correspondence between background firing rates and ensemble temperatures, allowing the interpretation of oscillatory background as a form of simulated tempering [4,5].
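
The correspondence can be illustrated with an abstract binary network in place of the spiking models of [2,3]: Gibbs sampling from a two-attractor Boltzmann machine mixes poorly at a fixed low temperature, while an oscillating inverse temperature, the analogue of an oscillating background rate, periodically flattens the landscape and restores mixing. Weights, schedule, and the attractor label are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
p1, p2 = rng.choice([-1, 1], n), rng.choice([-1, 1], n)
W = (np.outer(p1, p1) + np.outer(p2, p2)) / n        # two-attractor weights
np.fill_diagonal(W, 0.0)

def gibbs(beta_of_t, steps=20000):
    """Gibbs sampling of +-1 units; beta_of_t(t) is the inverse-temperature
    schedule: constant = static background, oscillating = tempering."""
    s = rng.choice([-1, 1], n)
    near_p1 = []
    for t in range(steps):
        i = rng.integers(n)
        h = W[i] @ s                                  # local field on unit i
        s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * beta_of_t(t) * h)) else -1
        near_p1.append(1 if s @ p1 > 0 else -1)       # crude attractor label
    return np.array(near_p1)

const = gibbs(lambda t: 2.0)                          # fixed, cold ensemble
osc = gibbs(lambda t: 2.0 + 1.5 * np.sin(2 * np.pi * t / 2000))
print("attractor switches, constant beta:   ", int(np.abs(np.diff(const)).sum() / 2))
print("attractor switches, oscillating beta:", int(np.abs(np.diff(osc)).sum() / 2))
```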

We exemplify the functional implications of cortical oscillations using two different computational tasks that simultaneously highlight advantages of sampling-based inference. In a hierarchical model of visual processing, the network is tasked with retrieving a diverse set of images from memory (Fig. 1A). Such networks are faced with an exploration–exploitation dilemma: they need to travel wide distances between attractor states in order to sample from all image categories, while still persisting in local attractors for long enough to produce clean outputs. Cortical oscillations, in contrast to static-intensity background, periodically flatten the network's probability landscape, allowing the network to escape attractors and produce a diverse set of crisp images (Fig. 1B). We further consider a multisensory stimulus disambiguation task, where the different modalities receive conflicting input (Fig. 1C). To solve this task, a network needs to form consistent opinions across all modalities and quickly visit all valid interpretations of the stimulus. Cortical oscillations help structure network activity into sampling episodes, during which valid interpretations are highly probable and in between which switches become likely (Fig. 1D).

Our work thus provides a rigorous framework for the suggested functional role of cortical oscillations as a tempering mechanism. It shows that cortical background acts as an ensemble temperature and rhythmic changes can modulate exploration without compromising sampling quality. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination and place cell flickering.

References

1. Buzsaki G. Rhythms of the Brain. Oxford University Press; 2006 Aug 3.

2. Berkes P, Orbán G, Lengyel M, Fiser J. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science. 2011 Jan 7;331(6013):83–7.

3. Petrovici MA, Bill J, Bytschok I, Schemmel J, Meier K. Stochastic inference with spiking neurons in the high-conductance state. Physical Review E. 2016 Oct 20;94(4):042312.

4. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of computational neuroscience. 2000 May;8(3):183–208.

5. Korcsak-Gorzo A, Müller MG, Baumbach A, Leng L, Breitwieser OJ, et al. Cortical oscillations implement a backbone for sampling-based computation in spiking neural networks. arXiv preprint arXiv:2006.11099. 2020 Jun 19.

Fig. 1
figure a

A Hierarchical model of visual processing. B With static background noise, the network is either stuck in individual attractors or produces blurry images. In contrast, cortical oscillations promote diverse and crisp images. C Multisensory ensemble tasked with stimulus disambiguation under conflicting input. D Oscillatory background helps find consistent, valid interpretations

F2 Branch specific dendritic computation in a Purkinje cell

Gabriela Capo Rangel 1 , Erik De Schutter 1

1 Okinawa Institute of Science and Technology, Computational Neuroscience Unit, Onna-Son, Japan

Email: gabriela-capo@oist.jp

Cerebellar Purkinje cells (PCs) are some of the most impressive neurons in the central nervous system due to their unique dendritic morphology. The most remarkable feature of PC dendrites is their extensive branching, which allows them to integrate large amounts of information. While PCs constitute the sole output of the cerebellar cortex, they receive two types of excitatory synaptic input: a single climbing fiber, which forms hundreds of synapses with the PC, and more than 100,000 parallel fibers (PFs) that run orthogonally to the PC dendritic tree [1]. In order to understand cerebellar function, it is important to unveil the mechanisms through which PCs encode input information and transmit output signals to downstream neurons. Unlike the calcium spikes that are triggered by climbing fibers, the dendritic spikes triggered by parallel fibers remain largely unexplored. Recent work [2] has unveiled their essential role in cerebellar function: via in vivo two-photon imaging of cerebellar PFs, the authors determined that clustered parallel fiber input can drive dendritic spikes, postsynaptic calcium signaling, and synaptic plasticity in the downstream Purkinje cells.

We propose a model that is able to explore the bimodal computation in a PC, i.e., tonic firing at low input range and burst-pause dynamics at high input range, and the biophysical mechanism of dendritic spikes. As in previous work [3], we grouped PC spiny dendrites into 22 branches along the main dendrite and distributed a set number of PFs on each branch. Previous research using a model with uniform channel densities in the dendrite [4] has shown that only 4 of the 22 branches exhibit a bimodal step-plateau response with increasing numbers of PF synapses, while the others show a linear response. Here, we show that by altering particular ionic current densities in each branch, we can convert the response from linear to step-plateau for all of the branches. We determine the PF threshold of each branch and discuss how its value correlates with the branch's surface area and volume. For each branch, we also address dendritic spike propagation to the neighboring branches.

References

1. Hirano T. Purkinje neurons: development, morphology, and function.

2. Wilms CD, Häusser M. Reading out a spatiotemporal population code by imaging neighbouring parallel fibre axons in vivo. Nature communications. 2015 Mar 9;6(1):1–9.

3. Zang Y, De Schutter E. The Cellular Electrophysiological Properties Underlying Multiplexed Coding in Purkinje Cells. Journal of Neuroscience. 2021 Mar 3;41(9):1850–63.

4. Zang Y, Dieudonné S, De Schutter E. Voltage-and branch-specific climbing fiber responses in Purkinje cells. Cell reports. 2018 Aug 7;24(6):1536–49.

F3 Salience of low-frequency entrainment to visual signal for classification points to predictive processing in sign language

Evguenia Malaia 1 , Katie Ford 2 , Sean Borneman 3 , Julia Krebs 4 , Brendan Ames 2

1 University of Alabama, Communicative Disorders, Tuscaloosa, AL, United States of America

2 University of Alabama, Mathematics, Tuscaloosa, AL, United States of America

3 Independent Scholar, Bloomington, IN, United States of America

4 University of Salzburg, Department of Linguistics, Salzburg, Austria

Email: eamalaia@ua.edu

Objectively differentiating patient mental states based on electrical activity, as opposed to overt behavior, is a fundamental neuroscience problem with medical applications, such as identifying patients in locked-in state vs. coma. Electroencephalography (EEG), which detects millisecond-level changes in brain activity across a range of frequencies, allows for assessment of external stimulus processing by the brain in a non-invasive manner. We applied machine learning methods to 26-channel EEG data of 24 fluent Deaf signers watching videos of sign language sentences (comprehension condition), and the same videos reversed in time (non-comprehension condition), to objectively separate vision-based high-level cognition states. While spectrotemporal parameters of the stimuli were identical in comprehension vs. non-comprehension conditions, the neural responses of participants varied based on their ability to linguistically decode visual data. We aimed to determine which subset of parameters (specific scalp regions or frequency ranges) would be necessary and sufficient for high classification accuracy of comprehension state.

Optical flow, characterizing the distribution of velocities of objects in an image, was calculated for each pixel of the stimulus videos using the MATLAB Vision toolbox. Coherence between optical flow in the stimulus and the EEG neural response (per video, per participant) was then computed using canonical component analysis with the NoiseTools toolbox. Peak correlations were extracted for each frequency for each electrode, participant, and video. A set of standard ML algorithms was applied to the entire dataset (26 channels, frequencies from 0.2 Hz to 12.4 Hz, binned in 1 Hz increments), with consistent out-of-sample 100% accuracy for frequencies in the 0.2–1 Hz range for all regions, and above 80% accuracy for frequencies < 4 Hz. Sparse Optimal Scoring (SOS) was then applied to the EEG data to reduce the dimensionality of the features and improve model interpretability. SOS with an elastic-net penalty resulted in out-of-sample classification accuracy of 98.89%. The sparsity pattern in the model indicated that frequencies between 0.2–4 Hz were primarily used in the classification, suggesting that the underlying data may be group sparse. Further, SOS with a group lasso penalty was applied to regional subsets of electrodes (anterior, posterior, left, right). All trials achieved greater than 97% out-of-sample classification accuracy. The sparsity patterns from the trials using 1 Hz bins over individual regions consistently indicated that frequencies between 0.2–1 Hz were primarily used in the classification, with the anterior and left regions performing best, at 98.89% and 99.17% classification accuracy, respectively. While the sparsity pattern may not be the unique optimal model for a given trial, the high classification accuracy indicates that these models have accurately identified common neural responses to visual linguistic stimuli. Cortical tracking of spectro-temporal change in the visual signal of sign language thus appears to rely on low frequencies with timescales comparable to those of the N400/P600 time-domain evoked response potentials, indicating that visual language comprehension is grounded in predictive processing mechanisms.
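
A rough Python analogue of this pipeline might look as follows (the original analysis used the NoiseTools toolbox in MATLAB and Sparse Optimal Scoring); the sampling rate, array shapes, band edges, and the logistic-regression classifier are stand-ins chosen for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 250.0  # assumed EEG sampling rate (Hz)

def band_coupling(eeg, flow, lo, hi):
    """Stimulus-response coupling in one frequency band: band-pass both
    signals, then take the first canonical correlation between the
    multichannel EEG and the optical-flow time course."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    E = sosfiltfilt(sos, eeg, axis=0)               # (time, channels)
    F = sosfiltfilt(sos, flow).reshape(-1, 1)       # (time, 1)
    u, v = CCA(n_components=1).fit(E, F).transform(E, F)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# hypothetical data: 48 trials of (time, 26-channel) EEG + optical-flow trace
rng = np.random.default_rng(0)
trials = [(rng.standard_normal((2500, 26)), rng.standard_normal(2500))
          for _ in range(48)]
labels = np.repeat([0, 1], 24)                      # forward vs. reversed video
bands = [(0.2, 1.0), (1.0, 4.0), (4.0, 8.0), (8.0, 12.4)]   # coarse bins (Hz)
X = np.array([[band_coupling(e, f, lo, hi) for lo, hi in bands]
              for e, f in trials])
print(cross_val_score(LogisticRegression(), X, labels, cv=5).mean())
```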

Acknowledgements

We thank NSF for supporting EM with grants #1734938 and #1932547.

F4 Lost neural heterogeneity in human epilepsy is a fundamental principle unifying epileptic etiologies

Scott Rich* 1 , Homeira Moradi Chameh 1 , Jeremie Lefebvre 2 , Taufik Valiante 1

1 Krembil Research Institute, Division of Clinical and Computational Neuroscience, Toronto, Canada

2 University of Ottawa, Department of Biology, Ottawa, Canada

*Email: sbrich@umich.edu

A myriad of pathological changes associated with epilepsy, including the loss of specific cell types [1], improper expression of individual ion channels [2], and synaptic sprouting [3], can all be recast as decreases in cell and circuit heterogeneity. We recently demonstrated that biophysical diversity is a key characteristic of human cortical pyramidal cells in non-epileptogenic tissue [4]. We thus hypothesize that epileptogenesis can be recontextualized as a process in which a reduction in cellular heterogeneity renders neural circuits less resilient to transitions into seizure [5].

By comparing whole-cell patch clamp recordings from layer 5 (L5) human cortical pyramidal neurons from epileptogenic and non-epileptogenic tissue, we present the first direct experimental evidence that a significant reduction in neural heterogeneity accompanies epilepsy. We implement these heterogeneity levels in excitatory-inhibitory (E-I) spiking network models motivated by previous modeling of synchronous cortical activity [6]. Networks with pathological, low levels of neural heterogeneity display unique dynamics typified by a sudden transition into a hyper-active and synchronous state paralleling ictogenesis (see Fig. 1, panel B). Mean-field analysis reveals that these networks also have a distinct mathematical structure distinguished by multi-stability and a saddle-node bifurcation accompanying the seizure-like transition. Furthermore, the mathematically characterized linearizing effect of heterogeneity on input–output response functions [7] explains the counter-intuitive experimentally observed reduction in single-cell excitability of the population of neurons from epileptogenic tissue.
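
The mechanism invoked in the last sentence is easy to see in isolation: averaging rectified single-neuron response functions over a wider threshold distribution smooths and linearizes the population input–output curve. The rectified-linear units and Gaussian threshold spread below are illustrative stand-ins; [7] develops the general mathematical treatment.

```python
import numpy as np

def population_fi(drive, thresholds):
    """Population input-output curve: the mean of single-cell rectified
    responses with heterogeneous thresholds. A wider threshold
    distribution linearizes (smooths) the population curve."""
    return np.maximum(drive[None, :] - thresholds[:, None], 0.0).mean(axis=0)

rng = np.random.default_rng(0)
drive = np.linspace(-2.0, 2.0, 9)
for sd in (0.0, 1.0):                    # homogeneous vs. heterogeneous
    th = rng.normal(0.0, sd, 5000)       # per-neuron firing thresholds
    print(f"threshold sd {sd}:", population_fi(drive, th).round(2))
```

With sd = 0 the population curve inherits the sharp single-cell kink; with sd = 1 it is the Gaussian-smoothed, nearly linear version, which also makes each individual cell appear less excitable at a given drive.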

This joint experimental, computational, and mathematical study showcases that decreased neuronal heterogeneity exists in epileptogenic human cortical tissue, that this difference yields dynamical changes in neural networks paralleling ictogenesis, and that these dynamics have a fundamental explanation rooted in the mathematically characterized effects of heterogeneity. Viewed jointly, these interdisciplinary results provide convincing evidence that biophysical diversity imbues neural circuits with resilience to seizure, and potentially a new lens through which to view epilepsy, one that could reveal new targets for clinical treatment of the most common serious neurological disorder in the world.

References

1. Cossart R, Dinocourt C, Hirsch JC, Merchan-Perez A, De Felipe J, et al. Dendritic but not somatic GABAergic inhibition is decreased in experimental epilepsy. Nature neuroscience. 2001 Jan;4(1):52–62.

2. Arnold EC, McMurray C, Gray R, Johnston D. Epilepsy-induced reduction in HCN channel expression contributes to an increased excitability in dorsal, but not ventral, hippocampal CA1 neurons. Eneuro. 2019 Mar;6(2).

3. Sutula TP, Dudek FE. Unmasking recurrent excitation generated by mossy fiber sprouting in the epileptic dentate gyrus: an emergent property of a complex system. Progress in brain research. 2007 Jan 1;163:541–63.

4. Chameh HM, Rich S, Wang L, Chen FD, Zhang L, et al. Diversity amongst human cortical pyramidal neurons revealed via their sag currents and frequency preferences. Nature communications. 2021 May 3;12(1):1–5.

5. Rich S, Chameh HM, Lefebvre J, Valiante TA. Resilience through diversity: Loss of neuronal heterogeneity in epileptogenic human tissue renders neural networks more susceptible to sudden changes in synchrony. bioRxiv. 2021 Jan 1.

6. Rich S, Hutt A, Skinner FK, Valiante TA, Lefebvre J. Neurostimulation stabilizes spiking neural networks by disrupting seizure-like oscillatory transitions. Scientific reports. 2020 Sep 21;10(1):1–7.

7. Lefebvre J, Hutt A, Knebel JF, Whittingstall K, Murray MM. Stimulus statistics shape oscillations in nonlinear recurrent neural networks. Journal of Neuroscience. 2015 Feb 18;35(7):2895–903.

Fig. 1
figure b

E-I networks subjected to linearly increasing excitatory drive with high excitatory/high inhibitory heterogeneity A and low/low heterogeneity B. Top row: Mean ± standard deviation of excitatory synchrony (red/blue) and excitatory (black) and inhibitory (grey) firing rate. Bottom rows: bifurcation analysis. Purple = unstable oscillator, black = stable oscillator, green = saddle, and yellow = sink

O1 Computation through spiking dynamics of an E-I network with predictive coding

Veronika Koren 1 , Tilo Schwalger 1

1 Technische Universität Berlin, Institute of Mathematics, Berlin, Germany

Email: koren@math.tu-berlin.de

One of the goals of neuroscience is to understand the computational principles that describe the formation of behaviorally relevant signals in the brain, as well as how these computations are realized within the constraints of biological networks. Currently, most functional models of neural activity are based on firing rates, while the most relevant signals for inter-neuron communication are spikes. Recently, the framework of predictive coding [1] has suggested a theory of how neural networks might compute behaviorally relevant signals with spikes. So far, the network with predictive coding has been derived from a single objective function, resulting in a network with one cell type. A model with one cell type, however, does not comply with Dale's law. Moreover, unless spiking is artificially restricted to one spike per time step [1], the regularization terms in the objective function are fine-tuned [2], or Poissonian spike generation is imposed on top of the derived network equations [3], the activity strongly synchronizes and evolves towards states of runaway excitation [2,3].

Here, we extend the theory of predictive coding and develop functional spiking E-I networks that incorporate several important biophysical properties of cortical ensembles. We impose the E-I architecture and derive a general solution for E-I networks that obey Dale's law, account for slow recurrent and local currents with realistic time scales, and have plausible connectivity patterns that can be learned with Hebbian learning. The network does not require fine-tuning of parameters to avoid runaway excitation, shows asynchronous irregular spiking (Fig. 1), and balances excitatory and inhibitory currents by construction.
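
For orientation, the sketch below is a discrete-time caricature of the classical one-cell-type spike-coding network of [1], with the one-spike-per-time-step restriction noted above (the restriction the E-I extension is designed to avoid): a neuron fires whenever its "voltage", the decoding error projected onto its decoding weights, exceeds a threshold derived from the objective. Signal, network size, and decoder are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, lam = 50, 1e-3, 10.0                 # neurons, time step (s), readout decay
D = rng.normal(0.0, 1.0, (1, N)) / np.sqrt(N)   # decoding weights (1-D signal)
T = np.sum(D**2, axis=0) / 2.0              # spike thresholds from the objective
r = np.zeros(N)                             # filtered spike trains
xs, xhat = [], []
for t in np.arange(0.0, 2.0, dt):
    x = np.sin(2.0 * np.pi * t)             # signal to be represented
    V = (D.T @ np.atleast_1d(x - D @ r)).ravel()   # voltage = projected error
    k = int(np.argmax(V - T))               # most supra-threshold neuron
    o = np.zeros(N)
    if V[k] > T[k]:
        o[k] = 1.0                          # at most one spike per step
    r = r + dt * (-lam * r) + o             # leaky readout of the spikes
    xs.append(x); xhat.append((D @ r).item())
err = np.mean((np.array(xs) - np.array(xhat))**2)
print(f"mean squared readout error: {err:.4f}")
```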

We show that the best network solutions occur in inhibition-dominated regimes and in regimes with adaptation. The best solutions are characterized by a moderate temporal E-I balance [4] and by loose E-I balance [5], and they introduce a new scaling with the network size: this scaling does not require changes of connectivity weights, but instead requires changes in the top-down current. By changing the top-down current globally, we model a continuum of dynamical regimes that have been observed in the cortex. A local change of the top-down current to a group of selected neurons instead reproduces dynamical effects of top-down attention, such as an increase in firing rates and a decrease in noise correlations in neurons selective for the attended stimulus feature. Developing a biologically plausible theory of functional networks is extremely important, since it allows us to formulate testable predictions of theoretical models, bridging the gap between theoretical and experimental neuroscience.

References

1. Boerlin M, Machens CK, Denève S. Predictive coding of dynamical variables in balanced spiking networks. PLoS computational biology. 2013 Nov 14;9(11):e1003258.

2. Koren V, Denève S. Computational account of spontaneous activity as a signature of predictive coding. PLoS computational biology. 2017 Jan 23;13(1):e1005355.

3. Rullán Buxó CE, Pillow JW. Poisson balanced spiking networks. PLoS computational biology. 2020 Nov 20;16(11):e1008261.

4. Okun M, Lampl I. Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nature neuroscience. 2008 May;11(5):535–7.

5. Ahmadian Y, Miller KD. What is the dynamical regime of cerebral cortex? arXiv preprint arXiv:1908.10101. 2019 Aug 27.

Fig. 1
figure c

Activity of the E-I network with 400 excitatory and 100 inhibitory units. The network accurately represents three variables in parallel (top three rows). The underlying spiking activity is asynchronous and irregular (spike raster). The bottom plot shows the instantaneous average firing rate of the excitatory (red) and inhibitory (blue) neurons

O2 Self-healing neural codes

Michael Rule 1 , Timothy O'Leary 1

1 University of Cambridge, Department of Engineering, Cambridge, United Kingdom

Email: mrule7404@gmail.com

The relationship between neuronal activity and the external world changes over time, even for habitual behaviors. This phenomenon, termed "representational drift", seems to be at odds with long-term stable neural representations. Previous studies have shown that gradual drift in neuronal tuning (i.e., average firing rates conditioned on behavioral variables) could be tracked using weak error feedback. In this work, we show how stable representations could be achieved without external error feedback. We present a model of representational drift that captures features of neural population codes observed experimentally: tunings are typically stable, but occasionally undergo larger reconfigurations. We then discuss "self-healing neural codes", which combine error correction with plasticity. Self-healing codes can track drift without outside error feedback. The learning rule required is biologically plausible, and amounts to a form of homeostatic Hebbian plasticity. When combined with network interactions that allow neurons to share information, such homeostatic plasticity could allow a population of stable cells to maintain an accurate readout of an unstable population code (Fig. 1).
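
To give a flavor of such a rule, the sketch below combines a Hebbian term with variance homeostasis to keep a linear readout aligned with the subspace of a drifting code, with no error signal anywhere in the update, while a fixed readout drifts away. The full model additionally uses recurrent error correction to stabilize the readout within that subspace (Fig. 1d); all sizes, rates, and the drift model here are invented.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
N, K, eta = 200, 2, 0.1
U = rng.normal(0.0, 1.0, (N, K)) / np.sqrt(N)   # encoding weights (will drift)
W0 = np.linalg.pinv(U).copy()                   # fixed readout (control)
W = W0.copy()                                   # plastic readout

def misalignment(W, U):
    # largest principal angle between readout rows and the code's subspace
    return np.degrees(subspace_angles(W.T, U).max())

for day in range(300):
    U += 0.03 * rng.normal(0.0, 1.0, (N, K)) / np.sqrt(N)   # slow drift
    z = rng.normal(0.0, 1.0, (K, 200))                      # latent variables
    x = U @ z                                               # population code
    y = W @ x                                               # current readout
    # Hebbian term grows weights along correlated activity directions;
    # the homeostatic term holds each readout unit's output variance near 1.
    # No external error signal enters this update.
    W += eta * (y @ x.T / x.shape[1] - W * y.var(axis=1, keepdims=True))

print("fixed readout misalignment:   %.1f deg" % misalignment(W0, U))
print("plastic readout misalignment: %.1f deg" % misalignment(W, U))
```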

Fig. 1
figure d

a Neural populations can show fixed latent dynamics but "drifting" neuronal tunings. Is a stable readout of an unstable code possible without external error feedback? b Drift degrades a readout with fixed weights. c Homeostatically preserving firing statistics and Hebbian plasticity confer stability. d Recurrent dynamics reflect internal models, tracking long-term drift via error correction

O3 A neuron-to-network-to-neuron computational model of state-dependent computation in the hypothalamus

Samuel Mestern 1 , Gabriel Benigno 2 , Aoi Ichiyama 1 , Wataru Inoue 1 , Lyle Muller 3

1 Western University, Robarts Research Institute, London, Canada

2 Western University, Applied Mathematics, London, Canada

3 Western University, Department of Applied Mathematics, London, Canada

Email: smestern@uwo.ca

How do single-neuron and network properties combine to create biological function and computation? The interaction between properties of individual neurons and their pattern of connections can translate into a vast array of dynamics at the network level. However, it remains difficult to probe precisely how individual neuron and network-level properties contribute to dynamics and biological function. In this work, we study the interaction of single-neuron and network-level properties in the hypothalamic stress circuit to understand how neuron properties result in stress-dependent switches in hormonal output.

Despite extensive research, surprisingly little is known about how hypothalamic circuits encode states of homeostasis and mount stress responses to threats [1]. The Inoue Lab (Robarts Research Institute, Western University, Canada) has recently established an in vivo single-unit extracellular recording paradigm in a group of hypothalamic neurons that regulate hormonal stress responses in mice. These neurons show a stress-dependent spiking profile characterized by (1) brief bursts (2–5 spikes) of high-frequency (> 100 Hz) firing followed by a long, predominantly silent period (500 ms–1 s), constraining the overall firing rate to low levels (~3 Hz), or (2) single and more continuous spiking with variable spike frequency. Under stress, these neurons fire exclusively in the single-spike mode and reach a relatively high firing rate (20 Hz). However, when characterized in slices ex vivo, these same neurons rarely show the brief bursts and predominantly show single-spike patterns [2]. This difference between in vivo and ex vivo firing patterns indicates that intact network activity underlies the firing patterns responsible for homeostatic regulation in vivo.

Using data from in vitro whole-cell patch-clamp recordings, we first developed an adaptive exponential integrate-and-fire (AdEx) model to capture the subthreshold membrane potential and spiking dynamics following standard current injection protocols. We next embedded our single-neuron models in a network of excitatory and inhibitory populations. Using this model, we replicated the stress-dependent firing patterns seen in vivo. The computational model revealed a discrete combination of intrinsic and network factors that drive the transition between the two firing modes. Finally, we returned to ex vivo whole-cell patch-clamp and injected into recorded cells the synaptic input received by a model cell in the computational network. Remarkably, this model-guided current injection reliably replicated the two distinct firing modes found in vivo.
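
The single-neuron building block can be sketched in a few lines: a forward-Euler AdEx neuron whose spike-triggered adaptation jump and reset potential move it between burst-silence and single-spike firing. The parameter values below follow published AdEx firing-type examples, not the values fitted to the recorded hypothalamic neurons.

```python
import numpy as np

def adex(I, b, v_reset, t_max=2000.0, dt=0.05):
    """Adaptive exponential integrate-and-fire neuron, forward Euler.
    Units: mV, ms, nA, nF, uS. The spike-triggered adaptation jump b (nA)
    and the reset potential v_reset (mV) switch the model between tonic
    and burst-silence firing. Illustrative textbook parameters."""
    C, gL, EL, VT, DT = 0.2, 0.01, -58.0, -50.0, 2.0
    a, tau_w = 0.002, 120.0
    v, w, spikes = EL, 0.0, []
    for k in range(int(t_max / dt)):
        dv = (-gL * (v - EL) + gL * DT * np.exp((v - VT) / DT) - w + I) / C
        w += dt * (a * (v - EL) - w) / tau_w
        v += dt * dv
        if v > 0.0:                   # spike detected: reset and adapt
            v = v_reset
            w += b
            spikes.append(k * dt)
    return np.array(spikes)

# burst-like mode: large adaptation jump, elevated reset
print("bursty ISIs (ms):", np.diff(adex(I=0.21, b=0.10, v_reset=-46.0))[:8].round(1))
# single-spike (tonic) mode: small jump, low reset
print("tonic ISIs (ms): ", np.diff(adex(I=0.21, b=0.005, v_reset=-58.0))[:8].round(1))
```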

Our work presents a novel computational model of a hypothalamic homeostasis circuit and new results in validating network models in experiments. More generally, we demonstrate the power of simplified single neuron models that allow us to move back-and-forth between in silico and ex vivo experiments and generate new predictions in tight collaboration between modelling and experiment in computational neuroscience.

References

1. Daviu N, Füzesi T, Rosenegger DG, Rasiah NP, Sterley TL, et al. Paraventricular nucleus CRH neurons encode stress controllability and regulate defensive behavior selection. Nature neuroscience. 2020 Mar;23(3):398–410.

2. Yuan Y, Wu W, Chen M, Cai F, Fan C, et al. Reward inhibits paraventricular CRH neurons to relieve stress. Current Biology. 2019 Apr 1;29(7):1243–51.

O4 Cortex-wide dynamics of intrinsic electrical activities: Propagating waves and their interactions

Yuqi Liang 1 , Pulin Gong 2 , Chenchen Song 3 , Mianxin Liu 4 , Changsong Zhou 1 , Thomas Knöpfel 3

1 Hong Kong Baptist University, Physics, Hong Kong, Hong Kong

2 University of Sydney, Australia, School of Physics, Sydney, Australia

3 Imperial College London, Laboratory for Neuronal Circuit Dynamics, London, United Kingdom

Email: liang13585200059@sina.com

Cortical circuits generate patterned activities that reflect intrinsic brain dynamics, which lay the foundation for any cognition and behavior, including stimulus-evoked responses. However, the spatiotemporal organization of this intrinsic activity has only been partially elucidated, owing to the limited resolution of earlier experimental data and analysis methods. Here we investigated continuous wave patterns in data from high spatiotemporal resolution optical voltage imaging of the upper cortical layers in anesthetized mice. Waves of population activity propagate in heterogeneous directions to coordinate neuronal activities between different brain regions. The complex wave patterns show characteristics of both stereotypy and variety. The location and type of wave patterns determine the dynamical evolution when different waves interact with each other. Local wave patterns of source, sink, or saddle type emerge at preferred spatial locations. Specifically, 'source' patterns are predominantly found in cortical regions with low multimodal hierarchy, such as the primary somatosensory cortex. Our findings reveal principles that govern the spatiotemporal dynamics of spontaneous cortical activities and associate them with the structural architecture across the cortex. For details, see our recently published paper [1].
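
One common recipe for finding such local patterns (not necessarily the exact pipeline of [1]) is to take the phase of the analytic signal at each pixel, treat the negative phase gradient as a flow field, and classify near-stationary points of that flow by the trace and determinant of its Jacobian. The synthetic target wave below is purely illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def classify_flow_points(frames):
    """Classify local wave patterns (source / sink / saddle) from the
    phase-gradient flow of imaging data. frames: (time, ny, nx) array;
    returns a label map for the middle frame."""
    phase = np.angle(hilbert(frames, axis=0))      # analytic-signal phase
    ph = phase[frames.shape[0] // 2]
    vy, vx = np.gradient(-ph)                      # flow ~ minus phase gradient
    dvx_dy, dvx_dx = np.gradient(vx)               # Jacobian of the flow field
    dvy_dy, dvy_dx = np.gradient(vy)
    trace = dvx_dx + dvy_dy
    det = dvx_dx * dvy_dy - dvx_dy * dvy_dx
    speed = np.hypot(vx, vy)
    labels = np.full(ph.shape, "plane", dtype=object)
    still = speed < 0.2 * speed.mean()             # near-stationary flow
    labels[still & (det < 0)] = "saddle"
    labels[still & (det > 0) & (trace > 0)] = "source"
    labels[still & (det > 0) & (trace < 0)] = "sink"
    return labels

# demo: an outward-propagating target wave has a source at its center
# (phase wrapping away from the center is ignored in this toy example)
t, y, x = np.meshgrid(np.arange(64), np.arange(40), np.arange(40), indexing="ij")
r = np.hypot(y - 20.0, x - 20.0)
frames = np.cos(0.4 * t - 0.5 * r)
print(classify_flow_points(frames)[18:23, 18:23])
```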

Acknowledgements

This work was supported by Hong Kong Baptist University (HKBU) Strategic Development Fund, the Hong Kong Research Grant Council (GRF12200217) and the National Science Foundation of China (No. 11975194) to CSZ, the National Institutes of Health BRAIN initiative grants 1U01MH109091 and 5U01NS099573 to TK and the Australian Research Council (Grant DP160104316) to PG.

References

1. Liang Y, Song C, Liu M, Gong P, Zhou C, et al. Cortex-wide dynamics of intrinsic electrical activities: propagating waves and their interactions. Journal of Neuroscience. 2021 Apr 21;41(16):3665–78.

O5 A computational model of the dopaminergic modulation of hippocampal Schaffer collateral-CA1 long-term plasticity

Gautam Kumar 1 , Joseph Schmalz 2

1 University of Idaho, Chemical and Biological Engineering, Moscow, ID, United States of America

Email: gkumar@uidaho.edu

Dopamine is a critical neuromodulator that modulates the long-term synaptic plasticity of hippocampal Schaffer collateral-CA1 pyramidal neuron (SC-CA1) synapses in a dose-dependent manner. Over the last four decades, limited experimental results from hippocampal slice experiments have shown that the timing of the activation of dopamine D1/D5 receptors relative to a high/low-frequency stimulation (HFS/LFS) of SC-CA1 synapses regulates the modulation of HFS/LFS-induced long-term potentiation/depression (LTP/LTD) in these synapses. However, the existing literature lacks a complete picture of how various concentrations of D1/D5 agonists, and the relative timing between the activation of D1/D5 receptors and LTP/LTD induction by HFS/LFS, affect the spatiotemporal modulation of SC-CA1 synaptic dynamics.

Exploring the effects of various concentrations of different dopamine agonists combined with different frequency-dependent stimulation protocols, such as HFS or LFS to induce LTP or LTD, respectively, is a combinatorially challenging problem: the experiments required to fill these gaps in knowledge would be prohibitively expensive and time-consuming. To address this challenge, we have developed a computational modeling approach that integrates the spatiotemporal impact of D1/D5 agonists on HFS/LFS-induced early and late LTP/LTD at the electrophysiological level. Our modeling hypothesis is that the chains of biochemical signaling initiated by HFS/LFS and by D1/D5 receptor agonists compete for a limited pool of biochemical resources to induce and/or modulate late LTP/LTD in hippocampal SC-CA1 synapses. Our model combines these biochemical effects with the electrical effects at the electrophysiological level. We estimated the model parameters in a Bayesian framework from published electrophysiological data available from diverse hippocampal CA1 slice experiments.
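
The competition hypothesis can be caricatured with a toy ODE system in which the HFS-triggered pathway and the dopamine pathway draw on a single shared resource pool, so that the relative timing of the two inputs changes the final amount of late LTP. The form of the equations and all rate constants are invented for illustration; this is not the fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s, hfs, da):
    """Toy resource-competition model. p: HFS-triggered plasticity signal,
    d: D1/D5 agonist signal, r: shared biochemical resource pool,
    L: consolidated (late) LTP. All rate constants are invented."""
    p, d, r, L = s
    dp = hfs(t) * r - 0.5 * p                     # HFS pathway consumes r
    dd = da(t) * r - 0.2 * d                      # dopamine pathway consumes r
    dr = 0.1 * (1.0 - r) - r * (hfs(t) + da(t))   # slow replenishment
    dL = 0.8 * p * (1.0 + 2.0 * d) - 0.01 * L     # agonist boosts consolidation
    return [dp, dd, dr, dL]

hfs = lambda t: 5.0 if 10.0 < t < 11.0 else 0.0   # HFS at t = 10 min
for lag in (-5.0, +5.0):                          # agonist before vs. after HFS
    da = lambda t, lag=lag: 1.0 if 10.0 + lag < t < 12.0 + lag else 0.0
    sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0, 1.0, 0.0],
                    args=(hfs, da), max_step=0.1)
    print(f"agonist onset {lag:+.0f} min relative to HFS: "
          f"late LTP = {sol.y[3, -1]:.2f}")
```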

Here, we demonstrate the capability of our model in making quantitative predictions of the available data from in vitro slice experiments on the temporal dose-dependent modulation of the HFS/LFS induced LTP/LTD in SC-CA1 synapses by various D1/D5 agonists (see Fig. 1). Moreover, we highlight the importance of the relative timing between the release of the D1/D5 agonists at various concentrations and the HFS/LFS protocol in modulating LTP/LTD of the SC-CA1 synapse.

Fig. 1
figure e

Quantitative comparison between the model-predicted and experimentally observed modulation of HFS-induced LTP in the hippocampal SC-CA1 synapse by the D1/D5 agonist SKF 38393

O6 Age-dependent increase of sag current in human pyramidal neurons dampens noise in cortical sensory processing

Alexandre Guet McCreight 1 , Margaret Wishart 1 , Homeira Moradi Chameh 2 , Shreejoy J. Tripathy 1 , Taufik Valiante 3 , Etay Hay 4

1 Centre for Addiction and Mental Health, Krembil Centre for Neuroinformatics, Toronto, Canada

2 University Health Network, Krembil Research Institute, Toronto, Canada

3 Krembil Research Institute, Division of Clinical and Computational Neuroscience, Toronto, Canada

4 Centre for Addiction and Mental Health, University of Toronto, Krembil Centre for Neuroinformatics, Psychiatry, Physiology, Toronto, Canada

Email: agmccrei@gmail.com

Aging involves a variety of neurobiological changes, although their effect on brain function remains poorly understood due to limited experimental capabilities in humans. The growing availability of human neuronal and circuit data provides an opportunity to uncover age-dependent changes at finer scales of brain networks and constrain detailed computational models to study the related effects on brain function. Here we analyzed sag voltage in human layer 5 pyramidal neurons and found a significant increase in old vs. young. We then generated models of young and old pyramidal neurons capturing the experimental changes and simulated them in layer 5 microcircuits. We found that old microcircuits had lower baseline and response rates than young microcircuits, but an overall enhanced signal-to-noise ratio due to a larger effect on baseline firing rates. Accordingly, the reduced noise in microcircuit output with age enabled a higher accuracy of stimulus discrimination. These age effects were principally due to changes in dendritic conductance mechanisms underlying the measured changes in sag properties. Our results report an age-dependent increase in human pyramidal neuron sag current, which reduced cortical firing noise and improved sensory processing in simulated microcircuits, and thus could serve as a target for modulation to ameliorate age-associated cognitive decline.
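
Since the analysis hinges on sag measurements, the sketch below shows one common way to quantify sag from a hyperpolarizing current step (rebound from the peak deflection toward steady state, normalized by the peak deflection); the synthetic trace and time constants are invented, and the study's own measurement details may differ.

```python
import numpy as np

def sag_ratio(v, t, step_on, step_off):
    """Sag from a hyperpolarizing step, one common definition:
    (steady-state minus peak deflection) / (baseline minus peak deflection)."""
    base = v[t < step_on].mean()
    during = v[(t >= step_on) & (t < step_off)]
    t_dur = t[(t >= step_on) & (t < step_off)]
    v_min = during.min()                            # peak hyperpolarization
    v_ss = during[t_dur > step_off - 0.1].mean()    # last 100 ms of the step
    return (v_ss - v_min) / (base - v_min)

# hypothetical trace: exponential hyperpolarization plus an Ih-like rebound
t = np.arange(0.0, 2.0, 1e-4)                       # seconds
v = -70.0 \
    - 12.0 * ((t > 0.5) & (t < 1.5)) * (1.0 - np.exp(-(t - 0.5) / 0.05)) \
    + 4.0 * ((t > 0.6) & (t < 1.5)) * (1.0 - np.exp(-(t - 0.6) / 0.15))
print("sag ratio: %.2f" % sag_ratio(v, t, 0.5, 1.5))
```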

O7 Astrocytic modulation of synaptic transmission and plasticity in developing somatosensory cortex

Tiina Manninen 1 , Ausra Saudargiene 2 , Marja-Leena Linne 1

1 Tampere University, Faculty of Medicine and Health Technology, Tampere, Finland

2 Lithuanian University of Health Sciences, Neuroscience Institute, Kaunas, Lithuania

Email: tiina.manninen@tuni.fi

Astrocytes have been shown to play important roles in several phenomena in the brain, such as synapse development, functionality, and plasticity [1], but the underlying biochemical and biophysical mechanisms are not understood. The mechanisms involved seem to depend, for example, on the developmental stage of the animal and the brain area in question. Recent experimental studies have also shown that fine astrocyte processes are increasingly active and motile during synaptic activation, particularly during long-term plasticity changes [2,3]. Such activity and motility may occur when a fine astrocyte process retracts from a synapse during learning or in injury, making it possible for synaptically released glutamate to spill over from the synaptic cleft to the extrasynaptic space. In the present study, we used our previously developed in silico layer 4 to layer 2/3 tripartite synapse model of the somatosensory cortex during postnatal development [4] to explore and predict the amount of glutamate spillover required to induce spike-timing-dependent long-term depression (t-LTD), both with and without fine astrocyte process activation. The model includes presynaptic, postsynaptic, and astrocytic mechanisms and links them to the time window of t-LTD induction, which is sensitive to the temporal difference between postsynaptic and presynaptic activity [5,6]. We showed that an endocannabinoid-based feedback signal from the postsynaptic to the presynaptic neuron via the fine astrocyte process is able to induce and maintain a long-lasting decrease in synaptic transmission during postnatal development. Our results also showed that the strength of t-LTD can be modulated by the amount of glutamate spillover (Fig. 1). Developing sensory circuits are known to undergo synapse elimination, which is essential for the formation of mature neuronal circuits. Astrocytic modulation of synaptic depression, including by the active and motile fine astrocyte processes, may therefore be one important step in preparing neuronal circuits for mature cortical sensory processing.
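
To make the spillover-dependence concrete, here is a toy spike-timing window in which the depression magnitude for post-before-pre pairings scales with the spillover fraction; the functional form and every constant are hypothetical stand-ins for the full biophysical model of [4].

```python
import numpy as np

def t_ltd(delta_t, spillover, a_minus=0.4, tau=25.0):
    """Illustrative t-LTD window: depression for post-before-pre timings
    (delta_t = t_post - t_pre < 0, in ms), decaying with |delta_t| and
    scaled by the glutamate spillover fraction. Hypothetical parameters."""
    return np.where(delta_t < 0.0,
                    -a_minus * spillover * np.exp(delta_t / tau),
                    0.0)

dts = np.array([-100.0, -50.0, -25.0, -10.0])     # pairing timings (ms)
for s in (0.2, 0.5, 1.0):                         # spillover fractions
    print(f"spillover {s:.0%}: weight change", t_ltd(dts, s).round(3))
```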

Acknowledgements

This research has received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreement No. 785907 (Human Brain Project SGA2) and No. 945539 (Human Brain Project SGA3). Funding has also been received from the Academy of Finland through grants No. 297893, 318879, 326494, and 326495.

References

1. Allen NJ, Eroglu C. Cell biology of astrocyte-synapse interactions. Neuron. 2017 Nov 1;96(3):697–708.

2. Bernardinelli Y, Randall J, Janett E, Nikonenko I, König S, et al. Activity-dependent structural plasticity of perisynaptic astrocytic domains promotes excitatory synapse stability. Current Biology. 2014 Aug 4;24(15):1679–88.

3. Sakers K, Lake AM, Khazanchi R, Ouwenga R, Vasek MJ, et al. Astrocytes locally translate transcripts in their peripheral processes. Proceedings of the National Academy of Sciences. 2017 May 9;114(19):E3830-8.

4. Manninen T, Saudargiene A, Linne ML. Astrocyte-mediated spike-timing-dependent long-term depression modulates synaptic properties in the developing cortex. PLoS computational biology. 2020 Nov 10;16(11):e1008360.

5. Min R, Nevian T. Astrocyte signaling controls spike timing–dependent depression at neocortical synapses. Nature neuroscience. 2012 May;15(5):746–53.

6. Banerjee A, González‐Rueda A, Sampaio‐Baptista C, Paulsen O, Rodríguez‐Moreno A. Distinct mechanisms of spike timing‐dependent LTD at vertical and horizontal inputs onto L2/3 pyramidal neurons in mouse barrel cortex. Physiological reports. 2014 Mar;2(3):e00271.

Fig. 1
figure f

Larger glutamate spillover produced stronger t-LTD. A Excitatory postsynaptic potentials (EPSPs) are shown before (black) and after t-LTD induction (other colors than black) of all spillover percentages for both models, the original model with the astrocyte and the model without the astrocyte. B The change of EPSPs seen in A is shown as a function of spillover percentages for both models

O8 Dynamics of ramping bursts in a respiratory neuron model

Muhammad Abdulla 1 , Ryan Phillips 2 , Jonathan Rubin 2

1 University of Florida, Department of Mathematics, Gainesville, FL, United States of America

2 University of Pittsburgh, Department of Mathematics, Pittsburgh, PA, United States of America

Email: muhammadabdulla@ufl.edu

Intensive computational and theoretical work has led to the development of multiple mathematical models for bursting in respiratory neurons in the pre-Bötzinger Complex (pre-BötC) of the mammalian brainstem. Nonetheless, these previous models have not captured the preinspiratory ramping aspects of these neurons' activity patterns, in which relatively slow tonic spiking gradually progresses to faster spiking and a full-blown burst, with a corresponding gradual development of an underlying plateau potential. In this work, we show that incorporating the dynamics of the extracellular potassium ion concentration into an existing model for pre-BötC neuron bursting, along with some parameter updates, suffices to induce this ramping behavior. Using fast-slow decomposition, we show that this activity can be considered a form of parabolic bursting, but with burst termination at a homoclinic bifurcation rather than at a SNIC bifurcation (Fig. 1). We also investigate the parameter-dependence of these solutions and show that the proposed model yields a greater dynamic range of burst frequencies, durations, and duty cycles than those produced by other models in the literature.
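
The ramp mechanism can be caricatured by a discrete ISI map (not the conductance-based model of the study): each spike adds K+ to the extracellular space, the raised [K+]o depolarizes E_K and increases the effective drive, and a QIF-like rate-drive relation converts drive into progressively shorter interspike intervals. All constants are illustrative.

```python
import numpy as np

# Minimal ISI-map caricature of the ramp: every spike adds K+ to the
# extracellular space; higher [K+]o depolarizes E_K and so increases the
# effective drive; a QIF-like f-I relation (f ~ sqrt(drive)) converts the
# drive into a rate. ISIs therefore shorten from spike to spike.
k_rest, k_o = 3.0, 3.0          # resting / current extracellular K+ (mM)
tau_clear = 2.0                 # glial clearance time constant (s)
for spike in range(12):
    drive = 0.5 + 0.6 * (k_o - k_rest)       # E_K-dependent depolarization
    isi = 1.0 / np.sqrt(drive)               # QIF-like rate-drive relation
    print(f"spike {spike:2d}: [K+]o = {k_o:.2f} mM, next ISI = {isi:.3f} s")
    k_o = k_rest + (k_o - k_rest) * np.exp(-isi / tau_clear) + 0.25
```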

Acknowledgements

This work was partially supported by NSF awards DMS-1612913 and DMS-1950195. Additional funding was provided by the University of Florida through the Wentworth Travel Scholarship and the University Scholars Program. The authors would like to acknowledge the Program in Neural Computation at the Center for the Neural Basis of Cognition for their help in facilitating this research collaboration.

Fig. 1
figure g

A The trajectory of the neuron (gradient) drifts towards higher EK values, facilitating a gradual increase in spike frequency throughout the burst and producing the characteristic ramping geometry. The bursting behavior is terminated by interaction with the stable manifold (blue). B Tracing local extrema reveals a path along curves of periodic behavior (green), computationally derived from individual Hopf points (red)

O9 Non-synaptic interactions between olfactory receptor neurons: A possible key feature of odor processing in insects

Mario Pannunzi 1 , Thomas Nowotny 2

1 University of Sussex, Department of Informatics, Brighton, United Kingdom

2 University of Sussex, School of Engineering and Informatics, Brighton, United Kingdom

Email: mario.pannunzi@gmail.com

The olfactory organs of insects are their antennae, on which olfactory receptor neurons (ORNs) are housed in evaginated sensilla. Several ORNs are grouped together in the same sensillum: between 2 and 4 in Drosophila, but up to 20, for instance, in the sensilla of the honeybee. Unlike most other neurons, in particular in vertebrates, the ORNs in the sensilla are not isolated from each other by myelin, and they are known to interact with (inhibit) each other non-synaptically (see Fig. 1a). Moreover, the pairings of ORNs expressing specific olfactory receptor types are stereotypical, suggesting that the interactions may be functional or selected for rather than accidental.

In this work we present the results of an in-depth modelling study that elucidates possible functions of non-synaptic interactions (NSIs) of ORNs in sensilla. A number of hypotheses for the potential roles of NSIs have been suggested in the literature [1,2]. To investigate the viability of these ideas we built a computational model of the first two stages of information processing in the Drosophila olfactory system: the ORNs on the antennae and the glomeruli in the antennal lobe (AL), in which projection neurons (PNs) and local neurons (LNs) interact to form the olfactory code transmitted to higher brain centres. Our model is the first to consider NSIs between ORNs in the context of downstream processing in the AL. We constrained the model by reproducing the responses of ORNs to typical odor stimuli as reported in the literature [3,4]. With the data-driven model we then tested the following hypotheses: 1) NSIs could improve the identification of the concentration ratio of a mixture of odorants by increasing the dynamic range over which it can be perceived (see Fig. 1b). 2) NSIs could help insects distinguish mixtures of odorants emanating from a single source from those emanating from two separate sources, by improving the capacity to encode the correlation between olfactory stimuli (see Fig. 1c). 3) NSIs could increase the dynamic range of the receptor neurons by partially removing the ceiling effect that occurs at high concentrations (see Fig. 1b).
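
As a toy illustration of the NSI mechanism itself (Fig. 1a), consider two Hill-type ORNs sharing a sensillum, each shunting its neighbour's effective drive: driving one neuron suppresses the other's response only when the interaction is present. Parameters are invented, and this is far simpler than the full antennal-lobe model.

```python
def sensillum_pair(c_a, c_b, nsi):
    """Steady-state responses of two ORNs in one sensillum. Each ORN is a
    saturating (Hill-type) transducer of its own odorant; the non-synaptic
    interaction of strength `nsi` lets each neuron's activity reduce its
    neighbour's effective drive. Relaxed to fixed point by iteration."""
    r_a = r_b = 0.0
    for _ in range(500):
        drive_a = c_a / (c_a + 1.0) * (1.0 - nsi * r_b)
        drive_b = c_b / (c_b + 1.0) * (1.0 - nsi * r_a)
        r_a += 0.1 * (drive_a - r_a)
        r_b += 0.1 * (drive_b - r_b)
    return r_a, r_b

# co-stimulating the neighbour suppresses ORN-A only when NSIs are present
for nsi in (0.0, 0.3):
    alone, _ = sensillum_pair(1.0, 0.0, nsi)
    paired, _ = sensillum_pair(1.0, 5.0, nsi)
    print(f"NSI strength {nsi}: ORN-A alone {alone:.2f}, "
          f"with neighbour driven {paired:.2f}")
```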

In order to assess the benefits of NSIs for mixture processing (hypotheses 1 and 2), we tested the model network with NSIs in place against a control network in which there was no interaction between "odour channels" of different receptor types, and against a network without NSIs but with lateral inhibition in the AL, a mechanism proposed to provide the same benefits with respect to hypotheses 1 and 2 as NSIs. We found that NSIs improve mixture ratio detection and plume structure sensing as hypothesised, and that they do so more efficiently than the traditionally considered lateral inhibition mechanism in the antennal lobe. However, we also found that the dynamic range of ORNs is not improved by NSIs over the model with non-interacting ORNs, casting a new light on earlier results obtained with a mathematical model of the steady-state activation of ORNs [5].

Acknowledgements

This research was funded by the Human Frontiers Science Program, grant RGP0053/2015 (Odor Objects), the European Union under grant agreement 785907 (HBP SGA2), and a Leverhulme Trust Research Project Grant.

References

1. Todd JL, Baker TC. Function of peripheral olfactory organs. In Insect olfaction 1999 (pp. 67–96). Springer, Berlin, Heidelberg.

2. Su CY, Menuz K, Reisert J, Carlson JR. Non-synaptic inhibition between grouped neurons in an olfactory circuit. Nature. 2012 Dec;492(7427):66–71.

3. Martelli C, Carlson JR, Emonet T. Intensity invariant dynamics and odor-specific latencies in olfactory receptor neuron response. Journal of Neuroscience. 2013 Apr 10;33(15):6285–97.

4. Kim AJ, Lazar AA, Slutskiy YB. System identification of Drosophila olfactory sensory neurons. Journal of computational neuroscience. 2011 Feb 1;30(1):143–61.

5. Vermeulen A, Rospars JP. Why are insect olfactory receptor neurons grouped into sensilla? The teachings of a model investigating the effects of the electrical interaction between neurons on the transepithelial potential and the neuronal transmembrane potential. European Biophysics Journal. 2004 Nov 1;33(7):633–43.

Fig. 1
figure h

a NSI effect, b Hyp. n.1 and 3: Inhibition via NSI can help to increase the dynamic range or it could help encoding the ratio between odorants. At low concentration, the ratio of two odorants can be encoded by ORNs more easily than at high concentration. c-d Hyp. n.2: Odorant mixture emitted from a single source will be more correlated than for odorants emitted from separate sources (d)

O10 Structure in the population code increases along the auditory cortical hierarchy

Clélia De Mulatier 1 , Jean-Hugues Lestang 2 , Songhan Zhang 3 , Jaejin Lee 2 , Vijay Balasubramanian 4 , Yale E. Cohen 5

1 University of Amsterdam, Amsterdam, Netherlands

2 University of Pennsylvania, Departments of Otorhinolaryngology, Philadelphia, PA, United States of America

3 The University of Chicago, Committee on Computational Neuroscience, Chicago, IL, United States of America

4 University of Pennsylvania, Department of Physics and Astronomy, Computational Neuroscience Initiative, Philadelphia, PA, United States of America

5 University of Pennsylvania, Departments of Otorhinolaryngology, Departments of Bioengineering, Departments of Neuroscience, Philadelphia, PA, United States of America

Email: clelia@sas.upenn.edu

Perceptual, cognitive, and motor functions are mediated by neural circuits (i.e., the coordinated activity of neural populations), and not solely by the activity of individual neurons. However, despite their fundamental importance, we know relatively little about how neural circuits contribute to auditory processing – particularly in primate models of hearing – and whether and how this circuitry changes between earlier and deeper cortical regions. Using established and novel techniques, we analyzed the functional connectivity structure of neural populations from the core and belt regions of the auditory cortex (AC) in non-human primates. We recorded neural activity in different regions of the AC in two rhesus monkeys while they listened passively to two successive repetitions of a dynamic moving ripple (DMR) stimulus.

Our first analysis describes the activity in terms of maximum entropy models with pairwise neuron-to-neuron interactions. These models are constrained to reproduce the observed firing rates and pairwise correlations, while making no assumptions about their mechanistic origin. Such models have already been successful at modeling population activity in the retina and prefrontal cortex. Our approach used an additional information theoretic criterion that prevents overfitting by selecting the model with the minimal number of interactions that still reasonably fits the data. We found consistency in the set of selected pairwise interactions between repetitions of the stimulus (see Fig. 1a). Comparing AC areas, we found that the density of interactions needed to capture the neuronal activity is significantly larger in the belt than in the core. This means that belt areas display more prominent correlation patterns than core areas.
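
As an illustration of this first analysis, the core fitting step of a pairwise maximum entropy (Ising-type) model can be sketched in a few lines of Python; this enumeration-based version is only tractable for small populations and omits the information-theoretic interaction-selection criterion described above:

```python
import itertools
import numpy as np

def fit_pairwise_maxent(spikes, n_iter=2000, lr=0.1):
    """Fit fields h and pairwise couplings J of a maximum entropy model
    P(s) ~ exp(h.s + s.J.s) to binarized spike data (n_samples, n_neurons).
    Exact enumeration: practical only for ~15 neurons or fewer."""
    n = spikes.shape[1]
    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    emp_mean = spikes.mean(0)                      # observed firing rates
    emp_corr = (spikes.T @ spikes) / len(spikes)   # observed pairwise moments
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(n_iter):
        energy = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(energy)
        p /= p.sum()                               # model distribution over 2^n states
        mod_mean = p @ states
        mod_corr = states.T @ (states * p[:, None])
        h += lr * (emp_mean - mod_mean)            # match first moments
        J += lr * np.triu(emp_corr - mod_corr, 1)  # match second moments
    return h, J
```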

Our second analysis focused on detection of groups of neurons with coordinated activity. We compared a known statistical approach, using dimensionality reduction to detect neuronal assemblies, with a newly developed method based on maximum entropy models with community-like structure. Crucially, the second approach accounts for high-order neural activity patterns (i.e. multi-neuronal activity motifs) in the detection of communities of correlated neurons. The model selection is based on information theoretic criteria balancing goodness-of-fit and model complexity. We observed that assemblies and communities identified by the two methods are detected in similar numbers and present similar features (see Fig. 1b-c). Comparing AC areas, we found that assembly structures are sparser in belt than in core areas, meaning that assembly activity is driven by fewer neurons in the belt.

Together, our analyses indicate that in the belt, as opposed to the core, information is encoded by the collective activity of larger communities but is driven by a smaller number of highly influential neurons. These findings suggest that functional connectivity becomes broader and more structured between core and belt regions of the AC, perhaps relating to functional differences between these regions. Finally, our work uses two new approaches: 1) an information theoretic method for estimating "extractable information" in noisy activity, and 2) a method for building community models of coordinated neural activity that incorporate intrinsically higher order correlations.

Fig. 1
figure i

Analysis of the neural activity during two repetitions of DMR stimulus performed with three different methods: a selection of the best maximum entropy model with pairwise interactions; b detection of coordinated neuronal assemblies using dimensionality reduction; c selection of the best community-like maximum entropy model

O11 Striatal compartments participate in multi-modal and concurrent reward-based learning

William Barnett 1 , Alexey Kuznetsov 2 , Christopher Lapish 1

1 IUPUI, Department of Psychology, Indianapolis, IN, United States of America

2 IUPUI, Department of Mathematics, Indianapolis, IN, United States of America

Email: whbdupree@gmail.com

In the basal ganglia (BG) hypothesis for reward-based learning, action selection is gated in the striatum by context-dependent, dopamine-mediated synaptic plasticity. BG learning depends on dopamine (DA) release from the substantia nigra, and different compartments of the striatum receive compartment-specific nigro-striatal projections. Nigro-striatal DA release encodes different information in different striatal compartments and thus supports different modes of learning. In the dorso-medial striatum (DMS), nigro-striatal DA release encodes reward prediction error (RPE), whereas in the dorso-lateral striatum (DLS) these projections encode salience.

In the present study, we developed a computational model of action selection and learning in the BG that implements two different modes of learning in the striatum. Our model accomplished distinct and concurrent learning modalities by distinguishing DAergic input to the DMS and DLS. Both compartments shared the same rules for cortico-striatal plasticity, and both possessed a direct and an indirect pathway for each selection option. However, DA encoded different information in each compartment. In the DMS, cortico-striatal synaptic weights were updated based on RPE to perform goal-directed learning. In the DLS, plasticity in cortico-striatal synaptic weights implemented stimulus–response associations.
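
The division of labor between the two compartments can be caricatured with a toy update rule (plain Python; the spiking direct/indirect-pathway circuitry and all parameter values below are illustrative placeholders, not the published model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
w_dms = np.zeros((n_states, n_actions))  # goal-directed weights (RPE-trained)
w_dls = np.zeros((n_states, n_actions))  # habitual weights (stimulus-response)
alpha_dms, alpha_dls = 0.2, 0.02         # DLS learns slowly but persists

def select_action(state, beta=3.0):
    q = w_dms[state] + w_dls[state]      # combined goal-directed + habitual drive
    p = np.exp(beta * q) / np.exp(beta * q).sum()
    return rng.choice(n_actions, p=p)

def update(state, action, reward):
    rpe = reward - w_dms[state, action]              # DA as prediction error (DMS)
    w_dms[state, action] += alpha_dms * rpe
    w_dls[state, action] += alpha_dls * abs(reward)  # DA as salience (DLS)

# Reward reversal: action 0 rewarded for 200 trials, then action 1.
# The DLS association built early on persists, producing perseverative errors.
for trial in range(400):
    a = select_action(0)
    update(0, a, 1.0 if a == (0 if trial < 200 else 1) else 0.0)
```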

The model was challenged with a series of two-alternative forced choice behavioral tasks. We manipulated reward feedback to record action selection in the face of reward reversal, reward devaluation, and punishment. In early trials of the reward devaluation task, cortico-striatal weights in the DMS quickly reflected the negative contrast in reward value. Persistence of the stimulus–response association in the DLS maintained the agent’s behavioral response despite the potentiation of the corresponding indirect pathway in the DMS. In punishment learning, the valence of the reward feedback was negative. Similar to the devaluation task, goal-directed learning in the DMS quickly activated the corresponding indirect pathway of the DMS. Persistence of the previous stimulus–response association in the DLS drove perseverative errors in agent performance to select the punished action in early trials. Behavior driven by stimulus–response associations in the DLS resisted goal-directed learning in the face of devaluation or punishment, and we interpreted model performance in these scenarios as the expression of habit.

To investigate the mechanisms that support habit in this working model of the basal ganglia, we implemented a loss of executive control. In this model, outcomes were represented by populations of prefrontal cortex (PFC) neurons. Decreased executive control was implemented by decreasing the specificity of PFC activity to action selection. This was accomplished by introducing weak cross-channel projections; for example, the PFC population associated with outcome #1 made additional weak projections to the direct and indirect pathways of the DMS associated with outcome #2 (and vice versa). Model performance was quantified using change point analysis. In simulations with a handicapped PFC, agents learned new reward-feedback rules slowly compared to control simulations. We interpreted these results as demonstrating how the loss of executive control reduces the ability of goal-directed learning to overcome stimulus–response driven behavior, such as the expression of habit.

O12 Unsupervised identification of space-, time-, and action-dependent latent factors underlying muscle activity during reaching

Alessandro Salatiello 1 , Martin A. Giese 1

1 University of Tübingen, Section for Computational Sensomotorics, Tübingen, Germany

Email: alessandro.salatiello@uni-tuebingen.de

The motor system simplifies the control of movement by flexibly combining fixed spatial and temporal motor modules that are invariant across actions [1,2]. The identification of these modules is critical to shed light on the computational principles of biological motor control. However, popular matrix decomposition methods used to extract these motor modules – such as Non-negative Matrix Factorization and Principal Component Analysis – can only identify either spatial [1] or temporal [2] motor modules, but not both. This leads to overparameterized models that, rather than providing a plausible account of the mechanism the brain uses to simplify the control of movement, merely shift the computational burden from the spatial to the temporal domain or vice versa. For example, models based on spatial modules [1] simplify the control problem in the spatial domain at the cost of complicating it in the temporal domain, where they assume the existence of time-varying coefficients that are specific to each action (Fig. 1A).

To meet the challenge of simultaneous identification of spatial and temporal modules, we propose a decomposition of muscle signals based on the Canonical Polyadic Decomposition (CPD) model [3] – a higher-order tensor decomposition method. The model factorizes muscle activity during reaching movements into fixed spatial and temporal modules that are flexibly recruited depending on the reaching direction (Fig. 1D). The recruitment is specified by action coefficients that, unlike in previous models, are both space- and time-invariant. We show that, compared with classical decomposition models [1,2], CPD identifies qualitatively similar spatial and temporal modules (Fig. 1A–D), explains a comparable amount of data variance, and requires a lower number of parameters. Furthermore, we show that the geometrical organization of the action coefficients is not random but describes a smooth manifold that allows the zero-shot generation of muscle patterns for untrained reaching directions. Taken together, our results suggest that the identified decomposition defines a biologically plausible hierarchical organization of the control of movement [4] that the brain could leverage to effectively control the body while saving computational resources.
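
The decomposition itself can be illustrated with the TensorLy library; the muscle-by-time-by-direction tensor below is random placeholder data and the rank is arbitrary:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical EMG tensor: 12 muscles x 200 time samples x 8 reaching directions
emg = tl.tensor(np.random.rand(12, 200, 8))

# Rank-4 CPD: emg[m, t, a] ~ sum_r spatial[m, r] * temporal[t, r] * action[a, r]
weights, (spatial, temporal, action) = parafac(emg, rank=4, n_iter_max=500)

# spatial  (12 x 4): fixed muscle-space modules
# temporal (200 x 4): fixed activation waveforms
# action   (8 x 4): space- and time-invariant recruitment coefficients
```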

Acknowledgements

This work was supported by the German Federal Ministry of Education and Research (BMBF FKZ 01GQ1704), the Human Frontiers Science Program (HFSP RGP0036/2016), the German Research Foundation (DFG GZ: KA 1258/15–1), and the European Research Council (ERC 2019-SyG-RELEVANCE-856495).

References

1. Tresch MC, Saltiel P, Bizzi E. The construction of movement by the spinal cord. Nature neuroscience. 1999 Feb;2(2):162–7.

2. Ivanenko YP, Poppele RE, Lacquaniti F. Five basic muscle activation patterns account for muscle activity during human locomotion. The Journal of physiology. 2004 Apr;556(1):267–82.

3. Harshman RA. Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multimodal factor analysis. UCLA Working Papers in Phonetics. 1970;16:1–84.

4. Merel J, Botvinick M, Wayne G. Hierarchical motor control in mammals and machines. Nature communications. 2019 Dec 2;10(1):1–2.

Fig. 1
figure j

Schematics of decomposition models. A Space-centric decomposition; left: spatial modules. B Time-centric decomposition; right: temporal modules. C Space-by-time decomposition; left: spatial modules; right: temporal modules; center: action coefficients. Prone to underfitting. D Canonical polyadic decomposition; left: spatial modules; right: temporal modules; center: action coefficients

O13 Prevention of post-traumatic epilepsy through sustained network depolarization

Oscar Gonzalez 1 , Maxim Bazhenov 1 , Sara Soltani 2 , Sylvain Chauvette 2 , Brianna Marsh 1 , Giri Krishnan 1 , Igor Timofeev 2

1 University of California, San Diego, Department of Medicine, San Diego, CA, United States of America

2 Université Laval, CRIUSMQ, Quebec, Canada

Email: o2gonzalez@ucsd.edu

Traumatic brain injury remains one of the most common factors leading to acquired epilepsy. Post-traumatic epilepsy (PTE) continues to be a difficult disorder to treat, as there can be a prolonged period following the initial brain insult during which epileptogenesis can arise. Indeed, it has been reported that epilepsy can develop up to 15 years after the occurrence of the brain trauma. Additionally, the likelihood of developing epilepsy increases with age at the time of the trauma. Recent in vivo studies have shown that older animals were more susceptible to the development of epilepsy following cortical undercut as compared to younger animals. The mechanism that gives rise to PTE remains to be fully understood but may involve mis-regulation of synaptic weights through homeostatic synaptic scaling. In healthy brains, homeostatic synaptic scaling works as a slow, bidirectional negative-feedback mechanism which aims to maintain network stability through the activity-dependent regulation of post-synaptic AMPA receptor densities. In response to brain trauma, there is a reduction of network activity within and near the traumatized brain area. This reduction of activity triggers homeostatic up-regulation of synaptic and intrinsic excitability in an attempt to recover normal levels of network activity. If trauma is severe, homeostatic scaling may overcompensate and increase synaptic weights such that the network is primed for transitions to hypersynchronized seizure states. In this study, we tested the hypothesis that preventing homeostatic up-scaling of synaptic weights following cortical deafferentation could prevent post-traumatic epileptogenesis. Using a detailed biophysical model of the neocortex, we found that a sustained depolarization of the traumatized network was capable of preventing up-scaling of synaptic weights to a pathological state, thereby preventing the occurrence of spontaneous recurrent seizures. In contrast, a sustained hyperpolarization of the traumatized network resulted in increased homeostatic up-scaling, triggering a severe pathological state characterized by frequent spontaneous recurrent seizures. Furthermore, our analysis demonstrates that pathological increases in synaptic strength drive seizure generation by perturbing extracellular potassium concentration dynamics and initiating a positive feedback loop between extracellular potassium concentration and neuronal firing rates. This feedback loop drives increased excitability and hypersynchrony, eventually leading to spontaneous seizure onset. These findings from the computational model are in agreement with our in vivo experiments in mice, where cortical undercut was followed by activation of DREADDs (hM3Dq or hM4Di) to alter baseline network activity around the undercut area. Together, these results provide evidence for the role of homeostatic synaptic scaling in the development of post-traumatic epilepsy and may provide new insights into novel treatments or preventative measures for trauma-induced epilepsy (Fig. 1).
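
The homeostatic-scaling logic can be caricatured with a one-unit rate model (a toy sketch, not the detailed biophysical network: no potassium dynamics, and all constants are arbitrary):

```python
import numpy as np

def simulate_homeostasis(T=20000.0, dt=1.0, target=5.0, tau_h=5000.0,
                         drive=1.0, deaff_frac=0.8, t_trauma=5000.0, depol=0.0):
    """Threshold-linear unit whose recurrent gain is slowly scaled toward a
    target rate. 'depol' mimics sustained depolarization after trauma."""
    s, r = 1.0, target
    rates, scales = [], []
    for t in np.arange(0.0, T, dt):
        d = drive if t < t_trauma else drive * (1.0 - deaff_frac) + depol
        r = max(0.0, d + 0.1 * s * r)             # rate with scaled recurrent gain
        s += dt / tau_h * (target - r) / target   # slow homeostatic scaling
        rates.append(r)
        scales.append(s)
    return np.array(rates), np.array(scales)

# Stronger deafferentation forces s toward its stability limit (0.1 * s -> 1);
# a sustained depolarizing input (depol > 0) caps the required up-scaling,
# loosely mirroring the intervention tested in the abstract.
```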

Acknowledgements

Supported by NIH grant R01 NS104368 and CIHR.

Fig. 1
figure k

Sustained network activity prevents the development of seizures. A/F Heatmaps of PY neurons in a network which was depolarized (A) or hyperpolarized (F) for a period following deafferentation. Color indicates voltage. Red bar demarcates time of deafferentation and green bar marks duration of depolarization (A) or hyperpolarization (F). B Histograms of membrane potentials before deafferentation

O14 On the role of arkypallidal and prototypical neurons for neural synchronization in the basal ganglia

Richard Gast 1 , Ruxue Gong 1 , Helmut Schmidt 1 , Hil G. E. Meijer 2 , Thomas R. Knösche 1

1 Max Planck Institute for Human Cognitive and Brain Sciences, Brain Networks Group, Leipzig, Germany

2 University of Twente, Department of Applied Mathematics, Technical Medical Centre, Twente, Netherlands

Email: rgast@cbs.mpg.de

Parkinson's disease is a neurological disorder that leads to progressive dopamine depletion in the basal ganglia. Experimental evidence suggests that basal ganglia dopamine depletion causes the onset of synchronized oscillations of neural activity. The main spectral features of these oscillations are an enhanced power in the beta frequency band (12–30 Hz) and an enhanced phase-amplitude coupling (PAC) between the phase of a beta signal and the amplitude of a high-frequency gamma signal (50–250 Hz). Many computational models and experimental studies have suggested that the external pallidum (GPe) is involved in the generation of parkinsonian beta oscillations via its recurrent coupling with the subthalamic nucleus (STN). However, a recent study in mice found that optogenetic inhibition of the GPe, but not of the STN, led to strong attenuation of parkinsonian beta power [1]. Contrary to initial beliefs, the GPe is not a homogeneous nucleus. It contains two distinct cell types with different electrophysiological properties and projection targets: prototypical and arkypallidal cells [2]. Under dopamine depletion, the synaptic coupling strengths between GPe cells are increased [3]. Therefore, we asked whether the GPe could generate parkinsonian oscillations autonomously or contribute to increased beta-gamma PAC.

Here, we investigated these hypotheses in a spiking neural network model of recurrently coupled prototypical and arkypallidal cells. Our model accounts for characteristic macroscopic properties of the GPe, such as the firing rate distributions of both cell types under normal and stimulation conditions [4]. We examined the effects of increased synaptic coupling between prototypical and arkypallidal cells via bifurcation analysis based on an exact mean-field model of the spiking neural network. We found that increased self-inhibition of prototypical neurons can lead to the emergence of synchronized oscillations in the gamma frequency range. Furthermore, we found that increased inhibition of prototypical neurons via arkypallidal projections gives rise to a bi-stable regime in which the two neuron types compete over a high-activity state. However, neither finding can explain the emergence of parkinsonian beta oscillations. Instead, we show that oscillatory input to the GPe in the beta frequency range can lead to beta-gamma PAC in the macroscopic GPe dynamics. Based on these findings, we propose that the GPe cannot generate parkinsonian beta oscillations autonomously but can contribute to the emergence of increased beta-gamma PAC in the dopamine-depleted basal ganglia.
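
The exact mean-field description referred to above is of the kind derived for networks of quadratic integrate-and-fire neurons (Montbrio-Pazo-Roxin); a minimal two-population integration sketch, with illustrative rather than fitted parameters, is:

```python
import numpy as np

def gpe_meanfield(T=50.0, dt=1e-3, delta=1.0, eta_p=4.0, eta_a=-2.0,
                  J_pp=-8.0, J_pa=-2.0, J_ap=-4.0, J_aa=0.0, I_beta=None):
    """Firing rate r and mean voltage v for prototypical (p) and arkypallidal
    (a) populations. J_xy couples population y onto population x (inhibitory
    weights are negative). I_beta(t) can inject beta-band input."""
    n = int(T / dt)
    r_p = v_p = r_a = v_a = 0.0
    out = np.zeros((n, 2))
    for i in range(n):
        drive = I_beta(i * dt) if I_beta else 0.0
        dr_p = delta / np.pi + 2 * r_p * v_p
        dv_p = v_p**2 + eta_p + J_pp * r_p + J_pa * r_a + drive - (np.pi * r_p)**2
        dr_a = delta / np.pi + 2 * r_a * v_a
        dv_a = v_a**2 + eta_a + J_ap * r_p + J_aa * r_a - (np.pi * r_a)**2
        r_p = max(r_p + dt * dr_p, 0.0)  # clamp guards against Euler overshoot
        v_p += dt * dv_p
        r_a = max(r_a + dt * dr_a, 0.0)
        v_a += dt * dv_a
        out[i] = (r_p, r_a)
    return out

# e.g. oscillatory drive: rates = gpe_meanfield(I_beta=lambda t: 2.0 * np.sin(0.5 * t))
```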

References

1. de La Crompe B, Aristieta A, Leblois A, Elsherbiny S, Boraud T, et al. The globus pallidus orchestrates abnormal network dynamics in a model of Parkinsonism. Nature communications. 2020 Mar 26;11(1):1–4.

2. Hegeman DJ, Hong ES, Hernández VM, Chan CS. The external globus pallidus: progress and perspectives. European Journal of Neuroscience. 2016 May;43(10):1239–65.

3. Miguelez C, Morin S, Martinez A, Goillandeau M, Bezard E, et al. Altered pallido‐pallidal synaptic transmission leads to aberrant firing of globus pallidus neurons in a rat model of Parkinson's disease. The Journal of physiology. 2012 Nov 15;590(22):5861–75.

4. Aristieta A, Barresi M, Lindi SA, Barriere G, Courtand G, et al. A disynaptic circuit in the globus pallidus controls locomotion inhibition. Current Biology. 2021 Feb 22;31(4):707–21.

O15 Larger inter-individual variability of large-scale brain organization in schizophrenia revealed by topological data analysis

Emil Dmitruk 1 , Christoph Metzner 2 , Volker Steuber 3 , Shabnam Kadir 4

1 University of Hertfordshire, School of Engineering and Computer Science, Hatfield, United Kingdom

2 Technische Universität Berlin, Institute of Software Engineering and Theoretical Computer Science, Berlin, Germany

3 University of Hertfordshire, Biocomputation Research Group, Hatfield, United Kingdom

4 University of Hertfordshire, Centre for Computer Science and Informatics Research, Hatfield, United Kingdom

Email: e.dmitruk@herts.ac.uk

Popular methods for fMRI data analysis do not utilise the full potential of fMRI datasets in understanding individual differences in brain connectivity and function. In [1] it was shown that spatially variable "network variants" appear to exist for all individuals. Network variants are brain regions belonging to a specific functional network identified using group-averaged data analyses, e.g., by [2], but in locations that differ from those obtained in the group analyses. Many areas, particularly in association cortices, partake in multiple brain networks. Individual differences in network connectivity may reflect brain plasticity arising from differences in life experience, as well as disease. The cognitive difficulties associated with schizophrenia are thought to be caused by abnormalities in the structural, functional, and effective connectivity of the brain [3,4].

To test the above ideas, we applied methods of topological data analysis (TDA) [5,6] to the COBRE dataset [7,8], consisting of structural MRI (T1w and DTI) scans of healthy controls (HC) (N = 44) and schizophrenia patients (SP) (N = 44). We applied the weight rank clique filtration (WRCF) [9] to connectivity matrices obtained for each individual using a probabilistic fibre-tracking algorithm [10]. Although the biological interpretation of nodes (brain regions) participating in persistent cycles is not straightforward [5], we observe that the barcodes obtained for the persistent homology classes show consistency in birth times and lifetimes for spatially analogous cycles (Fig. 1). Additionally, we observe the appearance of many ‘variant’ cycles in clusters, with slight variations between individuals but with considerable overlap, as seen in [11]. The most popular cycles had stronger connections, as evidenced by earlier birth times (p < 0.001), and activated fewer brain networks than those stemming from later-born clusters of cycles (p < 0.001). We see that many cycles, particularly those with weaker connections, show more individual variability.
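
The WRCF construction can be sketched with the GUDHI library: edges enter the filtration in rank order of decreasing weight, cliques are filled in, and persistent homology is computed (an illustrative skeleton, not the full analysis pipeline):

```python
import gudhi
import numpy as np

def wrcf_persistence(conn, max_dim=2):
    """Weight rank clique filtration of a weighted connectivity matrix.
    Returns a list of (dimension, (birth, death)) pairs in rank units."""
    n = conn.shape[0]
    iu = np.triu_indices(n, 1)
    order = np.argsort(conn[iu])[::-1]             # strongest edges first
    st = gudhi.SimplexTree()
    for v in range(n):
        st.insert([v], filtration=0.0)
    for rank, k in enumerate(order, start=1):
        i, j = int(iu[0][k]), int(iu[1][k])
        st.insert([i, j], filtration=float(rank))  # edge appears at its rank
    st.expansion(max_dim + 1)                      # fill in cliques (flag complex)
    return st.persistence()
```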

Figure 1C shows that one-dimensional cycles are shared evenly between the two groups. On the other hand, the average persistence landscapes for the two groups show more substantial differences for two-dimensional cycles, with the average persistence landscape for schizophrenia patients exhibiting two peaks instead of one (Fig. 1F). This suggests that whilst schizophrenia patients may share many similarities with controls in terms of their more strongly connected brain regions, as revealed by lower-dimensional persistent cycles, their large-scale brain organization, as revealed by higher-dimensional cycles which tend to connect more brain regions, is different and more diverse.

References

1. Gordon EM, Laumann TO, Gilmore AW, Newbold DJ, Greene DJ, et al. Precision functional mapping of individual human brains. Neuron. 2017 Aug 16;95(4):791–807.

2. Thomas Yeo BT, Krienen FM, Sepulcre J, Sabuncu MR, Lashkari D, et al. The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of neurophysiology. 2011 Sep;106(3):1125–65.

3. Lynall ME, Bassett DS, Kerwin R, McKenna PJ, Kitzbichler M, et al. Functional connectivity and brain networks in schizophrenia. Journal of Neuroscience. 2010 Jul 14;30(28):9477–87.

4. Dong D, Wang Y, Chang X, Luo C, Yao D. Dysfunction of large-scale brain networks in schizophrenia: a meta-analysis of resting-state functional connectivity. Schizophrenia bulletin. 2018 Jan 13;44(1):168–81.

5. Stolz BJ, Emerson T, Nahkuri S, Porter MA, Harrington HA. Topological data analysis of task-based fMRI data from experiments on schizophrenia. arXiv preprint arXiv:1809.08504. 2018.

6. Sizemore AE, Giusti C, Kahn A, Vettel JM, Betzel RF, et al. Cliques and cavities in the human connectome. Journal of computational neuroscience. 2018 Feb;44(1):115–45.

7. Aine CJ, Bockholt HJ, Bustillo JR, Cañive JM, Caprihan A, et al. Multimodal neuroimaging in schizophrenia: description and dissemination. Neuroinformatics. 2017 Oct;15(4):343–64.

8. Cetin MS, Christensen F, Abbott CC, Stephen JM, Mayer AR, et al. Thalamus and posterior temporal lobe show greater inter-network connectivity at rest and across sensory paradigms in schizophrenia. Neuroimage. 2014 Aug 15;97:117–26.

9. Petri G, Scolamiero M, Donato I, Vaccarino F. Topological strata of weighted complex networks. PloS one. 2013 Jun 21;8(6):e66506.

10. Schlaier JR, Beer AL, Faltermeier R, Fellner C, Steib K, et al. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar‐thalamic fibers in deep brain stimulation. European Journal of Neuroscience. 2017 Jun;45(12):1623–33.

11. Rolls ET, Joliot M, Tzourio-Mazoyer N. Implementation of a new parcellation of the orbitofrontal cortex in the automated anatomical labeling atlas. Neuroimage. 2015 Nov 15;122:1–5.

Fig. 1
figure l

A Hierarchical clustering of brain regions for each persistent homology cycle. B Cycle regions (AAL2 [7]) colour-coded (G); C Raster plot showing in which subjects' scans the cycle was found; D Total number of regions per cycle; E Vertical barcodes for dimension 1 cycles; F Average persistence landscapes (2D) for HC and SP; G Yeo network key; H Visualisation of the most popular cycle

O16 Fluctuating inter-regional delays in the human cerebral cortex

Joon-Young Moon 1 , Kathrin Müsch 1 , Charles Schroeder 2 , Christopher Honey 1

1 Johns Hopkins University, Department of Psychological and Brain Sciences, Baltimore, MD, United States of America

2 The Nathan S. Kline Institute for Psychiatric Research, Center for Biomedical Imaging and Neuromodulation, Orangeburg, SC, United States of America

Email: moonjy@jhu.edu

In electric signals such as field potentials measured across regions of the human brain, parietal signals tend to phase-lead signals in temporal and frontal cortex, while waves of activity propagate along parieto-temporal pathways. To better understand the functional properties of such large-scale patterns of signal flow, we asked: (i) are inter-regional delays stable or variable over time? (ii) do the patterns of signal propagation co-vary with endogenous cortical rhythms? (iii) do the patterns of signal flow vary with external stimulus properties?

We recorded electrocorticographic signals from the lateral cortical surface of 10 human participants as they listened to a 7-min auditory narrative. In sliding 2-s windows, we identified inter-regional delays by computing the cross-correlation of voltage signals between nearby electrode pairs. For each time window, and for each electrode and electrode pair, we identified the time delay of maximal inter-electrode correlation from raw voltage signals, the power in different frequency bands, and the mean broadband high-frequency power. We designed a computational model of the inter-regional flows using Stuart-Landau coupled oscillators, with structural topology based on human cortical anatomy. Consistent with prior reports [1,2], we found that the auditory pathway exhibited a gradient of delays, with posterior temporal regions leading anterior temporal regions on average. However, the latencies between stages of auditory processing were not stable but fluctuated over time. Two distinct electrophysiological states were evident in the data: one with longer inter-channel latencies (“propagating state”) and the other with shorter latencies (“synchronized state”). Latencies were longer during bursts of alpha power (propagating state) and shorter during bursts of broadband power (synchronized state), consistent with models in which alpha oscillations regulate corticocortical interactions [3]. The inter-regional delays were mostly endogenous, as the correlation between responses to repeated presentations of the stimulus was weak. Altogether, the changes in inter-regional latencies are not a random process but reliably track features of the endogenous dynamics. The transition between synchronized and propagating states generalizes beyond the auditory pathway to the parietal, temporal and sensorimotor cortex. We observed that global latency patterns change between the synchronized state and the propagating state (Fig. 1). When auditory drive was strong, the latencies between many areas were reduced, and when auditory drive was absent, the latencies increased. Finally, we were able to reproduce the inter-regional correlation and delay patterns by varying the coupling strength between oscillators in the Stuart-Landau model, indicating that the large-scale dynamic shifts may be regulated by overall shifts in the efficacy of inter-regional influence.
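
The core latency computation can be sketched as follows (Python/SciPy; window, step, and maximum-lag values are placeholders, and the band-power analyses are omitted):

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def sliding_latencies(x, y, fs, win_s=2.0, step_s=0.5, max_lag_s=0.25):
    """Lag of maximal cross-correlation between two channels in sliding
    windows; returns one latency (in seconds) per window."""
    win, step = int(win_s * fs), int(step_s * fs)
    lags = correlation_lags(win, win)
    keep = np.abs(lags) <= int(max_lag_s * fs)  # restrict to plausible lags
    taus = []
    for start in range(0, len(x) - win + 1, step):
        xs = x[start:start + win] - np.mean(x[start:start + win])
        ys = y[start:start + win] - np.mean(y[start:start + win])
        cc = correlate(xs, ys)                  # full cross-correlation
        taus.append(lags[keep][np.argmax(cc[keep])] / fs)
    return np.array(taus)
```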

Altogether, the data and models suggest that human cortical dynamics reliably transition between synchronized states (associated with increases of broadband power) and propagating states (associated with increases of alpha-band power).

References

1. Zhang H, Watrous AJ, Patel A, Jacobs J. Theta and alpha oscillations are traveling waves in the human neocortex. Neuron. 2018 Jun 27;98(6):1269–81.

2. Chapeton JI, Inati SK, Zaghloul KA. Stable functional networks exhibit consistent timing in the human brain. Brain. 2017 Mar 1;140(3):628–40.

3. Van Kerkoerle T, Self MW, Dagnino B, Gariel-Mathis MA, Poort J, et al. Alpha and gamma oscillations characterize feedback and feedforward processing in monkey visual cortex. Proceedings of the National Academy of Sciences. 2014 Oct 7;111(40):14332–41.

Fig. 1
figure m

A We presented a 7-min auditory narrative twice to each participant. B Moving time windows were applied to each pair of ECoG channels. C For each time window, we measured the cross correlation between the chosen pair. The latency τ was defined as the time lag yielding the maximum cross-correlation. D Cross correlation and latency τ are illustrated for all time windows

P1 Recruitment profiles produced by intrafascicular stimulation of peripheral nerve fibers

Morteza Rouhani 1 , Sharon Crook 1 , James Abbas 2

1 Arizona State University, School of Mathematical and Statistical Sciences, Tempe, AZ, United States of America

2 Arizona State University, School of Biological and Health Systems Engineering, Tempe, AZ, United States of America

Email: jimmy.abbas@asu.edu

Stimulation of the human peripheral nervous system can be a powerful treatment for a variety of medical conditions from epilepsy to rheumatism, and can also provide insight into nervous system processes. Each peripheral nerve bundle consists of one or multiple fascicles; each fascicle consists of a group of nerve fibers embedded in a matrix of endoneurium and wrapped by a fatty layer of perineurium. To stimulate these fibers, a variety of bioelectric interfaces have been developed. Among these interfaces, longitudinal intrafascicular electrodes (LIFEs) are designed to target small groups of fibers inside the fascicle using low-amplitude pulses. Their small size, flexibility, and longitudinal placement minimize their mechanical effects on nearby neural tissues, making them well-suited for chronic use. To achieve higher functionality with fewer side effects, greater specificity of the stimulation would be beneficial. This study is part of a US-French collaboration that aims to improve the selectivity of intrafascicular stimulation for bioelectric therapies by coordinating computational studies, stimulation hardware development, and in vivo animal studies. This simulation study investigates the effects of anatomical and stimulation parameters on fiber recruitment and selectivity.

Peripheral nerve stimulation with LIFEs consists of short electrical pulses delivered to one or more electrodes placed within the fascicle. Each current pulse generates a spatiotemporal electrical field that affects the membrane potential of nearby fibers in a manner that might trigger production of an action potential. In this study, to simulate the response of the nerve fibers to electrical stimulation, we developed a hybrid workflow that simulates: 1) the production and propagation of the electric field, and 2) the effect of the electric field on fiber activation (recruitment). The first part uses anatomical and histological data to produce a finite-element model implemented in the MATLAB-COMSOL environment that simulates the electric field induced in a nerve bundle through stimulation via one or more LIFEs. The second part uses a detailed biophysical model of multi-segmented axons implemented in a Python-NEURON environment that simulates the response of the nerve fibers to the electric field.
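
As a simplified stand-in for the FEM field computation, the classic activating-function approximation below shows how an extracellular point source drives axonal (de)polarization; it assumes a homogeneous medium and a straight axon, unlike the anatomically detailed MATLAB-COMSOL/NEURON pipeline described above:

```python
import numpy as np

def activating_function(node_x, elec_xyz, i_stim, sigma=0.3):
    """Second spatial difference of the extracellular potential along a
    straight axon (Rattay's activating function). sigma in S/m, distances
    in m, current in A; positive values indicate depolarizing drive."""
    ex, ey, ez = elec_xyz
    r = np.sqrt((node_x - ex)**2 + ey**2 + ez**2)  # node-electrode distances
    v_e = i_stim / (4 * np.pi * sigma * r)         # point-source potential
    return np.diff(v_e, 2)                         # discrete activating function

# Example: 101 nodes spaced 1 mm; cathodic 100 uA source 0.5 mm off the axon
nodes = np.arange(101) * 1e-3
f = activating_function(nodes, (0.05, 0.5e-3, 0.0), -100e-6)
```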

Using this hybrid workflow, the effects of various factors such as fascicular anatomy (tissue conductivity, spatial distribution of fibers, fiber size, etc.), electrode parameters (size, location, configuration), and stimulation pulse shape (pulse width, pulse amplitude, pulse type, etc.) on recruitment and selectivity have been characterized, and the sensitivity of the recruitment patterns to these parameters has been analyzed. In ongoing work, we are using this computational modeling framework to investigate and develop better strategies to enhance the selectivity and specificity of peripheral nerve stimulation.

Acknowledgements

This work was supported by NIH-NIBIB (R01-EB027584).

P2 Advancing neuroscience education without borders: neuroscience community training resources at INCF

Malin Sandström 1 , Pradeep George 2 , Mathew Abrams 2

1 INCF, INCF/Karolinska Institute, Stockholm, Sweden

2 INCF, INCF Secretariat, Stockholm, Sweden

Email: malin.sandstrom@incf.org

The INCF TrainingSuite (Fig. 1) is a collection of open access platforms that aims to facilitate self-guided study in the sub-specialisms of neuroscience (with an emphasis on neuroinformatics). These platforms, presented below, collectively work as a framework for integrating training materials and making them FAIR (Findable, Accessible, Interoperable, Reusable).

INCF TrainingSpace (training.incf.org) is an online hub that aims to make neuroscience educational materials more accessible to the global neuroscience community, developed by the INCF Training and Education Committee composed of members from the INCF network, HBP, SfN, FENS, IBRO, IEEE, BD2K, CONP, TCC and iNeuro Initiative. So far, TrainingSpace has more than 23,000 users with 113,000 pageviews. As a hub, TrainingSpace provides users with access to:

Multimedia educational content from courses, conference lectures, and lab exercises from some of the world’s leading neuroscience institutes and societies

Study tracks to facilitate self-guided study

Tutorials on tools and open science resources for neuroscience research

The Q&A forum NeuroStars (neurostars.org)

All courses and conference lectures in TrainingSpace include a general description, topics covered, links to prerequisite courses if applicable, and links to software described in or required for the course. In addition to providing resources for students and researchers, TrainingSpace also provides resources for instructors, such as laboratory exercises, open science services, and access to publicly available datasets and models. TrainingSpace currently has four study tracks to facilitate self-guided study: brain medicine, computational neuroscience, neuroscience, and neuroinformatics. The 2020 Neuromatch Academy materials are available as a TrainingSpace special collection at https://training.incf.org/collection/neuromatch-academy-2020.

Neurostars (neurostars.org; RRID:SCR_003805) is a Question & Answer (Q&A) forum that serves the INCF network and the global neuroscience community as a platform for knowledge exchange between neuroscience researchers at all levels of expertise, software developers, and infrastructure providers. Neurostars has been adopted by several other large neuroscience initiatives including Neuromatch Academy, Neuro Hackademy, and the Organization for Computational Neuroscience (OCNS). Several community tools–among them Nipype, SPM, fMRIprep, Nilearn and Freesurfer–use Neurostars for providing user support. In April 2021, Neurostars had 17,400 users in 25,200 sessions; in total the forum has seen more than 132,700 users and 328,700 sessions.

INCF KnowledgeSpace (https://knowledge-space.org; RRID:SCR_014539) is a community-based encyclopedia for neuroscience that links brain research concepts to the data, models, and literature that support them, demonstrating how standards and best practices (SBPs) can facilitate linking brain research concepts with data, models and literature from around the world. It provides users with access to over 1 million publicly available datasets as well as links to literature references and scientific abstracts.

KnowledgeSpace is an open project and welcomes participation and contributions from members of the global research community. KS is the result of recommendations from a community workshop held by the INCF Program on Ontologies of Neural Structures in 2012, and was developed by HBP, INCF and NIF.

Fig. 1
figure n

The INCF Training Suite consists of TrainingSpace, Neurostars and KnowledgeSpace

P3 Going beyond the point neuron: Active dendrites and sparse representations for continual learning

Karan Grewal 1 , Jeremy Forest 1 , Subutai Ahmad 1

1 Numenta, Redwood City, CA, United States of America

Email: kgrewal@numenta.com

Dendrites of pyramidal neurons demonstrate a wide range of linear and non-linear active integrative properties. Extensive work has been done to elucidate the underlying biophysical mechanisms, but our understanding of the computational contributions of active dendrites remains limited. As such, the vast majority of artificial neural networks (ANNs) ignore the structural complexity of biological neurons and use simplified point neurons. In this paper we propose that active dendrites can help ANNs learn continuously, a property prevalent in biological systems but absent in artificial systems (most ANNs today suffer from catastrophic forgetting, i.e., they are unable to learn new information without erasing what they previously learned). Our model is inspired by two key properties: 1) the biophysics of sustained depolarization following dendritic NMDA spikes, and 2) highly sparse representations. In our model, active dendrites act as a gating mechanism in which dendritic segments detect task-specific contextual patterns and modulate the firing probability of postsynaptic cells. A winner-take-all circuit at each level gives preference to up-modulated neurons and activates a highly sparse subset of neurons. These task-specific subnetworks have minimal overlap with each other, and this in turn minimizes the interference in error signals. As a result, the network does not forget previous tasks as easily as standard networks without active dendrites do.
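
A minimal PyTorch sketch of this gating scheme is given below; layer sizes, the number of segments, and the sigmoid gating detail are illustrative assumptions rather than the exact published architecture:

```python
import torch

class ActiveDendriteLayer(torch.nn.Module):
    """Feedforward units gated by dendritic segments that match a task
    context vector, followed by a k-winner-take-all sparsification."""
    def __init__(self, d_in, d_out, d_context, n_segments=10, k_frac=0.05):
        super().__init__()
        self.ff = torch.nn.Linear(d_in, d_out)
        # one set of dendritic segment weights per output unit
        self.segments = torch.nn.Parameter(
            torch.randn(d_out, n_segments, d_context) * 0.01)
        self.k = max(1, int(k_frac * d_out))  # few winners -> high sparsity

    def forward(self, x, context):
        y = self.ff(x)
        # each segment scores the context; the best-matching segment gates
        seg = torch.einsum('usc,bc->bus', self.segments, context)
        y = y * torch.sigmoid(seg.max(dim=-1).values)
        # k-winner-take-all: keep only the k most active units per sample
        idx = torch.topk(y, self.k, dim=-1).indices
        mask = torch.zeros_like(y).scatter_(-1, idx, 1.0)
        return y * mask
```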

We compare our model to two others. Dendritic gated networks (DGNs) [1] compute a linear transformation per dendrite followed by gating. DGNs do not learn dendritic weights, and model complexity grows with the number of classes. Context-dependent gating (XdG) [2] turns individual units on/off based on task ID. XdG largely avoids catastrophic forgetting, but the task ID and hardcoded network subsets are always required. We tested our model in a standard continual learning scenario, permutedMNIST (Fig. 1). Instead of hardcoding the task ID, we employ a prototype method to infer task-specific context signals. Results show that dendritic segments learn to recognize different context signals and that this in turn leads to the emergence of independent sub-networks per task. In tests, our dendritic networks achieve 94.4% accuracy when learning 10 consecutive permutedMNIST tasks, and 83.9% accuracy for 50 consecutive tasks. This compares favorably with DGNs and XdG, but without their previously mentioned limitations. (Note that standard ANNs fail at this task and only achieve chance accuracy.) In addition, training is simple and requires only standard backpropagation. Further analysis shows that the sparsity of representations and the number of dendrites correlate positively with overall accuracy. Our technique is complementary to other continual learning strategies, such as EWC/Synaptic Intelligence and experience replay, and thus can be combined with them. Our results suggest that incorporating the structural properties of active dendrites and sparse representations can help improve the accuracy of ANNs in a continual learning scenario.

References

1. Sezener E, Grabska-Barwinska A, Kostadinov D, Beau M, Krishnagopal S, et al. A rapid and efficient learning rule for biological neural circuits. bioRxiv. 2021 Jan 1.

2. Masse NY, Grant GD, Freedman DJ. Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization. Proceedings of the National Academy of Sciences. 2018 Oct 30;115(44):E10467-75.

Fig. 1
figure o

a Network and neuron model. b Dendrites modulate units and lead to the selection of independent sub-networks for each task. c Accuracy on 10 tasks as a function of the number of dendrites per neuron (left), and sparsity level (right). High sparsity is strongly correlated with overall continual learning accuracy. d Mean accuracy on all tasks after learning each sequentially

P4 A non-linear evidence accumulation model that accounts for single-trial dynamics

Isabelle Hoxha 1 , Matteo Ciarchi 2 , Sylvain Chevallier 3 , Arnaud Delorme 4 , Michel-Ange Amorim 5

1 Université Paris-Saclay, Faculté des Sciences du Sport, Orsay, France

2 Max-Planck Institute for the Physics of Complex Systems, Dresden, Germany

3 LISV, UVSQ, Université Paris-Saclay, Vélizy-Villacoublay, France

4 CerCo, CNRS, Université Toulouse III - Paul Sabatier, Toulouse, France

5 CIAMS, Université Paris-Saclay, Faculté des Sciences du Sport, ORSAY cedex, France

Email: isabelle.hoxha@u-psud.fr

The brain processes sensory input from the environment in order to produce appropriate behavior. This perceptual decision-making process has been an object of interest in computational neuroscience. Evidence Accumulation Models are particularly widespread, and all assume that the brain gathers information and reaches a decision when enough information is collected. Among these, the Diffusion Decision Model (DDM) [1], which assumes a noisy linear integration of evidence, is by far the most widely accepted thanks to its intuitive interpretation, its accurate fit to both behavioral [1] and neurophysiological data [2], and its applicability to multiple paradigms [3].

Current DDM parameters provide a global description of participants' decision strategies, consequently allowing little insight into single-trial dynamics, and in particular into the influence of the history of previous stimuli and decisions on the variability of the model parameters. Although the DDM has brought great insight into how the brain handles decision-making, in particular in the lateral intraparietal cortex [2], recent recordings in the same area have questioned the adequacy of this model [4]. Indeed, while the firing rate of the initially investigated neurons increases seemingly linearly, in more recent data the increase is step-like. To our knowledge, while some models address this firing diversity [5], single-trial dynamics remain untapped. Our work addresses both of these issues by introducing a drift term described by a non-linear differential equation (Fig. 1A, C). This new model of decision-making offers a description of behavioral and neurophysiological recordings equivalent to previous models, as well as flexibility for single-trial simulations and interpretation. For the initial investigation, we assumed a uniform distribution of initial conditions. This translates the assumption that trials are independent and that participants have unbiased expectations regarding the next decision to make. After mathematical investigation, we fit our model to newly acquired data using PyDDM [6]. Finally, to assess the quality of the fit quantitatively, we compared our results to DDM fitting.
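
A minimal simulation of this model class (Euler-Maruyama integration with absorbing bounds and uniformly distributed starting points) is sketched below; the drift function f and all parameter values are placeholders rather than the fitted model:

```python
import numpy as np

def simulate_trials(f, n_trials=1000, z=1.0, sigma=1.0, dt=1e-3,
                    t_max=3.0, x0_bound=0.5, seed=1):
    """Simulate dx = f(x) dt + sigma dW until |x| reaches the bound z.
    Returns response times and choices (1: upper, 0: lower, -1: none)."""
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x = rng.uniform(-x0_bound, x0_bound)  # unbiased initial condition
        t = 0.0
        while abs(x) < z and t < t_max:
            x += f(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        choices.append(int(x >= z) if abs(x) >= z else -1)
    return np.array(rts), np.array(choices)

# Example non-linear drift: a double-well profile pushing x toward the bounds
rts, choices = simulate_trials(lambda x: 4.0 * x * (1.0 - x**2))
```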

We show that our model accurately fits behavioral data on a wide range of paradigms, providing as good a fit as the DDM (Fig. 1B, D), while giving insight into single-trial dynamics. In addition, we show that our model describes some past neurophysiological observations qualitatively better. This model is further usable in simulations, for example to test hypotheses about the distribution of initial conditions and about how they are selected at each trial depending on the history of the task.

References

1. Ratcliff R. A theory of memory retrieval. Psychological review. 1978 Mar;85(2):59.

2. Gold JI, Shadlen MN. The neural basis of decision making. Annu. Rev. Neurosci.. 2007 Jul 21;30:535–74.

3. Evans NJ, Wagenmakers EJ. Evidence accumulation models: Current limitations and future directions. The Quantitative Methods for Psychology. 2020;16(2):73–90.

4. Latimer KW, Yates JL, Meister ML, Huk AC, Pillow JW. Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science. 2015 Jul 10;349(6244):184–7.

5. Usher M, McClelland JL. The time course of perceptual choice: the leaky, competing accumulator model. Psychological review. 2001 Jul;108(3):550.

6. Shinn M, Lam NH, Murray JD. A flexible framework for simulating and fitting generalized drift–diffusion models. ELife. 2020 Aug 4;9:e56938.

Fig. 1
figure p

A Potential profile in which the decision variable evolves. C Resulting trajectories for different values of z, with regularly spaced x0; blue trajectories correspond to x0 > 0, red to x0 < 0. B, D Fits of one participant's response times with our model (B) and with the DDM (D)

P5 Evoking orientation-tuned activity in a spiking model of cat V1 with optical stimulation

David Berling 1 , Quentin Sabatier 2 , Charlie Galle 3 , Yves Frégnac 4 , Ryad Benosman 5 , Jan Antolik 1

1 Charles University Prague, Computational Systems Neuroscience Group, Faculty of Mathematics and Physics, Prague, Czechia

2 Sorbonne Universite, INSERM, CNRS, Institut de la Vision, Paris, France

3 Gensight Biologics, Paris, France

4 NeuroPSI, Unité de Neurosciences, Information et Complexité (UNIC), Gif-sur-Yvette, France

5 University of Pittsburgh Medical Center, Biomedical Science Tower 3, Pittsburgh, PA, United States of America

Email: david.berling@rwth-aachen.de

Precise external induction of cortical activity is becoming a key tool for neuroscience research with a particular clinical application in cortical prosthetic systems. Major effort is being invested into developing methods for controlling cortical activity in primary visual cortex (V1) to encode visual information [1,2]. However, existing computational work is restricted to stimulation in functionally unspecific network models [3] and hence is of limited use for designing encoding protocols which engage cortical representations in a functionally specific manner.

Building on top of a biologically realistic spiking model of cat V1, we implemented a model of an optogenetically driven visual prosthesis [4]. The model captures layers 2/3 and 4. Visual input can be delivered via an LGN model consisting of spatio-temporal center-surround receptive field filters. Next, we implemented a virtual model of an LED array covering the modeled cortex. We calculate the external current into the neurons with a channelrhodopsin (ChR) conductance model based on their individual illumination level. The resulting framework allows testing of arbitrary stimulation protocols in the context of this optogenetic ‘write-in’ interface.
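
The light-to-current step can be sketched with a two-state channelrhodopsin conductance model; the rate constants and maximal conductance below are illustrative placeholders, and the ChR model used in the study may be more detailed:

```python
import numpy as np

def chr_photocurrent(irradiance, v_m=-65.0, dt=0.1, g_max=2.0, e_rev=0.0,
                     k_act=0.05, k_deact=0.02):
    """Closed/open ChR kinetics driven by per-step light intensity (a.u.),
    e.g. the summed illumination a neuron receives from all LEDs.
    Units: ms, mV, nS -> current in pA."""
    o = 0.0                                   # open-state fraction
    i_chr = np.zeros(len(irradiance))
    for t, light in enumerate(irradiance):
        # light-driven opening, spontaneous closing
        o += dt * (k_act * light * (1.0 - o) - k_deact * o)
        i_chr[t] = g_max * o * (v_m - e_rev)  # ohmic photocurrent
    return i_chr

# e.g. a 100 ms light pulse: chr_photocurrent(np.r_[np.ones(1000), np.zeros(1000)])
```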

We used this model to compare the orientation-dependent, contrast-invariant cortical response to visual grating stimuli with the response to optical stimulation via the LED array. Although a single LED illuminates a large neural population (> 100 µm diameter), modulation of the illumination across the LED array according to neural orientation preference is sufficient to induce contrast-invariant orientation-tuning curves in the stimulated layer 2/3 neurons. Simulated optogenetically evoked cortical dynamics sharpened the driving illumination pattern due to network effects, thus improving the encoding of orientation-specific information in V1. We are currently incorporating morphological effects into the stimulation model, as ChR-transfected dendritic arbors may constrain the spatial resolution. This will allow us to estimate the limits of spatial resolution for optogenetic stimulation considering uniform as well as compartment-specific ChR distributions.

Acknowledgements

This work received funding from DARPA No. HR0011-17-C-0038, French National Research Agency (ANR-43 Horizontal-V1; ANR-17-CE37-0006), improvement of internationalization in the field of research and development at Charles University (CZ.02.2.69/0.0/0.0/17_050/0008466), and by the European Union Marie Skłodowska-Curie grant agreement No 861423.

References

1. Chen SC, Benvenuti G, Chen Y, Kumar S, Ramakrishnan C, et al. Similar neural and perceptual masking effects of low-power optogenetic stimulation in primate V1. bioRxiv. 2021 Jan 1.

2. Chen X, Wang F, Fernandez E, Roelfsema PR. Shape perception via a high-channel-count neuroprosthesis in monkey visual cortex. Science. 2020 Dec 4;370(6521):1191–6.

3. Luboeinski J, Tchumatchenko T. Nonlinear response characteristics of neural networks and single neurons undergoing optogenetic excitation. Network Neuroscience. 2020 Sep 1;4(3):852–70.

4. Antolik J, Sabatier Q, Galle C, Frégnac Y, Benosman R. Assessment of optogenetically-driven strategies for prosthetic restoration of cortical vision in large-scale neural simulation of V1. Scientific reports. 2021 May 24;11(1):1–8.

P6 Biophysically realistic neural-network models of auditory neurons and synapses for neuroscience and machine-hearing applications

Sarah Verhulst 1 , Fotios Drakopoulos 1 , Arthur Van Den Broucke 1 , Deepak Baby 1

1 University of Ghent, Dept of Information Technology, Ghent, Belgium

Email: s.verhulst@ugent.be

Computational models of auditory processing were essential in shaping our modern-day theory of neural sound encoding and have accelerated the development of personalized treatments for the hearing impaired. Classically, analytical models of auditory neurons and synapses are derived from experimental transfer functions derived from neuronal recordings. This has resulted in a variety of models, typically formulated using multi-branch Hodgkin-Huxley or coupled ODE systems comprising a number of nonlinearities. The more accurate the models are in capturing the nonlinear and adaptation properties of the biophysical system, the more computationally expensive they become. While detailed and realistic analytical models are essential to relate biophysical properties and parameters directly to their functional impact on neuronal processing, their computational load has limited their uptake in large scale brain simulation systems (e.g., for sound perception) or in methods for augmented hearing. The latter applications typically resort to faster – but biophysically less accurate – model units or adopt machine learning to map sound to output features. It is clear that both fast (machine-learning) and slow (analytical, biophysical) approaches have their benefits, but for neuroscience purposes it is essential that experimental neuroscientific advances can easily be cast into incrementally improved encoding models that generalize beyond a single experiment.

Here, we present a hybrid computational-neuroscience and machine-learning approach to develop biophysically realistic convolutional neural network (CNN) descriptions of auditory neurons and synapses that predict their classical neuroscientific properties (nonlinearities, adaptation time-constants, frequency characteristics). To this end, we adopted state-of-the-art analytical model descriptions of auditory neurons/synapses to generate a training data set of neuronal responses to a speech corpus. Those simulations were used to train a CNN (L1 loss), whose performance was benchmarked by predicting the outcomes of six classical auditory neuroscience experiments (using unseen, non-speech stimuli). We used the benchmarks to optimize the hyperparameters of the initial CNN architecture (layer numbers, filter length, activation type, context) in a principled way. This yielded CNN model predictions that conform to the experimental observations. Because we successfully applied our method to a range of analytical neuron/synapse models with various degrees of complexity, we could derive a method to select an appropriate initial CNN architecture based on the receptive field and estimated adaptation time-constant of the to-be-modelled neuron/synapse. We required 3 to 14 encoder layers to sufficiently capture the neuroscientific properties of the giant axon, cochlea, inner hair cell or auditory-nerve-fiber synapse. Based on these minimally required model sizes, we conclude that machine-hearing systems that aim to maintain a relation to the underlying biophysical process need to be modular and of considerable size. Nonetheless, our CNN model units have clear advantages over their analytical counterparts, in that they are differentiable for back-propagation purposes (e.g., for hearing-aid algorithm design) and can be parallelized for GPU computing of large-scale neuronal population models (e.g., behavior, evoked responses) to accelerate neuroscience discoveries.
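
A minimal PyTorch sketch of one such CNN unit trained with an L1 loss is given below; the layer count, channel sizes, and kernel length are placeholders, and the benchmark-driven hyperparameter optimization is not shown:

```python
import torch

class AuditoryCNN(torch.nn.Module):
    """1-D convolutional encoder mapping a sound waveform to a simulated
    neuronal response, one output per time step."""
    def __init__(self, n_layers=6, channels=64, kernel=17):
        super().__init__()
        layers, c_in = [], 1
        for _ in range(n_layers):
            layers += [torch.nn.Conv1d(c_in, channels, kernel, padding=kernel // 2),
                       torch.nn.Tanh()]
            c_in = channels
        layers.append(torch.nn.Conv1d(c_in, 1, 1))  # linear readout
        self.net = torch.nn.Sequential(*layers)

    def forward(self, wav):  # wav: (batch, 1, time)
        return self.net(wav)

model = AuditoryCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
wav = torch.randn(8, 1, 2048)     # placeholder speech snippets
target = torch.randn(8, 1, 2048)  # placeholder analytical-model responses
loss = torch.nn.functional.l1_loss(model(wav), target)
opt.zero_grad(); loss.backward(); opt.step()
```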

Research support

ERC 678120 and FWO 1SB4421N.

P7 Stress-induced changes on CRH neurons and homeostatic response at the paraventricular nucleus of the hypothalamus

Ewandson L. Lameu 1 , Neilen Rasiah 2 , Jaideep S. Bains 2 , Wilten Nicola 1

1 University of Calgary, Cell Biology and Anatomy Department, Calgary, Canada

2 University of Calgary, Department of Physiology & Pharmacology, Calgary, Canada

Email: ewandsonluiz.lameu@ucalgary.ca

Dealing with stress is part of our daily lives. The continuous effect of stress can modify our behavior and promote long-term changes in synapses and neuronal structure. The hypothalamus is the brain area responsible for maintaining the body’s homeostasis. In a stress situation, the Paraventricular Nucleus of the Hypothalamus (PVN) is activated. The response works in a cycle where threats stimulate the activity of corticotropin-releasing hormone (CRH) neurons of the PVN. CRH neurons release CRH, which acts on the pituitary gland, stimulating the production of adrenocorticotropic hormone (ACTH), which in turn triggers the adrenal glands to release cortisol into the bloodstream [1].

In this work, we try to better understand how CRH neurons and synapses change in response to long-term glucocorticoid (CORT) exposure. The experiment consists of administering CORT via the drinking water for 7 days. This procedure elevates circulating CORT without introducing the confounds associated with stressing the animals. After this period, the animals were sacrificed and current-clamp recordings from CRH neurons were performed in vitro (Fig. 1A). Additionally, miniature excitatory postsynaptic currents (mEPSCs) were recorded to evaluate synaptic changes induced by stress. Control recordings were collected from animals that did not receive CORT. The experiments showed that neurons under CORT treatment presented a decrease in their activity rate but, unexpectedly, an increase in their synaptic input currents (Fig. 1B).

Based on these preliminary results of decreased firing rate and increased synaptic amplitude, we hypothesized that the network undergoes homeostasis. To address this question, we built a computational network of model neurons with intrinsic and synaptic characteristics modeled after CRH neurons and tested the conditions for homeostasis by comparing the firing rate ratio between CORT and Control networks. To do so, we first proposed a modified integrate-and-fire neuron model [2] and used an optimization algorithm [3] to fit the neuronal current-clamp recordings (Fig. 1A). The algorithm searches the parameter space for the set of model parameters that best reproduces the real neuron voltage traces from the CORT and Control groups. Additionally, the synaptic currents from the mEPSC recordings were fitted with a double exponential function [4], extracting pulse features such as amplitude, rise time, and decay time. Simulations of the networks of fitted neurons showed that homeostasis can be achieved as the EPSC frequency increases (Fig. 1C) for different simulation conditions. Therefore, despite the decreased firing rate of isolated CORT neurons, at the network level the CORT synaptic currents counterbalance it, keeping the network firing rate at the same level as in the Control network. Our results show that precise adjustments in CRH neurons can exactly counterbalance the synaptic plasticity induced by stress to maintain homeostasis.
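
For orientation, the base model [2] is the adaptive exponential integrate-and-fire (AdEx) neuron; a minimal simulation with generic textbook parameters (not the fitted CRH values, and without the modifications described above) is:

```python
import numpy as np

def adex(I, dt=0.1, C=200.0, gL=10.0, EL=-70.0, VT=-50.0, dT=2.0,
         a=2.0, b=60.0, tau_w=150.0, Vr=-58.0, Vpeak=0.0):
    """AdEx neuron driven by current trace I. Units: pF, nS, mV, pA, ms.
    Returns the voltage trace and the spike times."""
    V, w = EL, 0.0
    vs, spikes = [], []
    for i, i_ext in enumerate(I):
        dV = (-gL * (V - EL) + gL * dT * np.exp((V - VT) / dT) - w + i_ext) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:  # spike: reset voltage and increment adaptation
            V = Vr
            w += b
            spikes.append(i * dt)
        vs.append(V)
    return np.array(vs), spikes

# e.g. a 500 ms, 500 pA current step as in a current-clamp protocol
v, sp = adex(np.r_[np.zeros(1000), 500.0 * np.ones(5000), np.zeros(1000)])
```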

References

1. Füzesi T, Daviu N, Cusulin JI, Bonin RP, Bains JS. Hypothalamic CRH neurons orchestrate complex behaviours after stress. Nature communications. 2016 Jun 16;7(1):1–4.

2. Brette R, Gerstner W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of neurophysiology. 2005 Nov;94(5):3637–42.

3. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of ICNN'95 – International Conference on Neural Networks, Perth, WA, Australia. IEEE; 1995. Vol. 4, pp. 1942–1948.

4. Rothman JS, Silver RA. Data-driven modeling of synaptic transmission and integration. Progress in molecular biology and translational science. 2014 Jan 1;123:305–50.

Fig. 1

A Left: example fit (blue) of a recorded neuron voltage trace (black) from a current-clamp experiment (schematic on the right). B Left: mean firing rate observed for each current applied in the current-clamp recordings; right: amplitude distributions of mEPSCs from Control and CORT samples. C Left: raster plots of both networks for different EPSC frequencies; as the EPSC frequency increases, the activity of the two networks becomes more similar. Right: steady-state peristimulus time histogram (PSTH) ratio of the Control network over the CORT network; the dashed black line marks where homeostasis occurs. The neurons in the networks receive two different currents: an independent synaptic current (EPSC) representing the inputs a neuron receives from all its synapses, whose pulses are generated by a Poisson process with controllable frequency, and a constant external current I that is the same for all neurons

P8 Emergence and propagation of asynchronous states of spontaneous cortical activity

Roman Arango 1 , Pedro Mateos-Aparicio 2 , Maria V. Sanchez-Vives 3 , Emili Balaguer-Ballester 1

1 Bournemouth University, Department of Computing and Informatics, Bournemouth, United Kingdom

2 IDIBAPS, Systems Neuroscience, Barcelona, Spain

3 Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), ICREA, Systems Neuroscience, Barcelona, Spain

Email: rcabrera@bournemouth.ac.uk

Slow oscillations (SO) of neural activity emerge spontaneously in the neocortex in anatomically (e.g., cortical slices or cortical lesions) or functionally (e.g., anaesthesia, slow-wave sleep) disconnected states. The SO consist of the alternation (ca. 1 Hz) of high-responsive (Up) and low-responsive (Down) periods that propagates spatio-temporally as a travelling wave, thereby revealing properties of the underlying cortical network [1]. Although the SO are a stable attractor, the network can be driven into richer dynamical states by neuromodulation, inducing, e.g., the transition from sleep to wakefulness. How such a globally synchronized state gives rise to the largely decorrelated awake state is yet to be elucidated, and in particular, how the emergence of asynchrony is spatially orchestrated by the local network.

Here we investigated this near-asynchronous regime by developing time-series analyses applied to extracellular multielectrode-array recordings from acute slices. The slices exhibited robust SO and were then subjected to neurochemical modulations aimed at eliciting a desynchronized, awake-like state (AS) [2].

We devised a new statistical procedure for decomposing the AS regime into synchronous and asynchronous periods. Our results show that asynchronous states of uneven durations are interspersed among abrupt surges of 1–2 Hz oscillations [3]. These surges consist of Up- and Down-like states in a sort of excited SO state that shares many of the SO's hallmark features, albeit more locally coordinated.

Population firing rates of local neuronal ensembles were captured by an energy-preserving estimate of the multi-unit activity (MUA); locally sampled probability densities of MUA thus reflect the state of the network at different scales. A statistical-distance-based clustering of the MUA densities reveals the emergence, at the most excited states, of particular spatial patterns that follow the laminar structure of the slice. In stark contrast, normalised power spectra of AS MUA proved most similar for channels lying on the same cortical column, independently of the layer.
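A minimal sketch of the statistical-distance-based clustering step is given below: per-channel MUA samples are converted to probability densities and clustered by their pairwise distances. The channel data, bin count, and choice of the Jensen-Shannon distance as the statistical distance are assumptions for illustration, not the study's pipeline.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Stand-in MUA samples for 32 channels: two groups with different statistics
mua = np.vstack([rng.gamma(2.0, 1.0, (16, 10_000)),
                 rng.gamma(5.0, 1.0, (16, 10_000))])

bins = np.linspace(0, mua.max(), 60)
dens = np.array([np.histogram(ch, bins=bins)[0] for ch in mua], dtype=float)
dens /= dens.sum(axis=1, keepdims=True)        # normalise to probabilities

n = dens.shape[0]
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = jensenshannon(dens[i], dens[j])

labels = fcluster(linkage(squareform(dist), method="average"),
                  t=2, criterion="maxclust")
print(labels)                                   # recovers the two channel groups
```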

Although the synchronous/asynchronous balance varies from one column to the other, the inter-channel co-occurrence of asynchronous states is spatially correlated, suggesting that the switch from synchrony to asynchrony acts as a propagating wavefront.

Overall, our novel methodology reveals how, on the brink of wakefulness, an excited SO-like activity coexists with periods of asynchrony. Their spatio-temporal interplay depends on the cortical network structure: the firing-rate intensity is dictated by the layer [4], the column sustains the oscillation, and the alternation between synchrony and asynchrony propagates across the whole slice.

Acknowledgments

Funded by EU H2020 Research and Innovation Programme, Grant No. 945539 (HBP SGA3).

References

1. Sanchez-Vives MV, Massimini M, Mattia M. Shaping the default activity pattern of the cortical network. Neuron. 2017 Jun 7;94(5):993–1001.

2. Barbero-Castillo A, Mateos-Aparicio P, Dalla Porta L, Camassa A, Perez-Mendez L, et al. Impact of GABAA and GABAB inhibition on cortical dynamics and perturbational complexity during synchronous and desynchronized states. Journal of Neuroscience. 2021 Jun 9;41(23):5029–44.

3. Tort-Colet N, Capone C, Sanchez-Vives MV, Mattia M. Attractor competition enriches cortical dynamics during awakening from anesthesia. Cell Reports. 2021 Jun 22;35(12):109270.

4. Senzai Y, Fernandez-Ruiz A, Buzsáki G. Layer-specific physiological features and interlaminar interactions in the primary visual cortex of the mouse. Neuron. 2019 Feb 6;101(3):500–13.

P9 Influence of the connectivity on the synchronization of two coupled neuronal networks

Paulo R. Protachevicz 1 , Matheus Hansen 2 , Kelly Iarosz 3 , Iberê Caldas 1 , Antonio Batista 4

1 University of São Paulo, Institute of Physics, São Paulo, Brazil

2 Federal University of São Paulo, Computer Science Department, São José dos Campos, Brazil

3 Faculdade de Telêmaco Borba, Engineering Department, Telêmaco Borba, Brazil

4 State University of Ponta Grossa, Department of Mathematics and Statistics, Ponta Grossa, Brazil

Email: protachevicz@gmail.com

Understanding how the brain synchronizes is a fundamental question in neuroscience. The main types of synchronization found between brain regions are in-phase, anti-phase, and shifted-phase synchronization. In phase synchronization, neurons of the two regions fire at the same time; this has been observed, for instance, during memory, cognition, and motor coordination. In anti-phase synchronization, the neurons are synchronized within each region, but the two regions alternate their phases symmetrically in time. In shifted-phase synchronization, the phase relations are not symmetric. All of these types of synchronization have been identified in mammalian brains. Since connectivity can be related to neuronal firing patterns, we investigate how chemical connections between two networks can be associated with the appearance of these different types of synchronization. Our results suggest that excitatory and inhibitory connections arriving at excitatory and inhibitory neurons play specific roles in the occurrence of synchronized firing patterns.
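As a hedged illustration of how such phase relations can be quantified, the sketch below extracts instantaneous phases of two population signals with the Hilbert transform and classifies their circular-mean phase lag as in-phase, anti-phase, or shifted-phase. The signals and the classification thresholds are illustrative assumptions, not the analysis used in the study.

```python
import numpy as np
from scipy.signal import hilbert

fs, f = 1000.0, 8.0                  # sampling rate (Hz), rhythm frequency (Hz)
t = np.arange(0, 5, 1 / fs)
x1 = np.sin(2 * np.pi * f * t) + 0.1 * np.random.randn(t.size)
x2 = np.sin(2 * np.pi * f * t + np.pi) + 0.1 * np.random.randn(t.size)

phi = np.angle(hilbert(x1)) - np.angle(hilbert(x2))
dphi = np.angle(np.mean(np.exp(1j * phi)))   # circular mean of the phase lag

if abs(dphi) < np.pi / 8:
    relation = "in-phase"
elif abs(abs(dphi) - np.pi) < np.pi / 8:
    relation = "anti-phase"
else:
    relation = "shifted-phase"
print(f"mean phase lag {dphi:.2f} rad -> {relation}")
```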

P10 A physiologically realistic computational model of the basal ganglia network

Nathalie Azevedo Carvalho 1 , Laure Buhry 2 , Dominique Martinez 3 , Sylvain Contassot-Vivier 4 , Jérôme Baufreton 5

1 INRIA Grand-Est, Computational Neuroscience, Nancy, France

2 Université de Lorraine, LORIA UMR CNRS 7503, Nancy, France

3 CORTEX, CNRS / PISC, INRA, Vandoeuvre-lès-Nancy, France

4 Université de Lorraine, CNRS, LORIA, Computer Science, Nancy, France

5 Université de Bordeaux, Institut des Maladies Neurodégénératives, CNRS UMR 5293, Neuroscience, Bordeaux, France

Email: nathalie.azevedo-carvalho@inria.fr

The basal ganglia (BG) are a set of nuclei that process movement information: they refine and adjust simple movement actions. The BG comprise two major pathways: the indirect pathway through the striatum (STR) and the hyperdirect pathway through the subthalamic nucleus (STN). The external globus pallidus (GPe) is the nucleus connecting the two pathways. The STR inhibits the GPe and the STN excites it; the GPe itself is divided into two types of neurons [1,2], the prototypical (GPeP) and the arkypallidal (GPeA). This discovery allows for a better understanding of the functioning of this neural network. We model the STN-GPeA-GPeP-STR (D2) network and study the influence of the nuclei on each other, as in [3] (see Fig. 1A). The neurons are modeled as point neurons using the Hodgkin-Huxley formalism and the synapses as exponential functions. From extensive simulations performed with the SiReNe software (a neural network simulator; in French, Simulateur de Réseaux de Neurones [4]), we show that our network is in good agreement with the physiological results of [3]. The simulator is based on a hybrid method combining time-step and event-driven computations, with a Runge–Kutta 2 numerical method at the inner level. The GPe is mainly inhibited by GABAergic inputs from the STR, and we study the impact of STR connectivity on the GPe. We observe that the GPeP and GPeA react in opposite ways when the STR is activated, i.e., the GPeP is entirely inhibited whereas the GPeA and STN are completely excited, as observed in [3] (see Fig. 1B, C). This work aims at a better understanding of the synaptic connectivity scheme. The model will allow us to test hypotheses regarding pathological rhythmogenesis in Parkinson's disease, at both the cellular and connectivity levels.
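The event-driven side of such a hybrid scheme can be sketched as follows: an exponential synaptic conductance is updated analytically only at spike events, while the membrane equations would be advanced with fixed-step RK2 in between. Parameter values are illustrative; this is a sketch of the general technique, not the SiReNe implementation.

```python
import numpy as np

tau_syn = 5.0          # ms, synaptic decay time constant (assumed)
w = 0.3                # conductance increment per spike (arbitrary units)

def g_at(t, t_last, g_last):
    """Analytic decay of the conductance from the last event to time t."""
    return g_last * np.exp(-(t - t_last) / tau_syn)

spike_times = [2.0, 3.5, 10.0, 11.0]   # presynaptic events (ms), assumed
t_last, g_last = 0.0, 0.0
for ts in spike_times:
    g_last = g_at(ts, t_last, g_last) + w   # decay to the event, then jump
    t_last = ts
    print(f"t={ts:5.1f} ms  g={g_last:.3f}")
```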

References

1. Abdi A, Mallet N, Mohamed FY, Sharott A, Dodson PD, et al. Prototypic and arkypallidal neurons in the dopamine-intact external globus pallidus. Journal of Neuroscience. 2015 Apr 29;35(17):6667–88.

2. Mallet N, Micklem BR, Henny P, Brown MT, Williams C, et al. Dichotomous organization of the external globus pallidus. Neuron. 2012 Jun 21;74(6):1075–86.

3. Aristieta A, Barresi M, Lindi SA, Barriere G, Courtand G, et al. A disynaptic circuit in the globus pallidus controls locomotion inhibition. Current Biology. 2021 Feb 22;31(4):707–21.

4. Carvalho NA, Contassot-Vivier S, Buhry L, Martinez D. Simulation of large scale neural models with event-driven connectivity generation. Frontiers in neuroinformatics. 2020;14.

Fig. 1

A Basal ganglia model STN-GPeA-GPeP-STR(D2) with its connectivity. B, C Comparison of biological and simulated data: firing rates between 1–7 s, with STR-D2 activated by a stimulation between 3–5 s; GPeP and GPeA react in opposite ways. B Biological data reproduced from Aristieta et al., courtesy of the authors. C Simulated data

P11 Learning in the inhibitory network of the honeybee antennal lobe

Shruti Joshi 1 , Seth Haney 1 , Zhenyu Wang 2 , Fernando Locatelli 3 , Yu Cao 4 , Brian Smith 5 , Maxim Bazhenov 1

1 University of California San Diego, School of Medicine, San Diego, CA, United States of America

2 Arizona State University, Electrical Engineering, Phoenix, AZ, United States of America

3 University of Buenos Aires, Buenos Aires, Argentina

4 Arizona State University, School of ECEE, Phoenix, AZ, United States of America

5 Arizona State University, School of Life Sciences, Phoenix, AZ, United States of America

Email: s4joshi@eng.ucsd.edu

A honeybee in search of food locates nectar-producing flowers using floral aromas composed of many volatile compounds. However, nectar-producing and non-producing floral odors contain many of the same compounds, so the honeybee faces a challenging task in mapping chemical sensing onto reward prediction. This is further complicated by the fact that nectar production may change from season to season and environment to environment, requiring the olfactory system to learn and relearn the association of reward with variable blends of volatile compounds. In this new study, we examine the mechanisms underlying the creation and modification of neural representations of natural odor blends in the early olfactory system – the antennal lobe (AL) – using a combination of computational modeling and Ca2+ imaging of the honeybee AL in vivo. Based on previous immunological labeling showing octopamine receptors (which modulate reward) co-localized with GABA receptors [1], we modeled plasticity in the inhibitory AL network. Following our previous modeling work [2], rewarded odors caused GABA facilitation based on presynaptic firing rates, and non-rewarded odors caused GABA facilitation based on postsynaptic firing. We found that this inhibitory plasticity was sufficient to create many of the changes seen in vivo, including the shifting of odor-mixture representations due to reward, the adaptation to many unrewarded odor presentations, and changes in the representations of complex blends. Importantly, our model learned to discriminate between complex odor blends by expanding coding space along the dimensions that were maximally discriminatory (Fig. 1A), as has been observed in vivo. Our model further predicted that the cells representing chemical compounds common to both rewarded and non-rewarded odors face increased inhibition from both associative and non-associative plasticity. This combined action diminished the superfluous components while increasing the discriminatory components of the neural code (Fig. 1C). This prediction was then verified in vivo by examining Ca2+ imaging data (Fig. 1B, D): glomeruli that were common to many odor blends were suppressed by training, and those that were unique to a single odor blend were enhanced. Analysis of a black-box graphical convolutional neural network revealed a pattern of relationships between odor percepts similar to that learned in the biophysical model. Our model demonstrates a learning paradigm in which the inhibitory network reshapes coding space to suit the current task and environment. These findings suggest an efficient computational strategy for perceptual learning of complex natural odors through modification of the inhibitory network.
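A minimal sketch of this rate-based inhibitory plasticity is given below: GABAergic weights from local neurons (LNs) onto projection neurons (PNs) facilitate with presynaptic LN rate for rewarded odors and with postsynaptic PN rate for unrewarded odors. Population sizes, learning rate, and the weight bound are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ln, n_pn = 10, 20
W = rng.uniform(0.0, 0.1, size=(n_pn, n_ln))   # inhibitory weights (LN -> PN)

def update(W, r_ln, r_pn, rewarded, eta=1e-3, w_max=1.0):
    """GABA facilitation: pre-rate-driven if rewarded, post-rate-driven if not."""
    if rewarded:
        dW = eta * r_ln[None, :]               # presynaptic-rate facilitation
    else:
        dW = eta * r_pn[:, None]               # postsynaptic-rate facilitation
    return np.clip(W + dW, 0.0, w_max)

r_ln = rng.poisson(5.0, n_ln).astype(float)    # LN rates for one odor (Hz)
r_pn = rng.poisson(8.0, n_pn).astype(float)    # PN rates for the same odor
W = update(W, r_ln, r_pn, rewarded=True)
```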

Fig. 1

Learning in the inhibitory network of the AL expands the maximally discriminatory dimension. A PCA of projection neuron (PN) firing rates for odors before and after training. B Ca2+ imaging of the honeybee AL (reproduced from [3]). C Learned model weights from inhibitory local neurons to excitatory PNs in the AL. D Scatterplot comparing the uniqueness of the represented chemical component with its change due to learning

References

1. Sinakevitch IT, Smith AN, Locatelli F, Huerta R, Bazhenov M, et al. Apis mellifera octopamine receptor 1 (AmOA1) expression in antennal lobe networks of the honey bee (Apis mellifera) and fruit fly (Drosophila melanogaster). Frontiers in systems neuroscience. 2013 Oct 25;7:70.

2. Chen JY, Marachlian E, Assisi C, Huerta R, Smith BH, et al. Learning modifies odor mixture processing to improve detection of relevant components. Journal of Neuroscience. 2015 Jan 7;35(1):179–97.

3. Locatelli FF, Fernandez PC, Smith BH. Learning about natural variation of odor mixtures enhances categorization in early olfactory processing. Journal of Experimental Biology. 2016 Sep 1;219(17):2752–62.

P12 Role of sleep in formation of indirect memory associations

Timothy Tadros 1 , Oscar Gonzalez 2 , Maxim Bazhenov 2

1 University of California, San Diego, Neuroscience Graduate Program, La Jolla, CA, United States of America

2 University of California, San Diego, Department of Medicine, La Jolla, CA, United States of America

Email: o2gonzalez@ucsd.edu

Relational memory, the ability to make and remember associations between objects, is an essential component of mammalian reasoning. In relational memory tasks, it has been shown that periods of offline processing, such as sleep, are critical to improving one’s ability to make indirect associations or transitive inferences. For example, one may learn to associate two items (A and B) and later learn to associate B with a new element (item C) during the wake state. After briefly learning these associations, a subject can recall item B (the “linking” item) when presented with item A or C, but is less adept at recalling item C when presented with item A. Behavioral research has shown that the duration of slow-wave sleep (SWS) following such training is significantly correlated with the subject’s ability to recall item C when presented with item A, highlighting the importance of SWS in developing relational memory. Despite this behavioral evidence, we know little about the mechanisms of sleep that give rise to improved relational memory, or about the brain network changes that occur during sleep to support it.

Based on the empirical evidence that sleep improves relational memory, in this new study we built a Hodgkin-Huxley-based model of the thalamo-cortical network to understand how SWS can lead to improvements in an unordered relational memory task. The cortical part of the network was composed of two layers, each including excitatory and inhibitory neurons; the first layer represented the perception of an individual object (e.g., visual cortex) and the second layer represented higher-order processing (e.g., associative cortex). Feedforward connections from the first layer to the second layer and recurrent connections within the second layer were plastic and modified through spike-timing dependent plasticity (STDP) rules during training and sleep. Other connections, e.g., thalamocortical and cortical feedback connections, were fixed in alignment with biophysical data.
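For concreteness, a generic pair-based STDP kernel of the kind used for the plastic connections might look like the sketch below; the amplitudes and time constants are standard textbook assumptions, not the parameters of this model.

```python
import numpy as np

A_plus, A_minus = 0.005, 0.0055     # potentiation / depression amplitudes (assumed)
tau_plus, tau_minus = 20.0, 20.0    # STDP time constants (ms, assumed)

def stdp_dw(dt):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:
        return A_plus * np.exp(-dt / tau_plus)      # pre before post: LTP
    return -A_minus * np.exp(dt / tau_minus)        # post before pre: LTD

for dt in (-30.0, -5.0, 5.0, 30.0):
    print(f"dt={dt:+6.1f} ms  dw={stdp_dw(dt):+.5f}")
```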

The model was first trained on a paired-associate inference task, where four pairs of items (e.g., A+B, B+C, X+Y, Y+Z) were presented to the network. After this associative training phase, the model was able to recall direct associations (e.g., A→B, B→C) learned during the waking state. However, the indirect relational association (e.g., A→C) was not learned or was unreliable. After a period of SWS, the model’s ability to recall these indirect associations was significantly improved, highlighting the importance of SWS in relational memory. In agreement with empirical data, we found that the duration of SWS significantly correlates with the improvement in relational memory after sleep. Importantly, we found that replay during sleep increases synaptic connections between neurons representing the linking (common) item (B) and neurons representing the unlinked items (A or C). This change in synaptic connectivity led to a greater ability to recall an unlinked item (e.g., C) when its indirect pair (item A) was presented. Our study predicts that sleep can reactivate pathways between the linking item and the unlinked objects in order to form indirect associations. In addition, we predict that inactivating the neurons that represent the linking item (B), for example through optogenetics, may destroy the subject’s ability to perform indirect associative recall.

Acknowledgments

Supported by NIMH (1R01MH125557).

P13 Simulating sleep spindles and slow oscillations EEG and MEG in a multiscale thalamocortical network model with hierarchical connectivity

Yury Sokolov 1 , Burke Rosen 2 , Jean Erik Delanois 3 , Oscar Gonzalez 4 , Giri Krishnan 4 , Eric Halgren 5 , Maxim Bazhenov 4

1 University of California, San Diego, La Jolla, CA, United States of America

2 University of California, San Diego, Neuroscience, La Jolla, CA, United States of America

3 University of California, San Diego, Computer Science, La Jolla, CA, United States of America

4 University of California, San Diego, Medicine, La Jolla, CA, United States of America

5 University of California, San Diego, Departments of Radiology and Neuroscience, La Jolla, CA, United States of America

Email: ysokolov@ucsd.edu

Magneto- and electro-encephalography (M/EEG) are complementary, non-invasive imaging techniques used to measure macroscopic brain activity. However, how the microscopic activity of ion channels gives rise to these electromagnetic signals is yet to be fully understood. To address this question, we developed a multi-scale thalamocortical network model that exhibits the characteristic activity states of NREM sleep: sleep spindles and slow oscillations. We incorporated the main organizational principles of cortical connectivity into our network model. First, the connection probability between a pair of cortical cells decayed exponentially with the diffusion-MRI-derived white-matter tract distance between the pair. Second, a hierarchical index, inversely proportional to the region's average myelination, was assigned to each functional region. Third, inter-areal feedforward and feedback connections were pruned into distinct, laminarly separated counter-streams, distinct from local connectivity. The synaptic delays and synaptic strengths were derived from the dMRI distances and laminar patterns. We examined the role of synaptic delays in the propagation of spindles and slow oscillations, and found that the characteristic travelling-wave structure is preserved even for the relatively long delays consistent with long-range connectivity in the human brain. We compared the spatiotemporal patterns of mesoscale cortical correlation structure in simulated and empirical data. By embedding the simulated cortical currents in a volume-conduction model of the head, we produced simulated M/EEG, enabling investigations of how multi-scale dynamics and detailed connectivity give rise to these complex signals.
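The first principle can be sketched as an exponential distance rule: connection probability decays exponentially with tract distance. In the sketch below, the distance matrix, length constant, and baseline probability are placeholders for the dMRI-derived quantities described above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200                                   # number of cells (assumed)
# Stand-in for white-matter tract distances between cell pairs (mm):
pos = rng.uniform(0, 50, size=(n, 1))
dist = np.abs(pos - pos.T)

lam = 10.0                                # length constant (mm), assumed
p0 = 0.3                                  # connection probability at zero distance
p_conn = p0 * np.exp(-dist / lam)         # exponential distance rule
adj = rng.random((n, n)) < p_conn
np.fill_diagonal(adj, False)              # no self-connections
print(f"mean connection probability: {adj.mean():.3f}")
```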

Acknowledgements

Supported by grants from NIMH (RF1MH117155) and NINDS (R01NS10955).

P14 Hippocampal indexing promotes cortical consolidation without interference by altering the stability landscape of synaptic weight space

Ryan Golden 1 , Oscar Gonzalez 2 , Jean Erik Delanois 3 , Maxim Bazhenov 4

1 University of California, San Diego, Neurosciences Graduate Program, San Diego, CA, United States of America

2 University of California, San Diego, Medicine, San Diego, CA, United States of America

3 University of California, San Diego, Computer Science, San Diego, CA, United States of America

4 University of California, San Diego, San Diego, CA, United States of America

Email: rygolden@ucsd.edu

Human and animal brains must continuously encode and assimilate new memories to appropriately guide behavior in a constantly changing environment. Although advances in computational neuroscience and machine learning have made significant steps towards a mechanistic understanding of biological learning, existing models fall short by suffering from severe retroactive or catastrophic interference (an overwriting of older, competing memories) when presented with novel information to encode. One possible explanation is that most existing models only attempt to capture the learning processes that occur during awake behavior, ignoring the consolidation processes that occur during sleep. Systems Consolidation Theory posits that the hippocampus rapidly encodes new information during awake behavior; this hippocampal trace is subsequently assimilated into the cortex and further consolidated during sleep. Specifically, the coupling of slow oscillations (SOs) in the cortex and sharp-wave/ripples (SWRs) in the hippocampus is thought to allow the hippocampus to replay recent memories and to index corresponding cortical memory traces, which are then replayed and learned for long-term storage. To understand the details of this coupling, we developed a biophysically realistic thalamocortical network model (Fig. 1A) implementing SWR input and SOs (Fig. 1B). We found that when two competing memories were trained sequentially during the awake state, the model suffered from retroactive interference, forgetting the old memory trace. However, interference could be avoided when the competing new memory was embedded in the cortical network by SWRs during sleep (Fig. 1C-E). More surprisingly, we observed that hippocampal indexing qualitatively changes the dynamics of consolidation during sleep by (1) giving rise to an autonomous learning-rate decay, and (2) altering the stability landscape of synaptic weight space (Fig. 1F). The former allows the network to consolidate the new memory in a self-stabilizing manner, while the latter prevents retroactive interference by restricting consolidation dynamics to a subspace of synaptic weight space, namely the solution manifold of the older memory.

Fig. 1

A Schematic of the basic circuitry of the thalamocortical network model. PY/IN neurons are excitatory and inhibitory cortical neurons (respectively). TC/RE neurons are excitatory thalamocortical neurons and inhibitory reticular neurons. HP indicates simulated hippocampal input to PY neurons. Excitatory connections are demarcated by lines terminating in horizontal bars and inhibitory connections by lines terminating in dots. B Raster plots showing example activity during slow-wave sleep in the model (top). Bottom plots show cortical LFPs filtered in the slow-wave (0.2–2 Hz) and spindle (7–15 Hz) frequency bands, showing the presence of both waveforms and the coupling of slow waves and spindles in the model. C Heatmap showing activity during a typical simulation. The y-axis indicates PY neuron index and the x-axis is time; color indicates membrane voltage. Labels on top: T, testing period; S1, training of the S1 memory during wake; S1* HP Input, period during slow-wave sleep when hippocampal indexing of the S1* memory trace is applied at the down-to-up transition. D Left: heatmap example of training pulses during S1 wake training. Right: example Up state with black boxes demarcating the timing of S1* indexing. E Performance on a pattern-completion task before training (baseline), after S1 training, and after sleep (red) for both S1 and S1* (left and right, respectively). F First 2 PCs of the synaptic weight space of the network. Colored regions indicate regions within PC space representing weight configurations that support memory S1 only (blue), S1* only (red), both (purple), or neither memory (gray). Arrows indicate the weight dynamics during sleep without hippocampal indexing (white) and with indexing (black)

P15 A novel meta-analytic web application for multimodal neuroscientific data integration and analysis

Krishna Praneeth Kilambi 1 , Matteo Cantarelli 2 , Chandrashekar Mysore Subbakrishna Dikshit 3 , Paolo Bazzigaluppi 4 , Filippo Ledda 2 , Afonso Pinto 5 , Lucas Rebscher 6 , Jesus Martinez 7 , Peter Tamajong 1 , Mollie Ullman-Cullere 1 , Brinda Banerjee 1 , Stephen Larson 2 , Kevin Hallock 1 , Peter Bergethon 1

1 Biogen Inc, Cambridge, MA, United States of America

2 MetaCell LLC, Cambridge, MA, United Kingdom

3 Biogen Inc, Biogen digital health, Cambridge, MA, United States of America

4 MetaCell LLC, Toronto, Canada

5 MetaCell LLC, Porto, Portugal

6 MetaCell LLC, Berlin, Germany

7 MetaCell LLC, San Diego, United States of America

Email: krishnapraneeth.kilambi@biogen.com

The number of patients with neurodegenerative disorders is expected to quadruple in the next 50 years, bringing the cost associated with patient care to approximately $2 trillion in 2030 [1]. The development of disease modifying therapies is hindered by the high costs of the drug development process, which is in part due to the long duration of the clinical trials. Total costs for the development of an Alzheimer’s Disease drug are estimated at $5.6 billion over 13 years [2], with phase 2/3 clinical trials taking up approximately half of that time [3]. One reason for the need for long clinical trials to test neurodegenerative disorder therapies is the large inter-subject variability due to heterogeneity in the diseases’ pathobiology and progression [4]. Recruiting individuals who share common pathophysiological signatures during the trial enrollment stage will reduce the variability in the response to the treatment leading to shorter trials [5]. Grouping subjects based on multimodal biomedical data is complicated by the diverse data types due to the variety of spatiotemporal scales at which data is collected. To this end, we present the Medicine Graph (MG), a web-based neuroinformatics software application designed for integration of multimodal data and cutting-edge 2D/3D visualizations in a neuroanatomy-based reference system (Fig. 1). MG is built on an integrated graph database platform which allows its users to ingest, organize, and correlate data from clinical trials, public databases, and proprietary multi-omics data. A graph representation of the multimodal data enables the users to explore structure–function relationships across different scales (from molecular to behavioral), and to model the response of the system to pharmacologic manipulations. Currently, MG allows users to visualize human anatomical knowledge from various sources (SNOMED, UBERON, and the Allen Institute) and references their content in MNI space. It also incorporates gene expression data obtained from brain samples of a representative subject and simulated pharmacokinetic data in the cerebrospinal fluid. MG provides the framework for curation, visualization, and annotation of graph-based data, enabling the analysis of relationships between different types of biomedical data to derive novel hypotheses to accelerate drug development. Furthermore, integration of results from clinical assessments and digital/fluid biomarkers will enable users to identify patient groups with common biological signatures for testing of personalized treatments.

References

1. Alzheimer's Disease International. World Alzheimer Report 2015: the global impact of dementia. London, UK: Alzheimer's Disease International; 2015.

2. Scott TJ, O'Connor AC, Link AN, Beaulieu TJ. Economic analysis of opportunities to accelerate Alzheimer’s disease research and development. Annals of the New York Academy of Sciences. 2014 Apr;1313(1):17.

3. Cummings J, Reiber C, Kumar P. The price of progress: Funding and financing Alzheimer's disease drug development. Alzheimer's & Dementia: Translational Research & Clinical Interventions. 2018 Jan 1;4:330–43.

4. Oxford AE, Stewart ES, Rohn TT. Clinical trials in Alzheimer’s disease: a hurdle in the path of remedy. International Journal of Alzheimer’s Disease. 2020 Apr 1;2020.

5. Hampel H, Vergallo A, Caraci F, Cuello AC, Lemercier P, et al. Future avenues for Alzheimer's disease detection and therapy: Liquid biopsy, intracellular signaling modulation, systems pharmacology drug discovery. Neuropharmacology. 2021 Mar 1;185:108081.

Fig. 1

Medicine Graph (MG) tabs showing details of the selected brain region or Node (left), 3D Explorer (center) representing MG Nodes in MNI space, and Graph Explorer (right) which uses a force layout to visualize clusters of Nodes. Anatomical connections are represented as edges between the nodes. Nodes and Pathways can be affected by Events (Function, Disease etc.) according to a mathematical model

P16 A dynamics-based approach to thresholding tractography-based connectomes

Eleanna Kritikaki 1 , Diana Kyriazis 2 , Natasha Sigala 2 , Matteo Mancini 3 , Simon F Farmer 4 , Luc Berthouze 3

1 University of Sussex, School of Engineering and Informatics, Brighton, United Kingdom

2 Brighton and Sussex Medical School, Brighton, United Kingdom

3 University of Sussex, Brighton, United Kingdom

4 University College London, Institute of Neurology, London, United Kingdom

Email: l.berthouze@sussex.ac.uk

Tractography is a widely used technique for studying the relationship between brain structure and function. However, its accuracy is limited by false-positive and false-negative connections [1]. Consensus thresholding, often used to produce representative group connectomes, seeks to reduce these errors by retaining only the links present in a given percentage of the subjects. In the absence of ground truth, guidelines for choosing the threshold often rely on structural considerations as well as specific assumptions [2]. Here, we propose an alternative approach whereby, given a model of neuronal dynamics, the threshold is chosen such that the dynamical behaviour of the group connectome is most representative of the behaviour of the individuals. We use the Kuramoto model of synchronization and characterise the individual and group networks in terms of their metastability, which has been shown to be clinically and behaviourally relevant [3]. Using a dataset of forty structural connectivity matrices constructed by probabilistic tractography (healthy adult cohort), we found that the threshold for which the metastability profile was most representative of that of the individual networks was 42.5%. As thresholds moved away from this minimum, they no longer fitted the average individual metastability profile (Fig. 1, left), despite still falling in the range proposed by [2]. We compared our approach to two other methods: one preserving the connection-length distribution [4], the other retaining the most consistent edges across the cohort [5]. Both connectomes deviated significantly from the best fit (Fig. 1, left). A graph-theoretical analysis using common network metrics suggested that similarity in network structure does not predict similarity in dynamical behaviour. For example, the distance-dependent network, which showed the most similarity in terms of global graph metrics (Fig. 1, right) and close proximity in terms of local graph metrics (Fig. 1, middle), was furthest away in terms of dynamical behaviour. This suggests that relying purely on structure when choosing the threshold may overlook network features of importance to neuronal dynamics. Further work is needed to establish whether these results generalise to different classes of neuronal dynamics. Importantly, however, the proposed method is agnostic to whether deterministic or probabilistic tractography is used.
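A hedged sketch of the dynamical criterion is given below: Kuramoto oscillators are coupled through a stand-in thresholded connectome, and metastability is measured as the standard deviation of the order parameter over time, following [3]. Node count, intrinsic frequencies, and coupling strength are illustrative assumptions, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt, steps = 90, 1e-4, 20_000                    # nodes, time step (s), ~2 s
A = (rng.random((n, n)) < 0.2).astype(float)       # stand-in thresholded connectome
omega = 2 * np.pi * rng.normal(40.0, 2.0, n)       # intrinsic frequencies (rad/s)
K = 5.0                                            # global coupling, assumed

theta = rng.uniform(0, 2 * np.pi, n)
R = np.empty(steps)
for s in range(steps):
    # Kuramoto interaction: sum_j A_ij * sin(theta_j - theta_i)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling)
    R[s] = np.abs(np.exp(1j * theta).mean())       # order parameter R(t)

print(f"metastability = {R[5000:].std():.3f}")     # std of R after transient
```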

References

1. Sotiropoulos SN, Zalesky A. Building connectomes using diffusion MRI: why, how and but. NMR in Biomedicine. 2019 Apr;32(4):e3752.

2. de Reus MA, van den Heuvel MP. Estimating false positives and negatives in brain networks. Neuroimage. 2013 Apr 15;70:402–9.

3. Hellyer PJ, Scott G, Shanahan M, Sharp DJ, Leech R. Cognitive flexibility through metastable neural dynamics is disrupted by damage to the structural connectome. Journal of Neuroscience. 2015 Jun 17;35(24):9050–63.

4. Betzel RF, Griffa A, Hagmann P, Mišić B. Distance-dependent consensus thresholds for generating group-representative structural brain networks. Network neuroscience. 2019 Mar 1;3(2):475–96.

5. Roberts JA, Perry A, Roberts G, Mitchell PB, Breakspear M. Consistency-based thresholding of the human connectome. NeuroImage. 2017 Jan 15;145:118–29.

Fig. 1

Metastability profiles for the group networks considered; shaded area denotes the subjects’ STD (left). Mean KS statistic (STD) for the difference in nodal metrics between each group network and the subjects, with Bonferroni correction for multiple comparisons at the 1% significance level (middle). Z-scores relative to the subject average for the three chosen global metrics (right)

P17 Role of realistic connectivity patterns in shaping learning in the mushroom body

Daniel Zavitz 1 , Elom Amematsro 2 , Sophie Caron 3 , Alla Borisyuk 1

1 University of Utah, Department of Mathematics, Salt Lake City, UT, United States of America

2 Columbia University, Department of Neuroscience, New York City, NY, United States of America

3 University of Utah, School of Biological Sciences, Salt Lake City, UT, United States of America

Email: danielzavitz1@gmail.com

Cerebellum-like structures are found in many brains and share a basic fan-out-fan-in network architecture. How the specific structural features of these networks affect their ability to learn remains largely unknown. Previous theoretical studies have suggested that purely random connections between input neurons and encoding neurons are optimal for associative learning. However, recent experimental studies of the Drosophila melanogaster mushroom body have identified two principal connectivity patterns that deviate from purely random connections. To investigate this structure–function relationship, we developed a four-layer network model of the early Drosophila melanogaster olfactory system, with particular attention paid to the structure of the feedforward excitatory connections from the projection neurons of the antennal lobe to the Kenyon cells of the mushroom body (Fig. 1A). The first connectivity pattern, biases, deviates from the purely random case (Fig. 1Bi) by allowing the likelihoods at which individual projection neurons connect to Kenyon cells to deviate substantially from uniformly random (Fig. 1Bii). The second connectivity pattern, groups, allows projection neurons to connect preferentially to the same Kenyon cells (Fig. 1Biii). Finally, we consider a network class that exhibits both biases and grouping (Fig. 1Biv). We compared the representations of olfactory stimuli generated by the KC layer qualitatively and quantitatively, and we assessed the ability of a network to perform associative learning via a novel, biologically inspired learning rule (Fig. 1C). We find that biases allow the mushroom body to prioritize the learning of particular, ethologically meaningful odors while incurring a minimal loss in overall associative learning ability relative to the optimal, purely random case (Fig. 1D). Second, we find that groups facilitate the mushroom body generalizing learned associations across similar odors while maintaining the ability to discriminate across most odors (Fig. 1E). Altogether, our results demonstrate how different connectivity patterns shape the representation space of a cerebellum-like network and impact its learning outcomes.
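The three connectivity classes can be sketched as different sampling schemes for PN-to-KC connections, as below; PN/KC counts, the number of inputs per KC, and the bias distribution are illustrative assumptions rather than the fitted values.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pn, n_kc, claws = 50, 2000, 6              # counts assumed for illustration

# Unstructured: each KC samples its inputs uniformly from all PNs.
uniform = np.array([rng.choice(n_pn, claws, replace=False)
                    for _ in range(n_kc)])

# Biased: some PNs are much more likely to be sampled than others.
bias = rng.dirichlet(np.ones(n_pn) * 0.5)    # non-uniform sampling probabilities
biased = np.array([rng.choice(n_pn, claws, replace=False, p=bias)
                   for _ in range(n_kc)])

# Grouped: PNs are split into groups, and each KC samples within one group,
# so co-grouped PNs preferentially converge onto the same KCs.
groups = np.array_split(np.arange(n_pn), 5)
grouped = np.array([rng.choice(groups[rng.integers(5)], claws, replace=False)
                    for _ in range(n_kc)])
```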

Fig. 1

A Network model of the Drosophila melanogaster mushroom body. B PN-KC connections are either unstructured (Bi), biased (Bii), grouped (Biii), or both (Biv). C MBONs mediate attraction (+) or repulsion (-). The network is trained and tested on the same set of odors. Di Odor recall accuracy. Dii B1G0 prioritizes ethologically relevant odors. E Generalization across odor representations

P18 Astrocytes can sharpen spatial patterns in neuronal networks by restricting synaptic volume

Gregory Handy 1 , Alla Borisyuk 2

1 University of Chicago, Neurobiology and Statistics, Grossman Center for Quantitative Biology and Human Behavior, Chicago, IL, United States of America

2 University of Utah, Department of Mathematics, Salt Lake City, UT, United States of America

Email: borisyuk@math.utah.edu

Astrocytes are glial cells that make up 50% of brain volume, with each one wrapping around thousands of synapses. However, the exact role astrocytes play in governing the dynamics of synapses and neuronal networks is still being debated. Previous computational modeling work has helped tease out possible mechanisms driving this interaction at the synapse level, with micro-scale models of calcium dynamics [1,2] and neurotransmitter diffusion [3]. Little computational work has been done to understand how astrocytes may influence the spiking patterns and synchronization of large networks, partly because it is computationally infeasible to include the intricate details of this previous work in a network-scale model.

We overcome this issue by first developing an “effective” astrocyte that can easily be incorporated into established network frameworks. We do this by showing that astrocyte proximity to a synapse makes synaptic transmission faster, weaker, and less reliable. Our “effective” astrocytes can thus be incorporated by considering heterogeneous synaptic time constants, parametrized only by the degree of astrocyte proximity at each synapse. This parametrization makes sense in light of experimental evidence showing that the degree of astrocyte ensheathment varies by brain region and is a crucial component in certain disease states, such as some forms of epilepsy [4]. We then apply our framework to a network of 20,000 exponential integrate-and-fire neurons, similar to the one presented by Rosenbaum et al. [5]. Depending on key parameters, such as the number of synapses ensheathed and the strength of this ensheathment, we show that astrocytes can push the network to a synchronous state and can enhance and sharpen the patterns of spatial correlation exhibited by the network.
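A minimal sketch of this parametrization is given below: each synapse's time constant, weight, and release probability are scaled by its degree of astrocyte ensheathment. The specific scaling factors and the ensheathment distribution are assumptions for illustration, not the fitted mapping.

```python
import numpy as np

rng = np.random.default_rng(5)
n_syn = 10_000
ensheathment = rng.beta(2, 5, n_syn)        # 0 = no wrapping, 1 = full wrapping

tau0, w0, p0 = 8.0, 1.0, 0.9                # bare synapse: ms, weight, P(release)
tau = tau0 * (1 - 0.6 * ensheathment)       # faster transmission with wrapping
w = w0 * (1 - 0.5 * ensheathment)           # weaker
p_rel = p0 * (1 - 0.4 * ensheathment)       # less reliable

released = rng.random(n_syn) < p_rel        # per-event stochastic release
print(f"mean tau {tau.mean():.2f} ms, mean weight {w.mean():.2f}, "
      f"release rate {released.mean():.2f}")
```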

Acknowledgements

This work was supported by National Science Foundation grant NSF-DMS-1853673, the Swartz Foundation, and the support and resources of the Center for High Performance Computing at the University of Utah.

References

1. De Pittà M, Volman V, Berry H, Ben-Jacob E. A tale of two stories: astrocyte regulation of synaptic depression and facilitation. PLoS computational biology. 2011 Dec 1;7(12):e1002293.

2. Taheri M, Handy G, Borisyuk A, White JA. Diversity of evoked astrocyte Ca2 + dynamics quantified through experimental measurements and mathematical modeling. Frontiers in systems neuroscience. 2017 Oct 23;11:79.

3. Handy G, Lawley SD, Borisyuk A. Role of trap recharge time on the statistics of captured particles. Physical Review E. 2019 Feb 25;99(2):022420.

4. Umpierre AD, West PJ, White JA, Wilcox KS. Conditional knock-out of mGluR5 from astrocytes during epilepsy development impairs high-frequency glutamate uptake. Journal of Neuroscience. 2019 Jan 23;39(4):727–42.

5. Rosenbaum R, Smith MA, Kohn A, Rubin JE, Doiron B. The spatial structure of correlated neuronal variability. Nature neuroscience. 2017 Jan;20(1):107–14.

P19 EEG based emotion recognition while playing computer games

Ashish Kumar Shrivastava 1 , Joy Bose 2

1 Ericsson, Bangalore, India

2 Ericsson, Global AI Accelerator, Bangalore, India

Email: ashish.kumar.shrivastava@ericsson.com

Emotions are central to human experience; therefore, the use of machine learning to accurately classify human emotions has been an area of popular research in recent times. Most of the available research is based on data collected while the subject is kept stationary and exposed to an external stimulus, such as listening to audio or watching an audio-visual clip. In this paper, we extend the work of [1], which focuses on studying emotions when the subject is engaged in a more complex physiological activity, such as playing computer games. We aim to establish a relationship between emotions and lobes of the brain by examining the EEG signals from those lobes. Additionally, we examine whether deep-learning architectures such as long short-term memory (LSTM) networks and their variants can offer better results for emotion classification on the GAMEEMO dataset; LSTMs are believed to perform better on temporal data with long-term dependencies. The GAMEEMO dataset contains EEG data collected from 28 subjects who played 4 different games, each known to elicit a particular kind of emotion. To analyze the dataset, we used a 4-layered network with LSTM, bidirectional LSTM, and gated recurrent unit (GRU) models. The input data from the GAMEEMO dataset are fed to the network, which learns to associate the EEG data with an emotion class label. In addition, we associate each emotion class label with a lobe of the brain by segregating the EEG electrodes according to their position.
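For illustration, a bidirectional-LSTM classifier of the general kind described might be sketched in PyTorch as below, mapping an EEG window (time x channels) to the four arousal/valence classes; the layer sizes, window length, and 14-channel input are assumptions, not a reproduction of the architecture used here.

```python
import torch
import torch.nn as nn

class EEGEmotionNet(nn.Module):
    """Bidirectional LSTM over an EEG window, classified from the last step."""
    def __init__(self, n_channels=14, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # class scores from the final time step

model = EEGEmotionNet()
window = torch.randn(8, 256, 14)          # 8 windows of 256 samples, 14 channels
logits = model(window)                    # (8, 4) class scores
```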

We achieved an average accuracy greater than 80% across all channels with each of the 3 models, which is significantly better than the earlier work (Fig. 1). The spatial analysis of the results also suggests a strong relationship between the occipital lobe and the HANV (high arousal, negative valence) emotion class, and between the parietal lobe and the LAPV (low arousal, positive valence) emotion class. We observed that the bidirectional LSTM outperforms the other two models in overall average classification accuracy. Across the three models, the HANV and LAPV classes show much better classification results than HAPV (high arousal, positive valence) and LANV (low arousal, negative valence). HANV emotions such as anger, nervousness, and horror appear to have a strong relationship with electrical activity in the occipital lobe, as this lobe produced the best results for the HANV class; HANV is associated with emotions such as horror (as tagged in game G3 in our dataset). One reason for this could be that stimulation of the occipital lobe is associated with heightened emotions.

References

1. Alakus TB, Turkoglu I. Emotion recognition with deep learning using GAMEEMO data set. Electronics Letters. 2020 Oct 22;56(25):1364–7.

Fig. 1

Comparison of the emotion prediction accuracy using different models

P20 Fitting neural models to experimental data with Brian 2

Marcel Stimberg 1 , Aleksandra Teska 2 , Ante Lojić Kapetanović 3 , Romain Brette 4

1 Institut de la Vision, Sorbonne Université, INSERM, CNRS, Paris, France

2 École Polytechnique Fédérale de Lausanne (EPFL), School of Life Sciences, Geneva, Switzerland

3 University of Split, FESB, Split, Croatia

4 Sorbonne Université, Paris, France

Email: marcel.stimberg@inserm.fr

Brian 2 [1] is a neural simulator for biological spiking neural networks. It is based on a code-generation approach: it transforms arbitrary user-specified model equations into efficient compiled code. This approach makes it an ideal tool for developing and exploring new, detailed models of neural activity. Most parameters of such models do not correspond to physical quantities that can be measured directly and are therefore not exactly known beforehand. Consequently, a common task for modellers is to adapt the model parameters so that the model reproduces a given set of experimental data as accurately as possible. Adapting the parameters has often been an ad-hoc procedure in which the researcher tweaks the parameters until the fit to the experimental data looks “good enough”. Such a procedure is obviously time-consuming and will most often lead to a sub-optimal solution. At the same time, a large number of automatic optimization algorithms exist, and several software packages provide efficient implementations of them. However, using these approaches together with simulators like Brian 2 is not yet common in the community. One reason is that their efficient use is not always straightforward and requires considerable effort from the researcher. Switching between approaches and the packages that implement them also requires adapting the code to a new interface.

Here, we present how the Brian 2 simulator, together with the brian2modelfitting package, enables researchers to overcome these difficulties. It provides a unified interface to several state-of-the-art optimization algorithms so that researchers can determine the best-fitting parameters of their models. The supported approaches include global optimization methods (provided by the Nevergrad [2] library), as well as local gradient-based methods (provided by the scipy [3] package). The gradient-based methods can be accelerated by making use of Brian 2's facilities to symbolically analyse the model equations, which makes it possible to compute the gradient exactly instead of relying on an approximation.
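The underlying pattern (wrap a Brian 2 simulation in a loss function and hand it to an optimizer) can be sketched as below with a toy one-parameter model and a SciPy optimizer; brian2modelfitting packages this loop, plus gradient-based refinement, behind a unified interface. The model equations and target trace here are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from brian2 import NeuronGroup, StateMonitor, run, start_scope, ms

def simulate(tau_ms):
    start_scope()                                  # fresh magic network per call
    G = NeuronGroup(1, 'dv/dt = (1 - v) / tau : 1',
                    namespace={'tau': tau_ms * ms}, method='exact')
    mon = StateMonitor(G, 'v', record=0)
    run(50 * ms)
    return np.asarray(mon.v[0])

target = simulate(10.0)                            # stand-in for a recorded trace

def loss(tau_ms):
    # Mean squared error between the simulated and target traces
    return float(np.mean((simulate(tau_ms) - target) ** 2))

res = minimize_scalar(loss, bounds=(1.0, 30.0), method='bounded')
print(f"recovered tau ~ {res.x:.1f} ms")
```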

Finally, we demonstrate how to go beyond parameter optimization with a more recent approach, simulation-based inference [4,5]. This approach provides the researcher with a more complete view of the fit of a model to experimental data, by estimating the full posterior distribution of the model parameters given the data.

Acknowledgements

The International Neuroinformatics Coordinating Facility (INCF) and Google supported the development of the brian2modelfitting toolbox via Google Summer of Code internships.

References

1. Stimberg M, Brette R, Goodman DF. Brian 2, an intuitive and efficient neural simulator. Elife. 2019 Aug 20;8:e47314.

2. Rapin J, Teytaud O. Nevergrad - A gradient-free optimization platform. Version 0.2.0. https://GitHub.com/FacebookResearch/Nevergrad. 2018.

3. Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature methods. 2020 Mar;17(3):261–72.

4. Cranmer K, Brehmer J, Louppe G. The frontier of simulation-based inference. Proceedings of the National Academy of Sciences. 2020 Dec 1;117(48):30055–62.

5. Tejero-Cantero A, Boelts J, Deistler M, Lueckmann JM, Durkan C, et al. sbi: A toolkit for simulation-based inference. JOSS. 2020;5(52):2505.

P21 Active dendrites support robust synchronous spiking computation

Thomas Burger 1

1 University of Cambridge, Engineering, Cambridge, United Kingdom

Email: thomasburger.nl@gmail.com

There is evidence that the brain can compute quickly and reliably with single spikes in certain instances. This points to an operating regime very different from rate coding: neuronal noise is suppressed and the binary, all-or-nothing nature of spikes plays an important role. What components might allow neurons to achieve this? In this work, we explore how neural oscillations can orchestrate rapid and robust binary computation with the aid of dendrites.

Two ideas are central: saturation and synchrony. Saturation allows neurons and their components (e.g., dendritic compartments, ion channels) that are strongly driven to act effectively as binary units. Coupled with synchrony, which facilitates the quick integration of related inputs, these binary units can perform rapid spike-based computations reliably. In our simulations, synchrony comes from population oscillations mediated by inhibitory interneurons, which define periodic integration and firing windows for the neurons. However, synchronization comes at a price: the effect of incoming spikes depends heavily on their timing relative to the oscillations. We find a trade-off in performance: wider integration windows give more robust input summation but lead to proportionally more jitter in the timing of the output spikes, whereas narrower integration windows make for more precise spike timing but require the input spikes to arrive nearly simultaneously. This trade-off is hard to avoid without fine-tuning synaptic delays, which is unphysiological.

We show that this issue can be redressed with active dendrites. Specifically, we develop a simple model of an active, saturating dendrite that decouples the integration of inputs from the firing time of the soma, thereby resolving the trade-off. We show that a network equipped with these dendrites can robustly display oscillations that coexist with ongoing computations based on single spikes. That is, the generation of synchrony and the performance of computations can both be achieved independently by the same network.
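The saturation idea can be caricatured in a few lines, as in the sketch below: strongly driven dendritic branches act as binary units, and the soma fires when enough branches are active within the oscillation-defined window. The thresholds and input patterns are illustrative assumptions, not the model developed here.

```python
import numpy as np

def dendrite(inputs, theta_d=2.0):
    """A strongly driven, saturating branch acts as a binary unit."""
    return 1.0 if np.sum(inputs) >= theta_d else 0.0

def soma(branch_values, theta_s=2.0):
    """The soma fires if enough branches are active within the window."""
    return np.sum(branch_values) >= theta_s

# Three branches, each receiving a few near-synchronous binary spikes:
branches = [dendrite(np.array(b)) for b in ([1, 1, 0], [1, 1, 1], [0, 0, 1])]
print("output spike:", soma(branches))    # True: 2 of 3 branches were activated
```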

Taken together, these ideas provide a hypothesis on how some biological circuits in the brain could perform a binary computation efficiently (with a small number of neurons), quickly (with one spike volley per layer), and robustly (in the presence of noise).

P22 Gap junction conductance non-monotonically determines action potential propagation

Erin Munro Krull 1 , Nancy Kopell 2 , Christoph Börgers 3

1 Ripon College, Ripon, WI, United States of America

2 Boston University, Mathematics & Statistics, Boston, MA, United States of America

3 Tufts University, Mathematics, Medford, MA, United States of America

Email: ecmun2002@yahoo.com

Gap junctions are known to connect neurons, glia, retinal cells, and cardiac cells. Among these cell types, action potentials (APs) can actively propagate between neurons and between cardiac cells. Previous experiments in cardiac cells show that increasing the gap junction conductance can initially enhance propagation, while higher gap junction conductances re-introduce propagation block [1,2]. Similarly, neuronal models show that there is an ideal gap junction conductance for AP propagation [3,4].

We investigate AP propagation through a chain of cells, varying the gap junction conductance (g) and the number of downstream neighbors (k). Using the FitzHugh–Nagumo model, we are able to predict propagation through a chain of cells by reducing the model to one dimension, focusing on the fast dynamics. By analyzing the fixed points of this reduced one-dimensional model (Fig. 1), we can predict when APs will propagate through the entire chain of cells, partially propagate through the chain, or not propagate at all. Furthermore, we can predict the spike heights of the propagated APs, as well as a region in the (g, k)-plane where cells no longer fire independently but instead have their voltage tied to the leading cell in the chain. We are also able to use a similar one-dimensional reduction to predict propagation in Hodgkin–Huxley and cardiac cell models.
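For reference, the full (unreduced) setting can be sketched as a FitzHugh–Nagumo chain with diffusive gap-junction coupling, as below. Here each cell has a single neighbor on each side (k = 1) and generic FHN parameters, so whether the AP reaches the last cell depends on the conductance g, which can be swept to probe the propagation window described above.

```python
import numpy as np

n, dt, steps = 20, 0.05, 8000                 # cells, time step, duration
a, b, eps, g = 0.7, 0.8, 0.08, 0.5            # generic FHN parameters; g = gap conductance
v = np.full(n, -1.2); w = np.full(n, -0.6)    # start near the resting state
v[0] = 2.0                                    # initiate an AP in the first cell

peak_last = -np.inf
for _ in range(steps):
    i_gap = np.zeros(n)                       # diffusive gap-junction currents
    i_gap[1:] += g * (v[:-1] - v[1:])
    i_gap[:-1] += g * (v[1:] - v[:-1])
    # Explicit Euler step of the FHN equations (old v used for the w update)
    v, w = (v + dt * (v - v**3 / 3 - w + i_gap),
            w + dt * eps * (v + a - b * w))
    peak_last = max(peak_last, v[-1])

print("AP reached the last cell:", peak_last > 0.5)
```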

References

1. Rohr S, Kucera JP, Fast VG, Kléber AG. Paradoxical improvement of impulse conduction in cardiac tissue by partial cellular uncoupling. Science. 1997 Feb 7;275(5301):841–4.

2. Rohr S. Role of gap junctions in the propagation of the cardiac action potential. Cardiovascular research. 2004 May 1;62(2):309–22.

3. Gansert J, Golowasch J, Nadim F. Sustained rhythmic activity in gap-junctionally coupled networks of model neurons depends on the diameter of coupled dendrites. Journal of neurophysiology. 2007 Dec;98(6):3450–60.

4. Nadim F, Golowasch J. Signal transmission between gap-junctionally coupled passive cables is most effective at an optimal diameter. Journal of neurophysiology. 2006 Jun;95(6):3831–43.

Fig. 1

A Each cell connects to one upstream cell (vu) and k downstream cells (vd = 0). B F(v) is the fast dynamics of the cell’s currents; G(v,vu) is the gap junction current. If there is a saddle-node bifurcation as vu increases, the cell can fire. C A saddle-node bifurcation occurs when F(v) and G(v,vu) are tangent. The resulting bifurcation curve predicts a peak in k for AP propagation

P23 The impact of neuronal noise statistics on binocular rivalry dynamics

Maria Inês Cravo 1 , Rui Bernardes 2 , Miguel Castelo-Branco 3

1 University of Coimbra, Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Coimbra, Portugal

2 University of Coimbra, Faculty of Medicine, Clinical Academic Center of Coimbra (CACC), Coimbra, Portugal

3 University of Coimbra, Institute of Nuclear Sciences Applied to Health, Coimbra, Portugal

Email: m.ines.cravo@hotmail.com

Neuronal noise is a characteristic of brain computations that can play a central role in visual phenomena such as binocular rivalry. In addition, the statistics of neuronal noise may be important in neurological conditions where noise abnormally affects perception, such as schizophrenia, autism spectrum disorder and developmental dyslexia. However, there is no systematic approach to include noise in computational models of neuronal circuits involved in vision.

Binocular rivalry is a visual phenomenon where two images presented simultaneously and independently to the two eyes alternate in perception irregularly. Computational models of this phenomenon rely on neuronal networks of the visual cortex with competition between populations responsive to different patterns, and the switch in perception is proposed to result from random perturbations in neuronal activity.

Here, we compare three biologically plausible stochastic processes by studying how they affect the simulated dynamics of binocular rivalry. We include white Gaussian noise, usually regarded as the null hypothesis for noise; Ornstein–Uhlenbeck noise, a model of noise filtered by synapses; and pink noise, a statistical process found in natural phenomena from earthquakes to heartbeats, and in measures of brain activity such as magnetoencephalography and local field potentials. We simulate a network with three layers of neurons: a monocular layer, a binocular layer, and a layer of ocular opponency neurons, which detect interocular conflict.
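The three processes can be generated as in the sketch below: white Gaussian noise, Ornstein–Uhlenbeck noise via Euler–Maruyama integration, and pink noise by 1/f spectral shaping of white noise. The time step, correlation time, and unit-variance normalisation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, n = 1e-3, 100_000                      # time step (s), number of samples

# White Gaussian noise: temporally uncorrelated, unit variance.
white = rng.normal(0, 1, n)

# Ornstein-Uhlenbeck noise: Euler-Maruyama; noise term chosen so that the
# stationary variance is 1 for correlation time tau.
tau = 0.2                                  # s, correlation time
ou = np.zeros(n)
for i in range(1, n):
    ou[i] = ou[i-1] - (ou[i-1] / tau) * dt + np.sqrt(2 * dt / tau) * rng.normal()

# Pink (1/f) noise: shape a white spectrum by 1/sqrt(f), then invert the FFT.
freqs = np.fft.rfftfreq(n, dt)
spectrum = np.fft.rfft(rng.normal(0, 1, n))
spectrum[1:] /= np.sqrt(freqs[1:])         # leave the DC component untouched
pink = np.fft.irfft(spectrum, n)
pink /= pink.std()                         # normalise to unit variance
```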

By simulating the model over a wide range of parameter values, varying image input contrast, noise intensity, and noise correlation time, we find that temporally uncorrelated white noise does not produce strong rivalry (Fig. 1). We also estimate the minimum correlation time constant (τ ≈ 200 ms) for Ornstein–Uhlenbeck noise to be consistent with experimental values of percept durations, which have been measured to be between 1 and 10 s. Although pink noise and Ornstein–Uhlenbeck noise have similar phase diagrams when looking at rivalry strength (Fig. 1), calculation of the coefficient of variation of percept durations reveals that pink noise (CVpink = 0.59 ± 0.08) is better than Ornstein–Uhlenbeck noise (CVOU = 1.0 ± 0.1) at reproducing experimental values, which lie between 0.4 and 0.6. Our model also predicts that the strength of rivalry is lower at extreme input contrasts, close to 0 and 1.

This comparison of commonly used, but rarely characterized, models of synaptic noise may guide future computational studies of binocular rivalry and other perceptual phenomena where noise makes a relevant contribution.

Acknowledgements

We thank Fundação para a Ciência e a Tecnologia for supporting this research with grants FCT-UID/4950/2020 (RB and MCB), UI/BD/150861/2021 (MIC), PTDC/PSI-GER/1326/2020 (MCB) and DSAIPA/DS/0041/2020 (MCB). We also thank Comissão de Coordenação e Desenvolvimento Regional do Centro for supporting this research with grant CENTRO-01–0145-FEDER-000016 (MCB and MIC).

Fig. 1

Effect of noise intensity and input contrast on relative dominance time, a measure of rivalry strength. Darker squares denote a strong dominance, and lighter squares denote binocular fusion. The white contour line corresponds to mean percept duration equal to 1 s, defining the less transparent region of the heatmap as the one that satisfies 1 < D < 10 s

P24 Oscillatory network model to understand theta-sequences in one-dimensional motion

Kushal Reddy 1 , Bharat Patil 2 , Azra Aziz 2 , Ayan Mukhopadhyay 3 , V Srinivasa Chakravarthy 1

1 Indian Institute of Technology, Madras, Department of Biotechnology, Chennai, India

2 Indian Institute of Technology, Madras, Computational Neuroscience Lab, Department of Biotechnology, Chennai, India

3 Indian Institute of Technology, Madras, Department of Physics, Chennai, India

Email: kushal@smail.iitm.ac.in

Hippocampal cells, broadly categorized as spatial cells, play a key role in the storage of experience, which is essential for learning, navigation and memory formation. The processes behind this storage are not well understood. We devise a computational model for understanding theta sequences during linear motion. Theta sequences are “clear, ordered sequences” observed in the theta wave, with segments reflecting the animal's position and time during motion [1]. Neurophysiological findings suggest that hippocampal theta sequences run ahead of or behind the animal's position along the path trajectory when velocity changes [2], and that these sequences have a phase relationship with the background theta rhythm [1]. While studies have shown that theta sequences exhibit phase precession, some findings note the dependence of theta sequences on velocity, directionality and activity, depicting spatio-temporal signals and spatial representations of the present and future [3–6].

We present a network model (Fig. 1) built around oscillatory input neurons and comprising multiple layers. The first layer is the path integration (PI) layer, which encodes displacement in the preferred direction (forward or backward) through a scaling factor β and the speed. The base frequency of the oscillators is modulated by the speed and β. The output is fed to stacked autoencoder layers that extract the principal components. Finally, a hidden layer acts as a regressor to predict the velocity.
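
The core of the PI layer can be sketched as speed- and β-modulated phase oscillators; the base frequency, β values, directions, and speed profile below are illustrative assumptions, not the authors' parameters:

```python
# Phase oscillators whose instantaneous frequency is the base theta
# frequency shifted by beta * speed along each preferred direction.
import numpy as np

dt, T = 1e-3, 5.0
t = np.arange(0.0, T, dt)
f0 = 6.0                                   # base theta frequency (Hz)
beta = np.array([0.5, 1.0, 1.5, 2.0])      # assumed spatial scaling factors
direction = np.array([1, 1, -1, -1])       # preferred direction along the track

speed = 0.2 + 0.1 * np.sin(2 * np.pi * 0.5 * t)   # toy speed profile (m/s)

freq = f0 + np.outer(beta * direction, speed)     # (n_osc, n_t)
phase = 2 * np.pi * np.cumsum(freq, axis=1) * dt  # integrate frequency
output = np.cos(phase)   # phase offsets relative to f0 encode displacement
```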

The neuron output from this layer is analyzed by a) thresholding the firing, b) filtering the neurons based on sequential firing and c) rearranging the neurons based on position; we thus identify a wave-like firing pattern coherent with the theta rhythm and ordered along the motion. We can replicate the firing pattern observed for theta sequences in one-dimensional motion [1]. The output (Fig. 1) thus allows theta sequences to be observed on the basis of the underlying spatio-temporal cells in the model, extending the applicability of the current oscillator-based modelling framework to the understanding of navigation and learning.

References

1. Foster DJ, Wilson MA. Hippocampal theta sequences. Hippocampus. 2007 Nov;17(11):1093–9.

2. Gupta AS, Van Der Meer MA, Touretzky DS, Redish AD. Segmentation of spatial experience by hippocampal theta sequences. Nature neuroscience. 2012 Jul;15(7):1032–9.

3. Buzsáki G. Theta oscillations in the hippocampus. Neuron. 2002 Jan 31;33(3):325–40.

4. Buzsáki G, Moser EI. Memory, navigation and theta rhythm in the hippocampal-entorhinal system. Nature neuroscience. 2013 Feb;16(2):130–8.

5. Cei A, Girardeau G, Drieu C, El Kanbi K, Zugaro M. Reversed theta sequences of hippocampal cell assemblies during backward travel. Nature neuroscience. 2014 May;17(5):719–24.

6. Wikenheiser AM, Redish AD. Hippocampal theta sequences reflect current goals. Nature neuroscience. 2015 Feb;18(2):289–94.

Fig. 1

A Model architecture. B Spike raster plot of a subset of neurons. C Power spectral density of the raw neuron output shows that all peaks coincide, with the maximum around 6 Hz. D The phase extracted from the Hilbert transform of the output wave and E raw neuron output from the final layer; the peaks occur in sequence within one theta cycle (~0.16 s)

P25 Modelling working memory using deep convolutional Elman and Jordan neural networks

Dhruv Chopra 1 , Sweta Kumari 2 , V Srinivasa Chakravarthy 2

1 Indian institute of Technology Madras, Electrical Engineering, Chennai, India

2 Indian Institute of Technology Madras, Biotechnology, Chennai, India

Email: chopradhruv1610@gmail.com

The working memory system in the brain combines the temporary storage and manipulation of information in the service of reasoning and the guidance of decision-making [1,2]. To maintain such information, the brain scans a scene piecewise, attending to only a small region of the whole picture at a time, and aggregates the information in the image part by part, with fading memory of the regions attended early on and the best recollection of the most recently attended regions. Analogously, we propose a dual-channel, multilayered convolutional recurrent neural network architecture to solve the image reconstruction problem.

We model the recurrent connections according to the architecture in Fig. 1, consisting of a network with both Elman and Jordan layers as recurrent connections. The Elman connections form the self-recurrent connections of each convolutional layer in the architecture [3], while the Jordan connections feed back from the penultimate layer to the previous layers [4]. We reconstruct two kinds of images: one with brightness diminishing by a constant factor across the image regions encountered at past time steps, and the other with constant brightness across all time steps. The inputs across time steps are, for the first channel, heatmaps signifying the location in the image where attention is currently focused and, for the second channel, a zoomed-in version of the attention window. The output at each time step is the image aggregated from the initial up to the current time step, with brightness diminishing over regions encountered in the past, mimicking the fading of memory, in the first case, and with constant brightness in the second.

We test the performance of the proposed architecture on the MNIST and Fashion MNIST datasets. Using the Elman-Jordan recurrent connections, we obtain reconstruction test mean squared error (MSE) losses of 0.0022 on MNIST and 0.0032 on Fashion MNIST after training for 100 epochs for images with diminishing brightness over previous time steps. The corresponding losses are 0.0049 on MNIST and 0.0084 on Fashion MNIST for images with constant brightness, again after 100 epochs. Thus, we achieve good image reconstruction with lightweight recurrent connections by extending the Elman and Jordan equations to a convolutional form and utilizing a dual-channel architecture.
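
A minimal sketch of one such convolutional Elman-Jordan cell is given below, in PyTorch; the channel sizes, kernel widths, and nonlinearities are assumed for illustration and are not the authors' implementation:

```python
import torch
import torch.nn as nn

class ConvElmanJordanCell(nn.Module):
    """Convolutional layer with an Elman self-recurrence (h_{t-1} -> h_t)
    and a Jordan feedback from the previous output (y_{t-1} -> h_t)."""
    def __init__(self, in_ch, hid_ch, out_ch):
        super().__init__()
        self.inp = nn.Conv2d(in_ch, hid_ch, 3, padding=1)      # feedforward drive
        self.elman = nn.Conv2d(hid_ch, hid_ch, 3, padding=1)   # Elman recurrence
        self.jordan = nn.Conv2d(out_ch, hid_ch, 3, padding=1)  # Jordan feedback
        self.out = nn.Conv2d(hid_ch, out_ch, 3, padding=1)

    def forward(self, x, h_prev, y_prev):
        h = torch.tanh(self.inp(x) + self.elman(h_prev) + self.jordan(y_prev))
        return h, torch.sigmoid(self.out(h))

# Roll out over attention steps: channel 0 is the attention heatmap,
# channel 1 the zoomed-in attention window (toy shapes).
cell = ConvElmanJordanCell(in_ch=2, hid_ch=16, out_ch=1)
h = torch.zeros(1, 16, 28, 28)
y = torch.zeros(1, 1, 28, 28)
for step in range(5):
    x = torch.rand(1, 2, 28, 28)
    h, y = cell(x, h, y)          # y is the aggregated reconstruction so far
```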

References

1. Barkley RA. The executive functions and self-regulation: An evolutionary neuropsychological perspective. Neuropsychology review. 2001 Mar;11(1):1–29.

2. Malenka RC, Nestler EJ, Hyman SE, Sydor A, Brown RY. Higher cognitive function and behavioral control. Molecular neuropharmacology: A foundation for clinical neuroscience. 2009:313–21.

3. Elman JL. Distributed representations, simple recurrent networks, and grammatical structure. Machine learning. 1991 Sep;7(2):195–225.

4. Jordan MI. Serial order: a parallel distributed processing approach. Technical report, June 1985-March 1986. California Univ., San Diego, La Jolla (USA). Inst. for Cognitive Science; 1986 May 1.

Fig. 1

In the top half of the image, we show the Elman Jordan based convolutional neural network architecture that we use to model memory. In the bottom half, we show the reconstructed image output and also show the ground truth for comparison. We also include a table with the Mean Squared Error (MSE) values for both the datasets in both the normal and the diminishing image scenarios

P26 Modelling a central pattern generator using capacitively coupled nano-oscillators

Akhil Bonagiri 1 , Dipayan Biswas 2 , V Srinivasa Chakravarthy 2

1 Manipal Institute of Technology, Department of Electronics and Communication Engineering, Hyderabad, India

2 Indian Institute of Technology Madras, Biotechnology, Chennai, India

Email: akhil.bonagiri1@learner.manipal.edu

Legged animal locomotion such as walking and running is based on periodic limb movements. The neural circuits underlying various rhythmic motor behaviors can be traced to the central pattern generator (CPG). Hence, bio-inspired robotics aims to employ CPGs to control limb movement for synchronized locomotion [1]. A CPG can produce coordinated rhythmic output signals without any feedback mechanism while receiving simple input signals from higher brain regions, making it ideal for implementation as a system of coupled limit-cycle oscillators. Recently, the surge in the development of solid-state nanoelectronic devices has enabled the implementation and experimental realization of neuromorphic structures designed to reproduce various computational features observed in the neural system.

A single relaxation oscillator (Fig. 1a) can be realized by placing a two-terminal memristive device composed of vanadium dioxide (VO2) in series with a MOSFET and a capacitor [2]. We consider a network of four such oscillators connected in a ring topology with capacitive nearest-neighbor bidirectional coupling (Fig. 1a). The coupling capacitance CC controls the coupling strength, wherein a high (low) CC corresponds to an inhibitory (excitatory) connection. Previous work demonstrated a three-gait CPG using a similar network, in which differences in intrinsic frequencies between the oscillators were used to obtain phase shifts in the frequency-locked regime [3]. In this simulation-based study, we demonstrate a six-gait neuromorphic CPG by exploring the dynamics of the ring network as the coupling strengths between oscillators are modulated. A range of phase-tunable spatiotemporal patterns emerges in the network as the coupling elements are modulated under different coupling schemes. We propose three such schemes; when tuned accordingly, the network produces steady-state phase patterns that closely generate all the primary walking gait patterns observed in quadruped animals according to Alexander’s classification [4] (Fig. 1b). The generated patterns, along with the corresponding coupling parameters, are depicted in Fig. 1c.
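
As a phase-level abstraction of the ring (not a simulation of the VO2 circuit itself), the following sketch shows how the signs and strengths of an assumed coupling matrix select the steady-state phase pattern, i.e., the gait:

```python
# Four ring-coupled phase oscillators; negative entries act like
# inhibitory coupling and push neighbours toward anti-phase.
import numpy as np

n, dt, steps = 4, 1e-3, 20000
omega = 2 * np.pi * np.ones(n)            # identical intrinsic frequencies
K = np.array([[0.0, -1.0, 0.0, -1.0],     # assumed nearest-neighbour weights
              [-1.0, 0.0, -1.0, 0.0],     # on the ring LF-RF-RH-LH
              [0.0, -1.0, 0.0, -1.0],
              [-1.0, 0.0, -1.0, 0.0]])

phi = 2 * np.pi * np.random.default_rng(1).random(n)
for _ in range(steps):
    coupling = (K * np.sin(np.subtract.outer(phi, phi).T)).sum(axis=1)
    phi += dt * (omega + coupling)

rel_phase = (phi - phi[0]) % (2 * np.pi)  # limb phases relative to LF
```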

In conclusion, our results illustrate that coupled nano-oscillators offer a compact, low-power hardware platform for modelling a CPG [3]. Additionally, inserting transistors in series with the coupling capacitors makes the network programmable in real time, so that locomotion speed and gait patterns can be modulated simply by adjusting a small set of bias voltages, paving the way for feedback-driven adaptive gait generation. Ultimately, such a platform for locomotion control would provide an excellent opportunity to realize bio-inspired neuromorphic hardware for autonomous robots and other applications.

References

1. Ijspeert AJ. Central pattern generators for locomotion control in animals and robots: a review. Neural networks. 2008 May 1;21(4):642–53.

2. Shukla N, Parihar A, Freeman E, Paik H, Stone G, et al. Synchronized charge oscillations in correlated electron systems. Scientific reports. 2014 May 14;4(1):1–6.

3. Dutta S, Parihar A, Khanna A, Gomez J, Chakraborty W, et al. Programmable coupled oscillators for synchronized locomotion. Nature communications. 2019 Jul 24;10(1):1–0.

4. Alexander RM. The gaits of bipedal and quadrupedal animals. The International Journal of Robotics Research. 1984 Jun;3(2):49–59.

Fig. 1

a Schematic of the capacitively coupled oscillatory network. Limbs: LF = Left front, RF = Right front, LH = Left hind, RH = Right hind. b Various gait patterns observed in quadruped animals [4]. c The generated steady-state phase patterns; i Lateral Sequence Walk. ii Diagonal Sequence Walk. iii Trot. iv Pace. v Canter. vi Transverse Gallop

P27 Feedback and feedforward model in motion anticipation

Kuan Hao Chen 1 , Qi-Rong Lin 2 , Chi Keung Chen 2

1 Academia Sinica, Taipei, Taiwan

2 Academia Sinica, Institute for Physics, Taipei, Taiwan

Email: r08222039@ntu.edu.tw

To produce timely responses, animals must overcome delays in the visual processing pathway by predicting motion. Previous studies [1] revealed that predictive information about motion is encoded in the spiking activity of retinal ganglion cells (RGCs) early in the visual pathway. To study the predictive properties of the retina more systematically, we used stimuli in the form of a stochastically moving bar in experiments with bullfrog retinas on a multi-electrode system. Trajectories of the bar were produced by Ornstein–Uhlenbeck (OU) processes with different time correlations (memories) induced by a Butterworth low-pass filter with various cut-off frequencies.

We then investigated the predictive properties of single RGCs by calculating the time-shifted (δt) mutual information MI(x,r;δt) between the spiking output r(t) of a single RGC and the bar trajectory x(t). Intuitively, the peak position of MI(δt) should be negative, reflecting the processing delay of the retina. However, under the low-pass OU (LPOU) stimulus, the measured peak positions were positive for some RGCs and negative for others, indicating that some RGCs (P-RGCs) are predictive while the others are non-predictive (NP-RGCs). For LPOU stimuli with various cut-off frequencies, the MI peak positions of the P-RGCs correlated positively with the correlation times of the stimuli, while those of the NP-RGCs stayed around a fixed negative value (−50 ms).
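
The analysis step can be sketched as follows; this is a simplified, assumed implementation of the time-shifted mutual information using discretized joint histograms and toy data:

```python
import numpy as np

def shifted_mi(x, r, shift_bins, n_levels=8):
    """MI between x(t + shift) and r(t); positive shifts probe prediction."""
    if shift_bins > 0:
        x, r = x[shift_bins:], r[:-shift_bins]
    elif shift_bins < 0:
        x, r = x[:shift_bins], r[-shift_bins:]
    edges = np.quantile(x, np.linspace(0, 1, n_levels + 1)[1:-1])
    xd = np.digitize(x, edges)                       # discretize the trajectory
    joint, _, _ = np.histogram2d(xd, r, bins=(n_levels, int(r.max()) + 1))
    p = joint / joint.sum()
    px, pr = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return (p[nz] * np.log2(p[nz] / (px @ pr)[nz])).sum()

# Sweep the shift to locate the MI peak for one cell (toy data).
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(5000))             # toy bar trajectory
r = rng.poisson(1.0, 5000)                           # toy spike counts
mi_curve = [shifted_mi(x, r, s) for s in range(-50, 51)]
```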

To further probe the mechanism of prediction, we developed a negative group delay model, based on Voss [2], that generates anticipatory responses. We extended the model to a spatial version and used the same stimulation conditions as in our experiments. The model indicates that delayed negative feedback is crucial for producing MI(x,r;δt) profiles similar to those observed in experiments. We also show that feedforward inhibition can generate similar predictive dynamics. We therefore presume that horizontal cells' feedback and feedforward inhibition may participate in this prediction phenomenon. Moreover, our feedback and feedforward models can also anticipate a bar moving at constant velocity [3]; after adding LPOU noise to the constant-velocity moving bar, our model even predicts better than the gain control model [3]. To sum up, our feedforward and feedback models can anticipate both stochastically and constantly moving bars.

References

1. Palmer SE, Marre O, Berry MJ, Bialek W. Predictive information in a sensory population. Proceedings of the National Academy of Sciences. 2015 Jun 2;112(22):6908–13.

2. Voss HU. Signal prediction by anticipatory relaxation dynamics. Physical Review E. 2016 Mar 30;93(3):030201.

3. Berry MJ, Brivanlou IH, Jordan TA, Meister M. Anticipation of moving stimuli by the retina. Nature. 1999 Mar;398(6725):334–8.

P28 Generative episodic memory: A computational model

Zahra Fayyaz *1 , Laurenz Wiskott 2 , Sen Cheng 1

1 Ruhr-University Bochum, Institute for Neuroinformatics, Bochum, Germany

2 Ruhr-University Bochum, Institute for Neural Computation, Bochum, Germany

*Email: zarfayyaz@gmail.com

Many studies have suggested that episodic memory is a generative process, but most computational models nevertheless adopt a storage view, according to which the contents of memory more or less faithfully reflect the content of the experience. As a result, the investigation of generative episodic memory has relied on conceptual descriptions and remains rather vague. We therefore propose a computational model of generative episodic memory based on one central hypothesis: episodic memory traces store and retrieve selected aspects of an episode in a compressed format that is necessarily incomplete. The missing information is filled in during retrieval based on general semantic information.

The computational model consists of two parts: a visual processing network and a semantic network. First, images are passed through an autoencoder (AE) structure. The encoder models the processing of episodic experiences into more abstract gist representations; the decoder can use these latent representations to reconstruct the missing details. This structure represents the visual pathway in the neocortex and is implemented using a Vector Quantized Variational Autoencoder (VQ-VAE). Attention is modeled by selecting parts of the latent neural representation and storing them as a memory trace, a process hypothesized to occur in the hippocampus. To reconstruct the full latent representation from this partial memory trace, we use a semantic network based on the Pixel-CNN architecture. This network is trained on the full latent neural representations and learns their structure and statistics. It can then generate new valid neural representations or complete the missing parts of a partial memory trace according to the learned statistics.
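
The storage-and-recall pipeline can be caricatured as below; the encoder, decoder, and prior are untrained stand-ins (a channel argmax replaces proper vector quantization, and a plain convolution replaces the masked Pixel-CNN), so this sketches the flow of information rather than the trained model:

```python
import torch
import torch.nn as nn

K, H, W = 64, 8, 8                        # assumed codebook size and latent grid

encoder = nn.Sequential(nn.Conv2d(1, 16, 4, 2, 1), nn.ReLU(),
                        nn.Conv2d(16, K, 4, 2, 1))          # image -> code logits
decoder = nn.Sequential(nn.ConvTranspose2d(K, 16, 4, 2, 1), nn.ReLU(),
                        nn.ConvTranspose2d(16, 1, 4, 2, 1)) # codes -> image
prior = nn.Conv2d(K + 1, K, 3, padding=1)  # stand-in for the semantic Pixel-CNN

def store_trace(img, keep_frac=0.5):
    """Attention: keep only a random subset of latent positions as the trace."""
    codes = encoder(img).argmax(1)                  # (B, H, W) discrete gist
    mask = torch.rand(codes.shape) < keep_frac      # attended positions
    return codes, mask

def recall(codes, mask):
    """Fill unattended latent positions from the prior, then decode."""
    filled = codes.clone()
    for i in range(H):
        for j in range(W):
            if not mask[0, i, j]:
                onehot = nn.functional.one_hot(filled, K).permute(0, 3, 1, 2).float()
                inp = torch.cat([onehot, mask.unsqueeze(1).float()], 1)
                filled[0, i, j] = prior(inp)[0, :, i, j].argmax()
    onehot = nn.functional.one_hot(filled, K).permute(0, 3, 1, 2).float()
    return decoder(onehot)

img = torch.rand(1, 1, 32, 32)
reconstruction = recall(*store_trace(img))
```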

Both the VQ-VAE and the Pixel-CNN are state-of-the-art generative machine learning algorithms, which allow us to use more realistic sensory inputs, in contrast to the majority of hippocampal memory models, which process abstract and simple patterns. Experiments have shown that objects experienced in a semantically congruent context are recalled better than objects in an incongruent context, as there is no conflict between episodic and semantic memory. Also, interacting with objects (i.e., paying attention) increases memory accuracy. Moreover, it has been shown that, in incongruent cases, objects that are not remembered episodically correctly are more often remembered in a semantically plausible way than completely incorrectly. Our computational model accounts for all of these experimental results.

This shows that the model successfully captures the complex statistics of the input. When only parts of the latent neural representation are attended, stored, and later recalled, the results are not necessarily faithful; still, they are valid and likely reconstructions consistent with the original data. Our modeling results support our hypothesis on generative episodic memory. The stored gist has far less information content than the input images; nonetheless, the inputs can be reconstructed from the gist with the help of a semantic network. The model is also capable of dreaming, i.e., generating unseen but valid episodes. In conclusion, our model suggests how generative episodic memory could be implemented and provides the basis for further investigations and comparisons to neural processes.

P29 A simple computational model of increased olfactory bulb network oscillations with synaptic degradation

Kendall Berry 1 , Daniel Cox 1

1 University of California, Davis, Physics, Davis, CA, United States of America

Email: jkberry@ucdavis.edu

Loss of olfaction is a common early symptom of several neurodegenerative diseases, including Alzheimer’s disease and Parkinson’s disease [1]. Pathological markers of these diseases are found in the olfactory bulb at early stages of disease progression [2,3], and studies replicating disease-like pathology in animal models have observed perturbations in oscillatory activity in the olfactory bulb [4,5]. We implement a simple computational model of olfactory bulb oscillatory activity and explore the effects of damage to the network. Because synaptic dysfunction is known to play a role in both Alzheimer’s and Parkinson’s disease [6,7], and as it fits the scope of the model used here, we limit our focus to this aspect of the pathology and model network damage primarily by weakening synaptic weights. Damage is propagated throughout the network in several schemes: localized, spreading, and globalized. Moderate levels of globalized and spreading damage result in increased oscillatory power. Damage reduces inhibition and increases the average activity level of the mitral cell model units, leading to an increase in network oscillations that critically depends on the nonlinearity of the activation function. Greater damage results in loss of oscillations, which can be predicted by a linearized analysis of the model activity. Thus, we explore one potential mechanism behind the increased gamma oscillations found in some animal models of Alzheimer’s disease [4,5] and highlight the potential for olfactory bulb behavior to play a role in early diagnosis of disease.
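
The damage protocol can be illustrated with a toy reciprocal mitral-granule rate loop (not the authors' model): synaptic weights are scaled down by a damage factor and the spectral peak of the (possibly damped) oscillation is measured.

```python
import numpy as np

def lfp_power(damage, T=5000, dt=1e-3):
    """Mitral (m) - granule (g) loop with tanh nonlinearities; 'damage'
    in [0, 1] weakens the reciprocal synaptic weights."""
    w_mg = w_gm = 2.0 * (1.0 - damage)     # damaged dendrodendritic synapses
    m, g = 0.1, 0.0
    trace = np.zeros(T)
    for t in range(T):
        dm = (-m - w_gm * np.tanh(g) + 1.0) / 0.01   # granule inhibits mitral
        dg = (-g + w_mg * np.tanh(m)) / 0.02          # mitral excites granule
        m, g = m + dt * dm, g + dt * dg
        trace[t] = m
    spec = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
    return spec.max()                                  # oscillatory peak power

powers = {d: lfp_power(d) for d in (0.0, 0.2, 0.4, 0.6)}
```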

References

1. Ubeda-Bañon I, Saiz-Sanchez D, Flores-Cuadrado A, Rioja-Corroto E, Gonzalez-Rodriguez M, et al. The human olfactory system in two proteinopathies: Alzheimer’s and Parkinson’s diseases. Translational Neurodegeneration. 2020 Dec;9:1–20.

2. Kovács T, Cairns NJ, Lantos PL. Olfactory centres in Alzheimer's disease: olfactory bulb is involved in early Braak's stages. Neuroreport. 2001 Feb 12;12(2):285–8.

3. Doty RL. Olfactory dysfunction in Parkinson disease. Nature Reviews Neurology. 2012 Jun;8(6):329–39.

4. Chen M, Chen Y, Huo Q, Wang L, Tan S, et al. Enhancing GABAergic signaling ameliorates aberrant gamma oscillations of olfactory bulb in AD mouse models. Molecular neurodegeneration. 2021 Dec;16(1):1–23.

5. Li W, Li S, Shen L, Wang J, Wu X, et al. Impairment of dendrodendritic inhibition in the olfactory bulb of APP/PS1 mice. Frontiers in aging neuroscience. 2019 Jan 24;11:2.

6. Spires-Jones TL, Hyman BT. The intersection of amyloid beta and tau at synapses in Alzheimer’s disease. Neuron. 2014 May 21;82(4):756–71.

7. Calo L, Wegrzynowicz M, Santivañez‐Perez J, Grazia Spillantini M. Synaptic failure and α‐synuclein. Movement Disorders. 2016 Feb;31(2):169–77.

P30 Drifting memories: Spontaneous long-term evolution of memory representations in the hippocampus

Lars Bollmann 1 , Federico Stella 2 , Peter Baracskay 3 , Jozsef Csicvari 1

1 IST Austria, Vienna, Austria

2 Donders Institute, Nijmegen, Netherlands

3 Eötvös Loránd University, Budapest, Hungary

Email: lars.bollmann@ist.ac.at

Sleep's fundamental role in the processing and consolidation of memory has received substantial experimental support. Nevertheless, sleep can hardly be considered a homogeneous state: it consists of multiple stages that can be broadly classified into the two main categories of REM and non-REM (nREM) sleep. These two sleep states show widely different physiological characteristics, both at the level of local activity and in terms of global brain dynamics. Importantly, their relative contributions to memory function are largely unknown, and questions about their interaction during off-line processing of newly acquired information have remained mostly unexplored.

In this study, we address these issues by combining a goal-directed learning task with long-term wireless electrophysiological recordings in the hippocampus of rats. After the acquisition of a novel episodic-like memory, place cell activity was continuously tracked for an extended period of time (> 10 h) while the animals rested. We then combined multiple decoding approaches to obtain a time-resolved characterization of the evolution of a memory representation during the sleep following its initial encoding. Over the course of several hours, we could track a continuous drift in the reactivated activity patterns, which progressively accumulated distance from the representation expressed at the end of learning. Intriguingly, the direction of drift is not constant: closer inspection in fact reveals opposing effects for REM and nREM phases. While nREM sleep 'pushes' the reactivated activity away from the old representation, REM sleep coincides with periods of reversal, partially resetting the ongoing reconfiguration. REM and nREM reactivations otherwise present only minor differences: while the reactivation content largely overlaps between the two phases, activity patterns expressed during REM are more similar to the awake ones, possibly due to REM's slower temporal dynamics.
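
The drift measurement itself can be sketched with toy data standing in for the decoded reactivation patterns; the assumed distance here is one minus the correlation between each epoch's population vector and the end-of-learning template:

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.random(50)                        # rate map at end of learning
noise = 0.02 * rng.standard_normal((200, 50))    # toy reactivation variability
epochs = template + np.cumsum(noise, axis=0)     # slowly drifting patterns

drift = [1.0 - np.corrcoef(template, e)[0, 1] for e in epochs]
```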

Further analysis identified the main source of memory representation drift in the differential modulation of firing rates over the course of sleep, resulting in a significant sparsification of the assemblies responsible for the encoding of goal locations.

Together, these results present a first detailed account of the effects of off-line reactivations on the evolution of hippocampal memory representations. We show how the effects of REM and nREM stages integrate over the course of sleep in reshaping memory-related neural activity, a phenomenon relevant not only for understanding the nature of neural coding but also for establishing a link between memory transformation and homeostatic processes.

P31 Stimulus-independent neural assembly interactions across brain regions

Michele Nardin 1 , Federico Stella 2 , Jozsef Csicvari 1

1 Institute of Science and Technology (IST) Austria, Klosterneuburg, Austria

2 Donders Institute, Nijmegen, Netherlands

Email: michele.nardin@ist.ac.at

Neuronal assemblies are thought to underlie brain-wide cognitive and mnemonic functions and were first hypothesized more than 70 years ago by Donald Hebb. He envisioned them as densely interconnected subsets of neurons that act in a loosely synchronized manner by consistently activating when the subject thinks of a particular concept or idea. Neuroscientists have sought assemblies and tried to characterize them ever since. In the last decade, numerous computational techniques have been developed to extract patterns of co-activation from multiple simultaneously recorded neurons. This co-activation-based approach can, however, be conceptually different from Hebb’s original view. Especially in brain areas where neurons show clear firing preferences for one or more environmental variables, strong pairwise correlations do not necessarily reflect an underlying physical or even functional connectivity; in fact, awake neural correlations are mostly explained by co-selectivity for stimuli.

Here, we introduce a method for detecting neural ensembles that is not influenced by common stimulus selectivity or global synchrony. We do this by employing proper null models of neural activity, which we use to simulate firing and determine how much neural co-activation exceeds expectation. From those traces we then extract stimulus-independent co-activation patterns. This procedure enables us to detect densely functionally or physically interconnected subsets of excitatory neurons, together with their above-chance co-activation patterns over time.
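
A simplified sketch of the null-model step is shown below (assumed implementation): given each cell's expected rates under a fitted tuning model, Poisson surrogates preserve stimulus selectivity while destroying cell-cell interactions, and the z-scored excess correlations are what would be clustered into assemblies:

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_correlation(counts, rates, n_surr=200):
    """counts: (n_cells, n_bins) observed spike counts.
    rates: (n_cells, n_bins) expected rates under the tuning null model."""
    obs = np.corrcoef(counts)
    surr = np.array([np.corrcoef(rng.poisson(rates)) for _ in range(n_surr)])
    return (obs - surr.mean(0)) / (surr.std(0) + 1e-12)   # excess correlation

counts = rng.poisson(2.0, size=(20, 1000))    # toy recording
rates = np.full((20, 1000), 2.0)              # toy fitted-model expectations
z = excess_correlation(counts, rates)
```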

We validated our method on synthetic data, where the underlying true assemblies were detected more reliably than by existing co-activation-based approaches. We then applied the analysis to several datasets of simultaneously recorded single cells in different brain regions, including the hippocampus, prefrontal cortex and entorhinal cortex of rats performing foraging, spatial learning and rule-switching tasks. Crucially, these data allow us to investigate the presence of structured interactions between neural assemblies belonging to different brain regions. We find that cross-area interactions are time-modulated, emerging in correspondence with periods of higher cognitive load, such as rule switching or contingency updates.

Our evidence for the key role played by distributed neural ensembles in the acquisition and transfer of information points to the need for effective detection methods. Our approach enables us to disentangle the different levels of interaction in complex networks, unveiling the relevant neural structures responsible for information processing and bringing us closer to Hebb's original intuition.

P32 Dendritic normalisation improves learning in sparsely connected artificial neural networks

Alexander Bird 1 , Peter Jedlicka 1 , Hermann Cuntz 2

1 University of Giessen, ICAR3R, Giessen, Germany

2 Frankfurt Institute for Advanced Studies (FIAS) & Ernst Strüngmann Institute (ESI), Computational Neuroanatomy, Frankfurt am Main, Germany

Email: alex.neurosci@gmail.com

The afferent connectivity of a neuron depends heavily on the size and structure of its dendritic tree; under general assumptions the number of potential excitatory synapses a neuron is expected to receive is proportional to the total length of its dendrites [1,2]. Conversely, the expected local input resistance of a dendritic tree is approximately inversely proportional to its length, as synaptic currents can more easily dissipate both across the larger cell membrane and along the dendrites themselves [3,4]. Taken together, these two factors imply that the influence of a single synaptic contact on the excitability of a neuron is likely to be inversely proportional to the number of connections that that neuron receives across its entire dendritic tree (Fig. 1A). Thus, dendrites intrinsically provide an L0-normalisation on synaptic inputs.
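
A minimal sketch of this normalisation (an assumed form, not the paper's exact rule): each unit divides its summed input by its number of afferent contacts and scales its per-contact learning rate the same way.

```python
import numpy as np

rng = np.random.default_rng(0)
n_out, n_in = 10, 100
mask = rng.random((n_out, n_in)) < 0.2          # sparse connectivity
W = rng.standard_normal((n_out, n_in)) * mask

def forward(W, mask, x):
    n = np.maximum(mask.sum(axis=1), 1)          # contacts ~ dendritic length
    return np.tanh((W * mask) @ x / n)           # per-contact influence ~ 1/n

def update(W, mask, x, err, lr=0.1):
    n = np.maximum(mask.sum(axis=1), 1)
    return W + lr * np.outer(err / n, x) * mask  # per-contact learning ~ 1/n
```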

Here we study the computational implications of this effect using sparsely-connected artificial neural networks (Fig. 1B). These networks adapt their connectivity to solve defined computational tasks such as classifying inputs and have a number of advantages in terms of efficiency over the more common dense networks [5,6]. We apply the normalisation implied by dendritic structure to such networks: artificial neurons receiving more contacts require larger dendrites and so each individual contact will both have proportionately less influence and learn more slowly in response to a given error signal. We analyse various sparsely-connected, feedforward network architectures and find that the learning performance is significantly increased (Fig. 1C). This phenomenon also applies in self-organised recurrent networks with spatially extended units (Fig. 1D, E) and provides an improvement over other widely used normalisations in sparse networks (Fig. 1F). Our result is both a practical advance in machine learning and a previously unappreciated way in which the structural properties of neurons may contribute to their computational function.

References

1. Stepanyants A, Hof PR, Chklovskii DB. Geometry and structural plasticity of synaptic connectivity. Neuron. 2002 Apr 11;34(2):275–88.

2. Bird AD, Deters LH, Cuntz H. Excess Neuronal Branching Allows for Local Innervation of Specific Dendritic Compartments in Mature Cortex. Cerebral Cortex. 2021 Feb;31(2):1008–31.

3. Mainen ZF, Sejnowski TJ. Influence of dendritic structure on firing pattern in model neocortical neurons. Nature. 1996 Jul;382(6589):363–6.

4. Cuntz H, Bird AD, Beining M, Schneider M, Mediavilla L, et al. A general principle of dendritic constancy–a neuron’s size and shape invariant excitability. bioRxiv. 2019 Jan 1:787911.

5. LeCun Y, Denker J, Solla S. Optimal brain damage. Advances in Neural Information Processing Systems. 1990;2:598–605.

6. Mocanu DC, Mocanu E, Stone P, Nguyen PH, Gibescu M, et al. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nature communications. 2018 Jun 19;9(1):1–2.

Fig. 1

A Schematic of the relationship between dendritic length and connectivity; larger dendrites imply increased afferent connectivity. B Schematic of a sparsely-connected artificial neural network with one hidden layer. C Improved learning performance of networks with dendritic normalisation (orange) against control networks (blue). The top row shows training set cross-entropy loss and the bottom row shows test set accuracy. From left to right the networks have one hidden layer with 100 units and 20% connectivity on MNIST digit data, the same network on MNIST fashion data, and a convolutional network with 20 5 × 5 filters and a sparsely connected layer of 100 units and 20% connectivity on the MNIST fashion dataset. D Distributions of dendritic lengths, afferent connection numbers, and somatic responses to distributed inputs for excitatory neurons in a self-organised recurrent network with spatially extended units before (light green) and after (dark green) training. E Example network with spatially extended units. Excitatory cells are green and inhibitory cells are red. F Comparison of learning rates for neurons with different connection numbers and weights under different normalisations

P33 Topological data analysis of spontaneous activity in the zebrafish optic tectum

Joshua Paik 1 , Enrique Hansen 2 , German Sumbre 2 , Carina Curto 1

1 Pennsylvania State University, Mathematics, State College, PA, United States of America

2 École Normale Supérieure, Institut de Biologie, Paris, France

Email: joshdpaik@gmail.com

With the aid of optogenetics, two-photon light sheet microscopy allows us to capture the activity of thousands of neurons in the zebrafish larva. In our study, we focus on the spontaneous activity in the zebrafish optic tectum, whose neurons can be organized into functional neural assemblies – groups of highly correlated, co-firing neurons. Previous studies [1–3] have shown that these assemblies display attractor-like dynamics including reverberation, sparse to full activation, and winner-take-all dynamics.

To study the mechanism underlying the observed dynamics, we use techniques from topological data analysis introduced in [4] to analyze the intra-assembly correlation matrices. Given a correlation matrix induced by the neuronal firing of a given assembly, one may construct a filtration of clique complexes and compute Betti curves, which reveal structure that is invariant under application of a monotone nonlinearity to the matrix entries. To briefly describe the construction: we view a symmetric matrix as the weighted adjacency matrix of a complete graph. By adding edges in order of decreasing weight, we produce a sequence of graphs and, by filling in cliques, a filtration of clique complexes. For each clique complex we compute the k-th Betti number, which counts the k-dimensional “holes” of the complex; recording the k-th Betti numbers as edges are added yields the k-th Betti curve, βk.
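
Because the clique-complex filtration of a correlation matrix is the Vietoris-Rips filtration of a matching dissimilarity matrix, the Betti curves can be sketched with the ripser package (assumed available): βk(t) counts the intervals of the k-th persistence diagram that contain t.

```python
import numpy as np
from ripser import ripser   # assumed dependency: pip install ripser

def betti_curves(corr, maxdim=2, n_steps=100):
    d = 1.0 - corr                     # high correlation = short distance
    np.fill_diagonal(d, 0.0)
    dgms = ripser(d, distance_matrix=True, maxdim=maxdim)['dgms']
    ts = np.linspace(0.0, 1.0, n_steps)
    # beta_k(t) = number of k-dim holes born by t and not yet dead.
    return [np.array([((dgm[:, 0] <= t) & (dgm[:, 1] > t)).sum() for t in ts])
            for dgm in dgms]
```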

In Fig. 1, the k-th Betti curves (for k ≥ 1) of identified assemblies (c1,e1) are found to be identically zero, which is indicative of low-rank structure [5]. These low or identically zero Betti curves are visible across all assemblies. To check that this is not an artifact of the way the correlation matrix was computed, we compare the Betti curves to those induced by the correlation matrix of a random subset of neurons of the same size (d1,f1). The Betti curves of the real assemblies are clearly different from those of the random assemblies. In contrast, the singular values of all four matrices (c2-f2) are both qualitatively similar and indicative of full rank. Hence, the techniques in [4] reveal structure in the correlation matrix that is not readily visible using spectral techniques from linear algebra. In addition to analyzing the correlations over the entire recording, we analyze the correlations restricted to periods when the assemblies are “on” (a large proportion of intra-assembly neurons are firing) or “off,” and find that when an assembly is on, the low Betti curve signature is preserved across assemblies. We propose that this low-rank structure is a signature of the attractor-like dynamics observed in the zebrafish optic tectum.

Acknowledgements

This work was supported by NIH R01 NS120581.

References

1. Romano SA, Pietri T, Pérez-Schuster V, Jouary A, Haudrechy M, et al. Spontaneous neuronal network dynamics reveal circuit’s functional adaptations for behavior. Neuron. 2015 Mar 4;85(5):1070–85.

2. Pietri T, Romano SA, Pérez-Schuster V, Boulanger-Weill J, Candat V, et al. The emergence of the spatial structure of tectal spontaneous activity is independent of visual inputs. Cell reports. 2017 May 2;19(5):939–48.

3. Avitan L, Pujic Z, Mölter J, Van De Poll M, Sun B, et al. Spontaneous activity in the zebrafish tectum reorganizes over development and is influenced by visual experience. Current Biology. 2017 Aug 21;27(16):2407–19.

4. Giusti C, Pastalkova E, Curto C, Itskov V. Clique topology reveals intrinsic geometric structure in neural correlations. Proceedings of the National Academy of Sciences. 2015 Nov 3;112(44):13455–60.

5. Curto C, Paik J, Rivin I. Betti Curves of Rank One Symmetric Matrices. arXiv preprint arXiv:2103.00761. 2021 Mar 1.

Fig. 1

a, b Neurons in assemblies 83 and 122 are in yellow. Betti curves induced by the correlation matrices of two assemblies c1, e1 are those of a rank 1 matrix and are clearly different from the Betti curves induced by the correlation matrices of a random subset of neurons of the same size (d1, f1). Note the corresponding singular value plots (c2-f2) of each correlation matrix indicate full rank

P34 Properties of Drosophila noxious-cold sensing neurons encoding rate and magnitude of change in temperature

Natalia Maksymchuk 1 , Akira Sakurai 1 , Daniel N. Cox 1 , Gennady Cymbalyuk 1

1 Georgia State University, Neuroscience Institute, Atlanta, GA, United States of America

Email: nmaksymchuk1@gsu.edu

Reliable sensation of cold temperature and its change is necessary for stimulus-relevant behavioral responses. We combined computational and electrophysiological methods to investigate the neural dynamics of Drosophila larva cold-sensing CIII primary afferents. These neurons express a suite of thermoTRP channels implicated in noxious cold sensation [1].

We show that, owing to the variability of responses across individual CIII neurons, the population can encode both the magnitude of cold temperature and the rate of temperature change. Cold-evoked responses of CIII neurons included phasic and tonic components: the peak of firing rate that occurred within 10–20 s of stimulation was followed by frequency adaptation reaching steady-state spiking activity. The steady-state frequency of CIII neurons was temperature-dependent. The estimated temperatures of half-maximal activation of individual neurons were distributed over a wide temperature range. The magnitude of the firing rate peak significantly correlated with the maximal rate of temperature change.

Based on transcriptomic data from CIII neurons [1] and patch-clamp data on the gating characteristics of Drosophila Na+ and K+ channels [2,3], we developed a computational model that includes a TRP current with temperature-dependent activation and Ca2+-dependent inactivation. Modeling suggests that the kinetics of the TRP current are responsible for the tonic-phasic response and the sensitivity to the rate of temperature change. A rapid inactivation (~3–20 s) of TRP currents could explain the initial peak of spiking rate during a fast temperature fall and the subsequent frequency adaptation when the temperature reaches a steady level. We identified two basic cold-evoked patterns of CIII neurons: bursting and spiking. Bursts were seen more frequently within the peak of spiking rate in response to a fast temperature drop, followed by tonic spiking with frequency adaptation. On the other hand, when the temperature was decreased slowly, fewer neurons showed bursts of activity, and the bursting activity did not form a peak of activity.
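
A minimal illustration of such a gating scheme (illustrative parameter values, not the published model): activation follows temperature instantaneously and opens on cooling, while inactivation tracks intracellular Ca2+ with a slow time constant, which yields the phasic-tonic profile.

```python
import numpy as np

def trp_current(V, T_celsius, Ca, h, dt, g_max=1.0, E_trp=0.0,
                T_half=12.0, k_T=2.0, Ca_half=0.5, k_Ca=0.1, tau_h=10.0):
    """One Euler step of a TRP current; returns the current and the
    updated slow inactivation variable h."""
    m_inf = 1.0 / (1.0 + np.exp((T_celsius - T_half) / k_T))  # opens on cooling
    h_inf = 1.0 / (1.0 + np.exp((Ca - Ca_half) / k_Ca))       # closes as Ca rises
    h = h + dt * (h_inf - h) / tau_h                          # slow inactivation
    return g_max * m_inf * h * (V - E_trp), h
```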

Using the computational model, we described the mechanisms of the two basic CIII cold-evoked activities, spiking and bursting, and of the phasic and tonic components of their responses, which were defined by the dynamics of TRP channels and their interaction with the voltage-gated Ca2+ current and Ca2+-activated K+ currents. By applying an evolutionary algorithm, we obtained parameter sets for the time constant of TRP inactivation, the temperature of half-maximal activation, and the steepness of TRP current activation and inactivation that reproduce key features of the CIII spiking responses recorded experimentally. The present results bring new insights into the potential molecular and biophysical mechanisms underlying neural processing of noxious and innocuous cold stimuli.

Acknowledgments

NIH grant 1R01NS115209 (DNC and GSC)

References

1. Turner HN, Armengol K, Patel AA, Himmel NJ, Sullivan L, et al. The TRP channels Pkd2, NompC, and Trpm act in cold-sensing neurons to mediate unique aversive behaviors to noxious cold in Drosophila. Current biology. 2016 Dec 5;26(23):3116–28.

2. Wang L, Nomura Y, Du Y, Dong K. Differential effects of TipE and a TipE-homologous protein on modulation of gating properties of sodium channels from Drosophila melanogaster. PLoS One. 2013 Jul 18;8(7):e67551.

3. Hardie RC. Voltage-sensitive potassium channels in Drosophila photoreceptors. Journal of Neuroscience. 1991 Oct 1;11(10):3079–95.

P35 Control of bursting activity based on interaction of the Na+/K+ pump with persistent sodium current

Ricardo J. E. Toscano 1 , Samuel Core 1 , Ronald Calabrese 2 , Gennady Cymbalyuk 1

1 Georgia State University, Neuroscience Institute, Atlanta, GA, United States of America

2 Emory University, Department of Biology, Atlanta, GA, United States of America

Email: gcymbalyuk@gmail.com

Life-supporting rhythmic motor functions like heart beating in invertebrates and breathing in vertebrates require the indefatigable generation of a robust rhythm by specialized oscillatory circuits, Central Pattern Generators (CPGs). Yet, CPGs should be sufficiently flexible to adjust to changes in the environment and behavioral goals. Neuromodulation modifies the CPG’s rhythm by co-regulating multiple ionic currents, including the Na+/K+ pump current, Ipump. In the leech heartbeat CPG, the endogenous neuropeptide myomodulin downregulates Ipump and upregulates Ih to speed up the CPG’s rhythm [1]. The interaction of these currents dramatically speeds up the rhythm of the leech heartbeat CPG when Ipump is activated by increased internal Na+ concentration, [Na+]i, produced by application of monensin [2]. Comodulation of Ipump and Ih supports the CPG’s functional activity over a wider range of cycle periods and avoids dysfunctional regimes [3].

We hypothesize that the interaction of Ipump and the persistent Na+ current, IP, produces a mechanism supporting functional bursting. Ipump is an outward current activated by [Na+]i and is a major source of Na+ efflux. IP is a low-voltage-activated inward current and is a major source of Na+ influx. Both currents are active between and during bursts. We apply a combination of electrophysiology, computational modeling, and dynamic clamp to investigate the role of Ipump and IP in leech heartbeat CPG interneurons (HNs). Applying dynamic clamp to introduce additional Ipump and IP into the dynamics of a living, synaptically isolated HN neuron in real time [4], we show that their joint upregulation produces a transition into a new bursting regime characterized by higher spiking frequency and a more depolarized base potential during the burst. Further upregulation of Ipump speeds up the HN rhythm by shortening burst duration and interburst interval.
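
The core feedback loop can be written compactly (illustrative equations and parameters, not the lab's HN model): IP carries Na+ in and depolarizes, while the pump senses [Na+]i and generates an outward current that extrudes Na+.

```python
import numpy as np

def pump_and_ip(V, Na_i, dt, g_p=5.0, E_na=45.0, I_max=1.0,
                Na_h=18.0, s=3.0, alpha=0.01):
    m_inf = 1.0 / (1.0 + np.exp(-(V + 50.0) / 5.0))     # low-threshold activation
    I_p = g_p * m_inf * (V - E_na)                      # inward (negative) below E_na
    I_pump = I_max / (1.0 + np.exp((Na_h - Na_i) / s))  # outward, grows with [Na+]i
    Na_i = Na_i + dt * (-alpha * (I_p + 3.0 * I_pump))  # influx via IP, efflux via pump
    return I_p, I_pump, Na_i
```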

In summary, the dynamic interaction of the Na+/K+ pump current with the persistent Na+ current offers a mechanism for the generation and regulation of robust yet flexible bursting activity.

Acknowledgments

The study was supported by NIH Grant 1 R21 NS111355 to GC and RC and the GSU Brains and Behavior Fellowships to RET and SC.

References

1. Tobin AE, Calabrese RL. Myomodulin increases Ih and inhibits the Na/K pump to modulate bursting in leech heart interneurons. Journal of neurophysiology. 2005 Dec;94(6):3938–50.

2. Kueh D, Barnett WH, Cymbalyuk GS, Calabrese RL. Na+/K+ pump interacts with the h-current to control bursting activity in central pattern generator neurons of leeches. Elife. 2016 Sep 2;5:e19322.

3. Ellingson PJ, Barnett WH, Kueh D, Vargas A, Calabrese RL, et al. Comodulation of h- and Na+/K+ Pump Currents Expands the Range of Functional Bursting in a Central Pattern Generator by Navigating between Dysfunctional Regimes. Journal of Neuroscience. 2021 Jul 28;41(30):6468–83.

4. Toscano RJ, Ellingson PJ, Calabrese RL, Cymbalyuk GS. Contribution of the Na+/K+ Pump to Rhythmic Bursting, Explored with Modeling and Dynamic Clamp Analyses. Journal of Visualized Experiments: JoVE. 2021 May 9(171).

P36 The role of hyperpolarization-activated current in the production of episodic bursting by a half-center oscillator

Jessica Parker 1 , Simon A. Sharples 2 , Alex Vargas 1 , Patrick J. Whelan 3 , Gennady Cymbalyuk 1

1 Georgia State University, Neuroscience Institute, Atlanta, GA, United States of America

2 University of St Andrews, School of Psychology and Neuroscience, Fife, United Kingdom

3 University of Calgary, Hotchkiss Brain Institute, Calgary, Canada

Email: gcymbalyuk@gmail.com

Growing evidence suggests that specialized oscillatory neuronal circuits controlling locomotion, called locomotor central pattern generators, are capable of producing a variety of rhythmic patterns in response to changes in neuromodulator tone [1,2]. Besides the continuous bursting rhythm (period ~1 s), isolated neonatal rodent spinal cord preparations exhibit a complex pattern evoked by dopamine: a very slow episodic bursting rhythm (period ~50 s) in which episodes of fast bursting rhythm are separated by long pauses [1]. Neuromodulation can cause transitions between these rhythms by altering key properties of intrinsic ionic currents.

Here, we describe how a basic half-center oscillator (HCO) model of a CPG, modified from [3] and assembled from two mutually inhibitory neurons, can produce both types of patterns. In the model HCO, each neuron represents a population of interneurons in the spinal cord. Each neuron is constructed as a single-compartment model with ionic currents described by the Hodgkin-Huxley formalism, together with a dynamical intracellular Na+ concentration, [Na+]i, and a Na+/K+ pump current, IPump. The HCO model successfully simulated many important characteristics of the experimentally recorded episodic pattern and the alterations caused by pharmacological agents. The model’s mechanism underlying episodic activity depends mainly on two intrinsic currents: IPump and the h-current, Ih. Consistent with the effects of ouabain bath application in experiments, decreasing the maximal pump activity caused a transition from episodic to continuous bursting. When we increased the [Na+]i influx, indirectly increasing IPump, episode duration (ED) and episode cycle period (EP) increased while the interepisode interval (IEI) did not change significantly, consistent with bath application of monensin. Increasing the maximal conductance of Ih increased ED without a significant effect on IEI and, at a certain critical value, caused a transition into continuous bursting, consistent with experiments using ZD-7288 bath application. We found that a single model neuron is capable of generating episodic activity and that activation and deactivation of Ih govern the episodic pattern. By applying slow-fast decomposition to the single neuron model, we elucidated the mechanisms underlying the generation of episodic bursting. These mechanisms, involving the balance of Ih and IPump, may be applicable to other biological systems that engage in episodic activity.

Acknowledgments

The study was funded by studentships from NSERC-PGS-D, Alberta Innovates, and Hotchkiss Brain Institute (SAS); grants from Canadian Institute of Health Research and NSERC Discovery grant (PJW); GSU Brains and Behavior Fellowship (JRP); NIH grant 1 R21 NS111355 (GSC and Ronald L. Calabrese).

References

1. Sharples SA, Whelan PJ. Modulation of rhythmic activity in mammalian spinal networks is dependent on excitability state. Eneuro. 2017 Jan;4(1).

2. Picton LD, Nascimento F, Broadhead MJ, Sillar KT, Miles GB. Sodium pumps mediate activity-dependent changes in mammalian motor networks. Journal of Neuroscience. 2017 Jan 25;37(4):906–21.

3. Parker J, Bondy B, Prilutsky BI, Cymbalyuk G. Control of transitions between locomotor-like and paw shake-like rhythms in a model of a multistable central pattern generator. Journal of neurophysiology. 2018 Sep 1;120(3):1074–89.

P37 Embedded chimera states in recurrent neural networks

Maria Masoliver 1 , Wilten Nicola 2 , Joern Davidsen 1

1 University of Calgary, Physics and Astronomy, Calgary, Canada

2 University of Calgary, Calgary, Canada

Email: maria.masolivervila@ucalgary.ca

It has been experimentally verified that synchronization and partial synchronization of brain activity play an important role in the pathogenesis of several neurological diseases, such as Parkinson’s disease, Alzheimer’s disease and essential tremor [1,2] (among others), as well as in normal functioning brain circuits [3–6] (e.g., during memory consolidation). However, the fundamental principles and constraints that govern the intricate timing and specificity of the time-evolving patterns of partial synchrony are not well understood.

Here we aim to relate the mathematical concept of the chimera state [7,8], in which synchrony and asynchrony coexist, to partial synchronization in the brain. So far, chimera states have been investigated through bottom-up approaches using simple mathematical models [9,10]. However, these simple models are not directly applicable to real biological systems (e.g., brain regions), which are extraordinarily complex networks of coupled dynamical systems. Nevertheless, there has been some initial work relating chimera states to brain-related disorders such as epileptic seizures, Parkinson’s and schizophrenia [11–14], as well as to the normal operating regime of circuits like the hippocampus [6].

Here we initiate a novel approach by training the synaptic connections of an artificial recurrent neural network, using machine-learning techniques, to display a chimera state. We establish that chimera states can in principle emerge at the mesoscopic and macroscopic level in brain circuits and do not require precisely specified connectivity or network topologies (e.g., rings). These network-embedded chimera states are quite generic, with connectivity matrices that are primarily random, carrying only small perturbations away from randomness. Our results imply that the emergence of chimeras at the meso- and macroscale is generic, suggesting their general relevance in neuroscience for both pathological and healthy circuits.
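
A standard diagnostic, sketched below with toy phases, is the Kuramoto order parameter computed separately over subpopulations: it stays near 1 for the synchronized group and near 0 for the asynchronous one.

```python
import numpy as np

def order_parameter(phases):
    """Kuramoto order parameter R in [0, 1]."""
    return np.abs(np.exp(1j * phases).mean())

rng = np.random.default_rng(0)
sync_phases = 0.05 * rng.standard_normal(100)       # tightly clustered
async_phases = 2 * np.pi * rng.random(100)          # uniformly scattered
print(order_parameter(sync_phases), order_parameter(async_phases))  # ~1 vs ~0
```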

References

1. Uhlhaas PJ, Singer W. Neural synchrony in brain disorders: relevance for cognitive dysfunctions and pathophysiology. Neuron. 2006 Oct 5;52(1):155–68.

2. Tang Y, Qian F, Gao H, Kurths J. Synchronization in complex networks and its application–a survey of recent advances and challenges. Annual Reviews in Control. 2014 Jan 1;38(2):184–98.

3. Melloni L, Molina C, Pena M, Torres D, Singer W, et al. Synchronization of neural activity across cortical areas correlates with conscious perception. Journal of neuroscience. 2007 Mar 14;27(11):2858–65.

4. Buzsáki G. Theta oscillations in the hippocampus. Neuron. 2002 Jan 31;33(3):325–40.

5. Lubenov EV, Siapas AG. Hippocampal theta oscillations are travelling waves. Nature. 2009 May;459(7246):534–9.

6. Buzsáki G. Hippocampal sharp wave‐ripple: A cognitive biomarker for episodic memory and planning. Hippocampus. 2015 Oct;25(10):1073–188.

7. Kuramoto Y, Battogtokh D. Coexistence of coherence and incoherence in nonlocally coupled phase oscillators. arXiv preprint cond-mat/0210694. 2002 Oct 31.

8. Davidsen J. Symmetry-breaking spirals. Nature Physics. 2018 Mar;14(3):207–8.

9. Laing CR. The dynamics of chimera states in heterogeneous Kuramoto networks. Physica D: Nonlinear Phenomena. 2009 Aug 1;238(16):1569–88.

10. Chouzouris T, Omelchenko I, Zakharova A, Hlinka J, Jiruska P, et al. Chimera states in brain networks: Empirical neural vs. modular fractal connectivity. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2018 Apr 9;28(4):045112.

11. Bassett DS, Sporns O. Network neuroscience. Nature neuroscience. 2017 Mar;20(3):353–64.

12. Lainscsek C, Rungratsameetaweemana N, Cash SS, Sejnowski TJ. Cortical chimera states predict epileptic seizures. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2019 Dec 30;29(12):121106.

13. Bansal K, Garcia JO, Tompson SH, Verstynen T, Vettel JM, et al. Cognitive chimera states in human brain networks. Science advances. 2019 Apr 1;5(4): eaau8535.

14. Lainscsek C, Rungratsameetaweemana N, Cash SS, Sejnowski TJ. Cortical chimera states predict epileptic seizures. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2019 Dec 30;29(12):121106.

P38 Alteration of astrocytic glutamate transporters can drive a multistage progression of Alzheimer’s disease

Giulio Bonifazi 1 , Celia Luchena 2 , Adhara Gaminde-Blasco 2 , Carolina Ortiz-Sanz 2 , Estibaliz Capetillo-Zarate 2 , Carlos Matute 2 , Elena Alberdi 2 , Maurizio De Pittà 1

1 Basque Center for Applied Mathematics, Bilbao, Spain

2 Achucarro Basque Center for Neuroscience and Universidad del País Vasco (UPV/EHU), Leioa, Spain

Email: gbonifazi@bcamath.org

At Alzheimer’s disease (AD) onset, accumulation of amyloid-β (Aβ) correlates with excitotoxicity and altered glutamate uptake. Experiments show that oligomeric Aβ in mouse cultures modifies the expression of astrocytic GLT1 transporters, which remove most of the extracellular glutamate, preventing excitotoxicity. We therefore consider how extracellular Aβ modifies GLT1 expression and how this impacts the glutamate time course in the peri-synaptic space, and we develop a mathematical model of glutamate diffusion and uptake by astrocytic transporters. Since extracellular glutamate and Aβ both modulate and depend on calcium homeostasis and the firing properties of the tissue, we include these in our model to estimate the conditions for excitotoxicity. We then upscale our description to the tissue level, considering the dynamics of the average firing rate, glutamate, Aβ, and intracellular calcium concentration.

Our model predicts that when Aβ lowers GLT1 concentration below a threshold, extracellular glutamate accumulates. This promotes a positive feedback loop that induces further synaptic glutamate release and thereby excitotoxicity. When calcium and firing dynamics are included, changes in astrocytic glutamate uptake and basal firing activity yield a third, intermediate state: an asymptomatic stage of the disease that can either degenerate into pathology or revert to a healthy brain. These results provide theoretical support for the pivotal role of Aβ in triggering excitotoxicity by perturbing neuronal activity, glutamate, and calcium homeostasis. Furthermore, they support the view of Alzheimer's as a multistage disease in which transitions between stages are driven by Aβ.
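
The feedback can be caricatured with a one-variable rate balance (illustrative functional forms and parameters, not the published model): glutamate-dependent release against Michaelis-Menten uptake scaled by GLT1 level. With full GLT1 expression, the scan below finds a stable low-glutamate state, an unstable threshold, and a bounded high state; with reduced GLT1, uptake saturates below release, so glutamate beyond the threshold grows without bound (runaway excitotoxicity).

```python
import numpy as np

def glu_rate(G, glt1, release_max=2.0, G_half=1.0, k=0.2, v_uptake=3.0, Km=2.0):
    release = release_max / (1.0 + np.exp(-(G - G_half) / k))  # activity feedback
    uptake = v_uptake * glt1 * G / (G + Km)                    # GLT1-scaled uptake
    return release - uptake

G = np.linspace(0.0, 6.0, 2000)
for glt1 in (1.0, 0.4):           # full vs. Abeta-reduced GLT1 expression
    fixed_points = G[:-1][np.diff(np.sign(glu_rate(G, glt1))) != 0]
    print(glt1, fixed_points)     # three crossings vs. two (runaway above threshold)
```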

P39 Mapping and validating the LIF model on Intel's Loihi

Srijanie Dey 1 , Alexander Dimitrov 1

1 Washington State University, Mathematics and Statistics, Vancouver, WA, United States of America

Email: srijanie.dey@wsu.edu

Neuromorphic hardware emulates the natural biological structure of the brain. Since its computational model is by design similar to standard neural models, we would like to use it as a computational accelerator for both research projects and biomedical applications. However, in order to exploit this new generation of computer chips, rigorous simulation and consequent validation against brain-based experimental data are imperative. In this work, we investigate the potential of Intel's fifth-generation neuromorphic chip ‘Loihi’ [1], which is based on Spiking Neural Networks (SNNs) emulating the neurons in the brain. The work is implemented in the context of simulating Leaky Integrate-and-Fire (LIF) models [2] of mouse primary visual cortex, matched to a rich data set of anatomical, physiological and behavioral constraints. We address neuromorphic hardware challenges, namely fixed-point arithmetic, bit-size constraints and a distinct algorithmic notion of time. Simulations on classical hardware [3] serve as the validation platform for the neuromorphic implementation. In spite of the hardware implementation constraints, we find that Loihi is highly efficient, producing high-quality replications of the classical simulations. In addition, it scales extremely well in both time and energy performance as the network size grows.
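
The mapping issue can be illustrated by placing a floating-point LIF update next to a fixed-point update with an integer decay factor and bit-shift arithmetic (a generic sketch of the scheme, not Intel's API):

```python
import numpy as np

def lif_float(v, I, dt=1e-3, tau=0.02, v_th=1.0):
    v = v + dt * (-v / tau + I)
    spike = v >= v_th
    return np.where(spike, 0.0, v), spike

def lif_fixed(v_q, I_q, decay=205, v_th_q=1 << 16, shift=12):
    """Integer state; decay/2**shift = 205/4096 approximates dt/tau = 0.05."""
    v_q = v_q - ((v_q * decay) >> shift) + I_q
    spike = v_q >= v_th_q
    return np.where(spike, 0, v_q), spike

# Validate the two against each other (v_th maps to 2**16; I maps to I*dt*2**16).
v_f, v_q = np.zeros(3), np.zeros(3, dtype=np.int64)
for _ in range(100):
    v_f, s_f = lif_float(v_f, I=np.array([60.0, 40.0, 20.0]))
    v_q, s_q = lif_fixed(v_q, I_q=np.array([3932, 2621, 1311]))
```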

References

1. Davies M, Srinivasa N, Lin TH, Chinya G, Cao Y, et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro. 2018 Jan 16;38(1):82–99.

2. Teeter C, Iyer R, Menon V, Gouwens N, Feng D, et al. Generalized leaky integrate-and-fire models classify multiple neuron types. Nature Communications. 2018 Feb 19;9(1):1–5.

3. Allen Institute for Brain Science. Brain Modeling Toolkit (BMTK). Available from: https://alleninstitute.github.io/bmtk/

P40 A model study of electro-diffusion in dendritic signal processing

Yinyun Li 1 , Alexander Dimitrov 2

1 Beijing Normal University, Shool of Systems Science, Beijing, China

2 Washington State University, Mathematics, Vancouver, United States of America

Email: yinyun@bnu.edu.cn

Dendrites are the sub-compartments of neurons that receive signals from other neurons and relay them, after processing, to the neuronal cell body [1–3]. Studying the full electro-diffusive ion dynamics [4–7] will allow us to understand neuronal structures and functions that remain mysterious under the approximate classical electrical membrane model.

Electro-diffusion of ions is expected to matter only in relatively small volumes, where charge fluxes into the space cause ions to accumulate extremely quickly; the resulting concentration spike allows electric potential gradients to come into play [2,4,7].

In this presentation, we investigate the impact of electro-diffusion on a typical small dendritic sub-compartment. The Nernst-Planck equation is used for the dynamics of the ions, together with voltage-gated ion channel dynamics and the voltage equation for the dendritic membrane [3,4].
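For concreteness, the flux form of the Nernst-Planck equation used in such models, J = −D(∂c/∂x + (zF/RT) c ∂V/∂x), can be stepped explicitly on a one-dimensional grid. The sketch below is a minimal illustration with made-up geometry and stimulus, not the model of this study:

```python
import numpy as np

# Physical constants
F, R, T = 96485.0, 8.314, 310.0      # C/mol, J/(mol K), K
D_K, z_K = 1.96e-9, 1                # K+ diffusion coeff (m^2/s), valence

def nernst_planck_step(c, V, dx, dt, D=D_K, z=z_K):
    """One explicit step of 1D electro-diffusion for one ion species:
    flux J = -D (dc/dx + (zF/RT) * c * dV/dx), update by continuity."""
    dcdx = np.gradient(c, dx)
    dVdx = np.gradient(V, dx)
    J = -D * (dcdx + (z * F / (R * T)) * c * dVdx)
    return c - dt * np.gradient(J, dx)

# Illustrative setup: 20 um dendrite segment, localized K+ bump and a
# small linear voltage gradient (all values for demonstration only)
n, dx, dt = 200, 0.1e-6, 1e-7        # grid points, 0.1 um, 0.1 us
x = np.arange(n) * dx
c = 140.0 + 5.0 * np.exp(-((x - 10e-6) / 1e-6) ** 2)   # mM
V = -0.07 + 0.01 * x / x[-1]                            # V

for _ in range(1000):                # 0.1 ms of electro-diffusion
    c = nernst_planck_step(c, V, dx, dt)
print(f"peak excess concentration after 0.1 ms: {c.max() - 140:.3f} mM")
```

Note the explicit scheme requires dt below the diffusive stability limit dx²/(2D); the values above satisfy it with a wide margin.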

Our model shows that, for a single injected alpha-current pulse, the one-dimensional dendritic model exhibits only a 0.13 mV difference due to electro-diffusion. For multiple spike stimulations, however, the dendritic membrane potential accumulates this small discrepancy and eventually reaches a significant magnitude, depending on the stimulation frequency. Importantly, the impact also spreads to neighboring regions, from 10 μm at 20 Hz to more than 20 μm at 100 Hz stimulation. In addition, the electro-diffusion effect depends on the diameter of the dendrite, as indicated by the Nernst-Planck equation.

We also investigated synaptic cooperation and competition by injecting two currents at various separations. Consistent with the above analysis, the electro-diffusive contributions to the membrane potential interact significantly only within 10–15 μm; for injection distances greater than 20 μm, our simulations show that the impacts do not superpose.

Acknowledgements

We thank the China Scholarship Council for funding support to Yinyun Li.

References

1. Goldman DE. Potential, impedance, and rectification in membranes. The Journal of general physiology. 1943 Sep 20;27(1):37–60.

2. Johnston D, Magee JC, Colbert CM, Christie BR. Active properties of neuronal dendrites. Annual review of neuroscience. 1996 Mar;19(1):165–86.

3. Johnston D, Wu SM. Foundations of cellular neurophysiology. MIT press; 1994 Nov 2.

4. Qian N, Sejnowski TJ. An electro-diffusion model for computing membrane potentials and ionic concentrations in branching dendrites, spines and axons. Biological Cybernetics. 1989 Nov;62(1):1–5.

5. Cartailler J, Kwon T, Yuste R, Holcman D. Deconvolution of voltage sensor time series and electro-diffusion modeling reveal the role of spine geometry in controlling synaptic strength. Neuron. 2018 Mar 7;97(5):1126–36.

6. Halnes G, Østby I, Pettersen KH, Omholt SW, Einevoll GT. Electrodiffusive model for astrocytic and neuronal ion concentration dynamics. PLoS computational biology. 2013 Dec 19;9(12):e1003386.

7. Lagache T, Jayant K, Yuste R. Electrodiffusion models of synaptic potentials in dendritic spines. Journal of computational neuroscience. 2019 Aug;47(1):77–89.

P41 A neuromorphic application to keyword recognition

Michael Helde 1 , Alexander Dimitrov 2

1 Skyview High School, Vancouver, WA, United States of America

2 Washington State University, Neuroscience, Vancouver, WA, United States of America

Email: michaelhelde11@gmail.com

Neuromorphic hardware simulating Spiking Neural Networks (SNN) is becoming more broadly commercially available. There are still relatively few neural-based algorithms that can effectively operate in this unfamiliar development environment. We conjecture that algorithms based on specific sensory modalities can be used more broadly for general sensory signal processing. In this research project, we applied one of these neuromorphic algorithms, based on the structure of the mammalian olfactory bulb [1], to speech keyword recognition. The implemented SNN yielded efficient and accurate sound recognition. To adapt the aforementioned neural algorithm to audio analysis, we performed several sound-specific preprocessing steps. First, a gammatone filter was applied to reduce the noise of the short audio sample and to convert the temporal sound signal into a positional frequency representation. The single-odor test algorithm was then altered to process columns extracted from the gammatone spectrogram of a sound file. The results showed that, over sequential “olfactory” gamma cycles, the algorithm successfully achieved one-shot online learning (Fig. 1). The frequency responses of the individual sensors were clearly distinguishable. Currently, we are experimenting with multiple audio samples to test the potential identification of speakers. Implementation on the Loihi neuromorphic hardware chip should yield substantial gains in speed and energy efficiency compared to general-purpose computers. Thus, one-shot learning has been achieved, and the modified neuromorphic algorithm demonstrates the validity of our cross-modality hypothesis.
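A minimal sketch of the gammatone preprocessing step is shown below, using SciPy's gammatone filter design. The channel count, frequency spacing, and windowing are assumptions for illustration; the actual pipeline in this work may differ:

```python
import numpy as np
from scipy import signal

def gammatone_spectrogram(audio, fs, n_channels=32, f_lo=100.0,
                          f_hi=7000.0, win=0.01):
    """Gammatone filterbank 'spectrogram': temporal signal -> per-channel
    envelope, i.e., a positional-frequency representation."""
    # Log-spaced center frequencies (ERB spacing is more common; this is simpler)
    fcs = np.geomspace(f_lo, f_hi, n_channels)
    hop = int(win * fs)
    frames = len(audio) // hop
    out = np.zeros((n_channels, frames))
    for ch, fc in enumerate(fcs):
        b, a = signal.gammatone(fc, 'iir', fs=fs)        # IIR gammatone filter
        env = np.abs(signal.lfilter(b, a, audio))        # rectified output
        out[ch] = env[: frames * hop].reshape(frames, hop).mean(axis=1)
    return out   # columns could be fed to the olfactory-bulb SNN as 'sensor' inputs

fs = 16000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(fs)
spec = gammatone_spectrogram(audio, fs)
print(spec.shape)   # (32 channels, 100 frames)
```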

References

1. Imam N, Cleland TA. Rapid online learning and robust recall in a neuromorphic olfactory circuit. Nature Machine Intelligence. 2020 Mar;2(3):181–91.

Fig. 1

Similarity of testing audio to learned audio

P42 A general approach to characterize structured synchronization processes in spiking neural networks based on an adaptive synchronization measure

Denis Zakharov 1 , Boris Gutkin 2 , Olesia Dogonasheva 1

1 Higher School of Economics, Centre for Cognition & Decision Making, Moscow, Russia

2 École Normale Supérieure, Paris, France

Email: dzakh76@gmail.com

Synchronization in the brain underlies information processing across multiple areas. Notably, one can observe spatially structured coherence states in which multiple, tunable synchronous brain subnetworks coexist. From the point of view of nonlinear dynamics, these correspond to clustered synchronization or chimera states (the coexistence of synchronous and asynchronous activity [1]). For example, recent experiments have shown that chimera states are observed in the brain during epileptic seizures and unihemispheric sleep [2].

Despite the recent interest in chimera states, robustly and automatically identifying such complex spatio-temporal dynamics of neuronal networks remains a key challenge. Arguably, previously proposed measures for chimera state identification (the Kuramoto order parameter [1], the strength of incoherence [3], and the χ²-parameter [4]) have significant drawbacks: inability to identify cluster synchronization, instability in the travelling-wave regime, the need for hand-tuned parameter selection, and empirical selection of regime boundaries.

We propose a new approach for large-scale studies of chimera states [5] – the adaptive coherence measure (ACM). ACM is based on a modification of the χ²-parameter: we solve the optimization problem R² = max_Δt χ²({V_i(t − Δt_i)}_{i=1..N}), where Δt = (Δt_1, Δt_2, …, Δt_N) is a vector of time lags. The pair (R², L), where L is the number of unique time lags, unequivocally determines the dynamical regime (see Table 1). For a chimera state, we can identify large synchronous groups of neurons (L_lgs) together with a large population of asynchronous neurons in the network (see, for example, Fig. 1).
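The idea can be sketched with the classic χ² synchrony measure and a greedy per-neuron lag search; the published ACM algorithm [5] may solve the lag optimization differently, and the traces and lag range below are toy assumptions:

```python
import numpy as np

def chi2(V):
    """Golomb-Hansel synchrony measure: variance of the population mean
    normalized by the mean single-unit variance (1 = full synchrony)."""
    return np.var(V.mean(axis=0)) / np.var(V, axis=1).mean()

def adaptive_coherence(V, max_lag=50):
    """Greedy sketch of the ACM: shift each trace by the lag that maximizes
    chi^2, then report (R^2, number of unique lags L)."""
    N, T = V.shape
    lags = np.zeros(N, dtype=int)
    shifted = V[:, max_lag:T - max_lag].copy()
    for i in range(N):
        best, best_lag = -np.inf, 0
        for lag in range(-max_lag, max_lag + 1):
            candidate = shifted.copy()
            candidate[i] = V[i, max_lag + lag: T - max_lag + lag]
            score = chi2(candidate)
            if score > best:
                best, best_lag = score, lag
        lags[i] = best_lag
        shifted[i] = V[i, max_lag + best_lag: T - max_lag + best_lag]
    return chi2(shifted), len(np.unique(lags))

# Toy example: a traveling wave reaches R^2 ~ 1 only after lag alignment,
# and is flagged by its many unique lags rather than by a single one.
t = np.linspace(0, 10, 1000)
V = np.array([np.sin(2 * np.pi * (t - 0.01 * i)) for i in range(20)])
print(adaptive_coherence(V))
```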

Our approach allows automatic disambiguation of synchronized clusters, travelling waves, chimera states, and asynchronous regimes. In addition, our method can determine the number of clusters in the case of cluster synchronization.

Acknowledgements

This study has been carried out using HSE unique equipment (Reg. num. 354937) and supported by the RF Government grant ag. № 075-15-2021-673. The research was also partially supported by the computational resources of HPC facilities.

References

1. Abrams DM, Strogatz SH. Chimera states for coupled oscillators. Physical Review Letters. 2004 Oct 22;93(17):174102.

2. Lainscsek C, Rungratsameetaweemana N, Cash SS, Sejnowski TJ. Cortical chimera states predict epileptic seizures. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2019 Dec 30;29(12):121106.

3. Gopal R, Chandrasekar VK, Venkatesan A, Lakshmanan M. Observation and characterization of chimera states in coupled dynamical systems with nonlocal coupling. Physical Review E. 2014 May 27;89(5):052914.

4. Golomb D, Hansel D, Mato G. Mechanisms of synchrony of neural activity in large networks. In: Handbook of Biological Physics. 2001 (Vol. 4, pp. 887–968). North-Holland.

5. Dogonasheva O, Kasatkin D, Gutkin B, Zakharov D. Robust universal approach to identify travelling chimeras and synchronized clusters in spiking networks. arXiv preprint arXiv:2103.09304. 2021 Mar 16.

Table 1 Classification of network dynamical regimes on the basis of the adaptive coherence measure (ACM) and the number of unique time lags L
Fig. 1

Raster plot (A), frequency distribution (B) and instantaneous snapshot (C) for a traveling chimera state (R² = 0.7779) with two synchronous clusters (L_lgs = 2). The neuronal network is from [5] (Iext = 95 μA/cm², gsyn = 3 mS/cm², r = 0.78)

P43 AnalySim: A web platform for collaborative data sharing and analysis

Cengiz Gunay 1 , Nga Tran 1 , Hieu Dinh 1 , Anca Doloc-Mihu 1

1 Georgia Gwinnett College, School of Science and Technology, Lawrenceville, GA, United States of America

Email: cengique@users.sf.net

AnalySim is a website under development that helps create and share projects analyzing various types of datasets. AnalySim aims to support data sharing, data hosting for publications, interactive visualization, collaborative research, and crowdsourced analysis. It provides special support for datasets with many changing parameters and recorded measurements, such as those produced by large-scale neuronal simulation studies. However, AnalySim is not limited to this type of data and allows running custom code. Currently, we demonstrate a proof-of-concept analysis by embedding JavaScript notebooks hosted on ObservableHQ.com. We plan to support Python Jupyter notebooks in the future.

Offering these features on an interactive web platform improves the visibility of one’s research and helps the paper review process by allowing others to reproduce one’s analyses. In addition, it fosters collaborative research by providing access to others' public datasets and analyses, creating opportunities to ask novel questions, guide one's research, and start new collaborations or join existing teams. AnalySim can be said to provide a “social scientific environment”, which includes features such as forking or cloning existing projects to customize them and tagging or following researchers and projects. In addition, one can filter datasets, duplicate analyses and improve them, and publish findings via interactive visualizations. In summary, AnalySim is a GitHub-like tool specialized for scientific problems, especially those as large and complex as parameter searches.

P44 Signal denoising through modular topography

Barna Zajzon 1 , David Dahmen 1 , Abigail Morrison 2 , Renato Duarte 3

1 Jülich Research Center, Institute of Neuroscience and Medicine (INM-6), Jülich, Germany

2 Forschungszentrum Jülich, Institute of Neuroscience and Medicine (INM-6), Jülich, Germany

3 Jülich Research Center, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich, Germany

Email: b.zajzon@fz-juelich.de

To navigate in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous, incomplete or corrupt. From these noisy inputs, cortical circuits extract the relevant features to forge a ground truth against which internally generated signals from inferential processes can be evaluated. Since information that fails to permeate the cortical hierarchy cannot influence sensory perception and decision-making, it is critical that external stimuli are encoded and propagated through the different processing stages in a manner that minimizes signal degradation.

In this study, we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. A pervasive structural feature of the mammalian neocortex, topographic projections can imprint spatiotemporal features of (noisy) sensory inputs onto the cortex by preserving the relative organization of cells between distinct populations. Here, we investigate whether the feature-specific pathways within such maps can guide and route stimulus information throughout the system while retaining representational fidelity.

We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections can help the system reduce sensory and intrinsic noise and thus enable accurate propagation of stimulus representations. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision can instantiate a de facto denoising autoencoder, whereby the system's internal representation is gradually improved and the signal-to-noise ratio increased as the input signal is transmitted through the network. This denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes in which population responses along stimulated maps are amplified and others are weakened.
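One way to picture the "sharpness" of such feed-forward projections is as the fraction of connection probability mass a module sends to its matching target module. The construction below is an illustrative sketch, not the connectivity rule used in this study:

```python
import numpy as np

def topographic_projection(n_modules=10, n_per_module=100, p_mean=0.1,
                           sharpness=0.9, rng=None):
    """Random feed-forward connectivity between two modular populations:
    a fraction `sharpness` of the probability mass targets the matching
    (stimulus-specific) module, the rest is spread across the others."""
    rng = rng or np.random.default_rng(0)
    N = n_modules * n_per_module
    module = np.repeat(np.arange(n_modules), n_per_module)
    same = module[:, None] == module[None, :]
    p = np.where(same,
                 p_mean * sharpness * n_modules,
                 p_mean * (1 - sharpness) * n_modules / (n_modules - 1))
    return rng.random((N, N)) < p

# Overall density stays ~p_mean while sharpness redistributes the mass;
# sweeping `sharpness` locates the critical transition described above.
W = topographic_projection(sharpness=0.9)
print(f"overall density: {W.mean():.3f}")
```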

In addition, we demonstrate that this is a generalizable and robust structural effect, largely independent of the underlying architectural specificities. Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system's behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps. Finally, our results indicate that structured projection patterns can enable a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others, resembling winner-take-all circuits; and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.

P45 Simulation of the somatosensory cortex microcircuit in NetPyNE

Fernando Borges 1 , Joao Moreira 2 , Donald Doherty 1 , William W. Lytton 1 , Salvador Dura-Bernal 1

1 SUNY Downstate Health Sciences, Department of Physiology & Pharmacology, Brooklyn, NY, United States of America

2 SUNY Downstate Health Sciences, Program in Neural and Behavioral Science, Brooklyn, NY, United States of America

Email: fernandodasilvaborges@gmail.com

A great advance in the digital reconstruction of brain microcircuits came with the model of the primary somatosensory cortex (S1) of rats developed by the Blue Brain Project in 2015. In this microcircuit, each column had around 31,000 neurons, 55 layer-specific morphological populations, and 207 morpho-electrical neuron sub-types. The complex network of S1 included around 8 million connections with 37 million synapses. Here, we implemented a version of the S1 model using NetPyNE, a high-level Python interface to the NEURON simulator (Fig. 1). First, we downloaded all data available in the Neocortical Microcircuit Collaboration Portal (https://bbp.epfl.ch/nmc-portal). Second, we imported the 1035 reconstructed cells into NetPyNE and tested the somatic membrane potential under different current clamp amplitudes. Next, using the connectome of 7 neocortical columns, we obtained the connection probability rules of the 1941 m-type pathways. The connection probability between two neurons depends on the distance between them, but we note that, in most cases, two different fits are required to describe these probability rules: long-range connections are well fitted by an exponential decay, whereas short-range connections (< 100 μm) are well represented by a linear rule. We reconstructed S1 in NetPyNE by distributing the 31,346 cells within a cylindrical volume of 2082 μm height and 210 μm radius, where each sub-type was randomly distributed in its specific layer (L1, L2/3, L4, L5, or L6). Then, we created the network with synaptic transmission parameters for each pathway and added spontaneous synaptic release as a Poisson stimulus. Finally, we simulated the model and explored the spontaneous rates for excitatory and inhibitory synapses in order to find biologically constrained values for neuronal firing rates.
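Distance-dependent probability rules of this kind map naturally onto NetPyNE's string-based connection functions. The snippet below is a hedged sketch: the population labels, coefficients, and synaptic parameters are illustrative, not the fitted S1 values, and it assumes the populations and an 'AMPA' synaptic mechanism are defined elsewhere:

```python
from netpyne import specs

netParams = specs.NetParams()
netParams.propVelocity = 100.0   # axonal conduction velocity (um/ms)

# Hypothetical pathway between two m-type populations. NetPyNE evaluates
# string functions of dist_3D per cell pair; the short-range linear regime
# described above would be captured by a second, distance-restricted rule.
netParams.connParams['L4_PC->L5_PC'] = {
    'preConds': {'pop': 'L4_PC'},
    'postConds': {'pop': 'L5_PC'},
    'probability': '0.15 * exp(-dist_3D / 150.0)',   # long-range exponential fit
    'weight': 0.5,
    'delay': 'dist_3D / propVelocity',
    'synMech': 'AMPA',
}
```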

Fig. 1

A An example of voltage traces for 5 neurons in each of the 55 m-types during spontaneous activity in the microcircuit. B Microcircuit with 10% of neurons plotted. C, D Soma positions of 31,346 cells within a cylindrical volume of 2082 μm height and 210 μm radius

P46 Effects of ih-current modulation in a pyramidal tract projecting cell model

Joao Moreira 1 , Samuel A. Neymotin 2 , William W. Lytton 3 , Benjamin Suter 4 , Gordon Shepherd 5 , Salvador Dura-Bernal 3

1 SUNY Downstate Medical Center, School of Graduate Studies, Brooklyn, NY, United States of America

2 Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States of America

3 SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, NY, United States of America

4 Institute of Science and Technology Austria, Klosterneuburg, Austria

5 Northwestern University, Department of Physiology & NUIN, Chicago, IL, United States of America

Email: joao.moreira@downstate.edu

Pyramidal tract projecting (PT) neurons are involved in forwarding motor commands to the lower motor neurons and sit strategically in layer 5B of the cortex, a known output route from the cortical circuit [1,2]. These neurons share this location with another class of pyramidal cells, the intratelencephalic projecting (IT) neurons, which project mainly to basal ganglia structures and are involved in error correction and motor planning [3]. Besides their projection targets, another key feature that distinguishes PT neurons from their neighbors in layer 5B is the presence of the hyperpolarization-activated cyclic nucleotide-gated (HCN) cation channel. This channel is thought to play a key role in the switch from motor planning to execution under norepinephrine modulation [3].

The activity of HCN channels is quantified in terms of the ih-current, a hyperpolarization-induced cationic current [4,5]. The ih-current is active at rest, inducing a depolarizing effect in the cell [5] and a decrease in neuronal input resistance [6,7]. HCN channels can be blocked by administration of the drug ZD7288 [8], providing a mechanism to test their contribution to overall cell behavior [3]. Over the years, authors have proposed different mechanisms to explain the dynamics of the HCN channel [3,7,9,10]. The ih-current has been coined the "funny current" [11] for being an inward current whose conductance increases as the transmembrane potential approaches the hyperpolarized state [5], for its responsiveness to both voltage and cAMP [5], and for its leak property, being permeable to K+ and Na+ at a 4:1 ratio [5,12,13]. Another peculiarity of the ih-current is that, despite its depolarizing effect on the cell, it shows a reversal in peak amplitude during stimulation with increasing weights, as demonstrated by George et al. [4].
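The defining kinetics (activation by hyperpolarization, mixed-cation reversal, slow time constant) can be reproduced in a minimal single-compartment sketch. All parameters below are illustrative, not those of refs [10] or [14]:

```python
import numpy as np

# Minimal single-compartment sketch of an h-current: activation grows with
# hyperpolarization; E_h ~ -45 mV reflects mixed Na+/K+ permeability.
C, g_L, E_L = 1.0, 0.05, -70.0          # uF/cm2, mS/cm2, mV
g_h, E_h = 0.05, -45.0                  # mS/cm2, mV

def m_inf(v):                           # steady-state activation (Boltzmann)
    return 1.0 / (1.0 + np.exp((v + 80.0) / 7.0))

def simulate(I_ext, dt=0.1, t_end=500.0, tau_h=50.0):
    n = int(t_end / dt)
    v, m = E_L, m_inf(E_L)
    trace = np.empty(n)
    for k in range(n):
        i_h = g_h * m * (v - E_h)
        v += dt / C * (-g_L * (v - E_L) - i_h + I_ext)
        m += dt / tau_h * (m_inf(v) - m)    # slow gating
        trace[k] = v
    return trace

# Hyperpolarizing step: ih activates slowly and produces the
# characteristic depolarizing "sag" back toward rest.
trace = simulate(I_ext=-1.0)
print(f"peak hyperpolarization {trace.min():.1f} mV, "
      f"steady state {trace[-1]:.1f} mV (sag)")
```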

In this work, we incorporated an implementation of the HCN channel used in a CA1 neuron [10] into a model of PT corticospinal neurons with 706 compartments (Fig. 1C) [14]. This HCN channel adds a shunting current that is proportional in amplitude to the ih-current, thought to be mediated by TASK-like channels [10].

Our results show that the presence of the ih-current in the model results in a reduction of temporal summation (Fig. 1A), a reversal in peak amplitude (Fig. 1B), a reduction of corticospinal output (a decrease in action potentials) (Fig. 1D) and a change in the profile of input integration (Fig. 1F). The F-I curve is preserved compared with the original cell model (Fig. 1E). Our model therefore reconciles the experimental findings from an electrophysiological characterization of these neurons under administration of an HCN channel blocker [3] with the reversal in peak amplitude [4]. This unified model more closely matches the physiological behavior of PT neurons under norepinephrine modulation, and can provide insights into the underlying biophysical mechanisms and their role in the gating between motor planning and execution.

References

1. Anderson CT, Sheets PL, Kiritani T, Shepherd GM. Sublayer-specific microcircuits of corticospinal and corticostriatal neurons in motor cortex. Nature neuroscience. 2010 Jun;13(6):739–44.

2. Harris KD, Shepherd GM. The neocortical circuit: themes and variations. Nature neuroscience. 2015 Feb;18(2):170–81.

3. Sheets PL, Suter BA, Kiritani T, Chan CS, Surmeier DJ, et al. Corticospinal-specific HCN expression in mouse motor cortex: I h-dependent synaptic integration as a candidate microcircuit mechanism involved in motor control. Journal of neurophysiology. 2011 Nov;106(5):2216–31.

4. George MS, Abbott LF, Siegelbaum SA. HCN hyperpolarization-activated cation channels inhibit EPSPs by interactions with M-type K+ channels. Nature neuroscience. 2009 May;12(5):577–84.

5. Lee CH, MacKinnon R. Structures of the human HCN1 hyperpolarization-activated channel. Cell. 2017 Jan 12;168(1–2):111–20.

6. Albertson AJ, Bohannon AS, Hablitz JJ. HCN channel modulation of synaptic integration in GABAergic interneurons in malformed rat neocortex. Frontiers in cellular neuroscience. 2017 Apr 19;11:109.

7. Magee JC. Dendritic hyperpolarization-activated currents modify the integrative properties of hippocampal CA1 pyramidal neurons. Journal of Neuroscience. 1998 Oct 1;18(19):7613–24.

8. Zhang XX, Min XC, Xu XL, Zheng M, Guo LJ. ZD7288, a selective hyperpolarization-activated cyclic nucleotide-gated channel blocker, inhibits hippocampal synaptic plasticity. Neural regeneration research. 2016 May;11(5):779.

9. Kole MH, Hallermann S, Stuart GJ. Single Ih channels in pyramidal neuron dendrites: properties, distribution, and impact on action potential output. Journal of Neuroscience. 2006 Aug 1;26(6):1677–87.

10. Migliore M, Migliore R. Know your current Ih: interaction with a shunting current explains the puzzling effects of its pharmacological or pathological modulations. PloS one. 2012 May 11;7(5):e36867.

11. DiFrancesco D. Serious workings of the funny current. Progress in biophysics and molecular biology. 2006 Jan 1;90(1–3):13–25.

12. Ludwig A, Zong X, Jeglitsch M, Hofmann F, Biel M. A family of hyperpolarization-activated mammalian cation channels. Nature. 1998 Jun;393(6685):587–91.

13. Santoro B, Liu DT, Yao H, Bartsch D, Kandel ER, et al. Identification of a gene encoding a hyperpolarization-activated pacemaker channel of brain. Cell. 1998 May 29;93(5):717–29.

14. Neymotin SA, Suter BA, Dura-Bernal S, Shepherd GM, Migliore M, et al. Optimizing computer models of corticospinal neurons to replicate in vitro dynamics. Journal of neurophysiology. 2017 Jan 1;117(1):148–62.

Fig. 1

Main features reproduced by the Corticospinal PT cell model. A Absence of temporal summation in the presence of the ih-current (Con). B Inversion effect of the peak voltage. C Cell model morphology. D Increase in Corticospinal output with ih-current blocked. E F-I curve of the original model and our implementation. F Somatic depolarization with spatially distributed inputs

P47 Intrinsic and parameter-less gain control in rate coding by spiking neurons

Nirag Kadakia 1 , Will Rosenbluth 1 , Thierry Emonet 2

1 Yale University, New Haven, CT, United States of America

2 Yale University, Department of Molecular, Cellular, and Developmental Biology, New Haven, CT, United States of America

Email: nirag.kadakia@yale.edu

Adaptation is a critical feature of sensory response and is virtually universal in neural systems, including in individual neurons. In single neurons, adaptation of the amplification, or gain, can occur over time through some (typically slower) process mediating desensitization, such as an influx of calcium currents. Such gain control is inherently dynamical, since it involves changes in internal state over time. Past studies have illustrated that gain control can, in some contexts, also be enacted intrinsically, without changes in parameters [1]. The requisite features are a high-dimensional signal (such as a time trace) and a nonlinear response. In this case, gain control is immediate and effectively parameter-less.

Here, we propose a biophysical mechanism for intrinsic gain control that builds on this idea. Our framework is motivated by experimental observations of the responses of Drosophila olfactory receptor neurons (ORNs) to Gaussian fluctuating stimuli with nonzero mean [2]. An ORN’s firing response to these fluctuations does not modulate smoothly over a range of frequencies; instead, it switches more discontinuously between low and high (~40 Hz) firing rates. In the language of dynamical systems, the neuron persistently crosses a bifurcation between spiking and quiescence. For small fluctuations, this system could only encode 1 bit of information – spiking or resting. However, we show that the conversion from spike events to a rate code can effectively utilize past information – from the signal history – to encode substantially more than 1 bit of information. This system is gain invariant: the dose-response curves between signal and firing response overlap perfectly when the stimulus is scaled by the amplitude of the signal fluctuation. Thus, bifurcation crossing effectively amplifies small fluctuations, permitting rate codes that would otherwise be imperceptible. We call this mechanism bifurcation-induced gain control, and illustrate that it is obeyed inherently by many classes of spiking neurons with different topologies at their bifurcating point.
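The basic ingredient – a neuron driven back and forth across a spiking bifurcation by slow fluctuations – can be probed with a quadratic integrate-and-fire (QIF) neuron, the normal form of the saddle-node-on-invariant-circle bifurcation. This is a toy illustration of the idea only, not the ORN model or analysis of this study:

```python
import numpy as np

def qif_rate(I_trace, dt=0.01, v_reset=-5.0, v_spike=5.0, win=200):
    """QIF neuron (spiking for I > 0, quiescent for I < 0); returns a
    boxcar-smoothed firing-rate estimate."""
    v, spikes = v_reset, np.zeros(len(I_trace))
    for k, I in enumerate(I_trace):
        v += dt * (v * v + I)
        if v >= v_spike:
            spikes[k], v = 1.0, v_reset
    kernel = np.ones(win) / (win * dt)
    return np.convolve(spikes, kernel, mode='same')

# Mean-zero slow fluctuations around the bifurcation point at two
# amplitudes ("contexts"); all values are illustrative.
rng = np.random.default_rng(1)
t_steps = 200_000
slow_noise = np.repeat(rng.standard_normal(t_steps // 1000), 1000)
for sigma in (0.2, 1.0):
    I = sigma * slow_noise
    rate = qif_rate(I)
    # The rate code tracks positive excursions in both contexts even though
    # each crossing by itself only signals spiking vs. resting.
    print(f"sigma={sigma}: mean rate while I>0 = {rate[I > 0].mean():.2f}")
```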

Perfect gain control erases information about context: system responses are identical across different stimulus statistics (or contexts), so the context itself becomes ambiguous. Contextual information can be relayed at longer timescales, as in H1 neurons in fly vision [3]. We show that bifurcation-induced gain control encodes context via fast response asymmetries not reliant on timescale separation. Finally, we use experimental observations in ORNs [2] to propose a simple extension of bifurcation-induced gain control that simultaneously adapts to both the mean and variance of the signal. Our results show that the natural machinery of neuron spiking permits robust adaptation with high coding efficacy in changing environments.

Acknowledgements

We thank the NIH for supporting N. Kadakia with grants 1F32MH118700 and 1K99DC019397, and T. Emonet with grant 2R01GM106189.

References

1. Borst A, Flanagin VL, Sompolinsky H. Adaptation without parameter change: dynamic gain control in motion detection. Proceedings of the National Academy of Sciences. 2005 Apr 26;102(17):6172–6.

2. Gorur-Shandilya S, Demir M, Long J, Clark DA, Emonet T. Olfactory receptor neurons use gain control and complementary kinetics to encode intermittent odorant stimuli. Elife. 2017 Jun 27;6:e27670.

3. Fairhall AL, Lewen GD, Bialek W, van Steveninck RR. Efficiency and ambiguity in an adaptive neural code. Nature. 2001 Aug;412(6849):787–92.

P48 Dynamics and trainability of recurrent neural networks with partial symmetry and antisymmetry

Matthew Ding 1 , Rainer Engelken 1

1 Columbia University in the City of New York, Zuckerman Institute, Center for Theoretical Neuroscience, New York, NY, United States of America

Email: matthew.river.ding@gmail.com

Recent work is yielding large amounts of connectivity data across a diversity of neural systems and spatial scales. However, it remains largely an open problem how local connectivity features shape global activity dynamics and influence network changes during learning. In this work, we relate partial symmetry and antisymmetry in connectivity to the dynamics and trainability of recurrent neural networks (RNNs). Partial symmetry and antisymmetry correspond, respectively, to correlated and anticorrelated connection strengths between pairs of units. We calculate the full Lyapunov spectrum, which describes how the dynamics transform the set of points around a network state over time. From the Lyapunov spectrum, we obtain the maximum Lyapunov exponent, which quantifies chaos, i.e., the exponential separation rate of nearby initial states due to recurrent dynamics. We also obtain an estimate of the attractor dimensionality known as the Kaplan-Yorke dimension, and calculate the entropy rate, which quantifies the increase in uncertainty due to chaotic separation of nearby initial states. For weakly coupled networks, partial symmetry increases the attractor dimension and entropy rate, explained by increasing magnitudes of the real parts of the Jacobian’s eigenvalues. For strongly coupled networks, attractor dimension and entropy rate decrease with symmetry and increase with antisymmetry. This arises from the effect of partial symmetry on the variance of unit activities: as symmetry increases, most units are in saturation and the average gain of the transfer function is small. This leads to a sparse Jacobian of the dynamics, meaning that small differences in the network state grow in fewer directions of phase space. We additionally compare the Kaplan-Yorke dimension to more conventional estimates of dimensionality determined by principal component analysis (PCA); the PCA dimensionality follows a trend similar to that of the Kaplan-Yorke dimension. To study functional implications of partial symmetry, we investigate how initial symmetry affects a network’s trainability on the task of generating an oscillatory readout without any input. We find that more antisymmetric networks trained with backpropagation through time have higher success rates and shorter training convergence times. Our work on RNN motifs may provide insights into how features of local connectivity among constituent units shape global features of dynamics and learning in biological networks.
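The standard QR algorithm for the full Lyapunov spectrum, together with a partially (anti)symmetric coupling matrix and the Kaplan-Yorke formula, can be sketched as follows. The network equation, coupling strengths, and step counts are illustrative choices, not the study's exact setup:

```python
import numpy as np

def partial_symmetry_matrix(N, g, eta, rng):
    """Random coupling with symmetry parameter eta in [-1, 1]:
    eta=1 symmetric, eta=-1 antisymmetric, eta=0 independent entries."""
    A = rng.standard_normal((N, N))
    J = (A + eta * A.T) / np.sqrt(1 + eta**2)
    return g * J / np.sqrt(N)

def lyapunov_spectrum(J, n_steps=2000, dt=0.1, rng=None):
    """Full spectrum of the rate network x' = -x + J tanh(x): propagate an
    orthonormal frame with the one-step Jacobian, re-orthonormalize by QR,
    and average the log of the diagonal of R."""
    rng = rng or np.random.default_rng(0)
    N = J.shape[0]
    x = rng.standard_normal(N)
    Q = np.linalg.qr(rng.standard_normal((N, N)))[0]
    log_r = np.zeros(N)
    for _ in range(n_steps):
        phi_prime = 1.0 / np.cosh(x) ** 2                 # tanh'(x)
        D = np.eye(N) * (1 - dt) + dt * J * phi_prime     # Euler Jacobian
        x = x + dt * (-x + J @ np.tanh(x))
        Q, R = np.linalg.qr(D @ Q)
        log_r += np.log(np.abs(np.diag(R)))
    return np.sort(log_r / (n_steps * dt))[::-1]

def kaplan_yorke_dimension(lyap):
    """Largest k with nonnegative partial sum, plus the fractional part."""
    cum = np.cumsum(lyap)
    neg = np.where(cum < 0)[0]
    if len(neg) == 0:
        return float(len(lyap))
    k = neg[0]
    return 0.0 if k == 0 else k + cum[k - 1] / abs(lyap[k])

rng = np.random.default_rng(2)
for eta in (-0.5, 0.0, 0.5):            # antisymmetric ... symmetric
    J = partial_symmetry_matrix(N=100, g=2.0, eta=eta, rng=rng)
    lyap = lyapunov_spectrum(J, rng=rng)
    print(f"eta={eta:+.1f}: lambda_max={lyap[0]:.3f}, "
          f"D_KY={kaplan_yorke_dimension(lyap):.1f}")
```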

P49 A brain-inspired meta-reinforcement learning inhibition cognitive control for artificial agents in a conflictual decision-making task

Federica Robertazzi 1 , Guido Schillaci 1 , Matteo Vissani 1 , Egidio Falotico 1

1 Sant'Anna School of Advanced Studies, The BioRobotics Institute and Department of Excellence in Robotics & AI, Pisa, Italy

Email: federica.robertazzi@santannapisa.it

Although modern artificial agents are extremely accurate in operating on single instances after long exposure to stationary learning trials, they fail to work in non-deterministic environments such as real-world human scenarios. Sources of uncertainty and variability (e.g., unpredictable cues, unexpected constraints, and new objects in a task) may dramatically affect the performance of an artificial agent.

Meta-learning applied to reinforcement learning can drive the design of control algorithms in which an outer learning system progressively adjusts the operation of an inner learning system, making the behavior of the artificial agent more flexible and efficient. The internal adjustment of the hidden learning system brings practical benefits such as reducing the explicit hand-tuning of parameters and the generalization error. Starting from the neural architecture developed by Khamassi and colleagues for agent-environment interactions such as action selection, we developed a brain-inspired meta-learning framework for inhibitory cognitive control that includes distributed learning systems of the human brain, e.g., cortical areas such as the prefrontal cortex and subcortical regions such as the basal ganglia. We embedded in the model meta-learning mechanisms based on the neuromodulation theory proposed by Doya. This theory posits a central role for the dynamics of the four major neurotransmitters (acetylcholine, serotonin, dopamine, and noradrenaline) and their mutual interdependence in shaping the behavior of the hyperparameters that underlie meta-learning processes. We explicitly included meta-control in the artificial agent, formalizing hyperparameter optimization rules: (i) dopamine receptors D1, modulating the noradrenergic system (i.e., the exploration/exploitation rate) with an inverse linear function that relates dopamine to the entropy of the probability distribution over actions, (ii) dopamine receptors D2, tuning striatal neuron excitability, and (iii) serotonin, regulating overall dopamine release and the reward temporal scale.

The artificial agent was tested in two different conflictual tasks (No-Go and Stop-Signal paradigms) that involve different types of action inhibition. In the No-Go paradigm we tested the ability to withhold a not-yet-initiated action (i.e., action restraint) using a hold signal presented before the initiation of the movement. In the Stop-Signal paradigm we investigated the ability to cancel an initiated response (i.e., action cancellation) by triggering an unpredictable hold signal after a range of delays from action onset. After a short learning phase, the artificial agent successfully adjusted its hyperparameters (e.g., driving the system towards exploitation regimes) in response to the appearance of the hold signal in both tasks, i.e., proper use of the action inhibition command. The qualitative increase in performance was corroborated by a significant increase in correct inhibition and global accuracy, and a reduction of the stop-signal reaction time (the latency of the cancellation process), moving from the training to the test phase. We propose that the use of brain-inspired mechanisms to implement meta-learning processes may be a feasible approach for robotic applications, leading to improved performance even in unpredictable real-world human scenarios.

P50 Reproducing asymmetrical spine shape fluctuations in a model of actin dynamics predicts self-organized criticality

Mayte Bonilla-Quintana 1 , Florentin Wörgötter 2 , Elisa D'Este 3 , Christian Tetzlaff 4 , Michael Fauth 2

1 University of California, Department of Mechanical and Aerospace Engineering, San Diego, CA, United States of America

2 University of Göttingen, Third Institute for Physics, Göttingen, Germany

3 Max-Planck-Institut für medizinische Forschung, Heidelberg, Germany

4 University of Göttingen, Göttingen, Germany

Email: mfauth@gwdg.de

Dendritic spines are the morphological basis of excitatory synapses in the cortex, and their size and shape correlate with functional synaptic properties. Recent experiments show that spines exhibit large shape fluctuations that are not related to activity-dependent plasticity but nonetheless might influence memory storage at their synapses. Thus, it is important to investigate the determinants and functional role of these spontaneous shape fluctuations.

In a recent series of studies [1,2], we proposed a mathematical model for the dynamics of the spine shape based on the scaffolding protein actin – a protein that polymerizes into dynamic filaments which undergo continuous treadmilling. Experiments show that synapses usually have a few foci where actin polymerization activity, and thus also treadmilling speed, is large. Hence, we model the spine shape as governed by a local imbalance between the expansive force from actin treadmilling at these foci and the membrane tension. The actin treadmilling, as well as filament branching and capping, is described by Monte-Carlo models for each focus that interact via the membrane. Hereby, the polymerization activity in each focus has a limited lifetime, similar to that observed in experiments. As a consequence, the model shows asymmetric spine shape fluctuations, because the momentarily existing set of polymerization foci pushes the membrane along certain directions until they are replaced and other directions are affected.

We analyze in detail how the shape and the temporal characteristics of our model-spines depend on the different biophysical parameters involved in actin polymerization. For this, we also introduce descriptors for asymmetric spine shapes and use them to demonstrate that shape fluctuations in our model are comparable to experimental data. Thus, our model provides a platform to study the relation between molecular and morphological properties of the spine with a high degree of biophysical detail and realism.

We then used the model to extrapolate to longer temporal intervals and discovered the presence of 1/f noise. As the reason for this, we find that the actin dynamics underlying shape fluctuations self-organizes into a critical state. This critical state facilitates spine enlargement, for example after LTP, compared to a non-critical model. Thus, ongoing spine shape fluctuations may be a consequence of a self-organization that enables a spine to quickly reconfigure itself when necessary.

References

1. Bonilla-Quintana M, Wörgötter F, Tetzlaff C, Fauth M. Modeling the shape of synaptic spines by their actin dynamics. Frontiers in synaptic neuroscience. 2020 Mar 10;12:9.

2. Bonilla-Quintana M, Wörgötter F, D’Este E, Tetzlaff C, Fauth M. Reproducing asymmetrical spine shape fluctuations in a model of actin dynamics predicts self-organized criticality. Scientific reports. 2021 Feb 17;11(1):1–7.

P51 Long-term stability of memories independent of any form of replay

Jonas Neuhöfer 1 , Christian Tetzlaff 2 , Michael Fauth 1

1 Georg-August University, Third Physics Institute, Göttingen, Germany

2 University of Göttingen, Göttingen, Germany

Email: j.neuhoefer@stud.uni-goettingen.de

Memories are known to reactivate during sleep. A recent modelling study [1] reproduced this phenomenon based on self-reactivations of heavily inter-connected cell assemblies and showcased its beneficial consequences for memories. However, to be maintained, the memories needed frequent reactivations such that the weights between the cells representing the memory remained at a high level. Otherwise, the memories were forgotten.

In this work, we extend the model such that memories are maintained independently of reactivations. Furthermore, we suggest that long-term memories are mainly represented by their connectivity, i.e. the number of structural connections between neurons, and are less dependent on the actual weight of these connections.

We test this with simulations and (mean-field) analyses of recurrent networks in which connections are subject to (1) structural plasticity, which creates and removes connections via stochastic processes, (2) synaptic plasticity, which adapts the synaptic weights according to neural activity, and (3) biologically inspired spontaneous dynamics of the synaptic weights.

We find that when a memory has not been reactivated for an extended period, the spontaneous weight dynamics come into effect and decrease the internal synaptic weights of the memory. In this case, the memory can be in one of three states depending on its structural connectivity: at relatively high degrees of connectivity, the memory can reactivate itself. At slightly lower degrees of connectivity, the memory can only be reactivated by external stimuli but may self-reactivate in a short time span afterwards. At even lower degrees of connectivity, the memory cannot be reactivated by external stimuli at all. However, even if a memory can no longer be reactivated by external stimuli, its structural connections still exist for extended periods. These connections can then be used to relearn the pre-existing memory very quickly, which provides a possible explanation for Ebbinghaus’ savings effect.

In contrast, when a memory has just been learned, the internal synaptic weights are strong, and the memory only needs intermediate connectivity to self-reactivate. However, these self-reactivations depend heavily on the high strength of the synaptic weights. In comparison, older memories have increased their connectivity through multiple self-reactivations and are less dependent on the strength of the synaptic weights. Thus, interference with these reactivations (e.g., by sleep deprivation), with existing synapses, or with synaptogenesis will impact new memories more severely than older ones, which may explain the gradedness of retrograde amnesia.

References

1. Fauth MJ, van Rossum MC. Self-organized reactivation maintains and reinforces memories despite synaptic turnover. ELife. 2019 May 10;8:e43717.

P52 Retrospective inference in online structure learning: A simulation study

Francesco Silvestrin 1 , Thomas FitzGerald 1

1 University of East Anglia, School of Psychology, Norwich, United Kingdom

Email: francesco.silvestrin21@gmail.com

The ability to flexibly learn the structure of one’s surroundings (structure learning) is crucial for adaptive behaviour. Use of an inaccurate model of the environment can lead to incorrect inferences, and thus maladaptive actions. Despite this, relatively little is understood about how structure learning occurs in human cognition. As a first step towards addressing this, we built on existing approaches to create an online clustering algorithm, and used it to simulate behaviour on a novel structure learning task where optimal performance requires estimating the number and properties of discrete clusters of continuously variable stimuli. More specifically, the stimuli were mushrooms whose appearance varied along only one dimension (size). The task required the agent to determine whether each mushroom was edible (good) or poisonous (bad) based on that one stimulus feature. Crucially, there were different species (clusters) of good and bad mushrooms, about which the agent was not informed. Each mushroom’s size was sampled from a species-specific Gaussian distribution, and the overall distribution (a mixture of Gaussians) of good and bad mushrooms was designed so that a unimodal Gaussian approximation of the two categories would result in substantial overlap and thus poor performance.

In this simulation-based work we compare a set of different models and show how an agent that learns the statistical structure of the stimuli (i.e., the number of clusters) online outperforms one that approximates the two categories as single Gaussian clusters, grouping all good mushrooms into one species and all bad mushrooms into another. We also introduce a model that incorporates a working memory component, and show how retrospective inference (i.e., updating one’s beliefs about past stimuli as opposed to only updating beliefs about the current one) benefits structure learning. We finally discuss trial-by-trial measures that can be derived from our model, which provide testable predictions for future empirical studies.
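The flavor of such an online clustering agent can be conveyed with a simple novelty-threshold scheme: assign each stimulus to the best-matching cluster or spawn a new one when no cluster explains it well. This is an illustrative sketch in the spirit of the task, not the authors' algorithm, and all numbers are made up:

```python
import numpy as np

class OnlineClusterer:
    """Each stimulus joins the nearest Gaussian cluster (running-mean
    update) or starts a new cluster when its z-score exceeds a threshold."""

    def __init__(self, threshold=2.0, sigma0=0.5):
        self.means, self.counts, self.labels = [], [], []
        self.threshold, self.sigma0 = threshold, sigma0

    def observe(self, size, edible):
        if self.means:
            z = [abs(size - m) / self.sigma0 for m in self.means]
            best = int(np.argmin(z))
            if z[best] < self.threshold:
                self.counts[best] += 1
                self.means[best] += (size - self.means[best]) / self.counts[best]
                self.labels[best] += 1 if edible else -1
                return
        self.means.append(size); self.counts.append(1)
        self.labels.append(1 if edible else -1)

    def predict(self, size):
        z = [abs(size - m) / self.sigma0 for m in self.means]
        return self.labels[int(np.argmin(z))] > 0   # edible?

# Species arranged so single-Gaussian category models overlap heavily
rng = np.random.default_rng(3)
species = [(2.0, True), (6.0, True), (4.0, False), (8.0, False)]
model, correct = OnlineClusterer(), 0
for _ in range(2000):
    mu, edible = species[rng.integers(4)]
    size = rng.normal(mu, 0.4)
    correct += (model.predict(size) == edible) if model.means else 0
    model.observe(size, edible)
print(f"clusters found: {len(model.means)}, accuracy: {correct / 2000:.2f}")
```

A single-Gaussian agent would center its "edible" category near size 4, directly on top of a poisonous species, which is why cluster discovery is decisive here.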

P53 Mathematical modeling of temperature effects on the AFD neuron of Caenorhabditis elegans

Zach Mobille 1 , Epaminondas Rosa 2 , Rosangela Follmann 3

1 Illinois State University, Department of Mathematics, Normal, IL, United States of America

2 Illinois State University, Department of Physics, Normal, IL, United States of America

3 Illinois State University, School of Information Technology, Normal, IL, United States of America

Email: zdm428@gmail.com

Temperature fluctuations can affect neurological processes at a variety of levels, with the general outcome that higher temperatures increase neuronal activity. Here we use computer simulations of a mathematical model of a C. elegans sensory neuron to investigate the dynamical properties of temperature sensation in the worm. Thermoreception is known to originate in the bilaterally symmetric pair of amphid neurons with finger-like ciliated endings (AFD) of C. elegans, to which we target our modeling efforts. We build upon a previously developed deterministic model of salt-sensing in the chemosensitive ASER neuron of C. elegans by implementing temperature-dependent Arrhenius factors. Multiple experimental results involving time series of intracellular AFD calcium concentration in response to ambient temperature changes are reproduced by this model. Among other things, we find that our model neuron requires synchronous temperature and chemical stimuli to exhibit dynamics qualitatively similar to those of a real AFD neuron.
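An Arrhenius factor of this kind is a multiplicative temperature scaling applied to each rate constant. The sketch below shows the standard form; the activation energy and reference temperature are illustrative, not the values used in this model:

```python
import numpy as np

R = 8.314   # gas constant, J/(mol K)

def arrhenius_factor(T_celsius, Ea=50e3, T_ref=20.0):
    """Scaling of a rate constant at temperature T relative to T_ref.
    Ea = 50 kJ/mol corresponds to a Q10 of roughly 2 near 20 C."""
    T, T0 = T_celsius + 273.15, T_ref + 273.15
    return np.exp(-Ea / R * (1.0 / T - 1.0 / T0))

# Example: an illustrative gating rate k at three ambient temperatures
k_ref = 1.0   # s^-1 at 20 C
for T in (15.0, 20.0, 25.0):
    print(f"T = {T:.0f} C: k = {k_ref * arrhenius_factor(T):.2f} s^-1")
```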

P54 Understanding degeneracy and redundancy using variational free energy

Noor Sajid 1 , Thomas Parr 1 , Thomas Hope 1 , Cathy Price 1 , Karl Friston 2

1 University College London, Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, London, United Kingdom

2 University College London, London, United Kingdom

Email: noor.sajid.18@ucl.ac.uk

Degeneracy refers to a structure–function mapping in which a system can recruit multiple structures to achieve functional plasticity. Systematic differentiation of these structures might provide insights into how cognitive or motor functions recover following neurological damage. Since each structure is sufficient, but not necessary, for a particular function, a profound functional deficit manifests only when all degenerate structures are damaged. In contrast, redundancy – the inefficient use of a structure’s degrees of freedom to perform a particular function – should be regarded as a distinct but related concept. Here, we provide a computational account of degeneracy and redundancy, in terms of variational Bayes, for understanding potential recovery pathways following damage. We use a (generic) generative model and approximate inference based on variational free energy. We introduce a formal and intuitive trade-off between degeneracy and redundancy by associating degeneracy with the entropy of beliefs about the causes of sensations, and redundancy with the complexity cost incurred by belief updating. We validate this formulation through the successful application of our approach – using structural learning and in-silico lesions – in the context of a word repetition paradigm: a canonical task in the neuropsychology of language. This is a relevant paradigm, since a computational assessment of degeneracy could explain which combinations of structural damage are necessary to disrupt functional outcomes, i.e., the ability to repeat words. Our simulations highlight that: i) redundant structures – via structural duplications – have a higher complexity cost but do not adversely impact function, ii) increasingly degenerate mappings between causes and outcomes – via in-silico lesions – have higher entropy, and iii) profound functional deficits are exhibited only when all possible sub-systems are damaged. Our formalism provides a framework to evaluate levels of degeneracy (and potential recovery pathways) following neurological damage.
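For readers unfamiliar with the quantities involved, the standard complexity–accuracy decomposition of variational free energy underlying this trade-off can be written as below; the identification of complexity with redundancy and of posterior entropy with degeneracy is the association proposed in the abstract:

```latex
F \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s)\,\right]}_{\text{complexity }\sim\text{ redundancy}}
\;-\; \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]}_{\text{accuracy}},
\qquad
\text{degeneracy} \;\sim\; H\!\left[q(s)\right] \;=\; -\,\mathbb{E}_{q(s)}\!\left[\ln q(s)\right]
```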

P55 Bayesian brains and the Renyi divergence

Noor Sajid 1 , Francesco Faccio 2 , Lancelot Da Costa 3 , Thomas Parr 1 , Jurgen Schmidhuber 2 , Karl Friston 4

1 University College London, Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, London, United Kingdom

2 Haute école spécialisée de la Suisse italienne, The Swiss AI Lab IDSIA, Lugano, Switzerland

3 Imperial College London, Department of Mathematics, London, United Kingdom

4 University College London, London, United Kingdom

Email: noor.sajid.18@ucl.ac.uk

Under the Bayesian brain hypothesis, behavioural variations can be attributed to altered priors over the (hyper-)parameters of the generative model. This provides a particular explanation for why individuals may exhibit inconsistent behavioural preferences when faced with similar observations. For example, greedy preferences are a consequence of confident (or precise) beliefs over particular outcomes. Conversely, individuals with uniform (or imprecise) priors exhibit increased variability in their choices and (potentially) impulsive behaviour. Here, we offer an alternative account of these behavioural variations using Rényi divergences and their associated Rényi variational bounds. The Rényi bounds are analogous to the variational free energy (or evidence lower bound) and can be derived under the same assumptions. Importantly, these bounds provide a formal way to establish behavioural differences through the alpha parameter, given particular priors. This is accomplished by changes in alpha that alter the bound (on a continuous scale), induce different posterior estimates, and cause consequent variations in behaviour. Thus, it looks as if individuals have different priors and have reached different conclusions. Explicitly, optimisation with alpha tending towards 0 leads to mass-covering variational estimates that induce increased variability in choice behaviour, whereas optimisation with alpha tending towards infinity leads to mass-seeking variational posteriors and greedy preferences. We exemplify this formulation through simulations of the multi-armed bandit task (Fig. 1). We note that these alpha parameterisations are relevant, i.e., shape preferences, when the true posterior is not in the same family of distributions as the assumed (simpler) approximate density – common in complex real-world scenarios. Consequently, this departure from vanilla variational inference provides a useful explanation for differences in the behavioural preferences of biological (or artificial) agents – under the assumption that the brain performs variational Bayesian inference.
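For reference, the Rényi divergence of order α and its Kullback-Leibler limit are (standard definitions, stated here for completeness):

```latex
D_{\alpha}\!\left[\,q \,\|\, p\,\right]
  \;=\; \frac{1}{\alpha - 1}\,
        \ln \!\int q(x)^{\alpha}\, p(x)^{1-\alpha}\, \mathrm{d}x ,
\qquad
\lim_{\alpha \to 1} D_{\alpha}\!\left[\,q \,\|\, p\,\right]
  \;=\; D_{\mathrm{KL}}\!\left[\,q \,\|\, p\,\right]
```

Minimising with small α tolerates spreading q over regions of low p (mass-covering), while large α heavily penalises any such spread (mass-seeking), which is the behavioural dichotomy exploited above.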

Fig. 1

MAB results and estimated posteriors

P56 Neural-ECM interactions in small scale networks

Nicolangelo Iannella 1 , Kristian Prydz 1 , Anders Malthe-Sørenssen 2 , Gaute Einevoll 3 , Marianne Fyhn 1

1 University of Oslo, Department of Biosciences, Oslo, Norway

2 University of Oslo, Department of Physics, Oslo, Norway

3 Norwegian University of Life Sciences, Faculty of Science and Technology, Aas, Norway

Email: nicolangelo.iannella@gmail.com

Electrical and molecular activity play important roles in adapting the spatial environment and responses of neurons, glia, and neuronal circuits over short and long time scales. Numerous studies have shown how neurons and networks function, interact, and adapt their responses, from both electrical and molecular perspectives [1–3]. The emerging view is that the molecular environment in the space between neurons and glia actively influences brain activity on multiple scales. Experiments are elaborating how this environment’s plexus of macromolecules, known as the extracellular matrix (ECM) – which includes a specialization called the peri-neuronal net (PNN) and strategically occupies regions in and around synapses [4,5] – impacts neuronal activity and function. Mounting evidence shows that the expression of certain ECM/PNN molecules plays important roles in learning and memory, synaptic remodelling and, significantly, in the recall of fear memory [6,7]. Currently, there have been very few investigations of neural-ECM interactions from a computational perspective. Those studies have focused on understanding how the ECM influences neural signalling [7,8]; however, computational/theoretical investigation of how neural-ECM interactions impact network activity, behaviour and information processing has yet to be fully explored.

We developed a biologically inspired framework and an accompanying mathematical model that captures the bidirectional nature of the neuronal-ECM signalling of various ECM/PNN molecules. Our model can be applied to study the neuronal-ECM signalling in brain tissue and their collective influence on both single neuron responses and network activity. We present some simple examples to illustrate how neuronal-ECM interactions impact the behaviour of basic spiking neural circuits.

Acknowledgments

The authors thank the Research Council of Norway for financial support to Project No. 250259.

References

1. Tuckwell HC. Introduction to theoretical neurobiology: volume 2, nonlinear and stochastic theories. Cambridge University Press; 1988.

2. Mataga N, Mizuguchi Y, Hensch TK. Experience-dependent pruning of dendritic spines in visual cortex by tissue plasminogen activator. Neuron. 2004 Dec 16;44(6):1031–41.

3. Dityatev A, Schachner M, Sonderegger P. The dual role of the extracellular matrix in synaptic plasticity and homeostasis. Nature Reviews Neuroscience. 2010 Nov;11(11):735–46.

4. Ferrer-Ferrer M, Dityatev A. Shaping synapses by the neural extracellular matrix. Frontiers in neuroanatomy. 2018 May 15;12:40.

5. Tsien RY. Very long-term memories may be stored in the pattern of holes in the perineuronal net. Proceedings of the National Academy of Sciences. 2013 Jul 23;110(30):12456–61.

6. Thompson EH, Lensjø KK, Wigestrand MB, Malthe-Sørenssen A, Hafting T, et al. Removal of perineuronal nets disrupts recall of a remote fear memory. Proceedings of the National Academy of Sciences. 2018 Jan 16;115(3):607–12.

7. Kazantsev V, Gordleeva S, Stasenko S, Dityatev A. A homeostatic model of neuronal firing governed by feedback signals from the extracellular matrix.

8. Lazarevich I, Stasenko S, Rozhnova M, Pankratova E, Dityatev A, et al. Activity-dependent switches between dynamic regimes of extracellular matrix expression. Plos one. 2020 Jan 24;15(1):e0227917.

P57 Neural models for the cross-species recognition of dynamic facial expressions

Peter Dicke 1 , Michael Stettler 2 , Peter Thier 1 , Nick Taubert 3 , Ramona Siebert 4 , Silvia Spadacenta 4 , Martin Giese 3

1 Hertie Institute for Clinical Brain Research, Department of Cognitive Neurology, Tübingen, Germany

2 Eberhard Karls University of Tübingen, Tübingen, Germany

3 Eberhard Karls University of Tübingen, Center for Integrative Neuroscience (CIN) & Hertie Institute for Clinical Brain Research (HIH), Tübingen, Germany

4 Eberhard Karls University of Tübingen, Hertie Institute for Clinical Brain Research (HIH), Tübingen, Germany

Email: michael.stettler@cin.uni-tuebingen.de

Dynamic facial expression recognition is an essential skill of primate communication. While the neural mechanisms for recognizing static facial expressions have been extensively investigated, they remain largely unclear for dynamic facial expressions. We studied physiologically plausible neural encoding mechanisms, exploiting highly controlled and realistic stimulus sets generated by computer graphics, which are also used in electrophysiological experiments. The generation of these stimuli combined high-quality human and monkey head models with motion capture in humans and monkeys [1]. Combining physiologically plausible neural models for the recognition of dynamic bodies [2] and static faces [3] with architectures from computer vision [4], we devised two models (Fig. 1) for the recognition of dynamic facial expressions. The first model exploits an example-based approach: it encodes dynamic expressions as temporal sequences of snapshots, exploiting a sequence-selective recurrent neural network. The second model exploits norm-referenced encoding: expressions are encoded as points in a continuous face space by face neurons that are tuned to the direction and size of the difference vector between the actual stimulus frame and a neutral expression in face space. The output of these face neurons is then processed by differentiating neurons, resulting in responses selective for dynamic faces.

Both models were tested with movies of human and monkey avatars showing human and monkey expressions, and morphs between them. This ensured highly accurate control of the form and dynamic style features of the stimuli [1]. Both models reliably recognize the tested dynamic facial expressions of humans and monkeys, but make different predictions when tested with stimuli generated by morphing. The norm-referenced model shows a highly gradual, almost linear dependence of neuron activity on the expressivity of the stimuli. In contrast, the example-based model does not generalize well to stimuli with modified expression strength. Moreover, the responses of the neurons at the output level of the norm-based model show striking similarities with the responses of neurons recently recorded in the superior temporal sulcus of macaque monkeys. Thus, very simple, physiologically plausible mechanisms can account for the recognition of dynamic faces. Norm-based and example-based encoding make quite different predictions about behavior at the single-cell level, especially for stimuli generated by expression morphing.

Acknowledgements

This work was supported by ERC 2019-SyG-RELEVANCE-856495 and HFSP RGP0036/2016. MG was also supported by BMBF FKZ 01GQ1704, SSTeP-KiZ BMG: ZMWI1-2520DAT700, and NVIDIA Corp.

References

1. Taubert N, Stettler M, Siebert R, Spadacenta S, Sting L, et al. Shape-invariant perceptual encoding of dynamic facial expressions across species. bioRxiv. 2020 Jan 1.

2. Giese MA, Poggio T. Neural mechanisms for the recognition of biological movements. Nature Reviews Neuroscience. 2003 Mar;4(3):179–92.

3. Giese MA, Leopold DA. Physiologically inspired neural model for the encoding of face spaces. Neurocomputing. 2005 Jun 1;65:93–101.

4. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2014 Sep 4.

Fig. 1
figure ak

Model architectures. A Convolutional Neural Network (CNN) mid-level feature extraction architecture (common to both models). B Example-based circuit, and C Norm-based circuit for the representation of a single dynamic expression

P58 Open Source Brain v2.0: Closing the loop between experimental neuroscience data and computational models

Ankur Sinha 1 , Matteo Cantarelli 2 , Salvador Dura-Burnal 3 , Filippo Ledda 2 , Zoran Sinnema 4 , Angus Silver 1 , Padraig Gleeson 1

1 University College London, Department of Neuroscience, Physiology and Pharmacology, London, United Kingdom

2 MetaCell, LLC, Cambridge, MA, United States of America

3 SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, NY, United States of America

4 MetaCell Limited, Oxford, United Kingdom

Email: a.sinha2@herts.ac.uk

Modern neuroscience relies on a combination of experimental and theoretical approaches to understand the brain. Sharing the outputs of research, both in terms of experimental datasets and software to analyse and model them, is now a crucial part of good scientific practice. Standardised formats for exchange of these outputs have emerged, which significantly aid reuse and reproducibility, both for data (NeuroData Without Borders, NWB, https://www.nwb.org) and computational models (NeuroML [1]). However, data and model sharing have traditionally happened independently via different repositories/databases. This makes “closing the loop”–using experimental data for data-driven modelling and/or theoretical analysis, and applying insights from modelling/theoretical investigations to dictate/design new experiments–a non-trivial undertaking. There is a growing need to develop tools and resources that allow working with both experimental data and theoretical models in one convenient, integrated environment.

The Open Source Brain platform (OSB, https://www.opensourcebrain.org) was developed as an online resource for sharing, viewing, analyzing, and simulating neuroscience models, using NeuroML as the underlying language for expressing the models in a standardised format [2]. With more than 1200 registered users, and over 50 participating labs from around the world, OSB serves as an important community resource for computational neuroscientists.

Here, we present the next version of the OSB platform (OSBv2, https://www.v2.opensourcebrain.org), a browser-based, integrated research environment for both experimental data analysis and theoretical/modelling research. OSBv2 uses NWB as the recommended data sharing format, and we have developed the NWB Explorer application, where users can visualise and analyse experimental data using a powerful graphical interface. This represents a critical extension of the scope of OSB as a portal for data exploration and analysis. OSBv2 also integrates the newly developed graphical frontend to the NetPyNE package (http://www.netpyne.org), greatly facilitating the simulation and analysis of network models using NEURON. These OSBv2 applications are tightly coupled with Python Jupyter notebook technologies. Users can save and share “workspaces” generated from these applications and open them in a JupyterLab environment, giving access to a range of other common neuroscience simulators and analysis tools from the greater Python neuroscientific ecosystem.
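As a hedged illustration of the data-exploration side of such a workspace, the following sketch loads an NWB file with pynwb and lists its acquisition series; the file name is a placeholder:

```python
from pynwb import NWBHDF5IO

# Hypothetical file name; in OSBv2 the file would live inside a workspace.
with NWBHDF5IO("example_session.nwb", mode="r") as io:
    nwbfile = io.read()
    print(nwbfile.session_description)
    # List the acquisition objects (e.g., raw TimeSeries) stored in the file
    for name, ts in nwbfile.acquisition.items():
        print(name, getattr(ts, "unit", None), getattr(ts, "rate", None))
```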

OSBv2 represents the next generation of collaborative, integrated research platforms for neuroscience, leveraging modern web-based infrastructure and software technologies to make both tools and scientific resources easily accessible to the whole neuroscience community. Providing this single, integrated environment for data analysis and modelling will help close the gap between experimental observations and insights obtained through computational modelling.

References

1. Cannon RC, Gleeson P, Crook S, Ganapathy G, Marin B, et al. LEMS: a language for expressing complex biological models in concise and hierarchical form and its use in underpinning NeuroML 2. Frontiers in neuroinformatics. 2014 Sep 25;8:79.

2. Gleeson P, Cantarelli M, Marin B, Quintana A, Earnshaw M, et al. Open source brain: a collaborative resource for visualizing, analyzing, simulating, and developing standardized models of neurons and circuits. Neuron. 2019 Aug 7;103(3):395–411.

P59 Sleep prevents catastrophic forgetting in spiking neural networks by forming joint synaptic weight representations

Jean E. Delanois 1 , Pavel Sanda 2 , Maxim Bazhenov 3 , Ryan Golden 4

1 University of California, San Diego, Computer Science, La Jolla, CA, United States of America

2 Institute of Computer Science - Czech Academy of Sciences, Complex systems, Prague, Czechia

3 University of California, San Diego, La Jolla, CA, United States of America

4 University of California, San Diego, Neuroscience, La Jolla, CA, United States of America

Email: jedelanois@gmail.com

Artificial neural networks overwrite previously learned tasks when trained sequentially, a phenomenon known as catastrophic forgetting. In contrast, the brain learns continuously and typically learns best when new learning is interleaved with periods of sleep for memory consolidation. In this study, we used a spiking network to study the mechanisms behind catastrophic forgetting and the role of sleep in preventing it. The network could be trained to learn a complex foraging task but exhibited catastrophic forgetting when trained sequentially on multiple tasks. New task training moved the synaptic weight configuration away from the manifold representing old tasks, leading to forgetting. Interleaving new task training with periods of off-line reactivation, mimicking biological sleep, mitigated catastrophic forgetting by pushing the synaptic weight configuration towards the intersection of the solution manifolds representing multiple tasks. The study reveals a possible strategy of synaptic weight dynamics that the brain may apply during sleep to prevent forgetting and optimize learning.
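As a toy illustration of the manifold-intersection idea (not the authors' spiking model), the sketch below treats each "task" as an affine constraint set and contrasts sequential training with interleaved, replay-like training:

```python
import numpy as np

# Each task is satisfied on an affine solution manifold {w : A w = b}.
# Sequential training on task B walks away from task A's manifold, while
# interleaved steps (a stand-in for sleep replay) converge toward the
# intersection of the two manifolds.
rng = np.random.default_rng(8)
A1, b1 = rng.normal(size=(2, 6)), rng.normal(size=2)   # task A constraints
A2, b2 = rng.normal(size=(2, 6)), rng.normal(size=2)   # task B constraints
loss_grad = lambda A, b, w: 2 * A.T @ (A @ w - b)      # grad of ||Aw - b||^2

w_seq = np.zeros(6)
for _ in range(2000):                 # task A only ...
    w_seq -= 0.01 * loss_grad(A1, b1, w_seq)
for _ in range(2000):                 # ... then task B only
    w_seq -= 0.01 * loss_grad(A2, b2, w_seq)

w_int = np.zeros(6)
for _ in range(2000):                 # interleaved ("replay") training
    w_int -= 0.01 * loss_grad(A1, b1, w_int)
    w_int -= 0.01 * loss_grad(A2, b2, w_int)

err = lambda w: (np.linalg.norm(A1 @ w - b1), np.linalg.norm(A2 @ w - b2))
print("sequential :", err(w_seq))     # task A forgotten
print("interleaved:", err(w_int))     # both tasks satisfied
```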

P60 Tangent space projections of optimally regularized Functional Connectomes improve their phenotypic reliability as measured by their fingerprint

Kausar Abbas 1 , Michael Wang 1 , Duy Duong-Tran 1 , Mintao Liu 1 , Uttara Tipnis 1 , Li Shen 2 , Alan Kaplan 3 , Jaroslaw Harezlak 4 , Joaquín Goñi 1

1 Purdue University, College of Engineering, West Lafayette, IN, United States of America

2 University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA, United States of America

3 Lawrence Livermore National Laboratory, Center for Advanced Signal and Image Sciences, Livermore, CA, United States of America

4 Indiana University, Department of Epidemiology and Biostatistics, Bloomington, IN, United States of America

Email: kausar.jaffary@gmail.com

Functional magnetic resonance imaging (fMRI) research, in addition to improving our scientific understanding of normative and pathological brain dynamics, seeks to develop clinical applications in which diagnosis, treatment, and/or interventions are subject-specific. To that end, functional connectomes (FCs), estimated by cross-correlating regional BOLD activity across brain regions as measured by fMRI, have emerged as a suitable phenotype. FCs are usually summarized as a symmetric correlation matrix and represent the whole-brain functional connectivity profile of an individual performing a specific fMRI condition (e.g., resting-state or working-memory). FCs have been shown to possess a recurrent and reproducible individual fingerprint [1]. The amount of such fingerprints in an FC dataset can be used to estimate the reliability of the FC phenotype. Traditional methods of estimating these fingerprints (e.g., Pearson’s correlation coefficient between the vectorized FCs) have had limited success in terms of phenotypic reliability [1]. Venkatesh et al. improved on this by using geodesic distance to compare FCs more accurately, exploiting the generally overlooked fact that FCs are non-Euclidean objects, so that distances between them are better measured along a geodesic of the symmetric positive definite (SPD) manifold [2]. We have recently improved on this further by combining geodesic distance with an optimal amount of main-diagonal regularization added to the FCs [3]. This approach, though it provides accurate distance estimates between FCs, does not allow edgewise analyses of the FCs. This limitation can be addressed by projecting FCs from the SPD manifold onto an optimal tangent space of symmetric matrices, which is Euclidean and hence allows the use of Euclidean algebra and calculus (Fig. 1). Tangent space projections of FCs (tangent-FCs) require a reference point on the manifold that is a qualitatively good representative of the dataset. Many different types of references have been proposed in the literature (e.g., Euclidean, harmonic, log-Euclidean, Riemannian, Kullback).

In this work, we found that when FCs are regularized by an optimal amount that maximizes the phenotypic reliability of FCs under geodesic distance [3], then (1) tangent-FCs have significantly higher phenotypic reliability than the original FCs, (2) all reference matrices perform similarly, with the Riemannian reference performing slightly better, (3) reliability increases with increasing granularity of the parcellation, and (4) tangent-FCs can achieve higher reliability with a fraction of the total scanning length than the original FCs achieve at the maximum scanning length. These results hold for each of the eight fMRI conditions included in the HCP dataset. In contrast, if a fixed amount of regularization (e.g., τ = 1) is used, tangent space projections of FCs can lead to extremely low phenotypic reliability, and the reliability of the resulting tangent-FCs becomes highly dependent on the choice of the reference matrix. In summary, these results indicate that a combination of optimal main-diagonal regularization and tangent space projection of FCs leads to a significant improvement in the phenotypic reliability of FCs.
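A minimal sketch of the regularize-then-project step described above, using the affine-invariant log map; the function names are ours, and τ stands in for the optimized regularization of [3]:

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def tangent_fc(fc, ref, tau):
    """Project a regularized FC onto the tangent space at `ref`.
    fc, ref: (n, n) symmetric positive (semi)definite matrices
    tau: main-diagonal regularization added before projection"""
    n = fc.shape[0]
    fc_reg = fc + tau * np.eye(n)             # main-diagonal regularization
    w = np.linalg.inv(sqrtm(ref).real)        # whitening by the reference
    t = logm(w @ fc_reg @ w).real             # affine-invariant log map
    return (t + t.T) / 2                      # enforce symmetry numerically

# Toy example: reference taken as the Euclidean mean of regularized FCs
rng = np.random.default_rng(1)
fcs = [np.corrcoef(rng.normal(size=(20, 100))) for _ in range(5)]
tau = 1.0                                     # would be optimized as in [3]
ref = np.mean([fc + tau * np.eye(20) for fc in fcs], axis=0)
tangent_fcs = [tangent_fc(fc, ref, tau) for fc in fcs]
```

The projected matrices live in a Euclidean space, so fingerprints can then be computed edgewise (e.g., by correlating vectorized tangent-FCs).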

References

1. Finn ES, Shen X, Scheinost D, Rosenberg MD, Huang J, et al. Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity. Nature neuroscience. 2015 Nov;18(11):1664–71.

2. Venkatesh M, Jaja J, Pessoa L. Comparing functional connectivity matrices: A geometry-aware approach applied to participant identification. NeuroImage. 2020 Feb 15;207:116398.

3. Abbas K, Liu M, Venkatesh M, Amico E, Kaplan AD, et al. Geodesic distance on optimally regularized functional connectomes uncovers individual fingerprints. Brain Connectivity. 2021 Jun 1;11(5):333–48.

Fig. 1
figure al

Tangent Space Projection of FCs and its effect on FC fingerprints

P61 A cellular automata model of the hippocampo-septal pacemaker circuit

Ashraya Samba Shiva 1 , Bruce Graham 1

1 University of Stirling, Department of Computing Science and Mathematics, Stirling, United Kingdom

Email: asv@cs.stir.ac.uk

Cellular automata (CA) are an effective approach to modelling spiking neurons, providing a computationally simpler “state machine” description of a neuron’s operation than differential-equation-based models such as integrate-and-fire neurons. Adapting the CA neural model of Claverol et al. [1], we are developing neural network models of oscillations with phasic learning and memory function, based on the mammalian hippocampus.

The first stage of this work is to reproduce, in a CA network, the theta oscillation behaviour of the septal pacemaker circuit modelled, using a continuous neural population activity approach, by Denham and Borisyuk [2]. The septal pacemaker circuit considers four major populations to account for the propagation of theta-frequency oscillations from the medial septum to the hippocampal CA1 region: the excitatory CA1 pyramidal cells (E), the inhibitory CB-containing hippocampo-septal cells (IP), other interneurons in CA1 (I), and the inhibitory medial septal cells (S). The E cells excite the IP cells, which then inhibit the S cells. The S cells inhibit the I cells, which in turn inhibit the E cells. The model has two major external excitatory inputs: from hippocampal CA3 to the CA1 E and I cells, and from the posterior hypothalamus and supramammillary nucleus (PS) to the S cells.

Each population contains about 100 neurons, with 2–10 efferent synapses between populations. The time constants of the continuous model translate into explicit delays in our cellular model. Refractory periods are between 10 and 30 ms. The synaptic delay, the active synaptic duration, the weights of each projection, and the thresholds for generating an action potential are varied to reproduce the theta behaviour of the continuous model. External driving inputs are random spike trains of constant mean frequency.
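For illustration, a minimal sketch of the kind of state-machine update underlying such CA neurons; state names and parameter values are illustrative, not the model's actual settings:

```python
import numpy as np

# Toy cellular-automaton neuron population: discrete states, 1-ms steps.
REST, FIRE, REFRACT = 0, 1, 2

def step(states, timers, inputs, threshold=1, refractory_ms=20):
    """One 1-ms update. states: state per neuron; timers: ms left in the
    refractory period; inputs: summed synaptic input arriving this step
    (synaptic delays would be handled by the caller via delay queues)."""
    new = states.copy()
    timers = np.maximum(timers - 1, 0)
    new[(states == REFRACT) & (timers == 0)] = REST   # refractoriness ends
    fire = (states == REST) & (inputs >= threshold)   # threshold crossing
    new[fire] = FIRE
    timers[fire] = refractory_ms
    new[states == FIRE] = REFRACT    # firing lasts one step, then refractory
    return new, timers

# Example: 100 neurons driven by random external spikes for 5 ms
states = np.zeros(100, dtype=int)
timers = np.zeros(100, dtype=int)
rng = np.random.default_rng(2)
for _ in range(5):
    drive = (rng.random(100) < 0.3).astype(int)
    states, timers = step(states, timers, drive)
```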

We obtained oscillations in the 4–7 Hz range by setting the synaptic delay of the E cells and the synaptic duration of the I cells, both of which lie in the range 10–20 ms. As soon as the E cells fire, the IP cells fire with a small delay; these inhibit the active S cells, which in turn inhibit the active I cells, finally inhibiting the active E cells. The cycle then continues periodically, with its period determined by the duration of inhibition of the E cells, thus producing oscillatory behaviour. The E and IP populations are in sync with each other, as are the I and S populations, in both models. Too much or too little external input results in a fixed steady state in the continuous model; in the CA model, this steady state is characterised by random, non-oscillatory firing of the populations.

The next step is to extend the hippocampo-septal circuit to model the CA1 and CA3 regions of the hippocampus using the CA, with more cell populations that regulate theta and theta-coupled gamma frequency oscillations. We will then model the integrated circuit of the CA1 and CA3 regions with the feedforward and feedback synaptic pathways between them. We will compare the CA model with the continuous population activity model of these circuits that we have already developed [3] (Fig. 1). The ultimate goal of the CA model is to simulate learning and recall in an oscillatory model.

References

1. Claverol ET, Brown AD, Chad JE. A large-scale simulation of the piriform cortex by a cell automaton-based network model. IEEE transactions on biomedical engineering. 2002 Nov 7;49(9):921–35.

2. Denham MJ, Borisyuk RM. A model of theta rhythm production in the septal‐hippocampal system and its modulation by ascending brain stem pathways. Hippocampus. 2000;10(6):698–716.

3. Shiva AS, Graham BP. Population model of oscillatory dynamics in hippocampal CA1 and CA3 regions. In: 29th Annual Computational Neuroscience Meeting: CNS*2020. BMC Neuroscience. 2020;21:54.

Fig. 1
figure am

Comparison of activity dynamics of continuous and cellular automata modelling approaches

P62 Local homeostatic regulation of the spectral radius of echo-state networks

Fabian Schubert 1 , Claudius Gros 2

1 Goethe University Frankfurt, Institute for Theoretical Physics, Frankfurt am Main, Germany

2 Goethe University Frankfurt, Frankfurt am Main, Germany

Email: fschubert@itp.uni-frankfurt.de

Recurrent cortical networks provide reservoirs of states that are thought to play a crucial role in sequential information processing in the brain. However, classical reservoir computing requires manual adjustments of global network parameters, particularly of the spectral radius of the recurrent synaptic weight matrix. It is hence not clear if the spectral radius is accessible to biological neural networks. Using random matrix theory, we show that the spectral radius is related to local properties of the neuronal dynamics whenever the overall dynamical state is only weakly correlated. This result allows us to introduce a local homeostatic synaptic scaling mechanism, termed flow control, that implicitly drives the spectral radius toward the desired value. The spectral radius is autonomously adapted while the network receives and processes inputs under working conditions. We demonstrate the effectiveness of this mechanism under different external input protocols. Moreover, we evaluate the network performance after adaptation by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control reliably regulates the spectral radius for different types of input statistics. Precise tuning is however negatively affected when interneural correlations are substantial. Furthermore, we found a consistent task performance over a wide range of input strengths/variances. Given the effectiveness and remarkably simple mathematical form of flow control, we conclude that self-consistent local control of the spectral radius via an implicit adaptation scheme is an interesting and biologically plausible alternative to conventional methods using set point homeostatic feedback controls of neural firing.
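A toy reimplementation of the idea (our reading of the rule, with illustrative parameters): each neuron scales its afferent weights so that its squared recurrent input matches the target spectral radius squared times the population mean squared activity:

```python
import numpy as np

rng = np.random.default_rng(3)
N, R_target, eps = 200, 1.0, 0.01
W = rng.normal(scale=1.5 / np.sqrt(N), size=(N, N))  # starts super-critical
a = np.ones(N)                  # per-neuron afferent scaling factors
x = rng.normal(size=N)

for _ in range(5000):
    u = rng.normal(scale=0.5, size=N)        # external input stream
    rec = (a[:, None] * W) @ x               # scaled recurrent input
    mean_x2 = np.mean(x**2)                  # population activity it saw
    x = np.tanh(rec + u)
    # Local homeostatic update: drive each neuron's squared recurrent
    # input toward R_target^2 times the mean squared network activity.
    a += eps * a * (R_target**2 * mean_x2 - rec**2)

radius = np.max(np.abs(np.linalg.eigvals(a[:, None] * W)))
print(f"spectral radius after adaptation: {radius:.2f}")  # close to 1
```

For weakly correlated activity, the fixed point of this update sets each row norm of the effective weight matrix so that the spectral radius lands near the target, which is the relation from random matrix theory that the abstract invokes.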

P63 Surrogate methods for spike pattern detection in non-Poisson data

Peter Bouss 1 , Alessandra Stella 2 , Günther Palm 3 , Sonja Gruen 4

1 Forschungszentrum Jülich, Institute of Neuroscience and Medicine (INM-6, INM-10), Institute for Advanced Simulation (IAS-6), Jülich, Germany

2 Forschungszentrum Jülich, Jülich, Germany

3 University of Ulm, Institute of Neural Information Processing, Ulm, Germany

4 Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6, INM-10), Jülich, Germany

Email: p.bouss@fz-juelich.de

In order to detect significant spatio-temporal spike patterns (STPs) at millisecond precision, we developed the SPADE method [1–3]. SPADE enables the detection and evaluation of STPs, i.e., spike patterns across neurons with temporal delays between the spikes. For the significance assessment of STPs, surrogates are generated to implement the null hypothesis. Here we demonstrate the requirements for appropriate surrogates.

SPADE first discretizes the spike trains into bins of a few ms width. The discretization includes clipping, i.e., if a bin is occupied by one or more spikes, its content is set to 1. The binarized spike trains are then mined for STPs with Frequent Itemset Mining, counting identical patterns. To assess the significance of these patterns, surrogate spike trains are used. The surrogate data are mined in the same way as the original data, resulting in a p-value spectrum for the significance evaluation [3].

Surrogate data are modifications of the original data in which potential time-correlations are destroyed, thus implementing the null hypothesis of independence. At the same time, the surrogate data need to keep the statistical features of the original data as similar as possible to avoid false positives. A classical choice of surrogate is uniform dithering (UD), which independently displaces each individual spike according to a uniform distribution. We show that UD makes the spike trains more Poisson-like and does not preserve a potential dead time after each spike. As a consequence, more spikes are clipped away than in the original data. Thus, UD surrogate data reduce the expected pattern counts.

To overcome this problem, we evaluate different surrogate techniques. The first is a modification of UD that preserves the dead time. Further, we employ (joint-)ISI dithering, preserving the (joint-)ISI distribution [4]. Another surrogate is based on shuffling bins of the already discretized spike data within a small window. Lastly, we evaluate trial shifting, which shifts each whole spike train against the others, trial by trial, according to a uniform distribution.
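For concreteness, minimal sketches of two of these surrogates, uniform dithering and trial shifting, operating on spike times in ms (window sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def uniform_dithering(spikes, dither=25.0):
    """Displace each spike independently by U(-dither, +dither) ms.
    This destroys fine temporal correlations but, as discussed above,
    also destroys dead time and makes the train more Poisson-like."""
    return np.sort(spikes + rng.uniform(-dither, dither, size=len(spikes)))

def trial_shifting(trials, shift=25.0):
    """Shift each trial's whole spike train by one independent uniform
    offset, preserving all within-train statistics (ISIs, dead time)."""
    return [tr + rng.uniform(-shift, shift) for tr in trials]

# Example: 3 trials of random spike times in a 500-ms window
trials = [np.sort(rng.uniform(0, 500, size=20)) for _ in range(3)]
surr_ud = [uniform_dithering(tr) for tr in trials]
surr_ts = trial_shifting(trials)
```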

To evaluate the effect of the different surrogate methods on the significance assessment, we first analyze the surrogate modifications on different types of stochastic spike models, such as Poisson spike trains, Gamma spike trains, and Poisson spike trains with dead time [5]. We find that all surrogates except UD are robust to clipping. Trial shifting is the technique that best preserves the statistical features of the spike trains. Further, we analyze artificial data sets for the occurrence of false-positive patterns. These data sets were generated with non-stationary firing rates and interval statistics taken from an experimental data set, but are otherwise independent. We find many false positives for UD, whereas all other surrogates show a consistently low number of false-positive patterns. Based on these results, we conclude with a recommendation on which surrogate method to use.

Acknowledgments

The project is funded by the Helmholtz Association Initiative and Networking Fund (ZT-I-0003), by Human Brain Project HBP Grant No. 785907 (SGA2 and SGA3), and by RTG2416 MultiSenses-MultiScales (DFG).

References

1. Torre E, Quaglio P, Denker M, Brochier T, Riehle A, et al. Synchronous spike patterns in macaque motor cortex during an instructed-delay reach-to-grasp task. Journal of Neuroscience. 2016 Aug 10;36(32):8329–40.

2. Quaglio P, Yegenoglu A, Torre E, Endres DM, Grün S. Detection and evaluation of spatio-temporal spike patterns in massively parallel spike train data with spade. Frontiers in computational neuroscience. 2017 May 24;11:41.

3. Stella A, Quaglio P, Torre E, Grün S. 3d-SPADE: Significance evaluation of spatio-temporal patterns of various temporal extents. Biosystems. 2019 Nov 1;185:104022.

4. Gerstein G. Searching for significance in spatio-temporal firing patterns. Acta Neurobiol Exp (Wars). 2004 Jan 1;64(2):203–207.

5. Deger M, Helias M, Boucsein C, Rotter S. Statistical properties of superimposed stationary spike trains. Journal of computational neuroscience. 2012 Jun 1;32(3):443–63.

P64 Behaviorally relevant spatio-temporal spike patterns in parallel spike trains

Alessandra Stella 1 , Peter Bouss 2 , Günther Palm 3 , Alexa Riehle 4 , Thomas Brochier 4 , Sonja Gruen 5

1 Forschungszentrum Jülich, Jülich, Germany

2 Forschungszentrum Jülich, Institute of Neuroscience and Medicine (INM-6, INM-10), Institute for Advanced Simulation (IAS-6), Jülich, Germany

3 University of Ulm, Institute of Neural Information Processing, Ulm, Germany

4 Institut de Neurosciences de la Timone, Marseille, France

5 Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6, INM-10), Jülich, Germany

Email: a.stella@fz-juelich.de

The Hebbian hypothesis [1] states that neurons organize into assemblies of co-active cells acting as information processing units. We hypothesize that assembly activity is expressed by the occurrence of precise spatio-temporal patterns (STPs) of spikes – with precise temporal delays between the spikes – emitted by neurons that are presumably members of an assembly.

We developed a method, called SPADE [2–4], that detects significant STPs in massively parallel spike trains. SPADE involves three steps: it first identifies repeating STPs using Frequent Itemset Mining [4]; second, it evaluates the detected patterns for significance through surrogates (trial-shifting); third, it removes the false positive patterns that are a by-product of true patterns and the background activity.

Here, we aim to evaluate whether cell assemblies are active in relation to motor behavior [2]. To this end, we analyzed N = 20 experimental sessions, each consisting of about 100 parallel spike trains recorded by a 100-electrode Utah array in the pre-/motor cortex of two macaque monkeys performing a reach-to-grasp task [6,7]. In this task, the monkey, after an instructed preparatory period, had to pull and hold an object using either a side grip or a precision grip, and either high or low force (four behavioral conditions). We segmented the trials into 500-ms epochs, concatenated these across trials, and analyzed each epoch separately for the occurrence of STPs. Each significant STP is identified by its neuron composition, its number and times of occurrence, and the delays between the spikes of the pattern. The temporal resolution of the detected patterns is fixed at 5 ms.

We find that STPs occur in all phases of the behavior. In particular, we find about 6 patterns per session, with only 3 to 13 individual neurons involved in STPs. Patterns repeat from 10 to 280 times, depending on their size, which varies from 2 to 6 neurons. Within a session, patterns strongly depend on the behavioral context, and we do not find identical patterns in different epochs. Thus, patterns are specific to a behavioral condition, suggesting that a different assembly is activated for each specific behavioral context. Patterns that occur within a single session typically overlap in the participating neurons, and a few individual neurons appear as hubs, i.e., are involved in several patterns. We also find that pattern neurons are not confined to a small region, but are distributed across the entire cortical surface covered by the Utah array.

Our results are consistent with the model of the synfire chain (SFC) [8]. A theoretical study showed that patterns emerging from SFC activity can be found in parallel spike train data recorded with a 100-electrode Utah array, i.e., despite the strong subsampling.

Acknowledgments

The project is funded by the Helmholtz Association Initiative and Networking Fund (ZT-I-0003), by Human Brain Project HBP Grant No. 785907 (SGA2 and SGA3), and by RTG2416 MultiSenses-MultiScales (DFG).

References

1. Hebb DO. The organisation of behaviour: a neuropsychological theory. New York: Science Editions; 1949.

2. Torre E, Quaglio P, Denker M, Brochier T, Riehle A, et al. Synchronous spike patterns in macaque motor cortex during an instructed-delay reach-to-grasp task. Journal of Neuroscience. 2016 Aug 10;36(32):8329–40.

3. Quaglio P, Yegenoglu A, Torre E, Endres DM, Grün S. Detection and evaluation of spatio-temporal spike patterns in massively parallel spike train data with spade. Frontiers in computational neuroscience. 2017 May 24;11:41.

4. Stella A, Quaglio P, Torre E, Grün S. 3d-SPADE: Significance evaluation of spatio-temporal patterns of various temporal extents. Biosystems. 2019 Nov 1;185:104022.

5. Picado-Muino D, Borgelt C, Berger D, Gerstein G, Grün S. Finding neural assemblies with frequent item set mining. Front. Neuroinform. 2013 May 31;7:9.

6. Brochier T, Zehl L, Hao Y, Duret M, Sprenger J, et al. Massively parallel recordings in macaque motor cortex during an instructed delayed reach-to-grasp task. Scientific data. 2018 Apr 10;5(1):1–23.

7. Riehle A, Wirtssohn S, Grün S, Brochier T. Mapping the spatio-temporal structure of motor cortical LFP and spiking activities during reach-to-grasp movements. Frontiers in neural circuits. 2013 Mar 27;7:48.

8. Abeles M. Corticonics: Neural circuits of the cerebral cortex. Cambridge University Press; 1991 Feb 22.

P65 Computational modeling of electrophysiological properties in urethral smooth muscle cell

Chitaranjan Mahapatra 1 , Shirish Adam 2 , Amritanshu Gupta 3

1 University of California San Francisco, Cardiovascular Research Institute, San Francisco, CA, United States of America

2 Government College of Engineering, Jalgaon, India

3 Indian Institute of Technology Bombay, Biosciences & Bioengineering Department, Mumbai, India

Email: cmahapatra97@gmail.com

The International Continence Society has defined urinary incontinence (UI) as a condition in which involuntary loss of urine is objectively demonstrable and is a social or hygiene problem [1]. Among the different types of UI, stress urinary incontinence (SUI) is a common syndrome in women that is typically associated with advanced age, obesity, diabetes mellitus, and fertility [1]. The smooth muscles of the urinary bladder and urethra display spontaneous contractility patterns, which are associated with UI and SUI. The urethral smooth muscle (USM) cell contributes to SUI by generating spontaneous electrical activity in the form of membrane depolarizations and spontaneous action potentials (sAPs). A complete understanding of the biophysics of the USM cell’s sAPs will therefore help in identifying novel pharmacological targets for SUI. This study presents the first biophysically based model of the USM AP, integrating the key ionic currents underlying the electrogenic processes in the urethra.

The classical Hodgkin-Huxley (HH) formalism is used to build all ion channels, with parameters drawn from published electrophysiological studies. An array of ion channels regulating excitability has been identified in USM cells. The ion channels in the USM cell model are the Ca2+-activated Cl− channel, voltage-gated Ca2+ channel, voltage-gated K+ channel, Ca2+-activated K+ channel, ATP-dependent K+ channel, and leakage currents. The sAPs were induced in the whole-cell model by applying an external stimulus current, either as brief rectangular pulses or as synaptic input. The USM cell model simulation is performed in the NEURON software environment [2].
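A minimal NEURON + Python sketch of this kind of simulation setup; for brevity it uses NEURON's built-in hh mechanism as a stand-in for the USM-specific channel set, which in the full model would be implemented as NMODL mechanisms:

```python
from neuron import h
h.load_file("stdrun.hoc")

# Single-compartment smooth muscle cell; `hh` is only a placeholder for
# the USM channels (CaCC, CaV, KV, KCa, KATP, leak) named above.
soma = h.Section(name="soma")
soma.L = soma.diam = 20                 # um, illustrative geometry
soma.insert("hh")

# Stimulus 1: brief rectangular current pulse
stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 100, 5, 0.1      # ms, ms, nA

# Stimulus 2: synaptic input mimicked by an alpha function
syn = h.AlphaSynapse(soma(0.5))
syn.onset, syn.tau, syn.gmax, syn.e = 300, 2, 0.05, 0   # ms, ms, uS, mV

v = h.Vector().record(soma(0.5)._ref_v)
t = h.Vector().record(h._ref_t)
h.finitialize(-40)    # USM resting potential reported in the results
h.continuerun(500)    # ms
```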

The USM cell model successfully responded to both current and synaptic input stimuli, showing all-or-none AP firing properties. The current input is a step pulse with different amplitudes and durations, and the synaptic input is mimicked by an alpha function. The voltage threshold for triggering an AP is ≈ −35 mV. Figure 1 presents the simulated AP evoked by a synaptic input, mimicking the experimental AP in [3]. The resting membrane potential, AP peak, afterhyperpolarization, and duration are −40 mV, 47 mV, −53 mV, and 38 ms, respectively.

At present, this model is at an elementary stage. Integration of other active channels, the Na+/Ca2+ exchanger, the Ca2+-ATPase pump, and the sarcoplasmic reticulum Ca2+ release mechanism will make it more comprehensive. In addition, expanding this single-cell model to the syncytium or network level will help establish a more physiologically realistic computational model for investigating SUI.

References

1. Abrams P, Cardozo L, Fall M, Griffiths D, Rosier P, et al. The standardisation of terminology in lower urinary tract function: report from the standardisation sub-committee of the International Continence Society. Urology. 2003 Jan 1;61(1):37–49.

2. Hines ML, Carnevale NT. The NEURON simulation environment. Neural computation. 1997 Aug 15;9(6):1179–209.

3. Kyle B, Bradley E, Ohya S, Sergeant GP, McHale NG, et al. Contribution of Kv2.1 channels to the delayed rectifier current in freshly dispersed smooth muscle cells from rabbit urethra. American Journal of Physiology-Cell Physiology. 2011 Nov;301(5):C1186–200.

Fig. 1
figure an

The simulated AP in the USM model

P66 Network model provides insights into entorhinal cortex mechanisms of theta generation

Inês Guerreiro 1 , Zhenglin Gu 2 , Jerrel Yakel 2 , Boris Gutkin 1

1 Ecole Normale Supérieure, Paris, France

2 NIEHS, RTP, NC, United States of America

Email: ines.completo@gmail.com

Hippocampal theta oscillations are a prominent 4–10 Hz rhythm in the hippocampal field potential of all mammals studied to date. They have been linked to spatial and episodic memory formation. After decades of research, the origins of the hippocampal theta rhythm remain elusive. In particular, it is not clear what role is played by each of the regions essential for in vivo hippocampal theta generation – the septum, the hippocampus, and the entorhinal cortex (EC).

Recent experimental studies performed by Gu and Yakel indicate that the EC may be the generator of the theta rhythm in the hippocampal formation: not only does the EC lead the theta rhythm, but all hippocampal sub-regions are synchronized, suggesting that they respond with theta-range activity to a common rhythmic extrinsic input coming from the EC. However, it is important to note that the EC does not function as an independent rhythm generator: it requires hippocampal inputs in the theta range to maintain the theta rhythm [1].

In this work, we propose a circuit model of the EC to study the intrinsic properties that allow external excitatory inputs to drive the system into an oscillatory regime. We use Izhikevich’s two-dimensional QIF neuron model [2] to describe the three major classes of neurons observed in the EC: stellate cells (S), pyramidal cells (E), and fast-spiking interneurons (I). We then take advantage of a thermodynamic approach combined with a reduction method to obtain a simplified, exact description of the three neural populations. To study the contributions of the neural populations to theta generation, we use a machine learning approach [3] to infer the space of connectivity parameters that gives rise to theta-rhythmic activity in the EC network model. We found that theta generation is strongly constrained by the connections between the S and E cells. In fact, a subnetwork of S and E cells is capable of robustly generating synchronized theta oscillations. While the E cells provide the excitatory drive, the S cells play a key role in keeping the oscillations in the theta range.
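For reference, a minimal sketch of the Izhikevich two-variable model used for the single cells; the parameter values shown are the generic ones from [2], not those fitted to EC stellate, pyramidal, or fast-spiking cells:

```python
import numpy as np

def izhikevich(T=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0):
    """Izhikevich (2003) model:
    dv/dt = 0.04 v^2 + 5 v + 140 - u + I,   du/dt = a (b v - u),
    with reset v <- c, u <- u + d whenever v >= 30 mV."""
    n = int(T / dt)
    v, u = c, b * c
    vs, spikes = np.empty(n), []
    for k in range(n):
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike: record and reset
            spikes.append(k * dt)
            v, u = c, u + d
        vs[k] = v
    return vs, spikes

vs, spikes = izhikevich()
print(f"{len(spikes)} spikes in 1 s of simulated time")
```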

The entorhinal cortex has a unique role, as it is positioned as a gateway between neocortical areas and the hippocampal system. A clearer understanding of the intrinsic circuit properties of the EC and its temporal dynamics will clarify the information communication processes between the hippocampus and other neocortical areas, as well as the role of theta oscillations.

References

1. Gu Z, Yakel JL. Inducing theta oscillations in the entorhinal hippocampal network in vitro. Brain Structure and Function. 2017 Mar 1;222(2):943–55.

2. Izhikevich EM. Simple model of spiking neurons. IEEE Transactions on neural networks. 2003 Nov;14(6):1569–72.

3. Gonçalves PJ, Lueckmann JM, Deistler M, Nonnenmacher M, Öcal K, et al. Training deep neural density estimators to identify mechanistic models of neural dynamics. Elife. 2020 Sep 17;9:e56261.

P67 Neuronal heterogeneity underlies electrical synapse asymmetry and spike time variability in coupled neurons

Austin Mendoza 1 , Julie Haas 1

1 Lehigh University, Biological Sciences, Bethlehem, PA, United States of America

Email: ajma19@lehigh.edu

Electrical synapses couple inhibitory neurons across the brain, serving a variety of functions within neural circuits that are modifiable by activity. Much focus has been on the synchrony and oscillatory activity that electrical synapses promote between cells. Recently, several specific mechanisms of plasticity have been demonstrated at electrical synapses. In feedforward and feedback inhibitory circuits, these synapses can play complex roles in information processing. Despite recent advances, many basic aspects of electrical synapse signaling, including asymmetry and effects on spike times, remain underappreciated. Using multi-compartmental models of neurons coupled through dendritic electrical synapses, we investigated how intrinsic factors contribute to the observed synaptic asymmetry and how these factors modulate spike times in coupled cells. We show that electrical synapse location along a dendrite, input resistance, internal dendritic resistance, and directional conduction of the electrical synapse itself each alter asymmetry, as measured by coupling between the cell somas. Strikingly, apparent asymmetry could result from symmetrically conducting electrical synapses that couple different subcellular locations of the two cells. Asymmetry resulting from a difference in synapse location was amplified by differences in synapse strength, input resistance, or dendritic resistance. Additionally, we show that several combinations of factors contributing to asymmetry can produce identical coupling-ratio measurements, indicating that measurements of asymmetry may mask truly asymmetric coupling. Furthermore, we show that asymmetry alters spike times and latency in coupled cells, depending on the direction of conduction and the dendritic location of the electrical synapse. Together, these simulations illustrate that the causes of asymmetry are multifactorial, may not be apparent in measurements of electrical coupling, and produce a variety of spike-time outcomes in coupled cells. Our findings highlight aspects of electrical synapses that should be considered in experimental investigations of coupling and when constructing networks containing electrical synapses.
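A sketch of the standard coupling-coefficient measurement underlying these asymmetry estimates (the somatic voltage-deflection ratio for a current step injected into one cell), here on synthetic traces rather than the multi-compartment models of the study:

```python
import numpy as np

def coupling_coefficient(v_inj, v_coupled, baseline, step):
    """Steady-state coupling coefficient cc = dV_coupled / dV_injected
    for a current step into the injected cell; the reverse protocol
    gives the other direction, and their inequality defines asymmetry.
    baseline/step: index slices well before and late in the step."""
    dv_inj = v_inj[step].mean() - v_inj[baseline].mean()
    dv_cpl = v_coupled[step].mean() - v_coupled[baseline].mean()
    return dv_cpl / dv_inj

# Synthetic example: 10-mV deflection in cell 1 couples 0.8 mV to cell 2
t = np.arange(1000)
baseline, step = slice(0, 400), slice(800, 1000)
v1 = -70.0 + 10.0 * (t > 500)
v2 = -70.0 + 0.8 * (t > 500)
print(coupling_coefficient(v1, v2, baseline, step))   # ~0.08
```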

P68 Computational modelling of ictogenicity to inform photosensitive epilepsy from interictal EEG

Marinho Lopes 1 , Jiaxiang Zhang 2 , Khalid Hamandi 3

1 Cardiff University, Cardiff, United Kingdom

2 Cardiff University, School of Psychology, CUBRIC, Cardiff, United Kingdom

3 Cardiff University, CUBRIC, Cardiff, United Kingdom

Email: m.lopes@exeter.ac.uk

People with photosensitive epilepsy (PSE) are prone to epileptic seizures evoked by visual stimuli, typically flickering lights. PSE is particularly relevant as a model to understand epilepsy. For example, it is used within clinical trials to test the efficacy of anti-seizure medication [1]. Thus, a better understanding of the pathophysiology of PSE may have an impact not only on people with PSE but more generally on the diagnosis and treatments of epilepsy.

Several studies have found evidence for both occipital and more widespread cortical hyperexcitability in people with PSE [2]. In this study, we aimed to determine whether we could identify a widespread and/or occipital increase in ictogenic propensity from interictal EEG in people with PSE, relative to individuals with epilepsy but without PSE. To evaluate network-wide and local ictogenic propensity, we used the concepts of brain network ictogenicity (BNI) and node ictogenicity (NI), respectively. BNI is a measure of how likely a functional brain network is to generate seizures in computer simulations [3]. These simulations consist of placing a mathematical model of epilepsy on the functional network and computing the resulting brain dynamics; brain networks with a higher likelihood of supporting seizures are expected to produce more seizure-like activity in the simulations [3]. NI is assessed by removing regions from the functional network and evaluating the resulting change in BNI [3]. Brain regions whose removal produces a larger reduction of BNI are considered more ictogenic.
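A minimal sketch of the NI computation as just described; `compute_bni` is a placeholder for the simulation-based BNI measure of [3], replaced here by a trivial stand-in:

```python
import numpy as np

def node_ictogenicity(adj, compute_bni):
    """NI of each node: normalized drop in BNI when that node is removed.
    `compute_bni` should return the network's seizure propensity (in [3],
    the fraction of simulated time spent in seizure-like activity)."""
    n = adj.shape[0]
    bni_full = compute_bni(adj)
    ni = np.empty(n)
    for i in range(n):
        keep = np.delete(np.arange(n), i)          # remove region i
        ni[i] = (bni_full - compute_bni(adj[np.ix_(keep, keep)])) / bni_full
    return ni

# Toy stand-in for the dynamical BNI: total functional coupling strength
toy_bni = lambda a: a.sum()
rng = np.random.default_rng(5)
adj = rng.random((10, 10)) * (rng.random((10, 10)) < 0.3)
print(node_ictogenicity(adj, toy_bni).round(2))
```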

We considered two groups of individuals with idiopathic generalised epilepsy: 26 individuals who had a photoparoxysmal response (PPR) during intermittent photic stimulation (IPS) (the PPR group), and 24 individuals who did not (the non-PPR group). We tested two hypotheses: (i) the PPR group has a higher BNI than the non-PPR group; and (ii) the PPR group has a higher occipital NI than the non-PPR group. Applying our computational framework, we observed that BNI is not significantly different between the two groups, suggesting that our cohort with PSE did not have a higher widespread ictogenic propensity than other individuals with epilepsy but without PSE. In contrast, we found that the PPR group had a statistically significantly higher occipital NI than the non-PPR group. This result suggests that the occipital region is particularly prone to inducing seizure activity in people with PSE, and that this susceptibility can be probed from resting interictal EEG. More generally, our results show that computational analysis of interictal EEG may be used to diagnose PSE without the need for photic stimulation.

References

1. Yuen ES, Sims JR. How predictive are photosensitive epilepsy models as proof of principle trials for epilepsy?. Seizure. 2014 Jun 1;23(6):490–3.

2. Padmanaban V, Inati S, Ksendzovsky A, Zaghloul K. Clinical advances in photosensitive epilepsy. Brain research. 2019 Jan 15;1703:18–25.

3. Lopes MA, Richardson MP, Abela E, Rummel C, Schindler K, et al. An optimal strategy for epilepsy surgery: Disruption of the rich-club?. PLoS computational biology. 2017 Aug 17;13(8):e1005637.

P69 Modeling of neural action potential generation in the electron-diffusion regime

Ahmed Hamzah 1

1 Washington State University, MME, Pullman, WA, United States of America

Email: ahmed.hamzah@wsu.edu

In modeling neurons, it is generally assumed that the diffusion current in the cable model is too small to be worth taking into account. Here, a cable model and a modified cable model that includes a diffusion current were solved with a finite volume method and tested on a rat layer 5 neuron at different stimulus current amplitudes. The diffusion current was shown to have a significant impact on the computed membrane potential at some values of the stimulus current, and including or excluding it made the difference between generating an action potential and not. The two models also predicted different sodium concentration responses during an action potential. The present work reveals that the diffusion term in the modified cable equation may critically determine action potential generation in the dynamic equation of the membrane potential. This highlights the importance of starting from the Nernst-Planck equation, in which electro-migration and diffusion fluxes are combined.
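For orientation, a hedged sketch of the equations at issue, in generic notation rather than the author's exact formulation: the classical cable equation keeps only the ohmic axial current, while the Nernst-Planck flux adds an axial diffusion term for each ion species k (c_m membrane capacitance per area, a cable radius, R_i axial resistivity, D_k diffusivity, c_k concentration, z_k valence, F Faraday's constant, R gas constant, T temperature):

```latex
% Classical cable equation: the axial current is purely ohmic (drift).
c_m \frac{\partial V}{\partial t}
  = \frac{a}{2 R_i} \frac{\partial^2 V}{\partial x^2} - i_{\mathrm{ion}}

% Nernst--Planck axial flux of ion species k: diffusion plus electro-migration.
J_k = -D_k \left( \frac{\partial c_k}{\partial x}
      + \frac{z_k F}{R T}\, c_k \frac{\partial V}{\partial x} \right)
```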

P70 Astrocyte-neuron interaction through the extracellular ionic composition

Carter Johnson 1 , Alla Borisyuk 1 , Gregory Handy 2

1 University of Utah, Department of Mathematics, Salt Lake City, UT, United States of America

2 University of Chicago, Neurobiology and Statistics, Grossman Center for Quantitative Biology and Human Behavior, Chicago, IL, United States of America

Email: caljohnson@math.utah.edu

Glial cells called astrocytes play many important roles in the brain. In many brain areas, astrocytes can partially wrap around synapses to form “tripartite synapses” (presynaptic neuron–astrocyte–postsynaptic neuron). This wrapping allows the astrocyte to modulate the synaptic signal between nearby neurons in a number of ways. In this work, we explore one such pathway of astrocyte-neuron interaction. Namely, we study how the astrocyte’s calcium activity can affect the excitability of the postsynaptic neuron by altering the extracellular concentrations of different ion species. We present a model of the astrocyte (see Fig. 1) that includes biologically constrained key transmembrane potassium, calcium, sodium, and glutamate fluxes: the Na+/Ca2+ exchanger (NCX), the Na+/K+ pump (NKA), inward-rectifying potassium channels (Kir4.1), and the glutamate transporter (GLT). Each component is carefully adapted from the literature to match the available data. All components are then combined and interfaced with existing astrocyte calcium response models [1,2] to study the influence of this astrocyte-neuron interaction pathway on the excitability of nearby neurons. We find that by regulating the volume of, and the ion concentrations in, the extracellular space around the synapse, astrocytes can effectively weaken signal transfer between neurons but also prevent runaway excitation in some pathological conditions.

Acknowledgments

We thank the NSF for supporting the authors with NSF-DMS-1853673.

References

1. Handy G, Taheri M, White JA, Borisyuk A. Mathematical investigation of IP3-dependent calcium dynamics in astrocytes. Journal of computational neuroscience. 2017 Jun;42(3):257.

2. Taheri M, Handy G, Borisyuk A, White JA. Diversity of evoked astrocyte Ca2+ dynamics quantified through experimental measurements and mathematical modeling. Frontiers in systems neuroscience. 2017 Oct 23;11:79.

Fig. 1
figure ao

Extended tripartite synapse. Boxed in red are the components of the Handy-Taheri IP3-calcium model [1,2]. New components in the extended model are the Na+/Ca2+ exchanger (NCX), Na+/K+ pump (NKA), inward-rectifying potassium channels (Kir4.1), glutamate transporter (GLT), sodium-leak current (L-N), and neuronally released glutamate (Glut)

P71 Biomarkers of reduced inhibition in human cortical microcircuit signals in depression

Frank Mazza 1 , John Griffiths 2 , Etay Hay 3

1 Centre for Addiction and Mental Health, University of Toronto, Krembil Centre for Neuroinformatics, Department of Physiology, Toronto, Canada

2 Centre for Addiction and Mental Health, University of Toronto, Krembil Centre for Neuroinformatics, Toronto, Canada

3 Centre for Addiction and Mental Health, University of Toronto, Krembil Centre for Neuroinformatics, Psychiatry, Physiology, Toronto, Canada

Email: frank.mazza@camh.ca

Major depressive disorder (depression) involves different mechanisms and brain scales. Altered cortical inhibition is associated with treatment-resistant depression, and recent studies indicate that reduced dendritic inhibition by somatostatin-expressing (SST) interneurons is a key component of the pathology. Modeling studies suggest that changes in SST-mediated inhibition increase cortical baseline activity and noise, and may thus account for deficits in cortical processing in depression. Electroencephalography (EEG) offers an important source of biomarkers for depression that could improve diagnosis and inform personalized treatments. However, whether the effects of reduced SST inhibition on microcircuit activity have signatures detectable in EEG remains unknown. We used detailed models of human cortical layer 2/3 microcircuits with normal or reduced SST inhibition to simulate resting-state activity together with the associated EEG signals in health and depression. We show that the healthy microcircuit models had emergent properties that reproduced key features of resting-state EEG, including a theta-alpha band peak (4–12 Hz) and the 1/f decomposition of the power spectral density (PSD). Comparing the simulated EEG of healthy and depression microcircuits, we found an increase in theta band power (4–8 Hz) along with a broadband increase. We then characterized the EEG-phase preference of spiking for the different neuron types in the microcircuit and found a distinct preference for the peak of the theta-alpha oscillations. In addition, we characterized the spatial decay of the EEG signatures across the brain surface by integrating the microcircuit signal into a realistic head model. Our study thus used detailed computational models to identify EEG biomarkers of reduced SST inhibition in cortical microcircuits in depression, which may serve to improve the diagnosis and stratification of depression subtypes and to monitor the effects of pharmacology that modulates SST inhibition for treating depression.
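As a hedged sketch of this kind of PSD characterization (a theta-alpha peak riding on a 1/f aperiodic trend), using Welch's method and a log-log linear fit on a synthetic signal:

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0
rng = np.random.default_rng(6)
t = np.arange(0, 60, 1 / fs)

# Synthetic "EEG": 1/f-like noise plus a theta-alpha (~8 Hz) oscillation
white = rng.normal(size=t.size)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
pink = np.fft.irfft(np.fft.rfft(white) / np.maximum(freqs, 1 / 60), n=t.size)
sig = pink / pink.std() + 0.5 * np.sin(2 * np.pi * 8 * t)

f, pxx = welch(sig, fs=fs, nperseg=int(4 * fs))

# Aperiodic (1/f) component: linear fit in log-log space, excluding the
# 4-12 Hz peak; band power summarizes the theta (4-8 Hz) oscillation.
fit_band = (f >= 1) & (f <= 40) & ~((f >= 4) & (f <= 12))
slope, intercept = np.polyfit(np.log10(f[fit_band]), np.log10(pxx[fit_band]), 1)
theta_power = pxx[(f >= 4) & (f < 8)].mean()
print(f"1/f slope: {slope:.2f}, theta-band power: {theta_power:.3g}")
```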

P72 Semantization of episodic memory in a spiking cortical attractor network model

Nikolaos Chrysanthidis 1 , Florian Fiebig 1 , Anders Lansner 2 , Pawel Herman 3

1 KTH Royal Institute of Technology, Stockholm, Sweden

2 Stockholm University, KTH Roual Institute of Technology, Stockholm, Sweden

3 KTH Royal Institute of Technology, Digital Futures, Stockholm, Sweden

Email: nchr@kth.se

Episodic memory (EM) is the recollection of past experiences that occurred at particular times and places. Semantic memory (SM) refers to general knowledge about words and items, lacking spatiotemporal source information, possibly resulting from the accumulation of EMs. In fact, EM traces are susceptible to transformation and loss of information [1], which can be partially attributed to semantization (decontextualization process). Extensions to the classical Remember/Know behavioral paradigm attribute the loss of episodicity to repeated exposures of items in different contexts leading to decontextualization [2]. Despite recent advancements explaining semantization at a behavioral level [2], the underlying neural mechanisms and, particularly, the role of synaptic plasticity in the associative pathways remain poorly understood.

Here we propose and evaluate a Bayesian-Hebbian hypothesis about the synaptic and network mechanisms underlying EM semantization. We build a model consisting of two cortical spiking neural networks associatively coupled using a Bayesian-Hebbian learning rule (BCPNN) [3,4] (Fig. 1a), and show how it captures key phenomenological aspects of semantization. In particular, we simulate an EM task designed to follow a seminal experimental study [2] (Fig. 1b), and qualitatively compare the modelling results with the corresponding behavioral data. We demonstrate that encoding items across multiple contexts leads to item-context decoupling akin to semantization (Fig. 1c, f: items or contexts serve as retrieval cues, respectively). The emerging loss of episodicity progresses with further exposures of a stimulus in different contexts, resulting in weaker item-context memory binding (Fig. 1d, g). This gradual trace modification relies on the nature of Bayesian learning, which normalizes and updates weights over estimated presynaptic (Bayesian-prior) as well as postsynaptic (Bayesian-posterior) spiking activity, while also modulating the intrinsic excitability of pyramidal cells in the model (Fig. 1e, h). Importantly, the more commonly used spike-timing-dependent plasticity (STDP) rule does not lead to item-context decoupling in the same EM task.
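A rate-based caricature of the BCPNN update (our simplification of [3], not the spiking implementation): probability traces are low-pass filters of pre-, post-, and co-activation; weights are the log-odds of coactivation, and biases the log-priors:

```python
import numpy as np

def bcpnn_update(pi, pj, pij, si, sj, dt=1.0, tau_p=5000.0, eps=1e-4):
    """One incremental BCPNN step (rate-based sketch).
    pi, pj: smoothed pre/post activation probability traces
    pij: smoothed coactivation probability trace
    si, sj: current pre/post activities in [0, 1]"""
    k = dt / tau_p
    pi = pi + k * (si - pi)
    pj = pj + k * (sj - pj)
    pij = pij + k * (np.outer(si, sj) - pij)
    w = np.log((pij + eps**2) / (np.outer(pi, pj) + eps**2))  # log-odds
    b = np.log(pj + eps)                                      # log-prior bias
    return pi, pj, pij, w, b

# Toy step: co-activate item 0 (pre network) with context 1 (post network)
n = 4
pi, pj = np.full(n, 0.25), np.full(n, 0.25)
pij = np.full((n, n), 0.0625)
si, sj = np.zeros(n), np.zeros(n)
si[0], sj[1] = 1.0, 1.0
pi, pj, pij, w, b = bcpnn_update(pi, pj, pij, si, sj)
```

Pairing the same item with many different contexts drives each pij toward the product pi·pj, so the corresponding log-odds weights decay toward zero, which is the item-context decoupling described above.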

On the whole, there are few computational models of the EM-SM interplay, and those that exist typically neglect the underlying neural mechanisms in favor of predicting behavioral outcomes. Our model bridges these perspectives and reproduces important EM phenomena on behavioral time scales (under constrained network connectivity with plausible postsynaptic potentials, firing rates, etc.), while also explaining semantization in terms of synaptic plasticity. To further this understanding, our hypothesis of the EM-SM interplay at the neural level needs to be substantiated experimentally.

Acknowledgments

Vetenskapsrådet 2018-05360; SNIC resources at PDC Centre for High Performance Computing, KTH Royal Institute of Technology; Swedish e-Science Research Centre (SeRC).

References

1. Tulving E. Episodic and semantic memory. In: Tulving E, Donaldson W, editors. Organization of Memory. New York: Academic Press; 1972. p. 381–403.

2. Opitz B. Context-dependent repetition effects on recognition memory. Brain and Cognition. 2010 Jul 1;73(2):110–8.

3. Tully PJ, Hennig MH, Lansner A. Synaptic and nonsynaptic plasticity approximating probabilistic inference. Frontiers in synaptic neuroscience. 2014 Apr 8;6:8.

4. Fiebig F, Herman P, Lansner A. An indexing theory for working memory based on fast Hebbian plasticity. eneuro. 2020 Mar;7(2).

Fig. 1
figure ap

Semantization of EMs in a two-network model (a). Items and contextual memory objects are simultaneously cued in the respective networks (b; contexts inherit color from the coactivated items in the spike raster). Repetition of items with various contexts leads to gradual item-context decoupling (c) due to weakening of associative weights between the networks (d). ***p < 0.001 (Mann–Whitney)

P73 Modelling the effect of deep brain stimulation on cortico-subcortical networks in the context of freezing of gait in Parkinson’s Disease

Mariia Popova 1 , Arnaud Messé 1 , Alessandro Gulberti 2 , Monika Pötter-Nerger 3 , Claus Hilgetag 1

1 University Medical Center Hamburg-Eppendorf, Institute of Computational Neuroscience, Hamburg, Germany

2 University Medical Center Hamburg-Eppendorf, Department of Neurophysiology and Pathophysiology, Hamburg, Germany

3 University Medical Center Hamburg-Eppendorf, Department of Neurology, Hamburg, Germany

Email: m.popova@uke.de

Currently available treatments for Parkinson’s disease are of limited efficacy. Symptoms such as freezing of gait cause falls and are a significant source of morbidity in patients with Parkinson’s disease. One possible treatment approach for these patients is deep brain stimulation. However, freezing of gait often persists during standard subthalamic nucleus deep brain stimulation. At the same time, there is experimental evidence of improvement in freezing during simultaneous stimulation of the subthalamic nucleus and substantia nigra pars reticulata. This effect could be due to the connections of the substantia nigra pars reticulata to midbrain regions responsible for posture stability and gait initiation, such as the pedunculopontine nucleus. A computational model that explains the observed improvement and also accounts for the behavioral data could help unravel the mechanisms behind the symptoms of Parkinson’s disease and potentially lead to more individualized treatment.

For this reason, we study the cortico-subcortical networks responsible for gait and the effects exerted on these networks by perturbations such as deep brain stimulation. To assess the differences between the two aforementioned stimulation modes, we compare the network dynamics in the healthy, Parkinsonian, and deep-brain-stimulated states. We also compare the modelling outputs with pupillometry data, an indirect measure of locus coeruleus activity. This is important because abnormalities in the afferent pathways of the locus coeruleus – one of the outputs of the model – are associated with gait deterioration. Previous computational models do not account for the effects of interest, as they either lack biological detail or do not include midbrain regions.

As a first approach, we developed a firing rate network model comprising interconnected populations of Hodgkin-Huxley neurons representing basal ganglia nuclei and midbrain regions. The switch to the Parkinsonian state is achieved via the change in striatal conductances representing dopamine depletion – a hallmark of Parkinson’s disease. Deep brain stimulation is modeled as a current applied to the efferent axons of the neurons in the target regions. The resulting firing profile in the locus coeruleus is then compared to the pupillometry data.

We present simulations from the proposed computational model that qualitatively account for the firing rate data and their dynamics in the healthy, Parkinsonian, and stimulated states. Moreover, the firing dynamics during subthalamic nucleus deep brain stimulation alone are markedly different from those during simultaneous stimulation of the subthalamic nucleus and substantia nigra pars reticulata. Limitations of this firing rate modelling approach are discussed. The model thus accounts, for the first time, for the difference between the two stimulation modes and suggests a possible mechanism of action of deep brain stimulation.

P74 Systematic perturbation of an Artificial Neural Network: A step towards quantifying causal contributions in the brain

Kayson Fakhar 1 , Claus Hilgetag 1

1 University Medical Center Hamburg-Eppendorf, Institute of Computational Neuroscience, Hamburg, Germany

Email: k.fakhar@uke.de

“No causation without manipulation”. With this motto in mind, lesion inference approaches characterize the causal contributions of neural elements to brain functions. Historically, lesion inference has helped to localize specialized units in the brain and it has gained new prominence through the arrival of optogenetic perturbation techniques that allow probing the causal role of neural elements at an unprecedented level of detail. While lesion or perturbation inferences are conceptually powerful tools, they face methodological difficulties due to the brain’s complexity. Particularly, they are often challenged to disentangle the causal role of individual neural elements, since many functions emerge from coalitions of different elements. Therefore, studies of real-world data, as in clinical lesion studies, are not suitable for establishing the reliability of lesion approaches, due to unknown, multivariate, and potentially complex interactions among brain regions. Instead, ground truth studies of well-characterized artificial systems are required to validate established lesion inference approaches and reveal computational motifs employed by the brain.

Here, we trained an Artificial Neural Network (ANN) to play a classic arcade game in order to explore how well different perturbation strategies can reveal the neural substrate of a behavior. To this end, we first lesioned every node and connection using a single-site lesioning scheme, the traditional approach in neuroscience, and second employed a multi-site lesioning scheme to perturb thousands of unique combinations of units. We quantified the causal contribution of all elements using a rigorous game-theoretical metric based on the Shapley value and then calculated the synergistic and redundant interactions of pairs of causal units.
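A minimal sketch of the Monte Carlo permutation scheme commonly used to estimate Shapley values over lesion combinations; the value function here is a toy with one redundant pair, not the game-playing ANN:

```python
import numpy as np

def shapley_values(elements, value_fn, n_perm=200, rng=None):
    """Monte Carlo Shapley estimate: average each element's marginal
    contribution over random orders in which elements are restored.
    `value_fn(coalition)` returns performance with only `coalition`
    intact, i.e., everything outside the coalition lesioned."""
    rng = rng or np.random.default_rng()
    phi = {int(e): 0.0 for e in elements}
    for _ in range(n_perm):
        order = rng.permutation(elements)
        coalition, prev = [], value_fn(frozenset())
        for e in order:
            coalition.append(int(e))
            cur = value_fn(frozenset(coalition))
            phi[int(e)] += (cur - prev) / n_perm
            prev = cur
    return phi

# Toy value function: elements 0 and 1 are redundant (either suffices),
# element 2 is indispensable, element 3 is irrelevant.
v = lambda s: 1.0 if (0 in s or 1 in s) and 2 in s else 0.0
print(shapley_values(np.arange(4), v))
```

In this toy, the redundant pair splits credit (about 0.25 each) while the indispensable element receives about 0.5, the kind of structure that single-site lesioning misses entirely.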

We found that not every perturbation approach necessarily reveals causation, as lesioning elements one at a time produced biased results. By contrast, multi-site lesion analysis captured essential information that was missed by single-site lesions. In particular, we identified a motif of functional interaction that manifests as a paradoxical lesion effect, i.e., disruptions in performance caused by a first lesion that reverts towards normal after a second lesion. Finally, we compared the network’s behavior with the behavior of the network in which the most critical element was lesioned, to understand the functional role of the element.

We conclude that even small and seemingly simple ANNs show surprising complexity that needs to be appreciated in order to derive a causal picture of the system. In the context of rapidly evolving multi-site perturbation approaches and multivariate brain-mapping and inference methods, we advocate using in silico experiments and ground-truth models to verify fundamental assumptions about the validity of these approaches.

P75 Detection of visual information processing regions from high-density EEG data

Anna Pidnebesna 1 , Stanislav Jiricek 1 , Vlastimil Koudelka 2 , Kamil Vlcek 3 , Pavel Sanda 1 , Jiri Hammer 4 , Jaroslav Hlinka 1

1 Czech Academy of Sciences, Institute of Computer Science, Complex systems, Prague, Czechia

2 National Institute of Mental Health, Klecany, Czechia

3 Czech Academy of Sciences, Institute of Physiology, Prague, Czechia

4 Charles University, 2nd School of Medicine, University Hospital Motol, Prague, Czechia

Email: p.anna2401@gmail.com

Visual information processing plays an important role in human perception and cognition. Measuring the information flow is an even more challenging task than purely detecting local activations. The selection of a parsimonious set of relevant regions of interest (ROIs) is key for a successful analysis. A common choice is blind source separation (ICA, PCA, NNMF). However, due to the nonstationarity of the stimulus-driven data and multiple local maxima of the temporal components, an interpretable description of how the initial stimulus spreads is difficult to obtain. We thus propose a method that enforces better temporal localization of the activity within the studied ROIs, and demonstrate an application to source-reconstructed high-density EEG data. Effective connectivity analysis was used to demonstrate the difference between the detected feedforward and feedback activity (Fig. 1).

Local activity time courses are divided into components via their spatiotemporal dynamics. Activation times are defined as the time of the maximum of the absolute value and are used to sort signals in time and divide them into equal groups (N = 15). In every group, outliers are removed according to the sources' spatial positions, and the remaining locations are spatially clustered using k-means and considered as ROIs for further processing. The method was tested on example EEG data from a healthy subject (male, age 33). A set of pictures was presented on a computer monitor in 600 trials, each including a 200-ms baseline, a 300-ms stimulus, and 600 ms of reaction time [1]. The EEG was recorded by a high-density 256-channel system with a Net Amps 400 series amplifier at 1000 Hz sampling and preprocessed by an automated pipeline: bad-channel detection and interpolation, bad-segment rejection, bandpass filtering (0.5-300 Hz), ICA-based artifact detection and rejection using a set of features from the SASICA, FASTER, and ADJUST packages in the EEGLAB toolbox [2], and finally average referencing and bandpass filtering to 1–80 Hz.
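
The grouping step can be sketched as follows; the array names, the outlier criterion, and the number of clusters per group are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative sketch of the activation-time sorting and ROI clustering.
# src_ts: (n_sources, n_times) reconstructed source time courses
# src_pos: (n_sources, 3) source positions in grey matter (synthetic here)
rng = np.random.default_rng(1)
src_ts = rng.normal(size=(500, 1100))
src_pos = rng.uniform(-70, 70, size=(500, 3))

act_time = np.abs(src_ts).argmax(axis=1)        # time of max absolute value
order = np.argsort(act_time)                    # sort sources by activation
groups = np.array_split(order, 15)              # N = 15 equal groups

rois = []
for g in groups:
    pos = src_pos[g]
    # crude spatial outlier removal: drop sources far from the group centroid
    d = np.linalg.norm(pos - pos.mean(axis=0), axis=1)
    pos = pos[d < d.mean() + 2 * d.std()]
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pos)
    rois.append(km.cluster_centers_)            # cluster centres define ROIs
```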

The EEG dipole moment time courses were estimated by the eLORETA inverse algorithm [3] on a regular grid in grey matter. The forward model was generated by the FieldTrip-SimBio pipeline [4] and included a 5-layer hexahedral head model built from the individual T1-weighted MRI image. The electrode positions were obtained by coregistering fiducial points with the individual head model. Several ROIs lying along the ventral/dorsal pathways were selected for a preliminary connectivity analysis. A significant difference between feedforward and feedback connectivity was detected 200–400 ms after stimulus onset. In the future, we aim to update the ROI definition so that sources can have zero or multiple activations, include more subjects, and continue with the connectivity analysis.

Acknowledgements

This study was supported by the Czech Science Foundation grant GA19-11753S.

References

1. Vlcek K, Fajnerova I, Nekovarova T, Hejtmanek L, Janca R, et al. Mapping the scene and object processing networks by intracranial EEG. Frontiers in human neuroscience. 2020 Oct 9;14:411.

2. Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of neuroscience methods. 2004 Mar 15;134(1):9–21.

3. Pascual-Marqui RD, Lehmann D, Koukkou M, Kochi K, Anderer P, et al. Assessing interactions in the brain with exact low-resolution electromagnetic tomography. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2011 Oct 13;369(1952):3768–84.

4. Vorwerk J, Oostenveld R, Piastra MC, Magyari L, Wolters CH. The FieldTrip-SimBio pipeline for EEG forward solutions. Biomedical engineering online. 2018 Dec;17(1):1–7.

Fig. 1
figure aq

A Data-driven ROI taking part in visual processing; color corresponds to the activation time. B Feedforward and feedback average connectivity (of selected ROI) in time, estimated by Granger causality in sliding window

P76 Can short recovery time-constants of synapses explain long recovery time-constants of auditory evoked responses in monkey primary auditory cortex?

Siwei Qiu 1 , Spencer Koerner 2 , Tobias Teichert 3 , Chengcheng Huang 4

1 University of Pittsburgh, Neuroscience, Pittsburgh, PA, United States of America

2 University of Pittsburgh, Department of Psychiatry, Pittsburgh, PA, United States of America

3 University of Pittsburgh, Department of Psychiatry and Department of Bioengineering, Pittsburgh, PA, United States of America

4 University of Pittsburgh, Neuroscience/Mathematics, Pittsburgh, PA, United States of America

Email: siwei.qiu@gmail.com

Auditory responses are strongly modulated by the recent history of sound. Across the auditory pathway, neural responses are significantly attenuated if the same stimulus was presented less than a few seconds ago. The time constant at which firing rates in primary auditory cortex (PAC) recover back to baseline is on the order of one second. Time constants for auditory evoked EEG responses, which reflect synchronized post-synaptic potentials from all auditory responsive brain regions, can be even longer. This response attenuation has often been linked to short-term synaptic depression. However, synaptic time constants are typically in the range of a few hundred milliseconds. It is thus unclear if and how the synaptic time constants could give rise to the much longer time constants of firing rates and EEG.

To address this question, we investigated under which circumstances the recovery time constant of a neural network can differ from the recovery time constant of the underlying synapse. Further, we tested whether the long-lasting attenuation of click-evoked neural responses observed in monkey PAC and EEG can be accounted for by much shorter synaptic time constants. We measured multi-unit activity (MUA) from the PAC and the EEG signal in rhesus monkeys. The sound stimuli were auditory clicks with random inter-click intervals (ICI, 0.25 to 12 s) and different intensities (65 to 85 dB SPL). To develop a forward model to simulate EEG activity, we used magnetic resonance imaging (MRI) to obtain head models of the monkeys. We used a firing rate model with short-term synaptic depression at both the feedforward and recurrent excitatory synapses. We fitted the rate model to the MUA data recorded in PAC and obtained distributions of network connectivity and synaptic parameters. To simulate the EEG data, we built a forward model linking single-region activity to EEG signals, which incorporates detailed monkey head models and the geometry of the monkey cortex. With a brain atlas database of non-human primates, we extracted the accurate locations of different auditory regions and computed the contributions of each region to the EEG signals recorded on different sensors. We found that networks with recurrent depression typically generated rate recovery time constants longer than their synaptic time constants. Networks with feedforward depression can also generate longer rate recovery time constants if their stimulus response function is supralinear.
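
A minimal sketch of the modelling idea, assuming a single rate unit with Tsodyks-Markram-style depression and illustrative parameter values (not the values fitted to the monkey data), is given below.

```python
import numpy as np

# Minimal rate model with short-term synaptic depression at feedforward and
# recurrent synapses; all parameter values are illustrative assumptions.
dt = 1e-3
tau_r, tau_d, U = 0.010, 0.4, 0.5   # rate, depression recovery (s), release prob.
J_ff, J_rec = 2.0, 1.5              # feedforward / recurrent weights

def f(x):
    return 100.0 * np.tanh(np.maximum(x, 0.0) / 100.0)   # saturating gain

def click_response(click_times, T=16.0):
    ts = np.arange(0.0, T, dt)
    r_trace = np.zeros_like(ts)
    r, x_ff, x_rec = 0.0, 1.0, 1.0  # rate; ff and recurrent synaptic resources
    for i, t in enumerate(ts):
        stim = 50.0 if any(0.0 <= t - tc < 0.010 for tc in click_times) else 0.0
        inp = J_ff * U * x_ff * stim + J_rec * U * x_rec * r
        r += dt / tau_r * (-r + f(inp))
        x_ff += dt * ((1.0 - x_ff) / tau_d - U * x_ff * stim)
        x_rec += dt * ((1.0 - x_rec) / tau_d - U * x_rec * r)
        r_trace[i] = r
    return [r_trace[int((tc + 0.010) / dt)] for tc in click_times]

# Attenuation of the second click response vs inter-click interval; with
# recurrent depression the response recovery can outlast tau_d itself.
for ici in [0.25, 1.0, 4.0, 12.0]:
    first, second = click_response([1.0, 1.0 + ici])
    print(f"ICI {ici:5.2f} s: second/first response = {second / first:.2f}")
```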

These results suggest that the rate recovery time constant is an emergent property of the network and can increase across the cortical hierarchy. Interestingly, we found that the evoked potentials of EEG signal lasted much longer than the neural responses in PAC, suggesting contributions from other auditory regions. Moreover, different EEG components showed different recovery time constants, suggesting that the recovery time constants change along the auditory pathway. To capture these differences, we extended the recurrent network to model multiple auditory regions, including core, belt and parabelt regions in auditory cortex. We found that belt and parabelt regions had longer response latencies, which would contribute more to the later components of the EEG responses.

Acknowledgments

We thank NIH for supporting all authors with grant 1RF1MH114223.

P77 Predictive principal component analysis

Takuya Isomura 1

1RIKEN Center for Brain Science, Wako, Saitama, Japan

Email: takuya.isomura@riken.jp

Biological organisms and artificial intelligence need to predict the dynamics of newly encountered input signals (test data) based only on the knowledge learned from a limited number of past experiences (training data). However, previous methods either suffer from a large test prediction error or fall into suboptimal solutions. To address this issue, we developed an unsupervised learning scheme that extracts the components most informative for predicting future inputs, called predictive principal component analysis (PredPCA) [1]. It has a simple architecture comprising two parts – one responsible for prediction and one for dimensionality reduction – and can identify the optimal synaptic weight matrices that minimise the test prediction error through convex optimisation. The solution that minimises the test prediction error coincides with the most plausible estimator of the generative process that generates sensory data, meaning that the outcomes of PredPCA offer reliable system identification with guaranteed accuracy. Owing to the asymptotic linearisation theorem [2], although PredPCA employs a linear neural network, it can reliably identify the true parameters of canonical nonlinear generative processes when the hidden state dimensionality is high and the input dimensionality is sufficiently higher than the hidden state dimensionality. Thus, the reliable prediction generalisation and unique system identification guaranteed by convex optimisation are the virtues of PredPCA. We demonstrate that PredPCA can extract hidden features important for predicting subsequent images in previously unseen videos. This scheme is potentially useful for automated driving and medical diagnosis.
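
A minimal numerical sketch of the two-step idea, least-squares prediction of the next input followed by PCA on the predicted inputs, is shown below; the synthetic data, ridge term, and basis choice are simplifying assumptions relative to the published method [1].

```python
import numpy as np

# Minimal PredPCA-style sketch: predict s[t+1] from s[t] by least squares,
# then take principal components of the *predicted* inputs.
rng = np.random.default_rng(0)
T, dim, k = 5000, 30, 3

# Synthetic data: 2-D rotational latent dynamics plus observation noise.
theta = 0.1
A2 = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
x = np.zeros((T, 2)); x[0] = [1.0, 0.0]
for t in range(1, T):
    x[t] = A2 @ x[t - 1]
mix = rng.normal(size=(dim, 2))
s = x @ mix.T + 0.5 * rng.normal(size=(T, dim))

phi, s_next = s[:-1], s[1:]                       # past inputs, future targets
ridge = 1e-3 * np.eye(dim)
Q = np.linalg.solve(phi.T @ phi + ridge, phi.T @ s_next).T   # prediction matrix
s_pred = phi @ Q.T                                # predicted next inputs

# PCA on predictions: principal components of what is *predictable*.
cov = s_pred.T @ s_pred / len(s_pred)
eigval, eigvec = np.linalg.eigh(cov)
W_enc = eigvec[:, ::-1][:, :k]                    # top-k predictive components
encoded = s @ W_enc                               # low-dimensional features
print("variance captured by predictive PCs:", eigval[::-1][:k].round(3))
```

Because the PCA acts on predictions rather than raw data, unpredictable observation noise is suppressed rather than captured, which is the intuition behind the method's generalisation guarantee.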

PredPCA potentially contributes to neuroscience in several ways. First, PredPCA is useful for analysing neural data: feature extraction using PredPCA offers data prediction with high generalisability, reliability, and explainability. Second, the brain may use a PredPCA-like learning rule to extract features. According to the complete class theorem, any neural network that minimises its cost function can be cast as performing variational Bayesian inference [3,4]. Because PredPCA minimises its cost function, it can be cast as Bayesian inference, at least under a particular pair of cost function and prior beliefs. This sort of representation learning can be cast as the dynamics of neural activity and plasticity. Thus, PredPCA can be a model of perceptual learning in the brain. We describe how such a machine learning scheme is closely related to the neural and synaptic dynamics of canonical neural networks. We discuss the possible neuronal and synaptic mechanisms underlying PredPCA-like computation in the brain.

References

1. Isomura T, Toyoizumi T. Dimensionality reduction to maximize prediction generalization capability. Nature Machine Intelligence. 2021 May;3(5):434–46.

2. Isomura T, Toyoizumi T. On the achievability of blind source separation for high-dimensional nonlinear source mixtures. Neural Computation. 2021 May 13;33(6):1433–68.

3. Isomura T, Friston K. Reverse-engineering neural networks to characterize their cost functions. Neural Computation. 2020 Nov 1;32(11):2085–121.

4. Isomura T, Shimazaki H, Friston K. Canonical neural networks perform active inference. bioRxiv. 2020 Jan 1.

P78 A study on recurrent neural networks trained with excitatory-inhibitory constraint

Cecilia Jarne 1

1 National University of Quilmes and CONICET, Departement of Science and Technology, Bernal (Quilmes), Argentina

Email: katejarne@gmail.com

Characterizing the dynamics of recurrent neural networks trained to perform tasks similar to those performed by animals and humans in laboratory experiments is crucial for understanding which connectivity models best predict the behavior of different areas of the brain, such as the cortex, and more specifically the prefrontal cortex [1]. In recent decades, simple models of recurrent neural networks have been successfully used to explain different mechanisms such as decision-making, motor control, or working memory [2]. One aspect generally omitted in these models is that neurons are divided into distinct excitatory and inhibitory units (Dale's law). Building recurrent networks with this characteristic presents several challenges [3]. In the present work, the different dynamical behaviors obtained when training networks with different proportions of excitatory and inhibitory units were analyzed for decision-making tasks. The dynamical behavior, the training performance, and different constraints were studied. The emergent properties of the system were examined by comparing them with the results obtained from models that do not distinguish between excitatory and inhibitory units. We considered the case where the numbers of excitatory and inhibitory units are balanced, and also what happens when this balance is broken.
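
A common way to impose Dale's law in trainable recurrent networks, sketched below, parametrizes the recurrent matrix as a rectified magnitude matrix multiplied by a fixed diagonal sign matrix (cf. [3]); the network size and the 80/20 excitatory-inhibitory split are illustrative assumptions.

```python
import numpy as np

# Sign-constrained recurrent weights: each unit's outgoing connections keep
# a fixed sign, so the trained network obeys Dale's law.
n, frac_exc = 100, 0.8
rng = np.random.default_rng(0)
signs = np.ones(n)
signs[int(frac_exc * n):] = -1.0                 # last 20% inhibitory
D = np.diag(signs)

W_raw = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))  # unconstrained parameters

def dale_weights(W_raw):
    """Effective recurrent weights: columns from excitatory units are
    non-negative, columns from inhibitory units non-positive."""
    return np.maximum(W_raw, 0.0) @ D

W = dale_weights(W_raw)
assert (W[:, :int(frac_exc * n)] >= 0).all()     # excitatory outputs
assert (W[:, int(frac_exc * n):] <= 0).all()     # inhibitory outputs
# During training, gradients flow to W_raw; the rectification keeps signs
# fixed while the magnitudes are learned.
```

Changing `frac_exc` is then a one-line way to study balanced versus unbalanced excitatory-inhibitory proportions of the kind compared in this work.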

References

1. Murphy BK, Miller KD. Balanced amplification: a new mechanism of selective amplification of neural activity patterns. Neuron. 2009 Feb 26;61(4):635–48.

2. Evarts EV. Relation of pyramidal tract activity to force exerted during voluntary movement. Journal of neurophysiology. 1968 Jan;31(1):14–27.

3. Ingrosso A, Abbott LF. Training dynamically balanced excitatory-inhibitory networks. PloS one. 2019 Aug 8;14(8):e0220547.

P79 Detailed biophysical modeling of CA1 pyramidal cells in a mouse model of Alzheimer's disease suggests origin of hyperexcitability

Martin Mittag 1 , Laura Mediavilla 2 , Hermann Cuntz 3 , Peter Jedlicka 1

1 Justus Liebig University Giessen, Interdisciplinary Centre for 3Rs in Animal Research, Giessen, Germany

2 University of Bristol, School of Physiology, Pharmacology and Neuroscience, Bristol, United Kingdom

3 Frankfurt Institute for Advanced Studies (FIAS) & Ernst Strüngmann Institute (ESI), Computational Neuroanatomy, Frankfurt am Main, Germany

Email: martin.mittag@gmx.de

Based on human and animal studies, neuronal hyperexcitability has been identified as one of the hallmarks of Alzheimer’s disease (AD). Accordingly, previous studies in transgenic mice [1] and rats [2] have revealed increased excitability of hippocampal CA1 pyramidal cells (PCs). However, the cause of this hyperexcitability has not yet been fully elucidated. It may result from dendritic atrophy (and its electrotonic consequences), an impaired balance between excitation and inhibition, pathological changes in ion channel expression, or a combination of these mechanisms. Nevertheless, the contribution of these three mechanisms and their interplay with synaptic loss, another hallmark of AD, has remained unclear. Therefore, here we used anatomically and biophysically realistic computational models of CA1 PCs driven by distributed synaptic inputs to test whether dendritic atrophy can account for AD-related hyperexcitability. We performed a computational comparative analysis of passive and active properties using 3D-reconstructed CA1 PC morphologies from wild type (WT) and aged APP/PS1 mice. In agreement with previous computational results [1], we found that, in APP/PS1 mouse morphologies, reduced dendritic length and branching increases the input resistance of modelled CA1 PCs, rendering them electrotonically more compact and more excitable upon somatic current injections. However, due to synapse loss, the CA1 PCs did not display any hyperexcitability in simulations with more natural stimulation in the form of distributed synaptic activation. This is in agreement with our previous findings that dendritic atrophy can contribute to neuronal firing rate homeostasis by compensating for the loss of synaptic inputs [3]. We conclude that dendritic degeneration cannot account for the observed hyperexcitability in AD. Our modeling suggests that other changes, such as excitation-inhibition imbalance and/or altered ion channels, are needed to induce synaptically-driven hyperactivity of CA1 PCs.

References

1. Šišková Z, Justus D, Kaneko H, Friedrichs D, Henneberg N, et al. Dendritic structural degeneration is functionally linked to cellular hyperexcitability in a mouse model of Alzheimer’s disease. Neuron. 2014 Dec 3;84(5):1023–33.

2. Sosulina L, Mittag M, Geis HR, Hoffmann K, Klyubin I, et al. Hippocampal hyperactivity in a rat model of Alzheimer’s disease. Journal of Neurochemistry. 2021 Feb 14.

3. Platschek S, Cuntz H, Vuksic M, Deller T, Jedlicka P. A general homeostatic principle following lesion induced dendritic remodeling. Acta neuropathologica communications. 2016 Dec;4(1):1–1.

P80 Modeling homo- and heterosynaptic plasticity using a new reduced-morphology model of CA1 pyramidal cells

Matúš Tomko *1 , Lubica Benuskova 1 , Peter Jedlicka 2

1 Comenius University, Department of Applied Informatics, Bratislava, Slovakia

2 Justus Liebig University Giessen, Interdisciplinary Centre for 3Rs in Animal Research, Giessen, Germany

Email: matus.tomko@fmph.uniba.sk

Biophysically and anatomically realistic modeling of long-term synaptic plasticity requires computationally demanding simulations. Using a complex model with a complete dendritic tree morphology can be computationally expensive. Therefore, we focused on developing a simplified model of CA1 pyramidal cells, which are involved in learning- and memory-related processes. We used a strategy that combines the reduced morphology from one model with the complex biophysics from another. Using this approach, we created a new hybrid model with reduced morphology [1]. The dendritic tree of the model retains the minimal anatomical properties of the CA1 pyramidal cell, including basal dendrites, apical trunk, oblique dendrites, and apical tuft (Fig. 1). We subjected the model to systematic testing of somatic and dendritic features using HippoUnit, a recently established standardized test suite for CA1 pyramidal cell models [2]. Our model reproduces typical somatic electrophysiological features, depolarization block, attenuation of excitatory postsynaptic potentials, as well as back-propagation of action potentials. The model dendrites are able to generate dendritic spikes in response to synchronous synaptic stimulation. To test the capability of the model to simulate synaptic plasticity, we used a voltage-based implementation of the STDP (spike-timing-dependent plasticity) rule endowed with a fast BCM-like metaplasticity [3,4]. The model stabilized synaptic weights during ongoing spontaneous activity and displayed long-term synaptic plasticity under typical stimulation protocols. Furthermore, we observed heterosynaptic plasticity at unstimulated synapses, the magnitude of which depended on the level of spontaneous activity, the stimulation protocol used, and the dendritic compartment in which it was observed. We conclude that the model is biologically accurate and suitable for capturing the complex experimentally observed patterns of homosynaptic and heterosynaptic plasticity induced by different stimulation protocols.
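
The flavor of such a rule can be sketched as follows; the functional form and all constants are illustrative assumptions, not the exact published rule of [3,4].

```python
# Schematic voltage-based plasticity update with a BCM-like sliding
# threshold, in the spirit of refs [3,4]; values are illustrative only.
eta_ltp, eta_ltd = 1e-4, 1e-4     # learning rates
theta_minus = -65.0               # fixed LTD voltage threshold (mV)
theta_base, alpha = -55.0, 0.01   # sliding-threshold baseline and gain
tau_theta = 10.0                  # metaplasticity time constant (s)

def plasticity_step(w, pre_rate, v_post, post_rate, theta_m, dt):
    """One Euler step of the weight and sliding-threshold dynamics."""
    if v_post > theta_m:          # strong depolarization above threshold: LTP
        w += dt * eta_ltp * pre_rate * (v_post - theta_m)
    elif v_post > theta_minus:    # intermediate depolarization: LTD
        w -= dt * eta_ltd * pre_rate * (v_post - theta_minus)
    # BCM-like metaplasticity: recent postsynaptic activity raises the LTP
    # threshold, which stabilizes weights during ongoing spontaneous activity.
    theta_m += dt / tau_theta * (theta_base + alpha * post_rate ** 2 - theta_m)
    return max(w, 0.0), theta_m
```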

References

1. Tomko M, Benuskova L, Jedlicka P. A new reduced-morphology model for CA1 pyramidal cells and its validation and comparison with other models using HippoUnit. Scientific reports. 2021 Apr 7;11(1):1–6.

2. Sáray S, Rössert CA, Appukuttan S, Migliore R, Vitale P, et al. HippoUnit: A software tool for the automated testing and systematic comparison of detailed models of hippocampal neurons based on electrophysiological data. PLoS computational biology. 2021 Jan 29;17(1):e1008114.

3. Benuskova L, Abraham WC. STDP rule endowed with the BCM sliding threshold accounts for hippocampal heterosynaptic plasticity. Journal of computational neuroscience. 2007 Apr 1;22(2):129–33.

4. Jedlicka P, Benuskova L, Abraham WC. A voltage-based STDP rule combined with fast BCM-like metaplasticity accounts for LTP and concurrent “heterosynaptic” LTD in the dentate gyrus in vivo. PLoS computational biology. 2015 Nov 6;11(11):e1004588.

Fig. 1
figure ar

The morphology of the model (A), representative responses of the model to the positive (B) and negative (C) somatic current injections and the normalized model Z-scores obtained from HippoUnit tests (D). The red vertical line represents SD = 2

P81 Computational dendritic repair mechanism for human and nonhuman neurons based on optimal wiring considerations

Moritz Groden 1 , Hannah Moessinger 2 , Barbara Schaffran 2 , Javier DeFelipe 3 , Ruth Benavides-Piccione 3 , Hermann Cuntz 4 , Peter Jedlicka 5

1 Justus Liebig University Giessen, Institut für medizinische Informatik, Giessen, Germany

2 Ernst Strüngmann Institute, Frankfurt am Main, Germany

3 Instituto Cajal, Consejo Superior de Investigaciones Científicas, Madrid, Spain

4 Frankfurt Institute for Advanced Studies (FIAS) & Ernst Strüngmann Institute (ESI), Computational Neuroanatomy, Frankfurt am Main, Germany

5 Justus Liebig University Giessen, Interdisciplinary Centre for 3Rs in Animal Research, Giessen, Germany

Email: moritzgromail@gmail.com

Investigating the functionality of human neurons remains a challenge due to the scarcity and incompleteness of their 3D anatomical reconstructions. Additionally, accurate human and nonhuman neuronal morphologies are urgently needed for a better understanding of species differences in brain circuits as well as for realistic compartmental modeling. Therefore, here we used a morphological modelling approach based on optimal wiring [1] to repair any parts of a dendritic morphology that were lost during the reconstruction process. Interestingly, our minimum spanning tree-based algorithm regenerated dendritic branches of Drosophila neurons in a manner similar to experimental observations using branch ablation techniques [2]. To validate the repair algorithm for mammalian neurons, we artificially sectioned reconstructed dendrites from mouse and human hippocampal pyramidal cell morphologies [3], and showed that the regrown dendrites were morphologically similar to the original ones (Fig. 1). Moreover, we could recover their electrophysiological functionality as shown by restoration of their firing behavior. Importantly, we show that such repairs can be generalized to other neuron types including hippocampal granule cells and cerebellar Purkinje cells. Such internal validation of the repair algorithm based on sectioning and regrowing of available reconstructions allowed us to extrapolate the repair to incomplete morphologies. We showed this specifically for cases of data from humans where the anatomical delimitations of the particular brain areas innervated by the neurons in question were known. To make the repair tool available to the neuroscientific community, we have developed an intuitive and easy-to-use graphical user interface (GUI) available in the TREES Toolbox (www.treestoolbox.org).
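
The core of the optimal-wiring rule [1] can be sketched as a greedy minimum-spanning-tree construction in which the cost of attaching a point combines wiring length and, weighted by a balancing factor, the conduction path back to the root; the point cloud and parameter values below are illustrative, not the TREES Toolbox implementation itself.

```python
import numpy as np

# Greedy MST growth with a balancing factor bf: attach the unconnected
# target point that minimizes (euclidean distance to tree) + bf * (path
# length from the attachment node back to the root).
def mst_grow(targets, root, bf=0.5):
    nodes = [np.asarray(root, float)]
    parent = [-1]
    path_len = [0.0]                          # path distance from root per node
    remaining = [np.asarray(t, float) for t in targets]
    edges = []
    while remaining:
        best = None
        for i, p in enumerate(remaining):
            for j, q in enumerate(nodes):
                d = np.linalg.norm(p - q)
                cost = d + bf * path_len[j]   # wiring cost + path cost
                if best is None or cost < best[0]:
                    best = (cost, i, j, d)
        _, i, j, d = best
        parent.append(j)
        path_len.append(path_len[j] + d)
        nodes.append(remaining.pop(i))
        edges.append((j, len(nodes) - 1))
    return np.array(nodes), edges

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(40, 3))       # hypothetical carrier points
nodes, edges = mst_grow(pts, root=[0, 0, 0], bf=0.5)   # in the growth volume
```

With bf = 0 this reduces to a plain minimum spanning tree; larger bf favors shorter conduction paths at the expense of total wiring, the trade-off that shapes realistic dendritic trees.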

References

1. Cuntz H, Forstner F, Borst A, Häusser M. One rule to grow them all: a general theory of neuronal branching and its practical application. PLoS computational biology. 2010 Aug 5;6(8):e1000877.

2. Song Y, Ori-McKenney KM, Zheng Y, Han C, Jan LY, et al. Regeneration of Drosophila sensory neuron axons and dendrites is regulated by the Akt pathway involving Pten and microRNA bantam. Genes & development. 2012 Jul 15;26(14):1612–25.

3. Benavides-Piccione R, Regalado-Reyes M, Fernaud-Espinosa I, Kastanauskaite A, Tapia-González S, et al. Differential structure of hippocampal CA1 pyramidal neurons in the human and mouse. Cerebral Cortex. 2020 Feb;30(2):730–52.

Fig. 1
figure as

Example repair of mouse CA1 pyramidal neuron with reference neuron on the left and repaired neuron on the right. Artificially sectioned and repaired dendrites are marked in red with the blue shaded areas being the growth volume. The Sholl distributions for the cut, repaired and reference morphology are shown at the bottom

P82 Model inversion techniques for seizure spread in individual brain networks

Viktor Sip 1 , Viktor Jirsa 2 , Meysam Hashemi 3 , Anirudh Vattikonda 3 , Jayant Jha 3

1 Aix-Marseille Université, Marseille, France

2 INSERM & Aix-Marseille Université, Theoretical Neuroscience Group, Institut de Neurosciences des Systèmes, Marseille, France

3 Aix-Marseille Université, Institut de Neurosciences des Systèmes, Marseille, France

Email: viktor.sip@univ-amu.fr

During focal seizures in epileptic patients, abnormal electrical activity appears in, and can spread through, the brain network. A possible remedy is the surgical resection of the suspected epileptogenic zone localized using intracranial EEG (iEEG). Rigorous computational approaches based on the fusion of individual structural connectomes with iEEG recordings hold promise for improving the localization of the epileptogenic zone and therefore the surgery outcome. Integration of the functional with structural data can be performed in a model-based framework. However, this model inversion poses multiple challenges, both technical and conceptual. In this contribution we provide an overview of our recent efforts [1,2] in this domain and discuss the challenges and possible approaches.

In particular, we consider the choice of the model and compare the complexity, expressivity, and ease of inversion of models based on the Epileptor neural mass [1] with a simplified threshold model [2]. The model of source activity is linked to the observed iEEG activity via the forward projection model, which can affect the identifiability of the parameters and has to be coupled to data preprocessing methods. We continue with the formulation of the problem in a Bayesian framework, and we discuss the choice of the inversion technique, such as Markov chain Monte Carlo sampling or maximum a posteriori estimation. We highlight the importance of the parameterization of the model for the efficiency of the inversion. Finally, we discuss the possibilities for validating a chosen approach, which is also not straightforward given the clinical origin of the data and its associated limitations.
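
As an illustration of the maximum a posteriori option, the toy sketch below inverts a deliberately simplified linear onset-time model with a Gaussian prior; the model structure, noise levels, and random connectome are hypothetical stand-ins for the Epileptor and threshold models discussed above.

```python
import numpy as np
from scipy.optimize import minimize

# Toy MAP inversion: infer regional excitabilities x from noisy "onset"
# observations generated through a structural connectome C.
rng = np.random.default_rng(0)
n = 10
C = rng.uniform(0, 1, (n, n)); np.fill_diagonal(C, 0)   # hypothetical connectome
x_true = rng.normal(0.0, 1.0, n)                        # true excitabilities
onset_obs = C @ x_true + 0.1 * rng.normal(size=n)       # noisy observations

def neg_log_posterior(x, sigma_obs=0.1, sigma_prior=1.0):
    resid = onset_obs - C @ x
    return (resid @ resid) / (2 * sigma_obs ** 2) + (x @ x) / (2 * sigma_prior ** 2)

map_est = minimize(neg_log_posterior, np.zeros(n), method="L-BFGS-B").x
print("correlation with ground truth:", np.corrcoef(map_est, x_true)[0, 1].round(2))
```

MAP estimation returns only a point estimate; the sampling-based alternatives discussed above additionally quantify the posterior uncertainty, at greater computational cost.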

References

1. Hashemi M, Vattikonda AN, Sip V, Guye M, Bartolomei F, et al. The Bayesian Virtual Epileptic Patient: A probabilistic framework designed to infer the spatial map of epileptogenicity in a personalized large-scale brain model of epilepsy spread. NeuroImage. 2020 Aug 15;217:116839.

2. Sip V, Hashemi M, Vattikonda AN, Woodman MM, Wang H, et al. Data-driven method to infer the seizure propagation patterns in an epileptic brain from intracranial electroencephalography. PLoS computational biology. 2021 Feb 17;17(2):e1008689.

P83 A robust spike counter for automatic detection of interictal epileptiform discharges before seizure onset

Swati Banerjee 1 , Marmaduke Woodman 2 , Huifang Wang 2 , Viktor Jirsa 2

1 CNRS & Aix-Marseille Université, Institute de Neurosciences des Systèmes, Marseille, France

2 INSERM & Aix-Marseille Université, Theoretical Neuroscience Group, Institute de Neurosciences des Systèmes, Marseille, France

Email: swatibanerjee@ieee.org

Epilepsy is one of the most common severe neurological disorders, characterized by an increased likelihood of the brain entering seizure states. Prompt and efficient treatment often requires prior knowledge of when and where seizures are likely to occur. Developing prediction strategies is extremely challenging due to the patient-specific causes of seizures and the difficulty of obtaining data from longitudinal studies.

Interictal discharges are transient changes, often observed as spikes in stereotactic EEG (sEEG) recordings before seizure onset. These spikes are usually distinguishable as prominent, sharp, high-amplitude features occurring over a short duration. The source-level activation pattern that causes them, and the associated physiological changes, are often not known. In this work we attempt to understand the underlying physiological phenomenon using an extended Epileptor model connecting the epileptic state with the resting state. The aim is to capture the bursting phenomenon at the source level through the model and translate it up to the sensor level, i.e., the sEEG level. A relative comparison gives insight into the coactivation pattern of the brain regions recruited during the occurrence of a seizure in an epileptic brain.

The simulations were done using the neuroinformatics platform The Virtual Brain (TVB). The structural connectome was constructed using an in-house pipeline for automatic processing of multimodal neuroimaging data based on publicly available neuroimaging tools, customized for TVB with the Virtual Epileptic Patient (VEP) as the parcellation scheme. Once the connectome is obtained, the bursting phenomenon at the source level is simulated using the RS-Epileptor model. To capture this bursting phenomenon, a robust spike estimator was developed for automatic detection of fiducial points, viz. the occurrences of the spikes. A modified Teager-Kaiser operator, or non-negative frequency-weighted energy operator, is used to capture the transient spike patterns and occurrences both at the source and the sEEG sensor level. This is a feasible way of assessing the instantaneous energy of the signal, incorporating both amplitude and frequency features. Once these features are identified, the detection algorithm proceeds with a linear Support Vector Machine (SVM) based two-stage spike-sorting system, which first detects the spikes and then differentiates them from noise. The interictal spikes (IS) are characterized by a brief initial phase with a sharp and strong amplitude, occurring as transitional events that appear either isolated or in bursts. To capture the dynamics of this bursting mechanism, the following scheme is devised: (i) detecting and characterizing IS on each simulated sEEG channel and the simulated regions, (ii) determining the temporal relations between the various channels and the corresponding simulated regions.
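
The energy operator at the heart of the detector can be sketched as follows; the smoothing window and the median-based threshold are illustrative assumptions, not the study's exact settings.

```python
import numpy as np

# Teager-Kaiser energy operator (TKEO), a non-negative frequency-weighted
# energy estimate; spike candidates are flagged where the smoothed TKEO
# crosses a robust data-driven threshold.
def tkeo(x):
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, fs, k=6.0, smooth_ms=5.0):
    e = tkeo(x)
    win = max(1, int(smooth_ms * 1e-3 * fs))
    e = np.convolve(e, np.ones(win) / win, mode="same")   # short smoothing
    med = np.median(e)
    thr = med + k * np.median(np.abs(e - med))            # median + k * MAD
    above = e > thr
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising crossings
    return onsets, e, thr
```

Because the TKEO grows with both amplitude and instantaneous frequency, brief sharp transients stand out against slower background activity, which is why it suits interictal spike detection.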

Interictal spikes are waveforms arising from the synchronous firing of an excitable population of neurons and are considered an abnormal electrical phenomenon when observed at the sEEG level. Interictal discharges are predominantly observed between seizures; hence, they become a complementary source of information in the diagnosis and localization of early seizure onset or, mathematically speaking, act as a prior for the VEP estimation paradigm.

P84 Impact of sodium channel distribution in the axon initial segment on the initiation and backpropagation of action potentials

Benjamin Barlow 1 , André Longtin 1 , Béla Joós 1

1 University of Ottawa, Department of Physics, Ottawa, Canada

Email: bbarlow@uottawa.ca

We are interested in the biophysics of forward and backward propagation of action potentials (APs), as both are important for learning. The axon initial segment (AIS) initiates APs in a variety of neurons. Pyramidal cells contain two types of voltage-gated sodium channel: NaV1.2 (high threshold) and NaV1.6 (low threshold). These channels are nonuniformly distributed along the AIS: the density of NaV1.2 is greatest near the soma, whereas NaV1.6 density peaks further down the AIS, away from the soma [1]. While this distribution is well documented, its purpose remains unclear [2]. Counterintuitively, published simulations suggest that concentrating high-threshold channels near the soma lowers the threshold for backpropagation [1]. We find that this is true when stimulating at the axon. However, our results suggest that the observed distribution increases the backpropagation threshold for somatic stimulation. We discuss the effects of altering AIS length, AIS distance, and specific leak currents.

Acknowledgments

Funded by NSERC (Canada).

References

1. Hu W, Tian C, Li T, Yang M, Hou H, et al. Distinct contributions of Na v 1.6 and Na v 1.2 in action potential initiation and backpropagation. Nature neuroscience. 2009 Aug;12(8):996–1002.

2. Katz E, Stoler O, Scheller A, Khrapunsky Y, Goebbels S, et al. Role of sodium channel subtype in action potential generation by neocortical pyramidal neurons. Proceedings of the National Academy of Sciences. 2018 Jul 24;115(30):E7184-92.

P85 Seizure forecasting from long-term EEG and ECG data using the critical slowing principle

Wenjuan Xiong 1 , Ewan Nurse 2 , Elisabeth Lambert 3 , Tatiana Kameneva 4

1 Swinburne University of Technology, Melbourne, Australia

2 Seer Medical, Melbourne, Australia

3 Swinburne University of Technology, Department of Health and Medical Sciences, Melbourne, Australia

4 Swinburne University of Technology, School of Software and Electrical Engineering, Melbourne, Australia

Email: wxiong@swin.edu.au

Epilepsy is a neurological disorder characterized by recurrent seizures, which are transient symptoms of synchronous neuronal activity in the brain. Epilepsy affects more than 50 million people worldwide [1]. Seizure forecasting allows patients and caregivers to deliver early interventions and prevent serious injuries. Electroencephalography (EEG) has been used to forecast seizure onset, with varying success between participants [2,3]. There is increasing interest in using the electrocardiogram (ECG) to aid seizure forecasting. The neural and cardiovascular systems may exhibit critical slowing, measured by an increase in the variance and autocorrelation of the system, when changing from a normal state to an ictal state [4]. The aim of this study is to use the variance and autocorrelation of long-term continuous EEG and ECG data to forecast seizures.

EEG and ECG data from 16 patients were used for the analysis. The average recording period was 161.9 h, with an average of 9 electrographic seizures per patient. The variance and autocorrelation of the EEG and ECG signals of one electrode were calculated in a 15-s window for each time point. The instantaneous phases of the variance and autocorrelation signals were calculated at each time point using the Hilbert transform. The relationship between seizure onset times and the phase of the variance and autocorrelation signals was investigated in long (6 h) cycles. The probability distribution for seizure occurrence in each signal was determined. A seasonal autoregressive integrated moving average (SARIMA) model was used to forecast the variance and autocorrelation signals. A Bayesian approach was used to combine the probability distributions of seizure occurrence for each time point. The results of forecasting models using critical slowing features, circadian seizure features, and combined critical slowing and circadian features were compared using the receiver-operating characteristic curve.
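
The feature-extraction step can be sketched as follows; the window length, filter design, and band edges are illustrative assumptions rather than the study's exact settings.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

# Critical-slowing features: windowed variance and lag-1 autocorrelation,
# then the instantaneous phase of each feature via the Hilbert transform.
def slowing_features(x, fs, win_s=15.0):
    n = int(win_s * fs)
    n_win = len(x) // n
    var, ac1 = np.empty(n_win), np.empty(n_win)
    for i in range(n_win):
        w = x[i * n:(i + 1) * n] - x[i * n:(i + 1) * n].mean()
        var[i] = w.var()
        ac1[i] = (w[:-1] @ w[1:]) / (w @ w + 1e-12)   # lag-1 autocorrelation
    return var, ac1

def cycle_phase(feature, fs_feat, cycle_hours=6.0):
    """Band-pass the feature around the cycle of interest, then take the
    Hilbert phase at each time point."""
    f0 = 1.0 / (cycle_hours * 3600.0)
    b, a = butter(2, [0.5 * f0, 2.0 * f0], btype="band", fs=fs_feat)
    return np.angle(hilbert(filtfilt(b, a, feature - feature.mean())))

# e.g.: var, ac1 = slowing_features(eeg, fs=400.0)
#       phase = cycle_phase(var, fs_feat=1.0 / 15.0)  # one value per window
```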

The results demonstrated that the best forecaster was patient-specific, and the average area under the curve (AUC) of the best forecaster across patients was 0.68. In 50% of patients, circadian forecasters performed best; the critical slowing forecaster performed best in 19% of patients, and the combined forecaster in 31% of patients. The mean forecasting time was 44.2 min. These results indicate that critical slowing features could be used to forecast seizures. The results of this study may advance the field of seizure forecasting and ultimately lead to improved quality of life for people who suffer from epilepsy.

References

1. Thijs RD, Surges R, O'Brien TJ, Sander JW. Epilepsy in adults. The Lancet. 2019 Feb 16;393(10172):689–701.

2. Cook MJ, O'Brien TJ, Berkovic SF, Murphy M, Morokoff A, et al. Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study. The Lancet Neurology. 2013 Jun 1;12(6):563–71.

3. Karoly PJ, Ung H, Grayden DB, Kuhlmann L, Leyde K, et al. The circadian profile of epilepsy improves seizure forecasting. Brain. 2017 Aug 1;140(8):2169–82.

4. Scheffer M, Bascompte J, Brock WA, Brovkin V, Carpenter SR, et al. Early-warning signals for critical transitions. Nature. 2009 Sep;461(7260):53–9.

P86 Inferring inter-columnar connectivity from sparse activity data

Linus Lauer 1 , Christian Tetzlaff 2 , David Kappel 1

1 Georg August University Göttingen, Third Institute of Physics - Biophysics, Ebergötzen, Germany

2 University of Göttingen, Göttingen, Germany

Email: l.lauer@stud.uni-goettingen.de

Advances in brain imaging techniques have enabled us to acquire detailed datasets of neural activity. But while activity is easy to measure, connectivity is still hard to observe directly and often has to be inferred from activity data. To do so, large amounts of neural recordings are necessary to reconstruct the connectome, which makes this process costly and time-consuming. Here, we present a new method for inferring connectivity from sparse activity by using synthetic data to pretrain a model for inferring connection strengths. We demonstrate our approach on recordings from the rodent barrel cortex, which processes tactile information and consists of many interconnected, anatomically confined cortical columns. The connectome inside a single cortical column has been studied for decades, and its microcircuits and connectivity are well known. However, the connectivity between multiple columns, which gives rise to the observed detailed dynamics, is not well understood. We use a mean-field cortical column model that reduces individual neurons to a network of neuron populations [1] to produce barrel-cortex-like activity data. This approximation leads to a model which qualitatively reproduces the activity observed in experimental measurements while being numerically inexpensive. To initialize our model and validate our results, we use experimental data from anaesthetized adult rats, obtained from in-vivo experiments [2]. We then compared two different methods in their ability to infer connectivity, one of which is a modified version of FORCE learning [3] acting on recurrent neural networks; an overview of this approach is shown in Fig. 1. As in the original FORCE approach, learning is performed through changes in connection strengths inside the network; however, connections to read-out units are constant. To provide the recurrent chaotic dynamics needed by the FORCE approach, a higher number of units is used in the FORCE network than in the mean-field model. Additionally, the network was further divided into sub-networks, each with a corresponding target function generated using the mean-field model. We adjusted the learning rule to better represent the biological setting: our modified FORCE respects Dale's law, and the output is restricted to positive values. A technique in this context is successful if experimental datasets can be reproduced and predicted using the generated connectome. We find that FORCE learning with the additional constraints can accurately replicate the neuron population activity typically encountered in the mean-field model. We also observed convergence of the generated connection matrix over multiple learning runs with randomly generated starting conditions. In ongoing work, we compare these results with connection matrices inferred using a deep learning approach to assess the stability and reproducibility of our modified FORCE learning model.
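
For orientation, the sketch below shows the original readout-training form of FORCE learning with recursive least squares [3]; the modified variant described above instead adjusts internal connections under Dale's-law and positivity constraints, which this toy code does not reproduce.

```python
import numpy as np

# Minimal FORCE sketch (after Sussillo & Abbott, 2009): recursive least
# squares adjusts readout weights online so that the fed-back output
# tracks a target function.
rng = np.random.default_rng(0)
N, dt, tau, g = 300, 1e-3, 0.01, 1.5
J = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # chaotic recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)                   # fixed feedback weights
w = np.zeros(N)                                    # trained readout weights
P = np.eye(N)                                      # RLS inverse correlation
x = 0.5 * rng.normal(size=N)

ts = np.arange(0.0, 10.0, dt)
target = np.sin(2.0 * np.pi * ts)                  # 1 Hz target function
for i, t in enumerate(ts):
    r = np.tanh(x)
    z = w @ r                                      # network output
    x += dt / tau * (-x + J @ r + w_fb * z)        # dynamics with feedback
    if i % 2 == 0:                                 # RLS update every 2 steps
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w -= (z - target[i]) * k                   # error-driven update
```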

References

1. Schwalger T, Deger M, Gerstner W. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLoS computational biology. 2017 Apr 19;13(4):e1005507.

2. Reyes-Puerta V, Kim S, Sun JJ, Imbrosci B, Kilb W, et al. High stimulus-related information in barrel cortex inhibitory interneurons. PLoS computational biology. 2015 Jun 22;11(6):e1004121.

3. Sussillo D, Abbott LF. Generating coherent patterns of activity from chaotic neural networks. Neuron. 2009 Aug 27;63(4):544–57.

Fig. 1
figure at

Overview of the method used to infer the trans-columnar connectivity of a multi-column cortical model. Our modified FORCE-learning approach receives input from a mean-field model and adjusts connectivity between columns to replicate a set of target functions supplied by experimental data. The magnification box (taken from [1]) shows the structure of a cortical column in the mean-field approach

P87 Approximating transient dynamics of hippocampal ripple oscillations in an inhibitory network model

Natalie Schieferstein 1 , Tilo Schwalger 2 , Benjamin Lindner 3 , Richard Kempter 1

1 Humboldt University in Berlin, Institute for Theoretical Biology, Berlin, Germany

2 Technische Universität Berlin, Institute of Mathematics, Berlin, Germany

3 Humboldt University in Berlin, Physics, Berlin, Germany

Email: natalie.schieferstein@bccn-berlin.de

Hippocampal ripple oscillations have long been implicated in important cognitive functions such as memory consolidation [1]. Several generating mechanisms have been proposed, some relying on excitation, some on inhibition as the main pacemaker of ripples. The inhibitory models can be further subdivided into perturbation-based [2] and bifurcation-based models [3,4]. While all the above model classes can produce oscillations in the ripple-band (140–220 Hz), only the bifurcation-based inhibitory model has been shown to also reproduce the experimentally observed intra-ripple frequency accommodation (IFA) – an asymmetry in the instantaneous network frequency in response to transient, sharp wave-like stimulation [4,5; Fig. 1].

Here we provide a mechanistic explanation for the occurrence of IFA in bifurcation-based inhibitory ripple models, using a theoretical mean-field approach. We start with a simplified spiking network of leaky integrate-and-fire units that are fully connected via delayed inhibitory pulse coupling. All units receive independent white noise and the same excitatory drive, which is thought to mimic the input to CA1 coming from the CA3 Schaffer collaterals. It has been shown that for high enough drive this network undergoes a bifurcation from a stationary to an oscillatory regime [6]. To address IFA we need to a) approximate the highly non-linear oscillation dynamics for constant drive beyond the bifurcation and b) understand how the response to transient, sharp wave-like drive relates to those cyclo-stationary dynamics.

Assuming large enough constant drive, we take the frozen-noise limit and approximate the density of membrane potentials (i.e., the solution of the associated Fokker–Planck equation) as a Gaussian with time-dependent mean. In this framework we can analytically approximate the frequency and amplitude of the network oscillation as a function of excitatory drive. We show that for a wide parameter regime (spanned by noise intensity, coupling strength, reset potential, synaptic delay) this ansatz provides a good approximation of the cyclo-stationary dynamics beyond the bifurcation. It captures the transition of the network from a regime of sparse, irregular synchrony to full synchrony as the excitatory drive increases. This transition comes with a monotonic increase in the amplitude of the oscillation in the mean membrane potential. We demonstrate that, given transient, sharp wave-like drive, IFA results from a speed-dependent hysteresis effect in the amplitude of the oscillatory mean membrane potential. Since this finding is largely independent of specific parameter choices, it establishes IFA as an inherent feature of the bifurcation-based inhibitory model. Conversely, we find that the perturbation-based inhibitory model cannot exhibit IFA without additional parameter tuning. The present work thus highlights the importance of considering transient ripple dynamics, such as IFA, to guide the selection of the true generating mechanism of ripple oscillations.
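
A toy version of the simplified spiking network, with illustrative parameters rather than those of refs [4,6], can be set up as follows.

```python
import numpy as np

# Toy inhibitory LIF network: full connectivity with delayed inhibitory
# pulse coupling, independent noise, and a transient sharp-wave-like drive.
rng = np.random.default_rng(0)
N, dt = 1000, 1e-4
tau_m, v_th, v_reset = 0.01, 1.0, 0.0
delay_steps = int(0.0012 / dt)            # 1.2 ms synaptic delay (assumed)
J = -0.35 / N                             # inhibitory kick per incoming spike
sigma = 0.4

T = 0.4
steps = int(T / dt)
t = np.arange(steps) * dt
drive = 1.2 + 2.0 * np.exp(-0.5 * ((t - 0.2) / 0.04) ** 2)  # sharp-wave-like

v = rng.uniform(0, 1, N)
spike_buffer = np.zeros(steps + delay_steps)   # delayed spike counts
pop_rate = np.zeros(steps)
for i in range(steps):
    rec = J * spike_buffer[i]             # delayed recurrent inhibition
    v += dt / tau_m * (drive[i] - v) + rec \
         + sigma * np.sqrt(dt / tau_m) * rng.normal(size=N)
    fired = v >= v_th
    v[fired] = v_reset
    spike_buffer[i + delay_steps] += fired.sum()
    pop_rate[i] = fired.sum() / (N * dt)
# The instantaneous frequency of the pop_rate oscillation during the drive
# can then be extracted to look for the IFA asymmetry discussed above.
```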

References

1. Buzsáki G. Two-stage model of memory trace formation: a role for “noisy” brain states. Neuroscience. 1989 Jan 1;31(3):551–70.

2. Malerba P, Krishnan GP, Fellous JM, Bazhenov M. Hippocampal CA1 ripples as inhibitory transients. PLoS computational biology. 2016 Apr 19;12(4):e1004880.

3. Brunel N, Wang XJ. What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. Journal of neurophysiology. 2003 Jul;90(1):415–30.

4. Donoso JR, Schmitz D, Maier N, Kempter R. Hippocampal ripple oscillations and inhibition-first network models: frequency dynamics and response to GABA modulators. Journal of Neuroscience. 2018 Mar 21;38(12):3124–46.

5. Sullivan D, Csicsvari J, Mizuseki K, Montgomery S, Diba K, et al. Relationships between hippocampal sharp waves, ripples, and fast gamma oscillation: influence of dentate and entorhinal cortical activity. Journal of Neuroscience. 2011 Jun 8;31(23):8605–16.

6. Brunel N, Hakim V. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural computation. 1999 Oct 1;11(7):1621–71.

Fig. 1
figure au

Intra-ripple frequency accommodation (IFA) in simulated inhibitory spiking network: Given transient, sharp wave-like drive (green, bottom), the population rate (blue) responds with a transient ripple-like oscillation that has an asymmetric instantaneous frequency (top panel, white line)

P88 Bayesian mechanics in the brain: A continuous-state formulation of active inference

Chang Sub Kim 1

1 Chonnam National University, Department of Physics, Gwangju, South Korea

Email: cskim@jnu.ac.kr

We present our recent efforts on a continuous-state formulation of active inference in the brain [1,2], which attempts to undergird the free energy principle (FEP) in neuroscience [3]. Our goal is to make the FEP a more rigorous formalism by implementing FE minimization based on the principle of least action [4]. Consequently, we cast the neural implementation of variational Bayes under the FEP as an effective Hamilton's equation of motion in continuous time, invoking Bayesian mechanics (BM) in the brain. The ensuing BM prescribes the dynamics of the brain states and their conjugate momenta in neural phase space; the momentum variable represents the discrepancy between the environmental dynamics and the brain's internal model of it. We also present a simple agent-based model of the brain performing integration of the BM to demonstrate our framework.

The FEP stipulates that all viable organisms perceive and behave in the natural world by calling forth probabilistic models in their neural system, the brain, in a manner that ensures their adaptive fitness [3]. We consider that the brain continually confronts sensory streams and conducts Bayesian inversion, inferring external causes using continuous state representations. We formulate a plausible computational implementation of the FEP by postulating that the informational FE, an upper bound on surprisal, plays the role of a Lagrangian in theoretical mechanics [4]. Accordingly, we furnish a variational scheme by which the brain updates its internal model and acts on the external world by minimizing the sensory uncertainty, which is a long-term surprisal over time [2].

The prescribed BM is subject to a time-dependent signal arising from the prediction errors at the sensory level on the sensorimotor loop, which serves as the motor command. To this extent, the BM bears a resemblance to the motor-control equations derived from Pontryagin’s maximum principle in optimal control theory [5]. By numerically integrating the Bayesian equations of motion for the considered parsimonious model, we illustrate the brain’s transient trajectories in continuous time, performing active perception of the causes of nonstationary sensory stimuli [1]. The steady-state solution of the BM reveals an attractor about which stationary limit cycles form, which suggests that the brain undergoes nonequilibrium transit between spontaneous state and aware state upon sensory perturbations.

References

1. Kim CS. Bayesian mechanics of perceptual inference and motor control in the brain. Biological Cybernetics. 2021 Feb;115(1):87–102.

2. Kim CS. Recognition dynamics in the brain under the free energy principle. Neural computation. 2018 Oct;30(10):2616–59.

3. Friston K. The free-energy principle: a unified brain theory?. Nature reviews neuroscience. 2010 Feb;11(2):127–38.

4. Landau LD, Lifshitz EM. Mechanics: Course of theoretical physics. Volume 1. 3rd edition. Amsterdam: Elsevier; 1976.

5. Todorov E. Optimal control theory. Bayesian brain: probabilistic approaches to neural coding. 2006:268–98.

P89 Constructing a cortical column model from the local field potentials in the auditory cortex in awake monkeys

Vincent S. C. Chien 1 , Yonatan I. Fishman 2 , Burkhard Maess 3 , Thomas R. Knösche 1

1 Max Planck Institute for Human Cognitive and Brain Sciences, Brain Networks, Leipzig, Germany

2 Albert Einstein College of Medicine, Departments of Neurology and Neuroscience, New York, NY, United States of America

3 Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

Email: knoesche@cbs.mpg.de

How auditory evoked responses (e.g., P1, N1, P2) in EEG/MEG are generated in the cortex is still poorly understood. One approach is to employ biological neural models to interpret the underlying network mechanisms. However, existing models targeting this question (e.g., the Human Neocortical Neurosolver [1]) are not constrained by other recorded neural activities, such as local field potentials (LFPs), which can potentially lead to biased interpretation. In this study, we investigate the generation of the evoked responses by constructing a rate-based cortical column model constrained by LFPs from multi-contact electrode recordings. The electrodes recorded laminar neural activities in response to 60 dB SPL, 200-ms duration pure tones at best-frequency (BF) sites in the primary auditory cortex (A1) of awake monkeys (a total of 11 sites, each with 16 laminar depths). Since the LFPs reflect the activities of various types of excitatory (E) and inhibitory neurons, such as parvalbumin-expressing interneurons (PV), somatostatin-expressing interneurons (SOM), and vasoactive-intestinal-peptide-expressing neurons (VIP), we include several neural populations in different layers (E, PV, and SOM in layer 2/3; E in layer 4; E and PV in layer 5/6) in the column model. The model's state variables include the firing rates, postsynaptic potentials (PSPs), and synaptic efficacies reflecting short-term plasticity (STP). The model parameters include network connection strengths, synaptic time constants, and STP rates. We fitted the column model to the laminar profiles of multi-unit activity (MUA) and current source density (CSD) derived from the recorded LFPs. The fitting procedure was implemented in the VBA toolbox [2] to find the best parameters using the variational Bayes algorithm. To explore plausible solutions, we randomly selected starting parameter sets in a reasonable range of the parameter space (2000 samples at each recording site). The preliminary fitting results suggest that the diverse CSDs at different recording sites can be transformed into the product of diverse CSD spatial profiles with relatively consistent patterns of firing rates. So far, we have demonstrated the applicability of our column model for estimating population-level neural interactions from LFP data. The model simulations also suggest that the current sources and sinks indicated by the CSD result from multiple transmembrane current flows. Future work will be concerned with the interpretation of fitted parameters, the choice of priors and constraints for computational efficiency of fitting, and fitting across multiple recording sites.
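
The CSD profile referred to above is commonly estimated as the (sign-inverted) second spatial derivative of the laminar LFP; a sketch with an assumed electrode spacing and homogeneous tissue conductivity follows.

```python
import numpy as np

# Standard second-spatial-derivative CSD estimate from equally spaced
# laminar LFPs; spacing and conductivity values are assumptions.
def csd_second_derivative(lfp, spacing_mm=0.1, sigma=0.3):
    """lfp: (n_channels, n_times) array, channels ordered by depth.
    Returns CSD for the interior channels (arbitrary units)."""
    h = spacing_mm * 1e-3
    d2 = lfp[2:] - 2.0 * lfp[1:-1] + lfp[:-2]         # finite difference
    return -sigma * d2 / h ** 2

lfp = np.random.default_rng(0).normal(size=(16, 1000))  # 16 laminar depths
csd = csd_second_derivative(lfp)                        # shape (14, 1000)
```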

References

1. Neymotin SA, Daniels DS, Caldwell B, McDougal RA, Carnevale NT, et al. Human Neocortical Neurosolver (HNN), a new software tool for interpreting the cellular and network origin of human MEG/EEG data. Elife. 2020 Jan 22;9:e51214.

2. Daunizeau J, Adam V, Rigoux L. VBA: a probabilistic treatment of nonlinear models for neurobiological and behavioural data. PLoS computational biology. 2014 Jan 23;10(1):e1003441.

P90 The effect of ephaptic coupling on signal transmission in peripheral nerve bundles

Helmut Schmidt 1 , Thomas R. Knösche 1

1 Max Planck Institute for Human Cognitive and Brain Sciences, Brain Networks, Leipzig, Germany

Email: hschmidt@cbs.mpg.de

Ephaptic coupling effects in parallel nerve fibers have been observed experimentally since the early 1940s [1,2]. These are characterized by the synchronization and slowing down of action potentials, a phenomenon that has been reproduced in modelling studies based on biophysically realistic models [3,4]. The latter, however, preclude the theoretical study of ephaptic coupling effects in nerve fibers with large numbers of axons. Here, we present a spike-propagation model (SPM) that sheds excessive biophysical detail in favor of computational efficiency, without losing the essential features of propagating action potentials and their ephaptic interaction.

The SPM describes an action potential by its position on the axon and its velocity. The velocity is primarily defined by intrinsic features of the axons, such as diameter and myelination status, but it is also modulated by changes in the extracellular potential. These changes are due to transmembrane currents that generate an action potential. Within the SPM framework, this change of extracellular potential is modelled by a coupling function that is derived from passive axonal properties. In the absence of external perturbations, an action potential propagates with the velocity intrinsic to the axon. In the presence of external perturbations, the resulting change in the velocity is appropriately described by a linearized coupling function, which is calibrated with a biophysical model.
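
A toy reduction of this scheme, with an assumed odd Gaussian kernel standing in for the calibrated linearized coupling function and illustrative parameter values throughout, is sketched below.

```python
import numpy as np

# Toy spike-propagation model: each action potential (AP) is reduced to a
# position advancing at its axon's intrinsic velocity, perturbed by a
# coupling term depending on distances to APs on neighbouring fibres.
rng = np.random.default_rng(0)
n_axons = 200
v0 = rng.normal(10.0, 1.0, n_axons)      # intrinsic velocities (m/s)
pos = rng.normal(0.0, 0.002, n_axons)    # initial AP positions (m)
eps, width = 2000.0, 0.001               # coupling strength (1/s), range (m)
dt, steps = 1e-6, 5000

for _ in range(steps):
    d = pos[None, :] - pos[:, None]      # d[i, j] = pos_j - pos_i
    # APs lagging behind neighbours speed up, leading APs slow down:
    kernel = eps * d * np.exp(-d ** 2 / (2 * width ** 2))
    np.fill_diagonal(kernel, 0.0)
    pos += (v0 + kernel.mean(axis=1)) * dt
print("AP position spread (mm):", 1e3 * pos.std())  # shrinks as APs synchronize
```

In this toy, increasing `eps` (fiber density/activity) against the spread of `v0` (diameter heterogeneity) reproduces the qualitative competition between synchronization and heterogeneity described below.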

The efficiency of the SPM allows us to systematically study peripheral nerve bundles with a large number of axons. We find that fiber density and the number of active fibers are critical for the emergence of synchronization between action potentials and their slowing down. The transition from asynchrony to synchrony is characterized by a phase transition that occurs at a critical fiber density and activity level. This transition is counteracted by the heterogeneity of the fiber bundle, specifically by the heterogeneity of fiber diameters. We study different distributions of fiber diameters and identify corresponding critical values for the transition to synchrony. In addition, we compare our results with previous results obtained for fiber bundles in the central nervous system [5], where ephaptic coupling has no synchronizing effect and accelerates signal transmission.

References

1. Katz B, Schmitt OH. Electric interaction between two adjacent nerve fibres. The Journal of physiology. 1940 Feb 14;97(4):471–88.

2. Arvanitaki A. Effects evoked in an axon by the activity of a contiguous one. Journal of neurophysiology. 1942 Mar 1;5(2):89–108.

3. Bokil H, Laaris N, Blinder K, Ennis M, Keller A. Ephaptic interactions in the mammalian olfactory system. Journal of neuroscience. 2001 Oct 15;21(20):RC173.

4. Reutskiy S, Rossoni E, Tirozzi B. Conduction in bundles of demyelinated nerve fibers: computer simulation. Biological cybernetics. 2003 Dec;89(6):439–48.

5. Schmidt H, Hahn G, Deco G, Knösche TR. Ephaptic coupling in white matter fibre bundles modulates axonal transmission delays. PLoS Computational Biology. 2021 Feb 8;17(2):e1007858.

P91 Neural circuits of human prediction error computation across valences and tasks

Jessica Mollick 1 , Philip Corlett 1 , Hedy Kober 1

1 Yale University, Psychiatry, New Haven, CT, United States of America

Email: jessica.mollick@yale.edu

Many neuroimaging studies have examined reward prediction errors (PEs), focusing on dopamine-rich brain regions that encode PEs [1]. Systematic approaches that combine results across these studies will improve our understanding. To examine brain regions responding to dimensions of PE across studies, we used coordinate-based meta-analysis, specifically multi-level kernel density analysis (MKDA; [2]), to analyze data from 263 papers and 464 contrasts representing 6,454 participants, as shown in Fig. 1.

Both computational modeling work and experiments on PE have considered whether regions encoding PEs respond to both unexpected rewards and violations of beliefs in tasks without explicit rewards [3]. To examine this, we used a conjunction analysis to look for regions computing PEs in reward tasks as well as in perceptual and cognitive tasks without explicit rewards, finding a core PE circuit including midbrain, insula, and striatum. There was also specialization for different PE types, such that perceptual PEs recruited visual and parietal areas, and social PEs recruited dorsomedial prefrontal cortex (dmPFC) more consistently than non-social PEs did.

Predictive coding theories suggest that precision, the reliability of statistical estimates, influences the contribution of PEs to learning [4]. A conjunction analysis of signed and precision-weighted (unsigned) PEs revealed striatum, parietal lobe, supplementary motor area (SMA), and frontal eye field. Comparing the two, signed PEs had more consistent activity in midbrain, striatum, medial PFC and cingulate regions, while precision-weighted PEs had more consistent activity in cerebellum, dorsolateral PFC, dmPFC, SMA, distinct insula and cingulate regions, and parietal and temporal regions.

Recent theories of PE propose that some circuits encode value, increasing for appetitive and decreasing for aversive outcomes, while others capture salience, increasing for both valences [5]. We examined salience using a conjunction of appetitive and aversive valence PEs, which revealed midbrain, striatum, and insula. However, a meta-contrast analysis found that distinct regions of striatum and midbrain responded more consistently to aversive PEs than appetitive, consistent with recent evidence [6].

Overall, we show a core circuit in the midbrain, striatum, and insula that responds to PEs across valences and tasks as well as distinct regions for more specialized computations, such as social and perceptual inferences. This has important implications for theories of PE.

Acknowledgements

Supported by R01DA043690 (to HK) and R21MH116258 and R01MH12887 (to PC).

References

1. Schultz W. Behavioral dopamine signals. Trends in neurosciences. 2007 May 1;30(5):203–10.

2. Kober H, Wager TD. Meta‐analysis of neuroimaging data. Wiley Interdisciplinary Reviews: Cognitive Science. 2010 Mar;1(2):293–300.

3. Gardner MP, Schoenbaum G, Gershman SJ. Rethinking dopamine as generalized prediction error. Proceedings of the Royal Society B. 2018 Nov 21;285(1891):20181645.

4. Friston KJ, Shiner T, FitzGerald T, Galea JM, Adams R, et al. Dopamine, affordance and active inference. PLoS computational biology. 2012 Jan 5;8(1):e1002327.

5. Bromberg-Martin ES, Matsumoto M, Hikosaka O. Dopamine in motivational control: rewarding, aversive, and alerting. Neuron. 2010 Dec 9;68(5):815–34.

6. Lammel S, Lim BK, Ran C, Huang KW, Betley MJ, et al. Input-specific control of reward and aversion in the ventral tegmental area. Nature. 2012 Nov;491(7423):212–7.

Fig. 1

a Core PE circuit, all tasks: insula, midbrain, striatum. b Perceptual: visual regions, cognitive: mPFC, striatum, social > non-social: vmPFC, dmPFC. c Signed PE: mPFC, striatum, Precision-weighted: distinct insula, dmPFC regions. d Conjunction of appetitive & aversive PE: midbrain, striatum, insula. e Aversive PE compared to appetitive showed distinct striatal, midbrain, PFC & insula regions

P92 Precise spike-timing can be achieved by increasing inhibitory input

Tomas Barta 1 , Lubomir Kostal 1

1 Institute of Physiology, Czech Academy of Sciences, Laboratory of Computational Neuroscience, Prague, Czechia

Email: tomas.barta@fgu.cas.cz

Neurons receive a stream of random excitatory and inhibitory inputs arising from the background network activity, leading to fluctuations of the neuron's membrane potential [1–3]. Experimentally, it has been observed that evoked inhibitory input to the neuron may decrease its membrane potential fluctuations, despite the mean value of the membrane potential remaining unchanged [1]. However, the evoked inhibitory input (paired with an evoked excitatory input, necessary to keep the mean membrane potential unchanged) leads to an increase in the total synaptic noise and the synaptic current fluctuations. We provide a theoretical explanation for this observation and analyze its effect on the neuronal firing variability.

We used single-compartment neuronal models to show that evoked inhibitory input decreases the membrane potential fluctuations if the signal-to-noise ratio of the input scales more slowly than the square of the input intensity, a condition that is implicitly satisfied for Poisson shot noise. Moreover, we show that in order to reproduce this behavior in neural models, reversal potentials and synaptic filtering have to be included in the model of the synaptic input.
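To illustrate why reversal potentials and synaptic filtering matter, the sketch below simulates a passive membrane driven by exponentially filtered excitatory and inhibitory Poisson conductances; all parameter values are hypothetical and chosen only to display the qualitative effect, not those of the published model [5].

```python
import numpy as np

# Passive membrane with filtered excitatory/inhibitory Poisson conductances
# and reversal potentials; parameters are illustrative placeholders.
rng = np.random.default_rng(0)
dt, T = 0.05, 5000.0                      # time step and duration (ms)
n = int(T / dt)
C, gL, EL = 1.0, 0.05, -70.0              # capacitance, leak, rest (mV)
Ee, Ei = 0.0, -80.0                       # synaptic reversal potentials (mV)
tau_e, tau_i = 3.0, 10.0                  # synaptic filter time constants (ms)

def vm_std(rate_e, rate_i, we=0.002, wi=0.006):
    # rates in events/ms; jumps on Poisson arrivals, exponential decay
    je = rng.poisson(rate_e * dt, n)
    ji = rng.poisson(rate_i * dt, n)
    ge = gi = 0.0
    v = EL
    vs = np.empty(n)
    for k in range(n):
        ge += we * je[k] - ge * dt / tau_e
        gi += wi * ji[k] - gi * dt / tau_i
        v += dt * (-gL * (v - EL) - ge * (v - Ee) - gi * (v - Ei)) / C
        vs[k] = v
    return vs[n // 5:].std()              # discard the initial transient

# Raising both rates (more inhibition paired with excitation) increases the
# total conductance, which can shrink Vm fluctuations despite noisier currents.
print(vm_std(rate_e=2.0, rate_i=1.0))     # low-input condition
print(vm_std(rate_e=8.0, rate_i=4.0))     # high-input condition
```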

To clarify the effects on spike-firing regularity, we used models with different spike-firing adaptation (SFA) mechanisms. When SFA was implemented through ionic currents or not at all, higher levels of inhibition led to lower firing regularity, despite the decreased membrane potential fluctuations. On the other hand, we observed that evoked inhibition may lead to more regular firing (while keeping the mean firing rate unchanged), if the neuron exhibits a dynamic spike firing threshold (Fig. 1). See [5] for the published version of the presented work.

References

1. Matsumura M, Cope T, Fetz EE. Sustained excitatory synaptic input to motor cortex neurons in awake animals revealed by intracellular recording of membrane potentials. Experimental brain research. 1988 May;70(3):463–9.

2. Rudolph M, Pospischil M, Timofeev I, Destexhe A. Inhibition determines membrane potential dynamics and controls action potential generation in awake and sleeping cat cortex. Journal of neuroscience. 2007 May 16;27(20):5280–90.

3. Steriade M, Timofeev I, Grenier F. Natural waking and sleep states: a view from inside neocortical neurons. Journal of neurophysiology. 2001 May 1;85(5):1969–85.

4. Monier C, Chavane F, Baudot P, Graham LJ, Frégnac Y. Orientation and direction selectivity of synaptic inputs in visual cortical neurons: a diversity of combinations produces spike tuning. Neuron. 2003 Feb 20;37(4):663–80.

5. Barta T, Kostal L. Regular spiking in high-conductance states: The essential role of inhibition. Physical Review E. 2021 Feb 18;103(2):022408.

Fig. 1

The increase in inhibitory input (A) leads to an increase in the fluctuations of the synaptic current (B), but decreases the fluctuations of the membrane potential of a non-spiking membrane (C). The evoked inhibition decreases the firing regularity in the model with M-current SFA (D), but increases the firing regularity in a model with dynamic threshold (E)

P93 Cortical thickness as predictor of performance enhancement in complex real-time strategy game training

Anna Kovbasiuk 1 , Natalia Jakubowska 2 , Nikodem Hryniewicz 3 , Rafał Prusinowski 2 , Aneta Brzezicka 2 , Natalia Kowalczyk-Grębska 2

1 SWPS University of Social Sciences and Humanities, Warsaw, Poland

2 SWPS University of Social Sciences and Humanities, Faculty of Psychology, Warsaw, Poland

3 Nalecz Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences, CNS Lab, Warsaw, Poland

Email: akovbasiuk@st.swps.edu.pl

Studies have revealed that outcomes of complex video game (VG) training can be predicted by individual differences in demographic and behavioral characteristics [1]. However, there is still no consensus on the effectiveness, settings, and benefits of VG training for particular subjects. Therefore, researchers have used measures of individual differences in neuroanatomy to shed light on the inconsistent results. Most studies in this domain have used the voxel-based morphometry (VBM) method to obtain neuroanatomical measures of grey matter volume [e.g., 2, 3, 4]. Surface-based morphometry (SBM) measures such as cortical thickness (CT) provide a better differentiation of tissue boundaries [5], but only one study has used them to predict complex VG skill acquisition [6]. That study revealed that CT of the lingual gyrus (LG) can be a significant predictor of first-person shooter VG learning [6].

In our research we have concentrated on predicting VG skill learning from CT in a game with different mechanics, a real-time strategy game. We selected regions of interest from previous studies that could be investigated using the SBM method, such as the LG [6], the medial frontal gyrus, and the anterior cingulate cortex [4]. This study provides important evidence of the usefulness of SBM measures for the prediction of complex VG learning. We hope that our study and future reports will allow researchers to better adjust training regimes for esport professionals, create personalized rehabilitation programmes, and explain the theoretical underpinnings of neuroplasticity after complex VG training.

Acknowledgments

We thank the National Science Center for supporting this research with Grant OPUS (2016/23/B/HS6/03843).

References

1. Arthur Jr W, Strong MH, Jordan JA, Williamson JE, Shebilske WL, et al. Visual attention: Individual differences in training and predicting complex task performance. Acta Psychologica. 1995 Feb 1;88(1):3–23.

2. Erickson KI, Boot WR, Basak C, Neider MB, Prakash RS, et al. Striatal volume predicts level of video game skill acquisition. Cerebral Cortex. 2010 Nov 1;20(11):2522–30.

3. Kowalczyk N, Skorko M, Dobrowolski P, Kossowski B, Mysliwiec M, et al. Lenticular nucleus volume predicts performance in real-time strategy game-cross-sectional and training approach using voxel-based morphometry. bioRxiv. 2020 Jan 1.

4. Basak C, Voss MW, Erickson KI, Boot WR, Kramer AF. Regional differences in brain volume predict the acquisition of skill in a complex real-time strategy videogame. Brain and cognition. 2011 Aug 1;76(3):407–14.

5. Lemaitre H, Goldman AL, Sambataro F, Verchinski BA, Meyer-Lindenberg A, et al. Normal age-related brain morphometric changes: nonuniformity across cortical thickness, surface area and gray matter volume? Neurobiology of aging. 2012 Mar 1;33(3):617.e1–9.

6. Momi D, Smeralda C, Sprugnoli G, Ferrone S, Rossi S, et al. Acute and long-lasting cortical thickness changes following intensive first-person action videogame practice. Behavioural brain research. 2018 Nov 1;353:62–73.

P94 Chaotic dynamics introduce discrete responses and show high susceptibility

Tomoki Kurikawa 1

1 Kansai Medical University, Osaka, Japan

Email: kurikawt@hirakata.kmu.ac.jp

A neural circuit is highly recurrent and shows rich internal dynamics. These internal dynamics interact with external stimuli to generate their neural representation. How such a representation emerges and how it relates to the internal dynamics are important questions for understanding neural processing. Random recurrent neural networks are basic substrates for answering these questions by virtue of their simplicity. However, the behaviors of these models are quite simple, and neurons in a biological neural circuit are not randomly connected but organized into a somewhat structured network. To clarify the relation between internal dynamics, network structure, and response, "low-rank" networks, such as Hopfield networks and reservoir networks with feedback, have been studied. Still, it remains unclear how a structured network with multiple memorized items generates response behaviors.

To investigate this point, we present a structured network model composed of inputs and their representation patterns, connected through their pseudo-inverse matrix. The response of this network to the input can be described analytically for an arbitrary input strength, a major advantage over previous models. Using this model, we identified three response regimes depending on the gain parameter of the activation function and the number of stored inputs (load factor): continuous response, discrete response, and no-response regimes. The continuous regime appears for smaller gain parameters and load factors, wherein the analytically described response is a stable fixed point for any input strength; as the input strength increases, the response increases continuously. Secondly, in the discrete response regime, for larger gain and load factor, the described response becomes unstable and chaotic dynamics emerge; the response surges discretely to the maximum value at a critical input strength. Finally, for much larger gain parameters and load factors, the no-response regime appears, where the response does not increase sufficiently even for strong input.
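A minimal sketch of such a pseudo-inverse-based network is given below; the network size, gain, rate dynamics, and overlap readout are illustrative assumptions rather than the analysed model.

```python
import numpy as np

# Sketch: connectivity built from the pseudo-inverse of stored patterns,
# rate dynamics with a tanh nonlinearity. Sizes and gain are illustrative.
rng = np.random.default_rng(1)
N, P = 200, 10                                 # neurons, stored patterns
xi = np.sign(rng.standard_normal((P, N)))      # +/-1 representation patterns
W = np.linalg.pinv(xi) @ xi                    # projection: W @ xi[m] = xi[m]

def response(gain, strength, steps=500, dt=0.1):
    u = xi[0]                                  # input aligned with pattern 0
    x = 0.1 * rng.standard_normal(N)
    for _ in range(steps):
        x += dt * (-x + np.tanh(gain * (W @ x + strength * u)))
    return (x @ xi[0]) / N                     # overlap with the target pattern

for s in (0.0, 0.2, 0.5, 1.0):
    print(f"input strength {s}: overlap {response(gain=2.0, strength=s):.2f}")
```

Sweeping the gain and the number of stored patterns in a sketch like this is one way to probe how the continuous, discrete, and no-response regimes described above could manifest.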

We focused on the computational functions in these regimes: susceptibility against input strength and learning speed for a new item. The susceptibility takes the highest value in the discrete response regime. At the same time, the fastest learning is achieved. Thus, the chaotic dynamics in the discrete regime provide the best computational ability.

Recent experimental studies have observed discrete responses as the input strength changes in auditory and olfactory cortices. Interestingly, we found that random neural networks and low-rank networks did not produce such a discrete response, indicating an important role of the pseudo-inverse matrix in the discrete response. In a previous study, we also demonstrated that the pseudo-inverse matrix can be shaped through a simple learning rule requiring only local information (i.e., pre- and postsynaptic neural activities). Taken together, these results suggest that the discrete responses observed in several cortical areas reflect high computational ability and are based on the pseudo-inverse matrix.

Acknowledgements

We thank JSPS KAKENHI (20H00123) for supporting this work.

P95 Reward and state prediction error signals in cortico-striatal circuitry of obsessive–compulsive disorder

Taekwan Kim *1 , Jun Soo Kwon 2

1 Seoul National University, Brain and Cognitive Sciences, Seoul, South Korea

2 Seoul National University College of Medicine, Department of Psychiatry, Seoul, South Korea

Email: takwan99@snu.ac.kr

Obsessive–compulsive disorder (OCD) is characterized by over-reliance on the habitual control system [1]. This bias toward habits is considered to produce unbalanced decision arbitration between goal-directed (model-based, MB) and habitual (model-free, MF) learning strategies in OCD [2]. Although previous literature has demonstrated dysfunctional reward prediction error (RPE) signals in fronto-striatal circuitry in OCD [3], little is known about how neural signals encoding the RPE and the state prediction error (SPE) are disrupted in the dynamics of the decision arbitration between MB and MF systems in OCD.

We acquired functional magnetic resonance imaging data from thirty patients with OCD and thirty-one healthy controls. We used a sequential two-choice Markov decision task to dissociate MB and MF systems, together with a reinforcement-learning computational model developed to estimate the arbitration process between the two learning strategies [4]. Within this computational framework of dynamic competition between the two systems, we estimated the RPE and updated the state-action value using the SARSA algorithm, and we estimated the SPE and updated the state-action value using a learning algorithm employing FORWARD learning and BACKWARD planning [4]. We tested group differences in the neural signals encoding prediction errors between patients and healthy controls, and analyzed the correlation between hit rate and prediction errors within patients.

Patients with OCD had greater negative RPE than healthy controls (t = -3.08, p = 0.003) during MB-favored trials, while SPE was comparable between groups. Hit rate was lower in patients than in healthy controls when the MB system was favored (U = 271.0, p = 0.003). Within patients, greater negative RPE was associated with lower hit rate (r = 0.89, p < 0.001). We found neural correlates of the RPE signal in the bilateral nucleus accumbens and of the SPE signal in the bilateral insula. Compared to healthy controls, patients had hypoactivated regions encoding the RPE signal in the right dorsolateral prefrontal cortex (dlPFC; MNI [52, 42, 22], cluster pFDR < 0.001) and the left dlPFC (MNI [-36, 32, 38], cluster pFDR < 0.001). In conclusion, we demonstrated that unbalanced decision arbitration in OCD was attributable to enhanced negative RPE, but not SPE, and that a hypoactive dlPFC signal in cortico-striatal circuitry underlay the erroneous prediction in the reward-based learning strategy in OCD (Fig. 1).
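For concreteness, here is a toy sketch of the two error signals, following the general form of the arbitration framework in [4]; the state space, parameters, and the example transition are hypothetical.

```python
import numpy as np

# Toy sketch: SARSA reward prediction error (model-free system) and state
# prediction error with a FORWARD transition-model update. All numbers are
# hypothetical placeholders, not the fitted task model.
nS, nA = 5, 2
Q = np.zeros((nS, nA))                  # model-free state-action values
T = np.full((nS, nA, nS), 1.0 / nS)     # learned transition model
alpha, eta, gamma = 0.2, 0.2, 0.9       # learning rates and discount

def step_updates(s, a, r, s_next, a_next):
    # SARSA reward prediction error and value update
    rpe = r + gamma * Q[s_next, a_next] - Q[s, a]
    Q[s, a] += alpha * rpe
    # State prediction error, SPE = 1 - T(s, a, s'), and FORWARD update
    spe = 1.0 - T[s, a, s_next]
    T[s, a] *= (1.0 - eta)              # decay all transition estimates
    T[s, a, s_next] += eta              # row stays normalised
    return rpe, spe

# One illustrative transition: state 0, action 1 -> state 3, reward 1
rpe, spe = step_updates(s=0, a=1, r=1.0, s_next=3, a_next=0)
print(f"RPE = {rpe:.2f}, SPE = {spe:.2f}")
```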

Acknowledgments

This research was supported by the National Research Foundation of Korea (grant nos. 2019R1A2B5B03100844 and 2018R1A4A1025891).

References

1. Gillan CM, Papmeyer M, Morein-Zamir S, Sahakian BJ, Fineberg NA, et al. Disruption in the balance between goal-directed behavior and habit learning in obsessive–compulsive disorder. American Journal of Psychiatry. 2011 Jul;168(7):718–26.

2. Voon V, Derbyshire K, Rück C, Irvine MA, Worbe Y, et al. Disorders of compulsivity: a common bias towards learning habits. Molecular psychiatry. 2015 Mar;20(3):345–52.

3. Hauser TU, Iannaccone R, Dolan RJ, Ball J, Hättenschwiler J, et al. Increased fronto-striatal reward prediction errors moderate decision making in obsessive–compulsive disorder. Psychological medicine. 2017 May;47(7):1246–58.

4. Lee SW, Shimojo S, O’Doherty JP. Neural computations underlying arbitration between model-based and model-free learning. Neuron. 2014 Feb 5;81(3):687–99.

Fig. 1

Neural underpinnings of impaired reward prediction error in patients with OCD

P96 Computational modeling of age-dependent tonic inhibition in the cerebellar granule cells in a network context

Sungho Hong 1 , Erik De Schutter 1 , Jae Kwon 2 , Sunpil Kim 2 , C. Justin Lee 2

1 Okinawa Institute of Science and Technology, Computational Neuroscience Unit, Okinawa, Japan

2 Institute for Basic Science, Center for Cognition and Sociality, Daejon, South Korea

Email: shhong@oist.jp

GABA is a dominant mediator of inhibitory signaling between neurons and plays critical roles in neural network functions. Experimental studies have reported diverse forms and origins of GABA-mediated inhibition. One of them is tonic inhibition mediated by extra-synaptic GABA receptors, arising from distinct sources such as slow spillover of GABA from synaptic to extra-synaptic regions and GABA release from glial cells [1–3]. Notably, the developmental process can regulate the underlying mechanisms of tonic inhibition and change which one dominates during maturation [4]. However, the causes and functional impacts of such a shift remain poorly understood.

In this study, we addressed this question with intracellular recording experiments and computational modeling of tonic inhibition in the principal neurons of the cerebellar cortex, the granule cells. Experimental data showed a significant decrease in the spontaneous inhibitory postsynaptic current (sIPSC) and in the neuronal activity-dependent component of the tonic inhibitory current (TIC) from adolescent (P21–28) to adult (P56–96) animals, while the total TIC remained the same. We built models of granule cell inhibition for each age group based on these data, and then integrated them into a large-scale network model of the cerebellar granular layer [5].

Our analysis of the simulated data showed that the global network activity, shaped by the excitatory granule cell-inhibitory interneuron loop, depends significantly on how much the activity-dependent component contributes to tonic inhibition. Therefore, the different compositions of tonic inhibition at different developmental stages can result in distinct encoding of external inputs by the cerebellar granule cells in the network, despite similar levels of overall tonic inhibition in individual cells. We also created models based on data from animals with a genetic knockout of the glial Bestrophin 1 channel, which is mainly responsible for the activity-independent tonic inhibition [2,3]. With network simulations of those models, we investigated how the network activity depends on various parameters, such as the synaptic conductance and the conductance of the activity-independent tonic inhibition. Our study can help us understand how developmental changes in tonic inhibition impact the cerebellar neural network, in relation to age-dependent changes in motor behavior across adolescence.
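A minimal sketch of this decomposition of the tonic conductance is given below; the conductance values, the linear dependence on interneuron rate, and the 10 Hz reference point are hypothetical placeholders, not fitted model parameters.

```python
# Sketch: total tonic GABA conductance onto a granule cell split into an
# activity-independent (glial) component and an activity-dependent
# (spillover) component that scales with the local interneuron rate.
# All numbers are hypothetical; the two age groups are constructed to have
# equal totals at a 10 Hz reference rate but different compositions.

def tonic_conductance(golgi_rate_hz, g_glial, k_spillover):
    """Total tonic inhibitory conductance (nS) onto a granule cell."""
    return g_glial + k_spillover * golgi_rate_hz

for label, g_glial, k in [("adolescent", 0.2, 0.04), ("adult", 0.5, 0.01)]:
    for rate in (5.0, 10.0, 30.0):
        g = tonic_conductance(rate, g_glial, k)
        print(f"{label:10s} interneuron rate {rate:4.0f} Hz -> tonic g = {g:.2f} nS")
```

Even with matched totals at the reference rate, the two compositions diverge as network activity changes, which is the mechanism by which the same overall tonic inhibition can yield different network-level encoding.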

References

1. Farrant M, Nusser Z. Variations on an inhibitory theme: phasic and tonic activation of GABA A receptors. Nature Reviews Neuroscience. 2005 Mar;6(3):215–29.

2. Lee S, Yoon BE, Berglund K, Oh SJ, Park H, et al. Channel-mediated tonic GABA release from glia. Science. 2010 Nov 5;330(6005):790–6.

3. Woo J, Min JO, Kang DS, Kim YS, Jung GH, et al. Control of motor coordination by astrocytic tonic GABA release through modulation of excitation/inhibition balance in cerebellum. Proceedings of the National Academy of Sciences. 2018 May 8;115(19):5004–9.

4. Wall MJ, Usowicz MM. Development of action potential‐dependent and independent spontaneous GABAA receptor‐mediated currents in granule cells of postnatal rat cerebellum. European Journal of Neuroscience. 1997 Mar;9(3):533–48.

5. Sudhakar SK, Hong S, Raikov I, Publio R, Lang C, et al. Spatiotemporal network coding of physiological mossy fiber inputs by the cerebellar granular layer. PLoS computational biology. 2017 Sep 21;13(9):e1005754.

P97 Conduction delays in myelinated axons with variable nodal and internodal lengths

Afroditi Talidou 1 , Jeremie Lefebvre 1

1 University of Ottawa, Department of Biology, Ottawa, Canada

Email: atalidou@uottawa.ca

Oligodendrocytes, a type of glial cell insulating axons in the central nervous system, are the targets of immune attacks in demyelinating diseases such as multiple sclerosis. Oligodendrocytes create myelin, a lipid-rich substance surrounding axons that influences the conduction velocity of electrical impulses by enabling saltatory conduction. Delays, determined by conduction velocities, should coincide to achieve simultaneous signalling in the neuronal network (synchrony). The mechanism by which oligodendrocytes determine the quantity of myelin needed to secure synchrony remains unclear. In this project, we study the influence of the geometry of myelinated axons (variable lengths of nodes of Ranvier and myelin sheaths) on conduction delays between neurons.
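As a back-of-envelope illustration of how variable nodal and internodal lengths shape delays, the sketch below sums per-segment delays over sampled geometries; the rule "velocity proportional to internode length" is a common first-order approximation used here for illustration, not the authors' model.

```python
import numpy as np

# Total delay = sum of per-segment delays, with node and internode lengths
# drawn from distributions. All distributions and speeds are illustrative.
rng = np.random.default_rng(2)
n_seg = 100
internode_um = rng.normal(100.0, 15.0, n_seg).clip(min=20.0)   # internode lengths
node_um = rng.normal(1.5, 0.3, n_seg).clip(min=0.5)            # node lengths

v_per_um = 0.05   # mm/ms of velocity per um of internode length (assumed)
v_node = 0.5      # conduction speed within a node (mm/ms, assumed)

v_internode = v_per_um * internode_um                  # per-internode velocities
delay_ms = (internode_um / 1000.0 / v_internode).sum() \
         + (node_um / 1000.0 / v_node).sum()
length_mm = (internode_um + node_um).sum() / 1000.0
print(f"length {length_mm:.2f} mm, delay {delay_ms:.2f} ms, "
      f"mean velocity {length_mm / delay_ms:.2f} mm/ms")
```

Re-sampling the length distributions in a sketch like this shows directly how geometric variability translates into delay variability across axons.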

P98 Recurrent connectivity controls the ability of inhibitory synaptic plasticity to produce E/I co-tuning

Emmanouil Giannakakis 1 , Anna Levina 2

1 Eberhard Karls University of Tübingen, Computer Science, Tübingen, Germany

2 University of Tübingen, Tübingen, Germany

Email: giannakakismanos@gmail.com

For many years, the idea of a ‘blanket of inhibition’ that modulates excitatory currents on average was nearly universally accepted. However, recent experimental and theoretical findings have demonstrated evidence for, and benefits of, excitatory/inhibitory co-tuning [1]. This, in turn, opens questions about how such co-tuning can emerge. The experimental observation of STDP in inhibitory synapses [2], along with relevant theoretical studies [3], suggests that synaptic plasticity mechanisms can generate E/I co-tuning. Still, studies of the ability of inhibitory plasticity to generate detailed E/I co-tuning have focused on feedforward networks with distinct input currents that are virtually free of the noise and cross-correlations that may disrupt the tuning process. Cortical networks, however, rarely exhibit such architectures and are typically characterized by high levels of noise and recurrent connectivity. Our study examines the ability of a standard inhibitory plasticity rule [3], which has been shown to produce E/I co-tuning in feedforward networks, to tune inhibitory connections to match static, tuned excitatory connectivity under realistic levels of noise and recurrent connectivity in the presynaptic neurons.
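For reference, a sketch of the standard inhibitory plasticity rule of [3] is given below: a symmetric, trace-based STDP rule whose depression bias drives the postsynaptic neuron toward a target rate. The spike trains and parameter values are illustrative.

```python
import numpy as np

# Inhibitory STDP after Vogels et al. [3]: on each presynaptic spike,
# w += eta * (x_post - alpha); on each postsynaptic spike, w += eta * x_pre,
# where x_* are exponentially decaying spike traces and alpha = 2*rho0*tau
# sets the postsynaptic target rate rho0. Parameters are illustrative.
eta, tau, rho0 = 1e-3, 20.0, 0.005     # learning rate, trace tau (ms), target (kHz)
alpha = 2.0 * rho0 * tau               # depression bias term

def update_weights(pre_spikes, post_spikes, w, dt=0.1, T=1000.0):
    x_pre = np.zeros_like(w)           # one trace per inhibitory synapse
    x_post = 0.0                       # postsynaptic trace
    for _ in np.arange(0.0, T, dt):
        x_pre -= x_pre * dt / tau
        x_post -= x_post * dt / tau
        pre = pre_spikes()             # boolean array, one entry per synapse
        post = post_spikes()           # boolean
        x_pre += pre
        x_post += post
        w += eta * pre * (x_post - alpha)   # update on presynaptic spikes
        if post:
            w += eta * x_pre                # update on a postsynaptic spike
    return np.clip(w, 0.0, None)            # inhibitory weights stay >= 0

rng = np.random.default_rng(3)
w0 = np.full(10, 0.1)
pre = lambda: rng.random(10) < 0.002       # ~20 Hz Poisson per synapse (dt = 0.1 ms)
post = lambda: rng.random() < 0.001        # ~10 Hz postsynaptic spiking
print(update_weights(pre, post, w0))
```

The loss function mentioned below can be read off this rule: its minimum depends on the covariance between presynaptic inputs and the postsynaptic response, which is exactly what recurrence and shared noise perturb.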

We find that noise and unstructured recurrent connectivity can significantly reduce the ability of inhibitory synaptic plasticity to produce E/I co-tuning (Fig. 1). We trace this phenomenon to the covariance structure of inputs which affects the loss function of the inhibitory learning rule. We make a theoretical investigation of a reduced rate neuron model, and then compare predictions from it with the behaviour of a large complex network of LIF neurons. We subsequently investigate which types of pre-synaptic connectivity can restore the desired input statistics for E/I tuning to emerge. We find that clustering of the pre-synaptic connections (increased connectivity within each input group) can create the appropriate input statistics for E/I tuning to emerge even in the presence of strong pre-synaptic noise.

Our findings suggest that despite the negative effects that noise and recurrent connectivity can have on the ability of inhibitory plasticity to tune inhibitory connections, these effects can be effectively mitigated by the topology of the presynaptic network. Thus, we suggest that a combined effect of connectivity and plasticity allows E/I co-tuning to emerge in networks with biologically plausible levels of noise and realistic connectivity structures.

References

1. Tao HW, Li YT, Zhang LI. Formation of excitation-inhibition balance: inhibition listens and changes its tune. Trends in neurosciences. 2014 Oct 1;37(10):528–30.

2. Froemke RC. Plasticity of cortical excitatory-inhibitory balance. Annual review of neuroscience. 2015 Jul 8;38:195–219.

3. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science. 2011 Dec 16;334(6062):1569–73.

Fig. 1

A Network schema with in-group (green) and between-group (orange) connections. B E and I weights are aligned in a tuned network and uncorrelated in an untuned network. C An increase in global recurrence destroys tuning. D Without recurrence/noise the learning rule’s loss is at a minimum when the network is tuned (purple dot), but recurrence moves the minimum away from the tuned state (olive dot)

P99 Entorhinal modules as graph-learning systems

Marcus Lewis 1

1 Numenta, San Francisco, CA, United States of America

Email: mrcslws@gmail.com

The hippocampal formation is thought to learn spatial maps of environments, and in many models this learning process consists of forming a sensory association for each location in the environment. This is inefficient, akin to learning a large lookup table for each environment. Spatial maps can be learned much more efficiently if the maps instead consist of arrangements of sparse environment parts. In this work, we approach spatial mapping as a problem of learning graphs of environment parts. Each node in the learned graph, represented by hippocampal engram cells, is associated with feature information in lateral entorhinal cortex (LEC) and location information in medial entorhinal cortex (MEC). Each edge in the graph (Fig. 1) represents the relationship between two parts, and it is associated with coarse displacement information. Thus, the model uses a hybrid approach to storing spatial information, learning ambiguous grid cell locations of environment parts and also learning coarse displacements between those parts. The two complement each other: the grid cells provide fine-grained resolution that augments the coarse displacements, while the coarse displacements disambiguate the grid cells so that a single module is sufficient for unambiguously representing locations. Using this graph approach, environments can be learned with just a few associations, and the graph can be formed nearly instantly by attending to each of the environment parts.

This arrangement-of-parts model offers interesting perspectives on multiple hippocampal phenomena. First, it suggests that each entorhinal module runs an independent mapping system, rather than requiring the modules to work together to represent unambiguous locations. Second, it suggests a reason why grid cells seem to track viewed locations, as that information is exactly what should be associated with nodes in the graph. Third, it offers an explanation for grid cell distortions, suggesting that they occur because the animal fits idealized parts onto actual environment features; based on this insight we use empirical grid cell data to reconstruct the idealized maps that could lead to such distortions. Fourth, this view explains why hippocampal engram cells are often classified as place cells, suggesting that they actually represent a node in a graph which the animal can attend to from many locations. Fifth, the core idea of associating arbitrary information with nodes and edges is not inherently spatial, so this graph-based view of processing in the hippocampal formation can expand to incorporate non-spatial tasks.

Our model shows that hippocampal modules may dynamically create graphs representing spatial arrangements, and it opens up new ways of understanding how animals make rapid spatial and non-spatial inferences.
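A minimal data-structure sketch of this node-and-edge map is given below; the field types, the string feature labels, and the coarse displacement binning are illustrative assumptions, not the model's actual representations.

```python
from dataclasses import dataclass, field

# Sketch: each node (an engram) binds a feature code (LEC) to an ambiguous
# grid phase (MEC); each edge stores only a coarse displacement between two
# parts. Types and resolutions below are illustrative assumptions.

@dataclass
class Node:
    feature: str                 # LEC feature association
    grid_phase: tuple            # ambiguous location within one grid module

@dataclass
class Edge:
    src: int                     # index of the source node
    dst: int                     # index of the destination node
    coarse_disp: tuple           # displacement binned to ~1 grid period

@dataclass
class EnvironmentMap:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

# Learning an environment from a few attended parts:
env = EnvironmentMap()
env.nodes += [Node("doorway", (0.2, 0.7)), Node("lamp", (0.9, 0.1))]
env.edges.append(Edge(0, 1, coarse_disp=(2, 0)))   # lamp ~2 periods east
print(len(env.nodes), "nodes,", len(env.edges), "edge")
# Fine-grained location = coarse displacement (disambiguates the module)
# + grid phase (adds sub-period resolution).
```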

Fig. 1

Each graph node is a distributed engram cell representation of an environment part. It is associated with information about that part, shown as solid lines. Nodes are associated with other nearby nodes, and spatial displacement vectors are learned for pairs of nodes. This spatial information is computed by detecting transformations between object-vector cells and grid cells

P100 Dynamical origin for winner-take-all competition in a biological network of the hippocampal dentate gyrus

Sang-Yoon Kim 1 , Woochang Lim 2

1 Daegu National University, Institute for Computational Neuroscience and Department of Science Education, Daegu, South Korea

2 Daegu National University of Education, Institute for Computational Neuroscience and Department of Science Education, Daegu, South Korea

Email: wclim@icn.re.kr

We consider a biological network of the hippocampal dentate gyrus (DG). The DG is a pre-processor for pattern separation, which facilitates pattern storage and retrieval in the CA3 area of the hippocampus. The main encoding cells in the DG are the granule cells (GCs), which receive input from the entorhinal cortex (EC) and send their output to CA3. We note that the activation degree of the GCs is very low (~5%); this sparsity has been thought to enhance pattern separation. We investigate the dynamical origin of the winner-take-all (WTA) competition which leads to sparse activation of the GCs. The GCs are grouped into lamellar clusters; in each GC cluster, there is one inhibitory (I) basket cell (BC) along with the excitatory (E) GCs.

There are three kinds of external inputs to the GCs: the direct excitatory EC input, the indirect inhibitory EC input mediated by the hilar perforant path-associated (HIPP) cells, and the excitatory input from the hilar mossy cells (MCs). The firing activities of the GCs are determined via competition between the external E and I inputs. The time-averaged ratio of the external E to I conductances, RE-I(con)(t), represents well the degree of this external E-I input competition. We find that GCs become active when their RE-I(con)(t) is larger than a threshold Rth*, and the mean firing rates of the active GCs are strongly correlated with RE-I(con)(t). In each GC cluster, the feedback inhibition of the BC selects the winner GCs: GCs with RE-I(con)(t) larger than the threshold Rth* survive and become winners, while all the other GCs with smaller RE-I(con)(t) become silent. In this way, WTA competition occurs via competition between the firing activity of the GCs and the feedback inhibition from the BC in each GC cluster. In this case, the hilar MCs are found to play the role of enhancing the WTA competition.
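The selection step can be illustrated with a small sketch; here the threshold is set by a quantile purely for illustration, whereas in the model it emerges from the basket cell feedback inhibition, and the ratio values and rate gain are hypothetical.

```python
import numpy as np

# Sketch of the winner-take-all readout: within one cluster, a granule cell
# survives the feedback inhibition only if its time-averaged external E/I
# conductance ratio exceeds a threshold. Values below are illustrative.
rng = np.random.default_rng(4)
n_gc = 20
R_EI = rng.lognormal(mean=0.0, sigma=0.3, size=n_gc)   # E/I ratio per GC
R_th = np.quantile(R_EI, 0.95)        # threshold chosen here to give ~5% winners

winners = np.where(R_EI > R_th)[0]
print(f"active GCs: {winners} ({len(winners)}/{n_gc}, sparse activation)")
# Winner firing rates grow with how far the ratio exceeds the threshold:
rates = np.clip(R_EI - R_th, 0.0, None) * 50.0         # illustrative gain (Hz)
print("winner rates (Hz):", np.round(rates[winners], 1))
```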

P101 Optimal stimulation of spiking neuron using reinforcement learning: Single neuron study

Sai Kalyan Ranga Singanamalla 1 , Ashlesha Akella 2 , Chin-Teng Lin 2

1 University of Technology Sydney, Sydney, Australia

2 University of Technology Sydney, School of Computer Science, Sydney, Australia

Email: saikalyanranga.singanamalla@student.uts.edu.au

Neurostimulation is a process of treating neurological diseases by inducing electrical activity in specific brain regions in order to recover lost functionality. In the context of Brain-Computer Interfaces (BCI), a few studies have used external stimulation to bypass signal transmission from one region to another, to induce the formation of new synapses between neurons, or to bridge two regions via an implanted chip. To note a few examples, damage to the nerves connecting the motor cortex to muscles can be compensated by functional electrical stimulation (FES) devices, which detect motor activity and deliver external electrical pulses to muscle cells [1]. In another study, implanted chips were used to establish an artificial connection between two neuronal areas by detecting activity in one region and triggering another [2]. Recent developments have shown the possibility of replacing lost circuitry with a silicon neural network, an embedded VLSI circuit. These devices bridge the information flow between regions.

Existing studies often deliver an activity-dependent stimulus (triggered by activity in another region), and such stimulation either delivers the stimulus at a fixed frequency or uses a chip designed to mimic the spiking patterns of the lesioned region. However, delivery of a fixed stimulus pattern is not an optimal approach, and a chip designed to mimic certain regional patterns requires pre-lesion data, which is not practical for all applications. Therefore, this preliminary study proposes to use Reinforcement Learning (RL) to overcome these limitations and find optimal stimulation patterns at the single-neuron level. A Leaky Integrate-and-Fire (LIF) spiking neuron model was considered as the environment, and Double Deep-Q-Learning (acting as the stimulator) was applied to find the action sequence (i.e., stimulus patterns) such that a desired spiking pattern is produced by the neuron (Fig. 1). For each spike pattern to be produced, the Deep Q Network identifies the optimal input spike stimulation to be delivered. Future directions of this study include expanding the current approach to the network level.
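A minimal sketch of the stimulation environment is shown below, with a random policy standing in for the Double DQN; the neuron constants, pulse size, reward scheme, and target pattern are illustrative assumptions.

```python
import numpy as np

# Sketch: a leaky integrate-and-fire neuron as an RL environment. The binary
# action (inject a current pulse or not) is chosen each step and rewarded
# when the neuron's spiking matches a desired pattern. A trained Double DQN
# would replace the random policy; all parameters are illustrative.
class LIFEnv:
    def __init__(self, dt=1.0, tau=20.0, v_th=1.0, pulse=0.3):
        self.dt, self.tau, self.v_th, self.pulse = dt, tau, v_th, pulse
        self.v = 0.0

    def step(self, action, target_spike):
        self.v += self.dt * (-self.v / self.tau) + self.pulse * action
        spike = self.v >= self.v_th
        if spike:
            self.v = 0.0                          # reset after a spike
        reward = 1.0 if bool(spike) == bool(target_spike) else -1.0
        state = np.array([self.v, float(action), float(target_spike)])
        return state, reward

rng = np.random.default_rng(5)
env = LIFEnv()
target = (np.arange(200) % 25 == 0)               # desired 40 Hz regular pattern
total = 0.0
for t in range(200):
    action = rng.integers(0, 2)                   # random-policy placeholder
    _, r = env.step(action, target[t])
    total += r
print("random-policy reward:", total)             # a trained DQN should exceed this
```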

Fig. 1

A Overview of the system architecture, where the DQN acts as an external stimulator to the LIF neuron. Neuron properties such as current, voltage and spikes, together with past information on action and reward, constitute the state for RL; actions are either spike or no-spike. B Output of the LIF neuron produced by optimal stimulation from the DQN, compared to the desired pattern

P102 A reinforcement learning approach to model evidence accumulation of decision making

Ashlesha Akella 1 , Sai Kalyan Ranga Singanamalla 2 , Chin-Teng Lin 1

1 University of Technology Sydney, School of Computer Science, Sydney, Australia

2 University of Technology Sydney, Sydney, Australia

Email: saikalyanranga.singanamalla@student.uts.edu.au

Decision making is a fundamental function of animals in their daily tasks. Evidence accumulation is a well-regarded paradigm for studying the neurological bases of the decision-making process. It involves integrating evidence from past stimuli towards or against a choice until a decision is made. Towards this task, studies have performed visual stimulus-based experiments on rodents using a T-maze task [1]. Here, a series of visual cues, say Left and Right cues in different proportions, are presented to the rat for a few milliseconds each. After the set of stimuli, the rat takes a Left/Right turn at the T junction and receives a reward (e.g., water). The objective for the rat is to keep track of the Left vs Right stimuli and to take the corresponding turn (decision), i.e., Left or Right. This experiment is also often used to study working memory, as the decision-making outcome occurs at a timescale larger than the individual neuronal timescale. Studies have shown that decision accuracy declines as the difference in the number of left vs right cues (D-LR) becomes smaller.

Despite these studies, the processes of working memory in decision making remain largely unknown. Therefore, a biologically inspired computational model mimicking the behavior could help unveil these processes. To this end, a rate-based recurrent neural network (RNN) model was trained using Reinforcement Learning (RL) to solve the T-maze task. As in the existing experiments, the Left vs Right cues were presented as step input currents to the input layer, which were processed by the RNN, and a final readout layer output the model's decision. The RNN was trained at a D-LR of 0.8, i.e., either the Left or the Right cue comprised 80% of the stimulus stream in each trial. The trained model was tested at different fractions of D-LR, and the model's behavior resembled the actual rat experiments, in which decision accuracy increases with D-LR in a sigmoid fashion (Fig. 1), as in [2]. In addition, an animal can perform the task continuously on longer time scales: neural activity resets automatically between two consecutive T-maze tasks, so the cues of the first task do not affect the cues of the following tasks. Such reset behavior is often ignored in modelling studies. The model used in this work was also able to reset the RNN activity after each task and make new, independent decisions. We trained an agent on two consecutive T-maze tasks (Fig. 1) and tested its performance on 100 consecutive T-maze tasks; the agent made the correct decision in all 100 tasks.

At the start of a T-maze task, a random cue is chosen as the dominant cue. Each cue is given at random time points for 850 ms. After a delay of 250 ms, the mean output activity over 100 ms is computed, and the output node with the higher mean determines the action taken by the agent. When the agent chooses a correct action, it is rewarded with +3, and with -1 otherwise. The agent was trained for 300 episodes with a D-LR of 0.8, after which it was tested on two-sequence T-maze tasks at different D-LR values (0.55–1) for 20 episodes. Figure 1C shows the mean across these 20 episodes. To assess reproducibility, we trained the agent with 3 different seeds; Fig. 1D shows the mean of the rewards (solid line) and the standard deviation (shaded region).
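For concreteness, the cue stream and reward rule described above can be sketched as follows; the cue duration, delay, and reward values follow the text, while the cue count, array layout, and regular cue spacing are illustrative assumptions.

```python
import numpy as np

# Sketch of one trial: cues drawn with dominant-side fraction D-LR and
# presented as step currents on two input channels; the agent earns +3 for
# choosing the dominant side and -1 otherwise. Cue count is illustrative.
rng = np.random.default_rng(6)

def make_trial(d_lr=0.8, n_cues=8, dt=1.0, cue_ms=850, delay_ms=250):
    dominant = rng.integers(0, 2)                     # 0 = Left, 1 = Right
    sides = np.where(rng.random(n_cues) < d_lr, dominant, 1 - dominant)
    steps = int((n_cues * cue_ms + delay_ms) / dt)
    inp = np.zeros((steps, 2))                        # [Left, Right] channels
    for k, side in enumerate(sides):
        t0 = int(k * cue_ms / dt)
        inp[t0:t0 + int(cue_ms / dt), side] = 1.0     # step input current
    return inp, dominant

def reward(choice, dominant):
    return 3.0 if choice == dominant else -1.0

inp, dom = make_trial()
print("input shape:", inp.shape, "| dominant side:", "LR"[dom])
print("reward for the correct choice:", reward(dom, dom))
```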

References

1. Pinto L, Koay SA, Engelhard B, Yoon AM, Deverett B, et al. An accumulation-of-evidence task using visual pulses for mice navigating in virtual reality. Frontiers in behavioral neuroscience. 2018 Mar 6;12:36.

2. Deverett B, Kislin M, Tank DW, Wang SS. Cerebellar disruption impairs working memory during evidence accumulation. Nature communications. 2019 Jul 16;10(1):1–7.

Fig. 1

A Overview of the 2 consecutive T-mazes. B Rate-based recurrent neural network (RNN) model, with 2 input (Left and Right cue) & 2 output (left and right actions) nodes. C An agent trained with D-LR 0.8 and tested with different D-LR values (percentage of cues 55 to 100) showed decision accuracy/reward rising in a sigmoid fashion (moving average). D Moving average of rewards during training

P103 Signal encoding enhanced by recurrent noise

Gregory Knoll 1 , Benjamin Lindner 1

1 Humboldt-Universitaet zu Berlin, Physics, Berlin, Germany

Email: gregory.knoll@bccn-berlin.de

Sensory processing involves a series of stages progressing from the sensory periphery, where neural assemblies may have little interconnectivity, to the sensory cortex, where principal cells receive local, lateral inputs, share inputs with other layers within a column and between columns, and are inundated with top-down input from other cortical areas and bottom-up sensory input. In each of these stages, neurons encode the sensory information while experiencing stochasticity from many sources, including channel noise, background synaptic input, and their own heterogeneities. This stochasticity naturally influences how efficaciously the neural populations within a processing stage encode a sensory stimulus. The neural assemblies in stages near the sensory periphery, which have little recurrence, can be represented by feedforward networks, which have been shown to improve their encoding of even strong signals under stochastic conditions (additive white noise or heterogeneity) through a phenomenon known as suprathreshold stochastic resonance [1]. We demonstrate through simulations of the recurrent spiking network illustrated in Fig. 1A that the same resonance effect can be displayed by recurrent networks, which has implications for later cortical processing stages [2]. In this case, however, suprathreshold stochastic resonance is found with increased levels of network noise, controlled via the synaptic strength, instead of additive white noise. The results are robust across a large parameter space, in which single-neuron, network, and signal parameters are varied, a selection of which is shown in Fig. 1B-D. Finally, control experiments were run with a feedforward network (Fig. 1E) to confirm that the noise from the network is responsible for the improved encoding.
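The feedforward phenomenon of [1] can be demonstrated in a few lines: in a population of identical threshold units, the correlation between a strong (suprathreshold) signal and the summed population output peaks at a non-zero noise level. The unit count and Gaussian signal below are illustrative choices.

```python
import numpy as np

# Suprathreshold stochastic resonance in a feedforward population [1]:
# independent unit noise diversifies the thresholds, so the population
# spike count encodes a strong signal best at intermediate noise.
rng = np.random.default_rng(7)
n_units, n_samples = 64, 20000
signal = rng.standard_normal(n_samples)          # suprathreshold input

for sigma in (0.0, 0.5, 1.0, 2.0, 4.0):
    noise = sigma * rng.standard_normal((n_units, n_samples))
    output = (signal + noise > 0.0).sum(axis=0)  # population spike count
    rho = np.corrcoef(signal, output)[0, 1]
    print(f"noise sigma = {sigma:3.1f} -> signal/output correlation {rho:.3f}")
```

The abstract's point is that in the recurrent network the same non-monotonic curve appears when the noise is generated internally by the network and scaled via the synaptic strength, rather than added externally as above.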

References

1. Stocks NG. Suprathreshold stochastic resonance in multilevel threshold systems. Physical Review Letters. 2000 Mar 13;84(11):2310.

2. Knoll G, Lindner B. Recurrence-mediated suprathreshold stochastic resonance. Journal of Computational Neuroscience. 2021 May 18:1–2.

Fig. 1

A The model recurrent network. B-D Suprathreshold stochastic resonance is observed as the synaptic strength is increased for a broad range of intrinsic (B), network (C), and signal (D) parameters. E Control for the effect of a changing mean and noise intensity from synaptic input

P104 Amplitude and phase coupling optimize information transfer between brain networks that function at criticality

Arthur-Ervin Avramiea 1 , Anas Masood 2 , Huibert D. Mansvelder 1 , Klaus Linkenkaer-Hansen 1

1 Vrije Universiteit Amsterdam, Integrative Neurophysiology, Amsterdam, Netherlands

2 University of Geneva, Basic Neurosciences, Geneva, Switzerland

Email: a.e.avramiea@vu.nl

Brain function depends on segregation and integration of information processing in brain networks, often separated by long-range anatomical connections. Neuronal oscillations orchestrate such distributed processing through transient amplitude and phase coupling; however, little is known about the local network properties facilitating these functional connections. Here, we test whether criticality, a dynamical state characterized by scale-free oscillations, optimizes the capacity of neuronal networks to couple through amplitude or phase and to transfer information. We coupled in silico networks with varying excitatory and inhibitory connectivity and found that phase coupling emerges at criticality, and that amplitude coupling, as well as information transfer, is maximal when networks are critical. Our data support the idea that criticality is important for local and global information processing and may help explain why brain disorders characterized by local alterations in criticality also exhibit impaired long-range synchrony, even prior to degeneration of physical connections.

P105 The emergence of computational capacity in developing biological neural networks

David Shorten 1 , Viola Priesemann 2 , Michael Wibral 3 , Joseph Lizier 1

1 The University of Sydney, Centre for Complex Systems, Sydney, Australia

2 MPI for Dynamics and Self-Organization, Göttingen, Germany

3 Georg-August-Universität, Campus Institut für Dynamik biologischer Netzwerke, Göttingen, Germany

Email: david.shorten@sydney.edu.au

The brains of organisms are capable of performing a dazzling array of computations. The ability to perform these computations is undergirded by a highly developed computational capacity. This capacity is often studied within the framework of information dynamics, where it is decomposed into the fundamental atomic information processing operations of storage, transfer and modification. The structure and distribution of these operations have been well studied in mature brains, in particular using the Transfer Entropy (TE) to measure information flow. At the neural level, TE has previously been used to study information flows in recordings of spikes in slice cultures, but these studies analysed fully developed neural networks. As such, we lack an understanding of how such neural information flows arise during the development of neural systems. Here, we present progress towards filling this gap by studying the emergence of information flows (as measured by TE) in neural development, using an open dataset [1] of recordings from developing cultures of dissociated cortical neurons. By estimating the TE between nodes (electrodes) on different recording days over a period of about a month, we are able to analyse how information flows change over neural development.

Crucially, we make use of a newly-developed continuous-time estimator of TE on spike trains [2], which is able to capture relationships that occur over relatively large time intervals without any loss in temporal precision. This contrasts with previous studies of TE making use of the traditional discrete-time estimator on spiking data, which suffers from numerous weaknesses including an inability to measure relationships occurring over fine and large timescales simultaneously [2].
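For intuition only, the sketch below computes the classical discrete-time, binned TE with history length one; the study itself relies on the continuous-time estimator of [2], which avoids the binning weaknesses discussed above. The toy spike trains and coupling are illustrative.

```python
import numpy as np
from itertools import product

# Discrete-time transfer entropy on binned spike trains with history k = 1:
# TE = I(target_next ; source_past | target_past), in bits per bin.
def transfer_entropy(src, tgt):
    x, y_next, y_past = src[:-1], tgt[1:], tgt[:-1]
    te = 0.0
    for xs, yp, yn in product((0, 1), repeat=3):
        mask = (x == xs) & (y_past == yp) & (y_next == yn)
        p_joint = mask.mean()
        if p_joint == 0:
            continue
        p_cond_joint = p_joint / ((x == xs) & (y_past == yp)).mean()
        p_cond_marg = ((y_next == yn) & (y_past == yp)).mean() / (y_past == yp).mean()
        te += p_joint * np.log2(p_cond_joint / p_cond_marg)
    return te

rng = np.random.default_rng(8)
src = (rng.random(100000) < 0.05).astype(int)          # 5% firing probability
tgt = np.roll(src, 1) & (rng.random(100000) < 0.8)     # target follows source
print(f"TE(src -> tgt) = {transfer_entropy(src, tgt):.3f} bits/bin")
print(f"TE(tgt -> src) = {transfer_entropy(tgt, src):.3f} bits/bin")
```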

We find that the amount of information flowing across the cultures increases dramatically throughout development. This is reflected in substantial increases in the average estimated TE between nodes, as well as in the number of source-target pairs for which there is a statistically significant TE value. We further find that the structure of these flows is locked in early in development: there is a large correlation between the information flowing between a given source-target pair on early and late days of development. We also find that, during the critical periods of population bursting, the nodes consistently take on specialised computational roles as either transmitters, mediators or receivers of information. Moreover, this specialisation corresponds to their position in the burst propagation: those that burst early are transmitters, late bursters are receivers, and middle bursters are mediators. This provides confirmatory evidence for the conjecture that middle bursters occupy the critical computational role of “brokers of neuronal communication” [3].

References

1. Wagenaar DA, Pine J, Potter SM. An extremely rich repertoire of bursting patterns during the development of cortical cultures. BMC neuroscience. 2006 Dec;7(1):1–8.

2. Shorten DP, Spinney RE, Lizier JT. Estimating transfer entropy in continuous time between neural spike trains or other event-based data. PLoS computational biology. 2021 Apr 19;17(4):e1008054.

3. Schroeter MS, Charlesworth P, Kitzbichler MG, Paulsen O, Bullmore ET. Emergence of rich-club topology and coordinated dynamics in development of hippocampal functional networks in vitro. Journal of Neuroscience. 2015 Apr 8;35(14):5459–70.

P106 Mechanisms of flexible information sharing through noisy oscillations

Arthur Powanwe 1 , André Longtin 2

1 University of Ottawa, Department of Physics and Centre for Neural Dynamics, Ottawa, Canada

2 University of Ottawa, Department of Physics, Ottawa, Canada

Email: apowa074@uottawa.ca

Inter-areal brain communication relies on the ability of coupled brain areas to flexibly exchange information. It has been argued that fast neural rhythms known as gamma oscillations could support inter-areal brain communication, provided that there is sufficient coherence between connected brain areas; this is known as Communication Through Coherence (CTC) [1–2]. However, the synaptic mechanisms behind inter-areal brain communication are still unknown. For example, pieces of information coming into and out of a brain area must occur in different intervals of time. A simple mechanism could be that a brain area is passive when it receives information from another area and active when it sends information to other brain areas [3]. This requires dynamic coupling between brain areas; however, the “connectome” inferred from imaging techniques is fixed. The fundamental question is to investigate the mechanisms that allow the flexible information sharing required for the brain to perform cognitive tasks such as perception, attention, and working memory.

We consider two coupled brain areas oscillating in the gamma band, each described by the stochastic Wilson-Cowan model of neural rhythms. Our goal is to identify the critical parameters and the dynamical regimes that allow flexible information sharing between the two networks. We successively consider the cases where the system of coupled networks lies in the quasi-cycle (noise-induced rhythm) and noisy limit cycle (noise-perturbed rhythm) regimes, since both of these regimes have been identified as potential candidates for certain rhythms. We also investigate the cases where the conduction delay between the networks is or is not taken into account. We use numerical simulations of the delayed mutual information between the phase signals of the local field potentials of each network, as well as a recently developed theory [4] of amplitude-phase coupling for quasi-cycles. We define flexibility in information sharing by the number of peaks (local maxima) in the delayed mutual information curves and the sign of their locations. Our preliminary results show that the ability of the system to flexibly share information depends critically on the dynamical regime of interest and on the presence of conduction delay between the connected networks. This suggests that gamma oscillations could be efficiently used by the brain as a support for communication between areas, in spite of the noise-induced or noise-perturbed nature of the rhythm's origin. In all cases investigated, including with asymmetry and heterogeneity, we find a continual stochastic exchange of phase leadership between the areas.
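A minimal sketch of this setup, two stochastic Wilson-Cowan units coupled with a conduction delay and probed with the delayed mutual information between their phases, is given below; all parameter values (weights, sigmoid, noise, delay) are illustrative, not those of the presented model.

```python
import numpy as np
from scipy.signal import hilbert

# Two stochastic Wilson-Cowan E-I units with delayed E->E cross-coupling,
# integrated with Euler-Maruyama; parameters are illustrative placeholders.
rng = np.random.default_rng(9)
dt, n, d = 0.1, 60000, 50                 # step (ms), steps, delay in steps
tau, c, sig = 3.0, 0.5, 0.05              # time constant, coupling, noise
f = lambda x: 1.0 / (1.0 + np.exp(-(x - 4.0)))   # population sigmoid
E = np.zeros((2, n)); I = np.zeros((2, n))
for t in range(1, n):
    for a in (0, 1):
        inp = c * E[1 - a, max(t - d, 0)]                 # delayed coupling
        dE = (-E[a, t-1] + f(12*E[a, t-1] - 13*I[a, t-1] + inp)) / tau
        dI = (-I[a, t-1] + f(12*E[a, t-1] - 4.0)) / tau
        E[a, t] = E[a, t-1] + dE*dt + sig*np.sqrt(dt)*rng.standard_normal()
        I[a, t] = I[a, t-1] + dI*dt + sig*np.sqrt(dt)*rng.standard_normal()

# Phases of the two "LFPs" via the analytic signal
phase = np.angle(hilbert(E - E.mean(axis=1, keepdims=True)))

def delayed_mi(x, y, lag, bins=16):
    # Mutual information between x(t) and y(t + lag), in bits
    x, y = (x[:-lag], y[lag:]) if lag > 0 else (x[-lag:], y[:lag])
    p, _, _ = np.histogram2d(x, y, bins=bins)
    p /= p.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / np.outer(px, py)[nz])).sum())

lags = [l for l in range(-200, 201, 25) if l != 0]
mi = {l: round(delayed_mi(phase[0], phase[1], l), 3) for l in lags}
print(mi)   # peaks at positive/negative lags indicate directions of sharing
```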

References

1. Fries P. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in cognitive sciences. 2005 Oct 1;9(10):474–80.

2. Fries P. Rhythms for cognition: communication through coherence. Neuron. 2015 Oct 7;88(1):220–35.

3. Palmigiano A, Geisel T, Wolf F, Battaglia D. Flexible information routing by transient synchrony. Nature neuroscience. 2017 Jul;20(7):1014–22.

4. Powanwe AS, Longtin A. Brain rhythm bursts are enhanced by multiplicative noise. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2021 Jan 8;31(1):013117.

P107 Reconciling forgetting and memory consolidation: Simulating the dissociable effects of neuronal noise levels on cortical memory traces

Max Garagnani 1 , Guglielmo Lucchese 2

1 Goldsmiths, University of London, Computing, London, United Kingdom

2 University of Greifswald, Neurology, Greifswald, Germany

Email: m.garagnani@gold.ac.uk

Neuronal noise resulting from spontaneous baseline firing is believed to play an important role in cognitive processes, with theories postulating a contribution to gradual memory trace decay (forgetting). However, the exact cortical mechanisms underlying this process remain unclear. Specifically, transcranial direct current stimulation (tDCS) has been shown to promote memory consolidation; furthermore, a moderate degree of neural noise has also been suggested to positively affect memory consolidation, whereas high degrees of noise are suspected to negatively interfere with it.

To shed light on the exact cortical mechanisms underlying the differential contributions of low and high neural noise to memory consolidation (and decay) in the neocortex, we used a deep, spiking, neurobiologically constrained computational model of primary, secondary and associative areas in the frontal and temporal lobes of the human brain. The network's "primary cortices" were repeatedly confronted with model correlates of perception and action patterns, while the strengths of all synaptic links were allowed to change by means of neurobiologically realistic learning mechanisms. This led to the emergence of stimulus-specific cell assembly (CA) memory circuits, binding together perception and action inputs. To simulate the effects of noise on such memory traces, after training, two identical copies of the model were subjected to a period of constant high- or low-intensity noise, respectively, while synapses remained plastic.

Intriguingly, we observed that high noise levels induced rapid decay of previously formed CA memory traces in the network, whereas low noise levels led to further CA-circuit consolidation. Preliminary analyses suggested that this behaviour resulted from the periodic re-activation of the model's memory circuits, which was observed in the low-noise condition but not in the high-noise one. We conjectured that, while a relatively small amount of noise allowed ignition (and hence consolidation) of the existing memory circuits to occur, too much noise prevented it (due to the network's inhibitory response to exceedingly high noise levels). These observations were confirmed by statistical analyses of changes in the high-frequency oscillatory activity of the network during CA circuit stimulation.

The present results provide a neuromechanistic account able to bridge the gap between theories of forgetting and current experimental data on memory consolidation and brain stimulation effects.

P108 A ring model based on dendritic bistability

Jiacheng Xu 1 , Daniel Cox 1 , Mark Goldman 2 , Steve Luck 3

1 University of California, Davis, Physics, Davis, CA, United States of America

2 University of California, Davis, Department of Neurobiology, Physiology and Behavior and Ophthalmology and Vision Science, Davis, CA, United States of America

3 University of California, Davis, Psychology, Davis, CA, United States of America

Email: jchxu@ucdavis.edu

In this project, we set up a novel ring model based on a dendritic bistable/hysteretic response function and show how it can be used to memorize both the amplitude and the width of a Gaussian input signal without any fine tuning of parameters. The ring model is commonly assumed to be a way the brain encodes continuous periodic information such as location, color and orientation. Classically, a ring model can only maintain a fixed delayed bump, and all information about the amplitude and width of the input signal is lost. It is unclear how amplitudes and widths can be encoded, which is significant because they potentially correspond to the intensity and certainty of the memorized item. With additional structures, later ring model developments have realized more flexible parametrized systems of working memory, but they usually require fine tuning of parameters.

Here, we propose a novel ring model that incorporates bistable dendrites. For each dendrite, instead of a linear or sigmoid response function, the dendritic output to the soma behaves in a hysteretic way depending on the presynaptic input. Such an input/output function can be realized by, for example, widely distributed NMDA receptors. The basic structure follows the classical ring model; however, each input from other neurons to the target neuron is received through a separate dendrite of that target neuron. These bistable dendrites work relatively independently, as basic units that each have a different selectivity. While intra-layer input presumably goes through NMDA receptors in the dendrites, the external signal enters the somas directly through AMPA receptors with a constant conductance. In the continuous limit, this ring model obeys an integral equation that shows how input amplitudes determine the bump amplitudes during the delay. Because this equation is not solvable analytically beyond a single iteration, we have simulated it with firing-rate dynamics.
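The hysteretic dendritic unit at the heart of the model can be sketched as follows; the thresholds and output level are illustrative, and the binary on/off state is a simplification of the graded dynamics.

```python
# Sketch of a bistable/hysteretic dendritic response: the dendritic output
# to the soma switches ON above one input threshold and OFF below a lower
# one, retaining its state in between. Values below are illustrative.
class BistableDendrite:
    def __init__(self, theta_on=1.0, theta_off=0.4, out=1.0):
        self.theta_on, self.theta_off, self.out = theta_on, theta_off, out
        self.state = 0.0

    def __call__(self, presyn_input):
        if presyn_input > self.theta_on:
            self.state = self.out            # switch on
        elif presyn_input < self.theta_off:
            self.state = 0.0                 # switch off
        return self.state                    # held in the bistable range

d = BistableDendrite()
for u in (0.0, 1.2, 0.7, 0.7, 0.2, 0.7):
    print(f"input {u:.1f} -> dendritic output {d(u):.1f}")
# Once switched on by the transient (1.2), the dendrite keeps delivering
# current at intermediate inputs (0.7), storing one bit locally; a ring of
# neurons, each with many such dendrites, can store a graded bump.
```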

Simulations show that the delayed activity successfully encodes the intensity level of the input signal. Gaussian connectivity is analyzed first; to achieve better performance, power-law connectivity is also explored. In Fig. 1 (left), a simulation was run on 360 neurons with different inputs centered in the middle; each color denotes an independent run with a certain input amplitude. Activities during the delay period show how different amplitudes are maintained. In addition, this dendritic bistability ring model can also encode the input width, which may represent the certainty of the bump (Fig. 1, right). Notably, to achieve amplitude and width encoding, the model only requires bistable dendrites, not inhibition tuning or additional neural types. We further perturbed each parameter of the model, and the system remained robust under a wide range of variations, so no fine tuning is required.

The pre-stable dynamics of this ring model can also serve as a bump integrator for evidence accumulation over a continuous range of locations. Experimentally, the independence of the dendritic units of a single neuron has been observed, and some models of NMDA-receptor dynamics support a hysteretic response function. While NMDA receptors are widely known to affect working memory performance, a more direct relation between dendritic hysteresis and working memory remains to be verified.

Fig. 1
figure bd

Activities during the delay period when input amplitudes (left) and Gaussian variance (right, zoomed in to neurons 150–210) equal the values represented by the corresponding colors

P109 Analyzing the differences in olfactory bulb spiking with ortho- and retronasal stimulation

Michelle Craft 1 , Andrea Barreiro 2 , Shree Hari Gautam 3 , Woodrow Shew 3 , Cheng Ly 4

1 Virginia Commonwealth University, Richmond, VA, United States of America

2 Southern Methodist University, Mathematics, Dallas, TX, United States of America

3 University of Arkansas, Physics, Fayetteville, AR, United States of America

4 Virginia Commonwealth University, Statistical Sciences and Operations Research, Richmond, VA, United States of America

Email: craftm@vcu.edu

Olfaction is a critical driver of many cognitive and behavioral tasks that can motivate risk-reward survival habits. It is unique in having two naturally occurring modes of stimulation: orthonasal, from inhaling, and retronasal, from exhaling during feeding. Prior imaging studies have shown that the brain responds differently to ortho versus retro stimulation. However, no work has detailed how the olfactory bulb (OB), where odor information is processed before being relayed to cortex, responds at a cellular level to ortho versus retro stimulation. Specifically, mitral cell (MC) (and tufted cell) spiking responses have critical implications for odor processing, but any such differences are largely unknown.

For the first time, we perform in vivo recordings in rats using multi-electrode arrays to measure MC spiking responses to the two modes of olfaction. We find significant differences in evoked firing rates and spiking covariances (i.e., noise correlations) between ortho and retro stimulation. Retro stimulation elicits higher firing rates yet lower correlations than ortho (Fig. 1A). Our data further highlight the different sensory responses to the two modes of olfaction but by themselves cannot explain the mechanisms underlying these differences. For this reason, we constructed a biophysical OB network model that balances biophysical detail with computational efficiency.

Previous work suggests that olfactory receptor neuron (ORN) activity, presynaptic to the OB, may lead to the observed differences in OB activity. Thus, we construct an OB model that accounts for ORN input differences with synapses driven by a correlated, inhomogeneous Poisson process (Fig. 1B). The ORN input is defined by three critical attributes: input rate temporal profiles, amplitudes, and input correlations (Fig. 1C). The ORN response to retro stimulation is thought to be temporally slower and spatially smaller than the response to ortho stimulation, but the implications of this for the OB remain unexplored. With these constraints, we find that our model captures the trends observed in our data.
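The input construction can be sketched as follows (our illustration, not the authors' code): thinning a common "mother" train yields two child trains whose pairwise spike-count correlation is approximately the thinning probability c, and the rate profiles below are assumed stand-ins for ortho- versus retro-like input, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

def inhom_poisson(rate, t, dt, rng):
    """Bernoulli approximation of an inhomogeneous Poisson process (rate in spikes/ms)."""
    return t[rng.random(t.size) < rate * dt]

def correlated_pair(rate, t, dt, c, rng):
    mother = inhom_poisson(rate / c, t, dt, rng)       # mother train at rate/c
    keep = lambda: mother[rng.random(mother.size) < c]
    return keep(), keep()                              # spike-count correlation ~ c

dt, T = 0.1, 1000.0                                    # ms
t = np.arange(0, T, dt)
ortho_rate = 0.06 * (1 - np.exp(-t / 20)) * np.exp(-t / 150)  # fast rise and decay
retro_rate = 0.04 * (1 - np.exp(-t / 60)) * np.exp(-t / 400)  # slower profile
orn1, orn2 = correlated_pair(ortho_rate, t, dt, c=0.3, rng=rng)
```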

We further analyze, in a simple and transparent manner, how our OB model maps ORN inputs to a particular spike statistic (mean, variance, or covariance) by fitting a linear-nonlinear (LN) model to the OB model's spike statistics. We show that the OB filters inputs (in time) differently for retro than for ortho, with retro having overall larger filter values. However, which attribute(s) of the ORN inputs can produce ortho and retro statistics consistent with our data is not obvious. We therefore evaluate multiple combinations of the three critical attributes of ORN input and find that the temporal profile plays a critical role in shaping the magnitudes of the linear filters and in matching our data (Fig. 1D-E). Specifically, a slower input-rate rise and decay is a key signature of retro stimulation needed to capture the trends in our data, while a faster rise and decay is the corresponding signature of ortho stimulation.
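The LN fitting step can be sketched generically as follows (an illustration of the procedure, not the authors' code; the synthetic data, filter length and bin count are assumptions):

```python
import numpy as np

def fit_ln(x, y, L=50, n_bins=20):
    """Fit a linear filter (least squares) and a static nonlinearity (binned average)."""
    X = np.stack([np.roll(x, k) for k in range(L)], axis=1)   # lagged design matrix
    X[:L, :] = 0.0                                            # drop wrapped samples
    k_lin, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = X @ k_lin                                             # filtered input
    edges = np.quantile(u, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(u, edges) - 1, 0, n_bins - 1)
    f = np.array([y[idx == b].mean() for b in range(n_bins)]) # nonlinearity lookup
    return k_lin, edges, f

rng = np.random.default_rng(0)
x = rng.normal(size=2000)                                     # toy ORN input
y = np.convolve(x, np.exp(-np.arange(50) / 10), mode='full')[:2000]
y = np.tanh(y) + 0.01 * rng.normal(size=2000)                 # toy OB statistic
k_lin, edges, f = fit_ln(x, y)
```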

These findings provide a basis for understanding how differences in OB spiking statistics arise with these two natural modes of olfaction while providing a model framework of how to analyze attributes responsible for different OB spiking driven by differences in ORN inputs.

Fig. 1
figure be

A Larger evoked firing rate for retro (red) than ortho (blue), but smaller spike covariance in data. B OB model with correlated ORN synaptic input. C Various input rates and correlations (not shown) surveyed to capture data. D Temporal profile is more critical than other attributes to match ortho/retro data trends. E LN filters are consistently larger for slower/retro-like temporal profile

P110 Electrophysiological models of pig right atrial ganglionic plexus (RAGP) neurons derived from transcriptomics

Jessica Feldman 1 , Suranjana Gupta 2 , Lakshmi Kuttippurathu 3 , Alison Moss 3 , James S. Schwarber 3 , Rajanikanth Vadigepalli 3 , William W. Lytton 4

1 SUNY Downstate Medical Center, Neuroscience, New York, NY, United States of America

2 SUNY Downstate Health Sciences University, Department of Physiology and Pharmacology, New York, NY, United States of America

3 Thomas Jefferson University, Pathology, Anatomy and Cell Biology, Philadelphia, PA, United States of America

4 SUNY Downstate Medical Center, Department of Physiology and Pharmacology, New York, NY, United States of America

Email: suranjana.gupta@ieee.org

Neurons in the right atrial ganglionic plexus (RAGP), a dorsally located structure in the right atrium, mediate control of the sinoatrial node (SAN) via the vagus nerve, with implications for understanding cardiac pathologies and neuromodulatory control of the heart via vagal stimulation and pharmacotherapeutics.

We identified 405 single neuronal cells of pig RAGP using a transcriptomic map derived from HT-qPCR (high-throughput quantitative polymerase chain reaction) and RNA sequencing. To create neuronal simulations, we mined the transcriptomic data to identify ion-channel coding genes and surveyed available kinetic models for the ion channel protein subtypes coded by those genes. Our single-compartment electrophysiological models, developed in NEURON and NetPyNE, utilized Hodgkin-Huxley-based ion channel models: sodium channels (Nav 1.1, Scn1a); potassium channels – Kv 1.1 (Kcnab1) and Kv 3.1 (Kcnc1); HCN channels (Hcn1, Hcn2, Hcn3, Hcn4); and calcium channels – Cav 2.1 (Cacna1a), Cav 2.2 (Cacna1b), Cav 1.2 (Cacna1c), Cav 1.3 (Cacna1d), Cav 3.1 (Cacna1g) and Cav 3.3 (Cacna1i). Among the 405 neuronal cells, we found 115 distinct patterns of ion channel combinations. Three of our models demonstrated phasic and tonic firing patterns, consistent with existing experimental data. We will next determine how many of the distinct binary transcriptomic classes define populations with distinct neuroexcitability phenotypes.
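For illustration, a single-compartment cell of this kind can be specified in NetPyNE roughly as follows. This is a sketch, not the published model: the built-in 'hh' mechanism stands in for the subtype-specific NMODL mechanisms listed above, and the geometry and conductance values are assumptions.

```python
from netpyne import specs, sim

netParams = specs.NetParams()
netParams.cellParams['RAGP'] = {
    'secs': {'soma': {
        'geom': {'diam': 20, 'L': 20, 'Ra': 100},   # assumed geometry (um, ohm*cm)
        # placeholder channels; the study used Nav/Kv/HCN/Cav subtype mechanisms
        'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036, 'gl': 0.0003}}}}}
netParams.popParams['RAGPpop'] = {'cellType': 'RAGP', 'numCells': 1}

simConfig = specs.SimConfig()
simConfig.duration = 500                            # ms
simConfig.recordTraces = {'V_soma': {'sec': 'soma', 'loc': 0.5, 'var': 'v'}}
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```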

As experimental RAGP data demonstrate the presence of both cholinergic and catecholaminergic milieus, future directions include tuning our models to reflect behaviors based on differential inputs. This is relevant ultimately to understanding control of the SAN via vagal neuromodulation with myriad potential applications to treatment of cardiac pathologies.

P111 A minimal integrate-and-fire model for mossy cells

Maurício Girardi-Schappo 1 , Anh-Tuân Trinh 2 , Jean-Claude Beique 2 , André Longtin 1 , Leonard Maler 2

1 University of Ottawa, Department of Physics, Ottawa, Canada

2 University of Ottawa, Department of Cellular and Molecular Medicine, Ottawa, Canada

Email: girardi.s@gmail.com

Mossy cells (MCs) are glutamatergic interneurons in the hilus. They receive synaptic inputs mainly from granule cells (GCs), CA3 pyramidal cells and hilar inhibitory interneurons, and project their outputs back to GCs and hilar interneurons. They have an intermediate firing rate compared with GCs and inhibitory interneurons, and fire action potentials in response to the animal passing through specific spatial positions. The positions that elicit firing are called place fields, making MCs place cells with multiple place fields. MCs participate in many processes involving the storage and retrieval of memories, spatial navigation, fear conditioning, and pattern separation [1]. However, their membrane potential dynamics are often overlooked in theoretical and computational models of memory.

Here, we introduce a minimal bottom-up exponential integrate-and-fire (EIF) model that accounts for many of the MCs' experimental features. Integrate-and-fire neurons offer a reasonable framework for modeling complex slow processes at the expense of replacing the fast action potential dynamics with a threshold parameter [2]. This makes them somewhat analytically tractable [2,3] and relatively efficient for large-scale computer simulations [4]. We built a data-driven model with feedback from current- and voltage-clamp experiments, constraining many of the EIF parameters and membrane currents.

From simple step-current experiments, we identified membrane currents that are essential to correctly reproduce the experimentally observed current-dependent threshold increase, spike-dependent threshold, long-term threshold decay, and threshold-dependent reset potential. We also modeled the noisy synaptic input that constantly drives MC behavior. An important feature of the EIF is that it describes a simplified rise of Na inactivation [3], allowing our model to capture the threshold increase at MC spike initiation – a feature that could not be fitted by a linear leaky IF model. On the other hand, we simplified gating variables to constants fitted to the experiments, keeping the model as simple as possible.
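A minimal sketch of these ingredients is given below (our illustration with assumed parameter values and adaptation forms, not the fitted mossy-cell model):

```python
import numpy as np

dt, T = 0.05, 500.0                        # ms
C, gL, EL, DT = 200.0, 10.0, -70.0, 2.0    # pF, nS, mV, mV (assumed)
Vr, tau_th, th0, dth = -60.0, 100.0, -50.0, 4.0

t = np.arange(0, T, dt)
I = np.where(t > 100, 300.0, 0.0)          # step current (pA)
V, th, spikes = EL, th0, []
for i, ti in enumerate(t):
    # EIF: leak plus exponential spike-initiation term around the moving threshold
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - th) / DT) + I[i]) / C
    V += dt * dV
    # current-dependent threshold drift with slow decay back to baseline
    th += dt * (2e-4 * I[i] - (th - th0) / tau_th)
    if V >= th + 10.0:                     # spike detected
        spikes.append(ti)
        th += dth                          # spike-dependent threshold increment
        V = Vr + 0.5 * (th - th0)          # threshold-dependent reset (assumed form)
print(f"{len(spikes)} spikes; final threshold {th:.1f} mV")
```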

We present some preliminary results on the cell model's computational properties by tracing f-I curves. We also test the filtering properties of the model, and whether MCs could act as positive feedback to the GC layer, given their anatomical position. These features are tested with and without synaptic noise. This work sheds some light on the role of the MCs' strong threshold adaptation and noise in memory tasks and spatial localization. The model also serves as a building block for future large-scale, and hopefully more realistic, models of the dentate gyrus and hippocampal networks.

Acknowledgements

The authors thank the financial support of the grant CIHR #153143 (LM and AL), NSERC grants to LM (RGPIN/04336–2018) and AL (RGPIN/06204–2014), and NSERC grant BCPIR/493076–2017 (AL, LM, and MG-S).

References

1. Hainmueller T, Bartos M. Dentate gyrus circuits for encoding, retrieval and discrimination of episodic memories. Nature Reviews Neuroscience. 2020 Mar;21(3):153–68.

2. Benda J, Maler L, Longtin A. Linear versus nonlinear signal transmission in neuron models with adaptation currents or dynamic thresholds. Journal of Neurophysiology. 2010 Nov;104(5):2806–20.

3. Platkiewicz J, Brette R. A threshold equation for action potential initiation. PLoS computational biology. 2010 Jul 8;6(7):e1000850.

4. Girardi-Schappo M, Bortolotto GS, Stenzinger RV, Gonsalves JJ, Tragtenberg MH. Phase diagrams and dynamics of a computationally efficient map-based neuron model. PloS one. 2017 Mar 30;12(3):e0174621.

P112 Computational investigation of the effect of an SK channel activator on a detrusor smooth muscle cell action potential

Suranjana Gupta 1 , Rohit Manchanda 2

1 SUNY Downstate Health Sciences University, Department of Physiology and Pharmacology, New York, NY, United States of America

2 Indian Institute of Technology Bombay, Department of Biosciences and Bioengineering, Mumbai, India

Email: suranjana.gupta@ieee.org

Small-conductance calcium-activated potassium (SK) channels are activated purely by intracellular calcium and play an important role in setting the firing frequency of spontaneously active detrusor smooth muscle (DSM) cells. These channels have been associated with bladder instability, and their suppression has been shown to induce detrusor overactivity [1]. They therefore merit investigation as potential therapeutic targets for the treatment of bladder pathophysiologies. Here, we propose the application of an SK channel activator to alleviate overactivity in a DSM cell.

The SK channel family includes four isoforms, of which SK3 is predominantly expressed in human DSM. Since the SK channel density is very low, we propose that SK channel activators will be more effective than blockers. A potent SK3 activator, CyPPA (cyclohexyl-[2-(3,5-dimethyl-pyrazol-1-yl)-6-methyl-pyrimidin-4-yl]-amine), has been reported [2] to alter the channel's cooperativity by left-shifting its activation curve (Fig. 1A).
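The reported effect can be pictured with the standard Hill-type description of SK gating (a sketch; the EC50 and Hill-coefficient values below are illustrative assumptions, not the fitted model):

```python
import numpy as np

def sk_activation(ca, ec50, n=4.0):
    """Fraction of open SK channels as a Hill function of [Ca2+] (uM)."""
    return ca**n / (ca**n + ec50**n)

ca = np.logspace(-2, 1, 200)              # 0.01-10 uM
control = sk_activation(ca, ec50=0.6)     # assumed control curve
cyppa = sk_activation(ca, ec50=0.3)       # CyPPA-like left shift (smaller EC50)

# at near-resting calcium the activator raises SK open probability, which
# hyperpolarises the membrane and prolongs the AHP
print(sk_activation(0.1, 0.6), "->", sk_activation(0.1, 0.3))
```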

We had previously developed a biophysically constrained Hodgkin-Huxley-based SK channel model and integrated it with a composite cellular model comprising DSM-specific calcium dynamics and ion channel models [3]. We simulated the effect of increasing concentrations of CyPPA on a single-cell DSM action potential. We observed that CyPPA hyperpolarised the resting membrane potential (RMP) and prolonged the after-hyperpolarisation (AHP) phase without affecting the peak or width of the action potential (Fig. 1B). A hyperpolarised RMP reduces the excitability of the cell, and a prolonged AHP phase reduces its firing frequency. These findings thereby support the potential applicability of CyPPA to ameliorate overactivity in a cell.

We were unable to simulate the effect of CyPPA on a spontaneously active DSM cell, since our DSM-specific cellular model failed to generate spontaneous action potential activity. Our integrated model needs to be improved to generate the biophysically realistic spontaneous firing required to explore the effect of CyPPA on a DSM cell's excitability.

Most drugs prescribed for the treatment of bladder dysfunction induce unwanted side-effects because they alter the excitability of vascular smooth muscle. Pharmacological activation of SK3 channels is of particular interest here, since these channels are not expressed in vascular smooth muscle [1] and thus will not produce such side-effects when SK3-specific drugs are administered to a pathological bladder. To this end, our preliminary findings show promise and can be taken forward for further study.

References

1. Herrera GM, Pozo MJ, Zvara P, Petkov GV, Bond CT, et al. Urinary bladder instability induced by selective suppression of the murine small conductance calcium‐activated potassium (SK3) channel. The Journal of physiology. 2003 Sep;551(3):893–903.

2. Hougaard C, Eriksen BL, Jørgensen S, Johansen TH, Dyhring T, et al. Selective positive modulation of the SK3 and SK2 subtypes of small conductance Ca2 + ‐activated K + channels. British journal of pharmacology. 2007 Jul;151(5):655–65.

3. Gupta S, Manchanda R. A computational model of large conductance voltage and calcium activated potassium channels: implications for calcium dynamics and electrophysiology in detrusor smooth muscle cells. Journal of computational neuroscience. 2019 Jun;46(3):233–56.

Fig. 1
figure bf

A Effect of CyPPA on the activation curve of the SK channel. B Effect of CyPPA on a DSM cell action potential

P113 A biochemical mechanism for time-encoding memory formation within individual synapses of Purkinje cells

Ayush Mandwal 1

1 University of Calgary, Calgary, Canada

Email: ayush.mandwal@ucalgary.ca

Purkinje cells within the cerebellum are known to suppress their tonic firing rates for a well-defined time period in response to the conditional stimulus after classical eye-blink conditioning training. The classical eye-blink conditioning protocol stimulates Purkinje cells with two stimuli, a conditional and an unconditional stimulus, separated by a finite time interval called the interstimulus interval (ISI). The ISI duration determines the temporal profile, i.e., the onset and duration, of the drop in the tonic firing rate of Purkinje cells. Direct stimulation of parallel fibers and the climbing fiber by electrodes, which provide the conditional and unconditional stimuli to Purkinje cells respectively, was found to be sufficient to reproduce the same characteristic drop in firing rate. In addition, the metabotropic glutamate receptor type 7 (mGluR7) was found to be responsible for initiating the response, suggesting that an intrinsic mechanism for temporal learning exists within the Purkinje cell.

In an attempt to identify an underlying mechanism for time-encoding memory formation within individual Purkinje cells, we propose a biochemical mechanism based on recent experimental findings. The proposed mechanism addresses key aspects of the "coding problem" of neuroscience by focusing on the Purkinje cell's ability to encode time intervals through training. According to the proposed mechanism, time memory is encoded within the dynamics of a set of proteins – mGluR7, G-protein, the G-protein-coupled inward rectifier potassium channel, protein kinase A, protein phosphatase 1 and other associated biomolecules – which self-organize into a protein complex. The intrinsic dynamics of these protein complexes can differ and can thus encode different time durations. We propose that the amount of mGluR7 receptor protein and the collective dynamics of protein complexes within individual synapses allow a Purkinje cell to suppress its own tonic firing rate for a specific time interval. The time memory is encoded within the effective dynamics of the biochemical reactions between the involved biomolecules, and altering these dynamics means storing a different time memory.

The proposed mechanism is verified by both a minimal and a more comprehensive mathematical model of the conditional response behavior of the Purkinje cell. Furthermore, dynamical simulations of the involved biomolecules provide testable experimental predictions with which to verify the proposed mechanism.

P114 Building somatosensory cortex neuron models using a workflow for the creation, validation and generalization of biophysically detailed cell models

Maria Reva 1 , Christian Roessert 1 , Darshan Mandge 1 , Alexis Arnaudon 1 , Tanguy Damart 1 , Mustafa Anıl Tuncel 1 , Srikanth Ramaswamy 1 , Werner Van Geit 1 , Henry Markram 1

1 École Polytechnique Fédérale de Lausanne, Blue Brain Project, Geneva, Switzerland

Email: maria.reva@epfl.ch

Detailed single-neuron modeling is widely used to study neuronal function. While cellular and functional diversity across the mammalian cortex is vast, most available computational tools are dedicated to reproducing a small set of features specific to a single neuron. Here, we present a generalized automated workflow for the creation of robust electrical models and illustrate its performance on neuron models of the rat somatosensory cortex (SSCx). Each model is based on a 3D morphological reconstruction and a set of ionic mechanisms specific to the cell type of interest. We use an evolutionary algorithm to optimize the densities of ion channels and other parameters to match electrophysiological features extracted from a number of recordings of each type. To better understand which parameters are well constrained by the optimization and which might be degenerate, we performed a parameter sensitivity analysis. We also validated the optimized models against experimental data from additional stimuli and tested how they generalize to other morphologies of the same neuronal type. By applying this workflow to various electrical and morphological types of the SSCx, we created a new generation of SSCx neuronal models that reproduce the variability of neuronal responses observed in experiments. Due to its versatility, our workflow can be used to build robust biophysical models of any neuronal type.
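The optimization step can be caricatured as follows (a toy sketch: the feature map is a stand-in for simulating a detailed model, and the population sizes and mutation scale are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.array([12.0, 55.0])              # experimental feature means (toy values)

def features(g):                             # hypothetical map: densities -> features
    return np.array([20 * g[0] - 5 * g[1], 10 * g[0] + 30 * g[1]])

def fitness(g):
    return -np.linalg.norm(features(g) - target)

pop = rng.uniform(0, 2, size=(64, 2))        # population of channel-density vectors
for gen in range(100):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-16:]]  # keep the best quarter
    children = parents[rng.integers(0, 16, 48)] + rng.normal(0, 0.05, (48, 2))
    pop = np.vstack([parents, np.clip(children, 0, None)])  # densities stay >= 0
best = pop[np.argmax([fitness(g) for g in pop])]
print("optimized densities:", best, "features:", features(best))
```

In the actual workflow the evaluation step runs full NEURON simulations and compares many electrophysiological features at once (via tools such as BluePyOpt), but the select-mutate-evaluate loop has this shape.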

P115 Computational modelling of a mouse layer 5 pyramidal neuron using genetic ion channels

Darshan Mandge 1 , Yann Roussel 1 , Stijn van Dorp 1 , Tanguy Damart 1 , Daniel Keller 1 , Rajnish Ranjan 1 , Werner Van Geit 1 , Henry Markram 1

1 École Polytechnique Fédérale de Lausanne, Blue Brain Project, Geneva, Switzerland

Email: darshan.mandge@epfl.ch

Traditionally, detailed computational neuron models use pharmacologically characterized generic ion channel models for the membrane currents. Although these generic ion channel models represent different current types (K, Na, HCN, KCa and Ca), they mostly capture the response of a mixture of several genetic subtypes of an ion channel family. With this approach one can faithfully capture the electrical properties of different neurons, and one can trace the causal events of an emergent phenomenon down to individual neurons as well as to current types; however, one cannot link such phenomena to specific ion channel genes. Now that cell-type-specific gene expression data from the Allen Institute for Brain Science [1] and corresponding models for a set of genetically specified ion channels have become available [2], we were able to construct a detailed electrical model of a mouse somatosensory cortex layer-5 pyramidal neuron. We adjusted the densities of 35 genetic ion channels from the Kv, Nav and HCN families, along with generic voltage-gated calcium (Cav) channels and calcium-activated potassium (KCa) channels, so that the electrical behavior of the modelled neuron matched somatic membrane potential recordings. The model parameters were constrained using BluePyOpt [3] with experimental electrophysiology features extracted with BluePyEfe [4]. With such a large parameter space, the optimization time and computational resources were substantially higher than those required by models with generic ion channels. The optimization results corroborate the established concept of ion channel degeneracy, as multiple combinations of ion channel conductances were able to replicate the experimental electrophysiological features [5]. The resulting model could be used to explore the role of ion channels in cellular physiology and, in a longer-term perspective, such models could allow simulation of channelopathies at the cellular and network levels.

References

1. Tasic B, Yao Z, Graybuck LT, Smith KA, Nguyen TN, et al. Shared and distinct transcriptomic cell types across neocortical areas. Nature. 2018 Nov;563(7729):72–8.

(Database: https://celltypes.brain-map.org/rnaseq/mouse_ctx-hip_smart-seq, last accessed 14 May 2021).

2. Ranjan R, Logette E, Marani M, Herzog M, Tâche V, et al. A kinetic map of the homomeric voltage-gated potassium channel (Kv) family. Frontiers in Cellular Neuroscience. 2019 Aug 20;13:358.

3. Van Geit W, Gevaert M, Chindemi G, Rössert C, Courcol JD, et al. BluePyOpt: leveraging open source software and cloud infrastructure to optimise model parameters in neuroscience. Frontiers in Neuroinformatics. 2016;10:17.

4. Roessert C, Damart T, Van Geit W. BluePyEfe 0.2 (Version 0.2). 2020 Mar. Available at: Zenodo https://doi.org/10.5281/zenodo.3728192.

5. Goaillard JM, Marder E. Ion channel degeneracy, variability, and covariation in neuron and circuit resilience. Annual review of neuroscience. 2021 Mar 26;44.

P116 Extracellular stimulation and Local Field Potential recording in a L5 PC model with full axonal arbor

Joseph Tharayil 1 , Mickael Zbili 2 , Alexis Arnaudon 1 , Werner Van Geit 1 , Esra Neufeld 3 , Michael Reimann 1 , Felix Schürmann 1 , Henry Markram 1

1 École Polytechnique Fédérale de Lausanne, Blue Brain Project, Geneva, Switzerland

2 École Polytechnique Fédérale de Lausanne, Laboratory of Neural Microcircuitry, Brain Mind Institute, School of Life Sciences, Geneva, Switzerland

3 IT'IS Foundation, Blue Brain Project, Zurich, Switzerland

Email: joseph.tharayil@epfl.ch

The cortical local field potential (LFP) is a commonly used experimental metric and has a growing number of applications in brain-machine interfaces. Previous modeling studies of the LFP have helped to clarify its biological origins. However, the contributions of axonal currents to the LFP signal are poorly understood. Simulations of morphologically and electrophysiologically detailed neuron models that include explicit axons with full propagation of action potentials along the branches may provide further insight into the origins of the LFP signal. Similarly, extracellular electrical stimulation is frequently used to perturb neurons and neuronal circuits, both experimentally and in clinical applications. It is believed that electrical stimulation primarily affects axons, but it is unclear how this effect depends on axonal properties. In silico experiments with realistic axon models may therefore provide insight into the mechanisms of electrical stimulation.

In this work, we present an extension of the L5 pyramidal cell model of Markram and colleagues with the continuous adapting (cAD) electrical type, adding an axon model that comprises the axon initial segment (AIS), myelinated internodes, nodes of Ranvier and unmyelinated collaterals, with an axon-specific pool of ion channels and optimized ion channel densities. We show that this model reproduces the main axonal electrical features, such as the action potential (AP) waveform and its preferential initiation at the AIS, as well as propagation throughout the axonal arbor, and that the modeling approach generalizes to a wide range of reconstructed L5 pyramidal morphologies. We use our model to compute the realistic LFP generated by a single neuron and to study the electrical response of each compartment under extracellular stimulation from point-source electrodes (intracortical microstimulation, ICMS) at various locations.
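For the LFP part, the standard point-source approximation sums each compartment's transmembrane current weighted by its distance to the electrode: V_e = (1/(4*pi*sigma)) * sum_n I_n / r_n. A sketch (the compartment positions and currents below are placeholders; in practice they come from the NEURON simulation):

```python
import numpy as np

sigma = 0.3                                    # extracellular conductivity (S/m)

def lfp_point_source(I_nA, pos_um, electrode_um):
    """I_nA: (n_comp, n_t) transmembrane currents; pos_um: (n_comp, 3) positions."""
    r = np.linalg.norm(pos_um - electrode_um, axis=1) * 1e-6   # um -> m
    w = 1e-9 / (4 * np.pi * sigma * r)                         # volts per nA
    return 1e3 * (w[:, None] * I_nA).sum(axis=0)               # LFP in mV

pos = np.array([[0.0, 0.0, 0.0], [0.0, 50.0, 0.0], [0.0, 100.0, 0.0]])
I = np.array([[1.0, -0.5], [-0.6, 0.3], [-0.4, 0.2]])          # currents sum to ~0
print(lfp_point_source(I, pos, np.array([30.0, 50.0, 0.0])))
```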

We quantified the difference between the LFP generated by these neuron models and the LFP produced by the neuron without a detailed axon model. We evaluated differences that arise not only from direct axonal contributions, but also from the changes in the electrical behavior of the non-axonal compartments that the addition of the axon entails. We find that simulated ICMS is able to generate action potentials in the axon. The location of action potential initiation, and consequently the action potential properties, vary within the neuronal arbor as a function of electrode position and stimulus parameters. Moreover, excitability and backpropagation effectiveness vary between the main axon and the collateral branches. Our model may therefore help clarify the mechanisms of ICMS and optimize stimulation and recording parameters.

Acknowledgments

Joseph Tharayil and Mickael Zbili contributed equally to this work.

P117 Dimensionality reduction methods for neural decoding

Alan Cherne 1 , David Boothe 2 , Piotr Franaszczuk 2 , Vasileios Maroulas 1

1 University of Tennessee, Department of Mathematics, Knoxville, TN, United States of America

2 U.S. Army Research Laboratory, Human Research Engineering Directorate, Aberdeen Proving Ground, MD, United States of America

Email: pfranasz@gmail.com

Recent advancements in engineering have made it possible to record spike-time data from dozens of individual neurons simultaneously. These data make it possible to address long-standing questions about how the brain stores information and performs various tasks. A central tenet of neuroscience has been that populations of interconnected neurons act in concert to perform these tasks, and that the firing pattern of any single neuron need not correlate with the organism's behavior. What was hypothesized, and what we can now observe, is that the firing patterns of these large groups of neurons tend to have a low dimensionality matching that of the canonical variable they are meant to represent. Examples of this phenomenon that have been studied in this way include the head-direction system in mice, auditory pitch detection, and hand movement [1–3].

When high-dimensional data are hypothesized to represent a low-dimensional variable, the process by which this variable is uncovered in the data is known as manifold discovery. Manifolds are locally Euclidean regions of space that may have distinct topologies, such as circles, spheres, or tori. Here we examine a 22-dimensional spike-time data set that has a clear ring structure and circular variable when properly embedded in 3-dimensional space. There are many methods of dimensionality reduction, both linear and non-linear; we provide a survey of these methods and test the conditions under which the non-trivial topology of such data sets is preserved.

The methods presented include Locally Linear Embedding (LLE), Modified Locally Linear Embedding (MLLE), Principal Component Analysis (PCA), Spectral Embedding, t-SNE, Multi-Dimensional Scaling (MDS), and Isomap [4]. In particular, we show that methods such as Isomap, which account for global distances in the high-dimensional data, are best equipped to preserve the ring structure. We investigate what types of pre-processing are necessary to recover the manifold structure under these mappings, as well as the methods' resilience in the presence of noise. Finally, we present a novel method that uses flux to quantify the stability of such manifolds in terms of dynamical-system attractors.
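As a concrete illustration of the comparison (our sketch on synthetic data standing in for the 22-dimensional recordings; the neighborhood size and noise level are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, 500)                  # latent circular variable
ring = np.stack([np.cos(phi), np.sin(phi)], axis=1)
X = ring @ rng.normal(size=(2, 22)) + 0.1 * rng.normal(size=(500, 22))

emb_pca = PCA(n_components=3).fit_transform(X)
emb_iso = Isomap(n_neighbors=15, n_components=3).fit_transform(X)
# Isomap's geodesic distances tend to preserve the ring; one check is whether
# an angle recovered from the embedding tracks the latent variable phi
angle = np.arctan2(emb_iso[:, 1], emb_iso[:, 0])
```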

References

1. Peyrache A, Buzsáki G. Extracellular recordings from multi-site silicon probes in the anterior thalamus and subicular formation of freely moving mice. CRCNS.org; 2015.

2. Chaudhuri R, Gerçek B, Pandey B, Peyrache A, Fiete I. The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nature neuroscience. 2019 Sep;22(9):1512–20.

3. Gallego JA, Perich MG, Miller LE, Solla SA. Neural manifolds for the control of movement. Neuron. 2017 Jun 7;94(5):978–84.

4. Tenenbaum JB, De Silva V, Langford JC. A global geometric framework for nonlinear dimensionality reduction. Science. 2000 Dec 22;290(5500):2319–23.

P118 A network model for migraine-driven alterations in the contrast sensitivity of rodent visual cortex

Nicolò Meneghetti 1 , Alberto Mazzoni 1

1 Scuola Superiore Sant'Anna Pisa, The Biorobotics Institute and Department of Excellence for Robotics and AI, Pisa, Italy

Email: nicolo.meneghetti@santannapisa.it

Migraine is a complex neurological condition affecting more than 10% of the general population and characterized by global dysfunctions in multisensory information processing. Mouse models of familial hemiplegic migraine display increased glutamatergic transmission at intracortical synapses, while GABAergic transmission remains unaltered [1]. Moreover, excitatory thalamocortical afferents are also enhanced, an effect that is stronger onto fast-spiking inhibitory neurons than onto pyramidal cells [2]. These results suggest that dysregulation of the cortical excitatory-inhibitory balance might be one of the central mechanisms of migraine neurobiology.

The development of new therapeutic interventions is, however, limited by our poor understanding of the link between such cellular alterations and the resulting dysfunctional computations at the network level. Here we investigated this link by modeling migraine-related cellular alterations in a recurrent network of spiking neurons developed in previous work [3]. We investigated the effects of each pathological synaptic change at the macroscopic network level, and their relationship with the dysregulation of the excitatory-inhibitory balance observed experimentally [2]. The network reproduced the experimental spectral content of murine V1 local field potentials (LFPs) in response to visual grating stimuli of different spatial contrasts in both healthy and migraine conditions. In particular, the thalamic input caused the emergence of i) a broad [30–100 Hz] gamma band, by triggering local resonances, and ii) a narrow gamma band at 60 Hz, through entrainment to an oscillatory drive.

Our model could shed new light on how the experimentally observed cellular alterations underlying migraine are reflected in macroscopic measurements of brain activity, such as the LFP and EEG. Unraveling the correlates of a pathological cellular circuitry in such network-wide signals (commonly recorded in clinical neurophysiological investigations) could be of invaluable help in using EEG or LFPs to probe alterations of information processing in migraine patients. Finally, a model capturing the network dynamics of migraine could be a valuable benchmark for developing new pharmacological targets and for predicting their effects in silico.

Acknowledgements

This work was funded by the Italian Ministry of Research (MIUR) through PRIN-2017 “PROTECTION” (project 20178L7WRS).

References

1. Pietrobon D. Ion channels in migraine disorders. Current Opinion in Physiology. 2018 Apr 1;2:98–108.

2. Tottene A, Favero M, Pietrobon D. Enhanced Thalamocortical Synaptic Transmission and Dysregulation of the Excitatory–Inhibitory Balance at the Thalamocortical Feedforward Inhibitory Microcircuit in a Genetic Mouse Model of Migraine. Journal of Neuroscience. 2019 Dec 4;39(49):9841–51.

3. Meneghetti N, Cerri C, Tantillo E, Vannini E, Caleo M, et al. Thalamic inputs determine functionally distinct gamma bands in mouse primary visual cortex. bioRxiv. 2020 Jan 1.

P119 Synchronization through uncorrelated noise in excitatory-inhibitory networks

Lucas Rebscher 1 , Klaus Obermayer 1 , Christoph Metzner 2

1 Technische Universität Berlin, Neural Information Processing Group, Berlin, Germany

2 Technische Universität Berlin, Department of Software Engineering and Theoretical Computer Science, Berlin, Germany

Email: christoph.metzner@gmail.com

Gamma rhythms are thought to underlie many different cognitive processes in the brain, ranging from attention and working memory to sensory processing, and have further been suggested as a key mechanism in neuronal communication [1]. Recently, Meng and Riecke [2] demonstrated that, counterintuitively, synchronization across networks of inhibitory neurons increased when neurons were subject to independent noise. However, they focused on inhibitory networks with gamma-band activity produced by the interneuronal network gamma (ING) mechanism. We therefore asked whether uncorrelated noise can also have a beneficial effect on the synchronization of interacting gamma rhythms produced by the pyramidal-interneuronal network gamma (PING) mechanism.

We modelled two interconnected excitatory-inhibitory (EI) networks in different network settings and analyzed how synchronization within and across the networks changed with the strength of the uncorrelated noise the networks received. Each EI network comprised 1000 excitatory and 250 inhibitory adaptive exponential integrate-and-fire (aEIF) neurons [3], coupled using conductance-based synapses. We explored two connectivity settings (all-to-all and sparse random coupling) and, for each setting, three network configurations: 1) weak coupling between networks and weak noise, 2) strong coupling and weak noise, and 3) weak coupling and strong noise.
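A skeleton of one such population with independent noise looks as follows (written here in Brian2 as one possible simulator; the abstract does not specify the implementation). It uses standard aEIF parameters [3] and omits the conductance-based synapses for brevity:

```python
from brian2 import NeuronGroup, run, ms, mV, nS, pF, nA

C, gL, EL = 281*pF, 30*nS, -70.6*mV           # Brette & Gerstner (2005) values
VT, DeltaT = -50.4*mV, 2*mV
a, b, tau_w, Vr = 4*nS, 0.0805*nA, 144*ms, -70.6*mV
tau_m = C / gL
sigma = 2*mV                                  # noise strength (assumed)

eqs = '''
dv/dt = (gL*(EL - v) + gL*DeltaT*exp((v - VT)/DeltaT) - w)/C + sigma*xi*tau_m**-0.5 : volt
dw/dt = (a*(v - EL) - w)/tau_w : amp
'''
# xi is Brian2's built-in white-noise term, independent for every neuron
G = NeuronGroup(1250, eqs, threshold='v > -20*mV',
                reset='v = Vr; w += b', method='euler')
G.v = EL
run(500*ms)
```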

Results for the two settings did not differ strongly; we therefore present only the results for sparse random coupling. In the weak-coupling, weak-noise configuration, we found strong synchronization within, but not across, networks, as coupling and noise were too weak. In configuration 2, with strong coupling but weak noise, we found strong synchronization both within and across networks: both networks showed the same dominant frequency, and spike-time variability was very low, especially in the inhibitory population. In configuration 3, with weak coupling but strong noise, we also found strong synchronization across the networks, but with weaker within-network synchronization than in configuration 2. Here, spike-time variability was increased and the inhibitory population participated only sparsely in the network rhythms. We further saw that the across-network synchronization depended on the weakening of the within-network synchronization, which allowed the second network to control the activity of a subpopulation, thereby synchronizing the two networks (Fig. 1).

In conclusion, our results suggest that synaptic noise can play a supporting role in facilitating inter-regional communication, although with a different signature and mechanism than synchronization through strong coupling. Importantly, our models provide a basis for investigating mechanistic explanations of altered neuronal dynamics in neurological and psychiatric disorders, where deficits of inter-regional communication in the gamma band seem to play a crucial role [4].

References

1. Fries P. Rhythms for cognition: communication through coherence. Neuron. 2015 Oct 7;88(1):220–35.

2. Meng JH, Riecke H. Synchronization by uncorrelated noise: interacting rhythms in interconnected oscillator networks. Scientific reports. 2018 May 3;8(1):1–4.

3. Brette R, Gerstner W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of neurophysiology. 2005 Nov;94(5):3637–42.

4. Uhlhaas PJ, Singer W. Neural synchrony in brain disorders: relevance for cognitive dysfunctions and pathophysiology. Neuron. 2006 Oct 5;52(1):155–68.

Fig. 1
figure bg

A Setup consisting of two interconnected EI networks that are both subject to uncorrelated noise. B Results of a two-dimensional parameter exploration over the noise strength and the frequency ratio p. C Weak noise + weak coupling: strong within-network and weak across-network synchrony. D Strong noise + weak coupling: weak within-network and strong across-network synchrony

P120 Phasic and tonic changes in pupil size differentially track surprise and confidence during adaptive learning

Tiffany Bounmy 1 , Audrey Mazancieux 1 , Florent Meyniel 1

1 Cognitive Neuroimaging Unit, NeuroSpin center, Frédéric Joliot Institute, CEA-Saclay, Gif-sur-Yvette, France

Email: tiffany.bounmy@cea.fr

Learning in a changing and stochastic world is a challenging problem. To cope with stochasticity, one should integrate over past observations to infer stable estimates of the world's statistics. However, if those statistics change over time, one should also update one's estimates quickly and flexibly. Ideally, the weights assigned to past versus new observations should be adjusted dynamically according to the occurrence of changes. The ability to dynamically strike this balance between stability and flexibility is known as adaptive learning. At the computational level, Bayesian inference indicates that confidence in our estimates is key to adaptive learning: high confidence promotes stability and, conversely, low confidence fosters flexibility. At the implementational level, specific neuromodulators such as noradrenaline (NA, a.k.a. norepinephrine) have been linked to unexpected uncertainty [1,2], a form of uncertainty that reduces confidence in current estimates when changes arise. However, the role of NA in the confidence-weighted regulation of learning remains unclear.

Here, we tested the implication of NA in the confidence-weighting of learning by combining two learning tasks with pupillometry (one previously published [3] and one new) in 36 participants (24 + 12). Subjects had to learn the hidden probabilities that generated auditory sequences of binary stimuli, and to report their probability estimates together with the associated confidence. Subjects were fully informed, in a non-technical way, that these probabilities changed abruptly over time without notice, and that an order-1 Markov process and a Bernoulli process generated the sequences in the two tasks, respectively. We designed an ideal Bayesian learning model for each task, and we formalized surprise as the log improbability of each observation and confidence about probability estimates as their (log) posterior precision. We relied on pupillometry to indirectly probe brain levels of NA [4].
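For the Bernoulli task, the two model-derived regressors can be sketched as follows (a simplified leaky Beta-Bernoulli learner stands in for the full Bayesian change-point model; the leak factor is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy sequence with an abrupt change in P(x = 1) from 0.8 to 0.2
obs = np.concatenate([rng.random(200) < 0.8, rng.random(200) < 0.2]).astype(float)

a = b = 1.0                # Beta pseudo-counts
leak = 0.98                # forgetting factor approximating volatility (assumed)
surprise, confidence = [], []
for x in obs:
    p = a / (a + b)                               # current estimate of P(x = 1)
    surprise.append(-np.log(p if x else 1 - p))   # log improbability of the datum
    a, b = leak * a + x, leak * b + (1 - x)       # leaky posterior update
    var = a * b / ((a + b) ** 2 * (a + b + 1))    # Beta posterior variance
    confidence.append(np.log(1 / var))            # log posterior precision
```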

We found that reported probability estimates and confidence levels correlated with the Bayesian solution and exhibited different qualitative signatures of this solution, replicating previous studies [5]. Phasic and tonic changes in pupil size showed an interesting dissociation. Phasic changes were accounted for by surprise and tonic changes by confidence. Those results were obtained in each task, demonstrating robustness to the task statistics used. Our findings are compatible with noradrenaline playing a role in the confidence-weighted regulation of learning.

References

1. Angela JY, Dayan P. Uncertainty, neuromodulation, and attention. Neuron. 2005 May 19;46(4):681–92.

2. Nassar MR, Rumsey KM, Wilson RC, Parikh K, Heasly B, et al. Rational regulation of learning dynamics by pupil-linked arousal systems. Nature neuroscience. 2012 Jul;15(7):1040–6.

3. Meyniel F. Brain dynamics for confidence-weighted learning. PLoS computational biology. 2020 Jun 2;16(6):e1007935.

4. Joshi S, Li Y, Kalwani RM, Gold JI. Relationships between pupil diameter and neuronal activity in the locus coeruleus, colliculi, and cingulate cortex. Neuron. 2016 Jan 6;89(1):221–34.

5. Meyniel F, Schlunegger D, Dehaene S. The sense of confidence during probabilistic learning: A normative account. PLoS computational biology. 2015 Jun 15;11(6):e1004305.

P121 Traveling waves in the prefrontal cortex during working memory

Sayak Bhattacharya 1 , Scott L. Brincat 1 , Mikael Lundqvist 1 , Earl K. Miller 1

1 Massachusetts Institute of Technology, Picower Institute for Learning and Memory, Boston, MA, United States of America

Email: sayak@mit.edu

Neural oscillations are evident across cortex, but their spatial structure is not well explored. Are oscillations stationary, or do they form "traveling waves", i.e., spatially organized patterns whose peaks and troughs move sequentially across cortex? Here, we show that oscillations in the prefrontal cortex (PFC) are organized as traveling waves in the theta (4–8 Hz), alpha (8–12 Hz) and beta (12–30 Hz) bands. Some traveling waves were planar, while many rotated around an anatomical point. The waves were modulated during performance of a working memory task. Under baseline conditions, waves flowed bidirectionally along a specific axis of orientation. During task performance, waves in one direction increased relative to the other, especially in the beta band. We discuss potential functional implications.

P122 Linking hippocampal replay content to neuronal properties through modeling

Jordan Breffle 1 , Shantanu Jadhav 2 , Paul Miller 1

1 Brandeis University, Department of Biology, Waltham, MA, United States of America

2 Brandeis University, Psychology, Waltham, MA, United States of America

Email: jbreffle@brandeis.edu

The reactivation of neural activity associated with past experiences has been found, in both human and non-human mammals, to support memory recall as well as consolidation, but how the intrinsic and synaptic properties of neurons produce this network-level activity is not well understood. Replay has been best studied in the hippocampus of rodents performing spatial navigation tasks. The hippocampus contains place cells, which fire when the animal is in a particular region of the environment. During rest and pauses in movement, the hippocampus replays, on a compressed timescale, sequences of place cells that correspond to actual trajectories through the environment. A replay event can represent any possible trajectory through the environment, and replay can occur in either forward or reverse order relative to the actual movement. Several existing models show how particular plasticity features can produce replay in biological recurrent neuronal networks, but none replicates the change in replay content observed over learning.

Here, we performed new analyses of an existing hippocampal replay data set [1] and network simulations to assess which plasticity rules are necessary to replicate the experimental results. Shin et al. [1] recorded hippocampal replay events in rats that learned to perform a W-track spatial-alternation task in a single day. They found that the fraction of reverse replay events at the side well that depicted the taken past path decreased with learning, while the fraction of forward replay events depicting the taken future path increased with learning. Our additional analyses of this data set show that this change in replay content is explained by 1) a decrease over learning in the probability that a locally starting replay is reverse-ordered, and 2) an increase over learning in the probability that a remotely starting replay is reverse-ordered.

From these results we can infer how the likelihood that a given place cell participates in each type of replay event changes over learning. We adapted a previously published model of replay [2] to simulate the spiking activity of an animal performing the W-track spatial-alternation task (Fig. 1a). The model spontaneously generates replay events during pauses in movement (Fig. 1b), which are analyzed using Bayesian decoding as in Shin et al. [1] (Fig. 1c). With this model we develop and test several hypotheses that explain the experimental results through a combination of intrinsic and synaptic plasticity.
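The decoding step uses the standard memoryless Bayesian place decoder, P(x | n) proportional to [prod_i f_i(x)^(n_i)] * exp(-tau * sum_i f_i(x)), with place fields f_i and spike counts n_i in a bin of width tau. A sketch with placeholder fields:

```python
import numpy as np

def decode(n, fields, tau):
    """n: (n_cells,) spike counts; fields: (n_cells, n_pos) rates (Hz); tau: s."""
    log_post = n @ np.log(fields + 1e-12) - tau * fields.sum(axis=0)
    log_post -= log_post.max()                   # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

pos = np.linspace(0, 1, 100)
fields = np.stack([30 * np.exp(-(pos - c)**2 / 0.01) for c in np.linspace(0, 1, 22)])
counts = np.zeros(22); counts[[5, 6]] = [2, 1]   # toy spikes from cells 5 and 6
print("decoded position:", pos[np.argmax(decode(counts, fields, tau=0.02))])
```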

References

1. Shin JD, Tang W, Jadhav SP. Dynamics of awake hippocampal-prefrontal replay for spatial learning and memory-guided decision making. Neuron. 2019 Dec 18;104(6):1110–25.

2. Pang R, Fairhall AL. Fast and flexible sequence induction in spiking neural networks via rapid excitability changes. Elife. 2019 May 13;8:e44324.

Fig. 1
figure bh

Simulation of hippocampal replay. a Raster plot of an example simulation with movement-related spiking and then spontaneous activity. Significant replay events marked in grey. Cells sorted by their order of firing along the completed trajectory. b Raster plot of the second replay event in a. c Linear Bayesian decoding of the spikes in b (time-bin shuffle, p < 0.001, R2 = 0.981)

P123 Combining fMRI with computational modeling to explore the influence of attention on human auditory cortex

Kabir Arora 1 , Isma Zulfiqar 2 , Hannah Schultheiss 3 , Federico De Martino 2 , Omer Faruk Gulban 4 , Agustin Lage-Castellanos 5 , Michelle Moerel 1

1 Maastricht University, Maastricht Center for Systems Biology, Maastricht, Netherlands

2 Maastricht University, Department of Cognitive Neuroscience, Maastricht, Netherlands

3 Maastricht University, Maastricht Science Programme, Maastricht, Netherlands

4 Brain Innovation B.V., Maastricht, Netherlands

5 Cuban Center for Neuroscience, Department of NeuroInformatics, Havana, Cuba

Email: k.arora@student.maastrichtuniversity.nl

Attention allows the human auditory system to preferentially process stimuli of behavioural or situational relevance. The neural mechanisms underlying frequency-based attention, the attention to a specific sound frequency, have been studied across species and spatial scales. At the neuronal level, electrophysiology studies in animals have shown attention-induced changes in the response properties (i.e., the receptive fields) of individual neurons [1]. The influence of frequency-based attention has also been studied in the human brain using, for example, functional MRI (fMRI). These studies uniformly showed increased responses to attended sounds [2,3], but provided no evidence for receptive-field modifications in the human auditory cortex similar to those observed in animal electrophysiology.

This study combined fMRI data collected during a frequency-based attention task, to measure attention-induced changes in auditory cortical responses, with computational modeling, to simulate the neuronal mechanisms underlying the fMRI data. Unexpectedly, fMRI showed a reduced response to attended sounds, which was strongest in cortical locations whose preferred frequency matched the attended one (Fig. 1). To explore the neuronal underpinnings of these observations, frequency-based attention was incorporated into a Wilson-Cowan cortical model (WCCM) of the auditory cortex [4] as a frequency-specific sharpening of neuronal receptive fields (at the population level) and a decreased response gain. These mechanisms were implemented by modifying the parameters defining excitatory-inhibitory WCCM connections. Model responses replicated the suppressed response to attended sounds seen with fMRI. While the observation of decreased responses with frequency-based attention conflicts with previous fMRI studies, both increased frequency selectivity and decreased gain have been described in animal studies [5,6]. Our results therefore suggest that the mechanisms underlying frequency-specific attention may depend on the experimental paradigm employed. They furthermore put forward reduced gain and increased frequency selectivity as candidate mechanisms underlying our fMRI findings, and future modeling work will aim to discriminate between (or determine the relative contributions of) these alternatives.

References

1. Fritz J, Shamma S, Elhilali M, Klein D. Rapid task-related plasticity of spectrotemporal receptive fields in primary auditory cortex. Nature neuroscience. 2003 Nov;6(11):1216–23.

2. Da Costa S, van der Zwaag W, Miller LM, Clarke S, Saenz M. Tuning in to sound: frequency-selective attentional filter in human primary auditory cortex. Journal of Neuroscience. 2013 Jan 30;33(5):1858–63.

3. Riecke L, Peters JC, Valente G, Poser BA, Kemper VG, et al. Frequency-specific attentional modulation in human primary auditory cortex and midbrain. NeuroImage. 2018 Jul 1;174:274–87.

4. Zulfiqar I, Moerel M, Formisano E. Spectro-temporal processing in a two-stream computational model of auditory cortex. Frontiers in computational neuroscience. 2020 Jan 22;13:95.

5. Otazu GH, Tai LH, Yang Y, Zador AM. Engaging in an auditory task suppresses responses in auditory cortex. Nature neuroscience. 2009 May;12(5):646–54.

6. David SV, Fritz JB, Shamma SA. Task reward structure shapes rapid receptive field plasticity in auditory cortex. Proceedings of the National Academy of Sciences. 2012 Feb 7;109(6):2144–9.

Fig. 1
figure bi

A Group results show a lower response to attended than non-attended sounds (FDR-corrected, q < 0.05 in black outlines). The white dashed line outlines Heschl’s gyrus. B Response to attended and non-attended sounds (in dashed and solid black lines), and their difference (blue), as a function of the distance (in octaves) between a voxel’s best frequency (BF) and the attended frequency

P124 Working memory stabilization by sinusoidal and noisy inputs

Nikita Novikov 1 , Boris Gutkin 2 , Denis Zakharov 1 , Victoria Moiseeva 1

1 HSE University, Centre for Cognition and Decision Making, Moscow, Russia

2 Ecole Normale Superieure, Paris, France

Email: nikknovikov@gmail.com

Working memory (WM) is the ability to retain information that is not directly perceived by sensory systems. A neural correlate of WM retention is sustained firing-rate elevation in cortical circuits, which is usually modelled using bistable systems with background and active steady states [1]. The active regime can also be metastable, such that the system slowly returns to the background state after a stimulus [2]. WM retention is accompanied by increased gamma-band power and coherence between cortical sites [3]. However, the functional role of gamma activity in WM is not fully understood.

Here we explore the stabilizing effect of gamma oscillations on a multi-circuit, single-object metastable WM model. Each circuit (Fig. 1A) is described by firing-rate equations. The system contains two local clusters, each with two circuit groups (Fig. 1B). Circuits within a cluster receive gamma-band input in the same phase. The groups C1 and C2 receive a common white-noise input, which mimics input from a larger WM representation; the circuits in I1 and I2 receive independent white-noise inputs. Circuits are linked via excitatory connections, fast within a cluster and either fast or slow between clusters. The results are shown in Fig. 1C. Gamma input increased the post-stimulus activity duration, as well as the duration difference between the C and I groups. In the model with fast (but not slow) inter-cluster connections, these effects were more prominent when the gamma inputs to the two clusters had the same phase. We thus demonstrated that gamma input can selectively stabilize WM-related activity in circuits that participate in a larger WM network, and that such stabilization is more efficient when long-range connections are fast and local gamma generators are synchronized.
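The basic ingredient can be sketched as a single bistable firing-rate circuit driven by a transient stimulus, a gamma-band input and white noise (all parameter values below are illustrative assumptions, not the fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, tau = 0.1, 3000.0, 10.0                  # ms
t = np.arange(0, T, dt)
f = lambda x: 1 / (1 + np.exp(-4 * (x - 1)))    # sigmoidal gain function
w, sigma = 2.0, 0.05                            # recurrent weight, noise strength
gamma = 0.2 * np.sin(2 * np.pi * 0.06 * t)      # 60 Hz drive (0.06 cycles/ms)
stim = 1.0 * ((t > 200) & (t < 400))            # transient stimulus

r = np.zeros(t.size)
for i in range(1, t.size):
    noise = sigma / np.sqrt(dt) * rng.normal()  # white-noise input
    inp = w * r[i - 1] + stim[i] + gamma[i] + noise
    r[i] = r[i - 1] + dt / tau * (-r[i - 1] + f(inp))

active = (t > 400) & (r > 0.5)                  # post-stimulus active state
print("activity persists for", active.sum() * dt, "ms after stimulus offset")
```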

Acknowledgements

This paper is an output of a research project implemented as part of the Basic Research Program at the National Research University Higher School of Economics (HSE University). BSG acknowledges support from CNRS, INSERM, ANR-17-EURE-0017 and ANR-10-IDEX-0001–02.

References

1. Amit DJ, Brunel N. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral cortex (New York, NY: 1991). 1997 Apr 1;7(3):237–52.

2. Lim S, Goldman MS. Balanced cortical microcircuitry for maintaining information in working memory. Nature neuroscience. 2013 Sep;16(9):1306–14.

3. Kornblith S, Buschman TJ, Miller EK. Stimulus load and oscillatory activity in higher cortex. Cerebral Cortex. 2016 Sep 1;26(9):3772–84.

Fig. 1
figure bj

A Single-circuit system. E/I – excitatory/inhibitory populations. B Multi-circuit system. Lines represent symmetrical excitatory connections. C Duration of post-stimulus activity (statistics of 25 runs). Horizontal dash – the median, thick vertical line – two middle quartiles. NONE/1CLUST/SYNC/ANTI – absent/1-cluster/in-phase/anti-phase input

P125 Phase and amplitude coupling of high-frequency oscillations in seizure records

Sabrina Natali Guisande Donadio 1 , Monserrat Pallares Di Nunzio 1 , Mauro Granado 1 , Fernando Montani 1

1 Conicet - UNLP, IFLP, La Plata, Argentina

Email: guisande.natali@fisica.unlp.edu.ar

Epilepsy is a chronic neurological disease that affects 1 in 200 people. Thirty percent of those affected respond poorly to pharmacological treatment; this form is called refractory epilepsy. In such cases, surgical intervention is indicated, and its success depends on finding the cortical area responsible for generating seizures, called the epileptogenic zone. In this work, electrical recordings of this area were studied in patients with refractory epilepsy in order to discern the oscillatory mechanisms underlying the epileptic process. Neuronal activity was studied in basal (far from the seizure) and preictal (immediately before the seizure) periods through recordings from intracerebral electrodes implanted in patients, achieving high resolution of the local field potential. The intrinsic dynamics of the two types of records were then discerned using a time-window analysis and by studying the amplitude and phase couplings of each signal. The causality of these records was also quantified with information-theoretic tools and the Bandt-Pompe permutation methodology, which showed an increase in the information carried by brain oscillations in the high-frequency range.

P126 Quantification of the network strength in neural anticipated and delayed synchronization

Monserrat Pallares Di Nunzio 1 , Sabrina Natali Guisande Donadio 2 , Mauro Granado 3 , Fernando Montani 3

1 Conicet - UNLP, IFLP, La Plata, Argentina

Email: monsepallaresdinunzio@gmail.com

The phenomenon of synchronization between two or more asymmetrically coupled brain areas is highly relevant for understanding the mechanisms and functions of the cerebral cortex. Anticipatory synchronization (AS) refers to the situation in which the receiving system, the 'slave', synchronizes with the future dynamics of the sending system, the 'master'. In contrast, delayed synchronization (DS) represents the intuitively opposite case. In this work we investigate and compare the connection strength between simulated neural networks in the AS and DS regimes, making use of causal information and calculating the Jensen-Shannon divergence through a symbolic formalism of ordinal patterns. By studying multiple temporal scales, we observed that the Jensen-Shannon divergence is larger for the AS regime than for the DS regime, which means that AS has a lower connection strength than DS. Furthermore, this formalism allows us to successfully discern the dynamical characteristics that differ between these two synchronization cases.
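The symbolic machinery can be sketched as follows (a generic Bandt-Pompe implementation with embedding dimension 3 and toy signals; not the study's networks):

```python
import numpy as np
from itertools import permutations

def ordinal_dist(x, d=3):
    """Bandt-Pompe distribution over ordinal patterns of embedding dimension d."""
    pats = {p: i for i, p in enumerate(permutations(range(d)))}
    counts = np.zeros(len(pats))
    for i in range(len(x) - d + 1):
        counts[pats[tuple(np.argsort(x[i:i + d]))]] += 1
    return counts / counts.sum()

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two pattern distributions."""
    p, q = (p + eps) / (p + eps).sum(), (q + eps) / (q + eps).sum()
    m = 0.5 * (p + q)
    return 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(q * np.log(q / m))

rng = np.random.default_rng(0)
x = np.sin(0.1 * np.arange(5000)) + 0.5 * rng.normal(size=5000)   # toy "master"
y = np.roll(x, 7)                                                 # toy lagged "slave"
print(jsd(ordinal_dist(x), ordinal_dist(y)))
```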

P127 From structure to dynamics in combinatorial threshold linear networks

Caitlin Lienkaemper 1 , Carina Curto 1 , Katherine Morrison 2

1 Pennsylvania State University, Department of Mathematics, State College, PA, United States of America

2 University of Northern Colorado, Mathematical Sciences, Greely, CO, United States of America

Email: cul434@psu.edu

Neural circuits display nonlinear dynamics. For instance, central pattern generators display internally generated oscillations. Multistable systems are used as models of memory storage and retrieval. The structure of network connectivity is a key feature determining network dynamics, but many questions remain as to how structure shapes activity. We study the relationship between structure and dynamics in a simple model of neural activity, combinatorial threshold linear networks (CTLNs), whose activity is governed by a system of threshold-linear ordinary differential equations determined by an underlying directed graph (Fig. 1A). Like real networks, CTLNs display the full spectrum of nonlinear behavior, including multistability, limit cycles, and chaos.

Much is known about fixed points of CTLNs, but much less is known about their dynamic attractors [1–3]. On the one hand, the activity of a symmetric TLN always converges to a stable fixed point [2]. On the other, CTLNs whose underlying graph has no bidirectional edges or sinks must have persistent dynamic activity [3]. However, many CTLNs outside this family also exhibit dynamic attractors, and dynamic attractors and stable fixed points can coexist in a network. Networks with superficially similar structure can have wildly different dynamics.

We give some of the first results that go beyond fixed points and relate the structure of a CTLN to its dynamics. We focus on a structural relationship, graphical domination, and show that if one neuron graphically dominates another, then the firing rate of the dominating neuron eventually becomes greater than that of the dominated neuron. This constrains trajectories of the dynamical system. Using this fact, we show that many CTLNs do not have persistent dynamic activity. We prove that if a CTLN's underlying graph is a directed acyclic graph, neural activity must flow through the graph and eventually settle at a stable fixed point (Fig. 1B). This is the first proof guaranteeing convergence of the activity of a TLN to a stable fixed point outside the symmetric case.

We also construct a family of sequential memory networks. Each network consists of m layers of n neurons connected cyclically (Fig. 1C). The network has mn limit cycles, each corresponding to a sequence of neurons. Different initial conditions lead to different limit cycles (Fig. 1C). Our domination result allows the network to “remember its place” once it comes back around to a previously visited layer. These networks have a large capacity to encode dynamic patterns via limit cycles, giving a richer set of memory patterns than stable fixed points. Thus, these networks can model sequential memories, rhythms, or central pattern generators.
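
For readers unfamiliar with the model, the sketch below integrates the standard CTLN equations dx/dt = -x + [Wx + theta]_+ with the usual parameter choices (eps = 0.25, delta = 0.5, theta = 1) from the CTLN literature; the integration scheme and example graph are our own illustrative choices.

import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    # Standard CTLN weights: W[i, j] = -1 + eps if j -> i is an edge,
    # -1 - delta otherwise, with zeros on the diagonal
    W = np.where(adj == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate_ctln(adj, x0, theta=1.0, dt=1e-3, T=50.0):
    # dx/dt = -x + [W x + theta]_+ integrated with forward Euler
    W = ctln_weights(adj)
    x = np.array(x0, dtype=float)
    traj = []
    for _ in range(int(T / dt)):
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        traj.append(x.copy())
    return np.array(traj)

# A 3-cycle (no sinks, no bidirectional edges) yields a limit cycle
# in which the three firing rates peak sequentially
adj = np.array([[0, 0, 1],
                [1, 0, 0],
                [0, 1, 0]])   # adj[i, j] = 1 means edge j -> i
traj = simulate_ctln(adj, x0=[0.2, 0.0, 0.0])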

Acknowledgements

CL was supported by NSF fellowship grant DGE-1255832.

References

1. Curto C, Geneson J, Morrison K. Fixed points of competitive threshold-linear networks. Neural computation. 2019 Jan 1;31(1):94–155.

2. Hahnloser RH, Seung HS, Slotine JJ. Permitted and forbidden sets in symmetric threshold-linear networks. Neural computation. 2003 Mar 1;15(3):621–38.

3. Morrison K, Degeratu A, Itskov V, Curto C. Diversity of emergent dynamics in competitive threshold-linear networks: a preliminary report. arXiv preprint arXiv:1605.04463. 2016 May 14.

Fig. 1

A The equations governing a CTLN. B If the underlying graph of a CTLN is a directed acyclic graph, then all trajectories of the network approach a stable fixed point where the only active neuron is a sink. C We show the structure of our sequential memory networks. A network with n layers with m neurons per layer has mn limit cycles

P128 TLN counters, position trackers and central pattern generators

Juliana Londono-Alvarez 1 , Carina Curto 1 , Katie Morrison 2

1 Pennsylvania State University, Department of Mathematics, State College, PA, United States of America

2 University of Northern Colorado, School of Mathematical Sciences, Greely, CO, United States of America

Email: jbl5958@psu.edu

Threshold linear networks (TLNs) are recurrent networks whose neuron dynamics are prescribed by a system of differential equations with threshold nonlinearities. The choice of the ReLU function [·]+ as the threshold makes the system piecewise linear in the state space, providing a very simple yet rich framework. A special case of TLNs with uniform synaptic weights was first introduced in [1] and provides a purely combinatorial framework in which the dynamics of the network depend solely on the connectivity of the associated graph (hence the name Combinatorial Threshold Linear Networks, CTLNs), whereby changing only the edges of the graph gives rise to rich dynamics (multistability, chaos, quasiperiodicity). Moreover, since the CTLN model consists of simple perceptron-like units, it does not require the neurons to oscillate intrinsically, further simplifying the assumptions placed on the neurons.

This very simple mathematical setup makes CTLNs particularly suitable for engineering circuits that perform common neural functions, while still allowing a lot of flexibility in the kind of dynamics that can be observed. The CTLN model thus constitutes a powerful unifying framework for modelling a wide range of phenomena in neuroscience, in which various neural computations can be obtained as graph variations. Our aim here is to present a few cases that exemplify how connectivity alone gives rise to a diverse range of important neural functions (Fig. 1).

In the first example, in panel A, we present a TLN that counts the number of pulse inputs it has received via the position of the attractor in a linear chain of attractor states. More precisely, when the network receives a uniform input, it moves to the next state in the chain, indicating an increase in the count. Activity is maintained in this state indefinitely until future pulses are provided to the system, allowing the number of pulses to be tracked by the attractor position in the chain. This network is a very simple alternative to the neural integrators used to maintain a count of input cues in working memory.

The network in panel B differs from A only in the direction of the bottom arrows. This small change allows the network to count signed pulses, since it can now also travel back to the previous attractor in the chain. This type of signed count is valuable for tracking the relative number of left and right cues, as in various two-alternative forced-choice tasks.

Finally, in panel C we exhibit a CTLN capable of producing two coexisting quadrupedal gaits: bound and trot. In fact, all quadrupedal gaits presented in [2] can be reproduced by CTLNs, and moreover it is possible to have at least three coexisting gaits in a single network (not pictured here) without resorting to synaptic plasticity (i.e., changes in the synaptic weights of the network). This constitutes a new outlook on central pattern generators (CPGs), where different gaits correspond to different attractors in the same network that differ only by initial conditions (equivalently, by the stimulation of a specific neuron) and not by a parameter bifurcation.
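
The following sketch illustrates only the pulse mechanism: a brief uniform increment of the external drive theta, which in the counter networks of panels A and B pushes the activity to the next attractor in the chain. The example weight matrix is a placeholder, not the counter graph of Fig. 1A.

import numpy as np

def simulate_tln_with_pulses(W, theta0, pulse_times, pulse_amp=0.5,
                             pulse_dur=0.5, dt=1e-3, T=60.0):
    # theta(t) = baseline drive plus a brief uniform excitatory pulse;
    # in the counter networks each pulse advances the attractor one step
    x = np.zeros(W.shape[0])
    traj = []
    for k in range(int(T / dt)):
        t = k * dt
        theta = theta0 + pulse_amp * any(t0 <= t < t0 + pulse_dur
                                         for t0 in pulse_times)
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        traj.append(x.copy())
    return np.array(traj)

# Illustration only: a small inhibition-dominated weight matrix
# (not the chain-of-2-cliques counter graph of Fig. 1A)
W = np.array([[ 0.00, -1.50, -0.75],
              [-0.75,  0.00, -1.50],
              [-1.50, -0.75,  0.00]])
traj = simulate_tln_with_pulses(W, theta0=1.0, pulse_times=[10.0, 25.0, 40.0])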

References

1. Curto C, Morrison K. Pattern completion in symmetric threshold-linear networks. Neural computation. 2016 Nov 23;28(12):2825–52.

2. Golubitsky M, Stewart I, Buono PL, Collins JJ. A modular network for legged locomotion. Physica D: Nonlinear Phenomena. 1998 Apr 15;115(1–2):56–72.

Fig. 1

A Chain of 2-cliques each corresponding to a fixed point of the network. B Chain of 2-cliques with opposite cycling connections. Rightward pulses in black and leftward pulses in red. C An 8-neuron network capable of producing the quadrupedal gaits bound and trot. Initializing the activity in a grey node produces bound and initializing the activity in a yellow node produces trot

P129 Personalized cortical circuit modelling implicates synaptic dynamics in functional variation across individuals

Rachel Cooper 1 , Murat Demirtas 2 , Joshua Burt 1 , Amber Howell 3 , Jie Lisa Ji 3 , Alan Anticevic 4 , John Murray 4

1 Yale University, Physics, New Haven, CT, United States of America

2 Universitat Pompeu Fabra, Neuroscience, Barcelona, Spain

3 Yale University, Interdepartmental Neuroscience Program, New Haven, CT, United States of America

4 Yale University, Psychiatry, New Haven, CT, United States of America

Email: rachel.a.cooper@yale.edu

Popular for its ability to non-invasively image the macroscopic anatomical and functional connections in the brain, functional magnetic resonance imaging (fMRI) of the human cortex has revealed promising results concerning the reliability and stability of individual-level cortical connectomics. However, the synapse-level mechanisms underlying inter-individual variability, such as the respective roles of long-range white-matter structural connectivity vs. cortical physiological dynamics, are not well understood. One approach to bridging these mechanistic gaps is to use biophysically based neural circuit models of large-scale brain dynamics, which can be quantitatively fit to empirical neuroimaging data. In this study we have utilized a circuit model [1] with neurobiologically interpretable parameters to model functional connectivity (FC) at the individual level in healthy subjects, finding that such a model is able to capture differences between subjects.

We generated parcellated, resting-state FC matrices for 879 healthy adults and employed a cortical circuit model developed by Demirtas et al. (Fig. 1), whose free parameters represent synapse-level activity, allowing the macroscopic inter-individual variations apparent in fMRI scans to be understood in terms of the underlying cellular architecture [1]. A key advantage of this model lies in its assumption that local circuit properties are heterogeneous across the cortex, following a large-scale gradient related to cortical hierarchy and indexed by the T1w/T2w structural MRI measure [1,2]. Using this low-dimensional circuit model, we generate simulated FC matrices that are optimized to maximally fit the respective empirical data at the level of individual subjects as well as the group average.

Our circuit model, with hierarchical heterogeneity in local circuit properties, is sensitive to subject-level differences in FC. Allowing a hierarchical, heterogeneous distribution of weight parameters across the cortex substantially improves the model's ability to fit empirical FC data, specifically by adding the flexibility to capture leading components of inter-individual variation. To verify that these improvements in fit meaningfully capture inter-individual variation, we visualize the leading principal components of the empirical FC matrices across subjects. We use these principal components of inter-individual variation to develop a novel method to quantify a model's ability to capture inter-subject variability, and propose extensions to the model accordingly. Further, model parameters related to cortical physiological dynamics explained the majority of variation across subjects, while subject-level structural connectivity failed to capture significant variation. Thus, our model supports the notion that microcircuit properties related to cortical physiology and dynamics contribute to neural variability across individuals in healthy populations.
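
As a rough caricature of the approach (not the dynamic mean-field model of [1]), the sketch below scales a local recurrent weight along a synthetic T1w/T2w gradient and computes a simulated FC matrix from noise-driven threshold-linear E/I nodes; all parameter values and the random connectome are illustrative assumptions.

import numpy as np

def hierarchical_weights(t1t2, w_lo=0.2, w_hi=0.8):
    # Map each parcel's T1w/T2w value onto a local recurrent weight;
    # inverting the map makes low-myelin association areas most recurrent
    h = (t1t2.max() - t1t2) / (t1t2.max() - t1t2.min())
    return w_lo + (w_hi - w_lo) * h

def simulate_fc(SC, w_EE, G=0.1, w_EI=1.0, dt=1e-3, T=20.0, sigma=0.05):
    # One threshold-linear E/I pair per parcel, coupled through SC;
    # the correlation matrix of the E rates stands in for simulated FC
    n = SC.shape[0]
    SC = SC / SC.sum(axis=1, keepdims=True)
    rng = np.random.default_rng(0)
    rE, rI, trace = np.zeros(n), np.zeros(n), []
    for _ in range(int(T / dt)):
        drive_E = w_EE * rE - w_EI * rI + G * (SC @ rE) + 0.5
        drive_I = rE - rI + 0.2
        rE = rE + dt * (-rE + np.maximum(drive_E, 0.0)) \
             + sigma * np.sqrt(dt) * rng.normal(size=n)
        rI = rI + dt * (-rI + np.maximum(drive_I, 0.0))
        trace.append(rE.copy())
    return np.corrcoef(np.array(trace)[int(1 / dt):].T)

# Toy example: random connectome plus a synthetic T1w/T2w gradient
n = 20
SC = np.abs(np.random.default_rng(1).normal(size=(n, n)))
np.fill_diagonal(SC, 0.0)
fc = simulate_fc(SC, w_EE=hierarchical_weights(np.linspace(1.0, 2.0, n)))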

References

1. Demirtaş M, Burt JB, Helmer M, Ji JL, Adkinson BD, et al. Hierarchical heterogeneity across human cortex shapes large-scale neural dynamics. Neuron. 2019 Mar 20;101(6):1181–94.

2. Burt JB, Demirtaş M, Eckner WJ, Navejar NM, Ji JL, et al. Hierarchy of transcriptomic specialization across human cortex captured by structural neuroimaging topography. Nature neuroscience. 2018 Sep;21(9):1251–9.

Fig. 1

Parcels are modelled as circuits containing an excitatory (E) unit and an inhibitory (I) unit. The E and I units interact locally as shown, and E units interact across the cortex according to the G parameter and the structural connectivity (SC) between parcels i and j. The weights wEI,i and wEE,i are scaled across the cortex according to cortical hierarchy based on linearized T1w/T2w maps

P130 Computational circuit mechanisms underlying thalamic control of attention

Qinglong Gu 1 , Norman Lam 2 , Ralf Wimmer 2 , Michael Halassa 2 , John Murray 3

1 Yale University, New Haven, CT, United States of America

2 Massachusetts Institute of Technology, MA, United States of America

3 Yale University, Psychiatry, New Haven, CT, United States of America

Email: qinglong.gu@yale.edu

The thalamus is a key brain structure engaged in attentional functions, such as selectively amplifying task-relevant signals of one sensory modality while filtering distractors of another. Whether the architectural features of thalamic circuitry offer a unique locus for attentional control is unknown. Here, we developed a biophysically grounded thalamic circuit model comprising excitatory thalamocortical and inhibitory reticular neurons, which captures characteristic neurophysiological observations from task-engaged animals (Fig. 1). Our results provide insights into the following questions.

We found that top-down attentional control inputs onto reticular neurons effectively modulate thalamic gain and enhance downstream readout, improving performance across detection, discrimination, and cross-modal task paradigms. In addition, our simulations and theoretical analyses reveal that the thalamic reticular nucleus (TRN) is a much more potent site for top-down control than the thalamocortical neurons. This provides mechanistic insight and a functional explanation for the experimental finding of an indirect, TRN-mediated pathway for top-down attentional control [1]. Both bottom-up and top-down inputs increase firing rates in thalamus, raising the question of how they are disambiguated in downstream readout. Our analyses reveal that heterogeneity of thalamic response patterns plays an essential role in attentional enhancement of stimulus information. We examined neuronal recordings from auditory thalamus and primary auditory cortex in mice during a cross-modal attention task, and found a similar geometrical structure in the population activity patterns (i.e., coding and readout axes).

It has been an open question whether the attentional gain modulation observed in thalamus is generated locally within the thalamic circuit or whether such signals could instead be inherited from sensory cortex via corticothalamic feedback projections. We analyzed spiking activity from simultaneously recorded auditory thalamus (MGB) and primary auditory cortex (A1) during task performance, and our results revealed that thalamic gain modulation is not explained by corticothalamic feedback. Furthermore, auditory cortex activity patterns show signatures of the readout strategy predicted by the model for decoding information from multiplexed bottom-up and top-down modulations. Moreover, our modeling indicates that strong recurrent excitation degrades the separability of bottom-up from top-down signals in population firing patterns.

This work should be of broad appeal to the OCNS audience.

Our model makes specific predictions on how distinct synaptic-level perturbations could alter circuit dynamics and attentional behaviors, allowing direct testing in animal models using optogenetics and electrophysiological recordings. In addition, our well-constrained thalamic circuit model in the awake regime can be further extended to study how distributed thalamo-cortical networks perform cognitive computations. More generally, we hope such studies will encourage the study of circuit models to make dissociable, testable predictions across circuit and behavioral levels.
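
A minimal two-population rate caricature of the modeled TC-RE loop, in which the top-down signal targets the reticular population; the weights and time constant below are illustrative assumptions, not those of the biophysical model.

import numpy as np

def simulate_tc_re(stim, td_to_re, w_tc_re=1.0, w_re_tc=1.5,
                   tau=0.01, dt=1e-3, T=2.0):
    # TC excites RE; RE feeds inhibition back onto TC; the top-down
    # signal targets RE, the model's preferred site of attentional control
    tc, re = 0.0, 0.0
    rates = []
    for _ in range(int(T / dt)):
        tc += dt / tau * (-tc + max(stim - w_re_tc * re, 0.0))
        re += dt / tau * (-re + max(w_tc_re * tc + td_to_re, 0.0))
        rates.append(tc)
    return np.array(rates)

# Raising the top-down drive to RE suppresses the TC response to a fixed
# stimulus; lowering it disinhibits (amplifies) the attended input
for td in (0.0, 0.5, 1.0):
    print(td, simulate_tc_re(stim=1.0, td_to_re=td)[-1])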

References

1. Nakajima M, Schmitt LI, Halassa MM. Prefrontal cortex regulates sensory filtering through a basal ganglia-to-thalamus pathway. Neuron. 2019 Aug 7;103(3):445–58.

Fig. 1

Circuit mechanisms for top-down attentional modulation. Major topics of study include the role of heterogeneous population coding for bottom-up and top-down signals in thalamus; the differential potency of RE vs. TC cells as sites of top-down control; and inter-regional communication between thalamus and downstream cortical circuits

P131 Mixed vine copula flows for flexible modelling of neural dependencies

Lazaros Mitskopoulos 1 , Arno Onken 1

1 University of Edinburgh, School of Informatics, Edinburgh, United Kingdom

Email: lazarosmits@gmail.com

The advent of large high-dimensional datasets in neuroscience has been an important milestone for advancing our understanding of neural information processing and improving the performance of brain-computer interfaces. However, most existing methods of analysis fall short of capturing the complexity of interactions within the concerted population activity. Novel techniques need to address this complexity and be applicable in a wide range of neural data analysis scenarios. In this work, we employed copulas, which disentangle single-neuron statistics from the dependency structures within the population and evade the curse of dimensionality with pair copula constructions [1,2]. This approach makes it possible to study the shapes of dependency structures between variables with vastly different statistics (Fig. 1A), e.g., discrete spiking activity and continuous behavioural response variables like running speed. We adopted a fully non-parametric approach for the single-neuron margins and copulas, since parametric copula families impose strong assumptions on the shape of the stochastic relationships, which can lead to misspecification, especially in the case of discrete variables. Both copula and margin densities were estimated using Neural Spline Flows (NSF) [3]. Overall, NSFs performed better than existing non-parametric estimators when trained on artificial data with known dependency structures (Fig. 1B), while allowing for easier sampling and more flexibility. Finally, we demonstrate our framework's ability to capture non-symmetric tail dependencies (Fig. 1C) in deconvolved spiking responses from calcium recordings of neurons in the rodent primary visual cortex during a visual learning task [4].
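
As a schematic of the copula decomposition (with a Gaussian copula standing in as a simple parametric stand-in for the neural spline flow estimator of the study), the sketch below uniformizes a continuous and a discrete margin and measures their dependence in copula space; the toy data and parameters are assumptions.

import numpy as np
from scipy import stats

def distributional_transform(x, rng):
    # Randomized transform for discrete margins:
    # u = F(x-) + V * (F(x) - F(x-)), V ~ Uniform(0, 1),
    # which makes the transformed variable exactly uniform
    x = np.asarray(x)
    vals, cnt = np.unique(x, return_counts=True)
    F = np.cumsum(cnt) / len(x)
    F_left = np.concatenate([[0.0], F[:-1]])
    idx = np.searchsorted(vals, x)
    return F_left[idx] + rng.uniform(size=len(x)) * (F[idx] - F_left[idx])

rng = np.random.default_rng(1)
speed = rng.gamma(2.0, 1.0, size=2000)       # continuous behavioural margin
counts = rng.poisson(1.0 + speed)            # dependent discrete spike counts

u_speed = stats.rankdata(speed) / (len(speed) + 1)   # PIT, continuous margin
u_count = distributional_transform(counts, rng)

# Dependence measured on the uniformized margins via a Gaussian copula
z = stats.norm.ppf(np.clip(np.column_stack([u_speed, u_count]),
                           1e-6, 1 - 1e-6))
print("copula correlation:", np.corrcoef(z.T)[0, 1])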

References

1. Aas K, Czado C, Frigessi A, Bakken H. Pair-copula constructions of multiple dependence. Insurance: Mathematics and economics. 2009 Apr 1;44(2):182–98.

2. Onken A, Panzeri S. Mixed vine copulas as joint models of spike counts and local field potentials. In: Advances in Neural Information Processing Systems 2016 (pp. 1325–1333).

3. Durkan C, Bekasov A, Murray I, Papamakarios G. Neural spline flows. Advances in Neural Information Processing Systems. 2019;32:7511–22.

4. Pakan JM, Currie SP, Fischer L, Rochefort NL. The impact of visual cues, reward, and motor feedback on the representation of behaviorally relevant spatial locations in primary visual cortex. Cell reports. 2018 Sep 4;24(10):2521–8.

Fig. 1

A Joint probability density function with continuous and discrete margins is decomposed into a copula and separate margins. B NSFs outperform other non-parametric estimators on artificial data (inset). C (Left) Average activity of 5 rodent V1 neurons as a function of position of mouse in a virtual corridor. (Right) Flow vine copulas extracted from same group of neurons (5D vine)

P132 Whole-brain modelling suggests mechanisms behind pro-segregation effects of cholinergic neuromodulation

Carlos Coronel 1 , Rodrigo Cofré 2 , Carsten Gießing 3 , Patricio Orio 4

1 Universidad de Valparaíso, Centro Interdisciplinario de Neurociencia de Valparaíso (CINV), Valparaíso, Chile

2 Universidad de Valparaíso, Centro de Investigación y Modelamiento de Fenómenos Aleatorios, Valparaíso, Chile

3 Carl von Ossietzky University Oldenburg, Department of Psychology, Oldenburg, Germany

4 Universidad de Valparaíso, Instituto de Neurociencia, Valparaiso, Chile

Email: carlos.coronel@postgrado.uv.cl

Segregation and integration are two fundamental principles of brain organization [1,2]. While segregation is necessary for specialized processing of information, integration allows the coordination of the activity of several brain regions to produce a coherent behavioral outcome. Recent studies show that neuromodulatory systems dynamically promote transitions between different functional states, starting from a static connectome [2]. Specifically, a recent framework proposed that the cholinergic and noradrenergic systems promote segregated and integrated brain states, respectively [2]. Here, we combined empirical fMRI recordings with computational modeling to gain insights into the biophysical mechanisms involved in the pro-segregation effects of the cholinergic system. The empirical fMRI data comprise recordings under the effects of nicotine in healthy smokers, both in resting-state and in a Go/No-Go attentional task [3]. We built functional connectivity (FC) matrices from the fMRI BOLD signals, and quantified integration and segregation using tools from graph theory [4]. We showed that nicotine has a pro-segregation effect (increased transitivity and decreased global efficiency) in the task block, but not in the resting-state. Then, we used a whole-brain neural mass model [5], interconnected using a human connectome and coupled to a hemodynamic function, to simulate fMRI BOLD-like signals. We simulated the effects of nicotine by decreasing the global coupling and the feedback inhibition of the model, and then fitted the simulated FC matrices to the empirical ones. The model fitted to the empirical data showed an increase in transitivity, a decrease in global efficiency, and a loss of modular organization under the effects of nicotine. Our model therefore reproduces the empirical results, confirming the pro-segregation effects of nicotine, and provides a biophysical mechanism for simulating them. This framework constitutes a new set of tools and ideas to test how neural gain mechanisms mediate the balance between functional integration and segregation in the brain.
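
A minimal sketch of the graph-theoretic quantification, assuming a proportional threshold to binarize the FC matrix; the density value is an illustrative choice, not necessarily the one used in the study.

import numpy as np
import networkx as nx

def segregation_integration(fc, density=0.15):
    # Threshold the FC matrix to a binary graph at a fixed edge density,
    # then quantify segregation (transitivity) and integration
    # (global efficiency) as in standard graph-theoretic analyses
    fc = fc.copy()
    np.fill_diagonal(fc, 0.0)
    upper = fc[np.triu_indices(fc.shape[0], k=1)]
    thr = np.quantile(upper, 1.0 - density)
    G = nx.from_numpy_array((fc >= thr).astype(int))
    return nx.transitivity(G), nx.global_efficiency(G)

# A pro-segregation effect would appear as higher transitivity and lower
# global efficiency in the nicotine-task condition than in control
fc = np.corrcoef(np.random.default_rng(0).normal(size=(90, 500)))
print(segregation_integration(fc))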

References

1. Cohen JR, D'Esposito M. The segregation and integration of distinct brain networks and their relationship to cognition. Journal of Neuroscience. 2016 Nov 30;36(48):12083–94.

2. Shine JM. Neuromodulatory influences on integration and segregation in the brain. Trends in cognitive sciences. 2019 Jul 1;23(7):572–83.

3. Gießing C, Thiel CM, Alexander-Bloch AF, Patel AX, Bullmore ET. Human brain functional network changes associated with enhanced and impaired attentional task performance. Journal of Neuroscience. 2013 Apr 3;33(14):5903–14.

4. Rubinov M, Sporns O. Complex network measures of brain connectivity: uses and interpretations. Neuroimage. 2010 Sep 1;52(3):1059–69.

5. Jansen BH, Rit VG. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biological cybernetics. 1995 Sep;73(4):357–66.

P133 Emergence of high order interactions in a model of neural oscillators

Fernando Lehue 1 , Pedro Mediano 2 , Fernando Rosas 3 , Patricio Orio 4

1 Universidad de Valparaíso, Ciencias, Valparaíso, Chile

2 University of Cambridge, Department of Psychology, Cambridge, United Kingdom

3 Imperial College London, Department of Medicine, London, United Kingdom

4 Universidad de Valparaíso, Instituto de Neurociencia, Valparaiso, Chile

Email: fernando.lehue@postgrado.uv.cl

The complexity of brain dynamics has been approached from several points of view, in particular using measures coming from dynamical systems and information theory. Several studies have proposed the existence and importance of chaotic regimes in brain activity, and chaotic oscillators have been used to simulate brain data due to their rich dynamical repertoire. On the other hand, plenty of measures have been used for assessing complexity in real neural data and theoretical models. In particular, information theory provides tools for defining synergy: the information contained in the interactions of the system is higher when looking at the system as a whole than as separated parts, i.e., there are more high-order than low-order interactions. Using the O-information, a measure that builds on multivariate extensions of the mutual information, synergy was assessed in fMRI data and shown to decrease with aging. In this work, we ask how the dynamical and information-theoretic views on the complexity of brain signals are related. For this purpose, we studied the emergence and quality of statistical high-order interdependencies in small networks of homogeneous neural oscillators, assessed through the calculation of the O-information. The analysis consisted of a survey over the possible coupling configurations of 2- and 3-node oscillator networks, varying the inter- and intra-node connection parameters, and the calculation of the Lyapunov spectrum and the O-information for distinguishing dynamical and information-theoretic regimes, respectively. In addition, we performed a search over the possible 3-node configurations using a genetic algorithm, looking for the best connectivity matrix in terms of synergy, i.e., minimizing the O-information. We found that the simple limit-cycle dynamical regimes were redundant, i.e., showed positive O-information, while dynamical regimes with non-integer attractor dimension showed negative O-information, suggesting synergy. Higher-dimensional tori of integer dimension (quasiperiodic regimes) showed mixed results, being redundant in some cases and synergistic in others. However, when the interdependencies between the variables were broken through a random time shifting of the data points, the O-information in the quasiperiodic (toroidal) regimes was maintained, making the synergy in these regimes non-significant. On the other hand, the O-information in time-shifted data from chaotic series dropped to zero. These results were confirmed using simulations of simple chaotic systems such as the Lorenz equations. In the case of three oscillators, the optimal synergistic configuration among nodes presented one independent node influencing the other two, and the induced dynamical regime was chaotic. A parameter sweep in the vicinity of the optimum also showed a correspondence between synergy and higher-dimensional dynamics. Our results invite further numerical and theoretical approaches for understanding the relation between dynamical complexity and information-theoretic measures, especially for oscillatory systems. Also, the relationship between synchronization and redundancy may underlie previous results related to aging.
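
For concreteness, a Gaussian-approximation estimate of the O-information is sketched below (the study's estimator may differ); the toy data illustrate the sign convention: common-cause data give positive values (redundancy) and additive interactions give negative values (synergy).

import numpy as np

def gaussian_entropy(cov):
    # Differential entropy of a multivariate Gaussian with covariance cov
    cov = np.atleast_2d(cov)
    d = cov.shape[0]
    return 0.5 * (d * np.log(2 * np.pi * np.e) + np.log(np.linalg.det(cov)))

def o_information(X):
    # O-information under a Gaussian approximation:
    # Omega = (n - 2) * H(X) + sum_i [H(X_i) - H(X_without_i)]
    n = X.shape[1]
    cov = np.cov(X, rowvar=False)
    omega = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        omega += gaussian_entropy(cov[i, i]) \
                 - gaussian_entropy(cov[np.ix_(rest, rest)])
    return omega

rng = np.random.default_rng(0)
z = rng.normal(size=10000)
redundant = np.column_stack([z + 0.3 * rng.normal(size=10000)
                             for _ in range(3)])
x1, x2 = rng.normal(size=(2, 10000))
synergistic = np.column_stack([x1, x2, x1 + x2 + 0.3 * rng.normal(size=10000)])
print(o_information(redundant), o_information(synergistic))  # > 0, < 0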

Acknowledgements

We thank the Dirección de Investigación of the Universidad de Valparaíso for supporting FL. We thank the Agencia Nacional de Investigación y Desarrollo for supporting PO with grants Fondecyt 1211750, FB0008 and ICN09-022.

P134 Correlation structure between brain regions in working-memory tasks: fMRI fractal and spectral analysis

Anna Ceglarek 1 , Jeremi Ochab 2 , Marcin Wątorek 3 , Paweł Oświęcimka 3

1 Jagiellonian University, Kraków, Poland

2 Jagiellonian University, Institute of Theoretical Physics, Kraków, Poland

3 Institute of Nuclear Physics Polish Academy of Sciences, Kraków, Poland

Email: jeremi.ochab@uj.edu.pl

Measuring neural activity with fMRI while a person is memorising and retrieving information can provide insight into the cognitive processes behind short-term memory distortions [1] or, in short, false memories. Functional activations have been analysed with a range of methods; however, they have a non-trivial auto-correlation and cross-correlation structure and are notoriously challenging to analyse due to their very low temporal resolution.

In our study, we applied detrended fluctuation analysis (DFA) to investigate fMRI data representing diurnal variation of working memory [2] in four types of experimental tasks: two visual-verbal (based on lists of semantically or phonetically associated words) and two non-verbal (pictures of similar objects). The regional brain activity was quantified with the Hurst exponent and detrended cross-correlation coefficients [3]. Our analyses clearly show that the fMRI data obtained from most brain areas within a small-scale range can be regarded as a 1/f-type process, of the kind identified in many physical, biological, or even economic systems. However, the obtained signal characteristics in specific occipital lobe areas depend not only on the type of experimental task but also on the stage of the experiment, i.e., memorising the stimuli or retrieving the information.

A particularly apparent difference is visible between memorisation in verbal and non-verbal tasks. In the former case, for some brain regions in the Visual II resting-state network, the Hurst exponents assume values very close to 0.5, indicating a lack of linear temporal correlations in the signals [4]. In contrast, we observe more persistent behaviour in the latter. The reduction of persistent behaviour in tasks relative to spontaneous brain activity (resting state) is statistically significant in many brain areas, as presented in Fig. 1. The cross-correlations between brain areas are likewise indicative of differences in the processing of tasks and experimental stages. Uncovering such regionally coordinated changes involves comparing the eigenvalue distributions of correlation matrices. We strengthen these results by grouping eigenvalues according to their eigenvector similarity rather than their natural order. The detrended correlations turn out to be more sensitive than Pearson correlations, showing the greatest differences between the resting state and the other tasks, between memorisation and retrieval, and between verbal and non-verbal tasks, as well as subtler effects.
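
A minimal DFA sketch for estimating the Hurst exponent; window sizes and the linear detrending order are illustrative choices, simpler than the flexible detrended cross-correlation methodology of [3].

import numpy as np

def dfa_hurst(x, n_scales=20):
    # Detrended fluctuation analysis: integrate the signal, linearly
    # detrend within windows of size s, and read the Hurst exponent off
    # the slope of log F(s) vs log s
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())
    scales = np.unique(np.logspace(1.0, np.log10(len(x) // 4),
                                   n_scales).astype(int))
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        mse = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
               for seg in segs]
        F.append(np.sqrt(np.mean(mse)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

# White noise gives H ~ 0.5 (no linear temporal correlations);
# persistent 1/f-type signals give H > 0.5
print(dfa_hurst(np.random.default_rng(0).normal(size=8000)))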

Acknowledgments

The study was supported by the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund in the POIR.04.04.00–00-14DE/18–00 project carried out within the Team-Net programme.

References

1. Atkins AS, Reuter-Lorenz PA. Neural mechanisms of semantic interference and false recognition in short-term memory. NeuroImage. 2011 Jun 1;56(3):1726–34.

2. Lewandowska K, Wachowicz B, Marek T, Oginska H, Fafrowicz M. Would you say “yes” in the evening? Time-of-day effect on response bias in four types of working memory recognition tasks. Chronobiology international. 2018 Jan 2;35(1):80–9.

3. Kwapień J, Oświęcimka P, Drożdż S. Detrended fluctuation analysis made flexible to detect range of cross-correlated fluctuations. Physical Review E. 2015 Nov 30;92(5):052815.

4. Oświȩcimka P, Kwapień J, Drożdż S. Wavelet versus detrended fluctuation analysis of multifractal structures. Physical Review E. 2006 Jul 6;74(1):016103.

Fig. 1

Variation of the Hurst exponents across brain regions. Top x-axis denotes the brain area’s number according to the Automated Anatomical Labelling atlas. Bottom x-axis denotes the ordering of brain areas according to resting-state networks. PHO and SEM are visual-verbal tasks (semantic and phonetic), whereas GLO and LOC are non-verbal tasks (involving global and local visual feature processing)

P135 Dopamine activity plays a double role in improving perception and signaling motivation in a working memory task

Joan Falcó-Roget 1 , Stefania Sarno 2 , Manuel Beirán 3 , Gabriel Diaz-deLeon 4 , Román Rossi-Pool 4 , Ranulfo Romo 4 , Nestor Parga 1

1 Universidad Autónoma de Madrid, Departamento de Física Teórica, Madrid, Spain

2 Aix Marseille Univ, Turing Center for Living Systems, Marseille, France

3 École Normale Supérieure - PSL University, Laboratoire de Neurosciences Cognitives Computationnelles, Paris, France

4 Universidad Nacional Autónoma de México, Instituto de Fisiología Celular, Neurociencias, México DF, Mexico

Email: nestor.parga@uam.es

We recorded the activity of midbrain dopamine (DA) neurons in a task where monkeys had to use working memory to discriminate between two temporally separated vibrotactile stimuli. Since the animal had no advance information about trial difficulty, its motivational level could be quantified by the reaction time (RT) to a tactile start cue through which the animal communicated its readiness to perform the task at the beginning of each trial (Satoh et al., 2003). The animal was then presented randomly and independently with one of 12 stimulus classes (f1, f2). Even when |f1-f2| was the same, performance in some classes was clearly worse. This disparity was previously explained by a contraction bias that shifts the perception of f1 towards its mean and generates a subjective difficulty [1].

Here we address the question of how motivation influences behavior and DA activity in the discrimination task. To do so, the recorded trials were divided into two groups based on their RT (short- and long-RT trials). Interestingly, when averaged over all classes, the RT was significantly longer in error trials. Furthermore, a shorter RT improved performance in classes with a higher subjective difficulty (Fig. 1A). To find the reason for this enhancement, a Bayesian model of the discrimination was fitted to both trial groups independently. The noise parameter introduced to emulate uncertainty was smaller in the short-RT condition (p < 0.001, t-test), implying that motivation increased precision. Since smaller noise generates a weaker contraction bias, subjective difficulty was diminished, boosting performance in that group. These results confirmed that the motivation level of the animal had a strong impact on decision-making by selectively enhancing perception and reducing subjective difficulty in conflictive classes, thereby increasing reward rates.
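
The contraction bias in such Bayesian models follows from the standard Gaussian posterior-mean formula, sketched below with illustrative frequencies and variances (not the fitted values):

import numpy as np

def contracted_percept(f1, prior_mean, prior_var, noise_var):
    # Posterior mean of f1 under a Gaussian prior and Gaussian sensory
    # noise: the percept shrinks toward the prior mean as noise grows
    w = prior_var / (prior_var + noise_var)
    return w * f1 + (1.0 - w) * prior_mean

f1 = np.array([10.0, 14.0, 18.0, 22.0, 26.0, 30.0])
print(contracted_percept(f1, 20.0, 25.0, noise_var=9.0))   # "short-RT": mild bias
print(contracted_percept(f1, 20.0, 25.0, noise_var=25.0))  # "long-RT": strong bias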

Midbrain DA activity codes reward prediction errors. However, these predictions can be modulated by the eagerness to work for rewards. DA activity in the two trial groups exhibited significant differences. Phasic responses to the initial cue and to the first stimulus were larger in short-RT trials (Fig. 1B). In contrast, during the second stimulus and at reward delivery, phasic DA did not depend on the RT. DA responses to the start cue and to the first stimulus represented the motivational state in the trial, since they had a significant negative correlation with the RT on a trial-by-trial basis. Instead, responses to the second stimulus represented only reward prediction errors. Firing activity during working memory was a purely motivational signal: it was not tuned to the initially stored frequency and exhibited a ramping behavior. Importantly, we found that the ramp-like DA activity was more pronounced in short-RT trials (Fig. 1B).

To sum up, we showed that willingness to work for rewards leads to better outcomes by enhancing precision and reducing a perceptual bias. High motivation was also associated with larger DA activity. During the delay period, when the bias presumably appears, DA showed a more pronounced ramping in trials with higher motivation. Such higher sustained DA activity may be related to a better use of cognitive resources such as working memory, allowing for more precise inferences when needed. Together, our results point to an intricate relation between DA and perception, as both are modulated by the animal's intrinsic motivation.

Acknowledgments

Grant PGC2018-101992-B-I00 (Spain).

References

1. Sarno S, Beirán M, Diaz-deLeon G, Rossi-Pool R, Romo R, et al. Midbrain dopamine firing activity codes reward expectation and motivation in a parametric working memory task. bioRxiv. 2020 Jan 1.

Fig. 1

A Performance for each pair of stimuli (class number) sorted by short- and long-RTs. Central classes (5–8) are those in which the bias is stronger (higher difficulty) while extreme ones (1, 2, 11, 12) are favored by it. B Normalized population activity as a function of time for correct trials, divided by RT. Yellow colored bar marks significant differences between RT groups (AUROC, p < 0.05)

P136 Bayesian computations in recurrent spiking neural networks trained to discriminate time intervals.

Luis Serrano-Fernández 1 , Manuel Beirán 2 , Nestor Parga 1

1 Universidad Autónoma Madrid, Department of Theoretical Physics, Madrid, Spain

2 École Normale Supérieure - PSL University, Laboratoire de Neurosciences Cognitives Computationnelles, Paris, France

Email: nestor.parga@uam.es

In delayed comparison tasks the first stimulus is perceived as contracted towards its mean, an effect known as the contraction bias. However, the nature of the bias and its representation in the neural population activity are not well understood. To gain insight into these issues, we trained recurrent spiking neural networks (RSNNs) to decide which of two time intervals (d1, d2), presented sequentially and separated by a delay interval, was longer. Networks were trained with a set of duration pairs, selected randomly and independently in each trial (Fig. 1a), using the full-FORCE algorithm. A large number of test trials were then obtained from the trained networks for further analysis of task performance and population activity.

The trained networks exhibited the contraction bias (Fig. 1b), implying that temporal correlations in the sequence of the training stimuli are not needed to generate the bias. To investigate its origin, we explored the idea that the perceived duration resulted from combining present and past stimuli. With this goal, we considered two models: Bayesian inference and a plausible Bayesian heuristics. In the latter, the perceived d1 was an exponentially weighted sum of the current and past d1 values. At the behavioral level, we fitted both models to the performance data from the trained networks (Fig. 1b). The fitted parameters were the variances of the two noisy observations. Although the models yielded statistically similar fits, a given network favored either one or the other, as assessed by their RMSEs. At the neural activity level, we analyzed the kinematics of the population trajectories in state space. The mean population activity for each d1 described orbits for which we computed all the relative distances [1].

To assess whether the delay-period population activity combined present and past d1 values (either as a Bayes estimate or as a Bayesian heuristics) or coded the true value of d1, we reasoned that the mean relative distances <D> should reflect the network's estimate of d1. Then, once the model that best fitted the performance data and its estimate of d1 were determined, we confronted the two hypotheses as follows: the coefficients of a linear function (a<D> + b) were separately fitted to the estimated d1 and to the true d1, and the goodness of the fits was compared using the RMSEs [2]. For the 20 tested networks the test favored the mixing hypothesis (Fig. 1c). The evaluation of the mutual information that neurons carried about previous d1 values showed that the mixing of current and past stimuli came from the network's recurrent connectivity, which allowed information from past stimuli to persist and be combined with the current d1.

To summarize, during the delay period the trained RSNNs combined current and past stimuli, thus generating a contraction bias. The population activity for fixed d1 described orbits in state space, maintaining relative distances that coded an estimate of d1. Interestingly, this estimate closely approximated either Bayesian inference or a simple Bayesian heuristics, depending on the network. Networks processed information about the stimulus in a way that closely resembled how cortical populations reproduce a sample interval [1]. Our results suggest that a similar strategy could be employed both by the brain and by trained RNNs in different tasks, generating biases through Bayesian or Bayesian-like computations.
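
A minimal sketch of the exponentially weighted heuristics, with an assumed mixing weight w (the fitted models instead parameterized the observation variances):

import numpy as np

def heuristic_percept(d1_sequence, w=0.3):
    # percept_t = (1 - w) * d1_t + w * percept_{t-1}: an exponentially
    # weighted mix of the current and past first intervals, which pulls
    # each percept toward the running history (a contraction-like bias)
    percept = d1_sequence[0]
    out = []
    for d1 in d1_sequence:
        percept = (1.0 - w) * d1 + w * percept
        out.append(percept)
    return np.array(out)

rng = np.random.default_rng(0)
d1 = rng.choice([0.5, 0.7, 0.9, 1.1], size=10)
print(d1)
print(heuristic_percept(d1))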

Acknowledgments

Grant PGC2018-101992-B-I00.

References

1. Remington ED, Narain D, Hosseini EA, Jazayeri M. Flexible sensorimotor computations through rapid reconfiguration of cortical dynamics. Neuron. 2018 Jun 6;98(5):1005–19.

2. Egger SW, Remington ED, Chang CJ, Jazayeri M. Internal models of sensorimotor integration regulate cortical dynamics. Nature neuroscience. 2019 Nov;22(11):1871–82.

Fig. 1

a Stimulus set (d1, d2). b Psychometric curves (percentage of trials in which d1 was judged lower) for an example network. The Bayesian model yielded the best fit. c The mean relative distance <D> coded d1, following rather closely the Bayesian estimate d1Bayes. The RMSE for the Bayesian hypothesis was 2.74, while for the true-d1 hypothesis it was 12.58, clearly favoring Bayesian inference

P137 An integrative framework for dynamic causal modeling of neural circuitry using multiscale, multimodal measurements

Jiyoung Kang 1 , Hae-Jeong Park 2

1 Yonsei University, Center for Systems and Translational Brain Sciences, Seoul, South Korea

2 Yonsei University College of Medicine, Department of Nuclear Medicine, Seoul, South Korea

Email: jiyoungkang01@gmail.com

The brain is a hierarchical system composed of diverse interactions among neural units (neurons or neural populations) across different levels of the hierarchy; however, a method for constructing multiscale brain models has yet to be established. To resolve this issue, we propose a computational framework, namely multimodal dynamic causal modeling (mmd-DCM). Extending conventional DCM, which has been widely used for Bayesian analysis of macroscale and mesoscale brain data, we coupled one neural model with multiple observation models.

More specifically, neural activity is translated into different observation signals: all model parameters are fitted so that the observation models reproduce their respective data while sharing the same underlying neural activity. The present mmd-DCM focuses on model construction using electrophysiological data. This opens up the possibility of considering microscale brain dynamics, and includes three types of observation signals: voltage-sensitive dye imaging (VSDI), calcium imaging (CaI), and blood-oxygen-level-dependent (BOLD) signals, which differ in temporal and spatial resolution.

In order to apply the proposed mmd-DCM to a large brain circuit, we developed a systematic estimation scheme that integrates information from local and global circuits. In our previous studies [1,2] we showed that incorporating interactions with other (unobserved) brain regions is necessary for modeling local activity. Local activity is not the result of interactions exclusively among local neural units (neurons or neural populations, depending on the level) isolated from other neural populations, but is affected by external neural inputs or contexts. Thus, when estimating connectivity, multiscale and multimodal data complement the inference of the system circuitry. In the current study, we combined multimodal data to link multiscale circuitry and to infer the circuitry at each scale using mmd-DCM.

To evaluate our scheme, we constructed a biologically plausible model with CaI signals obtained from layer 2/3 of the barrel cortex. We then simulated CaI, VSDI, and BOLD signals at different temporal and spatial scales and estimated the model parameters. The results show that by integrating local and global circuit information with mmd-DCM, we are able to estimate model parameters with higher accuracy than the conventional method, demonstrating its usefulness for modeling multiscale brain dynamics.
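
Schematically, the coupling of one neural model to several observation models can be illustrated by convolving a single latent activity trace with modality-specific kernels; the linear kernels and time constants below are crude stand-ins for the actual VSDI, calcium, and hemodynamic forward models.

import numpy as np

def exp_kernel(tau, dt, T=5.0):
    # Normalized exponential impulse response with time constant tau (s)
    t = np.arange(0.0, T, dt)
    k = np.exp(-t / tau)
    return k / k.sum()

dt = 0.01
t = np.arange(0.0, 60.0, dt)
rng = np.random.default_rng(0)
z = (rng.uniform(size=t.size) < 0.01).astype(float)  # shared latent events

# One latent activity, three observation models with increasingly slow
# dynamics (illustrative time constants, in seconds)
vsdi = np.convolve(z, exp_kernel(0.05, dt))[:t.size]  # fast, voltage-like
cai = np.convolve(z, exp_kernel(0.8, dt))[:t.size]    # calcium indicator
bold = np.convolve(z, exp_kernel(4.0, dt))[:t.size]   # sluggish BOLD proxy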

Acknowledgments

This research was supported by Brain Research Program and Brain Pool Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017M3C7A1049051 and NRF-2017H1D3A1A01053094).

References

1. Jung K, Kang J, Chung S, Park HJ. Dynamic causal modeling for calcium imaging: Exploration of differential effective connectivity for sensory processing in a barrel cortical column. NeuroImage. 2019 Nov 1;201:116008.

2. Kang J, Jung K, Eo J, Son J, Park HJ. Dynamic causal modeling of hippocampal activity measured via mesoscopic voltage-sensitive dye imaging. NeuroImage. 2020 Jun 1;213:116755.

P138 Outlining contextual settings for rule learning through a probabilistic category learning task

Nicholas Menghi 1 , Will Penny 1

1University of East Anglia, Social Sciences, Norwich, United Kingdom

Email: n.menghi@uea.ac.uk

Category learning can be achieved by using different cognitive strategies. Learners might focus on acquiring the response associated with particular exemplars, or they can try to extract a pattern from the stimuli and learn the rule or structure behind the associations. This work aims to extend the literature on exemplar and rule learning by outlining the context in which participants either learn to extract a rule or learn the value of each exemplar. We designed a task in which multiple stimuli are relevant and the appropriate response depends on the pattern of stimuli presented. Participants were not directly instructed to find a rule, but to learn the association between stimuli and outcomes. We manipulated two contextual settings: the stimulus-response mapping (rule) and the temporal structure of the stimuli that were presented. We had two different rules, where participants had to either add or subtract stimulus features to find the pattern. The subtraction rule was designed to be easier to declare explicitly. We had three different trial structures: one where order was interleaved, one where it was blocked, and a mixed one, which was a mixture of the first two, blocked first and then interleaved. We fitted an online latent cause model of participants' behaviour. It allowed us to cluster stimuli based on their similarity, the participants' actions and the category the stimuli belonged to, giving us insight into participants' strategies. We analyzed the number of clusters created, the pruning threshold which defined which clusters to prune during learning, and two additional measures derived from the model: entropy and recognition. These indexed the uncertainty about which cluster a stimulus belongs to and the probability of a stimulus given the model. We later used these measures as regressors for a subsequent EEG study. Participants performed better with the subtraction rule and the blocked temporal structure. The participants who correctly declared the underlying association performed better than participants who did not. The difference in performance was clear in the mixed temporal structure: when the temporal structure switched from blocked to interleaved, performance for non-declarative participants decayed compared to the declarative ones. The model created more clusters in the blocked temporal structure compared to the others. The cluster pruning threshold was higher for the addition compared to the subtraction rule, and for the interleaved compared to the blocked and mixed temporal structures. Recognition varied based on temporal structure and differed between declarative and non-declarative participants in the mixed condition after the switch in temporal structure. Our results describe the contexts in which rule and exemplar learning occur, providing a foundation for further behavioural and neuroimaging studies.

P139 Critical slowing biomarkers in mathematical neural models

Wei Qin 1 , Anthony Burkitt 1 , Andre Peterson 1

1 University of Melbourne, Department of Biomedical Engineering, Melbourne, Australia

Email: wqin1@student.unimelb.edu.au

Fifty million people worldwide suffer from epilepsy, of whom one-third cannot be effectively treated by pharmacotherapy or surgery. Epilepsy is a highly patient-specific neurological disease, and epileptogenesis is not well understood. Clearly, individual brain structure and function play an important role in epileptogenesis, the gradual process by which a brain develops epilepsy. Fortunately, with modern technology we are able to visualise these changes through measurements of brain activity. Functional neuroimaging data such as EEG show hyper-excitable and hyper-synchronous neuronal activity during seizures. Critical slowing down (CSD) is a phenomenon seen in many dynamical systems: as a system gets closer to a critical transition, its variance, its autocorrelation, and the time a perturbed system takes to return to baseline all increase. The first two passive characteristics (variance and autocorrelation) have been observed in epileptic patients [1]. Phase transitions, in turn, are often used to describe the pathological brain state transitions observed in neurological diseases such as epilepsy. In this project, we investigate phase transitions through CSD biomarkers as a way to measure the state of a brain.

We reviewed the state-of-the-art literature on critical slowing down, seizure prediction and time series analysis, and developed 6 new biomarkers on top of the traditional critical slowing biomarkers (variance, autocorrelation and response to perturbation). Together we have 9 biomarkers designed for time-series signals such as EEG. The goal of the biomarkers is to forecast the state transitions of a dynamical system when it is close to criticality. We tested these biomarkers in simple mathematical models, examining their performance in noise-free and noisy environments through simulations. All of the models are known to undergo bifurcations when certain structural parameters are varied [2]. The work at this stage also serves as a proof of concept that the biomarkers can indicate an upcoming critical transition before it takes place. Most of the biomarkers are able to indicate the state changes, but these changes are only shown qualitatively, not quantitatively: the values of the biomarkers measured in one system are not necessarily comparable with the same biomarkers in another system. Noise tolerance was also tested for each biomarker at different levels of white noise superimposed on the simulation data. The noise level was categorised as low, medium or high based on the signal-to-noise ratio (SNR). We examined whether each biomarker, derived from 100 realisations of the simulations, still provides a statistically significant separatrix under the different SNRs.
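
The two traditional passive biomarkers can be computed in sliding windows as sketched below; the window lengths and the AR(1) test signal are illustrative assumptions, not the study's simulation models.

import numpy as np

def sliding_csd_markers(x, win=500, step=100):
    # Classic passive CSD indicators in sliding windows: variance and
    # lag-1 autocorrelation, both expected to rise near a transition
    var, ac1, centers = [], [], []
    for start in range(0, len(x) - win, step):
        seg = x[start:start + win] - x[start:start + win].mean()
        var.append(seg.var())
        ac1.append(np.corrcoef(seg[:-1], seg[1:])[0, 1])
        centers.append(start + win // 2)
    return np.array(centers), np.array(var), np.array(ac1)

# AR(1) noise whose coefficient drifts toward 1 mimics the approach to a
# bifurcation; both markers should trend upward over time
rng = np.random.default_rng(0)
x = np.zeros(20000)
for t in range(1, x.size):
    x[t] = (0.5 + 0.45 * t / x.size) * x[t - 1] + rng.normal()
centers, var, ac1 = sliding_csd_markers(x)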

References

1. Maturana MI, Meisel C, Dell K, Karoly PJ, D’Souza W, et al. Critical slowing down as a biomarker for seizure susceptibility. Nature communications. 2020 May 1;11(1):1–2.

2. Negahbani E, Steyn-Ross DA, Steyn-Ross ML, Wilson MT, Sleigh JW. Noise-induced precursors of state transitions in the stochastic Wilson–Cowan model. The Journal of Mathematical Neuroscience (JMN). 2015 Dec;5(1):1–27.

P140 Modelling the neurophysiology of sleep development over the first five years of life

Lachlan Webb 1 , James A. Roberts 2 , Andrew Phillips 3

1 QIMR Berghofer Medical Research Institute, Brisbane, Australia

2 QIMR Berghofer, Computational Biology, Brisbane, Australia

3 Monash University, Turner Institute for Brain and Mental Health, Melbourne, Australia

Email: lachlan.webb@qimrberghofer.edu.au

The rapid changes in sleep patterns over the first few years of life vary widely between children. In fact, sleep characteristics and dynamics are never more varied than during early childhood [1,2]. Sleep is important for infant and child neurodevelopment, yet there is a lack of mechanistic understanding of what drives the changes in sleep over the early years of life. While sleep in the adult brain has been studied and modelled extensively, very little has been done in infants and children, where work has mainly been limited to descriptive studies of sleep behaviour. Here, we adapted an existing, physiologically based model of adult sleep to study infant and child sleep behaviour [3,4]. We used Bayesian model estimation to identify the likely physiological parameters underpinning population-level diversity in sleep characteristics as a function of age from 0 to 5 years. We found that the empirically observed decrease in total sleep duration and consolidation of sleep bouts with increasing age are well explained by decreases in the constant inhibitory input to sleep-promoting neurons and increases in the characteristic time to clear somnogens (sleep-inducing agents) during sleep. Moreover, we explored time-dependent parameter changes to simulate individual maturation of sleep patterns, finding realistic sleep-wake dynamics consistent with heavily sampled, single-infant data. Our findings show that physiologically based models can be used to understand the developing neurophysiology driving sleep behaviour in children.
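
As a much-reduced caricature (a two-process-style model, not the ascending-arousal-system model of [3] actually used in the study), the sketch below shows how a slower somnogen clearance time lengthens simulated sleep bouts; all constants are illustrative assumptions.

import numpy as np

def two_process_sim(tau_rise=18.0, tau_clear=4.5, hi=0.65, lo=0.15,
                    days=3, dt=0.01):
    # Sleep pressure S rises during wake (time constant tau_rise, hours)
    # and is cleared during sleep (tau_clear); sleep onset/offset occur
    # at fixed thresholds. Varying tau_clear changes sleep-bout length,
    # illustrating one maturational axis identified in the study.
    t = np.arange(0.0, 24.0 * days, dt)
    S, asleep, state = 0.5, False, []
    for _ in t:
        if asleep:
            S += dt * (-S / tau_clear)
            asleep = S > lo          # wake once pressure is cleared
        else:
            S += dt * (1.0 - S) / tau_rise
            asleep = S > hi          # fall asleep once pressure is high
        state.append(asleep)
    return t, np.array(state)

t, asleep = two_process_sim()
print("fraction of time asleep:", asleep.mean())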

References

1. Galland BC, Taylor BJ, Elder DE, Herbison P. Normal sleep patterns in infants and children: a systematic review of observational studies. Sleep medicine reviews. 2012 Jun 1;16(3):213–22.

2. Jenni OG, Carskadon MA. Sleep behavior and sleep regulation from infancy through adolescence: Normative aspects. Sleep medicine clinics. 2007 Sep 1;2(3):321–9.

3. Phillips AJ, Robinson PA. A quantitative model of sleep–wake dynamics based on the physiology of the brainstem ascending arousal system. Journal of Biological Rhythms. 2007 Apr;22(2):167–79.

4. Skeldon AC, Phillips AJ, Dijk DJ. The effects of self-selected light–dark cycles and social constraints on human sleep and circadian timing: a modeling approach. Scientific reports. 2017 Mar 27;7(1):1–4.

P141 Effects of heterogeneous inputs on cortical activity in medium-scale neuronal networks on chip

Francesca Callegari 1 , Paolo Massobrio 1 , Martina Brofiga 1 , Marietta Pisano 1

1 University of Genova, Department of Informatics, Bioengineering, Robotics, Systems Engineering (DIBRIS), Genova, Italy

Email: francesca.callegari@edu.unige.it

Many higher brain functions are attributed to the cerebral cortex, which is characterized not only by a large number of neurons but also by extensive connectivity with other brain regions. The finely regulated interactions between these different areas are thought to underlie the emergence of complex patterns of activity. Due to the complexity of the system itself, unravelling the mechanisms underlying brain functions such as sensory processing and memory consolidation requires devising simplified in vitro models that allow us to understand how cells of different brain circuits interact.

In this work, we recorded the emerging electrophysiological activity by means of Micro-Electrode Arrays (MEAs) paired with ad hoc polymeric structures in order to recreate interconnected heterogeneous networks. We studied how the spontaneous activity of a cortical population is modulated by two distinct and specific physiological inputs provided by thalamic and hippocampal subpopulations. Using compartmentalized polymeric engineered masks, we recreated and recorded the electrophysiological activity of the cortico-thalamic and cortico-hippocampal circuits. These circuits are highly relevant: the former is involved in the genesis of physiological oscillatory rhythms whose alterations induce pathological conditions such as absence seizures, while the latter participates in sensory processing and memory consolidation. From the spike and burst trains, we obtained parameters to characterize the spiking and bursting activity, to identify the excitatory and inhibitory functional connections, and to evaluate the interaction between sub-populations in terms of the synchronization level of the spiking activity. In particular, statistical interdependence between neuron pairs was obtained by convolving the cross-correlogram with an edge filter to identify the local maxima and minima in the peak trains. Finally, the synchronization level was evaluated by means of a Coincidence Index defined on the basis of the cross-correlation function.

We found that the thalamic and hippocampal inputs modulate cortical activity in complementary ways. The specific features of thalamic activity, characterized by tonic spiking, and of hippocampal activity, which presents highly stereotyped high-frequency bursts, modulated both the spiking and bursting dynamics of the co-cultured cortical population. Hippocampal neurons drove a more sustained and packed cortical activity. Moreover, they induced a change in the distribution of the inhibitory connections, which resulted in a decrease in the amount of inhibitory information exchanged between the two populations. The sub-populations in the cortical-hippocampal co-cultures also established a greater number of strong connections within themselves than in controls. A possible consequence is the observed modification of the synchronization of the two sub-populations, which shows a significant increase in the synchronization level within the compartments relative to that between them. Thalamic neurons induced a more random and scattered activity pattern, with a strong redistribution of the functional inhibitory links. The thalamic assembly generated more inhibitory connections than in controls; however, none of them projected to the cortical compartment. This difference in functional connections may be the cause of the observed strengthening of the inner synchronization of the cortical compartment.
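
A minimal sketch of the cross-correlogram and a coincidence index of the kind described above; the bin width, lag range, and central-window definition are illustrative assumptions rather than the study's exact settings.

import numpy as np

def cross_correlogram(t1, t2, bin_ms=1.0, max_lag_ms=100.0):
    # Histogram of spike-time differences (t2 - t1) within +/- max_lag
    lags = []
    for s in t1:
        d = t2 - s
        lags.extend(d[np.abs(d) <= max_lag_ms])
    edges = np.arange(-max_lag_ms, max_lag_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(lags, edges)
    return counts, edges

def coincidence_index(counts, edges, window_ms=5.0):
    # Fraction of the correlogram mass inside a small central window
    centers = 0.5 * (edges[:-1] + edges[1:])
    total = counts.sum()
    return counts[np.abs(centers) <= window_ms].sum() / total if total else 0.0

rng = np.random.default_rng(0)
t1 = np.sort(rng.uniform(0, 60000, 300))                     # spike times, ms
t2 = np.sort(np.concatenate([t1 + rng.normal(0, 2, 300),     # correlated spikes
                             rng.uniform(0, 60000, 100)]))   # background spikes
counts, edges = cross_correlogram(t1, t2)
print("CI:", coincidence_index(counts, edges))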

P142 Computational insights into sinoatrial arrhythmogenesis of hydroxychloroquine for the treatment of COVID-19

Chitaranjan Mahapatra 1 , Vikas Pandey 2 , Ashish Pradhan 3

1 University of California, San Francisco, Cardiovascular Research Institute, San Francisco, CA, United States of America

2 University of California, Los Angeles, CA, United States of America

3 Samsung R & D Institute Bangalore, Bangalore, India

Email: cmahapatra97@gmail.com

The outbreak of coronavirus disease 2019 (COVID-19) poses a serious threat to global public health and local economies. The combination of the antimalarial hydroxychloroquine (HCQ) with azithromycin was adopted as an antiviral treatment on an urgent basis in limited clinical studies [1]. With the growing interest in the potential use of HCQ for the treatment of COVID-19, it is essential to reflect on the risks of treatment, particularly cardiac toxicity. The purpose of this computational study was to investigate how HCQ acts on various ionic mechanisms to cause diverse effects on the sinoatrial action potential. The sinoatrial node (SAN) cell was described as an equivalent electrical circuit with a number of variable conductances representing voltage-gated Na+ channels (INa), voltage-gated Ca2+ channels (ICa), voltage-gated K+ channels (IK), Ca2+-dependent K+ channels (IKCa) and the hyperpolarization-activated current (funny current, If). An HCQ drug model for the multiple ion channels was simulated after mining data from experimental studies [2]. The biophysically altered ionic currents (ICa, IK, and If) were integrated into a single SAN electrophysiological model [3]. The resting membrane potential (RMP) was set at –80 mV. Application of 1 µM HCQ showed inhibitory effects on ICa, IK, and If, altering the steady-state values of the activation and inactivation parameters. The If current was reduced substantially in comparison with the other currents. As a consequence, the model produced SAN action potential prolongation, and the firing frequency was reduced. The results show that the modified funny current plays an important role in reducing the frequency of the spontaneous action potentials at the SA node. The model successfully reproduces both the ionic currents and the action potential observed in intracellular recordings from individual SAN cells. The effects of HCQ were simulated with respect to the funny current and the action potential. As HCQ reduces the frequency of spontaneous action potential firing, these results caution against its use as a potential drug against COVID-19 and support the FDA guideline against using HCQ for COVID-19.
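
As an illustration of the drug-model step, channel block is commonly represented by scaling each maximal conductance with a Hill-type dose-response factor. The sketch below uses hypothetical IC50 and conductance values chosen only for demonstration, not the values mined from [2].

    def hill_block(conc_uM, ic50_uM, hill=1.0):
        # Fraction of channels blocked at drug concentration conc_uM (Hill equation)
        return 1.0 / (1.0 + (ic50_uM / conc_uM) ** hill)

    # Hypothetical IC50s and maximal conductances, for illustration only
    ic50 = {"ICa": 10.0, "IK": 8.0, "If": 2.0}     # uM
    g_max = {"ICa": 0.5, "IK": 1.2, "If": 0.3}     # mS/cm^2

    hcq = 1.0                                      # applied HCQ concentration (uM)
    for ch in g_max:
        g_blocked = g_max[ch] * (1.0 - hill_block(hcq, ic50[ch]))
        print("%s: %.3f -> %.3f mS/cm^2" % (ch, g_max[ch], g_blocked))

With the (hypothetical) lowest IC50 assigned to If, the funny current is the most strongly reduced conductance, mirroring the behavior reported above.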

References

1. Younis NK, Zareef RO, Al Hassan SN, Bitar F, Eid AH, et al. Hydroxychloroquine in COVID-19 Patients: pros and Cons. Frontiers in pharmacology. 2020 Nov 19;11:1798.

2. Capel RA, Herring N, Kalla M, Yavari A, Mirams GR, et al. Hydroxychloroquine reduces heart rate by modulating the hyperpolarization-activated current If: Novel electrophysiological insights and therapeutic potential. Heart rhythm. 2015 Oct 1;12(10):2186–94.

3. Courtemanche M, Ramirez RJ, Nattel S. Ionic mechanisms underlying human atrial action potential properties: insights from a mathematical model. American Journal of Physiology-Heart and Circulatory Physiology. 1998 Jul 1;275(1):H301-21.

P143 Enhanced ensemble computational models of mouse thoracic sympathetic postganglionic neurons with offline compensation of electrode artifacts

Krishna Pusuluri 1 , Yaqing Li 2 , Shawn Hochman 2 , Astrid A. Prinz 1

1 Emory University, Department of Biology, Atlanta, GA, United States of America

2 Emory University, Department of Physiology, Atlanta, GA, United States of America

Email: pusuluri.krishna@gmail.com

Thoracic sympathetic postganglionic neurons (tSPNs) receive synaptic inputs from preganglionic neurons in the spinal cord and regulate downstream effector targets including vasomotor and thermoregulatory systems. Rather than acting as simple relays of spinal signals to the periphery, tSPNs can integrate and transform signals depending on their cell-intrinsic biophysical properties. Understanding tSPN cellular integrative and recruitment principles is essential to study mechanisms that alter excitability, including those seen after spinal cord injury (SCI).

A previous conductance-based computational model of mouse tSPNs was described in [1]. In the current study, we present updated ensemble tSPN models that effectively describe experimental data from different electrophysiological modalities, including voltage-clamp (VC) step and ramp, as well as current-clamp (CC) protocols. A model of electrode resistance and capacitance is incorporated [2] to separate experimental artifacts from ion channel dynamics, across multiple cells and multiple trials. In line with the previously studied mRNA profiles in these cells, we determine ion channel properties, such as maximal conductances and decay time constants, that best describe the experimental recordings. VC step protocol data (before and after the application of the Na+-channel blocker tetrodotoxin) are used to determine the dynamics of the transient currents via Na+ and A-type K+ channels, as well as the electrode artifact properties. These data also give an estimate of the sum of all the long-lasting currents (delayed rectifier K+, M-type K+, Ca2+-dependent K+, etc.), which are further separated and their maximal conductances determined based on their voltage-dependent peaks under the VC ramp protocol. With this setup, including the electrode model, we can obtain ensemble models tuned to individual neurons and describe the voltage-dependent delays observed in the onset of Na+ currents under the VC step protocol. Further reduction of space-clamp errors is expected with a spatial model of the cell.
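
A minimal sketch of the kind of electrode-in-series model used to separate pipette artifacts from membrane currents is given below: a passive cell recorded through an electrode with series resistance Re and capacitance Ce under a voltage-clamp step. All parameter values are illustrative, not the fitted values of [2].

    import numpy as np

    # Illustrative parameters (not fitted values)
    Re, Ce = 20.0, 0.005               # electrode: MOhm, nF
    Cm, gL, EL = 0.05, 0.002, -60.0    # cell: nF, uS, mV
    dt = 0.01                          # ms
    t = np.arange(0.0, 50.0, dt)
    v_cmd = np.where((t > 5.0) & (t < 35.0), -20.0, -70.0)   # VC step (mV)

    v = EL                             # true membrane potential
    i_rec = np.zeros_like(t)           # current seen at the amplifier (nA)
    for k in range(t.size):
        i_e = (v_cmd[k] - v) / Re                # current through the series resistance
        v += dt * (i_e - gL * (v - EL)) / Cm     # cell charges through the electrode
        dv_cmd = 0.0 if k == 0 else (v_cmd[k] - v_cmd[k - 1]) / dt
        i_rec[k] = i_e + Ce * dv_cmd             # pipette capacitance adds step transients
    print("peak artifact current: %.1f nA" % np.abs(i_rec).max())

The measured current differs from the true membrane current by the voltage drop across Re and the capacitive transients at each command step, which is what an offline compensation scheme must remove.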

Future work will incorporate data obtained from CC recordings, and all channel properties will be simultaneously tuned to match the firing properties of the cells in response to current injections. We will employ this updated tSPN model to study differences in the passive and firing properties of genetically identified tSPN subpopulations and the putative homeostatic plasticity engaged to maintain excitability after loss of central drive, as seen after high-level SCIs.

Acknowledgements

This work is supported by NIH grant 5R01NS102871 (MPI: Hochman, Prinz). We thank Michael McKinnon for important discussions.

References

1. McKinnon ML, Tian K, Li Y, Sokoloff AJ, Galvin ML, et al. Dramatically Amplified Thoracic Sympathetic Postganglionic Excitability and Integrative Capacity Revealed with Whole-Cell Patch-Clamp Recordings. Eneuro. 2019 Mar;6(2).

2. Günay C, Prinz AA. An offline correction method for uncompensated series resistance and capacitance artifacts from whole-cell patch clamp recordings of small cells. BMC Neuroscience. 2011 Dec;12(1):1–2.

P144 Tau from no tau: Temporal dynamics of Na + pump mediated memory traces

Obinna F. Megwa 1 , Leila M. Pascual 1 , Stefan R. Pulver 2 , Astrid A. Prinz 1

1 Emory University, Department of Biology, Atlanta, GA, United States of America

2 University of St Andrews, School of Psychology & Neuroscience, St Andrews, United Kingdom

Email: astrid.prinz@emory.edu

Na+/K+ ATPases (Na+ pumps) mediate long-lasting activity-dependent ionic currents that provide a neuronal memory of previous activity that can last tens of seconds [1]. The cellular mechanisms controlling the dynamics of these long events are not well understood and can appear counterintuitive: long-lasting memory traces arise from Na+ pumps that respond instantaneously to Na+ concentration changes, with no explicit pump activation time constant. Here, we use computational modelling of pump currents to examine how pump dynamics without time constants shape both electrical (membrane potential) and chemical (Na+ concentration) memory traces.

We incorporated 1) a Na+ pump, 2) its effects on intracellular Na+ concentration, and 3) a dynamic Na+ reversal potential into a Drosophila larval motor neuron model [2]. The pump current Ip = Ipmax / (1 + exp((Nah – Na)/Nar)) is modeled as a maximal pump current Ipmax multiplied by a sigmoidal function of the intracellular Na+ concentration (Na) with two parameters: the Na+ concentration of pump half-activation (Nah), and a factor (Nar) that determines the range of the current's dependence on Na. This model does not include a time constant Tau of pump activation – the pump responds instantaneously to changes in Na. The pump current shapes neuronal dynamics in two ways: 1) the electric current resulting from the extrusion of Na+ ions and the inward pumping of K+ ions contributes to membrane potential changes; 2) the extrusion of Na+ ions contributes to changes in the intracellular Na+ concentration and the Na+ reversal potential.
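
A minimal sketch of this pump model, coupled only to burst-driven Na+ entry and a background influx that balances the pump at rest, is shown below; all parameter values are illustrative. Even though Ip is an instantaneous function of Na, the printed values show it relaxing over tens of seconds.

    import numpy as np

    Ipmax, Nah, Nar = 2.0, 15.0, 3.0     # nA, mM, mM (illustrative)
    k_p = 0.002                          # mM of Na+ extruded per ms per nA

    def pump(na):
        # The pump current defined above: an instantaneous function of Na
        return Ipmax / (1.0 + np.exp((Nah - na) / Nar))

    Na_rest = 10.0
    baseline_influx = k_p * pump(Na_rest)   # background entry balancing the pump at rest

    dt = 1.0                                # ms
    t = np.arange(0.0, 60000.0, dt)         # 60 s
    Na, ip_trace = Na_rest, np.empty(t.size)
    for k in range(t.size):
        burst_influx = 0.004 if t[k] < 5000.0 else 0.0   # 5 s of spike-driven Na+ entry
        ip = pump(Na)
        Na += dt * (baseline_influx + burst_influx - k_p * ip)
        ip_trace[k] = ip
    # Ip relaxes over tens of seconds despite having no explicit time constant
    for s in (5, 10, 30, 60):
        print("Ip at %2d s: %.3f nA" % (s, ip_trace[int(s * 1000 / dt) - 1]))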

We find that despite the absence of an explicit Tau, the pump produces after-hyperpolarizations (AHPs) following bursts of action potentials that can last for multiple seconds, as in experimental preparations [1]. This 'Tau from no Tau' arises from the interaction of the pump current with membrane currents and the intracellular Na+ buffering system. The AHP duration depends on both parameters Nah and Nar, with larger values of either producing longer AHPs; the dependence on Nah is weaker than that on Nar. We further show that at the end of the AHP, when the electrical effect of the pump has largely subsided, Na is still substantially different from its resting level. The chemical effects of pump activity in the cell can thus last several-fold longer than the electrical effects. This chemical memory trace that outlasts the electrical memory arises solely from interactions of the pump with membrane conductances and ion buffering; it does not require additional molecular signaling cascades with slow dynamics.

We conclude that even in the absence of an activation time constant Tau, Na + pumps provide a mechanism for long lasting electrical (AHP) memory traces and even longer chemical (Na + concentration) memory traces. Our work provides testable predictions for physiologists and has implications for understanding information processing in neural networks and the neural control of animal behavior.

References

1. Pulver SR, Griffith LC. Spike integration and cellular memory in a rhythmic network from Na + /K + pump current dynamics. Nature neuroscience. 2010 Jan;13(1):53–9.

2. Günay C, Sieling FH, Dharmar L, Lin WH, Wolfram V, et al. Distal spike initiation zone location estimation by morphological simulation of ionic current filtering demonstrated in a novel model of an identified Drosophila motoneuron. PLoS computational biology. 2015 May 15;11(5):e1004189.

P145 Sympathetic postganglionic neurons as a potential locus of modulation and plasticity in the sympathetic pathway

Nelson H. Chang 1 , Michael McKinnon 1 , Shawn Hochman 2 , Astrid A. Prinz 1

1 Emory University, Department of Biology, Atlanta, GA, United States of America

2 Emory University, Department of Physiology, Atlanta, GA, United States of America

Email: astrid.prinz@emory.edu

Thoracic sympathetic postganglionic neurons (tSPNs) reside in the sympathetic ganglia and receive excitatory inputs from preganglionic spinal neurons. tSPNs were long thought to act as relays of input from the spinal cord to targets such as vasomotor and thermoregulatory systems. We previously used modeling to show that tSPNs may play a more active role in signal integration in the sympathetic pathway [1]. Important questions are whether tSPN membrane properties (i) are tuned to optimally process synaptic inputs, (ii) represent a locus for behavioral state-dependent modulation, and (iii) undergo compensatory homeostatic changes in excitability due to long-term alterations in central preganglionic synaptic drive (e.g., after spinal cord injury). We address these questions in a model of mouse tSPNs [2]. Its membrane currents include a fast Na+ current INa; a low-threshold Ca2+ current ICaL; the K+ currents IKd (delayed rectifier), IA (fast transient), IM (slow non-inactivating), and IKCa (Ca2+-dependent); a hyperpolarization-activated inward current IH; and a leak current. To simulate input from the spinal cord, we provide the tSPN model with excitatory synaptic conductance waveforms that match measurements from mouse tSPNs in number, synapse strength, and presynaptic firing pattern. In the “canonical” version of the tSPN model [2], this synaptic input results in a synaptic gain (defined as the tSPN firing rate divided by the preganglionic firing rate) larger than 1, meaning that tSPNs can integrate and amplify these synaptic inputs. This confirms that tSPNs may act as more than relays of spinal inputs [1].
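
The sketch below illustrates how the gain measurement is defined, using a leaky integrate-and-fire stand-in for the tSPN rather than the conductance-based model of [2]. With these illustrative values, each preganglionic spike can evoke several postganglionic spikes, giving a gain above 1.

    import numpy as np

    rng = np.random.default_rng(1)

    dt, T = 0.1, 10000.0                      # ms
    n_steps = int(T / dt)
    pre_rate = 5.0 / 1000.0                   # preganglionic rate (spikes/ms)
    pre = rng.random(n_steps) < pre_rate * dt # Poisson preganglionic spike train

    # Leaky integrate-and-fire stand-in for the tSPN (illustrative values)
    tau_m, tau_s = 20.0, 15.0                 # membrane and synaptic time constants (ms)
    E_syn, v_rest, v_th = 0.0, -65.0, -50.0   # mV
    w = 1.5                                   # synaptic weight (dimensionless conductance)
    g, v, n_out = 0.0, v_rest, 0
    for k in range(n_steps):
        g += w * pre[k]
        g -= dt * g / tau_s
        v += dt * (-(v - v_rest) - g * (v - E_syn)) / tau_m
        if v >= v_th:                         # spike and reset
            v, n_out = v_rest, n_out + 1
    gain = n_out / max(pre.sum(), 1)          # output spikes per input spike
    print("synaptic gain: %.1f" % gain)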

We next individually vary the maximal conductance for each membrane current from its canonical value and observe that: 1) varying the conductances for IKd, IA, and IH has little effect on synaptic gain; these currents therefore may not be effective targets for modulation or plasticity. 2) Increasing the conductance for INa increases synaptic gain, as expected for an inward current. 3) Increasing the conductance for the potassium currents IM and IKCa and for the Ca2+ current ICaL decreases synaptic gain. For IM and IKCa this occurs because they are outward currents, thus increasing their conductance decreases tSPN excitability. ICaL is an inward current, but has a negative effect on synaptic gain because it indirectly reduces excitability via increasing the outward current IKCa.

Our simulations implicate the tSPN membrane currents INa, IM, IKCa, and ICaL as factors that may determine tSPN excitability and synaptic gain, with INa being a positive regulator of gain, while IM, IKCa, and ICaL are negative regulators. These membrane currents may thus be suitable targets for plasticity and modulation of signal integration in the thoracic sympathetic pathway, including in the context of systemic changes after spinal cord injury.

Acknowledgements

Supported by NIH grant 5R01NS102871 (MPI: Hochman, Prinz).

References

1. Prinz A, McKinnon M, Tian K, Hochman S. Frequency dependent synaptic gain in a computational model of mouse thoracic sympathetic postganglionic neurons. BMC Neuroscience. 2020;21(Suppl 1).

2. McKinnon ML, Tian K, Li Y, Sokoloff AJ, Galvin ML, et al. Dramatically Amplified Thoracic Sympathetic Postganglionic Excitability and Integrative Capacity Revealed with Whole-Cell Patch-Clamp Recordings. Eneuro. 2019 Mar;6(2).

P146 A biophysical spectral graph theory-based model for brain oscillations

Parul Verma 1 , Srikantan Nagarajan 1 , Ashish Raj 1

1 University of California, San Francisco, Radiology and Biomedical Imaging, San Francisco, CA, United States of America

Email: parul.verma@ucsf.edu

Understanding the relationship between the functional activity and the structural wiring of the brain is an important question in neuroscience. To address this, various mathematical modeling approaches have been undertaken in the past, largely consisting of non-linear and biophysically detailed mathematical models with regionally varying model parameters. While such models provide a rich repertoire of dynamics that can be displayed by the brain, they are computationally demanding. Moreover, although neuronal dynamics at the microscopic level are nonlinear and chaotic, it is unclear if such detailed nonlinear models are required to capture the emergent meso-scale (regional population ensemble) and macro-scale (whole brain) behavior, which is largely deterministic and reproducible across individuals. Indeed, a recent modeling effort based on spectral graph theory has shown that a linear and analytic model without regionally varying parameters can accurately capture the empirical magnetoencephalography frequency spectra and the spatial patterns of the alpha and beta frequency bands.

In this work, we explore the properties of an improved hierarchical, linearized, and analytic spectral graph theory-based model that can capture the frequency spectra obtained from magnetoencephalography recordings. The model consists of coupled excitatory and inhibitory dynamics of the neural ensembles of every brain region, and long-range excitatory macroscopic dynamics based on the white-matter structural wiring. We demonstrate that this model, with just a parsimonious set of global and biophysically interpretable model parameters, can display frequency-rich spectra. In particular, we show that even without any oscillations at the regional level, the macroscopic model alone can exhibit oscillations with a frequency in the alpha band. We also show that, depending on the parameters, the model can exhibit damped oscillations, limit cycles, or unstable oscillations that blow up with time. We further determine bounds on these parameters that ensure stability of the modeled oscillations. These biophysically interpretable model parameters can be employed to investigate correlates of differences in frequency spectra observed in different brain states and neurological diseases.
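
One way the basic ingredients (graph eigenmodes, a conduction delay, and linear dynamics) can produce an alpha-band peak is sketched below. Projecting delayed diffusive coupling onto the Laplacian eigenmodes decouples the system, so each mode behaves as a delayed feedback loop with a closed-form transfer function. The random connectome, parameters and delay are illustrative stand-ins kept below the instability (Hopf) bound, not the fitted model; for these values the summed spectrum peaks near 10 Hz.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in structural connectome and its normalized graph Laplacian
    n = 40
    W = rng.random((n, n)); W = 0.5 * (W + W.T); np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    L = np.eye(n) - W / np.sqrt(np.outer(d, d))      # eigenvalues lie in [0, 2]
    lam = np.linalg.eigvalsh(L)

    # Each eigenmode obeys dx/dt = -a x - c lam x(t - tau), with transfer
    # function H_k(w) = 1 / (i w + a + c lam_k exp(-i w tau)).
    a, c, tau = 15.0, 40.0, 0.018                    # 1/s, 1/s, s (stable regime)
    f = np.linspace(0.5, 40.0, 400)                  # Hz
    w = 2.0 * np.pi * f
    H = 1.0 / (1j * w[None, :] + a
               + c * lam[:, None] * np.exp(-1j * w[None, :] * tau))
    power = (np.abs(H) ** 2).sum(axis=0)             # flat-spectrum input drive
    print("spectral peak at %.1f Hz" % f[np.argmax(power)])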

P147 How cerebellar architecture aids online motor learning

Adriana Perez Rotondo 1 , Timothy O'Leary 1 , Dhruva Venkita Raman 1

1 University of Cambridge, Department of Engineering, Cambridge, United Kingdom

Email: ap2013@cam.ac.uk

The cerebellum has a distinctive circuit architecture in which each mossy fibre input typically projects to 250 granule cells, a population that comprises more than half of the neurons in the brain [1]. How does this size expansion relate to cerebellar function? This has been an active research topic for decades [2–4]. Recent theoretical work has shown how this expansion facilitates pattern separation and smooth function approximation [5,6]. However, we currently lack a theory that explains why this architecture is suited to rapid online learning.

The cerebellum is critically involved in motor learning, refining trajectories as movements are being executed. This is a dynamic problem requiring fast learning from limited information. We ask how this specific class of learning problem informs the distinctive cerebellar architecture.

We consider a cerebellar-like network, with sparse, tunable connections that map low-dimensional inputs into a high-dimensional internal, ‘granule cell’ layer. The network is tasked with simultaneously learning an internal model of a motor system, and then using this model to better control motor output (Fig. 1A). Learning happens concurrently with trajectory execution, using a biologically plausible learning rule to adapt synaptic weights (Fig. 1B).

Learning online from motor output as a motor plan is executed introduces a narrow time window that severely limits the information available for synaptic plasticity mechanisms to appropriately adjust synaptic weights. We show, theoretically and numerically, that increasing the number of granule cells effectively trades time for space, allowing rapid and accurate learning in an online context. Our theoretical analysis uses general, geometric arguments that are independent of specific learning rules. We find that the effect of having limited information depends on the spread of the Hessian of the task error. As the number of granule cells increases, the spread decreases. Hence the geometry of the error surface becomes more favourable for online learning, diminishing the effect of information error and allowing for faster learning (Fig. 1C, D). This suggests that the large energy cost associated with maintaining the majority of the brain’s neurons might be an inevitable cost of precise, fast, motor learning. Our result fills gaps in the understanding of how cerebellar structure is optimised for online learning of motor tasks.

Acknowledgments

We thank the European Research Council for supporting this work. ERC grant FLEXONEURO 716643.

References

1. Billings G, Piasini E, Lőrincz A, Nusser Z, Silver RA. Network structure within the cerebellar input layer enables lossless sparse encoding. Neuron. 2014 Aug 20;83(4):960–74.

2. Albus JS. A theory of cerebellar function. Mathematical biosciences. 1971 Feb 1;10(1–2):25–61.

3. Blomfield S, Marr D, Cowan JD. How the cerebellum may be used. In From the Retina to the Neocortex 1970 (pp. 51–57). Birkhäuser Boston.

4. Kawato M, Furukawa K, Suzuki R. A hierarchical neural-network model for control and learning of voluntary movement. Biological cybernetics. 1987 Oct;57(3):169–85.

5. Cayco-Gajic NA, Clopath C, Silver RA. Sparse synaptic connectivity is required for decorrelation and pattern separation in feedforward networks. Nature communications. 2017 Oct 24;8(1):1–1.

6. Sanger TD, Yamashita O, Kawato M. Expansion coding and computation in the cerebellum: 50 years after the Marr–Albus codon theory. The Journal of physiology. 2020 Mar;598(5):913–28.

Fig. 1

A Task diagram. The cerebellar-like net (FF) modulates the motor commands sent to the motor plant P. The weights W adapt online so the plant output y matches the target trajectory yd. B Learning of trajectories. C Network resizing. Granule cells are added. The number of mossy fibres and Purkinje cells is maintained. D Effect of increasing the number of granule cells on learning performance

P148 Impact of electrode placement on spiking probability of a stimulated human auditory nerve fiber

Thomas Tanzer 1 , Sogand Sajedi 1 , Frank Rattay 1

1 TU Wien, Institute of Analysis and Scientific Computing, Vienna, Austria

Email: sogand.sajedi@tuwien.ac.at

Introduction

Spiking probability as a function of stimulus intensity is the key control element in the input–output relation of functional electrical nerve stimulation. The range of intensities over which the spiking probability of an auditory nerve fiber (ANF) increases from 10 to 90% is defined as its dynamic range and reflects the fiber's individual loudness contribution during cochlear implant stimulation. The strongest noise components during the excitation process are fluctuations in the sodium ion currents. A single ANF has a quite inhomogeneous structure, with changing diameter and large variations in sodium channel densities. The question arises how much the dynamic range depends on the position of the stimulating electrode of a cochlear implant.

Methods

The noisy currents across the cell membrane were simulated in a simple, computationally efficient way: a Gaussian noise current was added to each segment (compartment) of the ANF model every 2.5 µs, scaled proportionally to the square root of the number of sodium channels (defined by the sodium conductance, measured in mS/cm2). The noise intensity could thus be controlled by a single deterministic parameter (knoise = 0.05) [1]. The selected knoise value induced root mean square amplitudes of the transmembrane voltage comparable to experimental results [1,2]. We placed the electrode at possible positions along a selected ANF, varying from terminal to soma, and calculated the relative spread (RS) [2], a normalized measure which is about half of the dynamic range.
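
The noise rule can be sketched in a few lines; the compartment conductances and areas below are illustrative stand-ins, not the model's actual morphology.

    import numpy as np

    rng = np.random.default_rng(0)

    k_noise = 0.05
    # Illustrative per-compartment values (not the model's actual geometry)
    g_na = np.array([120.0, 40.0, 500.0])         # mS/cm^2
    area = np.array([3e-6, 8e-6, 1e-6])           # cm^2

    # Noise amplitude scales with sqrt(Na-channel count), i.e., sqrt(g_na * area);
    # one independent Gaussian sample is drawn per compartment per 2.5-us step
    sigma = k_noise * np.sqrt(g_na * area)
    i_noise = sigma[:, None] * rng.standard_normal((3, 4000))   # 10 ms of samples
    print("RMS noise per compartment:", np.round(i_noise.std(axis=1), 4))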

Results

For a standard ANF (dendrite diameter = 1.35 µm, axon diameter = 2.67 µm [3], a 100 µm non-myelinated presomatic region and a 20 µm soma), increased stochastic behavior was found especially for electrode positions at the dendrite, while the soma acted as a dampening factor. The closer the electrode was to the ANF, the more pronounced the regional differences in spiking behavior and RS were. For an ANF–electrode (center) distance of 300 µm, the RS was 13.10% for stimulation at the terminal, 5.51% for the middle of the dendrite and 3.97% for the soma, respectively; the three corresponding thresholds (100 µs pulses) were 117.1, 113.3 and 460.2 µA. For the terminal position, spiking probabilities of 10, 50 (= threshold) and 90% required currents of 97.7, 117.1 and 136.1 µA, resulting in a dynamic range of 38.8 µA, or 33.1% when normalized to threshold (~ 2.5 × RS).
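
These numbers can be related through a Gaussian approximation of the spiking-probability curve (cf. [2]): if p(I) = Phi((I − I50)/sigma) with RS = sigma/I50, the 10–90% dynamic range spans 2 × 1.2816 × sigma, i.e., about 2.56 × RS of threshold. The sketch below uses the terminal-position values; the result is close to, though not exactly, the reported figures, since the simulated probability curve is not perfectly Gaussian.

    from statistics import NormalDist

    I50, RS = 117.1, 0.1310            # threshold (uA) and RS at the terminal position
    sigma = RS * I50
    z90 = NormalDist().inv_cdf(0.9)    # 1.2816
    dr = 2.0 * z90 * sigma             # current span between 10% and 90% spiking
    print("dynamic range: %.1f uA (%.1f%% of threshold)" % (dr, 100.0 * dr / I50))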

Conclusion

The dynamic range of an average human ANF stimulated by a cochlear implant is largest for electrodes close to the fiber terminal (that is, close to the outer wall of the scala tympani), where it exceeds that of a central position (close to the modiolus) by a factor on the order of three.

References

1. Rattay F, Lutter P, Felix H. A model of the electrically excited human cochlear neuron: I. Contribution of neural substructures to the generation and propagation of spikes. Hearing research. 2001 Mar 1;153(1–2):43–63.

2. Verveen AA. Axon diameter and fluctuation in excitability. Acta Morphol Neerl Scand. 1962;5:79–85.

3. Rattay F, Potrusil T, Wenger C, Wise AK, Glueckert R, et al. Impact of morphometry, myelinization and synaptic current strength on spike conduction in human and cat spiral ganglion neurons. Plos one. 2013 Nov 8;8(11):e79256.

P149 Model of neuronal activity coupled to energy resources generates neonatal burst suppression

Shrey Dutta 1 , Kartik Iyer 2 , Sampsa Vanhatalo 3 , Michael Breakspear 4 , James A. Roberts 5

1 QIMR Berghofer Medical Research Institute, University of Queensland, Faculty of Medicine, Brisbane, Australia

2 QIMR Berghofer Medical Research Institute, Brisbane, Australia

3 University of Helsinki and Helsinki University Hospital, Helsinki Institute of Life Science, Department of Clinical Neurophysiology, Neuroscience Center, Helsinki, Finland

4 The University of Newcastle, School of Psychology, School of Medicine and Public Health, Hunter Medical Research Institute, Newcastle, Australia

5 QIMR Berghofer, Computational Biology, Brisbane, Australia

Email: shrey.dutta@qimrberghofer.edu.au

Models of neuronal activity across scales have been widely studied, but few models consider the coupling of neuronal activity to its metabolic supply. Disruption of energy and oxygen availability to neurons, for example during asphyxia or epileptic seizures, leads to pathological activity in the electroencephalogram (EEG) [1–3]. By varying energy supply and demand in a network model of Hodgkin-Huxley neurons, we observe that activity varies from healthy asynchronous-irregular (AI) activity to the pathological states of iso-electric activity, burst-suppression activity, and seizure activity. In the burst-suppression regime, as the energy supply ([O2]Bath) increases, a transition takes place from highly synchronous bursts to scale-free (semi-synchronous) bursts to less synchronous bursts. In parallel with this transition, the average shape of the bursts changes from asymmetric to symmetric. Scale-free bursts and a transition from asymmetric to symmetric bursts are seen in neonates recovering from hypoxic insult [1]. We therefore validate our model using EEG data from hypoxic neonates. We estimate the model parameters that best fit empirical EEG epochs in terms of their burst statistics during the recovery phase, yielding trajectories through the parameter space of [K+]Bath and [O2]Bath. We show that for neonates with good outcomes (normal or mild injury), the projected time-series trajectories tend to travel toward the healthy regime. In contrast, in neonates with bad outcomes (death or severe injury), the trajectories tend to dwell in the pathological region of parameter space. Our modeling thus provides a general platform to study recovery from brain pathologies arising from disturbances of brain metabolism.
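
The supply-demand coupling at the core of this class of models (cf. [3]) can be sketched as a Na+/K+ pump rate scaled by available oxygen, with oxygen consumed by pump activity and replenished by diffusion from the bath. The functional forms and constants below are illustrative, not the network model's actual equations.

    import numpy as np

    def pump_scale(o2, o2_half=12.0, slope=2.0):
        # Pump rate saturates with available oxygen (illustrative sigmoid)
        return 1.0 / (1.0 + np.exp((o2_half - o2) / slope))

    dt = 1.0                                  # ms
    o2_bath, eps, alpha = 25.0, 0.001, 0.1    # bath level, diffusion, consumption
    o2, trace = o2_bath, []
    for k in range(20000):                    # 20 s
        demand = 1.0 if (k * dt) % 4000 < 500 else 0.1   # periodic activity bursts
        i_pump = demand * pump_scale(o2)
        o2 += dt * (eps * (o2_bath - o2) - alpha * 0.01 * i_pump)
        trace.append(o2)
    print("O2 range: %.2f - %.2f mg/L" % (min(trace), max(trace)))

Lowering o2_bath in such a loop starves the pump during activity, which is the route by which these models move from healthy dynamics toward burst suppression.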

References

1. Roberts JA, Iyer KK, Finnigan S, Vanhatalo S, Breakspear M. Scale-free bursting in human cortex following hypoxia at birth. Journal of Neuroscience. 2014 May 7;34(19):6557–72.

2. Jirsa VK, Stacey WC, Quilichini PP, Ivanov AI, Bernard C. On the nature of seizure dynamics. Brain. 2014 Aug 1;137(8):2210–30.

3. Wei Y, Ullah G, Ingram J, Schiff SJ. Oxygen and seizure dynamics: II. Computational modeling. Journal of neurophysiology. 2014 Jul 15;112(2):213–23.

P150 Kenyon cells sensitivity control through thresholds tuning improves the discrimination capacity of the insect olfactory system

Jessica López-Hazas Sacristán 1 , Francisco B. Rodriguez 1

1 Universidad Autonoma de Madrid, Ingeniería Informática, Madrid, Spain

Email: jessicalopezhazas@gmail.com

Studies of the olfactory system of insects have found that Kenyon cells (KCs) show variable sensitivities to stimuli [1]. One of the mechanisms responsible for this could be the control of their activity level through their neural firing threshold. Controlling the activity level of the KCs could have a positive impact on the discrimination capacity of the network. To explore this hypothesis, we used a model of the insect olfactory system similar to the one in [2], which includes a learning algorithm capable of finding the best distribution of neural thresholds in KCs to solve a classification problem. We trained the model using both a random threshold distribution and distributions adjusted by the learning algorithm to obtain different levels of activity in the KC layer. As a first approximation to studying the impact of threshold variability on the discrimination capacity of the system, we measured the similarity between the internal representations of patterns belonging to different classes using the cosine distance [3].

The results are shown in Fig. 1, using boxplots that represent the distribution of the cosine distances of the patterns of one class to the patterns of the other classes. The first column shows the similarity between the representations of the classes when the thresholds of the KCs are random. The thresholds were initialized randomly with values between 0 and the maximum number of inputs that a KC neuron can receive, so it could be determined whether the threshold adjustment made by the learning algorithm in the remaining cases results in better representations than the random case. The second column in the figure shows the results for low activity (s = 0.1), the third for medium activity (s = 0.5), and the last one for high activity (s = 0.9). The only activation level for which the similarity between the representations decreases is the low activity level. This shows that a threshold distribution that allows neurons to have different degrees of sensitivity improves on the random-threshold case when the activity level of the neurons is kept low. This advantage is lost as the specific threshold distribution in the KCs begins to increase their level of activity. These results are coherent with the findings of other studies such as [4]. The fact that control and variation of the neural threshold in a population of neurons improve its discrimination capacity could be one of the mechanisms by which a sparse code is generated in biological systems, and leaves the door open to adapting this into a bio-inspired algorithm that could work in the context of deep learning to improve the effectiveness of neural networks.
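
For reference, the similarity measure reduces to a few lines; the sparse KC representations below are random stand-ins for the model's actual internal representations.

    import numpy as np

    rng = np.random.default_rng(0)

    n_kc = 200
    # Random sparse stand-ins for the KC-layer representations of two classes
    reps = {c: (rng.random((50, n_kc)) < 0.1).astype(float) for c in ("A", "B")}

    def cosine_distance(u, v):
        return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

    d = [cosine_distance(u, v) for u in reps["A"] for v in reps["B"]]
    print("mean between-class cosine distance: %.3f" % np.mean(d))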

Acknowledgements

Funded by AEI/FEDER TIN2017-84452-R and PID2020-114867RB-I00.

References

1. Rodríguez FB, Huerta R. Techniques for temporal detection of neural sensitivity to external stimulation. Biological cybernetics. 2009 Apr;100(4):289–97.

2. Lopez-Hazas J, Montero A, Rodriguez FB. Strategies to Enhance Pattern Recognition in Neural Networks Based on the Insect Olfactory System. In: International Conference on Artificial Neural Networks 2018 Oct 4 (pp. 468–475). Springer, Cham.

3. Han J, Kamber M, Pei J. Data Mining: Concepts and Techniques. San Francisco (US).

4. Babadi B, Sompolinsky H. Sparseness and expansion in sensory representations. Neuron. 2014 Sep 3;83(5):1213–26.

Fig. 1

Discrimination of KCs among classes for different threshold configurations, which give different degrees of network activity depending on the parameter s. The greatest discrimination between classes is reached when the threshold adjustment sets the activity of the KCs low

P151 Improving the detection of ERPs and managing variability with dry electrodes in personalized brain computer interfaces

Vinicio Changoluisa 1 , Pablo Varona 2 , Francisco B. Rodriguez 2

1 Universidad Politécnica Salesiana, Quito, Ecuador

2 Universidad Autónoma Madrid, Ingeniería Informática, Madrid, Spain

Email: fchangoluisa@ups.edu.ec

Event-related potentials (ERPs) are positive and negative voltage deflections detected on the scalp in response to a specific stimulus. ERPs can be used to study and understand memory and attention, or as a control signal for brain-computer interfaces (BCIs) [1]. Given their wide utility, new technologies have been developed to facilitate brain monitoring for ERP detection, e.g., dry electrodes, which are more comfortable and require less set-up time than their wet counterparts. However, this modern technology still has problems to solve, such as a low signal-to-noise ratio compared to traditional wet electrodes [2], compounded by the well-known problem of inter- and intra-subject variability of brain activity in the context of precise ERP detection. Thus, it is necessary to develop algorithms that improve the detection accuracy of ERPs with dry electrodes. We propose to take advantage of the hit vector, a feature vector obtained from the characterization of ERPs with the maxAUC method [3] at each electrode. This method benefits from the continuous calculation of the area under the curve (AUC) in each epoch of the EEG signal related to the presented stimuli, thus preserving the spatial and temporal information structure of the ERPs. We initially proposed the AUC calculation to convert the hit vector into a scalar and thus rank each electrode. However, along with the P300, other components such as the N200 are generated [3]. Therefore, we propose the variance (VAR) as another metric with which to score the electrodes from the hit vector. We applied our methodology to a data set from a 12-subject P300-based BCI experiment using dry electrodes on three different days, to study the variability. The results show that characterizing the ERPs with maxAUC and scoring each dry electrode with AUC and variance has an advantage over choosing a set of standard electrodes (STD) traditionally used in P300 detection. We tested our method with configurations of 1, 2, 3, 5, and 7 electrodes and recordings performed on the same subjects on distinct days. Table 1 shows the accuracy reached with a Bayesian classifier (BLDA) using one electrode.

Table 1 Accuracy reached with one electrode. Cross-validation was implemented with two sessions (one for training and one for testing) per day.

For the other configurations, the accuracy achieved with our methodology is also higher, although the advantage decreases for configurations with a larger number of electrodes. The results show that with the AUC it is possible to deal with data with a low signal-to-noise ratio, reduce the number of electrodes, and achieve better accuracy in ERP detection. Finally, because a minimal electrode configuration is sought for each subject, it is possible to create technologies that customize ERP detection with better performance, managing variability while remaining user-friendly.
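
The electrode-scoring step can be sketched as follows, with random stand-ins for the per-electrode hit vectors produced by the maxAUC characterization [3].

    import numpy as np

    rng = np.random.default_rng(0)

    # Random stand-ins for the per-electrode hit vectors of the maxAUC method [3]
    n_electrodes, n_samples = 8, 150
    hit_vectors = rng.random((n_electrodes, n_samples))

    auc_score = hit_vectors.mean(axis=1)      # AUC-style scalar summary
    var_score = hit_vectors.var(axis=1)       # VAR also captures N200/P300 structure
    print("electrodes ranked by AUC:", np.argsort(-auc_score))
    print("electrodes ranked by VAR:", np.argsort(-var_score))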

Acknowledgements

Research supported by TIN2017-84452-R, PGC2018-095895-B-I00, PID2020-114867RB-I00, and 2015-AR2Q9086.

References

1. Yadav D, Yadav S, Veer K. A comprehensive assessment of Brain Computer Interfaces: Recent trends and challenges. Journal of Neuroscience Methods. 2020 Aug 25:108918.

2. Shad EH, Molinas M, Ytterdal T. Impedance and noise of passive and active dry EEG electrodes: A review. IEEE Sensors Journal. 2020 Jul 27;20(24):14565–77.

3. Changoluisa V, Varona P, Rodríguez FB. A low-cost computational method for characterizing event-related potentials for BCI applications and beyond. IEEE Access. 2020 Jun 5;8:111089–101.

P152 Closed-loop stimulation guided by minimal codes in the sequential activity of weakly electric fish

Angel Lareo 1 , Pablo Varona 2 , Francisco B. Rodriguez 2

1 Universidad Autonoma de Madrid, Madrid, Spain

2 Universidad Autónoma de Madrid, Ingeniería Informática, Madrid, Spain

Email: angel.lareo@estudiante.uam.es

Temporal code-driven stimulation (TCDS) is a closed-loop method for studying temporal sequences of activity in complex biological systems [1]. It adds to a long list of closed-loop stimulation techniques applied in neuroscience research, e.g., since the generalization of dynamic clamp methods [2]. In particular, it provides an easy and generalizable method to register the sequential activity of a living system as binary codes. It can be used to establish closed-loop stimulation with a biological system by triggering stimuli after the detection of predetermined sequences of events.

This method has been successfully applied to study weakly electric fish signaling [1,3,4]. The properties of the electromotor system of weakly electric fish, which generates electric signals in the water to communicate, make TCDS well suited to answering questions at the intersection of computational neuroscience and neuroethology. In Gnathonemus petersii, a species of pulse-type weakly electric fish, patterns in the sequences of pulse intervals (SPIs) have been related to behavioral responses [5]. Two of these patterns – scallops and accelerations – were used here to stimulate the fish during closed-loop stimulation sessions. We addressed the relevance of minimal codes – 2 bits representing short-term sequential activity – for characterizing the state of the system. Two different codes were selected to trigger the stimuli: 01 and 11. Results from 29 experiments and 7 different specimens show that, even when using such simple codes as triggers, distinct responses arose from different codes. As indicated by preliminary results using an aversive stimulus, these results hold as long as the stimulation is presented in a closed-loop manner, regardless of the shape of the stimulus [1].
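
The trigger logic can be sketched in a few lines: the activity is binarized into time bins and a stimulus is emitted whenever the last two bits match the predetermined code. The pulse train below is a random stand-in.

    import numpy as np

    rng = np.random.default_rng(0)

    pulses = rng.random(200) < 0.3            # stand-in binarized pulse train
    code = (1, 1)                             # predetermined trigger code, e.g. '11'

    triggers = [t for t in range(1, pulses.size)
                if (int(pulses[t - 1]), int(pulses[t])) == code]
    print("%d stimuli triggered by code %s" % (len(triggers), code))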

This response of the system could be explained by an increase in the probability of generating SPI patterns with shorter inter-pulse intervals (IPIs) – like scallops or accelerations – due to the presence of an artificial social context implemented by the closed-loop stimulation. TCDS also enables the use of triggering codes with behavioral significance, which are expected to evoke more significant changes in SPI pattern generation.

Acknowledgements

Funded by AEI/FEDER TIN2017-84452-R, PID2020-114867RB-I00, and PGC2018-095895-B-I00.

References

1. Lareo A, Forlim CG, Pinto RD, Varona P, Rodriguez FB. Temporal code-driven stimulation: definition and application to electric fish signaling. Frontiers in neuroinformatics. 2016 Oct 6;10:41.

2. Chamorro P, Muñiz C, Levi R, Arroyo D, Rodríguez FB, et al. Generalization of the dynamic clamp concept in neurophysiology and behavior. PLoS One. 2012 Jul 19;7(7):e40887.

3. Lareo A, Varona P, Rodriguez FB. Weakly electric fish information processing analyzed through close-loop code-driven stimulation. In 10th AIMS Conference on Dynamical Systems, Differential Equations and Applications, Special Session 77: Theoretical, Technical, and Experimental Challenges in Closed-Loop Approaches in Biology, Madrid, 2014.

4. Lareo Á, Forlim CG, Pinto RD, Varona P, Rodríguez FB. Analysis of electroreception with temporal code-driven stimulation. In: International Work-Conference on Artificial Neural Networks 2017 Jun 14 (pp. 101–111). Springer, Cham.

5. Lareo A, Varona P, Rodríguez FB. Evolutionary Tuning of a Pulse Mormyrid Electromotor Model to Generate Stereotyped Sequences of Electrical Pulse Intervals. In: International Conference on Artificial Neural Networks 2018 Oct 4 (pp. 359–368). Springer, Cham.

P153 Closed-loop temporal code-driven stimulation implemented and tested using Real-Time eXperimental Interface (RTXI)

Alberto Ayala 1 , Angel Lareo 2 , Pablo Varona 1 , Francisco B. Rodriguez 1

1 Universidad Autónoma de Madrid, Ingeniería Informática, Madrid, Spain

2 Universidad Autonoma de Madrid, Madrid, Spain

Email: angel.lareo@estudiante.uam.es

Feedback loops are relevant for understanding complex dynamics in neural systems. Closed-loop methodologies, in which the system is stimulated based on its ongoing activity, are well suited to studying this kind of dynamics [1,2]. However, relevant neural system events frequently occur at or below the millisecond scale. Therefore, closed-loop tasks must be implemented in this time range with appropriate precision. To guarantee compliance with these temporal constraints, it is convenient to use a real-time system, which performs tasks and responds to certain asynchronous events within a deterministic time frame. To analyze the performance of real-time systems it is necessary to measure latency, defined as the difference between the time when a task should start and the time when the task actually starts.
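
The latency measurement itself is simple to sketch. The plain-Python fragment below illustrates the definition (scheduled start minus actual start for a periodic task); RTXI performs the equivalent measurement in a hard real-time thread rather than in user-space Python, so the numbers printed here only illustrate the procedure.

    import time

    period = 0.001                            # 1-ms task period
    next_start = time.perf_counter() + period
    latencies = []
    for _ in range(1000):
        while time.perf_counter() < next_start:
            pass                              # busy-wait until the scheduled start
        latencies.append(time.perf_counter() - next_start)
        next_start += period
    print("mean latency %.1f us, max %.1f us"
          % (1e6 * sum(latencies) / len(latencies), 1e6 * max(latencies)))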

A real-time implementation of a closed-loop stimulation method known as Temporal Code-Driven Stimulation (TCDS) [3] is presented here. This implementation uses the Real-Time eXperiment Interface (RTXI), an updated, open-source, flexible, and fast hard real-time framework specifically designed for research in biology and neuroscience and widely used by many laboratories [4]. The TCDS protocol acquires a biological signal in real time with the required precision and binarizes it. The binary stream is used to stimulate the system after the detection of a predetermined binary code. This real-time protocol is useful for studying how neural systems encode, decode, and process temporal information, which is a complex task due to the high variability of temporal coding schemes, which can even be multiplexed. A performance analysis is carried out measuring latency values to verify that the TCDS protocol complies with the temporal constraints for its correct operation. In addition, a validation test of the protocol is performed using an electronic neuron mimicking a living entity.

The average latency obtained during this performance analysis is below the millisecond scale, and the maximum latency obtained is below the RTXI task period. Based on these results, we conclude that the TCDS protocol implemented with the RTXI tool fulfills the temporal requirements for the study of temporal coding in a wide variety of neural systems. Finally, the validation test showed that stimuli are emitted after code detection in the electronic neuron, with a coherent response to the stimulation. These results provide evidence of a successful TCDS implementation.

Acknowledgements

Funded by AEI/FEDER TIN2017-84452-R, PID2020-114867RB-I00, and PGC2018-095895-B-I00.

References

1. Chamorro P, Muñiz C, Levi R, Arroyo D, Rodríguez FB, et al. Generalization of the dynamic clamp concept in neurophysiology and behavior. PLoS One. 2012 Jul 19;7(7):e40887.

2. Varona P, Guardeño DA, Nowotny T, Ortiz FD. Online event detection requirements in closed-loop neuroscience. In: Closed Loop Neuroscience 2016 (pp. 81–91).

3. Lareo A, Forlim CG, Pinto RD, Varona P, Rodriguez FB. Temporal code-driven stimulation: definition and application to electric fish signaling. Frontiers in neuroinformatics. 2016 Oct 6;10:41.

4. Patel YA, George A, Dorval AD, White JA, Christini DJ, et al. Hard real-time closed-loop electrophysiology with the Real-Time eXperiment Interface (RTXI). PLoS computational biology. 2017 May 30;13(5):e1005430.

P154 Network patterns emerging from the interplay of lateral inhibition and the intrinsic properties of striatal MSN

Vicente Gonzalez Bosca 1 , Dennis Burke 2 , Veronica Alvarez 2 , Horacio Rotstein 3

1 New York University, Courant Institute of Mathematical Sciences, New York, NY, United States of America

2 National Institutes of Health, Laboratory on Neurobiology of Compulsive Behaviors, Bethesda, MD, United States of America

3 New Jersey Institute of Technology, Federated Department of Biological Sciences, NJIT / Rutgers University, Newark, NJ, United States of America

Email: vg2101@nyu.edu

The striatum is the input structure of the basal ganglia and plays an important role in the selection of motivated behaviors, and its dysfunction is involved in several neurological disorders. The processing of information flowing from upstream regions to the basal ganglia is believed to happen locally in the striatal circuitry. While this implies that the striatal output is not simply a relay station, it is still unclear how this processing occurs, how the flowing information is shaped by the striatal network components and ultimately affects behavior, and how the presence of dopamine affects these patterns. In previous work, we proposed a framework for the striatal projection-cell microcircuit based on lateral inhibition among different functional units containing spiny projection neurons from both the direct and indirect pathways [1], and argued that the asymmetrical architectures consistent with experimental findings on synaptic connectivity are best suited to produce behaviorally correlated patterns in which complementary “go” and “no-go” cells are simultaneously active, as well as switches between different types of behaviorally correlated patterns.

In this project we use biophysically plausible modeling, computational simulations, and experimental information to systematically analyze the patterns that emerge in these lateral-inhibition medium spiny neuron (MSN) networks as the result of the interplay of the intrinsic properties of the participating neurons and the network architecture. We use two qualitatively different types of MSN models, differing in their excitability properties, connected with GABAA inhibition with experimentally determined weight relations. One was adapted from the equations presented in [2] (type II) and the other combines information from the models previously used in [3,4] (type I). The neuron models were systematically reduced to two-dimensional subthreshold dynamics. We implement realistic network architectures following [1] and investigate the emerging patterns. We analyze the different ensembles the neurons can form, their dependence on the intrinsic cellular properties and the network connectivity, and the effect of dopamine on these patterns. We determine the dependence of the asymmetrical patterns on the heterogeneity of both the weights of the lateral inhibitory connections and the cellular properties. We compare our results with scenarios involving non-realistic architectures and non-realistic neuron models (e.g., without active ionic currents) to further establish the roles of the model building blocks in the emerging network patterns. Furthermore, we test the resonant properties of the networks and compare the two scenarios determined by the two model types.

References

1. Burke DA, Rotstein HG, Alvarez VA. Striatal local circuitry: a new framework for lateral inhibition. Neuron. 2017 Oct 11;96(2):267–84.

2. McCarthy MM, Moore-Kochlacs C, Gu X, Boyden ES, Han X, et al. Striatal origin of the pathologic beta oscillations in Parkinson's disease. Proceedings of the National Academy of Sciences. 2011 Jul 12;108(28):11620–5.

3. Gruber AJ, Solla SA, Surmeier DJ, Houk JC. Modulation of striatal single units by expected reward: a spiny neuron model displaying dopamine-induced bistability. Journal of neurophysiology. 2003 Aug;90(2):1095–114.

4. Evans RC, Morera-Herreras T, Cui Y, Du K, Sheehan T, et al. The effects of NMDA subunit composition on calcium influx and spike timing-dependent plasticity in striatal medium spiny neurons. PLoS computational biology. 2012 Apr 19;8(4):e1002493.

P155 Frequency filter interactions in networks of non-oscillatory cells

Andrea Bel 1 , Horacio Rotstein 2

1 Universidad Nacional del Sur, Departamento de Matemática, INMABB-CONICET, Bahía Blanca, Argentina

2 New Jersey Institute of Technology, Federated Department of Biological Sciences, NJIT / Rutgers University, Newark, NJ, United States of America

Email: andrea.bel@uns.edu.ar

Resonance refers to the ability of dynamical systems to exhibit a peak in their amplitude response to oscillatory inputs at a preferred (resonant) frequency. In neuronal circuits, resonance is typically measured using the impedance amplitude profile Z, defined as the absolute value of the quotient of the Fourier transforms of the output and the input. Resonance has been investigated in single neurons by many authors, both experimentally and theoretically [1,2]. Network resonance has received much less attention. Two important questions are (i) whether and under what conditions a network of neurons exhibits resonance in one or more neurons in response to inputs to one or more neurons, and (ii) whether and under what conditions information is communicated between neurons in a frequency-dependent manner.

In this project we address these issues using a minimal network consisting of two passive cells (linear, non-resonant neurons) recurrently connected via graded synaptic inhibition or excitation and receiving oscillatory inputs at either one or both nodes [3]. In order to investigate how network resonance emerges, we extend the concept of impedance to nonlinear systems by computing the peak-to-trough amplitudes normalized by the input amplitude. In order to investigate the communication of frequency-dependent information across neurons in the network, we borrow the concept of the coupling coefficient from the gap junction literature. The coupling coefficient K, defined as the quotient between the postsynaptic and presynaptic membrane potentials of two electrically coupled neurons, is used to measure the strength of the connection in the presence of constant (DC) inputs. Here we extend this metric to synaptically connected neurons and to the frequency domain. Linear networks (linear neurons and linear connectivity) can only show a low-pass K profile (K as a function of the input frequency). We show that the presence of more realistic nonlinear synaptic connectivity can produce band-pass K profiles. We note that the concept of communication of information used here is different from the standard one used in information theory.
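
For the linear (passive) case both profiles can be written in closed form. The sketch below computes Z(f) for a passive cell and K(f) for an electrically coupled passive pair, both of which are low-pass; parameter values are illustrative.

    import numpy as np

    C, gL = 1.0, 0.1                   # uF/cm^2, mS/cm^2 (illustrative passive cell)
    f = np.linspace(0.1, 100.0, 500)   # Hz
    w = 2.0 * np.pi * f / 1000.0       # rad/ms

    # Impedance of a passive cell: Z(f) = |1 / (i w C + gL)| -- low-pass
    Z = 1.0 / np.abs(1j * w * C + gL)

    # Coupling coefficient for an electrically coupled passive pair:
    # C dV2/dt = -gL V2 + gc (V1 - V2)  =>  K(f) = |gc / (i w C + gL + gc)|
    gc = 0.05
    K = gc / np.abs(1j * w * C + gL + gc)
    print("Z and K both peak at the lowest frequency (low-pass):",
          f[np.argmax(Z)], f[np.argmax(K)])

Nonlinear (graded) synaptic connectivity breaks this closed form, which is why the peak-to-trough extension described above is needed to reveal band-pass K profiles.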

Acknowledgments

This work was supported by the Universidad Nacional del Sur grant PGI 24/L113-2019 (AB) and the National Science Foundation grant DMS-1608077 (HGR).

References

1. Richardson MJ, Brunel N, Hakim V. From subthreshold to firing-rate resonance. Journal of neurophysiology. 2003 May 1;89(5):2538–54.

2. Rotstein HG, Nadim F. Frequency preference in two-dimensional neural models: a linear analysis of the interaction between resonant and amplifying currents. Journal of computational neuroscience. 2014 Aug 1;37(1):9–28.

3. Bel A, Rotstein HG. Membrane potential resonance in non-oscillatory neurons interacts with synaptic connectivity to produce network oscillations. Journal of computational neuroscience. 2019 Apr;46(2):169–95.

P156 Segregated resonant mechanisms in CA1 pyramidal cells: Interplay of ionic currents and cell’s spatial structure

Ulises Chialva 1 , Horacio Rotstein 2

1 Universidad Nacional del Sur, Departmento de Matemática and CONICET, Bahía Blanca, Argentina

2 New Jersey Institute of Technology, Federated Department of Biological Sciences, NJIT / Rutgers University, Newark, NJ, United States of America

Email: uchialva@gmail.com

Neuronal synaptic inputs are processed in a frequency-dependent manner, exhibiting either low-pass or band-pass (resonant) response properties [1]. Resonance is believed to play a key role in frequency-specific information flow in neuronal networks. While the generation of resonance by ionic conductances is well understood, less attention has been paid to the dependence of the resonant properties on the spatial structure of the cell and its voltage-dependent characteristics. It is well established that the spatial structure plays a key role in supporting different and spatially segregated mechanisms of resonance. Previous work [2] investigated the generation of resonance in CA1 pyramidal neurons due to the presence of different currents distributed along the cell. The authors uncovered two mechanisms: a somatic M-resonance at depolarized levels and a dendritic H-resonance at hyperpolarized levels. However, the interplay between these two mechanisms is not well understood, and it is not clear what interactions will ensue due to the presence of voltage heterogeneities along the cell, such as those expected under realistic conditions due to inhibitory inputs from PV+ (proximal) and OLM (distal) interneurons.

In this work we show how the mechanisms mentioned above interact at the subthreshold level due to significant differences in voltage across the cell membrane, generating new filtering regimes and resonant profiles, and thus modifying dendrosomatic integration and signal transmission across the neuron. To this end, we build a simple reconstruction of a biophysical neuron derived from standard multicompartment models. The model exhibits great flexibility to support different voltage distributions, and when the DC terms are applied with a spatial distribution mimicking the potential inhibitory input patterns, the difference between the resting voltage values of the somatic and distal compartments can be sufficient to activate or inactivate different mechanisms simultaneously. With a minimal set of currents, this model can recreate the classic results on the coexistence of different resonant mechanisms and also produce new scenarios with interaction between them. Further, we obtain the network impedance profile [3] and show that the spatial structure determines differences in magnitude between somatic and dendritic responses. These differences are then amplified by ionic currents and change for different H-channel distributions [4]. Finally, we study the implications this has for the signal-attenuation profile, such as the appearance of phasonance and frequency bands with less attenuation.

References

1. Hutcheon B, Yarom Y. Resonance, oscillation and the intrinsic frequency preferences of neurons. Trends in neurosciences. 2000 May 1;23(5):216–22.

2. Hu H, Vervaeke K, Graham LJ, Storm JF. Complementary theta resonance filtering by two spatially segregated mechanisms in CA1 hippocampal pyramidal neurons. Journal of Neuroscience. 2009 Nov 18;29(46):14472–83.

3. Leiser RJ, Rotstein HG. Network resonance: impedance interactions via a frequency response alternating map (FRAM). SIAM Journal on Applied Dynamical Systems. 2019;18(2):769–807.

4. Zhuchkova E, Remme MW, Schreiber S. Somatic versus dendritic resonance: differential filtering of inputs through non-uniform distributions of active conductances. PLoS One. 2013 Nov 5;8(11):e78908.

P157 Neuronal oscillations level sets for activity constancy: from single neurons to networks

Guillermo Villanueva 1 , Omar Itani 2 , Smita More-Potdar 2 , Jorge Golowasch 2 , Horacio Rotstein 3

1 Universitat Politecnica de Catalunya, Zaragoza, Spain

2 New Jersey Institute of Technology, Federated Department of Biological Sciences, Newark, NJ, United States of America

3 New Jersey Institute of Technology, Federated Department of Biological Sciences, NJIT / Rutgers University, Newark, NJ, United States of America

Email: guillermovillanuevabenito@gmail.com

Neuronal oscillatory patterns can be characterized by a number of attributes such as frequency, amplitude, duty cycle, characteristic transition times between silent and active phases, and number of spikes per burst. The values of these attributes are determined by the interplay of the participating currents and, for the appropriate currents, can be captured by the maximal conductances. Experimental and theoretical work has shown that multiple combinations of parameters can generate patterns with the same attributes [1–4]. This endows neurons and networks with flexibility to adapt to changing environments and is a substrate for homeostatic regulation [4]. At the same time, it presents modelers with the phenomenon of unidentifiability in parameter estimation. Attribute level sets (LSs) in parameter space are curves (surfaces or hypersurfaces) joining parameter values for which a given attribute is constant. Typically, but not always, LSs are attribute-dependent [2]. In previous work we characterized the dynamic compensatory mechanisms leading to the generation of activity-attribute LSs in realistic models of single neurons [2]. Whether and under what circumstances the attribute LSs of individual neurons are conserved in the networks in which they are embedded, and what additional network LSs emerge, is not well understood.
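
The notion of an attribute level set can be illustrated with a simple parameter scan: measure an attribute on a grid of two maximal conductances and collect the parameter pairs where it matches a target value. The attribute function below is a made-up stand-in for, e.g., the burst period returned by a full model simulation.

    import numpy as np

    def attribute(g1, g2):
        # Made-up stand-in for an attribute (e.g., period) measured from a simulation
        return g1 / (0.5 + g2)

    g1s = np.linspace(0.1, 2.0, 60)
    g2s = np.linspace(0.1, 2.0, 60)
    target, tol = 1.0, 0.02
    level_set = [(g1, g2) for g1 in g1s for g2 in g2s
                 if abs(attribute(g1, g2) - target) < tol]
    print("%d parameter pairs lie on the A = %.1f level set" % (len(level_set), target))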

In this work we describe a canonical (C-) model for oscillation LSs in single cells exhibiting a wide range of realistic neuronal oscillatory patterns. The model is canonical in the sense that all attributes share the same LS (the oscillations are identical along LSs) and can be considered an idealization of the familiar conductance-based two-dimensional models. A systematic symmetry breaking in the C-model leads to the familiar phase-plane diagrams for neuronal oscillations and to the separation of LSs for different attributes. The LSs of individual C-cells are preserved in networks of C-cells connected via gap junctions in which all cells belong to the same LS but are not necessarily identical. In contrast, LSs are not preserved in excitatory or inhibitory networks, except for certain connectivity patterns for which the model symmetries are maintained. However, new level sets emerge in these networks. We characterize them in terms of the single-cell LSs and the connectivity parameters for both homogeneous and heterogeneous networks, in which individual cells are identical or not, respectively, within the considered LS. We extend our results to biophysically plausible conductance-based network models.

References

1. Prinz AA, Bucher D, Marder E. Similar network activity from disparate circuit parameters. Nature neuroscience. 2004 Dec;7(12):1345–52.

2. Rotstein HG, Olarinre M, Golowasch J. Dynamic compensation mechanism gives rise to period and duty-cycle level sets in oscillatory neuronal models. Journal of neurophysiology. 2016 Nov 1;116(5):2431–52.

3. Olypher AV, Calabrese RL. Using constraints on neuronal activity to reveal compensatory changes in neuronal parameters. Journal of Neurophysiology. 2007 Dec;98(6):3749–58.

4. Olypher AV, Prinz AA. Geometry and dynamics of activity-dependent homeostatic regulation in neurons. Journal of computational neuroscience. 2010 Jun;28(3):361–74.

P158 Flexible selection of cognitive tasks and memory suppression in a hippocampus – prefrontal cortex network regulated by the nucleus reuniens

Rodrigo Pena 1 , Horacio Rotstein 2

1 New Jersey Institute of Technology, Federated Department of Biological Sciences, Newark, NJ, United States of America

2 New Jersey Institute of Technology, Federated Department of Biological Sciences, NJIT / Rutgers University, Newark, NJ, United States of America

Email: pena@njit.edu

Our ability to switch and perform an action in response to attended information is known as cognitive flexibility. The prefrontal cortex (PFC) is responsible for selecting and flexibly routing oscillatory information (items) from the hippocampus (HPC) to target areas [1]. These areas may receive commands from the PFC to suppress items in the HPC (retrieval suppression). Recently, a PFC model [2] showed that multiple stored items could be selected by making use of firing-rate resonance (fr) and lateral inhibition. There is evidence that transient PFC-HPC coupling via oscillatory synchrony is favored by the nucleus reuniens (Re) [1]. This raises the questions of how these structures operate cooperatively and what dynamic mechanisms underlie this cooperation.

We address these issues by developing a PFC-HPC model that extends [2]. It includes (i) simpler neurons, which allows for a mechanistic understanding of flexible routing, (ii) an HPC network with local inhibition from interneurons (IN) directed preferentially to nearby principal cells (PC), and (iii) relative input/output activity ratios in the PFC [3]. The HPC network receives square-wave, Poisson-modulated spikes at different frequencies and maintains multiple oscillatory activities. In addition, it receives external Re input, which influences cognitive selection and memory suppression [3]. We consider 2D conductance-based neuron models [4] in which 20 PC connect to all 5 IN in a single PFC gate. A second gate also connects to the same IN population. The HPC network contains 850 PC and 250 IN. Whichever PCs in a gate receive an input frequency from the HPC closer to fr fire more and engage the IN population, suppressing the other cells in the network. PC and IN have different fr, so engaging the IN population is the key to suppressing the other item. In accordance with [2], (ii) and (iii) are the only ingredients necessary to observe this effect in the PFC.
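The gating ingredient, firing-rate resonance, can be illustrated with a toy two-variable resonate-and-fire unit (our stand-in, not the authors' conductance-based model; all parameters hypothetical): spike output peaks at an intermediate drive frequency, so a gate preferentially passes items oscillating near its resonant frequency.

```python
# Toy firing-rate resonance: a two-variable subthreshold-resonant unit
# spikes most for sinusoidal drive near its resonant frequency (~8 Hz here).
import numpy as np

def spike_count(f_drive, T=5.0, dt=1e-4):
    tau_v, tau_w, g, vth = 0.01, 0.1, 2.5, 1.0   # hypothetical parameters
    v = w = 0.0
    n = 0
    for k in range(int(T / dt)):
        I = 1.2 + 0.8 * np.sin(2 * np.pi * f_drive * k * dt)
        v += dt * (-v - g * w + I) / tau_v       # fast voltage-like variable
        w += dt * (v - w) / tau_w                # slow recovery variable
        if v >= vth:                             # spike-and-reset
            v = 0.0
            n += 1
    return n

rates = {f: spike_count(f) / 5.0 for f in (1, 2, 4, 8, 12, 16, 32)}
print(rates)    # firing rate peaks near the resonant drive frequency
```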

Our results show that the inputs selected by the PFC, based on their proximity to the gate’s internal fr, can be switched by activity from the Re, which alters the periodicity of the selected item. In addition, Re input into the PFC can awaken an otherwise suppressed gate and engage the HPC, reversing the direction of flow. This shows the importance of the Re in routing oscillatory synchrony between HPC and PFC in both directions [3]. We also show the relevance of HPC local inhibition for maintaining many stored items in the same network. There is more flexibility if the Re controls HPC-PFC interactions, since it creates competition between PFC resonant networks in cognitive selection and HPC memory storage through activation of local inhibition.

Acknowledgments

This work was supported by the National Science Foundation grant DMS-1608077 (HGR).

References

1. Eichenbaum H. Prefrontal–hippocampal interactions in episodic memory. Nature Reviews Neuroscience. 2017 Sep;18(9):547–58.

2. Sherfey J, Ardid S, Miller EK, Hasselmo ME, Kopell NJ. Prefrontal oscillations modulate the propagation of neuronal activity required for working memory. Neurobiology of learning and memory. 2020 Sep 1;173:107228.

3. Dolleman-van der Weel MJ, Griffin AL, Ito HT, Shapiro ML, Witter MP, Vertes RP, Allen TA. The nucleus reuniens of the thalamus sits at the nexus of a hippocampus and medial prefrontal cortex circuit enabling memory and behavior. Learning & Memory. 2019 Jul 1;26(7):191–205.

4. Rotstein HG. The shaping of intrinsic membrane potential oscillations: positive/negative feedback, ionic resonance/amplification, nonlinearities and time scales. Journal of computational neuroscience. 2017 Apr 1;42(2):133–66.

P159 Revealing the link between spiking cross-correlation patterns and the underlying subthreshold neuronal dynamics

Rodrigo Pena 1 , Horacio Rotstein 2

1 New Jersey Institute of Technology, Federated Department of Biological Sciences, Newark, NJ, United States of America

2 New Jersey Institute of Technology, Federated Department of Biological Sciences, NJIT / Rutgers University, Newark, NJ, United States of America

Email: horacio@njit.edu

A sharp peak near zero in cross-correlation functions (CCFs) indicates the presence of a putative monosynaptic connection between the pre- and post-synaptic neurons [1,2]. However, CCFs are complex and contain significantly more information about the relationships between spiking patterns [2]. Some of this information is apparent from the spiking patterns themselves, but spiking patterns are controlled by the neuronal subthreshold (membrane potential) dynamics, whose effects remain hidden in CCFs. Whether and how the subthreshold dynamic information of post-synaptic neurons can be extracted from CCFs remains an open question. This is not a straightforward task, since in vivo neuronal interactions occur in the presence of background noise, oscillatory network activity, and resonances, which can often give rise to spiking patterns similar to those produced by subthreshold mechanisms, making it difficult to disambiguate the source of the pattern.

We address this issue by combining biophysical modeling, numerical simulations, and dynamical systems tools (phase-space analysis). By systematically focusing on a wide range of representative scenarios, we identify the presence of additional, lower peaks in the CCFs and link them to the types of nonlinearities and time scales that operate at the neuronal subthreshold level. Under certain conditions, the combination of these dynamic components, which result from the neuron’s biophysical properties, causes a subset of trajectories in the phase-space diagrams to remain at subthreshold membrane potential levels longer than others before escaping the subthreshold regime and producing a spike. The variability of this spike-time delay is due to a combination of noise and intrinsic dynamics. Similarly, our observations show that lower peaks also emerge in the presence of background oscillations or ripples, but these come from a second wave of spikes and not from subthreshold delayed spikes. We discuss the differences between these two types of peaks. Our results shed light on the mechanisms underlying monosynaptic interactions and more general synaptic and background patterns.
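For readers unfamiliar with CCFs, a minimal binned estimator is sketched below (illustrative choices of bin size and lag window, not the authors' pipeline); a putative monosynaptic connection appears as a sharp peak at a small positive lag, and the additional lower peaks discussed above would appear at other lags.

```python
# Minimal binned spike-train cross-correlation function (CCF).
import numpy as np

def ccf(pre, post, t_max, bin_ms=1.0, lag_ms=50.0):
    """Mean post-synaptic spike counts at each lag around pre spikes."""
    nlag = int(lag_ms / bin_ms)
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    post_counts, _ = np.histogram(post, edges)
    cc = np.zeros(2 * nlag + 1)
    for t in pre:
        b = int(t / bin_ms)
        lo, hi = max(b - nlag, 0), min(b + nlag + 1, len(post_counts))
        cc[lo - (b - nlag):hi - (b - nlag)] += post_counts[lo:hi]
    return np.arange(-nlag, nlag + 1) * bin_ms, cc / len(pre)

rng = np.random.default_rng(0)
pre = np.sort(rng.uniform(0, 60_000, 600))                  # times in ms
post = np.sort(np.r_[rng.uniform(0, 60_000, 600),
                     pre[rng.random(600) < 0.3] + 2.0])     # 2 ms delayed
lags, cc = ccf(pre, post, 60_000)
print("peak at lag (ms):", lags[np.argmax(cc)])             # ~ +2 ms
```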

Acknowledgments

This work was supported by the National Science Foundation grant DMS-1608077 (HGR).

References

1. English DF, McKenzie S, Evans T, Kim K, Yoon E, et al. Pyramidal cell-interneuron circuit architecture and dynamics in hippocampal networks. Neuron. 2017 Oct 11;96(2):505–20.

2. Platkiewicz J, Saccomano Z, McKenzie S, English D, Amarasingham A. Monosynaptic inference via finely-timed spikes. Journal of Computational Neuroscience. 2021 May;49(2):131–57.

P160 Audio frequency spike encoding methods evaluation through mutual information

Ahmad El Ferdaoussi1, Éric Plourde1, Jean Rouat1.

1 Université de Sherbrooke, Département de génie électrique et génie informatique, Sherbrooke, Canada

Email: ahmad.el.ferdaoussi@usherbrooke.ca

Auditory nerve fibers (ANFs) from the center to the edge of the cochlear spiral are tuned to progressively higher frequencies. This results in sound frequency being "place coded", which is an important property of the ANF response. Several methods have been proposed and used in auditory models to encode the real-valued vibrations of the basilar membrane into discrete ANF neural signals. However, it is not known to what extent these spike encoding methods can encode the frequency of sounds. In this work, we investigate how much information these methods carry in their population response about the instantaneous frequency of a time-dependent sound stimulus.

We first generate a simple stimulus that consists of random continuous frequency modulations in the range of 100 Hz to 10 kHz. We then extract a cochleagram representation from the stimulus using a Gammatone filter bank; the cochleagram is a rough approximation of auditory nerve fiber discharge probabilities. We encode the cochleagram into spikes with a population of neurons at a spike time resolution of 1 ms. We use four encoding methods: ISC (Independent Spike Coding with an inhomogeneous Poisson process), SoD (Send-on-Delta, based on the delta modulation sampling strategy) [1], BSA (Ben's Spiker Algorithm, based on stimulus estimation by reverse convolution) [2], and LIF coding (injecting the cochleagram as current into Leaky Integrate-and-Fire neurons). To probe the place coding of frequency, we investigate how much information the instantaneous neuronal population response carries about the time-dependent instantaneous frequency of the sound stimulus for each encoding method. To do so, we estimate the mutual information between these two variables. In doing this, we take into account any latency due to the processing of the spike encoding methods by finding the time delay between the two time series that maximizes the mutual information. We estimate this information for a wide range of mean firing rates by varying the parameters of each method (Fig. 1). The instantaneous frequency is quantized into 8 levels yielding a quasi-uniform distribution, so the total available information is about 3 bits. To make sure our mutual information estimate is reliable, we use a stimulus long enough that the estimated error (shuffling bias) is less than 0.02 bits. We use the quadratic extrapolation method to correct for bias in all mutual information measures [3].
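As an illustration of one of the four schemes, here is a minimal Send-on-Delta encoder (the threshold and test signal are hypothetical, not the values used in the study): a spike is emitted on a channel's UP or DOWN stream whenever the signal moves more than delta away from the last encoded level [1].

```python
# Minimal Send-on-Delta (SoD) spike encoder for one cochleagram channel.
import numpy as np

def send_on_delta(x, delta):
    up, down, ref = [], [], x[0]
    for i, v in enumerate(x):
        if v >= ref + delta:      # signal rose by delta -> UP spike
            up.append(i); ref = v
        elif v <= ref - delta:    # signal fell by delta -> DOWN spike
            down.append(i); ref = v
    return np.array(up), np.array(down)

t = np.arange(0.0, 1.0, 1e-3)             # 1 ms resolution, as in the text
x = np.sin(2 * np.pi * 5 * t)             # toy stand-in for one channel
up, down = send_on_delta(x, delta=0.1)
print(len(up), "UP spikes,", len(down), "DOWN spikes")
```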

We observe that the encoding methods peak in mutual information at different mean firing rates. The most efficient method to place code frequency is Leaky Integrate-and-Fire coding, which captures about 80% of the available information at a low firing rate of about 180 Hz (Fig. 1). This result is relevant for applications in which sound stimuli have to be transformed into spike representations in a biologically plausible way, like in computational modeling of the auditory system, in neuromorphic silicon cochleae (which are audio sensors that output asynchronous spikes), and in biologically plausible spiking neural networks used in audio applications.

References

1. Miskowicz M. Send-on-delta concept: An event-based data reporting strategy. Sensors. 2006 Jan;6(1):49–63.

2. Schrauwen B, Van Campenhout J. BSA, a fast and accurate spike train encoding scheme. In: Proceedings of the International Joint Conference on Neural Networks, 2003. 2003 Jul 20 (Vol. 4, pp. 2825–2830). IEEE.

3. Strong SP, Koberle R, Van Steveninck RR, Bialek W. Entropy and information in neural spike trains. Physical review letters. 1998 Jan 5;80(1):197.

Fig. 1

The information that the population response encodes on the instantaneous frequency of the sound stimulus. The y-axis is normalized by the total available information. The x-axis is the mean firing rate of the response which depends on the parameters of the encoding methods. BSA is limited in firing rate by design. LIF coding is the most efficient method

P161 An inter-spike interval study of the stochastic Morris–Lecar burster

Priscilla Greenwood 1 , Peter Rowat 2

1 University of British Columbia, Department of Mathematics, Vancouver, Canada

2 University of California, San Diego, Computational Neuroscience, San Diego, CA, United States of America

Email: pgreenw@math.ubc.ca

In Chapter 9 of his book "Dynamical Systems in Neuroscience", Izhikevich describes the deterministic dynamics of various types of neurons that emit bursts of rapid spikes separated by quiescent intervals. In this study we explore the stochastic pattern of bursts that results from including in the model the random noise that plays a part in the behaviour of any active neuron. The inter-spike interval histogram, which uses long simulation runs of the process and plots the number of occurrences of times between bursts as a function of time, is an estimator of the probability distribution of times between bursts, and is a useful characteristic of such a model. Here we extend an earlier study of the sample path behaviour of the stochastic Morris–Lecar process to a Morris–Lecar family of bursters.
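The histogram itself can be sketched independently of the specific Morris–Lecar variant: given a long train of spike times, spikes separated by less than a gap threshold are grouped into bursts, and the distribution of intervals between burst onsets is the estimator described above (a toy surrogate train and a hypothetical gap threshold are used below).

```python
# Inter-burst interval histogram from a long spike train (toy surrogate).
import numpy as np

def inter_burst_intervals(spike_times, gap=50.0):
    """Group spikes closer than `gap` (ms) into bursts; return intervals
    between successive burst onsets."""
    spike_times = np.sort(spike_times)
    onsets = [spike_times[0]]
    for prev, cur in zip(spike_times[:-1], spike_times[1:]):
        if cur - prev > gap:
            onsets.append(cur)
    return np.diff(onsets)

rng = np.random.default_rng(1)
burst_onsets = np.cumsum(300 + 40 * rng.standard_normal(500))  # ~300 ms apart
spikes = (burst_onsets[:, None] + np.array([0, 8, 17, 27])).ravel()
ibis = inter_burst_intervals(spikes)
hist, edges = np.histogram(ibis, bins=30)
print("mean inter-burst interval:", round(ibis.mean(), 1), "ms")
```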

P162 Modeling intermittent synchronization of gamma-band neural oscillations

Quynh-Anh Nguyen 1 , Leonid Rubchinsky 2

1 Indiana University Purdue University Indianapolis, Mathematical Sciences, Indianapolis, IN, United States of America

2 Indiana University Purdue University Indianapolis and Indiana University School of Medicine, Department of Mathematical Sciences and Stark Neurosciences Research Institute, Indianapolis, IN, United States of America

Email: lrubchin@iupui.edu

Synchronization in neural systems plays an important role in many brain functions such as perception and memory. Abnormal synchronization can be observed in neurological disorders such as Parkinson’s disease, schizophrenia, autism, and addiction. Neural synchronization is frequently intermittent even on short time scales. That is, neural systems exhibit intervals of synchronization followed by intervals of desynchronization. Thus, neural circuit dynamics may show different distributions of desynchronization durations even if the synchronization strength is similar, and it has been found that the patterning of neural synchrony (even if the overall synchrony strength is unchanged) may be correlated with behavior [1–3]. In general, some partially synchronized systems exhibit a few long desynchronized intervals, while others yield many short desynchronized intervals. Experimental data thus far have shown that neural synchronization follows the latter trend in both healthy and diseased brains [4,5]. In this study, we use a conductance-based PING network to study neural synchronization specifically in the low gamma band. We explore the cellular and synaptic effects on the temporal patterning of the partially synchronized model gamma rhythms and consider potential functional implications of different temporal patterns. We found that changing synaptic strength not only changes the average synchronization index but also alters the temporal patterning of synchronization (and these two do not necessarily co-vary in the same way). Stronger synapses from inhibitory to excitatory neurons and from excitatory to inhibitory neurons promote shorter desynchronizations, while stronger connections between inhibitory cells may have the opposite effect. However, in almost all cases, short desynchronizations were the most frequent, similar to the experimental observations.
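The quantity at issue, the durations of desynchronized episodes, can be estimated from a pair of band-limited signals along the lines of [4]; the sketch below (our simplified version, with hypothetical thresholds and toy signals) scores each sample as synchronized or not from the wrapped Hilbert phase difference and collects run lengths of the desynchronized samples.

```python
# Durations of desynchronized episodes from two oscillatory signals.
import numpy as np
from scipy.signal import hilbert

def desync_durations(x, y, fs, tol=np.pi / 2):
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    desync = np.abs(np.angle(np.exp(1j * dphi))) > tol   # wrapped |dphi|
    edges = np.flatnonzero(np.diff(desync.astype(int)))
    runs = np.diff(np.r_[0, edges + 1, desync.size])     # run lengths
    return (runs[::2] if desync[0] else runs[1::2]) / fs

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 40 * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 40 * t + 0.4) + 0.3 * rng.standard_normal(t.size)
print("median desync episode:", np.median(desync_durations(x, y, fs)), "s")
```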

Acknowledgements

This work was supported by NSF grant DMS-1813819.

References

1. Ahn S, Rubchinsky LL, Lapish CC. Dynamical reorganization of synchronous activity patterns in prefrontal cortex–hippocampus networks during behavioral sensitization. Cerebral Cortex. 2014 Oct 1;24(10):2553–61.

2. Ahn S, Zauber SE, Worth RM, Witt T, Rubchinsky LL. Neural synchronization: Average strength vs. temporal patterning.

3. Malaia EA, Ahn S, Rubchinsky LL. Dysregulation of temporal dynamics of synchronous neural activity in adolescents on autism spectrum. Autism Research. 2020 Jan;13(1):24–31.

4. Ahn S, Rubchinsky LL. Short desynchronization episodes prevail in synchronous dynamics of human brain rhythms. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2013 Mar 8;23(1):013138.

5. Ratnadurai-Giridharan S, Zauber SE, Worth RM, Witt T, Ahn S, et al. Temporal patterning of neural synchrony in the basal ganglia in Parkinson’s disease.

P163 Nonlinear optimal control of neural populations

Lena Salfenmoser 1

1 Technische Universität Berlin, Institute of Software Engineering and Theoretical Computer Science, Berlin, Germany

Email: lena.salfenmoser@tu-berlin.de

We investigate optimal control strategies for a biophysical mean-field model of excitatory and inhibitory neural populations [1]. Efficient stimulation can drive the model into specific activity patterns. We compute optimal control strategies for this nonlinear dynamical system to understand how to efficiently apply external perturbation to neural populations. This can give insights into the interaction of excitation and inhibition during state changes in neural activity. Also, it can help understand how external stimulation should be designed to optimally induce or stop specific activity patterns.

Our model is a mean-field approximation of the adaptive exponential integrate-and-fire model [1]. It consists of an excitatory and an inhibitory node with feedback and feedforward couplings, which receive external input. These external currents define the dynamical landscape of the system: there is a stable up state, a stable down state, oscillations, and a bistable region. Studying optimal control strategies for a biologically plausible model of neural dynamics might enable efficient perturbation strategies, as opposed to ad hoc stimulation protocols found by trial and error. The concept of optimality requires a measure of the cost of a control and of the resulting neural activity. The total cost is the sum of the precision cost (how much does the activity differ from a defined target?), the sparsity cost (is control applied over extended periods of time and through one or both nodes?), and the energy cost of the control [2]. The optimal control is the control with minimum cost (Fig. 1).
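In symbols, a total cost of this general form (our paraphrase, with weights $w_p$, $w_s$, $w_e$ standing in for the actual parameters of [2]) is minimized over controls $u$ subject to the model dynamics $\dot{x} = f(x, u)$:

$$F(u) = \frac{w_p}{2}\int_0^T \left\| x(t) - x^{\mathrm{target}}(t) \right\|^2 dt \;+\; w_s \int_0^T \left\| u(t) \right\|_1 dt \;+\; \frac{w_e}{2}\int_0^T \left\| u(t) \right\|^2 dt,$$

where the three terms are the precision, sparsity, and energy costs, respectively.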

As a first exploration of the potential of such optimal control strategies, we investigate transitions from down to up or from up to down state throughout the bistable regime, imposing constraints on either sparsity or energy. We compute the optimal control for these state-switching tasks with an iterative algorithm: in each step, it first applies the adjoint method [3] to compute the control gradient, and then approaches the optimal control by gradient descent. This is done numerically within neurolib, a simulation framework for neural modeling [4].
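Schematically, the iteration reduces to the loop below (a bare-bones sketch, not the neurolib implementation; `grad_via_adjoint` is a placeholder for the adjoint-method gradient):

```python
# Skeleton of adjoint-gradient descent for an optimal control problem.
import numpy as np

def optimize_control(grad_via_adjoint, u0, lr=0.1, n_iter=200):
    """grad_via_adjoint(u): returns dF/du computed with the adjoint method
    (here a user-supplied black box). Plain gradient descent on F(u)."""
    u = u0.copy()
    for _ in range(n_iter):
        u -= lr * grad_via_adjoint(u)
    return u

# Toy usage with a quadratic stand-in cost F(u) = 0.5 * ||u - u_star||^2
u_star = np.array([0.2, -0.1])
u_opt = optimize_control(lambda u: u - u_star, np.zeros(2))
print(u_opt)    # converges toward u_star
```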

The optimal control at one particular point in the state space is shown in the figure. We analyze the dimensionality (does the control use both nodes, or one node?), amplitude, and cost of the bell-shaped control currents. Enforcing energy efficiency leads exclusively to two-dimensional solutions (control is applied through both nodes). Enforcing sparsity can lead to solutions where control is applied through either the excitatory or the inhibitory node, as well as to two-dimensional solutions; which type is found depends on the location in the state space. Control through excitatory and inhibitory currents is inherently different in the sense that, first, inhibitory control is sparser, and second, energy-efficient control operates primarily through the excitatory node.

References

1. Cakan C, Obermayer K. Biophysically grounded mean-field models of neural populations under electrical stimulation. PLoS computational biology. 2020 Apr 23;16(4):e1007822.

2. Casas E, Herzog R, Wachsmuth G. Analysis of spatio-temporally sparse optimal control problems of semilinear parabolic equations. ESAIM: Control, Optimisation and Calculus of Variations. 2017 Jan 1;23(1):263–95.

3. Bradley AM. PDE-constrained optimization and the adjoint method. Technical Report. Stanford University. https://cs.stanford.edu/~ambrad/adjoint_tutorial.pdf; 2013.

4. Cakan C, Jajcay N, Obermayer K. neurolib: a simulation framework for whole-brain neural mass modeling. bioRxiv. 2021 Jan 1.

Fig. 1

The optimal control currents (second row) and the resulting excitatory (red) and inhibitory (blue) activity (first row) for the four state switching tasks for the static external input currents μEext = 0.45 nA, μIext = 0.475 nA. Precision is measured during the last 20 ms

P164 Stability and predictability code in higher-order neuronal correlations

Emili Balaguer-Ballester 1 , Ramon Nogueira 2 , Juan M. Abolafia 3 , Ruben Moreno-Bote 4 , Maria V. Sanchez-Vives 5

1 Faculty Of Science, Bournemouth University, Computing and Informatics, Bournemouth, United Kingdom

2 Mind Brain Behavior Institute, Columbia University, Center for Theoretical Neuroscience, Mortimer B. Zuckerman, New York, NY, United States of America

3 Institut d'Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Systems neuroscience, Barcelona, Spain

4 Universitat Pompeu Fabra, Center for Brain and Cognition & DTIC, Barcelona, Spain

5 Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), ICREA, Systems Neuroscience, Barcelona, Spain

Email: eb-ballester@bournemouth.ac.uk

The functional role of the observed neural and behavioural variability in repetitions of the same task is a fundamental question in neuroscience [1]. However, the relationship between trial-by-trial shared variability (noise correlation) and behavioural performance is heterogeneous [2]. For instance, it has been proposed that neuronal pairwise correlations might not always serve as a proxy for behavioural performance, since only the variability along the encoding axis is detrimental to information transmission [3].

In this study, we investigate the complex relationship between the predictability of optimal choices, correlations, and stable states in rodent lateral orbitofrontal cortex (OFC) ensembles. The OFC has been associated with multiple behaviourally relevant variables in the decision-making task space. However, unlike in other frontal areas, the OFC signature of whether optimal choices are or are not predictable from previous trial outcomes is less established [4]. We used a two-choice interval-discrimination task, designed such that the rewarded stimulus is repeated in the upcoming trial after an incorrect choice, and thus can be predicted. Methodologically, we demonstrated a mapping between noise correlations of order θ, decoders operating in specific high-dimensional Hilbert state-spaces, and the stability of ensemble states associated with correct choices. This mapping enabled us to explore the full space of all possible θ-order correlations, which is not directly accessible computationally, by leveraging Bayes-optimal kernel classifiers [5].
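One way to make the connection concrete: a polynomial kernel of degree θ implicitly spans products of up to θ units' activities, so a kernel classifier of that degree decodes from a feature space that includes all up-to-θ-order interactions. The sketch below uses an off-the-shelf SVM on surrogate data as a stand-in for the Bayes-optimal kernel classifiers of [5].

```python
# Degree-theta polynomial-kernel decoding of a binary choice from
# surrogate population spike counts (illustrative, not the real data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_units, theta = 200, 30, 3
X = rng.poisson(5, size=(n_trials, n_units)).astype(float)
y = rng.integers(0, 2, n_trials)
X[y == 1] += 0.4                       # weak rate difference between choices

clf = SVC(kernel="poly", degree=theta, coef0=1.0)  # coef0>0 keeps low orders
print("decoding accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```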

Results showed that only states associated with correct choices that can be predicted from the previous trial outcome are effectively decoded [5]; these states also showed higher positive noise correlations [2,5]. Moreover, such states behaved as attractors embedded in a high-dimensional state-space spanned by all possible constellations of up to θ = 3 correlated units. In contrast, both incorrect and unpredictable choice-outcome states were unstable in the state space and non-decodable. This was due to strong negative correlations occurring before stimulus presentation. These phenomena were significantly weaker for pairwise correlations and for other correlation orders.

Our results suggest that successful processing of the task by lOFC ensembles could map to long-lasting metastable states over trials. Such metastable states gain stability when the optimal choice is deterministic and behaviourally relevant, by attenuating triple-wise negative correlations, and destabilize otherwise [5].

References

1. Balaguer-Ballester E. Cortical variability and challenges for modeling approaches. Frontiers in systems neuroscience. 2017 Apr 4;11:15.

2. Valente M, Pica G, Bondanelli G, Moroni M, Runyan CA, et al. Correlations enhance the behavioral readout of neural population activity in association cortex. Nature Neuroscience. 2021 May 13:1–2.

3. Moreno-Bote R, Beck J, Kanitscheider I, Pitkow X, Latham P, et al. Information-limiting correlations. Nature neuroscience. 2014 Oct;17(10):1410–7.

4. Nogueira R, Abolafia JM, Drugowitsch J, Balaguer-Ballester E, Sanchez-Vives MV, et al. Lateral orbitofrontal cortex anticipates choices and integrates prior with current information. Nature Communications. 2017 Mar 24;8(1):1–3.

5. Balaguer-Ballester E, Nogueira R, Abolafia JM, Moreno-Bote R, Sanchez-Vives MV. Representation of foreseeable choice outcomes in orbitofrontal cortex triplet-wise interactions. PLoS computational biology. 2020 Jun 24;16(6):e1007862.

P165 Slow oscillations in mice show a rich club structure inside time evolving chimera states

Andrea Insabato 1 , Melody Torao-Angosto 1 , Anton Filipchuk 2 , Gorka Zamora-Lopez 3 , Brice Bathellier 2 , Maria V. Sanchez-Vives 4

1 Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain

2 Institut Pasteur, INSERM, Institut de l’Audition, Paris, France

3 Universitat Pompeu Fabra, Center for Brain and Cognition, Barcelona, Spain

4 Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), ICREA, Systems Neuroscience, Barcelona, Spain

Email: a.insabato@gmail.com

Slow oscillations are a pattern of synchronized activity commonly observed in the cerebral cortex, characterized by the alternation of high (Up) and low activity states (Down).

The structure of local brain networks underlying such a characteristic activity pattern is largely unknown.

In order to fill this gap, we study the evolution in time of network structure during synchronized activity (isoflurane anesthesia) versus desynchronized activity during the awake state.

We recorded activity from head-fixed mice expressing the GCaMP6s calcium indicator through a 1 mm-wide window over the temporal lobe, which allowed monitoring the simultaneous activity of ~200 neurons. Calcium images were preprocessed to identify neuronal cell bodies, extract the mean calcium fluorescence signal of each neuron, and reconstruct each neuron's spike train [1].

We used the Fano factor of calcium spike times to measure network synchronization. We estimated the time-evolving network topology with a sliding-window approach, in which for each window we calculate the L1-regularized precision matrix of the fluorescence traces. As a result, we obtained a time sequence of functional networks.
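A minimal version of this estimation step is sketched below (window length and regularization strength are hypothetical; the real analysis uses the recorded fluorescence traces):

```python
# Sliding-window L1-regularized precision matrices from fluorescence traces.
import numpy as np
from sklearn.covariance import GraphicalLasso

def time_evolving_networks(F, win=200, step=50, alpha=0.05):
    """F: (n_frames, n_neurons) traces -> list of precision matrices."""
    nets = []
    for start in range(0, F.shape[0] - win + 1, step):
        gl = GraphicalLasso(alpha=alpha).fit(F[start:start + win])
        nets.append(gl.precision_)
    return nets

F = np.random.randn(1000, 20)            # toy traces, 20 neurons
nets = time_evolving_networks(F)
print(len(nets), "windows, each", nets[0].shape, "precision matrix")
```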

Our results are summarized as follows. During synchronized periods of Up and Down states, population events (groups of spikes emitted by different neurons in a short time window) alternated with the silent periods characteristic of slow oscillations. Network synchronization as measured by the Fano factor increases at the beginning of a population event, then decreases and increases again at the end of the population event (Fig. 1). Although electrophysiological recordings have suggested that most or all neurons in the network contribute to Up states [2], our results revealed that during each population event only part of the observed network synchronizes, giving rise to a so-called chimera state (where synchrony and asynchrony coexist) [3]. As previously reported [4], these Up-like states were represented by repeating neuronal ensembles. Interestingly, we show that these ensembles present a non-trivial network structure characterized by a rich club of highly connected hub neurons linked to peripheral (less connected) nodes, which produces negative assortativity. During the awake state, network activity was generally higher and less synchronized, although some population events could still be identified. These Up-like events were less synchronous, and their structure was more similar to that of a randomly connected network.

Acknowledgements

Funded by EU H2020 Research and Innovation Programme, Grant No. 945539 (HBP SGA3).

References

1. Deneux T, Kaszas A, Szalay G, Katona G, Lakner T, et al. Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo. Nature communications. 2016 Jul 19;7(1):1–7.

2. Steriade M, Nunez A, Amzica F. A novel slow (< 1 Hz) oscillation of neocortical neurons in vivo: depolarizing and hyperpolarizing components. Journal of neuroscience. 1993 Aug 1;13(8):3252–65.

3. Majhi S, Bera BK, Ghosh D, Perc M. Chimera states in neuronal networks: a review. Physics of life reviews. 2019 Mar 1;28:100–21.

4. Filipchuk A, Destexhe A, Bathellier B. Spontaneous and sensory-evoked neuronal ensembles are distinct in awake mouse auditory cortex. Conference abstract 2–033. Cosyne (2021).

Fig. 1

Example of network topology dynamics during anesthesia. a, b Weighted adjacency matrices of two example networks corresponding to a population event (a), where a chimera state is clearly visible, and a silent period (b), as indicated by arrows; c Fano factor (upper panel) and rich club coefficient and assortativity (bottom panel) over time, superimposed on the spike raster

P166 Fast and slow inhibition on cortical spatiotemporal complexity in a computational model of the cerebral cortex

Leonardo Dalla Porta 1 , Maria V. Sanchez-Vives 2

1 Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain

2 Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), ICREA, Systems Neuroscience, Barcelona, Spain

Email: leonardodallaporta@gmail.com

The cerebral cortex exhibits a rich dynamic repertoire of activity ranging from highly synchronized to desynchronized states. Each of these states, whether physiological or pathological, can be characterized by its spatiotemporal complexity. An approach used in the clinic to quantify cortical complexity is the perturbational complexity index (PCI), which quantifies the causal interactions that follow an exogenous perturbation of the cortex [1]. It consists of estimating the Lempel–Ziv complexity of the spatiotemporal matrix of cortical activation after perturbation. However, how do cellular, synaptic, and network parameters modulate cortical spatiotemporal complexity? In cortical processing, excitation and inhibition co-occur both during spontaneous activity and in response to stimulation. To shed light on the role of inhibition in cortical complexity, here we propose a data-driven, biophysically detailed, two-dimensional computational model to investigate the relevance of fast inhibition, mediated by GABA-A receptors, and slow inhibition, mediated by GABA-B receptors.
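The Lempel–Ziv step of the PCI can be sketched compactly (an exhaustive LZ76 parser applied to a binarized toy activation matrix; the threshold, scan order, and normalization below are our illustrative choices, not the full PCI pipeline of [1]):

```python
# Normalized Lempel-Ziv (LZ76) complexity of a binarized activation matrix.
import numpy as np

def lz76_phrases(s):
    """Exhaustive LZ76 parsing: a phrase ends at the first position where
    the current substring no longer occurs in the preceding text."""
    c, start, length = 0, 0, 1
    n = len(s)
    while start + length <= n:
        if s[start:start + length] in s[:start + length - 1]:
            length += 1                      # still reproducible from past
        else:
            c += 1                           # new phrase complete
            start += length
            length = 1
    return c + (1 if length > 1 else 0)      # count trailing partial phrase

def lz_complexity(act, thresh=0.0):
    b = (act > thresh).astype(int).flatten(order="F")   # time-major scan
    s = "".join(map(str, b))
    return lz76_phrases(s) * np.log2(len(s)) / len(s)   # normalized

act = np.random.randn(64, 300)               # toy channels x time matrix
print("normalized LZ complexity:", round(lz_complexity(act), 3))
```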

Our model consists of pyramidal and inhibitory conductance-based neurons randomly distributed and interconnected through biologically plausible synaptic dynamics within a local range [2]. The model reproduces spontaneous activity in the form of slow oscillations (SO, characterized by Up and Down dynamics) as well as activity evoked by external perturbation. In our model, fast and slow inhibition modulated Up and Down dynamics. During spontaneous activity, progressive blockade of GABA-A resulted in a shortening of Up states and elongation of Down states, while progressive blockade of GABA-B resulted in a gradual elongation of both Up and Down states. During evoked activity, progressive reduction of GABA-A and GABA-B resulted in a decrease in the PCI. Taking advantage of the fact that the model allows exploration of a larger parameter space than experiments do, we performed a parametric variation of inhibition levels. Exploring the effects of fast inhibition on the PCI by both enhancing and decreasing inhibition, we found a window of excitatory/inhibitory balance in which complexity was maximal; either enhancing or decreasing inhibition diminished complexity. Indeed, we observed that during SO a disinhibited network was fully integrated but weakly segregated, giving rise to activation waves that rapidly spanned the whole network. Conversely, in an inhibited network the spontaneous activity was highly segregated and weakly integrated, and the activation waves propagated more locally and did not span the whole network. When there was a balance between integration and segregation, the activation waves spanned the whole network by recruiting nearest neighbors. Our findings suggest a close link between integration and segregation and E/I balance, and indicate that higher or lower PCI values are not merely the consequence of increased or decreased excitability.

Acknowledgments

Funded by the Spanish Ministry of Science (BFU2017-85048-R) and the EU H2020 Research and Innovation Programme, Grant 945539 (HBP SGA3).

References

1. Casali AG, Gosseries O, Rosanova M, Boly M, Sarasso S, et al. A theoretically based index of consciousness independent of sensory processing and behavior. Science translational medicine. 2013 Aug 14;5(198):198ra105.

2. Compte A, Sanchez-Vives MV, McCormick DA, Wang XJ. Cellular and network mechanisms of slow oscillatory activity (< 1 Hz) and wave propagations in a cortical network model. Journal of neurophysiology. 2003 May 1;89(5):2707–25.

P167 Classification of large-scale brainwaves in brain network models with heterogeneous time delays

Sebastian Raison 1 , Paula Sanz-Leon 2

1 QIMR Berghofer Medical Research Institute, Computational Biology and Genetics, Brisbane, Australia

2 University of Sydney, QIMR Berghofer, Brisbane, Australia

Email: sebraison1@gmail.com

Spatiotemporal patterns of neural activity, often called brainwaves, have been established as common expressions of the collective dynamical behaviour of neurons over mesoscopic [1–4] and macroscopic [5–8] spatial scales. Recently, it has been shown that macroscopic wave patterns can be simulated in whole-brain oscillator networks derived from human MRI tractography, when oscillator dynamics reflect the mean activity in a cortically localised neural aggregate with a high degree of biophysical fidelity and nearby regions exert strong influences on each other’s dynamics [9]. However, until now, it remained unknown whether whole-brain waves also emerge with realistic, tractography-based time delays, though distance-dependent delays are well known to contribute to the formation of spatial patterns [10]. We simulate whole-brain waves with delays empirically derived from human MRI tractography, and develop a classification system for the array of resulting dynamics. We utilise a dual approach to characterising patterns in 3-dimensional space. Firstly, we follow previous research in calculating 3D flow-fields [9], making use of the neural-flows toolbox (https://github.com/brain-modelling-group/neural-flows). The resulting flow patterns are described as sinks, sources, travelling waves, rotating waves, diverging waves, or complex waves. Secondly, we assess the local phase coherence [11,12] of patterns by use of a time- and node-averaged Kuramoto local order parameter, and describe dynamics as synchronised, coherent, partially coherent, or incoherent. Simulations exhibiting a variety of dynamical behaviours are obtained by varying global coupling strength, global conduction speed, and time delay spatial structure. We classify each simulation into one of 6 classes, constructed by observation of common pairings of a particular flow pattern and coherence description.
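The coherence measure can be sketched as follows (a node-averaged Kuramoto local order parameter over graph neighbourhoods; the adjacency matrix, phases, and averaging choices here are toy stand-ins for the tractography-derived network and simulated dynamics):

```python
# Node-averaged Kuramoto local order parameter on a graph.
import numpy as np

def local_order(theta, A):
    """theta: (n,) phases; A: (n, n) adjacency. Mean over nodes of the
    magnitude of the neighbourhood-averaged unit phase vectors."""
    z = np.exp(1j * theta)
    r_local = np.abs(A @ z) / np.maximum(A.sum(1), 1)
    return r_local.mean()   # in practice also averaged over time points

rng = np.random.default_rng(0)
n = 100
A = (rng.random((n, n)) < 0.1).astype(float)
np.fill_diagonal(A, 0)
print("incoherent:  ", local_order(rng.uniform(0, 2 * np.pi, n), A))
print("synchronised:", local_order(np.full(n, 0.3), A))
```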

We find that wave patterns emerge most strongly (i.e., with a high degree of local phase coherence) when global coupling strength and global conduction speed are high. We also find that while wave patterns with a high degree of coherence can occur even when time delays have been completely restructured, the empirical delay structure preferentially supports a stable coherent rotating wave when global coupling strength and conduction speed are sufficiently high. Other delay structures tend to either obliterate large-scale patterns (i.e., have very low local phase coherence) or support coherent activity with a variety of flow patterns other than the stable rotating wave (Fig. 1).

References

1. Huang X, Xu W, Liang J, Takagaki K, Gao X, et al. Spiral wave dynamics in neocortex. Neuron. 2010 Dec 9;68(5):978–90.

2. Rubino D, Robbins KA, Hatsopoulos NG. Propagating waves mediate information transfer in the motor cortex. Nature neuroscience. 2006 Dec;9(12):1549–57.

3. Sato TK, Nauhaus I, Carandini M. Traveling waves in visual cortex. Neuron. 2012 Jul 26;75(2):218–29.

4. Townsend RG, Solomon SS, Chen SC, Pietersen AN, Martin PR, et al. Emergence of complex wave patterns in primate cerebral cortex. Journal of Neuroscience. 2015 Mar 18;35(11):4657–62.

5. Hangya B, Tihanyi BT, Entz L, Fabó D, Eróss L, et al. Complex propagation patterns characterize human cortical activity during slow-wave sleep. Journal of Neuroscience. 2011 Jun 15;31(24):8770–9.

6. Ito J, Nikolaev AR, Van Leeuwen C. Spatial and temporal structure of phase synchronization of spontaneous alpha EEG activity. Biological cybernetics. 2005 Jan;92(1):54–60.

7. Massimini M, Huber R, Ferrarelli F, Hill S, Tononi G. The sleep slow oscillation as a traveling wave. Journal of Neuroscience. 2004 Aug 4;24(31):6862–70.

8. Muller L, Piantoni G, Koller D, Cash SS, Halgren E, Sejnowski TJ. Rotating waves during human sleep spindles organize global patterns of activity that repeat precisely through the night. Elife. 2016 Nov 15;5:e17267.

9. Roberts JA, Gollo LL, Abeysuriya RG, Roberts G, Mitchell PB, Woolrich MW, Breakspear M. Metastable brain waves. Nature communications. 2019 Mar 5;10(1):1–7.

10. Jeong SO, Ko TW, Moon HT. Time-delayed spatial patterns in a two-dimensional array of coupled oscillators. Physical review letters. 2002 Sep 20;89(15):154104.

11. Petkoski S, Spiegler A, Proix T, Aram P, Temprado JJ, Jirsa VK. Heterogeneity of time delays determines synchronization of coupled oscillators. Physical Review E. 2016 Jul 11;94(1):012209.

12. Kuramoto Y. Chemical turbulence. In Chemical Oscillations, Waves, and Turbulence 1984 (pp. 111–140). Springer, Berlin, Heidelberg.

Fig. 1

Dominant flow patterns arising from dynamics under tractography-based delays change with network global coupling strength and global conduction speed

P168 Altered intrinsic excitability impairs synaptic plasticity at Schaffer collateral synapses on hippocampal CA1 pyramidal neurons in Alzheimer’s disease

Justinas Dainauskas 1 , Michele Migliore 2 , Helene Marie 3 , Ausra Saudargiene 1

1 Lithuanan University of Health Sciences, Neuroscience Institute, Kaunas, Lithuania

2 Institute of Biophysics, National Research Council, Palermo, Italy

3 Université Côte d ‘Azur, CNRS, Institut de Pharmacologie Moléculaire et Cellulaire, Valbonne, France

Email: justinas.juozas.dainauskas@lsmu.lt

Long-term potentiation (LTP) and long-term depression (LTD), the ability of a synapse to strengthen or weaken, are believed to be a biological basis of learning and memory. Hippocampal synaptic plasticity is modulated by alterations in neuronal intrinsic excitability. Intrinsic excitability and synaptic plasticity are affected in Alzheimer’s disease (AD), a neurodegenerative disorder characterized by progressive memory loss and cognitive dysfunction. In the early stage of AD, hippocampal learning impairment is observed due to the accumulation of the amyloid precursor protein (APP) metabolite known as the APP intracellular domain (AICD), which modifies the intrinsic excitability of hippocampal CA1 pyramidal neurons and disrupts synaptic plasticity [1].

In this study, we investigated the effect of altered intrinsic excitability on synaptic plasticity in a hippocampal CA1 pyramidal cell affected by AD using a computational modeling approach. We used a detailed compartmental model of a hippocampal CA1 pyramidal neuron [2] and included the influence of AICD by altering the small-conductance calcium-activated potassium (SK) channels, L-type calcium channels, and the contribution of GluN2B-containing NMDA receptors (NMDAr). A modified NMDAr-dependent, voltage-based synaptic plasticity model [3] was used to analyse synaptic plasticity changes at clustered Schaffer collateral synapses. Each cluster contained 50 synapses distributed along the dendritic branches with densities in the range of 0.05 to 1.0 synapses/μm. The synapses were stimulated at 1 Hz for 900 s to induce LTD, and with 2 bursts of 100 Hz for 1 s, separated by a 2 s window, for LTP [1]. The results show that altered neuronal intrinsic excitability due to increased AICD production disrupts LTP while leaving LTD intact. Elevated AICD levels enhance NMDAr expression and lead to SK channel overactivation, thus reducing the neuron’s sensitivity to incoming presynaptic inputs for the high-frequency LTP induction protocol. In contrast, the neuron responds adequately to low-frequency stimulation and maintains LTD. Partial blockade of NMDAr restores normal SK channel function and rescues LTP. These findings provide insights into the pathological dynamical effects of AICD on NMDAr and SK channel properties, the resulting neuronal intrinsic excitability, and impaired synaptic plasticity.
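The voltage-based rule of [3] couples presynaptic spikes to the instantaneous and low-pass-filtered postsynaptic voltage; schematically (our recollection of its general form, not the modified version used here, with $[\,\cdot\,]_+$ denoting rectification):

$$\frac{dw}{dt} = -A_{\mathrm{LTD}}\, X(t)\, \big[\bar{u}_-(t) - \theta_-\big]_+ \;+\; A_{\mathrm{LTP}}\, \bar{x}(t)\, \big[u(t) - \theta_+\big]_+ \big[\bar{u}_+(t) - \theta_-\big]_+ ,$$

where $X$ is the presynaptic spike train, $\bar{x}$ its low-pass filter, $u$ the postsynaptic voltage, and $\bar{u}_\pm$ low-pass-filtered voltages. Low-frequency stimulation then engages mainly the LTD term, while high-frequency bursts engage the LTP term.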

Acknowledgements

Funded by the Research Council of Lithuania and the Agence Nationale de la Recherche (France) (Flagship ERA-NET Joint Transnational Call JTC 2019 in synergy with the Human Brain Project, No. S-FLAG-ERA-20–1/2020-PRO-28) and the EU Horizon 2020 Framework Programme for Research and Innovation (Specific Grant 945539, Human Brain Project SGA3); Fenix computing and storage resources were provided under Specific Grant Agreement No. 800858 (Human Brain Project ICEI) and a grant from the Swiss National Supercomputing Centre (CSCS) under project ID ich011.

References

1. Pousinha PA, Mouska X, Bianchi D, Temido-Ferreira M, Rajao-Saraiva J, et al. The amyloid precursor protein C-terminal domain alters CA1 neuron firing, modifying hippocampus oscillations and impairing spatial memory encoding. Cell reports. 2019 Oct 8;29(2):317–31.

2. Migliore R, Lupascu CA, Bologna LL, Romani A, Courcol JD, et al. The physiological variability of channel density in hippocampal CA1 pyramidal cells and interneurons explored using a unified data-driven modeling workflow. PLoS computational biology. 2018 Sep 17;14(9):e1006423.

3. Clopath C, Gerstner W. Voltage and spike timing interact in STDP–a unified model. Frontiers in synaptic neuroscience. 2010 Jul 21;2:25.

P169 Replicating bursting neurons that signal input slopes with Izhikevich neurons

Rebecca Miko 1 , Volker Steuber 1 , Michael Schmuker 1

1 University of Hertfordshire, Biocomputation Research Group, Hatfield, United Kingdom

Email: r.miko@herts.ac.uk

Pyramidal neurons commonly fire short, high-frequency bursts. Kepecs et al. [1] used a computational model of pyramidal neurons to understand whether particular spatial and temporal features of neuronal inputs trigger these bursts, which would suggest that such firing patterns represent a special neuronal code. Their two-compartmental model fired bursts most often on the positive slopes of both sinusoidal and naturalistic inputs.

Here, we simplify their model with a view to more efficient simulation and implementation on neuromorphic hardware. We do this by investigating whether the same behaviours seen in their model also arise in a network of intrinsically bursting (IB) Izhikevich neurons [2]. We create a comparable input signal of Gaussian white noise (sampling rate fs = 400 Hz, μ = 0.003, and σ = 0.005) and apply a 20 Hz Butterworth low-pass filter. The signal was first injected as input current directly into an excitatory IB Izhikevich neuron. The input current was then inverted and injected into the same neuron to obtain the inhibitory response, in order to investigate whether the neuron has the bidirectional slope detection demonstrated in the Kepecs model [1].
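A minimal version of this setup is sketched below, assuming the published IB parameters (a, b, c, d) of the Izhikevich model [2]; the current gain mapping the filtered signal into the model's input units is our hypothetical choice.

```python
# Intrinsically bursting (IB) Izhikevich neuron driven by low-pass-filtered
# Gaussian noise; invert I to probe bidirectional slope detection.
import numpy as np
from scipy.signal import butter, filtfilt

fs, T = 400.0, 10.0                            # Hz, seconds
rng = np.random.default_rng(0)
x = 0.003 + 0.005 * rng.standard_normal(int(fs * T))
b_lp, a_lp = butter(4, 20.0 / (fs / 2), btype="low")
I = 3000.0 * filtfilt(b_lp, a_lp, x)           # hypothetical gain

a, b, c, d = 0.02, 0.2, -55.0, 4.0             # IB parameters [2]
v, u = -70.0, -14.0
sub = 10
dt = (1000.0 / fs) / sub                       # 0.25 ms Euler substeps
spikes = []
for k, Ik in enumerate(I):
    for _ in range(sub):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + Ik)
        u += dt * a * (b * v - u)
        if v >= 30.0:                          # spike cut-off and reset
            spikes.append(k / fs)
            v, u = c, u + d
print(len(spikes), "spikes in", T, "s")
```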

The results show that the neuron fires most often on the positive slopes and also demonstrates bidirectional slope detection (Fig. 1 shows an example of this behaviour), specifically for low-pass filter cut-off frequencies fc > 20 Hz. Therefore, the IB Izhikevich neuron can indeed display behaviours similar to those of the Kepecs model [1].

References

1. Kepecs A, Wang XJ, Lisman J. Bursting neurons signal input slope. Journal of Neuroscience. 2002 Oct 15;22(20):9053–62.

2. Izhikevich EM. Simple model of spiking neurons. IEEE Transactions on neural networks. 2003 Nov;14(6):1569–72.

Fig. 1

Depiction of the IB Izhikevich neuron’s response to the first 1 s of the stimulus. The e spike train shows the response to the input current as shown, whereas the i spike train shows the response to the inverted input current

P170 Dynamical differential covariance (DDC) recovers network structure in multiscale neural systems

Yusi Chen 1 , Burke Rosen 2 , Terrence Sejnowski 3

1 University of California, San Diego, Neurobiology, San Diego, CA, United States of America

2 University of California, San Diego, Neuroscience, San Diego, CA, United States of America

3 Salk, Computational Neurobiology Lab, San Diego, CA, United States of America

Email: cyusi@ucsd.edu

Our ability to sense, think, and react emerges from neural interactions at all scales, so methods for investigating such causal relationships are essential to the study of brain function. A rich repertoire of statistical methods has been introduced to the field. Still, it remains difficult to efficiently and correctly estimate network connections, especially their direction [1]. Our previous work empirically evaluated differential covariance (dCov) [2], calculated as the covariance between the derivative signal and the original signal, and demonstrated its superior performance in detecting network connections. In this work, we explored the intrinsic link of dCov to dynamical systems and modified it to obtain dynamical differential covariance (DDC). After formulating system equations of multiscale neural dynamics, DDC was derived analytically and validated on both simulations and real datasets. In networks with common false-positive motifs governed by various dynamics, DDC correctly estimated both the existence and the direction of ground-truth connections with low bias and variance. In addition, DDC retrieved ground-truth connections with high sensitivity in both microscopic and macroscopic neural dynamics simulations. Furthermore, using Human Connectome Project (HCP) resting-state fMRI (rs-fMRI) recordings, DDC consistently picked up regional interactions with stronger structural connectivity, as measured by diffusion MRI (dMRI) [3], at the individual level. Compared to the empirical dCov, DDC has higher noise tolerance and higher sensitivity. Moreover, it has the potential to adapt to different interaction dynamics and recording techniques.
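The basic differential-covariance quantity is easy to state in code (the empirical dCov of [2]; the analytical corrections that define DDC are not reproduced here). In the toy two-node system below, the asymmetry of the off-diagonal entries reflects the direction of the imposed drive.

```python
# Empirical differential covariance: cov(dx/dt, x).
import numpy as np

def dcov(X, dt=1.0):
    """X: (n_time, n_nodes) -> (n_nodes, n_nodes) matrix cov(dX/dt, X)."""
    dX = np.gradient(X, dt, axis=0)
    Xc, dXc = X - X.mean(0), dX - dX.mean(0)
    return dXc.T @ Xc / (X.shape[0] - 1)

rng = np.random.default_rng(0)
n = 20_000
x = np.zeros((n, 2))
for t in range(1, n):                      # node 0 drives node 1
    x[t, 0] = 0.95 * x[t - 1, 0] + rng.standard_normal()
    x[t, 1] = 0.90 * x[t - 1, 1] + 0.5 * x[t - 1, 0] + rng.standard_normal()
print(np.round(dcov(x), 2))                # asymmetric off-diagonal entries
```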

References

1. Smith SM, Miller KL, Salimi-Khorshidi G, Webster M, Beckmann CF, et al. Network modelling methods for FMRI. Neuroimage. 2011 Jan 15;54(2):875–91.

2. Lin TW, Chen Y, Bukhari Q, Krishnan GP, Bazhenov M, et al. Differential covariance: A new method to estimate functional connectivity in fMRI. Neural Computation. 2020 Dec 1;32(12):2389–421.

3. Rosen BQ, Halgren E. A whole-cortex probabilistic diffusion tractography connectome. Eneuro. 2021 Jan;8(1).

P171 Odor-evoked increases in olfactory bulb mitral cell spiking variability

Cheng Ly 1 , Andrea Barreiro 2 , Shree Hari Gautam 3 , Woodrow Shew 3

1 Virginia Commonwealth University, Statistical Sciences and Operations Research, Richmond, VA, United States of America

2 Southern Methodist University, Mathematics, Dallas, TX, United States of America

3 University of Arkansas, Physics, Fayetteville, AR, United States of America

Email: cly@vcu.edu

At the onset of sensory stimulation, the variability and co-variability of spiking activity is widely reported to decrease, especially in cortex. Considering the potential benefits of such decreased variability for coding, it has been suggested that this could be a general principle governing all sensory systems. Here we show that this is not so. We recorded mitral cells in olfactory bulb (OB) of anesthetized rats and found increased variability and co-variability of spiking at the onset of odor stimulation. Using models and analysis, we predicted that these increases arise due to network interactions within OB, without increasing variability of input from the nose. We tested and confirmed this prediction using optogenetic stimulation of OB in awake animals. Our results establish increases in spiking variability at stimulus onset as a viable alternative coding strategy to the more commonly observed decreases in variability in many cortical systems.

Simultaneous microelectrode array recordings were made from the OB and anterior piriform cortex (aPC), with and without an odor stimulus (1120 cells, 17,674 pairs, 10 trials). An odor (ethyl butyrate) was presented for 1 s, from which we computed the population firing rate (i.e., the PSTH), spike count variance, and spike count covariance in 100 ms overlapping time windows (Fig. 1A). In contrast to recordings in cortex, measures of variability and co-variability in OB increased when the stimulus was presented (Fig. 1A).

In order to explain this, we studied a minimal microcircuit of 7 cells with 2 representative glomeruli (Fig. 1B), each with a periglomerular and a mitral cell. Three granule cells provided inhibition: two independent, one to each glomerulus, and a third common to both glomeruli. Each cell was described by a firing rate model in the form of a stochastic differential equation. The transfer functions, synaptic variables, and time scales were all derived from a detailed biophysical model [1] for each cell type. To identify the circuit mechanisms consistent with our experimental data, we considered both: i) the dynamics of plausible olfactory receptor neuron (ORN) input noise (presynaptic to OB) that capture our data, and ii) whether the OB synaptic strengths were a factor, focusing on those known to modulate mitral cell activity [2,3]: independent granule cell inhibition of mitral cells (wMG), shared granule cell inhibition (wGc), and mitral cell excitation of granule cells (wGM). We calculated model PSTHs for 10,000 points filling the 3D volume of parameter space (Halton sampling) and retained those that matched the experiments within a certain tolerance.
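The search itself follows a standard quasi-random design; a sketch (the parameter ranges, stand-in error function, and tolerance are all hypothetical placeholders for the model-data mismatch):

```python
# Halton sampling of the 3D (wMG, wGc, wGM) space with tolerance filtering.
import numpy as np
from scipy.stats import qmc

sampler = qmc.Halton(d=3, seed=0)
lo, hi = np.zeros(3), 2.0 * np.ones(3)
W = qmc.scale(sampler.random(10_000), lo, hi)   # columns: wMG, wGc, wGM

def model_error(w):                             # stand-in for PSTH mismatch
    wMG, wGc, wGM = w
    return abs(wMG - 1.2) + abs(wGc - 0.1) + abs(wGM - 0.6)

tol = 0.5
kept = W[np.array([model_error(w) for w in W]) < tol]
print(kept.shape[0], "of", W.shape[0], "parameter sets retained")
```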

We found that granule cell inhibition to distinct MCs must be relatively strong, while shared GC inhibition among MCs must be weak (Fig. 1C). Qualitatively matching our experimental data is not automatically ensured but requires specific combinations of co-tuned parameters (Fig. 1E). Importantly, we also found that the total error between model and data is minimized when the ORN input noise is relatively small and fixed (Fig. 1D). Thus, we predict that evoked increases in OB variability do not require increases in ORN input noise. In awake mice with direct optogenetic OB stimulation (Fig. 1F) that circumvents the ORN pathway [4], we indeed verified this prediction.

References

1. Li G, Cleland TA. A coupled-oscillator model of olfactory bulb gamma oscillations. PLoS computational biology. 2017 Nov 15;13(11):e1005760.

2. Galán RF, Fourcaud-Trocmé N, Ermentrout GB, Urban NN. Correlation-induced synchronization of oscillations in olfactory bulb neurons. Journal of Neuroscience. 2006 Apr 5;26(14):3646–55.

3. Giridhar S, Doiron B, Urban NN. Timescale-dependent shaping of correlation by olfactory bulb lateral inhibition. Proceedings of the National Academy of Sciences. 2011 Apr 5;108(14):5843–8.

4. Bolding KA, Franks KM. Recurrent cortical circuits implement concentration-invariant odor coding. Science. 2018 Sep 14;361(6407).

Fig. 1

A Recordings in anesthetized rats show evoked increases in variability (variance and covariance too, not shown), 1120 cells, 17,674 pairs, averaged over 10 trials; gray regions represent population heterogeneity. B Two glomeruli model focusing on individual (wMG) and shared (wGc) inhibition, and excitation (wGM). C wMG > wGM > wGc captures data best. D Small, fixed ORN input noise (black) captures data best. E Capturing firing rate data with model. F Direct optogenetic stimulation of OB in awake mice [4] gives evoked increases in spike variability, verifying our prediction that input from nose does not need to increase for evoked increases in OB spike variability

P172 Deep neural embedding of neuronal connectivity

Arata Shirakami 1 , Takuma Toba 2 , Isao Nakamoto 1 , Takeshi Hase 3 , Masanori Shimono 1

1 Kyoto University, Graduate School of Medicine and Faculty of Medicine, Kyoto, Japan

2 The University of Tokyo, Graduate School of Medicine and Faculty of Medicine, Bunkyo-ku, Japan

3 Tokyo Medical and Dental University, Medical Data Sciences Office, Bunkyo-ku, Japan

Email: nori417@gmail.com

Our brain works as a complex network of hundreds of millions of components: cells. With recent advances in measurement technology, the measurable size of data on brain connectivity is growing, so efficient compression methods are becoming more important. This study utilized a deep autoencoder (DAE) technique to compress the functional connectivity of ~1000 neurons [1] into a compact representation from which the original can be fully recovered. We then analyzed which features of the brain were captured by the compressed data, comparing it with several network variables and principal components (PCs). We also compared the performance of the DAE with that of Principal Component Analysis (PCA), a commonly used linear dimensionality reduction method.

We expected that the DAE would extract features within the range of human explanation, and possibly also features beyond it. Therefore, we not only tried to interpret the extracted features by comparing them broadly with representative network metrics, but also designed a new metric, not commonly used in network analyses, to complement the features that are difficult to interpret directly. This compression scheme will help us to effectively extract the rules of various complex connectivity architectures.
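A bare-bones version of such a compression scheme is sketched below (layer sizes, training schedule, and the toy data are hypothetical, not the architecture used in this study): a bottleneck autoencoder trained to reconstruct flattened connectivity matrices, whose code layer plays the role of the compressed representation compared against PCA.

```python
# Minimal bottleneck autoencoder for flattened connectivity matrices.
import torch
import torch.nn as nn

n_cells, code_dim = 100, 16
X = torch.rand(500, n_cells * n_cells)            # toy connectivity vectors

model = nn.Sequential(
    nn.Linear(n_cells * n_cells, 256), nn.ReLU(),
    nn.Linear(256, code_dim),                     # bottleneck embedding
    nn.Linear(code_dim, 256), nn.ReLU(),
    nn.Linear(256, n_cells * n_cells),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), X)    # reconstruction error
    loss.backward()
    opt.step()
print("reconstruction MSE:", float(loss))
```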

References

1. Kajiwara M, Nomura R, Goetze F, Kawabata M, Isomura Y, et al. Inhibitory neurons exhibit high controlling ability in the cortical microconnectome. PLoS Computational Biology. 2021 Apr 8;17(4):e1008846.

Fig. 1

The network pattern shows a functional neuronal connectivity pattern. Here, the marker sizes show Degree in panel (a), Betweenness Centrality in panel (b), and a new metric in panel (c)

P173 Prediction of clinical symptoms based on global cortical thinning patterns in Parkinson's disease

Saeko Kikuchi 1 , Arata Shirakami 2 , Masanori Shimono 2

1 Kyoto University, Faculty of Medicine, Kyoto, Japan

2 Kyoto University, Graduate School of Medicine and Faculty of Medicine, Kyoto, Japan

Email: nori417@gmail.com

In Parkinson's disease (PD), the relationship between cortical thinning and various physical and mental symptoms is not fully understood. Here, we attempted to predict PD symptoms from cortical thinning patterns in PD patients. We evaluated the motor and non-motor symptoms of 181 PD patients treated at Kyoto University Hospital using neurological tests, neuropsychological tests, and questionnaires. In addition, head MRI was recorded and T1-weighted images (MPRAGE) were obtained. We then determined cortical thickness from the T1-weighted images using FreeSurfer (ver. 6), dividing the cortex into 180 regions per hemisphere (360 regions in total) based on the HCP-MMP1 atlas.

From this dataset, we first drew a dendrogram based on Spearman correlations (Fig. 1), which evaluate the similarity of individual differences in behavioral performance among tasks, and we were able to naturally classify clinical tasks into groups close to known domains. Second, we predicted clinical-task performance from combinations of cortical thickness across all cortical regions using a machine learning algorithm. Because we found that age strongly affected individual differences in performance on many tasks, we will report prediction results corrected for the effect of age at the main conference.
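Our reading of the stated plan can be sketched as follows (variable names, the toy data, and the ridge model are hypothetical): regress age out of each thickness feature and each score, then predict the residualized score with a cross-validated linear model.

```python
# Age-corrected prediction of a clinical score from cortical thickness.
import numpy as np
from sklearn.linear_model import LinearRegression, RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 181, 360                               # patients, cortical regions
age = rng.uniform(45, 85, n)
thick = rng.standard_normal((n, p)) - 0.01 * age[:, None]
score = 0.5 * thick[:, 0] + 0.05 * age + rng.standard_normal(n)

def residualize(y, age):
    lr = LinearRegression().fit(age.reshape(-1, 1), y)
    return y - lr.predict(age.reshape(-1, 1))

thick_r = np.column_stack([residualize(thick[:, j], age) for j in range(p)])
score_r = residualize(score, age)
r2 = cross_val_score(RidgeCV(), thick_r, score_r, cv=5).mean()
print("age-corrected cross-validated R^2:", round(r2, 3))
```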

Acknowledgements

We warmly acknowledge the support of the many medical doctors in the Department of Neurology, Kyoto University Graduate School of Medicine, involved in this project, especially Kenji Yoshimura, Atsushi Shima, and Sawamoto Nobukatsu, and the support of Asuka Honji in data analysis.

Fig. 1

Dendrogram of behavioral tasks based on Spearman correlations, which reflect individual differences in task scores. The dendrogram is colored only to make the branches of subgroups easy to identify. Odd- and even-numbered tasks are listed on the left and right, respectively. Abbreviated names of tasks and subcategories are used to denote individual tasks

P174 Dynamic network reconstruction of hypothalamic melanin-concentrating hormone (MCH) neurons

Sorinel Oprisan 1 , Xandre Clementsmith 2 , Carlos Blanco-Centurion 3 , Priyattam Shiromani 3

1 College of Charleston, Department of Physics and Astronomy, Charleston, SC, United States of America

2 College of Charleston, Department of Psychology, Charleston, SC, United States of America

3 Medical University of South Carolina, Neuroscience, Charleston, SC, United States of America

Email: oprisans@cofc.edu

Hypothalamic neurons that synthesize the neuropeptide melanin-concentrating hormone (MCH) are active during waking and REM sleep. We used deep-brain calcium fluorescence imaging to identify individual hypothalamic neurons that contain MCH. Previous in-vivo electrophysiological studies established a linear relationship between neural depolarization and calcium fluorescence in MCH neurons. Spatial and temporal correlation maps of the change in fluorescence between pairs of MCH neurons revealed local coupling among neurons and the changes in connectivity that take place at the transition between REM sleep and exploratory behavior [1]. In this study, we investigated the causal relationship among different MCH neurons and modeled the local network using a Generalized Linear Model (GLM) and Transfer Entropy (TE). GLM is a generalization of linear regression [2] and TE is a measurement of directed information flow. The calcium fluorescence z-scores were fed into the MLSpike package to extract spike trains; MLSpike maps the continuous z-score values to a discrete point process. We used a normal distribution for the calcium fluorescence and tested Poisson and Gaussian distributions for the spike trains. In each of these cases, GLMs and TE models were fitted for each neuron, determining each neuron's effect on every other neuron. This approach differs from correlation measures of neural activity in that it is directional. Using GLMs and TE, we were able to approximate the directional (causal) couplings among neurons, i.e., the neural network's functional structure.

Comparisons between actual test data (red) and GLM predictions (blue) reveal strong model performance (Fig. 1). Correlation was used to measure the similarity between predicted and actual z-scores. The weighted connections estimated by linear regression (not shown) are nearly mirrored across the diagonal, since directionality is difficult to determine with only one explanatory variable. Additionally, there are a large number of extreme coefficients, with both strong excitatory and inhibitory connections.

In contrast, the GLM coefficients were more targeted, owing to the multivariate approach (Fig. 1). Only a few weights fell outside one standard deviation, with the set following a normal distribution (not shown); this was confirmed by a Kolmogorov-Smirnov test. Still, this was sufficient for predicting the test dataset. Additionally, the coefficients are less symmetric than with linear regression models, such that neuron 1's effect on neuron 2 is not identical to neuron 2's effect on neuron 1.
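The per-neuron GLM step can be sketched as follows; sklearn's PoissonRegressor, the lag-1 design, and the synthetic spike counts are our assumptions, standing in for whatever implementation the authors used:

```python
# Sketch: directed couplings from multivariate Poisson GLMs. W[i, j] is the
# fitted effect of neuron j (at t-1) on neuron i (at t); W is asymmetric.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)
spikes = rng.poisson(1.0, size=(5000, 8))   # time bins x 8 neurons (synthetic)
lagged, target = spikes[:-1], spikes[1:]
W = np.zeros((8, 8))
for i in range(8):
    X = np.delete(lagged, i, axis=1)        # predictors: all other neurons at t-1
    glm = PoissonRegressor(alpha=1e-3).fit(X, target[:, i])
    W[i, np.arange(8) != i] = glm.coef_
print(np.round(W, 2))                       # directional coupling matrix
```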

Acknowledgments

This research was supported by College of Charleston R&D. This project was also supported by grants from the National Center for Research Resources (5 P20 RR016461) and the National Institute of General Medical Sciences (8 P20 GM103499) from the National Institutes of Health. This work is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

References

1. Blanco-Centurion C, Luo S, Spergel DJ, Vidal-Ortiz A, Oprisan SA, et al. Dynamic network activation of hypothalamic MCH neurons in REM sleep and exploratory behavior. Journal of Neuroscience. 2019 Jun 19;39(25):4986–98.

2. Dobson AJ, Barnett AG. An introduction to generalized linear models. CRC press; 2018 Apr 17.

Fig. 1

GLM Modeling. a Prediction of a test dataset against the actual test dataset. b Multivariate connections calculated using Generalized Linear Models

P175 Uncovering the invariant structural organization of the human connectome

Anand Pathak 1 , Shakti N. Menon 1 , Sitabhra Sinha 1

1 The Institute of Mathematical Sciences, Chennai, India

Email: anandpathak31@gmail.com

In order to understand the complex cognitive functions of the human brain, it is essential to study the structural macro-connectome, i.e., the wiring of different brain regions to each other through axonal pathways, that has been revealed by imaging techniques. However, the high degree of plasticity and cross-population variability in human brains makes it difficult to relate structure to function, motivating a search for invariant patterns in the connectivity. At the same time, variability within a population can provide information about the generative mechanisms. In this paper we analyze the connection topology and link-weight distribution of human structural connectomes obtained from a database comprising 196 subjects. By demonstrating a correspondence between the occurrence frequency of individual links and their average weight across the population, we show that the process by which the human brain is wired is not independent of the process by which the link weights of the connectome are determined. Furthermore, using the specific distribution of the weights associated with each link over the entire population, we show that a single parameter that is specific to a link can account for its frequency of occurrence, as well as the variation in its weight across different subjects. This parameter provides a basis for “rescaling” the link weights in each connectome, allowing us to obtain a generic network representative of the human brain, distinct from a simple average over the connectomes. We obtain the functional connectomes by implementing a neural mass model on each of the vertices of the corresponding structural connectomes. By comparing these with the empirical functional brain networks, we demonstrate that the rescaling procedure yields a closer structure–function correspondence. Finally, we show that the representative network can be decomposed into a basal component that is stable across the population and a highly variable superstructure (Fig. 1).
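The first step of this analysis lends itself to a compact sketch; the 68-region parcellation, the toy generative scheme for link presence and weights, and all parameter values below are our assumptions for illustration only:

```python
# Sketch: per-link occurrence frequency vs mean weight across subjects, and
# the "basal" (ubiquitous) links. Synthetic connectomes, 196 subjects.
import numpy as np

rng = np.random.default_rng(3)
n_sub, n_reg = 196, 68
iu = np.triu_indices(n_reg, k=1)
n_links = iu[0].size
p = np.clip(rng.beta(0.5, 0.5, size=n_links), 0.02, 1.0)  # per-link occurrence probability
present = rng.random((n_sub, n_links)) < p                # link appears with its own probability
W = np.zeros((n_sub, n_links))
W[present] = rng.gamma(2.0, p[np.where(present)[1]])      # heavier weights where p is high (toy coupling)

freq = present.mean(axis=0)                               # fraction of subjects with the link
mean_w = np.where(freq > 0, W.sum(axis=0) / np.maximum(present.sum(axis=0), 1), 0.0)
mask = freq > 0
print("corr(frequency, mean weight):", np.corrcoef(freq[mask], mean_w[mask])[0, 1])
print("ubiquitous ('basal') links:", int((freq == 1.0).sum()))
```

In the toy scheme a single per-link parameter (here p) drives both occurrence frequency and weight, which is the kind of correspondence the abstract reports for real connectomes.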

Fig. 1

The representative structural connectivity of the human brain can be resolved into two components: the “basal” network (left) and the “superstructure” network (right). The former comprises 1106 ubiquitous links, i.e., those that occur in every individual, and the latter consists of the remaining 2806 links. The thickness of each link corresponds to its average weight across the population

P176 Modeling the dynamics of partially known systems via the integration of a system of ordinary differential equations into a recurrent neural network

Domas Linkevicius 1 , Angus Chadwick 1 , Melanie I. Stefan 2 , David C. Sterratt 1

1 University of Edinburgh, Institute for Adaptive and Neural Computation, School of Informatics, Edinburgh, United Kingdom

2 University of Edinburgh, Centre for Discovery Brain Science, Edinburgh, United Kingdom

Email: s1788788@ed.ac.uk

Synapses are complicated pieces of biochemical machinery that are necessary for various cognitive functions [1], and their dynamics have long been modelled computationally using ordinary differential equations (ODEs). Recent proteomic studies have shown that synapses contain on the order of a thousand unique protein species [2]. However, to date, models of biochemical activity within a synapse contain at most 55 unique protein species [3]. This disparity in the number of biochemical species between real and model synapses can be attributed to a dearth of knowledge that is needed to construct a structurally faithful and a fully parameterised dynamical ODE model of a synapse.

Since there is currently not enough data on the synaptic biochemical reaction structure and rates to construct a fully parameterised model, alternative approaches are necessary in order to have a model that incorporates more of the known synaptic physiology. Specifically, any new approach would have to be agnostic to the full structure and parameterisation of the biochemical system. We present a hybrid modelling framework where well parameterised parts of the biochemical network are modeled using ODEs, and parts of the network that are not well parameterised are modeled using a recurrent neural network (RNN) [4]. The RNN can learn dynamics that are consistent with a minimal set of assumptions and physiological data without relying on the precise knowledge of reaction structure and rates. Thus, we can link an existing ODE model to a learnable RNN model that represents parts of the system lacking data via a minimal set of assumptions. The state vector in our model consists of three parts: real biochemical species from the ODEs (e.g., calcium, calmodulin, etc.), RNN hidden states and bridge species connecting the two. The result is a machine learning model that is partly interpretable, can learn from data in order to reproduce observable dynamics and does not require prohibitively restrictive amounts of data on reaction structures and rates. We present a working toy example in order to illustrate our approach.
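The following toy sketch illustrates our reading of this framework (it is not the authors' code): a known ODE for one species is coupled, through a "bridge" flux, to an RNN hidden state that stands in for the unparameterised biochemistry. The decay ODE, the GRU cell, and all sizes are assumptions.

```python
# Toy hybrid ODE/RNN model: known dynamics for x, learned dynamics via a GRU.
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    def __init__(self, n_hidden=16):
        super().__init__()
        self.cell = nn.GRUCell(1, n_hidden)      # RNN part: unknown reactions
        self.readout = nn.Linear(n_hidden, 1)    # produces the bridge flux b

    def forward(self, x0, h0, n_steps, dt=0.01):
        x, h, xs = x0, h0, []
        for _ in range(n_steps):
            b = self.readout(h)                  # bridge species / flux from the RNN
            dx = -0.5 * x + b                    # known ODE part (toy decay + input)
            x = x + dt * dx                      # explicit Euler step
            h = self.cell(x, h)                  # RNN observes the biochemical state
            xs.append(x)
        return torch.stack(xs)

model = HybridModel()
traj = model(torch.zeros(1, 1), torch.zeros(1, 16), n_steps=200)
print(traj.shape)  # (200, 1, 1); in practice one trains by matching traj to measured dynamics
```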

Most other models of synaptic activity are constructed to investigate a single phenomenon or, at best, a handful of phenomena. Because existing models include only a small part of the full synaptic biochemical network [3], it is unclear how they would perform in reproducing a broader range of synaptic dynamics. While our work is still in its early stages, we believe that our semi-black-box RNN model could ultimately produce a dynamical model that is rich enough to reproduce a much larger set of observations of synaptic dynamics. In addition, we believe that such a model will serve as an invaluable tool for future experimental research via its use in probabilistic experimental design techniques.

References

1. Akhondzadeh S. Hippocampal synaptic plasticity and cognition. Journal of clinical pharmacy and therapeutics. 1999 Aug;24(4):241–8.

2. Roy M, Sorokina O, McLean C, Tapia-González S, DeFelipe J, et al. Regional diversity in the postsynaptic proteome of the mouse brain. Proteomes. 2018 Sep;6(3):31.

3. Heil KF, Wysocka EM, Sorokina O, Kotaleski JH, Simpson TI, et al. Analysis of proteins in computational models of synaptic plasticity. bioRxiv. 2018 Jan 1:254094.

4. You Y, Nikolaou M. Dynamic process modeling with recurrent neural networks. AIChE Journal. 1993 Oct;39(10):1654–67.

P177 Determinants of pattern recognition in a network model of cerebellar cortex

Ohki Katakura 1 , Reinoud Maex 2 , Shabnam Kadir 1 , Volker Steuber 2

1 University of Hertfordshire, Centre for Computer Science and Informatics Research, Hatfield, United Kingdom

2 University of Hertfordshire, Biocomputation Research Group, Hatfield, United Kingdom

Email: cns@neuronalpail.com

The cerebellum plays key roles in motor learning, temporal information processing and cognition. It has been suggested that the cerebellar granule cells (GrCs), the most numerous neurons in the brain, convert mossy fibre (MF) input patterns into sparser signals, which could maximise the storage capacity of the synapses onto the Purkinje cells (PCs). However, the postulated sparse coding scheme is still under discussion due to conflicting experimental findings. A clustered activation of MFs has been found in vivo, but the computational advantages of this clustering also remain unclear. Furthermore, GrC axons have two distinct parts, ascending axons (AAs) and parallel fibres (PFs). Experimental studies indicate that AAs excite PCs more strongly but AA synapses onto PCs are less plastic than PF synapses.

The goal of the present study was to examine how PCs can recognise spatial patterns in the input. In a previous study the input was applied directly to PFs [1]. Here we extended the previous model with a detailed granular layer model [2] to apply input to MFs. The extended network model measured 4.00 mm × 0.40 mm × 0.51 mm along the transversal, sagittal, and vertical axes, respectively, and contained 491,520 GrCs, 1,228 Golgi cells (GoCs), 1 PC, and 16,158 MFs forming 137,793 glomeruli. Based on different distributions of PC spines for PFs and AAs, the PC received input through 110,777 PF synapses (77.08%) and 32,933 AA synapses (22.92%). We also introduced 1,695 stellate cells as inhibitory Poisson generators spiking at 3.5 Hz. The spontaneous firing rate was 5 Hz for MFs, which evoked spontaneous spikes at a rate of 1.00 ± 0.12 Hz for GrCs, 7.6 ± 2.4 Hz for GoCs and 67 ± 32 Hz for the PC.

In particular, we wanted to explore the effect of the spatial extension, position and sparsity of the MF input [3], and in the next stage, the projection patterns from the GrCs to the PCs [4]. We found that fewer GrCs were activated with clustered MF input than when the same number of MFs were excited in a distributed fashion. As sparse coding is beneficial for pattern recognition, this result predicts that clustered MF input would improve the storage capacity of the cerebellar cortex. In these preliminary simulations, we also found large effects of the location of the excited MF patch. When we stimulated MFs beneath the PC, the PC showed higher firing rates or depolarisation block, depending on the stimulus intensity. Owing to the randomness of cell positions and connectivity, there were large variations in the strength of the PC response depending on the precise location of the stimulus. Likewise, we found a large variability in the difference between the PC responses to learnt and novel pattern stimuli. We are currently investigating the effect of this variability on the reliability of pattern recognition.

References

1. Safaryan K, Maex R, Davey N, Adams R, Steuber V. Nonspecific synaptic plasticity improves the recognition of sparse patterns degraded by local noise. Scientific reports. 2017 Apr 20;7(1):1–4.

2. Sudhakar SK, Hong S, Raikov I, Publio R, Lang C, et al. Spatiotemporal network coding of physiological mossy fiber inputs by the cerebellar granular layer. PLoS computational biology. 2017 Sep 21;13(9):e1005754.

3. Gilmer JI, Person AL. Morphological constraints on cerebellar granule cell combinatorial diversity. Journal of Neuroscience. 2017 Dec 13;37(50):12153–66.

4. Wilms CD, Häusser M. Reading out a spatiotemporal population code by imaging neighbouring parallel fibre axons in vivo. Nature communications. 2015 Mar 9;6(1):1–9.

P178 Synaptic pulse duration determines phase difference between asymmetrically coupled oscillators

Arun Neru Balachandar 1 , Alexander Khibnik 2 , Roman Borisyuk 3 , Joel Tabak 4

1 University of Exeter, Exeter, United Kingdom

2 Independent Scientist, Boston, MA, United States of America

3 University of Exeter, College of Engineering, Mathematics and Physical Sciences, Exeter, United Kingdom

4 University of Exeter Medical School, Exeter, United Kingdom

Email: j.tabak@exeter.ac.uk

The neural circuits that control movements can be dynamically reconfigured to support a wide range of behaviours. This flexibility is supported by the modular organization of these circuits into Central Pattern Generators (CPGs). The CPGs that generate swimming in fish and tadpoles form a chain of synaptically coupled oscillators along the spinal cord. For a tadpole to swim forward, the oscillators along the chain have to be phase locked, with oscillations propagating from head to tail. Tadpoles can also struggle – a stronger movement with oscillations propagating from tail to head. There is no clear mechanism to explain why the direction of propagation is reversed during struggling. Current hypotheses consider that the relative frequencies of the oscillators along the chain, or the ratio between descending and ascending excitatory coupling between oscillators, determine the direction of propagation.

Here we demonstrate that the duration of the synaptic pulses coupling the oscillators also determines the direction of propagation. In a chain of identical oscillators with unidirectional coupling, long synaptic pulses support propagation in the direction of the coupling, while short synaptic pulses support propagation in the opposite direction (Fig. 1).

To understand why this happens, we consider two identical Morris-Lecar oscillators with a unidirectional synaptic connection and analyse the phase difference between the postsynaptic and the presynaptic oscillators. At the relaxation limit, and for an infinitely short synaptic pulse duration, the postsynaptic oscillator is phase-advanced by its entire active phase. This is because during the active phase a short excitatory pulse phase-advances the postsynaptic oscillator, while during the silent phase a short pulse delays the oscillator. The stable point, where the pulse neither delays nor advances the oscillator, is at the jump from the active to the silent phase.

When pulse duration is increased, the pulse may delay the oscillator during the active phase, if it spans a significant amount of time close to the jump from active to silent phase. If it is received close to this jump, the pulse will delay the jump. This reduces the phase advance of the postsynaptic oscillator, and for large enough pulse duration the postsynaptic oscillator may even be phase-delayed.

This transition from advance to delay of the postsynaptic oscillator, as synaptic pulse duration is increased, is robust. We observe it far from the relaxation limit, and for other oscillator models, such as the Hodgkin Huxley model. It does not rely on the excitability type of the oscillators. The phase difference between the oscillators is controlled by the duration of the synaptic pulse, relative to the duration of the active phase of the oscillation.

These results extend from a pair to a chain of coupled oscillators, as shown in Fig. 1. This highlights a new hypothesis for the change in the direction of propagation between forward swimming and backward struggling. During struggling, oscillations are slower, so the synapses become depressed, shortening the relative duration of the synaptic coupling between oscillators. This shortening of the relative synaptic coupling duration could reverse the direction of propagation during struggling.
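The core single-oscillator measurement behind this argument can be sketched numerically. In the sketch below, the Morris-Lecar parameters (a standard type-I-like set), the pulse conductance, and the pulse timing are illustrative assumptions, not the values used in the study:

```python
# Sketch: phase shift of a Morris-Lecar oscillator caused by an excitatory
# synaptic pulse, as a function of pulse duration. Parameters are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

C, gL, gCa, gK = 20.0, 2.0, 4.0, 8.0
VL, VCa, VK, I0 = -60.0, 120.0, -84.0, 45.0
V1, V2, V3, V4, phi = -1.2, 18.0, 12.0, 17.4, 0.067

def ml(t, y, t_on=np.inf, dur=0.0, gsyn=0.0):
    V, w = y
    minf = 0.5 * (1 + np.tanh((V - V1) / V2))
    winf = 0.5 * (1 + np.tanh((V - V3) / V4))
    tauw = 1.0 / np.cosh((V - V3) / (2 * V4))
    Isyn = gsyn * (0.0 - V) if t_on <= t < t_on + dur else 0.0  # excitatory pulse
    dV = (I0 + Isyn - gL*(V - VL) - gCa*minf*(V - VCa) - gK*w*(V - VK)) / C
    return [dV, phi * (winf - w) / tauw]

def spike_times(sol):
    V = sol.y[0]
    up = np.where((V[:-1] < 0.0) & (V[1:] >= 0.0))[0]  # upward 0 mV crossings
    return sol.t[up]

t_eval = np.arange(0.0, 1500.0, 0.05)
base = solve_ivp(ml, (0, 1500), [-20.0, 0.1], t_eval=t_eval, max_step=0.5)
T = np.mean(np.diff(spike_times(base)[-10:]))          # unperturbed period
for dur in (1.0, 30.0):                                # short vs long pulse (ms)
    pert = solve_ivp(ml, (0, 1500), [-20.0, 0.1], t_eval=t_eval,
                     max_step=0.5, args=(1000.0, dur, 0.5))
    d = (spike_times(base)[-1] - spike_times(pert)[-1]) / T
    shift = (d + 0.5) % 1.0 - 0.5                      # positive = phase advance
    print(f"pulse of {dur:4.1f} ms -> phase shift {shift:+.3f} cycles")
```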

Acknowledgments

We thank the Biotechnology and Biological Sciences Research Council for supporting RB with BB/T002352/1 and JT with BB/T002549/1.

Fig. 1

Activity propagation along a chain of Morris-Lecar oscillators. A Tadpole body and chain of 20 oscillators with one-way excitatory synaptic coupling, to model wave propagation along the tadpole spinal cord. B “Tail-to-head” propagation for short synaptic pulse duration. C “Head-to-tail” propagation for large synaptic pulse duration

P179 A role of receptor desensitization, feedback loops and spontaneous activity in astrocyte calcium responses

Andrew Liu 1 , Gregory Handy 2 , Alla Borisyuk 1 , Marsa Taheri 3

1 University of Utah, Mathematics, Salt Lake City, UT, United States of America

2 University of Chicago, Neurobiology and Statistics, Grossman Center for Quantitative Biology and Human Behavior, Chicago, IL, United States of America

3 University of California, Los Angeles, Physiology, Los Angeles, CA, United States of America

Email: aliu@math.utah.edu

Astrocytes are a major cell type in the mammalian brain that produce large cytosolic calcium signals that are thought to mediate astrocytes’ critical functions in the brain. These calcium transients are often initiated by the binding of neurotransmitters (e.g., glutamate and ATP) to G-protein-coupled receptors (GPCRs) on the surface of astrocytes. In this work, we extend an earlier detailed model of the astrocyte calcium response [1,2] to include biochemical reaction cascades from GPCR activation to the calcium signal. Importantly, we build in putative positive and negative feedback loops from the cytosolic calcium to the signaling molecule inositol 1,4,5-trisphosphate (IP3), as well as two types of desensitization proposed for GPCRs (see Fig. 1 for a schematic of our model). We use dynamical systems analysis and numerical simulations of the model to test a number of experimentally-derived hypotheses about the astrocyte responses, and offer new testable predictions to further our understanding of this system.

Namely, we make the following observations and predictions. We start by providing computational evidence for two types of GPCR desensitization. Homologous desensitization affects only activated receptors, while the slower heterologous desensitization depends on a downstream intermediary molecule and affects all GPCRs. We propose experiments that would distinguish whether one or the other or both types of desensitization are at play in a particular experimental preparation. Then, we suggest that the experimentally-observed reduction in calcium level (or a reduction in amplitude of the continued calcium spike oscillations) in response to a sustained stimulus may be more dependent on GPCR desensitization than on depletion of calcium levels in the endoplasmic reticulum of the cell. Next, we show that astrocyte spontaneous calcium activity contributes to the variability of calcium responses to a brief agonist pulse. Finally, we demonstrate that potential positive and negative feedback loops from calcium onto IP3 production play crucial roles in determining the response delay and the distribution of the calcium response types. Thus, we predict that the presence and the relative prominence of these feedback loops can be assessed based on experimentally recorded calcium responses to specific experimental perturbations. Overall, our results improve our understanding of astrocyte physiology, and provide specific predictions for future experiments.
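The receptor portion of such a model can be written compactly. The following schematic sketch reflects our reading of the Fig. 1 cascade, with made-up rate constants (the actual model adds the downstream IP3 and calcium dynamics):

```python
# Schematic GPCR cascade with homologous (Gd1) and heterologous (Gd2)
# desensitization; lam is the intermediary molecule (lambda). Rates are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def gpcr(t, y, agonist):
    G, Gs, Gd1, Gd2, lam = y
    a = agonist(t)
    dG   = -a*G + 0.1*Gs + 0.01*Gd1 + 0.02*Gd2 - 0.05*lam*G
    dGs  =  a*G - 0.1*Gs - 0.2*Gs          # activation, deactivation, homologous desens.
    dGd1 =  0.2*Gs - 0.01*Gd1              # homologous: affects activated receptors only
    dGd2 =  0.05*lam*G - 0.02*Gd2          # heterologous: driven by lam, affects all GPCRs
    dlam =  0.1*Gs - 0.05*lam              # intermediary produced downstream of GPCR*
    return [dG, dGs, dGd1, dGd2, dlam]

pulse = lambda t: 0.5 if 10.0 < t < 60.0 else 0.0      # sustained agonist application
sol = solve_ivp(gpcr, (0, 200), [1, 0, 0, 0, 0], args=(pulse,), max_step=0.5)
i59 = np.argmin(np.abs(sol.t - 59.0))
# Active GPCR* decays during the sustained stimulus, i.e., the receptor desensitizes:
print("active GPCR* peak vs late in pulse:", sol.y[1].max(), sol.y[1][i59])
```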

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. NSF-DMS-1853673.

References

1. Handy G, Taheri M, White JA, Borisyuk A. Mathematical investigation of IP3-dependent calcium dynamics in astrocytes. Journal of computational neuroscience. 2017 Jun;42(3):257.

2. Taheri M, Handy G, Borisyuk A, White JA. Diversity of evoked astrocyte Ca2+ dynamics quantified through experimental measurements and mathematical modeling. Frontiers in systems neuroscience. 2017 Oct 23;11:79.

Fig. 1

Schematic representation of our dynamical system. The GPCR system includes inactive GPCR (GPCR), activated GPCR (GPCR*) with activation induced by glutamate or ATP (G/A), homologous desensitization (Gd1), heterologous desensitization (Gd2), and an intermediary molecule representing the signal cascade that leads to heterologous desensitization (λ)

P180 Online working memory training changes brain metabolism in self-control and default mode networks

Rongxiang Tang 1 , Changho Choi 2 , Yiyuan Tang 3

1 Washington University in St. Louis, Psychological & Brain Sciences, Saint Louis, MO, United States of America

2 University of Texas Southwestern Medical Center, Advanced Imaging Research Center, Dallas, TX, United States of America

3 Texas Tech University, Lubbock, TX, United States of America

Email: yiyuan.tang@ttu.edu

Working memory training (WMT) has been used to improve attention and cognitive functions. Neuroimaging studies have shown its effects on brain function and structure in the cognitive control, salience, and default networks [1]. However, WMT-induced neurochemical changes are sparsely investigated, and how brief WMT could change brain metabolism in these networks remains unexplored. Utilizing non-invasive 3 T proton magnetic resonance spectroscopy (1H-MRS), we conducted the first pilot study investigating whether brief WMT could change the excitatory and inhibitory responses of neurotransmitters within key nodes of these networks – the dorsolateral prefrontal cortex (dlPFC), anterior cingulate cortex (ACC) and posterior cingulate cortex (PCC), regions highly relevant for WMT [1,2]. Ten healthy college students completed ten 1-h online WMT sessions within two weeks, and brain metabolite levels were assessed before and after WMT. Following survey imaging and T1-weighted structural imaging, single-voxel point-resolved spectroscopy (PRESS) was conducted to estimate metabolite concentrations in the left dlPFC, dorsal ACC and PCC [1,3]. PRESS scan parameters included TR 2 s, TE 90 ms, sweep width 2.5 kHz, 1024 sampling points, and 256 signal averages. Water suppression and B0 shimming up to second order were performed with the vendor-supplied tools. A reference water signal was acquired for eddy current compensation, multi-channel combination, and metabolite quantification. Spectral fitting was performed with the LCModel software [4], using in-house basis spectra of 18 metabolites calculated incorporating the PRESS slice-selective RF and gradient pulses. The spectral fitting was performed between 0.5–4.0 ppm. After correcting the LCModel estimates of metabolite signals for T2 relaxation effects, the millimolar concentrations of metabolites were calculated with reference to water at 42 M [5]. Results indicated significant increases in myo-inositol in the left dlPFC (p = 0.017) and in choline in both the dorsal ACC (p = 0.007) and PCC (p = 0.021) after WMT. Our results suggest that brief WMT can change glia-related metabolites such as myo-inositol and choline in key hubs of the cognitive control, salience, and default networks. However, these findings warrant further investigation with larger sample sizes.

Acknowledgments

This work was supported by ONR.

References

1. Pappa K, Biswas V, Flegal K, Evans J, Baylan S. Working memory updating training promotes plasticity & behavioural gains: A systematic review & meta-analysis. Neuroscience & Biobehavioral Reviews. 2020 Jul 30.

2. Cichocka M, Bereś A. From fetus to older age: a review of brain metabolic changes across the lifespan. Ageing research reviews. 2018 Sep 1;46:60–73.

3. Tang YY, Askari P, Choi C. Brief mindfulness training increased glutamate metabolism in the anterior cingulate cortex. NeuroReport. 2020 Nov 4;31(16):1142–5.

4. Provencher SW. Estimation of metabolite concentrations from localized in vivo proton NMR spectra. Magnetic resonance in medicine. 1993 Dec;30(6):672–9.

5. Ganji SK, Banerjee A, Patel AM, Zhao YD, Dimitrov IE, et al. T2 measurement of J‐coupled metabolites in the human brain at 3 T. NMR in biomedicine. 2012 Apr;25(4):523–9.

P181 Sleep latency modulates the relationships between adaption-innovation and resting-state EEG microstates

Xiaoqian Ding 1 , Fengzhi Cao 1 , Shenghan Wang 1 , Xinshu Wang 1 , Yiyuan Tang 2

1 Liaoning Normal University, Dalian, China

2 Texas Tech University, Lubbock, TX, United States of America

Email: yiyuan.tang@ttu.edu

The Efficiency subcomponent of the Kirton Adaption-Innovation Inventory (KAI) [1] is closely linked to conscientiousness among the Big-Five personality traits [2]. Conscientiousness predicts better sleep continuity and subjective sleep quality [3]. Neurologically, conscientiousness is associated with the functional connection of the dorsal anterior cingulate cortex (dACC) and insula, two main components of the Salience Network (SN) [4]. However, an over-activated SN has frequently been observed in insomnia research [e.g., 5]. Therefore, the relationships between sleep quality, Efficiency, and the SN require further clarification. Resting-state electroencephalogram (RS-EEG) microstates can be divided into four classical types, with type 3 (MS3) localized to the operculo-cingulate cortex, including the ACC and anterior insula [6]. Utilizing EEG microstate analysis, the current study aims to elucidate the link between trait Efficiency and sleep quality, as well as the underlying neural mechanism. We hypothesized that sleep quality moderates the association between Efficiency and the RS-EEG microstates. Sixty-one Chinese college students (22 females, 20.84 ± 1.53 years old) participated in this study. Their adaption-innovation and sleep quality (e.g., sleep latency) were measured with the KAI and the Pittsburgh Sleep Quality Index [7]. EEG microstate analysis was conducted on their resting-state EEG datasets. We applied the PROCESS macro (model 1, 5000 bootstrap samples, 95% confidence intervals) to examine the moderating effect, with Efficiency as the independent variable, MS3 as the dependent variable, and Sleep Latency as the moderator. We found a significant effect of Efficiency on MS3 (b = 0.29, p < 0.05), which was moderated by Sleep Latency (b = 0.25, p < 0.05, see Fig. 1). We visualized the relationships between MS3 and Efficiency at high and low (1 SD above and below the mean) levels of Sleep Latency. Simple slope tests indicated that only in participants with long sleep latency were higher levels of Efficiency associated with a longer duration of MS3 (see Fig. 1). These results indicate that difficulties falling asleep, reflected by a longer sleep latency, are prominent in the high-Efficiency population, who tend to have an over-activated SN.
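At its core, the moderation test (PROCESS model 1) is a regression with an interaction term. The sketch below reproduces that logic on synthetic data using the variable names of this abstract; the data-generating coefficients are arbitrary:

```python
# Sketch: moderation as an interaction term in OLS (synthetic data, n = 61).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 61
df = pd.DataFrame({
    "efficiency": rng.normal(size=n),
    "sleep_latency": rng.normal(size=n),
})
df["ms3_duration"] = (0.29 * df.efficiency
                      + 0.25 * df.efficiency * df.sleep_latency
                      + rng.normal(scale=0.5, size=n))
# Mean-center predictors, then test the Efficiency x Sleep Latency interaction
df[["efficiency", "sleep_latency"]] -= df[["efficiency", "sleep_latency"]].mean()
fit = smf.ols("ms3_duration ~ efficiency * sleep_latency", data=df).fit()
print(fit.summary().tables[1])   # the interaction row is the moderation effect
```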

References

1. Bobic M, Davis E, Cunningham R. The Kirton adaptation-innovation inventory: Validity issues, practical questions. Review of Public Personnel Administration. 1999 Apr;19(2):18–31.

2. Kwang NA, Rodrigues D. A Big‐Five Personality profile of the adaptor and innovator. The Journal of Creative Behavior. 2002 Dec;36(4):254–68.

3. Križan Z, Hisler G. Personality and sleep: Neuroticism and conscientiousness predict behaviourally recorded sleep years later. European Journal of Personality. 2019 Mar;33(2):133–53.

4. Rueter AR, Abram SV, MacDonald III AW, Rustichini A, DeYoung CG. The goal priority network as a neural substrate of Conscientiousness. Human Brain Mapping. 2018 Sep;39(9):3574–85.

5. Chen MC, Chang C, Glover GH, Gotlib IH. Increased insula coactivation with salience networks in insomnia. Biological psychology. 2014 Mar 1;97:1–8.

6. Britz J, Van De Ville D, Michel CM. BOLD correlates of EEG topography reveal rapid resting-state network dynamics. Neuroimage. 2010 Oct 1;52(4):1162–70.

7. Ding X, Wang X, Yang Z, Tang R, Tang YY. Relationship between trait mindfulness and sleep quality in college students: a conditional process model. Frontiers in psychology. 2020 Sep 29;11:2587.

Fig. 1

Effects of Efficiency and Sleep Latency on the duration of MS3. The graph is for illustration only; all inferential analyses used the continuous Efficiency and Sleep Latency data

P182 Exploiting modern multi-site electrodes for counteracting abnormal synchronization

Ali Khaledi-Nasab 1 , Justus Kromer 1 , Peter Tass 1

1 Stanford University, Neurosurgery, Stanford, CA, United States of America

Email: khaledi@stanford.edu

Abnormal synchronization of neuronal activity is related to multiple neurological disorders, including Parkinson’s disease (PD) and epilepsy. High-frequency deep brain stimulation (HF DBS) is an established treatment for PD; however, symptoms typically return shortly after stimulation ceases. Coordinated reset (CR) is a novel stimulation method that aims at counteracting hypersynchrony in neural networks. During CR, phase-shifted stimuli are delivered through multiple stimulation sites. CR has been used for the treatment of Parkinson's disease using both DBS electrodes and noninvasive fingertip vibrotactile stimulation, and its efficacy has been demonstrated in preclinical and clinical studies. Computational studies in neural networks with spike-timing-dependent plasticity (STDP) showed that CR stimulation can reduce synaptic weights and thereby produce long-lasting desynchronization, ultimately moving the network into a stable, weakly coupled desynchronized state. In most studies, the CR stimulation frequency was adjusted to the dominant rhythm; however, this may limit the use of CR because multiple disease-related abnormal rhythms can coexist. Motivated by new multi-contact electrode designs and spatially directed stimulation current-steering algorithms, we study the impact of the number of stimulation sites and of the CR frequency in leaky integrate-and-fire (LIF) neural networks with STDP. We show that long-lasting effects become most pronounced when stimulation parameters are adjusted to the characteristics of STDP rather than to the dominant rhythm. In addition, we reveal a nonlinear dependence of long-lasting effects on the number of stimulation sites and the CR frequency. Intriguingly, optimal long-lasting desynchronization requires neither a large number of stimulation sites nor high-frequency CR stimulation.
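The stimulation protocol under study can be made concrete with a small helper that generates CR stimulus events; the site count, CR frequency, and duration below are illustrative, and the randomized variant anticipates the following abstract:

```python
# Sketch: coordinated reset (CR) delivery. Within each CR cycle of period
# 1/f_cr, each of the n_sites stimulation sites is activated once, with the
# activations equally phase-shifted across the cycle.
import numpy as np

def cr_schedule(n_sites, f_cr, t_end, shuffle=False, rng=None):
    """Return a list of (time_in_s, site_index) stimulus events."""
    T = 1.0 / f_cr
    events = []
    for k in range(int(t_end / T)):
        order = np.arange(n_sites)
        if shuffle and rng is not None:     # randomized site order each cycle
            rng.shuffle(order)
        for j, site in enumerate(order):
            events.append((k * T + j * T / n_sites, int(site)))
    return events

print(cr_schedule(n_sites=4, f_cr=5.0, t_end=0.5))                 # regular CR
print(cr_schedule(4, 5.0, 0.5, shuffle=True,
                  rng=np.random.default_rng(0)))                   # randomized CR
```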

P183 Long-lasting desynchronization using randomized spatio-temporal stimulus patterns

Justus Kromer 1 , Ali Khaledi-Nasab 1 , Peter Tass 1

1 Stanford University, Neurosurgery, Stanford, CA, United States of America

Email: jkromer@stanford.edu

Excessive neuronal synchrony is a hallmark of several neurological disorders, including Parkinson’s disease (PD). An established treatment for advanced PD is invasive high-frequency deep brain stimulation (DBS); however, PD symptoms return shortly after stimulation ceases. Theory-based approaches, such as coordinated reset (CR) stimulation, counteract neuronal synchrony by delivering spatio-temporal stimulus patterns. In computational studies, CR stimulation reshaped synaptic connectivity and drove plastic neuronal networks into an attractor of a stable desynchronized state. This led to desynchronization effects that outlasted stimulation. Corresponding long-lasting therapeutic effects were reported in preclinical and clinical studies delivering CR through implanted DBS electrodes. Recent computational studies provided evidence that long-lasting effects of CR stimulation might be sensitive to changes of the stimulation frequency. This might limit clinical applicability, as excessive synchrony in different frequency bands is associated with PD symptoms.

To improve parameter robustness of long-lasting effects of invasive electrical stimulation, we studied synaptic reshaping due to spatio-temporal stimulus patterns in neuronal networks with spike-timing dependent plasticity [1]. Our theoretical and computational results led to the hypothesis that randomized stimulus patterns improve parameter robustness of long-lasting effects. In our theoretical and computational work, we analyzed sequence- and stimulus-induced synaptic reshaping [1]. These two mechanisms describe synaptic reshaping as a result of spatio-temporal (sequence-induced) correlations in the stimulus pattern and as a result of neuronal response variability to individual stimuli (stimulus-induced), respectively. Stimulus patterns that adequately combined both mechanisms led to strengthening of certain groups of synapses while weakening others. Focusing on patterns that stabilize desynchronized neuronal activity by weakening excitatory synapses, we found that randomized stimulus deliveries lead to more robust effects [1,2]. Long-lasting effects motivated the development of non-invasive, sensory therapies that require the stimulus to be delivered only regularly or occasionally [1]. Accordingly, we extended our approach to model the effects of (moderately) randomized non-invasive, vibrotactile CR stimulation, taking into account vibratory masking and habituation constraints [4].
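The plasticity mechanism that these stimulus patterns exploit can be illustrated with a standard pair-based STDP rule (a common textbook form, not necessarily the exact rule used in [1]):

```python
# Sketch: pair-based STDP. A pre-before-post spike pairing (dt > 0)
# potentiates, post-before-pre (dt < 0) depresses; decoupling stimulation
# aims to bias spike timing toward the depressing regime.
import numpy as np

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    return np.where(dt > 0, A_plus * np.exp(-dt / tau),
                    -A_minus * np.exp(dt / tau))

print(stdp_dw(np.array([-30.0, -5.0, 5.0, 30.0])))  # LTD for dt < 0, LTP for dt > 0
```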

In a corresponding clinical feasibility study in six PD patients, both regular and randomized vibrotactile CR fingertip stimulation turned out to be safe and well-tolerated [4]. Patients experienced a sustained, significant cumulative improvement of motor performance. Stimulation led to a significant reduction of high-beta EEG power in the sensorimotor cortex after 3 months of therapy, indicating long-lasting desynchronization effects (Fig. 1). Our results provide promising first evidence that randomized spatio-temporal stimulus patterns may be suitable for inducing long-lasting desynchronization effects and symptom relief in PD patients.

References

1. Kromer JA, Tass PA. Long-lasting desynchronization by decoupling stimulation. Physical Review Research. 2020 Jul 20;2(3):033101.

2. Khaledi-Nasab A, Kromer JA, Tass PA. Long-lasting desynchronization of plastic neural networks by random reset stimulation. Frontiers in Physiology. 2020;11:1843.

3. Tass PA. Vibrotactile coordinated reset stimulation for the treatment of neurological diseases: Concepts and device specifications. Cureus. 2017 Aug;9(8).

4. Pfeifer KJ, Kromer JA, Cook AJ, Hornbeck T, Lim EA, et al. Coordinated reset vibrotactile stimulation induces sustained cumulative benefits in Parkinson’s disease. Frontiers in physiology. 2021 Apr 6;12:200.

Fig. 1

Long-lasting desynchronization of a plastic neuronal network by coordinated reset (CR) stimulation. A,B CR stimulation is delivered to four neuronal subpopulations (colored regions). C-H Simulation results for the degree of in-phase synchronized spiking (C) and the mean synaptic weight (D) during randomized vibrotactile CR stimulation. E–H Snapshots of connectivity matrix at different times

P184 Gating null and potent modes of propagation in a feedforward model of cortical activity

Artem Pilzak 1 , Jean-Philippe Thivierge 1

1 University of Ottawa, School of Psychology, Ottawa, Canada

Email: jthivier@uottawa.ca

Both sensory and motor processes are driven by coordinated activity across brain areas. Analyses based on dimensionality reduction have shown that some patterns of covariance amongst populations of neurons fall within a "potent space" that influences downstream neural responses. Other patterns, however, fall within a "null space" that inhibits the propagation of activity. These patterns are ubiquitous across neural modalities, and are reported in primary visual areas [1] as well as preparatory motor areas [2]. Nevertheless, despite growing support for the role of null space activity, its origins within synaptic circuits remain unclear. Here, a mean-rate model was developed to capture the feedforward propagation of activity between two interconnected areas (a "sender" and a "receiver" area), each representing an anatomically distinct network (Fig. 1a). Null and potent modes of activity were gated by adjusting the connections between the two areas based on a novel synaptic rule. Mode-specific propagation of neural activity was readily observed by applying a singular value decomposition to the activity of both sender and receiver areas, and computing the correlation between each mode of activity, respectively (Fig. 1b). Altering the number of null modes propagated between the two areas yielded no systematic changes in firing rates, pairwise correlations, or mean synaptic strength. Thus, characterizing the interactions between the two areas could not be achieved by standard measures of functional connectivity, highlighting a fundamental limitation of these approaches. As an alternative, a measure termed the "null ratio" was developed to capture the proportion of null modes propagated from one area to the other (Fig. 1c). This measure was applied to experimental data consisting of simultaneous recordings from primate visual areas V1 and V2 while subjects were presented with oriented stimuli. The null ratio revealed that feedforward propagation between these areas consisted of a predominant proportion of null modes, whereas only a few potent modes of V1 activated downstream targets in V2 (Fig. 1d). These results are consistent with the small number of potent modes required to encode simple oriented images, suggesting that the ratio of null and potent modes may reflect properties of the visual stimuli employed in experiments.
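Our formalization of this measure can be sketched as follows; the construction of the synthetic sender/receiver data and the correlation threshold are illustrative assumptions:

```python
# Sketch: estimate a "null ratio" by correlating SVD mode time courses of a
# sender and a receiver area. Here, by construction, only half of the sender
# modes propagate, so the null ratio should come out near 0.5.
import numpy as np

rng = np.random.default_rng(5)
T, N = 2000, 100
sender = rng.normal(size=(T, N))
U, S, Vt = np.linalg.svd(sender, full_matrices=False)
potent = U[:, :N//2] @ np.diag(S[:N//2]) @ Vt[:N//2]     # only half the modes reach the receiver
receiver = potent @ rng.normal(size=(N, N)) / np.sqrt(N) + 0.5 * rng.normal(size=(T, N))

Us = np.linalg.svd(sender, full_matrices=False)[0]       # sender mode time courses
Ur = np.linalg.svd(receiver, full_matrices=False)[0]     # receiver mode time courses
C = np.abs(Us.T @ Ur)                                    # sender-mode x receiver-mode overlaps
propagated = C.max(axis=1) > 0.2                         # 0.2 is an arbitrary threshold
print("null ratio:", 1.0 - propagated.mean())
```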

References

1. Semedo JD, Zandvakili A, Machens CK, Byron MY, Kohn A. Cortical areas interact through a communication subspace. Neuron. 2019 Apr 3;102(1):249–59.

2. Kaufman MT, Churchland MM, Ryu SI, Shenoy KV. Cortical activity in the null space: permitting preparation without movement. Nature neuroscience. 2014 Mar;17(3):440–8.

Fig. 1

Feedforward propagation of null and potent modes of neural activity. a Illustration of a feedforward network with sender and receiver neurons. b Correlation between modes of the sender and receiver areas with N = 100 units and N/2 null modes. c The null ratio provided an estimation of the proportion of null modes in the model. d Null/potent modes of V1-V1 and V1-V2 interactions

P185 One-shot learning of static and sequential patterns with extreme neural machines

Megan Boucher-Routhier 1 , Jean-Philippe Thivierge 1

1 University of Ottawa, School of Psychology, Ottawa, Canada

Email: jthivier@uottawa.ca

Many everyday tasks require that we produce and repeat sequences of thoughts or behaviors that unfold over time. Examples abound, from playing golf to uttering sentences. Recent computational advances offer a solution to this problem whereby a recurrent network projects to a read-out layer whose synaptic weights are trained to generate the desired response [1,2]. However, these approaches rely on iterative learning rules that cannot account for the rapid, one-shot learning reported in sensory and motor domains [3]. Here, we describe a one-shot algorithm, termed Extreme Neural Machine (ENM), that learns to reproduce static and sequential patterns of activity. The centrepiece of our model is a recurrent circuit comprised of either mean-rate or integrate-and-fire neurons (Fig. 1a). These neurons activate an output layer via connections that are adjusted by a one-shot supervised learning rule whose goal is to compress the desired signal onto a smaller number of dimensions, where each dimension is a neuron from the recurrent network. While the learning process is not biologically grounded, the model is informative of the recurrent population activity regimes that support task performance. First, networks learned to compress and reproduce natural images (Fig. 1b). Error rates were computed from the mean squared error between input images and network output. Error was lower for ENMs than for statistical (principal components analysis) and one-shot machine learning (extreme learning machine) approaches (Fig. 1c). Random elimination of a small proportion of recurrent network connections prior to training yielded a negligible impact on performance. Neurons of the recurrent network exhibited mixed selectivity for particular characteristics of the stimuli, capturing aspects of cortical responses to combinations of input features. Next, networks learned to draw and recall 2D figures (Fig. 1d), as well as reproduce high-resolution movie scenes (Fig. 1e). Following a training phase where the model received a short movie segment (10 s), the recall capacity of the network was tested by presenting a brief 1 s segment. The output units generated the remainder of the sequence, and accurate performance was attained with only a few hundred recurrent units. An analysis of the activity within the recurrent network revealed that neurons responded preferentially to spatial locations of the movie with high contrast. Overall, the model provides a novel avenue to perform one-shot learning in recurrent networks. Distinct signatures of recurrent activity obtained as a result of training will inform experiments on the biological basis of temporal sequence acquisition.
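The flavor of one-shot training can be illustrated with a generic closed-form readout solve, in the spirit of an extreme learning machine (shown for orientation; the ENM rule additionally compresses the target onto recurrent dimensions and differs in detail):

```python
# Sketch: drive a fixed random recurrent network, then solve the output
# weights in a single least-squares step (no iterative learning).
import numpy as np

rng = np.random.default_rng(6)
N, T, D = 300, 500, 50                          # recurrent units, time steps, output dim
J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
target = np.sin(np.linspace(0, 8 * np.pi, T)[:, None] * rng.random(D))  # desired sequence

x = rng.normal(size=N)
X = np.empty((T, N))
for t in range(T):                              # collect recurrent activity (mean-rate units)
    x = np.tanh(J @ x + 0.1 * rng.normal(size=N))
    X[t] = x
W_out, *_ = np.linalg.lstsq(X, target, rcond=None)  # one shot: a single pseudoinverse solve
print("training MSE:", np.mean((X @ W_out - target) ** 2))
```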

References

1. Sussillo D, Abbott LF. Generating coherent patterns of activity from chaotic neural networks. Neuron. 2009 Aug 27;63(4):544–57.

2. Vincent-Lamarre P, Calderini M, Thivierge JP. Learning long temporal sequences in spiking networks by multiplexing neural oscillations. Frontiers in computational neuroscience. 2020 Sep 7;14:78.

3. Brea J, Gerstner W. Does computational neuroscience need new synaptic learning paradigms?. Current Opinion in Behavioral Sciences. 2016 Oct 1;11:61–6.

Fig. 1

One-shot learning of static and temporal patterns in a recurrent network. a Architecture of the model. b Static image reproduced at the output of a rate-based network. c Rate and spiking networks improve their performance on static images with more recurrent neurons. d 2D figure drawn by a rate network. e Recalling a movie sequence after receiving a short (1 s) cue

P186 Application of tensor component decomposition methods to characterize nonstationary population activity dynamics during a reversal learning task

Xiaochen Zhao 1 , Thilo Womelsdorf 2 , Paul Tiesinga 3

1 Radboud University, Donders Institute for Brain, Cognition and Behaviour, Department of Neuroinformatics, Nijmegen, Netherlands

2 Vanderbilt University, Department of Psychology, Nashville, TN, United States of America

3 Radboud University, Department of Neuroinformatics, Nijmegen, Netherlands

Email: xiaochen.zhao@mail.bnu.edu.cn

Reversal learning is often used to assess cognitive flexibility which reflects the ability to rapidly adjust behavior to a changing environment and which is affected in psychiatric disorders. Determining the neural mechanisms underlying reversal learning is therefore important for advancing our understanding of cognitive processes in health and disease. It is currently not clear how the brain regions collaborate to achieve this type of adaptive behavior.

We analyzed the neural responses of non-human primates during a reversal learning task [1] in which, to obtain a reward, the animals needed to make an eye movement instructed by the direction of motion of one of two objects, the one containing a target feature. The target feature changed (reversed) uncued multiple times during the session. Multiple neurons in LPFC, ACC and striatum were recorded simultaneously. We converted the spike times into normalized firing rates and combined these across multiple sessions to create a data tensor with three dimensions – neuron identity, trial index relative to reversal, and time relative to reward onset. We used this tensor to extract the changes in the neural population response accompanying learning of the target feature reversal. Our hypothesis was that the changes across learning trials could take the form of overall activity changes as well as of shifts in the onset time of response features. To test this, we evaluated a number of tensor factorization methods [2–4] to extract these changes across trials while preserving the natural structure of the tensor. We found that tensor component analysis (TCA, [2]) extracted relevant components best while accounting for a larger fraction of the response. TCA achieved this by identifying additional factors related to the same subset of neurons. In our hands, for this data set, even methods explicitly designed to extract latency changes did not work as well as TCA. In addition, our simulation results using artificial data showed that the TCA results were stable against white noise and resampling noise.

In our analysis we focused on the neurons in the striatum that previous analysis identified as broad-spike neurons, which comprised 83% of the neurons in the dataset [1]. TCA decomposition of the population firing rates identified two distinct components whose activity increased or decreased with trial index relative to reversal. Some components were active right after the reward, while others showed activation at later stages. Based on these components we could identify two groups of neurons whose firing rate responses shift in time as the target is learned: one group shifts forward and the other backwards. This paves the way for a more comprehensive analysis of these groups in terms of changes in the spike-spike correlations between them, as well as their relation to the local field potential. In conclusion, we found that TCA can be used to find groups of neurons with distinct learning profiles, which is a first step towards improving our mechanistic understanding of multi-areal interactions during learning, as well as of other processes involving cognitive flexibility.
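The decomposition step is a CP/PARAFAC factorization; a minimal sketch using the tensorly library (our choice of implementation, with a random tensor standing in for the neurons x trials x time array):

```python
# Sketch: tensor component analysis as a rank-5 CP decomposition.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(7)
data = tl.tensor(rng.random((120, 40, 200)))       # neurons x trials x time
weights, factors = parafac(data, rank=5, init="random", random_state=0)
neuron_f, trial_f, time_f = factors
print(neuron_f.shape, trial_f.shape, time_f.shape) # (120, 5) (40, 5) (200, 5)
# Columns of trial_f show how each component waxes or wanes across learning trials.
```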

References

1. Oemisch M, Westendorff S, Azimi M, Hassani SA, Ardid S, et al. Feature-specific prediction errors and surprise across macaque fronto-striatal circuits. Nature communications. 2019 Jan 11;10(1):1–5.

2. Williams AH, Kim TH, Wang F, Vyas S, Ryu SI, et al. Unsupervised discovery of demixed, low-dimensional neural dynamics across multiple timescales through tensor component analysis. Neuron. 2018 Jun 27;98(6):1099–115.

3. Williams AH. Combining tensor decomposition and time warping models for multi-neuronal spike train analysis. bioRxiv. 2020 Jan 1.

4. Williams AH, Poole B, Maheswaranathan N, Dhawale AK, Fisher T, et al. Discovering precise temporal patterns in large-scale neural recordings through robust and interpretable time warping. Neuron. 2020 Jan 22;105(2):246–59.

P187 A decision-making model with anticipation of surprise for explaining ‘irrational’ economic behaviors

Ho Ka Chan 1 , Taro Toyoizumi 1

1 RIKEN Center for Brain Science, Laboratory for Neural Computation and Adaptation, Wako-shi, Japan

Email: chanhoka911212@yahoo.com.hk

Many experimental observations have shown that expected utility theory is violated when people make decisions under risk. Here, we present a decision-making model inspired by the prediction error signal in reinforcement learning. In the model, we choose the expected value across all outcomes of an action as a reference point that people use to gauge the value of different outcomes. An action is chosen based on a nonlinear average of the anticipated surprise, defined as the difference between individual outcomes and the abovementioned reference point. The nonlinear ‘surprise function’ assumes that (1) surprises of large amplitude have disproportionately magnified effects, and (2) negative surprises have larger effects than positive ones. It is also straightforward to extend the model to multi-step decision-making scenarios. In the extended model, new reference points are created as people update their expectations when they evaluate the outcomes associated with an action sequentially rather than simultaneously. The creation of these new reference points could be due to partial revelation of outcomes, ambiguity, or segregation of probable and improbable outcomes. Several economic paradoxes and gambling behaviors can be explained by the single-step and/or the multi-step version of the model. Our model might bridge the gap between theories of decision-making in quantitative economics and in neuroscience.
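A toy implementation of the single-step valuation rule, as we read it, is given below; the exact functional form of the surprise function and its parameters are assumptions chosen only to satisfy properties (1) and (2):

```python
# Sketch: value of an action = probability-weighted nonlinear surprise
# relative to the action's expected value (the reference point).
import numpy as np

def surprise(s, loss_aversion=2.0, power=1.5):
    """Amplifies large |s| (power > 1) and weights negative s more heavily."""
    mag = np.abs(s) ** power
    return np.where(s >= 0, mag, -loss_aversion * mag)

def action_value(outcomes, probs):
    outcomes, probs = np.asarray(outcomes), np.asarray(probs)
    ev = np.dot(probs, outcomes)               # reference point: expected value
    return np.dot(probs, surprise(outcomes - ev))

# A sure 50 vs a 50/50 gamble over 0 and 100 (equal expected value):
print(action_value([50.0], [1.0]))             # 0.0  (no surprise)
print(action_value([0.0, 100.0], [0.5, 0.5]))  # negative -> the model is risk-averse
```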

Acknowledgments

This study was supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Number JP18H05432 and Brain/MINDS from Japan Agency for Medical Research and Development (AMED) under Grant Number JP21dm020700.

P188 Profiling cell-type responses to external stimulation

Daniel Trotter 1

1 University of Ottawa, Physics, Ottawa, Canada

Email: dtrot074@uottawa.ca

Neural oscillations are a fundamental mechanism for the communication and coordination of neural activity between brain regions. The temporal coordination of these oscillations underlies many cognitive and behavioural responses and higher-order brain functions, such as attention and working memory. As a result, there has been increasing attention on targeting these oscillations for intervention in neurological disorders. Of particular interest has been expanding treatment from pharmaceutical interventions, which are not always successful, to include non-invasive stimulation techniques, such as transcranial alternating current stimulation (tACS) and transcranial magnetic stimulation (TMS). These approaches have shown great promise for improving our understanding of the dynamics underlying neural oscillations and their entrainment, and for intervention in neurological disorders. Existing models of external stimulation are often abstracted for computational efficiency, making use of mean-field approaches and various simplified neuron models, such as leaky integrate-and-fire neurons. These models, while useful for parsing out system dynamics, do not account for cell-type differences in their frameworks. Improved understanding of cell-type-specific responses to external stimulation provides a pathway for testing the limits of acute targeting via external stimulation. In vivo this would be an intractable task; in silico approaches, however, offer an efficient way to investigate these effects. In recent years, the Allen Institute has made available morphologically accurate models of individual excitatory and inhibitory cells. Using these models, we recreate external stimulation frameworks and investigate cell-specific reactions to multiple stimulation paradigms of varying angle, strength, distance, and frequency. By taking such stimulation parameters into consideration, we begin to create a mapping of excitatory and inhibitory cell-type responses to external stimulation in multiple cortical layers. The exploration of the state space surrounding individual cell-type stimulation responses offers important insight into inter-cell dynamics. Of particular interest are differences in the reactivity of inhibitory versus excitatory cell populations that may suggest future targeting options via external stimulation. Further, these classifications offer a way to improve existing abstracted models to better capture the underlying dynamics of the system and to advance our understanding of the effects of external stimulation on the brain.

P189 Frequency-resolved connectivity disturbances in first-episode psychosis

Fabian Kamp 1 , Cristiana Dimulescu 1 , Tineke Grent-'t-Jong 2 , Joachim Gross 3 , Rajeev Krishnadas 3 , Klaus Obermayer 1 , Christoph Metzner 4 , Peter Uhlhaas 2

1 Technische Universität Berlin, Neural Information Processing Group, Berlin, Germany

2 Charite Universitätsmedizin, Department of Child and Adolescent Psychiatry, Berlin, Germany

3 University of Glasgow, Institute of Neuroscience & Psychology, Glasgow, United Kingdom

4 Technische Universität Berlin, Department of Software Engineering and Theoretical Computer Science, Berlin, Germany

Email: christoph.metzner@gmail.com

In schizophrenia, large-scale connectivity differences across illness stages have been identified in resting-state fMRI-data [1]. Furthermore, analyses of resting-state MEG-data (rsMEG) identified stage-specific changes to signal power in the gamma frequency band, with first-episode schizophrenia patients showing increases in gamma power and chronic patients showing reductions [2]. However, frequency-resolved connectivity changes across illness stages remain largely unexplored in schizophrenia.

Therefore, we investigated the frequency-resolved functional connectivity (frFC) of first-episode psychosis patients (FEP, n = 27) and healthy controls (HC, n = 49) using rsMEG recordings. We analyzed global brain connectivity, differences in subnetwork connectivity, and the topology of cortical networks through graph-theoretical measures of frFC. frFC was calculated by correlating the slow (< 0.2 Hz) signal envelope between cortical regions for specific, narrow frequency bands (delta [1-3 Hz], theta [3-7 Hz], alpha [8-12 Hz], beta [18-22 Hz], gamma [32-42 Hz]). We assessed group differences in global brain connectivity and graph measures and identified differences in subnetwork connectivity using network-based statistics.

We found a significant reduction of global brain connectivity in the alpha band for the FEP group, which was most pronounced in left frontal regions. Furthermore, for the alpha band, we also identified a specific network of connections that showed a significant reduction in FEP patients in the default and frontoparietal systems. We additionally found significantly increased gamma-band connectivity in a network that was mostly located in the limbic system and the default mode network. Lastly, graph measures assessing information integration, segregation and network centrality were significantly lower in FEP patients than in HC subjects in the alpha band (Fig. 1). These results further support the notion that the onset of psychosis is characterized by impairments of interregional cortical communication, which depends on neuronal synchronization. Whereas alterations of alpha-band connectivity might relate to cognitive impairments, reduced attention, and executive control in FEP, alterations in gamma-band connectivity may be involved in emotional and cognitive impairments.
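The connectivity measure itself can be sketched in a few lines; the 4th-order Butterworth filter design is our choice, the data are synthetic, and source leakage correction, which a real MEG pipeline would include, is omitted here:

```python
# Sketch: frequency-resolved envelope connectivity. Band-pass each region's
# signal, take the Hilbert amplitude envelope, low-pass the envelope below
# 0.2 Hz, and correlate envelopes between regions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250.0
rng = np.random.default_rng(8)
x = rng.normal(size=(4, int(60 * fs)))            # 4 regions, 60 s of data

def env_fc(x, band, fs, env_cut=0.2):
    sos = butter(4, np.array(band) / (fs / 2), btype="band", output="sos")
    narrow = sosfiltfilt(sos, x, axis=1)          # e.g. the alpha band
    env = np.abs(hilbert(narrow, axis=1))         # amplitude envelope
    sos2 = butter(4, env_cut / (fs / 2), output="sos")
    slow = sosfiltfilt(sos2, env, axis=1)         # keep only slow (< 0.2 Hz) fluctuations
    return np.corrcoef(slow)                      # region x region frFC matrix

print(np.round(env_fc(x, (8, 12), fs), 2))
```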

References

1. Anticevic A, Corlett PR, Cole MW, Savic A, Gancsos M, et al. N-methyl-D-aspartate receptor antagonist effects on prefrontal cortical connectivity better model early than chronic schizophrenia. Biological psychiatry. 2015 Mar 15;77(6):569–80.

2. Grent-'t-Jong T, Gross J, Goense J, Wibral M, Gajwani R, et al. Resting-state gamma-band power alterations in schizophrenia reveal E/I-balance abnormalities across illness-stages. Elife. 2018 Sep 27;7:e37799.

Fig. 1

I. Global Brain Connectivity of first episode psychosis patients (FEP) and control subjects across frequency bands. II. Graph Measures within the alpha frequency band

P190 Phase response curve of the excitatory-inhibitory neuronal populations and its role in the coherent oscillation of inter-connected brain regions

Aref Pariz 1 , Jeremie Lefebvre 1 , Alireza Valizade 2

1 University of Ottawa, Biology, Ottawa, Canada

2 Institute for Advanced Studies in Basic Sciences (IASBS), Physics, Zanjan, Iran

Email: apariz@uottawa.ca

Computation in the brain takes place through the integration of information processed in different specialized brain regions. Brain oscillations can affect the integration of information by controlling the communication between interconnected brain regions. It is hypothesized that the synchrony and the phase relations between local oscillations of brain regions control the efficacy of the communication channels in brain circuits. As in every other network of oscillating dynamical systems, the synchronization between the oscillatory activities of the brain's modules depends on how their phases change upon the impact of inputs from other connected modules. This property is conventionally quantified by the phase response curve (PRC), which shows how much brief inputs arriving at different phases of the oscillation change the phase of the receiving oscillator. The main challenge for quantifying inter-module synchrony in the brain is that every module is itself a giant oscillator composed of many neurons, which cannot be readily treated by the methods developed for low-dimensional oscillators.
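The PRC measurement itself can be sketched with a minimal example, assuming a tonically firing leaky integrate-and-fire oscillator: a brief current pulse is delivered at different phases of the cycle, and the resulting advance of the next spike, as a fraction of the unperturbed period, defines the curve. The population-level cPRC in the abstract applies the same logic to network oscillations; all values below are illustrative.

```python
import numpy as np

tau, v_th, v_r, i_dc, dt = 0.02, 1.0, 0.0, 1.2, 1e-5

def next_spike_time(pulse_phase=None, pulse_amp=5.0, pulse_dur=5e-4):
    """Time of the first spike of a LIF oscillator (dv/dt = (-v + i)/tau),
    optionally perturbed by a brief pulse at a given phase of the cycle."""
    T0 = tau * np.log(i_dc / (i_dc - v_th))   # analytic unperturbed period
    v, t = v_r, 0.0
    while v < v_th:
        i = i_dc
        if pulse_phase is not None and \
           pulse_phase * T0 <= t < pulse_phase * T0 + pulse_dur:
            i += pulse_amp
        v += dt * (-v + i) / tau
        t += dt
    return t

T_free = next_spike_time()                    # unperturbed (simulated) period
for phase in np.linspace(0.1, 0.9, 9):
    advance = (T_free - next_spike_time(phase)) / T_free
    print(f"phase {phase:.1f}: phase advance {advance:+.3f}")
```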

In this study, we numerically study the collective PRC (cPRC) for oscillating networks of excitatory-inhibitory (EI) neurons and show how it affects the phase relation between two interconnected populations. We first show that in hybrid networks, where E and I neurons are of different types, changing the strength of connections between E and I continuously changes the cPRC and can ultimately change the type of PRC. We then apply the results to study the synchronization between two inter-connected populations and show that in a more biologically plausible low-coherence regime, when the synchrony within the populations is low, the cPRC cannot adequately predict the phase difference.

First, we investigated the cPRC of an EI network and the effect of the synaptic strength from I to E neurons on the type of cPRC. To show the effect clearly, we built three types of populations. In the first case, we modeled the E neurons by the Hodgkin-Huxley (HH) model and the I neurons by the Wang-Buzsaki (WB) model, as generic forms of type-II and type-I neuron models, respectively (Fig. 1). By increasing the strength of synaptic connections from the I neurons to the E neurons, we observed that the cPRC changed from type-II to type-I (Fig. 1A). In the second experiment, we interchanged the types of E and I neurons (Fig. 1B), and by following the same procedure, we observed a similar result. In the third experiment, we modeled all neurons with the HH neuronal model (Fig. 1C); in this case, increasing the synaptic strength from I to E neurons did not have a significant effect on the cPRC. So, for those simulations with a single type of neuron, the connection strength between E and I neurons does not play an important role, but it crucially affects the results when E and I neurons are of different kinds.

In the second part of the study, we connected two EI populations to explore whether the cPRC could adequately predict the dependence of the phase relation between the two populations on the inter-population communication delay (Fig. 1D). We changed the coherency of the population activity by varying the noise level and observed that the phase relation of the coupled populations changed qualitatively with the level of coherency, so that the prediction based on the cPRC does not work for lower values of coherency.

Fig. 1

The effect of synaptic strength on cPRC. A-C show cPRC for networks composed of different neuronal types, as discussed in the results section. The color bar refers to the synaptic strength from inhibitory to excitatory neurons. D Heat-map: the zero-lag correlation between the activities of the coupled populations for different values of coherency and delay between them

P191 Biophysical parameters control information transfer in spiking networks

Tomás Garnier Artiñano 1 , Simo Vanni 2

1 University of Helsinki, Helsinki, Finland

2 University of Helsinki, HUS Neurocenter, Helsinki, Finland

Email: tomas.garnier.a@gmail.com

The nature of information transfer in neuronal circuits has been a mystery in neuroscience throughout the history of this discipline. Effective population coding depends on connectivity, active and passive postsynaptic membrane parameters, and synapse dynamics, but how it relates to information transfer and information representation in the brain is still poorly understood. Recently, Brendel et al. [1] showed how spiking neuronal networks can efficiently represent a noisy input signal. This "DModel" successfully showed that spiking neural networks can recreate input signal representations and that these networks can be resilient to the loss of neurons. However, the model has multiple unphysiological characteristics, such as instantaneous firing, single-neuron firing per time frame, and the lack of units related to physical values. To determine how this model relates to information transfer in biological systems, it would be important to implement the DModel in a more physiologically accurate simulator. The aim of the present study is to build upon the DModel to study how information transfer is affected by biophysical parameters.

We first modified the DModel in the Matlab environment to allow for the simultaneous firing of the neurons. The network saturated when the simultaneous-firing model used the synaptic weights previously learned for single-neuron firing, but when simulated de novo with all neurons allowed to fire, the simultaneous-firing DModel was able to reduce reconstruction error and firing rate. Using our CxSystem2 simulator in a Python environment, we built a network consisting of 300 excitatory and 75 inhibitory neurons, replicating the network used in the DModel. We quantified the information transfer of leaky integrate-and-fire neurons that had identical physiological values for both inhibitory and excitatory neurons (Comrade cells) as well as more biologically accurate physiological values (Bacon cells). We used Granger causality, transfer entropy, reconstruction error, coherence, cross-correlation latency, and classification accuracy based on Granger causality F-statistics to quantify the information transfer of the network. Using the weights obtained from the trained DModel in the CxSystem2 simulator, we were able to quantify information transfer with a conductance-based spiking model.
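As one example of the metrics listed above, a pairwise Granger causality statistic can be estimated by comparing the residual variance of an autoregressive model of the target signal with and without the lagged source signal. The following numpy sketch (model order and the synthetic, lag-coupled test signals are assumptions) illustrates the computation, not the study's analysis code.

```python
import numpy as np

def granger_f(x, y, order=5):
    """F-like Granger statistic for 'x drives y': compare residuals of
    AR models of y without vs. with lagged x terms."""
    n = len(y)
    Y = y[order:]
    lag = lambda s: np.stack([s[order - k: n - k]
                              for k in range(1, order + 1)], axis=1)
    ones = np.ones((len(Y), 1))

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
        resid = Y - design @ beta
        return resid @ resid

    rss_restricted = rss(np.hstack([ones, lag(y)]))
    rss_full = rss(np.hstack([ones, lag(y), lag(x)]))
    df1, df2 = order, len(Y) - 2 * order - 1
    return ((rss_restricted - rss_full) / df1) / (rss_full / df2)

# Synthetic test: x drives y at a lag of 2 samples.
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
y = 0.8 * np.roll(x, 2) + 0.5 * rng.standard_normal(2000)
print("x->y:", round(granger_f(x, y), 1), "| y->x:", round(granger_f(y, x), 1))
```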

We examined the behavior of the network while altering the values of the capacitance, synaptic delay, equilibrium potential, leak conductance, reset potential, and voltage threshold. Broad parameter searches showed that no single set of biophysical parameters maximized all information transfer metrics, but some ranges fully blocked information transfer by either saturating or stopping neuronal firing. This draws boundaries on the possible electrophysiological values neurons can have. This held true even under closer inspection with narrow searches within electrophysiological ranges. From this, we conclude that there is no single optimal set of physiological values for information transfer. We hypothesize that different neuronal types may specialize in transferring different aspects of information (e.g., accuracy) or act as frequency filters, providing the evolutionary pressure that gave rise to the diversity of cell types observed in the nervous system.

References

1. Brendel W, Bourdoukan R, Vertechi P, Machens CK, Denève S. Learning to represent signals spike by spike. PLoS computational biology. 2020 Mar 16;16(3):e1007692.

P192 Parameter adaptation of hybrid circuits by online exploration driven by genetic algorithms

Manuel Reyes-Sanchez 1 , Rodrigo Amaducci 1 , Rafael Levi 1 , Francisco B. Rodriguez 2 , Pablo Varona 2

1 Universidad Autónoma de Madrid, Grupo de Neurocomputacion Biologica, Madrid, Spain

2 Universidad Autónoma de Madrid, Ingeniería Informática, Madrid, Spain

Email: manuel.reyes@uam.es

Hybrid circuits built from living and model neurons and connections allow a precise characterization of neuronal and synapse dynamics. They are also useful for validating model neurons and for unveiling key components of neural circuit dynamics. Our previous works have shown that adaptation and calibration of neuron and synapse models are a necessary step in building hybrid circuits, and that automatization of these processes leads to highly successful implementations [1,2]. Many parameters affecting the dynamic interaction in hybrid circuits have complex nonlinear dependencies which are difficult to establish a priori. One option for dealing with this problem is a massive search within the parameter space to map the regions that lead to robustness in the target dynamics. This is possible but highly time-consuming. Instead, we propose here an informed search that optimizes this process in a short time by exploring the parameter space with a genetic algorithm. This approach does not provide a detailed full parameter characterization, but results in a fast adaptation that is convenient for many experimental goals.

To illustrate this approach, we implemented a hybrid circuit to reproduce dynamical invariants between living and model neurons in the pyloric CPG of crustaceans. Dynamical invariants are robust linear relationships between intervals that build an activation sequence between neurons [3]. With this goal in mind, and from an initial automatic adaptation/calibration, we explored online the parameter space of the neuron and synapse models involved in the modulation of key elements that affect a specific dynamical invariant. In our study, we used the linear correlation between the time intervals that define a dynamical invariant between a living neuron and a model as the cost function for the genetic algorithm.

Our results show that within minutes, employing just a dozen individuals and 5 generations, the genetic algorithm can easily obtain a valid configuration for the hybrid circuit to build the dynamical invariant. We also compared the online solution provided by the genetic algorithm exploration with a full characterization of the parameter space obtained with a parallelized computer cluster. The full cluster exploration validates the proposed genetic approach. The genetic algorithm approximation for tuning synapse and neuron models can be easily generalized to achieve other target dynamics, serving as a useful tool in the construction of hybrid circuits.
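The search loop itself can be sketched generically, assuming a stand-in quadratic cost in place of the study's real cost function (the linear correlation between the intervals that define the dynamical invariant). Population size and generation count follow the abstract; the selection, crossover, and mutation settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(params):
    """Stand-in cost: distance of a parameter vector from an unknown
    'good' configuration. In the study, this slot is taken by the
    linear correlation between the intervals defining the invariant."""
    target = np.array([1.5, -0.3, 0.8])
    return np.sum((params - target) ** 2)

pop = rng.uniform(-2, 2, size=(12, 3))    # a dozen individuals
for generation in range(5):               # 5 generations, as in the abstract
    fitness = np.array([cost(p) for p in pop])
    parents = pop[np.argsort(fitness)[:4]]        # elitist selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.choice(4, 2, replace=False)]
        mask = rng.random(3) < 0.5                # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.1, 3))
    pop = np.vstack([parents, children])
    print(f"generation {generation}: best cost {fitness.min():.4f}")
```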

Acknowledgements

Funded by AEI/FEDER PGC2018-095895-B-I00, TIN2017-84452-R, PID2020-114867RB-I00.

References

1. Reyes-Sanchez M, Amaducci R, Elices I, Rodriguez FB, Varona P. Automatic adaptation of model neurons and connections to build hybrid circuits with living networks. Neuroinformatics. 2020 Jan 13:1–7.

2. Amaducci R, Reyes-Sanchez M, Elices I, Rodriguez FB, Varona P. RTHybrid: a standardized and open-source real-time software model library for experimental neuroscience. Frontiers in neuroinformatics. 2019 Mar 12;13:11.

3. Elices I, Levi R, Arroyo D, Rodriguez FB, Varona P. Robust dynamical invariants in sequential neural activity. Scientific reports. 2019 Jun 21;9(1):1–3.

P193 Effect of infrared laser stimulation in single neurons: Experimental and modeling study

Alicia Garrido-Peña 1 , Rafael Levi 1 , Javier Castilla 2 , Jesus Tornero 2 , Pablo Varona 3

1 Universidad Autónoma Madrid, Grupo de Neurocomputacion Biologica, Madrid, Spain

2 Hospital Los Madroños, Unidad Avanzada de Neurorrehabilitación, Brunete, Spain

3 Universidad Autónoma Madrid, Ingeniería Informática, Madrid, Spain

Email: alicia.garrido@uam.es

Effective stimulation of neural cells has played a key role in the study of neural activity. Perturbation of ongoing neural dynamics has traditionally been implemented with electrophysiological and chemical techniques. Even though these techniques are highly effective at changing dynamical states, they are usually more invasive and frequently produce irreversible effects on neural activity. Such invasive techniques constrain the possibilities of stimulation and experimental set-ups, limiting their applications, e.g., in protocols such as transcranial stimulation. These limitations motivate the search for less invasive techniques that achieve effective dynamic modification while avoiding neuron damage and allowing the recovery of the departing state. Here we demonstrate that infrared-laser stimulation elicits changes in neural dynamics in a non-invasive way. Previous works have shown that laser stimulation successfully changes neural dynamics [1–3]; however, the biophysical source of these alterations is still under study. In this work we analyze the effect of near-infrared laser stimulation on the neural system of the mollusk Lymnaea stagnalis, which is frequently employed in a wide variety of experimental and theoretical studies in neuroscience research [4,5]. Using intracellular recordings, we characterized the spiking activity before, during and after infrared-laser stimulation of individual neurons. During stimulation, the laser beam was focused on the specific cell being recorded. We characterized spike amplitude, duration, and depolarization and repolarization slopes. The most notable changes were present in spike duration, which was markedly reduced by a dynamical change mainly in the repolarization phase. This change is reversible: right after stimulation ceases, the neuron recovers its original waveform. Changes in spike characteristics were sustained throughout all experiments, showing the reproducibility of the stimulation effect and the subsequent recovery. We assessed possible biophysical sources of the observed phenomena in a detailed conductance-based model. Different combinations of electrotonic and active ionic channel parameters were studied in a Hodgkin-Huxley model. The results point out possible factors generating the laser effect observed experimentally.
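The waveform measures used above (amplitude, duration, depolarization and repolarization slopes) can be extracted from a recorded trace with a few lines of numpy. The half-amplitude width criterion and the synthetic double-exponential spike below are assumptions for illustration, not the analysis code of the study.

```python
import numpy as np

def spike_metrics(v, dt):
    """Amplitude (mV), duration at half amplitude (s), and maximum
    depolarization/repolarization slopes (mV/s) of one spike waveform."""
    base = v[:10].mean()                  # resting level before the spike
    amp = v.max() - base
    half = base + amp / 2
    above = np.where(v >= half)[0]
    width = (above[-1] - above[0]) * dt
    dv = np.gradient(v, dt)
    return amp, width, dv.max(), dv.min()

# Synthetic spike: fast depolarization, slower repolarization.
dt = 1e-5
t = np.arange(0.0, 8e-3, dt)
v = -60 + 90 * (np.exp(-(t - 2e-3) / 1.5e-3) - np.exp(-(t - 2e-3) / 0.3e-3))
v[t < 2e-3] = -60.0
amp, width, rise, fall = spike_metrics(v, dt)
print(f"amplitude {amp:.1f} mV, width {width * 1e3:.2f} ms, "
      f"max slopes {rise:.0f} / {fall:.0f} mV/s")
```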

Acknowledgements

We acknowledge support from AEI/FEDER PGC2018-095895-B-I00.

References

1. Li X, Liu J, Liang S, Guan K, An L, et al. Temporal modulation of sodium current kinetics in neuron cells by near-infrared laser. Cell biochemistry and biophysics. 2013 Dec;67(3):1409–19.

2. Liang S, Yang F, Zhou C, Wang Y, Li S, et al. Temperature-dependent activation of neurons by continuous near-infrared laser. Cell biochemistry and biophysics. 2009 Jan;53(1):33–42.

3. Rajguru SM, Matic AI, Richter CP. Optical stimulation of neurons. Laser imaging and manipulation in cell biology. 2010 Oct 13:99–112.

4. Garrido-Peña A, Elices I, Varona P. Characterization of interval variability in the sequential activity of a central pattern generator model. Neurocomputing. 2021 May 13.

5. Vavoulis DV, Straub VA, Kemenes I, Kemenes G, Feng J, et al. Dynamic control of a central pattern generator circuit: a computational model of the snail feeding network. European Journal of Neuroscience. 2007 May;25(9):2805–18.

P194 Gap junctions shape the intervals that build robust sequences in a central pattern generator model

Blanca Berbel 1 , Alicia Garrido-Peña 2 , Irene Elices 2 , Roberto Latorre 3 , Pablo Varona 1

1 Universidad Autónoma de Madrid, Ingeniería Informática, Madrid, Spain

2 Universidad Autónoma de Madrid, Grupo de Neurocomputacion Biologica, Madrid, Spain

3 Universidad Autónoma de Madrid, Grupo de Neurocomputación Biológica, Ingeniería Informática, Escuela Politécnica Superior, Madrid, Spain

Email: blanca.berbel@inv.uam.es

Central Pattern Generators (CPGs) are convenient neural circuits for studying sequential neural activations because their connection topology and neuron dynamics can be characterized in detail using both experimental preparations and biophysical models. Multiple studies have addressed the coordination of multifunctional CPG rhythm cycles and have established that, in many CPGs, it is a consequence of mutual inhibition by chemical synapses and of synchronization induced by electrical coupling [1]. In this study, we use a biophysical model [2,3] of the pyloric CPG to assess the role of electrical synapses in shaping the intervals that build up the sequence of the circuit. We analyzed the effect of the electrical conductance between the neurons of the pacemaker group (AB-PD1-PD2) on the variability induced in the circuit, and thus the changes produced in the different intervals that define the CPG sequence, without disturbing the order of the neuron activations (LP-PY-AB(PDs)). We quantified the variability of all time intervals measured cycle-by-cycle in a set of long simulations. Our results show that the conductance of electrical gap junctions can regulate the variability of the intervals that build up the cycle-by-cycle period. Several such intervals are known to participate in dynamical invariants in the form of robust relationships with the period. Dynamical invariants have been proposed to balance flexibility and robustness of CPG sequences and their coordinated rhythm [4]. These results support the view that electrical coupling largely contributes to shaping the intervals that define functional sequences and dynamical invariants in CPGs. The hypotheses drawn from this modeling study could be tested in hybrid circuits of living and model neurons with modern dynamic clamp protocols [5,6].
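Electrical coupling enters a membrane equation as an ohmic current I_gap = g_gap (V_j - V_i). A minimal sketch with two FitzHugh-Nagumo oscillators (a stand-in for the biophysical pyloric model; all parameters illustrative) shows how increasing the gap-junction conductance pulls the two voltage traces together.

```python
import numpy as np

def run_pair(g_gap, t_max=200.0, dt=0.01):
    """Two FitzHugh-Nagumo oscillators coupled by a gap junction
    I_gap = g_gap * (V_other - V_self); returns the mean absolute
    voltage difference after the transient (a simple synchrony index)."""
    v = np.array([0.5, -0.5])             # slightly different initial states
    w = np.zeros(2)
    diffs = []
    for _ in range(int(t_max / dt)):
        i_gap = g_gap * (v[::-1] - v)
        dv = v - v ** 3 / 3 - w + 0.5 + i_gap
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v, w = v + dt * dv, w + dt * dw
        diffs.append(abs(v[0] - v[1]))
    return np.mean(diffs[len(diffs) // 2:])

for g in (0.0, 0.01, 0.05, 0.2):
    print(f"g_gap={g}: mean |V1 - V2| = {run_pair(g):.3f}")
```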

Acknowledgements

Work funded by AEI/FEDER PGC2018-095895-B-I00.

References

1. Hooper SL, DiCaprio RA. Crustacean motor pattern generator networks. Neurosignals. 2004;13(1–2):50–69.

2. Latorre R, Rodríguez FB, Varona P. Characterization of triphasic rhythms in central pattern generators (I): interspike interval analysis. In: International Conference on Artificial Neural Networks 2002 Aug 28 (pp. 160–166). Springer, Berlin, Heidelberg.

3. Latorre R, de Borja Rodríguez F, Varona P. Effect of individual spiking activity on rhythm generation of central pattern generators. Neurocomputing. 2004 Jun 1;58:535–40.

4. Elices I, Levi R, Arroyo D, Rodriguez FB, Varona P. Robust dynamical invariants in sequential neural activity. Scientific reports. 2019 Jun 21;9(1):1–3.

5. Amaducci R, Reyes-Sanchez M, Elices I, Rodriguez FB, Varona P. RTHybrid: a standardized and open-source real-time software model library for experimental neuroscience. Frontiers in neuroinformatics. 2019 Mar 12;13:11.

6. Reyes-Sanchez M, Amaducci R, Elices I, Rodriguez FB, Varona P. Automatic adaptation of model neurons and connections to build hybrid circuits with living networks. Neuroinformatics. 2020 Jan 13:1–7.

P195 Dynamic synchronization between electrically coupled cells of central pattern generators

Pablo Sánchez-Martín 1 , Irene Elices 2 , Alicia Garrido-Peña 2 , Rafael Levi 2 , Francisco B. Rodriguez 1 , Pablo Varona 1

1 Universidad Autónoma de Madrid, Ingenieria Informática, Madrid, Spain

2 Universidad Autónoma de Madrid, Grupo de Neurocomputacion Biologica, Madrid, Spain

Email: pablos15@ucm.es

Electrical synapses are an efficient mechanism for achieving a high level of synchronization of neural activity between neighboring cells [1–5]. Invertebrate central pattern generators (CPGs) are well suited to studying the coordination of neural dynamics and, in particular, they allow for long recordings of simultaneous activity to characterize synchronization, sequential activations, and overall circuit coordination. The present work aims to analyze the evolution of synchronization between the two pyloric dilator (PD) neurons of the pyloric CPG of crustaceans, as a first step towards assessing the role of electrical coupling in the flexible coordination of sequences generated by this circuit. Long time series of simultaneous bursting activity were recorded with intracellular electrodes in the PD neurons of Carcinus maenas. The activity was analyzed first with a detailed temporal characterization of the synchronization of the time series, then with a precise assessment of the timing and delay of action potentials within each burst, and finally with maps showing the degree of synchronization of both the depolarizing wave and the spike dynamics. The results indicate that in this system the synchronization is not constant, but evolves smoothly with each spike during the bursts. The observed spike delay variability between the PD neurons is linked to their transient desynchronization, which in turn is influenced by the duration of the bursts. The experimental analysis is complemented with a conductance-based model study to estimate the coupling conductance and the source of its dynamical features. Within the context of these results, we discuss the role of gap junctions in shaping the time intervals that build robust sequences in central pattern generators [6] and ways to test the hypotheses derived from the experiments through the design of hybrid circuits of living and model neurons [7,8].
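The spike-by-spike delay analysis can be illustrated with a small sketch: for each spike of one PD neuron, the nearest spike of the other is located and the signed delay is tracked along the burst. The synthetic spike times below, in which the lag grows along the burst to mimic transient desynchronization, are assumptions.

```python
import numpy as np

def spike_delays(spikes_a, spikes_b):
    """Signed delay (s) from each spike in train A to the nearest spike
    in train B; positive values mean B lags A."""
    idx = np.clip(np.searchsorted(spikes_b, spikes_a), 1, len(spikes_b) - 1)
    prev, nxt = spikes_b[idx - 1], spikes_b[idx]
    nearest = np.where(spikes_a - prev < nxt - spikes_a, prev, nxt)
    return nearest - spikes_a

# Synthetic burst: B's lag grows along the burst, mimicking the
# transient desynchronization described above.
spikes_a = np.arange(0.0, 0.5, 0.05)                  # 10 spikes at 20 Hz
spikes_b = spikes_a + np.linspace(0.001, 0.012, 10)
for i, d in enumerate(spike_delays(spikes_a, spikes_b)):
    print(f"spike {i}: delay {d * 1e3:+.1f} ms")
```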

Acknowledgements

This research was supported by AEI/FEDER PGC2018-095895-B-I00, TIN2017-84452-R, PID2020-114867RB-I00.

References

1. Connors BW, Long MA. Electrical synapses in the mammalian brain. Annu Rev Neurosci. 2004 Jul 21;27:393–418.

2. Elson RC, Selverston AI, Huerta R, Rulkov NF, Rabinovich MI, et al. Synchronous behavior of two coupled biological neurons. Physical Review Letters. 1998 Dec 21;81(25):5692.

3. Varona P, Torres JJ, Huerta R, Abarbanel HD, Rabinovich MI. Regularization mechanisms of spiking–bursting neurons. Neural Networks. 2001 Jul 9;14(6–7):865–75.

4. Soto-Trevino C, Rabbah P, Marder E, Nadim F. Computational model of electrically coupled, intrinsically distinct pacemaker neurons. Journal of neurophysiology. 2005 Jul;94(1):590–604.

5. Nadim F, Li X, Gray M, Golowasch J. The role of electrical coupling in rhythm generation in small networks. In: Network Functions and Plasticity 2017 Jan 1 (pp. 51–78). Academic Press.

6. Elices I, Levi R, Arroyo D, Rodriguez FB, Varona P. Robust dynamical invariants in sequential neural activity. Scientific reports. 2019 Jun 21;9(1):1–3.

7. Reyes-Sanchez M, Amaducci R, Elices I, Rodriguez FB, Varona P. Automatic adaptation of model neurons and connections to build hybrid circuits with living networks. Neuroinformatics. 2020 Jan 13:1–7.

8. Amaducci R, Reyes-Sanchez M, Elices I, Rodriguez FB, Varona P. RTHybrid: a standardized and open-source real-time software model library for experimental neuroscience. Frontiers in neuroinformatics. 2019 Mar 12;13:11.

P196 Assessing robust sequences of neural dynamics: A new approach to M/EEG signals analysis

Sara Kamali 1 , Fabiano Baroni 1 , Pablo Varona 2

1 Universidad Autónoma de Madrid, Computer Engineering, Madrid, Spain

2 Universidad Autónoma de Madrid, Ingeniería Informática, Madrid, Spain

Email: sara.kamali@uam.es

The brain can be seen as working in a metastable regime: in the phase space of its dynamics, trajectories pass from one state to the next in succession [1]. From this perspective, every behavior and thought involves different brain networks that interact with each other sequentially in this dynamical space [2]. Hence, the identification of robust sequences can be key to relating neural activity to behavior and cognition. The sequences of each process in the brain, as a complex biological system, are not exactly the same in every repetition. However, they retain enough similarities to be distinct and distinguishable from one another. These features allow sequences to be robust in functionality while retaining flexibility with respect to a dynamic external environment.

While the mathematical description and conceptual basis of neural sequences have been proposed and studied in previous works [3], they have not been systematically investigated in brain signals. The focus of the current work is to provide a unifying approach to identifying robust sequences in brain signals. Methodologies for the study of M/EEG datasets are still developing due to the nonstationary nature of these signals. Designing an approach to characterize neural sequences, as fundamental features of any brain activity, is a pending milestone in M/EEG analysis. This will lead to novel biomarkers and metrics for the analysis of M/EEG signals for predictive and diagnostic applications, as needed for patients with brain disorders or trauma, and for the design of brain-machine interfaces and biologically inspired technologies. Such an approach can also be leveraged to better understand the dynamics of M/EEG microstates, i.e., quasi-stable spatial configurations of brain activity, which have been shown to exhibit structured sequential activity [4].
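One simple way to operationalize such state sequences, sketched below under strong simplifying assumptions, is microstate-style clustering: instantaneous topographies are grouped into a few quasi-stable states, and the resulting label sequence is summarized by a transition matrix, one elementary sequence metric. This is an illustration of the general idea, not the methodology proposed in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
eeg = rng.standard_normal((5000, 32))     # samples x channels, synthetic

# Cluster instantaneous topographies into k quasi-stable states.
k = 4
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(eeg)

# Transition matrix between consecutive states: one elementary
# metric of the state sequence.
trans = np.zeros((k, k))
for a, b in zip(labels[:-1], labels[1:]):
    trans[a, b] += 1
trans /= trans.sum(axis=1, keepdims=True)
print(np.round(trans, 2))
```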

The current study is a step forward in finding a methodology to clearly characterize neural sequences and to understand the principles that govern their composition and the transitions between them. We propose several metrics for the analysis of their robustness and flexibility, and of their hierarchical spatio-temporal organization. In particular, we report on sequences observed in publicly available brain activity datasets. We also illustrate the usefulness of computational models for interpreting the characterization of robust sequences from experimental data.

Acknowledgements

Funded by AEI/FEDER PGC2018-095895-B-I00.

References

1. Rabinovich MI, Varona P, Tristan I, Afraimovich VS. Chunking dynamics: heteroclinics in mind. Frontiers in computational neuroscience. 2014 Mar 14;8:22.

2. Rabinovich MI, Zaks MA, Varona P. Sequential dynamics of complex networks in mind: Consciousness and creativity. Physics Reports. 2020 Aug 29.

3. Latorre R, Varona P, Rabinovich MI. Rhythmic control of oscillatory sequential dynamics in heteroclinic motifs. Neurocomputing. 2019 Feb 28;331:108–20.

4. Zanesco AP, King BG, Skwara AC, Saron CD. Within and between-person correlates of the temporal dynamics of resting EEG microstates. NeuroImage. 2020 May 1;211:116631.

P197 Linking acute stress and heart rate variability in daily life while accounting for physical activity – a machine learning approach

Benedikt Jordan 1 , Antonin Fourcade 2 , Michael Gaebler 3 , Nico Scherf 3 , Anne-Christin Loheit 4 , Arno Villringer 3

1 TU Dresden, Leipzig, Germany

2 Humboldt-Universitaet zu Berlin, MindBrainBody Institute, Berlin School of Mind and Brain, Berlin, Germany

3 Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

4 Charité—Universitaetsmedizin Berlin, Berlin, Germany

Email: benedikt.jordan@posteo.de

Background

Cardiovascular disease is a principal cause of death, with stress being one of the risk factors [1]. Physiological stress markers can be used for preventive, diagnostic, and therapeutic purposes. Lab-based studies have associated decreases in HRV (that is, variations in the timing of consecutive heartbeats, which index parasympathetic cardioregulation [2]) with higher levels of self-reported stress [3]. It remains unclear to what extent this link generalizes to daily life, particularly as naturalistic settings typically involve physical activity, which itself affects HRV [4].

Methods

ECG and ACC data were ambulatorily recorded with a chest strap (EKGmove3; Movisens, Germany) in the daily life of healthy older adults. Participants reported the level and timing of stress events (10-min temporal resolution) every waking hour via a smartphone-based ecological momentary assessment. Heart rate and HRV features were calculated. Supervised learning models (Decision Forest, Support Vector Machine, Multilayered Perceptron and Stacking Ensemble) were trained on the extracted heart rate and HRV features, with and without including physical activity, to classify binary stress labels. Fivefold nested cross-validation was applied for hyperparameter tuning. The SHapley Additive exPlanations (SHAP) technique was used to calculate feature importance.
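A minimal sketch of this pipeline, assuming synthetic RR-interval data, a random-forest classifier as a stand-in for the model set above, and a toy hyperparameter grid: simple HRV features are computed per epoch and evaluated with nested cross-validation, where the inner loop tunes and the outer loop estimates accuracy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(4)

def hrv_features(rr):
    """Simple HRV features from an RR-interval series (ms):
    mean RR, median RR, SDNN, RMSSD."""
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return [np.mean(rr), np.median(rr), np.std(rr), rmssd]

# Synthetic epochs: 'stress' has shorter, less variable RR intervals.
X, y = [], []
for label in (0, 1):
    for _ in range(60):
        base, sd = (850, 50) if label == 0 else (780, 35)
        X.append(hrv_features(rng.normal(base, sd, 120)))
        y.append(label)
X, y = np.array(X), np.array(y)

# Nested CV: the inner loop tunes hyperparameters, the outer loop
# estimates generalization accuracy.
inner = GridSearchCV(RandomForestClassifier(random_state=0),
                     {"n_estimators": [50, 100], "max_depth": [3, None]},
                     cv=5)
print("accuracy: %.3f" % cross_val_score(inner, X, y, cv=5).mean())
```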

Results

Twenty-five older adults (11 female; age 69 ± 4 years, range 60–76) provided data from (on average) 5.76 days, which included 168 stress events from 20 participants. The best-performing supervised machine learning model, trained without physical activity, was predictive with an accuracy of 74.2% (F1: 72.9%). Highly predictive features were the median and mean RR intervals. Including physical activity increased predictive model performance by 9 percentage points, to 83.2% accuracy (F1: 82.6%).

Conclusions

This study provides evidence for the link between heart rate, HRV and acute stress under naturalistic conditions, as well as when including physical activity. Ambulatorily assessed HRV as a physiological stress marker can be useful for clinical applications. The approach of integrating physical activity into machine learning models is expected to be of broader relevance for naturalistic (i.e., interactive and dynamic) psychophysiological studies.

References

1. Steptoe A, Kivimäki M. Stress and cardiovascular disease. Nature Reviews Cardiology. 2012 Jun;9(6):360–70.

2. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Heart rate variability: standards of measurement, physiological interpretation and clinical use. Circulation. 1996 Mar;93(5):1043–65.

3. Kim HG, Cheon EJ, Bai DS, Lee YH, Koo BH. Stress and heart rate variability: a meta-analysis and review of the literature. Psychiatry investigation. 2018 Mar;15(3):235.

4. Verkuil B, Brosschot JF, Tollenaar MS, Lane RD, Thayer JF. Prolonged non-metabolic heart rate variability reduction as a physiological marker of psychological stress in daily life. Annals of Behavioral Medicine. 2016;50(5):704–14.

P198 Neurodynamical model for the visual recognition of dynamic bodies

Prerana Kumar 1 , Martin Giese 2 , Nick Taubert 2 , Michael Stettler 3 , Rufin Vogels 4

1 Eberhard Karls University of Tübingen, Department of Cognitive Neurology, Tuebingen, Germany

2 Eberhard Karls University of Tübingen, Center for Integrative Neuroscience (CIN) & Hertie Institute for Clinical Brain Research (HIH), Tübingen, Germany

3 Eberhard Karls University of Tübingen, Compsens, Tübingen, Germany

4 Katholieke Universiteit Leuven, Department of Neurosciences, Leuven, Belgium

Email: prerana.kumar@uni-tuebingen.de

The ability to visually recognize different actions and complex movements is necessary for the survival of many social species. The detailed circuitry underlying the neural processing of the visual recognition of body movements is not yet known. For a detailed comparison with electrophysiological data, we have developed a physiologically inspired hierarchical neural model for the recognition of body movements.

The model combines a feed-forward deep network (VGG-19 [1]) with a neurodynamical model that has been demonstrated to reproduce the neural dynamics at the single-cell level in higher areas of the visual and premotor cortex [2]. The lower levels of the visual hierarchy were modeled by the layers up to the conv5.1 layer of the VGG-19 network, pre-trained on the ImageNet database. This readout level was chosen since it was shown to match the activity of middle superior temporal sulcus body (MSB) patch neurons well [3]. These output features were massively reduced by a feature reduction procedure that eliminates features with low variability over the training set in combination with PCA. The reduced feature responses are used as input signals for radial basis function networks that were trained with individual keyframes of the action (Fig. 1A). (Up to this level, the model assumes a feed-forward architecture). Sequences of such keyframes were then encoded by recurrent neural networks (RNNs). Building on previous work modeling in detail the activation dynamics of neurons in the superior temporal sulcus (STS) and of mirror neurons in the premotor cortex [2,4], these recurrent networks were modeled by a set of neural fields with mutual inhibition, resulting in a competitive selection between the different learned actions. The outputs of the individual neural fields were summed up by motion pattern neurons that are active only during one of the learned actions.
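The feature-reduction and radial-basis-function stage can be sketched as follows: features with low variance over the training frames are discarded, PCA compresses the remainder, and each stored keyframe becomes a Gaussian RBF unit centred on its reduced feature vector. The dimensions, the synthetic feature matrix, and the RBF width below are assumptions, not the trained VGG-19 pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)
feats = rng.standard_normal((200, 4096)) * rng.uniform(0, 1, 4096)  # frames x features

# 1) Drop features with low variability over the training frames,
# 2) compress the remainder with PCA (via SVD).
keep = feats.var(axis=0) > np.percentile(feats.var(axis=0), 80)
x = feats[:, keep]
x = x - x.mean(axis=0)
_, _, vt = np.linalg.svd(x, full_matrices=False)
z = x @ vt[:50].T                         # 50 principal components per frame

# 3) One Gaussian RBF unit per stored keyframe.
centers = z[::10]                         # every 10th frame as a keyframe

def rbf_response(frame, sigma=5.0):
    d2 = np.sum((centers - frame) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

print(np.round(rbf_response(z[0])[:5], 2))  # unit 0 matches frame 0 best
```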

We tested the model using movies that show macaque monkeys involved in different types of actions. Similar movies are presently being used in physiological experiments on body motion encoding in monkeys. The model successfully recognizes the actions from real videos. The snapshot neurons show a traveling pulse of activity within the neural field that encodes the corresponding pattern (Fig. 1A). The motion pattern neurons show responses that clearly differentiate between the different encoded actions (Fig. 1B). The model makes precise predictions about the response dynamics of different neuron classes, which are presently being compared to recordings from the macaque visual cortex.

Acknowledgments

This work was supported by ERC 2019-SyG-RELEVANCE-856495 and HFSP RGP0036/2016. MG was also supported by BMBF FKZ 01GQ1704 and SSTeP-KiZ BMG: ZMWI1-2520DAT700.

References

1. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2014 Sep 4.

2. Giese MA, Poggio T. Neural mechanisms for the recognition of biological movements. Nature Reviews Neuroscience. 2003 Mar;4(3):179–92.

3. Kalfas I, Kumar S, Vogels R. Shape selectivity of middle superior temporal sulcus body patch neurons. Eneuro. 2017 May;4(3).

4. Caggiano V, Pomper JK, Fleischer F, Fogassi L, Giese M, et al. Mirror neurons in monkey area F5 do not adapt to the observation of repeated actions. Nature communications. 2013 Feb 5;4(1):1–8.

Fig. 1

A Activity of snapshot neurons for the three actions. B Activity of motion pattern neurons for the three actions (represented by different colors)

P199 Stochastic axons in the mammalian brain

Skirmantas Janusonis 1 , Kasie Mays 1 , Ralf Metzler 2 , Thomas Vojta 3

1 University of California, Santa Barbara, Department of Psychological and Brain Sciences, Santa Barbara, CA, United States of America

2 University of Potsdam, Institute of Physics and Astronomy, Potsdam-Golm, Germany

3 Missouri University of Science and Technology, Department of Physics, Rolla, MO, United States of America

Email: janusonis@ucsb.edu

All dynamical processes in vertebrate brains are physically embedded in a dense matrix of thin axons (fibers) that release serotonin (5-hydroxytryptamine), a neurotransmitter that modulates neural, glial, and vascular processes. Serotonergic axons appear to be an essential ingredient of any adaptive nervous tissue and may inform future architectures in machine learning. However, they typically do not form classical synapses and therefore cannot be understood within the connectomics framework. We have recently introduced the novel concept of "stochastic axon systems," the scale of which may be comparable to that of "deterministic," point-to-point axon systems. To advance the theoretical understanding of the trajectories of serotonergic axons, we propose two theoretical approaches.

The first approach is based on a random, step-wise 3D walk driven by the von Mises-Fisher (vMF) directional distribution [1]. We have developed an algorithm to automatically trace serotonergic axons in 3D confocal images in a transgenic mouse model and obtained estimates of the vMF concentration parameter (κ) in several neuroanatomical regions. We hypothesize that the value of this parameter may control the self-organization of serotonergic fiber densities, with immediate implications for normal and diseased brain states. For example, an increase in serotonergic fiber densities has been reported in the brains of individuals diagnosed with Autism Spectrum Disorder [2]. The second approach is based on fractional Brownian motion (FBM), a continuous stochastic process that generalizes normal Brownian motion. The model includes the recently discovered properties of reflected FBM (rFBM) [3,4]. In the superdiffusive regime, rFBM paths reproduce some essential features of serotonergic fiber densities in the forebrain and brainstem. Our supercomputing simulations show that rFBM walkers accumulate near the surface of brain-shaped domains, just as serotonergic axons tend to produce higher densities near the pial and ventricular surfaces [3]. The FBM model can be further enriched with a "diffusing-diffusivity" (DD) component, to reflect the heterogeneous environment axons travel through [5].
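The vMF-driven walk can be sketched directly: at each step the new direction is drawn from a 3D von Mises-Fisher distribution centred on the previous direction, so the concentration parameter κ sets the fiber's tortuosity. Step length, step count, and the inverse-transform sampler below are illustrative assumptions, not the published tracing algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

def vmf_step(mu, kappa):
    """Unit vector drawn from a 3D von Mises-Fisher distribution with
    mean direction mu and concentration kappa (inverse-transform
    sampling of cos(theta), then a random orthogonal component)."""
    xi = rng.random()
    w = 1 + np.log(xi + (1 - xi) * np.exp(-2 * kappa)) / kappa
    v = rng.standard_normal(3)
    v -= v.dot(mu) * mu                   # make v orthogonal to mu
    v /= np.linalg.norm(v)
    return w * mu + np.sqrt(max(0.0, 1 - w * w)) * v

def axon_walk(n_steps, kappa, step=1.0):
    """Step-wise 3D walk: large kappa gives nearly straight fibers,
    small kappa gives tortuous ones."""
    pos, direction = np.zeros(3), np.array([0.0, 0.0, 1.0])
    for _ in range(n_steps):
        direction = vmf_step(direction, kappa)
        pos = pos + step * direction
    return pos

for kappa in (1.0, 10.0, 100.0):
    end = axon_walk(500, kappa)
    print(f"kappa={kappa}: end-to-end distance {np.linalg.norm(end):.1f}")
```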

Acknowledgements

This research is funded by NSF (#1822517 and #1921515 to SJ, #OAC-1919789 to TV), NIMH (#MH117488 to SJ), the California NanoSystems Institute (Challenge grants to SJ), the Research Corporation for Science Advancement (a Cottrell SEED Award to TV), and the German Research Foundation (DFG #ME 1535/7–1 to RM).

References

1. Janušonis S, Detering N. A stochastic approach to serotonergic fibers in mental disorders. Biochimie. 2019 Jun 1;161:15–22.

2. Azmitia EC, Singh JS, Whitaker-Azmitia PM. Increased serotonin axons (immunoreactive to 5-HT transporter) in postmortem brains from young autism donors. Neuropharmacology. 2011 Jun 1;60(7–8):1347–54.

3. Janušonis S, Detering N, Metzler R, Vojta T. Serotonergic axons as fractional Brownian motion paths: insights into the self-organization of regional densities. Frontiers in Computational Neuroscience. 2020 Jun 24;14:56.

4. Vojta T, Halladay S, Skinner S, Janušonis S, Guggenberger T, et al. Reflected fractional Brownian motion in one and higher dimensions. Physical Review E. 2020 Sep 8;102(3):032108.

5. Wang W, Seno F, Sokolov IM, Chechkin AV, Metzler R. Unexpected crossovers in correlated random-diffusivity processes. New Journal of Physics. 2020 Aug 14;22(8):083041.

P200 Combined effect of chemical and electrical synapses in coupled inhibitory neurons results in emergence of persistent activity

Janaki Raghavan 1 , Adityapuram Sivaraman Vytheeswaran 2

1 The Institute of Mathematical Sciences, Chennai, India

2 University of Madras, Department of Physics, Chennai, India

Email: janaki.phys@gmail.com

The ability of neurons to communicate with each other is crucial for the normal functioning of the brain. The transfer of information from one neuron to another occurs primarily through synapses. In most organisms, the two synaptic modalities, viz. chemical and electrical synapses, co-exist, although their distribution is largely unknown. Moreover, neuronal dynamics is influenced to a great extent by the presence of inhibitory neurons. Although they constitute only 10%-20% of the neuronal population, they play a crucial role in maintaining normal brain activity. It has been shown that such inhibitory (GABAergic) neurons are largely connected through electrical synapses or gap junctions. Furthermore, since certain brain areas, such as the reticular thalamic nucleus (RTN), consist predominantly of inhibitory neurons, it is crucial to understand how the interplay of synapses and gap junctions influences neuronal dynamics, especially in networks of inhibitory neurons. Hence, in this work, using a generic model of excitable neurons coupled through both synapses and gap junctions, we study the conditions for self-sustained neuronal activity. We first show that coupled inhibitory neurons, with high inhibitory conductance and comparatively low gap-junctional conductance, exhibit persistent activity. By systematically varying the gap-junctional conductance, we show the emergence of a chaotic attractor arising through a series of period-doubling bifurcations. We further extend our study by considering a ring of inhibitory neurons and obtain the optimal conditions required for sustained network activity.
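The two coupling modalities enter the current-balance equation differently: a gap junction contributes an ohmic term g_gap (V_j - V_i), while a chemical inhibitory synapse contributes g_syn s_j (E_inh - V_i), gated by the presynaptic activation s_j. A minimal sketch with two leaky integrate-and-fire cells follows; all parameters are illustrative, not the excitable-neuron model of the study.

```python
import numpy as np

def coupling_currents(v_self, v_other, s_other,
                      g_gap=0.05, g_syn=0.3, e_inh=-80.0):
    """Electrical plus chemical inhibitory coupling (arbitrary units):
    the gap junction is ohmic in the voltage difference, while the
    synapse is gated by the presynaptic activation s_other and drives
    the membrane toward E_inh."""
    return g_gap * (v_other - v_self) + g_syn * s_other * (e_inh - v_self)

dt, tau_m, tau_s = 0.1, 20.0, 10.0        # ms
v = np.array([-65.0, -55.0])              # two inhibitory cells
s = np.zeros(2)                           # synaptic gating variables
spikes = np.zeros(2, dtype=int)
for _ in range(50000):                    # 5 s of simulated time
    i_cpl = coupling_currents(v, v[::-1], s[::-1])
    v += dt / tau_m * (-(v + 60.0) + 25.0 + i_cpl)   # LIF with tonic drive
    s += dt * (-s / tau_s)
    fired = v >= -50.0
    v[fired] = -65.0
    s[fired] += 1.0
    spikes += fired
print("spike counts over 5 s:", spikes)
```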

P201 Does reward positivity encode trial-by-trial reward prediction error? A model-based EEG analysis

Ka Chun Wu 1 , Isaac Ip 2 , Fiona Ching 2 , Heytou Chiu 2 , Rosa Chan 3 , Bolton K. H. Chau 4 , Savio W. H. Wong 1 , Yetta Kwailing Wong 1

1 The Chinese University of Hong Kong, Department of Educational Psychology, Hong Kong, Hong Kong

2 The Chinese University of Hong Kong, Laboratory for Brain and Education, Department of Educational Psychology, Hong Kong, Hong Kong

3 City University of Hong Kong, Department of Electrical Engineering, Hong Kong, Hong Kong

4 The Hong Kong Polytechnic University, Department of Rehabilitation Sciences, Hong Kong, Hong Kong

Email: kachunwu7-s@link.cuhk.edu.hk

Reward positivity (RewP), an event-related potential observed 250–300 ms after feedback [1], is hypothesized to reflect the dopaminergic response to the reward prediction error (RPE) during reward processing. However, the traditional grand-averaging approach of ERP analyses cannot answer whether RewP is merely a response to RPE valence in a categorical way (i.e., better-than-expected or worse-than-expected) or reflects the computation of RPE in a parametric way. In this study, we take a model-based approach to explore the effect of RPE on RewP. Specifically, we use hierarchical Bayesian modelling to estimate individual parameters of a reinforcement learning model and extract the trial-by-trial RPE as the regressor for a model-based EEG analysis.

Thirty-seven healthy adults (19 male, 18 female, Mage = 26.97) performed four blocks of a probabilistic reversal learning task while we acquired their EEG responses using a 128-channel system. The preprocessed data were segmented into 1000 ms epochs, from 200 ms before to 800 ms after the feedback slide. A reinforcement learning model based on the Rescorla-Wagner model, with separate learning rates for positive and negative feedback, was fitted to the subjects' choice data to estimate the hyper-parameters and individual parameters using the hBayesDM package [2]. Using the fitted parameters, the trial-by-trial RPE was generated and entered as the trial-by-trial regressor for the model-based EEG analysis using the LIMO-EEG plugin of EEGLAB [3].
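The regressor-generation step can be sketched as follows, assuming fitted learning rates and a toy choice sequence: Rescorla-Wagner updates with separate rates for positive and negative prediction errors yield a trial-by-trial RPE series. This mirrors the logic of the analysis but is not the hBayesDM fitting code, which estimates the parameters hierarchically.

```python
import numpy as np

def trialwise_rpe(choices, rewards, alpha_pos, alpha_neg, n_options=2):
    """Rescorla-Wagner updates with separate learning rates for positive
    and negative prediction errors; returns the trial-by-trial RPE series
    used as the regressor in the model-based EEG analysis."""
    q = np.zeros(n_options)                  # action values
    rpes = []
    for c, r in zip(choices, rewards):
        rpe = r - q[c]
        q[c] += (alpha_pos if rpe > 0 else alpha_neg) * rpe
        rpes.append(rpe)
    return np.array(rpes)

# Toy session: option 0 is rewarded on 80% of trials and always chosen.
rng = np.random.default_rng(7)
choices = np.zeros(100, dtype=int)
rewards = (rng.random(100) < 0.8).astype(float)
rpe = trialwise_rpe(choices, rewards, alpha_pos=0.3, alpha_neg=0.1)
print(np.round(rpe[:10], 2))
```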

Traditional ERP analysis found P200, FRN, and P300 effects of feedback valence at FCz (Fig. 1A). For the model-based analysis, a one-sample t-test was applied to the condition contrast (reward vs. non-reward) and the RPE contrast (positive RPE vs. negative RPE). While no significant cluster was found for the condition effect after correction for multiple comparisons using spatiotemporal clustering, the results revealed a significantly stronger modulation of the ERP for positive RPE than for negative RPE (Fig. 1B; the cluster started at 212 ms and ended at 268 ms, encompassing frontocentral electrodes; mean beta value = 0.68, 95% CI [-0.07 1.42]; maximum t-value = 5.20 at 232 ms, channel F6; corrected p-value = 0.003). The results support that a more positive RPE predicts a more positive EEG response in the frontocentral region, though the corresponding time is earlier than the typical RewP time window. Model-based analysis provides an alternative angle to the average-based RewP and sheds new light on the temporal dynamics of reward computation. It provides direct evidence that the "early" RewP encodes positive RPE in a parametric way.

References

1. Proudfit GH. The reward positivity: From basic research on reward to a biomarker for depression. Psychophysiology. 2015 Apr;52(4):449–59.

2. Ahn WY, Haines N, Zhang L. Revealing neurocomputational mechanisms of reinforcement learning and decision-making with the hBayesDM package. Computational Psychiatry. 2017 Oct 1:24–57.

3. Pernet CR, Chauveau N, Gaspar C, Rousselet GA. LIMO EEG: a toolbox for hierarchical Linear Modeling of Electroencephalographic data. Computational intelligence and neuroscience. 2011 Oct;2011.

Fig. 1

A Feedback-locked ERP for reward feedback (red line) and non-reward feedback (blue line). B LIMO-EEG output of one sample t-test of beta of RPE for reward feedback vs. for non-reward feedback. Corrected for multiple comparisons using spatiotemporal clustering with a cluster forming threshold of p = 0.05

P202 Sequence learning, prediction, and generation in networks of spiking neurons

Younes Bouhadjar 1 , Markus Diesmann 2 , Tom Tetzlaff 3 , Dirk J. Wouters 4

1 Forschungszentrum Jülich, Jülich, Germany

2 Forschungszentrum Jülich GmbH, Jülich, Germany

3 Jülich Research Centre, Institute for Neuroscience and Medicine (INM-6), Jülich, Germany

4 RWTH Aachen University, Aachen, Germany

Email: ys.bouhadjar@gmail.com

Sequence learning, prediction and generation has been proposed to be the universal computation performed by the neocortex [1]. The Hierarchical Temporal Memory (HTM) algorithm [2] realizes this form of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context-specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms. Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm [3], which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns non-Markovian sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific feedforward subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context-specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences. By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation of the TM algorithm permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction and replay. We demonstrate this aspect by studying the effect of the sequence speed on the sequence learning performance and on the speed of autonomous sequence replay.
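The core plasticity idea, structural Hebbian learning with a maturity threshold, can be sketched in a simplified discrete form: synapses from the previously active group to the currently active group gain "permanence" and begin to transmit once mature. This toy sketch ignores the model's spiking dynamics, dendritic nonlinearity, inhibition, and homeostatic control; group sizes, rates, and thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50                                    # neurons
perm = rng.uniform(0.0, 0.3, (n, n))      # synaptic permanences W[post, pre]
theta, lr = 0.5, 0.05                     # maturity threshold, learning rate

def present(seq, perm):
    """Structural Hebbian sketch: synapses from the previously active
    group to the currently active group gain permanence; synapses from
    inactive cells are weakly depressed."""
    for prev, curr in zip(seq[:-1], seq[1:]):
        inactive = [i for i in range(n) if i not in prev]
        perm[np.ix_(curr, prev)] += lr
        perm[np.ix_(curr, inactive)] -= lr / 10
    np.clip(perm, 0.0, 1.0, out=perm)
    return perm

# Sequence A-B-C, each element a disjoint group of 10 neurons.
groups = [list(range(i * 10, (i + 1) * 10)) for i in range(3)]
for _ in range(20):                       # repeated presentations
    perm = present(groups, perm)

mature = perm > theta                     # only mature synapses transmit
print("fraction of mature A->B synapses:",
      mature[np.ix_(groups[1], groups[0])].mean())
```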

Acknowledgments

This project was funded by the Helmholtz Association Initiative and Networking Fund (project number SO-092, Advanced Computing Architectures), and the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3) and No. 785907 (Human Brain Project SGA2).

References

1. Hawkins J, Blakeslee S. On intelligence. Macmillan; 2004 Oct 3.

2. Hawkins J, Ahmad S, Dubinsky D. Cortical learning algorithm and hierarchical temporal memory. Numenta Whitepaper. 2011 Sep 1;1:68.

3. Hawkins J, Ahmad S. Why neurons have thousands of synapses, a theory of sequence memory in neocortex. Frontiers in neural circuits. 2016 Mar 30;10:23.

P203 A neuroanatomically-based model for trichromatic color sensations

Charles Wu 1

1 Perception and Cognition Research, San Francisco, CA, United States of America

Email: charlesqwu@percog.org

At the beginning of the last century, the British psychologist William McDougall [1] proposed a model for color sensations. In his model, the neural stage for color sensations consists of four channels (three corresponding to the three primary colors conceived by Thomas Young, and a fourth for the white sensation) and is located at a monocular level in visual cortex. Also in relation to color vision, the British neuroanatomist Le Gros Clark [2] proposed that the three layers per retina within the Lateral Geniculate Nucleus (LGN) correspond to Young's three primary color channels.

Presently, both McDougall's and Le Gros Clark's ideas have largely been ignored or dismissed by researchers in color vision (e.g., see [3]). Here I attempt to revive their ideas (with some modifications) and to develop a neuroanatomically-based model for color sensations. Based on the phenomenological monocularity of color sensations, I localize the neural stage for color sensations to the thalamic recipient layer (i.e., Layer 4, which is usually but incorrectly labelled as Layer 4C, see [4]) of the primary visual cortex (i.e., V1). More specifically, I propose the following six-pack model for this layer: tangentially, it consists of two ocular subsystems, namely the ocular dominance columns receiving inputs from the two eyes respectively; vertically, from the top of the layer (i.e., the pia side of the cortical sheet) to its bottom, it consists of three sublayers corresponding to the three primary color sensations: blue, green, and red.

Textbooks on the neuroanatomical organization of the primate visual system usually explain the geniculo-cortical wiring schema as follows: for each retina, the M-layer (magnocellular) in the LGN projects to Layer 4Cα in V1, and the two P-layers (parvocellular) project to Layer 4Cβ. But why does nature twist two bundles of neural fibers together into one along the geniculo-cortical route? Here I propose that the organizational feature of three divisions per retina in the LGN is conserved within V1 Layer 4, though there may be a transform from the three cone-based (i.e., S-, M-, and L-cone) spectral selectivities in the LGN to the three primary colors (i.e., B, G, and R) in V1 Layer 4.

Furthermore, I propose that the neural code that directly corresponds to color sensations is single-moment synchronization, and that the magnitude of a sensation directly corresponds to the number of neurons within a synchronously firing cell assembly. In this view, for any snapshot of visual consciousness, the bindings at various levels (among spatial points, between the two ocular subsystems, within one primary color channel, across color channels (i.e., color fusion or mixture), and among visual features, such as between color and orientation) are all due to the same neurophysiological mechanism (i.e., synchronization).

References

1. McDougall W. Some new observations in support of Thomas Young's theory of light-and colour-vision. Mind. 1901 Jan 1;10(37):52–97.

2. Clark WL. Anatomical basis of colour vision. Nature. 1940 Oct;146(3704):558–9.

3. FitzGibbon T, Eriköz B, Grünert U, Martin PR. Analysis of the lateral geniculate nucleus in dichromatic and trichromatic marmosets. Journal of Comparative Neurology. 2015 Sep 1;523(13):1948–66.

4. Balaram P, Young NA, Kaas JH. Histological features of layers and sublayers in cortical visual areas V1 and V2 of chimpanzees, macaque monkeys, and humans. Eye and brain. 2014;6(Suppl 1):5.

P204 Neuroscience Gateway enabling large scale modeling, data processing and software dissemination

Subhashini Sivagnanam 1 , Kenneth Yoshimoto 1 , Martin Kandes 1 , Ted Carnevale 2 , Amitava Majumdar 1 , Steven Yeu 1

1 University of California, San Diego, San Diego Supercomputer Center, La Jolla, CA, United States of America

2 Yale University, Neurobiology, New Haven, CT, United States of America

Email: majumdar@sdsc.edu

The Neuroscience Gateway (NSG) [1–3] has been in operation since early 2013. It provides ~20 neuroscience modeling and data processing software packages on high performance computing (HPC) and high throughput computing (HTC) resources of the Extreme Science and Engineering Discovery Environment (XSEDE). It currently has over 1250 registered users (Fig. 1). Computational modeling of cells and networks has become an essential part of neuroscience requiring HPC, and similarly the processing of experimental data (EEG, fMRI) increasingly requires the compute power of HTC and cloud resources. NSG lowers or eliminates the administrative and technical barriers that make it difficult for neuroscientists to use HPC/HTC/cloud resources. It offers free supercomputer time that the NSG team acquires on XSEDE resources. NSG is open to any neuroscientist from any country.

We recently integrated NSG with the Open Science Grid (OSG), a framework for distributed HTC for the academic community. We have also demonstrated the capability of NSG job submission to AWS cloud resources, where NSG jobs use the "cloudbursting" features of supercomputers to send jobs to AWS resources. Both of these capabilities serve the computing needs of experimental and cognitive neuroscientists who utilize HTC for data processing [4], just as computational neuroscientists utilize HPC for modeling. Recently added features of NSG include the ability for users to (i) transfer large data directly to NSG's backend storage, (ii) share data with their NSG collaborators, and (iii) process publicly shared data. We have expanded NSG to include a software development platform to which neuroscience software developers get direct access and which provides a neuroscience software stack. Neuroscientists can use this platform to develop, benchmark, and scale their software. Robust software can be made available in containerized or cloud-image form for dissemination, via NSG or otherwise, to the neuroscience community. We have added a software repository and a web front end that provides detailed information about the software, such that users can run the software on NSG or on computing resources of their choice, such as commercial clouds. NSG is increasingly used in workshops, training classes and classroom teaching. Since its inception, NSG has enabled over 250 publications, presentations and theses.

Acknowledgements

We acknowledge these grants: NIH U24EB029005; NSF 1935749; NSF 1935771; NIH R01EB023297; NSF 1730655; NSF 1823366.

References

1. Sivagnanam S, Majumdar A, Yoshimoto K, Astakhov V, Bandrowski AE, et al. Introducing the neuroscience gateway. IWSG. 2013 Jun;993.

2. Majumdar A, Sivagnanam S, Yoshimoto K, Carnevale T. Understanding the Evolving Cyberinfrastructure Needs of the Neuroscience Community. In: Proceedings of the XSEDE16 Conference on Diversity, Big Data, and Science at Scale 2016 Jul 17 (pp. 1–7).

3. Sivagnanam S, Yoshimoto K, Carnevale NT, Majumdar A. The neuroscience gateway: Enabling large scale modeling and data processing in neuroscience. In: Proceedings of the Practice and Experience on Advanced Research Computing 2018 Jul 22 (pp. 1–7).

4. Martínez-Cancino R, Delorme A, Truong D, Artoni F, Kreutz-Delgado K, et al. The open EEGLAB portal interface: high-performance computing with EEGLAB. NeuroImage. 2021 Jan 1;224:116778.

Fig. 1

Growth in Number of NSG Users Over Years

P205 Modelling the effects of the perforant path in the recall performance of a CA1 microcircuit with excitatory and inhibitory neurons

Nikolaos Andreakos 1 , Vassilis Cutsuridis 1 , Shigang Yue 1

1 University of Lincoln, School of Computer Science, Lincoln, United Kingdom

Email: nandreakos@lincoln.ac.uk

From recollecting childhood memories to recalling whether we turned off the oven before we left the house, memory defines who we are. Losing it can be very harmful to our survival. Recently, we quantitatively investigated the biophysical mechanisms leading to memory recall improvement in a computational CA1 microcircuit model of the hippocampus [1]. The model consisted of excitatory cells (pyramidal cells) and four types of inhibitory cells: axo-axonic, basket, bistratified and OLM cells. Cell properties were validated extensively against experimental data. Cells' firings were timed to a theta oscillation paced by two distinct medial septal neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. To test recall of a previously stored memory pattern, the associated input pattern was applied as a cue in the form of spiking of active CA3 inputs (those belonging to the pattern) exciting the network's excitatory cells' proximal-to-the-soma dendrites. The EC perforant path excitatory input (sensory input) to the network's excitatory cells' distal dendrites was disconnected. Dendritic inhibition acted as a non-specific global threshold machine that removed spurious activity during recall. To systematically evaluate the model's recall performance against stored patterns, pattern overlap, network size, and active cells per pattern, we selectively modulated feedforward and feedback excitatory and inhibitory pathways targeting specific excitatory and inhibitory cells. Our simulations showed that the number of "active cells" representing a memory pattern was the determining factor for improving the model's recall performance, regardless of the number of stored patterns and the degree of overlap between them. As the number of active cells per pattern decreased, the model's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved.

In the present study, we investigated the synergistic effects of the EC excitatory input (sensory input) and the CA3 excitatory input (contextual information) on the recall performance of the CA1 microcircuit. Our results showed that when the EC input was exactly the same as the CA3 input, the recall performance of our model was strengthened. When the two inputs were dissimilar (degree of similarity: 40%–0%), recall performance was reduced. These effects were positively correlated with how many “active cells” represented a memory pattern: when the number of active cells increased and the degree of similarity between the two inputs decreased, the recall performance of the model was reduced. The latter finding confirms our previous result that the number of cells coding a piece of information plays a significant role in the recall performance of the model.
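
To make the notion of recall performance concrete, here is a minimal sketch of one common way to score recall: the cosine similarity (normalized overlap) between the recalled binary activity pattern and the stored pattern. This is an illustrative metric under our own assumptions, not necessarily the exact quality measure used in the model.

```python
# Illustrative recall-quality score for binary activity patterns.
import numpy as np

def recall_quality(recalled: np.ndarray, stored: np.ndarray) -> float:
    """Cosine similarity between two binary patterns over the CA1 population."""
    recalled = recalled.astype(float)
    stored = stored.astype(float)
    denom = np.linalg.norm(recalled) * np.linalg.norm(stored)
    return float(recalled @ stored / denom) if denom > 0 else 0.0

# Example: a 100-cell network storing a pattern with 20 active cells,
# of which 15 are correctly reactivated and 5 spurious cells fire.
stored = np.zeros(100); stored[:20] = 1
recalled = np.zeros(100); recalled[:15] = 1; recalled[60:65] = 1
print(recall_quality(recalled, stored))  # 15 / 20 = 0.75
```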

References

1. Andreakos N, Yue S, Cutsuridis V. Quantitative investigation of memory recall performance of a computational microcircuit model of the hippocampus. Brain Informatics. 2021 Dec;8(1):1–5.

P206 Multistability of coherent states in ring networks of type II neurons with asymmetrical nonlocal inhibitory connectivity

Olesia Dogonasheva 1 , Boris Gutkin 2 , Denis Zakharov 3

1 HSE University, School of Psychology, Moscow, Russia

2 École Normale Supérieure, Paris, France

3 Higher School of Economics, Centre for Cognition & Decision Making, Moscow, Russia

Email: odogonasheva@hse.ru

One significant dynamical property of neural networks is their ability to synchronize. Synchronization plays a key role in the formation of functional states in the brain [1]. Experimental evidence suggests that distinct functional cognitive states are associated with distinguishable patterns of brain activity, and that these are flexibly rebuilt when solving different cognitive tasks. Notably, neuronal populations engaged in a task form spatio-temporal synchronous networks, while neurons not involved in the task may remain unsynchronized [2]. The coexistence of synchronous and asynchronous oscillations is called a chimera state [3]. Studies of chimera states in neuronal networks are developing rapidly and are of great interest for their potential computational and functional importance [4].

In this work, we consider a ring neural network with asymmetrical chemical inhibitory synapses, modelling a prototypical connectivity for sequential information passing (see, for example, [5]). We describe each neuron by the Morris–Lecar model with type II dynamics [6], which allows us to reproduce the excitability properties of fast-spiking interneurons. First, to understand whether our network is capable of rapid and flexible spatio-temporal state reconfiguration, we need to determine exhaustively the dynamic modes of the network. Second, we make the ansatz that for flexible reconfiguration our network needs to operate in a multistable regime. To perform large-scale scanning of the network’s spatio-temporal dynamical regimes, we used the adaptive coherence measure [3]. Depending on the synaptic coupling strength (gsyn) and the connectivity parameter (r = R/N, where R is the number of connections of each neuron and N is the number of neurons), we found cluster synchronization, travelling waves, chimera states, and regimes of oscillator death (cessation of activity). The multistability map is shown in Fig. 1.
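
The sketch below shows one way such a ring topology and a coherence diagnostic could be set up. It is not the authors’ model: the one-sided direction of the asymmetric projections is an assumption, and the sliding-window order parameter is a simpler stand-in for the adaptive coherence measure of [3]; gsyn and r = R/N follow the definitions above.

```python
# Illustrative asymmetric nonlocal inhibitory ring coupling plus a simple
# local order parameter for flagging coexisting coherent/incoherent regions.
import numpy as np

def ring_coupling(N: int, r: float, g_syn: float) -> np.ndarray:
    """Each neuron inhibits its R = r*N nearest neighbours on one side only."""
    R = int(r * N)
    W = np.zeros((N, N))
    for i in range(N):
        for k in range(1, R + 1):
            W[(i + k) % N, i] = -g_syn  # inhibitory, one-sided (asymmetric)
    return W

def local_order(phases: np.ndarray, window: int = 5) -> np.ndarray:
    """Kuramoto-style order parameter in a sliding window around each node."""
    N = len(phases)
    z = np.exp(1j * phases)
    return np.array([
        abs(np.mean(z[np.arange(i - window, i + window + 1) % N]))
        for i in range(N)
    ])

# Toy example: half the ring phase-locked, half incoherent (chimera-like).
rng = np.random.default_rng(0)
phases = np.where(np.arange(100) < 50, 0.0, rng.uniform(0, 2 * np.pi, 100))
order = local_order(phases)
print(order[20:25].round(2), order[70:75].round(2))  # ~1 vs. clearly < 1
```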

We find that over the vast majority of the parameter space the network shows multistability, with different combinations of dynamic states coexisting. This shows that even simple network architectures allow for a rich repertoire of dynamical behaviors, and that these can be navigated rapidly and flexibly by inputs, resetting of initial conditions, or neuromodulatory influences.

Acknowledgements

This work is an output of a research project implemented as part of the Basic Research Program at the HSE University.

References

1. Sejnowski TJ, Paulsen O. Network oscillations: emerging computational principles. Journal of Neuroscience. 2006 Aug 1;26(6):1673–6.

2. Krupa M, Gielen S, Gutkin B. Adaptation and shunting inhibition leads to pyramidal/interneuron gamma with sparse firing of pyramidal cells. Journal of computational neuroscience. 2014 Oct 1;37(2):357–76.

3. Abrams DM, Strogatz SH. Chimera states for coupled oscillators. Physical review letters. 2004 Oct 22;93(17):174102.

4. Wang Z, Liu Z. A brief review of chimera state in empirical brain networks. Frontiers in Physiology. 2020 Jun 30;11:724.

5. Tsodyks M. Attractor neural network models of spatial maps in hippocampus. Hippocampus. 1999;9(4):481–9.

6. Dogonasheva O, Kasatkin D, Gutkin B, Zakharov D. Robust universal approach to identify travelling chimeras and synchronized clusters in spiking networks. arXiv preprint arXiv:2103.09304. 2021 Mar 16.

Fig. 1

State diagram on the (r, gsyn) plane with the external current fixed at Iext = 95 μA/cm2

P207 Topological properties of mouse neuronal populations

Margarita Zaleshina 1 , Alexander Zaleshin 2

1 Moscow Institute of Physics and Technology, Russia

2 Institute of Higher Nervous Activity and Neurophysiology, Moscow, Russia

Email: zaleshina@gmail.com

In this work, we processed sets of images obtained by light-sheet fluorescence microscopy. We selected different cell groups and determined the areas occupied by ensembles of cell groups in mouse brain tissue. Recognition of mouse neuronal populations was performed on the basis of the visual properties of fluorescence-activated cells. The identification of individual ensembles, the principles of their interaction, and the correlation of ensemble activity have been considered by many authors. Segmentation of a large set of neurons involves grouping them into neural ensembles, which are usually formed as populations of cells (or cultured neurons) with similar properties.

The proper selection of scale makes it possible to reduce errors, to use flexible settings for the integration of heterogeneous data, and to define filters for noise reduction. Data obtained at intermediate scales affect the identification of single image segments during processing. Figure 1 shows how final segment contours can be formed in different ways depending on the initial scales. In this work, typical samples of cell groups in the brain were studied. Spatial analysis of the distribution of cells according to fluorescence microscopy datasets was performed based on the data packages [1,2] (https://ebrains.eu). In our study, 60 fluorescence microscopy datasets obtained from 23 mice ex vivo were analyzed.

Based on the light-sheet microscopy datasets, we identified the visual characteristics of elements in multi-page TIFF files, such as the density of surface fill and its distribution over the study area, the boundaries of distinct objects and object groups, and the boundaries between homogeneous areas. To identify topological properties of the images, we performed operations such as contouring, segmentation, and identification of areas of interest. Individual elements in fluorescence microscopy records were selected based on their brightness in grayscale mode. Frequently occurring patterns formed by individual elements were classified and found in other sets of images; in this way we built a training sample and classified the data in optogenetics multi-page TIFF files. The presence of training samples was tested for different types of fluorescence microscopy. We selected and constructed six sets of typical samples with certain topological properties, on the basis of the density at the boundaries, the density inside the boundaries, and the shape type.
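
As a minimal sketch of this kind of brightness-based selection and segmentation, the snippet below processes one page of a multi-page TIFF with scikit-image. The file name, the Otsu threshold choice, and the minimum object size are illustrative assumptions, not the authors’ exact pipeline.

```python
# Brightness thresholding and connected-component segmentation of one page
# of a multi-page TIFF (toy parameters; file name is hypothetical).
import numpy as np
from skimage import io, filters, measure, morphology

pages = io.imread("lightsheet_stack.tif")  # multi-page TIFF -> (pages, H, W)
img = pages[0].astype(float)

thresh = filters.threshold_otsu(img)                              # global brightness threshold
mask = morphology.remove_small_objects(img > thresh, min_size=20) # drop noise specks

labels = measure.label(mask)               # connected components = candidate cells
props = measure.regionprops(labels)
centroids = np.array([p.centroid for p in props])
print(f"{labels.max()} segments; first centroid: {centroids[0] if len(props) else None}")
```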

In this work we demonstrated the usability of spatial data processing methods for pattern recognition and comparative analysis of fluorescence microscopy records. Geoinformation applications provide sets of options for processing topological properties of images, such as contouring and segmentation, identification of ROIs, data classification, and training sample construction. We have shown that applying the procedure of combining groups of cells into typical ensembles enriches the possibilities of brain image processing.

Such algorithms and methods can be used for data processing at an "intermediate scale" and for describing the specific characteristics of the distinctive regions formed near the borderlines of stable ensembles.

References

1. Silvestri L, et al. Whole brain images of selected neuronal types. HBP Neuroinformatics Platform. 2019.

2. Silvestri L, et al. Whole-brain images of different neuronal markers. HBP Neuroinformatics Platform. 2020.

Fig. 1

Selection of segments at different scales. A The original set of elements schematically: at scale 1 there are gray and lilac ovals; scale 2 contains lilac circles; scale 3 has orange strokes. B-D Different scales highlighted in light green

P208 Individual variability in the human connectome maintains selective cross-modal consistency and shares microstructural signatures

Esin Karahan 1 , Luke Tait 1 , Ruoguang Si 1 , Aysegul Ozkan 1 , Maciek Szul 1 , Jiaxiang Zhang 1

1 Cardiff University, School of Psychology, CUBRIC, Cardiff, United Kingdom

Email: karahane@cardiff.ac.uk

Individuals differ in their behavior and cognitive abilities, but to what extent the brain connectome varies between individuals remains largely unknown. By combining diffusion-weighted imaging (DWI), fMRI, and magnetoencephalography (MEG), this study quantifies the individual variability of the connectome and its consistency across imaging modalities. Furthermore, we associated individual variability in the connectome with cortical myelin content and white-matter microstructure [1]. We recruited 64 healthy participants in two cohorts (49 females, age 18–35 years; mean: 21.1, std: 2.94). Cohort 1 (N = 29) underwent 3 T DWI, fMRI, and MEG scanning sessions. Cohort 2 (N = 35) completed a 7 T high-resolution (0.65 mm isotropic) structural MRI session.

For Cohort 1, we generated individual DWI-based structural connectomes from whole-brain probabilistic tractography. The connectivity matrix was calculated as the region-to-region connectivity strength, based on cortical surface parcellations from the HCP multimodal atlas [2]. White-matter microstructural metrics were calculated from the DTI, NODDI [3], and CHARMED [4] models. The fMRI functional connectome was calculated by correlating BOLD responses between regions of interest. The MEG functional connectome was calculated from regional correlations of source-reconstructed alpha- and beta-band oscillatory power. For Cohort 2, we measured the T1 relaxation rate (R1) as a proxy for cortical myelin content.

We quantified the inter-subject variability (ISV) of the connectome as the average cosine distance between the connectivity profiles of individuals. The ISVs of the structural and functional connectomes are characterized by higher variability in association cortices and lower variability in sensory and visual cortices (Fig. 1A). This pattern is consistent across all modalities to varying degrees, as shown by significant alignments between functional and structural connectome variabilities in selective cortical clusters. Cortical myelin content, indexed by the R1 value, is high in somatosensory, motor, auditory, and visual cortices and low in association cortices (frontoparietal and temporal areas) (Fig. 1B). Across the cortex, R1 is negatively related to ISV across modalities (Spearman’s correlation between the R1 map and structural ISV: r = –0.11, p = 0.009; fMRI ISV: r = –0.50, p = 0.78e-39; alpha-band MEG ISV: r = –0.24, p = 0.0006; beta-band MEG ISV: r = –0.36, p = 1.04e-07). Furthermore, fMRI ISV is mediated by the level of anisotropy in white-matter microstructure (ISV: r = –0.4, p = 0.22e-26).
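
Since ISV is defined above as an average cosine distance between connectivity profiles, a minimal numerical sketch is straightforward; the toy data shape and region-wise looping below are our own assumptions about the bookkeeping, not the authors’ code.

```python
# Region-wise inter-subject variability (ISV) as the mean cosine distance
# between subjects' connectivity profiles (rows of the connectome matrix).
import numpy as np
from scipy.spatial.distance import cosine

def isv(connectomes: np.ndarray) -> np.ndarray:
    """connectomes: (subjects, regions, regions) -> ISV value per region."""
    S, R, _ = connectomes.shape
    out = np.zeros(R)
    for r in range(R):
        dists = [cosine(connectomes[i, r], connectomes[j, r])
                 for i in range(S) for j in range(i + 1, S)]
        out[r] = np.mean(dists)
    return out

rng = np.random.default_rng(1)
conn = rng.random((10, 50, 50))   # 10 subjects, 50 regions (toy data)
print(isv(conn).round(3)[:5])
```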

Our findings contribute to the understanding of individual differences in the functional and structural organization of the brain. The identification of consistent individual differences across modalities could provide benchmarks for understanding how disease modifies brain function.

Acknowledgements

This study was supported by the ERC (716321) and a Wellcome Trust ISSF3 Population Award (AC1710IF14).

References

1. Karahan E, Tait L, Si R, Özkan A, Szul M, et al. Individual variability in the human connectome maintains selective cross-modal consistency and shares microstructural signatures. BioRxiv. 2021 Jan 1.

2. Glasser MF, Coalson TS, Robinson EC, Hacker CD, Harwell J, et al. A multi-modal parcellation of human cerebral cortex. Nature. 2016 Aug;536(7615):171–8.

3. Zhang H, Schneider T, Wheeler-Kingshott CA, Alexander DC. NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain. Neuroimage. 2012 Jul 16;61(4):1000–16.

4. Assaf Y, Basser PJ. Composite hindered and restricted model of diffusion (CHARMED) MR imaging of the human brain. Neuroimage. 2005 Aug 1;27(1):48–58.

Fig. 1

A Inter-subject variability (ISV) is the cosine distance between connectivity profiles of individuals. B ISV is calculated on the structural connectome (sc-ISV), the fMRI functional connectome (fc-ISV), and the MEG functional connectome for alpha and beta bands (alpha meg-ISV, beta meg-ISV). The R1 map shows cortical myelin content. All measures are normalized across the brain and represented as z-scores

P209 Segregation, integration and balance of large-scale resting brain networks configure different cognitive abilities

Changsong Zhou 1

1 Hong Kong Baptist University, Physics, Hong Kong

Email: cszhou@hkbu.edu.hk

Diverse cognitive processes set different demands on locally segregated and globally integrated brain activity. However, it remains an open question how resting brains configure their functional organization to balance the demands on network segregation and integration to best serve cognition. Here, we use an eigenmode-based approach [1] to identify hierarchical modules in functional brain networks and quantify the functional balance between network segregation and integration. In a large sample of healthy young adults (n = 991), we combine whole-brain resting-state functional magnetic resonance imaging (fMRI) data with a mean-field model on the structural network derived from diffusion tensor imaging and demonstrate that resting brain networks are on average close to a balanced state. This state allows for balanced time dwelling at segregated and integrated configurations, and highly flexible switching between them. Furthermore, we employ structural equation modelling to estimate general and domain-specific cognitive phenotypes from nine tasks, and demonstrate that network segregation, integration, and their balance in resting brains predict individual differences in diverse cognitive phenotypes. More specifically, stronger integration is associated with better general cognitive ability, stronger segregation fosters crystallized intelligence and processing speed, and an individual’s tendency towards balance supports better memory. Our findings provide a comprehensive and deep understanding of the brain’s functioning principles in supporting diverse functional demands and cognitive abilities, and advance modern network neuroscience theories of human cognition. This work was published in [2].
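
The following is a deliberately simplified sketch of an eigenmode-based segregation/integration contrast; the hierarchical-module method of [1] is considerably more sophisticated. Here we assume a sign split of the leading eigenvector of a functional connectivity matrix as a crude two-module partition and compare within- versus between-module connectivity as proxies for segregation and integration.

```python
# Toy eigenmode-based segregation/integration contrast (not the method of [1]).
import numpy as np

def seg_int_balance(fc: np.ndarray) -> tuple[float, float]:
    fc = fc.copy(); np.fill_diagonal(fc, 0.0)
    vals, vecs = np.linalg.eigh(fc)
    mode = vecs[:, -1]                  # leading eigenmode of FC
    m = mode > 0                        # sign split -> two modules
    within = np.concatenate([fc[np.ix_(m, m)].ravel(), fc[np.ix_(~m, ~m)].ravel()])
    between = fc[np.ix_(m, ~m)].ravel()
    return float(within.mean()), float(between.mean())

rng = np.random.default_rng(2)
ts = rng.normal(size=(30, 200))         # 30 regions x 200 time points (toy data)
fc = np.corrcoef(ts)                    # toy functional connectivity
seg, integ = seg_int_balance(fc)
print(f"within={seg:.3f} between={integ:.3f} balance={seg - integ:.3f}")
```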

References

1. Wang R, Lin P, Liu M, Wu Y, Zhou T, et al. Hierarchical connectome modes and critical state jointly maximize human brain functional diversity. Physical review letters. 2019 Jul 15;123(3):038301.

2. Wang R, Liu M, Cheng X, Wu Y, Hildebrandt A, et al. Segregation, integration, and balance of large-scale resting brain networks configure different cognitive abilities. Proceedings of the National Academy of Sciences. 2021 Jun 8;118(23).

P210 Cognitive demand of visuomotor task depends on information rate

Sze Ying Lam 1 , Alexandre Zénon 2

1 University of Bordeaux, Bordeaux, France

2 University of Bordeaux, INCIA, Bordeaux, France

Email: sze-ying.lam@u-bordeaux.fr

We have proposed that the cognitive cost of task performance should depend on the rate of acquisition of novel information and be independent of the amount of sensory data that can be predicted from past inputs. We previously defined and computed a lower bound for this information rate in a visuomotor tracking task (Lam & Zénon, 2021) and showed that the effective information rate in human subjects decreased as a function of the predictability of the signal, suggesting that subjects modulated their information rate to cope with the amount of noise in the signals. In the current study, we attempted to establish a positive relationship between information rate and cognitive effort. To do so, we evaluated effort by means of subjective effort ratings, pupil size data, choice preferences between conditions with different information processing rates, and dual-task interference on a concurrent auditory N-back task. Our results showed that a higher information rate in the visuomotor tracking task was associated with higher subjective mental effort ratings, larger pupil dilations during trial performance, lower choice preferences, and lower performance on the N-back task, both in terms of accuracy and reaction time. Preliminary results suggest that these associations are specific to information rate and do not depend on confounding factors such as performance and physical effort.
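
To give a feel for what an information-rate estimate looks like, here is a minimal sketch using a Gaussian approximation: if target and response increments are jointly Gaussian with correlation rho, the mutual information per sample is -0.5*log2(1 - rho^2), and multiplying by the sample rate gives bits per second. This is an illustrative simplification under our own assumptions, not the lower-bound estimator of Lam & Zénon (2021); the 60 Hz sample rate is hypothetical.

```python
# Gaussian-approximation information rate for a tracking task (toy version).
import numpy as np

def info_rate(target: np.ndarray, response: np.ndarray, fs: float) -> float:
    rho = np.corrcoef(np.diff(target), np.diff(response))[0, 1]
    bits_per_sample = -0.5 * np.log2(max(1.0 - rho**2, 1e-12))
    return bits_per_sample * fs

fs = 60.0                                    # assumed display/sampling rate (Hz)
rng = np.random.default_rng(3)
t = np.cumsum(rng.normal(size=6000))         # random-walk target trajectory
resp = t + rng.normal(size=6000) * 2.0       # noisy tracking response
print(f"{info_rate(t, resp, fs):.1f} bits/s")
```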

P211 Multi-scale spiking network model of human cortex

Jari Pronold 1 , Alexander van Meegen 1 , Hannah Vollenbröker 1 , Rembrandt Bakker 1 , Sacha van Albada 1

1 Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6, INM-10) and Institute for Advanced Simulation (IAS-6), Jülich, Germany

Email: j.pronold@fz-juelich.de

Is our current knowledge about the structural connectivity of the brain compatible with the measured activity? Using a large-scale spiking network model of leaky integrate-and-fire neurons to achieve simulations with the full neuron and synapse density, we previously answered this question in the affirmative for macaque cortex [1,2]. Here, we apply the same framework to investigate human cortex. Concretely, we present a large-scale spiking network model that relates the cortical network structure to the resting-state activity of neurons, populations, layers, and areas.

The construction of the model is based on the integration of data on cortical architecture, single-cell properties, and local and cortico-cortical connectivity into a consistent multi-scale framework. It predicts connection probabilities between any two neurons based on their types and locations within areas and layers. Every area is represented by a 1 mm2 microcircuit with area-specific architecture and the full density of neurons and synapses. The cortical architecture in terms of laminar thicknesses and neuron densities is taken from the von Economo and Koskinas atlas [3] and enriched with more detailed data extracted from the BigBrain atlas [4]. While connectivity on the area level is informed by diffusion tensor imaging (DTI) data [5], it is necessary to complement this with predictions of laminar connectivity patterns. For these we rely on predictive connectomics based on macaque data, which expresses regularities of laminar connectivity patterns as a function of cortical architecture. The local connectivity uses the model by Potjans and Diesmann [6] as a blueprint and is scaled according to the cytoarchitectonic data. Analysis of human neuron morphologies provides synapse-to-soma mappings based on layer- and cell-type-specific dendritic lengths [7]. The model contains roughly 4 million neurons and 50 billion synapses and is simulated on a supercomputer using the NEST simulator.
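
To illustrate the kind of building block such a model is composed of, here is a minimal NEST sketch (assuming NEST 3.x) of one excitatory and one inhibitory leaky integrate-and-fire population with random connectivity and Poisson drive. Population sizes, rates, weights, and indegrees are toy values of our own choosing, not the model’s parameters.

```python
# Minimal two-population LIF network in NEST 3.x (toy parameters).
import nest

nest.ResetKernel()
exc = nest.Create("iaf_psc_exp", 800)
inh = nest.Create("iaf_psc_exp", 200)
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
rec = nest.Create("spike_recorder")

syn_e = {"weight": 20.0, "delay": 1.5}     # excitatory synapse (pA, ms)
syn_i = {"weight": -100.0, "delay": 1.5}   # inhibitory synapse
conn = {"rule": "fixed_indegree", "indegree": 100}

nest.Connect(exc, exc + inh, conn, syn_e)  # E -> all
nest.Connect(inh, exc + inh, conn, syn_i)  # I -> all (inhibition-dominated)
nest.Connect(noise, exc + inh, syn_spec={"weight": 20.0})
nest.Connect(exc, rec)                     # record excitatory spikes

nest.Simulate(1000.0)                      # 1 s of biological time
n_spikes = len(rec.get("events")["senders"])
print(f"mean E rate: {n_spikes / 800:.1f} spikes/s")
```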

While the available data constrain the parameter space to some extent, the model remains underdetermined. Mean-field theory guides the exploration of the parameter space in search of a low-rate asynchronous irregular state that generates substantial inter-area interactions through cortico-cortical weights that poise the network at the edge of stability. Different realizations of the model are assessed via comparison with experimental data. The simulated functional connectivity is compared with experimental resting-state fMRI data. Furthermore, simulated spiking data are compared with spike recordings from the medial frontal cortex of epileptic patients [8]. Preliminary results show that the model can reproduce an asynchronous irregular network state and functional connectivity similar to the resting-state fMRI data. The model serves as a basis for the investigation of multi-scale structure–dynamics relationships in human cortex.

Acknowledgements

Funding: DFG SPP 2041, HBP SGA3 (grant 945539). Compute time: grant JINB33.

References

1. Schmidt M, Bakker R, Hilgetag CC, Diesmann M, van Albada SJ. Multi-scale account of the network structure of macaque visual cortex. Brain Structure and Function. 2018 Apr;223(3):1409–35.

2. Schmidt M, Bakker R, Shen K, Bezgin G, Diesmann M, et al. A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. PLoS computational biology. 2018 Oct 18;14(10):e1006359.

3. Von Economo C. Cellular structure of the human cerebral cortex. Karger Medical and Scientific Publishers; 2009.

4. Wagstyl K, Larocque S, Cucurull G, Lepage C, Cohen JP, et al. BigBrain 3D atlas of cortical layers: Cortical and laminar thickness gradients diverge in sensory and motor cortices. PLoS biology. 2020 Apr 3;18(4):e3000678.

5. Van Essen DC, Smith SM, Barch DM, Behrens TE, Yacoub E, Ugurbil K, Wu-Minn HCP Consortium. The WU-Minn human connectome project: an overview. Neuroimage. 2013 Oct 15;80:62–79.

6. Potjans TC, Diesmann M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cerebral cortex. 2014 Mar 1;24(3):785–806.

7. Mohan H, Verhoog MB, Doreswamy KK, Eyal G, Aardse R, et al. Dendritic and axonal architecture of individual pyramidal neurons across layers of adult human neocortex. Cerebral Cortex. 2015 Dec 1;25(12):4839–53.

8. Minxha J, Adolphs R, Fusi S, Mamelak AN, Rutishauser U. Flexible recruitment of memory-based choice representations by the human medial frontal cortex. Science. 2020 Jun 26;368(6498).

P212 Energy adaptive reinforcement learning

Jiamu Jiang 1 , Mark van Rossum 2

1 University of Nottingham, Mathematics, Nottingham, United Kingdom

2 University of Nottingham, School of Psychology and School of Mathematical Sciences, Nottingham, United Kingdom

Email: jiamu.jiang@nottingham.ac.uk

Changes of synaptic strength during learning allow animals to adapt to tasks and environments. However, synaptic plasticity requires significant amounts of metabolic energy, so much that learning shortens the lifespan of fruit flies by 20% when feeding is stopped, compared to naive flies [1]. Consolidated associative memories in Drosophila have different metabolic costs; for instance, the formation of protein-synthesis-dependent long-term memory (LTM) is more energetically costly than anaesthesia-resistant memory (ARM) [1]. As an organ with a key role in the regulation of energy and metabolism, the brain is likely to modulate its use of energy while learning. Indeed, to survive starvation, flies stop some forms of energy-intensive memory formation [2].

To investigate under which conditions it is better to halt energetically costly LTM plasticity, we add an energy constraint to a reinforcement learning setup. We modelled a behavioural paradigm of instrumental conditioning as a decision-making network with two populations of sensory neurons corresponding to two alternatives, connected to two populations of pre-motor neurons; the choice of action is determined by the competition between the pre-motor populations. The synaptic strengths are modified by covariance-based plasticity, modulated by reward and presynaptic activity. We associate an electric shock with one alternative; the fly should learn to choose the safe alternative and avoid the hazard from the aversive stimulus. We assume that the lifetime of the fly is affected by two hazards: (1) the aversive stimulus, and (2) when the remaining energy is low, a hazard of perishing from starvation. As the flies consume energy to learn to avoid the electric shock, they face a trade-off between starvation caused by synaptic plasticity and the hazard of the aversive stimuli. We find the optimal regulation of the memory pathway by maximizing the lifespan of the fruit flies. We implemented the two distinct consolidated memory pathways of Drosophila: a high-cost LTM pathway with strong memory retention and a low-cost ARM pathway that, however, decays over time [3]. When we implemented a single memory pathway with a fixed initial energy, we found that fruit flies with sufficient initial energy using the LTM pathway survived longer than flies that did not learn; however, when the initial energy was low, exclusively using the ARM pathway led to a longer lifespan. Next, we gated the memory pathway by an energy threshold, so that the model selects the LTM (ARM) pathway when the energy is above (below) the threshold. In this regime, the expected lifetime can exceed that of either single memory pathway. Hence, the results show that energy-adaptive learning allows fruit flies to save energy when starving and enables long-term memory retention when energy is sufficient. This learning mechanism helps fruit flies survive aversive tasks and hostile environments.
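
A toy Monte Carlo sketch of the energy-gated pathway selection described above is given below. The gating rule (LTM above the energy threshold, ARM below) follows the text; all costs, retention factors, hazard rates, and the threshold value are illustrative assumptions, not the model’s fitted parameters.

```python
# Toy simulation of energy-gated LTM/ARM pathway selection and lifespan.
import numpy as np

rng = np.random.default_rng(0)

def lifespan(energy: float, threshold: float, days: int = 200) -> int:
    memory = 0.0
    for day in range(days):
        use_ltm = energy > threshold                # gate: LTM above, ARM below
        cost, retention = (2.0, 1.0) if use_ltm else (0.5, 0.9)
        memory = min(1.0, memory * retention + 0.3) # learn; ARM decays over time
        energy -= cost + 1.0                        # plasticity + baseline metabolism
        p_shock_death = 0.05 * (1.0 - memory)       # weaker memory -> more shocks
        p_starve = 0.2 if energy < 0 else 0.0       # starvation hazard when depleted
        if rng.random() < p_shock_death + p_starve:
            return day
    return days

for e0 in (20, 60, 120):
    trials = [lifespan(e0, threshold=30) for _ in range(2000)]
    print(f"initial energy {e0:>3}: mean lifespan {np.mean(trials):.1f} days")
```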

References

1. Mery F, Kawecki TJ. A cost of long-term memory in Drosophila. Science. 2005 May 20;308(5725):1148.

2. Plaçais PY, Preat T. To favor survival under food shortage, the brain disables costly memory. Science. 2013 Jan 25;339(6118):440–2.

3. Tully T, Preat T, Boynton SC, Del Vecchio M. Genetic dissection of consolidated memory in Drosophila. Cell. 1994 Oct 7;79(1):35–47.

P213 Competition in synaptic plasticity leads to energy efficient learning

Silviu Ungureanu 1 , Mark van Rossum 2

1 University of Nottingham, School of Psychology, Nottingham, United Kingdom

2 University of Nottingham, School of Psychology and School of Mathematical Sciences, Nottingham, United Kingdom

Email: silviu.ungureanu@nottingham.ac.uk

In typical artificial neural networks with backpropagation, synaptic updates are distributed across the entire network. In principle, all synapses might be modified following a single learning event. In contrast, biological synaptic plasticity appears to be competitive at various levels, including between individual synapses [1], between dendritic branches [2,3], and between individual neurons [4]. As a result, in biology only a few synaptic connections undergo modifications at any time.

A possible reason for this restriction is that synaptic plasticity can be energetically costly [5]. Thus, competition effects may stem from a requirement that learning be "frugal", minimising the amount of metabolic energy consumed. For instance, it may be that energy budget constraints only allow a certain number of neurons to undergo memory consolidation in any particular time interval, requiring such selection mechanisms in order to allocate the energetic resources efficiently.

Here, we investigate the energetic impact of limiting neural plasticity through competition. We utilise a setup similar to that employed in [6], training a 2-layer artificial neural network on the MNIST dataset and defining energy consumption as the magnitude of changes in the model's weights. We mainly focus on neuron-level competition, using a random selection rule whereby, after a training example is presented and backpropagation gradients are computed, only a subset of neurons have their weights updated.
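
Here is a minimal sketch of that neuron-level random selection rule and the energy accounting, on a small synthetic problem rather than MNIST so the example is self-contained: after computing backprop gradients for a 2-layer network, only a random subset of hidden units gets its incoming and outgoing weights updated, and energy is tallied as the summed magnitude of the applied weight changes. Network size, learning rate, and the 25% selection fraction are illustrative assumptions.

```python
# Neuron-level competitive updates with plasticity-energy accounting (toy task).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]   # toy binary target

H = 64
W1, W2 = rng.normal(0, 0.1, (20, H)), rng.normal(0, 0.1, (H, 1))
lr, frac, energy = 0.05, 0.25, 0.0                   # update only 25% of units

for step in range(500):
    h = np.tanh(X @ W1)                              # hidden layer
    out = 1 / (1 + np.exp(-(h @ W2)))                # sigmoid output
    d_out = out - y                                  # cross-entropy gradient
    dW2 = h.T @ d_out / len(X)
    d_h = (d_out @ W2.T) * (1 - h**2)
    dW1 = X.T @ d_h / len(X)

    active = rng.random(H) < frac                    # winning subset of neurons
    dW1[:, ~active] = 0.0                            # freeze incoming weights
    dW2[~active, :] = 0.0                            # freeze outgoing weights

    W1 -= lr * dW1; W2 -= lr * dW2
    energy += lr * (np.abs(dW1).sum() + np.abs(dW2).sum())

acc = ((out > 0.5) == y).mean()
print(f"accuracy={acc:.2f}, plasticity energy={energy:.2f}")
```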

We show that spatial competition between neurons can significantly reduce the energy needed for synaptic plasticity. We observe energy savings both in terms of the total energy required to reach a set accuracy threshold, with a more than two-fold reduction in cost for large networks of over 10,000 neurons (Fig. 1), as well as in terms of the energy efficiency ratio between the minimum energy needed to learn the final set of weights and the actual energy cost. Using the same methodology, we then further investigate the effects of more refined forms of the algorithm, such as synaptic-level and refractory competition.

In conclusion, the experimentally observed spatial competition of neural plasticity may be associated with a reduction in the energy needed to learn, providing evidence for the theory that such effects are at least in part caused by metabolic energy constraints.

References

1. Sajikumar S, Morris RG, Korte M. Competition between recently potentiated synaptic inputs reveals a winner-take-all phase of synaptic tagging and capture. Proceedings of the National Academy of Sciences. 2014 Aug 19;111(33):12217-21.

2. Cichon J, Gan WB. Branch-specific dendritic Ca2+ spikes cause persistent synaptic plasticity. Nature. 2015 Apr;520(7546):180-5.

3. Sezener E, Grabska-Barwinska A, Kostadinov D, Beau M, Krishnagopal S, et al. A rapid and efficient learning rule for biological neural circuits. bioRxiv. 2021 Jan 1.

4. Josselyn SA, Tonegawa S. Memory engrams: Recalling the past and imagining the future. Science. 2020 Jan 3;367(6473).

5. Mery F, Kawecki TJ. A cost of long-term memory in Drosophila. Science. 2005 May 20;308(5725):1148.

6. Li HL, Van Rossum MC. Energy efficient synaptic plasticity. Elife. 2020 Feb 13;9:e50804.

Fig. 1

The total energy required to train the network to 95% accuracy; the learning rate is 5e-4, and the hidden layer consists of 10,000 units with exponential linear unit (ELU) activations