An exact mathematical description of computation with transient spatiotemporal dynamics in a complex-valued neural network

We study a complex-valued neural network (cv-NN) with linear, time-delayed interactions. We report that the cv-NN displays sophisticated spatiotemporal dynamics, including partially synchronized "chimera" states. We then use these spatiotemporal dynamics, in combination with a nonlinear readout, for computation. The cv-NN can instantiate dynamics-based logic gates, encode short-term memories, and mediate secure message passing through a combination of interactions and time delays. The computations in this system can be fully described by an exact, closed-form mathematical expression. Finally, using direct intracellular recordings of neurons in slices from neocortex, we demonstrate that computations in the cv-NN are decodable by living biological neurons. These results demonstrate that complex-valued linear systems can perform sophisticated computations while remaining exactly solvable. Taken together, these results open future avenues for the design of highly adaptable, bio-hybrid computing systems that can interface seamlessly with other neural networks.


I. INTRODUCTION
Spatially extended dynamics represent a powerful substrate for computation. Neural systems perform sensory computations with organized spatiotemporal dynamics traveling over maps of sensory space [1][2][3]. For example, traveling waves of spontaneous neural activity traverse highly organized cortical maps of visual space, modulating perceptual sensitivity as they travel across local neural circuits [4]. In non-biological systems, spatiotemporal patterns of optical or electromagnetic waves can perform sophisticated computations, such as predicting input sequences [5] or performing transformations [6]. Despite their many differences, these example systems all perform computations through sophisticated spatiotemporal dynamics. In general, nonlinearities in these systems are thought to be essential for spatiotemporal computation, because they can provide a rich diversity of dynamical behavior that can, in turn, be leveraged for computation [7,8]. Nonlinear dynamics are also used extensively in machine learning for training neural networks to perform specific tasks [9,10] and in physics for training reservoir computers to predict chaotic dynamics [11,12]. To understand how these systems learn, analytical techniques such as mean-field approaches have provided important insights into macroscopic features of the dynamics in these nonlinear networks [13,14], such as their autocorrelation structure and transitions to chaos [14][15][16][17]. Despite these advances, however, it remains difficult to understand the precise dynamical trajectory an individual nonlinear system uses to perform a specific task. For example, a trained recurrent neural network has a set of connection weights that can perform a perceptual decision-making task [18], where the network can integrate inputs over a long time before converging to a choice, but the precise dynamical trajectory by which the network completes the task remains difficult to access mathematically. This is because the nonlinear systems that are useful for computation do not have closed-form mathematical solutions, which could provide fundamental insight into how these networks perform tasks and eliminate the need for training paradigms that can consume large amounts of energy. For these reasons, a major recent focus has been to implement spatiotemporal computation in real-world physical systems [19], which could substantially speed up network training and reduce energy consumption. While these physical systems have provided significant advances over neural networks trained on a digital computer, they still require substantial effort to design because they are based on sophisticated nonlinear dynamics. The key missing element is a system that is sophisticated enough to perform computations, while also providing mathematical insight. While linear systems allow for closed-form mathematical expressions, they are often thought to be too simple to produce sophisticated spatiotemporal dynamics [7,20]. Nonlinear systems have, in turn, more often been the focus for computation.
Here, we study a system with linear dynamics and nonlinear readout. Several experimental and numerical simulation studies have observed that systems with this structure can be useful for computation [5,[21][22][23][24]. We report that the full computation in our system can be solved exactly. The exact solution allows designing specific computations in the system with a precise mathematical formulation and, further, provides fundamental theoretical insight into computation with linear dynamics and nonlinear readout. Specifically, time delays in the network interactions extend the window of time for which amplitudes in this network remain bounded, and during this window, the network displays rich spatiotemporal dynamics. It is these rich spatiotemporal dynamics that we find can be used for computation. These dynamics include the well-studied "chimera" states, in which clusters of order and disorder can emerge in networks where all nodes are connected in exactly the same way [25,26]. It is surprising to find chimera states in a linear system, because they are usually studied in nonlinear oscillator networks [27][28][29] or reaction-diffusion systems [30,31]. That they can occur in a linear system opens a novel avenue for studying computation with transient dynamics, which are known to be important for neural computation [32][33][34]. Here, we describe, with a precise mathematical expression, the process of computation with transient network states, for the first time.
The dynamics of this complex-valued neural network (cv-NN) is governed by the differential equation

dx(t)/dt = (iωI + K) x(t),    (1)

where x(t) ∈ C^N is a complex vector that specifies the state of the network at each point in time, and N is the number of elements in the cv-NN. Though in this work we focus on the general computational properties of this system, each node in the network can be thought of as representing the activation of a small patch of neurons in a single region, as in previous work on complex-valued models to approximate the dynamics of biological spiking networks [35]. I is the identity matrix, i is the imaginary unit, and ω specifies the frequency at which the nodes' dynamics evolve. Throughout this work, we set ω = 2πf with a natural frequency of f = 10 Hz. This frequency sets a timescale relevant to information processing in the brain. Other values of f can be chosen without loss of generality.
We have previously shown that this complex-valued system displays the hallmarks of canonical synchronization behavior found in oscillator networks [36,37], and while we consider the case of homogeneous natural frequencies here, the approach also generalizes to heterogeneous natural frequencies [38,39]. It is well established that a fixed time delay can be approximated by a phase delay in the oscillator interaction term [40,41]. The matrix K = ϵe^{-iϕ}A collects information on interactions between the nodes, where ϵ is the coupling strength and ϕ = ωT is a phase delay approximating time delay T. The matrix A represents connection weights a_ij between nodes i and j in the network (see Appendix A). Here, we consider the nodes in the cv-NN to be coupled in a one-dimensional ring with periodic boundary conditions, where the connection weight between two nodes decays in a distance-dependent fashion. To implement dynamics-based computation, we set up a system with an input layer, the cv-NN, and a decoder that interprets the phase dynamics of the network (Fig. 1a). We utilize polar notation for complex numbers throughout the text, which provides a direct way to analyze the phase dynamics that will be used for computation. Specifically, the cv-NN performs computations with spatiotemporal dynamics in the recurrent network, in combination with a nonlinearity in the readout. With this formulation, we can now write the entire computation in the system with the following closed-form expression:

Z = Θ_σ R_k Arg[D_t x(0)],    (2)

where the operator D_t = e^{iωt} e^{Kt} represents an exact solution for the linear dynamics in Eq. (1), starting from initial conditions x(0) (see Appendix B), and the operator Θ_σ R_k represents the nonlinear readout applied to the phase pattern of the network (see Appendix C).
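As a concrete illustration, the linear dynamics and its exact propagator can be sketched numerically. The ring size, decay constant, and coupling values below are illustrative choices, not the paper's exact settings:

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the cv-NN dynamics, dx/dt = (i*omega*I + K) x, with
# K = eps * exp(-i*phi) * A on a ring with distance-dependent weights.
N = 50
f = 10.0                      # natural frequency (Hz), as in the text
omega = 2 * np.pi * f
eps, phi = 0.1, np.pi / 2     # coupling strength and phase delay (assumed values)

# Ring adjacency with distance-dependent decay (one plausible choice)
idx = np.arange(N)
dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                  N - np.abs(idx[:, None] - idx[None, :]))
A = np.exp(-dist / 5.0)
np.fill_diagonal(A, 0.0)

K = eps * np.exp(-1j * phi) * A

def D(t):
    """Exact propagator D_t = e^{i*omega*t} e^{K t}."""
    return np.exp(1j * omega * t) * expm(K * t)

rng = np.random.default_rng(0)
x0 = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # random-phase initial state

# Check: the exact solution matches fine-step Euler integration of the ODE
T, dt = 0.02, 1e-6
x = x0.copy()
M = 1j * omega * np.eye(N) + K
for _ in range(int(T / dt)):
    x = x + dt * (M @ x)
assert np.allclose(x, D(T) @ x0, atol=1e-3)
```

With ϕ = π/2 and a symmetric A, K is anti-Hermitian, so the propagator preserves amplitudes, consistent with the bounded transient window described in the text.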

II. RESULTS
To describe the system in more detail, we consider computation in the cv-NN in an input-decoder framework (Fig. 1a), where the N nodes in the network receive connections from M input nodes and project to a set of L output nodes that constitute a decoder. The set of weighted connections from the input nodes to the network are collected into an M × N weight matrix W, which can take a variety of forms. In one setup, each input node could project to a single network node. In this case, W is the N × N identity matrix I_N, and all input nodes drive the network with specific complex numbers. In another setup, each input node may project to the full network with varying weights. In this case, W is in the set of M × N matrices with complex coefficients (W ∈ M_{M×N}(C)), and input nodes are either "on" or "off". In either case, the input to the network takes the form of a complex-valued vector, which is applied through multiplication with the state vector of the cv-NN (see Fig. A2). This process can be thought of as "nudging" the state of the network onto the trajectory in state space that will allow it to evolve with the desired dynamics. Since the dynamics of the cv-NN is governed by the operator D, the input weights required for the system to evolve to a specific target pattern can be computed precisely using the inverse operator D^{-1} (see Appendix B and Supplement Fig. S1). Starting from random and asynchronous initial conditions, we can calculate the input needed for the system to evolve to an arbitrary state several seconds into the future ("input 1", Figs. 1b, 1c). Adding this input to the state vector of the cv-NN brings the network to the correct state required for evolution to the target. We can then use these inputs to design specific dynamics in the cv-NN, in both amplitude and phase, and we use the phase dynamics for computation throughout this paper. Figure 1 illustrates an example of this input-output framework in the cv-NN, depicting both the phase (Fig. 1b) and amplitude (Fig. 1c) at each node in the network. In this example, input 1 drives the network to a partially phase synchronized state 4 seconds in the future (Fig. 1b, Movie S1). This partially synchronized state, where a specific group of nodes has the same phase while other nodes remain in an asynchronous state, represents a chimera state [25]. Chimera states have been studied in many nonlinear systems - e.g., reaction-diffusion systems [30,31], recurrent neural networks [42], and networks of Kuramoto oscillators [25,28,29]. We report that the linear cv-NN displays a range of chimera states, from the short-lived states that we use here for computation (Fig. 1b) to chimeras that exist for long timescales (Fig. S2). Because we now have a system that exhibits sophisticated dynamics such as these transient chimeras, and has a closed-form expression, we can study the input-output mappings in the cv-NN and design dynamics to perform computation.
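The input-design step described above can be sketched in the same way: to make the network reach a target pattern χ at time τ, the state just after the input must equal D_τ^{-1} χ, so the required additive input is D_τ^{-1} χ - x(0). The target pattern, cluster location, and network parameters below are assumptions for illustration:

```python
import numpy as np
from scipy.linalg import expm

# Sketch of input design via the inverse propagator ("input 1" in Fig. 1).
# Network parameters are illustrative, not the paper's exact settings.
N = 50
omega = 2 * np.pi * 10.0
idx = np.arange(N)
dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                  N - np.abs(idx[:, None] - idx[None, :]))
A = np.exp(-dist / 5.0); np.fill_diagonal(A, 0.0)
K = 0.1 * np.exp(-1j * np.pi / 2) * A

def D(t):
    return np.exp(1j * omega * t) * expm(K * t)

rng = np.random.default_rng(1)
x0 = np.exp(1j * rng.uniform(0, 2 * np.pi, N))    # asynchronous initial state

# Target: a phase-coherent cluster in the middle of the ring at tau = 4 s
chi = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
chi[20:30] = 1.0 + 0j                              # cluster nodes share phase 0

tau = 4.0
inp = np.linalg.solve(D(tau), chi) - x0            # required additive input

x_tau = D(tau) @ (x0 + inp)                        # evolve the nudged state
assert np.allclose(x_tau, chi, atol=1e-6)
```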
Not all desired target states are achievable with the network dynamics, however. For instance, if the target state is a chimera, but the network is in a fast-synchronizing regime, the network dynamics will not match the target state. With this in mind, we introduce a similarity measurement, Eq. (3), to quantify the match between the target phase pattern Arg[χ], designed to appear at time τ, and the phase pattern displayed by the system at that time, Arg[x(τ)].
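One concrete instantiation of such a phase-pattern similarity is the mean cosine of the phase differences between target and realized patterns; this particular normalization is an illustrative stand-in, not necessarily the exact form of Eq. (3):

```python
import numpy as np

def similarity(chi, x_tau):
    """Mean cosine of phase differences between target and realized patterns.
    Returns 1 for a perfect phase match and values near 0 for unrelated
    phases. (Illustrative stand-in for the similarity metric of Eq. (3).)"""
    dphi = np.angle(chi) - np.angle(x_tau)
    return float(np.mean(np.cos(dphi)))

rng = np.random.default_rng(2)
target = np.exp(1j * rng.uniform(0, 2 * np.pi, 100))
assert np.isclose(similarity(target, target), 1.0)          # perfect match
assert abs(similarity(target, np.exp(1j * rng.uniform(0, 2 * np.pi, 100)))) < 0.3
```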
In general, amplitudes in the linear cv-NN will grow to become unbounded or decay to zero, leaving only a transient time window for computation. Further, in many configurations the network will quickly synchronize, which collapses all inputs to the same output state (see Appendix D, Fig. A4). We find, however, that for intermediate interaction strength ϵ and values of ϕ within an interval near π/2, this procedure can generate arbitrary spatiotemporal patterns up to 10 s into the future (Fig. 1d) (see also Appendix B, and Figs. S1 and A5). It is important to note that, while here we consider computations based on phase synchronized clusters in the network, this framework naturally generalizes to different phase patterns. This result demonstrates the cv-NN can achieve a target state at a specific time window in the future. To demonstrate that these states can be used for computation, we implement the cv-NN with two possible inputs, X and Y, and one output, Z, which is decoded from the network phase dynamics (Fig. 2a, Movie S2). This setup allows for the realization of an XOR logic gate (Fig. 2b). When X and Y are both 0, the cv-NN remains asynchronous, and no chimera occurs (Z = 0) (Fig. 2c, top). When either input X or Y is 1, a synchronized cluster occurs in the center of the network (Z = 1) (Fig. 2c, middle two rows). Lastly, when X and Y are both 1, these competing inputs interfere in such a way that no synchronized cluster is observed in the network (Fig. 2c, bottom). It is important to note that, in contrast to the way computations are often implemented in neural networks, where nonlinearities at single neurons alternate with pooling across trained network connections, the interference underlying the mapping (X=1, Y=1) → Z=0 occurs in the linear dynamics of the cv-NN, with the nonlinearity Θ_σ R_k only applied once at the readout. This specific combination of linear dynamics and simple nonlinear readout allows specifying the XOR operation precisely in a closed-form expression (Eq. (2)). This XOR gate is robust to noise (see Appendix F and Supplement Fig. S3). Having a closed-form expression opens the opportunity to generalize easily to other standard logic gates (Supplement Fig. S4) and, potentially, more complex logic operations, in a natural way.
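The interference behind the (X=1, Y=1) → Z=0 mapping can be sketched schematically: because the dynamics are linear, the state under both inputs is the sum of the individual responses, so choosing the two inputs in anti-phase makes the joint drive cancel. The input vectors, cluster location, and coherence threshold below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

# Toy illustration of interference-based XOR in a linear system: each input
# alone produces a phase-coherent cluster, but their linear superposition
# cancels, leaving only the asynchronous background.
rng = np.random.default_rng(3)
N, cluster = 100, slice(40, 60)

background = 0.05 * np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # asynchronous state
w_X = np.zeros(N, complex); w_X[cluster] = 1.0                 # drives a coherent cluster
w_Y = -w_X                                                     # same cluster, anti-phase

def coherence(x, sl):
    """Kuramoto-style order parameter over the cluster nodes."""
    return abs(np.mean(np.exp(1j * np.angle(x[sl]))))

def Z(X, Y, threshold=0.9):
    state = background + X * w_X + Y * w_Y     # linear superposition of inputs
    return int(coherence(state, cluster) > threshold)

# XOR truth table emerges from linear interference plus a threshold readout
assert (Z(0, 0), Z(1, 0), Z(0, 1), Z(1, 1)) == (0, 1, 1, 0)
```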
The cv-NN can thus perform simple spatiotemporal computations by "holding" target states several seconds into the future. These target states could, conceivably, enable a form of in-memory computation, which has proven to be a promising departure from traditional models of von Neumann computing architectures [43]. To explore the possibility of performing in-memory computations with target states in the cv-NN, we considered an example task in which 1 of 8 items is to be held in short-term memory for 3 seconds. An input to the cv-NN cues the item to be remembered, which is encoded by a specific pattern in the cv-NN dynamics, and can then be read out by decoder units with connections to nearby nodes in the network (Fig. 3a). As before, we store the item in a coherent phase cluster at a specific position in the cv-NN. This cluster then triggers the decoding unit corresponding to the item held in memory (see Appendix C). The network is initially asynchronous, due to random initial conditions (t < 1 second, Fig. 3a). Following a specific input that cues item 2 at t = 1 second, the network evolves to a state with a coherent phase cluster centered at decoder 2, representing the item held in memory. After the item is correctly decoded, another input is applied, and the cv-NN returns to an asynchronous state (t > 4 seconds, Fig. 3a). As with the implementation of logic gates, this framework for short-term memory is robust to noise and perturbation (Supplement Fig. S4).
A key feature of in-memory computation is the ability to update online, a feature shared with biological working memory [44]. For instance, if someone is asked to keep a phone number in memory, it is also possible for them to update the last digit from a "1" to a "9". Online updates provide biological working memory with the flexibility to adapt to inputs and solve problems over extended time scales [45]. To demonstrate online updating, we consider a longer task where the cv-NN must switch between items 2 and 6 after 4 seconds (Fig. 3b, Movie S3). Importantly, the input cue needed for the switch is given by a single vector that can be computed locally in time, without requiring future information about the cv-NN's state. These results demonstrate that the cv-NN can store short-term memories with a process that can be both updated online and described with a mathematically exact solution.
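A minimal sketch of the 8-item readout, assuming each decoder unit pools the phase coherence of a window of nearby nodes (the window layout and sizes are assumptions for illustration):

```python
import numpy as np

# Sketch of the 8-item decoder: each decoder unit reads a window of nearby
# nodes and reports the local phase coherence; the remembered item is the
# window whose nodes are phase synchronized.
N, n_items = 200, 8
windows = [slice(k * N // n_items, (k + 1) * N // n_items) for k in range(n_items)]

def decode(x):
    R = [abs(np.mean(np.exp(1j * np.angle(x[w])))) for w in windows]
    return int(np.argmax(R))

# Example: a state holding "item 2" as a coherent cluster in window 2
rng = np.random.default_rng(4)
x = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # asynchronous elsewhere
x[windows[2]] = np.exp(1j * 0.7)                # cluster nodes share a phase

assert decode(x) == 2
```

Online updating amounts to applying a new input that moves the coherent cluster to a different window, after which the same decoder reports the new item.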
Short-term states in the cv-NN can also be used to encode and transmit information between two or more sources, in a simple symmetric-key encryption format [46]. To demonstrate this, we mapped different coherent phase clusters to each letter of the English alphabet (and one to a blank space), which leads to 27 different patterns constituting a "chimera alphabet" (see Appendix G). We then consider the traditional scenario in which Bob sends a message to Alice, which Eve tries to intercept (Fig. 4a). Bob and Alice agree on a secret key, {ω, x(0)}, where ω denotes the intrinsic frequency and x(0) the initial conditions. The chimera alphabet and the network structure (K) form the public structure through which the message is transmitted. Eve therefore knows the full transmission framework, including K, the chimera alphabet, and the format of the ciphertext, but must attempt to guess the secret key shared between Alice and Bob. To encrypt a message, Bob first chooses a set of target times at which the encoded letters should appear in Alice's cv-NN. He then uses D^{-1} to compute the set of inputs (Ij) to apply at specific input times (tj) so that the chimera letters appear as desired (see Appendix B). Bob sends Alice the ciphertext {Ij, tj}. Alice initializes her cv-NN using the shared secret {ω, x(0)}, then applies the ciphertext, letting the network dynamics evolve according to the operator D. Because Alice has the secret key, synchronized phase clusters will appear. When this happens, Alice decodes each letter of the message using the public chimera alphabet (Fig. 4b, Movie S4). We note that the synchronized clusters appear at the target times chosen by Bob, but that these times do not need to be known by Alice and are discovered in the spatiotemporal dynamics of the cv-NN. At the same time, an eavesdropper, Eve, intercepts the ciphertext and applies the inputs to the public network to decode the encrypted message; however, because Eve does not have the secret key {ω, x(0)}, Eve's network does not reach the chimera states and will, in practice, evolve to synchronized states. This example of dynamics-based encryption is robust to random attacks. Randomly guessing ω and x(0) does not produce phase clusters from the alphabet (Fig. A7). Further, the inputs Ij must be applied in the correct sequence and at the correct times tj; otherwise, the target patterns are not obtained. Finally, one may question whether the synchronization of Eve's network (Fig. 4c) could offer her some insight into the private information. This is not the case, however, because in practice Bob and Alice can always extend the private information to a series of keys {ω, x(0)} and jump between these keys in a sequence [47,48].
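The encryption scheme can be sketched end-to-end under the same linear-propagator assumptions. Here only the initial-state half of the secret key is varied, and the network parameters and single-letter "message" are illustrative:

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch of dynamics-based encryption: Bob computes the input needed
# so that a target pattern chi appears at time tau in a network initialized
# with the shared secret x(0). A receiver with a different initial state does
# not recover the pattern. Parameters are illustrative.
N = 50
omega = 2 * np.pi * 10.0
idx = np.arange(N)
dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                  N - np.abs(idx[:, None] - idx[None, :]))
A = np.exp(-dist / 5.0); np.fill_diagonal(A, 0.0)
K = 0.1 * np.exp(-1j * np.pi / 2) * A            # public structure

def D(t):
    return np.exp(1j * omega * t) * expm(K * t)

rng = np.random.default_rng(5)
x0_secret = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # shared secret x(0)
chi = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
chi[20:30] = 1.0                                        # "letter": cluster at nodes 20-30

tau = 2.0
I_cipher = np.linalg.solve(D(tau), chi) - x0_secret     # Bob's ciphertext input

def similarity(a, b):
    return float(np.mean(np.cos(np.angle(a) - np.angle(b))))

alice = D(tau) @ (x0_secret + I_cipher)                 # correct key: pattern appears
eve = D(tau) @ (np.exp(1j * rng.uniform(0, 2 * np.pi, N)) + I_cipher)  # wrong key

assert similarity(alice, chi) > 0.99
assert similarity(eve, chi) < 0.9
```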
These results demonstrate the cv-NN can perform computations, store short-term memories, and enable secure message passing, but is it possible to communicate with a biological neuron? We next test whether biological neurons can decode the spatiotemporal dynamics underlying computation in the cv-NN. To do this, we injected the cv-NN dynamics directly as a current into a biological cell via an intracellular recording electrode (Fig. 5a, see also Appendix H and Fig. A8), and then used the resulting spikes generated by the cell as the decoder (see Appendix I for details about the experiments). We then implemented the short-term memory task where 1 of 8 items is to be held in memory (Fig. 3). As before, the cv-NN holds an item in short-term memory through the position of the phase coherent cluster. We then systematically injected dynamics from subsets of the cv-NN as a current into the biological neuron in separate trials, effectively using the biological cell in place of the 8 decoding units used previously (Fig. 5b). Inputs from the subset of the network corresponding to the remembered item sum constructively and cause the biological neuron to fire (black trace, Fig. 5c), while inputs from outside the coherent phase cluster sum destructively and do not cause the neuron to fire (gray trace, Fig. 5c). Over several trials, the biological neuron repeatedly spiked successfully for the remembered item and not for other inputs (Fig. 5d). These results are robust for different short-term memory items and different scaling factors to translate the cv-NN dynamics into a biological current (Supplement Fig. S6), and are consistent with a standard mathematical model of the neuron (Supplement Fig. S7). It is important to note that, in contrast to standard mathematical models of single neurons, which always fire a spike at a fixed threshold potential and instantaneously reset, biological neurons have variable thresholds that change dynamically in time and with different inputs [49]. Even under these conditions, however, computations in the cv-NN can be successfully implemented by real biological cells. Taken together, these results open a new path to neuron-computer interfaces with exactly solvable dynamics.

Figure 5. We implement the cv-NN using real, biological neurons as the decoders. To do so, we inject the dynamics of the cv-NN as a current into a biological neuron, whose resulting physiological signal is used for decoding. (b) As an example, we consider the short-term memory task represented in Fig. 3. The location of the phase synchronized cluster indicates which item is being remembered (item 2 in this case). (c) The current input created from the segment of the network corresponding to the phase synchronized cluster causes the biological neuron decoder to fire repeatedly (black trace). However, when the current input corresponds to an asynchronous segment of the network, the neuron does not fire (gray trace). (d) This procedure is repeated for several different trials, which shows successful decoding.
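The constructive-versus-destructive summation at the biological decoder can be sketched with a leaky integrate-and-fire (LIF) stand-in for the recorded neuron; all cell and current parameters below are illustrative assumptions, not measured values:

```python
import numpy as np

# Sketch of the biological decoding step: currents from a phase-coherent
# cluster sum constructively and drive spiking in an LIF neuron, while
# currents from an asynchronous segment largely cancel.
def lif_spike_count(phases, f=10.0, T=0.5, dt=1e-4, tau_m=0.01, v_th=12.0):
    """Integrate I(t) = sum_j cos(2*pi*f*t + phase_j) in an LIF neuron."""
    spikes, v = 0, 0.0
    for tk in np.arange(0, T, dt):
        I = np.sum(np.cos(2 * np.pi * f * tk + phases))
        v += dt * (-v + I) / tau_m       # leaky integration of the current
        if v >= v_th:
            spikes += 1
            v = 0.0                      # instantaneous reset
    return spikes

rng = np.random.default_rng(6)
coherent = np.zeros(20)                          # cluster: identical phases
asynchronous = rng.uniform(0, 2 * np.pi, 20)     # asynchronous segment

assert lif_spike_count(coherent) > 0             # constructive sum -> spikes
assert lif_spike_count(asynchronous) == 0        # destructive sum -> silent
```

Unlike this idealized fixed-threshold model, the biological neuron's threshold varies in time, which is why the robustness checks in the text (Supplement Figs. S6, S7) matter.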

III. DISCUSSION
In this paper, we introduce a cv-NN that can perform computations while also being exactly solvable. These results provide a comprehensive theoretical framework to understand computation with linear systems and nonlinear readout. These results unify previous experimental [5,21,23] and numerical observations [22] of computation in linear systems with nonlinear readout, in addition to providing a precise mathematical framework in which to design networks to perform computation. In this way, we leverage the exact mathematical description of spatiotemporal dynamics in the cv-NN to perform computations that can be precisely interpreted. This mathematical framework allows performing multiple types of tasks with the same network, circumventing the need for computationally expensive training algorithms and hyperparameter tuning. This is, to the best of our knowledge, the first example of computation with spatiotemporal dynamics where such a closed-form expression - precisely describing the whole dynamics-based computation, from the input to the output - can be obtained.
Computing with linear network dynamics may at first seem counter-intuitive. Research in both neural networks and dynamics-based computation has largely focused on nonlinear systems, both because real-world systems are in general nonlinear [11,12,[50][51][52] and because saturating nonlinearities can keep RNN activity within bounded intervals [15,53]. It is well known that RNNs can be trained to perform tasks similar to those we have studied here [9,42,54]. At the same time, however, it is in general difficult to tune RNNs to achieve desired task performance [55], and standard training algorithms, such as backpropagation through time (BPTT), are both difficult to apply [56] and difficult to interpret when training succeeds. In addition, previous work in dynamics-based computation with nonlinear systems generally required mapping the entire parameter space of the systems under consideration through exhaustive numerical simulation [57][58][59]. As an alternative to standard RNNs, it is increasingly appreciated that nonlinear oscillator networks can be trained to perform sophisticated computations [60][61][62][63][64][65]. In this work, we have developed a mathematical framework that allows computation with linear oscillator dynamics and nonlinear readout, in a system which has the advantage that it can be exactly described in a closed-form expression. This provides a precise mathematical description of the dynamical trajectory the cv-NN uses to perform an individual computation, as it is generated through the combination of inputs, recurrent dynamics, time delays, and nonlinear readout. In recent work, we have also developed a detailed analytical understanding of the connection between nonlinear oscillator networks and complex-valued linear systems [36,38,39]. This mathematical framework allowed us to understand how sophisticated spatiotemporal dynamics - and, specifically, transient chimera states - can be found in networks of complex-valued oscillators with linear interactions. Importantly, our results are consistent with recent computational work on linear RNNs, which has shown that linear RNNs, interleaved with nonlinear multi-layer perceptrons, can show improved performance and efficiency compared to modern Transformer networks in certain key tasks [66].
The computations we study here are possible in a linear system because we are considering the transient dynamics. Transient and spatiotemporal dynamics are highly useful for computation in artificial neural networks [32,33,67] and have been directly tied to computation in some biological neural systems [68][69][70]. Leveraging our analytical framework allowed us to construct this linear, complex-valued system, where we can meaningfully compute with phase during the transient regime that extends for relatively long times. It is this technical advance that allows us to construct sophisticated computations with transient dynamics that can be precisely described by a mathematical framework. The results we introduce in this paper can thus provide a fundamentally new way of looking at computation. This network can perform nontrivial computations that can also be precisely specified by a closed-form mathematical expression. From this perspective, the results we report here can advance the level to which detailed analytical explanations of computation in neural networks may be possible.
Finally, the connection between working memory and dynamics-based computation identified in this work may provide insight into models of short-term memory in artificial and biological neural networks. Mathematical approaches to working memory in the executive regions of the brain are often based on static "bump" attractors [71,72], where nodes within a bump of elevated activity are responsible for holding activity in working memory. Empirical results have shown, however, that more sophisticated dynamics, such as chaotic attractors, may play an important role in short-term memory processes [73]. The cv-NN developed here can be generalized to store short-term memories through arbitrary spatiotemporal patterns, beyond the partially synchronized chimera states that we utilized here and that bear resemblance to the bump attractor models of working memory. In this way, the particular dynamics-based short-term memory and computation studied in this work may possess many desirable features of in-memory computation [43], which has generated much interest as the next generation of computing hardware. The cv-NN introduced here may provide a guiding mathematical framework for implementing and training in-memory computing systems, both in digital simulations and in physical hardware, with the advantage of the training being specified by precise mathematical expressions.

Figure 1. A cv-NN with linear dynamics exhibits sophisticated spatiotemporal patterns. (a) Our framework is composed of an input layer connected to a recurrent layer (the cv-NN) that generates dynamics which are then interpreted by a decoder. (b-c) We use the inverse of operator D to determine the input required to drive the network to a specific target pattern at a precise time, several seconds into the future (see Appendix B). The dynamics of the cv-NN starts in an asynchronous state due to random initial conditions. After input 1, the network evolves to a chimera state pattern. A second input leads the cv-NN back to an asynchronous state. The cv-NN exhibits both (b) phase and (c) amplitude dynamics. Throughout this work, we use the phase dynamics to perform computations. (d) The success of this process - whether or not the cv-NN achieves the target pattern at the desired time - is quantified by the similarity metric Eq. (3). There is a specific range of the delay parameter ϕ, and of τ, for which the target pattern is achieved by the cv-NN dynamics.

Figure 2. The cv-NN can perform simple computations. (a) Here, we consider two binary inputs "X" and "Y" applied to the cv-NN, and we use its dynamics to compute the output "Z" that is interpreted by the decoder. When one (or both) of the inputs turns on, the input specified by the weighted connections between the input node and the network is added to the state vector of the cv-NN. (b) Here, we implement an XOR gate. Rows in this table represent the input node state vector, [X, Y], and the resulting output, Z. (c) When X = 1 and Y = 0, or X = 0 and Y = 1 (i.e., precisely one input is applied), we observe a coherent cluster in the spatiotemporal dynamics. This phase synchronized cluster is recognized by the decoder, which returns Z = 1. However, when X = 0 and Y = 0, or X = 1 and Y = 1 (i.e., neither input is applied or both inputs are applied), no phase synchronized cluster appears, and the decoder returns Z = 0.

Figure 3. The cv-NN can perform short-term memory tasks and online updating. (a) We use our computational framework to perform a short-term memory task in which 1 of 8 possible items is to be held in working memory for 3 seconds. The remembered item is encoded in the cv-NN dynamics by the location of the phase-synchronized cluster and can be read out by a simple decoder. We obtain analytically the input necessary to drive the cv-NN dynamics to the specific pattern for each of the 8 items. In the context of this task, the input can be understood as a cue. In the example shown here, the input cuing item 2 is applied, and the cv-NN dynamics exhibit the corresponding chimera state. Once the decoder successfully interprets the phase dynamics, a second input (input 2) drives the network back to asynchronous behavior. (b) This approach also allows memories to be updated online. In this example, due to the first cue, the network initially stores item 2 in memory. The application of input 2 (second cue) updates this memory to item 6. After the item is decoded, input 3 drives the cv-NN dynamics back to an asynchronous state. Our approach naturally generalizes to online updating because we have a closed-form mathematical expression describing the whole process, so no information about the future state of the cv-NN is needed to make updates.
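A minimal sketch of the encoding and decoding in panel (a), assuming, as the caption describes, that item k is marked by a phase-synchronized cluster in the k-th segment of the network. The segment size and the argmax readout are our illustrative choices, not necessarily the paper's decoder.

```python
import numpy as np

N, n_items = 128, 8
seg = N // n_items  # 16 nodes per item-specific segment

def encode(item, rng):
    """Hypothetical encoding: item k -> synchronized cluster in the k-th segment."""
    phases = rng.uniform(0, 2 * np.pi, N)       # asynchronous background
    phases[item * seg:(item + 1) * seg] = 0.0   # phase-aligned cluster marks the item
    return phases

def decode(phases):
    """Read out the item as the segment with the highest order parameter."""
    R = [np.abs(np.exp(1j * phases[k * seg:(k + 1) * seg]).mean())
         for k in range(n_items)]
    return int(np.argmax(R))

rng = np.random.default_rng(2)
assert all(decode(encode(k, rng)) == k for k in range(n_items))
```

Online updating then amounts to computing (analytically, as in Fig. 1) the input that moves the cluster from one segment to another at a chosen future time.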

Figure 4. The cv-NN enables message transmission based on spatiotemporal dynamics. (a) By associating letters with specific spatiotemporal patterns (different phase-synchronized clusters that form a public alphabet; see Appendix G), we can use our framework for message transmission. In this example, we consider the traditional scenario where Bob sends a message to Alice, which Eve tries to intercept. Here, the public structure is given by the parameters ϵ, α, ϕ and the matrix A. Further, Bob and Alice agree in advance on the parameter ω and the initial state of the cv-NN, x(0), which constitute the secret key. To encrypt his message, Bob decides on a set of target times at which the phase clusters corresponding to each letter of his message should appear. He chooses a set of input times t_j and uses the inverse operator D⁻¹ to obtain the required inputs I_j. He then sends the ciphertext {I_j, t_j} to Alice. (b) Alice then implements the cv-NN using the secret key {ω, x(0)} and applies the inputs I_j at the times t_j. The resulting spatiotemporal dynamics of the cv-NN can then be interpreted by the decoder using the public alphabet: the successive phase clusters spell out the message "HELLO". (c) At the same time, Eve uses the public information to build the cv-NN and tries to intercept the message. Even though Eve is able to obtain the ciphertext, she is not able to decode the message: when she applies the inputs I_j at the correct times t_j, she does not obtain the phase clusters in the dynamics of her cv-NN because she does not have access to the secret key {ω, x(0)}.
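The protocol in panels (a-c) can be sketched end to end for a single letter, again using a delay-free linear cv-NN. The operator, the public alphabet, and all parameter values below are illustrative assumptions; the paper's exact model includes the delay τ and is given in its appendices. The point of the sketch is that the input Bob computes only produces the target pattern when propagated from the correct secret initial state.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
N = 64
# Public structure (hypothetical values): coupling matrix and parameters.
A = (np.abs(np.subtract.outer(np.arange(N), np.arange(N))) <= 4).astype(float)
eps, phi = 0.05, 0.2

def operator(omega):
    return 1j * omega * np.eye(N) + eps * np.exp(1j * phi) * A

# Secret key shared by Bob and Alice: omega and the initial state x(0).
omega_key = 1.3
x0_key = np.exp(1j * 2 * np.pi * rng.random(N))

def pattern(letter):
    """Public alphabet: each letter maps to a cluster at a letter-specific spot."""
    p = np.exp(1j * 2 * np.pi * np.linspace(0, 1, N))  # deterministic background
    k = (ord(letter) - ord('A')) % 8
    p[k * 8:(k + 1) * 8] = 1.0                         # phase-aligned cluster
    return p

M = operator(omega_key)
t_in, t_read = 1.0, 2.0
# Bob: ciphertext input that drives the keyed network to 'H' at the read time.
I_H = expm(-M * (t_read - t_in)) @ pattern('H') - expm(M * t_in) @ x0_key

# Alice (correct key) recovers the pattern; Eve (wrong x(0)) does not.
x_alice = expm(M * (t_read - t_in)) @ (expm(M * t_in) @ x0_key + I_H)
x_eve = expm(M * (t_read - t_in)) @ (expm(M * t_in) @ np.ones(N) + I_H)
assert np.allclose(x_alice, pattern('H'))
assert not np.allclose(x_eve, pattern('H'))
```

Repeating this for one input per letter, with distinct input and read times, yields the full message-passing scheme of the figure.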

Figure 5. Biological neurons can successfully decode the dynamics of the cv-NN. (a) We implement the cv-NN using real, biological neurons as the decoders. To do so, we inject the dynamics of the cv-NN as a current into a biological neuron, whose resulting physiological signal is used for decoding. (b) As an example, we consider the short-term memory task represented in Fig. 3. The location of the phase-synchronized cluster indicates which item is being remembered (item 2 in this case). (c) The current input created from the segment of the network corresponding to the phase-synchronized cluster causes the biological neuron decoder to fire repeatedly (black trace). However, when the current input corresponds to an asynchronous segment of the network, the neuron does not fire (gray trace). (d) This procedure is repeated over many trials, demonstrating successful decoding.
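The effect in panel (c) can be sketched with a leaky integrate-and-fire (LIF) neuron standing in for the biological decoder: the summed activity of a phase-synchronized segment produces a large coherent current that evokes spikes, while the contributions of an asynchronous segment largely cancel. All parameters (membrane time constant, threshold, current scale, oscillation frequency) are illustrative assumptions, not values from the experiments.

```python
import numpy as np

def lif_spikes(current, dt=1e-4, tau=0.02, v_th=1.0):
    """Leaky integrate-and-fire: count spikes evoked by an injected current trace."""
    v, n = 0.0, 0
    for I in current:
        v += dt * (-v / tau + I)   # leaky membrane integration (Euler step)
        if v >= v_th:              # threshold crossing: spike and reset
            v, n = 0.0, n + 1
    return n

t = np.arange(0.0, 1.0, 1e-4)
rng = np.random.default_rng(4)
omega = 2 * np.pi * 5.0
phases = rng.uniform(0, 2 * np.pi, 20)
# Mean activity of a 20-node segment: synchronized (equal phases) vs asynchronous.
I_sync = 100 * np.cos(omega * t)                                       # coherent
I_async = 100 * np.mean(np.cos(omega * t[:, None] + phases), axis=1)   # cancels

assert lif_spikes(I_sync) > lif_spikes(I_async)
```

This reproduces the qualitative contrast between the black (repeated firing) and gray (no firing) traces in the figure.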