Asynchronous Networks and Event Driven Dynamics

Real-world networks in technology, engineering and biology often exhibit dynamics that cannot be adequately reproduced using network models given by smooth dynamical systems and a fixed network topology. Asynchronous networks give a theoretical and conceptual framework for the study of network dynamics where nodes can evolve independently of one another, be constrained, stop, and later restart, and where the interaction between different components of the network may depend on time, state, and stochastic effects. This framework is sufficiently general to encompass a wide range of applications ranging from engineering to neuroscience. Typically, dynamics is piecewise smooth and there are relationships with Filippov systems. In the first part of the paper, we give examples of asynchronous networks, and describe the basic formalism and structure. In the second part, we make the notion of a functional asynchronous network rigorous, discuss the phenomenon of dynamical locks, and present a foundational result on the spatiotemporal factorization of the dynamics for a large class of functional asynchronous networks.


Introduction
In this work we develop a theory of asynchronous networks and event driven dynamics. This theory constitutes an approach to network dynamics that takes account of features encountered in networks from modern technology, engineering, and biology, especially neuroscience. For these networks dynamics can involve a mix of distributed and decentralized control, adaptivity, event driven dynamics, switching, varying network topology and hybrid dynamics (continuous and discrete). The associated network dynamics will generally only be piecewise smooth, nodes may stop and later restart and there may be no intrinsic global time (we give specific examples and definitions later). Significantly, many of these networks have a function. For example, transportation networks bring people and goods from one point to another and neural networks may perform pattern recognition or computation.
Given the success of network models based on smooth differential equations and methods based on statistical physics, thermodynamic formalism and averaging (which typically lead to smooth network dynamics), it is not unreasonable to ask whether it is necessary to incorporate issues such as nonsmoothness in a theory of network dynamics. While nonsmooth dynamics is more familiar in engineering than in physics, we argue below that ideas from engineering, control and nonsmooth dynamics apply to many classes of network and that nonsmoothness often cannot be ignored in the analysis of network function. As part of these introductory comments, we also explain the motivation underlying our work, and describe one of our main results: the modularization of dynamics theorem.
Temporal averaging. Consider the analysis of a network where links are added and removed over time. Two extreme cases have been widely considered in the literature. If the network topology switches rapidly, relative to the time scale of the phenomenon being considered, then we may be able to replace the varying topology by the time-averaged topology.¹ Provided that the network topology is not state dependent, the resulting dynamics will typically be smooth. On the other hand, if the topology changes slowly enough relative to the time scale of interest, we may regard the topology as constant and again we obtain smooth network dynamics. Either one of these approaches may be applicable in a system where time scales are clearly separated.
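To make the fast-switching regime concrete, the following sketch (our own illustration, not from the text; the three-node graphs are invented) integrates a consensus system x' = -L(t)x whose topology alternates rapidly between two graphs, and compares the result with dynamics under the time-averaged topology:

```python
import numpy as np

# Two graph Laplacians for a hypothetical three-node network: the topology
# alternates between the single edge 1-2 and the single edge 2-3.
L1 = np.array([[1., -1., 0.], [-1., 1., 0.], [0., 0., 0.]])  # edge 1-2
L2 = np.array([[0., 0., 0.], [0., 1., -1.], [0., -1., 1.]])  # edge 2-3
L_avg = 0.5 * (L1 + L2)            # time-averaged topology

def integrate(L_of_t, x0, dt=1e-4, T=2.0):
    """Forward-Euler integration of x' = -L(t) x."""
    x = x0.copy()
    for n in range(int(T / dt)):
        x = x - dt * L_of_t(n * dt) @ x
    return x

period = 1e-3                      # switching is fast relative to T = 2
switching = lambda t: L1 if (t % period) < period / 2 else L2

x0 = np.array([1.0, 0.0, -1.0])
x_switch = integrate(switching, x0)
x_mean = integrate(lambda t: L_avg, x0)
print(np.abs(x_switch - x_mean).max())   # small: averaging is valid here
```

With the switching period much shorter than the time scale of interest, the two trajectories stay close; making the switching state dependent, or slow, breaks this agreement, which is the intermediate regime discussed below.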
However, in many situations, especially those involving control or close to bifurcation, changes in network topology may play a crucial role in network function and an averaging approach may fail or neglect essential structure. This is well known for problems in optimized control, where solutions are typically nonsmooth and averaging gives the wrong solutions (for example, in switching problems using thermostats). For an example with variable network topology, we cite the effects on a power grid of changing connection structure (transmission line breakdown), or of adding/subtracting a microgrid. Neither averaging nor the assumption of constant network structure is an appropriate tool: we cannot average the problems away. Instead, we are forced to engage with an intermediate regime, where nonsmoothness (switching) and control play a crucial role in network function.

¹ For example, if the input structure is additive; see section 2.
Spatial averaging and network evolution. Much current research on networks is related to the description and understanding of complex systems [7,18,44,49]. Roughly speaking, and avoiding a formal definition [44], we regard a complex system as a large network of nonlinearly interacting dynamical systems where there are feedback loops, multiple time and/or spatial scales, emergent behaviour, etc. One established approach to complex networks and systems uses ideas from statistical mechanics and thermodynamic formalism. For example, models of complex networks of interconnected neurons can sometimes be described in terms of their information processing capability and entropy [58]. These methods originate from applications to large interacting systems of particles in physics. As Schrödinger points out in his 1943 Trinity College, Dublin, lectures [59], "...the laws of physics and chemistry are statistical throughout." In contrast to the laws of physics and chemistry, evolution plays a decisive role in the development of complex biological structure. Functional biological structures that provided the basis for evolutionary development can be quite small: the nematode worm Caenorhabditis elegans has 302 neurons. If the underlying small-scale structure still has functional relevance, any approach to complex biological networks based on statistical averages has to be limited; on the one hand, averaging over the entire network will likely ignore small-scale structure, and on the other hand, statistical averages have little or no meaning for small systems, at least on short time scales.
Reverse engineering large biological structures appears completely impractical; in part this is because of the role that evolution plays in the development of complex structure. Evolution works towards optimization of function, rather than simplicity, and is often local in character with the flavour of decentralized control. Similar issues arise in understanding evolved engineering structures. For example, the internal combustion engine of a car in 1950 was a simple device, whose operation was synchronized through mechanical means. A modern internal combustion engine is structurally complex and employs a mix of synchronous and asynchronous systems controlled by multiple computer processors, sensors and complex computer code.
Reductionism. In nonlinear network dynamics, and complex systems generally, there is the question as to how far one can make use of reductionist techniques [5], [44, 2.5]. One approach, advanced by Alon and Kashtan [39] in biology, has been the identification and description of relatively simple and small dynamical units, such as nonlinear oscillators or network motifs (small network configurations that occur frequently in large biological networks [18, Chapter 19]). Their premise is that a modular, or engineering, approach to network dynamics is feasible: identify building blocks, connect them together to form networks, and then describe dynamical properties of the resulting network in terms of the dynamics of its components.
"Ideally, we would like to understand the dynamics of the entire network based on the dynamics of the individual building blocks." Alon [4, page 27]. While such a reductionist approach works well in linear systems theory, where a superposition principle holds, or in the study of synchronization in weakly coupled systems of nonlinear, approximately identical oscillators [55,56,8,32], it is usually unrealistic in the study of heterogeneous networks modelled by a system of analytic nonlinear differential equations: network dynamics may bear little or no relationship to the intrinsic (uncoupled) dynamics of nodes.
A theory of asynchronous networks. The theory of asynchronous networks we develop provides an approach to the analysis of dynamics and function in complex networks. We illustrate the setting for our main result with a simple example. Figure 1 shows the schematics of a network where there is only intermittent connection between nodes.² We assume eight nodes N_1, …, N_8. Each node N_i will be given an initial state and started at time T_i ≥ 0. Crucially, we assume the network has a function: reaching designated terminal states in finite time, indicated on the right-hand side of the figure. Nodes interact depending on their state. For example, referring to figure 1, nodes N_1, N_2 first interact during the event indicated by E_a. Observe that there is no global time defined for this system but there is a partially ordered temporal structure: event E_c always occurs after event E_a but may occur before or after event E_b. We caution that while the direction of time is from left to right, there is no requirement of moving from left to right in the spatial variables: the phase space dimension for nodes could be greater than one and the initialization and termination sets could be the same. This example can be generalized to allow for changes in the number and type of nodes after each event. The intermittent connection structure we use may be viewed as an extension of this idea.

² Figure 1 can be viewed as representing part of a threaded computer program. The events E_a, …, E_h will represent synchronization events: evolution of associated threads is stopped until each thread has finished its computation and then variables are synchronized across the threads.

Our main result, stated and proved in part II of this work [13], is a modularization of dynamics theorem that yields a functional decomposition for a large class of asynchronous networks. Specifically, we give general conditions that enable us to represent a large class of functional asynchronous networks as feedforward functional networks of the type illustrated in figure 1. As a consequence, the function of the original network can be expressed explicitly in terms of uncoupled node dynamics and event functions. Nonsmooth effects, such as changes in network topology through decoupling of nodes and the stopping and restarting of nodes, are one of the crucial ingredients needed for this result. In networks modelled by smooth dynamical systems, all nodes are effectively coupled to each other at all times and information propagates instantly across the entire network. Thus, a spatiotemporal decomposition is only possible if the network dynamics is nonsmooth and (subsets of) nodes are allowed to evolve independently of each other for periods of time. This allows the identification of dynamical units, each with its own function, that together comprise the dynamics and function of the entire network. The result highlights a drawback of averaging over a network: the loss of information about the individual functional units, and their temporal relations, that yield network function.
A functional decomposition is natural from an evolutionary point of view: the goal of an evolutionary process is optimization of (network) function. Thus, rather than asking how network dynamics can be understood in terms of the dynamics of constituent subnetworks -the classical reductionist question -the issue is how network function can be understood in terms of the function of network constituents. Our result not only gives a satisfactory answer to Alon's question for a large class of functional asynchronous networks but suggests an approach to determining key structural features of components of a complex system that is partly based on an evolutionary model for development of structure. Starting with a small well understood model, such as the class of functional feedforward networks described above, we propose looking at bifurcation in the context of optimising a network function -for example, understanding the effect on function when we break the feedforward structure by adding feedback loops.
Relations with distributed networks. An underlying theme and guide for our formulation and theory of asynchronous networks is that of efficiency and cost in large distributed networks, and we recall the guidelines given by Tannenbaum for the design of distributed systems. Of course, network dynamics, whether in technology, engineering or biology, is likely to involve a complex mix of synchronous and asynchronous components. In particular, timing (clocks, whether local or global) may be used to trigger the onset of events or processes as part of a weak mechanism for centralized control or resetting. Evolution is opportunistic: whatever works well will be adopted (and adapted), whether synchronous or asynchronous in character. In specific cases, especially in biology, it may be a matter of debate as to which viewpoint, synchronous or asynchronous, is the most appropriate. The framework we develop is sufficiently flexible to allow for a wide mix of synchronous and asynchronous structure at the global or local level.
Past work. Mathematically speaking, much of what we say has significant overlap with other areas and past work. We cite in particular, the general area of nonsmooth dynamics, Filippov systems and hybrid systems (for example, [27,6,50,11]) and time dependent network structures (for example, [9,47,33,37]). While the theory of nonsmooth dynamics focuses on problems in control, impact, and engineering, rather than networks, there is significant work studying bifurcation (for example [43,10]) which is likely to apply to parts of the theory we describe. From a vast literature on networks and dynamics, we cite Newman's text [52] for a comprehensive introduction to networks, and the very recent tutorial of Porter & Gleeson [57] which addresses questions related to our work, gives an overview and introduction to dynamics on networks, and includes an extensive bibliography of past work.
Outline of contents. After preliminaries in section 2, we give in section 3 vignettes (no technical details) of several asynchronous networks from technology, engineering, transport and neuroscience. In section 4, we give a mathematical formulation of an asynchronous network with a focus on event driven dynamics, and constraints. We follow in section 5 with two more detailed examples of asynchronous networks including an illuminating and simple example of a transport network which requires minimal technical background yet exhibits many characteristic features of an asynchronous network, and a discussion of power grid models that indicates both the limitations and possibilities of our approach. We conclude with a discussion of products of asynchronous networks in section 6 that illuminates some of the subtle features of the event map. In part II [13], we develop the theory of functional asynchronous networks and give the statement and proof of the modularization of dynamics theorem.
Dedication. The genesis of this paper lies in a visit in 2010 by one of us (MF) to work with Dave Broomhead at Manchester University. Dave was very interested in asynchronous processes and local clocks. During the visit, he came up with a two-cell random dynamical systems model for the investigation of asynchronous dynamics and local time. This two-cell model provided the seed and stimulus for the work described in this paper. Dave's illness and untimely death sadly meant that our planned collaboration on this work could not be realized.
Suppose the network N has k nodes, N 1 , . . . , N k . Abusing notation, we often let N denote both network and the set of nodes {N 1 , . . . , N k }.
Denote the state or phase space for N_i by M_i and set M = ∏_{i∈k} M_i - the network phase space. Denote the state of node N_i by x_i ∈ M_i and the network state by X = (x_1, …, x_k) ∈ M.
Smooth dynamics on N will be given by a system of ordinary differential equations (ODEs) of the form

(1)  x_i' = f_i(x_i; x_{j_1}, …, x_{j_{e_i}}), i ∈ k,

where the components f_i are at least C^1 (usually C^∞ or analytic) and the following conditions are satisfied.
(N1) For all i ∈ k, j_1 < … < j_{e_i} are distinct elements of k∖{i} (and so e_i < k).
(N2) For each i ∈ k, the evolution of N_i depends nontrivially on the state of N_j, j ∈ J(i) = {j_1, …, j_{e_i}}, in the sense that for each j ∈ J(i) there exists a choice of the remaining variables for which x_j ↦ f_i(x_i; x_{j_1}, …, x_{j_{e_i}}) is nonconstant.

(N3) We generally assume the evolution of N_i depends on the state of N_i. If we need to emphasize that f_i does not depend on x_i in the sense of (N2), we write f_i(x_{j_1}, …, x_{j_{e_i}}), if J(i) ≠ ∅. If J(i) = ∅, we regard the dependence of f_i on x_i as nontrivial iff f_i is not identically zero and then write f_i(x_i). Otherwise f_i ≡ 0.
Remark 2.1. Given network equations (1) which do not satisfy (N1-3), we can first redefine the f_i so as to satisfy (N1). Next we remove trivial dependencies so as to satisfy (N2). Finally, we check for the dependence of f_i on the internal state x_i and modify the f_i as necessary to achieve (N3). If f_i ≡ 0, we can remove the node from the network. Consequently, it is no loss of generality to always assume that (N1-3) are satisfied, with f_i ≢ 0. A consequence is that any network vector field f = (f_1, …, f_k) : M → TM can be uniquely written in the form (1) so as to satisfy (N1-3).
Let M(k) denote the space of k × k matrices β = (β_ij)_{i,j∈k} with coefficients in {0,1} and β_ii = 0 for all i ∈ k. Each β ∈ M(k) determines a unique directed graph Γ_β with vertices N_1, …, N_k and a directed edge N_j → N_i iff β_ij = 1 and i ≠ j. The matrix β is the adjacency matrix of Γ_β. We refer to β as a connection structure on N.
If f : M → TM is a network vector field satisfying (N1-3), then f determines a unique connection structure C(f) ∈ M(k) with associated graph Γ_{C(f)}. In order to specify the graph uniquely, it suffices to specify the set of directed edges.
We define the network graph Γ = Γ(N, f) to be the directed graph Γ_{C(f)}. Thus, Γ(N, f) has node set N = {N_1, …, N_k} and a directed connection N_j → N_i is an edge of Γ if and only if j ≠ i and the dynamical evolution of N_i depends nontrivially on the state of N_j.
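As a small illustration of these conventions, the following sketch (the dependency sets J(i) are invented, not from the text) builds the adjacency matrix β of the network graph from the sets J(i):

```python
import numpy as np

# Hypothetical dependency sets J(i) for a 4-node network: J[i] lists the
# nodes j != i on which the evolution of N_i depends nontrivially.
k = 4
J = {0: [1], 1: [0, 2], 2: [], 3: [1, 2]}

# Adjacency matrix beta of the network graph: beta[i, j] = 1 iff there is
# a directed edge N_j -> N_i; no self-loops by construction.
beta = np.zeros((k, k), dtype=int)
for i, deps in J.items():
    for j in deps:
        if j != i:
            beta[i, j] = 1

# The graph is determined by its set of directed edges (j, i): N_j -> N_i.
edges = [(j, i) for i in range(k) for j in range(k) if beta[i, j] == 1]
print(edges)
```

Note that node N_2 has J(2) = ∅ and so receives no edges: its dynamics is intrinsic.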
Remark 2.2. Our conventions are different from formalisms involving multiple edge types (for example, see [32,3] for continuous dynamics and [1] for discrete dynamics). We allow at most one connection between distinct nodes of the network graph and do not use self-loops: connections encode dependence.
2.2.1. Additive input structure. In many cases of interest, we have an additive input structure [26] and the components f_i of f may be written

(2)  f_i(x_i; x_{j_1}, …, x_{j_{e_i}}) = F_i(x_i) + Σ_{j∈J(i)} F_{ij}(x_j, x_i).

Additive input structure implies that there are no interactions between inputs N_j, N_ℓ → N_i, j ≠ ℓ, j, ℓ ≠ i, and allows us to add and subtract inputs and nodes in a consistent way. We may think of x_i' = F_i(x_i) as defining the intrinsic dynamics of the node. Remarks 2.3. (1) Additive input structure is usually assumed for modelling weakly coupled nonlinear oscillators and is required for reduction to the standard Kuramoto phase oscillator model [42,25,38].
(2) If we identify a null state z*_j for each node N_j, then the decomposition (2) will be unique if we require F_ij(z*_j, x_i) ≡ 0. If a node is in the null state then it has no output to other nodes and is 'invisible' to the rest of the network. If we have an additive structure on the phase spaces M_i (for example, each M_i is a domain in R^n or an n-torus T^n), it is natural to take z*_i = 0.
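A minimal additive-input example (our illustration; the parameter values are arbitrary) is a Kuramoto-type phase oscillator network, with intrinsic dynamics F_i(x_i) = ω_i and pairwise inputs F_ij(x_j, x_i) = (K/k) sin(x_j - x_i):

```python
import numpy as np

# Kuramoto phase oscillators as an additive-input network vector field:
# F_i(x_i) = omega_i, F_ij(x_j, x_i) = (K/k) * sin(x_j - x_i).
omega = np.array([1.0, 1.2, 0.8])   # intrinsic frequencies (arbitrary)
K, k = 2.0, 3                       # coupling strength, number of nodes

def f(x):
    # f_i = F_i(x_i) + sum_{j in J(i)} F_ij(x_j, x_i); all-to-all coupling,
    # and entry [i, j] of the matrix below is sin(x_j - x_i)
    return omega + (K / k) * np.sin(x[None, :] - x[:, None]).sum(axis=1)

# At a fully synchronized state every pairwise input vanishes, so the
# network vector field reduces to the intrinsic dynamics.
print(f(np.array([0.2, 0.2, 0.2])))   # equals omega
```

Here the inputs vanish at phase synchrony, illustrating how additive structure lets individual inputs be added or removed without affecting the others.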
2.3. Synchronous networks. Systems of ordinary differential equations like (1) give mathematical models for synchronous networks. By synchronous, we mean nodes are all synchronized to a global clock -the terminology comes from computer science. Indeed, if each node comes with a local clock, then all the clocks can be set to the same time provided that the network is connected (we ignore the issue of delays, but see [45]). The synchronization of local clocks is essentially forced by the model and the connectivity of the network graph; nodes cannot evolve independently of one another unless the network is disconnected.
We recall some characteristic features of synchronous networks.
Global evolution: Nodes never evolve independently of each other: if the state of any node is perturbed, then generically the evolution of the states of the remaining nodes changes.
Stopped nodes: If a node (or subset of node variables) is at equilibrium or "stopped" for a period of time, it will remain stopped for all future time. If a node has a non-zero initialization, it will never stop (in finite time).
Fixed connection structure: The connection structure of a synchronous network is fixed: it does not vary in time and is not dependent on node states; one system of ODEs suffices to model network dynamics.
Reversibility: Solutions are uniquely defined in backward time.

Asynchronous networks: examples
In this section, we give several vignettes of asynchronous networks that illustrate the main features differentiating them from synchronous networks. We amplify two of these examples in section 5 after we have developed our basic formalism for asynchronous networks.
Example 3.1 (Threaded and parallel computation). Threaded or parallelized computation provides an example of a discrete stochastic asynchronous network. Computation based on a single processor (or single core of a processor) proceeds synchronously and sequentially. The speed of the computation is dependent on the clock speed of the processor as the processor clock synchronizes the various steps in the computation. In threaded or parallel computation, computation is broken into blocks or 'threads' which are then computed independently of each other at a rate that is partly dependent on the clock rates of the processors involved in the computation (these need not be identical). At certain points in the computation, threads need to exchange information with other threads. This process involves stopping and synchronizing (updating) the thread states: a thread may have to stop and wait for other threads to complete their computations and update data before it can continue with its own computation.
Threaded computation is non-deterministic: the running and stopping times of each thread are unpredictable and differ from run to run.
Each thread has its own clock (determined by its associated processor). Threads will be unaware of the clock times of other threads except during the stopping and synchronization events which can be managed synchronously (central control) or asynchronously (local control).
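The stop-and-synchronize pattern described above can be sketched with a thread barrier. This toy example (ours, not from the text) runs four threads that compute at unpredictable rates and then stop at a barrier, the synchronization event, before reading the shared state:

```python
import random
import threading
import time

# Four threads compute independently at unpredictable rates, then wait at a
# barrier (the synchronization event) before reading the shared results.
n_threads = 4
barrier = threading.Barrier(n_threads)
results = [None] * n_threads
totals = [None] * n_threads
lock = threading.Lock()

def worker(i):
    time.sleep(random.uniform(0.0, 0.05))  # unpredictable local running time
    with lock:
        results[i] = i * i                 # thread-local computation
    barrier.wait()                         # stop until every thread is done
    # after the synchronization event, all threads see the updated data
    with lock:
        totals[i] = sum(results)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results, totals)
```

The per-run order in which threads reach the barrier differs, but after the synchronization event every thread computes the same total; omitting the barrier would produce exactly the kind of race condition mentioned below.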
This example shows many characteristic features of an asynchronous network: nodes (threads) evolving independently of each other, and stopping, synchronization and restarting events. The network also has a function - transforming a set of initial data into a set of final data in finite time - and there is the possibility of incorrect code that can lead to a process that stops before the computation is complete (a deadlock), or errors where threads try to access a resource at the same time (a race condition). ♦

Example 3.2 (Power grids & microgrids). A power grid consists of a connected network of generators and loads of various types joined by transmission lines. A critical issue for the stability of the power grid is maintaining tight voltage frequency synchronization across the grid in the presence of voltage phase differences between generators and loads and variation in generator outputs and loads. We refer to Kundur [40] for classical power grid theory, Dörfler et al. [23] or Nishikawa & Motter [53] for some more recent and mathematical perspectives, and [41] for general issues and definitions on power system stability.
Historically, power grids have been centrally controlled and one of the main stability issues has been the effect on stability of a sudden change in structure -such as the removal of a transmission line, breakdown of a generator or big change in load. Detailed models of the power grid need to account for a complex multi-timescale stiff system. Typically stability has been analyzed using numerical methods. However, relatively simple classes of network models for power grids based on frequency and phase synchronization have been developed which are applicable for the analysis of some stability and control issues, especially those described in the next paragraph. We describe these models in more technical detail in section 5.
Interest has recently focused on small renewable energy sources in a power grid (for example, wind and solar power) and on how to integrate microgrids based on renewable sources into the power grid using a mix of centralized and decentralized control. Concurrent with this interest is the issue of smart grids: modifying local loads in terms of the availability and real-time cost of power. While the classical power grid model is that of a synchronous network, albeit with asynchronous features such as the effect on stability of the breakdown of a connection (transmission line), these problems centre on asynchronous networks.
For example, given a microgrid with renewable energy sources such as wind and solar, time varying loads and buffers (large capacity batteries), how can the microgrid be switched in and out of the main power grid while maintaining overall system stability? In this case, switching will be determined by state (for example, frequency changes in the main power grid signifying changes in power demand or changes in the output of renewable sources or battery reserves) and stochastic effects (resulting, for example, from load changes and the incorporation of smart grid technology). This is already a tricky problem of distributed and decentralized control with just one microgrid; in the presence of many microgrids there is the potential problem of synchronization of switching microgrids in and out of the main power grid. Similar problems occur in smart grids [62].
Asynchronous features of power grid networks include variation in connection and node structure (separation, or islanding, of microgrids from the main power grid), state dependence of the connection structure, and synchronization and restarting events (during incorporation of a microgrid into the main grid). ♦

Example 3.3 (Thresholds, spiking networks and adaptation). Many mathematical models from engineering and biology incorporate thresholds. For networks, when a node attains a threshold, there are often changes (addition, deletion, weights) in connections to other nodes. For networks of neurons, reaching a threshold can result in a neuron firing (spiking) and short-term connections to other neurons (for transmission of the spike). For learning mechanisms such as Spike-Timing Dependent Plasticity (STDP) [29], relative timings (the order of firing) are crucial [30,17,51] and so each neuron, or connection between a pair of neurons, comes with a 'local clock' that governs the adaptation in STDP. In general, networks with thresholds, spiking and adaptation provide characteristic examples of asynchronous networks where dynamics is piecewise smooth and hybrid - a mix of continuous and discrete dynamics. Spiking networks also highlight the importance of efficient communication in large networks: spiking-induced connections between neurons are brief and low cost. There is also no oscillator clock governing all computations along the lines of a single processor computer. These examples all fit well into the framework of asynchronous networks but, on account of the background knowledge required, we develop the theory and formalism elsewhere [14]. ♦

Example 3.4 (Transport & production networks). We discuss transport networks first. For simplicity, we work with a single transport mode: trains.
Typically, trains have to be scheduled to be in a station for overlapping times (stopping, restarting, connections and local times) so that passengers can transfer between trains, or stop in a passing loop (so that trains can pass on a single track line). In addition, a train can divide into two parts or two trains can be combined (variation in node structure, stopping and synchronization event). Generally, transport networks will have asynchronous features and exhibit state dependent connection structure, local times and have a strong stochastic component (for example, in stopping and restarting times). We develop a simple formal transport model in section 5 (and in part II [13]) that illustrates basic ideas and results in the theory of asynchronous networks but does not require extensive background knowledge.
A simple example of a production network is a paint mixer. Assume a controller which accepts inputs -requested colour -which, after computation to find tint weights ('tint code'), signals a request to inject selected tints according to the tint code into the base paint which is then mixed. The output is a can of coloured and fully mixed paint. Dynamics plays a limited role -except possibly at the mixing stage (for example, if there is a sensor that can detect an acceptable level of mixing). For this network, there is a varying connection structure determined by the signalling and tint injection. A characteristic feature of this, and many production networks, is the large variation in time scales. Signalling will typically be very fast, injection moderately fast and mixing rather slow. If the times of inputs to the controller are stochastic (for example, follow a Poisson process), then there will be issues of queueing and prioritization of inputs. If it is intended to maximize usage of the production facilities and avoid long waits, then it is natural to suppose that there are several mixing units and the output of the tint units is switched between mixer units according to their availability. Of course, the paint mixer may be a small part of a much larger distributed production network for which we can expect multiple time scales, switching between production units, changing the output of production units, stopping or restarting units, etc. The control of large distributed production systems will typically involve a mix of decentralized and centralized control.
Synthesis of proteins at the cellular level can be viewed as a generalization of the paint mixer model. We refer the reader to Alon [4, 8, Chapter 1] for background and more details, especially on transcription networks. ♦

We summarize some of the key features of asynchronous networks illustrated by all of the preceding examples.
(1) Variable connection structure and dependencies between nodes.
Changes in connection structure may depend on the state of the system or be given by a stochastic process.

A mathematical model for asynchronous networks
In this section we formalize the notion of an asynchronous network. Our focus is on deterministic (not stochastic) continuous time asynchronous networks which are autonomous (no explicit dependence on time), and we use the term 'asynchronous network' as a synonym for a deterministic and autonomous continuous time asynchronous network.

4.1. Basic formalism for asynchronous networks. Consider a network N with k nodes, N_1, …, N_k, and follow the conventions of section 2: each node N_i has phase space M_i, and M = ∏_{i=1}^k M_i - the network phase space. A network vector field f on M is assumed to satisfy conditions (N1-3) and so determines a unique connection structure C(f) ∈ M(k) and associated network graph Γ_{C(f)} (no self-loops).
Stopping, waiting, and synchronization are characteristic features of asynchronous networks. If nodes of a network are stopped or partially stopped, then node dynamics will be constrained to subsets of node phase space. We codify this situation by introducing a constraining node N_0 that, when connected to N_i, implies that dynamics on N_i is constrained. We give the precise definition of constraint shortly (in 4.3); for the present, the reader may regard a constrained node as stopped: node dynamics is defined by the zero vector field. We only allow connections N_0 → N_i, i ∈ k, and do not consider connections N_i → N_0, i ∈ k•. Henceforth we usually assume there is a constraining node and let N = {N_0, N_1, …, N_k} denote the set of nodes. We emphasize that the constraining node N_0 has no dynamics and no associated phase space. In a network with no constraints (there are no connections N_0 → N_i), the constraining node N_0 plays no role and can be omitted. If we allow constraints, there may be more than one type of constraint on a node N_i.
The column vector α 0 codifies the connections from the constraining node and α ♭ encodes the connections between the nodes {N 1 , . . . , N k }.
Let α ∈ M • (k). We provisionally define an α-admissible vector field f = (f 1 , . . . , f k ) to be a network vector field such that for i, j ∈ k, i ≠ j, f i depends on the state x j of N j iff α ij = 1. If there is a connection N 0 → N i (α i0 ≠ 0), then there is a nontrivial constraint on N i . An α-admissible vector field has constrained dynamics if there are connections from the constraining node. If α = ∅, nodes are uncoupled and unconstrained.
(1) A generalized connection structure A is a (nonempty) set of connection structures on N .
Interactions between nodes in asynchronous networks may vary and can be state or time dependent or both. We focus on state dependence and assume interactions and constraints are determined by the state of the network through an event map E : M → A. The network vector field of N is given by the state dependent vector field F : M → T M defined by F(X) = f E(X) (X), X ∈ M.
Remarks 4.3. (1) Subject to simple regularity conditions, which we give later, the network vector field F will have a uniquely defined semiflow.
(2) In the sequel we often use the notation N as shorthand for the asynchronous network (N , A, F , E) (by extension, N a will be shorthand for (N a , A a , F a , E a ), etc.).
Assume constrained dynamics for either node is defined on the invariant set x i = 0. When both nodes are constrained (x 1 = x 2 = 0), assume (constrained) coupling is defined by a vector field H = (H 1 , H 2 ). Revert to standard (uncoupled and unconstrained) dynamics when |θ 1 − θ 2 | ≤ ε, where 0 < ε ≪ 1. We describe the network dynamics using asynchronous network formalism.
Take the generalized connection structure A = {∅, α 1 , α 2 , β} and define the event map E : M → A according to the constraint and coupling conditions above. Network dynamics is given by the vector field F(X) = f E(X) (X). Trajectories for F are built from pieces of the trajectories of f ∅ , f α 1 , f α 2 , and f β . Using the condition f i (0) = 0, i ∈ 2, we see easily that F has a well-defined semiflow Φ t (x 1 , θ 1 , x 2 , θ 2 ), which is continuous in time t ≥ 0 but is not necessarily continuous in (x 1 , θ 1 , x 2 , θ 2 ). ♦
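To make the event driven structure concrete, the following sketch shows how a trajectory of F is computed numerically: at each step the event map is evaluated at the current state and the selected vector field is stepped. The vector fields, event logic, and tolerance `EPS` below are illustrative choices of ours, not those of the example above.

```python
# Sketch: trajectories of F(X) = f_E(X)(X) pieced together from the
# vector fields f_alpha, the active alpha chosen by the event map E.
# State is (x1, th1, x2, th2); all dynamics below are hypothetical.
EPS = 0.1  # synchronization tolerance (0 < eps << 1)

def event_map(x1, th1, x2, th2):
    """State-dependent choice of connection structure (illustrative)."""
    if x1 == 0.0 and x2 == 0.0:
        # both nodes constrained: coupled dynamics until phases align
        return "beta" if abs(th1 - th2) > EPS else "empty"
    if x1 == 0.0:
        return "alpha1"   # node 1 constrained
    if x2 == 0.0:
        return "alpha2"   # node 2 constrained
    return "empty"        # uncoupled, unconstrained

def f(alpha, x1, th1, x2, th2):
    """Admissible vector fields f_alpha (toy choices, not from the text)."""
    if alpha == "beta":
        # constrained coupling H: positions stay stopped, phases attract
        return (0.0, th2 - th1, 0.0, th1 - th2)
    if alpha == "alpha1":
        return (0.0, 1.0, -x2, 1.0)
    if alpha == "alpha2":
        return (-x1, 1.0, 0.0, 1.0)
    return (-x1, 1.0, -x2, 1.0)  # empty: uncoupled dynamics

def step(state, dt=0.01):
    """One Euler step of the state dependent vector field F."""
    alpha = event_map(*state)
    v = f(alpha, *state)
    return tuple(s + dt * vi for s, vi in zip(state, v))
```

Note that discontinuities of the semiflow in the initial condition show up here as jumps in the value returned by `event_map` across the constraint sets.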

4.2. Local foliations.
Conditions for a constrained node N i will be given in terms of foliations of open subsets of M i . We start by recalling basic definitions on foliations (see [46] for a detailed review). A p-dimensional smooth (always C ∞ here) foliation L of the m-dimensional manifold W consists of a partition {L α | α ∈ Λ} of W into connected sets, called leaves, such that for every x ∈ W , we can choose an open neighbourhood U of x and smooth embedding ψ : U → R m such that for each leaf L α , the components of ψ(L α ∩ U) are given by equations x p+1 = constant, . . . , x m = constant. Each leaf of a foliation will be an immersed p-dimensional submanifold of W . For our applications, we always assume leaves are properly embedded closed submanifolds of W , p < m, and that the manifold W has finitely many connected components. In general, a smooth foliation of the manifold W will consist of a smooth foliation of each connected component of W such that the dimension of leaves is constant on each connected component of W . Suppose that L is a p-dimensional smooth foliation of W with leaves {L α | α ∈ Λ}. The tangent bundle along the foliation τ : L → W is the smooth vector sub-bundle of the tangent bundle T W of W whose fibre at x ∈ W is the tangent space at x of the leaf through x.

4.3. Constrained nodes and admissible vector fields.
In what follows, we assume P = 0. We can now give a precise definition of an α-admissible vector field when there are constraints. Definition 4.6. (Notation and assumptions as above.)
The event sets {E α | α ∈ A} partition the network phase space M.
We require additional conditions on the event map when there are constraints. These conditions relate the event sets to the constraint structure C and are required because foliations are only locally defined.
Let π i : M → M i denote the projection map onto the phase space of N i . Henceforth we assume that event maps are constraint regular.

4.5. Asynchronous networks with constraints. Remark 4.13. If A consists of a single connection structure α (with or without constraints), then F consists of one vector field f = f α , with dependencies given by α. We recover a synchronous network with dynamics defined by f and a fixed connection structure.
4.6. Network vector field of an asynchronous network. An asynchronous network N uniquely determines the network vector field F by
(3) F(X) = f E(X) (X), X ∈ M.
(2) Equation (3) defines a state dependent dynamical system. Similar structures have been used in engineering applications (for example, [34]). We indicate in section 5.1.3 a relationship with Filippov systems (this is explored further in [12]). However, the notion of an integral curve for an asynchronous network is generally different from that of a Filippov system, see examples 4.17 (2).
(3) The network vector field does not uniquely determine A, E or F . Usually, however, the choice of A, E and F is naturally determined by the problem. Sometimes it is convenient to view the network vector field as the basic object and regard asynchronous networks as being equivalent if they define the same network vector field. (4) Since the event sets {E α | α ∈ A} partition M, the network vector field F only depends on f α |E α . Rather than assume that f α is smooth on M, we could have required that each f α was defined as smooth map in the sense of Whitney [63] on E α (and so extends smoothly to M).
Although the vector fields f α ∈ F are assumed to satisfy (N1-3), this may not hold for f α |E α , α ∈ A. Sometimes, but not always, there is an equivalent network N ′ such that the dependencies of each admissible vector field for N ′ are not changed by restriction to the corresponding event set.

4.7. Integral curves and proper asynchronous networks. We start with a definition of integral curve suitable for asynchronous networks.
Definition 4.15. Let N be an asynchronous network with network vector field F. An integral curve or trajectory for F with initial condition X 0 ∈ M is a continuous curve φ : [0, T ) → M, φ(0) = X 0 , obtained by piecing together integral curves of the vector fields f α ; the precise definition involves a countable closed set D ⊂ [0, T ) of event times at which the active vector field changes.
Remarks 4.16.
(1) It is routine to verify that if ψ : [0, S) → M is another integral curve with initial condition X 0 , then ψ = φ on [0, min{S, T }) (uniqueness). As a consequence we can define the maximal integral curve φ : [0, T max ) → M with initial condition X 0 . In the sequel, integral curves will be maximal unless otherwise indicated.
(2) If T = ∞ in the definition, the trajectory φ : R + → M is complete.
(3) The set D may have accumulation points in D; accumulation is always from the left on account of condition (3a).
In the examples we consider D will always be a finite set.
Without further conditions on the event map, the vector field F determined by an asynchronous network N may not have integral curves through every point of the phase space. Examples 4.17. (1) Take event sets E 1 , E 2 partitioning R 2 with associated vector fields pointing towards the common boundary ∂E 1 = {x 1 = 0}, for example f 1 = (1, 0) and f 2 = (−1, 0) (see figure 2(a)). Trajectories cannot be continued, according to definition 4.15, once they meet x 1 = 0. One way round this problem is to define a new event set E 3 = ∂E 1 together with a sliding vector field on E 3 . There is then a complete integral curve through every point of R 2 and the corresponding semiflow Φ : R 2 × R + → R 2 is continuous. This approach is based on the Filippov construction [27, Chapter 2, page 50] where we take a vector field in the positive cone defined by f 1 , f 2 (often the unique convex combination λf 1 + (1 − λ)f 2 ) which is tangent to ∂E 1 = E 3 .
(2) Take event sets separated by the diagonal F 2 = {(x 1 , x 2 ) | x 1 = x 2 }, with corresponding vector fields f 1 (x 1 , x 2 ) = (1, −1) off the diagonal and f 2 (x 1 , x 2 ) = (0, 0) on F 2 (see figure 2(b) and note that the event set F 2 models a collision, after which dynamics stops). Integral curves are defined for all initial conditions in R 2 but the semiflow Φ : R 2 × R + → R 2 will not be continuous on F 2 . Here the Filippov construction gives the wrong network solution: the diagonal F 2 is regarded as a removable singularity. We discuss the relationship between asynchronous networks and Filippov systems further in section 5.1.3; see also [12]. ♦
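The sliding construction invoked in example 4.17(1) can be sketched numerically: on the switching boundary one takes the convex combination λf 1 + (1 − λ)f 2 whose component normal to the boundary vanishes. The helper below and the vector fields used with it are illustrative (a boundary {x 1 = 0} with normal n = (1, 0)), not the specific data of the example.

```python
# Sketch of the Filippov sliding construction: on the switching boundary
# take lam*f1 + (1-lam)*f2 with vanishing normal component (tangency).

def sliding_field(f1, f2, n):
    """Return the convex combination of f1, f2 tangent to the boundary
    with normal n, or None when there is no sliding segment."""
    p1 = sum(a * b for a, b in zip(f1, n))  # normal component of f1
    p2 = sum(a * b for a, b in zip(f2, n))  # normal component of f2
    if p1 * p2 >= 0:
        return None  # fields cross or leave: no sliding
    lam = p2 / (p2 - p1)  # solves lam*p1 + (1-lam)*p2 = 0, with 0 < lam < 1
    return tuple(lam * a + (1 - lam) * b for a, b in zip(f1, f2))
```

For example, with f 1 = (1, 1) and f 2 = (−1, 1) on either side of the line x 1 = 0, sliding gives the tangent field (0, 1), whereas in example 4.17(2) the network solution stops on the event set and no sliding field is appropriate.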
(2) In many cases of interest, some of the node phase spaces M i may be open domains in R n with ∂M i ≠ ∅. Here there is the possibility that trajectories may exit M: if φ = (φ 1 , . . . , φ k ) is a trajectory, there may exist i ∈ k and a smallest s > 0 such that φ i (s) := lim t→s − φ i (t) ∈ ∂M i . The maximal domain for φ is necessarily [0, s). Under additional hypotheses, it may be possible to extend φ to a complete trajectory by setting F j ≡ 0 on R n ∖ M j , j ∈ k (the jth component of φ is stopped when it meets the boundary of M j ). In this way, we can regard N as proper. We develop this point of view further in part II [13].
Event sets are typically defined by analytic and algebraic conditions that reflect logical conditions on the underlying dynamics. Definition 4.22. An asynchronous network N is amenable if
(1) the event structure {E α | α ∈ A} is regular;
(2) if X ∈ E α , α ∈ A, there exists a maximal t(X) ∈ (0, ∞] such that the integral curve φ X through X is defined on [0, t(X)) and φ X (t) ∈ E α , t ∈ [0, t(X)).
Condition (2) of definition 4.22 suggests that the vector field f α should in some sense be tangent to E α . The issue of tangency can be made precise using the regularity assumption, which implies that E α has a locally finite stratification into submanifolds without boundary (for example, the canonical Whitney regular stratification of each event set [31,48]). This allows us to unambiguously define tangency at points of E α which do not lie in the boundary of strata. Care is needed at points lying in the boundary of strata and in the example below we indicate how the geometric structure of the event set can impose strong constraints on associated vector fields.
(2) If an event set is a closed submanifold without boundary, it follows from definition 4.22(2) that any trajectory that meets the event set will never leave the event set.
(3) In part II we extend definition 4.22 to allow for trajectories to exit the domain and stop (see remark 4.19 (2)). (4) We may extend the definition of amenability to include asynchronous networks which are equivalent to an amenable network.
Examples. (1) As event sets take semialgebraic subsets of R 2 which are neither open nor closed, with associated vector fields f j , j ∈ 2 • , on R 2 . It is a simple exercise to verify that the network is amenable and proper but that the associated semiflow Φ : R 2 × R + → R 2 is not continuous along E 1 or E 2 (it is continuous at (0, 0)).
(2) Suppose that the event set E 1 is the cusp defined by {(x, y) ∈ R 2 | x ≥ 0, y 2 = x 3 } and E 2 = R 2 ∖ E 1 . In this case any smooth (C 1 suffices) vector field on R 2 which is tangent to E 1 must vanish at {(0, 0)} (an example of such a vector field is (2ax, 3ay), a ∈ R). If we require amenability, then all trajectories which meet E 1 will never leave E 1 . ♦
(1) As example (1) above shows, the semiflow given by proposition 4.25 need not be continuous (as a function of (X, t)).
(3) Amenability is sufficient but not necessary for properness.
4.8. Semiflows for amenable asynchronous networks. Assume N is an amenable asynchronous network with network vector field F. For each α ∈ A, denote the flow of f α by Φ α . Let X ∈ M and φ : R + → M be the maximal integral curve through X for F. It follows from the definition of integral curve and amenability that there is a countable closed subset D ⊂ R + such that on each component interval of R + ∖ D the event map is constant, E = α say (for constancy of E on the interval we need amenability), and φ is the solution to X ′ (t) = f α (X(t)) with initial condition given by the value of φ at the left endpoint of the interval.
4.9. Asynchronous networks with additive input structure. A natural source of asynchronous networks comes from synchronous networks with additive input structure. The event map can be either state dependent (with constraints) or stochastic (see the following section).
Fix a k node synchronous network N with additive input structure and network vector field f = (f 1 , . . . , f k ) given by
(4) x i ′ = f i (x i ) + Σ j≠i γ ij h ij (x i , x j ), i ∈ k,
where the h ij are the additive coupling (input) terms.
On account of the additive input structure, it is natural to remove and later reinsert connections between nodes. For i ∈ k, let (W i , L i ) be the constraint defined by the 0-dimensional foliation of W i = M i . If dynamics on N i is constrained, then dynamics is stopped: x ′ i = 0. Let Γ be the network graph determined by (4) with associated 0-1 matrix γ ∈ M(k). Take P = (1, . . . , 1) and let A ⊂ M • (k) be a generalized connection structure such that (1) (0 | γ) ∈ A, (2) for all α = (α 0 | α ♭ ) ∈ A the matrix α ♭ defines a subgraph of Γ, and (3) α i0 ∈ {0, 1} for all i ∈ k, α ∈ A. For each α ∈ A, define the α-admissible vector field f α by stopping the nodes constrained by α and retaining exactly those coupling terms of (4) for which α ij = 1, and set F = {f α | α ∈ A}. Choosing an event map E : M → A, N = (N , A, F , E) is an asynchronous network. We refer to N as an asynchronous network with additive input structure.
For α ∈ A, i ∈ k, let J(i, α) = {j | α ij = 1, j ∈ k • } be the dependency set of f α i . Definition 4.28. An asynchronous network N is input consistent if for any node N i and α, β ∈ A with dependency sets satisfying J(i, α) = J(i, β) we have f α i = f β i . As an immediate consequence of our constructions, an asynchronous network with additive input structure is input consistent. In summary, if N is an asynchronous network with additive input structure, all the admissible vector fields are derived from the network vector field of a synchronous network.
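The construction of the α-admissible vector fields from the additive form (4) can be sketched as a masking operation: a constrained node is stopped, and an unconstrained node keeps exactly those coupling terms selected by α. The intrinsic dynamics `F[i]` and couplings `h[i][j]` in the sketch are hypothetical placeholders.

```python
# Sketch of an alpha-admissible vector field for an asynchronous network
# with additive input structure: node i is stopped if constrained
# (alpha0[i] = 1); otherwise it keeps its intrinsic dynamics plus the
# coupling terms of the synchronous network with alpha[i][j] = 1.

def admissible_field(x, alpha0, alpha, F, h):
    """x: node states; alpha0[i]: constraint bit; alpha: 0-1 coupling
    matrix; F[i]: intrinsic dynamics; h[i][j]: additive coupling terms."""
    k = len(x)
    out = []
    for i in range(k):
        if alpha0[i]:               # constrained node: dynamics stopped
            out.append(0.0)
            continue
        v = F[i](x[i])
        for j in range(k):
            if j != i and alpha[i][j]:
                v += h[i][j](x[i], x[j])
        out.append(v)
    return out
```

Input consistency is automatic in this construction: the value of component i depends only on the dependency set {j | alpha[i][j] = 1} and the constraint bit, not on the rest of α.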

4.10. Local clocks on an asynchronous network. In this section we describe local clocks on an asynchronous network. We give only brief details, sufficient for the examples we give later (the general setup appears in [14]). Roughly speaking, a local clock will be associated to a set of nodes, or connections, and may be thought of as a stopwatch with time τ ∈ R + . In particular, the local clock will run intermittently and switching between on and off states will be determined by thresholds.
Fix a finite set of nodes N = {N 0 , N 1 , . . . , N k } with associated phase spaces M i , i ∈ k, a generalized connection structure A ⊂ M • (k) and a constraint structure C. Local clocks will be defined in terms of strongly connected components of elements of A.
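The intermittent stopwatch behaviour described above can be sketched with a simple Euler discretization (illustrative only, not part of the formalism): a node moves with unit speed until it reaches an event at x = 0, stops while its local clock runs up to a threshold T, and then restarts.

```python
# Sketch of a local clock: initialized at (x0, 0) with x0 < 0, the node
# evolves with x' = 1 until x = 0, stops while the local clock runs for
# T seconds, then restarts.  Illustrative Euler discretization.

def simulate(x0, T, dt=0.01, t_end=5.0):
    """Return (x, tau) after time t_end; tau is the local stopwatch time."""
    x, tau, restarted = x0, 0.0, False
    for _ in range(int(round(t_end / dt))):
        if x < 0.0:                    # free motion towards the event x = 0
            x = min(x + dt, 0.0)
        elif tau < T and not restarted:
            tau += dt                  # node stopped; local clock running
        else:
            restarted = True           # threshold met: node restarts
            x += dt
    return x, tau
```

In practice the local clock would be reset to zero after the restart; the sketch simply freezes it so the elapsed stop time is visible.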
Suppose that α ∈ A and let β, γ be distinct strongly connected components of α with respective node sets A ⊂ k, B ⊂ k • . A local time τ β,γ ∈ R + will be defined on β (or the nodes A) if there exists a connection N j → N i , j ∈ B, i ∈ A. Just as before, we define an A-structure F , an event map E : M → A and associated asynchronous network (N , A, F , E). Our previous definitions and results continue to apply. The asynchronous network (N , A, F , E) is amenable. If we initialize at (x 0 , 0), x 0 < 0, then the system evolves until x = 0, stops for local time T seconds and then restarts. In practice, the local clock is reset to zero after the system restarts. ♦
4.11. Stochastic event processes and asynchronous networks. Given node set N , constraint structure C, generalized connection structure A and A-structure F , an event process is a state dependent stochastic process E (t,X) taking values in A.
Definition 4.32. (Notation as above.) A stochastic asynchronous network N is a quadruple (N , A, F , E), where E = E (t,X) is an event process.
In the most general case there are no restrictions on the process E (t,X) : there may be (stochastic) dependence on time t ∈ R + , pure space dependence (E (t,X) = E(X)), or both. If E (t,X) is independent of time, then the event process reduces to an event map E : M → A. If E (t,X) is independent of X, then under mild conditions on E, such as assuming E is Poisson, integral curves of the stochastic asynchronous network (N , A, F , E t ) will be almost surely piecewise smooth.
We discuss stochastic asynchronous networks in more detail in [14]. We give one simple example here related to additive input structure.
Example 4.33. We follow the assumptions and notational conventions of section 4.9 and assume given a synchronous network with additive input structure and dynamics given by (4). Let A be a generalized connection structure and E be a time dependent event process taking values in A. Assume M is compact and the set of times t 0 < t 1 < . . . where the connection structure changes has Poisson statistics. The stochastic asynchronous network (N , A, F , E) is an example of a stochastic asynchronous network with additive input structure. Almost surely, trajectories will be piecewise smooth and defined for all positive time. ♦
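The Poisson event process of example 4.33 can be sketched by sampling independent exponential interarrival times for the switching times t 0 < t 1 < . . . and holding the connection structure constant between them. The structures, rate, and cycling rule below are illustrative.

```python
import random

# Sketch of a time dependent Poisson event process: connection structures
# switch at times t0 < t1 < ... with exponential interarrival times.

def switching_times(rate, t_end, seed=0):
    """Sample the Poisson switching times in (0, t_end)."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t >= t_end:
            return times
        times.append(t)

def active_structure(structures, times, t):
    """Connection structure in force at time t (piecewise constant,
    cycling through the list at each switching time)."""
    switches = sum(1 for s in times if s <= t)
    return structures[switches % len(structures)]
```

Between switching times the network evolves under a single admissible vector field, so trajectories are almost surely piecewise smooth, as stated above.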

Model examples of asynchronous networks
In this section, we describe two asynchronous networks using the formalism and ideas developed in the previous section. We refer also to [14] for a detailed description of an asynchronous network modelling spiking neurons, adaptivity and learning (STDP).

5.1. A transport example: train dynamics. We use a simple transport example, a single track line with a passing loop, to illustrate characteristic features of asynchronous networks in a setting requiring minimal structure and background knowledge.
Consider two trains T 1 , T 2 travelling in opposite directions along a single track railway line; see figure 3. We assume no central control and no communication between train drivers unless both trains are in the passing loop.
Take as phase spaces for the trains the closed interval I = [−a, b], where a, b > 0. Suppose the end points of I correspond to the stations A (at −a) and B (at b) and that the passing loop is at 0 ∈ I. Assume that the passing loop is associated with a third station P .
The position of train T i at time t ≥ 0 will be denoted by x i (t) ∈ I, i ∈ 2. Suppose that x 1 (0) = −a, x 2 (0) = b. Assume that, outside of the stations A, B, P , the velocity of the trains is given by smooth vector fields V 1 , V 2 : I → R satisfying V 1 > 0 and V 2 < 0. That is, T 1 is moving to the right and T 2 to the left. In order to pass each other, the trains must enter the passing loop and stop at P .
Fix thresholds S, S 1 , S 2 , T 1 , T 2 ∈ R + . Train T i will depart at time T i , i ∈ 2. We require that trains have to be together in station P for time S and, additionally, the train T i must be in the station for time S i , i ∈ 2 (this is an additional condition on T i only if S i > S). The trains can move out of the station when these thresholds are met. Note that the trains will not generally leave the station at the same time if S 1 > S or S 2 > S. We model train dynamics by an asynchronous network.
First we discuss connection structures. Associate the node N i with train T i , i ∈ 2. Train T i will be stopped at P only if there is a connection α i = N 0 → N i , i ∈ 2. We only allow communication between trains when both trains are stopped at P . In this case, the connection structure will be β = N 0 → N 1 ↔ N 2 ← N 0 . If either train is not stopped at P , there is no connection between the trains.
As the drivers of the trains cannot communicate (unless both trains are in the station P ) and there is no central control, the times associated with the thresholds S 1 , S 2 will be local times. Specifically, when train T i stops at P , the driver's stopwatch will be started. This will be a local time τ i for T i and associated to the connection N 0 → N i . When both trains are stopped at P , we use a third local time τ = τ 12 associated to the connection N 1 ↔ N 2 (alternatively, the drivers could synchronize their stopwatches, but the stopwatches may still not run at the same speed).
We describe this setup using our formalism for asynchronous networks. As network phase space we take the product of the train phase spaces I with the local clock times. We define the generalized connection structure A = {α 1 , α 2 , β, ∅} and let F be the associated A-structure. The event map E : M → A is defined in terms of the positions x 1 , x 2 , the local times τ 1 , τ 2 , τ , and the thresholds, using the logical connectives ∨ for or and ∧ for and. Dynamics on the asynchronous network N = (N , A, F , E) is given by the vector field F(X) = f E(X) (X). Provided that we initialize so that x 1 (0) < 0 < x 2 (0), τ 1 (0) = τ 2 (0) = τ (0) = 0, it is easy to see that N is amenable.
5.1.1. Initialization, termination and function. The network N has a function: each train has to traverse the line to reach the opposite station. Thus we can regard N as a functional asynchronous network. Formally, define initialization and termination sets by I 1 = {−a}, I 2 = {b} and F 1 = {b}, F 2 = {−a} respectively. We call I = I 1 × I 2 and F = F 1 × F 2 the initialization and termination sets for N. The function of the network is to get from I to F in finite time. Typically, the thresholds S, S 1 , S 2 , T 1 , T 2 ∈ R + will be chosen stochastically; for example, the starting times T 1 , T 2 may be chosen according to an exponential distribution. If we initialize at (−a, T 1 ), (b, T 2 ), and take τ 1 (0) = τ 2 (0) = τ (0) = 0, it is easy to verify that solutions will be defined and continuous for all positive time under the assumption that a train stops when it reaches its termination set.
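A hypothetical sketch of the train event map follows; the exact logical conditions are not reproduced in the text above, so the thresholds and the and/or structure below are our reconstruction of the intended behaviour: a train stopped at P is held until its own stop time and the joint stop time have elapsed (the joint clock τ only runs once both trains are at P ).

```python
# Hypothetical sketch of the event map for the two-train network.
# Thresholds: S (joint stop at P), S1, S2 (per-train stops).  Values
# are illustrative.

S, S1, S2 = 1.0, 1.5, 1.0

def event_map(x1, x2, tau1, tau2, tau):
    at_P1, at_P2 = (x1 == 0.0), (x2 == 0.0)
    hold1 = at_P1 and (tau1 < S1 or tau < S)   # T1 must stay stopped
    hold2 = at_P2 and (tau2 < S2 or tau < S)   # T2 must stay stopped
    if hold1 and hold2:
        return "beta"      # both stopped at P: N0 -> N1 <-> N2 <- N0
    if hold1:
        return "alpha1"    # only T1 constrained
    if hold2:
        return "alpha2"    # only T2 constrained
    return "empty"         # no constraints, no coupling
```

Note that a train alone at P is held automatically (its joint clock reads τ = 0 < S), which encodes the requirement that the trains must be together at P before either can leave.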

5.1.2. Adding dynamics. The trains only "interact" when both are stopped at P . We now add a non-trivial dynamic interaction when the trains are stopped at P . To this end, we additionally require that (1) the drivers are running oscillators of approximately the same frequency (randomly initialized at the start of the trip); (2) when both trains are at P , the oscillators are cross-coupled, allowing for eventual approximate frequency synchronization; (3) the trains cannot restart until the oscillators have phase synchronized to within ε, where 0 < ε < 0.5.

5.1.3. Relations with Filippov systems.
Assume all the thresholds of our model are zero. If S = S 1 = S 2 = 0, then there is no need for local clocks and we may model the system by an asynchronous network N * = (N , A, F * , E * ) with state dependent event map E * . We show dynamics for N * in figure 4 under the initialization assumption that x 1 (0) ≤ 0 ≤ x 2 (0). Referring to the figure, trajectory η corresponds to train T 2 reaching P first and restarting only when T 1 reaches P . Train T 1 reaches P first for the trajectory ν. Regardless of which train reaches P first, the 'exit trajectory' φ is always the same and so there is a reduction to 1-dimensional dynamics. If both trains arrive simultaneously at P , neither stops. The dynamics shown in figure 4 is suggestive of a Filippov system [27,11] and it is natural to ask whether there are connections between asynchronous networks and Filippov systems. Set R 2 • = {(x 1 , x 2 ) | x 1 x 2 ≤ 0} and observe that dynamics on N * is given by a continuous semiflow Φ * : R 2 • × R + → R 2 • . We define a Filippov system on R 2 , with continuous semiflow Φ : R 2 × R + → R 2 , such that Φ = Φ * on R 2 • . To this end we let Q ij , i, j ∈ {+, −}, denote the closed quadrants of R 2 (so Q +− = {(x 1 , x 2 ) | x 1 ≥ 0, x 2 ≤ 0}, etc.) and define smooth vector fields on each quadrant. These vector fields uniquely define a smooth vector field V on the union of the interiors of the quadrants. We extend V to a piecewise smooth vector field on R 2 ∖ {(0, 0)} using the Filippov conventions.
Thus, we regard the x i -axis as a sliding line S i , i ∈ 2, and define V on ∂Q −+ ∩ ∂Q −− = E α 2 ⊂ S 1 to be the unique convex combination of V −+ and V −− which is tangent to S 1 (in this case (V −+ + V −− )/2). Finally define V(0, 0) = (V 1 (0), V 2 (0)). The piecewise smooth vector field V has a continuous semiflow Φ : R 2 × R + → R 2 (integral curves are defined using the standard conventions of piecewise smooth dynamics; see [27]) and Φ|R 2 • = Φ * . Of course, the semiflow on R 2 ∖ R 2 • does not have an interpretation in terms of trains on a line with a passing loop (see figure 5).
In an asynchronous network, dynamics on event sets is given explicitly rather than by the conventions used in Filippov systems. However, as we have shown, asynchronous networks can sometimes be locally represented by a Filippov system (see [12] for more details and greater generality). This relationship suggests the possibility of applying methods and results from the extensive bifurcation theory of nonsmooth systems to asynchronous networks.

5.1.4. Combining and splitting nodes. We conclude our discussion of asynchronous networks modelling transport with a brief description of processes defined by combining or splitting nodes (a dynamical version of a Petri net [19]). We consider the simplest cases of two trains combining to form a single train or one train splitting to form two trains. We only give details for the first case but note that both situations are easily generalized and, like much of what we have discussed above, apply naturally to production networks.
Consider node sets N a = {N 0 , N 1 , N 2 } and N b = {N 0 , N 12 }, where N 1 , N 2 , N 12 have phase space R and correspond to trains T 1 , T 2 , T 12 respectively. We give a network formulation of the event where trains T 1 , T 2 are combined to form a single train T 12 (see figure 6). Fix vector fields for the trains.
Figure 6. Combining two trains into a single train.

Define generalized connection structures A a and A b for N a and N b respectively. Assume a local clock with time τ = τ 12 that is shared between the connection β ∈ A a and γ ∈ A b , and define network phase spaces for N a and N b (including the local clock time).
Fix thresholds S 1 , S 2 > 0. The threshold S 1 gives the time taken to combine T 1 and T 2 , and S 2 models the time T 12 spends in the station before leaving. Initialize N a so that x 1 (0), x 2 (0) < 0 and τ (0) = 0. The event map E a (X, τ ) is defined for x 1 , x 2 ≤ 0 and τ < S 1 . The event map E b (x 12 , τ ) is defined for x 12 ≥ 0 and τ ≥ S 1 by
E b (x 12 , τ ) = γ, if x 12 = 0 and τ < S 1 + S 2 ,
E b (x 12 , τ ) = ∅, otherwise.
When τ = S 1 , we switch from network N a to N b . The splitting construction is similar except that we need to split the local clock for the combined train into two clocks, one for each separated train.

5.2.1. Power grids as asynchronous networks. We first consider an unrealistic, but simple and instructive model that shows how asynchronous and event dependent effects can naturally fit into the framework of power grids. In the following section, we describe how more realistic models are obtained, their limitations, and where we might expect asynchronous network models to be useful.
We use the simplest model [28] for power grid frequency stability that assumes generators are synchronous, loads are synchronous motors, and consider the network of mechanical phase oscillators
(5) θ j ′′ + α θ j ′ = P j + Σ i k ij sin(θ i − θ j ), j ∈ k,
where (k ij ) is a symmetric matrix with nonnegative entries (zero is allowed). If Σ j P j = 0 the system can reach an equilibrium (P j < 0 corresponds to a load). Let Γ be the (undirected) graph determined by the matrix of connections given by (k ij ). While the network described by (5) is not asynchronous (and the main interest lies with the stability of the equilibrium solution), the dynamics of real-world power grids are subject to factors that cannot be adequately described by a synchronous model. For integrity of transmission lines, as well as system stability, it is essential that the phase differences |θ i − θ j | are bounded away from π/2. For example, we might require |θ i − θ j | ≤ T ij , where T ij ∈ (0, π/2) will be a threshold determining the safe operational load for the transmission line. This leads to the construction of state dependent event maps: when |θ i − θ j | reaches T ij , the connection i ↔ j is removed and the transmission line between nodes i and j is disconnected. Equation (5) is modified accordingly. Similarly, lines or generators may be disconnected because of external events, such as lightning strikes or mechanical breakdowns. These can be modelled using a stochastic event map.
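The event driven modification of the oscillator model can be sketched as follows: second order phase dynamics with a line disconnected (its coupling set to zero) once the phase difference reaches the threshold T ij . All parameter values in the sketch are illustrative.

```python
import math

# Sketch of event-driven line tripping for the oscillator model (5):
# theta_i'' + alpha*theta_i' = P_i + sum_j K_ij sin(theta_j - theta_i),
# with line i~j disconnected (K_ij := 0) once |theta_i - theta_j| >= T_ij.

def step(theta, omega, P, K, T, alpha=0.5, dt=0.01):
    """One Euler step; applies the event map first.  K is modified in
    place when a line trips.  Returns (new_theta, new_omega)."""
    n = len(theta)
    # event map: disconnect overloaded lines
    for i in range(n):
        for j in range(i + 1, n):
            if K[i][j] and abs(theta[i] - theta[j]) >= T[i][j]:
                K[i][j] = K[j][i] = 0.0
    new_theta, new_omega = theta[:], omega[:]
    for i in range(n):
        acc = P[i] - alpha * omega[i] + sum(
            K[i][j] * math.sin(theta[j] - theta[i])
            for j in range(n) if j != i)
        new_omega[i] = omega[i] + dt * acc
        new_theta[i] = theta[i] + dt * omega[i]
    return new_theta, new_omega
```

Iterating `step` gives a piecewise smooth trajectory: smooth between tripping events, with the connection structure changing at each event.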
As indicated above, this model is unrealistic (it is not true, for example, that typical loads are synchronous motors). In the next section, we indicate how more realistic models are obtained, their limitations, and where we might expect asynchronous network models to be useful.

5.2.2. Network-reduced model for power grids. We give an overview of the network-reduced coupled phase oscillator model for power grids, largely based on Dörfler [20], and refer the reader to [53,20] for greater generality, alternative models, and the many details we omit. Apart from describing the model, our goal is part cautionary (it is not evident that general theories of synchronous or asynchronous networks have much to contribute to stability problems involving structural change), and part comparative with the models we describe later for microgrids.
Assume a power grid with synchronous generators, DC power sources, transmission lines and various types of load. We assume a reference frequency ω R for the power grid, usually 50Hz or 60Hz, and note that frequency synchronization is critical for the stability of the power grid: our equations will be written nominally in terms of phases θ i (t) but for the models, we can always replace θ i (t) by θ i (t) − ω R t to get the (same) equations for phase deviations that are needed for stability theory (phase differences, but not absolute phases, matter).
Formally, assume given an undirected (connected) weighted graph G with node set V = n and edge set E ⊂ V 2 . Nodes will be partitioned as V = V 1 ∪ V 2 ∪ V 3 , where V 1 consists of synchronous generators, V 2 are DC power sources, and V 3 comprises various types of load (see below and note we do not consider all types of load).
Each edge (i, j) ∈ E, i ≠ j, is weighted by a non-zero admittance Y ij ∈ C and corresponds to a transmission line. The imaginary part I(Y ij ) is the susceptance of the transmission line and R(Y ij ) is the conductance. Typically, a high voltage AC transmission line is regarded as lossless (R(Y ij ) = 0) and inductive (I(Y ij ) > 0). We also allow self-loops i = j; these will correspond to loads modelled as impedances to ground (nonzero "shunt admittances").
To each node is associated a voltage phasor V i = |V i |e ıθ i corresponding to phase θ i and magnitude |V i | of the sinusoidal solution to the circuit equations.
For a lossless network, the power flow from node i to node j is given by a ij sin(θ i − θ j ), where a ij = |V i ||V j |I(Y ij ) gives the maximal power flow (see Kundur [40,Chapter 6]).

5.2.3. Synchronous generators.
We assume dynamics of synchronous generators are given by
(6) M i θ i ′′ + D i θ i ′ = P m,i − Σ j a ij sin(θ i − θ j ), i ∈ V 1 ,
where θ i , θ i ′ are generator rotor angle and frequency, M i , D i > 0 are inertia and damping coefficients, and P m,i is mechanical power input.

5.2.4. DC/AC inverters: droop controllers.
Each DC source in V 2 is connected to the AC grid via a DC/AC inverter following a frequency droop control law which obeys the dynamics [60]
(7) D i θ i ′ = P d,i − Σ j a ij sin(θ i − θ j ), i ∈ V 2 ,
where D i > 0 is the droop coefficient and P d,i > 0 is the power injection of the source.

5.2.5. Frequency dependent loads. We assume the active power demand drawn by load i consists of a constant term P l,i > 0 and a frequency dependent term D i θ i ′ , D i > 0, leading to the power balance equation
(8) D i θ i ′ = −P l,i − Σ j a ij sin(θ i − θ j ), i ∈ V 3,f ,
where V 3,f is the subset of V 3 consisting of frequency dependent loads. Equation (8) is of the same form as (7), and we may replace V 2 by V 2 ∪ V 3,f and consider the general equation
(9) D i θ i ′ = ω i − Σ j a ij sin(θ i − θ j ),
where ω i is positive if the node is a DC generator and negative if it is a frequency dependent load. We can similarly allow for loads which are synchronous motors, incorporate them in V 1 and consider
(10) M i θ i ′′ + D i θ i ′ = ω i − Σ j a ij sin(θ i − θ j ),
where ω i is positive if the node is a synchronous generator and negative if it is a synchronous motor.
5.2.6. Constant current and constant admittance loads. We assume the remaining loads each require a constant amount of current and have a shunt admittance (to ground). In this case we have a current balance equation and, through the process of Kron reduction [22], may obtain a reduced network the equations of which take the form
(11) M i θ i ′′ + D i θ i ′ = ω̃ i − Σ j ã ij sin(θ i − θ j + φ ij ),
(12) D i θ i ′ = ω̃ i − Σ j ã ij sin(θ i − θ j + φ ij ),
for second and first order nodes respectively. We refer to [21] for the explicit form of the coefficients in (11,12).
The original power grid network is typically sparse with many nodes -V 3 is large. The process of Kron reduction results in a much smaller network which will be all-to-all coupled provided that the graph defined by V 3 is connected [22]. However, even if the original transmission lines are lossless, the phase shifts φ ij will generally be non-zero and not necessarily always small (we refer to [53, §6.2 Figure 4] for data from a real power grid network). The presence of phase shifts can and does make it harder to frequency synchronize (11,12).
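Kron reduction proceeds by eliminating interior nodes from the admittance matrix via a Schur complement; eliminating a single node m replaces Y ij by Y ij − Y im Y mj / Y mm , and eliminating a set of nodes iterates this. The sketch below uses an illustrative Laplacian-style admittance matrix for a star network: two boundary nodes coupled only through one interior node become directly (all-to-all) coupled after reduction, as described above.

```python
# Sketch of Kron reduction by eliminating one interior node m via the
# Schur complement Y_red[i][j] = Y[i][j] - Y[i][m]*Y[m][j]/Y[m][m].

def kron_reduce_node(Y, m):
    """Return the admittance matrix with node m eliminated."""
    keep = [i for i in range(len(Y)) if i != m]
    return [[Y[i][j] - Y[i][m] * Y[m][j] / Y[m][m] for j in keep]
            for i in keep]
```

Note how an edge between the surviving nodes appears even though none was present originally; this is how the reduced network becomes densely coupled, with the modified coefficients ã ij changing whenever an edge of the original network is removed.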
From the point of view of transmission line failure in a power grid, even if the removal of an edge still results in an all-to-all coupled reduced network, many of the coupling coefficients ã ij will change. It is a hard problem, that goes beyond existing analytical theory for synchronous and asynchronous networks, to get good insight into whether or not a breakdown will destabilize the network (this is irrespective of phenomena like Braess's paradox [64,54]).

5.2.7. Microgrids. Assume we are given a stable power grid network, robust to "small" changes in power demand, and consider the problem of modelling a microgrid and its combination with, or separation from, the main grid. We outline structural and logical issues so as to make transparent the connection with asynchronous networks, and largely ignore dynamics so as to keep the model simple and our discussion short (we refer to [23, 60, 24, 16] for more details and references on microgrids and control from a large and rapidly developing literature in this area). Assume power generation in the microgrid is from DC generators (such as solar power or DC wind power) and that V_1 = ∅ (most motor loads are not synchronous). Assume the microgrid is Kron reduced.
Unlike the power grid model described above, we allow directed (one way) connections and a constraining node. Consider the simplified network N = {N_0, N_B, N_G, N_P}, where the nodes N_B, N_G, N_P correspond to a large capacity battery (buffer), a DC generator, and the main power grid respectively, and define subnetworks N_M = {N_B, N_G} (microgrid) and N_P = {N_P} (main power grid).
The battery acts as reserve storage or buffer for the microgrid, in particular to maintain power in the event of intermittent loss of generated DC power or when the microgrid has been separated ("islanded") from the main power grid. We suppose battery capacity B = B(t) ∈ [0, B_M], where B_M corresponds to the battery being fully charged. We suppose that the DC generator produces power O = O(t) ∈ [0, O_M], where O_M is the maximum power that can be generated.
The constraining node plays a role when the microgrid is islanded and is to be reconnected to the main power grid: either because the microgrid has insufficient power for the microgrid loads, or because the microgrid has an excess of available power, some of which can now be contributed to the main power grid. In either case a transition process needs to be implemented in which the droop controller for the DC/AC converter brings the AC output of the microgrid into precise voltage (phase, frequency and magnitude) synchronization with the state of the power grid at the connection point(s) to the microgrid. Similarly, we can constrain when the microgrid is to be islanded from the main grid, so that the reduction in power contributed to the main power grid is gradual and done over an appropriate time scale so as not to destabilize the main power grid.
Leaving aside the dynamics of islanding and combining the microgrid with the main power grid, the generalized connection structures and control logic we need for management of the microgrid are complex and depend on several thresholds which may need to be time dependent (for example, if we use a time dependent model for the projected microgrid power load). If the microgrid is islanded, we work with N_M and use the generalized connection structure {∅, α, β}. The connection structure α corresponds to the DC generator having sufficient output to supply all power needed for the microgrid load, with a surplus which can be used to charge the battery; β corresponds to battery and generator together providing all necessary power for the microgrid; and ∅ corresponds to the generator providing all needed power for the microgrid and either there being no surplus power available for battery charging or the battery being fully charged. Thresholds that determine switching between these states are chosen so as to avoid "chattering" in the control system.
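The threshold-and-hysteresis switching logic for the islanded microgrid can be sketched in code. All names, threshold values, and the particular hysteresis scheme below are illustrative assumptions, not part of the model in the text:

```python
# Sketch of the islanded-mode switching logic: choose between the
# connection structures alpha (generator surplus charges battery),
# beta (battery + generator supply the load), and the empty structure.
# Hysteresis (two surplus thresholds) avoids "chattering": a larger
# surplus is needed to start charging than to keep charging.

ALPHA, BETA, EMPTY = "alpha", "beta", "empty"  # connection structures

def next_structure(current, output, load, battery, b_max,
                   surplus_on=0.2, surplus_off=0.05, b_full=0.95):
    """Next connection structure for the islanded microgrid.

    output  -- DC generator output O(t)
    load    -- projected microgrid power load (may be time dependent)
    battery -- battery charge B(t); b_max is the capacity B_M
    All thresholds are hypothetical illustrative values.
    """
    surplus = output - load
    if surplus < 0:
        return BETA                      # generator alone cannot supply load
    if battery >= b_full * b_max:
        return EMPTY                     # battery (effectively) full
    if current == ALPHA:
        return ALPHA if surplus > surplus_off else EMPTY
    return ALPHA if surplus > surplus_on else EMPTY
```

The asymmetric thresholds surplus_on > surplus_off implement the hysteresis band: small fluctuations of the surplus around a single threshold would otherwise cause rapid alternation between α and ∅.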
If the microgrid is combined with the main power grid, this can be because battery and DC generator cannot provide sufficient power for the microgrid load, or because the microgrid has surplus power which can be contributed to the main power grid, or because the main power grid is stressed (possibly locally detected by frequency variation) and the battery state of the microgrid is sufficiently high to allow a temporary power contribution to the main grid. As generalized connection structure A we take a set of connection structures, each of which has a natural interpretation. For example, N_M → N_G → N_B corresponds to the main power grid contributing to both the load of the microgrid and battery charging, while N_G → N_M ← N_B means battery and DC generator are contributing power to the main power grid as well as supplying all the power for the microgrid. On the other hand, N_G → N_M means DC generated power, but not battery power, is being contributed to the main power grid.
Of course, what we have described above is highly simplified: we have taken no account of (1) multiple DC generators and batteries within a microgrid, or (2) multiple microgrids. In the latter case, we need to take care that microgrid switching does not synchronize, as this could lead to large destabilizing changes in load on the main grid.

6. Products of asynchronous networks
We conclude part I with the definition of the product of asynchronous networks and sufficient conditions for an asynchronous network to decompose as a product of two or more asynchronous networks. Although the methods we use are elementary, the study of products is illuminating as it clarifies some subtleties in both the event map and the functional structure that are not present in the theory of synchronous networks. These ideas play a central role in the proof of the modularization of dynamics theorem in part II.
6.1. Products. Given α, β ∈ M(k), define α ∨ β ∈ M(k) (the join of α and β) by (α ∨ β)_{ij} = max(α_{ij}, β_{ij}) (the max-plus addition of tropical algebra [35]). We have α ∨ ∅ = α for all α ∈ M(k). If A, B ⊂ M(k) are generalized connection structures, define the generalized connection structure A ∨ B by A ∨ B = {α ∨ β | α ∈ A, β ∈ B}. Note that ∅ ∈ A ∨ B if and only if ∅ ∈ A ∩ B. Consequently, if ∅ ∈ A ∨ B, then A, B ⊂ A ∨ B.
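Viewing connection structures as 0-1 adjacency matrices, the join and its extension to generalized connection structures can be sketched as follows (the tuple-of-tuples representation is our own choice):

```python
# Connection structures as k x k 0-1 matrices (tuples of tuples).
# The join is the entrywise max; the empty structure is the zero
# matrix and is the identity for the join.

def join(alpha, beta):
    """Entrywise max of two 0-1 matrices: (alpha v beta)_ij."""
    return tuple(tuple(max(a, b) for a, b in zip(ra, rb))
                 for ra, rb in zip(alpha, beta))

def gcs_join(A, B):
    """Join of generalized connection structures: all pairwise joins."""
    return {join(a, b) for a in A for b in B}
```

Note that the zero matrix lies in gcs_join(A, B) exactly when it lies in both A and B, in which case A and B are both contained in gcs_join(A, B), matching the observation in the text.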
Suppose that A is a nonempty subset of k containing k_A elements. There is a natural embedding of M(k_A) in M(k) defined by relabelling the matrices in M(k_A) according to A. Specifically, map the matrix (α_{ij})_{i,j∈A} ∈ M(k_A) to the matrix α ∈ M(k) which agrees with (α_{ij}) on entries indexed by A and has all other entries zero. This embedding extends in the obvious way to an embedding of M•(k_A) in M•(k). More generally, given disjoint node sets indexed by A and B, we let k be the total number of elements in A ∪ B and then follow the conventions described above.

Definition 6.1. (Notation and assumptions as above.) Given asynchronous networks N_X = (N_X, A_X, F_X, E_X), X ∈ {A, B}, define the product N_A × N_B to be the asynchronous network N = (N, A, F, E), where the node set N, the generalized connection structure A = A_A ∨ A_B, the admissible vector fields F, and the event map E are built componentwise from those of N_A and N_B.

Proof. Immediate from the definitions.

6.2. Decomposability.

Definition 6.4. An asynchronous network (N, A, F, E) is decomposable if it can be written as a product of asynchronous networks. If the network is not decomposable, it is indecomposable.
Example 6.5. Suppose that N is a synchronous network with connection structure α ∈ M(k) and α-admissible network vector field f satisfying conditions (N1-3) of section 2. Since α encodes the dependencies of f, it is trivial that N can be written as a product of two synchronous networks if and only if the network graph Γ_α is disconnected. ♦

Our aim is to find sufficient conditions on an asynchronous network for it to be decomposable.
Definition 6.6. The connection graph of the asynchronous network N = (N, A, F, E) is the graph defined by the 0-1 matrix Γ_N = ∨_{α∈A} α♭.

Lemma 6.7. If an asynchronous network N is decomposable, then the connection graph Γ_N of N has at least two connected components.
Proof. If N is decomposable, then N = N_A × N_B, where A, B are proper complementary subsets of k. Since there are no connections between nodes in N_A and N_B, Γ_N has at least two connected components.
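The necessary condition of lemma 6.7 can be checked mechanically: form the join of all connection structures in A and count the connected components of the resulting graph. A small sketch (treating each α♭ as an already-reduced k × k 0-1 matrix is our assumption):

```python
# Necessary condition for decomposability (lemma 6.7): the connection
# graph Gamma_N, the join of all connection structures in A, must have
# at least two connected components.

def connection_graph(A, k):
    """Entrywise join (max) over all k x k 0-1 matrices in A."""
    g = [[0] * k for _ in range(k)]
    for alpha in A:
        for i in range(k):
            for j in range(k):
                g[i][j] = max(g[i][j], alpha[i][j])
    return g

def components(g):
    """Connected components of g, ignoring edge direction."""
    k = len(g)
    seen, comps = set(), []
    for s in range(k):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            i = stack.pop()
            if i in comp:
                continue
            comp.add(i)
            stack.extend(j for j in range(k)
                         if j not in comp and (g[i][j] or g[j][i]))
        seen |= comp
        comps.append(comp)
    return comps
```

If components(connection_graph(A, k)) has length one, the network is indecomposable; if it has length at least two, decomposability is possible but, as remark 6.8 shows, not guaranteed.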
Remark 6.8. Lemma 6.7 gives a necessary condition for decomposability which is not sufficient. There are two issues. First, the event map encodes information about spatial dependence of node interactions that cannot be deduced from the connection graph. Second, the admissible vector fields may have dependencies that are incompatible with decomposability.

Example 6.9. Let k = 2, M_1 = M_2 = R. Define connection structures α_i = N_0 → N_i, i ∈ 2, and generalized connection structure A = {∅, α_1, α_2, β = α_1 ∨ α_2}. Suppose the event map E is chosen so that A = A_1 ∨ A_2, where A_i = {∅, α_i}, i ∈ 2, and the network graph is disconnected, but the event sets involving x_1 ∈ M_1 depend nontrivially on x_2 ∈ M_2. Then there is no way to write E(x_1, x_2) as E_1(x_1) ∨ E_2(x_2). Hence the network cannot be decomposable, or even equivalent to a decomposable network, whatever choice we make for the admissible vector fields.
Suppose instead we define the event map Ẽ so that A = A_1 ∨ A_2 and we may write Ẽ = E_1 ∨ E_2, where E_i(0) = α_i and E_i(x_i) = ∅ for x_i ≠ 0, i ∈ 2. Suppose that f^{α_1}(x_1, x_2) = (0, v_2), f^{α_2}(x_1, x_2) = (v_1, 0), and f^∅(x_1, x_2) = (v_1, v_2), where v_1, v_2 ≠ 0. For the moment leave f^β unspecified. For (N, A, F, Ẽ) to be a product we additionally require f^β(x_1, x_2) = (f^{α_1}_1(x_1), f^{α_2}_2(x_2)) = (0, 0), for all (x_1, x_2) ∈ R². In particular, if f^β(0, 0) ≠ (0, 0), the network (N, A, F, Ẽ) is not even equivalent to a product network. However, if f^β(0, 0) = (0, 0), then the network (N, A, F, Ẽ) will be equivalent to a product network if we redefine f^β to be f^{α_1}_1 × f^{α_2}_2 (this does not change the values of f^β on Ẽ_β). ♦

6.3. Sufficient conditions for decomposability. Let N be an asynchronous network with k nodes and let C be a proper connected component of the connection graph Γ_N. Identify C with the nonempty subset of k corresponding to the labels of the nodes in the component C, and set C̄ = k ∖ C. Since C is a connected component of Γ_N, we can write each α ∈ A uniquely as α = α_C ∨ α_{C̄}, where α_C, α_{C̄} are connection structures on N_C and N_{C̄} respectively. Set A_C = {α_C | α ∈ A}. We have a well defined projection π_C : A → A_C given by π_C(α) = α_C. Define the event map E_C : M_C × M_{C̄} → A_C by E_C(X_C, X_{C̄}) = π_C(E(X_C, X_{C̄})).
Definition 6.10. An asynchronous network N is structurally decomposable if for every connected component C of the connection graph Γ_N, the map E_C is independent of X_{C̄} ∈ M_{C̄} (that is, E_C(X_C, X_{C̄}) = E^1(X_C), where E^1 : M_C → A_C).
Remark 6.11. Structural decomposability imposes conditions on structural dependencies that will generally be different from the dependencies of the network vector field. For example, suppose a component C of the connection graph contains the node N_1. If the node N_1 is stopped, there may be a condition that N_1 will restart when the state of another node, say N_2, attains a certain value. Necessarily, N_2 must lie in C (structural decomposability). However, there need be no connection between N_1 and N_2 unless C contains exactly two nodes.
Suppose that N is structurally decomposable and that Γ_N has connected components C_1, ..., C_q. Set M_ℓ = M_{C_ℓ}, A_ℓ = π_{C_ℓ}(A), ℓ ∈ q. By structural decomposability we may write E(X) = ∨_{ℓ∈q} E_ℓ(X_ℓ), where E_ℓ : M_ℓ → A_ℓ. For α ∈ A, ℓ ∈ q, set α_ℓ = π_{C_ℓ}(α) ∈ A_ℓ and E^ℓ_{α_ℓ} = (E_ℓ)^{-1}(α_ℓ) ⊂ M_ℓ.

Lemma 6.12. (Notation as above.) If N is structurally decomposable and Γ_N has connected components C_1, ..., C_q, then E_α = ∏_{ℓ∈q} E^ℓ_{α_ℓ}, for all α ∈ A.

Proof. An immediate consequence of structural decomposability.

If C is a proper connected component of the connection graph Γ_N of an asynchronous network N, then by admissibility f^α = f^α_C × f^α_{C̄}, where f^α_C : M_C → TM_C and f^α_{C̄} : M_{C̄} → TM_{C̄}. In order that N be decomposable, this decomposition has to be compatible with the projections π_C : A → A_C, π_{C̄} : A → A_{C̄}. In particular, if connections between nodes in C̄ are added or deleted, dynamics on M_C should not be affected.

Definition 6.13. (Notation as above.) The asynchronous network N is dynamically decomposable if for every connected component C of Γ_N, we have f^α_C = f^β_C for all α, β ∈ A such that π_C(α) = π_C(β).
Lemma 6.14. (Notation as above.) Input consistent asynchronous networks are dynamically decomposable. In particular, asynchronous networks with additive input structure are dynamically decomposable.
Proof. Given i ∈ k, α ∈ A, let J(i, α) be the associated dependency set for node N_i. If α, β ∈ A and J(i, α) = J(i, β), then f^α_i = f^β_i by input consistency. If i ∈ C, where C is a connected component of the connection graph Γ_N, then J(i, α) ∩ k ⊂ C for all α ∈ A. Hence J(i, α) = J(i, α_C ∨ α_{C̄}) is independent of α_{C̄}. Input consistency implies that f^{α_C ∨ β}_i = f^{α_C ∨ γ}_i for all β, γ ∈ A_{C̄}, which yields dynamical decomposability.
We now state the main result of this section.

Theorem 6.15. Let N be a structurally and dynamically decomposable asynchronous network with connection graph Γ. If Γ has connected components C_1, ..., C_q, then there exist indecomposable asynchronous networks N_1, ..., N_q such that N = N_1 × ··· × N_q.
Proof. For ℓ ∈ q, define A_ℓ = {α_ℓ := π_ℓ(α) | α ∈ A} and F_ℓ = {f^{α_ℓ}_ℓ := f^α_{C_ℓ} : M_ℓ → TM_ℓ | α ∈ A}. By dynamical decomposability we have f^α = ∏_{ℓ∈q} f^{α_ℓ}_ℓ, for all α ∈ A. Constraint structures are defined for individual nodes and so factorize naturally. Let E_ℓ : M_ℓ → A_ℓ be the event maps given by structural decomposability. If we let N_ℓ be the asynchronous network (N_ℓ, A_ℓ, F_ℓ, E_ℓ), ℓ ∈ q, then N = N_1 × ··· × N_q. Since each connection graph Γ_{N_ℓ} = C_ℓ is connected, each N_ℓ is indecomposable by lemma 6.7.

Our concluding result on decomposability is an immediate consequence of lemma 6.7 and theorem 6.15.

Corollary 6.16. A structurally and dynamically decomposable asynchronous network N is decomposable if and only if its connection graph has more than one nontrivial connected component.

6.4. Factorization of asynchronous networks. Assume for this section that N = (N, A, F, E) is an asynchronous network which is not necessarily structurally or dynamically decomposable.
Definition 6.17. The asynchronous network N_1 is a factor of N if there is an asynchronous network N_2 such that N = N_1 × N_2.
The proof of the next lemma is immediate from the definition of a product.
Lemma 6.18. If N_1 is a factor of N, then the connection graph Γ_{N_1} is a union of connected components of Γ_N.

Remark 6.19. If N_1 is indecomposable, the connection graph Γ_{N_1} may have more than one component, unless N is structurally and dynamically decomposable (theorem 6.15).

Proposition 6.20. Every asynchronous network N has a factorization ∏_{a∈q} N_a as a product of indecomposable asynchronous networks. The factorization is unique, up to the order of the factors.
Proof. Existence is obvious. The uniqueness of the factorization follows easily from lemma 6.18.